content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Table of Contents [gerrit filter:] Extension of the ODL based DHCP server, which add support for dynamic address allocation to end point users, that are not controlled (known) by OpenStack Neutron. Each DHCP pool can be configured with additional information such as DNS servers, lease time (not yet), static allocations based on MAC address, etc. The feature supports IPv4 only. In a non-neutron northbounds environment e.g. SD-WAN solution (unimgr), there is currently no dynamic DHCP service for end-points or networks that are connected to OVS. Every DHCP packet that is received by the controller, the controller finds the neutron port based on the inport of the packet, extracts the ip which was allocated by neutron for that vm, and replies using that info. If the dhcp packet is from a non-neutron port, the packet won’t even reach the controller. a DHCP packet that is received by the odl, from a port that is managed by Netvirt and was configured using the netvirt API, rather then the neutron API, in a way that there is no pre-allocated IP for network interfaces behind that port - will be handled by the DHCP dynamic allocation pool that is configured on the network associated with the receiving OVS port. We wish to forward to the controller, every dhcp packet coming from a non-neutron port as well (as long as it is configured to use the controller dhcp). Once a DHCP packet is recieved by the controller, the controller will check if there is already a pre-allocated address by checking if packet came from a neutron port. if so, the controller will reply using the information from the neutron port. Otherwise, the controller will find the allocation pool for the network which the packet came from and will allocate the next free ip. The operation of each allocation pool will be managed through the Genius ID Manager service that will support the allocation and release of IP addresses (ids), persistent mapping across controller restarts and more. Neutron IP allocations will be added to the relevant pools to avoid allocation of the same addresses. The allocation pool DHCP server will support: This new rule in table 60 will be responsible for forwarding dhcp packets to the controller: cookie=0x6800000, duration=121472.576s, table=60, n_packets=1, n_bytes=342, priority=49,udp,tp_src=68,tp_dst=67 actions=CONTROLLER:65535 New YANG model to support the configuration of the DHCP allocation pools and allocations, per network and subnet. container dhcp_allocation_pool { config true; description "contains DHCP Server dynamic allocations"; list network { key "network-id"; leaf network-id { description "network (elan-instance) id"; type string; } list allocation { key "subnet"; leaf subnet { description "subnet for the dhcp to allocate ip addresses"; type inet:ip-prefix; } list allocation-instance { key "mac"; leaf mac { description "requesting mac"; type yang:phys-address; } leaf allocated-ip { description "allocated ip address"; type inet:ip-address; } } } list allocation-pool { key "subnet"; leaf subnet { description "subnet for the dhcp to allocate ip addresses"; type inet:ip-prefix; } leaf allocate-from { description "low allocation limit"; type inet:ip-address; } leaf allocate-to { description "high allocation limit"; type inet:ip-address; } leaf gateway { description "default gateway for dhcp allocation"; type inet:ip-address; } leaf-list dns-servers { description "dns server list"; type inet:ip-address; } list static-routes { description "static routes list for dhcp allocation"; key "destination"; leaf destination { description "destination in CIDR format"; type inet:ip-prefix; } leaf nexthop { description "router ip address"; type inet:ip-address; } } } } } The feature is activated in the configuration (disabled by default). adding dhcp-dynamic-allocation-pool-enabled leaf to dhcpservice-config: container dhcpservice-config { leaf controller-dhcp-enabled { description "Enable the dhcpservice on the controller"; type boolean; default false; } leaf dhcp-dynamic-allocation-pool-enabled { description "Enable dynamic allocation pool on controller dhcpservice"; type boolean; default false; } } and netvirt-dhcpservice-config.xml: <dhcpservice-config <controller-dhcp-enabled>false</controller-dhcp-enabled> <dhcp-dynamic-allocation-pool-enabled>false</dhcp-dynamic-allocation-pool-enabled> </dhcpservice-config> Support clustering. None. Carbon. Implement and maintain an external DHCP server. This feature can be used by installing odl-netvirt-openstack. This feature doesn’t add any new karaf feature. Introducing a new REST API for the feature URL: /config/dhcp_allocation_pool:dhcp_allocation_pool/ Sample JSON data { "dhcp_allocation_pool": { "network": [ { "network-id": "d211a14b-e5e9-33af-89f3-9e43a270e0c8", "allocation-pool": [ { "subnet": "10.1.1.0/24", "dns-servers": [ "8.8.8.8" ], "gateway": "10.1.1.1", "allocate-from": "10.1.1.2", "allocate-to": "10.1.1.200" "static-routes": [ { "destination": "5.8.19.24/16", "nexthop": "10.1.1.254" }] }] }] } } URL: /config/dhcp_allocation_pool:dhcp_allocation_pool/ Sample JSON data {"dhcp_allocation_pool": { "network": [ { "network-id": "d211a14b-e5e9-33af-89f3-9e43a270e0c8", "allocation": [ { "subnet": "10.1.1.0/24", "allocation-instance": [ { "mac": "fa:16:3e:9d:c6:f5", "allocated-ip": "10.1.1.2" } ]}]}]}} Here is the link for the Trello Card: None. N.A. N.A.
https://docs.opendaylight.org/projects/netvirt/en/stable-oxygen/specs/dhcp-dynamic-allocation-pool.html
2019-06-16T04:55:48
CC-MAIN-2019-26
1560627997731.69
[]
docs.opendaylight.org
TOC & Recently Viewed Recently Viewed Topics Link an Agent Required User Role: Scan Manager or Administrator This procedure describes how to link a Nessus Agent. Once linked, a Nessus Agent automatically downloads and initializes plugins from Tenable.io. To link an agent: - In Tenable.io, click Scans > Agents. The Agents section appears. - In the Linked Agents subsection, copy the Linking Key. - Access the Nessus Agent. Link the Nessus Agent to Tenable.io during Nessus Agent installation. For more information about the linking options, see the Nessus User Guide.
https://docs.tenable.com/cloud/Content/Scans/LinkAnAgent.htm
2019-06-16T05:41:04
CC-MAIN-2019-26
1560627997731.69
[]
docs.tenable.com
XDCR API The XDCR REST API is used to manage Cross Datacenter Replication (XDCR) operations. Description Cross Datacenter Replication (XDCR) configuration automatically replicates data between clusters and between data buckets. When using XDCR, the source and destination clusters are specified. A source cluster is the cluster from where you want to copy data. A destination cluster is the cluster where you want the replica data to be stored. When configuring replication, specify your selections for an individual cluster using Couchbase Web Console. XDCR replicates data between specific buckets and specific clusters and replications can be configured to be either uni-directional or bi-directional. Uni-directional replication means that XDCR replicates from a source to a destination. Bi-directional replication means that XDCR replicates from a source to a destination and also replicates from the destination to the source.
https://docs.couchbase.com/server/6.0/rest-api/rest-xdcr-intro.html
2019-06-16T04:29:36
CC-MAIN-2019-26
1560627997731.69
[]
docs.couchbase.com
This article covers: Frequently Asked Questions - How do I download the Expensify Sync Manager? - Why won't the Expensify Sync Manager Install? - Can I install the Expensify Sync Manager in more than one place? - How do I sync my connection? - How do I export reports to QuickBooks Desktop? - How do I find reports after they have been exported? - Why are my Company Card expenses exporting to the wrong account? - How do I export to my users' Employee Records instead of their Vendor Records? - How does Tax work with QuickBooks Desktop? - How does multi-currency work with QuickBooks Desktop? - How do negative expenses work with QuickBooks Desktop? Specific Errors - The Expensify Sync Manager Could Not Be Reached - The Wrong QuickBooks Company is Open - Billable Transactions Require an Associated Customer - No Vendor Found For Email in QuickBooks - Please Close Any Dialog Boxes Open Within QuickBooks and Try Again - Do Not Have Permission to Access Company Data File - Transaction Split Lines to Accounts Payable Must Include a Vendor On That Split Line General Troubleshooting If you are running into trouble with your QuickBooks Desktop connection, first check your set up: Are QuickBooks and the Sync Manager both running? The Sync Manager and QuickBooks Desktop both need to be running in order to sync or export. Is the Sync Manager installed in the correct location?* The Expensify Sync Manager should be installed in the same location as your QuickBooks application. If QuickBooks is installed on your local desktop, the Sync Manager should be too. If QuickBooks is installed on a remote server, the Sync Manager should be installed there instead. Is there only one Expensify Sync Manager running? There can only be one Expensify Sync Manager running with the same token at one time.* If your Sync Manager is installed on a remote server, make sure that the Expensify Sync Manager is not also installed on your local desktop. Frequently Asked Questions: How do I download the Expensify Sync Manager? You will be provided a link to download the Expensify Sync Manager during the actual connection process. You do not need to install the Sync Manager beforehand. If you have already connected your policy to QuickBooks Desktop and need to install a new copy of the Sync Manager, you can find a link to install a new copy on the Connections page of your policy settings: Accessing the Expensify Sync Manager logs Syncing and export issues that cannot be resolved using our error guides may require additional information contained in your Sync Manager's log file. To access and send the log file: - Open the Expensify Sync Manager - Double click on the version number in the bottom left corner. This will open a folder containing the log file - Send us the log file as an attachment via our built-in support chat, or to [email protected] Note: If you are using a hosted server such as RightNetworks, you might not be able to access the log file yourself. If you receive an error after double clicking on the version number in the bottom left corner, please reach out to your server hosts and ask them to place your log file on your desktop so that you can access it. Problems Installing the Expensify Sync Manager If you are trying to install the Sync Manager on a Windows server, you will need to make sure .NET 3.5 is enabled. To do this: - Click Start and type Turn Windows features On or Off. - Click Turn Windows features On or Off. - If it is not enabled, click in the white check box to enable it and click Ok. - Once this has been enabled, reboot your computer and try installing the Sync Manager again. Once this has been enabled, reboot your computer and try installing the Sync Manager again. *Can I install the Expensify Sync Manager in more than one place? Even if you have multiple users syncing and exporting to QuickBooks Desktop, we don't recommend installing the Sync Manager in multiple locations. As long as the Sync Manager is running with QuickBooks in the single location that it is installed, any admin on your policy will still be able to sync and export successfully from their own computer. How do I sync my connection? We recommend syncing your QuickBooks Desktop connection at least once a week, or whenever you make adjustments in QuickBooks Desktop that may affect how your reports export from Expensify. This would include making changes to your Chart of Accounts, Vendors, Employees, Customers/Jobs, or Items. Step 1: Make sure that the Expensify Sync Manager and QuickBooks Desktop are both running. Step 2: On the Expensify website, go to Settings > Policies > Group > [Policy Name] > Connections > QuickBooks Desktop and click Sync now. Step 3: Wait for the syncing process to complete. This typically takes about 2-5 minutes, however it's not uncommon for the process to last longer. The time it takes to sync your policy will vary depending on how long it's been since the last time you synced and the size of your QuickBooks company file. Once the syncing process is complete the page will refresh automatically. You're all set! How do I export reports to QuickBooks Desktop? Exporting an Individual Report: You can export reports to QuickBooks Desktop one at a time from within an individual report on the Expensify website by clicking the "Export to" button. Exporting Reports in Bulk: To export multiple reports at a time, select the reports that you'd like to export from the Reports page on the website and click the "Export to" button near the top of the page. Once reports have been exported to QuickBooks Desktop successfully, you will see a green QuickBooks icon next to each report on the Reports page. You can check to see when a report was exported in the Reports History & Commentssection of the individual report. How do I find reports after they have been exported? How your expenses show up in QuickBooks desktop will depend on the export options that you've selected in your connection's configuration settings. If you are having trouble locating a report in QuickBooks Desktop after it has been exported, you can use the report's Expensify Report ID to search the "Ref. No." Field in QuickBooks. To search in QuickBooks Desktop by Ref. No., go to Edit > Find, select the appropriate Transaction Type according to the export options that you are using, and then enter your Report ID in the "Ref. No." field. Why are my > QuickBooks Desktop > > QuickBooks Desktop > Configure must be a domain admin as well. If the report exporter is not a domain admin, all company card expenses will export to the default account selected in the Non-Reimbursable section of your Export configuration settings under Settings > Policies > Group > [Policy Name] > Connections > QuickBooks Desktop >. How do I export to my users' Employee Records instead of their Vendor Records? If you want to export reports to your users' Employee Records instead of their Vendor Records, you will need to select Check or Journal Entry for your reimbursable export option. There isn't a way to export as a Vendor Bill to an Employee Record. If you are setting up Expensify users as employees, you will need to activate QuickBooks Desktop Payroll to view the Employee Profile tab where submitter's email addresses needs to be entered. How does Tax work with QuickBooks Desktop? At this time, Expensify doesn't support tax import from QuickBooks Desktop. We're currently gathering use cases on our Community Forum, found here. How does multi-currency work with QuickBooks Desktop? When using QuickBooks Desktop Multi-Currency, there are a few limitations based on your export options. Vendor bills and Checks: The vendor currency and the account currency have to match but do not have to be in the home currency. Credit Card: Expenses that don't match an existing vendor in QuickBooks export to the Credit Card Misc. vendor that we create. When you try to export a report in a currency other than your home currency, the transaction will be created under the vendor currency with a 1:1 conversion. A transaction in Expensify for $50 CAD will show in QuickBooks as $50 USD. Journal Entries: Multi-currency exports will fail as the account currency has to match both the vendor currency and the home currency. Exporting Negative Expenses In general, you can export negative expenses successfully to QuickBooks Desktop regardless of which Export Option you choose. The one thing to keep in mind is that if you have Check selected as your export option, the total of the report can not be negative. Specific Errors: The Expensify Sync Manager Could Not Be Reached To resolve this error, take the following steps: - Make sure that both the Sync Manager and QuickBooks Desktop are running. - Make sure that the Sync Manager is installed in the correct location. The Sync Manager should be installed in the same location as your QuickBooks application. If QuickBooks is installed on your local desktop, the Sync Manager should be too. If QuickBooks is installed on a remote server, the Sync Manager should be installed there instead. - Make sure that the Sync Manager's status is "Connected." - If the Sync Manager status already shows as "Connected," click Edit and Save to refresh the connection and try syncing your policy again. If the error persist, double check that the token you see in the Sync Manager matches the token in your connection settings: The Wrong QuickBooks Company is Open This error indicates that you have the wrong company file is open in QuickBooks Desktop. To resolve, after taking the general troubleshooting steps outlined here, take the steps outlined in the error message: - If you can see that the wrong company file is actually open in Quickbooks, go to QuickBooks and Select File > Open or Restore Company > [Company Name]. Once the correct company file is open, attempt to sync your policy again. - If the correct company file is open and you're still getting this error, close out of QuickBooks Desktop completely, reopen the desired company file, then sync again. If you are still getting this error after taking the steps above, log into QuickBooks as an admin in single user mode, go to Edit > Preferences > Integrated Applications > Company Preferences, and remove the Expensify Sync Manager listed there. Next, try sync your policy again in Expensify. You'll be prompted to re-authorize the connection in QuickBooks, and this should allow you to sync successfully. If the error persists, double check that the token you see in the Sync Manager matches the token in your connection settings: Billable Transactions Require an Associated Customer This error messages indicates that some of the expenses on the report that you are trying to export are flagged as "Billable" but have not beed coded with an associated customer/job. To resolve, open the report and select a customer/job for each billable expense: If you do not see a list of Customers/Jobs to choose from when editing your report, you may need to enable them in your configuration settings. Once all billable expenses have been properly tagged, try exporting your report again. No Vendor Found For Email in QuickBooks Each submitter’s email must be saved as the "Main Email" in their Vendor record within QuickBooks Desktop. To resolve, click into your vendor section of QuickBooks: Next, make sure that the email mentioned in the error matches the "Main Email" field in their record. This is case sensitive, so you will need to change any capitalized letters to be lowercase. If you want to export reports to your users' employee records instead of their vendor records, you will need to select Check or Journal Entry for your reimbursable export option. If you are setting up Expensify users as employees, you will need to activate QuickBooks Desktop Payroll to view the Employee Profile tab where submitter's email addresses needs to be entered. Once you have added the correct email to the vendor record, save this change and sync your policy before trying to export this report again. Please Close Any Dialog Boxes Open Within QuickBooks and Try Again This error indicates that there is a dialogue box open within QuickBooks that is interfering with attempts to sync or export. To resolve this, simply close any open windows within your QuickBooks Desktop so that you only see a gray screen and then try exporting or syncing again. Do Not Have Permission to Access Company Data File To resolve this error, you will need to log into QuickBooks Desktop as an Admin in single-user mode and go to Edit > Preferences > Integrated Applications > Company Preferences. From here, select the Expensify Sync Manager and click Properties. Make sure that "Allow this application to login automatically" is checked and click OK. Close all windows within QuickBooks If you are still getting this error after taking the steps above, go to Edit > Preferences > Integrated Applications > Company Preferences, and remove the Expensify Sync Manager listed there. Next, try sync your policy again in Expensify. You'll be prompted to re-authorize. Transaction Split Lines to Accounts Payable Must Include a Vendor On That Split Line When exporting as journal entries to an Accounts Payable, this requires a vendor record, not an employee. The vendor record must have the email address of the report creator/submitter. If the report creator/submitter also has an employee record, you need to remove the email, because Expensify will try to export to the employee record first for journal entries.
https://docs.expensify.com/articles/1273576-quickbooks-desktop-troubleshooting
2019-06-16T05:19:36
CC-MAIN-2019-26
1560627997731.69
[array(['https://downloads.intercomcdn.com/i/o/90578354/02c058740550ad9307ca6881/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/90578637/5e487e9cdd31bad30ef2b4b9/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/90579138/6f950089fd68b370a9cb22df/image.png', None], dtype=object) ]
docs.expensify.com
Other package managers¶ Besides conda, Gammapy and some of the optional dependencies (Sherpa, Astropy-affiliated packages) are not yet available in other package managers, such as e.g. apt-get or yum on Linux or Macports or Homebrew on Mac. So installing Gammapy this way is not recommended at this time. (The recommended method is conda as mentioned above). Still, it’s possible and common on systems where users have root access to install some of the dependencies using those package managers, and then to use pip to do the rest of the installation. So as a convenience, here we show the commands to install those packages that are available, so that you don’t have to look up the package names. We do hope this situation will improve in the future as more astronomy packages become available in those distributions and versions are updated. apt-get¶ On Ubuntu or Debian Linux, you can use apt-get and pip to install Gammapy and it’s dependencies. The following packages are available: sudo apt-get install \ python3-pip python3-scipy python3-matplotlib python3-skimage \ python3-yaml ipython3-notebook python3-uncertainties \ python3-astropy python3-click The following packages have to be installed with pip: python3 -m pip install --user \ gammapy naima reproject \ iminuit emcee healpy sherpa Another option to install software on Debian (and any system) is to use conda. yum¶ yum is a popular package manager on Linux, e.g. on Scientific linux or Red Hat Linux. If you are a yum user, please contribute the equivalent commands (see e.g. the Macports section below). Homebrew¶ Homebrew is a popular package manager on Mac. Gammapy currently isn’t packaged with Homebrew. It should be possible to install Python / pip / Numpy / Astropy with brew and then to install Gammapy with pip. If you’re a brew user, please let us know if it works and what the exact commands are. Note that we have some Gammapy developers and users on Mac that use Macports. For this you can find detailed instructions here: Installation with Macports Fermi ScienceTools¶ The Fermi ScienceTools ships with it’s own Python 2.7 interpreter. The last release of Gammapy to support Python 2.7 was Gammapy v0.10 from January 2019. pip should know about this and automatically install Gammapy v0.10, and not try to install a later version of Gammapy. If you want to use Astropy or Gammapy with that Python, you have to install it using that Python interpreter, other existing Python interpreters or installed packages can’t be used (when they have C extensions, like Astropy does). Fermi ScienceTools version v10r0p5 (released Jun 24, 2015) includes Python 2.7.8, Numpy 1.9.1, Scipy 0.14.0, matplotlib 1.1.1, PyFITS 3.1.2. Unfortunately pip, ipython or Astropy are not included. So first in stall pip (see pip install instructions), and then $ python -m pip install ipython astropy gammapy If this doesn’t work (which is not uncommon, this is known to fail to compile the C extensions of Astropy on some platforms), ask your Python-installation-savvy co-worker or on the Astropy or Gammapy mailing list.
https://docs.gammapy.org/0.12/install/other.html
2019-06-16T05:32:48
CC-MAIN-2019-26
1560627997731.69
[]
docs.gammapy.org
. Import from a BACPAC file in the Azure portal The Azure portal only supports creating a single database in Azure SQL Database and only from a BACPAC file stored in Azure Blob storage. Note A managed instance does not currently support migrating a database into an instance database from a BACPAC file using the Azure portal. To import into a managed instance, use SQL Server Management Studio or SQLPackage.. Import from a BACPAC file using SqlPackage To import a SQL Server database using the SqlPackage command-line utility, see import parameters and properties. SqlPackage has the latest SQL Server Management Studio and SQL Server Data Tools for Visual Studio.. For scale and performance, we recommend using SqlPackage in most production environments.=mynewserver20170403.database.windows.net;Initial Catalog=myMigratedDatabase;User Id=<your_server_admin_account_user_id>;Password=<your_server_admin_account" Import into a single database from a BACPAC file using PowerShell Note A managed instance does not currently support migrating a database into an instance database from a BACPAC file using Azure PowerShell. To import into a managed instance, use SQL Server Management Studio or SQLPackage.. Use the New-AzSqlDatabaseImport cmdlet to submit an import database request to the Azure SQL Database service. Depending on database size, the import may take some time to complete. $importRequest = New-AzSqlDatabaseImport -ResourceGroupName "<your_resource_group>" ` -ServerName "<your_server>" ` -DatabaseName "<your_database>" ` -DatabaseMaxSizeBytes "<database_size_in_bytes>" ` -StorageKeyType "StorageAccessKey" ` -StorageKey $(Get-AzStorageAccountKey -ResourceGroupName "<your_resource_group>" -StorageAccountName "<your_storage_account").Value[0] ` -StorageUri "" ` -Edition "Standard" ` -ServiceObjectiveName "P6" ` -AdministratorLogin "<your_server_admin_account_user_id>" ` -AdministratorLoginPassword $(ConvertTo-SecureString -String "<your_server_admin_account. Feedback Send feedback about:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-import
2019-06-16T05:29:36
CC-MAIN-2019-26
1560627997731.69
[]
docs.microsoft.com
Contents Now Platform Administration Previous Topic Next Topic Integrating ServiceNow with your Intranet Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Integrating ServiceNow with your Intranet There are several ways you can add a ServiceNow login link to your intranet. You can add a login link by: Enabling the PortletLogin Script include to be Client Callable. Creating a simple HTML link to your instance that takes your users directly to the ServiceNow login page Adding an iframe link to the ServiceNow login portlet in one of your HTML pages to permit direct login. Note: The ServiceNow login portlet is the only content supported within an iframe HTML element. To deliver ServiceNow content from a web page, see Service Portal instead. Creating a Simple LinkEdit a web page on your intranet and add a direct link to your ServiceNow instance.Enabling the PortletLogin Script include to be Client CallableThe login portlet () uses the PortletLogin Script Include.Add the Login PortletAdding the login portlet to an iframe HTML element creates an unbranded user and password prompt on any HTML page. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/helsinki-platform-administration/page/integrate/inbound-other-web-services/concept/c_IntegratServiceNowIntranet.html
2019-06-16T05:51:43
CC-MAIN-2019-26
1560627997731.69
[]
docs.servicenow.com
Contents Now Platform Capabilities Previous Topic Next Topic The matching of a sender email address to a user Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share The matching of incident's caller_id to the value returned by gs.getUserID(). If multiple users have the same email address, the instance first searches for an active user with the email address. The instance does not match inactive users. Note: It is strongly recommended to have unique email addresses for each user record. Otherwise, the instance can not reliably match the email to the correct user and unpredictable matches may occur.If providing a unique email address to each user is not possible, only having one active user with the shared email address is recommended. This configuration guarantees that the instance always matches incoming email from this address to the active user. Related ConceptsThe matching of incoming email to an inbound action typeRelated ReferenceRecognized reply prefixesRecognized forward prefixesEmail forwards as repliesIncoming email matchingExamples of matching watermarks in the Subject line or BodyExamples of matching record numbers in the Subject line On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/administer/notification/reference/r_MatchSenderEmailAddToUser.html
2019-06-16T05:46:34
CC-MAIN-2019-26
1560627997731.69
[]
docs.servicenow.com
After running impact assessment, perform mitigation tasks on at-risk endpoints. Column Name Information First Observed Date and time when an artifact's presence is detected on target endpoints Host Name Name of the agent endpoint that harbors the matching suspicious object Clicking a value in the Host Name column opens a screen that shows a graph of the execution flow of any suspicious activities involving or originating from that endpoint. This lets you analyze the enterprise-wide chain of events involved in a targeted attack. For details, see Detailed Mindmap. User Name Name of the user logged on to the endpoint IP Address IPv4 or IPv6 address of the endpoint Importance Importance assigned by a Control Manager administrator to the endpoint. For details, see Working with User or Endpoint Importance. Take immediate action on important endpoints. Matching Object(s) Identifier(s) or component(s) of an attack that indicate what attacks are and how they are established Action Options to isolate or restore the connection of an endpoint. For details, see Endpoint Isolation and Connection Restoration.
http://docs.trendmicro.com/en-us/enterprise/control-manager-60-service-pack-3/va_intro/ioc/atriskendpoints.aspx
2019-06-16T05:11:21
CC-MAIN-2019-26
1560627997731.69
[]
docs.trendmicro.com
Gateways Brigade Gateways This guide explains how gateways work, and provides guidance for creating your own gateway. What Is A Brigade Gateway? The Brigade architecture is oriented around the concept that Brigade scripts run as a response to one or more events. In Brigade, a gateway is an entity that generates events. Often times, it translates some external trigger into a Brigade event. Brigade ships with the ability to enable various gateways that are ready to go. These include the Container Registry Gateway, the Github Gateway and the Generic Gateway. They can all be enabled via top-level Helm chart flags. All of these provide HTTP-based listeners that receive incoming requests (from a container registry, Github or other platforms and systems) and generate Brigade events as a result. However, Brigade’s gateway system works with more than just webhooks. For example, the brig client also acts as a gateway. When you execute a brig run command, brig creates a Brigade event. By default, it emits an exec event. And Brigade itself processes this event no differently than it processes the GitHub or container registry events. There are no rules about what can be used as a trigger for an event. One could write a gateway that listens on a message queue, or runs as a chat bot, or watches files on a filesystem… any of these could be used to trigger a new Brigade event. The remainder of this guide explains how gateways work and how you can create custom gateways. An Event Is A Secret The most important thing to understand about a Brigade event is that it is simply a Kubernetes Secret with special labels and data. When a new and appropriately labeled secret is created in Kubernetes, the Brigade controller will read that secret and start a new Brigade worker to handle the event. Secrets have several characteristics that make them a great fit for this role: - They are designed to protect data (and we expect them to mature in this capacity) - They can be mounted as volumes and environment variables. - The payload of a secret is flexible - Secrets have been a stable part of the Kubernetes ecosystem since Kubernetes 1.2 Because of these features, the Brigade system uses secrets for bearing event information. The Anatomy of a Brigade Event Secret Here is the structure of a Brigade event secret. It is annotated to explain what data belongs in what fields. # The main structure is a normal Kubernetes secret apiVersion: v1 kind: Secret metadata: # Every event has an automatically generated name. The main requirement of # this is that it MUST BE UNIQUE. name: example # Brigade uses several labels to determine whether a secret carries a # Brigade event. labels: # 'heritage: brigade' is mandatory, and signals that this is a Brigade event. heritage: brigade # This should point to the Brigade project ID in which this event is to be # executed project: brigade-1234567890 # This MUST be a unique ID. Where possible, it SHOULD be a ULID # Substituting a UUID is fine, though some sorting functions won't be as # expected. (A UUID v1 will be sortable like ULIDs, but longer). build: 01C1R2SYTYAR2WQ2DKNTW8SH08 # 'component: build' is REQUIRED and tells brigade to create a new build # record (and trigger a new worker run). component: build # Any other labels you add will be ignored by Brigade. type: brigade.sh/build data: # IMPORTANT: We show these fields as clear text, but they MUST be base-64 # encoded. # The name of the thing that caused this event. event_provider: github # The type of event. This field is freeform. Brigade does not have a list of # pre-approved event names. Thus, you can define your own event_type event_type: push # Revision describes a vcs revision. revision: # Commit is the commitish/reference for any associated VCS repository. By # default, this should be `master` for Git. commit: 6913b2703df943fed7a135b671f3efdafd92dbf3 # Ref is the symbolic ref name. (refs/heads/master, refs/pull/12/head, refs/tags/v0.1.0) ref: master # This should be the same as the `name` field on the secret build_name: example # This should be the same as the 'project' label project_id: brigade-1234567890 # This should be the same as the 'build' label build_id: 01C1R2SYTYAR2WQ2DKNTW8SH08 # The payload can contain arbitrary data that will be passed to the worker # JavaScript. It is passed to the script unparsed, and the script can parse # it as desired. payload: "{ 'foo': 'bar' }" # An event can supply a script to execute. If it does not supply a script, # Brigade will try to locate a 'brigade.js' file in the project's source # code repository using the commit provided above. script: "console.log('hello');" Again, note that any fields in the data: section above are shown cleartext, though in reality you must base-64 encode them. The easiest way to create a secret like the above is to do so with the kubectl command, though there are a host of language-specific libraries now for creating secrets in code. Creating Custom Gateways Given the above description of how gateways work, we can now talk about a gateway as anything that generates a secret following the format above. In this final section, we will create a simple shell script that triggers a new event every 60 seconds. In the payload, it sends the system time of the host that is running the script. #!/usr/bin/env bash set -euo pipefail # The Kubernetes namespace in which Brigade is running. namespace="default" event_provider="simple-event" event_type="my_event" # This is github.com/brigadecore/empty-testbed" base64=(base64) uuidgen=(uuidgen) if [[ "$(uname)" != "Darwin" ]]; then base64+=(-w 0) uuidgen+=(-t) # generate UUID v1 for sortability fi # This is the brigade script to execute script=$(cat <<EOF const { events } = require("brigadier"); events.on("my_event", (e) => { console.log("The system time is " + e.payload); }); EOF ) # Now we will generate a new event every 60 seconds. while :; do # We'll use a UUID instead of a ULID. But if you want a ULID generator, you # can grab one here: uuid="$("${uuidgen[@]}" | tr '[:upper:]' '[:lower:]')" # We can use the UUID to make sure we get a unique name name="simple-event-$uuid" # This will just print the system time for the system running the script. payload=$(date) cat <<EOF | kubectl --namespace ${namespace} create -f - apiVersion: v1 kind: Secret metadata: name: ${name} labels: heritage: brigade project: ${project_id} build: ${uuid} component: build type: "brigade.sh/build" data: revision: commit: $("${base64[@]}" <<<"${commit_id}") ref: $("${base64[@]}" <<<"${commit_ref}") event_provider: $("${base64[@]}" <<<"${event_provider}") event_type: $("${base64[@]}" <<<"${event_type}") project_id: $("${base64[@]}" <<<"${project_id}") build_id: $("${base64[@]}" <<<"${uuid}") payload: $("${base64[@]}" <<<"${payload}") script: $("${base64[@]}" <<<"${script}") EOF sleep 60 done While the main point of the script above is just to show how to create a basic event, it should also demonstrate how flexible the system is. A script can take input from just about anything and use it to trigger a new event. Creating A Cron Job Gateway Beginning with the code above, we can build a gateway that runs as a scheduled job in Kubernetes. In this example, we use a Kubernetes CronJob object to create the secret. First we can begin with a simplified version of the script above. This one does not run in a loop. It just runs once to completion. Here is cron-event.sh: #!/usr/bin/env bash set -euo pipefail # The Kubernetes namespace in which Brigade is running. namespace=${NAMESPACE:-default} event_provider="simple-event" event_type="my_event"" uuid="$(uuidgen | tr '[:upper:]' '[:lower:]')" name="simple-event-$uuid" payload=$(date) script=$(cat <<EOF const { events } = require("brigadier"); events.on("my_event", (e) => { console.log("The system time is " + e.payload); }); EOF ) cat <<EOF | kubectl --namespace ${namespace} create -f - apiVersion: v1 kind: Secret metadata: name: ${name} labels: heritage: brigade project: ${project_id} build: ${uuid} component: build type: "brigade.sh/build" data: revision: commit: $(base64 -w 0 <<<"${commit_id}") ref: $(base64 -w 0 <<<"${commit_ref}") event_provider: $(base64 -w 0 <<<"${event_provider}") event_type: $(base64 -w 0 <<<"${event_type}") project_id: $(base64 -w 0 <<<"${project_id}") build_id: $(base64 -w 0 <<<"${uuid}") payload: $(base64 -w 0 <<<"${payload}") script: $(base64 -w 0 <<<"${script}") EOF Next, we will package the above as a Docker image. To do that, we create a Dockerfile in the same directory as the cron-event.sh script above. The Dockerfile just sets up the commands we need, and then copies the script into the image: FROM debian:jessie-slim RUN apt-get update && apt-get install -y uuid-runtime curl RUN curl -LO(curl -s)/bin/linux/amd64/kubectl \ && mv kubectl /usr/local/bin/kubectl && chmod 755 /usr/local/bin/kubectl COPY ./cron-event.sh /usr/local/bin/cron-event.sh CMD /usr/local/bin/cron-event.sh (The really long line just installs kubectl) And we can pack that into a Docker image by running docker build -t technosophos/example-cron:latest .. You should replace technosophos with your Dockerhub username (or modify the above to store in your Docker registry of choice). Then push the image to a repository that your Kubernetes cluster can reach: $ docker push technosophos/example-cron Now we create our third (and last) file: a CronJob definition. Our cron.yaml should look something like this: apiVersion: batch/v1beta1 kind: CronJob metadata: name: example-cron-gateway labels: heritage: brigade component: gateway spec: schedule: "*/1 * * * *" jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - name: cron-example image: technosophos/example-cron:latest imagePullPolicy: IfNotPresent We can install it with kubectl create -f cron.yaml. Now, every minute our new gateway will create an event. Whenever you are done with this example, you can delete it with kubectl delete cronjob example-cron-gateway. That’s it! We have create both a local shell script gateway and an in-cluster cron gateway. Again, there are other programming libraries and platforms that interoperate with Kubernetes. Many of them are hosted on GitHub in the kubernetes-client org.
https://docs.brigade.sh/topics/gateways/
2019-06-16T05:36:03
CC-MAIN-2019-26
1560627997731.69
[]
docs.brigade.sh
Covered in this article: - Before getting started - Categories overview - Adding policy categories individually - Adding categories via a spreadsheet - Adding sub-categories - Category-specific rules and description hints - Auto-categorize card expenses with default categories - Implicit categorization Before getting started - If you are using Expensify for individual use, you will want to set up categories in your personal policy's settings. These categories will be overridden if you are reporting on a company policy. - Go to Settings > Policies > Individual > [Policy name] > Categories to find your categories setup page. - Are you an admin for a group policy? Read on! Categories Overview - Group Policies In Expensify, Categories is a term that refers to the Chart of Accounts, GL Accounts, Expense Accounts or Expense Categories (depending on your accounting system). They are line-item expense details that correspond to your accounting and financial reporting systems. - If you are using one of our web service integrations with QuickBooks Online, QuickBooks Desktop, Intacct, Xero or NetSuite, then we will automatically pull in these categories from your accounting system. If you are not using one of these systems, then you can import your categories either by CSV file upload or individually. - Each category can also be set up to have specific rules like required receipts, maximum expense amounts, and description being required. Adding Policy Categories Individually Go to Settings > Policies > Group > [Policy name] > Categories in order to add categories individually:. - Tap Categories. On the next screen, you can add categories using the + button, swipe left to delete a category, or tap a category name to edit it. Adding Categories via a Spreadsheet: 1. Create the spreadsheet of these accounts and save this as a CSV file. - One column (column A) should be the user-facing account name (the "category" the user will select on their edit screen. - A second column (column B) can have the GL number associated with the account. Here is an example of what this file looks like: 2. Upload this file into the Categories section of the group policy. (Settings > Policies > Group > [Policy name] > Categories). Here's an example of how to do this: Adding Sub-categories If you would like to create sub-categories under your category selection drop-down list, you can do so by adding a colon after the name of the desired category and then type the sub-category (without spaces around the punctuation). For example, to add transportation sub-categories, you would add them like so: Which will then show up in the category drop-down list like shown here, with the text before the colon showing up as the category header (which will not be selectable): Category-Specific Rules and Description Hints Control policy admins have the ability to enable specific rules based on the category of the expense. This feature allows admins to have control of expense reporting on a more granular level. Enabling Category Rules The following are the rules that can be set specifically at the category level. To enable rules for a given category: - Go to Settings > Policies > Group > [Select Policy] > Categories - Click Edit Rules next to the category name that you would like to define rules for. GL Code and Payroll Code: These are optional fields if these categories need to be associated with either of these codes in your accounting or payroll systems Max Amount: Allows you to set specific expense amount caps based on the expense category. Using Limit type, you can define this per individual expense, or per day (for expenses in a category on an expense report). Receipts: Allows you to decide whether you want to require receipts based on the category of the expense. For instance, it’s common for companies to disable the receipt requirement for mileage expenses. Description: Allows you to decide whether to require the description field to be filled out based on the category of the expense. Description Hint: Allows you to place a hint in the description field. This will appear in light gray font on the expense edit screen in this field to prompt the expense creator to fill in the field accordingly. Updating Category Rules via Spreadsheet Want to quickly update category rules in-bulk? You can do so by exporting to CSV, editing the spreadsheet, then importing back into the categories page. This will allow you to quickly add new categories, and set GL codes, payroll codes, description hints etc. Rule Enforcement If users are in violation of these rules, those violations will be shown in red on the report. Any category-specific violations will only be shown once a category has been selected for a given expense. Expenses with violations will not be auto-submitted if Scheduled Submit is enabled for a policy. Description Hints Description hints will be displayed both in the expenses table (shown above) and in the expense editor (shown below) once a category is selected. Auto-Categorize Card Expenses with Default Categories If you're importing card transactions, Default Categorization will provide a massive benefit to your company's workflow by automatically coding expenses to the proper GL. -. - You're well on your way to expense automation freedom! Default Categories based on specific MCC codes If you require more granular detail, the MCC Editor gives you even greater control over which MCC Codes are assigned to which Categories. The MCC Editor can be found just below the Default Categories table: Implicit Categorization Over time, Expensify will learn how you categorize certain merchants and then automatically apply that category to the same merchant in the future. - You can always change the category; we'll try to remember that correction for next. For a live overview of the Policy Admin role, policy management and administration, register for our free Admin Onboarding Webinar! Still looking for answers? Search our Community for more content on this topic!
https://docs.expensify.com/articles/2888-policy-categories
2019-06-16T05:37:16
CC-MAIN-2019-26
1560627997731.69
[array(['https://downloads.intercomcdn.com/i/o/67417855/99d49a4d7a3f9a0d9429d265/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/120016877/405d35283a380fabf7cc3d85/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/67418072/974515cb2f114320ddf4611b/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/67429101/b67a615621bc743ff3e0e34a/import+categories.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/120018774/f14b50d32bb7c37e56f29483/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/120018887/c44908ed347dca10c44652e6/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/120017192/fa046083dda66541060900ac/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/120017206/1aaa5bb33718ad9324a1dd3a/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/89597683/b650ba9c5fc33def39bc91fb/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/89599354/1a6480fcba626396e15667e6/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/67418816/ee4a3b3d94714de89fcdd373/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/67418862/d2228d867b10995340a0c0fc/image.png', None], dtype=object) ]
docs.expensify.com
Common .NET Libraries for Developers Craig Utley CIO Briefings February 2002 Summary: Identifies and defines many of the common namespaces that you will use when developing .NET applications, and provides examples of the most common classes and methods in those namespaces. (24 printed pages) Objectives - Identify .NET namespaces that will be used by most developers - Learn how to use the .NET functionality found in common namespaces in your code - Examine examples of the most common classes and methods in those namespaces Contents The .NET Framework Examining Some Common Namespaces Summary About the Author About Informant Communications Group The .NET Framework As you have no doubt heard, the .NET Framework is the underlying library for all .NET languages. This is a huge library of functions that is programming language-independent, meaning that any programming language targeted at the .NET platform can make use of this functionality. The importance of this underlying set of types and members cannot be overstated; developers in any language will have access to the same types, and they will behave exactly the same way in any .NET language. If you need to create a multithreaded application, for example, you use the threading classes present in the .NET Framework, instead of having to rely on your language of choice having built-in support for multithreading. While the actual syntax to call the objects varies from language to language, the object is the same across all languages, because they are created from the .NET Framework base class. Given this common, underlying set of libraries, .NET developers will spend a large percentage of their time working with Framework objects. This should make programming easier in the long run, regardless of what .NET language you are using. Which Namespaces Will Be Common? In looking at the libraries in the .NET Framework, the question becomes, "Which ones will developers use more often than others?" Since the purpose of this article is to discuss the more commonly used libraries, there will obviously be libraries that do not get covered. It is inevitable that there are developers whose most commonly used library will not be discussed in this paper. It is also inevitable that some developers will never use one or more of the libraries covered in this paper. This is not unexpected, and it will only serve to show the incredible breadth and depth of applications that can be built with the .NET Platform. The criteria used to include a library as common in this paper is that it will be used by a majority of developers building applications that use such elements as Windows Forms, Web Forms, Web services, Class Libraries, databases, and XML. By using these rather broad areas as the basis for choosing the libraries, the list of libraries included is large, but not overwhelming. Once you understand some of these libraries, working with other libraries is much easier. Namespaces Covered in This Paper The namespaces chosen for inclusion in this paper are: - System - System.Collections - System.Data - System.Drawing - System.IO - System.Text - System.Threading - System.Timers - System.Web - System.Web.Services - System.Windows.Forms - System.Xml Examining Some Common Namespaces There are a number of namespaces to examine in this paper. Therefore, most of the sections will include a brief description of the library, along with a mention of some of the more important objects. Finally, most sections will include a short code example showing at least a small portion of what you can do with that particular library. The System Namespace At the top of the namespace hierarchy is the System namespace itself. While all the rest of the libraries in this paper will be under the System namespace, it is important to understand that the System namespace does contain the types that represent the base data types, such as Object, Int32, String, and so forth. In addition to these base data types, there are dozens of classes in the System namespace. These classes cover such areas as garbage collection (the GC class) and certain exceptions (covered with the SystemException, StackOverflowException, and OverflowException classes, among others). These classes represent core functionality within the .NET Framework. Finally, the System namespace contains many second-level namespaces that provide the rest of the .NET Framework functionality. These secondary namespaces are generally grouped into areas of functionality. For example, data access is handled by System.Data, System.Xml, and System.Xml.Serialization. Graphical User Interfaces are handled with the System.Drawing and System.Windows.Forms namespaces. The important point to remember with the System namespace is that it contains the base data types and some of the core functionality of .NET. It is also the top of the namespace hierarchy in the .NET Framework. In Visual Basic® .NET the System namespace is automatically imported, so you do not need to prefix any of the data types or any of the other namespaces with "System." You may simply use these namespaces. You will see examples of this usage throughout the document. System namespace example The System namespace is the home of the base data types. In .NET, the base data types are defined in the Framework itself, and any .NET language can use those base data types. Most languages provide their own data types, but these typically map directly to base data types in the Framework. For example, the Integer data type in Visual Basic .NET maps to the System.Int32 base type. The long data type in Visual Basic .NET maps to System.Int64. Base data types are actually structures in the System namespace. Structures are similar to classes in that they can contain data and functions. This means that the base data types can be treated as standard values, or you can treat them as objects with methods. For example, if you need to convert a Visual Basic .NET Integer to a string, you can simply use the ToString method of the Integer type to convert the integer to a string, as shown in the example below. Dim MeaningOfLife As Integer = 42 MsgBox(MeaningOfLife.ToString) Performing these conversions is fairly straightforward, but what if you need to convert the integer to something other than a string? For example, you might want to convert an Integer to a Byte. This type of conversion is normally allowed by Visual Basic .NET, but not C#. In Visual Basic .NET, if you turn on Option Strict, then this type of conversion is disallowed as well, because you are converting a larger number to a smaller one, which could cause problems if the value in the Integer is too large for the Byte. In order to perform such conversions, the System namespace includes a Convert class, which converts a base type to another base type. Not all base types are supported, but many are. Not all conversions are supported, such as Char to Double. Such invalid conversions throw an InvalidCastException. If a conversion is attempted from a larger size to a smaller size, such as from Int32 to Byte, and the value in Int32 is too large for a Byte, then an OverflowException is thrown. Using the Convert class is fairly easy. For example, the following code converts an Integer to a Byte successfully in the first case, and then fails in the second case. Dim MeaningOfLife As Integer = 42 Dim ShortMeaningOfLife As Byte 'This conversion will work fine ShortMeaningOfLife = Convert.ToByte(MeaningOfLife) 'This conversion will fail Try MeaningOfLife = 42000 ShortMeaningOfLife = Convert.ToByte(MeaningOfLife) Catch except As OverflowException MsgBox("Overflow occurred") End Try The System.Collections Namespace The System.Collections Namespace is one of the four basic programming namespaces (along with System.IO, System.Text, and System.Threading). System.Collections contains all the classes and interfaces needed to define collections of objects. Some of the classes include: - ArrayList: An ArrayList is a type of collection to which you can add items and have it automatically grow. You can move through the items using a MoveNext on the enumerator of the class. The ArrayList can have a maximum capacity and a fixed size. - CollectionBase: This class is the base for a strongly-typed collection, critical for creating collection classes, which are described below. - DictionaryBase: This class is the base for a strongly-typed collection utilizing associated keys and values. This is similar to the Dictionary object found in VBScript and used by many ASP developers. - SortedList: This class represents a collection of keys and values, and is automatically sorted by the key. You can still access the values with either the key or the index number. - Hashtable: This class represents a collection of keys and values, but the keys and values are hashed using a hashing algorithm. Searching based on key is very fast, and is moderately fast when based on value. The order of the items cannot be determined, but if you need searching capabilities, this is the best collection to use. - Queue: This class represents a collection that is to be used on a first-in, first-out or FIFO basis. This unsorted list lets you add items to the end and read (and optionally remove) the items from the top. - Stack: In contrast to the FIFO nature of the Queue class, the Stack class represents a collection for last-in, first-out or LIFO. Some of the classes are for specialized purposes, such as a Stack class that represents a last-in-first-out collection implemented as a circular buffer. As you can see, some of the classes are quite specialized. The main use of the System.Collections namespace, however, will be to create collection classes. Many Visual Basic 6.0 developers were confused by the concept of collection classes. This following example should help clarify this concept. Assume you create a class called Patient. The Patient class has a number of properties, methods, and events. You then want to create several Patient objects within one structure so you can iterate over each object and perform some operation upon each object. You could just create a variable of type Collection, but a standard collection can hold any type of value. In other words, you could add a Patient object to it, but you could also add a Form object, a text string, an integer, and any other type of item. To get around this limitation, you create your own class that looks like a collection. It has Add and Remove methods, as well as an Item property, but the class is coded so you can only add Patient objects. A class that mimics a collection is called a collection class. In Visual Basic 6.0, you simply started by creating an empty class, creating an Add method, a Remove method, an Item property, a Count property, and so on. This is easier in .NET, thanks to System.Collections. System.Collections namespace example Assume that you have created a Patient object. The structure of the Patient object is immaterial for this example as you only want to create a collection of Patient objects. This collection will be called Patients, following the standard of having object names as singular and collection names as plural. First, you create a class called Patients. In this class, you inherit the System.Collections.Collection base. Inheriting CollectionBase automatically gives you a Clear method (to empty the collection) and a Count property. There is also a protected member called List, which acts as an internal collection to hold your objects. Public Class Patients Inherits System.Collections.CollectionBase End Class Next, you create an Add method to add objects to the list, a Remove method to remove items from the list, and an Item property to retrieve an individual object from the list. Here is the basic code to create these items: Public Sub Add(ByVal pPatient As Patient) list.Add(pPatient) End Sub Public Sub Remove(ByVal pIndex As Integer) If pIndex > Count - 1 Or pIndex < 0 Then 'return error message Else list.RemoveAt(pIndex) End If End Sub Public ReadOnly Property Item(ByVal pIndex As Integer) _ As Patient Get If pIndex > Count - 1 Or pIndex < 0 Then 'return error message Else Return CType(list.Item(pIndex), Patient) End If End Get End Property In this code, you can see that the Add method simply adds the object of type Patient to the list item that is given to you when you inherit from CollectionBase. In the Remove method, you receive the index and remove that specific item from the list if it exists. Finally, in the Item method, you receive an index number and return the current object from the list as type Patient. As you can see, inheriting from System.Collections.CollectionBase has made creating collection classes easier than it was in Visual Basic 6.0. In this example, the order in which your patients were stored in the collection was unimportant. However, there might well be times where you do care about the order in which items are stored. The SortedList class allows you to easily sort items, and easily retrieve those items with either the key or the index of the array. For example, some companies always want to see their products sorted in a certain order, even if that order is not alphabetic. You could add the items to the SortedList in any order, but provide a key that would keep them sorted. In the following code, the key is a simple numeric sort order, but could be any other way you wanted to sort the items. Dim myItems As New SortedList() With myItems .Add("2", "Apples") .Add("1", "Oranges") .Add("4", "Lemons") .Add("5", "Grapes") .Add("3", "Limes") End With Dim i As Integer For i = 0 To myItems.Count - 1 Console.WriteLine(myItems.GetByIndex(i)) Next The System.Data Namespace The System.Data namespace will be used in the vast majority of applications because it is the namespace that holds the classes for ADO.NET. This means that any application using ADO.NET will call System.Data. It is interesting to note that the exact name of the class will depend on the managed provider you are using. The major categories will be: - Connection: In the first version of ADO.NET, this will be either SqlConnection or OleDbConnection. This class is responsible for making the actual connection to the database, and it also handles starting a transaction. - Command: This will be either SqlCommand or OleDbCommand. This class allows you to issue SQL statements or call stored procedures. It is the object that will actually execute the command, and may return data. - DataReader: Either SqlDataReader or OleDbDataReader, this object is used to read data in a forward-only manner. This is the firehose cursor with which most ADO developers are familiar. - DataAdapters: Either SqlDataAdapter or OleDbDataAdapter, this object encapsulates a connection and one or more commands. It gets data and fills a DataSet object with data. - DataSets: Either SqlDataSet or OleDbDataSet, this is the ADO.NET disconnected data cache. It stores data in memory in a schema, and applications can use it like a database, accessing and updating data. Making database access part of the underlying .NET Framework means that any language targeting the .NET platform will have access to the same set of data objects. In addition, if the .NET Framework is ported to other platforms, the database access objects will be the same on all platforms. ADO.NET is different from previous Microsoft database access technologies in several ways. For example, the DataSet is the object used to store data in memory in ADO.NET. The DataSet is disconnected by nature, having no understanding of the underlying data source. In addition, the DataSet holds data in memory in a schema format, which allows you to define tables, relationships, and constraints. Finally, the DataSet speaks fluent XML, having the ability to both read and write XML. This makes it quite easy to transmit an entire DataSet, with the data and schema, from one application to another, even if the applications are on separate machines. System.Data namespace example In the following example, you see several of the major ADO.NET objects. First, an ADO.NET Connection (SqlConnection) object is created, which is what is used to connect you to a particular server and database. Next, an ADO.NET Command (SqlCommand) object is created, which will be used to issue a command to the database to which you are connected. Next, a DataAdapter (SqlDataAdapter) object is created, which is what sits between a DataSet and a data source, and retrieves records and handles updates. Finally, a DataSet object is created, which will be used to hold data in memory. After the objects are created, the connection is opened, and the Fill method of the DataAdapter is called. This issues the command and places the records returned into the DataSet. The GetXml method is called in order to get the entire DataSet in XML format, and it is displayed in a message box. Normally, of course, this XML string would be returned from the method, but it is easy to see here inside a message box. xmlAuthor As String = authorDS.GetXml MsgBox(xmlAuthor) cn.Close() The following example used the System.Data.SqlClient namespace. This namespace is for accessing Microsoft SQL Server 7.0 or 2000. If you are using any other database, you would use the System.Data.OleDb namespace. The object names are slightly different, as you saw in the bulleted list earlier. The methods and properties are the same, as are all the concepts. However, while the SqlClient uses a new, direct connection driver, the OleDb namespace uses OLE DB as the underlying mechanism to connect to databases. This gives you backwards compatibility to databases that have not yet created a .NET managed provider. The System.Drawing Namespace The System.Drawing namespace gives you easy access to the GDI+ graphics system. Certain controls from previous versions of Visual Basic 6.0, such as the Line and Shape ActiveX® controls, are no longer in Visual Basic .NET. In order to get lines and shapes on your Visual Basic .NET Windows Forms, you use the GDI+ functions in System.Drawing. In addition to drawing simple shapes, System.Drawing allows you to perform a number of interesting tasks, such as having forms that are not just rectangular in nature. If you've always wanted a round form for some reason, you can achieve that with System.Drawing. In addition, there are several lower-level namespaces for advanced functionality, such as System.Drawing.Drawing2D and System.Drawing.Text. System.Drawing namespace example In this example, you have a simple form with one button. When the user clicks the button, the form takes on the shape of a circle. Because the circle is contained in a rectangle that starts at coordinates 0,0 the user will able to click on part of the title bar and move the form around. Imports System.Drawing.Drawing2D Public Class Form1 Inherits System.Windows.Forms.Form Private Sub Button1_Click _ (ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button2.Click Dim grafPath As GraphicsPath = New GraphicsPath() grafPath.AddEllipse(New Rectangle(0, 0, 200, 200)) Me.Region = New [Region](grafPath) End Sub End Class As mentioned earlier, the line and shape are both gone. You can use a couple of methods to generate a line. The quick and dirty way is to use a label with a height of one and turn on the border. This looks exactly like a line. Or, you can use the DrawLine method in System.Drawing.Graphics. In fact, System.Drawing.Graphics also includes methods such as DrawEllipse, DrawArc, DrawRectangle, and many other methods for drawing simple shapes. The System.IO Namespace As another one of the four basic programming namespaces, System.IO namespace allows you to read and write files and data streams. Reading and writing can be done synchronously or asynchronously. IO is performed against a base Stream class, which is a generic way to access the data without regard to the underlying platform on which the data resides. Streams can support any combination of reading, writing, and seeking (modifying the current position within a stream). A File class allows you to perform actions such as copying and deleting files. In addition, there are Exists, Open, OpenRead, OpenText, and OpenWrite shared methods for performing file IO. Since the methods of the File class are defined as Shared, you do not need to create an instance of a File class, you can just use the methods. System.IO provides several classes that inherit from a few base classes, of which StreamReader and StreamWriter are commonly used for reading and writing from text files. System.IO namespace example In this example, you use the File object in order to create a file and then write text to it using the WriteLine method of the StreamWriter. Imports System.IO Public Class Foo Private Sub WriteFile(ByVal psFileName As String) If File.Exists(psFileName) Then 'return error Else Dim myFile As StreamWriter = _ File.CreateText(psFileName) myFile.WriteLine _ ("I got this to work at " & TimeString) myFile.Close() End If End Sub End Class The System.Text Namespace The System.Text namespace is one of the four basic programming namespaces, and is all about working with text. String manipulation is a common activity in any application, but it is also one of the most expensive activities in terms of processor cycles. If you create a variable of type String, you are actually creating a String object; all types in .NET inherit from the base Object class, so all data types are actually objects. The String object cannot be changed, despite the fact that it can appear to be changed by assigning a new string to a String variable. For example, there is a method called Concat that allows you to concatenate strings (or objects). However, because the String object is immutable, a new string is actually created and returned. This consumes a significant amount of overhead, but you do have an alternative—the StringBuilder class. If you are performing a few simple assignments or concatenations, don't worry about the overhead. However, if you are building a string inside a loop, the StringBuilder is going to give you much better performance with lower overhead. The StringBuilder class is found in System.Text. StringBuilder allows you to change a string without creating a new String object. Therefore, if you want to append, insert, remove, or replace characters in a string, the StringBuilder may be a better choice if you have a significant amount of manipulation to perform. The System.Text namespace also contains classes for encoding characters into bytes and decoding bytes into characters. For example, there are also encoders for ASCII and Unicode. System.Text namespace example In this example, you create a StringBuilder object and set it to a string using the constructor. Next, you create a message box showing the character in position six (caution: start counting with zero, not one). Next, you change that character in position six to a 'g' (which changes 'think' to 'thing'). Next, you replace the string 'thing' with the string '.NET' and show it in a message box. Finally, you append a string to the end of the string, claiming that Descartes said "I .NET, therefore I am." Imports System.Text Public Class Form1 Inherits System.Windows.Forms.Form Private Sub Button1_Click _ (ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button4.Click Dim sbQuote As New StringBuilder _ ("I think, therefore I am.") MsgBox(sbQuote.Chars(6)) sbQuote.Chars(6) = "g" MsgBox(sbQuote.ToString) sbQuote.Replace("thing", ".NET") MsgBox(sbQuote.ToString) sbQuote.Insert(sbQuote.Length, " -- Descartes", 1) MsgBox(sbQuote.ToString) End Sub End Class The System.Threading Namespace System.Threading is another of the four basic programming namespaces. Visual Basic .NET is truly multithreaded because the .NET Framework provides the System.Threading namespace, which provides the classes needed for multithreaded application development. The Thread class represents a thread on which you can execute code. You can create a thread and specify its priority by instantiating a new Thread object. Creating multithreaded applications requires planning and forethought. Many Visual Basic 6.0 developers will want to dive in and create multithreaded applications because they can. However, synchronization can be a challenge for multithreaded applications. If multiple threads attempt to access a shared resource at the same time, deadlock issues can result. Therefore, you must synchronize access to shared resources. Fortunately, the .NET Framework provides objects to manage thread synchronization. Another challenge with multithreading is that you cannot place Functions on a separate thread in Visual Basic .NET. Therefore, you must make the procedure a Sub, which cannot return a value. In addition, the constructor for a new thread will only take a procedure that does not accept any arguments. Given these two limitations, you may have to come up with alternative ways of coding your application. One way around the inability to pass a parameter to a Sub running on a separate thread is to place the Sub in a separate class, set a property in that class, and then call the Sub. To handle a return value, the called Sub should raise an event, which is handled by the calling class. This call-back functionality is not new, but it was not commonly implemented by Visual Basic 6.0 developers. System.Threading namespace example This example is more involved than most, but is fairly simple code. Create a new Visual Basic .NET Windows Application project. On the form, add a button. Create a second form and add a label. In the code for the first form, instantiate an object to point to the second form, set a property, and then call a method. The property you set, MaxCount, can be altered depending on the speed of your processor and how long you want to wait. Public Class Form1 Inherits System.Windows.Forms.Form #Region " Windows Form Designer generated code " … #End Region Private Sub Button1_Click _ (ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button1.Click Dim myForm As New Form2() myForm.Show() myForm.MaxCount = 50000 myForm.DoStuff() End Sub End Class On the second form, add a label, and then add the code you see below. Realize that you won't actually see the label update because there is not a DoEvents, but the label is being updated or the code would run too fast for you to see what is happening. There is a Beep to let you know when it is done. Public Class Form2 Inherits System.Windows.Forms.Form #Region " Windows Form Designer generated code " … #End Region Public MaxCount As Long Public Sub DoStuff() Dim Counter As Long For Counter = 1 To MaxCount Label1.Text = Counter 'without a DoEvents, you won't 'actually see the label update. 'However, it helps make this 'take more time Next Beep() End Sub End Class When you run this code, you'll click the button on Form1. Form2 will open, but it will be on the same thread as Form1. Form2 will consume all processor cycles for the thread, so if you try to click back on Form1, you will find it will not get the focus until the loop is done. The next step is to have the DoStuff method in Form2 launch on a separate thread. In order to do this, you need only make changes in Form1. There are no changes required to DoStuff or anything else in Form2. You create a new Thread object and pass it the address of the DoStuff procedure in Form2. Then, you simply start the thread. Your new code will look like this: Private Sub Button1_Click _ (ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button1.Click Dim myForm As New Form2() myForm.Show() myForm.MaxCount = 50000 Dim busyThread As New _ System.Threading.Thread(AddressOf myForm.DoStuff) busyThread.Start() End Sub If you run the code now, you'll see that when Form2 opens, you can click on Form1 immediately. As a side benefit, you now see the label on Form2 being updated, since the application's threads each get time at the processor, and this frees up time for the label to be updated. The System.Timers Namespace Visual Basic 6.0 could use a Timer ActiveX control on a form to fire an event at a regular interval. The challenge with this is what if you needed to fire an event on a regular basis in a COM Component? You could call the Windows API, but many Visual Basic 6.0 developers just went ahead and created regular applications and used a hidden form to host the Timer control. In Visual Studio .NET, the Timer control exists, and is part of the System.Windows.Forms namespace. However, you also have the System.Timers namespace, which contains a Timer class. This Timer class allows you to create one or more timers and have them fire events at a regular interval. The Timer in .NET is more accurate than previous Windows timers. The Timer is designed to work in a multithreaded environment, which allows it to move among threads in order to handle the raising of its Elapsed event. Because of the multithreaded nature of the Timer, it is possible, although unlikely, that you issue the Stop command while an Elapsed event is being handled on another thread. This could result in an Elapsed event occurring after you stop the Timer. To check for this, the Timer has a SignalTime property that lets you determine exactly when the Elapsed event gets raised. System.Timers namespace example There are two pieces to this example—a component that implements the timer, and a form that handles the event returned from the component. Create a new component called timerComponent. In timerComponent create an Event called TimerFired, which is what you will send back to the client. You then create a public method called StartTimer, which is what you will call from the client in order to start the timer. Notice that there isn't a StopTimer method, but you could easily add one. Finally, you add an event handler to handle the event of the timer firing. The timer fires and you must handle that in the component. Then, you raise an event that can be handled back in the client program. In this case, you handle the timer's event with a sub called TimerHandler, and you raise an event to be handled on the client. You also pass back the DateTime of the Elapsed event, which is called SignalTime. Your code for the component looks like this: Imports System.Timers Public Class timerComponent Inherits System.ComponentModel.Component #Region " Component Designer generated code " ... #End Region Event TimerFired(ByVal TimeFired As DateTime) Dim tTimer As New Timer() Public Sub StartTimer() AddHandler tTimer.Elapsed, AddressOf TimerHandler tTimer.Interval = 5000 tTimer.Enabled = True End Sub Public Sub TimerHandler _ (ByVal sender As Object, _ ByVal e As System.timers.ElapsedEventArgs) RaiseEvent TimerFired(e.SignalTime) End Sub End Class Now, you need to create the client. This is a simple form that instantiates timerComponent, but it instantiates it using the WithEvents keyword so it can handle any events raised by the component. Then, in a button, you turn on the timer. Finally, you have an event handler that handles the event and creates a message box showing the time the event was fired. You have five seconds to close the message box before the next event is received. Imports System.Timers Public Class Form2 Inherits System.Windows.Forms.Form #Region " Windows Form Designer generated code " ... #End Region Dim WithEvents tTimer As New timerComponent() Private Sub Button1_Click _ (ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button1.Click tTimer.StartTimer() End Sub Private Sub tTimer_TimerFired _ (ByVal TimeFired As Date) Handles tTimer.TimerFired MsgBox(TimeFired) End Sub End Class The System.Web Namespace One of the most exciting changes in .NET is the fact that ASP.NET is part of the Framework. This means that Visual InterDev® as a product is gone, and Web applications can be built in any .NET language. Further, ASP.NET uses truly compiled code instead of scripting languages, as were used in traditional ASP applications. By making ASP.NET part of the Framework, Web applications take full advantage of the Framework's services, such as memory management and security. This allows you to build powerful Web applications using the classes of the Framework, such as the System.IO, System.Data, and System.XML namespaces covered in this paper. System.Web includes Windows Forms that allow you to build pages by simply dragging and dropping from within Visual Studio® .NET. System.Web also includes a host of Windows Form controls that work like objects from a coding standpoint, but generate standard HTML when it comes time to interact with a browser. These controls can be as simple as text boxes and labels, or more complex, such as the validator controls and the calendar control. System.Web namespace example As simple as this example may be, it shows several things. First, there are three controls added to this form. These are ASP.NET server controls, which allow you to code against the objects using familiar property and method calls. In addition, there is no programming code in this file, but if you save it as a file with an ASPX extension in a directory in Inetpub\wwwroot, it will run without adding any code. If you click the button with no value in the textbox, you will get the message that the first name is required. Fill in a value and click the button, and no message appears. <HTML> <HEAD> <title>WebForm1</title> </HEAD> <form id="Form1" method="post" runat="server"> <asp:TextBox</asp:TextBox> <asp:RequiredFieldValidator </asp:RequiredFieldValidator> <br> <asp:Button</asp:Button> </form> </body> </HTML> The System.Web.Services Namespace XML Web services are one of the newest and most exciting additions to the world of development. XML Web services have been available for some time on the Microsoft platform, thanks to the SOAP Toolkit. Basically, a Web service is a way to take a component and make it accessible through the Web. Distributed components have been available for years in the form of ActiveX components accessed through DCOM, but Web services make components available over HTTP using SOAP. DCOM is a binary standard that is often not transmitted through firewalls or proxy servers. In addition, DCOM is only supported on platforms running COM, which has basically limited COM/DCOM to Microsoft Windows® platforms. With Web services, however, the landscape has changed. The call and response are in XML format, and are passed over HTTP. This means that the component can be called using simple text and can be called by any client on any platform. The result comes back in XML format, meaning that any client that speaks XML can consume the results. Finally, you can code your components using your favorite Microsoft tools and have those components used by people running on Unix or an AS/400, or any other platform that can make HTTP calls and consume XML. This opens up a new world of interoperability, and means you can distribute your application on servers around the world and tie them together with HTTP. Creating Web services is fairly straightforward, as you will see in a moment. However, how Web services are discovered and accessed is a topic worthy of an article unto itself. Therefore, this paper will just show how to create a Web service, and a simple way to access it. Realize that the access could come from a Windows application, a Web application, or any other client you can imagine, including the growing wireless devices such as PDAs and Web-enabled phones. System.Web.Services namespace example The easiest way to create a new Web service is to create a new project in Visual Studio of type ASP.NET Web Service and name it TestWebService. The shell of the first service is already done for you, including commented lines of code showing the way to build public methods. Make your Web services code look like this: Imports System.Web.Services Public Class Service1 Inherits System.Web.Services.WebService #Region " Web Services Designer Generated Code " ... #End Region <WebMethod()> Public Function GetLatestSteelPrice() _ As String 'look up steel price using whatever method GetLatestSteelPrice = 35 End Function End Class The one public function here, GetLatestSteelPrice, is simplified to show you how it works. Normally, you'd actually look up the latest steel price by looking in a database or accessing some form of streaming quote system. Regardless, you would return that value to the client. This simple example uses a hard-coded value in order to see how this works. You can compile the project and test the Web service immediately. Open Internet Explorer and type in the following URL, replacing the server name and project name if necessary: //localhost/TestWebService/Service1.asmx/GetLatestSteelPrice? The result that will appear in your browser is the XML response from the Web service. It will look like this: <?xml version="1.0" encoding="utf-8" ?> <string xmlns="">35</string> If you're not very excited, think about it this way—you called a method in a component over standard HTTP and retrieved the result. Normally, you'd call this Web service from a client application. In Visual Studio .NET, you can simply add a Web Reference, which is just like adding a reference to a COM component in previous versions of Visual Studio. After adding the Web Reference, you refer to the Web service as you would any other component. For example, if you create the XML Web service and then start a new Windows application, you can right-click on the References node in the Solution Explorer window and choose Add Web Reference. This opens the Add Web Reference dialog box, and in the Address box, you type in the name of the service (in this case, //localhost/TestWebService/Service1.asmx.) Now, you will see that the Add Reference button is enabled, as shown in Figure 1. Figure 1. Add Reference button is enabled After clicking the Add Reference button, a Web References node is added to the Solution Explorer, showing the reference to the XML Web service. Figure 2. Web Reference node in the Solution Explorer Now, in your code, you reference the XML Web service as any other component. In your client Windows Form, you can call the XML Web service using the following code: Dim SteelPrice As New localhost.Service1() MsgBox(SteelPrice.GetLatestSteelPrice) The System.Windows.Forms Namespace With Visual Studio .NET, Microsoft introduces a new forms engine, replacing the forms that had been in Visual Basic since the beginning. By making System.Windows.Forms (or Windows Forms) part of the Framework, any .NET language can take advantage of a forms engine designed to take full advantage of the graphical richness of the Windows operating system. It has never been easier to create powerful GUI applications on the Windows platform, and the Windows Forms engine adds functionality that developers have been craving for some time. Forms have been objects in Visual Basic since the start, but you didn't always specifically treat them as objects. Windows Forms have a Form class that represents a form, and thanks to .NET's built-in inheritance, you can finally create a base form and inherit from that base form as you build other forms. In addition, most of the common controls are part of the System.Windows.Forms namespace. For example, there are classes such as Button, Label, and TextBox. Since the basic form controls are part of the .NET Framework, you can create a form entirely with a text file. Unlike previous versions of Visual Basic, there is no .FRX file that contains the binary data of a form. Binary data, such as background images, does get stored in an RESX file, however. The RESX file is an XML file that mixes XML with the binary data. You can still open the file with Notepad and view and manipulate the XML. You can now create the forms in Notepad if you desire by creating instances of the classes for the controls you want. While Windows Forms have these new .NET-based controls, they can still use most ActiveX controls. Using ActiveX controls on Windows Forms is the subject of another article in this series, but basically .NET creates an AxHost class that wraps around the ActiveX control and extends it with the new properties available with .NET Windows Forms controls. System.Windows.Forms namespace example There is no code to write in this example. Instead, you'll see the resulting code from some of your actions within the IDE. On an empty form, drag and drop some controls, such as a Button and Label. Right-click on the Toolbox and choose Customize Toolbox. Now, choose an ActiveX control and add it to the toolbox. Now, drag that ActiveX control onto the form. Switch to code view and expand the region Windows Form Designer generated code and look at the resulting code. Buttons are declared as Friend outside of any procedures, such as this: Friend WithEvents Button1 As System.Windows.Forms.Button The ActiveX control gets the letters Ax as a prefix, as shown here: Friend WithEvents AxCUtley1 As _ AxCUtleyCtrl.AxGenAmortSched Inside the InitializeComponent section, you can see the controls actually being instantiated, such as: Me.Button1 = New System.Windows.Forms.Button() Me.AxCUtley1 = New AxCUtleyCtrl.AxGenAmortSched() CType(Me.AxCUtley1, _ System.ComponentModel.ISupportInitialize).BeginInit() Finally, you'll see the code that positions the controls and sets properties: 'Button1 ' Me.Button1.Location = New System.Drawing.Point(104, 16) Me.Button1.Name = "Button1" Me.Button1.TabIndex = 0 Me.Button1.Text = "Button1" ' 'AxKProClarity1 ' Me.AxCUtley1.Enabled = True Me. AxCUtley1.Location = _ New System.Drawing.Point(232, 120) Me. AxCUtley1.Name = " AxCUtley1" Me. AxCUtley1.OcxState = _ CType(resources.GetObject("AxCUtley1.OcxState"), _ System.Windows.Forms.AxHost.State) Me. AxCUtley1.Size = New System.Drawing.Size(120, 23) Me. AxCUtley1.TabIndex = 4 The System.Xml Namespace The System.Xml Namespace is used for processing XML. This namespace supports a host of XML standards, such as: - XML 1.0 - Namespaces - Schemas - XSL/T - SOAP 1.1 The System.Xml namespace contains classes that represent the XML elements. For example, there is an XmlDocument class, an XmlEntity class, and an XmlNode class. The XmlValidatingReader can be used to read XML and validate it against a DTD, XDR, or XSD schema. The System.Xml namespace also includes a reader and writer that provide fast, forward-only reading and writing of XML streams. Back in the System.Data discussion you learned about the DataReader class, used for fast forward-only data access. The XmlTextReader provides the same basic functionality against an XML stream. When a Reader is used to read XML, the Reader can determine each node type and act accordingly. The Writer has methods such as WriteCData, WriteDocType, and WriteNode in order to create an XML document. System.Xml namespace example This example combines several of the namespaces that were covered in this paper. First, you use System.Data to read data from SQL Server. Next, you use System.IO to output that data into an XML file. Finally, you reopen that XML file and use System.Xml to read the contents of that file and dump the data to the console. There are more possible values when reading in an XML document than shown in the case statement below. However, for brevity, the number of types examined is small. Imports System.Data.SqlClient Imports System.IO Imports System.Xml Module Module1 Sub Main() sFileName As String = "c:\test.xml" Dim xmlAuthor As String = authorDS.GetXml 'MsgBox(xmlAuthor) Dim srFile As StreamWriter = _ File.CreateText(sFileName) srFile.WriteLine(authorDS.GetXml) srFile.Close() cn.Close() Dim reader = New XmlTextReader(sFileName) reader.WhitespaceHandling = WhitespaceHandling.None 'Parse the file and display each of the nodes. While reader.Read() Select Case reader.NodeType Case XmlNodeType.Element Console.Write("<{0}>", reader.Name) Case XmlNodeType.Text Console.Write(reader.Value) Case XmlNodeType.XmlDeclaration Console.Write("<?xml version='1.0'?>") Case XmlNodeType.Document Case XmlNodeType.DocumentType Console.Write("<!DOCTYPE {0} [{1}]", _ reader.Name, reader.Value) Case XmlNodeType.EndElement Console.Write("</{0}>", reader.Name) End Select End While End Sub End Module Summary This paper covered some of the more common namespaces you will use when developing your .NET applications. There are many other namespaces, and you may find yourself often using a namespace that is not covered in this paper.Regardless of which namespaces you use, gaining a familiarity with the functionality of the .NET Framework and how to use it in your applications is important. The .NET Framework will open up a new era in application development. Because the .NET Framework can be ported to any platform, your applications can run, unchanged, on any platform. Unlike the Virtual Machine that Java uses, .NET applications are compiled to native code on each platform. You've no doubt read about new features in Visual Basic .NET, such as inheritance and multithreading. These new features are available to Visual Basic .NET, but they are provided by the .NET Framework, not the Visual Basic .NET language itself. By using the .NET Framework classes, .NET languages gain access to a modern, object-oriented platform designed for building distributed applications. About the Author Craig Utley is President of CIOBriefings LLC, a consulting and training company focused on Microsoft development. He has been working with and writing about .NET since it was first announced. Recently, Craig has been writing about .NET for Sams Publishing and Volant Training. About Informant Communications.
https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/ms973806(v%3Dmsdn.10)
2019-06-16T04:55:40
CC-MAIN-2019-26
1560627997731.69
[array(['images/ms973806.commnetlibs01%28en-us%2cmsdn.10%29.gif', None], dtype=object) array(['images/ms973806.commnetlibs02%28en-us%2cmsdn.10%29.gif', None], dtype=object) ]
docs.microsoft.com
Gantt Bar Text Settings: Configure the settings for the text that is displayed next to the task bar in the Gantt chart. - Show Gantt Bar text: Select this option to display text next to the task bar in the Gantt chart. - Display text from this column: Select the column that contains the text you want to display. - Font color: Enter in the Hex color number or select a color from the drop-down list. Gantt View Display Settings: Use these settings to configure the appearance of the Gantt chart. * Filter and show all tasks starting from this date: Enter the date that you want to use to filter the tasks in the Gantt view. For example, if you enter 6/1/2011, then all tasks with a start date prior to 6/1/2011 will not be displayed in the Gantt view. NOTE: This filtered date is applied to all selected list views. - Scale for Gantt View: Select the default time scale for the Gantt chart (Day, Week, Month, Quarter or Year). - Gantt View Start Date: Enter the Start Date for the date interval in the Gantt view, select the Today option to use today’s date, or click to select a date. - Show week number in Gantt view: Select this option to display the week numbers in the Gantt view. This option displays the week number in Week, Month and Quarter views only. - Display graphical bar as an absolute task duration in Quarter and Year view: Select this option so that the bar in the Gantt view represents the absolute task’s duration when using the Quarter or Year views. If you do not select this option, the bar represents the relative task’s duration.
https://docs.bamboosolutions.com/document/task_master_gantt_settings/
2019-06-16T05:51:06
CC-MAIN-2019-26
1560627997731.69
[array(['/wp-content/uploads/2017/06/hw45-2010-ganttviewdisplaysettings2.jpg', 'hw45-2010-ganttviewdisplaysettings2.jpg'], dtype=object) ]
docs.bamboosolutions.com
Contents IT Service Management Previous Topic Next Topic Define SLAs that apply to service offerings Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Define. Figure 1. SLA service commitment definition On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/jakarta-it-service-management/page/product/service-portfolio-management/task/t_DfnSLAsApplySvcOfr.html
2019-06-16T05:17:00
CC-MAIN-2019-26
1560627997731.69
[]
docs.servicenow.com
Breaking: #55759 - HTML in link titles not working anymore¶ See Issue #55759 Description¶ By introducing proper handling of double quotes in link titles (TypoLink fields) the processing of the link title is adjusted. Escaping will be done automatically now. Impact¶ Existing link titles, which contain HTML escape sequences, will not be shown correctly anymore in Frontend. Example: A link title Some "special" title will be output as Some &quot;special&quot; title Affected Installations¶ Any installation using links with titles containing HTML escape sequences like " or > Migration¶ Change the affected link titles to contain the plain characters, the correct encoding will be taken care of automatically. Example: Some "special" title If you need to encode a TypoLink manually in code, use the TypoLinkCodecService class, which provides a convenient way to encode a TypoLink from its fragments.
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.4/Breaking-55759-HTMLInLinkTitlesNotWorkingAnymore.html
2019-06-16T06:03:34
CC-MAIN-2019-26
1560627997731.69
[]
docs.typo3.org
Feature: #83334 - Add improved building of query strings¶ See Issue #83334 Description¶ The new method \TYPO3\CMS\Core\Utility\HttpUtility::buildQueryString() has been added as an enhancement to the PHP function http_build_query() to implode multidimensional parameter arrays, properly encode parameter names as well as values with an optional prepend of ? or & if the query string is not empty and skipping empty parameters.
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.5.x/Feature-83334-AddImprovedBuildQueryString.html
2019-06-16T05:53:48
CC-MAIN-2019-26
1560627997731.69
[]
docs.typo3.org
public class GroovyRowResult extends GroovyObjectSupport Represents an extent of objects. It's primarily used by methods of Groovy's Sql class to return ResultSet data in map form; allowing access to the result of a SQL query by the name of the column, or by the column number. Checks if the result contains (ignoring case) the given key. key- the property name to look for Find the property value for the given name (ignoring case). property- the name of the property to get Retrieve the value of the property by its index. A negative index will count backwards from the last column. index- is the number of the column to look at Retrieve the value of the property by its (case-insensitive) name. property- is the name of the property to look at Associates the specified value with the specified property name in this result. key- the property name for the result value- the property value for the result Copies all of the mappings from the specified map to this result. If the map contains different case versions of the same (case-insensitive) key only the last (according to the natural ordering of the supplied map) will remain after the putAll method has returned. t- the mappings to store in this result
http://docs.groovy-lang.org/docs/next/html/gapi/groovy/sql/GroovyRowResult.html
2019-06-16T05:41:47
CC-MAIN-2019-26
1560627997731.69
[]
docs.groovy-lang.org
Control Manager user accounts allow administrators to specify which products or directories other users can access.. Add user accounts to do the following: Allow administrators to specify which products or directories other users can access Allow other users to log on to the Control Manager web console Allow administrators to specify the user on the recipient list for notifications Allow the administrator to add the user to user groups. Trend Micro suggests configuring user role and user account settings in the following order: Specify which products/directories the user can access (step 4 of Editing a User Account). Specify which menu items the user can access. If the default user roles are not sufficient, see Adding a User Role or Editing a User Role. Specify the user role for the user account (step 4 of Editing a User Account). When adding a user account, you need to provide information to identify the user, assign a user role, and set folder access rights. Active Directory users cannot have their accounts disabled from Control Manager. To disable an Active Directory user, you must disable the account from the Active Directory server.
http://docs.trendmicro.com/en-us/enterprise/control-manager-60-service-pack-3/ch_ag_user_access_configure/understand_user_account/account_user_add_about.aspx
2019-06-16T05:41:35
CC-MAIN-2019-26
1560627997731.69
[]
docs.trendmicro.com
Overview REST Command Invocation service (RCI) is a standard Kaa platform service that exposes REST APIs for invoking commands on endpoints. Interfaces RCI supports a number of interfaces to perform its functional role. The key supported interfaces are summarized in the following diagram. For inter-service communication, Kaa services mainly use REST APIs and messaging protocols that run over NATS messaging system. Command Invocation Protocol (12/CIP) RCI implements Command Invocation Protocol (12/CIP) for forwarding commands to endpoints and consuming invocation results. It acts as Command invocation caller and listens to results from Command invocation agent. Command invocation flow The overall command invocation sequence diagram can be illustrated as follows:
https://docs.kaaiot.io/RCI/docs/current/Overview/
2019-06-16T05:39:48
CC-MAIN-2019-26
1560627997731.69
[]
docs.kaaiot.io
rxBTrees: Parallel External Memory Algorithm for Stochastic Gradient Boosted Decision Trees Description Fit stochastic gradient boosted decision trees on an .xdf file or data frame for small or large data using parallel external memory algorithm. Usage rxBTrees(formula, data, outFile = NULL, writeModelVars = FALSE, overwrite = FALSE, pweights = NULL, fweights = NULL, cost = NULL, minSplit = NULL, minBucket = NULL, maxDepth = 1, cp = 0, maxCompete = 0, maxSurrogate = 0, useSurrogate = 2, surrogateStyle = 0, nTree = 10, mTry = NULL, replace = FALSE, strata = NULL, sampRate = NULL, importance = FALSE, seed = sample.int(.Machine$integer.max, 1), lossFunction = "bernoulli", learningRate = 0.1, maxNumBins = NULL, maxUnorderedLevels = 32, removeMissings = FALSE, useSparseCube = rxGetOption("useSparseCube"), findSplitsInParallel = TRUE, scheduleOnce = FALSE,"), ... ) ## S3 method for class `rxBTrees': print (x, by.class = FALSE, ... ) ## S3 method for class `rxBTrees': plot (x, type = "l", lty = 1:5, lwd = 1, pch = NULL, col = 1:6, main = deparse(substitute(x)), by.class = FALSE, ... ) OOB predictions. If NULL or the input data is a data frame, then no OOB predictions are stored to disk. If rowSelection is specified and not NULL, then outFile cannot be the same as the datasince the resulting set of OOB predictions will generally not have the same number of rows as the original data source. writeModelVars logical value. If TRUE, and the output file is different from the input file, variables in the model will be written to the output file in addition to the OOB predictions. If variables from the input data set are transformed in the model, the transformed variables will also be written out.. maxDepth the maximum depth of any tree node. The computations take much longer at greater depth, so lowering maxDepth can greatly speed up computationTree a positive integer specifying the number of boosting iterations, which is generally the number of trees to grow except for multinomial loss function, where the number of trees to grow for each boosting iteration is equal to the number of levels of the categorical response. mTry a positive integer specifying the number of variables to sample as split candidates at each tree node. The default values are sqrt(num of vars) for classification and (num of vars)/3 for regression. replace a logical value specifying if the sampling of observations should be done with or without replacement. strata a character string specifying the (factor) variable to use for stratified sampling. sampRate (that is, replace=TRUE) and 0.632 for sampling without replacement (that is, replace=FALSE). - for stratified sampling: a vector of positive values of length equal to the number of strata specifying the percentages of observations to sample from the strata for each tree. importance a logical. lossFunction character string specifying the name of the loss function to use. The following options are currently supported: "gaussian"- regression: for numeric responses. "bernoulli"- regression: for 0-1 responses. "multinomial"- classification: for categorical responses with two or more levels. learningRate numeric scalar specifying the learning rate of the boosting procedure.. scheduleOnce EXPERIMENTAL. logical value. If TRUE, rxBTrees will be run with rxExec, which submits only one job to the scheduler and thus can speed up computation on small data sets particularly in the RxHadoopMR compute context. ".rxSetLowHigh" attribute must be set for transformed variables if they are to be used in formula.. blocksPerRead number of blocks to read for each chunk of data read from the data source. 2 provide increasing amounts of information are provided. computeContext a valid RxComputeContext. The RxHadoopMR compute context distributes the computation among the nodes specified by the compute context; for other compute contexts, the computation is distributed if possible on the local computer. and to rxExec when scheduleOnce is set to TRUE. x an object of class rxBTrees. type, lty, lwd, pch, col, main see plot.default and matplot for details. by.class (classification with multinomial loss function only) logical value. If TRUE, the out-of-bag error estimate will be broken down by classes. Details rxBTrees is a parallel external memory algorithm for stochastic gradient boosted decision trees targeted for very large data sets. It is based on the gradient boosting machine of Jerome Friedman and Trevor Hastie and Robert Tibshirani and modeled after the gbm package of Greg Ridgeway with contributions from others, using the tree-fitting algorithm introduced in rxDTree. In a decision forest, a number of decision trees are fit to bootstrap samples of the original data. Observations omitted from a given bootstrap sample are termed "out-of-bag" observations. For a given observation, the decision forest prediction is determined by the result of sending the observation through all the trees for which it is out-of-bag. For classification, the prediction is the class to which a majority assigned the observation, and for regression, the prediction is the mean of the predictions. For each tree, the out-of-bag observations are fed through the tree to estimate out-of-bag error estimates. The reported out-of-bag error estimates are cumulative (that is, the ith element represents the out-of-bag error estimate for all trees through the ith). Value an object of class "rxBTrees" inherited from class "rxDForest". It is a list with the following components, similar to those of class "rxDForest": ntree The number of trees. mtry The number of variables tried at each split. type One of "class" (for classification) or "anova" (for regression). forest a list containing the entire forest. oob.err a data frame containing the out-of-bag error estimate. For classification forests, this includes the OOB error estimate, which represents the proportion of times the predicted class is not equal to the true class, and the cumulative number of out-of-bag observations for the forest. For regression forests, this includes the OOB error estimate, which here represents the sum of squared residuals of the out-of-bag observations divided by the number of out-of-bag observations, the number of out-of-bag observations, the out-of-bag variance, and the "pseudo-R-Squared", which is 1 minus the quotient of the oob.err and oob.var. init.pred The initial prediction value(s). params The input parameters passed to the underlying code. formula The input formula. call The original call to rxBTrees. Note Like rxDTree, rxBTrees requires multiple passes over the data set and the maximum number of passes can be computed as follows for loss functions other than multinomial: quantile computation: 1pass for computing the quantiles for all continuous variables, recursive partition: maxDepth + 1passes per tree for building the tree on the entire dataset, leaf prediction estimation: 1pass per tree for estimating the optimal terminal node predictions, out-of-bag prediction: 1pass per tree for computing the out-of-bag error estimates. For multinomial loss function, the number of passes except for the quantile computation needs to be multiplied by the number of levels of the categorical response. rxBTrees uses random streams and RNGs in parallel computation for sampling. Different threads on different nodes will be using different random streams so that different but equivalent results might be obtained for different number of threads. Author(s) Microsoft Corporation Microsoft Technical Support. Greg Ridgeway with contributions from others, gbm: Generalized Boosted Regression Models (R package), See Also rxDForest, rxDForestUtils, rxPredict.rxDForest. Examples library(RevoScaleR) set.seed(1234) # multi-class classification iris.sub <- c(sample(1:50, 25), sample(51:100, 25), sample(101:150, 25)) iris.form <- Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width iris.btrees <- rxBTrees(iris.form, data = iris[iris.sub, ], nTree = 50, importance = TRUE, lossFunction = "multinomial", learningRate = 0.1) iris.btrees plot(iris.btrees, by.class = TRUE) rxVarImpPlot(iris.btrees) iris.pred <- rxPredict(iris.btrees, iris[-iris.sub, ], type = "class") table(iris.pred[["Species_Pred"]], iris[-iris.sub, "Species"]) # binary response require(rpart) kyphosis.nrow <- nrow(kyphosis) kyphosis.sub <- sample(kyphosis.nrow, kyphosis.nrow / 2) kyphosis.form <- Kyphosis ~ Age + Number + Start kyphosis.btrees <- rxBTrees(kyphosis.form, data = kyphosis[kyphosis.sub, ], maxDepth = 6, minSplit = 2, nTree = 50, lossFunction = "bernoulli", learningRate = 0.1) kyphosis.btrees plot(kyphosis.btrees) kyphosis.prob <- rxPredict(kyphosis.btrees, kyphosis[-kyphosis.sub, ], type = "response") table(kyphosis.prob[["Kyphosis_prob"]] > 0.5, kyphosis[-kyphosis.sub, "Kyphosis"]) # regression with .xdf file claims.xdf <- file.path(rxGetOption("sampleDataDir"), "claims.xdf") claims.form <- cost ~ age + car.age + type claims.btrees <- rxBTrees(claims.form, data = claims.xdf, maxDepth = 6, minSplit = 2, nTree = 50, lossFunction = "gaussian", learningRate = 0.1) claims.btrees plot(claims.btrees)
https://docs.microsoft.com/en-us/machine-learning-server/r-reference/revoscaler/rxbtrees
2019-06-16T05:54:25
CC-MAIN-2019-26
1560627997731.69
[]
docs.microsoft.com
This article covers: What is a secondary login? A secondary email address or phone number allows you to log into your account and also forward receipts to [email protected] from either email or phone number. This will need to be done from the web, and you will not be able to add a secondary email or phone number using the mobile app. If you merge your two existing Expensify accounts, the account that you merge into the first will become the primary login. The account that was merged will become the secondary login, though you make either the primary login. Adding a secondary login Note that you can't edit an existing email address or phone number, so in order to change your account's email address or phone number, you will will need to add a secondary email or phone number and set it as your primary login email or phone number. - On the website, click Settings > Your Account - Scroll down to the Secondary Logins section - Click Add Secondary Login 4. Enter an email or mobile phone number (including the international code) 5. Expensify will send an email or SMS to the address here!, below. If you are a member of a group that has Domain Control enabled, you can add your personal email or phone number as a secondary login. That will allow you to forward receipts from a personal email address to [email protected]. Merging Two Accounts To merge two accounts, please head to Expensify.com as this won't be an action you can take on the mobile app. - Once you've logged in go to Settings > Your Account - Scroll to the Merge Accounts section and fill in the fields This will merge the two accounts into one. All of the reports, imported cards, secondary logins and most settings will be brought into your new account. Be careful because merging accounts is not reversible! while Domain Control is active on both domains. Example: Domain Control is active for company.com and companyhq.com. Joe works for both companies and wants to merge his accounts. He cannot merge until the Domain Control of eithercompany.com or companyhq.com is removed. Once that is done, he can merge the non-Domain Control account into the other account successfully. - Email addresses under the same domain with Domain Control active can be merged after one Expensify account is made for each email address. Please reach out to your domain admin to make sure they invite both email addresses to the Domain Control members. Troubleshooting and FAQ Do I need to add a secondary login or do I need to merge my accounts? What's the difference? Secondary logins can only be added if the email address or phone number is not an existing Expensify account. If the account already exists, you must merge accounts, below. If you are trying to add a secondary login that already has an account existing in Expensify, you will see the message below. "Emails (or phone number) with existing accounts cannot be secondary logins" I'm trying to add a secondary login, but I'm not receiving the invitation to my email or phone! Why? Sometimes it can take just a short bit to come to your email or phone inbox. If you feel you've waited a while, you can reference our help article that addresses trouble receiving our emails or SMS. I can't add a secondary email because I get a message about restricted account creation. What does that mean? If you see the below error message, it means that your group admin has Domain Control enabled for that email address's domain. You will need to reach out to your group's Expensify admin in this case. Please note, accounts that are created using a phone number cannot be under Domain Control. "Wait! Not so fast..." I can't merge my accounts because I get a message about "domain managed emails". What does that mean? This message shows when trying to merge accounts where both are under Domain Control. A Domain Admin will need to delete Domain Control for one of the domains. Ideally, the one which will not pertain to the Primary Login of each user on the domain. "Cannot merge domain managed emails" We've changed domains! I'm a Domain Admin and I want to switch all of my users accounts from one domain to another. Can I do that, or do we need to have each user do this manually? You can reach out to us for help with this thankfully! However, note, we can only help facilitate this if: - The new email addresses are all active (i.e. can receive and send emails)? - The new emails not yet associated with an Expensify account - The new email addresses map 1:1 with the old addresses. Example: [email protected] to [email protected]? ([email protected] to [email protected] wouldn't work.) - Under Domain Control, only the former (the domain you are leaving) is validated. We'll create and validate the new one in the process. If this is the case, reach out and confirm the above is true. We'll confirm back and get the process going! If any of the above four things are not true, you'll need to rectify so that they are for some help here. Otherwise, as a Domain Admin of the new Domain you can: - Delete Domain Control for the previous domain by selecting the red trash can from Settings > Domain Control - Create new separate accounts for each user on the new domain - Have each validate their new account and create a password - Once logged in as "user@newdomain", have them navigate to Settings > Your Account and follow the instructions above for merging with "[email protected]".
https://docs.expensify.com/articles/4411-add-a-secondary-login-or-merge-accounts
2019-06-16T05:14:02
CC-MAIN-2019-26
1560627997731.69
[array(['https://downloads.intercomcdn.com/i/o/102841993/645a99e5690605a65c02569c/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/52728976/c3e9432616b2fa28e4384b8d/Expensify+-+Settings+2018-03-21+13-28-12.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/102842044/39868dd6b6fbe48e2d941c33/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64084555/0f2be42527dccfe562f15c61/Expensify_-_Settings.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/103037432/59d44d0b32848180d72e7d0c/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/57669928/5560ed515bdc26512f9a31e6/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/52729740/24ba381359e550543e22497a/Expensify+-+Settings+2018-03-21+13-33-04.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64080955/806863039e9553b55b952cb5/image.png', None], dtype=object) ]
docs.expensify.com
Collections:-de /de/: permalink: /de/{slug}/ template: index-de filter: tag:de This would set the base URL to be in the site's default language, and add an additional site.com/de/ section for all posts in German, tagged with de. The main collection excludes these same posts to avoid any overlap. Full tutorial for creating a multi-lang site with Ghost »
https://docs.ghost.org/api/handlebars-themes/routing/collections/
2019-06-16T05:00:00
CC-MAIN-2019-26
1560627997731.69
[]
docs.ghost.org
Studio Pro are used. The value for a constant can also, Decimal and Integer/Long. Value Properties Default value The default value of the constant. This value is used when running locally or in a sandbox. When running locally, the value can be overridden in the currently selected Configuration.
https://docs.mendix.com/refguide/constants
2019-06-16T04:44:37
CC-MAIN-2019-26
1560627997731.69
[]
docs.mendix.com
layout.type String (default: "tree") The type of the layout algorithm to use. Predefined values are: - "tree" - Organizes a diagram in a hierarchical way and is typically used in organizational representations. This type includes the radial tree layout, mindmapping and the classic tree diagrams. "force" - Force-directed layout algorithm (also known as the spring-embedder algorithm) is based on a physical simulation of forces acting on the nodes whereby the links define whether two nodes act upon each other. Each link effectively is like a spring embedded in the diagram. The simulation attempts to find a minimum energy state in such a way that the springs are in their base-state and thus do not pull or push any (linked) node. This force-directed layout is non-deterministic; each layout pass will result in an unpredictable (and hence not reproducible) layout. The optimal length is more and indication in the algorithm than a guarantee that all nodes will be at this distance. The result of the layout is really a combination of the incidence structure of the diagram, the initial topology (positions of the nodes) and the number of iterations. "layered" - Organizes the diagram.
https://docs.telerik.com/kendo-ui/api/javascript/dataviz/ui/diagram/configuration/layout.type
2019-06-16T04:30:39
CC-MAIN-2019-26
1560627997731.69
[]
docs.telerik.com
Styling is more important for all websites. Sometimes site owner may wants to organize their products in more stylish way than usual one. In that case, they will be looking to use page builder. SP Page Builder is a most popular page builder in Joomla. Now we came up with the integration plugin that would integrate SP Page Builder with J2Store Joomla eCommerce solution. So that you could display the products in the page layout you designed by using SP Page builder. With the help of this integration plugin, you will be able to display the J2Store products via SP Page Builder in your Joomla site. It lets you insert product shortcode in any custom widgets. Note: Our SP Page Builder add-on plugin is compatible with both SP Page Builder 2 and 3. Installation - Download our SP Page builder add-on from here and install it via Joomla installer. - After installing, go to Extensions > Plugins and make sure that the plugin named "J2Store - SPPageBuilder" has been enabled. If it is not enabled, please enable it. Use cases Are you trying to display your products through SP Page Builder? Yes, you could do this with either Product display module or with J2Store's product shortcode. Below are two use cases that would show you how the SP Page Builder is using Product display module and product shortcode. How to publish Product display module via SP Page Builder ? Follow the below step by step instructions to display the product display module through SP Page Builder. Before doing this, you should enable and configure the Product Display module properly at Extensions > Modules. Create a new page by going to SP Page Builder > Pages and click NEW. Give the title to your page and click on Add New Row. Now click on Add New Add-On From Addons list, choose Joomla Module. In the popup window displayed, enter Admin label and title and then move to the ADDON OPTIONS where choose Module to the type and choose mod_j2products to the Module. Click Apply. Screenshot of frontend How to use J2Store product shortcode via SP Page Builder ? Create a new page by going to SP Page Builder > Pages and click NEW. Give the title to your page and click on Add New Row. Now click on Add New Add-On In the left side menu panel of the popup window displayed, you would see J2Store. On clicking on J2Store, it would show you J2Store Add-On. Please see the screenshots below: Click on J2Store Addon. It would show you the general settings and style configuration in a popup screen. Give the title and scroll down you would see the dropdown option named "Product shortcode tags". Select the tag(s) and enter the Product ID (in the text box provided next to the Product short code tags) of which you want to display. For example: Select the tag Thumbnail image, Cart and then enter the product ID. Click Apply.
http://docs.j2store.org/integrations/sp-page-builder
2019-04-18T21:07:13
CC-MAIN-2019-18
1555578526807.5
[array(['https://downloads.intercomcdn.com/i/o/66004726/ce05116a2d39389d0f03c5c3/sp-add-new.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66004728/96f41beae51a7b5924e8570e/sp-add-newrow.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66004730/7cce1a5ca86d21105e222c61/sp-newaddon.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66004731/f06754d49516301237f916d4/sp-joomod.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66004736/8f2a209dbe60ef21f8cd1e8a/sp-prodmod.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66004740/1f24de89d76380b9747cdd00/sp-prodmodfront.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66004742/33c8c3615a201c56b998c9cb/sp-j2store.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66004744/841a78d38ade93165205ca6c/sp-j2addon.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66004746/cf2582d90be43581fdd23c3f/sp-j2addonsettings.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/66004749/c0f7ef1eaf7b5dca06b01c22/sp-j2shortcode.png', None], dtype=object) ]
docs.j2store.org
Publication Date: 2017-06-30 Approval Date: 2017-06-29 Posted Date: 2016-11-01 Reference number of this document: OGC 16-136 Reference URL for this document: Category: User Guide Editor: Guy Schumann Title: Testbed-12 2D Test Dataset Implementation with Documentation Copyright © 2017 Open Geospatial Consortium. To obtain additional rights of use, visit POINTS OF CONTACT 1. Introduction This technical report is for A032. 2. Feature Types in GML 3.2 and NAS Schema 7.2 The corrected and complete NAS V7.2 is posted at: Note that NAS V7.2 and the test sample data are not publicly available via the links, but were posted for internal Testbed use (outside readers will not have access to these artifacts as they are posted). The following three test samples produced for the Testbed-12 are focusing on the SF Bay area and have been taken from the HiFLD () database: Geopolitical Entity (see Figure 2) Roads NHD Hydrography Flowlines The test sample for the Geopolitical Entity of CA (out of the HiFLD Geopolitical Entity data set) includes a feature-level and an attribute-level copyright and security marking and is posted at: 3. Suggested: JSON implemention There has been a suggestion to have a NAS JSON version at some point and this has been tested in this Testbed but it is recommended to taKe further testing and implementation of a NAS WFS JSON to Testbed-13. The NAS samples built during this Testbed have about 40 schemas, similar to the Geo4NIEM situation where it was necessary to build a custom WFS on CarbonCloud with a custom loader for instance. This type of complexity makes WFS implementation much more time consuming, and some operations like Describefeaturetype may not even validate when done. Therefore, considering a JSON representation that is simple is suggested, also WFS REST is moving towards JSON and so is the broader community. For more information, please refer to: Specifically: Prototype WFS 2.5 (A035) - Jeff Harrison [email protected] This service focuses on deploying NGA NAS GML as JSON: NGA NAS GML as JSON output format for testing purposes only, for example to assess whether this is even feasible (it is) and what data elements must be simplified to achieve an initial deployment. 4. OSM Feature Type Datasets (Testbed-11) Purpose: For use in demo and RDF creation For the NEO ontology encoding and symbology styling (REASON? XXXX) and also because the NAS GML implementation is still rather complex to implement for a lot of different feature types and geometries, it was decided to use the many OpenStreetMap (OSM) feature type datasets which have been collected over the SF Bay area during Testbed-11. Transforming the OSM datasets to GML using GDAL, the resulting datasets ended up having a number of problems that make them unusable for exploitation for the testbed, in particular for styling. The following issues were identified: There was no XML Schema provided for the data, so it make it difficult for other parties to create schemas, code bindings, and validate the data. The tag keys and values are concatenated together using comma delimited formatting. This is less exploitable by filters and styling. This also requires a custom parser to be written. When concatenation is too long, the string is trimmed and followed by three dots (…) Looking at the datasets for OSM used in Testbed-11, these are more exploitable but there are a couple of issues that needed to be fixed: The XSD schemas are highly redundant. They all use the same complex type definition and just differ by element name. The tags element contains key/value pairs, which is fine for capturing all the tags but they do not provide any name semantic. Data may be out of date (minor issue) As a result, the following recommendations have been implemented: Refactor the XML schema used in Testbed-11, by putting the common complex type in one schema (OSMFeature.xsd). See example below. <?xml version="1.0" encoding="UTF-8"?> <!-- edited with XMLSpy v2016 rel. 2 sp1 () by Stephane Fellah (Image Matters LLC) --> <schema xmlns: <import namespace="" schemaLocation=""/> <element name="OSMFeature" type="fme:OSMFeatureType" substitutionGroup="gml:AbstractFeature"/> <complexType name="OSMFeatureType"> <complexContent> <extension base="gml:AbstractFeatureType"> <sequence> <element name="id" type="string" minOccurs="0"/> <element name="timestamp" type="string" minOccurs="0"/> <element name="user" type="string" minOccurs="0"/> <element name="created_by" type="string" minOccurs="0"/> <element name="visible" type="string" minOccurs="0"/> <element name="area" type="string" minOccurs="0"/> <element name="layer" type="string" minOccurs="0"/> <element name="uid" type="string" minOccurs="0"/> <element name="version" type="string" minOccurs="0"/> <element name="changeset" type="string" minOccurs="0"/> <element name="tag" minOccurs="0" maxOccurs="unbounded"> <complexType> <sequence> <element name="k" type="string" minOccurs="0"/> <element name="v" type="string" minOccurs="0"/> </sequence> </complexType> </element> <element name="nd" minOccurs="0" maxOccurs="unbounded"> <complexType> <sequence> <element name="ref" type="string" minOccurs="0"/> </sequence> </complexType> </element> <element name="member" minOccurs="0" maxOccurs="unbounded"> <complexType> <sequence> <element name="type" type="string" minOccurs="0"/> <element name="ref" type="string" minOccurs="0"/> <element name="role" type="string" minOccurs="0"/> </sequence> </complexType> </element> </sequence> </extension> </complexContent> </complexType> </schema> Create element for each feature type using the common complex type OSMFeatureType if no extension is needed, otherwise create a substitution for the extended OSMFeatureType. See below for "emergency" feature type. <?xml version="1.0" encoding="UTF-8"?> <schema xmlns: <include schemaLocation="OSMFeature.xsd"/> <import namespace="" schemaLocation=""/> <element name="emergency" type="fme:emergencyType" substitutionGroup="fme:OSMFeature"/> <complexType name="emergencyType"> <complexContent> <extension base="fme:OSMFeatureType"> <sequence> <element name="emergency" minOccurs="0" type="string"/> <element ref="gml:pointProperty" minOccurs="0"/> <element ref="gml:multiPointProperty" minOccurs="0"/> <element ref="gml:curveProperty" minOccurs="0"/> <element ref="gml:multiCurveProperty" minOccurs="0"/> </sequence> </extension> </complexContent> </complexType> </schema> Convert the OSM data within SF area using this base schema. Analyze the tag keys for each feature type and create a new schema for each feature type and add a feature property on the type corresponding to each tag name used in the feature instances. Reprocess the data with the new schema so it better aligns with the RDF mapping (see below). For the RDF mapping, the OSM datasets from Testbed-11 were used to apply the methodology described above. The TagInfo Service from OSM was used to extract tag information and value information (definition in different languages, depiction). For the railroad instances, the following was implemented: Parse the XML for railroads (from Testbed-11) Create RDF instances of Railroad for each Feature instance in the XML Add core properties to the feature instance (id, name, user, version, etc..) For each tag name, create a RDF property in an ontology (OSM namespace) with the tag value to the feature instance If the tag name is the same as the feature type (railway), create a SKOS Concept (except for value Yes) subclass of RailwayCategory and add a dct:type to the feature pointing to the concept. Enrich the ontology by access definitions in different languages and depiction from the Tag Info Service Enrich the schemes by access value definition from the Tag Info Service In addition, some improvements of the geometry encoding and fixes on the url encoding were made: If the feature has some point geometry the URI of the feature should be{id}. If the geometry is a line it should be{id}. More properties on the geometry (spatial dimension and wgs84:latitude, wgs84:longitude) and better classification using the GeoSPARQL SF ontology (Point and LineString) were also added. This should help the clients get the adequate representation of the geometry of a feature (for portrayal for example (point or polygon)). The results of the process produces three models: the data model (railway instances for example). the ontology model (definition of properties). The taxonomy of category Railway. Aligning the XML data to the model, provides a way to combine semantic and XML data and possibly opens the door to some powerful semantic search if the taxonomy can be improved by adding some hierarchy (narrower terms). Using this process, XML files using the OSM datasets of Testbed-11 were created () for the following 11 different feature types, some containing several geometries: Aeroway Building Emergency Highway Landuse Leisure Military Power Public Transport Railway Waterways Note: Both the building and highway XML files are very large, so the data include building_ROI and highway_ROI which include one sample set in case smaller file size is needed. These sets can be expanded if needed. Feedback on how the process can be improved is important at this point in order to provide data that better fits the needs of future testbeds. Again, note that the test sample data are not publicly available via the links, but were posted for internal Testbed use (outside readers will not have access to these artifacts as they are posted). 5. Other Data provided for Testbed-12 General requirements for this and future testbeds: different data sets over the same area (i.e. SF Bay) which can be meaningfully combined into new products. Also, there is a need for time series data, i.e. a stack of the same data source over time to tests and demonstrate spatial/timeseries analysis. There is a lot of raster data for SF Bay that was used during Testbed-11 (LiDAR, flood inundation simulations, etc). Most are in geotiff format but some (flooding) are available also as time-varying netcdf. To access the data via FTP: u:ip-data p: EEypDat47Wuuhoo 5.1. Data uploaded to the OGC ftp server There is a lot of variety including big OSM feature data & raster data from satellites, models and other sources. Description of the many data is as follows. Data (static & time series) Note that data are either in the latlon (WSG84, epsg 4326) projection or the NAD or WGS84 UTM Zone 10N for California. San_Francisco_Bay_GML_from_OSM.zip: GML (standard schema) OSM various feature data (0.5 GB) over the SF Bay area. Note that this is so people can use it in threads for now. The plan is that I (RSS) will upload some sample building XML file with the NAS 7.0 schema soon and hopefully someone on Testbed-12 can serve that via a WFS or the like. geo_phys_data folder (satellite soil moisture, satellite rainfall and observed and simulated flooding as well as LiDAR and NED elevation data) Flooding_MODIS (from NASA GSFC): MODIS_Jan2_2015_raster250m.tif: This is a flood probability map from imagery aggregated over some days. MODIS_Jan2_2015_UTM10N_WGS84.shp: This is the standard flood mapping shapefile from the NASA real time flood mapping and it shows the Jan 2015 flood event in the Bay area. Very nice to include in a real demo! Floodzone_sealevelrise: This is a USGS project where they assume 100 cm sea level rise and it shows the land that would be permanently under water. GPM_imerge: This is standard NASA GPM iMerge data showing time-series geotiff rainfall intensity (in mm I assume). I covered the Bay area but resolution is 10 km pixel so please let me know if you want bigger area. Please use these data for now to test if it does what you are hoping for in your thread. satellite_soilmoisture: This is standard ESA SMOS data showing time-series geotiff soil moisture (in volumetric soil moisture). I covered the Bay area but resolution is 25+ km pixel so please let me know if you want bigger area. Please use these data for now to test if it does what you are hoping for in your thread. LiDAR_2m_SFSU: This is a very nice top quality LiDAR DEM put together by the San Francisco State University. It covers the area used in Testbed-11 to run flood simulations (see below) NED_elevation: This is the standard National Elevation Dataset elevation from USGS. Very big files. I just grabbed this as they come from the NED server. It covers the entire Bay area (actually for the flooding in Testbed 11 we fused the NED with the LiDAR to allow faster runs. Here are all the USGS elevation data and the ftp info is easy to grab too in case people want more. It’s an excellent service Flood_simsANDsurgeH folder selfe_ssh_01g_storm.nc: Please consider this a “test”. I would be interested in seeing whether these type of 3-D surge model output time-series point data can be served. I can deliver a much bigger area and larger time-series if desirable. Let me know. These are tidal surge heights in meters relative to zero (mean sea level) provided in hourly time stepping. The file structure is as follows with ssh denoting sea surface height: 'lat' 1x1 struct 84 'single' 'lon' 1x1 struct 84 'single' 'hour' 1x1 struct 840 'single' 'ssh' 1x2 struct [84,840] 'single' obs5m_maxdepths: This is a 5m 2-D hydrodynamic simulation of the SF Bay surge flood event of 1996. It couples two models –SCHISM surge model and LISFLOOD-FP for inland flood simulation (units are m depth at max peak above sea level) obs5m folder (please use only .wd files!): The .wd files are ascii grid files at 5m resolution at the UTM zone10 North projection which can be easily transformed into geotiff. The files show inundation depth above sea level (bare ground) in meters. There are 100 files as hourly time series, so 100 hours from the start of the surge in winter of 1996 (start: file 0000.wd). This ascii grid is an ESRI ascii grid (header info included) and is the standard output of the LISFLOOD-FP flood model (). Note that these simulations are only for a smaller area in the Bay but I’m producing at the moment these type of flood depths all over the Bay area and will upload these as soon as possible. Because of size, the results will be in netcdf (nc) as time-series. Please email [email protected] if you have any question and thanks to OGC Testbed-11 and USGS, SFSU, NASA, ESA for most of these data. Time-varying flood simulations in NETCDF (SF_1in100.nc) Together with my colleagues at SSBN Ltd (same as Testbed-11), we have generated a netcdf (nc) file that contains time-varying flood depths together with a tif for area coverage - 30 m Landsat image - and mp4 movie for a feel. I think this is absolutely great data to test and I wish to stress that this is state-of-the-art flood modeling and at the moment we are still pretty unsure how to deal/distribute flood simulations in time-varying format to end-users etc, so a great opportunity for this testbed. Please note: this dataset should only be used as test data and only within Testbed-12 for now. If we need to have it available for future testbeds, this should be possible. File format/variable are as follows (netcdf4): Dimensions: rows = 4800 cols = 4800 timestep = 108 LatitudeExtent = 2 LongitudeExtent = 2 cellsize = 1 Variables: rows Size 4800x1 Dimensions rows Datatype double cols Size 4800x1 Dimensions cols Datatype double timestep Size 108x1 Dimensions timestep Datatype double LatitudeExtent Size 2x1 Dimensions LatitudeExtent Datatype double LongitudeExtent Size 2x1 Dimensions LongitudeExtent Datatype double cellsize Size 1x1 Dimensions cellsize Datatype double depth Size 4800x4800x108 Dimensions rows,cols,timestep Datatype double
http://docs.opengeospatial.org/guides/16-136.html
2019-04-18T20:20:26
CC-MAIN-2019-18
1555578526807.5
[]
docs.opengeospatial.org
SOURCE_TENANT Dimension In cases in which the source system is part of a multi-tenant system, the source_process and source_product values might not be unique across tenants. The SOURCE_TENANT dimension provides the ability to define the source of the task further. As an extended attribute, SOURCE_TENANT can be populated by the source system that is submitting the task. This page was last modified on August 19, 2014, at 09:04. Feedback Comment on this article:
https://docs.genesys.com/Documentation/IWD/8.5.0/Ref/IWDSOURCE_TENANTDimension
2019-04-18T20:57:25
CC-MAIN-2019-18
1555578526807.5
[]
docs.genesys.com
The String type and associated operations. type String = [Char] Source A String is a list of characters. String constants in Haskell are values of type String. class IsString a where Source Class for string-like datastructures; used by the overloaded string extension (-XOverloadedStrings in GHC). fromString :: String -> a Source. © The University of Glasgow and others Licensed under a BSD-style license (see top of the page).
https://docs.w3cub.com/haskell~7/libraries/base-4.8.2.0/data-string/
2019-04-18T20:34:03
CC-MAIN-2019-18
1555578526807.5
[]
docs.w3cub.com
most relevant for users migrating from Sencha GXT 2.x to Sencha GXT 3.x. Overall changes that will be encountered when upgrading to 3.x+ Sencha GXT 3.1: As in GXT 2, a single module can be inherited into your project to begin using the library: com.sencha.gxt.ui.GXT. Several other modules have been created as well to make it possible to customize exactly how widgets will be drawn in specific situations. By not inheriting the main GXT module, you are able to limit what code will be included in your compile, and will have more control over what browser permutations to construct - it is possible to build a permutation for IE6, 7, 8, 9 as well as different versions of Firefox, and Chrome versus Safari - 13 in total. By default, when inheriting the main GXT module, these are collapsed down to the default GWT permutations - IE6-7, IE8, IE9, Firefox, and Safari/Chrome. Many of the classes and features that were available in GXT 2 are present in 3 as well, though their names may have changed. Some are no longer possible or reasonable to keep in GXT itself or have been superseded by GWT features that we are now taking advantage of. Several of the main features that are no longer present are available now in a legacy jar, but are not expected to continue to receive improvements, and are offered only as a stopgap measure while switching to more modern GWT techniques. To facilitate migrating large projects, GXT 2 and 3 can be on the same classpath at the same time - the basic package has changed from com.extjs.gxt to com.sencha.gxt. This, in conjunction with containers designed to adapt from 2.x to 3.x containers and layouts, will allow projects to migrate gradually if necessary. The resources required to use GXT are all managed internally now - it is no longer necessary to keep a resources directory up to date. Instead, the GWT ClientBundle feature is used to manage images and stylesheets, making sure they are present as part of the compiled project. It is still necessary to link to a stylesheet from the main html page however - every compiled project will have a reset.css file, used to normalize differences between browsers. This should be linked to from your html page. There is no need to link to a gxt-all.css file, or any other css to load a specific theme, and other theming should be done within the project, not using external files. As GXT 3 requires GWT 2.4 or later, and GWT 2.4 requires that the html page have a strict doctype, GXT 3 requires a strict doctype as well. This allows us to make sure that browsers will render content more consistently. Either <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> or <!DOCTYPE html> may be used in the host html page as a strict doctype. Add the library to the GWT project xml module configuration: <inherits name='com.sencha.gxt.ui.GXT'/> The basic concept of Components is entirely unchanged - they extend GWT Widgets, and provide additional behavior and lifecycle details. Components no longer draw their content lazily - this allows GXT widgets to work more effectively in UiBinder, and are generally easier to us. To mitigate concerns about performance hits that may result from this, consider the GWT class LazyPanel - this allows a set of widgets to be kept from being displayed until they are needed. GWT is also starting to use the PotentialElement class - we will watch the development of this feature, and make use of it as it is finalized. An additional result of this change is that there are now very few component setters that can only be used before it is rendered, and none that can only be used post-render. The Component lifecycle no longer includes the ComponentPlugin initialization process: Most plugins cannot be added to multiple components anyway, and the difference between adding a plugin to a component and initializing a plugin with a component is negligible. GXT 3 events are now based off of the GwtEvent class, providing better consistency with existing GWT libraries and projects, and better type safety when adding handlers (formerly listeners) to objects. There is no longer an Observable interface, nor is there a BaseObservable class - EventBus instances can be used instead, and Has*Handler interfaces to declare where events can be fired from. /** Declare a strongly typed handler for this specific event */ public interface CustomEventHandler extends EventHandler { /** The method may be named and typed for this event */ public void onMyEvent(CustomEvent event); } /** Declare a new event object with whatever state it will convey */ public class CustomEvent extends GwtEvent<CustomEventHandler> { private GwtEvent.Type<CustomEventHandler> TYPE = new GwtEvent.Type<CustomEventHandler>(); public static GwtEvent.Type<CustomEventHandler> getType() { return TYPE; } @Override public GwtEvent.Type<CustomEventHandler> getAssocatedType() { return getType(); } @Override protected void dispatch(CustomEventHandler handler) { /* Call the specific handler method to inform listeners that the event is happening */ handler.onMyEvent(this); } } // Add a handler to an EventBus or HandlerManager for the specific event type eventBus.addHandler(CustomEvent.getType(), new CustomEventHandler() { @Override public void onMyEvent(CustomEvent event) { // React to the event } }); // The event can then be fired, and the handler will recieve it eventBus.fireEvent(new CustomEvent()); XTemplates are now generated at compile-time, and create a SafeHtml instance instead of returning a String or being applied to an Element. This makes them usable anywhere SafeHtml can be used, especially in Cells in a Grid or other data widget. They are also able to act on any object with accessor methods for properties instead of first needing to translate objects into JavaScriptObject instances. The old runtime Template and XTemplate classes are available in the legacy jar if runtime templates are required. Check out our XTemplates Redesign blog post for more details. XTemplates are more heavily used in Component internals for rendering, typically by the component's appearance. This modification typically results in more efficient rendering, as well as easier to read and modify structure of any component by modifying the appearance and its template directly, rather than using a subclass to remove or rearrange dom elements, a more costly operation. Many components are backed by a Cell implementation, allowing them to be rendered in any data widget that accepts a cell, including Grid, Tree, TreeGrid, and ListView. These cells in turn delegate to an appearance implementation, either by being given an instance in their constructor, or by relying on rebind rules typically defined in theme modules. These appearances then are responsible for drawing content using XTemplates and ClientBundles, and associating events that have occurred on elements with the behavior that this should initiate in the Cell or Component. Ext GWT 2 relied on the El class as a way or wrapping dom elements to perform operations on them efficiently, using the flyweight pattern to avoid frequent object construction while still adding functionality. In GXT 3 the XElement class has been introduced, extending from the JavaScriptObject class Element In GXT 2.x, all Container subclasses could be assigned a Layout instance, either internally or externally. Most then also supported the ability to add widgets or components with layout data associated to them. Little could be done to require that a particular widget have the right kind of data or any data at all. To help make this easier to write and understand, and to facilitate UiBinder declared layouts, there is no longer a Layout class, but containers declare how they will draw their children, and provide strongly typed add methods where possible. Some layouts are no longer required, such as the FitLayout -- any case where this might be used now typically already knows how to size a single child to itself, such as ContentPanel, Window, or Dialog; or the FormLayout -- with the creation of the FieldLabel, able to accept any widget as its contents, the FormLayout doesn't provide any meaningful value. Other layouts have had their name changed - RowLayout has become HorizontalLayoutContainer or VerticalLayoutContainer, depending on the direction to use, whereas ColumnLayout has become CssFloatLayoutContainer. Layouts in 2.x and 3.1 In GXT 2.x, several model object interfaces and base classes were provided, such as ModelData, TreeModel, BaseModelData. These types allowed for a sort of psuedo-reflection by making each model responsible for tracking the mapping from a String key to the property it referred to. This could make it difficult to use existing POJO beans and to write code in a type-safe way. When AutoBeans and RequestFactory were released, it was difficult to effectively use them. GXT 3 supports any bean-like object, with public accessors (getters and setters) through its XTemplates and through PropertyAccess, a mechanism to generate ValueProvider instances. ValueProvider is a way to deal with the problems of changing and reading values from outside the models in a generic way: ValueProvider<Person, String> firstName, lastName; Person p; String f = firstName.getValue(p); String l = lastName.getValue(p); By keeping access to these properties external, the models aren't required to extend from any base class or use any interface. All data widgets and stores use ValueProvider to read and write values, and XTemplates make use of them as well. There are times when values need to be read from the models and used as keys in a store, or as labels in the ui -- read-only interfaces have been declared for these purposes, LabelProvider and ModelKeyProvider. PropertyAccess is able to generate these as well. Multiple providers can be created for a given path [d]but with different names using the @Path annotation. The @Path annotation can also be used to refer to nested properties as well -- where in GXT 2 one could say model.get("owner.firstName"), the annotation would be written @Path("owner.firstName"). As part of these changes to support arbitrary model objects, we've also gone through and tightened up our use of generics to be as specific and consistent as possible throughout the library. In some places this has led to somewhat complex generics, but with the benefit of assuring that if the generic instances all are set up correctly, no class cast exceptions should be possible, and there should never be confusion[e] about how models are created and passed around. Guide to generics arguments in GXT 3 With the introduction of the strongly typed GWT Editor framework, the need for a GXT specific framework has someone been reduced. The Bindings and FieldBindings classes are still available in the legacy jar, but Field and other form classes now implement the various Editor interfaces instead of expecting a FormBinding object to wrap it and give it a string to bind to. Fields still support handling and reporting local errors, but now as part of the GWT Editor framework they can display errors reported outside of the field itself, and can also pass errors up to the editor driver. The Converter type, used in GXT 2 to convert values within the FieldBinding before setting them on the field and after reading them out of the field into the model, is still present and has been made into an interface with generics to ensure that it matches the expected type. It can be used with the Editor framework through the used of a ConverterEditorAdapter. This requires several generic arguments to ensure that the driver gets all of the details right - the actual type of the property, the type the field is intended to edit, and the actual field type to be used. The Store, ListStore, and TreeStore classes have been cleaned up and updated. They now support any object at all, provided there is a way to obtain a key for each model. ModelKeyProvider instances are mandatory, a departure from 2.x, but this change allows the stores to have one less possible code path at runtime. This, along with careful rewriting and review of these classes, should make them significantly better performing, especially in cases where the contents are being changed while already filtered and or sorted. A few methods have been renamed in ListStore to be more consistent with java.util.List - clear(), addAll(Collection), get(int), and several more have been changed to make the stores more flexible when manipulating the current filter and sort options. Records have been changed slightly in light of support for any POJO bean and any implementation of ValueProvider. They can be completely ignored by invoking store.setAutoCommit(true) so that changes are made right away to the bean, or if autoCommit is false, then changes will be queued up in the Record object until they are committed or rejected. This is a break from 2.x where changes were made right away to the underlying model, but the original value was persisted. There are several reasons why this is not the case: TreeStore has been modified similarly to Store and ListStore to accept any object at all, and to expose the tree structure through methods instead of as a property of the models themselves. In addition to better performance characteristics and usable generics, this means that objects are not expected to describe their own structure, though they are permitted to by implementing TreeNode Data Widgets in GXT 3 operate mostly the same as in 2.x - they are provided a Store from which to obtain data (and in some cases to persist changes), and have flexible options for displaying that data. The chief changes are around the use of ValueProviders to read content, Cells to display it, and the appearance pattern to modify how the data widgets generally collect their data. Each data widget can be give a Cell instance to delegate rendering to, and in some cases such as the Grid and TreeGrid, multiple Cells can be used. Typically each one maps to a single property of the model currently being rendered, or the model itself, via a ValueProvider. In cases where the model itself is being edited, an IdentityValueProvider may be used, which always returns the model itself, to be passed along to the Cell for rendering. In Ext GWT 2, Grid cells were rendered by GridCellRenderer instances, either returning a String or a Widget instance. If a Widget was used, one instance would be created for each row, which can become very expensive for many rows. Cells deal with this by using a single instance to draw all rows, and handling all logic with enough context to know exactly where and how each particular user interaction occurred. This results in efficient, responsive displays, even with large amounts of data - check out our fully loaded Cell Grid sample for an example of this. Most CellGridRenderers that return HTML are easy to translate into Cells, especially with the help of XTemplates. Renderers in 2.x that would return stock Ext GWT components now can be replaced by matching Cells, as most GXT 3 components have a Cell that is actually used to do their rendering and handle their events. And writing new, custom Cells is fairly straightforward, and will result in better performing applications than drawing widgets for each row.
http://docs-devel.sencha.com/gxt/4.x/guides/getting_started/migration/Migration2to3.html
2017-11-18T00:53:37
CC-MAIN-2017-47
1510934804125.49
[]
docs-devel.sencha.com
Tutorial: Create a Simple Pipeline . Important For this tutorial, in addition to completing the steps in Getting Started with AWS CodePipeline, you should create an Amazon EC2 instance key pair you will use to connect to the Amazon EC2 instances after they are launched. To create an Amazon EC2 instance key pair, follow the instructions in Creating Your Key Pair Using Amazon EC2. Not what you're looking for? To create a simple pipeline using an AWS CodeCommit branch as a code repository, see Tutorial: Create a Simple Pipeline (AWS CodeCommit Repository). Before you begin this walkthrough, you should complete the prerequisites in Getting Started with AWS CodePipeline. Step 2: Create AWS CodeDeploy Resources to Deploy the Sample Application. If you want to use a GitHub repository for your source instead of an Amazon S3 bucket, copy the sample applications to that repository, and skip ahead to Step 2: Create AWS CodeDeploy Resources to Deploy the Sample Application. Next. Choose Versioning, select Enable versioning, and then choose Save. When versioning is enabled, Amazon S3 saves every version of every object in the bucket. Choose Next, and modify permissions for the bucket as needed. Choose Next, and then choose Create. If you want to deploy to Amazon Linux instances using AWS CodeDeploy: If you want to deploy to Windows Server instances using AWS CodeDeploy: Choose the dist folder. Choose the file name. If you want to deploy to Amazon Linux instances: aws-codepipeline-s3-aws-codedeploy_linux.zip If you want to deploy to Windows Server instances: Sample deployment (Ohio), choose that region in the region selector. For more information about the regions and endpoints available for AWS CodePipeline, see Regions and Endpoints. If you see the Applications page instead of the Welcome page, in the More info section, choose Sample deployment wizard. On the Get started with AWS CodeDeploy page, choose Sample deployment, and then choose Next. On the Choose a deployment type page, choose In-place deployment, and then choose Next. On the Configure instances page, do the following: Choose the operating system and Amazon EC2 instance key pair you want to use. Your choice of operating system should match the sample application you downloaded from GitHub. Important If you downloaded aws-codepipeline-s3-aws-codedeploy_linux.zip, choose Amazon Linux. If you downloaded AWSCodePipeline-S3-AWSCodeDeploy_Windows.zip, choose Windows Server. Step 3: Configure instances in the AWS CodeDeploy User Guide. After your instances have been created, choose Next. On the Name your application page, in Application name, type CodePipelineDemoApplication, and then choose Next. On the Select a revision page, choose Next. On the Create a deployment group page, in Deployment group name, type CodePipelineDemoFleet, and then choose Next. Note The instances created for you in Step 3: Configure instances are listed in the Add instances area. On the Select a service role page, from the Service role drop-down box, choose Use an existing service role. In the Role name list, choose the service role you want to use, and then choose Next. Note If no service roles appear in Role name, or if you do not have a service role you want to use, choose Create a service role or follow the steps in Create a Service Role. The service role you create or use for AWS CodeDeploy is different from the service role you will create for AWS CodePipeline. On the Choose a deployment configuration page, choose Next. On the Review deployment details page, choose Deploy.. Note If you choose another name for your pipeline, be sure to use that name instead of MyFirstPipelinefor the rest of this walkthrough. After you create a pipeline, you cannot change its name. Pipeline names are subject to some limitations. For more information, see Limits in AWS CodePipeline. In Step 2: Source, in Source provider, choose Amazon S3. In Amazon S3 location, type the name of the Amazon S3 bucket you created in Step 1: Create an Amazon S3 Bucket for Your Application and the sample file you copied to that bucket, either aws-codepipeline-s3-aws-codedeploy_linux.zipor:Copy:Copy, Tutorial: Create a Four-Stage Pipeline. In Step 4: Deploy, in Deployment provider, choose AWS CodeDeploy. In Application name, type CodePipelineDemoApplication, or choose the Refresh button, and then choose the application name from the list. In Deployment group, type CodePipelineDemoFleet, Staging Staging AWS CodePipeline Concepts. Step 4: Add Another Stage to Your Pipeline Now add another stage in the pipeline to deploy from staging deployment group. On the Create deployment group page, in Deployment group name, type a name for the second deployment group, such as CodePipelineProductionFleet. Choose In-place deployment. In Key, enter Name, but in Value, choose CodePipelineDemofrom the list. Leave the default configuration for Deployment configuration. In Service role ARN, choose the same AWS CodeDeploy service role you used for the initial deployment (not the AWS CodePipeline service role), Staging stage. In the name field for the new stage, type Production, and then choose + Action: In the Action category drop-down list, choose Deploy. In Action name, type . Deploy-Second-Deployment In Deployment provider, choose AWS CodeDeploy from the drop-down list. In the AWS CodeDeploy section, in Application name, choose CodePipelineDemoApplicationfrom the drop-down list, as you did when you created the pipeline. In Deployment group, choose the deployment group you just created, CodePipelineProductionFleet. In the Input artifacts section, type MyAppin Input artifacts #1, and then choose Add action. Note (Staging) has already deployed the application to the same Amazon EC2 instances, this action will fail., and then choose Release when prompted. This will run. In the Action execution failed dialog box, choose Link to execution on your local Linux, macOS, or Unix machine, or a command prompt on your local Windows machine, and run the get-pipeline command to display the structure of the pipeline you just created. For MyFirstPipeline, you would type the following command:Copy aws codepipeline get-pipeline --name " MyFirstPipeline" This command returns the structure of MyFirstPipeline. The first part of the output should look similar to the following:Copy { "pipeline": { "roleArn": "arn:aws:iam::80398EXAMPLE:role/AWS-CodePipeline-Service", "stages": [ ... The final part of the output includes the pipeline metadata and should look similar to the following:Copy ... ], :Copy aws codepipeline get-pipeline --name MyFirstPipeline> pipeline.json Copy the Staging stage section and paste it after the first two stages. Because it is a deploy stage, just like the Staging stage, you will use it as a template for the third stage. Change the name of the stage and the deployment group details, and then save the file. The following example shows the JSON you will add to the pipeline.json file after the Staging stage. Edit the emphasized elements with new values. Remember to include a comma to separate the Staging and Production stage definitions.Copy , { : Important Be sure to include the file name. It is required in this command.Copy, Staging,, Tutorial: Create a Four-Stage Pipeline. AWS CodePipeline Concepts.
http://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-s3.html
2017-11-18T01:22:27
CC-MAIN-2017-47
1510934804125.49
[]
docs.aws.amazon.com
The closed source versions of Chronograf (versions 0.4, 0.10 through 0.13, and 1.0) are deprecated and are no longer supported or developed. Users can continue to use the deprecated product; the documentation is available under the Deprecated header in the sidebar. For more information, please read the original announcement blog by our CTO, Paul Dix. We highly recommend moving to the open source Chronograf product. It’s pretty great, we promise! Check out the Getting Started guide to get up and running!
http://docs.influxdata.com/chronograf/deprecated/
2017-11-18T00:47:57
CC-MAIN-2017-47
1510934804125.49
[]
docs.influxdata.com
Configuration¶ stacker makes use of a YAML formatted config file to define the different CloudFormation stacks that make up a given environment. The configuration file has a loose definition, with only a few top-level keywords. Other than those keywords, you can define your own top-level keys to make use of other YAML features like anchors & references to avoid duplicating config. (See YAML anchors & references for details) Top Level Keywords¶ Namespace¶ You can provide a namespace to create all stacks within. The namespace will be used as a prefix for the name of any stack that stacker creates, and makes it unnecessary to specify the fully qualified name of the stack in output lookups. In addition, this value will be used to create an S3 bucket that stacker will use to upload and store all CloudFormation templates. In general, this is paired with the concept of Environments to create a namespace per environment: namespace: ${namespace} Namespace Delimiter¶ By default, stacker will use ‘-‘ as a delimiter between your namespace and the declared stack name to build the actual CloudFormation stack name that gets created. Since child resources of your stacks will, by default, use a portion of your stack name in the auto-generated resource names, the first characters of your fully-qualified stack name potentially convey valuable information to someone glancing at resource names. If you prefer to not use a delimiter, you can pass the namespace_delimiter top level key word in the config as an empty string. See the CloudFormation API Reference for allowed stack name characters S3 Bucket¶ Stacker, by default, pushes your CloudFormation templates into an S3 bucket and points CloudFormation at the template in that bucket when launching or updating your stacks. By default it uses a bucket named stacker-${namespace}, where the namespace is the namespace provided the config. If you want to change this, provide the stacker_bucket top level key word in the config. The bucket will be created in the same region that the stacks will be launched in. If you want to change this, or if you already have an existing bucket in a different region, you can set the stacker_bucket_region to the region where you want to create the bucket. - S3 Bucket location prior to 1.0.4: - There was a “bug” early on in stacker that created the s3 bucket in us-east-1, no matter what you specified as your –region. An issue came up leading us to believe this shouldn’t be the expected behavior, so we fixed the behavior. If you executed a stacker build prior to V 1.0.4, your bucket for templates would already exist in us-east-1, requiring you to specify the stacker_bucket_region top level keyword. Note Deprecation of fallback to legacy template bucket. We will first try the region you defined using the top level keyword under stacker_bucket_region, or what was specified in the –region flag. If that fails, we fallback to the us-east-1 region. The fallback to us-east-1 will be removed in a future release resulting in the following botocore excpetion to be thrown: TemplateURL must reference a valid S3 object to which you have access. To avoid this issue, specify the stacker_bucket_region top level keyword as described above. You can specify this keyword now to remove the deprecation warning. If you want stacker to upload templates directly to CloudFormation, instead of first uploading to S3, you can set stacker_bucket to an empty string. However, note that template size is greatly limited when uploading directly. See the CloudFormation Limits Reference. Module Paths¶ When setting the classpath for blueprints/hooks, it is sometimes desirable to load modules from outside the default sys.path (e.g., to include modules inside the same repo as config files). Adding a path (e.g. ./) to the sys_path top level key word will allow modules from that path location to be used. Service Role¶ By default stacker doesn’t specify a service role when executing changes to CloudFormation stacks. If you would prefer that it do so, you can set service_role to be the ARN of the service that stacker should use when executing CloudFormation changes. This is the equivalent of setting RoleARN on a call to the following CloudFormation api calls: CreateStack, UpdateStack, CreateChangeSet. See the AWS documentation for AWS CloudFormation Service Roles. Remote Packages¶ The package_sources top level keyword can be used to define remote git sources for blueprints (e.g., retrieving stacker_blueprints on github at tag v1.0.2). The only required key for a git repository config is uri, but branch, tag, & commit can also be specified: package_sources: git: - uri: [email protected]:acmecorp/stacker_blueprints.git - uri: [email protected]:remind101/stacker_blueprints.git tag: 1.0.0 paths: - stacker_blueprints - uri: [email protected]:contoso/webapp.git branch: staging - uri: [email protected]:contoso/foo.git commit: 12345678 Use the paths option when subdirectories of the repo should be added to Stacker’s sys.path. If no specific commit or tag is specified for a repo, the remote repository will be checked for newer commits on every execution of Stacker. Cloned repositories will be cached between builds; the cache location defaults to ~/.stacker but can be manually specified via the stacker_cache_dir top level keyword. Remote Configs¶ Configuration yamls from remote configs can also be used by specifying a list of configs in the repo to use: package_sources: git: - uri: [email protected]:acmecorp/stacker_blueprints.git configs: - vpc.yaml In this example, the configuration in vpc.yaml will be merged into the running current configuration, with the current configuration’s values taking priority over the values in vpc.yaml. Dictionary Stack Names & Hook Paths¶ To allow remote configs to be selectively overriden, stack names & hook paths can optionally be defined as dictionaries, e.g.: pre_build: my_route53_hook: path: stacker.hooks.route53.create_domain: required: true args: domain: mydomain.com stacks: vpc-example: class_path: stacker_blueprints.vpc.VPC locked: false enabled: true bastion-example: class_path: stacker_blueprints.bastion.Bastion locked: false enabled: true Pre & Post Hooks¶ Many actions allow for pre & post hooks. These are python methods that are executed before, and after the action is taken for the entire config. Only the following actions allow pre/post hooks: - build (keywords: pre_build, post_build) - destroy (keywords: pre_destroy, post_destroy) There are a few reasons to use these, though the most common is if you want better control over the naming of a resource than what CloudFormation allows. The keyword is a list of dictionaries with the following keys: - path: - the python import path to the hook - data_key: - If set, and the hook returns data (a dictionary), the results will be stored in the context.hook_data with the data_key as it’s key. - required: - whether to stop execution if the hook fails - args: - a dictionary of arguments to pass to the hook An example using the create_domain hook for creating a route53 domain before the build action: pre_build: - path: stacker.hooks.route53.create_domain required: true args: domain: mydomain.com Mappings¶ Mappings are dictionaries that are provided as Mappings to each CloudFormation stack that stacker produces. These can be useful for providing things like different AMIs for different instance types in different regions: These can be used in each blueprint/stack as usual. Lookups¶ Lookups allow you to create custom methods which take a value and are resolved at build time. The resolved values are passed to the Blueprints before it is rendered. For more information, see the Lookups documentation. stacker provides some common lookups, but it is sometimes useful to have your own custom lookup that doesn’t get shipped with stacker. You can register your own lookups by defining a lookups key: lookups: custom: path.to.lookup.handler The key name for the lookup will be used as the type name when registering the lookup. The value should be the path to a valid lookup handler. You can then use these within your config: conf_value: ${custom some-input-here} Stacks¶ This is the core part of the config - this is where you define each of the stacks that will be deployed in the environment. The top level keyword stacks is populated with a list of dictionaries, each representing a single stack to be built. A stack has the following keys: - name: - The base name for the stack (note: the namespace from the environment will be prepended to this) - class_path: - The python class path to the Blueprint to be used. - description: - A short description to apply to the stack. This overwrites any description provided in the Blueprint. See: - variables: - A dictionary of Variables to pass into the Blueprint when rendering the CloudFormation template. Variables can be any valid YAML data structure. - locked: - (optional) If set to true, the stack is locked and will not be updated unless the stack is passed to stacker via the –force flag. This is useful for risky stacks that you don’t want to take the risk of allowing CloudFormation to update, but still want to make sure get launched when the environment is first created. - enabled: - (optional) If set to false, the stack is disabled, and will not be built or updated. This can allow you to disable stacks in different environments. - protected: - (optional) When running an update in non-interactive mode, if a stack has protected set to true and would get changed, stacker will switch to interactive mode for that stack, allowing you to approve/skip the change. - requires: - (optional) a list of other stacks this stack requires. This is for explicit dependencies - you do not need to set this if you refer to another stack in a Parameter, so this is rarely necessary. - (optional) a dictionary of CloudFormation tags to apply to this stack. This will be combined with the global tags, but these tags will take precendence. Here’s an example from stacker_blueprints, used to create a VPC: stacks: - name: vpc-example class_path: stacker_blueprints.vpc.VPC locked: false enabled: true Variables¶ Variables are values that will be passed into a Blueprint before it is rendered. Variables can be any valid YAML data structure and can leverage Lookups to expand values at build time. The following concepts make working with variables within large templates easier: YAML anchors & references¶ (and hopefully not typo’ing any of them) you could do the following: domain_name: mydomain.com &domain Now you have an anchor called domain that you can use in place of any value in the config to provide the value mydomain.com. You use the anchor with a reference: stacks: - name: vpc class_path: stacker_blueprints.vpc.VPC variables: DomainName: *domain Even more powerful is the ability to anchor entire dictionaries, and then reference them in another dictionary, effectively providing it with default values. For example: common_variables: &common_variables DomainName: mydomain.com InstanceType: m3.medium AMI: ami-12345abc Now, rather than having to provide each of those variables to every stack that could use them, you can just do this instead: stacks: - name: vpc class_path: stacker_blueprints.vpc.VPC variables: << : *common_variables InstanceType: c4.xlarge # override the InstanceType in this stack Using Outputs as Variables¶ Since stacker encourages the breaking up of your CloudFormation stacks into entirely separate stacks, sometimes you’ll need to pass values from one stack to another. The way this is handled in stacker is by having one stack provide Outputs for all the values that another stack may need, and then using those as the inputs for another stack’s Variables. stacker makes this easier for you by providing a syntax for Variables that will cause stacker to automatically look up the values of Outputs from another stack in its config. To do so, use the following format for the Variable on the target stack: MyParameter: ${output OtherStack::OutputName} Since referencing Outputs from stacks is the most common use case, output is the default lookup type. For more information see Lookups. This example is taken from stacker_blueprints example config - when building: stacker_blueprints.vpc.VPC variables: DomainName: *domain - name: webservers class_path: stacker_blueprints.asg.AutoscalingGroup variables: DomainName: *domain VpcId: ${output vpc::VpcId} # gets the VpcId Output from the vpc stack Note: Doing this creates an implicit dependency from the webservers stack to the vpc stack, which will cause stacker to submit the vpc stack, and then wait until it is complete until it submits the webservers stack. Environments¶. Environments allow you to use your existing stacker config, but provide different values based on the environment file chosen on the command line. For more information, see the Environments documentation. Translators¶ Translators allow you to create custom methods which take a value, then modify it before passing it on to the stack. Currently this is used to allow you to pass a KMS encrypted string as a Parameter, then have KMS decrypt it before submitting it to CloudFormation. For more information, see the Translators documentation.
http://stacker.readthedocs.io/en/stable/config.html
2017-11-18T00:55:49
CC-MAIN-2017-47
1510934804125.49
[]
stacker.readthedocs.io
Create a new page and embed the following shortcode: [affiliates_affiliate_profile /] This shortcode allows the use of the following attributes: show_name "true": show first and last name (default) "": don’t show them edit_name "true": lets the user edit their first and last name "": don’t allow affiliate to edit (default) show_email Similar to show_namebut for the affiliate’s email. edit_email Similar to edit_namebut for the affiliate’s email. show_attributes "attr1,attr2,...": a comma-separated list of ids. Shows these attributes to the affiliate. Among supported ids are: paypal_email, referral_amount, referral_rate, coupons edit_attributes "attr1,attr2,...": a comma-separated list of ids Attributes that the affiliate can edit. For example you could allow the affiliate to edit the paypal_email. Normally other attributes should not need any editing capability granted to the affiliates, especially not commission amounts or rates.
http://docs.itthinx.com/document/affiliates-enterprise/shortcodes/advanced-shortcodes/affiliates_affiliate_profile/
2017-11-18T00:42:06
CC-MAIN-2017-47
1510934804125.49
[]
docs.itthinx.com
Certain. The current release of JBoss Cache, 3.2.3, is considered stable enough. No new features will be added to JBoss Cache or POJO Cache, although critical fixes will still be addressed. Consider both JBoss Cache and POJO Cache to be in maintenance mode. So, why not just call it JBoss Cache 4?. Several reasons. Binary and even API compatibility with older JBoss Cache releases is no longer maintained, and the new API is simpler and easier to use. Also, the internal data structure used is not compatible with JBoss Cache.
https://docs.jboss.org/author/display/ISPN/How+is+Infinispan+related+to+JBoss+Cache%3F
2017-11-18T01:17:24
CC-MAIN-2017-47
1510934804125.49
[]
docs.jboss.org
Common Library Permissions The Common Library is a public library that is accessible by everyone with an Appspace account. By default, the Common Library is set to have Read+Write permissions for everyone, on both cloud and on-premises installations. However, administrators and account owners can choose to change these settings. To change the Common Library permissions, follow the instructions below: Go to Admin > Users in the Appspace menu. Select All users in your network, and click the Edit button. In the Permissions page, select the access type for the Common Library. Click Save.
https://docs.appspace.com/appspace/6.1/appspace/library/common-library/
2019-07-16T05:08:07
CC-MAIN-2019-30
1563195524502.23
[array(['../../../_images/006.png', None], dtype=object)]
docs.appspace.com
- Configuration - Notifications - Other - Activate log tracker for a user - Add SSL certificate - Add Monitor Task - Add Monitor Timer Task - Add New Item - ext - Add New Item - Vhost - Change user inter-domain communication permission - Connections Time - DNS Query - Default room config - Delete Monitor Task - Fix User’s Roster - Fix User’s Roster on Tigase Cluster - Get User Roster - Get any file - Get Configuration File - Get init.properties File - Load Errors - New command script - Monitor - New command script - MUC - OAUth credentials - Pre-Bind BOSH user session - Reload component repository - Scripts - Statistics - Users This is an advanced window for settings and management for the XMPP server.. Here you can add SSL certificates from PEM files to specific virtual hosts. Although Tigase can generate its own self-signed certificates, this will override those default certificates. You can write scripts for Groovy or ECMAScript to add to monitor tasks here. This only adds the script to available scripts however, you will need to run it from another prompt. This section allows you to add monitor scripts in Groovy while using a delay setting which will delay the start of the script. Provides a method to add external components to the server. By default you are considered the owner, and the Tigase load balancer is automatically filled in. You can restrict users to only be able to send and receive packets to and from certain virtual hosts. This may be helpful if you want to lock users to a specific domain, or prevent them from getting information from a statistics component. Allows you to set the default configuration for new MUC rooms. This will not be able to modify current in use and persistent rooms. This removes a monitor task from the list of available monitor scripts. This action is not permanent as it will revert to initial settings on server restart. You can fix a users roster from this prompt. Fill out the bare JID of the user and the names you wish to add or remove from the roster. You can edit a users roster using this tool, and changes are permanent. This does the same as the Fix User’s Roster, but can apply to users in clustered servers. As the title implies this gets a users' roster and displays it on screen. You can use a bare or full JID to get specific rosters. Enables you to see the contents of any file in the tigase directory. By default you are in the root directory, if you wish to go into directory use the following format: logs/tigase.log.0 If you don’t want to type in the location of a configuration file, you can use this prompt to bring up the contents of either tigase.conf or init.properties. Will output the current init.properties file, this includes any modifications made during the current server session. Will display any errors the server encounters in loading and running. Can be useful if you need to address any issues. Allows you to write command scripts in Groovy and store them physically so they can be saved past server restart and run at any time. Scripts written here will only be able to work on the Monitor component. Allows you to write command scripts in Groovy and store them physically so they can be saved past server restart and run at any time. Scripts written here will only be able to work on the MUC component. Uses OAuth to set new credentials and enable or disable a registration requirement with a signed form. Allows admins to pre-bind a BOSH session with a full or bare JID (with the resource automatically populated on connection). You may also specify HOLD or WAIT parameters. This will show if you have any external components and will reload them in case of any stuck threads.. This section returns the number of active users per specific vhost..
https://docs.tigase.net/tigase-server/7.1.1/Administration_Guide/webhelp/_management_2.html
2019-07-16T04:06:13
CC-MAIN-2019-30
1563195524502.23
[]
docs.tigase.net
) ); } } finally { if ( fs ) delete (IDisposable^)fs; } } }); } } Imports System Imports System.IO Imports System.Text Public Class Test Public Shared Sub Main() Dim path As String = "c:\temp\MyTest.txt" ' Delete the file if it exists. If File.Exists(path) Then File.Delete(path) End If 'Create the file. Dim fs As FileStream = File.Create(path) AddText(fs, "This is some text") AddText(fs, "This is some more text,") AddText(fs, Environment.NewLine & "and this is on a new line") AddText(fs, Environment.NewLine & Environment.NewLine) AddText(fs, "The following is a subset of characters:" & Environment.NewLine) Dim i As Integer For i = 1 To 120 AddText(fs, Convert.ToChar(i).ToString()) Next fs.Close() 'Open the stream and read it back. fs = File.OpenRead(path) Dim b(1024) As Byte Dim temp As UTF8Encoding = New UTF8Encoding(True) Do While fs.Read(b, 0, b.Length) > 0 Console.WriteLine(temp.GetString(b)) Loop fs.Close() End Sub Private Shared Sub AddText(ByVal fs As FileStream, ByVal value As String) Dim info As Byte() = New UTF8Encoding(True).GetBytes(value) fs.Write(info, 0, info.Length) End Sub End Class that exists Remarks...
https://docs.microsoft.com/en-us/dotnet/api/system.io.filestream?view=netframework-4.7.2
2019-07-16T05:39:22
CC-MAIN-2019-30
1563195524502.23
[]
docs.microsoft.com
Managing Computers and Connectivity. How should I set up my router? How do I connect my server to my network? Turn on Remote Web Access Turn media streaming on or off How do I find my domain name service provider?. What is Connector software? Prerequisites for connecting a computer to the home server Supported operating systems for home computers What changes does the home server make to a home computer? Connect computers to the server using the Connector software This section provides access to procedures and information that will help you install the Connector software, connect your computer to the server, and troubleshoot connecting computers to the server.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-home-server/ff357694%28v%3Dws.11%29
2019-07-16T05:08:01
CC-MAIN-2019-30
1563195524502.23
[]
docs.microsoft.com
. - Messages are broadcast and received on a preselected channel (numbered 0-100). - Broadcasts are at a certain level of power - more power means more range. - Messages are filtered by address (like a house number) and group (like a named recipient at the specified address). - The rate of throughput can be one of three pre-determined settings. - Send and receieve bytes to work with arbitrary data. - 100 (inclusive) that defines an arbitrary “channel” to which the radio is tuned. Messages will be sent via this channel and only messages received via this channel will be put onto the incoming message queue. Each step is 1MHz wide, based at 2400MHz. The power(defaultwhen filtering messages. Conceptually, radiomodule : RATE_250KBIT, RATE_1MBITor RATE_2MBIT. If configis not called then the defaults described above are assumed. radio. reset()¶ Reset the settings to their default values (as listed in the documentation for the configfunction above). MegjegyzésErrorexception is raised if conversion to string fails. Examples¶ #
https://microbit-micropython-hu.readthedocs.io/hu/latest/radio.html
2019-07-16T04:30:00
CC-MAIN-2019-30
1563195524502.23
[]
microbit-micropython-hu.readthedocs.io
PHP 7 with Nginx Source to Image Example¶ This is a PHP 7 Nginx Source to image example based on the following s2i Builder Note: at the moment no official php 7 sti image is available: Please consider this example as tech preview-nginx-s2i" oc delete svc php7-nginx-s2i oc delete dc php7-nginx-s2i Create Builder by template, and build builder: oc new-app -f You have to wait until the builder is ready
https://appuio-community-documentation.readthedocs.io/en/latest/app/php7nginxsti.html
2019-07-16T04:44:48
CC-MAIN-2019-30
1563195524502.23
[]
appuio-community-documentation.readthedocs.io
This guide describes how to develop extensions and customize Alfresco Process Services. Before beginning this guide, you should read the Administration Guide to make sure you have an understanding of how Alfresco Process Services are installed and configured. - Alfresco Process Services high-level architecture The following diagram gives a high-level overview of the technical components in Alfresco Process Services. -resco Process Services needs user, group, and membership information in its database. The main reason is performance (for example quick user/group searches) and data consistency (for example models are linked to users through foreign keys). In the Process Services logic, this is typically referred to as Identity Management (IDM). - Security configuration overrides Configure security with the com.activiti.conf.SecurityConfiguration class. It allows you to switch between database and LDAP/Active Directory authentication out of the box. It also configures REST endpoints under "/app" to be protected using a cookie-based approach with tokens and REST endpoints under "/api" to be protected by Basic Auth. -
https://docs.alfresco.com/process-services1.7/topics/developmentGuide.html
2019-07-16T04:18:40
CC-MAIN-2019-30
1563195524502.23
[]
docs.alfresco.com
Excel Services Error Codes Excel Services generates errors and error messages in the SOAP exception based on errors that occur in Excel Services. The following table shows the errors that are accessible when calls to the Excel Web Services methods throw a SOAP exception. You use the SubCode property of the SoapException class to capture the error codes. For more information about using the SubCode property to capture error codes, see How to: Use the SubCode Property to Capture Error Codes For more information about Excel Services alerts, see Excel Services Alerts. Error Codes The following table lists the error codes for Excel Web Services alerts and the associated messages, explanation, and resolutions. See Also Tasks How to: Use the SubCode Property to Capture Error Codes Concepts Excel Services Alerts Excel Services Known Issues and Tips Excel Services Best Practices
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2007/ms575980%28v%3Doffice.12%29
2019-07-16T04:40:34
CC-MAIN-2019-30
1563195524502.23
[]
docs.microsoft.com
For customers who do not want to use the virtual appliance deployment, Workspace ONE UEM offers the Linux installer so you can configure, download, and install VMware Tunnel onto a server. The Linux installer has different prerequisites than the virtual appliance method. To run the Linux installer, you must meet specific hardware, software, and general requirements before you can begin installation. Using the virtual appliance simplifies the requirements and installation process.
https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/9.4/vmware-airwatch-guides-94/GUID-AW94-TunnelInstall_Overview.html
2019-07-16T04:28:44
CC-MAIN-2019-30
1563195524502.23
[]
docs.vmware.com
We will present state of the art JIT compiler design based on CACAO, a GPL licensed multiplatform Java VM. After explaining the basics of code generation, we will focus on "problematic" instructions, and point to possible ways to exploit stuff. A short introduction into just-in-time compiler techniques is given: Why JIT, about compiler invocation, runtime code modification using signals, codegeneration. Then theoretical attack vectors are elaborated: language bugs, intermediate representation quirks and assembler instruction inadequacies. With these considerations in mind the results of a CACAO code review are presented. For each vulnerability possible exploits are discussed and two realized exploits are.
http://www.secdocs.org/docs/just-in-time-compilers-breaking-a-vm-video/
2019-07-16T04:52:20
CC-MAIN-2019-30
1563195524502.23
[]
www.secdocs.org
[ Tcllib Table Of Contents | Tcllib Index ] tie(n) 1.1 "Tcl Data Structures" Table Of Contents Synopsis - package require Tcl 8.4 - package require tie ?1.1? - ::tie::tie arrayvarname options... dstype dsname... - ::tie::untie arrayvarname ?token? - ::tie::info ties arrayvarname - ::tie::info types - ::tie::info type dstype - ::tie::register dsclasscmd as dstype - dsclasscmd objname ?dsname...? - ds destroy - ds names - ds size - ds get - ds set dict - ds unset ?pattern? - ds setv index value - ds unsetv index - ds getv index Description The tie package provides a framework for the creation of persistent Tcl array variables. It should be noted that the provided mechanism is generic enough to also allow its usage for the distribution of the contents of Tcl arrays over multiple threads and processes, i.e. communication. This, persistence and communication, is accomplished by tying) a Tcl array variable to a data source. Examples of data sources are other Tcl arrays and files. It should be noted that a single Tcl array variable can be tied to more than one data source. It is this feature which allows the framework to be used for communication as well. Just tie several Tcl arrays in many client processes to a Tcl array in a server and all changes to any of them will be distributed to all. Less centralized variants of this are of course possible as well. USING TIES TIE API This section describes the basic API used to establish and remove ties between Tcl array variables and data sources. This interface is the only one a casual user has to be concerned about. The following sections about the various internal interfaces can be safely skipped. - ::tie::tie arrayvarname options... dstype dsname... This command establishes a tie between the Tcl array whose name is provided by the argument arrayvarname and the data source identified by the dstype and its series of dsname arguments. All changes made to the Tcl array after this command returns will be saved to the data source for safekeeping (or distribution). The result of the command is always a token which identifies the new tie. This token can be used later to destroy this specific tie. - varname arrayvarname (in) The name of the Tcl array variable to connect the new tie to. - name|command dstype (in) This argument specifies the type of the data source we wish to access. The dstype can be one of log, array, remotearray, file, growfile, or dsource; in addition, the programmer can register additional data source types. Each dstype is followed by one or more arguments that identify the data source to which the array is to be tied. - string dsname (in) The series of dsname arguments coming after the dstype identifies the data source we wish to connect to, and has to be appropriate for the chosen type. The command understands a number of additional options which guide the process of setting up the connection between Tcl array and data source. - ::tie::untie arrayvarname ?token? This command dissolves one or more ties associated with the Tcl array named by arrayvarname. If no token is specified then all ties to that Tcl array are dissolved. Otherwise only the tie the token stands for is removed, if it is actually connected to the array. Trying to remove a specific tie not belonging to the provided array will cause an error. It should be noted that while severing a tie will destroy management information internal to the package the data source which was handled by the tie will not be touched, only closed. After the command returns none of changes made to the array will be saved to the data source anymore. The result of the command is an empty string. - varname arrayname (in) The name of a Tcl array variable which may have ties. - handle token (in) A handle representing a specific tie. This argument is optional. - ::tie::info ties arrayvarname This command returns a list of ties associated with the Tcl array variable named by arrayvarname. The result list will be empty if the variable has no ties associated with it. - ::tie::info types This command returns a dictionary of registered types, and the class commands they are associated with. - ::tie::info type dstype This command returns the fully resolved class command for a type name. This means that the command will follow a chain of type definitions ot its end. STANDARD DATA SOURCE TYPES This package provides the six following types as examples and standard data sources. - log This data source does not maintain any actual data, nor persistence. It does not accept any identifying arguments. All changes are simply logged to stdout. - array This data source uses a regular Tcl array as the origin of the persistent data. It accepts a single identifying argument, the name of this Tcl array. All changes are mirrored to that array. - remotearray This data source is similar to array. The difference is that the Tcl array to which we are mirroring is not directly accessible, but through a send-like command. It accepts three identifying arguments, the name of the other Tcl array, the command prefix for the send-like accessor command, and an identifier for the remote entity hosting the array, in this order. All changes are mirrored to that array, via the command prefix. All commands will be executed in the context of the global namespace. send-like means that the command prefix has to have send syntax and semantics. I.e. it is a channel over which we can send arbitrary commands to some other entity. The remote array data source however uses only the commands set, unset, array exists, array names, array set, and array get to retrieve and set values in the remote array. The command prefix and the entity id are separate to allow the data source to use options like -async when assembling the actual commands. Examples of command prefixes, listed with the id of the remote entity, without options. In reality only the part before the id is the command prefix: - file This data source uses a single file as origin of the persistent data. It accepts a single identifying argument, the path to this file. The file has to be both readable and writable. It may not exist, the data source will create it in that case. This (and only this) situation will require that the directory for the file exists and is writable as well. All changes are saved in the file, as proper Tcl commands, one command per operation. In other words, the file will always contain a proper Tcl script. If the file exists when the tie using it is set up, then it will be compacted, i.e. superfluous operations are removed, if the operations log stored in it contains either at least one operation clearing the whole array, or at least 1.5 times more operations than entries in the loaded array. - growfile This data source is like file in terms of the storage medium for the array data, and how it is configured. In constrast to the former it however assumes and ensures that the tied array will never shrink. I.e. the creation of new array entries, and the modification of existing entries is allowed, but the deletion of entries is not, and causes the data source to throw errors. This restriction allows us to simplify both file format and access to the file radically. For one, the file is read only once and the internal cache cannot be invalidated. Second, writing data is reduced to a simple append, and no compaction step is necessary. The format of the contents is the string representation of a dictionary which can be incrementally extended forever at the end. - dsource This data source uses an explicitly specified data source object as the source for the persistent data. It accepts a single identifying argument, the command prefix, i.e. object command. To use this type it is necessary to know how the framework manages ties and what data source objects are. All changes are delegated to the specified object. CREATING NEW DATA SOURCES This section is of no interest to the casual user of ties. Only developers wishing to create new data sources have to know the information provided herein. DATA SOURCE OBJECTS All ties are represented internally by an in-memory object which mediates between the tie framework and the specific data source, like an array, file, etc. This is the data source object. Its class, the data source class is not generic, but specific to the type of the data source. Writing a new data source requires us to write such a class, and then registering it with the framework as a new type. The following subsections describe the various APIs a data source class and the objects it generates will have to follow to be compatible with the tie framework. Data source objects are normally automatically created and destroyed by the framework when a tie is created, or removed. This management can be explicitly bypassed through the usage of the "dsource" type. The data source for this type is a data source object itself, and this object is outside of the scope of the tie framework and not managed by it. In other words, this type allows the creation of ties which talk to pre-existing data source objects, and these objects will survive the removal of the ties using them as well. REGISTERING A NEW DATA SOURCE CLASS After a data source class has been written it is necessary to register it as a new type with the framework. - ::tie::register dsclasscmd as dstype Using this command causes the tie framework to remember the class command dsclasscmd of a data source class under the type name dstype. After the call the argument dstype of the basic user command ::tie::tie will accept dstype as a type name and translate it internally to the appropriate class command for the creation of data source objects for the new data source. DATA SOURCE CLASS Each data source class is represented by a single command, also called the class command, or object creation command. Its syntax is - dsclasscmd objname ?dsname...? The first argument of the class command is the name of the data source object to create. The framework itself will always supply the string %AUTO%, to signal that the class command has to generate not only the object, but the object name as well. This is followed by a series of arguments identifying the data source the new object is for. These are the same dsname arguments which are given to the basic user command ::tie::tie. Their actual meaning is dependent on the data source class. The result of the class command has to be the fully-qualified name of the new data source object, i.e. the name of the object command. The interface this command has to follow is described in the section DATA SOURCE OBJECT API DATA SOURCE OBJECT API Please read the section DATA SOURCE CLASS first, to know how to generate new object commands. Each object command for a data source object has to provide at least the methods listed below for proper inter-operation with the tie framework. Note that the names of most of the methods match the subcommands of the builtin array command. - ds destroy This method is called when the object ds is destroyed. It now has to release all its internal resources associated with the external data source. - ds names This command has to return a list containing the names of all keys found in the data source the object talks to. This is equivalent to array names. - ds size This command has to return an integer number specifying the number of keys found in the data source the object talks to. This is equivalent to array size. - ds get This command has to return a dictionary containing the data found in the data source the object talks to. This is equivalent to array get. - ds set dict This command takes a dictionary and adds its contents to the data source the object talks to. This is equivalent to array set. - ds unset ?pattern? This command takes a pattern and removes all elements whose keys matching it from the data source. If no pattern is specified it defaults to *, causing the removal of all elements. This is nearly equivalent to array unset. - ds setv index value This command has to save the value in the data source the object talks to, under the key index. The result of the command is ignored. If an error is thrown then this error will show up as error of the set operation which caused the method call. - ds unsetv index This command has to remove the value under the key index from the data source the object talks to. The result of the command is ignored. If an error is thrown then this error will show up as error of the unset operation which caused the method call. - ds getv index This command has to return the value for the key index in the data source the object talks to. And here a small table comparing the data source methods to the regular Tcl commands for accessing an array. Regular Tcl Data source ----------- ----------- array names a ds names array size a ds size array get a ds get array set a dict ds set dict array unset a pattern ds unset ?pattern? ----------- ----------- set a($idx) $val ds setv idx val unset a($idx) ds unsetv idx $a($idx) ds getv idx ----------- ----------- Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category tie of the Tcllib Trackers. Please also report any ideas for enhancements you may have for either package and/or documentation.
http://docs.activestate.com/activetcl/8.5/tcl/tcllib/tie/tie.html
2019-05-19T10:59:13
CC-MAIN-2019-22
1558232254751.58
[]
docs.activestate.com
. - IPKISS Reference - PCell Reference - Netlist Reference - Layout Reference - Geometry Reference - Ports Reference - Routing - Elements and Layers - Importing Layouts from GDSII - Elements and Layers - Geometry Reference - Importing Layouts from GDSII - Ports Reference - Routing - Transformations - Traces - CircuitModel View reference - Technology Reference - Indices and tables Previous topic Next topic ipkiss3.pcell.routing.RouteToLine
http://docs.lucedaphotonics.com/3.1/reference/layout/routing.html
2019-05-19T10:43:16
CC-MAIN-2019-22
1558232254751.58
[]
docs.lucedaphotonics.com
Working with Variant Data Beginning with version 5.2, Databricks Runtime HLS includes a variety of tools for reading, writing, and manipulating variant data. Tip This topic uses the terms “variant” or “variant data” to refer to single nucleodite variants and short indels. VCF You can use Spark to read VCF files just like any other file format that Spark supports through the DataFrame API. df = spark.read.format("com.databricks.vcf").load(path) The returned DataFrame has a schema that mirrors a single row of a VCF. Information that applies to an entire variant (SNV or indel), like. You can use the DataFrameWriter API to save a VCF file, which you can then read with other tools. df.write.format("com.databricks.vcf").save(path) Each partition of the DataFrame is written to a separate VCF file. If you want the entire DataFrame in a single file, repartition the DataFrame before saving. df.repartition(1).write.format("com.databricks.vcf").save(path) To control the behavior of the VCF writer, you can provide the following option: BGEN Databricks Runtime HLS also provides the ability to read BGEN files, including those distributed by the UK Biobank project. df = spark.read.format("com.databricks.
https://docs.databricks.com/applications/genomics/variant-data.html
2019-05-19T11:25:00
CC-MAIN-2019-22
1558232254751.58
[]
docs.databricks.com
Knowledge Management Functionality related to Genesys Knowledge Management is supported in release 9.1 by means of an architecture that includes a high-availability pair of UCSs at release 8.5.3. All UCS clients (except RMI clients) are connected to the UCS 9.1 node or cluster and UCS 9.1 maintains a connection to UCS 8.5, sending it appropriate requests (requests not related to contacts and interactions). In a multi-Data-Center deployment, all UCS 9.1 nodes for all Data Centers will connect to the same pair of UCS 8.5s. UCS 8.5 pair (primary/backup) still need a relational database and Lucene indexes. These only contain the data for the functionalities described below and no longer handle contacts and interactions. The expected load is much lower than an UCS 8.5 that would also handle contact and interactions. Data size for Knowledge Management is expected to be under 1 GB of data. The Lucene index should be less than 100MB. This architecture enables support of following functionalities: - Standard Responses - Screening Rules - Categories - Field Codes - Training Models - PII information See Deploying and Configuring UCS 8.5.3 for installation details. Notes - All UCS clients (except RMI clients) are connected to the UCS 9.1 node/cluster. - UCS 9.1 maintains the connection to UCS 8.5 and sends it the following requests: - Standard Response Management - Categories Management - Categories Statistics Management - FAQ Object Management - Global Statistics Management - Model Management - Testing Result Management - Training Data Object Management - Training Email Management - Training Job Management - Field Codes Management - Screening Rules Management - Documents (OMResponse) Management - Relation Management - Action Management - Standard Response Favorites - Resource Property Feedback Comment on this article:
https://docs.genesys.com/Documentation/UCS/9.1.x/Dep/Architecture
2019-05-19T11:07:51
CC-MAIN-2019-22
1558232254751.58
[]
docs.genesys.com
AudioEffectLimiter¶ Inherits: AudioEffect < Resource < Reference < Object Category: Core Description¶ A limiter is similar to a compressor, but it’s less flexible and designed to disallow sound going over a given dB threshold. Adding one in the Master Bus is always recommended to reduce the effects of clipping. Soft clipping starts to reduce the peaks a little below the threshold level and progressively increases its effect as the input level increases such that the threshold is never exceeded. Property Descriptions¶ The waveform’s maximum allowed value..
https://docs.godotengine.org/en/3.1/classes/class_audioeffectlimiter.html
2019-05-19T10:24:23
CC-MAIN-2019-22
1558232254751.58
[]
docs.godotengine.org
Sending Push Notifications The Visual Studio App Center Push (ACP) service offers two ways to send notifications to registered devices: - Using the App Center Portal (described in this document) - Using the App Center Push API Create a notification To create a notification using ACP, complete the following steps: - Log into App Center. - Using the project navigator on the left side of the page, select the your user account or the organization where the app project is defined, then select the app project from the list that appears. - In the project navigator that opens, select Push. - If you've already configured ACP, then App Center opens the app project's Push Notifications section and lists the existing Campaigns for the selected app. If App Center opens the Push getting started page, then you've not configured the service; complete the App Center Push configuration as described in Configuring the Push Service before returning here. - In the upper-right corner of the page, click the Send notification button. At this point, App Center opens the Send notification wizard and walks you through the process of creating a campaign and sending the notification. The process consists of the following steps: - Compose: Define your internal name for the campaign, the title and content for the message, and any custom data (key/value pairs) you want included with the message. - Target: Specify the target audience for the campaign. - Review: Review the campaign settings one last time. - Send: Send the notification to the target audience. The following sections describe each step in the process in greater detail. Compose The first step in the process is to define the internal name for the campaign, and the content that's sent to target devices by App Center. - Populate the Campaign Name field with a descriptive name for the campaign. The value you provide will display in the App Center campaign list page. - (optional) Populate the Title field with an optional title for the notification sent to target devices. The value you provide here will be ##### - Populate the Message field with the content for the notification message. Message content is limited to 4,000 characters. - Use the Custom data area of the form to define up to 20 key/value data pairs that you want included with the message. Click the + button to add a key/value pair. Click the - button to remove a key/value pair from the message. Tip Custom data values enable campaigns to pass data into an application or trigger one or more actions within your client application. You can pass data directly to the app through a campaign, or send a URL the app uses to retrieve data. Applications can also use data values to set configuration options with in the application, changing app behavior through the campaign. For an implementation example, refer to A/B Test All the Things: React Native Experiments with App Center. When you're done populating the form, click the Next > button to continue. Target When sending notifications through ACP, you can target destination devices (notification recipients) in the following ways: - All registered devices: Sends notifications to all registered devices. Depending on the size of your target audience, this could take a long time to complete. - Audiences: Sends notifications to a segment of your app's registered device audience based on a set of device and custom properties. See Audiences below for additional information. - Device list: Sends notifications to up to 20 devices (using the install IDs for the target devices). When you select this option, populate the input field with a list of the Install IDs for the devices you want to send the notification to, separating IDs with commas. - User: Send notifications to all of the registered devices for up to 100 users. The user identity value used can be set using the App Center SDK or the App Center Auth service. In the wizard's Target panel, make the selection that makes the most sense for your campaign. Note The only way to get the Install ID for a particular device is through the App Center SDK. Your app must call the appropriate SDK method depending on the target platform (Android, iOS, Windows, etc.) to collect the ID, then share it with you (perhaps storing it in a server-based database) for use later. Review and send the message In the wizard's last pane, App Center summarizes the settings for the Campaign. To send the notification, click the Send notification button. To change the campaign before committing, click the < Back button. When you're done, App Center returns to the Campaigns list; select (click on) the campaign to check delivery progress. Audiences Audiences allow App Center users to define a segment of an app's user base using a set of properties (both pre-defined and custom) and send them targeted notifications. App Center allows customers to create multiple audiences in an app project (up to 200 audiences per app with a maximum of 1,000 devices per audience) and stores them for easy reuse. There are two types of properties you can use to define audiences: - Device properties - Custom properties Device properties The App Center SDK collects device properties automatically, retrieving them from the client application and exposing them through App Center for your use when defining audiences. The available properties are: - API Level (example: 2) - App Version (example: 1.0.0) - Country (example: Spain) - Device Model (example: Pixel 2 XL) - Language (example: DE) - Mobile Carrier (example: AT&T) - OEM (Original Equipment Manufacturer) (example: Samsung) - Screen Size (example: 1024X768) Custom properties Custom properties are custom key-value pairs defined by the developer and set in the application. They allow you to segment your app's user population based on something from your app (user settings, interests, tags, etc.). You can define a maximum of 60 custom properties per app project. Some examples of possible custom properties are: Developers set these custom properties in an app using the SDK methods for each target platform: Create an Audience App Center users can create Push Audiences two ways: - From the Audiences Page in the App Center portal - While creating a Push Campaign To create an Audience from the Audiences page: -. - Click the New audience button. To create an Audience from the Campaign Wizard - Create a Campaign using the instructions provided in Create a notification. - Navigate to the Campaign's Target page When defining an audience: - Add one or more rules until you've defined the proper selection criteria for your Audience. - Click the Add rule button to add a new rule to your Audience - Click the - button next to a rule to remove it from your Audience definition. When defining a Rule: - Select an available property from the property list (App Center shows all device properties and any custom properties you defined in your app) - Select the is or is not conditional from the list of options, or the mathematical operator ( >, <, >=, <=, etc.) - Select a value from the list of options that appear (for keyword properties like country) or enter a string or numeric value (depending on the property type) As you add rules, App Center calculates the audience size for your current selection, showing percentage of your total population and an estimate for the number of devices targeted by the Audience. When you're done making changes to the Audience, click the Save button to save the Audience. Note Only devices that have successfully registered for notifications are matched in audiences. Edit an Audience To edit Audience you want to edit. - When the Audience opens, click the pencil icon in the upper-right corner of the page to edit the Audience. Delete an Audience To delete checkbox next to the Audience you want to delete. - In the upper-right corner of the panel, click the Delete button. You can also open the Audience for editing, then in the upper-right corner of the page, click the three vertical dots and select Delete audience from the menu that appears. Send to user To send notifications to specific users, follow the instructions in Push to User. Limitations - App Center limits Audiences to a maximum of 1,000 devices. If you create an audience targeting more than 1,000 devices, App Center Push sends notifications to the first 1,000 devices that match the audience criteria, and skip all remaining devices (failing silently). - The Push to User feature limits notifications to 100 users. - The maximum number of Audiences for any App Center app project is 200. - You can define a maximum of 60 custom properties per app project. - Audiences match only devices that have a valid push registrations. For that reason, testing Audiences on an iOS simulator will fail. Feedback Send feedback about:
https://docs.microsoft.com/en-us/appcenter/push/send-notification
2019-05-19T11:22:46
CC-MAIN-2019-22
1558232254751.58
[array(['images/campaign-compose.png', 'App Center Push Campaign Compose page'], dtype=object) array(['images/campaign-target-device-list.png', 'App Center Push Campaign Target page'], dtype=object)]
docs.microsoft.com
To enable Tigase Message Archiving Component you need to add following block to etc/config.tdsl file: message-archive () { } It will enable component and configure it under name message-archive. By default it will also use database configured as default data source to store data. You can specify a custom database to be used for message archiving. To do this, define the archive-repo-uri property. 'message-archive' () { 'archive-repo-uri' = 'jdbc:mysql://localhost/messagearchivedb?user=test&password=test' } Here, messagearchivedb hosted on localhost is used.' () { } } Setting: To set default level to message you need to set default-store-method of message-archive processor to sess-man { message-archive { default-store-method = 'message' } } Setting` } } To By default, Tigase Message Archive will only store the message body with some metadata, this can exclude messages that are lacking a body. If you decide you wish to save non-body elements within Message Archive, you can now can now configure this by setting msg-archive-paths to list of elements paths which should trigger saving to Message Archive. To additionally store messages with <subject/> element: sess-man { message-archive { msg-archive-paths = [ '-/message/result[urn:xmpp:mam:1]' '/message/body', '/message/subject' ] } } Where above will set the archive to store messages with <body/> or <subject/> elements and for message with <result xmlns="urn:xmpp:mam:1"/> element not to be stored.! As mentioned above no additional configuration options than default configuration of Message Archiving component and plugin is needed to let user decide if he wants to enable or disable this feature (but it is disabled by default). In this case user to enable this feature needs to set settings of message archiving adding muc-save attribute to <default/> element of request with value set to true (or to false to disable this feature). To configure state of this feature on installation level, it is required to set store-muc-messages property of message-archive SessionManager processor: sess-man { message-archive { store-muc-messages = 'value' } } where value may be one of following values: user true false To configure state of this feature on domain level, you need to execute vhost configuration command. In list of fields to configure domain, field to set this will be available with following values: user true false This. VHost holds a setting that determines how long a message needs to be in archive for it to be considered old and removed. This can be set independently per Vhost. This setting can be modified by either using the HTTP admin, or the update item execution in adhoc command. This configuration is done by execution of Update item configuration adhoc command of vhost-man component, where you should select domain for which messages should be removed and then in field XEP-0136 - retention type select value Number of days and in field XEP-0136 - retention period (in days) enter number of days after which events should be removed from MA. In adhoc select domain for which messages should be removed and then in field XEP-0136 - retention type select value Number of days and in field XEP-0136 - retention period (in days) enter number of days after which events should be removed from MA. In HTTP UI select Other, then Update Item Configuration (Vhost-man), select the domain, and from there you can set XEP-0136 retention type, and set number of days at XEP-0136 retention period (in days). It is possible to use separate store for archived messages, to do so you need to configure new DataSource in dataSource section. Here we will use message-archive-store as a name of a data source. Additionally you need to pass name of newly configured data source to dataSourceName property of default repository of Message Archiving component. Example: dataSource { message-archive-store () { uri = 'jdbc:postgresql://server/message-archive-database' } } message-archive { repositoryPool { default () { dataSourceName = 'message-archive-store' } } } It is also possible to configure separate store for particular domain, i.e. example.com. Here we will configure data source with name example.com and use it to store data for archive: dataSource { 'example.com' () { uri = 'jdbc:postgresql://server/example-database' } } message-archive { repositoryPool { 'example.com' () { # we may not set dataSourceName as it matches name of domain } } } With this configuration messages for other domains than example.com will be stored in default data source. There } Tigase now is able to support querying message archives based on tags created for the query. Currently, Tigase can support the following tags to help search through message archives: - hashtag Words prefixed by a hash (#) are stored with a prefix and used as a tag, for example #Tigase - mention Words prefixed by an at (@) are stored with a prefix and used as a tag, for example @Tigase NOTE: Tags must be written in messages from users, they do not act as wildcards. To search for #Tigase, a message must have #Tigase in the <body> element. This feature allows users to query and retrieve messages or collections from the archive that only contain one or more tags. To enable this feature, the following line must be in the config.tdsl file (or may be added with Admin or Web UI) 'message-archive' (class: tigase.archive.MessageArchiveComponent) { 'tags-support' = true } To execute a request, the tags must be individual children elements of the retrieve or list element like the following request: <query xmlns=""> <tag>#People</tag> <tag>@User1</tag> </query> You may also specify specific senders, and limit the time and date that you wish to search through to keep the resulting list smaller. That can be accomplished by adding more fields to the retrieve element such as 'with', 'from’, and ’end' . Take a look at the below example: <iq type="get" id="query2"> <retrieve xmlns='urn:xmpp:archive' with='[email protected]' from='2014-01-01T00:00:00Z' end='2014-05-01T00:00:00Z'> <query xmlns=""> <tag>#People</tag> <tag>@User1</tag> </query> </retrieve> </iq> This stanza is requesting to retrieve messages tagged with @User1 and #people from chats with the user [email protected] between January 1st, 2014 at 00:00 to May 1st, 2014 at 00:00. NOTE: All times are in Zulu or GMT on a 24h clock. You can add as many tags as you wish, but each one is an AND statement; so the more tags you include, the smaller the results.
https://docs.tigase.net/tigase-server/snapshot/Administration_Guide/html_chunk/mAConfig.html
2019-05-19T10:41:38
CC-MAIN-2019-22
1558232254751.58
[]
docs.tigase.net
International Conference On Obesity & Its Treatment Start Date : October 1, 2018 End Date : October 3, 2018 Time : 9:00 am to5:00 pm Phone : +6531080483 Location : Hilton Garden Inn Las Vegas Strip South, 7830 South Las Vegas Boulevard, Las Vegas, Nevada, 89123, USA Description Meetings International looks forward to welcome you all to the “International Conference on Obesity & Its Treatment” which is going to be held at Las Vegas, USA on October 01-03, 2018. This conference is a global platform for Obesity Professionals where we assure you to have knowledge with scholars from around the world for the best current strategies for cure & treatment of obesity. Worldwide 300+ conferences are being organized by Meetings International Pte Ltd i.e. USA, Europe and Asia pacific regions. The main theme of this conference is “Obesity: A Dying Weigh of Life.”. Why to Attend With members from around the world focused on learning about obesity and its effects, this is your best opportunity to reach the largest assemblage of participants from the obesity and Endocrinology community. Obesity Conference will be an excellent opportunity as it will be the most cost-effective professional development choice. Obesity meetings will be the most relevant and densely-packed educational and networking opportunity focused on Obesity Research, Obesity issues available to professionals nationwide. Regulate presentations, spread information, be introduced to the current and potential scientists, make a feature with new drug developments, and receive name recognition at this 2-day event. World’s best-renowned speakers, the most recently updated techniques, developments, and the newest updates in obesity are the attributes of this conference. Importance and Scope The conference event covers a wide range of topics related to obesity while focusing on Health Problems. This obesity 2018 offers key features into evolving applications from global experts related to obesity and weight management. Obesity is best described as a condition in which extra body fat has accumulated to such an extent that health of a person may be adversely affected. Target Audiences: - Endocrinologists - Obesity specialists - Bariatric surgeons - Nutritionists - Dieticians - Fitness experts - Food system professionals - Drugs and Medical Devices Manufacturing Companies - Medical Colleges - Business Entrepreneurs Registration Info Admission : Go to URL Organized by Organized by nutritionobesity Zara Miles , Company Registration No: 201524282R 28 MAXWELL ROAD, #03-05 RED DOT TRAFFIC, SINGAPORE (069120) Tel: 6531080483 Mobile: +6531080483 Website: Event Categories: Endocrinology, Health & Nutrition, and Obstetrics.
http://meetings4docs.com/event/international-conference-on-obesity-its-treatment/
2019-05-19T10:30:23
CC-MAIN-2019-22
1558232254751.58
[]
meetings4docs.com
Topics in Clinical Nursing: 2019 Update, including Healthcare Communications and The Team Approach to Patient Care Start Date : November 24, 2019 End Date : December 1, 2019 Time : 4:30 pm to6:00 am Phone : 18004220711 Location : 5700 4th St. N. Description Topics: - 1. Resolving an Unexpected Medical Outcome using a team approach - Apply two models for disclosing an unexpected medical outcome: when care is reasonable or when care is unreasonable. - 2. Diabetes self-management the key to a successful outcome - Define the key skills needed to support a clinician as a change agent in coaching patients to self-manage their condition. - 3. Transitioning from Cure to Care in chronic disease management - Identify and use a model for documenting the continuum of care for patients being following by a retired nurse volunteer. - 4. The Electronic Health Record a challenge and an opportunity - Define the challenges that occur when using an electronic health record and 3 skills to overcome these challenges. - 5. Patient Safety: Strategies and tools to promote patient safety and improve clinical performance - Reasons to improve safety in the medical practice, including the role of leadership. - 6. Motivational Interviewing - Demonstrate communication techniques used in motivational interviewing - 7. Working in Highly Functioning Teams - 8. Diabetes 2019 Update - 9. Update on Communication Skills - Describe new communication tools used when interviewing patients - 10. Burnout - Define Burnout - Relate techniques to manage burnout in oneself and others - 11. The Neuroscience of Anger and the Angry Encounter - Describe new concepts in the neuroscience of anger - Demonstrate use of a communication tool to mitigate anger in oneself and others - 12. Navigating Difficult Conversations - Manage conversations when they become difficult - 13. Advanced Care Planning & End of Life - Describe the differences in ACP & EOL - Explain ACP & EOL to patients and families Registration Info Call 1-800-422-0711 or visit the Organizer Organized by Organized by KHenegar09 Continuing Education, Inc , 5700 4th St. N. Aboard Royal Caribbean's Oasis of the Seas for this 7-Night Eastern Caribbean Cruise Conference. Porting out of Miami, Florida.Tel: 1-800-422-0711 Mobile: 18004220711 Website: Event Categories: Family Practice, Internal Medicine, Physical Medicine, and Sports Medicine.
http://meetings4docs.com/event/topics-in-clinical-nursing-2019-update-including-healthcare-communications-and-the-team-approach-to-patient-care/
2019-05-19T11:15:56
CC-MAIN-2019-22
1558232254751.58
[]
meetings4docs.com
Cannot Run Notebook Commands After Canceling Streaming Cell Problem After you cancel a running streaming cell in a notebook attached to a Databricks Runtime 5.0 cluster, you cannot run any subsequent commands in the notebook. The commands are left in the “waiting to run” state, and you must clear the notebook’s state or detach and reattach the cluster before you can successfully run commands on the notebook. Note that this issue occurs only when you cancel a single cell; it does not apply when you run all and cancel all cells. Version This problem affects Databricks Runtime 5.0 clusters. It also affects Databricks Runtime 4.3 clusters whose Spark Configuration spark.databricks.chauffeur.enableIdleContextTracking has been set to true. Cause Databricks Runtime 4.3 introduced an optional idle execution context feature, which is enabled by default in Databricks Runtime 5.0, that allows the execution context to track streaming execution sequences to determine if they are idle. Unfortunately, this introduced an issue that causes the underlying execution context to be left in an invalid state when you cancel a streaming cell. This prevents additional commands from being run until the notebook state is reset. This behavior is specific to interactive notebooks and does not affect jobs. For more information about idle execution contexts, see Execution contexts. Solution Databricks is working to resolve this issue and release a maintenance update for Databricks Runtime 5.0. In the meantime, you can do either of the following: To remediate an affected notebook without restarting the cluster, go to the notebook’s Clear menu and select Clear State: If restarting the cluster is acceptable, you can solve the issue by turning off idle context tracking. Set the following Spark Configuration value on the cluster: spark.databricks.chauffeur.enableIdleContextTracking false Then restart the cluster.
https://docs.databricks.com/user-guide/faq/streaming-notebook-stuck.html
2019-05-19T11:24:32
CC-MAIN-2019-22
1558232254751.58
[]
docs.databricks.com
Everything Exposure Sheet Preferences Parameter Description Filtering These. Options Default Add Columns: The default position where the new column will be added. Default Column Width: The default width value for the new column being created.. NOTE You can set a keyboard shortcut to view the entire Xsheet.
https://docs.toonboom.com/help/harmony-16/essentials/preferences-guide/exposure-sheet-preference.html
2019-05-19T10:25:38
CC-MAIN-2019-22
1558232254751.58
[]
docs.toonboom.com
CaRP: Caching RSS Parser - Documentation CaRP Interactive FAQ Getting Started: Free Download | Purchase | Install Reference: Functions | Plugins | Themes | Full Index Etc.: Display Formatting | Example Code | Affiliates Getting Started: Free Download | Purchase | Install Reference: Functions | Plugins | Themes | Full Index Etc.: Display Formatting | Example Code | Affiliates CarpClearCache( which [, stale] );Delete all of the files in one of the cache directories Description: This function deletes all of the files in the cache folder indicated by "which". Arguments: - which: indicates by number which cache folder to empty. Valid values are: - 0: CarpAggregatePath--the aggregation cache - 1: CarpCachePath--where formatted output is stored by CarpAggregate, CarpFilter, CarpCacheFilter, CarpShow and CarpCacheShow - 2: CarpAutoCachePath--where raw newsfeeds are stored by CarpCacheFilter and CarpCacheShow - stale: Optionally, you may indicate how many days old each file must be to be deleted. If this argument is ommitted or is 0 (zero), all cache files in the specified directory will be deleted. Return value: none Usage Example (delete all files from the "autocache" directory that are more than 7 days old): <?php require_once '/home/geckotribe/carp/carp.php'; CarpClearCache(2,7); ?>
http://carp.docs.geckotribe.com/functions/carpclearcache.php
2019-05-19T10:25:39
CC-MAIN-2019-22
1558232254751.58
[array(['/img/logos/CaRP-Evolution-Box-200x200.gif', 'CaRP Evolution Box'], dtype=object) ]
carp.docs.geckotribe.com
ImageTexture¶ Inherits: Texture < Resource < Reference < Object Category: Core. Description¶ A Texture based on an Image. Can be created from an Image with create_from_image.. Return the format of the ImageTexture, one of Format. Load an ImageTexture from a file path. Set the Image of this ImageTexture. Resizes the ImageTexture to the specified dimensions.
http://docs.godotengine.org/ko/latest/classes/class_imagetexture.html
2019-05-19T10:47:25
CC-MAIN-2019-22
1558232254751.58
[]
docs.godotengine.org
class X::Attribute::Required Compilation error due to not declaring an attribute with the is required trait does X::MOP Compile time error thrown when a required attribute is not assigned when creating an object. For example my;my = Uses-required.new() Dies with OUTPUT: «(exit code 1) The attribute '$!req' is required, but you did not provide a value for it.␤» Methods method name method name(--> Str) Returns the name of the attribute.»
http://docs.perl6.org/type/X::Attribute::Required
2019-05-19T11:16:12
CC-MAIN-2019-22
1558232254751.58
[]
docs.perl6.org
Using the EMRFS S3-optimized Committer The EMRFS S3-optimized committer is an alternative OutputCommitter implementation that is optimized for writing files to Amazon S3 when using EMRFS. The committer is available with Amazon EMR release version 5.19.0 and later, and is enabled by default with Amazon EMR 5.20.0 and later. The committer is used for Spark jobs that use Spark SQL, DataFrames, or Datasets to write Parquet files. There are circumstances under which the committer is not used. For more information, see Requirements for the EMRFS S3-Optimized Committer. The EMRFS S3-optimized committer has the following benefits: Improves application performance by avoiding list and rename operations done in Amazon S3 during job and task commit phases. Avoids issues that can occur with Amazon S3 eventual consistency during job and task commit phases, and helps improve job correctness under task failure conditions.
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-s3-optimized-committer.html
2019-05-19T11:16:18
CC-MAIN-2019-22
1558232254751.58
[]
docs.aws.amazon.com
Top level protections Four of the App Firewall protections are especially effective against common types of Web attacks, and are therefore more commonly used than any of the others. They are: HTML Cross-Site Scripting. Examines requests and responses for scripts that attempt to access or modify content on a different Web site than the one on which the script is located. When this check finds such a script, it either renders the script harmless before forwarding the request or response to its destination, or it blocks the connection. HTML SQL Injection. Examines requests that contain form field data for attempts to inject SQL commands into an SQL database. When this check detects injected SQL code, it either blocks the request or renders the injected SQL code harmless before forwarding the request to the Web server. Note: If both of the following conditions apply to your configuration, you should make certain that your App Firewall is correctly configured: - If you enable the HTML Cross-Site Scripting check or the HTML SQL Injection check (or both), and - Your protected Web sites accept file uploads or contain Web forms that can contain large POST body data. For more information about configuring the App Firewall to handle this case, see “Configuring the Application Firewall.” Buffer Overflow. Examines requests to detect attempts to cause a buffer overflow on the Web server. Cookie Consistency. Examines cookies returned with user requests to verify that they match the cookies your Web server set for that user. If a modified cookie is found, it is stripped from the request before the request is forwarded to the Web server. The Buffer Overflow check is simple; you can usually enable blocking for it immediately. The other three top-level checks are considerably more complex and require configuration before you can safely use them to block traffic. Citrix strongly recommends that, rather than attempting to configure these checks manually, you enable the learning feature and allow it to generate the necessary exceptions.
https://docs.citrix.com/en-us/netscaler/11-1/application-firewall/top-level-protections.html
2019-05-19T11:59:03
CC-MAIN-2019-22
1558232254751.58
[]
docs.citrix.com
solution requires no external controller or software suite; it runs the VXLAN service and registration daemons on Cumulus Linux itself. The data path between bridge entities is established on top of a layer 3 fabric by means of a simple service node coupled with traditional MAC address learning. To see an example of a full solution before reading the following background information, read this chapter. LNV is a lightweight controller option. Contact Cumulus Networks with your scale requirements so we can make sure this is the right fit for you. There are also other controller options that can work on Cumulus Linux. Contents LNV Concepts Consider the following example deployment: The two switches running Cumulus Linux, called leaf1 and leaf2, each have a bridge configured. These two bridges contain the physical switch port interfaces connecting to the servers as well as the logical VXLAN interface associated with the bridge. By creating a logical VXLAN interface on both leaf switches, the switches become VTEPs (virtual tunnel end points). The IP address associated with this VTEP is most commonly configured as its loopback address; in the image above, the loopback address is 10.2.1.1 for leaf1 and 10.2.1.2 for leaf2. Acquire the Forwarding Database at the Service Node To connect these two VXLANs together and forward BUM (Broadcast, Unknown-unicast, Multicast) packets to members of a VXLAN, the service node needs to acquire the addresses of all the VTEPs for every VXLAN it serves. The service node daemon does this through a registration daemon running on each leaf switch that contains a VTEP participating in LNV. The registration process informs the service node of all the VXLANs to which the switch belongs. MAC Learning and Flooding With LNV, as with traditional bridging of physical LANs or VLANs, a bridge automatically learns the location of hosts as a side effect of receiving packets on a port. For example, when server1 sends a layer 2 packet to server3, leaf2 learns that the MAC address for server1 is located on that particular VXLAN and the VXLAN interface learns that the IP address of the VTEP for server1 is 10.2.1.1. So when server3 sends a packet to server1, the bridge on leaf2 forwards the packet out of the port to the VXLAN interface and the VXLAN interface sends it, encapsulated in a UDP packet, to the address 10.2.1.1. But what if server3 sends a packet to some address that has yet to send it a packet (server2, for example)? In this case, the VXLAN interface sends the packet to the service node, which sends a copy to every other VTEP that belongs to the same VXLAN. This is called service node replication and is one of two techniques for handling BUM (Broadcast Unknown-unicast and Multicast) traffic. BUM Traffic Cumulus Linux has two ways of handling BUM (Broadcast Unknown-unicast and Multicast) traffic: - Head end replication - Service node replication Head end replication is enabled by default in Cumulus Linux. You cannot have both service node and head end replication configured simultaneously, as this causes the BUM traffic to be duplicated; both the source VTEP and the service node send their own copy of each packet to every remote VTEP. Head End Replication The Broadcom switch with the Tomahawk, Trident II+, and Trident II ASIC and the Mellanox switch with the Spectrum ASIC are capable of head end replication (HER), which is the ability to generate all the BUM traffic in hardware. The most scalable solution available with LNV is to have each VTEP (top of rack switch) generate all of its own BUM traffic instead of relying on an external service node. HER is enabled by default in Cumulus Linux. Cumulus Linux verified support for up to 128 VTEPs with head end replication. To disable head end replication, edit the /etc/vxrd.conf file and set head_rep to False. Service Node Replication Cumulus Linux also supports service node replication for VXLAN BUM packets. This is useful with LNV if you have more than 128 VTEPs. However, it is not recommended because it forces the spine switches running the vxsnd (service node daemon) to replicate the packets in software instead of in hardware, unlike head end replication. If you are not using a controller but have more than 128 VTEPs, contact Cumulus Networks. To enable service node replication: - Disable head end replication; set head_repto False in the /etc/vxrd.conffile. Configure a service node IP address for every VXLAN interface using the vxlan-svcnodeipparameter: cumulus@switch:~$ net add vxlan VXLAN vxlan svcnodeip IP_ADDRESS You only specify this parameter when head end replication is disabled. For the loopback, the parameter is still named vxrd-svcnode-ip. - Edit the /etc/vxsnd.conffile and configure the following: Set the same service node IP address that you configured in the previous step: svcnode_ip = <> To forward VXLAN data traffic, set the following variable to True: enable_vxlan_listen = true Requirements Hardware Requirements Broadcom switches with the Tomahawk, Trident II+, or Trident II ASIC or Mellanox switches with the Spectrum ASIC running Cumulus Linux 2.5.4 or later. Please refer to the Cumulus Networks hardware compatibility list for a list of supported switch models. Configuration Requirements - The VXLAN has an associated VXLAN Network Identifier (VNI), also interchangeably called a VXLAN ID. - The VNI cannot be 0 or 16777215, as these two numbers are reserved values under Cumulus Linux. - The VXLAN link and physical interfaces are added to the bridge to create the association between the port, VLAN, and VXLAN instance. - Each bridge on the switch has only one VXLAN interface. Cumulus Linux does not support more than one VXLAN link in a bridge; however, a switch can have multiple bridges. - An SVI (Switch VLAN Interface) or layer 3 address on the bridge is not supported. For example, you cannot ping from the leaf1 SVI to the leaf2 SVI through the VXLAN tunnel; you need to use server1 and server2 to verify. Install the LNV Packages vxfld is installed by default on all new installations of Cumulus Linux 3.x. If you are upgrading from an earlier version, run sudo -E apt-get install python-vxfld to install the LNV package. Sample LNV Configuration The following images illustrate the configuration that is referenced throughout this chapter. Network Connectivity There must be full network connectivity before you can configure LNV. The layer 3 IP addressing information as well as the OSPF configuration ( /etc/frr/frr.conf) below is provided to make the LNV example easier to understand. OSPF is not a requirement for LNV, LNV just requires layer 3 connectivity. With Cumulus Linux this can be achieved with static routes, OSPF or BGP. Layer 3 IP Addressing Here is the configuration for the IP addressing information used in this example. Layer 3 Fabric The service nodes and registration nodes must all be routable between each other. The layer 3 fabric on Cumulus Linux can either be BGP or OSPF. In this example, OSPF is used to demonstrate full reachability. Click to expand the FRRouting configurations below. FRRouting configuration using OSPF: Host Configuration In this example, the servers are running Ubuntu 14.04. There needs to be a trunk mapped from server1 and server2 to the respective switch. In Ubuntu this is done with subinterfaces. You can expand the configurations below. On Ubuntu, it is more reliable to use ifup and if down to bring the interfaces up and down individually, rather than restarting networking entirely (there is no equivalent to if reload like there is in Cumulus Linux): cumulus@server1:~$ sudo ifup eth3.10 Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config Added VLAN with VID == 10 to IF -:eth3:- cumulus@server1:~$ sudo ifup eth3.20 Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config Added VLAN with VID == 20 to IF -:eth3:- cumulus@server1:~$ sudo ifup eth3.30 Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config Added VLAN with VID == 30 to IF -:eth3:- Configure the VLAN to VXLAN Mapping Configure the VLANs and associated VXLANs. In this example, there are 3 VLANs and 3 VXLAN IDs (VNIs). VLANs 10, 20 and 30 are used and associated with VNIs 10, 2000 and 30 respectively. The loopback address, used as the vxlan-local-tunnelip, is the only difference between leaf1 and leaf2 for this demonstration. Why is vni-2000 not vni-20? For example, why not tie VLAN 20 to VNI 20, or why was 2000 used? VXLANs and VLANs do not need to be the same number. However if you are using fewer than 4096 VLANs, there is no reason not to make it easy and correlate VLANs to VXLANs. It is completely up to you. Verify the VLAN to VXLAN Mapping Use the brctl show command to see the physical and logical interfaces associated with that bridge: cumulus@leaf1:~$ brctl show bridge name bridge id STP enabled interfaces bridge 8000.443839008404 yes swp32s0.10 vni-10 vni-2000 vni-30 As with any logical interfaces on Linux, the name does not matter (other than a 15-character limit). To verify the associated VNI for the logical name, use the ip -d link show command: cumulus@leaf1:~$ ip -d link show vni-10 43: vni-10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-10 state UNKNOWN mode DEFAULT link/ether 02:ec:ec:bd:7f:c6 brd ff:ff:ff:ff:ff:ff vxlan id 10 srcport 32768 61000 dstport 4789 ageing 1800 bridge_slave The vxlan id 10 indicates the VXLAN ID/VNI is indeed 10 as the logical name suggests. Enable and Manage Service Node and Registration Daemons Every VTEP must run the registration daemon ( vxrd). Typically, every leaf switch acts as a VTEP. A minimum of 1 switch (a switch not already acting as a VTEP) must run the service node daemon ( vxsnd). The instructions for enabling these daemons follows. Enable the Service Node Daemon The service node daemon ( vxsnd) is included in the Cumulus Linux repository as vxfld-vxsnd. The service node daemon can run on any switch running Cumulus Linux as long as that switch is not also a VXLAN VTEP. In this example, enable the service node only on the spine1 switch, then restart the service. cumulus@spine1:~$ sudo systemctl enable vxsnd.service cumulus@spine1:~$ sudo systemctl restart vxsnd.service Do not run vxsnd on a switch that is already acting as a VTEP. Enable the Registration Daemon The registration daemon ( vxrd) is included in the Cumulus Linux package as vxfld-vxrd. The registration daemon must run on each VTEP participating in LNV, so you must enable it on every TOR (leaf) switch acting as a VTEP, then restart the vxrd daemon. For example, on leaf1: cumulus@leaf1:~$ sudo systemctl enable vxrd.service cumulus@leaf1:~$ sudo systemctl restart vxrd.service Then enable and restart the vxrd daemon on leaf2: cumulus@leaf2:~$ sudo systemctl enable vxrd.service cumulus@leaf2:~$ sudo systemctl restart vxrd.service Check the Daemon Status To determine if the daemon is running, use the systemctl status <daemon name>.service command. For the service node daemon: cumulus@spine1:~$ sudo systemctl status vxsnd.service ● vxsnd.service - Lightweight Network Virt Discovery Svc and Replicator Loaded: loaded (/lib/systemd/system/vxsnd.service; enabled) Active: active (running) since Wed 2016-05-11 11:42:55 UTC; 10min ago Main PID: 774 (vxsnd) CGroup: /system.slice/vxsnd.service └─774 /usr/bin/python /usr/bin/vxsnd May 11 11:42:55 cumulus vxsnd[774]: INFO: Starting (pid 774) ... For the registration daemon: cumulus@leaf1:~$ sudo systemctl status vxrd.service ● vxrd.service - Lightweight Network Virtualization Peer Discovery Daemon Loaded: loaded (/lib/systemd/system/vxrd.service; enabled) Active: active (running) since Wed 2016-05-11 11:42:55 UTC; 10min ago Main PID: 929 (vxrd) CGroup: /system.slice/vxrd.service └─929 /usr/bin/python /usr/bin/vxrd May 11 11:42:55 cumulus vxrd[929]: INFO: Starting (pid 929) ... Configure the Registration Node The registration node was configured earlier in /etc/network/interfaces in the VXLAN mapping section above; no additional configuration is typically needed. However, if you need to modify the registration node configuration, edit /etc/vxrd.conf. cumulus@leaf1:~$ sudo nano /etc/vxrd.conf Then edit the svcnode_ip variable: svcnode_ip = 10.2.1.3 Then perform the same on leaf2: cumulus@leaf2:~$ sudo nano /etc/vxrd.conf And again edit the svcnode_ip variable: svcnode_ip = 10.2.1.3 Enable, then restart the registration node daemon for the change to take effect: cumulus@leaf1:~$ sudo systemctl enable vxrd.service cumulus@leaf1:~$ sudo systemctl restart vxrd.service Restart the daemon on leaf2: cumulus@leaf2:~$ sudo systemctl enable vxrd.service cumulus@leaf2:~$ sudo systemctl restart vxrd.service The complete list of options you can configure is listed below: Use 1, yes, true, or on for True for each relevant option. Use 0, no, false, or off for False. Configure the Service Node To configure the service node daemon, edit the /etc/vxsnd.conf configuration file. For the example configuration, default values are used, except for the svcnode_ip field. cumulus@spine1:~$ sudo nano /etc/vxsnd.conf The address field is set to the loopback address of the switch running the vxsnd daemon. svcnode_ip = 10.2.1.3 Enable, then restart the service node daemon for the change to take effect: cumulus@spine1:~$ sudo systemctl enable vxsnd.service cumulus@spine1:~$ sudo systemctl restart vxsnd.service The complete list of options you can configure is listed below: Use 1, yes, true, or on for True for each relevant option. Use 0, no, false, or off for False. Advanced LNV Usage Scale LNV by Load Balancing with Anycast The above configuration assumes a single service node, which can quickly be overwhelmed by BUM traffic. To load balance BUM traffic across multiple service nodes, use Anycast. Anycast enables BUM traffic to reach the topologically nearest service node instead of overwhelming a single service node. Enable the Service Node Daemon on Additional Spine Switches In this example, spine1 already has the service node daemon enabled. Enable it on the spine2 switch, then restart the vxsnd daemon: cumulus@spine2:~$ sudo systemctl enable vxsnd.service cumulus@spine2:~$ sudo systemctl restart vxsnd.service Configure the Anycast Address on All Participating Service Nodes Configure the Service Node vxsnd.conf File Reconfigure the VTEPs (Leafs) to Use the Anycast Address Test Connectivity Repeat the ping tests from the previous section. Here is the table again for reference: cumulus@server1:~$ ping 10.10.10.2 PING 10.10.10.2 (10.10.10.2) 56(84) bytes of data. 64 bytes from 10.10.10.2: icmp_seq=1 ttl=64 time=5.32 ms 64 bytes from 10.10.10.2: icmp_seq=2 ttl=64 time=0.206 ms ^C --- 10.10.10.2 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.206/2.767/5.329/2.562 ms PING 10.10.20.2 (10.10.20.2) 56(84) bytes of data. 64 bytes from 10.10.20.2: icmp_seq=1 ttl=64 time=1.64 ms 64 bytes from 10.10.20.2: icmp_seq=2 ttl=64 time=0.187 ms ^C --- 10.10.20.2 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.187/0.914/1.642/0.728 ms cumulus@server1:~$ ping 10.10.30.2 PING 10.10.30.2 (10.10.30.2) 56(84) bytes of data. 64 bytes from 10.10.30.2: icmp_seq=1 ttl=64 time=1.63 ms 64 bytes from 10.10.30.2: icmp_seq=2 ttl=64 time=0.191 ms ^C --- 10.10.30.2 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.191/0.913/1.635/0.722 ms Restart Network Removes vxsnd Anycast IP Address from Loopback Interface If you have not configured a loopback anycast IP address in the /etc/network/interfaces file, but you have enabled the vxsnd (service node daemon) log to automatically add anycast IP addresses, when you restart networking (with systemctl restart networking), the anycast IP address gets removed from the loopback interface. To prevent this issue from occurring, specify an anycast IP address for the loopback interface in both the /etc/network/interfaces file and the vxsnd.conf file. This way, in case vxsnd fails, you can withdraw the IP address. Related Information - tools.ietf.org/html/rfc7348 - en.wikipedia.org/wiki/Anycast Network virtualization chapter, Cumulus Linux user guide -
https://docs.cumulusnetworks.com/display/DOCS/Lightweight+Network+Virtualization+Overview
2019-05-19T10:28:16
CC-MAIN-2019-22
1558232254751.58
[]
docs.cumulusnetworks.com
UIElement. Preview UIElement. Stylus Out OfRange Event Preview UIElement. Stylus Out OfRange Event Preview UIElement. Stylus Out OfRange Event Preview Field Stylus Out OfRange Event Definition Identifies the PreviewStylusOutOfRange routed event. public: static initonly System::Windows::RoutedEvent ^ PreviewStylusOutOfRangeEvent; public static readonly System.Windows.RoutedEvent PreviewStylusOutOfRangeEvent; staticval mutable PreviewStylusOutOfRangeEvent : System.Windows.RoutedEvent Public Shared ReadOnly PreviewStylusOutOfRange.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.uielement.previewstylusoutofrangeevent?view=netframework-4.7.2
2019-05-19T10:31:26
CC-MAIN-2019-22
1558232254751.58
[]
docs.microsoft.com
EntityInstanceReference Class Represents a reference to an External Item. Inheritance Hierarchy System.Object System.MarshalByRefObject Microsoft.BusinessData.Runtime.EntityInstanceReference Namespace: Microsoft.BusinessData.Runtime Assembly: Microsoft.BusinessData (in Microsoft.BusinessData.dll) Syntax 'Declaration Public NotInheritable Class EntityInstanceReference _ Inherits MarshalByRefObject 'Usage Dim instance As EntityInstanceReference public sealed class EntityInstanceReference : MarshalByRefObject Remarks EntityInstanceReference (EIR) allows applications to create a reference to an External Item, which can be stored as a string value. The applications can use the string value to keep track of relevant External Items. For example, an application can ask the user to specify an External Item to be associated with a document, then store the EIR as a string value in the document to remember this relationship. When needed, the application can get the External Item back using the EIR. EIR can also be used to send references to External Items to other machines or processes. Note The string representation of the EntityInstanceReference is case sensitive. Note The EntityInstanceReference identifies the external content type by name and namespace. Therefore the reference is valid only within a group of Metadata Stores that are controlled together. For example, it is safe to send EIRs between the users of a SharePoint farm; however, it is not safe to send EIRs across the Internet. Thread Safety Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. See Also Reference EntityInstanceReference Members Microsoft.BusinessData.Runtime Namespace
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ee545354%28v%3Doffice.14%29
2019-05-19T10:34:59
CC-MAIN-2019-22
1558232254751.58
[]
docs.microsoft.com
EnableWindow function Enables or disables mouse and keyboard input to the specified window or control. When input is disabled, the window does not receive input such as mouse clicks and key presses. When input is enabled, the window receives all input. Syntax BOOL EnableWindow( HWND hWnd, BOOL bEnable ); Parameters hWnd Type: HWND A handle to the window to be enabled or disabled. bEnable Type: BOOL Indicates whether to enable or disable the window. If this parameter is TRUE, the window is enabled. If the parameter is FALSE, the window is disabled. Return Value Type: BOOL If the window was previously disabled, the return value is nonzero. If the window was not previously disabled, the return value is zero. Remarks If the window is being disabled, the system sends a WM_CANCELMODE message. If the enabled state of a window is changing, the system sends a WM_ENABLE message after the WM_CANCELMODE message. (These messages are sent before EnableWindow returns.) If a window is already disabled, the system tries to determine which window should receive mouse messages. By default, a window is enabled when it is created. To create a window that is initially disabled, an application can specify the WS_DISABLED style in the CreateWindow or CreateWindowEx function. After a window has been created, an application can use EnableWindow to enable or disable the window. An application can use this function to enable or disable a control in a dialog box. A disabled control cannot receive the keyboard focus, nor can a user gain access to it. Requirements See Also Conceptual Reference
https://docs.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-enablewindow
2019-05-19T10:34:06
CC-MAIN-2019-22
1558232254751.58
[]
docs.microsoft.com
We don't have a reseller program, but we do have a partner program. We differentiate a partner from a reseller as: - A partner is an organization that supports and adds value to the end-user. Typically they are acting as consultants and, within Spidergap, they will own and help manage the projects for their customers. - A reseller is an organization that simply acts as a buffer between the end-user and Spidergap. We do not support resellers as we believe that end-users will have a better experience by liaising directly with us. Interested in becoming a partner? Read more about the partner benefits and how to join.
https://docs.spidergap.com/spidergap-partners/does-spidergap-have-a-reseller-program
2019-05-19T11:20:08
CC-MAIN-2019-22
1558232254751.58
[]
docs.spidergap.com
Enable the indexer cluster master node Before reading this topic, read Indexer cluster deployment overview. A cluster has one, and only one, master node. The master node coordinates the activities of the peer nodes. It does not itself store or replicate data (aside from its own internal data). Important: A master node cannot do double duty as a peer node or a search node. The Splunk Enterprise instance that you enable as master node must perform only that single indexer cluster role. In addition, the master cannot share a machine with a peer. Under certain limited circumstances, however, the master instance can handle a few other lightweight functions. See "Additional roles for the master node". You must enable the master node as the first step in deploying a cluster, before setting up the peer nodes. The procedure in this topic explains how to use Splunk Web to enable a master node. You can also enable a master in two other ways: - Directly edit the master's server.conffile. See "Configure the master with server.conf" for details. Some advanced settings can only be configured by editing this file. - Use the CLI edit cluster-configcommand. See "Configure the master with the CLI" for details. Important: This topic explains how to enable a master for a single-site cluster only. If you plan to deploy a multisite cluster, see "Configure multisite indexer clusters with server.conf". Enable the master To enable an indexer as the master node: 1. Click Settings in the upper right corner of Splunk Web. 2. In the Distributed environment group, click Indexer clustering. 3. Select Enable indexer clustering. 4. Select Master master master node. The message appears, "You must restart Splunk for the master node to become active. You can restart Splunk from Server Controls." 7. Click Go to Server Controls. This takes you to the Settings page where you can initiate the. View the master dashboard After the restart, log back into the master and return to the Clustering page in Splunk Web. This time, you see the master clustering dashboard. For information on the dashboard, see "View the master dashboard". Perform additional configuration For information on post-deployment master node configuration, see "Master configuration overview".!
https://docs.splunk.com/Documentation/Splunk/6.5.3/Indexer/Enablethemasternode
2019-05-19T10:44:31
CC-MAIN-2019-22
1558232254751.58
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
LATEST VERSION: 5.x - RELEASE NOTES Determine the Transformation Schema A newer version of this documentation is available. Click here to view the most up-to-date release of the Greenplum 4.x documentation..
https://gpdb.docs.pivotal.io/43270/admin_guide/load/topics/g-determine-the-transformation-schema.html
2019-05-19T10:47:18
CC-MAIN-2019-22
1558232254751.58
[array(['/images/icon_gpdb.png', None], dtype=object)]
gpdb.docs.pivotal.io
gpfdist:// Protocol master host and on each segment host. Run gpfdist on the host where the external data files reside. gpfdist uncompresses gzip (.gz) and bzip2 (.bz2) files automatically. You can use the wildcard character (*) or other C-style pattern matching to denote multiple files to read. The files specified are assumed to be relative to the directory that you specified when you started the gpfdist instance. All primary segments access the external file(s) in parallel, subject to the number of segments set in the gp_external_max_segments server configuration parameter. Use multiple gpfdist data sources in a CREATE EXTERNAL TABLE statement to scale the external table's scan performance. For more information about configuring gpfdist, see Using the Greenplum Parallel File Server (gpfdist). See the gpfdist reference documentation for more information about using gpfdist with external tables.
https://gpdb.docs.pivotal.io/43310/admin_guide/load/topics/g-gpfdist-protocol.html
2019-05-19T10:32:00
CC-MAIN-2019-22
1558232254751.58
[array(['/images/icon_gpdb.png', None], dtype=object)]
gpdb.docs.pivotal.io
Azure¶ To deploy Pachyderm to Azure, you need to: Deploy Kubernetes¶ The easiest way to deploy a Kubernetes cluster is through the Azure Container Service (AKS). To create a new AKS Kubernetes cluster using the Azure CLI az, run: $ RESOURCE_GROUP=<a unique name for the resource group where Pachyderm will be deployed, e.g. "pach-resource-group"> $ LOCATION=<a Azure availability zone where AKS is available, e.g, "Central US"> $ NODE_SIZE=<size for the k8s instances, we recommend at least "Standard_DS4_v2"> $ CLUSTER_NAME=<unique name for the cluster, e.g., "pach-aks-cluster"> # Create the Azure resource group. $ az group create --name=${RESOURCE_GROUP} --location=${LOCATION} # Create the AKS cluster. $ az aks create --resource-group ${RESOURCE_GROUP} --name ${CLUSTER_NAME} --generate-ssh-keys --node-vm-size ${NODE_SIZE} Once Kubernetes is up and running you should be able to confirm the version of the Kubernetes server via: $ kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.9", GitCommit:"19fe91923d584c30bd6db5c5a21e9f0d5f742de8", GitTreeState:"clean", BuildDate:"2017-10-19T16:55:06Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Note - Azure AKS is still a relatively new managed service. As such, we have had some issues consistently deploying AKS clusters in certain availability zones. If you get timeouts or issues when provisioning an AKS cluster, we recommend trying in a fresh resource group and possibly trying a different zone. Deploy Pachyderm¶ To deploy Pachyderm we will need to: - Add some storage resources on Azure, - Install the Pachyderm CLI tool, pachctl, and - Deploy Pachyderm on top of the storage resources. Set up the Storage Resources¶ Pachyderm requires an object store and persistent volume (Azure Storage) to function correctly. To create these resources, you need to clone the Pachyderm GitHub repo and then run the following from the root of that repo: $ STORAGE_ACCOUNT=<The name of the storage account where your data will be stored, unique in the Azure location> $ CONTAINER_NAME=<The name of the Azure blob container where your data will be stored> $ STORAGE_SIZE=<the size of the persistent volume that you are going to create in GBs, we recommend at least "10"> # Create an Azure storage account az storage account create \ --resource-group="${RESOURCE_GROUP}" \ --location="${LOCATION}" \ --sku=Standard_LRS \ --name="${STORAGE_ACCOUNT}" \ --kind=Storage # Build a microsoft tool for creating Azure VMs from an image. Necessary to create the blank PV. $ STORAGE_KEY="$(az storage account keys list \ --account-name="${STORAGE_ACCOUNT}" \ --resource-group="${RESOURCE_GROUP}" \ --output=json \ | jq '.[0].value' -r )" Install pachctl¶ pachctl is a command-line utility used for interacting with a Pachyderm cluster. # For macOS: $ brew tap pachyderm/tap && brew install pachyderm/tap/[email protected] # For Linux (64 bit) or Window 10+ on WSL: $ curl -o /tmp/pachctl.deb -L && sudo dpkg -i /tmp/pachctl.deb You can try running pachctl version to check that this worked correctly: $ pachctl version --client-only COMPONENT VERSION pachctl 1.7.0 Deploy Pachyderm¶ Now we’re ready to deploy Pachyderm: $ pachctl deploy microsoft ${CONTAINER_NAME} ${STORAGE_ACCOUNT} ${STORAGE_KEY} ${STORAGE_SIZE} --dynamic-etcd-nodes 1 It may take a few minutes for the pachd pods to be running because it’s pulling containers from Docker Hub. When Pachyderm is up and running, you should see something similar to the following state: $ kubectl get pods NAME READY STATUS RESTARTS AGE dash-482120938-vdlg9 2/2 Running 0 54m etcd-0 1/1 Running 0 54m pachd-1971105989-mjn61 1/1 Running 0 54m Note: If you see a few restarts on the pachd nodes, that’s totally ok. That simply means that Kubernetes tried to bring up those containers before etcd was ready so it restarted them. Finally, assuming you want to connect to the cluster from your local machine (i.e., your laptop), by creating a new repo. $ pachctl version COMPONENT VERSION pachctl 1.7.0 pachd 1.7.0
https://pachyderm.readthedocs.io/en/stable/deployment/azure.html
2019-05-19T11:14:32
CC-MAIN-2019-22
1558232254751.58
[]
pachyderm.readthedocs.io
tkcon: Tcl Plugin Stripped Demo tkcon Documentation (May 2001) Documentation Purpose & Features Limitations To Do Online Demo (requires Tk plugin) Using TkCon with other Tk Languages This. Have a look at some of the features: (culled from the tkcon documentation) Variable / Path / Procedure Name Expansion. Type in set tc at the prompt. Hit <Control-Shift-V>. set tcl_ should now be visible. Hit <Control-Shift-V> again. You should see the rest of the completions printed out for you. Works the same for procedures and files paths (file access restricted from plugin). Works properly when spaces or other funny characters are including in the name. Command Highlighting. Note that set should be in green, denoting it is a recognized command in that interpreter. Electric Character Matching. Watch while you type the following: proc foo { a b } { puts [list $a $b] }. Did you notice the blink matching of the braces? Yes, it's smart. Command History. Use the Up/Down arrows or <Control-p>/<Control-n> to peruse the command history. <Control-r>/<Control-s> Actually does command history matching (like tcsh or other advanced Unix shells). Useful Colorization. Having defined foo above, type in foo hey. Note that the error comes back in red. Go up one in the command history and add you and see that regular stdout output comes through in blue (the colors are configurable). Cut/Copy/Paste. You should be able to do that between outside windows and TkCon. The default keys are <Control-x>/<Control-c>/<Control-v>. © Jeffrey Hobbs
http://docs.activestate.com/activetcl/8.5/tcl/tkcon/plugin.html
2019-05-19T10:59:03
CC-MAIN-2019-22
1558232254751.58
[]
docs.activestate.com
Testcase to check if the technical tax is not displayed in Recapitulation of producer invoices. Make sure G000X has producer invoice checked in bpartner, tab vendor Check bpartner group of G000X, make sure that: Set prices for P0001 and P0002 in CP and VP, if not set already Create a vendor invoice for G000X, using technical tax (2.5641%) Complete the invoice and print jasper Create a customer invoice for G000X, using a different tax (e.g. 2.5%) Complete the invoice and print jasper Create a purchase order and a sales order for G000X, using the same tax, products and qties as the invoices (no TUs!) Complete the purchase order and print the jasper (order confirmation) Complete the sales order and print the jasper (order confirmation) Go back to bpartner group of G000X, and set: Go back to your producer invoice, print the jasper again Do the same for your sales invoice
http://docs.metasfresh.org/tests_collection/testcases/Testcase_FRESH-358.html
2019-05-19T10:43:56
CC-MAIN-2019-22
1558232254751.58
[]
docs.metasfresh.org
Deployment Summary The basic sequence of events for deploying Genesys Performance Management Advisors is shown below. This sequence is repeated throughout the book to help you understand where you are in the deployment process. Advisors now integrate with the Genesys Management Layer. If you have installed earlier releases of Advisors and are familiar with the process, be aware that there are additional tasks required starting in release 8.5.1. See Integration with the Genesys Solution Control Server for information.). Also starting in release 8.5.1, you register and manage Stat Servers differently than in previous releases. The Advisors Genesys Adapter installer no longer prompts you for Stat Server information. You now execute dedicated database procedures against the Advisors Platform database to: - register or remove Stat Server instances - add, edit, or remove Stat Server configuration settings related to Advisors See Manage Advisors Stat Server Instances for information. See the Prerequisites and the various deployment procedures in Deploying Advisors for detailed information. - Install the databases that correspond to the Advisors products you will deploy: - Advisors Genesys Adapter metrics database - Advisors Platform database - Advisors Cisco Adapter database (if you use ACA) - Metric Graphing database - Create the Advisors User and the Object Configuration User accounts. - Install the Platform service (Geronimo) on servers on which you will deploy one of the following Advisors components: - Contact Center Advisor Web services - Workforce Advisor server or Web services - Frontline Advisor server or Web services - Contact Center Advisor–Mobile Edition server - Resource Management Console - Install each adapter you will use (AGA and ACA). - Register the Stat Servers that you plan to use with Advisors. - Install the Advisors components for your enterprise: - Contact Center Advisor - Workforce Advisor - Contact Center Advisor – Mobile Edition - Frontline Advisor - SDS and Resource Management - Make any required configuration changes. Feedback Comment on this article:
https://docs.genesys.com/Documentation/PMA/8.5.1/PMADep/Summary
2019-05-19T10:33:06
CC-MAIN-2019-22
1558232254751.58
[]
docs.genesys.com
This topic explains how to install the C/C++test plugin into a working copy of Eclipse or Application Developer on Windows. The section includes: See IDE Support for details about supported versions. To install the C/C++test plugin for Eclipse on Windows: To launch the plugin: Eclipse will automatically find the C/C++test plugin. After Eclipse is launched, you should see a Parasoft menu added to the Eclipse menu bar. If you do not see this menu, choose Window> Open Perspective> Other, select C++test, then click OK. If you suspect that C/C++test is not properly installed, see Troubleshooting and FAQs for help resolving some common installation problems. The license is configured through the centralized licensing framework (Parasoft> Preferences> Parasoft> Licenses). For details, see Licensing.
https://docs.parasoft.com/plugins/viewsource/viewpagesrc.action?pageId=6386428
2019-05-19T10:48:18
CC-MAIN-2019-22
1558232254751.58
[]
docs.parasoft.com
Roadmap¶ - Goals for version 2.2: - Improve workflow system - Workflow indexing support. Accessor already works {{ document.workflows.all.0.get_current_state }}. Index recalculation after workflow transition is missing. - Workflow actions. Predefined actions to be execute on document leaving or entering a state or a transition. Example: “Add to folder X”, “Attach tag X”. - Add support for state recipients. - Add workflow document inbox notification. - Replace indexing and smart linking template language (use Jinja2 instead of Django’s). - Display/find documents by their current workflow state. - Goals for version 3.0: - Replace UI. - General goals: - Distribution: - Debian packages. Limited success so far using. - Downloads: - Transition from filetransfer package to django-downloadview. This task was started and the view common.generics.SingleObjectDownloadViewwas created. The document_signaturesapp is the first app to use it. - Notifications: - Add support for subscribing to a document’s events. - Add support for subscribing to a document type events. - Add support for subscribing specific events. - OCR: - Add image preprocessing for OCR. Increase effectiveness of Tesseract. - Improve interface with tesseract. - Fix pytesseract shortcomings via upstream patches or re-implement. Move to PyOCR. - Python 3: - Complete support for Python3. - Find replacement for pdfminer (Python3 support blocker). Use pdfminer.six (#257). - Simple serving: - Provide option to serve Mayan EDMS without a webserver (using Tornado o similar). Work started in branch: /feature/tornado - Source code: - Implement Developer certificate of origin: - Upload wizard: - Make wizard step configurable. Create WirzardStepclass so apps can add their own upload wizard steps, instead of the steps being hardcoded in the sources app. - Add upload wizard step to add the new documents to a folder. - Other - Use a sequence and not the document upload date to determine the document version sequence. MySQL doesn’t store milisecond value in dates and if several version are uploaded in a single second there is no way to know the order or which one is the latests. This is why the document version tests include a 2 second delay. Possible solution: - Include external app Mayan-EXIF into main code. - Convert all views from functions to class based views (CBV). - Increase test coverage. - Mock external services in tests. For example the django_GPGapp key search and receive tests. - Pluggable icon app. Make switching icon set easier. - Reduce dependency on binary executables for a default install. - Find replacement for cssmin& django-compressor. - Find replacement for python-gnupg. Unstable & inconsistent API. - Google docs integration. Upload document from Google Drive. - Get dumpdataand loaddataworking flawlessly. Will allow for easier backups, restores and database backend migrations. - Make more view asynchronous: - trash can emptying. - document delete view. - Add support for loading settings from environment variables, not just settings/local.py. - Add generic list ordering. django.views.generic.list.MultipleObjectMixin() now supports an orderingparameter. - Workaround GitLab CI MySQL test errors. GitLab MySQL’s container doesn’t support UTF-8 content. - Add support for downloading the OCR content as a text file. - Add support to convert any document to PDF. - Add support for combining documents. - Add support for splitting documents. - Add task viewer. - Add new document source to get documents from an URL. - Document overlay support. Such as watermarks. - Add support for metadata mapping files. CSV file containing filename to metadata values mapping, useful for bulk upload and migrations. - Add support for registering widgets to the home screen. - Merge mimetype and converter apps. - Add entry in About menu to check latest Mayan EDMS version via PyPI. - Add GPG key generation. - Add documentation section on editing the settings/local.py file. - Add documentation section with warning about using runserver. - Replace urlpatterns = patterns( '', with Python lists. Django recommendation for post 1.7. - If SourceColumn label is None take description from model. Avoid unnecessary translatable strings. - Metadata widgets (Date, time, timedate). - Datatime widget: - Separate Event class instances with a parent namespace class: EventNamespace. - Add events for document signing app (uploaded detached signateure, signed document, deleted signature) - A configurable conversion process. Being able to invoke different binaries for file conversion, as opposed to the current libreoffice only solution. - A tool in the admin interface to mass (re)convert the files (basically the page count function, but then applied on all documents). - Find solution so that documents in watched folders are not processed until they are ready. Use case scanning directly to scanned folders.
http://mayan.readthedocs.io/en/v2.1.4/topics/roadmap.html
2017-03-23T04:14:33
CC-MAIN-2017-13
1490218186774.43
[]
mayan.readthedocs.io
Legend Overview The AnyStock legend is somewhat alike the basic charts legend. You may use all its functions, enable or disable completely the same features. You can find some information about basic legend in Legend tutorial. The main difference you should remember is that the legend in AnyStock is bound to the plot, not to the chart itself. Let's explore the legend usage in AnyStocks and have a look at a couple of samples. As the AnyStock legend is quite similar to other charts' legend, we're going to consider the cases of differences or when we need to change something. Positioning We use the same methods for positioning the AnyStock Chart Legend as for the Basic Charts Legend. So, we use orientation() and align() methods to control legend's alignment. For more complicated settings, such as changing the items layout or space between items, we use itemsLayout() and itemsSpacing() accordingly. Let's create a vertically arranged legend. // making the legend vertical legend.itemsLayout('vertical'); // setting the space between the items legend.itemsSpacing(1); Title By default, a Stock chart legend title shows the date and time of the hovered point on a chart, or the date and time of the last point of the chart when the mouse is out of the chart and no point is hovered. It is placed on the left side of the legend, while the whole legend is put in a line; title separator is disabled by default. We can change it all using titleFormatter() method for changing the legend title, change its placement using some positioning methods (such as position(), itemsLayout()), disable the title by setting "false" to enable(), enable the title separator with titleSeparator() or add any of the events to make it interactive. // turn the title on and set the position legend.title(true); legend.title().orientation('top').align('left'); // format the title legend.titleFormatter(function(){ return "ACME Corp. Stock Prices" }); //enable the titleSeparator legend.titleSeparator(true); Items By default, the legend items show the name of the series with the value hovered on a stock, and the icon of the item is of square form and of the represented series' color. We can change the appearance of the items list using itemsFormatter() method. It affects the list of items, so we can rename the items, change their icons' appearance. Look at the sample below firstPlot.legend().itemsFormatter(function(){ return [ {text: "High", iconType: "circle", iconFill:"#558B2F"}, {text: "Low", iconFill:"#D84315"}, {text: "Close", iconType: "circle", iconFill:"#FF8F00"} ] }); When we've got the OHLC-series on our chart, we should use the itemsTextFormatter() method to display all OHLC values in the legend. In the sample below we check if the series we're formatting is of OHLC type (which is necessary if your chart has a number of series) and then define what to display. plot.legend().itemsTextFormatter(function(){ if (this.open !== undefined){ return "Open: " + this.open + " High: " + this.high+ " Low: " + this.low + " Close: " + this.close } }); One more thing should we take into account: if we've got too many data points and the data is approximated, then the legend will show the approximate value of the hovered group of points. To see the exact value of the point you should scroll the data to a non-approximated state. Visualization When we want to change something in the legend view, there's almost no difference in usage and editing between the basic chart legend and the AnyStock one. Let's add a background to our legend, change its size and icons and adjust the legend paginator. // add a background legend.background("#E1F5FE"); // change size in height legend.height(35); // adjust the paginator legend.paginator(true); legend.paginator().orientation("right"); // icons var item = series.legendItem(); // set stroke of icons item.iconStroke("#000") // set type of icon marker item.iconType("ohlc"); Custom Item When creating legend you can add your own items with any information you want to see on the legend, to do that use itemsFormatter() method. var legend = chart.legend(); // adjust legend items legend.itemsFormatter(function(items){ // push into items array items.push({ // set text of a new item text: "item text " }); // return items array return items; }); In the sample chart below we've used custom item that adds Total data to legend. You can also create a custom legend. It's being done the same way as with basic chart legends, so you can look for it up in the Basic Chart Legend article. Custom Legend The same as we create one legend to several series in basic charts, we can do with stocks. The only difference you'd better remember is that in stocks we operate with series on plots instead of charts. Look at this in the Basic Chart Legend article. // create custom legend var customLegend = anychart.ui.legend(); // set sources for legend items customLegend.itemsSource([plot_column, plot_line_ohlc]); customLegend.enabled(true); customLegend.hAlign("center"); customLegend.height(50); // redraw legend every time the first chart is redrawn chart.listen( "chartDraw", function (){ // define legend bounds var legendBounds = anychart.math.rect( 0, customLegend.getRemainingBounds().getHeight(), chart.container().width(), chart.container().height() ); // set bounds and draw legend customLegend.parentBounds(legendBounds).draw(); } ); // set the bottom margin chart.margin().bottom(chart.height() - customLegend.getRemainingBounds().getHeight()); // draw legend customLegend.container(stage).draw();
https://docs.anychart.com/Stock_Charts/Legend
2017-03-23T04:22:23
CC-MAIN-2017-13
1490218186774.43
[]
docs.anychart.com
Jenkins Integration Overview For pre-modeled Jenkins projects (for example, Maven, to fetch the source code from Git/SVN), you can integrate with CloudCenter using the CloudCenter Jenkins plugin. You do not need to manually copy this file, CloudCenter provides a download URL to make this plugin available to Jenkins users. Contact CloudCenter Support to obtain the download location. The CloudCenter Jenkins Plugin The CloudCenter Jenkins plugin provides complete integration between Jenkins and CloudCenter by allowing users to directly launch deployments on any Supported Cloud from a Jenkins server. Additionally, users can upgrade an existing deployment by specifying upgrade scripts for each tier. Prerequisites - If you are new to Jenkins, setup a maven project on Jenkins with Github as its source repository. See for additional details. - The supported Jenkins versions value must be >=1.624 to use the CloudCenter Jenkins plugin. - The required Java version for the Jenkins server must be Java 7. - You must set the default cloud settings in the Deployment Environments so the Jenkins Plugin can fetch and use those cloud settings when you deploy the application. - The Jenkins plugin for CloudCenter 4.6.x requires the APIs associated with CloudCenter 4.6.x (for example, the Jenkins plugin uses v2 CloudCenter APIs for job deployment and the v2 APIs are only available in CloudCenter 4.6.x). - To ensure a successful connection between the Jenkins server and the CCM, be sure to import the certificate from the CCM into the Jenkins trusted Java key store (see Certificate Authentication for additional context). Install the CloudCenter Jenkins Plugin To install the CloudCenter Jenkins plugin, follow this procedure: - Contact CloudCenter Support to obtain the download link for the CloudCenter Jenkins plugin. - Log into CCM using your admin credentials. - Generate the API Management (Access) Key for the Jenkins user. - Model the Application so this user can access artifacts from the Jenkins build server. - In Jenkins, go to Manage Jenkins > Manage Plugins > Advanced > Upload Plugin to upload and install the CloudCenter Jenkins Plugin. - After you install theCliQrJenkinsPlugin, go to your existing/new project to configure post-build step and fetch the source code from the Git/SVN using a Maven project into the Jenkins Build. Configure the CloudCenter Application Deployment Client in Jenkins for continuous integration from build system and deployment (new or upgrade on an existing node). The parameters for the CloudCenter Application Deployment Client page are listed in the following table: The jenkinsBuildId Macro In Update Scripts, $BUILD_ID or %jenkinsBuildId% can be passed as an argument to point the Binaries to be Copied during an update deployment. The %jenkinsBuildId% macro is not an applications-specific macro. This CloudCenter-defined macro applies to deployments that are launched using the Jenkins plugin. The jenkinsBuildId macro is mainly used to pass the Jenkins Build ID to the userenv of the app deployment. Any deployment triggered by the Jenkins plugin will automatically have jenkinsBuildId in the userenv and will be used to point to the right binaries changes the value that should be passed to existing deployments as userenv has old the jenkinsBuildId value during the deployment. The folder name that CloudCenter creates in the target location will now use the jenkinsBuildId value instead of the random timestamp value. Create a New Deployment on Every Build This option creates a Brand new deployment on every build. Update an Existing Deployment When you update an existing deployment: - It updates a previous deployment that is launched from the Jenkins plugin during the previous builds of the same project. - If the previous deployment job is still in progress and is not in the Running state, the plugin waits till it enters the running state and then triggers an update. - If the previous deployments ends in an error or if that deployment is stopped/cancelled from the CCM, the plugin launches a fresh deployment as part of update. - If it is the first build, this plugin creates a new deployment and for the next successful build it uses the existing deployment. - If you receive a security group error, be sure to verify if more than one system is accessing the same account. - No labels
http://docs.cliqr.com/display/CCD46/Jenkins
2017-03-23T04:24:41
CC-MAIN-2017-13
1490218186774.43
[]
docs.cliqr.com
Hiera: Migrating existing Hiera configurations to Hiera 5 Included in Puppet Enterprise 2017.1. If you’re already a Hiera user, you don’t have to migrate anything yet. Hiera 5 is fully backwards-compatible with Hiera 3, and we won’t remove any legacy features until Puppet 6. You can even start using some Hiera 5 features (like module data) without migrating anything. But there are major advantages to fully adopting Hiera 5: - A?” Since Hiera 5 uses the same built-in data formats as Hiera 3, you don’t need to do mass edits of any data files. When we say “migrate to Hiera 5,” we’re talking about updating configuration. Specifically, we’re talking about the following tasks: Enabling the environment layer takes the most work, and yields the biggest benefits. Focus on that first, then do the rest at your own pace. Should you migrate yet? Probably! But there are a few situations where you might want to delay upgrading. UPDATED: hiera-eyaml users: go for it. Custom backend users: maybe wait for updated backends You can keep using custom Hiera 3 backends with Hiera 5, but they’ll make migration more complex, because you can’t move legacy data to the environment layer until there’s a Hiera 5 backend for it. If an updated version of the backend is coming out soon, it might be more efficient to wait (or even help contribute to its development). If you’re using an off-the-shelf custom backend, check its website or contact its developer. If you developed your backend in-house, read the documentation about writing Hiera 5 backends — updating it might be easier than you think. Custom data_binding_terminus users: go ahead, but replace it with a Hiera 5 backend ASAP There’s a deprecated data_binding_terminus setting in puppet.conf, which changes the behavior of automatic class parameter lookup. It can be set to hiera (normal), none (deprecated; disables auto-lookup), or the name of a custom plugin. With a custom data_binding_terminus, automatic lookup results are radically different from function-based lookups for the same keys. If you’re one of the rare few who use this feature, you’ve already had to design your Puppet code to avoid that problem, so it’s probably safe to migrate your configuration to Hiera 5. But since we’ve deprecated that extension point, you’ll have to replace your custom terminus with a Hiera 5 backend before Puppet 6 rolls around. -.
https://docs.puppet.com/puppet/4.9/hiera_migrate.html
2017-03-23T04:23:18
CC-MAIN-2017-13
1490218186774.43
[]
docs.puppet.com
Puppet 3.x to 4.x: Get upgrade-ready Included in Puppet Enterprise 2016.4. all your Puppet components are running the latest Puppet 3 versions, checking and updating in the following order. Note: PuppetDB remains optional, and you can skip it if you don’t use it. - If you already use Puppet Server, update it across your infrastructure to the latest 1.1.x release. - If you’re still using Rack or WEBrick to run your Puppet master, this is the best time to switch to Puppet Server. Puppet Server is designed to be a better-performing drop-in replacement for Rack and WEBrick Puppet masters, which are deprecated as of Puppet 4.1. - This is a big change! Make sure you can successfully switch to Puppet Server 1.1.x before tackling the Puppet 4 upgrade. - Check out our overview of what sets Puppet Server apart from a Rack Puppet master. - Puppet Server uses 2GB of memory by default. Depending on your server’s specs, you might have to adjust how much memory you allocate to Puppet Server before you launch it. - If you run multiple Puppet masters with a single certificate authority, you’ll need to edit Puppet Server’s bootstrap.cfgto disable the CA service. You’ll also need to ensure you’re routing traffic to the appropriate node with a load balancer or the agents’ ca_serversetting. - Update all Puppet agents to the latest 3.8.x release. - If you use PuppetDB, update it to the latest 2.3.x release, then update the PuppetDB terminus plugins on your Puppet Server node to the same release. - Puppet Server 1.x and 2.x look for the PuppetDB termini in two different places. The 2.3.x puppetdb-terminuspackage installs the termini in both of them, so the server need to adjust them before you upgrade. If you’ve already set stringify_facts = false in puppet.conf on every node in your deployment, skip to the next section. Otherwise: - Check your Puppet code for any comparisons that treat boolean facts like strings, like if $::is_virtual == "true" {...}, and change them so they’ll work with true Boolean values. - If you need to support Puppet 3 and 4 with the same code, you can instead use something like if str2bool("$::is_virtual") {...}. - Next, set stringify_facts = falsein puppet.confon every node in your deployment. To have Puppet change this setting, use an inifileresource. - Watch the next set of Puppet runs for any problems with your code. - broken code The future parser in Puppet 3 is the current parser in Puppet 4. If you haven’t enabled the future parser yet, do so now and check for problems in your current Puppet code during the next Puppet run. To change the parser per-environment: - Create a test directory environment that duplicates your production environment. - Set parser = futurein the test environment’s environment.conf. - Run nodes in the test environment and confirm they still get good catalogs. - Based on the result, make any necessary adjustments to your Puppet code. - Once the environment is in good shape, set parser = futurein puppet.confon all Puppet master nodes to make the change global. Some of the changes to look out for include: - Changes to comparison operators, particularly - The inoperator ignoring case when comparing strings. - Incompatible data types no longer being comparable. - New rules for converting values to Boolean (for example, empty strings are now true). - Facts having additional data types. - Quoting required for octal numbers in fileresources’ modeattributes..
https://docs.puppet.com/puppet/4.7/upgrade_major_pre.html
2017-03-23T04:16:25
CC-MAIN-2017-13
1490218186774.43
[]
docs.puppet.com
The PureWeb® platform provides powerful tools for building interactive and collaborative applications that deliver a seamless user experience in real time on web browsers and mobile devices. For more information, see the pureweb.io web site. This class library reference will help you develop PureWeb service applications in C#/.Net. applications, create and stream images, work with views, commands and application state, and to use built-in features such as collaboration. The extended API provide a list of all the interfaces and classes of the C#/.Net API library. © 2010 - 2016 Calgary Scientific Inc. All rights reserved. This documentation shall not wholly or in part, in any form or by any means, electronic, mechanical, including photocopying, be reproduced or transmitted without the authorized, written consent of Calgary Scientific. Generated on Fri Sep 2 2016 04:17:51 for PureWeb C# SDK by 1.8.11
http://docs.pureweb.io/API4.2/DotNet/html/index.html
2017-03-23T04:13:41
CC-MAIN-2017-13
1490218186774.43
[]
docs.pureweb.io
JBUS Locked Up Problem: The JBUS is no longer being recognized or is not recording ECM Data(aka. Engine Data). Solution: - Turn the truck ignition off. - Log out of the DriverTech unit. - Press and hold the power button. . - Once the unit has gone to a black screen, wait 30 seconds then power it back on. - Log in with your Driver Code. - Touch the System Info button then the Diagnostics tab then the JBUS tab. - Turn on the truck Engine and wait 2 minutes. - You may see this message, -JBUS Data present, missing parameters -. This is an acceptable message. - You should see the RPM field changing, if you do, the unit is most likely working properly. - To confirm, run a JBUS audit. - If everything on the screen is properly reporting, the issue is resolved. Related articles
http://docs.drivertech.com/display/SU/JBUS+Locked+Up
2020-02-16T21:29:20
CC-MAIN-2020-10
1581875141430.58
[]
docs.drivertech.com
Algorithm API¶ There are several components to an HTM system: - encoding data into SDR using Encoders - passing encoded data through the Spatial Pooler - running Temporal Memory over. See the low-level APIs for Encoders and Algorithms.
http://nupic.docs.numenta.org/0.6.0/guide-algorithms.html
2020-02-16T23:20:00
CC-MAIN-2020-10
1581875141430.58
[]
nupic.docs.numenta.org
... come and get yer Service Factory! That: - Lay some links on ya. For downloading the Service Factory releases and more info. - Share my favorite new features of the Service Factory with you. - Give you a sense of what is to come in the near and not-so-near future. Links - Service Factory December release helps you build both WCF and ASMX services in C#. The old July release is no longer available since everything it contained is included in this release. Web Service Software Factory–December 2006 (ASP.NET and WCF services in C#) -. Web Service Software Factory–July 2006 (ASP.NET services in VB.NET) - Service Factory information. If you're not sure what the Service Factory is and want to find out before downloading it, go to to learn more about it. Favs. Futures: - Modifying a guidance package - There is already a how-to in the Service Factory documentation, but customers are always telling me they need more help. If you tell me what kind of changes you're making to guidance packages I will try to create an exercise that illustrates how to make that change. - Building a service agent -. - Versioning - this will build on the topic I mentioned earlier and will walk you through how to evolve a service in a number of ways. - Message validation - The reference implementation already illustrates this. But would you find a HOL exercise valuable? If not, cool, I'll spend that energy somewhere else. If so, also cool. - Exception shielding - The reference implementation also illustrates this. Same question ... - Create a code analysis rule - I think this is also covered in the VS documentation, but I've never looked for it. Are you going to be writing your own rules? If so, would you find a HOL exercise helpful? - Workflow Foundation - we actually did a lot of work with this, but had to pull it at the last minute (yeah, sometimes cutting scope at the last minute hurts). We might be in a position to create an exercise as a result of what we already have. We'll see.!
https://docs.microsoft.com/en-us/archive/blogs/donsmith/come-and-get-yer-service-factory
2020-02-16T23:22:12
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
When you go to Admin Panel ▸ Appearance ▸ Customize▸Webmasters Tools, there you will see various options available to customize your site. Just try those options one by one you will able to understand how the theme works. You can add scripts before wp_head and after wp_footer. Options for verification of site.
https://docs.themeinwp.com/docs/infinity-news-pro/customizer-options-and-setups/webmasters-tools/
2020-02-16T21:25:20
CC-MAIN-2020-10
1581875141430.58
[array(['https://docs.themeinwp.com/wp-content/uploads/2019/07/master.png', None], dtype=object) array(['https://docs.themeinwp.com/wp-content/uploads/2019/07/verification.png', None], dtype=object) ]
docs.themeinwp.com
Reduce the image size after cloning a physical PC¶ Things to keep in mind when we have a cloned image of Windows from a physical PC. Note During these tasks, be aware that it affects the performance of the server. Avoid making them on a Ravada server in production. Check the image format¶ In the following case you can see that it’s RAW format. Although the extension of the file is qcow2 this obviously does not affect. qemu-img info Win7.qcow2 image: Win7.qcow2 file format: raw virtual size: 90G (96636764160 bytes) disk size: 90G STEPS TO FOLLOW¶ - Convert from RAW (binary) to QCOW2: qemu-img convert -p -f raw Win7.qcow2 -O qcow2 Win7-QCOW2.qcow2 Now verify that the image format is QCOW2, and it’s 26GB smaller. qemu-img info Win7-QCOW2.qcow2 image: Win7-QCOW2.qcow2 file format: qcow2 virtual size: 90G (96636764160 bytes) disk size: 64G cluster_size: 65536 Format specific information: compat: 1.1 lazy refcounts: false refcount bits: 16 corrupt: false 2. The virt-sparsify command-line tool can be used to make a virtual machine disk (or any disk image) sparse. This is also known as thin-provisioning. Free disk space on the disk image is converted to free space on the host. virt-sparsify -v Win7-QCOW2.qcow2 Win7-QCOW2-sparsi.qcow2 Note The virtual machine must be shutdown before using virt-sparsify. In a worst case scenario, virt-sparsify may require up to twice the virtual size of the source disk image. One for the temporary copy and one for the destination image. If you use the –in-place option, large amounts of temporary space are not needed. Disk size now is 60G, below you can see that reduce image size in 30GB. qemu-img info Win7-QCOW2-sparsi.qcow2 image: Win7-QCOW2-sparsi.qcow2 file format: qcow2 virtual size: 90G (96636764160 bytes) disk size: 60G cluster_size: 65536 Format specific information: compat: 1.1 lazy refcounts: false refcount bits: 16 corrupt: false Now it is advisable to let Windows do a CHKDSK, do not interrupt it. Finally, you need to install the Spice guest-tools. This improves features of the VM, such as the screen settings, it adjusts automatically, etc.
https://ravada.readthedocs.io/en/latest/docs/reduce-size-image.html
2020-02-16T23:03:20
CC-MAIN-2020-10
1581875141430.58
[]
ravada.readthedocs.io
—–BEGIN PGP SIGNED MESSAGE—– Hash: SHA256 Donations¶ Donations to the Tahoe-LAFS project are welcome, and can be made to the following Bitcoin address: 1PxiFvW1jyLM5T6Q1YhpkCLxUh3Fw8saF3 The funds currently available to the project are visible through the blockchain explorer: Governance¶ The Tahoe-LAFS Software Foundation manages these funds. Our intention is to use them for operational expenses (website hosting, test infrastructure, EC2 instance rental, and SSL certificates). Future uses might include developer summit expenses, bug bounties, contract services (e.g. graphic design for the web site, professional security review of codebases, development of features outside the core competencies of the main developers), and student sponsorships. The Foundation currently consists of secorp (Peter Secor), warner (Brian Warner), and zooko (Zooko Wilcox). Transparent Accounting¶ Our current plan is to leave all funds in the main 1Pxi key until they are spent. For each declared budget item, we will allocate a new public key, and transfer funds to that specific key before distributing them to the ultimate recipient. All expenditures can thus be tracked on the blockchain. Some day, we might choose to move the funds into a more sophisticated type of key (e.g. a 2-of-3 multisig address). If/when that happens, we will publish the new donation address, and transfer all funds to it. We will continue the plan of keeping all funds in the (new) primary donation address until they are spent. Expenditure Addresses¶ This lists the public key used for each declared budget item. The individual payments will be recorded in a separate file (see docs/expenses.rst), which is not signed. All transactions from the main 1Pxi key should be to some key on this list. - Initial testing (warner) 1387fFG7Jg1iwCzfmQ34FwUva7RnC6ZHYG one-time 0.01 BTC deposit+withdrawal - tahoe-lafs.org DNS registration (paid by warner) 1552pt6wpudVCRcJaU14T7tAk8grpUza4D ~$15/yr for DNS - tahoe-lafs.org SSL certificates (paid by warner) $0-$50/yr, ending 2015 (when we switched to LetsEncrypt) 1EkT8yLvQhnjnLpJ6bNFCfAHJrM9yDjsqa - website/dev-server hosting (on Linode, paid by secorp) ~$20-$25/mo, 2007-present) - 2016 Tahoe Summit expenses: venue rental, team dinners (paid by warner) ~$1020 1DskmM8uCvmvTKjPbeDgfmVsGifZCmxouG Historical Donation Addresses¶ The Tahoe project has had a couple of different donation addresses over the years, managed by different people. All of these funds have been (or will be) transferred to the current primary donation address (1Pxi). - 13GrdS9aLXoEbcptBLQi7ffTsVsPR7ubWE (21-Aug-2010 - 23-Aug-2010) Managed by secorp, total receipts: 17 BTC - 19jzBxijUeLvcMVpUYXcRr5kGG3ThWgx4P (23-Aug-2010 - 29-Jan-2013) Managed by secorp, total receipts: 358.520276 BTC - 14WTbezUqWSD3gLhmXjHD66jVg7CwqkgMc (24-May-2013 - 21-Mar-2016) Managed by luckyredhot, total receipts: 3.97784278 BTC stored in 19jXek4HRL54JrEwNPyEzitPnkew8XPkd8 - 1PxiFvW1jyLM5T6Q1YhpkCLxUh3Fw8saF3 (21-Mar-2016 - present) Managed by warner, backups with others Validation¶ This document is signed by the Tahoe-LAFS Release-Signing Key (GPG keyid 2048R/68666A7A, fingerprint E34E 62D0 6D0E 69CF CA41 79FF BDE0 D31D 6866 6A7A). It is also committed to the Tahoe source tree () as docs/donations.rst. Both actions require access to secrets held closely by Tahoe developers. signed: Brian Warner, 10-Nov-2016 —–BEGIN PGP SIGNATURE—– Version: GnuPG v2 iQEcBAEBCAAGBQJYJVQBAAoJEL3g0x1oZmp6/8gIAJ5N2jLRQgpfIQTbVvhpnnOc MGV/kTN5yiN88laX91BPiX8HoAYrBcrzVH/If/2qGkQOGt8RW/91XJC++85JopzN Gw8uoyhxFB2b4+Yw2WLBSFKx58CyNoq47ZSwLUpard7P/qNrN+Szb26X0jDLo+7V XL6kXphL82b775xbFxW6afSNSjFJzdbozU+imTqxCu+WqIRW8iD2vjQxx6T6SSrA q0aLSlZpmD2mHGG3C3K2yYnX7C0BoGR9j4HAN9HbXtTKdVxq98YZOh11jmU1RVV/ nTncD4E1CMrv/QqmktjXw/2shiGihYX+3ZqTO5BAZerORn0MkxPOIvESSVUhHVw= =Oj0C —–END PGP SIGNATURE—–
https://tahoe-lafs.readthedocs.io/en/latest/donations.html
2020-02-16T21:50:15
CC-MAIN-2020-10
1581875141430.58
[]
tahoe-lafs.readthedocs.io
Table of Contents Product Index The Seven Deadly Sins, or Capital Vices have been an inspiration for art for years. These ladies however put a new Avant Garde twist on a timeless theme. The sin of Lust is extreme longing or desire. Lust is often shown as pink, as such, this characters makeup options are only in a pink spectrum. Avarice, or greed, is a sin of desire. Unlike the desires of some of the other sins, Avarice is the need of materialistic and tangible items. Avarice is often shown as gold, as such, this characters makeup options are only in a gold spectrum. Gluttony is the act of habitual greed and excessive eating. Gluttony has been depicted in many ways over the years, but we have shown her makeups in a rainbow spectrum..
http://docs.daz3d.com/doku.php/public/read_me/index/40941/start
2020-02-16T23:24:30
CC-MAIN-2020-10
1581875141430.58
[]
docs.daz3d.com
Creating a Blog Page - Step 1 – To create a new blog page go to your wordpress admin >> Pages and click Add New. - Step 2 – Find the Page Attributes box (usually on the right), locate the template select box and choose “Blog”. - Step 3 – Save or publish the page by clicking the appropriate button in the top right of your screen. - Step 4 – Under the visual editor you should see a Blog Options box. There you can set the options for your blog page.
http://docs.kadencethemes.com/ascend-premium/templates/blog-page-template/
2020-02-16T21:51:16
CC-MAIN-2020-10
1581875141430.58
[array(['http://docs.kadencethemes.com/ascend-premium/wp-content/uploads/2016/03/Blog-Options-min.jpg', None], dtype=object) ]
docs.kadencethemes.com
When accessing the Add images section, you will be able to drag & drop images from your computer. The name of the images that you have dragged will show up, you can remove one or several of those images and you can finally click on Start Upload. After clicking on Start upload, you need to wait for the progress bar to start with the Initializing step. You should not leave or reload the page. That step corresponds to the effective upload of the images from your local computer to the remote servers. After a while, the progress bar will show up and you should see a screen like the one below. This tells you that your images are being added to the platform. After adding images via drag & drop, you can find those images in the Not tagged yet tab. In that tab, without any other sorting, the images are added chronologically. However, as there are several worker for the add images operation, the order of the images will not be exactly the same as the order in which you dragged the files. If you add images by batch though, the order of the batch will be respected.
https://docs.deepomatic.com/studio/adding-images/drag-and-drop
2020-02-16T23:23:34
CC-MAIN-2020-10
1581875141430.58
[]
docs.deepomatic.com
Mobile Devices SDK Quick Poll Do you develop applications for Pocket PCs or Smartphones? If so, and you have five minutes, I would appreciate it if you check out the short (anonymous) poll we have running. Just click here and answer a few questions, and miraculously watch your mobile devices SDK documentation improve before your eyes :-) Ok, it might take a few months for the improvements to trickle through, but seriously, this is your chance to influence the documentation you use on a daily basis. That's got to be worth five minutes of your time?
https://docs.microsoft.com/en-us/archive/blogs/johnkenn/mobile-devices-sdk-quick-poll
2020-02-16T23:18:23
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
Caching The Cache API The SS_Cache class provides a bunch of static functions wrapping the Zend_Cache system in something a little more easy to use with the SilverStripe config system. A Zend_Cache has both a frontend (determines how to get the value to cache, and how to serialize it for storage) and a backend (handles the actual storage). Rather than require library code to specify the backend directly, cache consumers provide a name for the cache backend they want. The end developer can then specify which backend to use for each name in their project's configuration. They can also use 'all' to provide a backend for all named caches. End developers provide a set of named backends, then pick the specific backend for each named cache. There is a default File cache set up as the 'default' named backend, which is assigned to 'all' named caches. Using Caches Caches can be created and retrieved through the SS_Cache::factory() method. The returned object is of type Zend_Cache. // foo is any name (try to be specific), and is used to get configuration // & storage info $cache = SS_Cache::factory('foo'); if (!($result = $cache->load($cachekey))) { $result = caluate some how; $cache->save($result, $cachekey); } return $result; backends clear out entries based on age and maximum allocated storage. If you include the version of the object in the cache key, even object changes don't need any invalidation. You can force disable the cache though, e.g. in development mode. // Disables all caches SS_Cache::set_cache_lifetime('any', -1, 100); Keep in mind that Zend_Cache::CLEANING_MODE_ALL deletes all cache entries across all caches, not just for the 'foo' cache in the example below. $cache = SS_Cache::factory('foo'); $cache->clean(Zend_Cache::CLEANING_MODE_ALL); $cache = SS_Cache::factory('foo'); $cache->remove($cachekey); it often pays to increase the lifetime of caches ("TTL"). It defaults to 10 minutes (600s) in SilverStripe, which can be quite short depending on how often your data changes. Keep in mind that data expiry should primarily be handled by your cache key, e.g. by including the LastEdited value when caching DataObject results. // set all caches to 3 hours SS_Cache::set_cache_lifetime('any', 60*60*3); SS_Cache segments caches based on the versioned reading mode. This prevents developers from caching draft data and then accidentally exposing it on the live stage without potentially required authorisation checks. This segmentation is automatic for all caches generated using SS_Cache::factory method. Data that is not content sensitive can be cached across stages by simply opting out of the segmented cache with the disable-segmentation argument. ## Alternative Cache Backends By default, SilverStripe uses a file-based caching backend. Together with a file stat cache like [APC]() this is reasonably quick, but still requires access to slow disk I/O. The `Zend_Cache` API supports various caching backends ([list]()) which can provide better performance, including APC, Xcache, ZendServer, Memcached and SQLite. ## Cleaning caches on flush=1 requests If `?flush=1` is requested in the URL, e.g., this will trigger a call to `flush()` on any classes that implement the `Flushable` interface. Using this, you can trigger your caches to clean. See [reference documentation on Flushable](/developer_guides/execution_pipeline/flushable/) for implementation details. ### Memcached This backends stores cache records into a [memcached]() server. memcached is a high-performance, distributed memory object caching system. To use this backend, you need a memcached daemon and the memcache PECL extension. :::php // _config.php SS_Cache::add_backend( 'primary_memcached', 'Memcached', array( 'servers' => array( 'host' => 'localhost', 'port' => 11211, 'persistent' => true, 'weight' => 1, 'timeout' => 5, 'retry_interval' => 15, 'status' => true, 'failure_callback' => null ) ) ); SS_Cache::pick_backend('primary_memcached', 'any', 10); If your Memcached instance is using a local Unix socket instead of a network port: :::php // _config.php SS_Cache::add_backend( 'primary_memcached', 'Memcached', array( 'servers' => array( 'host' => 'unix:///path/to/memcached.socket', 'port' => 0, 'persistent' => true, 'weight' => 1, 'timeout' => 5, 'retry_interval' => 15, 'status' => true, 'failure_callback' => null ) ) ); SS_Cache::pick_backend('primary_memcached', 'any', 10); This backends stores cache records in shared memory through the [APC]() (Alternative PHP Cache) extension (which is of course need for using this backend). ```php SS_Cache::add_backend('primary_apc', 'APC'); SS_Cache::pick_backend('primary_apc', 'any', 10); This backend is an hybrid one. It stores cache records in two other backends: a fast one (but limited) like Apc, Memcache... and a "slow" one like File or Sqlite.
https://docs.silverstripe.org/en/3/developer_guides/performance/caching/
2020-02-16T21:56:45
CC-MAIN-2020-10
1581875141430.58
[]
docs.silverstripe.org
WSO2 Elastic Load Balancer is currently retired. Tenant partitioning is required in a clustered deployment to be able to scale to large numbers of tenants. There can be multiple clusters for a single service and each cluster would have a subset of tenants as illustrated in the diagram below. In such situations, the load balancers need to be tenant aware in order to route the requests to the required tenant clusters. They also need to be service aware since it is the service clusters which are partitioned according to the clients. The following example further illustrates how this is achieved in WSO2 Elastic Load Balancer (ELB). A request sent to a load balancer has the following host header to identify the cluster domain:. wso2.com/carbon.as1.domain/carbon/admin/login.jsp In this URL: appserver.cloud-test.wso2.comis the service domain which allows the load balancer to identify the service. carbon.as1.domain.comis the tenant domain which allows the load balancer to identify the tenant. Services are configured with their cluster domains and tenant ranges in the in ELB_HOME/repository/conf/ file. These cluster domains and tenant ranges are picked by the load balancer when it loads. loadbalancer.conf The following is a sample configuration of the loadbalancer.conf file. appserver { # multiple hosts should be separated by a comma. hosts appserver.cloud-test.wso2.com; domains { carbon.as1.domain { tenant_range 1-100; } carbon.as2.domain { tenant_range 101-200; } } } In the above configuration, there is a host address which maps to the application server service. If required, you can enter multiple host addresses separated by commas. There are two cluster domains defined in the configuration. The cluster domain named carbon.as1.domain is used to load the range of tenants with IDs 1-100. The other cluster domain named carbon.as2.domain is used to load the tenants with IDs 101-200. If the tenant ID of abc.com is 22, the request will be directed to the Carbon.AS1.domain cluster.
https://docs.wso2.com/display/BRS220/Tenant-aware+Load+Balancing+Using+the+WSO2+Elastic+Load+Balancer
2020-02-16T22:09:27
CC-MAIN-2020-10
1581875141430.58
[]
docs.wso2.com
Fail2ban Fail2Ban is a service that automatically blocks failed connection attempts to services running on a node. The remote IP addresses are then usually blacklisted for up to 10 minutes before being allowed again. Commands - devops fail2ban start - Start Fail2ban if stopped - devops fail2ban stop - Stop Fail2ban if started - devops fail2ban reload - Reload Fail2ban, reload the configuration and perform a graceful restart - devops fail2ban restart - Restart Fail2ban, reload the configuration (but kills existing connection) - devops fail2ban ip ban - Ban an IP address to a service (known to fail2ban) Options - devops fail2ban ip unban - Lift a ban from an IP address to a service (known to fail2ban) Options Configuration - configuration: - fail2ban: - ignoreip string - Space separated list of IP addresses, CIDR masks or DNS hosts for which Fail2ban will never block connection attempts.
http://docs.devo.ps/services/fail2ban/
2020-02-16T22:22:14
CC-MAIN-2020-10
1581875141430.58
[]
docs.devo.ps
Sales forecasting is key to growth. Flexie allows you to predict revenues based on the value of each deal, and the probability of a certain deal being won. Leave guesswork out of the equation; a precise quantitative model goes a long way. Predict with accuracy the number of deals you may close next week, or next month. With Flexie, you can predict future revenue based on your deals pipeline activity. Deals forecasting is the process of estimating the number of deals you will close within a given time frame. Each deal has a certain amount attached to it, as well as a closing date. By creating custom reports, you can get huge insight into your deals. Reports will be visible in your dashboard, which is the first thing you see when you log on to Flexie CRM. Since each deal has a defined amount and a closing date, you will be able to make an accurate prediction of future revenues through the impeccable custom reports. Deals forecasting not only helps you gain insight into the expected revenues for next week, month or quarter, but it also enables you to see patterns that could have otherwise gone unnoticed. The more you know about your deals value and their expected closing date, the better business decisions you can make. The more accurate your deals forecast is, the better for your company’s effectiveness and productivity. Deals forecasting has never been easier. Close more deals, increase productivity and take your business to another whole new level. To stay updated with the latest features, news and how-to articles and videos, please join our group on Facebook, Flexie CRM Academy and subscribe to our YouTube channel Flexie CRM.
https://docs.flexie.io/docs/marketing-trends/deals-forecast/
2020-02-16T23:42:31
CC-MAIN-2020-10
1581875141430.58
[array(['https://flexie.io/wp-content/uploads/2017/07/Flexie-CRM-8.png', 'Deals value forecast'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/07/deals-forecast.png', 'Deals value forecast'], dtype=object) ]
docs.flexie.io
panda3d.core.Triangulator3¶ - class Triangulator3¶ Bases: Triangulator This is an extension of Triangulator to handle polygons with three- dimensional points. It assumes all of the points lie in a single plane, and internally projects the supplied points into 2-D for passing to the underlying Triangulator object. Inheritance diagram __init__(param0: Triangulator3) → None clear() → None¶ Removes all vertices and polygon specifications from the Triangulator, and prepares it to start over. addVertex(point: LPoint3d) → int¶ Adds a new vertex to the vertex pool. Returns the vertex index number. addVertex(x: float, y: float, z: float) → int Adds a new vertex to the vertex pool. Returns the vertex index number. getNumVertices() → int¶ Returns the number of vertices in the pool. Note that the Triangulator might append new vertices, in addition to those added by the user, if any of the polygon is self-intersecting, or if any of the holes intersect some part of the polygon edges. triangulate() → None¶ Does the work of triangulating the specified polygon. After this call, you may retrieve the new triangles one at a time by iterating through get_triangle_v0/1/2(). getPlane() → LPlaned¶ Returns the plane of the polygon. This is only available after calling triangulate(). - Return type - - property plane¶ Returns the plane of the polygon. This is only available after calling triangulate(). - Return type -
https://docs.panda3d.org/1.10/cpp/reference/panda3d.core.Triangulator3
2020-02-16T21:41:10
CC-MAIN-2020-10
1581875141430.58
[]
docs.panda3d.org
Description This section describes how to create an enumeration in multiple languages. Instructions If it does not exist yet, create the enumeration and give it a name. If you do not know how to add documents to your project, please refer to this article. Open the enumeration by double-clicking on it in the Project Explorer. You can add values to the enumeration by pressing the ‘New’ button. In the menu that appears, you can enter the caption and name of the enumeration value. The caption is the text that will be displayed in the application, whereas the name is the property by which the enumeration value is identified by the client. Finally you can add a picture for the enumeration value, which can be used for display in data grids. Once you have added all enumeration values, you can close the enumeration menu by pressing the ‘OK’ button. You have now created the enumeration with its values in the default language of your project. If the second language has not been added to the project yet, do so know. If you do now know how to do this, refer to this article. Switch to the new language using the drop down menu in the toolbar. If you now open the enumeration again, you will notice that the captions from the default language are displayed in the ‘ Press the ‘Edit’ button while highlighting the caption that needs to be changed, or double-click on it. In the edit menu that appears, you can now replace the caption from the default language with a translated caption in the new language. Do this for all entries; whenever you switch to the second language, these new captions will now be used. Note that when switching languages, only captions are changed, whereas the names for enumeration values remain the same.
https://docs.mendix.com/howto40/create-a-multi-language-enumeration
2017-10-17T01:52:31
CC-MAIN-2017-43
1508187820556.7
[array(['attachments/2621550/2752600.png', None], dtype=object) array(['attachments/2621550/2752599.png', None], dtype=object) array(['attachments/2621550/2752598.png', None], dtype=object) array(['attachments/2621550/2752601.png', None], dtype=object) array(['attachments/2621550/2752594.png', None], dtype=object) array(['attachments/2621550/2752605.png', None], dtype=object) array(['attachments/2621550/2752604.png', None], dtype=object)]
docs.mendix.com
Calendar of Events Dr. John and Dr. Juliana will review good ergonomics and show you how to sit properly at your desk, explaining the benefits of Effortless Neutral Posture and how using the adjustable LOGICBACK Custom Posture Support can help you! The way you walk could be misaligning your spine. Dr. John and Dr. Juliana will review how your feet, knees, hips can be affecting your spine with a Computerized gait analysis. Call us today to book your complimentary gait analysis or ergonomic consultation! 905-886-9778
http://chiro-docs.com/index.php?p=415587
2017-10-17T01:50:23
CC-MAIN-2017-43
1508187820556.7
[]
chiro-docs.com
Build with DOM Elements View and DOM A View is made up of one or multiple DOM elements. In other words, a view utilizes HTML 5 and CSS 3 to display the content to meet the requirement. In additions, the view interacts with the user by listening the DOM event sent by the DOM elements. Though rarely, you can draw the content in the fully customized way with CanvasElementtoo. The structure of the associated DOM elements are usually simple, such View, TextView and ScrollView, while some can be complex, such as Switch. It all depends on the requirement and whether HTML + CSS matches what you need. In additions, the top element can be found by use of View.node. Here is an example. As shown, ScrollView is made of two DIV elements, and the DOM elements of the child views are placed in the inner DIV element. On the other hand, TextBox is made of one INPUT element, and Button made of one BUTTON element. To interact with the user, ScrollView will listen the touch and mouse event for handling the scrolling. ScrollViewdoesn't listen the DOM event directly. Rather, it leverages the gesture called Scrollerfor easy implementation. The view is a thin layer on top of View.node. Many of View API is a proxy of the underlying View.node, such as View.id and View.style. Here is a list of differences. - Two CSS classes are always assigned: v-and v-clsnm(where clsnmis the view's class name View.className). - The v-CSS class has two important CSS rules that you shall not change: box-sizing: border-box;and position: absolute; - The v-clsnmis used to customize the look of a given type of views. - The position and dimension of a view (i.e., left, top, with and height) can only be measured in pixels. It doesn't support emand other units of measurement.
http://docs.rikulo.org/ui/latest/View_Development/Build_with_DOM_Elements/
2017-10-17T01:58:54
CC-MAIN-2017-43
1508187820556.7
[array(['scrollViewVsDOM.jpg?raw=true', 'Scroll view and associated DOM'], dtype=object) ]
docs.rikulo.org
You are viewing documentation for version 3 of the AWS SDK for Ruby. Version 2 documentation can be found here. Class: Aws::Inspector::Types::ResourceGroupTag - Defined in: - gems/aws-sdk-inspector/lib/aws-sdk-inspector/types.rb Overview Note: When making an API call, you may pass ResourceGroupTag data as a hash: { key: "TagKey", # required value: "TagValue", } This data type is used as one of the elements of the ResourceGroup data type. Instance Attribute Summary collapse - #key ⇒ String A tag key. - #value ⇒ String The value assigned to a tag key. Instance Attribute Details #key ⇒ String A tag key. #value ⇒ String The value assigned to a tag key.
http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/Inspector/Types/ResourceGroupTag.html
2017-10-17T02:15:59
CC-MAIN-2017-43
1508187820556.7
[]
docs.aws.amazon.com
This sample demonstrates how to implement an application that uses WS-Security with X.509v3 certificate authentication for the client and requires server authentication using the server's X.509v3 certificate over MSMQ. Message security is sometimes more desirable to ensure that the messages in the MSMQ store stay encrypted and the application can perform its own authentication of the message. This sample is based on the Transacted MSMQ Binding sample. The messages are encrypted and signed. To set up, 2012. Expand the Features tab. Right-click Private Message Queues, and select New, Private Queue. Check the Transactional box. Enter ServiceModelSamplesTransactedas the name of the new queue. To build the C# or Visual Basic .NET edition of the solution, follow the instructions in Building the Windows Communication Foundation Samples. To run the sample on the same computer Ensure that the path includes the folder that contains Makecert.exe and FindPrivateKey.exe. Run Setup.bat from the sample install folder. This installs all the certificates required for running the sample. Note Ensure that you remove the certificates by running Cleanup.bat when you have finished with the sample. Other security samples use the same certificates. Launch Service.exe from \service\bin. Launch Client.exe from \client\bin. Client activity is displayed on the client console application. If the client and service are not able to communicate, see Troubleshooting Tips. To run the sample across computers Copy the Setup.bat, Cleanup.bat, and ImportClientCert.bat files to the service computer. Create a directory on the client computer for the client binaries. Copy the client program files to the client directory on the client computer. Also copy the Setup.bat, Cleanup.bat, and ImportServiceCert.bat files to the client. On the server, run setup.bat service. Running setup.batwith the serviceargument creates a service certificate with the fully-qualified domain name of the computer and exports the service certificate to a file named Service.cer. Edit service's service.exe.config to reflect the new certificate name (in the findValueattribute in the <serviceCertificate>) which is the same as the fully-qualified domain name of the computer. Copy the Service.cer file from the service directory to the client directory on the client computer. On the client, run setup.bat client.. You must also change the certificate name of the service to be the same as the fully-qualified domain name of the service computer (in the findValueattribute in the defaultCertificateelement of serviceCertificateunder clientCredentials). service computer, launch Service.exe from the command prompt. On the client computer, launch Client.exe from the command prompt. If the client and service are not able to communicate, see Troubleshooting Tips. To clean up after the sample Run Cleanup.bat in the samples folder once you have finished running the sample.. Requirements This sample requires that MSMQ is installed and running. Demonstrates The client encrypts the message using the public key of the service and signs the message using its own certificate. The service reading the message from the queue authenticates the client certificate with the certificate in its trusted people store. It then decrypts the message and dispatches the message to the service operation. Because the Windows Communication Foundation (WCF) message is carried as a payload in the body of the MSMQ message, the body remains encrypted in the MSMQ store. This secures the message from unwanted disclosure of the message. Note that MSMQ itself is not aware whether the message it is carrying is encrypted. The sample demonstrates how mutual authentication at the message level can be used with MSMQ. The certificates are exchanged out-of-band. This is always the case with queued application because the service and the client do not have to be up and running at the same time. Description The sample client and service code are the same as the Transacted MSMQ Binding sample with one difference. The operation contract is annotated with protection level, which suggests that the message must be signed and encrypted. // Define a service contract. [ServiceContract(Namespace="")] public interface IOrderProcessor { [OperationContract(IsOneWay = true, ProtectionLevel=ProtectionLevel.EncryptAndSign)] void SubmitPurchaseOrder(PurchaseOrder po); } To ensure that the message is secured using the required token to identify the service and client, the App.config contains credential information. The client configuration specifies the service certificate to authenticate the service. It uses its LocalMachine store as the trusted store to rely on the validity of the service. It also specifies the client certificate that is attached with the message for service authentication of the client. <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <client> <!-- Define NetMsmqEndpoint --> <endpoint address="net.msmq://localhost/private/ServiceModelSamplesMessageSecurity" binding="netMsmqBinding" bindingConfiguration="messageSecurityBinding" contract="Microsoft.ServiceModel.Samples.IOrderProcessor" behaviorConfiguration="ClientCertificateBehavior" /> </client> <bindings> <netMsmqBinding> <binding name="messageSecurityBinding"> <security mode="Message"> <message clientCredentialType="Certificate"/> </security> </binding> </netMsmqBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="ClientCertificateBehavior"> <!-- The clientCredentials behavior allows one to define a certificate to present to a service. A certificate is used by a client to authenticate itself to the service and provide message integrity. This configuration references the "client.com" certificate installed during the setup instructions. --> <clientCredentials> <clientCertificate findValue="client.com" storeLocation="CurrentUser" storeName="My" x509FindType="FindBySubjectName" /> <serviceCertificate> <defaultCertificate findValue="localhost" storeLocation="CurrentUser"> </configuration> Note that the security mode is set to Message and the ClientCredentialType is set to Certificate. The service configuration includes a service behavior that specifies the service's credentials that are used when the client authenticates the service. The server certificate subject name is specified in the findValue attribute in the <serviceCredentials>. <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <!-- Use appSetting to configure MSMQ queue name. --> <add key="queueName" value=".\private$\ServiceModelSamplesMessageSecurity" /> </appSettings> <system.serviceModel> <services> <service name="Microsoft.ServiceModel.Samples.OrderProcessorService" behaviorConfiguration="PurchaseOrderServiceBehavior"> <host> <baseAddresses> <add baseAddress=""/> </baseAddresses> </host> <!-- Define NetMsmqEndpoint --> <endpoint address="net.msmq://localhost/private/ServiceModelSamplesMessageSecurity" binding="netMsmqBinding" bindingConfiguration="messageSecurityBinding" contract="Microsoft.ServiceModel.Samples.IOrderProcessor" /> <!-- The mex endpoint is exposed at. --> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <bindings> <netMsmqBinding> <binding name="messageSecurityBinding"> <security mode="Message"> <message clientCredentialType="Certificate" /> </security> </binding> </netMsmqBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="PurchaseOrderServiceBehavior"> <serviceMetadata httpGetEnabled="True"/> <!-- The serviceCredentials behavior allows one" /> <clientCertificate> <certificate findValue="client.com" storeLocation="LocalMachine"" /> </clientCertificate> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> The sample demonstrates controlling authentication using configuration, and how to obtain the caller’s identity from the security context, as shown in the following sample code: // Service class which implements the service contract. // Added code to write output to the console window. public class OrderProcessorService : IOrderProcessor { private string GetCallerIdentity() { // The client certificate is not mapped to a Windows identity by default. // ServiceSecurityContext.PrimaryIdentity is populated based on the information // in the certificate that the client used to authenticate itself to the service. return ServiceSecurityContext.Current.PrimaryIdentity.Name; } [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)] public void SubmitPurchaseOrder(PurchaseOrder po) { Console.WriteLine("Client's Identity {0} ", GetCallerIdentity()); Orders.Add(po); Console.WriteLine("Processing {0} ", po); } //… } When run, the service code displays the client identification. The following is a sample output from the service code: The service is ready. Press <ENTER> to terminate service. Client's Identity CN=client.com; ECA6629A3C695D01832D77EEE836E04891DE9D3C Processing Purchase Order: 6536e097-da96-4773-9da3-77bab4345b5d Customer: somecustomer.com OrderDetails Order LineItem: 54 of Blue Widget @unit price: $29.99 Order LineItem: 890 of Red Widget @unit price: $45.89 Total cost of this order: $42461.56 Order status: Pending Creating the client certificate. The following line in the batch file creates the client certificate. The client name specified is used in the subject name of the certificate created. The certificate is stored in Mystore at the CurrentUserstore location. echo ************ echo making client cert echo ************ makecert.exe -sr CurrentUser -ss MY -a sha1 -n CN=%CLIENT_NAME% -sky exchange -pe Installing the client certificate into server’s trusted certificate store. The following line in the batch file copies the client certificate into the server's TrustedPeople store so that the server can make the relevant trust or no-trust decisions. For a certificate installed in the TrustedPeople store to be trusted by a Windows Communication Foundation (WCF) service, the client certificate validation mode must be set to PeerOrChainTrustor PeerTrustvalue. See the previous service configuration sample to learn how this can be done using a configuration file. echo ************ echo copying client cert to server's LocalMachine store echo ************ certmgr.exe -add -r CurrentUser -s My -c -n %CLIENT_NAME% -r LocalMachine -s TrustedPeople Creating the server certificate. The following lines from the Setup.bat batch file create the server certificate to be used: echo ************ echo Server cert setup starting echo %SERVER_NAME% echo ************ echo making server cert echo ************ makecert.exe -sr LocalMachine -ss MY -a sha1 -n CN=%SERVER_NAME% -sky exchange -pe The %SERVER_NAME% variable specifies the server name. The certificate is stored in the LocalMachine store. If the setup batch file is run with an argument of service (such as, setup.bat service) the %SERVER_NAME% contains the fully-qualified domain name of the computer.Otherwise it defaults to localhost Installing Note If you are using a non-U.S. English edition of Microsoft Windows you must edit the Setup.bat file and replace the "NT AUTHORITY\NETWORK SERVICE" account name with your regional equivalent.\Binding\Net\MSMQ\MessageSecurity
https://docs.microsoft.com/de-de/dotnet/framework/wcf/samples/message-security-over-message-queuing
2017-10-17T02:12:46
CC-MAIN-2017-43
1508187820556.7
[]
docs.microsoft.com
Applies To: Windows PowerShell 4.0, Windows PowerShell 5.0 The WindowsFeature resource in Windows PowerShell Desired State Configuration (DSC) provides a mechanism to ensure that roles and features are added or removed on a target node. Syntax WindowsFeature [string] #ResourceName { Name = [string] [ Credential = [PSCredential] ] [ Ensure = [string] { Absent | Present } ] [ IncludeAllSubFeature = [bool] ] [ LogPath = [string] ] [ DependsOn = [string[]] ] [ Source = [string] ] } Properties Example WindowsFeature RoleExample { Ensure = "Present" # Alternatively, to ensure the role is uninstalled, set Ensure to "Absent" Name = "Web-Server" # Use the Name property from Get-WindowsFeature }
https://docs.microsoft.com/en-us/powershell/dsc/windowsfeatureresource
2017-10-17T02:12:54
CC-MAIN-2017-43
1508187820556.7
[]
docs.microsoft.com