Dataset columns: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars).
An Overview

Nearly everything you work on in Ponzu lives inside content files for the Content types you create. These types must all reside in the content package and are the fundamental core of your CMS. For Content types to be rendered and managed by the CMS, they must implement the editor.Editable interface and add their own interface{} container to the global item.Types map. Sound like a lot? Don't worry: all of this can be done for you by the code-generating command line tools that come with Ponzu. It is rare to hand-write a new Content type; generate it instead!

Generating Content types¶

To generate content types and boilerplate code, use the Ponzu CLI generate command as such:

$ ponzu generate content post title:string body:string:richtext author:string

The command above will create a file at content/post.go and will generate the following code:

package content

import (
	"fmt"

	"github.com/ponzu-cms/ponzu/management/editor"
	"github.com/ponzu-cms/ponzu/system/item"
)

type Post struct {
	item.Item

	Title  string `json:"title"`
	Body   string `json:"body"`
	Author string `json:"author"`
}

// MarshalEditor writes a buffer of html to edit a Post within the CMS
// and implements editor.Editable
func (p *Post) MarshalEditor() ([]byte, error) {
	view, err := editor.Form(p,
		// Take note that the first argument to these Input-like functions
		// is the string version of each Post field, and must follow
		// this pattern for auto-decoding and auto-encoding reasons:
		editor.Field{
			View: editor.Input("Title", p, map[string]string{
				"label":       "Title",
				"type":        "text",
				"placeholder": "Enter the Title here",
			}),
		},
		editor.Field{
			View: editor.Richtext("Body", p, map[string]string{
				"label":       "Body",
				"placeholder": "Enter the Body here",
			}),
		},
		editor.Field{
			View: editor.Input("Author", p, map[string]string{
				"label":       "Author",
				"type":        "text",
				"placeholder": "Enter the Author here",
			}),
		},
	)
	if err != nil {
		return nil, fmt.Errorf("Failed to render Post editor view: %s", err.Error())
	}

	return view, nil
}

func init() {
	item.Types["Post"] = func() interface{} { return new(Post) }
}

The code above is the baseline amount required to manage content for the Post type from within the CMS. See Extending Content for information about how to add more functionality to your Content types.

All content managed by the CMS and exposed via the API is considered an "item", and thus should embed the item.Item type. There are many benefits to this, such as becoming automatically sortable by time and being given default methods that are useful inside and outside the CMS. All content types created by the generate command via the Ponzu CLI embed Item.

Related packages¶

The item package has a number of useful interfaces, which make it simple to add functionality to all content types and other types that embed Item. The editor package has the Editable interface, which allows types to create an editor for their fields within the CMS. Additionally, there is a helper function editor.Form which simplifies defining the editor's input layout and input types using editor.Input and various other functions to make HTML input elements like Select, Checkbox, Richtext, Textarea and more. The api package has interfaces including api.Createable and api.Mergeable which make it trivial to accept and approve or reject content submitted from 3rd parties (POST from HTML forms, mobile clients, etc).
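The generated type is then served by Ponzu's content API once the server is running. As a rough illustration (not part of the original documentation), here is a minimal Python sketch that lists Post items over HTTP; the base address, the /api/contents?type=Post route, the count parameter, and the top-level "data" array in the response are all assumptions based on Ponzu's defaults, so verify them against the HTTP API reference for your Ponzu version.

# Minimal sketch: read Post content from a running Ponzu server's JSON API.
import requests

BASE_URL = "http://localhost:8080"  # assumed default dev-server address

def list_posts(count=10):
    resp = requests.get(
        BASE_URL + "/api/contents",
        params={"type": "Post", "count": count},  # assumed route and query parameters
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])  # assumed response envelope

if __name__ == "__main__":
    for post in list_posts():
        print(post.get("title"), "-", post.get("author"))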
https://docs.ponzu-cms.org/Content/An-Overview/
2019-05-19T14:39:19
CC-MAIN-2019-22
1558232254889.43
[]
docs.ponzu-cms.org
JupyterLab on JupyterHub¶ JupyterLab works out of the box with JupyterHub, and can even run side by side with the classic Notebook. To make JupyterLab the default interface that JupyterHub spawns for users, set the following option in your JupyterHub configuration:

c.Spawner.default_url = '/lab'

In this configuration, users can still access the classic Notebook at /tree, by either typing that URL into the browser, or by using the “Launch Classic Notebook” item in JupyterLab’s Help menu. Further Integration¶ Additional integration between JupyterLab and JupyterHub is offered by the jupyterlab-hub extension for JupyterLab. It provides a Hub menu with items to access the JupyterHub control panel or log out of the hub. To install the jupyterlab-hub extension, run:

jupyter labextension install @jupyterlab/hub-extension

Further directions are provided on the jupyterlab-hub GitHub repository. Example Configuration¶ For a fully-configured example of using JupyterLab with JupyterHub, see the jupyterhub-deploy-teaching repository.
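For context, the option above goes in JupyterHub's Python configuration file. A minimal sketch, assuming the conventional jupyterhub_config.py file name and the default Spawner, looks like this:

# jupyterhub_config.py -- minimal sketch (assumes the default Spawner is in use)
c = get_config()  # injected by JupyterHub when it loads this file

# Send users to JupyterLab by default after their single-user server starts.
c.Spawner.default_url = '/lab'
# The classic Notebook remains reachable at /tree, as described above.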
https://jupyterlab.readthedocs.io/en/latest/user/jupyterhub.html
2019-05-19T15:32:31
CC-MAIN-2019-22
1558232254889.43
[]
jupyterlab.readthedocs.io
Deleting a Computer from the JSS You can remove a computer from your inventory by deleting it from the Jamf Software Server (JSS). The files and folders installed during enrollment are not removed from the computer when it is deleted from the JSS. For instructions on how to remove these components, see the Removing Jamf Components from Computers Knowledge Base article. Log in to the JSS with a web browser. Click Computers at the top of the page. Click Search Inventory. Perform a simple or advanced computer search. For more information, see Simple Computer Searches or Advanced Computer Searches . Click the computer you want to delete. If you performed a simple search for an item other than computers, such as computer applications, you must click Expand next to an item name to view the computers related to that item. Click Delete, and then click Delete again to confirm. Related Information For related information, see the following section in this guide: Mass Deleting Computers Find out how to mass delete computers from the JSS.
https://docs.jamf.com/9.99.0/casper-suite/administrator-guide/Deleting_a_Computer_from_the_JSS.html
2019-05-19T14:35:06
CC-MAIN-2019-22
1558232254889.43
[]
docs.jamf.com
All content with label api+faq+hot_rod+infinispan+jboss_cache+listener+publish+release+wcm. Related Labels: expiration, datagrid, coherence, server, replication, transactionmanager, dist, partitioning, query, deadlock, future, archetype, lock_striping, jbossas, nexus, guide, schema, editor, cache, s3, amazon, grid, jcache, test, ehcache, maven, documentation, page, write_behind, ec2, 缓存, s, hibernate, getting, templates, interface, custom_interceptor, setup, clustering, eviction, gridfs, template, out_of_memory, concurrency, examples, tags, import, events, hash_function, configuration, batch, buddy_replication, loader, xa, write_through, cloud, mvcc, tutorial, notification, jbosscache3x, read_committed, xml, distribution, composition, started, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, development, websocket, async, transaction, interactive, xaresource, build, gatein, categories, searchable, demo, installation, cache_server, scala, client, non-blocking, migration, jpa, filesystem, design, tx, gui_demo, eventing, post, content, client_server, testng, infinispan_user_guide, standalone, hotrod, snapshot, repeatable_read, tasks, docs, consistent_hash, batching, jta, 2lcache, as5, downloads, jsr-107, lucene, jgroups, locking, rest, uploads more » ( - api, - faq, - hot_rod, - infinispan, - jboss_cache, - listener, - publish, - release, - wcm ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/api+faq+hot_rod+infinispan+jboss_cache+listener+publish+release+wcm
2019-05-19T14:55:21
CC-MAIN-2019-22
1558232254889.43
[]
docs.jboss.org
All content with label hot_rod+infinispan+publish+release+standalone. Related Labels: expiration, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, query, deadlock, archetype, lock_striping, jbossas, nexus, guide, schema, listener, cache, amazon, grid, test, jcache, api, xsd, ehcache, wildfly, maven, documentation, write_behind, ec2, 缓存, hibernate, aws, getting_started, interface, custom_interceptor, setup, clustering, eviction, gridfs, concurrency, jboss_cache, import, index, events, hash_function, configuration, batch, buddy_replication, loader, xa, write_through, cloud, tutorial, notification, read_committed, xml, distribution, cachestore, data_grid, resteasy, hibernate_search, cluster, websocket, transaction, async, interactive, xaresource, build, searchable, demo, cache_server, installation, scala, mod_cluster, client, non-blocking, migration, jpa, filesystem, gui_demo, eventing, client_server, testng, infinispan_user_guide, webdav, hotrod, snapshot, repeatable_read, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, jsr-107, jgroups, locking, rest more » ( - hot_rod, - infinispan, - publish, - release, - standalone ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/hot_rod+infinispan+publish+release+standalone
2019-05-19T15:42:54
CC-MAIN-2019-22
1558232254889.43
[]
docs.jboss.org
The name of the WAM whose Webroutine will be invoked when an item in the tree is selected. Individual items in an unlevelled list can override this using the list_onselect_wamname_field property. Default value The current WAM. Valid values The name of a WAM, in single quotes. A selection can be made from a list of known WAMs by clicking on the corresponding dropdown button in the property sheet.
https://docs.lansa.com/14/en/lansa087/content/lansa/wamb8_tree_onselectwam.htm
2019-05-19T14:16:52
CC-MAIN-2019-22
1558232254889.43
[]
docs.lansa.com
You can use the Metrics API to periodically poll for data about your cluster, hosts, containers, and applications. The Metrics API is one way to get metrics from DC/OS. It is designed for occasional targeted access to specific tasks and hosts. It is not the best way to get a comprehensive picture of all metrics on DC/OS. It is recommended to use the DC/OS Monitoring service to monitor all the metrics on your cluster. The Metrics API is backed by Telegraf, which runs on all nodes in the cluster. To get started, see the DC/OS metrics component documentation. The Metrics API also requires authorization via the appropriate metrics permissions. All routes may also be reached by users with the dcos:superuser permission. To assign permissions to your account, see the permissions reference documentation. Resources The following resources are available under both of the above routes:
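As an illustration of the "occasional targeted access" use case, the sketch below polls node-level metrics with Python. It is not part of the original page: the /system/v1/agent/<agent-id>/metrics/v0/node route, the token=... Authorization header format, and the datapoints field in the response are assumptions based on typical DC/OS conventions; substitute the routes and auth flow documented for your cluster version.

# Sketch: poll node-level metrics from a DC/OS cluster (assumed route and auth scheme).
import requests

CLUSTER_URL = "https://dcos.example.com"  # hypothetical cluster address
AUTH_TOKEN = "<your-auth-token>"          # obtained via your normal DC/OS login flow
AGENT_ID = "<agent-id>"                   # hypothetical agent to inspect

def get_node_metrics():
    url = f"{CLUSTER_URL}/system/v1/agent/{AGENT_ID}/metrics/v0/node"  # assumed route
    headers = {"Authorization": f"token={AUTH_TOKEN}"}  # assumed DC/OS-style token header
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    metrics = get_node_metrics()
    for datapoint in metrics.get("datapoints", []):
        print(datapoint.get("name"), datapoint.get("value"))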
https://docs.mesosphere.com/1.13/metrics/metrics-api/
2019-05-19T14:27:02
CC-MAIN-2019-22
1558232254889.43
[]
docs.mesosphere.com
Packages¶ The pfSense package system provides the ability to extend pfSense without adding bloat and potential security vulnerabilities to the base distribution. Packages are supported on full installs and a reduced set of packages are available on NanoBSD-based embedded installs. Note NanoBSD installs have the capability of running some packages, but due to the nature of the platform and its disk writing restrictions, some packages will not work and thus are not available for installation on that platform. To see the packages available for the current firewall platform being utilized, browse to System > Packages, on the Available Packages tab. Introduction to Packages¶ Many of the packages have been written by the pfSense community and not by the pfSense development team. The available packages vary quite widely, and some are more mature and well-maintained than others. There are packages which install and provide a GUI interface for third-party software, such as Squid, and others which extend the functionality of pfSense itself, like the OpenVPN Client Export Utility package which automatically creates VPN configuration files. By far the most popular package available for pfSense is the Squid Proxy Server. It is installed more than twice as often as the next most popular package: Squidguard, which is a content filter that works with Squid to control access to web resources by users. Not surprisingly, the third most popular package is Lightsquid, which is a Squid log analysis package that makes reports of the web sites which have been visited by users behind the proxy. Some other examples of available packages (which are not Squid related) are: - Bandwidth monitors that show traffic by IP address such as ntopng, and Darkstat. - Extra services such as FreeRADIUS. - Proxies for other services such as SIP and FTP, and reverse proxies for HTTP or HTTPS such as HAProxy. - System utilities such as NUT for monitoring a UPS. - Popular third-party utilities such as nmap, iperf, and arping. - BGP Routing, OSPF routing, Cron editing, Zabbix agent, and many, many others. - Some items that were formerly in the base system but were moved to packages, such as RIP (routed) As of this writing there are more than 40 different packages available; too many to cover them all in this book! The full list of packages that can be installed on a particular system is available from within any pfSense system by browsing to System > Packages. The packages screen may take a little longer to load than other pages in the web interface. This is because the firewall fetches the package information from the pfSense package servers before the page is rendered to provide the most up-to- date package information. If the firewall does not have a functional Internet connection including DNS resolution, this will fail and trigger a notification. If the package information has been retrieved previously, it will be displayed from cache, but the information will be outdated. This is usually caused by a missing or incorrect DNS server configuration. For static IP connections, verify working DNS servers are entered on the System > General Setup page. For those with dynamically assigned connections, ensure the DNS servers assigned by the ISP are functioning. This traffic will only go via the default gateway on the firewall, so ensure that gateway is up or change another active WAN gateway to be the default.
https://docs.netgate.com/pfsense/en/latest/book/packages/index.html
2019-05-19T14:57:15
CC-MAIN-2019-22
1558232254889.43
[]
docs.netgate.com
Report network calls manually Normally, Splunk MINT monitors all HTTP/S network calls. In some cases, the SDK may fail to capture network calls automatically. If this happens, you can call logNetworkEvent() to generate an event that contains information about network calls for your application. Note: If you pass NULL in the url argument, Splunk will not receive the event. For information about logNetworkEvent(), see the Splunk MINT SDK for Android API Reference. This documentation applies to the following versions of Splunk MINT™ SDK for Android: 5.2.x
https://docs.splunk.com/Documentation/MintAndroidSDK/5.2.x/DevGuide/Reportnetworkcallsmanually
2019-05-19T15:40:44
CC-MAIN-2019-22
1558232254889.43
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Setting the Greenplum Recommended OS Parameters Greenplum requires that certain Linux operating system parameters be set on all hosts in your Greenplum system. Linux System Settings - Set the recommended kernel parameters (for example, kernel.shmmax = 500000000) in the /etc/sysctl.conf file and reboot. - Mount XFS data directories with the recommended mount options (for example, allocsize=16m). - Check that Transparent Huge Pages are disabled: $ cat /sys/kernel/mm/*transparent_hugepage/enabled always [never] For more information about Transparent Huge Pages or the grubby utility, see your operating system documentation. Installing Oracle Compatibility Functions Optional. Many Oracle Compatibility SQL functions are available in Greenplum Database. These functions target PostgreSQL. Before using any Oracle Compatibility Functions, you need to run the installation script $GPHOME/share/postgresql/contrib/orafunc.sql once for each database. For example, to install the functions in database testdb, use the command: $ psql -d testdb -f $GPHOME/share/postgresql/contrib/orafunc.sql To uninstall Oracle Compatibility Functions, use the script $GPHOME/share/postgresql/contrib/uninstall_orafunc.sql
https://gpdb.docs.pivotal.io/43120/install_guide/prep_os_install_gpdb.html
2019-05-19T14:54:06
CC-MAIN-2019-22
1558232254889.43
[array(['/images/icon_gpdb.png', None], dtype=object)]
gpdb.docs.pivotal.io
SnapCenter ships with pre-defined roles, each with a set of permissions already enabled. When setting up and administering role-based access control (RBAC), you can either use these pre-defined roles or create new ones. SnapCenter includes the following pre-defined roles: SnapCenter Admin, App Backup and Clone Admin, Backup and Clone Viewer, and Infrastructure Admin. For information about pre-defined roles and permissions for SnapCenter Plug-in for VMware vSphere users, see the information about SnapCenter Plug-in for VMware vSphere. When you add a user to a role, you must either assign the Storage Connection permission to enable storage virtual machine (SVM) communication, or assign an SVM to the user to enable permission to use the SVM. The Storage Connection permission enables users to create SVM connections. For example, a user with the SnapCenter Admin role can create SVM connections and assign them to a user with the App Backup and Clone Admin role, which by default does not have permission to create or edit SVM connections. Without an SVM connection, users cannot complete any backup, clone, or restore operations. The App Backup and Clone Admin role has the permissions required to perform administrative actions for application backups and clone-related tasks. This role does not have permissions for host management, provisioning, storage connection management, or remote installation. The Backup and Clone Viewer role has a read-only view of all permissions. This role also has permissions enabled for discovery, reporting, and access to the Dashboard. The Infrastructure Admin role has permissions enabled for host management, storage management, provisioning, resource groups, remote installation, reports, and access to the Dashboard.
http://docs.netapp.com/ocsc-40/topic/com.netapp.doc.ocsc-con/GUID-E0CF021A-5BBB-4558-903B-E48AAC611472.html
2019-05-19T14:55:07
CC-MAIN-2019-22
1558232254889.43
[]
docs.netapp.com
Acrolinx for Office on Windows - Word What You Can Check Acrolinx for Office on Windows - Word works best if you use it in the normal writing mode. In protected documents, Acrolinx checks your content, but since you can't make any changes in the document, Acrolinx can't either. Acrolinx checks content in elements such as: - Headers and footers - Footnotes and endnotes - SmartArt graphics - Field codes in SmartArt graphics If you're using field codes for dynamic content, Acrolinx highlights and replaces the content, but as soon as you update your field, Word overwrites the suggestion. Acrolinx can't highlight and replace issues for field codes in your Word document when the field codes are in some elements, including: - Headers and footers "Track Changes" works more consistently with Microsoft Office versions 2013 and 2016. You or your Acrolinx administrator can extend what Acrolinx includes in your check by configuring Content Profiles. Find Acrolinx in Office - Word. With check selection you can check smaller sections of your content such as Shapes, SmartArt, or Footnotes. To check a paragraph, you don't need to select the whole text; just click within the paragraph or highlight a few words. If you want to check text in a table, you can highlight the whole table or select single cells.
https://docs.acrolinx.com/officeonwindows/latest/en/acrolinx-for-office-on-windows-word
2019-05-19T14:59:24
CC-MAIN-2019-22
1558232254889.43
[]
docs.acrolinx.com
WAM025 – Using the Layout Wizard 1. Start the wizard from the Tools ribbon in the Utilities grouping: 2. Select the Web Application Layout Manager Wizard. 3. Click Next to continue. 4. Enter the following Site Layout Details: a. Site Layout Name: iiilay01 b. Site Layout Description: iii Workshop Layout c. Generate a WAM using this Site Layout. If Generate Site Style or Generate Site Script is selected, an Application Images Directory field will be displayed. The Application Images Directory will use an existing folder if entered. If a new folder name is entered this will be created in c:\Program Files (x86)\LANSA\Webserver\Images. The new folder could be used to store application specific images. Note that you would also need to later, set this folder up on the IBM i server. d. Click Next. 5. Enter the Web Application Details: a. Name: iiiLAYTST for this exercise. In your own WAMs you may use a long name. b. Description: Layout Demo c. Select the Sample WebRoutines to show themed Weblets option. d. Click Next. The Wizard creates a sample WAM for you. 6. Select one of the Site Themes. The color scheme will be displayed when selected. 7. Click Next. 8. In the Application Content: a. Select the One Content Area: b. Select the Fluid Site Layout Width option. c. Click Finish. 9. Select the Compile and Execute Application option. 10. Click Generate. Note: The appearance of your WAM will depend on the Theme that you selected in the Wizard questionnaire. The WebRoutines can be invoked from the menu at the top of the content area.
https://docs.lansa.com/14/en/lansa087/content/lansa/wamengt4_0275.htm
2019-05-19T14:30:49
CC-MAIN-2019-22
1558232254889.43
[]
docs.lansa.com
Can I sell pfSense¶ Many consulting companies offer pfSense solutions to their customers. A business or individual can load pfSense for themselves, friends, relatives, employers, and, yes, even customers, so long as the Trademark Guidelines and Apache 2.0 license requirements as detailed on the website are obeyed by all parties involved. What can not be offered is a commercial redistribution of pfSense® software, for example the guidelines do not permit someone to offer “Installation of pfSense® software” as a service or to sell a device pre-loaded with pfSense® software to customers without the prior express written permission of ESF pursuant to the trademark policy as referenced in the RCL Terms and Conditions. Example 1: A consultant may offer firewall services (e.g. “Fred’s Firewalls”), without mentioning pfSense® software or using the logo in their advertising, marketing material, and so on. They can install pfSense® software and manage it for their customers. Example 2: Fred’s Firewalls may make a customized distribution pfSense® software with their own name and logo used in place of the pfSense marks. They can use the pfSense marks to truthfully describe the origin of the software, such as “Fred’s Firewall software is derived from the pfSense CE source code.” Even though Fred’s Firewall is based on pfSense® software, it can not be referred to as “pfSense® software” since it has been modified. Example 3: Fred’s Firewalls may sell their customized firewall distribution pre-loaded on systems to customers, so long as the relationship to pfSense is clearly stated. The Apache 2.0 license only applies to the software and not the pfSense name and logo, which are trademarks and may not be used without a license. Reading and understanding the RCL Terms and Conditions document is required before one considers selling pfSense Software. Contributing Back to the Project¶ We ask anyone profiting by using pfSense software to contribute to the project in some fashion. Ideally with the level of contributions from a business or individual corresponding to the amount of financial gain received from use of pfSense software. Many paths exist for resellers and consultants to contribute. For the long term success of the project this support is critically important. - Purchase hardware and merchandise from the Netgate Store. - Become a Netgate Partner to resell Netgate hardware pre-loaded with pfSense software. - Development contributions - Dedicate a portion of your internal developers’ time to open source development. - Help with support and documentation - Assisting users on the forum and mailing list, or contributing documentation changes, aides the overall project. - Support subscription via Netgate Global Support Having direct access to our team for any questions or deployment assistance helps ensure your success. Using the pfSense Name and Logo¶ The “pfSense” name and logo are trademarks of Electric Sheep Fencing, LLC. The pfSense software source code is open source and covered by the Apache 2.0 license. That license only covers the source code and not our name and trademarks, which have restricted usage. We think it is great that people want to promote and support the pfSense project. At the same time, we also need to verify that what is referred to as “pfSense” is a genuine instance of pfSense software and not modified in any way. - The pfSense name and logo MAY NOT be used physically on a hardware device. 
- For example: A sticker, badge, etching, or similar rendering of the pfSense name or logo is NOT allowed. - The pfSense logo MAY NOT be used on marketing materials or in other ways without a license, including references on websites. - The pfSense name MAY be used to describe the case that a product is based on a pfSense distribution, but the designated product name may not include pfSense or a derivative. Basically stating facts regarding product origin is acceptable. Anything that implies that your product is endorsed by or made by ESF or the pfSense project is not allowed. - Some examples: - “Blahsoft Fireblah based on pfSense software” – Acceptable - “Blahsoft pfSense Firewall” – NOT Allowed - You may ONLY install an UNMODIFIED version of pfSense software and still call it “pfSense software”. - If the source code has been changed, compiled/rebuilt separately, included extra file installations such as themes or add-on scripts, or any other customizations, it can not be called “pfSense software”, it must be called something else. - Trademark protection aside, this requirement preserves the integrity and reputation of the pfSense project. It also prevents unverified changes that may be questionably implemented from being attributed to pfSense. - If a pfSense distribution is modified, the resulting software CANNOT be called “pfSense” or anything similar. The new name must be distinct from pfSense. Trademark law does not allow use of names or trademarks that are confusingly similar to the pfSense Marks. This means, among other things, that you may not use a variation of the pfSense Marks, their phonetic equivalents, mimicry, wordplay, or abbreviation with respect to similar or related projects, products, or services (for example, “pfSense Lifestyle,” “PFsense Community,” “pf-Sense Sensibility,” “pfSensor”, etc., all infringe on ESF’s rights). - Examples: - “pfSomething”, or “somethingSense” – INFRINGING references - “ExampleWall”, “FireWidget” – NON-Infringing references - The “pfSense” name MAY NOT be used in a company name or similar. You CANNOT call your company “pfSense Support, Ltd” or “pfSense Experts, LLC”, or use it in a domain name or subdomain reference. However, you can state support for pfSense software, offer training for pfSense software, etc. - You MUST ensure there is a distinction between your company name and pfSense or Electric Sheep Fencing, LLC. It is your responsibility to be certain no relationship or endorsement is stated or implied between the two companies, unless we have explicitly licensed and agreed to such a statement.
https://docs.netgate.com/pfsense/en/latest/general/can-i-sell-pfsense.html
2019-05-19T15:15:17
CC-MAIN-2019-22
1558232254889.43
[]
docs.netgate.com
Setting up a Moksha RPM & mod_wsgi environment (Fedora, RHEL, CentOS)¶ Installing the Moksha Apache/mod_wsgi server¶ $ sudo yum install moksha-{server,hub,docs} Note The above setup does not install any apps. To duplicate the moksha dashboard demo, you can yum install moksha* Running Orbited¶ Out of the box, Orbited comes with a very minimal configuration. Copy over Moksha’s Orbited configuration: # cp /etc/moksha/orbited.cfg /etc/orbited.cfg Note Moksha’s Orbited configuration enables the MorbidQ STOMP message broker by default, for ease of development. This can be disabled by commenting out the line stomp://:61613 and the corresponding line under the [access] section. Starting the Orbited daemon: # service orbited start Note You can also start orbited by hand by running orbited -c /etc/moksha/orbited.cfg Install the dependencies and set up your RPM tree¶ This step is only necessary if you plan on building moksha apps. $ sudo yum install rpmdevtools $ rpmdev-setuptree $ sudo yum-builddep -y moksha
https://moksha.readthedocs.io/en/latest/main/RPMInstallation/
2019-05-19T14:53:57
CC-MAIN-2019-22
1558232254889.43
[]
moksha.readthedocs.io
LookupAccountSidA function The LookupAccountSid function accepts a security identifier (SID) as input. It retrieves the name of the account for this SID and the name of the first domain on which this SID is found. Syntax

BOOL LookupAccountSidA(
  LPCSTR        lpSystemName,
  PSID          Sid,
  LPSTR         Name,
  LPDWORD       cchName,
  LPSTR         ReferencedDomainName,
  LPDWORD       cchReferencedDomainName,
  PSID_NAME_USE peUse
);

Parameters lpSystemName A pointer to a null-terminated string that specifies the target computer. If this parameter is NULL, the account name is looked up on the local system. Sid A pointer to the SID to look up. Name A pointer to a buffer that receives a null-terminated string that contains the account name that corresponds to the lpSid parameter. cchName On input, specifies the size, in TCHARs, of the lpName buffer. If the function fails because the buffer is too small or if cchName is zero, cchName receives the required buffer size, including the terminating null character. ReferencedDomainName A pointer to a buffer that receives a null-terminated string that contains the name of the domain where the account name was found. cchReferencedDomainName On input, specifies the size, in TCHARs, of the ReferencedDomainName buffer. If the function fails because the buffer is too small or if cchReferencedDomainName is zero, it receives the required buffer size, including the terminating null character. peUse A pointer to a variable that receives a SID_NAME_USE value that indicates the type of the account. Return Value If the function succeeds, the function returns nonzero. If the function fails, it returns zero. To get extended error information, call GetLastError. Remarks See also: Basic Access Control Functions
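To make the two-call buffer-sizing pattern concrete, here is a small Python ctypes sketch. It is not part of the original reference: it uses the wide-character LookupAccountSidW variant for simplicity, and it assumes you already have a valid PSID pointer (for example, one returned by another Win32 call) to pass in.

# Sketch (Windows only): resolve a SID to "DOMAIN\name" using LookupAccountSidW via ctypes.
# The caller must supply a valid PSID pointer; obtaining one is outside this reference.
import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)

def lookup_account_sid(psid, system_name=None):
    name_len = wintypes.DWORD(0)
    domain_len = wintypes.DWORD(0)
    use = wintypes.DWORD(0)

    # First call with empty buffers: it fails, but fills in the required sizes.
    advapi32.LookupAccountSidW(system_name, psid, None, ctypes.byref(name_len),
                               None, ctypes.byref(domain_len), ctypes.byref(use))

    name = ctypes.create_unicode_buffer(name_len.value)
    domain = ctypes.create_unicode_buffer(domain_len.value)

    # Second call with correctly sized buffers performs the real lookup.
    if not advapi32.LookupAccountSidW(system_name, psid, name, ctypes.byref(name_len),
                                      domain, ctypes.byref(domain_len), ctypes.byref(use)):
        raise ctypes.WinError(ctypes.get_last_error())

    return f"{domain.value}\\{name.value}", use.value  # account name and SID_NAME_USE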
https://docs.microsoft.com/en-us/windows/desktop/api/Winbase/nf-winbase-lookupaccountsida
2019-05-19T14:56:56
CC-MAIN-2019-22
1558232254889.43
[]
docs.microsoft.com
noun A feature that you configure to take data that is in an input queue and store it to files on disk. Using a persistent queue can prevent data loss if the forwarder or indexer has too much data to process at one time. By default, forwarders and indexers have an in-memory input queue of 500KB. Without persistent queues, there is a potential for data loss if the input stream overflows the in-memory queue. With persistent queues, when the in-memory queue is full, the forwarder or indexer writes the input stream to files on disk. The forwarder or indexer then processes data from the queues (in-memory and disk) until it reaches the point when it can again start processing directly from the data stream. You enable persistent queues in inputs.conf on an input-by-input basis. Persistent queues are disabled by default. For more information, see the Getting Data In manual.
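As a concrete illustration of the "input-by-input basis" point, a persistent queue is turned on by adding a setting to the individual input stanza. The snippet below is a hypothetical example, not taken from this glossary entry; the stanza name, the size values, and the persistentQueueSize/queueSize setting names should be checked against the inputs.conf reference for your Splunk version.

# inputs.conf -- hypothetical example of enabling a persistent queue for one TCP input
[tcp://:9001]
# in-memory input queue size (the default is 500KB, as noted above)
queueSize = 1MB
# spill to files on disk once the in-memory queue is full
persistentQueueSize = 100MB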
https://docs.splunk.com/Splexicon:Persistentqueue
2019-05-19T15:45:41
CC-MAIN-2019-22
1558232254889.43
[]
docs.splunk.com
ThoughtSpot formula reference.
Operators
Aggregate functions: These functions can be used to aggregate data.
Conversion functions: These functions can be used to convert data from one data type to another. Conversion to or from date data types is not supported.
Date functions
Mixed functions: These functions can be used with text and numeric data types.
https://docs.thoughtspot.com/5.0/reference/formula-reference.html
2019-05-19T15:51:15
CC-MAIN-2019-22
1558232254889.43
[]
docs.thoughtspot.com
Configure permissions for VDAs earlier than XenDesktop 7 If users have VDAs earlier than XenDesktop 7, additional configuration is required, including the following setting: Service.Connector.WinRM.Identity = Service You can configure these permissions in one of two ways: - Add the service account to the local Administrators group on the desktop machine. - Run the ConfigRemoteMgmt.exe tool, located in the C:\inetpub\wwwroot\Director\tools folder. You must grant permissions to all Director users. To grant the permissions to an Active Directory security group, user, computer account, or for actions like End Application and End Process, run the tool with administrative privileges from a command prompt using the following arguments: ConfigRemoteMgmt.exe /configwinrmuser domain\name where name is a security group, user, or computer account. To grant the required permissions to a user security group: ConfigRemoteMgmt.exe /configwinrmuser domain\HelpDeskUsers To grant the permissions to a specific computer account: ConfigRemoteMgmt.exe /configwinrmuser domain\DirectorServer$ For End Process, End Application, and Shadow actions: ConfigRemoteMgmt.exe /configwinrmuser domain\name /all To grant the permissions to a user group: ConfigRemoteMgmt.exe /configwinrmuser domain\HelpDeskUsers /all To display help for the tool: ConfigRemoteMgmt.exe
https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-15-ltsr/director/permissions-earlier-vda.html
2019-05-19T15:38:02
CC-MAIN-2019-22
1558232254889.43
[]
docs.citrix.com
All content with label archetype+gridfs+infinispan+installation+jcache+jsr-107+repeatable_read+replication. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, transactionmanager, dist, release, query, deadlock,, lucene, jgroups, locking, rest, hot_rod more » ( - archetype, - gridfs, - infinispan, - installation, - jcache, - jsr-107, - repeatable_read, - replication ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/archetype+gridfs+infinispan+installation+jcache+jsr-107+repeatable_read+replication
2019-05-19T15:30:09
CC-MAIN-2019-22
1558232254889.43
[]
docs.jboss.org
Configuring and managing tokens and readers McAfee Drive Encryption supports different logon tokens, for example, Passwords, Stored Value SmartCards, PKI SmartCards, CAC SmartCards, and Biometric tokens. This section describes how to configure the Drive Encryption software to support these SmartCards. Modify the token type associated with a system or groupYou can create a new user-based policy with a required token type and deploy it to the required system or a system group. You can also edit and deploy an existing policy. Using a Stored Value token in Drive EncryptionA Stored Value token supported in Drive Encryption stores some token data on the token itself. You must initialize these tokens with Drive Encryption before you can use them for authentication. The token needs to contain the necessary token data to allow successful authentication of the user. Using a PKI token in Drive EncryptionA PKI token is a smartcard supported in Drive Encryption that finds the necessary certificate information for the user in a PKI store (such as Active Directory) and used to initialize the Drive Encryption token data. You must initialize these tokens before they can be used to authenticate a user. Using a Self-Initializing token in Drive EncryptionA Self-Initializing token is a form of PKI token, but rather than referencing certificate information and pre-initializing the token data in McAfee ePO, the client sees the card and performs the necessary initialization steps. Only the client performs the initialization of the token data. One of the assumptions for using a Self-Initializing token is that the necessary certificate information cannot be referenced in Active Directory or any other supported Directory Service. Setup scenarios for the Read Username from Smartcard featureYou can set up your environment using the new Drive Encryption feature Read Username from Smartcard. Using a Biometric token in Drive Encryption A Biometric token allows fingerprints to authenticate to Drive Encryption instead of using passwords. Currently, Drive Encryption 7.2 supports two Biometric fingerprint readers in specific laptop models.
https://docs.mcafee.com/bundle/drive-encryption-7.2.0-product-guide-epolicy-orchestrator/page/GUID-A9E358B0-2EBD-4ADC-AEBE-FC22ECE6D4B6.html
2019-05-19T15:31:37
CC-MAIN-2019-22
1558232254889.43
[]
docs.mcafee.com
Respond Method (MeetingItem Object)

The Respond method returns a MeetingItem object for responding to this meeting request.

Syntax

Set objMeetResp = objMeeting.Respond(RespondType)

objMeetResp Object. On successful return, contains a MeetingItem object that can be used to respond to the meeting request.
objMeeting Required. This MeetingItem object.
RespondType Required. Long. The value to send as the response.

Remarks

The Respond method prepares a meeting response which can be sent in answer to a meeting request using the Forward or Send method. The response takes the form of a MeetingItem object with the meeting's initiating user as a primary recipient. The initiating user is available through the Organizer property of the associated AppointmentItem object, which can be obtained from the GetAssociatedAppointment method.

The RespondType parameter must be set to exactly one of the CdoResponseType values, such as CdoResponseAccepted or CdoResponseDeclined. If you call GetAssociatedAppointment on the meeting request and then respond with CdoResponseDeclined, you must either Delete the associated AppointmentItem object yourself or leave it in the folder.

Note The Exchange Server 2003 SP2 version of CDO 1.2.1 handles calendar items differently than earlier versions of Exchange Server 2003. See Calendaring Changes in CDO 1.2.1 for more information. Respond in this case returns CdoE_NO_SUPPORT.

Calling the Respond method is the same as calling GetAssociatedAppointment and then calling Respond on the AppointmentItem object.

Example

Dim objSess As Session
Dim objMtg As MeetingItem
Dim objAppt As AppointmentItem
Dim objResp As MeetingItem ' response to meeting request
On Error Resume Next
Set objSess = CreateObject("MAPI.Session")
objSess.Logon
Set objMtg = objSess.Inbox.Messages(1)
If objMtg Is Nothing Then
    MsgBox "No messages in Inbox"
    ' ... error exit ...
ElseIf objMtg.Class <> 27 Then ' CdoMeetingItem
    MsgBox "Message is not a meeting request or response"
    ' ... error exit ...
End If
MsgBox "Meeting is " & objMtg ' default property is .Subject
' Message exists and is a meeting; is it a request?
If objMtg.MeetingType <> 1 Then ' CdoMeetingRequest
    MsgBox "Meeting item is not a request"
    ' ... error exit ...
End If
Set objAppt = objMtg.GetAssociatedAppointment
MsgBox "Meeting times" & objAppt.StartTime & " - " & objAppt.EndTime _
    & "; recurring is " & objAppt.IsRecurring
' we can Respond from either the AppointmentItem or the MeetingItem
Set objResp = objMtg.Respond(3) ' CdoResponseAccepted
objResp.Text = "OK, I'll be there"
objResp.Send
https://docs.microsoft.com/en-us/previous-versions/exchange-server/exchange-10/ms527208(v=exchg.10)
2019-05-19T15:37:55
CC-MAIN-2019-22
1558232254889.43
[]
docs.microsoft.com
QoS policy groups You can use System Manager to create, edit, and delete QoS policy groups. More information Creating QoS policy groups You can use System Manager to create Storage Quality of Service (QoS) policy groups to limit the throughput of workloads and to monitor workload performance. Deleting QoS policy groups System Manager enables you to delete a Storage QoS policy group that is no longer required. Editing QoS policy groups You can use the Edit Policy Group dialog box in System Manager to modify the name and maximum throughput of an existing Storage Quality of Service (QoS) policy group. Managing workload performance by using Storage QoS Storage. How Storage QoS works Storage QoS controls workloads that are assigned to policy groups by throttling and prioritizing client operations (SAN and NAS data requests) and system operations. How the maximum throughput limit works You can specify one service-level objective for a Storage QoS policy group: a maximum throughput limit. A maximum throughput limit, which you define in terms of IOPS, MBps, or both, specifies the throughput that the workloads in the policy group cannot collectively exceed. Rules for assigning storage objects to policy groups You should be aware of rules that dictate how you can assign storage objects to Storage QoS policy groups. QoS Policy Groups window Storage QoS (Quality of Service) can help you manage risks related to meeting your performance objectives. Storage QoS enables you to limit the throughput of workloads and to monitor workload performance. You can use the QoS Policy groups window to manage your policy groups and view information about them. Parent topic: Managing logical storage Part number: 215-11149-D0 June 2017 Updated for ONTAP 9.2
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-900/GUID-A6AFED89-C048-42D2-BC9D-ABDC0EB3BDD9.html
2019-05-19T15:01:44
CC-MAIN-2019-22
1558232254889.43
[]
docs.netapp.com
Configure NGINX status API input NGINX Plus provides a real-time live activity monitoring interface that shows key load and performance metrics of your server infrastructure. These metrics are represented as a RESTful JSON interface and this live data can be ingested into Splunk as NGINX Status API input. Configure NGINX Status API input through Splunk Web. - Log in to Splunk Web. - Select Settings > Data inputs > Splunk Add-on for NGINX. - Click New. - On the NGINX Status API Input page, enter the following fields: - Name: A unique name that identifies the NGINX Status API input - Log level: One of these log levels (with decreasing verbosity): debug, info, warning, error - NGINX URL: Location of the NGINX status JSON REST interface. For example, - Optionally, select More settings and modify the detailed settings field values as needed - Click Next. - Click Review. - After you review the information, click Submit. Validate data collection After you configure monitoring, run one of the following searches to check that you are ingesting the data that you expect. sourcetype=nginx:plus:api This documentation applies to the following versions of Splunk® Supported Add-ons: released Feedback submitted, thanks!
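Before adding the input, it can help to confirm that the NGINX Plus status endpoint actually returns JSON from the host where Splunk runs. The snippet below is a quick sanity check, not part of the add-on; the example URL is an assumption, so use whatever URL you plan to enter in the NGINX URL field above.

# Quick sanity check: fetch the NGINX Plus status JSON that the add-on will poll.
# Replace STATUS_URL with the value you intend to enter in the "NGINX URL" field.
import json
import requests

STATUS_URL = "http://nginx.example.com:8080/status"  # hypothetical endpoint

resp = requests.get(STATUS_URL, timeout=5)
resp.raise_for_status()

data = resp.json()  # raises ValueError if the endpoint does not return JSON
print(json.dumps(data, indent=2)[:500])  # show the first part of the payload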
https://docs.splunk.com/Documentation/AddOns/released/NGINX/Configureinputsv1modular
2019-05-19T14:57:42
CC-MAIN-2019-22
1558232254889.43
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Available only in PRO Edition The widget is available in the Webix Pro edition. Webix GridLayout widget makes it easier for you to build a graceful and balanced page structure. It gives you a ready scheme for arranging elements, thus saving your time in creating a nicely organized web application. GridLayout is based on Webix Layout and possesses the idea of arranging content from DataTable. It divides a page into rows and columns and uses grid cells for aligning views in the layout. Besides, it provides the possibility to save and restore the state of elements on a page. To initialize GridLayout on a page, make use of the code below: webix.ui({ view:"gridlayout", gridColumns:4, gridRows:3, cells:[ { template:"Single", x:0, y:0, dx:1, dy:1 }, { template:"Wide 1", x:1, y:0, dx:2, dy:1 }, { template:"Wide 2", x:0, y:1, dx:2, dy:1 }, { template:"Square", x:2, y:1, dx:2, dy:2 } ] }); GridLayout contains a set of cells, each of which presents an object with the properties that define the content, location and size of a cell. Each cell can contain a template or a view inside. The list of available cell properties is given below: Related sample: Grid Layout GridLayout provides a range of configuration options that will help you to achieve the desired look and feel. By default, the grid of GridLayout includes 2 grid rows and 2 grid columns. You can change these grid settings via the gridRows and gridColumns options: webix.ui({ view:"gridlayout", gridColumns:4, gridRows:3, cells:[ { template:"Single", x:0, y:0, dx:1, dy:1 }, { template:"Wide 1", x:1, y:0, dx:2, dy:1 } // more cells ] }); You can specify the necessary width and height of grid cells in pixels via the cellHeight and cellWidth options: webix.ui:({ view:"gridlayout", id:"grid", gridColumns:4, gridRows:3, cellHeight: 150, cellWidth: 200, cells:[ { id:"a", template:"Single", x:0, y:0, dx:1, dy:1 }, { id:"b", template:"Wide 1", x:1, y:0, dx:2, dy:1 } // more cells ] }); You can define proper margin and padding values to get the necessary alignment of cells in the grid. By default, the value of both margin and padding is 10. It is also possible to change the horizontal and vertical padding separately with the help of the paddingX and paddingY properties. webix.ui({ view:"gridlayout", gridColumns:4, gridRows:3, margin:20, // paddingY:20 padding 20, cells:[ { template:"Single", x:0, y:0, dx:1, dy:1 }, { template:"Wide 1", x:1, y:0, dx:2, dy:1 }, // more cells ] }); GridLayout provides a set of API methods to operate the inner views. You can add/remove views and move them to new positions. You can add a new view into a gridlayout via the addView method. It may take two parameters: $$("grid").addView({ template:"Column", x:1, y:1, dx:1, dy:1 }); Related sample: Grid Layout API To remove an unnecessary view, apply the removeView method. It takes either a child view object or its id as an argument: webix.ui({ view:"gridlayout", id:"grid", cells:[ { id:"a", template:"Single", x:0, y:0, dx:1, dy:1 }, { id:"b", template:"Wide 1", x:1, y:0, dx:2, dy:1 } // more cells ] }); $$("grid").removeView("a"); Related sample: Grid Layout API You can also remove all views from a gridlayout with the help of the clearAll method: $$("grid").clearAll(); Related sample: Grid Dashboard - Saving State To change the location and sizes of a view, make use of the moveView method. 
You need to pass two parameters to it: $$("grid").moveView("d", { x:0, y:0, dx:2, dy:2 }); Related sample: Grid Layout API Webix GridLayout enables you to store/restore the state of a layout to a cookie, a local or session storage. For these needs you should store the configurations of all widgets withing gridlayout cells to recreate them multiple times. var widgets = { { view:"list", id:"list", width:200, drag:true, template:"#value# - (#dx#x#dy#)", data:[ { id:"a", value:"Widget A", dx:1, dy:1 }, { id:"b", value:"Widget B", dx:1, dy:2 }, { id:"c", value:"Widget C", dx:2, dy:1 }, // other cells ] } }; webix.ui({ type:"space", cols:[ { view:"scrollview", body:grid }, widgets ] }); Then you can initialize gridlayout using these configuration objects with the help of the function defined by the factory configuration option. It takes one parameter: webix.ui({ view:"gridlayout", id:"grid", gridColumns:4, gridRows:4, factory:function(obj){ // your logic here } }); After that you are ready to work with the gridlayout state. To save the current state of a gridlayout to the local storage, you should call the serialize method as in: var state = $$("grid").serialize(serializer); webix.storage.local.put("grid-dashboard-state", state); As a parameter the serialize() method may take the serializer function that contains the serialization logic of each particular component within a gridlayout body. To recreate the cells content, you need to get the state object by its name from the storage and call the restore method of GridLayout. It takes two parameters: var state = webix.storage.local.get("grid-dashboard-state"); $$("grid").restore(state,factory); The factory function declared above implements the logic of creating cells. It will return a view from the existing configuration (here configurations are stored in the widgets object) to which it will point by the id. Related sample: Grid Dashboard - Saving State You can catch changes made in a gridlayout, such as adding, removing or moving of views and handle them with the help of the onChange event handler. Back to topBack to top $$("grid").attachEvent("onChange",function(){ // webix.message("A view has been moved") });
https://docs.webix.com/desktop__grid_layout.html
2019-05-19T15:29:33
CC-MAIN-2019-22
1558232254889.43
[]
docs.webix.com
5. Forms¶ Although Django’s ModelForm can work with translatable models, they will only know about untranslatable fields. Don’t worry though, django-hvad’s got you covered with the following form types: - TranslatableModelForm is the translation-enabled counterpart to Django’s ModelForm. - Translatable formsets is the translation-enabled counterpart to Django’s model formsets, for editing several instances at once. - Translatable inline formsets is the translation-enabled counterpart to Django’s inline formsets, for editing several instances attached to another object. - Translation formsets allows building a formset of all the translations of a single instance for editing them all at once. For instance, in a tabbed view. 5.1. TranslatableModelForm¶ TranslatableModelForms work like ModelForm, but can display and edit translatable fields as well. Their use is very similar, except the form must subclass TranslatableModelForm instead of ModelForm: class ArticleForm(TranslatableModelForm): class Meta: model = Article fields = ['pub_date', 'headline', 'content', 'reporter'] Notice the difference from Django's example? There is none but for the parent class. This ArticleForm will allow editing of one Article in one language, correctly introspecting the model to know which fields are translatable. The form can work in either normal mode, or enforce mode. This affects the way the form chooses a language for displaying and committing. - A form is in normal mode if it has no language set. This is the default. In this mode, it will use the language of the instanceit is given, defaulting to current language if not instanceis specified. - A form is in enforce mode if is has a language set. This is usually achieved by calling translatable_modelform_factory. When in enforce mode, the form will always use its language, disregarding current language and reloading the instanceit is given if it has another language loaded. - The language can be overriden manually by providing a custom clean() method. In all cases, the language is not part of the form seen by the browser or sent in the POST request. If you need to change the language based on some user input, you must override the clean() method with your own logic, and set cleaned_data ['language_code'] with it. All features of Django forms work as usual. 5.2. TranslatableModelForm factory¶ Similar to Django’s ModelForm factory, hvad eases the generation of uncustomized forms by providing a factory: BookForm = translatable_modelform_factory('en', Book, fields=('author', 'title')) The translation-aware version works exactly the same way as the original one, except it takes the language the form should use as an additional argument. The returned form class is in enforce mode. Note If using the form= parameter, the given form class must inherit TranslatableModelForm. 5.3. TranslatableModel Formset¶ Similar to Django’s ModelFormset factory, hvad provides a factory to create formsets of translatable models: AuthorFormSet = translatable_modelformset_factory('en', Author) This formset allows edition a collection of Author instances, all of them being in English. All arguments supported by Django’s modelformset_factory() can be used. For instance, it is possible to override the queryset, the same way it is done for a regular formset. 
In fact, it is recommended for performance, as the default queryset will not prefetch translations: BookForm = translatable_modelformset_factory( 'en', Book, fields=('author', 'title'), queryset=Book.objects.language('en').all(), ) Here, using language() ensures translations will be loaded at once, and allows filtering on translated fields is needed. The returned formset class is in enforce mode. Note To override the form by passing a form= argument to the factory, the custom form must inherit TranslatableModelForm. 5.4. TranslatableModel Inline Formset¶ Similar to Django’s inline formset factory, hvad provides a factory to create inline formsets of translatable models: BookFormSet = translatable_inlineformset_factory('en', Author, Book) This creates an inline formset, allowing edition of a collection of instances of Book attached to a single instance of Author, all of those objects being editted in English. It does not allow editting other languages; for this, please see translationformset_factory. Any argument accepted by Django’s inlineformset_factory() can be used with translatable_inlineformset_factory as well. The returned formset class is in enforce mode. Note To override the form by passing a form= argument to the factory, the custom form must inherit TranslatableModelForm. 5.5. Translations Formset¶ Basic usage¶ The translation formset allows one to edit all translations of an instance at once: adding new translations, updating and deleting existing ones. It works mostly like regular BaseInlineFormSet except it automatically sets itself up for working with the Translations Model of given TranslatableModel. Example: from django.forms.models import modelform_factory from hvad.forms import translationformset_factory from myapp.models import MyTranslatableModel MyUntranslatableFieldsForm = modelform_factory(MyTranslatableModel) MyTranslationsFormSet = translationformset_factory(MyTranslatableModel) Now, MyUntranslatableFieldsForm is a regular, Django, translation-unaware form class, showing only the untranslatable fields of an instance, while MyTranslationsFormSet is a formset class showing only the translatable fields of an instance, with one form for each available translation (plus any additional forms requested with the extra parameter - see modelform_factory()). Custom Translation Form¶ As with regular formsets, one may specify a custom form class to use. For instance: class MyTranslationForm(ModelForm): class Meta: fields = ['title', 'content', 'slug'] MyTranslationFormSet = translationformset_factory( MyTranslatableModel, form=MyTranslationForm, extra=1 ) Note The translations formset will use a language_code field if defined, or create one automatically if none was defined. One may also specify a custom formset class to use. It must inherit BaseTranslationFormSet. Wrapping it up: editing the whole instance¶ A common requirement, being able to edit the whole instance at once, can be achieved by combining a regular, translation unaware ModelForm with a translation formset in the same view. It works the way one would expect it to. The following code samples highlight a few gotchas. 
Creating the form and formset for the object: FormClass = modelform_factory(MyTranslatableModel) TranslationsFormSetClass = translationformset_factory(MyTranslatablemodel) self.object = self.get_object() form = FormClass(instance=self.object, data=request.POST) formset = TranslationsFormSetClass(instance=self.object, data=request.POST) Checking submitted form validity: if form.is_valid() and formset.is_valid(): form.save(commit=False) formset.save() self.object.save_m2m() # only if our model has m2m relationships return HttpResponseRedirect('/confirm_edit_success.html') Note When saving the formset, translations will be recombined with the main object, and saved as a whole. This allows custom save() defined on the model to be called properly and signal handlers to be fed a full instance. For this reason, we use commit=False while saving the form, avoiding a useless query. Warning You must ensure that form.instance and formset.instance reference the same object, so that saving the formset does not overwrite the values computed by form. A common way to use this view would be to render the form on top, with the formset below it, using JavaScript to show each translation in a tab. Next, we will take a look at the administration panel.
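Before moving on, here is a minimal sketch that puts the snippets above together into one function-based view, editing the untranslatable fields and all translations of a single object in the same request. The model, template, and redirect target names are placeholders carried over from the examples on this page, and fields="__all__" is an assumption you would normally replace with an explicit field list.

# Minimal sketch combining a translation-unaware ModelForm with a translations formset.
from django.forms.models import modelform_factory
from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404, render
from hvad.forms import translationformset_factory

from myapp.models import MyTranslatableModel

FormClass = modelform_factory(MyTranslatableModel, fields="__all__")
TranslationsFormSetClass = translationformset_factory(MyTranslatableModel)

def edit_with_translations(request, pk):
    obj = get_object_or_404(MyTranslatableModel, pk=pk)
    # form.instance and formset.instance must reference the same object (see warning above).
    form = FormClass(instance=obj, data=request.POST or None)
    formset = TranslationsFormSetClass(instance=obj, data=request.POST or None)

    if request.method == "POST" and form.is_valid() and formset.is_valid():
        form.save(commit=False)  # translations are recombined and saved by the formset
        formset.save()
        # save m2m data afterwards only if the model has m2m relationships, as noted above
        return HttpResponseRedirect("/confirm_edit_success.html")

    return render(request, "edit.html", {"form": form, "formset": formset})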
https://django-hvad.readthedocs.io/en/latest/public/forms.html
2019-06-16T07:37:41
CC-MAIN-2019-26
1560627997801.20
[]
django-hvad.readthedocs.io
Billing and metering questions This article answers frequently asked questions regarding billing and metering in Microsoft Flow. Where can I find out what pricing plans are available? See the pricing page. Where can I find out what my plan is? See the pricing page. How do I switch plans? In the top navigation menu, select Learn > Pricing, and then select the plan to which you want to switch. How do I know how much I've used? If you're on a free plan or a trial plan, click or tap the gear icon in the top navigation bar to show your current usage against your plan. If you're on a paid plan, runs are pooled across all users in your organization. We're working on features to expose available quota and usage across an organization. What happens if my usage exceeds the limits? Microsoft Flow throttles your flow runs. Where can I find more information regarding the usage limits? On the pricing page, see the FAQ section. What happens if I try to execute runs too frequently? Your plan determines how often your flows run. For example, your flows may run every 15 minutes if you're on the free plan. If a flow is triggered less than 15 minutes after its last run, it's queued until 15 minutes have elapsed. What counts as a run? Whenever a flow is triggered, whether by an automatic trigger or by manually starting it, this is considered a run. Checks for new data don't count as runs. Are there differences between Microsoft Accounts and work or school accounts for billing? Yes. If you sign in with a Microsoft Account (such as an account that ends with @outlook.com or @gmail.com), you can use only the free plan. To take advantage of the features in the paid plan, sign in with a work or school email address. I'm trying to upgrade, but I'm told my account isn't eligible. To upgrade, use a work or school account, or create an Office 365 trial account. Why did I run out of runs when my flow only ran a few times? Certain flows may run more frequently than you expect. For example, you might create a flow that sends you a push notification whenever your manager sends you an email. That flow must run every time you get an email (from anyone) because the flow must check whether the email came from your manager. This action counts as a run. You can work around this issue by putting all the filtering you need into the trigger. In the push notification example, expand the Advanced Options menu, and then provide your manager's email address in the From field. Other limits and caveats - Each account may have as many as: - 250 flows. - 15 Custom Connectors. - 20 connections per API and 100 connections total. - You can install a gateway only in the default environment. - Certain external connectors, such as Twitter, implement connection throttling to control quality of service. Your flows fail when throttling is in effect. If your flows are failing, review the details of the run that failed in the flow's run history. Feedback Send feedback about:
https://docs.microsoft.com/en-us/flow/billing-questions
2019-06-16T08:18:01
CC-MAIN-2019-26
1560627997801.20
[array(['media/billing-questions/learn-pricing.png', 'Learn > Pricing'], dtype=object) array(['media/billing-questions/settings.png', 'Settings button'], dtype=object) ]
docs.microsoft.com
Introduction This document is designed to assist organizations using BDD 2007 to deploy the Windows operating system. Using this guide, the organization can set up and implement testing solutions that validate the project throughout each phase. This material is intended for information technology (IT) professionals, subject matter experts (SMEs), and consultants responsible for stabilizing the BDD 2007 project before using it in a production environment. Anyone using this document must be familiar, at a minimum, with Microsoft management technologies, products, and concepts. The Test Feature Team Guide describes at a high level the test objectives, scope, practices, and testing methodologies that the Test feature team uses. Supporting this guide is the Test Cases Workbook in C:\Program Files\BDD 2007\Documentation\Job Aids, which provides the details required to execute the BDD 2007 test cases described here as well as the results the testers obtained in the lab. This information is provided as a suggested rather than prescriptive approach to designing, setting up, and operating the test lab environment for a BDD 2007 project. This guide focuses primarily on process documentation. Detailed, step-by-step procedures can be found in the appendix. If the reader’s role is planning, he or she should read the main body of the guide. If the reader’s role is stabilizing the solution, he or she should read this entire guide. Like any other technological implementation, the BDD 2007 project must be fully tested before deployment into a production environment. A test environment consists of a test lab or labs and includes test plans that detail what to test as well as test cases that describe how to test each component. This test environment must simulate the production environment as closely as possible. The test lab can consist of a single lab or of several labs, each of which supports testing without presenting risk to the production environment. In the test lab, members of the Test feature team can verify their deployment design assumptions, discover deployment problems, and improve their understanding of the new technology. Such activities reduce the risk of errors during deployment and minimize downtime in the production environment. Regardless of whether the team is using an existing lab for testing or building a new test lab for the deployment project, the team must think through and clearly define goals for the test lab and its long-term purpose. On This Page Background Prerequisites Background The work described in this guide typically starts in the Envisioning Phase of MSF, when the project team is scoping the project. It continues through the Planning, Developing, Stabilizing, and Deploying Phases. The primary consumer of this guide is the MSF Test Role Cluster, because most of this guide focuses on validating the developed solution. Prerequisites Test team members must have a comprehensive understanding of the project’s business objectives. They must also possess strong communication skills. Test team members should be familiar with a range of testing and deployment concepts and technologies. 
For example:
- Microsoft Windows Server® 2003 running in a Microsoft Windows Server System™ domain
- Microsoft Virtual Server 2005 (only if the team is building a virtual server environment to save on the cost of hardware)
- Microsoft Virtual PC 2004 with Service Pack 1 (SP1)
- Windows Server 2003 interaction issues with Windows client computers (Windows Vista, Windows XP, Microsoft Windows 2000, and Windows NT® Workstation 4.0)
- Deployment technologies used with current Windows operating systems
- Testing methodologies, processes, and practices (for example, test plans; test cases; and white box, black box, and gray box testing)
- Microsoft Systems Management Server (SMS) 2003 experience
- Microsoft Windows SharePoint® Services experience
https://docs.microsoft.com/en-us/previous-versions/bb490183%28v%3Dtechnet.10%29
2019-06-16T07:34:03
CC-MAIN-2019-26
1560627997801.20
[array(['images/bb451534.3squares%28en-us%2ctechnet.10%29.gif', None], dtype=object) ]
docs.microsoft.com
Lesson 1-7: Add and configure the OLE DB destination SQL Server, including on Linux Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse Your package can now extract data from the flat file source and transform that data into a format compatible with the destination. The next task is to load the transformed data into the destination. To load the data, you add an OLE DB destination to the data flow. The OLE DB destination can use a database table, view, or a SQL command to load data into a variety of OLE DB-compliant databases. In this task, you add and configure an OLE DB destination to use the OLE DB connection manager that you previously created. Add and configure the sample OLE DB destination In the SSIS Toolbox, expand Other Destinations, and drag OLE DB Destination onto the design surface of the Data Flow tab. Place the OLE DB Destination directly below the Lookup Date Key transformation. Select the Lookup Date Key transformation and drag its blue arrow over to the new OLE DB Destination to connect the two components together. In the Input Output Selection dialog, in the Output list box, select Lookup Match Output, and then select OK. On the Data Flow design surface, select the name OLE DB Destination in the new OLE DB Destination component, and change that name to Sample OLE DB Destination. Double-click Sample OLE DB Destination. In the OLE DB Destination Editor dialog, ensure that localhost.AdventureWorksDW2012 is selected in the OLE DB Connection manager box. In the Name of the table or the view box, enter or select [dbo].[FactCurrencyRate]. Select the New button to create a new table. Change the name of the table in the script from Sample OLE DB Destination to NewFactCurrencyRate. Select OK. Upon selecting OK, the dialog closes and the Name of the table or the view automatically changes to NewFactCurrencyRate. Select Mappings. Verify that the AverageRate, CurrencyKey, EndOfDayRate, and DateKey input columns are mapped correctly to the destination columns. If same-named columns are mapped, the mapping is correct. Select OK. Right-click the Sample OLE DB Destination destination and select Properties. In the Properties window, verify that the LocaleID property is set to English (United States) and the DefaultCodePage property is set to 1252. Go to next task Step 8: Annotate and format the Lesson 1 package
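If you want to spot-check outside of SSIS what the Sample OLE DB Destination accomplishes, a rough sketch like the one below inserts the same four mapped columns into the NewFactCurrencyRate table created in this lesson. The connection string, driver name, and sample values are placeholders and not part of the SSIS package itself.

```python
# Rough illustration of the load performed by the Sample OLE DB Destination:
# inserting AverageRate, CurrencyKey, EndOfDayRate and DateKey rows into
# dbo.NewFactCurrencyRate. Connection details and values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=AdventureWorksDW2012;Trusted_Connection=yes;"
)
cursor = conn.cursor()

rows = [
    # (AverageRate, CurrencyKey, EndOfDayRate, DateKey) -- sample values only
    (1.0002, 3, 1.0001, 20050701),
]
cursor.executemany(
    "INSERT INTO dbo.NewFactCurrencyRate "
    "(AverageRate, CurrencyKey, EndOfDayRate, DateKey) VALUES (?, ?, ?, ?)",
    rows,
)
conn.commit()
conn.close()
```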
https://docs.microsoft.com/en-us/sql/integration-services/lesson-1-7-adding-and-configuring-the-ole-db-destination?view=sql-server-2017
2019-06-16T07:37:35
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
On-call scheduling security

Assign users the appropriate access to the features of on-call scheduling. In on-call scheduling, the three important roles are:
- rota_admin
- rota_manager
- itil

The rota_admin role gives the user the ability to create, read, update, and delete rotations in the instance. A rota_admin can create rotations using the Create new schedule module, modify rotations and rosters, and maintain coverage and time off on the on-call calendar. The rota_manager role is for users who manage a group; it can also be delegated to members of a group using Delegate roles. The role cannot be used to manage all groups in the system; its purpose is to distinguish a member who has been delegated the task of managing the rotations of particular groups. A user with the itil role can view the on-call calendar, view on-call commitments via the report, and has general read access to their groups' rotations. Note: roster_admin is a deprecated role that is not fully supported and has been retained for customers upgrading from older versions. Do not use the roster_admin role; use the roles mentioned at the beginning of the topic. Related reference: Roles installed with on-call scheduling.
https://docs.servicenow.com/bundle/madrid-it-service-management/page/administer/on-call-scheduling/concept/oncallschedulesecurity.html
2019-06-16T07:14:14
CC-MAIN-2019-26
1560627997801.20
[]
docs.servicenow.com
Product Template

This template renders a detailed page for an individual product. It includes an HTML form that visitors use to select a variant and add it to the cart.

Featured Image
This is the featured image of the product. You can select a Div Block or an Image. Insert the attribute el-child=featured-image and connect the image or the background image to the Main Image field of the CMS.

Title
This will be the title of your product. To create it, select a Text Block and insert the attribute el-child=title, then connect it to the Name field of the CMS.

Description
This is the description of your product. Select a Rich Text item and insert the attribute el-child=description, then connect it to the Post Body field of the CMS.

Sold-out
Insert any element you wish (a Div Block with text, for example). It will only appear when a product is sold out. Insert the attribute el-child=sold-out

On Sale
Insert any element you wish (a Div Block with text, for example). It will only appear when a product is on sale. Insert the attribute el-child=on-sale

Compare Price
If you have "On Sale" mode enabled, you can show the discounted price next to the full price. Select a Text Block and insert the attribute el-child=compare-price

Current Price
Select a Text Block and insert the attribute el-child=current-price

Form Block
This is a form that visitors use to select the quantity of a given product and, for variable products, to choose the variations. Insert a Form Block and assign the attribute el-child=add-to-cart
In the Form Block, insert a Text Field with the attribute el-child=quantity; it lets visitors choose the quantity of that product. In the same Form Block you can insert a Div Block that appears dynamically if the product has variations that change the price automatically. This Div Block must have the attribute el-child=option-wrapper

Vendor
If you want to display the entire catalogue of a specific vendor, insert a Button with the attribute el-child=vendor

Tags and Collections
Insert a List item and add a link (Tags) within it. The list must have the attribute el-child=tags
A second list holds a second link (Collections). Assign the attribute el-child=collections to that list.
https://docs.udesly.com/product-template/
2019-06-16T07:25:55
CC-MAIN-2019-26
1560627997801.20
[array(['/assets/product-template.png', None], dtype=object) array(['/assets/product-template2.png', None], dtype=object) array(['/assets/product-template3.png', None], dtype=object)]
docs.udesly.com
Plugin: The team members within a Concurrent Design activity typically represent a Domain of Expertise that is required in the design. These domains are managed at the level of the Site Directory and available to be used within a specific Engineering Model. This section describes how a team for a Concurrent Design activity can be created by assigning [Persons] as Participants to the required active domains in a specific engineering model, with a required Participant Role. The person that has created a new engineering model setup is automatically assigned to it as a participant, in the Participant Role of Model Administrator, see the description of managing engineering models. This initial participant then has to perform several steps to setup and manage the other active domains and participants, to make sure that other team members can get access to the engineering model when required. Engineering models can be expanded using the triangular arrow icon to show the Participants and Iterations, each in a separate folder. Open the Engineering Model browser by selecting the Models icon on the Directory tab. This will show all the available engineering models the user is allowed to see according to the permissions, depending on the setup of the Person Role and the engineering models to which the user is assigned. The Participants are given with the Name, Description and Role. In the description column, the Organization and Domain of Expertise are provided. For a new engineering model, this will thus give in any case the domain and the participant as model administrator for the person that created the new engineering model. If an engineering model was created based on another engineering model, e.g. a template or study model, additional domains that were set as Active Domains in the source model, and participants to the source model will be available as well in the new engineering model, see managing engineering models. In the setup of a team for a Concurrent Design activity, one of the first steps is to setup or edit the Active Domains in the engineering model. This is done by editing the Engineering Model Setup. Select the Edit Engineering Model Setup icon or in the context menu select Edit. On the Active Domain tab, a list is given of all Domains of Expertise that exist at the level of the Site Directory. To add other domains as Active Domains for the engineering model, tick the check box for the required domains and select Ok. Users that have the appropriate permissions in their Person Role (see Manage Person Roles) can edit the participants in an engineering model. In the Engineering Model browser, expand the Participants folder to inspect the existing participants. In the context menu (on the engineering model, the participants folder, or on an existing participant), select Create a Participant. On the Basic tab of the Create Participant modal dialog, select a Person using the drop-down. For this person as a new participant, the required Participant Role has to be specified. This is a mandatory field to allow a participant access to the engineering model with the appropriate permissions. For examples on this, see the description of a typical role setup. The default domain for the selected person will be automatically preselected as Domain of Expertise if this domain is set as an Active Domain for the engineering model. It is always possible to change this default domain assignment by ticking or unticking the check boxes for the available domains as required. 
A person can participate in multiple domains and multiple users can participate in a single domain. If the default domain is not provided for the person, or if this is not an active domain in the engineering model, of course it cannot be preselected and the user has to provide the correct domain assignment for the new participant. See below for a description of the Is Active setting. Create Participant To allow a participant access to the engineering model, the participant has to be active, see the figure of the Create Participant modal dialog. As long as a participant is not set as Is Active, the user will not be able to access the engineering model as a participant. This can be used to revoke access for a specific participant. It can also be used to setup the required team already, but only provide access to a core team of engineers that is needed to perform [preliminary engineering] tasks, by setting only those team members as Is Active. When the Concurrent Design activity is moving to the Design Sessions Phase the other participants can be edited to be set to Is Active. To inspect a participant, select the Inspect Participant icon or in the context menu select Inspect. In the Inspect modal dialog, all the details can be seen on the Basic tab. Additionally the status of the participant is given by the check box for Is Active, see the description of this above. The Advanced tab provides information that may be useful mostly to CDP™ database administrators. Given are the UniqueID and the Revision Number. To edit a participant, select the Edit Participant icon or in the context menu select Edit. It is possible to edit all the fields on the Basic tab, except the Person field. Most often, required changes will concern the representation of the Domain(s) of Expertise, or the Is Active setting as explained above. After making the required changes, click Ok. The browsers in the CDP™ Client of the user that performed the editing action will be updated immediately. Other CDP™ Clients will be updated with a refresh. On the Advanced tab, a Revision Number is given; see the description of Revision Number for details. To delete a participant, select the Delete Participant icon or in the context menu select Delete. Last modified 1 year ago.
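To make the Is Active gating described above concrete, here is a small, purely illustrative sketch. The classes and fields are hypothetical and do not reflect the actual CDP4™ object model or API; they only mirror the behaviour explained in the text.

```python
# Hypothetical sketch of how the Is Active flag gates access to an engineering model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Participant:
    person: str
    role: str                      # e.g. "Model Administrator"
    domains: List[str] = field(default_factory=list)
    is_active: bool = False        # inactive participants cannot open the model


@dataclass
class EngineeringModelSetup:
    name: str
    active_domains: List[str]
    participants: List[Participant]

    def can_access(self, person: str) -> bool:
        """A person gets access only through a participant that is set Is Active."""
        return any(p.person == person and p.is_active for p in self.participants)


setup = EngineeringModelSetup(
    name="StudyModel",
    active_domains=["Power", "Thermal"],
    participants=[
        Participant("alice", "Model Administrator", ["Power"], is_active=True),
        Participant("bob", "Domain Expert", ["Thermal"], is_active=False),
    ],
)
print(setup.can_access("alice"))  # True
print(setup.can_access("bob"))    # False -- activate bob when the design sessions start
```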
http://cdp4docs.rheagroup.com/?c=D%20Performing%20a%20CD%20Study/Prepare%20CDP%20Activity&p=Create_Team.md
2019-06-16T07:41:33
CC-MAIN-2019-26
1560627997801.20
[]
cdp4docs.rheagroup.com
Squeak allows kids of all ages to be creative with their computer. The goal of the Squeak Project is to build a system without constraints: it is used at schools, universities, and in industry. Squeak is an open system: it is implemented in Squeak itself, and all parts are open for learning and hacking. The whole source code is available and can be changed while the system is running. The Croquet project is building a revolutionary collaborative environment based on Squeak. It provides a scalable, peer-to-peer, multiuser 3D environment that is as open and fun as Squeak itself. Squeak is available on the internet under a free license; it is highly portable and currently used on over 20 different platforms. This talk will give an overview of the Squeak Project, from the eToy kids' programming environment up to the Seaside system for professional web development. The eToys make programming fun for children from around age 8. The talk will show how to build simple eToy programs and how Squeak is used at school. But even professional developers use Squeak; the Seaside framework shows how the openness of Squeak can help make developers more productive. The last part of the talk will give a glimpse into the future: OpenCroquet, a scalable, peer-to-peer, multiuser 3D environment that is completely open for exploration and enables novel ways of communication and interaction.
http://www.secdocs.org/docs/squeak-and-croquet-paper/
2019-06-16T07:08:04
CC-MAIN-2019-26
1560627997801.20
[]
www.secdocs.org
OpenVZ Containers Virtualization
- Publisher Page: OpenVZ
- Category: Virtual Machine Software
- Release: TKU 2015-Feb-1
- Change History: OpenVZ Containers Virtualization - Change History
- Reports & Attributes: [OpenVZ Containers Virtualization - Reports & Attributes]
- Publisher Link: OpenVZ

Product Description

Known Versions
- 3.0
- 4.0
https://docs.bmc.com/docs/display/Configipedia/OpenVZ+Containers+Virtualization
2019-06-16T07:52:19
CC-MAIN-2019-26
1560627997801.20
[]
docs.bmc.com
The App Center SDK uses a modular architecture so you can use any or all of the services. Let's get started with setting up the App Center iOS SDK in your app to use App Center Analytics and App Center Crashes. To add App Center Distribute to your app, look at the documentation for App Center Distribute.

1. Prerequisites
The following requirements must be met to use the App Center SDK:
- Your iOS project is set up in Xcode 10 or later on macOS version 10.12 or later.
- You are targeting devices running iOS 9.0 or later.
- You are not using any other library that provides crash-reporting functionality (only for App Center Crashes).

2. Create your app in the App Center Portal to obtain the App Secret
Choose iOS as the OS and Objective-C/Swift as the platform.
- Hit the button at the bottom right that says Add new app.

3.1 Integration via CocoaPods
Once your app is created, the App Center SDK can be integrated into your app via CocoaPods. Alternatively, …
Note: If you see an error like [!] Unable to find a specification for `AppCenter` while running pod install, run pod repo update to get the latest pods from the CocoaPods repository and then run pod install again.
Now that you've integrated the frameworks in your application, it's time to start the SDK and make use of the App Center services.

3.2 Integration by copying the binaries into your project
Below are the steps for integrating the compiled binaries into your Xcode project to set up App Center Analytics and App Center Crashes for your iOS app.
- Download the App Center SDK frameworks, provided as a zip file.
- Unzip the file. Third-party libraries usually reside inside a subdirectory, often called Vendor. If the project isn't organized with a subdirectory for libraries, create a Vendor subdirectory now.
- Create a group called Vendor inside your Xcode project to mimic your file structure on disk.
- Open the unzipped AppCenter-SDK-Apple folder in Finder and copy the folder into your project's folder at the location where you want it to reside (the location from the previous step).

4. Start the SDK
Start the SDK in the project's AppDelegate class, in the didFinishLaunchingWithOptions method. If you do not want to use one of the two services, remove the corresponding parameter from the method call. Unless you explicitly specify each module as a parameter in the start method, you can't use that App Center service. In addition, …
To learn how to get started with Push, read the documentation of App Center Push.
https://docs.microsoft.com/en-us/appcenter/sdk/getting-started/ios
2019-06-16T07:33:28
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
2.2.2.5.6 Data Model for endpointConfiguration provisionGroup The endpointConfiguration provisionGroup is as follows.<112> provisionGroup (name=’endpointConfiguration’) | |-- ShowRecentContacts |-- ShowManagePrivacyRelationships |-- MaxPhotoSizeKB |-- DisableMusicOnHold |-- AttendantSafeTransfer |-- MusicOnHoldAudioFile |-- CustomLinkInErrorMessages |-- CustomStateUrl |-- DisablePoorDeviceWarnings |-- DisablePoorNetworkWarnings |-- BlockConversationFromFederatedContacts |-- CalendarStatePublicationInterval |-- EnableCallLogAutoArchiving |-- EnableAppearOffline |-- EnableConversationWindowTabs |-- EnableEventLogging |-- EnableFullScreenVideoPreviewDisabled |-- EnableSQMData |-- EnableTracing |-- EnableURL |-- EnableIMAutoArchiving |-- DisableEmailComparisonCheck |-- DisableCalendarPresence |-- DisableEmoticons |-- DisableFederatedPromptDisplayName |-- DisableFreeBusyInfo |-- DisableHandsetOnLockedMachine |-- DisableHtmlIm |-- DisableInkIM |-- DisableRTFIM |-- DisableSavingIM |-- DisableMeetingSubjectAndLocation |-- DisableOneNote12Integration |-- DisableOnlineContextualSearch |-- DisablePhonePresence |-- DisablePICPromptDisplayName |-- DisablePresenceNote |-- PhotoUsage |-- AbsUsage |-- AllowUnencryptedFileTransfer |-- AutoDiscoveryRetryInterval |-- ConferenceIMIdleTimeout |-- DGRefreshInterval |-- ExcludedContactFolders |-- IMWarning |-- MapiPollInterval |-- MaximumNumberOfContacts |-- NotificationForNewSubscribers |-- PlayAbbrDialTone |-- SearchPrefixFlags |-- TabURL |-- WebServicePollInterval |-- DisableFeedsTab |-- EnableEnterpriseCustomizedHelp |-- CustomizedHelpUrl |-- DisableContactCardOrganizationTab |-- EnableHotdesking |-- HotdeskingTimeout |-- SPSearchInternalUrl |-- SPSearchExternalUrl |-- SPSearchCenterInternalUrl |-- SPSearchCenterExternalUrl |-- EnableExchangeDelegateSyncUp |-- EnableContactSync |-- ShowSharepointPhotoEditLink |-- EnableVOIPCallDefault |-- MaximumDGsAllowedInContactList |-- ImLatencySpinnerDelay |-- ImLatencyErrorThreshold |-- EnableMediaRedirection |-- P2PAppSharingEncryption |-- HelpEnvironment |-- RateMyCallAllowCustomUserFeedback |-- RateMyCallDisplayPercentage |-- imLatencySpinnerDelay |-- imLatencyErrorThreshold |-- EnableHighPerformanceP2PAppSharing |-- EnableHighPerformanceConferencingAppSharing |-- TracingLevel |-- EnableServerConversationHistory |-- EnableSkypeUI |-- CallViaWorkEnabled |-- UseAdminCallbackNumber |-- AdminCallbackNumber |-- OnlineFeedbackURL |-- BITSServerAddressInternal |-- BITSServerAddressExternal |-- EnableSendFeedback |-- EnableIssueReports |-- EnableBugFiling |-- EnableBackgroundDataCollection |-- PrivacyStatementURL |-- SendFeedbackURL The following XSD schema fragment defines the requirements to which an endpointConfiguration provisionGroup element XML document SHOULD conform. <?xml version="1.0" encoding="utf-16"?> <xs:schema xmlns: <xs:complexType <xs:simpleContent> <xs:extension <xs:attribute </xs:extension> </xs:simpleContent> </xs:complexType> <xs:complexType <xs:sequence> <xs:element </xs:sequence> </xs:complexType> <xs:element </xs:schema> ShowRecentContacts: Ignored. AbsUsage: Specifies whether the user MAY use an Address Book Service (as specified in [MS-ABS]) web query, download a file, or either one for accessing an address book. Values include "WebSearchAndFileDownload", "WebSearchOnly", or "FileDownloadOnly". AllowUnencryptedFileTransfer: Allows unencrypted files to be transferred. MusicOnHoldAudioFile: Optional, ignored. DisableMusicOnHold: Optional, ignored. AttendantSafeTransfer: Optional, ignored. 
AutoDiscoveryRetryInterval: How frequently to retry autodiscovery if it fails. BlockConversationFromFederatedContacts: This policy prevents external contacts from inviting users to IM or Audio-Video conversations unless they are in the user’s High Presence level. High Presence means that they are in the ALLOW access control list (ACL) of the user. In addition, it blocks messages that are not part of a session from reaching the user. CalendarStatePublicationInterval: How frequently data loaded from the calendar is published, in seconds. ConferenceIMIdleTimeout: Amount of time a user MAY sit in an IM-only conference without sending or receiving an IM. When the timeout occurs, the user automatically leaves the IM conference. CustomizedHelpUrl: Determines the root URL to be used when passing help parameters to the customized Help system of the enterprise. CustomLinkInErrorMessages: Custom link pointing users to an enterprise-hosted extranet site from the error messages. CustomStateUrl: URL to an XML file containing custom Presence states. DGRefreshInterval: Timing period, in seconds, for refreshing the Distribution Group. DisableCalendarPresence: Disables loading Free/Busy data from Messaging Application Programming Interface (MAPI) or web services. Also disables publication of free/busy data. DisableContactCardOrganizationTab: Removes the contact card organization tab from the UI. DisableEmailComparisonCheck: Disables the e-mail comparison check; normally, if the e-mail client profile’s email-id does not match the Simple Mail Transfer Protocol (SMTP) address received from the e-mail server<113>, any personal information management (PIM) integration<114> is disabled. This policy is disabled by default. DisableEmoticons: Prevents emoticons being shown in instant messages. DisableFederatedPromptDisplayName: The display name of federated, non-public IM connectivity, contacts in the notification dialog. DisableFeedsTab: Indicates whether the feed environment is shown in the UI. If this setting is enabled, the feed environment and the feed tab are removed from the UI. DisableFreeBusyInfo: Indicates whether free/busy information is published. If this setting is enabled, free/busy information is not published. DisableHandsetOnLockedMachine: When the PC is locked, this setting does not allow calls to or from the handset. DisableHtmlIm: Disables HTML instant messages. DisableInkIM: Ignored. DisableMeetingSubjectAndLocation: Indicates whether the meeting subject and location information is published. DisableOneNote12Integration: Specifies whether planning and note-taking software integration with the instant-messaging client is enabled.<115> DisableOnlineContextualSearch: Disables online searches for related conversations. This disables both the automatic population of previous logs, and the ability to start a contextual search for conversation related to emails. This only affects users who are using an e-mail client<116> in online mode. DisablePhonePresence: When this policy is enabled, phone call states are not published as part of the presence information. DisablePICPromptDisplayName: The display name of public IM connectivity contacts in the notification dialog. DisablePoorDeviceWarnings: Disables poor device warnings that appear during the first run, in the tuning wizard, in the main UI, and in the conversation window at the endpoint. DisablePoorNetworkWarnings: Disables poor network warnings that appear during the conversation at the endpoint. 
DisablePreCallNetworkDiagnostics: Disables poor network warnings that appear during the conversation at the endpoint. DisablePresenceNote: Disables the loading of the OOF (out of office) message from MAPI or web services. Also disables the publication of OOF messages. DisableRTFIM: Indicates that rich text MUST NOT be allowed for IM. DisableSavingIM: Prevents users from saving instant messages. EnableAppearOffline: Controls the appear offline entry points on the client. The value MUST be "true" or "false". EnableCallLogAutoArchiving: When enabled, the client archives call logs. When disabled, the client never archives call logs. EnableConversationWindowTabs: Enables the web browser in the conversation window. EnableEnterpriseCustomizedHelp: Determines whether the enterprise wants the default online help system or uses a completely customized help system. EnableEventLogging: Turns on UCCP event logging for the client. EnableExchangeDelegateSyncUp: When this is enabled, the delegate’s information is retrieved from the calendar, with author and editor rights on the boss's calendar. EnableFullScreenVideoPreviewDisabled: Enables full screen video with the correct aspect ratio and disables video preview for all client video calls. EnableHotdesking: Specifies whether hotdesking is enabled for a common area phone. Only phones with the common area phone device type make use of this setting. EnableIMAutoArchiving: When this policy is enabled, the client archives IM conversations. When it is disabled, the client never archives IM conversations. When this policy is not set, the user controls archiving of IM conversations. EnableSQMData: Enables collection of software quality metrics. EnableTracing: Turns on tracing for the client; primarily for use by customer support and the OC team to assist customer problem solving. EnableURL: Allows hyperlinks in instant messages. ExcludedContactFolders: This policy is used for excluding contact folders from being imported into the client search results. HotdeskingTimeout: The common area phone operating in hotdesking mode uses this setting to determine when to log the user out and revert to common area mode if the user has been inactive for the hotdeskingTimeout period. This setting is only made use of by phones with the common area device type and the enableHotdesking setting set to "true". IMWarning: Allows the administrator to configure the initial text that appears in the instant messaging area when a conversation window is opened. MapiPollInterval: The frequency, in minutes, of loading calendar data from the MAPI provider. MaximumNumberOfContacts: Maximum number of contacts users are allowed to have. MaxPhotoSizeKB: Maximum size of an individual photo that a client MAY download. NotificationForNewSubscribers: Notifications are shown unless the user has selected otherwise in the options dialog. PhotoUsage: Whether client endpoints display photos of each type. Valid values are "NoPhoto", "PhotosFromADOnly", and "AllPhotos". PlayAbbrDialTone: Changes the length of the dial tone from a 30-second dial tone to a fading, 3-second dial tone. SearchPrefixFlags: A DWORD value whose bits represent the decision of which address book attribute to index into the prefix search tree. These bits are as follows (from low to high): Bit 0: Primary e-mail. Bit 1: Alias. Bit 2: All emails. Bit 3: Company. Bit 4: Display name. Bit 5: First name. Bit 6: Last name. 
ShowManagePrivacyRelationships: Controls whether the "view by" menu in the contact list shows or hides "manage privacy relationships" (previously known as access level management view).Valid values are "true" and "false". SPSearchCenterExternalUrl: The site search center external URL is used to enable the site link in the expert search results. This URL represents the search center outside the enterprise network of the search service specified in the ExpertSearchExternalURL. If this setting is blank, there is no link to the site server at the bottom of the search results. SPSearchCenterInternalUrl: The site search center internal URL is used to enable the site link in the expert search results. This URL represents the search center within the enterprise network of the search service specified in the ExpertSearchInternalURL. If this setting is blank, there is no link to the site server at the bottom of the search results. SPSearchExternalUrl: The site external URL is used to enable expert search capabilities If the client is connected to the server outside of the enterprise network. If the URL is blank or invalid, none of the expert search capabilities and UI are enabled. SPSearchInternalUrl: The site internal URL is used to enable expert search capabilities if the client is connected to the server inside of the enterprise network. If the URL is blank or invalid, none of the expert search capabilities and UI are enabled. TabURL: URL for the XML file from which the tab definitions are loaded. WebServicePollInterval: The frequency, in minutes, of loading calendar data from a web services provider. EnableContactSync: Determines whether the client synchronizes contacts into the e-mail server contacts store.<117> The value MUST be "true" or "false". ShowSharepointPhotoEditLink: Determines whether the client shows the site link to edit their photo. The value MUST be "true" or "false". EnableVOIPCallDefault: Determines whether the client chooses Voice over IP (VoIP) call as the default calling method instead of a PSTN call when click to call is used. The value MUST be "true" or "false". MaximumDGsAllowedInContactList: Determines the number of distribution groups allowed in the contact list of the client. The type for this is unsigned integer. EnableMediaRedirection: Enables an enterprise-grade audio/video experience.<118> HelpEnvironment: Specifies the help documentation that the client shows to the user. P2PAppSharingEncryption: An integer value that specifies the encryption mode for application sharing sessions between clients. 0 : Supported 1 : Enforced 2 : Not Supported TracingLevel:<119> Controls the tracing level on the client. This with EnableTracing controls tracing, if EnableTracing is not set then this attribute controls the level of tracing and the user can change the tracing level, if EnableTracing is set then the user cannot change the tracing level and the tracing level is determined by the value of this attribute. This string attribute MUST have one of the following values. Off Light Full RateMyCallAllowCustomUserFeedback: <120> Whether the call rating dialog will include a free form text field, where user can provide custom feedback (value set to "true"), or not ("false"). RateMyCallDisplayPercentage: <121> An integer percentage value between 0 and 100 that indicates the percentage of calls for which call rating information is requested from the user at the end of a call. A value of 0 specifies to never request call rating information. 
A value of 100 specifies requesting call rating information at the end of every call. The default value is 10, specifying requesting call rating information for 1 call out of every 10 calls. ImLatencySpinnerDelay: <122> The amount of time in milliseconds to wait before showing the spinner in the client when IM delivery is delayed. ImLatencyErrorThreshold: <123> An integer value in milliseconds. If the IM latency is above this threshold the client will submit a CER. EnableServerConversationHistory: <124> Whether to allow the IM server to record the conversation history for this user. EnableHighPerformanceP2PAppSharing: <125> Whether to enable high frame rate application sharing in a two-party conversation. To enable the best experience,. EnableHighPerformanceConferencingAppSharing: <126> Whether to enable high frame rate application sharing in a multi-party (conference) conversation. Note that when enabled, this could have a performance impact on the application sharing conference server, possibly resulting in degraded performance if the server is heavily loaded. To enable the best experience, EnableSkypeUI: <127> Whether the client will display a Lync-style or Skype-style user interface. When set to "true", the client will display a Skype-style interface; when set to "false", the client will display a Lync-style interface. If this setting is absent, the client can choose in an implementation-specific manner. CallViaWorkEnabled: <128> Whether to enable or disable a user for CallViaWork functionality. AdminCallBackNumber: <129> An administrator-specified callback phone number for a specific user. UseAdminCallbackNumber: <130> Whether to force the user to use their AdminCallbackNumber as their callback phone number. When set to "true", the AdminCallbackNumber MUST be configured. When set to "false", user has the ability to specify their own callback phone number in addition to the administrator-specified AdminCallbackNumber. OnlineFeedbackURL: Controls whether to show an icon and Help menu UI for providing feedback. The URL value for this parameter is what is launched (in the user’s default browser) when the user selects the icon or menu option (Help > Report a problem). BITSServerAddressInternal: The internal URL is used to hold the URL of the server to upload the logs or feedback into. BITSServerAddressExternal: The external URL is used to hold the URL of the server to upload the logs or feedback into. EnableSendFeedback: Determines whether sending feedback is enabled. EnableIssueReports: Determines whether issue reporting or log collection is allowed. EnableBugFiling: Whether the bug filing is enabled. EnableBackgroundDataCollection: Whether sending background metrics is enabled. PrivacyStatementURL: Provisions the directed feedback and privacy URLs. SendFeedbackURL: The URL to specify issue reporting server address. For a detailed example, see section 4.2.2.
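As a rough illustration of consuming such a provision group on the client side, the sketch below parses a few of the settings defined above. The XML layout used here (a propertyEntryList of property elements with a name attribute and a text value) is a simplified assumption for demonstration; consult the XSD fragment in the specification for the authoritative structure.

```python
# Illustrative parsing of an endpointConfiguration provisionGroup.
# The sample XML layout is an assumption and may not match the real schema exactly.
import xml.etree.ElementTree as ET

sample = """
<provisionGroup name="endpointConfiguration">
  <propertyEntryList>
    <property name="EnableIMAutoArchiving">true</property>
    <property name="MaximumNumberOfContacts">1000</property>
    <property name="PhotoUsage">AllPhotos</property>
  </propertyEntryList>
</provisionGroup>
"""

root = ET.fromstring(sample)
# Collect every property's name attribute and text value into a settings dict.
settings = {p.get("name"): p.text for p in root.iter("property")}

print(settings["PhotoUsage"])                    # AllPhotos
print(int(settings["MaximumNumberOfContacts"]))  # 1000
print(settings["EnableIMAutoArchiving"] == "true")
```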
https://docs.microsoft.com/en-us/openspecs/office_protocols/ms-siprege/8284ec7e-22ce-4dc3-bca3-a06d62cf4a23
2019-06-16T07:28:20
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
Plugin: This section will describe the details of managing the main modelling concepts within an engineering model. A high level description of the actions that are needed to setup and develop the actual design items in an engineering model is given in the topic develop engineering model. The details of using parameters to characterize the element definitions within an engineering model is described in the topic of managing parameters. The element definitions as the main modelling items in the CDP4™ are managed in the Element Definitions browser. Open the Element Definitions browser by selecting the Open icon in the Element Definitions group of the Model tab. The element definitions in this browser are given with their Name, Owner and have Element Definition as Row Type. element definitions modal dialogs further have the generic functionality on the Aliases, Definitions and HyperLinks tabs, described in the topic standard tabs. To create a new element definition, select the Create Element Definition icon or in the context menu select Create an Element Definition. On the Basic tab, provide the mandatory fields for Name, Short Name and Owner. Furthermore, it is possible to indicate that the element definition is a Top Element, by ticking the check box Is Top Element. Optionally, a category can be applied to the new element definition. To do this, provide the applicable category or categories by selecting these on the Categories tab. Given on this tab are only the categories that can be applied to element definitions. This is done by including element definitions in the list of permissible classes in managing the [categories][Cat]. To make use of the full advantages of ECSS-E-TM-10-25, it is advisable to indeed apply a well-defined category or categories to any element definition in the model. This will provide meaning to it and allows the model validation capabilities to function, e.g. by checking defined rules to inspect the model setup and consistency. The element definitions that are created in the model need to get parameters assigned that characterize them, see the description of manage parameters. An element definition is shown with first the list of corresponding parameters, then the list of element usages (see below), both alphabetically ordered. To edit an element definition, select the Edit Element Definition icon or in the context menu select Edit. It is possible to edit most of the available fields on the Basic or content-specific tabs, as well as optional items on the other tabs for Aliases, Definitions, and Hyperlinks. In the case of Element Definitions, these are the Name, Short Name, Owner and indication of Top Element on the Basic tab, as well as the applicable categories on the Categories tab. After making the required changes, click Ok. Please take the effects of some of these edits into account. These are described below. To inspect an element definition, select the Inspect Element Definition icon or in the context menu select Inspect. In the Inspect modal dialog, all the details can be seen on the Basic and Category* tab, as well as optional tabs for **Aliases, Definitions, and Hyperlinks. The Advanced tab provides information that may be useful mostly to CDP™ database administrators. Given are the UniqueID and the [Revision Number][Rev_Num]. To delete an element definition, select the Delete Element Definition icon or in the context menu select Delete. 
When an element definition is deleted from the CDP™ database, it will actually be removed from the data that is stored in the CDP™ database from that point onwards. Note that deleting an element definition will also delete all its related element usages. To export element definitions, select the Export Element Definitions icon. Element Definitions can be added to other element definitions as Element Usages. This mechanism is used to create a hierarchy within the engineering model, see the section on option trees below. This section describes the more technical details of managing element usages. When an Element Usage is being created, it has all the same characteristics in terms of name, short name and all the parameters that belong to it. To create an element usage, select a source element definition, and drag-and-drop this on the target element definition. With this action, an element usage with a link to the source element definition will be created under the target element definition. Any new element usage will be added to the alphabetically ordered list of element usages under an element definition. Element definitions are indicated with the Element Usage icon in the Element Definitions browser. To edit an element usage, select the Edit Element Usage icon or in the context menu select Edit. Note that for the Edit menu item, there is a choice to edit the Element Usage, or the corresponding Element Definition. It is possible to edit most of the available fields on the Basic or content-specific tabs, as well as optional items on the other tabs for Aliases, Definitions, and Hyperlinks. In the case of Element Usages, these are the Name, Short Name and Owner on the Basic tab. The corresponding Element Definition cannot be edited. It is further possible to edit the applicable categories on the Categories tab. Please note that any changes to these categories will only have an effect on this particular element usage. Any changes that are made to it do not impact the corresponding element definition, nor any of the other element usages. See the description below for more details on the effect of changes to element definitions and element usages. The categories of the corresponding element definition are given on the Definition Categories tab. For an element usage, an additional attribute on the Basic tab is the Interface End. This specified the type of interface an element usage represents, if applicable. Possible choices for this are: NONE not an interface end UNDIRECTED general undirected interface end Example: a mechanical mounting plate INPUT interface end that acts as an input for its containingElement ElementDefinition Example: a power inlet socket OUTPUT interface end that acts as an output for its containingElement ElementDefinition Example: a signal output connector on a sensor IN_OUT interface end that acts both as an input and an output for its containingElement ElementDefinition Example: an Ethernet port on an electronic device Another tab that is specific to element usages is the Options tab. On this tab, all the options that are specified within the iteration are given. It is possible to indicate if the element usage should be included or excluded for a specific option, see the description of nested elements and the option tree below. To exclude an element usage from an option, untick the check box for that option. Use the Select All check box to include or exclude the element usages from all options. 
To inspect an element usage, select the Inspect Element Usage icon or in the context menu select Inspect. Note that for the Inspect menu item, there is a choice to edit the Element Usage, or the corresponding Element Definition. In the Inspect modal dialog, all the details can be seen on the Basic, Options, ** Category* and **Definitions Category tab, as well as optional tabs for Aliases, Definitions, and Hyperlinks. The Advanced tab provides information that may be useful mostly to CDP™ database administrators. Given are the UniqueID and the [Revision Number][Rev_Num]. To export element usages, select the Export Element Usage icon. Element Usages in the CDP4™ are used to build up nested element trees using the element definitions. The element definitions can be seen as “building blocks”, where the element usages are then used to specify where in a nested element tree these building blocks will appear. An element definition has to be unique. Multiple element usages of a single element definition can be added however to an engineering model. Please note that the user should edit the element usage to create a unique name and shortname to distinguish between each usage if multiple usages are added to the same element definition, so it is not unique at that specific level. The representation for each item in the Engineering Model browser itself is only one level deep. An element definition is shown with first the list of corresponding parameters, then the list of element usages, both alphabetically ordered. One element definition in the engineering model can be indicated to be the Top Element, which will be the starting point to derive expanded option trees as nested element trees. For each element usage that is present within the element definition defined as the top element, its corresponding element usages within their corresponding element definitions will be collected, and so forth to recursively build up a fully expanded view of the model showing the nested elements. This type of view on the model is available in the option-generated sheets. The nested element trees are generated for each option specifically. The distinction between the options can be expressed by indicating the desired option dependency of an Element Usage. If the option dependency is indicated for an element usage, this means that this element usage will be included in the nested element tree for that option when this is generated. By unticking the check box for an option, the element usage will be excluded from it, and the underlying parameters will not show up on the generated option sheet for that option. Setting the option dependency for element usages is thus one of the main mechanisms to create different nested element trees for different options to explore these as solution directions in a CD Study.. Last modified 1 year ago.
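To make the nested element tree idea concrete, here is a small, purely illustrative sketch (not the CDP4™ API or object model) that expands a top element into an option-specific tree, skipping element usages that have been excluded from that option.

```python
# Illustrative only: expanding element definitions/usages into an option-specific
# nested element tree, mirroring the option-dependency behaviour described above.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class ElementUsage:
    name: str
    definition: "ElementDefinition"
    excluded_options: Set[str] = field(default_factory=set)


@dataclass
class ElementDefinition:
    name: str
    usages: List[ElementUsage] = field(default_factory=list)


def nested_tree(defn: ElementDefinition, option: str, depth: int = 0) -> None:
    """Recursively print the nested element tree for one option."""
    print("  " * depth + defn.name)
    for usage in defn.usages:
        if option in usage.excluded_options:
            continue  # option dependency: this usage is excluded for this option
        nested_tree(usage.definition, option, depth + 1)


battery = ElementDefinition("Battery")
solar = ElementDefinition("SolarArray")
power = ElementDefinition("PowerSubsystem", usages=[
    ElementUsage("BAT1", battery),
    ElementUsage("SA1", solar, excluded_options={"OptionB"}),
])
spacecraft = ElementDefinition("Spacecraft", usages=[ElementUsage("PWR", power)])

nested_tree(spacecraft, "OptionA")  # includes SolarArray
nested_tree(spacecraft, "OptionB")  # SolarArray excluded from this option
```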
http://cdp4docs.rheagroup.com/?c=D%20Performing%20a%20CD%20Study/CDP%20Client&p=Manage_ED_EU.md
2019-06-16T07:41:51
CC-MAIN-2019-26
1560627997801.20
[]
cdp4docs.rheagroup.com
Remote Desktop Gateway on the AWS Cloud: Quick Start Reference Deployment Deployment Guide Santiago Cardenas — Solutions Architect, AWS Quick Start Reference Team April 2014 (last update: June 2017) This Quick Start reference deployment guide includes architectural considerations and configuration steps for deploying Remote Desktop Gateway (RD Gateway) on the Amazon Web Services (AWS) Cloud. It discusses best practices for securely accessing your Windows-based instances using the Remote Desktop Protocol (RDP) for remote administration. The Quick Start includes automated AWS CloudFormation templates that you can leverage for your deployment or launch directly into your AWS account. The guide is for organizations that are running workloads in the AWS Cloud that require secure remote administrative access to Windows-based, Amazon Elastic Compute Cloud (Amazon EC2) instances over the internet. After reading this guide, IT infrastructure personnel should have a good understanding of how to design and deploy an RD Gateway infrastructure on AWS. The following links are for your convenience. Before you launch the Quick Start, please review the architecture, configuration, network security, and other considerations discussed in this guide. If you have an AWS account and you're already familiar with RD Gateway and AWS services, you can launch the Quick Start to deploy RD Gateway into a new VPC in your AWS account. The deployment takes approximately 30 minutes. If you’re new to AWS or RD Gateway, or if you want to deploy RD Gateway into an existing VPC, please review the implementation details and follow the step-by-step instructions provided in this guide. If you'd like to take a look under the covers, you can view the template that automates the deployment for a new VPC. You can customize the template during launch, or download and extend it for other projects. Note You are responsible for the costs related to your use of any AWS services used while running this Quick Start reference deployment. There is no additional cost for using the Quick Start. For cost estimates, see the pricing pages for each AWS service you will be using in this Quick Start..
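If you prefer to launch the Quick Start's AWS CloudFormation template from code rather than the console, a minimal boto3 sketch might look like the following. The template URL, parameter names, and values here are placeholders; take the real ones from the Quick Start launch page for your region and deployment option.

```python
# Minimal sketch of launching a CloudFormation stack for the Quick Start with boto3.
# TemplateURL and Parameters are placeholders, not the actual Quick Start values.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

response = cfn.create_stack(
    StackName="rd-gateway-quickstart",
    TemplateURL="https://example-bucket.s3.amazonaws.com/rdgw-template.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-key-pair"},      # placeholder
        {"ParameterKey": "AdminPassword", "ParameterValue": "ChangeMe123!"},   # placeholder
    ],
    Capabilities=["CAPABILITY_IAM"],  # the stack creates IAM resources
)
print(response["StackId"])
```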
https://docs.aws.amazon.com/quickstart/latest/rd-gateway/welcome.html
2019-06-16T06:59:06
CC-MAIN-2019-26
1560627997801.20
[]
docs.aws.amazon.com
Aggregation Pipeline Limits

Changed in version 2.6: Pipeline stages have a limit of 100 megabytes of RAM. If a stage exceeds this limit, MongoDB produces an error. To allow for the handling of large datasets, use the allowDiskUse option to enable aggregation pipeline stages to write data to temporary files. (Changed in version 3.4.) See also $sort and Memory Restrictions and $group Operator and Memory.
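A short sketch of setting allowDiskUse from a driver, shown here with PyMongo; the connection string, database, collection, and pipeline are placeholders.

```python
# Minimal PyMongo sketch: enable allowDiskUse so large $group/$sort stages can
# spill to temporary files instead of failing at the 100 MB per-stage limit.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
orders = client["shop"]["orders"]                   # placeholder database/collection

pipeline = [
    {"$group": {"_id": "$customerId", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
]

for doc in orders.aggregate(pipeline, allowDiskUse=True):
    print(doc)
```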
https://docs.mongodb.com/manual/core/aggregation-pipeline-limits/
2019-06-16T08:13:39
CC-MAIN-2019-26
1560627997801.20
[]
docs.mongodb.com
Configuring System Service Firewall Rules. Note If you cannot add a program or system service to the exceptions list, you must determine which port or ports the program or system service uses and add the port or ports to the Windows Firewall exceptions list. However, adding programs and system services to the exceptions list is the recommended way to control the traffic that is allowed through Windows Firewall. Windows Firewall provides four preconfigured system service exceptions that you can enable or disable. When you enable a preconfigured exception, Windows Firewall adds the appropriate programs and ports to the exceptions list so that the system service can receive unsolicited incoming traffic. When you disable a preconfigured system service, Windows Firewall deletes the programs and ports from the exceptions list. The following table lists the preconfigured exceptions you can configure in Windows Server 2003. Note There is no predefined Remote Assistance exception in Windows Server 2003. If you want to use Remote Assistance on Windows Server 2003, you must enable the Remote Desktop exception or add TCP port 3389 to the Windows Firewall exceptions list. In most scenarios, all of the preconfigured exceptions are disabled by default. However, the File and Printer Sharing exception might be enabled by default after you perform an upgrade from an older operating system that has shared folders or printers. The Remote Desktop exception might be enabled by default after you perform an upgrade from an older operating system that has Remote Desktop Connection enabled. In addition to enabling and disabling preconfigured exceptions, you can edit the File and Printer Sharing, Remote Desktop, and UPnP Framework exceptions. Editing a preconfigured exception allows you to enable and disable the programs and ports that are associated with the preconfigured exception. This is useful for troubleshooting and for those cases in which you want to modify a preconfigured exception to suit a specific server configuration but do not want to add additional exceptions to the exceptions list. You cannot delete any of the preconfigured exceptions; disabling a preconfigured exception does not remove it from the exceptions list. Note You cannot edit the Remote Administration exception nor can you enable or disable it in the graphical user interface. You must use the netsh firewall command or Group Policy to enable or disable the Remote Administration exception. are configured to provide numerous services can be a critical point of failure in your organization and might indicate poor infrastructure design. To decrease your security risk, follow these guidelines when you configure preconfigured exceptions: Enable an exception only when you need it. If you think a program might require a port for unsolicited incoming traffic, do not enable a preconfigured exception until you verify that the program attempted to listen for unsolicited traffic and that the ports it uses are those that are opened by the preconfigured exception. Never enable an exception for a program or system service that you do not recognize. If Windows Firewall notifies you that a program has attempted to listen for unsolicited incoming traffic, verify the name of the program and the executable (.exe) file before you enable a preconfigured exception. 
If you use the security event log to determine that a system service attempted to listen for unsolicited incoming traffic, verify that the system service is a valid component before you enable a preconfigured exception. Disable preconfigured exceptions when you no longer need them. If you enable a preconfigured exception on a server, and then change the server's role or reconfigure the services and applications on the server, be sure to update the exceptions list and disable the preconfigured exceptions that are no longer required. When to perform this task You should configure the File and Printer Sharing, Desktop Connection, and UPnP Framework exceptions when your server provides or uses file and printer sharing, Remote Desktop Connection, or UPnP discovery. These exceptions are preconfigured for these services and features and are not meant to be used in any other way. You should configure the Remote Administration exception on a server when you want to administer the server with a remote administration tool that uses RPC and DCOM. Malicious users often attempt to attack networks and computers using RPC and DCOM. You should contact the manufacturer of your remote administration tool to determine if it requires RPC and DCOM communication. If it does not, do not enable this exception. Task requirements No special tools are required to complete this task. Task procedures To complete this task, perform the following procedures: Enable or Disable the File and Printer Sharing Firewall Rule Enable or Disable the Remote Desktop Firewall Rule Enable or Disable the UPnP Framework Firewall Rule Enable or Disable the Remote Administration Firewall Rule See Also Concepts Known Issues for Managing Firewall Rules Configuring Program Firewall Rules Configuring Port Firewall Rules Configuring Firewall Rules for Specific Connections Configuring Scope Settings
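Because the Remote Administration exception has no graphical toggle, a small script along the following lines can drive the netsh firewall command mentioned above. The exact argument names can vary by service pack, so treat this as a sketch and verify the syntax with netsh firewall set service /? before relying on it; run it from an elevated prompt on the server.

```python
# Illustrative sketch: toggling the Remote Administration exception via the legacy
# "netsh firewall" context. Argument spelling may differ by Windows Server 2003
# service pack; confirm with "netsh firewall set service /?" first.
import subprocess


def set_remote_admin(enabled: bool) -> None:
    mode = "enable" if enabled else "disable"
    # Calls: netsh firewall set service remoteadmin <enable|disable>
    subprocess.run(
        ["netsh", "firewall", "set", "service", "remoteadmin", mode],
        check=True,
    )


if __name__ == "__main__":
    set_remote_admin(True)   # allow RPC/DCOM remote administration traffic
```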
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc781583(v=ws.10)
2019-06-16T07:40:14
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
Features of Windows Server Update Services Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2, Windows Server Update Services Server-Side Features The following features comprise the server-side component of the WSUS solution. More updates At least one WSUS server must connect to Microsoft Update to get updates and update information for Microsoft Windows, Office, SQL Server, and Exchange. Additional Microsoft product updates will become available on Microsoft Update in the future. Specific updates can be set to download automatically When a WSUS server downloads available updates, either from Microsoft Update or an upstream WSUS server, “synchronization” occurs. Administrators can choose which updates are downloaded to a WSUS server during synchronization, based on the following criteria: Product or product family (for example, Windows Server 2003 or Microsoft Office) Update category (for example, Critical Updates, and Drivers) Language (for example, English and Japanese only) In addition, administrators can specify a schedule for synchronization to initiate automatically. Automated actions for updates determined by administrator approval An administrator must approve every automated action to be carried out for the update. Approval actions include the following: Install Remove (this action is possible only if the update supports uninstall) Detect-only Decline In addition, the administrator can enforce a deadline for install or remove (uninstall) update approvals. By setting a deadline, the action the administrator specifies initiates by a certain date and time. The administrator can force an immediate download by setting a deadline for a time in the past. Ability to determine the applicability of updates before installing them Administrators can estimate how many computers need an update by approving a detect-only action for an update. A detect-only action determines, by computer, if an update is appropriate for installation. This enables the administrator to analyze the update’s impact before actually planning and deploying the update for installation. When an administrator approves an update for detection, the detection occurs for computers the next time they communicate with the WSUS server. The administrator can also create an automatic approval action for specific types of updates to be approved for either a detect action as well as for installation. Pre-approving updates for detection ensures applicability and need analysis can be automatically generated. Pre-approval of specific types of updates to a computer group gives administrators freedom from manually preparing test machines. Updates classified as Critical Updates or Security Updates are automatically approved for detection. Targeting Targeting enables administrators to deploy updates to specific computers and groups of computers. This can be configured either on the WSUS server directly, on the WSUS server by using Group Policy in an Active Directory® network environment, or on the client computer by editing registry settings. The following are examples of targeting-enabled. Database options The WSUS database stores update information, event information about update actions on client computers, and WSUS server settings. Administrators have the following options for the WSUS database. The Microsoft SQL Server 2000 Desktop Engine (Windows) (WMSDE) database that WSUS can install during setup on Windows Server 2003. 
An existing Microsoft® SQL Server™ 2000 database. An existing Microsoft Data Engine 2000 (MSDE) with Service Pack 3 (SP3) or later. Replica synchronization WSUS enables administrators to create an update management infrastructure consisting of a hierarchy of WSUS servers. WSUS servers can be scaled out to handle any number of clients. With replica synchronization, updates, target groups, and approvals created by the administrator of the central WSUS server are automatically propagated to WSUS servers designated as replica servers. This is beneficial because branch clients get updates from a local server, and having an unreliable low-bandwidth link to the central server is not a problem for client/server communication. Also, update approval is controlled by the central server; administration is not required at the branch. Reporting Using the WSUS reports, administrators can monitor the following activity. Update status Administrators can assess and monitor the level of update compliance for their client computers on an ongoing basis using the Status of Update report, which provides status for update approval and deployment per update, per computer, and per computer group, based on all events that are sent from the client computer. Computer status Administrators can assess the status of client computers with respect to the status of updates on those computers-for example, a summary of updates that have been installed or are needed for a particular computer. Computer compliance status Administrators can view or print a summary of compliance information for a specific computer, including report is in a printable format. Extensibility A software development kit (SDK) is available to enable administrators and developers to work with the .NET-based API. Administrators can create custom code to manage both Automatic Updates and WSUS servers. Developers can create management applications that integrate with WSUS. Configurable communication options Administrators have the flexibility of configuring computers to get updates directly from Microsoft Update, from an intranet WSUS server that distributes updates internally, or from a combination of both, depending on their network configuration. Administrators can configure a WSUS server to use a custom port for connecting to the intranet or Internet, if appropriate. The default port used by a WSUS server is port 80. Administrators can configure proxy server settings if the WSUS server connects to the Internet through a proxy server. Import and export and data migration from the command line Administrators can import and export update metadata and content between WSUS servers. This is a necessary task in a network with limited or restricted Internet connectivity. Administrators can seamlessly migrate their previous administrative settings, content approvals and content from a SUS server to a WSUS server. This method can also be useful for consolidation of WSUS servers. For example, administrators can migrate approvals for specific target groups from one WSUS server to another. Backup and restore WSUS supports a number of options for backup and restore, including a command-line tool for backing up MSDE and WMSDE databases, NTbackup for update content files, and SQL Enterprise manager for SQL Server metadata. Client-Side Features The following features comprise the client-side component of the WSUS solution. 
Powerful and extensible management of the Automatic Updates service In an Active Directory environment, administrators can configure the behavior of Automatic Updates (for example, scheduling and notification options), so that installations wait until the scheduled automatic installation time. Managing client computers through the Component Object Model (COM)–based API. An SDK is available. Self-updating for client computers If connected to a WSUS server, client computers can detect if a newer version of Automatic Updates is available, and then upgrade their Automatic Updates service automatically. Automatic detection of applicable updates Automatic Updates can download and install specific updates that are truly applicable to the computer. Automatic Updates works with the WSUS server to evaluate which updates should be applied to a specific client computer. This is initiated by approving an update for detect-only. Under-the-hood efficiency This area includes handling of end user license agreements (EULAs).
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc720434(v=ws.10)
2019-06-16T06:56:23
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
- Thumbprint Verification vSphere Replication checks for a thumbprint match. vSphere Replication trusts remote server certificates if it can verify the thumbprints through secure vSphere platform channels or, in some rare cases, after the user confirms them. vSphere Replication only takes certificate thumbprints into account when verifying the certificates and does not check certificate validity. - Verification of Thumbprint and Certificate Validity vSphere Replication checks the thumbprint and checks that all server certificates are valid. If you select the Accept only SSL certificates signed by a trusted Certificate Authority option, vSphere Replication refuses to communicate with a server with an invalid certificate. When verifying certificate validity, vSphere Replication checks expiration dates, subject names, and the certificate issuing authorities.
https://docs.vmware.com/en/vSphere-Replication/6.0/com.vmware.vsphere.replication-admin.doc/GUID-FAE28EC2-9136-47F5-9DED-ADBBAA6AB33D.html
2019-06-16T06:43:24
CC-MAIN-2019-26
1560627997801.20
[]
docs.vmware.com
Stardog is an Enterprise Knowledge Graph platform. Stardog’s semantic graphs, data modeling, and deep reasoning make it fast and easy to turn data into knowledge without writing code. Check out the Quick Start Guide to get Stardog installed and running in five easy steps. Introduction. Stardog is made with skill, taste, and a point of view in DC, Boston, Heidelberg, Madison, Moscow, Hanalei, Houston, South Dakota, Porto, Manchester, PA, Columbia, MO, and Pittsburgh. If you are upgrading to Stardog 4 from any previous version, please see Stardog 4 in the SNARL Migration Guide for details about auto-migrating pre-4.0 indexes. Linux and OSX Tell Stardog where its home directory (where databases and other files will be stored) is: $ export STARDOG_HOME=/data/stardog If you’re using some weird Unix shell that doesn’t create environment variables in this way, adjust accordingly. If STARDOG_HOME isn’t defined, Stardog will use the Java user.dir property value. Copy the stardog-license-key.bin into the right place: $ cp stardog-license-key.bin $STARDOG_HOME Of course stardog-license-key.bin has to be readable by the Stardog process. Stardog won’t run without a valid stardog-license-key.bin in STARDOG_HOME. Start the Stardog server. By default the server will expose SNARL and HTTP interfaces on port 5820.[2] $ ./stardog-admin server start Create a database with an input file: $ ./stardog-admin db create -n myDB data.ttl Query the database: $ ./stardog query myDB "SELECT DISTINCT ?s WHERE { ?s ?p ?o } LIMIT 10" You can use the Web Console to search or query the new database you created by visiting it in your browser. Now go have a drink: you’ve earned it. Windows The first steps are the same as above: set STARDOG_HOME and copy stardog-license-key.bin into it. Third, start the Stardog server. By default the server will expose SNARL and HTTP interfaces on port 5820.[3] > stardog-admin.bat server start This will start the server in the current command prompt; you should leave this window open and open a new command prompt window to continue. Fourth, create a database with some input file: > stardog-admin.bat db create -n myDB data.ttl Fifth, query the database: > stardog.bat query myDB "SELECT DISTINCT ?s WHERE { ?s ?p ?o } LIMIT 10" You can use the Web Console to search or query the new database you created by hitting it in your browser. You should drink the whole bottle, brave Windows user! Detailed information on using the query command in Stardog can be found on its man page. See the Managing Stored Queries section for configuration, usage, and details of stored queries. Stardog also supports federated queries,[4] which allow users to query distributed RDF via SPARQL-compliant data sources. You can use this to federate queries between several Stardog databases, or between Stardog and other public endpoints. See Full-Text Search for more details.
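Since the server exposes its HTTP interface on port 5820, the same query can also be issued over the SPARQL protocol with curl. A minimal sketch, assuming the default admin/admin superuser credentials and the myDB database created above (adjust host, credentials, and database name for your installation):
curl -u admin:admin -G "http://localhost:5820/myDB/query" \
  --data-urlencode "query=SELECT DISTINCT ?s WHERE { ?s ?p ?o } LIMIT 10" \
  -H "Accept: application/sparql-results+json"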
https://docs.stardog.com/
2017-04-23T09:49:32
CC-MAIN-2017-17
1492917118519.29
[]
docs.stardog.com
Creating Groups. Video Overviews Overview of Group Settings How to Use Groups in MemberPress Group Options Upgrade Path - When this is set, members can only subscribe to one Membership in the group at a time. If this is unset, members can subscribe to each Membership in the group simultaneously. Reset billing period - Shown only after enabling the Upgrade Path option, this option will force the billing period to be reset to the day the user purchased. For example, if the subscription is monthly, and was originally purchased on the 1st of the month, but upgraded on the 15th of the month, the next billing date would be the 15th instead of the 1st. Downgrade Path - Use the drop down to select the membership you would like a user to default back to should their subscription to any membership in the group become inactive (from a lapsed payment, a failed payment, or an expired subscription). If set to 'default', then the user will NOT be placed into another membership automatically. Note: this automatic downgrade will always be free - even if the membership has a price. So it's recommended that your downgrade path Membership is a free Membership level. Just be sure to use the Group Price Boxes shortcode found on this page; prorations will also be calculated. Here is an example of what this may look like:
https://docs.memberpress.com/article/61-creating-groups
2019-02-16T01:01:23
CC-MAIN-2019-09
1550247479729.27
[array(['https://d3vv6lp55qjaqc.cloudfront.net/items/2D1h1h100S1G0V3h3o2K/%5B7c94f14f41e2a0b11ace49950761cb49%5D_Image%2525202017-01-06%252520at%25252011.25.26%252520AM.png?X-CloudApp-Visitor-Id=1963854', None], dtype=object) array(['https://d3vv6lp55qjaqc.cloudfront.net/items/0R1M0m2w1v3k3o1r1X2W/Image%202017-01-06%20at%2011.29.34%20AM.png?X-CloudApp-Visitor-Id=1963854', None], dtype=object) ]
docs.memberpress.com
Migrating Opsview to Different Hardware This document is for migrating Opsview from one hardware to another. You will effectively have two different instances of Opsview, each with their own data stores. Note: If you have a distributed environment, you should disable slaves on the old Opsview, otherwise there will be contention between the two masters. Note: If you are changing architectures as part of the migration, read through these instructions first as some steps require exporting of data to an architecture independent format (mysql and RRDs in particular). It is not necessary to have the same version of Opsview on the new server - however, ensure you check the upgrade notes. Note, you cannot go backwards in versions on the new server. These instructions assume you have a new install of Opsview on the new server. There will be an outage to Opsview during the migration. Warnings Some things to consider when changing your Opsview master that are outside the scope of this document. Opsview Agents / NRPE If you have set up security on your NRPE agents by limiting the IP address of the servers allowed to interrogate them, don't forget to add in the new servers IP address. SNMP If you limit SNMP communication from a specific IP address, ensure devices have been updated to allow the new server If you send SNMP traps to the Opsview master, ensure these are redirected to the new server Firewalls If you have setup firewalls for web access, these will need to be updated. Build the new Opsview master Install Opsview on the new server. Migrate configuration files Migrate any configuration files you may have customised: - Migrate any custom plugins Transfer any extra plugins or custom scripts (event handlers or notification scripts) from the old server to the new one. Stop Opsview On the old and the new server: /etc/init.d/opsview stop /etc/init.d/opsview-web stop Migrate the Nagios® Core logs If you want the old Nagios Core log files, which are used for the Nagios Core availability reports, move all files within the /usr/local/nagios/var/archives directory to the new server, and also move /usr/local/nagios/var/nagios.log into the same area, taking into account the naming convention Migrate Existing Status Data Including Downtimes and Acknowledgements If you want to keep the existing status data, including downtimes, acknowledgements and Nagios comments, copy the file /usr/local/nagios/var/retention.dat to the new server. Migrate Nagvis and RRD data (MRTG, NMIS and Opsview performance graphs) Migrate the following directories if you want to retain history: - Nagvis - /usr/local/nagios/nagvis/etc - MRTG - /usr/local/nagios/var/mrtg - NMIS - /usr/local/nagios/nmis/database and /usr/local/nagios/nmis/var - Opsview performance graphs - /usr/local/nagios/var/rrd Note: If you are migrating across architectures, RRD files (from MRTG, NMIS and Opsview performance data) need to be exported and imported instead of copied. There is a script at /usr/local/nagios/installer/rrd_converter which can be used. Make sure there is enough space in /usr/local/nagios/var and /usr/local/nagios/nmis/database as it will make a copy of all your RRD files. Run it with rrd_converter export and it will create a tarball in /tmp. Run with rrd_converter import on the target system to import all those RRD files. 
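To make the RRD export/import step above a little more concrete, a rough sketch follows; the tarball name that rrd_converter writes to /tmp is illustrative, so check the actual file name before copying it across.
# On the old master, as the nagios user
/usr/local/nagios/installer/rrd_converter export
# Copy the tarball it creates in /tmp to the new master (file name is illustrative)
scp /tmp/rrd_converter*.tar newmaster:/tmp/
# On the new master, import the copied RRD files
/usr/local/nagios/installer/rrd_converter import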
Migrate Databases Migrate the databases from the old server to the new server - see the Backup databases and Restore databases instructions in the database migration document Ensure database permissions are correct by running db_mysql as mysql root on the new server, for example /usr/local/nagios/bin/db_mysql -u root -p<password> Upgrade the databases on the new server: su - nagios /usr/local/nagios/installer/upgradedb.pl Update Slaves In a distributed environment, run send2slaves -t and correct any errors. Run send2slaves to update slaves with the latest code. Start Opsview Start Opsview on the new server: /usr/local/nagios/bin/rc.opsview gen_config /etc/init.d/opsview-web start
https://docs.opsview.com/doku.php?id=opsview4.6:migratinghardware
2019-02-16T01:35:45
CC-MAIN-2019-09
1550247479729.27
[]
docs.opsview.com
One of the main benefits of using Moss is that you rarely have to deal with the low-level details of server and site configuration 😊. However, sometimes you need to get your hands dirty and provide extra configuration for your applications. Moss will honor your own config and won't overwrite it. In this article we tell you how you can set it up. PHP-FPM extra config Moss serves your PHP applications using PHP-FPM, an implementation of the FastCGI protocol for PHP. The configuration for PHP-FPM can be found on your servers within directories /etc/php/<version>/fpm/, where <version> are each of the PHP versions installed on your server. For the sake of simplicity, in the remainder of this article we provide examples for PHP 7.0. However everything stated here applies to all supported PHP versions. Just substitute 7.0 for the PHP version you're actually tweaking. moss@<server>:~$ ls -F /etc/php/7.0/fpm/ conf.d/ php-fpm.conf php.ini pool.d/ Moss sets up some of these configuration files on your behalf. But luckily, you may overwrite the values of the directives you want by including additional .ini files within /etc/php/7.0/fpm/conf.d/ (e.g. /etc/php/7.0/fpm/conf.d/99-extra.ini). Moss won't overwrite the configuration that you put into those files. Once you're done with the config, log into Moss and provision your website so that PHP-FPM is reloaded. Nginx extra config Moss installs and sets up the Nginx web server on your behalf. You can find the Nginx configuration files within the /usr/local/openresty/nginx/conf/ and /usr/local/openresty/nginx/sites/ directories of your server. You can tweak the configuration of each site by editing the following files: /usr/local/openresty/nginx/conf/server_params.site.com: Nginx directives within the server block of your site.com. /usr/local/openresty/nginx/conf/root_params.site.com: Nginx directives within the location / block of your site.com. /usr/local/openresty/nginx/conf/fastcgi_params.site.com: Nginx directives within blocks with FastCGI configs (block location ~ .php$ at the moment) of your site.com. Moss won't overwrite the configuration that you put into those files. Once you're done with the config, log into Moss and provision your website so that Nginx is reloaded. Apache extra config Most sites provisioned via Moss either use Nginx in standalone mode or as a reverse proxy in front of Apache. In the latter case, you may fine-tune Apache configs in different ways: - Create config files within /etc/apache2/conf-available/ and run a2enconf afterwards (e.g. create file myconfig.conf and run a2enconf myconfig). Since you're changing the global configuration, you must SSH into your server as user moss and become root using sudo. Once you're done, you must log into Moss and provision your website so that Apache is reloaded. - Create .htaccess files within your sites' directories. Since you're dealing with a given website, you must create these files as the server user that runs such website. - WordPress users will likely employ a plugin to make some configs like setting up the browser cache, among others. Such plugins could add the required configs to a .htaccess file, so you don't need to do it by hand. What's next? Need some examples? 
Below you can find articles that describe some common use cases: - How to increase the maximum upload size - How to set up basic auth for a website - How to tweak fastcgi params for a website - How to add rewrite rules in Nginx for a website - How to set up the cache policy for the resources of a website Don't hesitate to contact us if you have any other use case in mind 😉.
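As a quick illustration of the PHP-FPM override mechanism described above: the directive names below are standard PHP settings, but the values are placeholders rather than recommendations, so pick limits that suit your application.
# Create an extra .ini file that Moss will leave untouched
sudo tee /etc/php/7.0/fpm/conf.d/99-extra.ini > /dev/null <<'EOF'
memory_limit = 256M
upload_max_filesize = 64M
post_max_size = 64M
EOF
# Then log into Moss and provision the website so that PHP-FPM is reloaded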
https://docs.moss.sh/help-in-english/custom-configs/tweak-the-configuration-of-your-sites
2019-02-16T02:00:59
CC-MAIN-2019-09
1550247479729.27
[]
docs.moss.sh
File integrity monitoring (FIM) is available for Linux and Darwin using inotify and FSEvents. The daemon reads a list of files/directories from the osquery configuration. The actions (and hashes when appropriate) to those selected files populate the file_events table. To get started with FIM, you must first identify which files and directories you wish to monitor. Then use fnmatch-style, or filesystem globbing, patterns to represent the target paths. You may use standard wildcards "*" or SQL-style wildcards "%". For example, you may want to monitor /etc along with other files on a Linux system. After you identify your target files and directories you wish to monitor, add them to a new section in the config, file_paths. Note: Many applications may replace a file instead of editing it in place. If you monitor the file directly, osquery will need to be restarted in order to monitor the replacement. This can be avoided by monitoring the containing directory instead. The three areas below that are relevant to FIM are the scheduled query against file_events, the added file_paths section and the exclude_paths sections. The file_events query is scheduled to collect all of the FIM events that have occurred on any files within the paths specified within file_paths but excluding the paths specified within exclude_paths on a five minute interval. At a high level this means events are buffered within osquery and sent to the configured logger every five minutes. Note: You cannot match recursively inside a path. For example /Users/%%/Configuration.conf is not a valid wildcard. Example FIM Config { "schedule": { "crontab": { "query": "SELECT * FROM crontab;", "interval": 300 }, "file_events": { "query": "SELECT * FROM file_events;", "removed": false, "interval": 300 } }, "file_paths": { "homes": [ "/root/.ssh/%%", "/home/%/.ssh/%%" ], "etc": [ "/etc/%%" ], "tmp": [ "/tmp/%%" ] }, "exclude_paths": { "homes": [ "/home/not_to_monitor/.ssh/%%" ], "tmp": [ "/tmp/too_many_events/" ] } } Only valid category names may be used under the exclude_paths node. valid category - Categories that are mentioned under the file_paths node. In the above example config, homes, etc and tmp are valid categories. invalid category - Any other category name apart from homes, etc and tmp is considered an invalid category. In addition to file_paths one can use file_paths_query to specify the file paths to monitor as the path column of the results of the given query, for example: { "file_paths_query": { "category_name": [ "SELECT DISTINCT '/home/' || username || '/.gitconfig' as path FROM last WHERE username != '' AND username != 'root';" ] } } Note: Invalid categories get dropped silently, i.e. they don't have any effect on the events generated. Sample Event Output As file changes happen, events similar to the following are generated (shown here abridged): ... "target_path":"\/root\/.ssh\/authorized_keys", "time":"1429208712", "transaction_id":"0" } Tuning Linux inotify limits For Linux, osquery uses inotify to subscribe to file changes at the kernel level for performance. This introduces some limitations on the number of files that can be monitored since each inotify watch takes up memory in kernel space (non-swappable memory). Adjusting your limits accordingly can help increase the file limit at a cost of kernel memory. 
Example sysctl.conf modifications #/proc/sys/fs/inotify/max_user_watches = 8192 fs.inotify.max_user_watches = 524288 #/proc/sys/fs/inotify/max_user_instances = 128 fs.inotify.max_user_instances = 256 #/proc/sys/fs/inotify/max_queued_events = 16384 fs.inotify.max_queued_events = 32768 File Accesses In addition to FIM which generates events if a file is created/modified/deleted, osquery also supports file access monitoring which can generate events if a file is accessed. File accesses on Linux using inotify may induce unexpected and unwanted performance reduction. To prevent 'flooding' of access events alongside FIM, enabling access events for file_path categories is an explicit opt-in. You may add categories that were defined in your file_paths stanza: { "file_paths": { "homes": [ "/root/.ssh/%%", "/home/%/.ssh/%%" ], "etc": [ "/etc/%%" ], "tmp": [ "/tmp/%%" ] }, "file_accesses": ["homes", "etc"] } The above configuration snippet will enable file integrity monitoring for 'homes', 'etc', and 'tmp' but only enable access monitoring for the 'homes' and 'etc' directories. NOTICE: The hashes of files will not be calculated to avoid generating additional access events. Process File Accesses on macOS It is possible to monitor for file accesses by process using the osquery macOS kernel module. File accesses induce a LOT of stress on the system and are more or less useless giving the context from userland monitoring systems (aka, not having the process that caused the modification). If the macOS kernel extension is running, the process_file_events table will be populated using the same file_paths key in the osquery config. This implementation of access monitoring includes process PIDs and should not cause CPU or memory latency outside of the normal kernel extension/module guarantees. See ../development/kernel.md for more information.
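A small sketch of applying the inotify limits from the example sysctl.conf above at runtime and persisting them; the values mirror the example and should be tuned for your own fleet.
# Apply the new limits immediately
sudo sysctl -w fs.inotify.max_user_watches=524288
sudo sysctl -w fs.inotify.max_user_instances=256
sudo sysctl -w fs.inotify.max_queued_events=32768
# Persist them by adding the same keys to /etc/sysctl.conf (as shown above), then reload
sudo sysctl -p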
https://osquery.readthedocs.io/en/latest/deployment/file-integrity-monitoring/
2019-02-16T01:19:54
CC-MAIN-2019-09
1550247479729.27
[]
osquery.readthedocs.io
nodetool disablethrift Disables the Thrift server. Synopsis nodetool [options] disablethrift The command is run from installation_location/resources/cassandra/bin. Disabling thrift on a node prevents the node from acting as a coordinator. The node can still be a replica for a different coordinator and data read at consistency level ONE could be stale. To cause a node to ignore read requests from other coordinators, nodetool disablegossip would also need to be run. However, if both commands are run, the node will not perform repairs, and the node will continue to store stale data. If the goal is to repair the node, set the read operations to a consistency level of QUORUM or higher while you run repair. An alternative approach is to delete the node's data and restart the Cassandra process.
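A brief sketch of the sequence described above; remote-host options such as -h and -p are omitted, so run the commands against the intended node.
# Stop the node from serving Thrift clients / coordinating their requests
nodetool disablethrift
# Optionally stop it from answering other coordinators as a replica
nodetool disablegossip
# ...perform the maintenance or repair work, then re-enable both
nodetool enablegossip
nodetool enablethrift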
https://docs.datastax.com/en/cassandra/3.0/cassandra/tools/toolsDisableThrift.html
2019-02-16T01:29:27
CC-MAIN-2019-09
1550247479729.27
[]
docs.datastax.com
Installation Guide About This Book InterSystems IRIS Data Platform™ runs on several different platforms. Please check the online InterSystems Supported Platforms document for this release to verify that InterSystems IRIS runs on your particular version of the supported operating systems. This Installation Guide contains the following chapters. To plan and prepare to install InterSystems IRIS, see the chapter: Preparing to Install InterSystems IRIS Install InterSystems IRIS following the instructions in the appropriate platform-specific installation chapter: Installing on Microsoft Windows Installing on UNIX®, Linux, and MacOS If you are upgrading from InterSystems IRIS Version 2018.1 or later, read the following chapter for a list of pre-installation upgrade tasks: Upgrading InterSystems IRIS To learn how to create an installation manifest describing a specific InterSystems IRIS configuration and use it to generate code to configure an InterSystems IRIS instance, read the chapter: Creating and Using an Installation Manifest For detailed information, see the Table of Contents.
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCI_preface
2019-02-16T01:02:15
CC-MAIN-2019-09
1550247479729.27
[]
docs.intersystems.com
The autoconnect feature re-establishes the device connection when: The virtual machine is cycling through power operations, such as Power Off/Power On, Reset, Pause/Resume. The device is unplugged from the host then plugged back in to the same USB port. The device is power cycled but has not changed its physical connection path. The device is mutating identity during usage. A new virtual USB device is added. The USB passthrough autoconnect feature identifies the device by using the USB path of the device on the host. Autoconnect might not work if you replace a USB device with another USB device that works at a different speed. For example, you might connect a USB 2.0 high-speed device to a port and connect that device to the virtual machine. If you unplug the device from the host and plug a USB 1.1 or USB 3.0 device into the same port, the device might not connect to the virtual machine. For a list of supported USB devices for passthrough from an ESXi host to a virtual machine, see the VMware knowledge base article.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vm_admin.doc/GUID-4C61BFEA-0EBD-4FED-B807-9E125A8AC81A.html
2019-02-16T01:46:04
CC-MAIN-2019-09
1550247479729.27
[]
docs.vmware.com
Chapter 13. Data Replication, Synchronization and Transformation Services Abstract This chapter describes how to replicate data between Virtuoso and non-Virtuoso servers. The material in this chapter covers the programmatic means of performing these operations. You can also use the graphical interface to do replication and synchronization. This is covered in the Replication & Synchronization section of the Visual Server Administration Interface chapter. Table of Contents - 13.1. Introduction - 13.1.1. Snapshot replication - 13.1.2. Transactional replication - 13.2. Snapshot Replication - 13.2.1. Non incremental snapshot replication - 13.2.2. Incremental snapshot replication - 13.2.3. Command reference - 13.2.4. Bi-Directional Snapshot Replication - 13.2.5. Registry variables - 13.2.6. Heterogeneous snapshot replication - 13.2.7. Data type mappings - 13.2.8. Objects created by incremental snapshot replication - 13.2.9. Objects created by bi-directional snapshot replication - 13.2.10. Replication system tables - 13.2.11. Table snapshot logs - 13.3. Transactional Replication - 13.3.1. Publishable Items - 13.3.2. Errors in Replication - 13.3.3. Publisher Transactional Replication Functions - 13.3.4. Subscriber Functions - 13.3.5. Common Status Functions - 13.3.6. Bi-Directional Transactional Replication - 13.3.7. Purging replication logs - 13.3.8. Objects created by transactional replication - 13.4. Virtuoso scheduler - 13.4.1. SYS_SCHEDULED_EVENT - 13.5. Transactional Replication Example - - 13.6. Replication Logger Sample - 13.6.1. Configuration of the Sample - 13.6.2. Synchronization - 13.6.3. Running the Sample - 13.6.4. Notes on the Sample's Dynamics
http://docs.openlinksw.com/virtuoso/ch-repl/
2019-02-16T01:01:11
CC-MAIN-2019-09
1550247479729.27
[]
docs.openlinksw.com
Dashboards Dashboards are an effective tool for monitoring high-priority network traffic or troubleshooting issues because they consolidate multiple metric charts into a central location where you can investigate and share data. You can also add text boxes, formatted through Markdown, to provide content for stakeholders. Dashboards and collections are located in the dashboard dock. Click Collections to display all of the dashboard collections you own or that have been shared with you. The number of dashboards in each collection is displayed. Click the collection name to view the owner, who the collection is shared with, and the list of dashboards in the collection. Only the collection owner can modify or delete a collection. However, because dashboards can be added to multiple collections, you can create a collection and share it with other users and groups. Click Dashboards to display an alphabetized list of all of the dashboards that you own or that have been shared with you, including dashboards shared through a collection. The owner of each dashboard is displayed. An icon next to the owner name indicates that the dashboard was shared with you. Creating dashboards If you want to monitor specific metrics or custom metrics, you can create a custom dashboard. Custom dashboards are stored separately for each user that accesses the ExtraHop system. After you build a custom dashboard, you can share it with other ExtraHop users. There are several ways to create your own dashboard: - Create a custom dashboard or create a dashboard with dynamic sources from scratch - Copy an existing dashboard, and then customize it - Copy an existing chart, and then save it to a new dashboard New dashboards are opened in Edit Layout mode, which enables you to add, arrange, and delete components within the dashboard. After creating a dashboard, you can complete the following tasks: Click the command menu in the upper right corner of the page to edit the dashboard properties or delete the dashboard. Learn how to monitor your network by completing a dashboard walkthrough. Viewing dashboards Dashboards are composed of chart widgets, alert widgets, and text box widgets that can present a concise view about critical systems or about systems managed by a particular team. Click within a chart to interact with the metric data: - Click a chart title to view a list of metric sources and menu options. - Click a metric label to drill down and investigate by a metric detail. - Click a metric label and click Hold Focus to display only that metric in the chart. - Click a chart title or a metric label and then click Description to learn about the source metric. - Click a detection marker to navigate to the detection detail page Change the time selector to observe data changes over time: Export and share dashboard data By default, all custom dashboards are private and no other ExtraHop users can view or edit your dashboard. Share your dashboard to grant view or edit permission to other ExtraHop users and groups, or share a collection to grant view-only permission to multiple dashboards. You can only modify a shared dashboard if the owner granted you edit permission. However, you can copy and customize a shared dashboard without edit permission. Export data by individual chart or by the entire dashboard: - To export individual chart data, click the chart title and select one of the following options from the drop-down menu: Export to CSV or Export to Excel. 
- To present or export the entire dashboard, click the command menu in the upper right corner of the page and select one of the following options: Presentation Mode, Export to PDF, or Scheduled Reports (Command appliance and Reveal(x) 360 only). System dashboards The ExtraHop system provides the following built-in dashboards that display common protocol activity about the general behavior and health of your network. System dashboards are located in the default System Dashboards collection in the dashboard dock and cannot be added to another collection. System dashboards can be viewed by any user except for restricted users. - Security dashboard (Reveal(x) only) - Monitor general information about potential security threats on your network. For more information about charts in this dashboard, see Security dashboard. - System Health dashboard - Ensure that your ExtraHop system is running as expected, troubleshoot issues, and assess areas that are affecting performance. For more information about charts in this dashboard, see System Health dashboard.
https://docs.extrahop.com/8.6/dashboards/
2022-06-25T10:42:11
CC-MAIN-2022-27
1656103034930.3
[array(['/images/8.6/dashboards_dock.png', None], dtype=object) array(['/images/8.6/dashboards_chart_interact.png', None], dtype=object)]
docs.extrahop.com
Provisioning Fedora CoreOS on VMware This guide shows how to provision new Fedora CoreOS (FCOS) nodes on the VMware hypervisor. Prerequisites Before provisioning an FCOS machine, you must have an Ignition configuration file containing your customizations. Download the VMware OVA image for your stream: coreos-installer download -s "${STREAM}" -p vmware -f ova The OVA is then deployed with the Ignition configuration passed in through two guest properties, guestinfo.ignition.config.data.encoding="${CONFIG_ENCODING}" and guestinfo.ignition.config.data="${CONFIG_ENCODED}" (for example via ovftool --extraConfig options or the VM's advanced settings). Modifying OVF metadata Fedora CoreOS is intended to run on generally supported releases of VMware ESXi, VMware Workstation, and VMware Fusion. Accordingly, the Fedora CoreOS VMware OVA image specifies a virtual hardware version that may not be compatible with older, unsupported VMware products. However, you can modify the image’s OVF metadata to specify an older virtual hardware version. The VMware OVA is simply a tarball that contains the files disk.vmdk and coreos.ovf. In order to edit the metadata used by FCOS as a guest VM, you should untar the OVA artifact, edit the OVF file, then create a new OVA file. The example commands below change the OVF hardware version from the preconfigured value to hardware version 13. (Note: the defaults in the OVF are subject to change.) tar -xvf fedora-coreos-36.20220605.3.0-vmware.x86_64.ova sed -iE 's/vmx-[0-9]*/vmx-13/' coreos.ovf tar -H posix -cvf fedora-coreos-36.20220605.3.0-vmware-vmx-13.x86_64.ova coreos.ovf disk.vmdk
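A hedged sketch of one way to deploy the downloaded OVA with those guest properties using ovftool; the VM name, the Ignition file name config.ign, and the vi:// target locator are placeholders, base64 flags differ between GNU and BSD/macOS, and your environment may use govc or the vSphere UI instead.
CONFIG_ENCODING='base64'
CONFIG_ENCODED="$(base64 -w0 config.ign)"
VM_NAME='fcos-node-1'
# --allowExtraConfig is needed so ovftool will pass through the guestinfo.* keys
ovftool \
  --name="${VM_NAME}" \
  --allowExtraConfig \
  --extraConfig:guestinfo.ignition.config.data.encoding="${CONFIG_ENCODING}" \
  --extraConfig:guestinfo.ignition.config.data="${CONFIG_ENCODED}" \
  fedora-coreos-36.20220605.3.0-vmware.x86_64.ova \
  "vi://user@vcenter.example.com/datacenter/host/cluster"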
https://docs.fedoraproject.org/uz/fedora-coreos/provisioning-vmware/
2022-06-25T11:35:41
CC-MAIN-2022-27
1656103034930.3
[]
docs.fedoraproject.org
An Act to create 47.05 of the statutes; Relating to: competitive integrated employment of persons with a disability and granting rule-making authority. (FE) Bill Text (PDF) Fiscal Estimates and Reports SB514 ROCP for Committee on Senate Organization (PDF) SB514 ROCP for Committee on Workforce Development, Military Affairs and Senior Issues On 2/8/2018 (PDF) LC Bill Hearing Materials Wisconsin Ethics Commission information 2017 Assembly Bill 625 - A - Enacted into Law
https://docs.legis.wisconsin.gov/2017/proposals/sb514
2022-06-25T10:32:42
CC-MAIN-2022-27
1656103034930.3
[]
docs.legis.wisconsin.gov
The array[] value constructor The array[] value constructor is a special variadic function. Uniquely among all the functions described in this "Array data types and functionality" major section, it uses square brackets ( []) to surround its list of actual arguments. Purpose and signature Purpose: Create an array value from scratch using an expression for each of the array's values. Such an expression can itself use the array[] constructor or an array literal. Signature input value: [anyarray | [ anyelement, [anyelement]* ] return value: anyarray Note: You can meet the goal of creating an array from directly specified values, instead, by using an array literal. These three ordinary functions also create an array value from scratch: array_fill() creates a "blank canvas" array of the specified shape with all values set to the same value that you want. array_agg() creates an array (of, in general, an implied "row" type) from a SQL subquery. text_to_array() creates a text[] array from a single text value that uses a specifiable delimiter to break it into individual values. Example: create type rt as (f1 int, f2 text); select array[(1, 'a')::rt, (2, 'b')::rt, (3, 'dog \ house')::rt]::rt[] as arr; This is the result: arr -------------------------------------------- {"(1,a)","(2,b)","(3,\"dog \\\\ house\")"} Whenever an array value is shown in ysqlsh, it is implicitly ::text typecast. This text value can be used immediately by enquoting it and typecasting it to the appropriate array data type to recreate the starting value. The YSQL documentation refers to this form of the literal as its canonical form. It is characterized by its complete lack of whitespace except within text scalar values and within date-time scalar values. This term is defined formally in Defining the canonical form of a literal. To learn why you see four consecutive backslashes, see Statement of the rules. Users who are familiar with the rules that are described in that section often find it expedient, for example when prototyping code that builds an array literal, to create an example value first, ad hoc, using the array[] constructor, like the code above does, to see an example of the syntax that their code must create programmatically. Using the array[] constructor in PL/pgSQL code The example below attempts to make many teaching points in one piece of code. - The actual syntax, when the expressions that the array[] constructor uses are all literals, is far simpler than the syntax that governs how to construct an array literal. - You can use all of the YSQL array functionality in PL/pgSQL code, just as you can in SQL statements. The code creates and invokes a table function, and not just a DO block, to emphasize this interoperability point. - Array-like functionality is essential in any programming language. - The array[] constructor is most valuable when the expressions that it uses are composed using declared variables, and especially formal parameters, that are used to build whatever values are intended. In this example, the values have the user-defined data type "rt". In other words, the array[] constructor is particularly valuable when you build an array programmatically from scalar values that you know first at run time. - It vividly demonstrates the semantic effect of the array[] constructor like this: declare r rt[]; two_d rt[]; begin ... 
assert (array_dims(r) = '[1:3]'), 'assert failed'; one_d_1 := array[r[1], r[2], r[3]]; assert (one_d_1 = r), 'assert failed'; array_dims() is documented in the "Functions for reporting the geometric properties of an array" section. Run this to create the required user-defined "row" type and the table function and then to invoke it. -- Don't create "type rt" if it's still there following the previous example. create type rt as (f1 int, f2 text); create function some_arrays() returns table(arr text) language plpgsql as $body$ declare i1 constant int := 1; t1 constant text := 'a'; r1 constant rt := (i1, t1); i2 constant int := 2; t2 constant text := 'b'; r2 constant rt := (i2, t2); i3 constant int := 3; t3 constant text := 'dog \ house'; r3 constant rt := (i3, t3); a1 constant rt[] := array[r1, r2, r3]; begin arr := a1::text; return next; declare r rt[]; one_d_1 rt[]; one_d_2 rt[]; one_d_3 rt[]; two_d rt[]; n int not null := 0; begin ---------------------------------------------- -- Show how arrays are useful, in the classic -- sense, as what EVERY programming language -- needs to handle a number of items when the -- number isn't known until run time. for j in 1..3 loop n := j + 100; r[j] := (n, chr(n)); end loop; -- This further demonstrates the semantics -- of the array[] constructor. assert (array_dims(r) = '[1:3]'), 'assert failed'; one_d_1 := array[r[1], r[2], r[3]]; assert (one_d_1 = r), 'assert failed'; ---------------------------------------------- one_d_2 := array[(104, chr(104)), (105, chr(105)), (106, chr(106))]; one_d_3 := array[(107, chr(107)), (108, chr(108)), (109, chr(109))]; -- Show how the expressions that define the outcome -- of the array[] constructor can themselves be arrays. two_d := array[one_d_1, one_d_2, one_d_3]; arr := two_d::text; return next; end; end; $body$; select arr from some_arrays(); It produces two rows. This is the first: arr -------------------------------------------- {"(1,a)","(2,b)","(3,\"dog \\\\ house\")"} And this is the second row. The readability was improved by adding some whitespace manually: { {"(101,e)","(102,f)","(103,g)"}, {"(104,h)","(105,i)","(106,j)"}, {"(107,k)","(108,l)","(109,m)"} } Using the array[] constructor in a prepared statement This example emphasizes the value of using the array[] constructor over using an array literal because it lets you use expressions like chr() within it. -- Don't create "type rt" if it's still there followng the previous examples. create type rt as (f1 int, f2 text); create table t(k serial primary key, arr rt[]); prepare stmt(rt[]) as insert into t(arr) values($1); -- It's essential to typecast the individual "rt" values. execute stmt(array[(104, chr(104))::rt, (105, chr(105))::rt, (106, chr(106))::rt]); This execution of the prepared statement, using an array literal as the actual argument, is semantically equivalent: execute stmt('{"(104,h)","(105,i)","(106,j)"}'); But here, of course, you just have to know in advance that chr(104) is h, and so on. Prove that the results of the two executions of the prepared statement are identical thus: select ( (select arr from t where k = 1) = (select arr from t where k = 2) )::text as result; It shows this: result -------- true
https://docs.yugabyte.com/preview/api/ysql/datatypes/type_array/array-constructor/
2022-06-25T11:25:48
CC-MAIN-2022-27
1656103034930.3
[]
docs.yugabyte.com
infix xor Documentation for infix xor assembled from the following types: language documentation Operators (Operators) infix xor Same as infix ^^, except with looser precedence. Returns the operand that evaluates to True in boolean context, if and only if the other operand evaluates to False in boolean context. If both operands evaluate to False, returns the last argument. If both operands evaluate to True, returns Nil. When chaining, returns the operand that evaluates to True, if and only if there is one such operand. If more than one operand is true, it short-circuits after evaluating the second and returns Nil. If all operands are false, returns the last one.
https://docs-stage.raku.org/routine/xor
2022-06-25T11:44:07
CC-MAIN-2022-27
1656103034930.3
[]
docs-stage.raku.org
This article should help if you encounter the following error and accompanying stack trace: occurs when your Java environment does not trust the certificate of the server running your SonarQube instance. To alleviate this issue, we need to add the server certificate to the Java key store following these steps. You will need Java keytool, and the location of your CACERTS file. These are typically located in your JRE or JDK such as this: ./jdk1.6.0_24/jre/lib/security/cacerts ./jdk-11.0.9/bin/keytool.exe Navigate to your server in Chrome > click the padlock on the left of address bar > Click Certificate Certification Path > select the root certificate > View Certificate Details > Copy to File Select DER encoding and download In an administrative command prompt, navigate to the directory with your downloaded cert and add the certification to your CACERTS using the keytool (keep in mind the directory syntax will change depending on your OS). keytool -import -alias MyCert -keystore "C:\Program Files\Java\jdk-11.0.9\lib\security\cacerts" -file cert.cer
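To confirm the import succeeded, you can list the alias back out of the same keystore. A small sketch (the keystore path matches the example above; the default cacerts store password is typically "changeit" unless it has been changed on your system):
keytool -list -v -alias MyCert -keystore "C:\Program Files\Java\jdk-11.0.9\lib\security\cacerts"
After the certificate shows up as trusted, restart the Java process or analysis job that connects to SonarQube so it picks up the updated trust store.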
https://docs.codescan.io/hc/en-us/articles/360059029511-PKIX-Path-Building-Failed
2022-06-25T11:28:50
CC-MAIN-2022-27
1656103034930.3
[array(['https://i.imgur.com/XbVLqLF.png', None], dtype=object) array(['https://i.imgur.com/fY75oQ4.png', None], dtype=object) array(['https://i.imgur.com/lV5Dcos.png', None], dtype=object) array(['https://i.imgur.com/J7L5wX8.png', None], dtype=object) array(['https://i.imgur.com/4CECdQC.png', None], dtype=object) array(['https://i.imgur.com/aAdtbP2.png', None], dtype=object)]
docs.codescan.io
public class SpaceSynchronizationEndpointInterceptor extends SpaceSynchronizationEndpoint {
    ...
    private SomeExternalDataSource externalDataSource = ...
    // The SpaceSynchronizationEndpoint synchronization callbacks are overridden here to push the
    // incoming data to externalDataSource; the method bodies are elided in this extract.
    @Override
    ...
    @Override
    ...
}
https://docs.gigaspaces.com/xap/10.2/dev-java/space-synchronization-endpoint-api.html
2022-06-25T11:51:54
CC-MAIN-2022-27
1656103034930.3
[]
docs.gigaspaces.com
Eclipse MicroProfile Health Check API These settings affect the HealthCheck endpoints, so exert extreme caution when making these changes. Upgrading from MicroProfile 3.x to 4.x MicroProfile 4.0 brings with it a number of changes to MicroProfile Health. The main incompatible change brought in by this upgrade is the removal of the deprecated @Health annotation. There isn't a workaround for this; the annotation is simply gone: all health checks must be registered as a @Readiness, @Liveness, or @Startup check.
https://docs.payara.fish/enterprise/docs/Technical%20Documentation/MicroProfile/HealthCheck.html
2022-06-25T10:15:39
CC-MAIN-2022-27
1656103034930.3
[array(['../../_images/microprofile/health-check.png', 'Set Health Check Configuration'], dtype=object)]
docs.payara.fish
Most packages need some form of explanation to help users have the best experience and optimize its use. This page provides some tips for how to structure the information and format the documentation. After the title of the package, you should give a basic overview of what the package does and/or what it contains. Following the overview, include instructions for installing, and any system requirements and/or limitations. You can also provide links for getting help and providing feedback, including public forums or knowledge bases, and helpdesk contacts. After this preliminary information, you can provide more in-depth workflows, description of the user interface or directory listings for samples, and then more advanced topics. It’s best to provide reference pages near the end. Markdown is a lightweight format commonly used in packages. Many repository hosting services (such as GitHub and Bitbucket) support it for READMEs and documentation sites. You can provide an MD file in the Documentation~ folder under your package root so that if your package’s user clicks the View documentation link in the details pane of Unity’s Package Manager, the user’s default MD viewer opens the file. Alternatively, you can use your own website to host your documentation. To set the location for the View documentation link to point to your own website, set it with the documentationUrl property in your package.json file. If you decide to use a Markdown file to document your package, you can find more information about how to write MD files from any number of help sites, such as:
https://docs.unity3d.com/Manual/cus-document.html
2022-06-25T10:48:58
CC-MAIN-2022-27
1656103034930.3
[]
docs.unity3d.com
What’s new in Kubernetes 1.4: Cluster creation with two commands - To get started with Kubernetes a user must provision nodes, install Kubernetes and bootstrap the cluster. A common request from users is to have an easy, portable way to do this on any cloud (public, private, or bare metal). - Kubernetes 1.4 introduces ‘kubeadm’ which reduces bootstrapping to two commands, with no complex scripts involved. Once Kubernetes is installed, kubeadm init starts the master while kubeadm join joins the nodes to the cluster. - Installation is also streamlined by packaging Kubernetes with its dependencies, for most major Linux distributions including Red Hat and Ubuntu Xenial. This means users can now install Kubernetes using familiar tools such as apt-get and yum. - Add-on deployments, such as for an overlay network, can be reduced to one command by using a DaemonSet. - Enabling this simplicity is a new certificates API and its use for kubelet TLS bootstrap, as well as a new discovery API. Expanded stateful application support - While cloud-native applications are built to run in containers, many existing applications need additional features to make it easy to adopt containers. Most commonly, these include stateful applications such as batch processing, databases and key-value stores. In Kubernetes 1.4, we have introduced a number of features simplifying the deployment of such applications. Cluster federation API additions - One of the most requested capabilities from our global customers has been the ability to build applications with clusters that span regions and clouds. Container security support - Administrators of multi-tenant clusters require the ability to provide varying sets of permissions among tenants, infrastructure components, and end users of the system. Infrastructure enhancements - We continue adding to the scheduler, storage and client capabilities in Kubernetes based on user and ecosystem needs. - Scheduler - introducing inter-pod affinity and anti-affinity Alpha for users who want to customize how Kubernetes co-locates or spreads their pods. Also priority scheduling capability for cluster add-ons such as DNS, Heapster, and the Kube Dashboard. - Disruption SLOs - Pod Disruption Budget is introduced to limit impact of pods deleted by cluster management operations (such as node upgrade) at any one time. - Storage - New volume plugins for Quobyte and Azure Data Disk have been added. - Clients - Swagger 2.0 support is added, enabling non-Go clients. Kubernetes Dashboard UI - lastly, a great looking Kubernetes Dashboard UI with 90% CLI parity for at-a-glance management. For a complete list of updates see the release notes on GitHub. Apart from features the most impressive aspect of Kubernetes development is the community of contributors. This is particularly true of the 1.4 release, the full breadth of which will unfold in upcoming weeks.
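A minimal sketch of the two-command bootstrap described above, using the kubeadm syntax of that era; the token and master address are placeholders, and current kubeadm releases use slightly different flags, so check the reference for your version.
# On the master
kubeadm init
# On each node, using the token printed by kubeadm init
kubeadm join --token=<token> <master-ip>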
https://v1-23.docs.kubernetes.io/blog/2016/09/kubernetes-1-4-making-it-easy-to-run-on-kuberentes-anywhere/
2022-06-25T11:23:12
CC-MAIN-2022-27
1656103034930.3
[]
v1-23.docs.kubernetes.io
Profile Management 2203 LTSR Replicate user stores You can replicate a user store to multiple paths, in addition to the path that the Path to user store policy specifies, upon each logon and logoff. You can configure the policy through Microsoft Active Directory Group Policy Management, Citrix Studio, and Workspace Environment Management (WEM). To configure the Replicate user stores policy through Microsoft Active Directory Group Policy Management, complete the following steps: - Open the Group Policy Management Editor. Under Computer Configuration > Administrative Templates > Citrix Components > Profile Management > Advanced settings, double-click the Replicate user stores policy. Set the policy to Enabled, set the paths to replicated user stores, and then click OK. The paths to replicated user stores - along with the path that the Path to user store policy specifies - form a complete list of remote user profile storage. - For your changes to take effect, run the gpupdate /force command from the command prompt on the machine where Profile Management is installed. Log off from all sessions and then log back on. To configure the Replicate user stores policy in Citrix Studio, complete the following steps: In the left pane of Citrix Studio, click Policies. In the Create Policy window, type the policy in the search box. For example, type “Replicate user stores.” Click Select to open the Replicate user stores policy. Select Enabled, type the paths to replicated user stores, and then click OK. Note: Press Enter to separate multiple entries. To configure the Replicate user stores policy in WEM, complete the following steps: In the administration console, navigate to Policies and Profiles > Citrix Profile Management Settings > Advanced Settings. On the Advanced Settings tab, select or clear the Enable Replicate user stores check box and set the paths to replicated user stores.
https://docs.citrix.com/en-us/profile-management/current-release/configure/replicate-user-stores.html
2022-06-25T11:22:24
CC-MAIN-2022-27
1656103034930.3
[]
docs.citrix.com
log-filter-data Section The log-filter-data section contains configuration options used to define the treatment of filtering data in log output on a key-by-key basis. This section contains one configuration option in the form of <key name>. Refer to the Hide Selected Data in Logs chapter in the Genesys 8.1 Security Deployment Guide for complete information about this option.
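A hedged illustration of the key-by-key form this section takes; the key name and the value below are hypothetical, and the set of accepted values is documented in the Genesys 8.1 Security Deployment Guide:

[log-filter-data]
; hypothetical example: treat the attached-data key "Password" specially in log output
Password = hide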
https://docs.genesys.com/Documentation/TS/latest/TSCSTA/CLogFilterDataSection
2022-06-25T12:27:42
CC-MAIN-2022-27
1656103034930.3
[]
docs.genesys.com
Some key notes about Areas->Boundary Checking->Edge mode Why is Edge mode only supported in certain render engines? Edge mode works at render time, removing elements which are outside of the scatter area. This requires a custom shader with certain capabilities, and currently only V-Ray supports this technique. The "trimmed part" is treated by the V-Ray render engine as a kind of "Matte object" internally (this way the render engine ignores it in its calculations). For other supported renderers, you will need to add a Forest Edge map to the material's Opacity input. Forest Material Optimiser can do this for you automatically. What do I have to do to preview Edge mode in the viewport? Edge mode does not preview in viewports by default - to improve viewport performance, edge trimming is calculated only at render time. This means the effects of this mode will not be previewed in the viewports. However, previews can be enabled by holding down the left CTRL key and clicking on the Edge radio button. (Points-Cloud Viewport mode only) Why doesn't the AO pass work properly with Edge mode? When rendering Ambient Occlusion as an extra pass, to get the VRayDirt map used in the VRayExtraTex Render Element to work properly, please enable the "work with transparency" option under VRayDirt Parameters. Why am I getting render artefacts while using Edge mode, and what should I do to avoid them? In the case of very high density scatters (like dense grass), many overlapping objects could be generated and V-Ray may exceed its default transparency limit (which can cause render artefacts). To fix it: - Use lower density values, increasing Distribution Map->Density X/Y Size (in many cases this will also improve render performance, because there are fewer items to render). - Under the V-Ray render settings, increase Global switches->Max transp. levels (default value is 50). Sometimes it may also be necessary to modify the Secondary rays bias.
https://docs.itoosoft.com/kb/forest-pack/some-key-notes-about-areas-boundary-checking-edge-mode
2022-06-25T10:12:55
CC-MAIN-2022-27
1656103034930.3
[]
docs.itoosoft.com
Asadmin Commands for Managing Password Aliases The following is a detailed list of the administration commands that can be used to interact with and configure password aliases. create-password-alias - Usage asadmin> create-password-alias <alias-name> - Aim Creates a new password alias using the provided name. The user is then prompted to enter the associated password twice. Password aliases can also be created non-interactively using a password file. delete-password-alias - Usage asadmin> delete-password-alias <alias-name> - Aim Deletes the specified password alias and password from the server. list-password-aliases - Usage asadmin> list-password-aliases - Aim Lists the password aliases for the domain. update-password-alias - Usage asadmin> update-password-alias <alias-name> - Aim Updates the password associated with the given alias. Passwords can also be updated non-interactively using a password file.
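A hedged sketch of the interactive flow described above; the alias name is arbitrary, the prompt is paraphrased, and the ${ALIAS=...} reference syntax is an assumption about how the alias is later consumed in configuration:

# Create the alias, answering the password prompt twice:
asadmin> create-password-alias my-db-password
# (enter and confirm the password when prompted)

# Verify it exists:
asadmin> list-password-aliases
my-db-password

A configuration value can then refer to the alias (for example as ${ALIAS=my-db-password}) instead of storing the password in plain text.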
https://docs.payara.fish/enterprise/docs/Technical%20Documentation/Payara%20Server%20Documentation/Server%20Configuration%20And%20Management/Configuration%20Options/Password%20Aliases.html
2022-06-25T11:28:18
CC-MAIN-2022-27
1656103034930.3
[array(['../../../../_images/password-aliases/password-aliases-unused.png', 'Password in plain text'], dtype=object) array(['../../../../_images/password-aliases/password-aliases-using.png', 'Placeholder for Password Alias'], dtype=object) array(['../../../../_images/password-aliases/password-aliases-modifying.png', 'Modifying password alias'], dtype=object) ]
docs.payara.fish
xdmp.filesystemFile( pathname as String ) as String Reads a file from the filesystem. The file at the specified path must be UTF-8 encoded. This function is typically used for text files; for binary files, consider using the xdmp:external-binary function. xdmp.filesystemFile("/etc/motd"); => contents of /etc/motd
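A small Server-Side JavaScript sketch building on the call shown above; the path is only an example and must exist on the MarkLogic host, and the String() wrapper is a precaution rather than something this page requires:

// Read a UTF-8 text file and count its lines (example path):
const text = String(xdmp.filesystemFile("/etc/motd"));
const lines = text.split("\n");
lines.length;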
https://docs.marklogic.com/9.0/xdmp.filesystemFile
2022-06-25T11:33:51
CC-MAIN-2022-27
1656103034930.3
[]
docs.marklogic.com
Interviewing candidates in guest mode You can use Replit to conduct technical interviews. The Teams Pro guest feature lets you pair program with candidates so you can work with them or observe them in real time. In this tutorial, we'll show you step-by-step how to conduct a technical interview using Teams Pro. Watch the video or read the text below. Steps to follow: We'll cover how to: - Create a repl - Invite candidates - Observe candidates 1. Create a repl To create a repl for an interview, you need to be a team admin. See the documentation here to find out how to create a team with Replit. Navigate to the "Teams" page. Under "Team Repls", click the "Create team repl" button and the following popup window will appear: Choose the template language you will be using for the interview and give you repl a name, then click the "Create repl" button. Once the repl has been created, you will be able to add the relevant files required for the interview. In this example, we've put the instructions for the candidate to follow during the interview in the main.py file. 2. Invite candidates Once you have written your challenges, invite candidates by clicking on the "Invite" button in the top-right corner of the window. You can invite candidates by entering their email address or by generating a join link to share with them. Candidates will get a notification of the invite. They will need to sign up for a Replit account before they can accept the invitation. 3. Observe candidates Once candidates join the interview repl, they will be able to access the challenges in the provided files in read and write mode. You can observe the candidates as they complete the challenges. Click on the round icon next to the "Invite" button to observe the candidate's repl and watch them work on their main.py file. Candidates can also view your IDE in the same way. This lets them see the changes you want them to make or new intructions you want to add. 4. Remove candidate access To remove candidates from interview repls so they cannot access the interview once it is complete, click on the "Invite" button. In the pop-up window, find the candidate’s name or email. Click on the "x" next to their name and they will be removed from the interview rempl. They will no longer be able to access the repl via the invite link.
https://docs.replit.com/teams-pro/interviewing-candidates
2022-06-25T11:13:50
CC-MAIN-2022-27
1656103034930.3
[array(['https://replit-docs-images.bardia.repl.co/images/teamsPro/interviewing-candidates-tutorial-images/create_interview_repl.png', 'Creating a repl'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/teamsPro/interviewing-candidates-tutorial-images/instructions.png', 'interview challenge'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/teamsPro/interviewing-candidates-tutorial-images/invite_members.png', 'repl invitation'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/teamsPro/interviewing-candidates-tutorial-images/notification.png', 'invite notification'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/teamsPro/interviewing-candidates-tutorial-images/interview_screen.gif', 'interview window'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/teamsPro/interviewing-candidates-tutorial-images/candidate_screen.png', 'candidate_screen'], dtype=object) array(['https://replit-docs-images.bardia.repl.co/images/teamsPro/interviewing-candidates-tutorial-images/remove-candidates.png', 'remove candidate access'], dtype=object) ]
docs.replit.com
Analyse Package Archive (REST API) This tutorial complements the REST API section, and the aim here is to show the API features while analyzing a package archive. Tip As a perquisite, check our REST API chapter for more details on REST API and how to get started. Instructions: First, let’s create a new project called boolean.py-3.8. We’ll be using this package as the project input. We can add and execute the scan_package pipeline on our new project. Note Whether you follow this tutorial and previous instructions using cURL or Python script, the final results should be the same. Using cURL In your terminal, insert the following: api_url="" content_type="Content-Type: application/json" data='{ "name": "boolean.py-3.8", "input_urls": "", "pipeline": "scan_package", "execute_now": true }' curl -X POST "$api_url" -H "$content_type" -d "$data" Note You have to set the api_url to if you run on a local development setup. Tip You can provide the data using a json file with the text below, which will be passed in the -d parameter of the curl request: { "name": "boolean.py-3.8", "input_urls": "", "pipeline": "scan_package", "execute_now": true } While in the same directory as your JSON file, here called boolean.py-3.8_cURL.json, create your new project with the following curl request: curl -X POST "" -H "Content-Type: application/json" -d @boolean.py-3.8_cURL.json If the new project has been successfully created, the response should include the project’s details URL value among the returned data. { "name": "boolean.py-3.8", "url": "", "[...]": "[...]" } If you click on the project url, you’ll be directed to the new project’s instance page that allows you to perform extra actions on the project including deleting it. Using Python script Tip To interact with REST APIs, we will be turning to the requests library. To follow the above instructions and create a new project, start up the Python interpreter by typing pythonin your terminal. If you are seeing the prompt >>>, you can execute the following commands: import requests api_url = "" data = { "name": "boolean.py-3.8", "input_urls": "", "pipeline": "scan_package", "execute_now": True, } response = requests.post(api_url, data=data) response.json() The JSON response includes a generated UUID for the new project. # print(response.json()) { "name": "boolean.py-3.8", "url": "", "[...]": "[...]", } Note Alternatively, you can create a Python script with the above commands/text. Then, navigate to the same directory as your Python file and run the script to create your new project. However, no response will be shown on the terminal, and to access a given project details, you need to visit the projects’ API endpoint.
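Continuing the Python snippet above, a hedged sketch that follows the "url" value returned at creation time to fetch the project's details; it assumes the response shape shown above and a reachable ScanCode.io instance:

import requests

# "response" is the object returned by requests.post(...) in the snippet above.
project_url = response.json()["url"]        # detail URL included in the creation response
details = requests.get(project_url).json()  # fetch the project's current state
print(details.get("name"))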
https://scancodeio.readthedocs.io/en/latest/tutorial_api_analyze_package_archive.html
2022-06-25T11:31:34
CC-MAIN-2022-27
1656103034930.3
[]
scancodeio.readthedocs.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Syntax: Get-R53RFirewallDomainListList -MaxResult <Int32> -NextToken <String> -Select <String> -NoAutoIteration <SwitchParameter> -MaxResult <Int32>: The maximum number of objects that you want Resolver to return in a single response. If more objects are available, Resolver provides a NextToken value that you can use in a subsequent call to get the next batch of objects. If you don't specify a value for MaxResults, Resolver returns up to 100 objects. -NextToken <String>: When you request a list of objects, Resolver returns at most the number of objects specified in MaxResults. If more objects are available for retrieval, Resolver returns a NextToken value in the response. To retrieve the next batch of objects, use the token that was returned for the prior request in your next request. AWS Tools for PowerShell: 2.x.y.z
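A hedged PowerShell usage sketch based only on the parameters listed above; how you capture the NextToken between manual calls depends on your session and is not specified on this page:

# Let the cmdlet page through all firewall domain lists automatically:
Get-R53RFirewallDomainListList

# Manual paging sketch: request one batch and disable auto-iteration; the NextToken
# value from the previous response would then be passed to the next call.
Get-R53RFirewallDomainListList -MaxResult 10 -NoAutoIteration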
https://docs.aws.amazon.com/powershell/latest/reference/items/Get-R53RFirewallDomainListList.html
2022-06-25T12:40:53
CC-MAIN-2022-27
1656103034930.3
[]
docs.aws.amazon.com
Tools Windows Debugging Tools - userdump - (correct version should be chosen - 32 or 64-bit according to the application) to get a dump Debugging Tools for Windows - to navigate in the dump. While userdump is by default non-intrusive and should not kill the application, it causes the application freeze which could be for a few seconds and it, eventually, could crash the application. So, it is NOT 100% safe. Debugging Tools for Windows is GUI wrapper for debugging tools that could be used to simplify process identification, debugging, dump reading, it shows stack per each thread, etc. Both tools are well tested and common in Windows environment. jstack Utility - (part of JDK 1.6, including Windows) which shows thread stack and locking state per thread. It is non-disruptive and is very light with almost immediate execution (sub-second; tested), so it should not be a problem to run it even in pretty loaded environment Performance Monitor - As trigger to shoot jstack when CPU goes up, built-in Windows XP Performance Monitor could be used (Control Panel/Administrative Tools/Performance). It has alerts on any of resource events, including CPU. Alert could start any program when chosen metric comes to the predefined value. It should start jstack-based script producing each time the file with the unique name (timestamp+). To avoid overloading the system with jstack runs, sampling interval could be set to 1min in alert definition. JRockit Mission Control - When we’ll get stacks of the process, likely, it will be not that easy to interpret it, as we should know what are the threads with most of activity. It could be done with no impact on performance with BEA JRockit “Mission Control” profiler, for example. Packet Sniffer/Network Analyzer Tool See Packet Sniffer/Network Analyzer Tool Sniffer/Network Analyzer Tool) section for more details. Threads Dumper SendSignal Utility Java has this great feature where if you hit Ctrl-break on the console, it dumps a list of threads and all their held locks to stdout. If your app is stuck or overwhelmed with too many threads, you can figure out what it is doing. If it is deadlocked/overwhelmed, sometimes the JVM can even tell you exactly which threads are involved. The problem is that if you can’t get to the console, you can’t hit ctrl-break. This commonly happens to us when we are running under an IDE. The IDE captures stdout and there is no console. If the app deadlocks, there is no way to get the crucial debugging info. Another common usage scenario is once we do not have access to the console itself. A very useful utility can be used for sending a Windows Ctrl+break signal to any process (or Java Process) and which will dump the threads. The nice thing is that only the target process is affected, and any process (even a windowed process) can be targeted. Download the Send.Signal EXE file from the project webpage Usage: SendSignal <pid> <pid> - send ctrl-break to process <pid> (hex ok) Log Correlation Tool In many scenarios, mainly in production or large deployments, you many times face issues occurred in the same time over several components and cross several distributed machines. In order to track down what was the root cause of the event and which were just symptoms you need to review many log files and correlate what happened in that specific time. In order to correlate events from different log files and visualize it you have to create a Log Correlation which is where the Eclipse Log and Trace Analyzer component TPTP LTA) shines. 
This component is an extensive and extendable framework that includes a built-in probes mechanism.
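A hedged sketch of the jstack-based script that the Performance Monitor alert could launch, as described above; the dump directory and the way the Java process id is passed in are assumptions:

@echo off
rem %1 is the target Java process id; C:\dumps is an assumed output folder.
rem Build a timestamp that is safe to use in a file name.
set TS=%DATE:/=-%_%TIME::=-%
set TS=%TS: =0%
jstack -l %1 > C:\dumps\jstack_%1_%TS%.txt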
https://docs.gigaspaces.com/xap/10.2/admin/troubleshooting-tools.html
2022-06-25T11:49:09
CC-MAIN-2022-27
1656103034930.3
[]
docs.gigaspaces.com
Synopsis app_create [-d destsys] -a appname app_remove [-d destsys] -a appname Description A LifeKeeper application is a group of related resource types. When an application is removed, all resource types installed under it are also removed. These programs provide an interface for generating new applications in the configuration database and removing existing ones. All commands exit with 0 if they are successful. Commands exit with a nonzero code and print to standard error if they fail. Exit Codes The following exit codes could be returned by these commands:
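A hedged shell sketch of the calls in the synopsis above; the application name is arbitrary and the exit-status handling simply follows the convention stated in the description:

# Create an application entry in the LifeKeeper configuration database.
app_create -a myapp
if [ $? -ne 0 ]; then
    echo "app_create failed" >&2
fi

# Removing the application also removes the resource types installed under it.
app_remove -a myapp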
https://docs.us.sios.com/sps/8.8.2/en/topic/lcdi-applications
2022-06-25T11:43:47
CC-MAIN-2022-27
1656103034930.3
[]
docs.us.sios.com
A function definition defines a user-defined function object (see section 3.2): funcdef: "def" funcname "(" [parameter_list] ")" ":" suite Anonymous functions (functions not bound to a name) can also be created for immediate use in expressions, using lambda forms, described in section 5.10. Note that the lambda form is merely a shorthand for a simplified function definition; a function defined in a ``def'' statement can be passed around or assigned to another name just like a function defined by a lambda form. The ``def'' form is actually more powerful since it allows the execution of multiple statements. Programmer's note: a ``def'' form executed inside a function definition defines a local function that can be returned or passed around. The semantics of name resolution in the nested function will change in Python 2.2. See the appendix for a description of the new semantics. See About this document... for information on suggesting changes.
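A short example of the point made above: a function defined with ``def'' can be bound to another name and passed around exactly like one made with a lambda form, and the ``def'' body may contain several statements:

def scale(x, factor=2):
    # multiple statements are allowed here, unlike in a lambda form
    result = x * factor
    return result

double = scale                  # bind the function object to another name
print(double(21))               # 42
print((lambda x: x * 2)(21))    # the equivalent single-expression lambda form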
http://docs.python.org/release/2.1/ref/function.html
2012-05-27T08:36:11
crawl-003
crawl-003-021
[]
docs.python.org
There is a subtlety when the sequence is being modified by the loop (this can only occur for mutable sequences, i.e. lists). See About this document... for information on suggesting changes.
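A short illustration of that subtlety, using the common workaround of looping over a copy of the list made with a slice (the workaround is a well-known idiom and not spelled out in the truncated text above):

a = [1, -2, 3, -4]

# Iterate over a copy (a[:]) so that removing items from "a" does not
# disturb the iteration over the original sequence.
for x in a[:]:
    if x < 0:
        a.remove(x)

print(a)   # [1, 3]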
http://docs.python.org/release/2.1/ref/for.html
2012-05-27T08:36:01
crawl-003
crawl-003-021
[]
docs.python.org
This is the home page for Envision Customer Knowledge Base. This is a public site intended for the support of Beet customers. Beet Documentation and Tutorial Videos - Administration Documents - End User Documentation - Reporting and Dashboard Documentation - Tutorial Videos Collection and Work Instructions - Training and Demonstration This link takes the user to the Beet Docs page. Here, in this page we have all the documentation for Administration, End Users, Reporting/Dashboards, Training and Tutorial Videos. The purpose of this tool is to provide screenshots of commonly used enVision modules, and the ability to identify certain images, while providing a brief explanation. It will also allow the user to go to the section selected. Hovering over the image will reveal links to the section documentation in the Beet Documentations Space of the enVision Knowledge Base. Some links may have a brief description about the image you are hovering over. Clicking on the image will take you to the information selected. The BEET ACADEMY is an instructional space within the enVision customer knowledge base. It provides a closely structured outline for Beet customers to use during the training process as well as a reference to return to for more information. All the sections are based upon the Training outline used by Beet Training and Coaching personnel. In the Beet Academy splash Page, you will find a Table of Contents, a brief overview video (for first time use), and image icons on this page that are hyperlinked to all relevant courses of the Beet Academy in the image. This mean a user can click on one of the images within the Beet Academy image to go to that selected section. All sections are linked to the documented materials in the Reference Supplement. Every section opened will have a start page with a brief introduction, a Video, and a table of contents. BEET Support section is used by the housing of documentation of the beet internal information. This link will send the user to the Beet Support page, that will contain Manuals, Technical Specifications, Technical Questions/Answers, I.T., Installation, and Maintenance information for enVision. Frequently asked questions enVision Manuals: End User, Admin, Tutorial Videos, and others in Beet Documentation - What's new to Envision - Hardware Requirements - Remote Support Access - Index of All KB Documents - How to enable debug mode on EDC - How does Deployment Work? Need more help? - Create a Support Ticket. - You can also send email to [email protected] - Tutorial Videos Browse by topic - A-D - E - F-M - N-R - S-Z Recently updated articles BEET Contact Information BEET US Office Address: 45207 Helm St., Plymouth, Michigan 48170 BEET China Office Address:中国浙江省杭州市滨江区江南大道3900号科技金融创新中心4楼 Contact : Edward Chang Cell Phone: +86 188 6875 8858
https://docs.beet.com/
2019-11-11T23:04:40
CC-MAIN-2019-47
1573496664439.7
[array(['/download/attachments/393230/image2019-10-16_9-9-13.png?version=1&modificationDate=1571231353815&api=v2', None], dtype=object) ]
docs.beet.com
Release Date May 22, 2019 This is a feature release, using the Arnold 5.3.1.0 DOWNLOADS: Baking failing when a displacement map is used. Crash when baking if something is connected to material's displacement. Crash on outputting denoise AOVs without saving to file. Photometric light failing on some IES files. Max freezes when switching Mesh Light from Texture to Color mode. Arnold menu behaving randomly. Bitmap rendering with seams if offset in U/V. - Hiding operators references in Procedural and Alembic for Max 2018. See the Arnold 5.3.1.0 release notes for the full list of enhancements and fixes.
https://docs.arnoldrenderer.com/display/A5AF3DSUG/3.1.26
2019-11-11T23:01:20
CC-MAIN-2019-47
1573496664439.7
[]
docs.arnoldrenderer.com
All content with label 2lcache+async+development+distribution+grid+infinispan+jboss_cache+listener+pojo_cache+query+rebalance+release+user_guide+webdav+xml. Related Labels: podcast, expiration, publish, datagrid, coherence, interceptor, server, rehash, transactionmanager, dist, partitioning, deadlock, intro, archetype, jbossas, lock_striping, nexus, guide, schema, state_transfer, cache, amazon,, hash_function, configuration, batch, buddy_replication, loader, colocation, pojo, write_through, cloud, tutorial, notification, murmurhash2, presentation, read_committed, jira, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, websocket, transaction, interactive, xaresource, build, hinting, searchable, demo, scala, installation, client, non-blocking, migration, filesystem, jpa, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, murmurhash, standalone, repeatable_read, snapshot, hotrod, docs, batching, consistent_hash, store, whitepaper, jta, faq, as5, jsr-107, jgroups, lucene, locking, rest, hot_rod more » ( - 2lcache, - async, - development, - distribution, - grid, - infinispan, - jboss_cache, - listener, - pojo_cache, - query, - rebalance, - release, - user_guide, - webdav, - xml ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/2lcache+async+development+distribution+grid+infinispan+jboss_cache+listener+pojo_cache+query+rebalance+release+user_guide+webdav+xml
2019-11-11T23:43:33
CC-MAIN-2019-47
1573496664439.7
[]
docs.jboss.org
All content with label async+cache_server+cluster+development+expiration+grid+hot_rod+import+infinispan+listener+maven+release+standalone+user_guide+xml. Related Labels: podcast, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, partitioning, query, timer, deadlock, intro, archetype, pojo_cache, lock_striping, jbossas, nexus, guide, schema, cache, httpd, s3, amazon, ha, test, high-availability, jcache, api, xsd, ehcache, wildfly, documentation, jboss, roadmap, youtube, userguide, write_behind, ec2, eap, 缓存, eap6, hibernate, aws, getting_started, interface, custom_interceptor, clustering, setup, eviction, gridfs, out_of_memory, mod_jk, fine_grained, jboss_cache, index, events, l, batch, configuration, hash_function, buddy_replication, loader, xa, pojo, write_through, cloud, mvcc, notification, tutorial, presentation, read_committed, jbosscache3x, distribution, ssl, jira, cachestore, data_grid, cacheloader, resteasy, br, permission, websocket, transaction, interactive, xaresource, build, domain, searchable, subsystem, scala, installation, mod_cluster, client, non-blocking, as7, migration, jpa, filesystem, http, tx, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, webdav, hotrod, ejb, snapshot, repeatable_read, docs, batching, consistent_hash, store, whitepaper, jta, faq, as5, 2lcache, jsr-107, docbook, lucene, jgroups, locking, rest more » ( - async, - cache_server, - cluster, - development, - expiration, - grid, - hot_rod, - import, - infinispan, - listener, - maven, - release, - standalone, - user_guide, - xml ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/async+cache_server+cluster+development+expiration+grid+hot_rod+import+infinispan+listener+maven+release+standalone+user_guide+xml
2019-11-11T23:46:17
CC-MAIN-2019-47
1573496664439.7
[]
docs.jboss.org
All premium extensions for the Meta Box plugin can be updated automatically or manually. The automatic update uses the WordPress update mechanism, which checks for new versions twice a day. This is the recommended way to get new updates from our website. Automatic update To enable automatic update, you need to have a valid license key. Go to the My Account page to get it. Of course, you have to buy at least one premium extension to have the license key. Then go to Dashboard > Meta Box > License, enter your license key, and click the Save Changes button. Make sure you paste the key exactly as you copied it above. (If you see the message Invalid license key, please try again.) From now on, when WordPress checks new versions for plugins, it will also check for new versions of Meta Box extensions. If there is any new update, you’ll see this: Then you can update the extensions the same way as for other plugins by clicking the update now link. Manual update You can also update the extensions manually by following these steps: - Go to the My Account page. - In your account page, you will see all downloads for your purchased extensions. Download the extension(s) to your computer, unzip the file and upload the extension folder to your website, overwriting the old files.
https://docs.metabox.io/extensions/update/
2019-11-11T23:20:20
CC-MAIN-2019-47
1573496664439.7
[array(['https://i.imgur.com/Gul7JuL.png', 'New versions'], dtype=object)]
docs.metabox.io
Service Fabric cluster capacity planning considerations For any production deployment, capacity planning is an important step. Here are some of the items that you have to consider as a part of that process. - The number of node types your cluster needs to start out with - The properties of each node type (size, primary, internet facing, number of VMs, etc.) - The reliability and durability characteristics of the cluster Note You should minimally review all Not Allowed upgrade policy values during planning. This is to ensure that you set the values appropriately and to mitigate burning down of your cluster later because of unchangeable system configuration settings. Let us briefly review each of these items. The number of node types your cluster needs to start out with First, you need to figure out what the cluster you are creating is going to be used for. What kinds of applications you are planning to deploy into this cluster? If you are not clear on the purpose of the cluster, you are most likely not yet ready to enter the capacity planning process. Establish the number of node types your cluster needs to start out with. Each node type is mapped to a virtual machine scale set. Each node type can then be scaled up or down independently, have different sets of ports open, and can have different capacity metrics. So the decision of the number of node types essentially comes down to the following considerations: Does your application have multiple services, and do any of them need to be public or internet facing? Typical applications contain a front-end gateway service that receives input from a client and one or more back-end services that communicate with the front-end services. So in this case, you end up having at least two node types. Do your services (that make up your application) have different infrastructure needs such as greater RAM or higher CPU cycles? For example, let us assume that the application that you want to deploy contains a front-end service and a back-end service. The front-end service can run on smaller VMs (VM sizes like D2) that have ports open to the internet. The back-end service, however, is computation intensive and needs to run on larger VMs (with VM sizes like D4, D6, D15) that are not internet facing. In this example, although you can decide to put all the services on one node type, we recommended that you place them in a cluster with two node types. This allows each node type to have distinct properties such as internet connectivity or VM size. The number of VMs can be scaled independently, as well. Because you cannot predict the future, go with facts you know, and choose the number of node types that your applications need to start with. You can always add or remove node types later. A Service Fabric cluster must have at least one node type. The properties of each node type The node type can be seen as equivalent to roles in Cloud Services. Node types define the VM sizes, the number of VMs, and their properties. Every node type that is defined in a Service Fabric cluster maps to a virtual machine scale set. Each node type is a distinct scale set and can be scaled up or down independently, have different sets of ports open, and have different capacity metrics. For more information about the relationships between node types and virtual machine scale sets, how to RDP into one of the instances, how to open new ports, and so on, see Service Fabric cluster node types. A Service Fabric cluster can consist of more than one node type. 
In that event, the cluster consists of one primary node type and one or more non-primary node types. A single node type cannot reliably scale beyond 100 nodes per virtual machine scale set for SF applications; achieving greater than 100 nodes reliably, will require you to add additional virtual machine scale sets. Primary node type The Service Fabric system services (for example, the Cluster Manager service or Image Store service) are placed on the primary node type. - The minimum size of VMs for the primary node type is determined by the durability tier you choose. The default durability tier is Bronze. See The durability characteristics of the cluster for more details. - The minimum number of VMs for the primary node type is determined by the reliability tier you choose. The default reliability tier is Silver. See The reliability characteristics of the cluster for more details. From the Azure Resource Manager template, the primary node type is configured with the isPrimary attribute under the node type definition. Non-primary node type In a cluster with multiple node types, there is one primary node type and the rest are non-primary. - The minimum size of VMs for non-primary node types is determined by the durability tier you choose. The default durability tier is Bronze. For more information, see The durability characteristics of the cluster. - The minimum number of VMs for non-primary node types is one. However, you should choose this number based on the number of replicas of the application/services that you want to run in this node type. The number of VMs in a node type can be increased after you have deployed the cluster. The durability characteristics of the cluster The durability tier is used to indicate to the system the privileges that your VMs have with the underlying Azure infrastructure. In the primary node type, this privilege allows Service Fabric to pause any VM level infrastructure request (such as a VM reboot, VM reimage, or VM migration) that impact the quorum requirements for the system services and your stateful services. In the non-primary node types, this privilege allows Service Fabric to pause any VM level infrastructure requests (such as VM reboot, VM reimage, and VM migration) that impact the quorum requirements for your stateful services. Warning Node types running with Bronze durability obtain no privileges. This means that infrastructure jobs that impact your stateful workloads will not be stopped or delayed, which might impact your workloads. Use only Bronze for node types that run only stateless workloads. For production workloads, running Silver or above is recommended. Regardless of any durability level, Deallocation operation on VM Scale Set will destroy the cluster Advantages of using Silver or Gold durability levels - Reduces the number of required steps in a scale-in operation (that is, node deactivation and Remove-ServiceFabricNodeState is called automatically). - Reduces the risk of data loss due to a customer-initiated in-place VM SKU change operation or Azure infrastructure operations. Disadvantages of using Silver or Gold durability levels - Deployments to your virtual machine scale set and other related Azure resources can be delayed, can time out, or can be blocked entirely by problems in your cluster or at the infrastructure level. - Increases the number of replica lifecycle events (for example, primary swaps) due to automated node deactivations during Azure infrastructure operations. 
- Takes nodes out of service for periods of time while Azure platform software updates or hardware maintenance activities are occurring. You may see nodes with status Disabling/Disabled during these activities. This reduces the capacity of your cluster temporarily, but should not impact the availability of your cluster or applications. Recommendations for when to use Silver or Gold durability levels Use Silver or Gold durability for all node types that host stateful services you expect to scale-in (reduce VM instance count) frequently, and you would prefer that deployment operations be delayed and capacity to be reduced in favor of simplifying these scale-in operations. The scale-out scenarios (adding VMs instances) do not play into your choice of the durability tier, only scale-in does. Changing durability levels - Node types with durability levels of Silver or Gold cannot be downgraded to Bronze. - Upgrading from Bronze to Silver or Gold can take a few hours. - When changing durability level, be sure to update it in both the Service Fabric extension configuration in your virtual machine scale set resource, and in the node type definition in your Service Fabric cluster resource. These values must match. Operational recommendations for the node type that you have set to silver or gold durability level. Keep your cluster and applications healthy at all times, and make sure that applications respond to all Service replica lifecycle events (like replica in build is stuck) in a timely fashion. Adopt safer ways to make a VM SKU change (Scale up/down): Changing the VM SKU of a virtual machine scale set requires a number of steps and considerations. Here is the process you can follow to avoid common issues. - For non-primary node types: It is recommended that you create new virtual machine scale set, modify the service placement constraint to include the new virtual machine scale set/node type and then reduce the old virtual machine scale set instance count to zero, one node at a time (this is to make sure that removal of the nodes do not impact the reliability of the cluster). - For the primary node type: If the VM SKU you have selected is at capacity and you would like to change to a larger VM SKU, follow our guidance on vertical scaling for a primary node type. Maintain a minimum count of five nodes for any virtual machine scale set that has durability level of Gold or Silver enabled. Each virtual machine scale set with durability level Silver or Gold must map to its own node type in the Service Fabric cluster. Mapping multiple virtual machine scale sets to a single node type will prevent coordination between the Service Fabric cluster and the Azure infrastructure from working properly. Do not delete random VM instances, always use virtual machine scale set scale down feature. The deletion of random VM instances has a potential of creating imbalances in the VM instance spread across UD and FD. This imbalance could adversely affect the systems ability to properly load balance amongst the service instances/Service replicas. If using Autoscale, then set the rules such that scale in (removing of VM instances) are done only one node at a time. Scaling down more than one instance at a time is not safe. If deleting or deallocating VMs on the primary node type, you should never reduce the count of allocated VMs below what the reliability tier requires. These operations will be blocked indefinitely in a scale set with a durability level of Silver or Gold. 
The reliability characteristics of the cluster The reliability tier is used to set the number of replicas of the system services that you want to run in this cluster on the primary node type. The more the number of replicas, the more reliable the system services are in your cluster. The reliability tier can take the following values: - Platinum - Run the System services with a target replica set count of nine - Gold - Run the System services with a target replica set count of seven - Silver - Run the System services with a target replica set count of five - Bronze - Run the System services with a target replica set count of three Note The reliability tier you choose determines the minimum number of nodes your primary node type must have. Recommendations for the reliability tier When you increase or decrease the size of your cluster (the sum of VM instances in all node types), you must update the reliability of your cluster from one tier to another. Doing this triggers the cluster upgrades needed to change the system services replica set count. Wait for the upgrade in progress to complete before making any other changes to the cluster, like adding nodes. You can monitor the progress of the upgrade on Service Fabric Explorer or by running Get-ServiceFabricClusterUpgrade Here is the recommendation on choosing the reliability tier. The number of seed nodes is also set to the minimum number of nodes for a reliability tier. For example, for a cluster with Gold reliability there are 7 seed nodes. Primary node type - capacity guidance Here is the guidance for planning the primary node type capacity: - Number of VM instances to run any production workload in Azure: You must specify a minimum Primary Node type size of 5 and a Reliability Tier of Silver. - Number of VM instances to run test workloads in Azure You can specify a minimum primary node type size of 1 or 3. The one node cluster, runs with a special configuration and so, scale out of that cluster is not supported. The one node cluster, has no reliability and so in your Resource Manager template, you have to remove/not specify that configuration (not setting the configuration value is not enough). If you set up the one node cluster set up via portal, then the configuration is automatically taken care of. One and three node clusters are not supported for running production workloads. - VM SKU: Primary node type is where the system services run, so the VM SKU you choose for it, must take into account the overall peak load you plan to place into the cluster. Here is an analogy to illustrate what I mean here - Think of the primary node type as your "Lungs", it is what provides oxygen to your brain, and so if the brain does not get enough oxygen, your body suffers. Since the capacity needs of a cluster is determined by workload you plan to run in the cluster, we cannot provide you with qualitative guidance for your specific workload, however here is the broad guidance to help you get started For production workloads: - It's recommended to dedicate your clusters primary NodeType to system services, and use placement constraints to deploy your application to secondary NodeTypes. -. - Our recommendation is a minimum of 50 GB. For your workloads, especially when running Windows containers, larger disks are required. - Partial core VM SKUs like Standard A0 are not supported for production workloads. - A series VM SKUs are not supported for production workloads for performance reasons. - Low-priority VMs are not supported. 
Warning Changing the primary node VM SKU size on a running cluster, is a scaling operation, and documented in Virtual Machine Scale Set scale out documentation. Non-primary node type - capacity guidance for stateful workloads This guidance is for stateful Workloads using Service fabric reliable collections or reliable Actors that you are running in the non-primary node type. Number of VM instances: For production workloads that are stateful, it is recommended that you run them with a minimum and target replica count of 5. This means that in steady state you end up with a replica (from a replica set) in each fault domain and upgrade domain. The whole reliability tier concept for the primary node type is a way to specify this setting for system services. So the same consideration applies to your stateful services as well. So for production workloads, the minimum recommended non-Primary Node type size is 5, if you are running stateful workloads in it. VM SKU: This is the node type where your application services are running, so the VM SKU you choose for it, must take into account the peak load you plan to place into each Node. The capacity needs of the node type, is determined by workload you plan to run in the cluster, so we cannot provide you with qualitative guidance for your specific workload, however here is the broad guidance to help you get started For production workloads -. - Partial core VM SKUs like Standard A0 are not supported for production workloads. - A series VM SKUs are not supported for production workloads for performance reasons. Non-primary node type - capacity guidance for stateless workloads This guidance of stateless Workloads that you are running on the non-primary node type. Number of VM instances: For production workloads that are stateless, the minimum supported non-Primary Node type size is 2. This allows you to run you two stateless instances of your application and allowing your service to survive the loss of a VM instance. VM SKU: This is the node type where your application services are running, so the VM SKU you choose for it, must take into account the peak load you plan to place into each Node. The capacity needs of the node type is determined by the workload you plan to run in the cluster. We cannot provide you with qualitative guidance for your specific workload. However, here is the broad guidance to help you get started. For production workloads - The recommended VM SKU is Standard D2_V2 or equivalent. - The minimum supported use VM SKU is Standard D1 or Standard D1_V2 or equivalent. - Partial core VM SKUs like Standard A0 are not supported for production workloads. - A series VM SKUs are not supported for production workloads for performance reasons. Next steps Once you finish your capacity planning and set up a cluster, read the following: Feedback
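As a reference for the isPrimary attribute and the node-count guidance discussed above, here is a hedged fragment of how node types might be declared in an Azure Resource Manager template; the names and counts are illustrative only, and the exact property set should be checked against your own template:

"nodeTypes": [
  {
    "name": "primaryNT",
    "isPrimary": true,
    "vmInstanceCount": 5,
    "durabilityLevel": "Silver"
  },
  {
    "name": "backendNT",
    "isPrimary": false,
    "vmInstanceCount": 5,
    "durabilityLevel": "Silver"
  }
]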
https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-capacity
2019-11-11T22:24:43
CC-MAIN-2019-47
1573496664439.7
[array(['media/service-fabric-cluster-capacity/systemservices.png', 'Screenshot of a cluster that has two Node Types'], dtype=object)]
docs.microsoft.com
Test-ReplicationHealth Use the Test-ReplicationHealth cmdlet to check all aspects of replication and replay, or to provide status for a specific Mailbox server in a database availability group (DAG). For information about the parameter sets in the Syntax section below, see Exchange cmdlet syntax (). Syntax Test-ReplicationHealth [[-Identity] <ServerIdParameter>] [-ActiveDirectoryTimeout <Int32>] [-Confirm] [-DomainController <Fqdn>] [-MonitoringContext <$true | $false>] [-OutputObjects] [-TransientEventSuppressionWindow <UInt32>] [-WhatIf] [-DatabaseAvailabilityGroup <DatabaseAvailabilityGroupIdParameter>] [<CommonParameters>] Example Test-ReplicationHealth -Identity MBX1 This example tests the health of replication for the Mailbox server MBX1. Parameters The ActiveDirectoryTimeout parameter specifies the time interval in seconds that's allowed for each directory service operation before the operation times out. The default value is 15 seconds. The DatabaseAvailabilityGroup parameter specifies whether to test all servers in the specified DAG. You can use any value that uniquely identifies the DAG. For example: Name, Distinguished name (DN), GUID. You can't use this parameter with the Identity parameter. The Identity parameter specifies the Mailbox server that you want to test. You can use any value that uniquely identifies the server. For example: Name, FQDN, Distinguished name (DN), ExchangeLegacyDN. You can't use this parameter with the DatabaseAvailabilityGroup parameter. The OutputObjects switch specifies whether to output an array of information regarding failures. You don't need to specify a value with this switch. The TransientEventSuppressionWindow parameter specifies the number of minutes that the queue lengths can be exceeded before the queue length tests are considered to have failed. This parameter is used to reduce the number of failures due to transient load.
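Two further hedged invocations built only from the parameters listed above; the server and DAG names are placeholders:

# Test every Mailbox server in a DAG (the DAG name is a placeholder):
Test-ReplicationHealth -DatabaseAvailabilityGroup DAG01

# Test one server, return failure details as objects, and allow queue lengths to be
# exceeded for up to 15 minutes before the queue-length tests are considered failed:
Test-ReplicationHealth -Identity MBX1 -OutputObjects -TransientEventSuppressionWindow 15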
https://docs.microsoft.com/en-us/powershell/module/exchange/database-availability-groups/Test-ReplicationHealth?redirectedfrom=MSDN&view=exchange-ps
2019-11-11T22:11:25
CC-MAIN-2019-47
1573496664439.7
[]
docs.microsoft.com
Ecommerce Basic Payment Forms Last Edited: Oct 22, 2019 Basic Payments are a way to take a payment against a standard Form submission. You set the currency, and the minimum payment value in Form Builder. When submitting the form, a value is sent to our payment processor, and validated server-side to check that it's higher than the minimum payment value you have set. Here's 3 examples of when you'd use this payment system: Basic Payment is available using the following Payment Gateways: Note: depending on the Payment Gateway, some specific fields may be needed in order to capture the necessary payment details. Those fields will be specified in step 5. In order to use Secure Zones in this process, they first need to be set up. For instructions on how to use Secure Zones on a Form, click here. If you apply a Secure Zone to the Form, then the user will be given access after a successful payment. When creating a Form, navigate to the 'Payments' tab. Here you should enter the following data: For further instructions on how to set up Forms, click here. Please see the accordion below for code relevant to your Payment Gateway. To test your new Basic Payment Form, you first must put it on a Page. You can do so using Toolbox's 'Insert Form' feature. Once you have created a Page and navigated to it, you will need to fill in the fields and submit. If your Payment Gateway is in 'Test Mode', then you may need to enter card details specific to the testing environment. You can find those under 'Test Cards' on our Payment Gateways documentation page.
https://docs.siteglide.com/ecommerce/basic-payment-forms
2019-11-11T22:46:37
CC-MAIN-2019-47
1573496664439.7
[]
docs.siteglide.com
HelioPy is a free and open source set of tools for heliospheric and planetary physics. For more information see the module documentation below. If you would like a new feature to be built into HelioPy, you can either open a bug report in the github issue tracker, or send an email to [email protected].
https://docs.heliopy.org/en/0.8.1/
2019-11-11T21:54:29
CC-MAIN-2019-47
1573496664439.7
[]
docs.heliopy.org
Integration meshStack supports access to Cloud Foundry platforms which provide convenient application hosting capabilities to software and DevOps engineers. Usually, there are also many backing services available within the platform's marketplace such as database, data processing, queueing, and many more (depending on the platform operator's choice of services). For Cloud Foundry, meshStack provides org and space creation and configuration, user management and SSO via Cloud Foundry's UAA. Integration Overview To enable integration with Cloud Foundry, operators deploy and configure the meshStack Cloud Foundry connector to make Cloud Foundry platforms available at their meshStack instance. meshStack provides users access to Cloud Foundry (CF) instances via the OIDC protocol for authentication while it replicates permission rights directly to authorize correct access. When accessing the platform the user has an OIDC token (JWT) which is issued by the meshIdB (and potentially by an upstream corporate identity provider, cf. Identity Federation). Cloud Foundry's Auth component UAA validates the token upon access. Also, for CF access within the meshPanel the token is used to request status information about apps and services to display within the meshPanel. The meshFed replication ensures spaces and orgs are created within the CF platform and appropriate permission rights are set when users access the CF platform. If a user's meshProject permissions are modified, meshStack updates the permissions for this user accordingly within the CF platform. Cloud Foundry Access Workflow The full workflow to access the Cloud Foundry platform is as follows: - User accesses the meshPanel via browser. - If logged out, the user is forwarded to the meshIdB component to enter his credentials. - If there is an external identity provider connected, the user or his credentials are forwarded for authentication purposes. - Upon successful authentication, the meshIdB issues an OIDC token (meshToken) which provides the user authorized access to the meshPanel and the cloud platform tenants he is authorized to access. - If the user accesses a cloud platform via meshPanel, meshStack ensures full replication of the current tenant configuration including permissions. - The meshStack backend exchanges the user's meshToken against a UAA token for the user. - When accessing CF, the UAA validates the token and grants access if it is valid (time, issuer). - The meshPanel also uses the UAA token to access and display status information about the CF space in focus. - Every time the user accesses the CF API, CF's UAA validates the token to ensure authorized access to the requested resource. - If the UAA token is expired, the meshToken/UAA token exchange is executed again via the meshStack backend. If the meshToken is expired, the user must re-authenticate against the meshIdB and/or the delegated enterprise SSO system. Prerequisites UAA configuration UAA needs to have jwt-bearer Auth grants enabled, which is configured against a corresponding OIDC client configuration within the meshIdB.
https://docs.meshcloud.io/docs/meshstack.cloudfoundry.index.html
2019-11-11T22:22:57
CC-MAIN-2019-47
1573496664439.7
[]
docs.meshcloud.io
Configuration meshcloud will typically operate your meshStack installation as a managed service for you. As a managed service, all configuration and validation is done by meshcloud. Nonetheless, we make references to configuration options in the documentation so that operators get a better understanding of meshStack's capabilities. The configuration references also simplify examples and communicate the exact parameters that may need to be supplied by platform operators (e.g. Service Principal credentials). meshcloud configures your meshStack installation using a dhall configuration model. As part of meshcloud's managed service, customers get access to their configuration in a git repository. This is also useful to communicate configuration options and track changes. The configuration documentation will occasionally also make references to YAML configuration options. These will be replaced with dhall models in the next releases. Dhall models can generate YAML configuration files dynamically, but provide superior features in terms of flexibility and validation. Global Configuration Options Identifiers meshStack can restrict legal identifiers for meshCustomers and meshProjects. This is useful to ensure identifiers are compatible with all connected cloud platforms and don't require additional name mangling to comply with cloud-specific naming rules. { customerIdentifierLength : Optional Natural , projectIdentifierLength : Optional Natural , projectIdentifierPrefix : Optional Text , projectNamePrefix : Optional Text , envIdentifier : Optional Text } Customer Invite Link for Administrators The link can be enabled or disabled by setting this config key. For more information about this invite link see invite links. web: register: allow-partner-invite-links: true
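A hedged example of a value that matches the identifier record type shown above; the concrete lengths and prefixes are invented for illustration only:

-- Example only: restrict identifier lengths and add a project prefix.
{ customerIdentifierLength = Some 12
, projectIdentifierLength = Some 16
, projectIdentifierPrefix = Some "proj-"
, projectNamePrefix = None Text
, envIdentifier = Some "prod"
}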
https://docs.meshcloud.io/docs/meshstack.configuration.html
2019-11-11T23:07:36
CC-MAIN-2019-47
1573496664439.7
[]
docs.meshcloud.io
Private npm module support When are npm private modules used? Private npm modules are used at two times during Renovate's process. 1. Module lookup If a private npm module is listed as a dependency in a package.json, then Renovate will attempt to keep it up-to-date by querying the npm registry like it would for any other package. Hence, by default with no configuration a private package lookup will fail, because of lack of credentials. This means it won't be "renovated" and its version will remain unchanged in the package file unless you update it manually. These failures don't affect Renovate's ability to look up other modules in the same package file. Assuming the private module lookup succeeds (solutions for that are described later in this document) then private package versions will be kept up-to-date like public package versions are. 2. Lock file generation If you are using a lock file (e.g. yarn's yarn.lock or npm's package-lock.json) then Renovate needs to update that lock file whenever any package listed in your package file is updated to a new version. To do this, Renovate will run npm install or equivalent and save the resulting lock file. If a private module hasn't been updated, it usually won't matter to npm/yarn because they won't attempt to udpate its lock file entry anyway. However it's possible that the install will fail if it attempts to look up that private module for some reason, even when that private module is not the main one being updated. It's therefore better to provide Renovate with all the credentials it needs to look up private packages. Supported npm authentication approaches The recommended approaches in order of preference are: If you are running your own Renovate bot: copy an .npmrc file to the home dir of the bot and it will work for all repositories Renovate App with private modules from npmjs.org: Add an encrypted npmToken to your Renovate config Renovate App with a private registry: Add an unencrypted npmrc plus an encrypted npmToken in config All the various approaches are described below: Commit .npmrc file into repository One approach that many projects use for private repositories is to simply check in an authenticated .npmrc into the repository that is then shared between all developers. Therefore anyone running npm install or yarn install from the project root will be automatically authenticated with npm without having to distribute npm logins to every developer and make sure they've run npm login first before installing. The good news is that this works for Renovate too. If Renovate detects a .npmrc or .yarnrc file then it will use it for its install. Add npmrc string to Renovate config The above solution maybe have a downside that all users of the repository (e.g. developers) will also use any .npmrc that is checked into the repository, instead of their own one in ~/.npmrc. To avoid this, you can instead add your .npmrc authentication line to your Renovate config under the field npmrc. e.g. a renovate.json might look like this: { "npmrc": "//some.registry.com/:_authToken=abcdefghi-1234-jklmno-aac6-12345567889" } If configured like this, Renovate will use this to authenticate with npm and will ignore any .npmrc files(s) it finds checked into the repository. 
Add npmToken to Renovate config

If you are using the main npmjs registry then you can configure just the npmToken instead:

{
  "npmToken": "abcdefghi-1234-jklmno-aac6-12345567889"
}

Add an encrypted npm token to Renovate config

If you don't wish for all users of the repository to be able to see the unencrypted token, you can encrypt it with Renovate's public key instead, so that only Renovate can decrypt it. Go to Renovate's encryption tool, paste in your npm token, click "Encrypt", then copy the encrypted result. Add the encrypted result inside an "encrypted" object, like this:

{
  "encrypted": {
    "npmToken": "...=="
  }
}

If you have no .npmrc file then Renovate will create one for you, pointing to the default npmjs registry. If instead you use an alternative registry or need an .npmrc file for some other reason, you should configure it too and substitute the npm token with ${NPM_TOKEN} for it to be replaced. e.g.

{
  "encrypted": {
    "npmToken": "...=="
  },
  "npmrc": "registry=https://my.custom.registry/npm\n//my.custom.registry/npm:_authToken=${NPM_TOKEN}"
}

Renovate will then use the following logic:

- If no npmrc string is present in config then one will be created with the _authToken pointing to the default npmjs registry
- If an npmrc string is present and contains ${NPM_TOKEN} then that placeholder will be replaced with the decrypted token
- If an npmrc string is present but doesn't contain ${NPM_TOKEN} then the file will have _authToken=<token> appended to it

Encrypt entire .npmrc file into config

Copy the entire .npmrc, replace newlines with \n chars, and then encrypt it with the same encryption tool. You will then get an encrypted string that you can substitute into your renovate.json instead. The result will now look something like this:

{
  "encrypted": {
    "npmrc": "WOTWu+jliBtXYz3CU2eI7dDyMIvSJKS2N5PEHZmLB3XKT3vLaaYTGCU6m92Q9FgdaM/q2wLYun2JrTP4GPaW8eGZ3iiG1cm7lgOR5xPnkCzz0DUmSf6Cc/6geeVeSFdJ0zqlEAhdNMyJ4pUW6iQxC3WJKgM/ADvFtme077Acvc0fhCXv0XvbNSbtUwHF/gD6OJ0r2qlIzUMGJk/eI254xo5SwWVctc1iZS9LW+L0/CKjqhWh4SbyglP3lKE5shg3q7mzWDZepa/nJmAnNmXdoVO2aPPeQCG3BKqCtCfvLUUU/0LvnJ2SbQ1obyzL7vhh2OF/VsATS5cxbHvoX/hxWQ=="
  }
}

However, be aware that if your .npmrc is too long to encrypt then the above approach will fail.
https://docs.renovatebot.com/private-modules/
2019-11-11T23:17:45
CC-MAIN-2019-47
1573496664439.7
[]
docs.renovatebot.com
Use field lookups to add information to your events

Contents

- Configure a time-based lookup
- Include advanced options
- Upload the lookup table to Splunk
- Define the lookup
- Set the lookup to run automatically

See also "Add fields from external data sources" in the Knowledge Manager manual.

List existing lookup tables or upload a new file

View existing lookup table files in Manager > Lookups > Lookup table files, or click "New" to upload more CSV files to use in your definitions for file-based lookups. To upload new files:

1. Select a Destination app.

This documentation applies to Splunk versions 4.2, 4.2.1, 4.2.2, 4.2.3, 4.2.4, 4.2.5, 4.3, 4.3.1, and 4.3.2. View the Article History for its revisions.

Comments

The graphic in step 4 of "Upload the lookup table to Splunk" in the "Example of HTTP status lookup" topic does not match the text in step 4. The "Upload a lookup file" and "Destination filename" fields in the graphic contain product_lookup.csv while the text indicates the lookup file name is http_status.

When defining the input and output fields for automatic lookups, the CSV column name goes in the left field and the search field name goes in the right.

Thanks, Rmjenson and lanmaddox4bookrags; I've updated this topic with a new screenshot and some additional explanatory text to address the errors/omissions that you identified. Matt Ness, Splunk Documentation
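The comment above about mapping input and output fields can also be expressed in configuration files instead of the Manager UI. The following is a minimal sketch of how a file-based lookup is typically wired up in transforms.conf and props.conf; the lookup name, CSV file name, sourcetype, and field names are illustrative assumptions rather than values from this topic.

# transforms.conf -- defines the lookup table (CSV filename is assumed)
[http_status_lookup]
filename = http_status.csv

# props.conf -- runs the lookup automatically for a sourcetype (sourcetype and fields are assumed)
# "status" is the CSV column / event field to match, "status_description" is the field added to events
[access_combined]
LOOKUP-http_status = http_status_lookup status OUTPUT status_description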
http://docs.splunk.com/Documentation/Splunk/latest/User/CreateAndConfigureFieldLookups
2012-05-27T11:01:59
crawl-003
crawl-003-022
[]
docs.splunk.com
format

Synopsis

Takes the results of a subsearch and formats them into a single result.

Syntax

format ["<string>" "<string>" "<string>" "<string>" "<string>" "<string>"]

Optional arguments

- <string>
  - Syntax: "<string>"
  - Description: These six optional string arguments correspond to: ["<row prefix>" "<column prefix>" "<column separator>" "<column end>" "<row separator>" "<row end>"]

Example 2: Increase the maximum number of events from the default to 2000 for a subsearch to use in generating a search. In limits.conf:

[format]
maxresults = 2000

and in the subsearch:

... | head 2 | fields source, sourcetype, host | format maxresults=2000

See also

Answers

Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has about using the format command.

Can you include an example using the optional arguments?
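Picking up the question above, here is a sketch of the six optional strings in use. The field names and the exact output are assumptions for illustration and depend on your events.

... | head 2 | fields source, sourcetype | format "[" "[" "&&" "]" "||" "]"

Instead of the default ( ( ... AND ... ) OR ( ... ) ) formatting, the subsearch results would be rendered with the supplied delimiters, for example something like: [ [ source="A" && sourcetype="B" ] || [ source="C" && sourcetype="D" ] ]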
http://docs.splunk.com/Documentation/Splunk/4.3/SearchReference/format
2012-05-27T10:19:23
crawl-003
crawl-003-022
[]
docs.splunk.com
The API

Getting an Auth Key

To generate an authentication key, you have to go to autocompeter.com and sign in using GitHub. Once you've done that you get access to a form where you can type in your domain name and generate a key. Copy-n-paste that somewhere secure and use it when you access private API endpoints. Every Auth Key belongs to one single domain, e.g. yoursecurekey maps to exactly one domain.

Submitting titles

You have to submit one title at a time. (This might change in the near future.) You'll need an Auth Key, a title, a URL, optionally a popularity number, and optionally a group for access control.

The URL you need to do an HTTP POST to is:

The Auth Key needs to be set as an HTTP header called Auth-Key. The parameters need to be sent as application/x-www-form-urlencoded. The keys you need to send are:

Here's an example using curl:

curl -X POST -H "Auth-Key: yoursecurekey" \
  -d url= \
  -d title="A blog post example" \
  -d group="loggedin" \
  -d popularity="105" \

Here's the same example using Python requests:

import requests

response = requests.post(
    '',
    data={
        'title': 'A blog post example',
        'url': '',
        'group': 'loggedin',
        'popularity': 105
    },
    headers={
        'Auth-Key': 'yoursecurekey'
    }
)
assert response.status_code == 201

The response code will always be 201 and the response content will be application/json that simply looks like this:

{"message": "OK"}

Uniqueness of the URL

You can submit two "documents" that have the same title but you cannot submit two documents that have the same URL. If you submit:

curl -X POST -H "Auth-Key: yoursecurekey" \
  -d url= \
  -d title="This is the first title" \

# now the same URL, different title
curl -X POST -H "Auth-Key: yoursecurekey" \
  -d url= \
  -d title="A different title the second time" \

Then, the first title will be overwritten and replaced with the second title.

About the popularity

If you omit the popularity key, it's the same as sending it as 0. The search will always be sorted by the popularity, and the higher the number the higher the document title will appear in the search results. If you don't really have the concept of ranking your titles by a popularity or hits or score or anything like that, then use the title's date as the popularity so that the most recent ones have higher priority. That way fresher titles appear first.

About the groups and access control and privacy

Suppose your site visitors should see different things depending on how they're signed in. First of all, you can't do it on a per-user basis. However, suppose you have a set of titles for all visitors of the site and some extra just for people who are signed in; then you can use group as a parameter per title. Note: there is no way to securely protect this information. You can make it so that restricted titles don't appear to people who shouldn't see them, but it's impossible to prevent people from manually querying by a specific group on the command line, for example.

Note that you can have multiple groups. For example, the titles that are publicly available you submit with no group set (or leave it as an empty string), and then you submit some as group="private" and some as group="admins".

How to delete a title/URL

If a URL hasn't changed but the title has, you can simply submit it again. Or if neither the title nor the URL has changed but the popularity has, you can simply submit it again. However, if a title needs to be removed you send an HTTP DELETE. Send it to the same URL you use to submit a title. E.g.
curl -X DELETE -H "Auth-Key: yoursecurekey" \

Note that you can't use application/x-www-form-urlencoded with HTTP DELETE, so you have to put the ?url=... into the URL. Note also that in this example the url is URL encoded: the : becomes %3A. A Python version of this delete call is sketched at the end of this section.

How to remove all your documents

You can start over and flush all the documents you have sent in by doing an HTTP DELETE request to the url /v1/flush. Like this:

curl -X DELETE -H "Auth-Key: yoursecurekey" \

This will reset all the counts related to your domain. The only thing that isn't removed is your auth key.

Bulk upload

Instead of submitting one "document" at a time you can instead send in a whole big JSON blob. The structure needs to be like this example:

{
  "documents": [
    {
      "url": "",
      "title": "Page One"
    },
    {
      "url": "",
      "title": "Other page",
      "popularity": 123
    },
    {
      "url": "",
      "title": "Last page",
      "group": "admins"
    }
  ]
}

Note that the popularity and the group keys are optional. Each dictionary in the array called documents needs to have a url and a title. To use the bulk endpoint you need to do an HTTP POST or an HTTP PUT. Here's an example using curl:

curl -X POST -H "Auth-Key: 3b14d7c280bf525b779d0a01c601fe44" \
  -d '{"documents": [{"url":"/url", "title":"My Title", "popularity":1001}]}' \

And here's an example using Python requests:

import json
import requests

documents = [
    {
        'url': '/some/page',
        'title': 'Some title',
        'popularity': 10
    },
    {
        'url': '/other/page',
        'title': 'Other title',
    },
    {
        'url': '/private/page',
        'title': 'Other private page',
        'group': 'private'
    },
]

print requests.post(
    '',
    data=json.dumps({'documents': documents}),
    headers={
        'Auth-Key': '3b14d7c280bf525b779d0a01c601fe44',
    }
)
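As referenced in the delete section above, here is a minimal Python requests sketch of the same call. The API host and document URL are placeholders (the real submit endpoint is not spelled out in this extract); only the Auth-Key header and the url query parameter follow the description above.

import requests

# Placeholder values -- substitute the URL you already use for submitting titles
# and one of your real document URLs.
submit_url = 'https://autocompeter.example/SUBMIT-ENDPOINT'
doc_url = 'https://www.example.com/plog/some-post'

# The url goes into the query string because application/x-www-form-urlencoded
# can't be used with HTTP DELETE; requests handles the URL encoding, so the
# ':' in the document URL becomes %3A automatically.
response = requests.delete(
    submit_url,
    params={'url': doc_url},
    headers={'Auth-Key': 'yoursecurekey'},
)
assert response.ok, response.status_code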
https://autocompeter.readthedocs.io/en/latest/api/
2020-02-16T22:07:28
CC-MAIN-2020-10
1581875141430.58
[]
autocompeter.readthedocs.io
...

- If using ~/.aws/config or ~/.aws/credentials, a :profile option can be used to choose the proper credentials. Shared configuration is loaded only a single time, and credentials are provided statically at client creation time. Shared credentials do not refresh.

Each service gem contains its own resource interface.

...
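To illustrate the :profile option mentioned above, here is a minimal Ruby sketch. The aws-sdk-s3 gem, the profile name, and the region are assumptions chosen for the example; other service gems accept the same credential options.

# A minimal sketch, assuming the aws-sdk-s3 service gem is installed.
require 'aws-sdk-s3'

# Pick credentials from a named profile in ~/.aws/credentials or ~/.aws/config.
# Remember: shared credentials are loaded once at client creation and do not refresh.
s3 = Aws::S3::Client.new(
  profile: 'my-profile',  # hypothetical profile name
  region: 'us-east-1'     # hypothetical region
)

puts s3.list_buckets.buckets.map(&:name)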
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/index.html
2020-02-16T21:28:54
CC-MAIN-2020-10
1581875141430.58
[]
docs.aws.amazon.com
Develop JavaScript apps

Learn how to develop your own JavaScript apps.

- Start with a sample application to see Kentico Kontent features in action. You can choose tutorials for the React app or Vue.js app, or see the other JavaScript sample apps we have available.
- Next, build your own app from scratch with our tutorial on building your first React app.
- Use the JavaScript SDKs: the Delivery SDK for retrieving published content and the Management SDK for managing content in all production stages.
- Explore the boilerplates and tools we have to get your development started right.
- See how to integrate with other services to build out capabilities. Content as a Service is ideal for a microservices architecture, so you'll want to know how to add in other services to your solution.
https://docs.kontent.ai/tutorials/develop-apps?tech=react
2020-02-16T22:15:19
CC-MAIN-2020-10
1581875141430.58
[]
docs.kontent.ai
Going Green with Office.
https://docs.microsoft.com/en-us/archive/blogs/atwork/going-green-with-office
2020-02-16T23:28:40
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
The Day Zero Configuration Guide describes the generally valid configuration steps that should be taken to achieve this, regardless of the specific Alfresco use case. It describes the steps to take before Alfresco is started for the first time, together with optional configuration and tuning to reach optimal Alfresco performance. This document does not describe the full breadth of Alfresco configuration options that can be leveraged to scale Alfresco in use case specific ways, but aggregates a general set of recommendations from the official documentation in a one-stop-shop document. For additional documentation, see the rest of the product documentation, access the Knowledge Base through the Support Portal, or the Scalability Blueprint document. This is a live document generated out of Alfresco product documentation, so make sure that you check this page often for updates. Check the PDF publication date to make sure you are using the latest available version.
https://docs.alfresco.com/4.2/concepts/zeroday-overview.html?m=2
2020-02-16T23:05:56
CC-MAIN-2020-10
1581875141430.58
[]
docs.alfresco.com
Develop Ruby apps

Learn how to develop your own Ruby apps.

- Start with the sample Rails app that displays content from Kentico Kontent.
- Next, build your own app from scratch with our blog post on creating a Kentico Kontent Ruby on Rails application.
- Use the Delivery Ruby SDK for retrieving published content.
- See how to integrate with other services to build out capabilities. Content as a Service is ideal for a microservices architecture, so you'll want to know how to add in other services to your solution.
https://docs.kontent.ai/tutorials/develop-apps?tech=ruby
2020-02-16T21:32:28
CC-MAIN-2020-10
1581875141430.58
[]
docs.kontent.ai
appendFields

A Workflow Engine function that appends a concatenated set of fields to an existing field, using a separator. appendFields takes the following arguments:
https://docs.moogsoft.com/en/appendfields.html
2020-02-16T23:02:43
CC-MAIN-2020-10
1581875141430.58
[]
docs.moogsoft.com
makeresults

Basic examples

... ="some value"

The results look something like this:

2. Determine if the modified time of an event is greater than the relative time

For events that contain the field scheduled_time in UNIX time: ...

Extended examples

1. Create daily results for testing

You can use the makeresults command to create a series of results to test your search syntax. For example, the following search creates a set of results that are one day apart, using the streamstats command and the eval command.

| makeresults count=5 | streamstats count | eval _time=_time-(count*86400)

The calculation multiplies the value in the count field by the number of seconds in a day. The result is subtracted from the original _time field to get new dates equivalent to 24 hours ago, 48 hours ago, and so forth. The seconds in the date are different because _time is calculated the moment you run the search. The results look something like this:

The dates start from the day before the original date, 2020-01-09, and go back five days. Need more than five results? Simply change the count value in the makeresults command.

2. Create hourly results for testing

You can create a series of hours instead of a series of days for testing. Use 3600, the number of seconds in an hour, instead of 86400 in the eval command.

| makeresults count=5 | streamstats count | eval _time=_time-(count*3600)

The results look something like this:

Notice that the hours in the timestamp are 1 hour apart.

3. Add a field with string values

You can specify a list of values for a field. But to have the values appear in separate results, you need to make the list a multivalue field and then expand that multivalued list into separate results. Use this search, substituting your strings for buttercup and her friends:

| makeresults | eval test="buttercup rarity tenderhoof dash mcintosh fleetfoot mistmane" | makemv delim=" " test | mvexpand test

The results look something like this:

4. Add a field with random numbers

This example builds on the previous example. You can add a field with a set of randomly generated numbers by using the random function, as shown below:

| makeresults count=5 | streamstats count | eval test=random()/random()

The results look something like this:

Use the round function to round the numbers up. For example, this search rounds the numbers up to four digits to the right of the decimal:

...| eval test=round(random()/random(),4)

The results look something like this.
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Makeresults
2020-02-16T23:00:17
CC-MAIN-2020-10
1581875141430.58
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Installing the bonus plugin for the free theme

Once you activate the premium theme you will see a notice at the top of your WordPress admin screen asking you to install a recommended plugin. If you don't see this notice or have dismissed it, you can go to the Appearance >> Install Plugins page. Then just click to install and activate the plugin.
http://docs.kadencethemes.com/pinnacle-free/installing-premium-plugins/
2020-02-16T21:32:48
CC-MAIN-2020-10
1581875141430.58
[array(['http://docs.kadencethemes.com/pinnacle-free/wp-content/uploads/2016/03/Get-Toolkit-min.jpg', 'Get Toolkit-min'], dtype=object) array(['http://docs.kadencethemes.com/pinnacle-free/wp-content/uploads/2016/03/AltInstall-min.jpg', 'AltInstall-min'], dtype=object) ]
docs.kadencethemes.com
Cloud Platform Release Announcements for October 11, 2017

- Azure Functions | Java support public preview
- Power BI Embedded | General availability (GA)
- Node.js support on Azure | Application Insights SDK for Node.js - GA
- Node.js support on Azure | Node.js migration and support services from nearForm
- Azure API Management | PowerBI Solution Template
- Azure Functions | Durable Functions public preview
- Azure Event Hubs | Event Hubs dedicated pricing change
- Azure Cosmos DB | Try Cosmos DB at no cost
- SQL Server 2012 SP4 | GA
- Azure DNS private zones | Managed preview
- Azure App Gateway Web Application Firewall (WAF) | GA
- Azure Files share snapshots | In preview
- Visual Studio 2017 | Update
- Visual Studio for Mac | Update
- Azure Batch AI | Preview

Azure Functions | Java support public preview

Announcing preview support for Java for Azure Functions. Serverless computing enabled by Azure Functions provides great benefits for cost, productivity, and innovation. Developers want to be able to reap these benefits while working with their favorite programming languages and development tools. In order to enable that experience, support for the Java language in Azure Functions is now available in preview. This includes support for popular Java tools like Maven and Jenkins, and IDEs such as IntelliJ and Eclipse, as well as Visual Studio Code. For more details, please see the announcement blog post on the Azure blog.

Power BI Embedded | GA

Microsoft Power BI simplified how partners and developers embed visualizations into their apps. In the same way that partners build apps on Azure infrastructure, they can also use Power BI capabilities to quickly add stunning visuals, reports, and dashboards into their intelligent apps through Power BI Embedded capacity-based tiers. Partners and developers can choose between using our visuals and creating their own. They can expose insights to their customers by connecting to countless data sources, and easily manage the needs of their apps and consumption of the service. Learn more.

Node.js support on Azure | Application Insights SDK for Node.js | GA

Application Insights is an application performance management tool that monitors your apps, services, and components in production, after deployment. It helps you rapidly discover and diagnose performance bottlenecks and other issues. On October 4, 2017, we announced the availability of the Node.js SDK for Application Insights 1.0, which brings maturity to the project along with improvements in performance, reliability, and stability.

- Monitor your Node.js apps and services in production. Find issues, such as bugs and performance bottlenecks, before your users report them.
- Gather telemetry data from your Node.js app.
- Monitor connected services with Application Map, including databases (MongoDB, Redis, PostgreSQL, MySQL, and more) and popular NPM packages, out of the box.
- Works everywhere: virtual machines on Azure and on-premises, Azure Web Apps on Linux, Internet-of-Things devices, and even desktop apps built with Electron.

Application Insights SDK for Node.js is available today, and is published on GitHub and NPM as an open source project. Learn more about Application Insights, then check out the getting started guide for Node.js.

Node.js support on Azure | Node.js migration and support services from nearForm

Read the full announcement. NearForm and Microsoft announced at Node Interactive a new partnership to help customers migrate Node.js apps and services to Azure, and provide enterprise-grade support for them.
By partnering with nearForm, we're bringing their multi-year expertise in architecting, designing, and supporting Node.js apps to developers as they adopt Azure and build on top of our cloud. The partnership includes services for: - Migration of Node.js apps and services to Azure. - Developer support for production apps and services. - Technology consulting—migration to micro services-based architectures, code reviews, Node.js apps performance. - Helping businesses migrate to a SaaS model. With nearForm, Azure customers get access to teams of world-class software architects, designers, developers, DevOps engineers, and open source tooling experts, including Node.js Core contributors and members of the TSC. Based in Ireland, nearForm operates worldwide, assisting customers of any size and industry. Customers interested in the migration and developer services can request a quote from nearForm directly. Azure API Management | PowerBI Solution Template Azure API Management Analytics PowerBI Solution Template now available We're pleased to announce the immediate availability of the Azure API Management Analytics Power BI Solution Template. This template allows you to create engaging, customizable dashboards that illustrate traffic flowing through your Azure API Management instance. The template offers a variety of pages to help you understand and manage your API activity, including: - At-a-glance provides a summary of traffic and KPIs. - API Calls offers detailed insights into specific APIs, subscriptions, and time periods. - Errors helps you identify possible concerns in your API gateway. - Call Frequency highlights unseen patterns in your traffic. - Relationships illustrates calls that are likely to come sequentially, providing intelligence regarding how developers are using your APIs. Download the template. See an overview with screenshots and find more technical details. Azure Functions | Durable Functions public preview We're happy to announce the public preview of Durable Functions, an Azure Functions extension for building long-running, stateful function orchestrations in code using C# in a serverless environment. This will allow developers to implement a lot of new scenarios which were not possible earlier, including complex chaining scenarios, fan in/fan out patterns, stateful actors, and scenarios with long callbacks. Azure Event Hubs | Event Hubs Dedicated pricing change New pricing tier for Event Hubs Dedicated Effective November 13, 2017, we will offer a new pricing tier for Azure Event Hubs Dedicated. This tier introduces hourly fixed pricing, and will be available to both EA and Direct via the web customers. Azure Event Hubs is a hyper-scale telemetry ingestion service that collects, transforms, and stores millions of events. As a fully managed service, it lets you focus on getting value from your telemetry rather than on gathering the data. The Event Hubs Dedicated service offers single-tenant deployments for customers with the most demanding requirements. Key benefits of the service include: - Support for message sizes up to 1MB - Guaranteed capacity to meet peak needs - Message retention for 7 days without any additional cost - Leverages the Capture feature of Event Hubs, allowing a single stream to support real-time and batch based pipelines Collectively, these enhancements provide greater value in terms of price per capacity unit, and greater flexibility to pay for what you use. This blog post provides additional details. 
Azure Cosmos DB | Try Cosmos DB at no cost Try Azure Cosmos DB at no cost for a limited time Take advantage of the free time-bound access offer for Azure Cosmos DB without requiring an Azure subscription. This allows you to more easily evaluate Azure Cosmos DB at no cost to you. Start your free offer today. SQL Server 2012 Service Pack 4 | GA The SQL Server team is excited to bring you the final service pack release for SQL Server 2012. SQL Server 2012 Service Pack 4 (SP4) contains a rollup of released hotfixes as well as more than 20 improvements centered around performance, scalability, and diagnostics based on the feedback from customers and the SQL community. These include additional monitoring capabilities through enhancements in DMV, Extended Events, and Query Plans. Learn more in the recent SQL Server 2012 SP4 post on the SQL Server Blog. Azure DNS private zones | Managed preview We're announcing a managed preview of private zones, a key feature addition to Azure DNS. This capability provides a reliable, secure DNS service to manage and resolve names in a virtual network (VNet), without the need for you to create and manage custom DNS solution. This feature allows you to use your company domain rather than the Azure-provided names available today, provides name resolution for virtual machines (VMs) within a VNet and across VNets. Additionally, you can configure zones names with a split-horizon view, allowing for a private and a public DNS zone to share the same name. Zone and record management is done using the already familiar PowerShell. Support for Rest API, CLI, and Portal will be announced soon. For more information and to participate in the managed preview, please contact [email protected]. For more information, including pricing details, please visit the Azure DNS Private Zone overview page and the Azure DNS pricing page. (Note—Customers will be charged a reduced rate during the preview.) Azure App Gateway WAF (Web Application Firewall) | GA Azure Application Gateway is an application delivery controller service that offers various Layer 7 (HTTP/HTTPS) load balancing and WAF capabilities to Azure customers. As part of continued enhancements, we're pleased to announce support for IPv6 for Azure Application Gateway. With this capability, customers can now choose to create IPv6 end points in addition to existing IPv4 end points. This allows users to connect to Application Gateway using either IPv4 or IPv6 protocols. IPv6 support is available in select regions. Support for remaining regions will be announced in coming weeks. For more information, refer to Application Gateway IPv6 page. Azure Files share snapshots | In preview Announcing the public preview for Azure File snapshots We're excited to introduce Azure Files share snapshots. Share snapshots allow you to periodically create a read-only copy of the data in an entire file share to use as a baseline for a new file share. It can also be used as a means of providing a cloud-based version of the file recovery capabilities that users know and love. File share snapshots provide a point in time “picture” of the contents of a cloud file share. Only the incremental changes to individual files in the share will be written to the snapshot. A file share may have up to 200 snapshots. Share snapshots persist until they are explicitly deleted. Azure File share snapshots use a cloud version of the SMB "Previous Versions" functionality found in client and server versions of Windows. 
Previous Versions functionality helps users to: - Recover files you accidentally deleted. - View or restore a version of a file that you have saved over. - Allow you to compare current and/or previous versions of a file side-by-side. Customers can create or delete share snapshots using REST API, Client Library, PowerShell, CLI, and Azure Portal. Customers can view snapshots of a share, file, or directory using both REST API and SMB. The Azure Files share snapshots public preview will be available in all regions. During the preview, capacity occupied by share snapshots will be free but standard transaction charges will be billed. For more information regarding Azure Files Share Snapshots, please visit our blog. See additional pricing information. Visual Studio 2017 | Update Visual Studio 2017 is now available, adding support for the Windows Fall Creators Update, .NET Standard 2.0 support for UWP development, and a new Windows Application packaging project to help you pack any Windows project into an .appx container for distribution through the Windows Store. Xamarin Live enables you to continuously deploy, test, and debug your apps directly on iOS and Android devices. Download the update. Visual Studio for Mac | Update A new update to Visual Studio for Mac is now available, adding Docker support for ASP.NET Core apps, Android 8.0 Oreo support for apps using Xamarin, as well as .NET Core 2.0 installation from within the Visual Studio for Mac installer. The complete release notes are available here. Download the update here, or through the Visual Studio for Mac updater. Azure Batch AI | Preview We recently announced the release of Azure Batch AI Preview. Azure Batch AI Preview helps you train deep learning and other machine learning models using GPU and CPU clusters. There's no additional charge for using Batch AI Service Preview beyond the underlying compute and other resources consumed. Batch AI Preview supports both standard and low priority virtual machines. Only Linux virtual machines in the East US region are supported in the preview. For more information, see Azure Batch AI.
https://docs.microsoft.com/en-us/archive/blogs/stbnewsbytes/cloud-platform-release-announcements-for-october-11-2017
2020-02-16T23:56:48
CC-MAIN-2020-10
1581875141430.58
[]
docs.microsoft.com
To build a complex streaming analytics application from scratch, we will work with a fictional use case. A trucking company has a large fleet of trucks, and wants to perform real-time analytics on the sensor data from the trucks and to monitor them in real time. Their analytics application has the following requirements:

1. Outfit each truck with two sensors that emit event data such as timestamp, driver ID, truck ID, route, geographic location, and event type. The geo event sensor emits geographic information (latitude and longitude coordinates) and events such as excessive braking or speeding. The speed sensor emits the speed of the vehicle.
2. Stream the sensor events to an IoT gateway, which serializes the events as Avro objects and streams them into separate Kafka topics, one for each sensor.
3. Use NiFi to consume the serialized Avro events from the Kafka topics, and then route, transform, enrich, and deliver the data to a downstream Kafka instance.
4. Connect to the two streams of data to do analytics on the stream.
5. Join the two sensor streams using attributes in real time. For example, join the geo-location stream of a truck with the speed stream of a driver.
6. Filter the stream to only events that are infractions or violations.
7. Make all infraction events available for descriptive analytics (dash-boarding, visualizations, or similar) by a business analyst. The analyst needs the ability to do analysis on the streaming data.
8. Detect complex patterns in real time. For example, over a three-minute period, detect if the average speed of a driver is more than 80 miles per hour on routes known to be dangerous.
9. When each of the preceding rules fires, create alerts and make them instantly accessible.
10. Execute a logistic regression Spark ML model on the events in the stream to predict if a driver is going to commit a violation. If a violation is predicted, then alert on it.

The sections below walk you through how to implement all ten requirements. Requirements 1-3 are done using NiFi and Schema Registry. Requirements 4 through 10 are implemented using the new Streaming Analytics Manager.
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1.1/bk_getting-started-with-stream-analytics/content/ch01s01.html
2017-10-17T00:07:32
CC-MAIN-2017-43
1508187820487.5
[]
docs.hortonworks.com
Welcome to the WSO2 Private PaaS 4.0.0 documentation! WSO2 Stratos 2.0.0 was donated to the Apache Software foundation and was released as Apache Stratos 3.0.0 (incubating). Thereafter, WSO2 Stratos has been rebranded as WSO2 Private PaaS. WSO2 Private PaaS version 4.0.0 is the first release and it has been built on top of Apache Stratos 4.0.0. WSO2 Private Pa.
https://docs.wso2.com/display/PP400/WSO2+Private+PaaS+Documentation
2017-10-17T00:39:53
CC-MAIN-2017-43
1508187820487.5
[]
docs.wso2.com
Installation

On this page you will learn how to install The DataTank.

System Requirements

The DataTank requires a server with:

- Apache2 or Nginx
- mod_rewrite enabled
- PHP 5.4 or higher
- Mcrypt installed and enabled (php5-mcrypt)
- Git
- Any database supported by Laravel 4
- dbase PHP extension

On a Debian-based Apache setup, the required modules and extensions can be enabled like this:

$ a2enmod rewrite
$ php5enmod mcrypt
$ pecl install dbase
$ php5enmod dbase
$ service apache2 restart

If you're using a Mac system, you'll have to download a web stack which contains the necessary requirements, such as XAMPP.

Set the rewrite rules in /etc/apache2/sites-available/default to ALL:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName tdt.dev
    DocumentRoot /var/www/core/public
    <Directory /var/www/core/public>
        Require all granted
    </Directory>
</VirtualHost>

Your web server is now installed properly and ready to use. Next to that, you'll need to prepare a database with a user that has read & write permissions for The DataTank to use.

Clone the project

To install the project on your device, open up a terminal and clone our repository:

git clone

Configure the database

Provide your database credentials in the app/config/database.php file, according to the Laravel database configuration. After that you're ready to make composer work its magic! Run the following commands from the root of the folder where you cloned the repository:

$ composer install
$ composer update

Once installed, you can adjust the configuration immediately by editing it using the user interface. Click the question mark in the user interface to start the help guide.
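The requirements above also list Nginx as an alternative to Apache2, but only an Apache virtual host is shown. The following is a minimal sketch of an equivalent Nginx server block for a Laravel 4 public/ directory; the PHP-FPM socket path and the rest of the details are assumptions, not taken from The DataTank documentation.

# A sketch of an Nginx server block mirroring the Apache example above;
# assumes PHP-FPM is listening on the default php5-fpm Unix socket.
server {
    listen 80;
    server_name tdt.dev;
    root /var/www/core/public;
    index index.php;

    location / {
        # Emulates mod_rewrite: route everything through Laravel's front controller
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}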
http://docs.thedatatank.com/5.6/installation
2017-10-16T23:38:28
CC-MAIN-2017-43
1508187820487.5
[]
docs.thedatatank.com