Columns: content (string, lengths 0 to 557k), url (string, lengths 16 to 1.78k), timestamp (timestamp[ms]), dump (string, lengths 9 to 15), segment (string, lengths 13 to 17), image_urls (string, lengths 2 to 55.5k), netloc (string, lengths 7 to 77)
Namespace: Amazon.SimpleEmail.Model Assembly: AWSSDK.dll Version: (assembly version) The DeleteIdentityPolicyRequest type exposes the following members. .NET Framework: Supported in: 4.5, 4.0, 3.5 .NET for Windows Store apps: Supported in: Windows 8.1, Windows 8 .NET for Windows Phone: Supported in: Windows Phone 8.1, Windows Phone 8
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TSimpleEmailDeleteIdentityPolicyRequestNET45.html
2017-11-17T20:03:39
CC-MAIN-2017-47
1510934803906.12
[]
docs.aws.amazon.com
Commerce v1 SimpleCart or Commerce? Since announcing Commerce in 2015, the question we received most was whether people should wait for Commerce, or use SimpleCart (the other ecommerce plugin we have available) for their projects. Until we released Commerce that was easy to answer: use SimpleCart, as Commerce wasn't ready to go and there was no release date just yet. But now that Commerce is in beta, there are different considerations. We'll go over a few in this document to see how the two products differ, and when each one would make sense. Already using SimpleCart? Just stick with it. If your webshop is humming along nicely, and there's no immediate reason to change how it works, SimpleCart should do just fine. We're continuing to maintain SimpleCart, even now that Commerce is available, so as long as you keep it up to date you'll enjoy a stable experience. If your shop has grown beyond what SimpleCart can offer, migrating it to Commerce may make sense. There are a few options you could consider. First, you could leave your catalog in place, using SimpleCart. Combined with a few changes to your templates, cart, checkout and customer section, this option will save you a lot of content migrations, while benefiting from the more advanced and flexible Commerce features. It's possible to move your order history along as well if needed. We're planning to make tools and documentation to support this available in the next few months. The second option is to move your products into a new catalog as well. Look into the different ways of managing your catalog in Commerce for comparisons to find what best suits your shop. By moving the catalog, you might get additional benefits or features that your shop needs. The most radical option is to start with a clean slate. Perhaps the shop could use a redesign or rebuild? You get to pick the best tool for the job, which is probably Commerce. High-level Requirements There are a few high-level requirements that will lead you to one product or the other. Product Prices inclusive of Taxes/VAT If you need to configure your products to have prices including taxes or VAT, then Commerce is the way to go. Commerce, like SimpleCart, defaults to prices exclusive of taxes; however, Commerce offers a setting to toggle to an inclusive calculation instead. Commerce also ships with self-updating VAT rates for the EU, and a TaxJar integration for automated US Sales Tax calculation. Read more about taxes in Commerce. Dynamic shipping costs If you need to calculate shipping costs based on an integration with a postal service, or based on custom business logic involving product weight, size, or quantity, then you'll want Commerce. SimpleCart is rather restricted in how it allows shipping methods to be priced. In the past we have researched adding more flexible shipping pricing to SimpleCart; however, that turned out to be less straightforward than we hoped due to the structure of the checkout and the legacy users to consider. The flexibility of Commerce will allow you to manage shipping costs in various ways. For each shipping method you can provide a fixed or percentage price out of the box. Each method also has restrictions on the order total that you can set, making sure shipping methods are only available as an option below, between, or above certain order totals. Simple business rules like "€5 shipping under €25, €3 from €25-50, free from €50+" can be easily implemented with these options. 
A shipping method for weight-based costings is also available out of the box, as is a shipping method that sets different prices per country. For specific business rules or integrations with carriers, you could build a custom shipping method. Payment Service Providers When accepting payments online you need to work with one or more payment providers. This can be an important factor in deciding which shopping solution fits your needs. SimpleCart supports the following payment providers: - Authorize.net, SIM integration - Mollie - PayPal Express - Stripe For a current list of the supported payment methods in Commerce see Payment Methods. At the time of writing, the following are supported in Commerce: - Authorize.net, Accept.js integration - Braintree - Mollie - MultiSafePay - Paymill - PayPal Express - SagePay - Stripe If the payment provider you need is not on either list, Commerce is the better option. Implementing gateways is a lot easier in Commerce thanks to the use of the OmniPay library. If a "driver" exists for the payment provider you need, we'll be able to integrate it with Commerce much more quickly. Be sure to request the payment provider you need well ahead of time, and we might be able to add it in our regular release cycle if there is sufficient demand. To commission a payment provider because you need it in the short term, please contact Mark via [email protected] for an estimate.
http://docs.modmore.com/en/Commerce/v1/SimpleCart_or_Commerce.html
2017-11-17T19:16:10
CC-MAIN-2017-47
1510934803906.12
[]
docs.modmore.com
cortex_m_rtfm pub fn logical2hw(logical: u8) -> u8 Converts a logical priority into a shifted hardware priority, as used by the NVIC and the BASEPRI register. This function panics if logical is outside the closed range [1, 1 << PRIORITY_BITS], where PRIORITY_BITS is the number of priority bits used by the device-specific NVIC implementation.
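As an illustration of this mapping, here is a minimal Python sketch of how a logical priority is commonly converted to a shifted NVIC/BASEPRI value. The 4-bit PRIORITY_BITS value and the inversion formula are assumptions chosen for illustration, not taken from the cortex-m-rtfm 0.1.0 source.

# Illustrative sketch only: converts an RTFM-style logical priority into a
# shifted NVIC/BASEPRI hardware value. PRIORITY_BITS and the inversion are
# assumptions based on typical Cortex-M conventions (lower register value = more urgent).
PRIORITY_BITS = 4  # assumed number of priority bits implemented by the device

def logical2hw(logical: int) -> int:
    if not (1 <= logical <= (1 << PRIORITY_BITS)):
        raise ValueError("logical priority out of range")  # mirrors the documented panic
    # Invert the value and shift it into the top bits of the 8-bit priority field.
    return ((1 << PRIORITY_BITS) - logical) << (8 - PRIORITY_BITS)

# Example: with 4 priority bits, logical priority 1 maps to 0xF0 and 16 maps to 0x00.
print(hex(logical2hw(1)), hex(logical2hw(16)))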
https://docs.rs/cortex-m-rtfm/0.1.0/cortex_m_rtfm/fn.logical2hw.html
2017-11-17T19:28:45
CC-MAIN-2017-47
1510934803906.12
[]
docs.rs
Industrial Security Navigation The top navigation menu displays three main pages: Monitoring, Results, and Settings. Click a page name to open that page. Click the username to display a drop-down menu with two options: Change Password and Sign Out. Note: The Configuration page is available only to users with administrative privileges. The bell icon toggles the Notification History box, which displays a list of notifications, successful or unsuccessful login attempts, errors, and system information generated by Industrial Security. The color of the bell changes based on the nature of the notifications in the list. If there are no alerts, or all notifications are information alerts, then the bell is white. If there are error alerts in the notification list, then the bell is red. The Notification History box displays up to 1,000 alerts. Once the limit is reached, no new alerts can be listed until old ones are cleared.
https://docs.tenable.com/industrialsecurity/1_0/Content/IndustrialSecurityNavigation.htm
2017-11-17T19:30:37
CC-MAIN-2017-47
1510934803906.12
[array(['Resources/Images/5.3TopNavigation.png', None], dtype=object)]
docs.tenable.com
SimpleCart v2 Frontend Checkout Order and Delivery Addresses Within SimpleCart you have the ability to use separate addresses for the order itself and the delivery. To request the additional delivery address, you'll need to add some fields to the same checkout form. These fields typically use a delivery_ prefix, so you get delivery_firstname, delivery_lastname, delivery_street, delivery_number and so on. As SimpleCart doesn't know what these fields are out of the box, you need to add them to the field mapping on the FormIt call. Mapping the address fields Without mapping the address fields, SimpleCart will treat new fields as custom order fields. The default checkout form (discussed here) uses two address fields which are already mapped to a different field. These are street and number, and the mapping for that looks like this: [[!FormIt? ... &orderAddress=`address1:street,address2:number` ... ]] As you can see, in this mapping parameter the fields "street" and "number" are mapped to SimpleCart's "address1" and "address2" fields. Multiple form fields can be separated by a comma. Mapping multiple checkout form fields into one SimpleCart field is also possible by adding additional fields after a colon, for example address1:street:number. Each field will be added, separated by a space. The mapping for the delivery address is done the same way, but with &deliveryAddress. For example: [[!FormIt? ... &orderAddress=`address1:street,address2:number` &deliveryAddress=`firstname:delivery_firstname,lastname:delivery_lastname,address1:delivery_street,address2:delivery_number,...` ... ]] In this delivery mapping you need to map every single delivery field. Available address fields Below is a list of all available address fields in SimpleCart that can be used in your checkout form. salutation: used to store the salutation of the customer. name: used to store a full name of the customer (required if not using firstname and lastname). firstname: used to store the first name of the customer (required if not using fullname). lastname: can be used to store the last name of the customer. company: can be used to store the company name. address1: a mixed field for the first address line of the customer (required). address2: a mixed field for the second address line of the customer. address3: a mixed field for the third address line of the customer. zip: use this field to store the zip/postal code of the customer (required). city: use this field to store the city name of the customer (required). state: can be used to store the customer's state or province name. country: use this field to store the customer's country name (or code). phone: the general phone number field. mobile: an optional mobile phone number field. Conditional validation For use cases where you have a checkbox that allows a user to enter a different delivery address, you'll need a conditional validator. You can find a custom requiredIf validator here; to use it, add it to MODX as a snippet. Here's an example of how you would use the requiredIf validator to only validate delivery fields if a checkbox use_delivery_address has a non-empty value. [[!FormIt? ... &customValidators=`requiredIf` &validate=`... delivery_firstname:requiredIf=^use_delivery_address^ delivery_lastname:requiredIf=^use_delivery_address^ delivery_street:requiredIf=^use_delivery_address^ ...` ... ]]
http://docs.modmore.com/en/SimpleCart/v2.x/Frontend/Checkout/Order_and_Delivery_Addresses.html
2017-11-17T19:26:25
CC-MAIN-2017-47
1510934803906.12
[]
docs.modmore.com
= Current mandate to perform task = No current mandate to perform task The contents of this page are out of date - needs clarification of the implementation status of ISO Feature model on geotools and geoserver trunk. Geotools 2.4 Unsupported GeoServer 1.6 Milestone 1 - March, 2007. Geotools 2.4 Supported Unsupported GeoServer 1.6 Milestone 2 - April, 2007 Switch to the new simple feature model implementation developed in the unsupported space during the previous milestone. Geotools 2.5 Supported Unsupported GeoServer 1.7 Milestone 3 - June, 2007 Preparation for the complex feature model.
http://docs.codehaus.org/pages/diffpages.action?pageId=69045&originalId=233049900
2015-02-27T04:15:21
CC-MAIN-2015-11
1424936460472.17
[]
docs.codehaus.org
Tasks Russel Winder
http://docs.codehaus.org/pages/viewpage.action?pageId=73727
2015-02-27T04:19:44
CC-MAIN-2015-11
1424936460472.17
[]
docs.codehaus.org
Hortonworks Data Platform deploys Apache Hive for your Hadoop cluster. Hive is a data warehouse infrastructure built on top of Hadoop. It provides tools to enable easy data ETL, a mechanism to put structure on the data, and the capability to query and analyze large data sets stored in Hadoop files. Hive metastore upgrade scripts can be found in C:\hadoop\hive-0.11.0\scripts\metastore\upgrade\
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0-Win/bk_dataintegration/content/ch_using-hive.html
2015-02-27T04:02:23
CC-MAIN-2015-11
1424936460472.17
[]
docs.hortonworks.com
Static asset pipeline¶ This page.
http://zulip.readthedocs.io/en/stable/front-end-build-process.html
2017-11-18T00:55:19
CC-MAIN-2017-47
1510934804125.49
[]
zulip.readthedocs.io
Work with Text Define Text in Multiple Languages Most text fields in Colectica Designer allow you to enter text in multiple languages. The active language is identified by the language indicator found on the right side of a text box. Configure Available Languages From the File menu, choose Languages. In the dialog window that appears, move languages to and from the Project Cultures list; move languages either by selecting the language and clicking the appropriate arrow button, or by clicking and dragging a language from one list to the other. Change the Active Language You can change which metadata language Colectica displays in its forms. Choose the language from the dropdown list in the Languages section of the ribbon's Study tab. All text fields will automatically switch to display text in the selected language. Note: Text fields that do not contain text for the currently specified language, but that do have text in another language, will show a message indicating this. View All Translations of a Text Field Click the language button. The textbox will expand to show all languages. Specify Text for Multiple Audiences Many text fields allow you to specify different text for different audiences. For example, you may wish to describe your study one way to internal staff members, and a different way on your public web site. Text fields that support definitions for multiple audiences have an Audience button, as shown below. Click the Audience button to open the multi-audience editor. To add an audience, click the Add an Audience button. You will be prompted for the name of the new audience, and an additional text box will appear. Spell Check For English language text, Colectica Designer automatically provides spell checking.
http://docs.colectica.com/designer/view-and-edit/work-with-text/
2017-11-18T00:51:15
CC-MAIN-2017-47
1510934804125.49
[array(['../../../_images/textbox-language.png', '../../../_images/textbox-language.png'], dtype=object) array(['../../../_images/textbox-audience-button.png', '../../../_images/textbox-audience-button.png'], dtype=object) array(['../../../_images/textbox-audiences-expanded.png', '../../../_images/textbox-audiences-expanded.png'], dtype=object) array(['../../../_images/textbox-spellcheck.png', '../../../_images/textbox-spellcheck.png'], dtype=object)]
docs.colectica.com
Migrating from v3.2 to v4.0 Major evolution occurs with this v4.0 release, as the traditional module command implemented in C is replaced by the native Tcl version. This full Tcl rewrite of the Modules package was started in 2002 and has now reached the maturity to take over from the binary version. This flavor change makes it possible to refine and push forward the module concept. This document provides an outlook of what is changing when migrating from v3.2 to v4.0, first describing the new features that have been introduced. Both v3.2 and v4.0 are quite similar and the transition to the new major version should be smooth. Slight differences may be noticed in a few use cases, so the second part of the document will help you learn about them by listing the features that have been discontinued in this new major release and the features where a behavior change can be noticed. New features Overall, this major release brings a lot more robustness to the module command, with more than 4000 non-regression tests crafted to ensure correct operation over time. Version 4.0 also comes with a fair amount of improved functionality. The major new features are described in this section. Non-zero exit code in case of error All module sub-commands will now return a non-zero exit code in case of error, whereas Modules v3.2 always returned a zero exit code even if an issue occurred. Output redirect Traditionally the module command outputs text that should be seen by the user on stderr, since shell commands are output to stdout to change the shell's environment. Now on sh, bash, ksh, zsh and fish shells, output text is redirected to stdout after shell command evaluation if the shell is in interactive mode. Filtering avail output Results obtained from the avail sub-command can now be filtered to only get the default version of each module name with the --default or -d command line switch. The default version is either the explicitly set default version or, if no default version is set, the highest numerically sorted modulefile or module alias. It is also possible to filter results to only get the highest numerically sorted version of each module name with the --latest or -L command line switch. Extended support for module alias and symbolic version Module aliases are now included in the results of the avail, whatis and apropos sub-commands. Modules v4 resolves a module alias or symbolic version passed to the unload command, then removes the loaded modulefile pointed to by that alias or symbolic version. A symbolic version set on a module alias is now propagated along the resolution path to also apply to the related modulefile if it still corresponds to the same module name. Hiding modulefiles Visibility of modulefiles can be adapted by use of file mode bits or file ownership. If a modulefile should only be used by a given subset of people, its mode and ownership can be tailored to provide read rights to this group only. In this situation, module only reports the modulefile, during an avail command for instance, if this modulefile can be read by the current user. These hidden modulefiles are simply ignored when walking through the modulepath content. Access issues (permission denied) occur only when directly accessing a hidden modulefile or when accessing a symbol or an alias targeting a hidden modulefile. 
Improved modulefiles location When looking for an implicit default in a modulefile directory, aliases are now taken into account in addition to modulefiles and directories to determine the highest numerically sorted element. Modules v4 resolves a module alias or symbolic version when it points to a modulefile located in another modulepath. Access issues (permission denied) are now distinguished from find issues (cannot locate) when directly accessing a directory or a modulefile, as done by the load, display or whatis commands. In addition, on this kind of access, unreadable .modulerc or .version files are ignored rather than producing a missing magic cookie error. Module collection Modules v4 introduces support for module collections. Collections describe a sequence of module use then module load commands that are interpreted by Modules to set the user environment as described by this sequence. When a collection is activated with the restore sub-command, modulepaths and loaded modules are unused or unloaded if they are not part of the collection or if they are not ordered the same way as in the collection. Collections are generated by the save sub-command, which dumps the current user environment state in terms of modulepaths and loaded modules. By default collections are saved under the $HOME/.module directory. Collections can be listed with the savelist sub-command, displayed with saveshow and removed with saverm. Collections may be valid only for a given target if they are suffixed. In this case these collections can only be restored if their suffix corresponds to the current value of the MODULES_COLLECTION_TARGET environment variable. Saving a collection registers the target footprint by suffixing the collection filename with .$MODULES_COLLECTION_TARGET. Path variable element counter Modules 4 provides a path element counting feature which increases a reference counter each time a given path entry is added to a given path-like environment variable. As a consequence, a path entry element is removed from a path-like variable only if the related element counter is equal to 1. If this counter is greater than 1, the path element is kept in the variable and the reference counter is decreased by 1. This feature allows shared usage of particular path elements. For instance, modulefiles can append /usr/local/bin to PATH, and it is not removed until all the modulefiles that added it are unloaded too. Optimized I/O operations Substantial work has been done to reduce the number of I/O operations performed during global modulefile analysis commands like avail or whatis. stat, open, read and close I/O operations have been cut down to the minimum required when walking through the modulepath directories to check whether files are modulefiles or to resolve module aliases. Interpretation of modulefiles and modulercs is handled by the minimum number of Tcl interpreters required, which means a configured Tcl interpreter is reused as much as possible between each modulefile interpretation or between each modulerc interpretation. Sourcing modulefiles Modules 4 introduces the possibility to source a modulefile rather than loading it. When it is sourced, a modulefile is interpreted into the shell environment but it is not marked loaded in the shell environment, which differs from the load sub-command. This functionality is used in shell initialization scripts once the module function is defined. There the etc/modulerc modulefile is sourced to set up the initial state of the environment, composed of module use and module load commands. 
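As an illustration of the path element counting described in the "Path variable element counter" section above, here is a minimal Python sketch of the reference-counting idea. It is a simplified model, not the actual Tcl implementation used by Modules, and the function names are hypothetical.

# Minimal model of reference-counted path-like variables (illustration only;
# the real Modules implementation is written in Tcl and differs in detail).
path_elements = []          # ordered entries of the PATH-like variable
ref_counts = {}             # entry -> how many modulefiles added it

def add_path(entry):
    # Adding an entry either appends it or bumps its reference counter.
    if entry in ref_counts:
        ref_counts[entry] += 1
    else:
        ref_counts[entry] = 1
        path_elements.append(entry)

def remove_path(entry):
    # An entry is only really removed once its counter drops to zero.
    if entry not in ref_counts:
        return
    ref_counts[entry] -= 1
    if ref_counts[entry] == 0:
        del ref_counts[entry]
        path_elements.remove(entry)

# Two modulefiles add /usr/local/bin; unloading only one of them keeps it in PATH.
add_path("/usr/local/bin")
add_path("/usr/local/bin")
remove_path("/usr/local/bin")
print(path_elements)  # ['/usr/local/bin']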
Removed features and substantial behavior changes The following sections provide a list of Modules v3.2 features that are discontinued on Modules v4, and features with a substantial behavior change that should be taken into consideration when migrating to v4. Package initialization The MODULESBEGINENV environment snapshot functionality is not supported anymore on Modules v4. The Modules collection mechanism should be used instead to save and restore sets of enabled modulepaths and loaded modulefiles. Command line switches Some command line switches are not supported anymore on v4.0. When still using them, a warning message is displayed and the command is run with these unsupported switches ignored. The following command line switches are concerned: --force, -f --human --verbose, -v --silent, -s --create, -c --icase, -i --userlvl lvl, -u lvl Module sub-commands During a help sub-command, Modules v4 does not redirect output made on stdout in the ModulesHelp Tcl procedure to stderr. Moreover, when running help, version 4 interprets all the content of the modulefile and then calls the ModulesHelp procedure if it exists, whereas Modules 3.2 only interprets the ModulesHelp procedure and not the rest of the modulefile content. When load is asked on an already loaded modulefile, Modules v4 ignores this new load order, whereas v3.2 refreshed shell alias definitions found in this modulefile. When switching, on version 4, an old modulefile for a new one, no error is raised if the old modulefile is not currently loaded. In this situation v3.2 threw an error and aborted the switch action. Additionally, on the switch sub-command, the new modulefile does not keep the position held by the old modulefile in the loaded modules list on Modules v4, as was the case on v3.2. The same goes for path-like environment variables: the replaced path component is appended to the end or prepended to the beginning of the related path-like variable, not appended or prepended relative to the position held by the swapped path component. During a switch command, version 4 interprets the swapped-out modulefile in unload mode, so the sub-modulefiles loaded with a module load order in the swapped-out modulefile are also unloaded during the switch. On Modules 3.2, paths composing the MODULEPATH environment variable may contain references to environment variables. These variable references are resolved dynamically when MODULEPATH is looked at during module sub-command action. This feature has been discontinued on Modules v4. The following Modules sub-commands are not supported anymore on v4.0: clear update Modules specific Tcl commands Modules v4 provides a path element counting feature which increases a reference counter each time a given path entry is added to a given environment variable. As a consequence, a path entry element is not always removed from a path-like variable when calling remove-path, or when append-path or prepend-path is undone at unload time. The path element is removed only if its related element counter is equal to 1. If this counter is greater than 1, the path element is kept in the variable and the reference counter is decreased by 1. On Modules v4, module-info mode returns the unload value during an unload sub-command, instead of remove as on Modules v3.2. However, if mode is tested against the remove value, true will be returned. During a switch sub-command on Modules v4, unload then load is returned instead of switch1 then switch2 then switch3 on Modules v3.2. However, if mode is tested against the switch value, true will be returned. 
When using set-alias, Modules v3.2 defines a shell function when variables are in use in the alias value on Bourne shell derivatives; Modules 4 always defines a shell alias, never a shell function. Some Modules-specific Tcl commands are not supported anymore on v4.0. When still using them, a warning message is displayed and these unsupported Tcl commands are ignored. The following Modules-specific Tcl commands are concerned: module-info flags module-info trace module-info tracepat module-info user module-log module-trace module-user module-verbosity Further reading To get a complete list of the differences between Modules v3.2 and v4, please read the Differences between versions 3.2 and 4 document. A significant number of issues reported for v3.2 have been closed on v4. A list of these closed issues can be found at:
https://modules.readthedocs.io/en/stable/MIGRATING.html
2017-11-18T00:44:32
CC-MAIN-2017-47
1510934804125.49
[]
modules.readthedocs.io
Tunneling PyZMQ Connections with SSH New in version 2.1.9. You may want to connect ØMQ sockets across machines, or untrusted networks. One common way to do this is to tunnel the connection via SSH. IPython introduced some tools for tunneling ØMQ connections over ssh in simple cases. These functions have been brought into pyzmq as zmq.ssh under IPython's BSD license. PyZMQ will use the shell ssh command via pexpect by default, but it also supports using paramiko for tunnels, so it should work on Windows. An SSH tunnel has five basic components: - server : the SSH server through which the tunnel will be created - remote ip : the IP of the remote machine as seen from the server (remote ip may be, but is not generally, the same machine as server). - remote port : the port on the remote machine that you want to connect to. - local ip : the interface on your local machine you want to use (default: 127.0.0.1) - local port : the local port you want to forward to the remote port (default: high random) So once you have established the tunnel, connections to localip:localport will actually be connections to remoteip:remoteport. In most cases, you have a zeromq url for a remote machine, but you need to tunnel the connection through an ssh server. So if you would use this command from the same LAN as the remote machine: sock.connect("tcp://10.0.1.2:5555") then, to make the same connection from another machine that is outside the network but where you have ssh access to a machine server on the same LAN, you would simply do: from zmq import ssh ssh.tunnel_connection(sock, "tcp://10.0.1.2:5555", "server") Note that "server" can actually be a fully specified "user@server:port" ssh url. Since this really just launches a shell command, all your ssh configuration of usernames, aliases, keys, etc. will be respected. If necessary, tunnel_connection() does take arguments for specific passwords, private keys (the ssh -i option), and non-default choice of whether to use paramiko. If you are on the same network as the machine, but it is only listening on localhost, you can still connect by making the machine itself the server, and using loopback as the remote ip: from zmq import ssh ssh.tunnel_connection(sock, "tcp://127.0.0.1:5555", "10.0.1.2") The tunnel_connection() function is a simple utility that forwards a random localhost port to the real destination, and connects a socket to the new local url, rather than the remote one that wouldn't actually work. See also A short discussion of ssh tunnels:
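Since the page mentions that tunnel_connection() also accepts arguments for passwords, private keys and the paramiko backend, here is a hedged Python sketch of what that might look like. The keyword argument names (keyfile, paramiko) are assumptions for illustration and should be checked against the zmq.ssh API of the pyzmq version you use; the host names are hypothetical.

# Sketch only: tunnelled connect with an explicit private key and the paramiko
# backend forced (useful on Windows). Keyword names are assumed, not verified
# against this pyzmq release.
import zmq
from zmq import ssh

ctx = zmq.Context.instance()
sock = ctx.socket(zmq.REQ)

ssh.tunnel_connection(
    sock,
    "tcp://10.0.1.2:5555",           # zeromq url of the remote machine
    "user@server.example.com:2222",  # a fully specified ssh url is allowed
    keyfile="~/.ssh/id_rsa",         # like `ssh -i` (assumed keyword name)
    paramiko=True,                   # use paramiko instead of the ssh shell command
)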
http://pyzmq.readthedocs.io/en/latest/ssh.html
2017-11-18T00:52:08
CC-MAIN-2017-47
1510934804125.49
[]
pyzmq.readthedocs.io
Make life easier! Starting with 0.8.0 Project FiFo releases pre-made datasets. See "Installing via FiFo AIO Dataset" for instructions. This is the quickest and easiest way to get FiFo running. Please see the Requirements section for details on the requirements. From the GZ (Global Zone) import the base dataset which we will use as the foundation for our FiFo Zone. Once imported we then confirm it is installed. Please consult Zone requirements for details. { "autoboot": true, "brand": "joyent", "image_uuid": "<zone uuid>", "delegate_dataset": true, "indestructible_delegated": true, "max_physical_memory": 3072, "cpu_cap": 100, "alias": "fifo", "quota": "40", "resolvers": [ "8.8.8.8", "8.8.4.4" ], "nics": [ { "interface": "net0", "nic_tag": "admin", "ip": "10.1.1.240", "gateway": "10.1.1.1", "netmask": "255.255.255.0" } ] } Next we create our FiFo JSON payload file and save it in case we need to re-install at a later stage. We now zlogin to our newly created FiFo Zone and proceed with adding the FiFo package repository. Then we install the required FiFo packages. First we need to configure the delegate dataset to be mounted to /data; we can do this from within the zone with the following command: Starting with 14.4.0 Datasets, Joyent introduced signed packages. Starting with version 0.7.0, FiFo has also started signing its packages. To properly install FiFo packages, the FiFo public key is required and can be installed with the following command. cd /data curl -O gpg --primary-keyring /opt/local/etc/gnupg/pkgsrc.gpg --import < fifo.gpg gpg --keyring /opt/local/etc/gnupg/pkgsrc.gpg --fingerprint The key ID is BB975564, and the fingerprint CE62 C662 67D5 9129 B291 62A0 ADDF 278A BB97 5564 should be returned respectively. Now we can install the packages. echo "" >> /opt/local/etc/pkgin/repositories.conf pkgin -fy up pkgin install fifo-snarl fifo-sniffle fifo-howl fifo-cerberus If this is a fresh installation the installer will create default configuration files for each service. When updating, the configuration files do not get overwritten; instead, new *.conf.example files will be added. The generated files contain some defaults. However, it is advised to take some time to configure Sniffle, Snarl and Howl. Datasets origin By default, the FiFo UI, Cerberus, will list and download available datasets (VM images) from datasets.at. If you have a local mirror, you can point to it by changing the file /opt/local/fifo-cerberus/config/config.js svcadm enable epmd svcadm enable snarl svcadm enable sniffle svcadm enable howl svcs epmd snarl sniffle howl Initial administrative tasks The last step is to create an admin user and organisation; this can be done with one simple command: # snarl-admin init <realm> <org> <role> <user> <pass> snarl-admin init default MyOrg Users admin admin LeoFS Storage should be working before proceeding A working and fully functional two Zone LeoFS setup MUST be up and running before you proceed with the below step. If you have not previously set up your LeoFS Storage Zones, you should pause now at this point and proceed with the LeoFS Install Guide before continuing with the last step below. The guide can be found here: Once LeoFS is configured and up and running, the init-leofs command can be used from sniffle-admin to set up the required users, buckets and endpoints. You can use the xip.io service or your own domain name, if you set up your DNS with a wildcard: That's it. 
You can now log out of your FiFo Zone and back into the Global Zone and continue with installing the Chunter service. Now that you have FiFo installed and chunter running, you can start managing and creating virtual machines. We need to add some dataset servers: # Official and community images sniffle-admin datasets servers add # For FiFo custom images sniffle-admin datasets servers add # for free bsd images sniffle-admin datasets servers add You can log in to the web console and use the Configuration menu to set up the following: - Create an IP Range - Associate it with a Network - Create some Packages - Import some Datasets - Set up an SSH key for your user to be able to log in to the VMs At this point you should be able to create some VMs. The ability to monitor your cloud data in real time (cerberus metric graphs) or over time (historical data) is of fundamental importance to most cloud operators. FiFo has some ultra-performant, purpose-built services that have been designed specifically for this task, excelling both at efficiently storing huge volumes of data and at performing extremely fast queries that can scale to handle very large cloud installations. Although an optional service, we do encourage this to be set up for all FiFo installations to truly complete your cloud offering. First you need to set up the metric storage database called "DalmatinerDB". Then the "tachyon-meter" service is installed in each hypervisor's Global Zone and the aggregation service called "Tachyon" is installed within its own separate Zone. Note: The DalmatinerDB and the Tachyon aggregation service can be installed together on the same zone. A detailed setup is covered in the comprehensive setup guide below:
http://docs.project-fifo.net/docs/installing-fifo
2017-11-18T01:06:58
CC-MAIN-2017-47
1510934804125.49
[]
docs.project-fifo.net
JDBC Source Connector¶ The JDBC source/source/source, still be correctly detected and delivered when the system recovers. - Custom Query: The source. source connect-standalone or connect-distributed. For example: $ CLASSPATH=/usr/local/firebird/* ./bin/connect-distributed ./config/connect.
https://docs.confluent.io/current/connect/connect-jdbc/docs/source_connector.html
2017-11-18T01:04:27
CC-MAIN-2017-47
1510934804125.49
[]
docs.confluent.io
The ETC has not implemented any specific features for Solar System targets, but can be used to approximate reflected sunlight and thermal emission from them. Introduction The JWST ETC can be used to model the spectra of moving targets, but is limited to doing so for a single target brightness. For distant targets (those at least as far from the Sun as Jupiter) on nearly circular orbits, this isn't a major problem because their brightness is fairly constant during the period when the target is within the JWST field of regard. For more nearby targets the brightness can change by much more than 50%, so observers must account for those variations manually when creating ETC sources and scenes to represent the target. Reflected light can be approximated using the Phoenix stellar model G2V template spectrum, and thermal emission can be approximated using the blackbody template. The user must determine the correct normalizations to apply to those template spectra in order to accurately represent the emission from their target on a given date. Targets expected to have both reflected-light and thermal emission components within the wavelength range of interest can be specified as two sources that coincide in the ETC scene. Normalizing target spectra The emission from a target has to be normalized in a way that represents the physics controlling the flux density of the spectrum as received at JWST. These factors include: - Observing circumstances such as heliocentric and observatory-centric distances - Phase angle - Size and albedo - Thermal properties Observing circumstances can be retrieved from the JPL Horizons web service by entering the string "@JWST" (no quotes) in the observatory search field. Note: It is critical to include solar elongation constraints of 85° - 135° when using Horizons to generate target ephemerides for JWST observations. Point and extended sources For targets too small to be resolved by JWST, the spectrum can be modeled using the ETC point-source target type. Extended targets can also be specified in the ETC as elliptical shapes with brightness distributions that are flat, Gaussian, or Sérsic profiles (the last is typically used for galaxies). For observers interested in Jupiter, Saturn, Mars, and highly extended comets, capabilities of the web interface of the ETC limit the size of the scene to a few arc seconds across. This doesn't prevent estimates of SNR for a given observation, but does require observers to correctly specify the brightness of the source in the units the ETC currently supports (i.e., NOT surface brightness units). The basic procedure is: - Compute the spectrum of the target integrated over the disk or emission region. - Convert the integrated emission to surface brightness by dividing by the area of the target (e.g., in arcsec²). - Specify an extended source in the ETC small enough to fit within the small dimensions of the available scene (e.g., 1 arcsec across). - Normalize the target brightness to a level determined by multiplying the area of the ETC extended source by the surface brightness computed in step 2 above (a worked numerical sketch of this procedure appears at the end of this article). User supplied spectra The ETC allows users to upload their own spectra for sources. ASCII and FITS formats are supported, and the spectrum in either case consists of two vectors containing wavelength and flux density. Format and other requirements are described in the ETC documentation and help (see JWST ETC User Supplied Spectra). Example workbooks The ETC contains an example workbook with two scenes specified. 
- An asteroid modeled as a point source using the superposition of a reflected-light and a thermal component. - A comet modeled as a point-source nucleus and two extended sources representing the coma. Reflected-light and thermal emission components are included for both nucleus and coma. These workbooks are primarily focused on providing examples of how to construct an ETC scene useful for solar system observers. How to specify ETC calculations (which equate to observations in APT) is covered in detail in other workbooks that are specific to the instruments. Limitations The ETC does not currently have: - A method for using an albedo spectrum to modify the predicted reflected-light spectrum. - More realistic models for thermal emission, such as the standard thermal model (STM) or the near-Earth asteroid thermal model (NEATM). - A way to compute a target spectrum based on basic inputs such as the size of and distance to the target, and an albedo. - A way to allow users to specify a spectrum in surface brightness units (e.g., mJy / arcsec²). - A 1/r brightness distribution, such as is typically used for cometary comae. - A short-cut to use typical background values near the ecliptic plane. Instead, users must specify an RA, DEC corresponding to a position near the ecliptic plane. While adding such features has been discussed, there is currently no schedule for adding them to the ETC. Future improvements Template spectra - A template spectrum for the Sun, absolutely calibrated to represent flux density at 1 AU, and at a spectral resolution high enough to support modeling for the high-resolution gratings of NIRSpec and for the MIRI MRS, is under development. - Template spectra for the giant planets (disk-integrated) are also under development. - A community-based effort to create template spectra for a range of spectral classes and/or iconic examples of asteroids and TNOs will be explored at various community forums. One or more of these 'template' spectra may be implemented instead as a library of spectra users can share external to the ETC, and then upload, rather than residing within the ETC as true template spectra. As these materials are completed, observers can find additional information here, and should look for announcements on solar system community forums such as the DPS and PEN newsletters. Pandeia tools The engine driving ETC calculations, along with necessary throughput curves for imaging and spectral performance data, and a library of monochromatic PSF models, are available to the community as the Pandeia Python package, which can be installed from here (Pontoppidan et al., 2016). As with template spectra, a community-based effort to develop an interface to Pandeia that can serve the needs of the solar system science community will be discussed at various community forums. Related links JWST User Documentation Home ETC articles Moving target articles
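To make the four-step extended-source normalization described earlier concrete, here is a minimal Python sketch using made-up numbers. The target size, flux density and ETC source dimensions are hypothetical and only illustrate the arithmetic, not an actual ETC workflow.

# Illustrative arithmetic for the extended-source normalization steps above.
# All numbers are hypothetical; the ETC itself is driven through its web UI.
import math

# Step 1: disk-integrated flux density of the target at some wavelength (assumed value).
integrated_flux_mjy = 120.0          # mJy, integrated over the whole disk

# Step 2: convert to surface brightness by dividing by the target's apparent area.
target_diameter_arcsec = 8.0         # apparent diameter (assumed)
target_area_arcsec2 = math.pi * (target_diameter_arcsec / 2) ** 2
surface_brightness = integrated_flux_mjy / target_area_arcsec2   # mJy per arcsec^2

# Step 3: define a small ETC extended source that fits in the scene (e.g. 1 arcsec across).
etc_source_diameter_arcsec = 1.0
etc_source_area_arcsec2 = math.pi * (etc_source_diameter_arcsec / 2) ** 2

# Step 4: normalize the ETC source so it carries the correct surface brightness.
etc_source_flux_mjy = surface_brightness * etc_source_area_arcsec2
print(f"normalize the 1-arcsec ETC source to {etc_source_flux_mjy:.2f} mJy")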
https://jwst-docs.stsci.edu/display/JPP/JWST+Moving+Targets+in+ETC
2017-11-18T01:03:45
CC-MAIN-2017-47
1510934804125.49
[]
jwst-docs.stsci.edu
JWST user documentation is under development; current versions are preliminary and subject to revision. The JWST Exposure Time Calculator (ETC) is a pixel-based ETC paired with a modern graphical user interface. It supports all JWST observing modes: imaging, spectroscopy (slitted, slitless, and IFU), coronagraphy, and aperture masking interferometry. It has advanced features that go well beyond those in previous exposure time calculators; this includes algorithms that accurately model data acquisition and post-processing, as well as functionality for users to efficiently explore parameter space. The graphical user interface provides enhanced capabilities supporting multiple workflows. For example, users can create workbooks to manage related sets of calculations, can create complex astronomical scenes with multiple sources, and can compare the results of multiple calculations. Related pages: JWST Exposure Time Calculator Overview, JWST ETC Calculations Page Overview, JWST ETC Scenes and Sources Page Overview, JWST ETC Outputs Overview, JWST Backgrounds, JWST ETC Source Spectral Energy Distributions, JWST ETC Quick Start Guide, JWST ETC Creating a New Calculation, JWST ETC Defining a New Scene, JWST ETC Defining a New Source, JWST ETC Batch Expansions, JWST ETC Using the Sample Workbooks, JWST ETC Sharing Workbooks, JWST ETC User Supplied Spectra. Go to the on-line.
https://jwst-docs.stsci.edu/pages/diffpagesbyversion.action?pageId=20421767&selectedPageVersions=54&selectedPageVersions=53
2017-11-18T01:08:22
CC-MAIN-2017-47
1510934804125.49
[]
jwst-docs.stsci.edu
Hosts are checked by the Shinken daemon. Unless you have configured regularly scheduled host checks, Shinken will not perform checks of the hosts on a regular basis. It will, however, still perform on-demand checks of the host as needed for other parts of the monitoring logic. On-demand checks are made when a service associated with the host changes state, because Shinken needs to know whether the host has also changed state. Services that change state are often an indicator that the host may have also changed state. Shinken can also use cached host checks, allowing it to forgo executing a host check if it determines a relatively recent check result will do instead. More information on cached checks can be found here. You can define host execution dependencies that prevent Shinken from checking the status of a host depending on the state of one or more other hosts. More information on dependencies can be found here. Host checks are performed by plugins, which can return a state of OK, WARNING, UNKNOWN, or CRITICAL. When a host check returns a non-OK state, Shinken will attempt to see if the host is really DOWN or if it is UNREACHABLE. The distinction between DOWN and UNREACHABLE host states is important, as it allows admins to determine the root cause of network outages faster. The following table shows how Shinken makes a final state determination based on the state of the host's parent(s). A host's parents are defined in the parents directive in the host definition. More information on how Shinken distinguishes between DOWN and UNREACHABLE states can be found here. As you are probably well aware, hosts don't always stay in one state. Things break, patches get applied, and servers need to be rebooted. Shinken can detect when hosts start flapping, and can suppress notifications until flapping stops and the host's state stabilizes. More information on the flap detection logic can be found here.
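As an illustration of the parent-based state determination described above, the following Python sketch captures the usual Nagios-style reachability rule that Shinken follows: a host whose own check came back non-OK is DOWN if at least one of its parents is UP, and UNREACHABLE if every parent is DOWN or UNREACHABLE. Treat this as a simplified model rather than Shinken's actual code.

# Simplified model of the DOWN vs UNREACHABLE decision for a host whose own
# check returned a non-OK result. Not Shinken's real implementation, just the rule.
def final_host_state(parent_states):
    """parent_states: list of 'UP', 'DOWN' or 'UNREACHABLE' for the host's parents."""
    if not parent_states:
        # A host with no parents is directly reachable from the monitoring server.
        return "DOWN"
    if any(state == "UP" for state in parent_states):
        # At least one path to the host works, so the host itself is the problem.
        return "DOWN"
    # Every path to the host goes through a failed parent.
    return "UNREACHABLE"

print(final_host_state(["UP", "DOWN"]))    # DOWN
print(final_host_state(["DOWN", "DOWN"]))  # UNREACHABLE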
http://testdocshinken.readthedocs.io/en/latest/05_thebasics/hostchecks.html
2017-11-18T00:47:25
CC-MAIN-2017-47
1510934804125.49
[]
testdocshinken.readthedocs.io
Require single sign-on (SSO) logins If you prefer that your users only use SSO, you can enable Require SSO, which prevents users from logging in from an enterprise email domain, or accessing applications associated with your SSO organization in buddybuild. To require SSO logins, follow these steps: Click the Require SSO logins toggle. The Require SSO logins dialog is displayed. Click the Require SSO logins button. The SSO settings screen is displayed. That's it!
https://docs.buddybuild.com/quickstart/sso/require.html
2017-11-18T00:36:00
CC-MAIN-2017-47
1510934804125.49
[]
docs.buddybuild.com
An artifact task lets you search the library of binaries in different Artifactory repositories that you can deploy on a virtual machine. About this task When you include an artifact task in a stage, you can run a pipeline execution every time you develop new code that affects that artifact. The search output parameter from varied source repositories is always the same, which includes a repository name, a download URL, and size information, if available. Prerequisites Verify that a pipeline is available. See Create a Release Pipeline. Verify that an Artifactory server is registered. See the Installation and Configuration guide. Procedure - Click the Code Stream tab. - Select an existing pipeline to configure from the Pipeline tab. - Select Edit > Stages. - Select Add Task. - Select Artifact from the Category drop-down menu. - Select the VMware Repository Solution from the Provider drop-down menu. - Enter an artifact task name.
https://docs.vmware.com/en/vRealize-Code-Stream/1.2/com.vmware.vrcs11.using.doc/GUID-B4603C68-A3D5-4104-B8EC-29AB34CD42C6.html
2017-11-18T01:09:35
CC-MAIN-2017-47
1510934804125.49
[]
docs.vmware.com
Decision elements are simple Boolean functions that you use to create branches in workflows. Decision elements determine whether the input received matches the decision statement you set. As a function of this decision, the workflow continues its course along one of two possible paths. Before you begin Verify that you have a decision element linked to two other elements in the schema in the workflow editor before you define the decision. Procedure - Click the Edit icon ( ) of the decision element. A dialog box that lists the properties of the decision element appears. - Click the Decision tab in the dialog box. - Click the Not Set (NULL) link to select the source input parameter for this decision. A dialog box that lists all the attributes and input parameters defined in this workflow appears. - Select an input parameter from the list by double-clicking it. - If you did not define the source parameter to which to bind, create it by clicking the Create attribute/parameter in workflow link in the parameter selection dialog box. - Select a decision statement from the drop-down menu. The statements that the menu proposes are contextual, and differ according to the type of input parameter selected. - Add a value that you want the decision statement to match. Depending on the input type and the statement you select, you might see a Not Set (NULL) link in the value text box. Clicking this link gives you a predefined choice of values. Otherwise, for example for Strings, this is a text box in which you provide a value. Results You defined a statement for the decision element. When the decision element receives the input parameter, it compares the value of the input parameter to the value in the statement and determines whether the statement is true or false. What to do next You must set how the workflow handles exceptions.
https://docs.vmware.com/en/vRealize-Orchestrator/7.0/com.vmware.vrealize.orchestrator-dev.doc/GUID5D5778DD-3547-403E-9852-ED43F278C4B7.html
2017-11-18T01:09:56
CC-MAIN-2017-47
1510934804125.49
[]
docs.vmware.com
Tip: This is of course just for fun, unless you have a secure connection to your monitoring infrastructure. You should never open up your firewall to have in/out communications from a mobile phone directly to your monitoring systems. A serious infrastructure should use an SMS gateway in a DMZ that receives notifications from your monitoring system, either sourced as mails or as other message types. - Enable the "Unknown sources" option in your device's "Application" settings to allow application installation from a source other than the Android Market. - Go to and "flash" the barcode with an application like "barcode scanner", or just download it. Install this application. - Launch the sl4a application you just installed. - Click the menu button, click "view" and then select "interpreter". - Click the menu button again, then add and select "Python 2.6". Then click to install. - Don't close your sdcard explorer. - Connect your phone to a computer, and open the sdcard disk. - Copy your shinken library directory into SDCARD/com.googlecode.pythonforandroid/extras/python. If you do not have the SDCARD/com.googlecode.pythonforandroid/extras/python/shinken/__init__.py file, you put it in the wrong directory. - Copy the bin/shinken-reactionner file into the SDCARD/sl4a/scripts directory and rename it shinken-reactionner.py (so add the .py extension). - Unmount the phone from your computer and be sure to re-mount the sdcard on your phone (look at the notifications). - Launch the sl4a app. - Launch the shinken-reactionner.py app in the script list. - It should launch without errors. The phone(s) will be a new reactionner daemon. You will probably want to only send SMS with it, not mail commands or other notifications, so you will have to define this reactionner to manage only the SMS commands.
http://testdocshinken.readthedocs.io/en/latest/07_advanced/sms-with-android.html
2017-11-18T00:52:07
CC-MAIN-2017-47
1510934804125.49
[]
testdocshinken.readthedocs.io
So you’ve finally got Shinken up and running and you want to know how you can tweak it a bit. Tuning Shinken to increase performance can be necessary when you start monitoring a large number (> 10,000) of hosts and services. Here are the common optimization paths. Planning a large scale Shinken deployment starts before installing Shinken and monitoring agents. Scaling Shinken for large deployments Hardware performance shouldn’t be an issue unless: - you’re monitoring thousands of services - you are writing to a metric database such as RRDtool or Graphite. Disk access will be a very important factor. - you’re doing a lot of post-processing of performance data, etc. Your system configuration and your hardware setup are going to directly affect how your operating system performs, so they’ll affect how Shinken performs. The most common hardware optimization you can make is with your hard drives: use RAID, and do not update file attributes for access-time/write-time. Shinken needs quite a bit of memory which is pre-allocated by the Python processes.
http://testdocshinken.readthedocs.io/en/latest/12_tuning/tuning.html
2017-11-18T00:47:00
CC-MAIN-2017-47
1510934804125.49
[]
testdocshinken.readthedocs.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. DeleteAttributes is an idempotent operation; running it multiple times on the same item or attribute does not result in an error response. Because Amazon SimpleDB makes multiple copies of item data and uses an eventual consistency update model, performing a GetAttributes or Select operation (read) immediately after a DeleteAttributes or PutAttributes operation (write) might not return updated item data. Namespace: Amazon.SimpleDB Assembly: AWSSDK.dll Version: (assembly version) Container for the necessary parameters to execute the DeleteAttributes service method. .NET Framework: Supported in: 4.5, 4.0, 3.5
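This page documents the .NET SDK; purely as an illustration of the idempotency and eventual-consistency behaviour described above, here is a hedged Python sketch using boto3's SimpleDB ("sdb") client. The domain, item and attribute names are hypothetical, and the calls should be checked against the boto3 documentation for your SDK version.

# Sketch: DeleteAttributes is idempotent, and an immediate read may not yet
# reflect the delete unless a consistent read is requested.
# Names below are hypothetical; this is not the .NET SDK documented on this page.
import boto3

sdb = boto3.client("sdb", region_name="us-east-1")

# Deleting the same attribute twice does not raise an error (idempotent).
for _ in range(2):
    sdb.delete_attributes(
        DomainName="example-domain",
        ItemName="item-1",
        Attributes=[{"Name": "color", "Value": "blue"}],
    )

# An eventually consistent read right after the write may still return stale data;
# request a consistent read to see the result of the delete immediately.
resp = sdb.get_attributes(
    DomainName="example-domain",
    ItemName="item-1",
    ConsistentRead=True,
)
print(resp.get("Attributes", []))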
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MSimpleDBISimpleDBDeleteAttributesDeleteAttributesRequestNET45.html
2017-11-18T01:15:31
CC-MAIN-2017-47
1510934804125.49
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Set the website configuration for a bucket. Namespace: Amazon.S3.Model Assembly: AWSSDK.dll Version: (assembly version) The PutBucketWebsiteRequest type exposes the following members .NET Framework: Supported in: 4.5, 4.0, 3.5 .NET for Windows Store apps: Supported in: Windows 8.1, Windows 8 .NET for Windows Phone: Supported in: Windows Phone 8.1, Windows Phone 8
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TS3PutBucketWebsiteRequestNET45.html
2017-11-18T01:14:57
CC-MAIN-2017-47
1510934804125.49
[]
docs.aws.amazon.com
To prepare for provisioning vCloud Air and vCloud Director machines by using vRealize Automation, you must configure the organization virtual data center with templates and customization objects. To provision vCloud Air and vCloud Director resources using vRealize Automation, the organization requires a template to clone from that consists of one or more machine resources. Templates that are to be shared across organizations must be public. Only reserved templates are available to vRealize Automation as a cloning source. When you create a blueprint by cloning from a template, that template's unique identifier becomes associated with the blueprint. When the blueprint is published to the vRealize Automation catalog and used in the provisioning and data collection processes, the associated template is recognized. If you delete the template in vCloud Air or vCloud Director, subsequent vRealize Automation provisioning and data collection fails because the associated template no longer exists. Instead of deleting and recreating a template, for example to upload an updated version, replace the template using the vCloud Air or vCloud Director template replacement process. Using vCloud Air or vCloud Director to replace the template, rather than deleting and recreating the template, keeps the template's unique ID intact and allows provisioning and data collection to continue functioning. The following overview illustrates the steps you need to perform before you use vRealize Automation to create endpoints and define reservations and blueprints. For more information about these administrative tasks, see vCloud Air and vCloud Director product documentation. In vCloud Air or vCloud Director, create a template for cloning and add it to the organization catalog. In vCloud Air or vCloud Director, use the template to specify custom settings such as passwords, domain, and scripts for the guest operating system on each machine. You can use vRealize Automation to override some of these settings. Customization can vary depending on the guest operating system of the resource. In vCloud Air or vCloud Director, configure the catalog to be shared with everyone in the organization. In vCloud Air or vCloud Director, configure account administrator access to applicable organizations to allow all users and groups in the organization to have access to the catalog. Without this sharing designation, the catalog templates are not visible to endpoint or blueprint architects in vRealize Automation. Gather the following information so that you can include it in blueprints: Name of the vCloud Air or vCloud Director template. Amount of total storage specified for the template.
https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-2FDABE02-7671-4862-A818-9A1FF69F8143.html
2017-11-18T01:03:28
CC-MAIN-2017-47
1510934804125.49
[]
docs.vmware.com
By default, when editing item records, the library code is displayed in front of the shelving location in the Shelving Location field. You may reverse the order by going to Admin → Workstation Administration → Copy Editor: Copy Location Name First. Simply click it to display the copy location name first. The setting is saved on the workstation.
http://docs.evergreen-ils.org/2.10/_workstation_administration.html
2017-11-18T00:55:30
CC-MAIN-2017-47
1510934804125.49
[]
docs.evergreen-ils.org
Max Window. Showing the Max Window: Choose Max Window from the Window menu. To close the Max Window, click on the window's close button. The Max application keeps track of whether or not the Max Window is visible when you quit the program, and will hide or show the window when you relaunch Max based on its state when you last used Max. About the Max window: The Max Window toolbar includes several buttons used for common tasks. The Clear All button clears all messages currently displayed in the Max window. The Order By Time button lets you see messages to the Max application in the time order they were received. The Inspector button lets you open the Inspector for any Max object that has a message listed in the Max Window. The Show Object button is used to locate the object associated with a message in the Max Window. Rows of the Max window are color coded: status information or things you put there with the print object are in gray/white; warnings are in yellow; error messages are in red; internal errors (bugs in Max itself) are in blue. Columns of the Max window are divided into Object and Message: The Object column displays the Max object associated with any error messages that are generated, either while you're patching or while a patch is running. The Message column displays the error messages. Click on the column heading to sort by object or message. Click on the clock in the toolbar to sort by the order the message was posted. This is the default order and the one you'll want to use most often. Finding the Object that generated an Error Message If a row of the Max window displays the name of an object, click on the row to select it. Click on the Inspector button in the Max Window toolbar to open the Inspector for the object. Seeing the complete text of a message in the Max window If the Clue window isn't already visible, choose Clue Window from the Window menu to show it. Move the mouse over a row of the Max window. The entire contents of the row is displayed in the Clue window. You can find a complete listing of Max, MSP, and Jitter error messages here. Clearing the text in the Max window Click on the Clear All button in the Max Window toolbar.
https://docs.cycling74.com/max5/vignettes/core/max_window.html
2015-08-28T05:28:18
CC-MAIN-2015-35
1440644060633.7
[]
docs.cycling74.com
UI Guidelines Displaying information on a screen temporarily You can display information on a screen temporarily using a transparent overlay or an opaque overlay. For example, when users view an image, you can display a description of the image along the top or bottom of the screen. You can define the amount of time that the information appears on a screen. Best practice: Displaying information on a screen temporarily
http://docs.blackberry.com/en/developers/deliverables/28627/Displaying_info_on_a_screen_temporarily_6_1_1490114_11.jsp
2015-08-28T05:49:04
CC-MAIN-2015-35
1440644060633.7
[]
docs.blackberry.com
JACK and PulseAudio can now be used on the same computer. When the jack server starts, it automatically takes control of your audio hardware from PulseAudio. When the jack server stops, it automatically returns control of your audio hardware to PulseAudio. There is no longer a benefit to removing PulseAudio. jack2 is another example of the behind-the-scenes improvements that are a part of Fedora 14.
https://docs.fedoraproject.org/it-IT/Fedora/14/html/Release_Notes/sect-Release_Notes-Changes_for_Specific_Audiences.html
2015-08-28T06:58:51
CC-MAIN-2015-35
1440644060633.7
[]
docs.fedoraproject.org
Getting Started Guide Creating your first application To verify that you have correctly set up your development BlackBerry® Enterprise Server and your development computer, you can set up a project in Eclipse® and run the code sample Creating your first application.
http://docs.blackberry.com/es-es/developers/deliverables/25822/Creating_your_first_application_1430275_11.jsp
2015-08-28T05:53:59
CC-MAIN-2015-35
1440644060633.7
[]
docs.blackberry.com
Administration Guide Local Navigation - Overview: BlackBerry Enterprise Server Express - Express - Turn off support for rich text formatting in email messages using an IT policy rule - Configuring IBM Lotus Notes links on devices - Configure the BlackBerry Enterprise Server Express to support IBM Lotus Notes links to different IBM Lotus Domino domains - Updating the map for IBM Lotus Domino server names and host names - Change how often the BlackBerry Messaging Agent updates the map for IBM Lotus Domino server names and host names - Turn off support for IBM Lotus Notes links - Synchronizing folders on the BlackBerry device - Control which published public contact folders a user can synchronize to a BlackBerry device - Control which personal contact subfolders a user can synchronize to a BlackBerry device - Control which personal mail folders a user can synchronize with a BlackBerry device - Specify public contact databases that users can access from their BlackBerry devices - Control which public contact databases a user can access - Automated notification messages - - IBM Lotus Domino > Administration Guide BlackBerry Enterprise Server Express for IBM Lotus Domino - 5.0.3 Attachment file formats that the BlackBerry Attachment Service supports Previous topic: Change how a BlackBerry Attachment Connector restores a lost connection to a BlackBerry Attachment Service Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/admin/deliverables/28313/Supported_attachment_formats_1141522_11.jsp
2015-08-28T05:39:03
CC-MAIN-2015-35
1440644060633.7
[]
docs.blackberry.com
Table of Contents This is a feature you can use to remote-control kdm. It's mostly intended for use by ksmserver and kdesktop from a running session, but other applications are possible as well. The sockets are UNIX® domain sockets which live in subdirectories of the directory specified by FifoDir=. The subdir is the key to addressing and security; the sockets all have the file name socket and file permissions rw-rw-rw- (0666). This is because some systems don't care for the file permission of the socket files.). Group ownership of the subdirs can be set via FifoGroup=, otherwise it's root. The file permissions of the subdirs are rwxr-x--- (0750). The fields of a command are separated by tabs (\t), the fields of a list are separated by spaces, literal spaces in list fields are denoted by \s. The command is terminated by a newline (\n). The same applies to replies. The reply on success is ok, possibly followed by the requested information. The reply on error is an errno-style word (e.g. perm, noent, etc.) followed by a longer explanation. Global commands: display( now| schedule) user password[session_arguments] login user at specified display. if nowis specified, a possibly running session is killed, otherwise the login is done after the session exits. session_arguments are printf-like escaped contents for .dmrc. Unlisted keys will default to previously saved values. - resume Force return from console mode, even if TTY logins are still active. - manage display[ display_class[ auth_name auth_data]] Start managing the named foreign display. display_class, if specified and non-empty, will be used for configuration matching; see Chapter 5, The Files kdm Uses for Configuration. auth_nameand auth_dataneed to be passed if the display requires X authorization. The format is the same as the 2nd and 3rd column of the xauth list output. - unmanage display Stop managing the named foreign display. Per-display commands: - lock The display is marked as locked. If the X-Server crashes in this state, no auto-relogin will be performed even if the option is on. - unlock Reverse the effect of lock, and, lock, suicide, login, resume, manage The respective command is supported - bootoptions The listbootoptions command and the =to shutdown are supported - shutdown <list> shutdown is supported and allowed for the listed users (a comma separated list.) * means all authenticated users. - nuke <list> Forced shutdown may be performed by the listed users. - nuke Forced shutdown may be performed by everybody - reserve <number> Reserve displays are configured, and number are available at this time - list [ all| alllocal] Return a list of running sessions. By default all active sessions are listed (this is useful for a shutdown warning). If allis specified, passive sessions are listed as well. If alllocalis specified, passive sessions are listed as well, but all incoming remote sessions are skipped (this is useful for a fast user switching agent). Each session entry is a comma separated tuple of: Display or TTY name VT name for local sessions, remote host name prefixed by @for remote TTY sessions, otherwise empty Logged in user's name, empty for passive sessions and outgoing remote sessions (local chooser mode) Session type for active local sessions, remote hostname for outgoing remote sessions, empty for passive sessions. A Flag field: *for the display belonging to the requesting socket. !for sessions that cannot be killed by the requesting socket. tfor TTY sessions. New fields may be added in the future. 
- reserve Start a reserve login screen. If nobody logs in within some time, (eg; :2). Permitted only on sockets of local displays and the global socket. - listbootoptions List available boot options. The return value contains these tokens: A list of boot options (as shown in kdm itself). The default boot option. The current boot option. The default and current option are zero-based indices into the list of boot options. If either one is unset or not determinable, it is -1. -is the time for which the shutdown is scheduled. If it starts with a plus-sign, the current time is added. Zero means immediately. endis the latest time at which the shutdown should be performed if active sessions are still running. If it starts with a plus-sign, the start time is added. -1and endare specified in seconds since the UNIX® epoch. trynowis a synonym for 0 0 cancel, forcenowfor 0 0 forceand schedulefor 0 -1. askattempts There are two ways of using the sockets: Connecting them directly. FifoDir is exported as $ DM_CONTROL; the name of per-display sockets can be derived from $ DISPLAY. By using the kdmctl command (e.g. from within a shell script). Try kdmctl -hto find out more. Here is an example bash script “reboot into FreeBSD”: if kdmctl | grep -q shutdown; then IFS=$'\t' set -- `kdmctl listbootoptions` if [ "$1" = ok ]; then fbsd=$(echo "$2" | tr ' ' '\n' | sed -ne 's,\\s, ,g;/freebsd/I{p;q}') if [ -n "$fbsd" ]; then kdmctl shutdown reboot "=$fbsd" ask > /dev/null else echo "FreeBSD boot unavailable." fi else echo "Boot options unavailable." fi else echo "Cannot reboot system." fi
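If you would rather talk to the control socket from a script without going through kdmctl, the same protocol works from any language that can open a UNIX domain socket. Below is a rough Python sketch; the directory layout under FifoDir= (here assumed to be /var/run/xdmctl with a dmctl-<display> subdirectory holding a file named socket) varies between systems, so check your kdm configuration before relying on it.

import os
import socket

# Per-display control socket, derived from $DM_CONTROL and $DISPLAY as described
# above; the dmctl-<display> subdirectory name is an assumption, check FifoDir=.
dm_control = os.environ.get("DM_CONTROL", "/var/run/xdmctl")
display = os.environ.get("DISPLAY", ":0").split(".")[0]
sock_path = os.path.join(dm_control, "dmctl-%s" % display, "socket")

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(sock_path)

# Command fields are separated by tabs and the command ends with a newline.
s.sendall(b"list\tall\n")
print(s.recv(65536).decode())   # e.g. "ok\t:0,vt7,jdoe,default,*"
s.close()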
https://docs.kde.org/stable4/en/kde-workspace/kdm/advanced-topics.html
2016-09-25T05:25:28
CC-MAIN-2016-40
1474738659865.46
[array(['/stable4/common/top-kde.jpg', None], dtype=object)]
docs.kde.org
Had. In 1996, Norvig gave a talk where he talked about how most of the patterns in GoF are invisible or grossly simplified in dynamic languages. But then goes on to talk about how at one point a sub-routine call was also considered a “design pattern”. Going to focus on patterns that show up slightly differently in functional languages. And he’s not going to talk about Monads, even though some of the patterns he’ll describe are monadic. Monads are useful for writing programs, but he doesn’t find them very useful for explaining them. Architectural Patterns: describing an entire system Design Patterns: describing a specific task/operation Idioms: low-level patterns specific to a programming language This is the pattern of a single function that takes a state and an event. Modeling your state this way is powerful – it allows you to do things like take the starting point and all inputs and reduce to the end state. Makes it a great pattern for testing systems. Allows you to make assertions about the state of the system over time. You also have a lot of flexibility about how you store the state: at one end of the spectrum, you only store inputs/events, not state, since it’s derived. One of the downsides is that every input/event in your system has to be a data structure. That’s sort of the point, but it can add complexity. One function takes state + input, and returns a sequence of events. Another applies that sequence of events to state using reduce. You need to decide if you’re going to allow recursive consequences. A problem with this is that you can’t just compose the consequences to get to the current state. One of the essences of functional programming: lazy sequences – map, mapcat, filter, etc – and reduce. This has a built-in assumption of ordered, linear processing, that you’re going to deal with things one at a time. Utilizes a reducer and a combiner function. The combiner provides a way to “roll up” one level to a level “up”. Doesn’t assume linear processing (hence the associative requirement). In some simple cases (addition, for example), the reducer and combiner may be the same function. A function takes an expander and some input, and calls expander with input (and after the first call, the result of the previous call), until the return value equals the input value. Because each step needs to take and return the same “shape” of data, the code can wind up being a little longer. But the result is very clear: you can easily see the steps that are being taken. And because you have to work with the same shape of data, the resulting pipeline is composable into other, larger pipelines Instead of composing a list of functions (steps), you use higher order functions that could do something before or after an individual step. Because each step can do things before and after, it can become difficult to reason about where something is happening. So you wrap the operation with something that returns a “token” – something that can cease the operation and get you back to your original state. The scheduled thread pool in Java works this way. The observer could take the old and new state, along with either the delta, the triggering event, or the container. Clojure protocols are an implementation of this. Another way to do this is by passing around a map of the functions. This feels functional, but it has some performance overhead: every invocation requires a map lookup.
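To make the first two ideas more concrete, here is a small Python sketch of a state/event reducer and an "apply until the output equals the input" pipeline. The event shapes and the example expander are invented for illustration and are not from the talk.

from functools import reduce

# Pattern 1: the whole system is a single function of (state, event) -> new state.
# Replaying all inputs with reduce() rebuilds the current state, which is what
# makes this style easy to test and audit.
def apply_event(state, event):
    kind = event["type"]
    if kind == "deposit":
        return {**state, "balance": state["balance"] + event["amount"]}
    if kind == "withdraw":
        return {**state, "balance": state["balance"] - event["amount"]}
    return state  # unknown events leave the state untouched

events = [{"type": "deposit", "amount": 100}, {"type": "withdraw", "amount": 30}]
print(reduce(apply_event, events, {"balance": 0}))  # {'balance': 70}

# Pattern 2: call an expander repeatedly until its output equals its input.
# Every step must take and return the same shape of data, so steps compose.
def until_fixpoint(expander, value):
    while True:
        new_value = expander(value)
        if new_value == value:
            return value
        value = new_value

print(until_fixpoint(lambda n: n // 2 if n % 2 == 0 else n, 48))  # 48 -> 24 -> 12 -> 6 -> 3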
https://strange-loop-2012-notes.readthedocs.io/en/latest/monday/functional-design-patterns.html
2016-09-25T05:24:33
CC-MAIN-2016-40
1474738659865.46
[]
strange-loop-2012-notes.readthedocs.io
Quick start¶ Whoosh is a library of classes and functions for indexing text and then searching the index. A quick introduction¶ The Index and Schema objects¶ whoosh.fields.ID - This type simply indexes (and optionally stores) the entire value of the field as a single unit (that is, it doesn’t break it up into individual words). This is useful for fields such as a file path, URL, date, category, etc. whoosh.fields.STORED - This field is stored with the document, but not indexed. This field type is not indexed and not searchable. This is useful for document information you want to display to the user in the search results. whoosh.fields.KEYWORD - This type is designed for space- or comma-separated keywords. This type is indexed and searchable (and optionally stored). To save space, it does not support phrase searching. whoosh.fields.TEXT - This type is for body text. It indexes (and optionally stores) the text and stores term positions to allow phrase searching. whoosh.fields.NUMERIC - This type is for numbers. You can store integers or floating point numbers. whoosh.fields.BOOLEAN - This type is for boolean (true/false) values. whoosh.fields.DATETIME - This type is for datetime objects. See Indexing and parsing dates/times for more information. whoosh.fields.NGRAM and whoosh.fields.NGRAMWORDS - These types break the field text or individual terms into N-grams. See Indexing and searching N-grams for more information. The IndexWriter object¶ - You don’t have to fill in a value for every field. Whoosh doesn’t care if you leave out a field from a document. - Indexed text fields must be passed a unicode value. Fields that are stored but not indexed (STORED field type) can be passed any pickle-able object. The Searcher object¶ - Sorting results by the value of an indexed field, instead of by relevance. - Highlighting the search terms in excerpts from the original documents. - Expanding the query terms based on the top few documents found. - Paginating the results (e.g. “Showing results 1-20, page 1 of 4”). See How to search for more information.
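As a concrete illustration of the Schema, IndexWriter and Searcher objects described above, here is a minimal sketch of the usual Whoosh workflow. The index directory name and the title/path/content fields are example values, not anything required by the library.

import os
from whoosh.index import create_in
from whoosh.fields import Schema, TEXT, ID
from whoosh.qparser import QueryParser

# A schema declares the fields of each document and how they are indexed.
schema = Schema(title=TEXT(stored=True), path=ID(stored=True), content=TEXT)

# Create the index in a directory on disk.
if not os.path.exists("indexdir"):
    os.mkdir("indexdir")
ix = create_in("indexdir", schema)

# Add documents through an IndexWriter; indexed text fields take unicode values.
writer = ix.writer()
writer.add_document(title=u"First document", path=u"/a",
                    content=u"This is the first document we've added!")
writer.add_document(title=u"Second document", path=u"/b",
                    content=u"The second one is even more interesting!")
writer.commit()

# Search with a Searcher and a query parser bound to the "content" field.
with ix.searcher() as searcher:
    query = QueryParser("content", ix.schema).parse("first")
    results = searcher.search(query)
    print(results[0])   # stored fields of the best hit (its title and path)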
http://whoosh.readthedocs.io/en/latest/quickstart.html
2016-09-25T05:21:18
CC-MAIN-2016-40
1474738659865.46
[]
whoosh.readthedocs.io
Description Compatibility Matrix Metrics Definitions Future Work Plenty !!! Waiting for your ideas as well! Change Log
http://docs.codehaus.org/pages/viewpage.action?pageId=230398691
2013-05-18T11:04:43
CC-MAIN-2013-20
1368696382261
[array(['/s/fr_FR/3278/15/_/images/icons/emoticons/warning.png', None], dtype=object) array(['/download/attachments/229741975/scm-stats-commits-per-user.png?version=1&modificationDate=1349171731612', None], dtype=object) array(['https://dl.dropbox.com/u/16516393/authors_activity.png', None], dtype=object) array(['/download/attachments/229741975/scm-stats-commits-clockhour.png?version=1&modificationDate=1347001896777', None], dtype=object) array(['https://dl.dropbox.com/u/16516393/widget_set_period.png', None], dtype=object) ]
docs.codehaus.org
Security Checklist/Site Recovery Your Turn... - If you discover a vulnerability in Joomla! core files, report it here.
http://docs.joomla.org/index.php?title=Security_Checklist/Site_Recovery&diff=76607&oldid=11259
2013-05-18T10:44:59
CC-MAIN-2013-20
1368696382261
[]
docs.joomla.org
A mutex object is a synchronization object whose state is set to signaled when it is not owned by any thread, and nonsignaled when it is owned. Its name comes from its usefulness in coordinating mutually-exclusive access to a shared resource as only one thread at a time can own a mutex object. Mutexes may be recursive in the sense that a thread can lock a mutex which it had already locked before (instead of dead locking the entire process in this situation by starting to wait on a mutex which will never be released while the thread is waiting) but using them is not recommended and they are not recursive by default. The reason for this is that recursive mutexes are not supported by all Unix flavours and, worse, they cannot be used with wxCondition. For example, when several threads use the data stored in the linked list, modifications to the list should only be allowed to one thread at a time because during a new node addition the list integrity is temporarily broken (this is also called program invariant). Example // this variable has an "s_" prefix because it is static: seeing an "s_" in // a multithreaded program is in general a good sign that you should use a // mutex (or a critical section) static wxMutex *s_mutexProtectingTheGlobalData; // we store some numbers in this global array which is presumably used by // several threads simultaneously wxArrayInt s_data; void MyThread::AddNewNode(int num) { // ensure that no other thread accesses the list s_mutexProtectingTheGlobalList->Lock(); s_data.Add(num); s_mutexProtectingTheGlobalList->Unlock(); } // return true the given number is greater than all array elements bool MyThread::IsGreater(int num) { // before using the list we must acquire the mutex wxMutexLocker lock(s_mutexProtectingTheGlobalData); size_t count = s_data.Count(); for ( size_t n = 0; n < count; n++ ) { if ( s_data[n] > num ) return false; } return true; } Notice how wxMutexLocker was used in the second function to ensure that the mutex is unlocked in any case: whether the function returns true or false (because the destructor of the local object lock is always called). Using this class instead of directly using wxMutex is, in general safer and is even more so if your program uses C++ exceptions. Constants enum wxMutexType { // normal mutex: try to always use this one wxMUTEX_DEFAULT, // recursive mutex: don't use these ones with wxCondition wxMUTEX_RECURSIVE };Derived from None. Include files <wx/thread.h> See also wxThread, wxCondition, wxMutexLocker, wxCriticalSection wxMutex::wxMutex wxMutex::~wxMutex wxMutex::Lock wxMutex::TryLock wxMutex::Unlock wxMutex(wxMutexType type = wxMUTEX_DEFAULT) Default constructor. ~wxMutex() Destroys the wxMutex object. wxMutexError Lock() Locks the mutex object. Return value One of: wxMutexError TryLock() Tries to lock the mutex object. If it can't, returns immediately with an error. Return value One of: wxMutexError Unlock() Unlocks the mutex object. Return value One of:
http://docs.wxwidgets.org/2.8/wx_wxmutex.html
2013-05-18T10:41:45
CC-MAIN-2013-20
1368696382261
[]
docs.wxwidgets.org
Help16:Extensions Plugin Manager Edit - Convert Old Usernames. - Require Policy phishing-resistant. - Require Policy multi-factor. - Require Policy multi-factor-physical. Highlighter (GeSHi) This plug-in displays formatted code in Articles based on the GeSHi highlighting engine. It has no options. Syntax usage: <pre xml:lang="php"> echo $test; </pre> Quick Tips - If you are using the TinyMCE 2.0 editor, you can control which options appear on the editor's toolbar by setting the parameters in the "Editor - TinyMCE 2.0" Plugin.
http://docs.joomla.org/index.php?title=Help16:Extensions_Plugin_Manager_Edit&oldid=30498
2013-05-18T10:15:43
CC-MAIN-2013-20
1368696382261
[array(['/images/9/90/Plugin.Tinymce_Output.png', 'Plugin.Tinymce Output.png'], dtype=object)]
docs.joomla.org
Help31:Menus Menu Item Finder Search Overview Used to display a page of search results from Smart Search, together with an optional form to allow refinement of the search criteria. Location. The menu that this menu item (choice) will be part of. The menus defined for the site will show in the list box.
http://docs.joomla.org/index.php?title=Help30:Menus_Menu_Item_Finder_Search&diff=79446&oldid=79159
2013-05-18T10:44:37
CC-MAIN-2013-20
1368696382261
[]
docs.joomla.org
. ssh2_exec (PECL ssh2 >= 0.9.0) ssh2_exec — Execute a command on a remote server Beschreibung resource ssh2_exec ( resource $session, string $command[, string $pty[, array $env[, int $width= 80 [, int $height= 25 [, int $width_height_type= SSH2_TERM_UNIT_CHARS ]]]]] ) Execute a command at the remote end and allocate a channel for it. Parameter-Liste session An SSH connection link identifier, obtained from a call to ssh2_connect(). command pty env envmay be passed as an associative array of name/value pairs to set in the target environment. width Width of the virtual terminal. height Height of the virtual terminal. width_height_type width_height_typeshould be one of SSH2_TERM_UNIT_CHARSor SSH2_TERM_UNIT_PIXELS. Rückgabewerte Returns a stream on successIm Fehlerfall wird FALSE zurückgegeben.. Beispiele Beispiel #1 Executing a command <?php $connection = ssh2_connect('shell.example.com', 22); ssh2_auth_password($connection, 'username', 'password'); $stream = ssh2_exec($connection, '/usr/local/bin/php -i'); ?> Siehe auch - ssh2_connect() - Connect to an SSH server - ssh2_shell() - Request an interactive shell - ssh2_tunnel() - Open a tunnel through a remote server christopher dot millward at gmail dot com ¶ 2 years ago jaimie at seoegghead dot com ¶ 4 years ago. Hope this helps! tabber dot watts at gmail dot com ¶ 7 years ago_connect(; } } } } ?> [EDIT BY danbrown AT php DOT net: Contains a bugfix supplied by (jschwepp AT gmail DOT com) on 17-FEB-2010 to fix a typo in a function name.] noreply at voicemeup dot com ¶ 3 months ago *** IMPORTANT *** If you are having issues getting STDERR on PHP 5.2.X, make sure to update to 5.2.17 and latest ssh2 extension from PECL. Jim ¶ 2 years ago. gwinans at gmail dot com ¶ 3 years ago ¶ 4 years ago The "pty" parameter is not documented. You should pass a pty emulation name ("vt102", "ansi", etc...) if you want to emulate a pty, or NULL if you don't. Passing false will convert false to a string, and will allocate a "" terminal. lko at netuse dot de ¶ 5 years ago if you are using exec function, and have problems with a output > 1000 lines you should use <?php stream_set_blocking($stream, true); while($line = fgets($stream)) { flush(); echo $line."<br />"; } ?> except <?php stream_set_blocking($stream, true); echo stream_get_contents($stream); ?> Betsy Gamrat ¶ 6 years ago It is also good to use register_shutdown_function to shred the keys after this runs. Jon-Eirik Pettersen ¶ 6 years ago ¶ 6 years ago If the ssh2_exec takes awhile to run, and you need a handshake, you can use a file. In this case, $flag names a handshake file that is written when the ssh script finishes. <?php ). gakksimian at yahoo dot com ¶ 7 years ago ¶ 7 years ago. <?php )){ // .......................................... } ?> col at pobox dot com ¶ 6 years ago as of 0.9 and above, if not allocating a pty, add FALSE to the call... old way: <?php ssh2_exec($connection, $command); ?> new way: <?php ssh2_exec($connection, $command, FALSE); ?>
http://docs.php.net/manual/de/function.ssh2-exec.php
2013-05-18T10:13:30
CC-MAIN-2013-20
1368696382261
[array(['/images/notes-add.gif', 'add a note'], dtype=object)]
docs.php.net
. sys.modules) and the module search path ( sys.path) are also separate. The new environment has no sys.argvvariable. It has new standard I/O stream file objects sys.stdin, sys.stdoutand sys.stderr(however these refer to the same underlying FILE structures in the C library). The return value points to the first thread state created in the new sub-interpreter. This thread state is made.. 'Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam'. sys.argvbased() .
http://docs.python.org/release/2.2.1/api/initialization.html
2013-05-18T10:30:51
CC-MAIN-2013-20
1368696382261
[]
docs.python.org
Delaunay tessellation in N dimensions. New in version 0.9. Notes The tessellation is computed using the Qhull library [Qhull]. Note Unless you pass in the Qhull option “QJ”, Qhull does not guarantee that each input point appears as a vertex in the Delaunay triangulation. Omitted points are listed in the coplanar attribute. References Examples >>> tri.simplices array([[3, 2, 0], [3, 1, 0]], dtype=int32) >>> tri.find_simplex(p) array([ 1, -1], dtype=int32) We can also compute barycentric coordinates in triangle 1 for these points: >>> b = tri.transform[1,:2].dot(p - tri.transform[1,2]) >>> np.c_[b, 1 - b.sum(axis=1)] array([[ 0.1 , 0.2 , 0.7 ], [ 1.27272727, 0.27272727, -0.54545455]]) The coordinates for the first point are all positive, meaning it is indeed inside the triangle. Attributes Methods
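As a compact, runnable illustration of the attributes and methods referred to above (simplices, find_simplex, and the transform used for barycentric coordinates), here is a small sketch; the input points and query points are arbitrary example values.

import numpy as np
from scipy.spatial import Delaunay

# Arbitrary example points; Qhull computes the triangulation.
points = np.array([[0.0, 0.0], [0.0, 1.1], [1.0, 0.0], [1.0, 1.0]])
tri = Delaunay(points)

# Indices of the input points that make up each simplex (triangle in 2-D).
print(tri.simplices)

# find_simplex returns the simplex containing each query point, or -1 if the
# point lies outside the convex hull.
queries = np.array([[0.25, 0.25], [1.5, 1.5]])
print(tri.find_simplex(queries))

# Barycentric coordinates of the first query point within its simplex; all
# non-negative coordinates confirm the point is inside that triangle.
s = int(tri.find_simplex(queries)[0])
p = queries[0]
b = tri.transform[s, :2].dot(p - tri.transform[s, 2])
print(np.append(b, 1 - b.sum()))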
http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Delaunay.html
2013-05-18T10:31:14
CC-MAIN-2013-20
1368696382261
[]
docs.scipy.org
a MySQL Applier installation with parallel apply enabled. The slave will apply transactions using 30 channels. shell> ./tools/tpm configure defaults \ --reset \ --install-directory=/opt/continuent \ --user=tungsten \ --mysql-allow-intensive-checks=true \ --profile-script=~/.bash_profile \ --start-and-report=true shell> ./tools/tpm configure alpha \ --master=sourcehost \ --members=localhost,sourcehost \ --datasource-type=mysql \ --replication-user=tungsten \ --replication-password=secret \ --svc-parallelization-type=disk \ --channels=10 shell> vi /etc/tungsten/tungsten.ini [defaults] install-directory=/opt/continuent user=tungsten mysql-allow-intensive-checks=true profile-script=~/.bash_profile start-and-report=true [alpha] master=sourcehost members=localhost,sourcehost datasource-type=mysql replication-user=tungsten replication-password=secret svc-parallelization-type=disk channels=10 Configuration group defaults The description of each of the options is shown below; click the icon to hide this detail: Click the icon to show a detailed description of each argument. For staging configurations, deletes all pre-existing configuration information before updating with the new configuration values. --install-directory=/opt/continuent install-directory=/opt/continuent Path to the directory where the active deployment will be installed. The configured directory will contain the software, THL and relay log information unless configured otherwise. System User --mysql-allow-intensive-checks=true mysql-allow-intensive-checks=true For MySQL installation, enables detailed checks on the supported data types within the MySQL database to confirm compatibility. This includes checking each table definition individually for any unsupported data types. --profile-script=~/.bash_profile profile-script=~/.bash_profile Append commands to include env.sh in this profile script Start the services and report out the status after configuration Configuration group alpha The description of each of the options is shown below; click the icon to hide this detail: Click the icon to show a detailed description of each argument. The hostname of the master (extractor) within the current service. If the current host does not match this specification, then the deployment will by default be configured as a master/extractor. --members=localhost,sourcehost members=localhost,sourcehost Hostnames for the dataservice members Database type. --svc-parallelization-type=disk svc-parallelization-type=disk Method for implementing parallel apply Number of replication channels to use for services. You can check the number of active channels on a slave by looking at the "channels" property once the replicator restarts. slave shell> trepctl -service alpha status | grep channels channels : 10 The channel count for a Master will ALWAYS be 1 because extraction is single-threaded: master shell> trepctl -service alpha status | grep channels channels : 1 Enabling parallel apply will dramatically increase the number of connections to the database server. Typically the calculation on a slave would be: Connections = Channel_Count x Service_Count x 2, so for a 4-way Composite Multimaster topology with 30 channels there would be 30 x 4 x 2 = 240 connections required for the replicator alone, not counting application traffic.
You may display the currently used number of connections in MySQL: mysql> SHOW STATUS LIKE 'max_used_connections';+----------------------+-------+ | Variable_name | Value | +----------------------+-------+ | Max_used_connections | 190 | +----------------------+-------+ 1 row in set (0.00 sec) Below are suggestions for how to change the maximum connections setting in MySQL both for the running instance as well as at startup: mysql> SET GLOBAL max_connections = 512;mysql> SHOW VARIABLES LIKE 'max_connections';+-----------------+-------+ | Variable_name | Value | +-----------------+-------+ | max_connections | 512 | +-----------------+-------+ 1 row in set (0.00 sec) shell> vi /etc/my.cnf#max_connections = 151 max_connections = 512
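If you want to sanity-check max_connections against the sizing rule quoted above, the arithmetic is trivial to script. The Python helper below simply encodes Channel_Count x Service_Count x 2; the application headroom figure is your own estimate and is not something Tungsten derives for you.

def replicator_connections(channels, services):
    # Connections = Channel_Count x Service_Count x 2, as quoted above.
    return channels * services * 2

def suggested_max_connections(channels, services, app_headroom):
    # app_headroom is your own estimate of application connections.
    return replicator_connections(channels, services) + app_headroom

# 4-way Composite Multimaster topology with 30 channels, as in the example above.
print(replicator_connections(30, 4))           # 240
print(suggested_max_connections(30, 4, 272))   # 512, matching the my.cnf example above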
https://docs.continuent.com/tungsten-replicator-5.3/deployment-parallel-enabling.html
2019-12-05T17:07:19
CC-MAIN-2019-51
1575540481281.1
[]
docs.continuent.com
Difference between revisions of "WebSocket DAT" Latest revision as of 16:31, 12 August 2019...
https://docs.derivative.ca/index.php?title=WebSocket_DAT&diff=cur&oldid=4381
2019-12-05T18:31:30
CC-MAIN-2019-51
1575540481281.1
[]
docs.derivative.ca
Test-ReplicationHealth Applies to: Exchange Server 2007 SP1, Exchange Server 2007 SP2, Exchange Server 2007 SP3 Use the Test-ReplicationHealth cmdlet to check all aspects of replication, cluster services, and storage group replication and replay status to provide a complete overview of the replication system. Syntax Test-ReplicationHealth [-ActiveDirectoryTimeout <Int32>] [-Confirm [<SwitchParameter>]] [-DomainController <Fqdn>] [-MonitoringContext <$true | $false>] [-TransientEventSuppressionWindow <UInt32>] [-WhatIf [<SwitchParameter>]] Detailed Description. Parameters Input Types Return Types Errors Example In this example, the Test-ReplicationHealth cmdlet is used without parameters to test the health of replication for a clustered mailbox server. Test-ReplicationHealth
https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2007/bb691314%28v%3Dexchg.80%29
2019-12-05T17:22:10
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Windows Deployment Tools Technical Reference Concepts Windows ADK Overview What's New in the Windows ADK for Windows 8.1 Volume Activation Management Tool (VAMT) Technical Reference Other Resources Application Compatibility Toolkit (ACT) Technical Reference User State Migration Tool (USMT) Technical Reference
https://docs.microsoft.com/en-us/previous-versions/windows/hh825039(v=win.10)?redirectedfrom=MSDN
2019-12-05T16:48:12
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
strongly. NOTE: If your Sensu Go license expires, event storage will automatically revert to etcd. See Revert to built-in datastore below. observe old events in the web UI or sensuctl output until the etcd datastore catches up with the current state of your monitored infrastructure. NOTE: If your Sensu Go license expires, event storage will automatically revert to etcd.
https://docs.sensuapp.org/sensu-go/5.15/guides/scale-event-storage/
2019-12-05T18:05:14
CC-MAIN-2019-51
1575540481281.1
[]
docs.sensuapp.org
Install Guide¶ Install stable releases of Zeroless with pip. $ pip install zeroless Installing from Github¶ The canonical repository for Zeroless is on GitHub. $ git clone [email protected]:zmqless/python-zeroless.git $ cd zeroless $ python setup.py develop The best reason to install from source is to help us develop Zeroless. See the Development section for more on that.
https://python-zeroless.readthedocs.io/en/latest/install.html
2019-12-05T17:48:22
CC-MAIN-2019-51
1575540481281.1
[]
python-zeroless.readthedocs.io
SNMP Trap Servers¶ You can easily add an SNMP Trap Server to the platform to capture SNMP Traps sent by your devices. Configuration¶ Remote Host: this is used if you want to limit your server to one particular IP address and block all others. Leave empty or set to 0.0.0.0 if you want your server to accept incoming Traps no matter where they come from. Please note that this is not a replacement for your firewall, which will still need to be configured accordingly. Port: the port you want your server to listen on. The default for SNMP Traps is 162. Note that only one server can listen on a specific port; the first one will succeed in listening and the others will fail. ResIOT handles SNMP Servers in no specific order. Usage¶ In your log, you will see if the server manages to start listening or if it fails and why. Once listening, you will get traps reported to your log via the event comm_trap. If you want to run a custom scene on trap arrival (using a smart scene), here's how to retrieve the SNMP data: resiot_comm_getparam("snmpversion") -- will print the version of the trap resiot_comm_getparam("snmpcommunity") -- will print the community value of the trap resiot_comm_getparam("snmpvariables") -- will print the variables sent along with their values. resiot_comm_getparam("snmpenterprise") -- will print the enterprise trap value resiot_comm_getparam("snmpgenerictrap") -- will print the generic trap value resiot_comm_getparam("snmpspecifictrap") -- will print the specific trap value resiot_comm_getparam("snmpagentaddress") -- will print the agent address value of the trap Because of the nature of the trap variables, we give access to a JSON document containing all of them at once. You can then parse it as you would any other JSON.
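To verify that a newly configured trap server is actually receiving data, you can fire a test trap at it from another machine. The sketch below uses the third-party pysnmp package and the generic coldStart trap; the target address, community string, and port are placeholders you must adapt, and SNMPv1 is only an assumption for the example.

# pip install pysnmp
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    NotificationType, ObjectIdentity, sendNotification,
)

errorIndication, errorStatus, errorIndex, varBinds = next(
    sendNotification(
        SnmpEngine(),
        CommunityData('public', mpModel=0),        # community string is a placeholder
        UdpTransportTarget(('192.0.2.10', 162)),   # your trap server address and port
        ContextData(),
        'trap',
        NotificationType(ObjectIdentity('1.3.6.1.6.3.1.1.5.1')),  # generic coldStart trap
    )
)

if errorIndication:
    print(errorIndication)
else:
    print('Trap sent; check the platform log for the comm_trap event.')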
http://docs.resiot.io/SNMPTrapServer/
2019-12-05T17:52:42
CC-MAIN-2019-51
1575540481281.1
[]
docs.resiot.io
2.2.5.8 X-HTTP-Method This header is a custom HTTP request header defined by this document. It is possible to instruct network intermediaries (proxies, firewalls, and so on) inspecting traffic at the application protocol layer (for example, HTTP) to block requests that contain certain HTTP verbs. In practice, GET and POST verbs are rarely blocked (traditional web pages rely heavily on these HTTP methods), while, for a variety of reasons (such as security vulnerabilities in prior protocols), other HTTP methods (PUT, DELETE, and so on) are at times blocked by intermediaries. Additionally, some existing HTTP libraries do not allow creation of requests using verbs other than GET or POST. Therefore, an alternative way of specifying request types which use verbs other than GET and POST is needed to ensure that this document works well in a wide range of environments. To address this need, the X-HTTP-Method header can be added to a POST request that signals that the server MUST process the request not as a POST, but as if the HTTP verb specified as the value of the header was used as the method on the HTTP request's request line, as specified in [RFC2616] section 5.1. This technique is often referred to as "verb tunneling". This header is valid only when on POST requests. A server MAY<57> support verb tunneling as defined in the preceding paragraph. If a server implementing this document does not support verb tunneling, it MUST ignore an X-HTTP-Method header, if present in a POST request, and treat the request as a standard POST request. This implies that a client of such a data service has to determine in advance (using server documentation and so on) if a given data service endpoint supports verb tunneling. A tunneled request sent to a service that does not support verb tunneling interprets the request as an insert request since POST requests map to an insert request, as specified in [RFC5023]. The syntax of the X-HTTP-Method is defined as follows: XHTTPMethod = "X-HTTP-Method: " ("PUT" / "MERGE" / "PATCH" / "DELETE") CRLF For example, the HTTP request in the following Delete Request Tunneled in a POST Request listing instructs the server to delete the EntityType instance identified by EntityKey value 5 in the Categories EntitySet instead of performing an insert operation. POST /Categories(5) HTTP/1.1 Host: server X-HTTP-Method: DELETE Listing: Delete Request Tunneled in a POST Request
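From the client side, verb tunneling is nothing more than an extra header on a POST. As a small illustration (using the placeholder host and entity set from the listing above, and assuming the Python requests library as the HTTP client), a tunneled delete could be issued like this:

import requests

# POST the request, but ask the service to process it as a DELETE.
response = requests.post(
    "http://server/Categories(5)",
    headers={"X-HTTP-Method": "DELETE"},
)

# A service that supports verb tunneling deletes the entity; one that does not
# must ignore the header and treat this as a plain POST (an insert request).
print(response.status_code)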
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-odata/bdbabfa6-8c4a-4741-85a9-8d93ffd66c41?redirectedfrom=MSDN
2019-12-05T18:03:10
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Invite a new Owner to my workspace Invite new Owner The quickest way to invite a new Owner is to add the person with the role Owner and hit the blue Invite button. The person will be notified with an invitation email from you. You’ll see a new member in the People section, with the Status of Invited and Role of Owner. When the member accepts your invitation, the status will switch from Invited to Active. Upgrade a member from User to Owner In the People section, search for the user you want to upgrade to Owner, click the Options (...) button, then Edit profile. Select the role Owner from the dialog, then hit the blue Save changes button. You’re done! Learn more here about Owner permissions.
https://docs.sametab.com/docs/troubleshoting/invite-a-new-owner-to-my-workspace/
2019-12-05T18:32:12
CC-MAIN-2019-51
1575540481281.1
[]
docs.sametab.com
Domain Creation Followup¶ There is a series of steps that domains go through before they are created. Order Steps¶ The following steps describe the process of registering a new domain name. These steps can be done very quickly, or may take some time, depending on the details of your particular order. Details for some steps are provided below. - Payment - Start the creation process - Wait for the registry - Provisioning with Gandi Services - Domain created Payment¶ The order will not be complete until we have received payment. This may take some time depending on your method of payment. You can check on the status of your order following these steps: - After logging in, select “Billing” from the left menu. - Select the organization you used when placing the order. - Select the “Orders” tab to see a list of all the orders made by this organization. - Select the order you wish to view. If your order has not yet been paid, you can cancel it online (for example, if you want to change the payment method). Provisioning with Gandi Services¶ During this step, we are in the process of compiling all the domain name information and sending it to the registry. If your order is stuck in this phase, it is likely due to one of the following: - Corporate extension: please wait 3 days following your order. These are handled manually in some cases, and so the processing time is longer. If, after 3 days, your order is still stuck, please contact customer support. - A Registry Error: our specialized Proactive team does their best to fix these errors within 1 day. If your order is still stuck for more than 1 day, please contact customer support.
https://docs.gandi.net/en/domain_names/register/followup.html
2019-12-05T17:04:38
CC-MAIN-2019-51
1575540481281.1
[]
docs.gandi.net
Hide Affirm for Certain SKUs (Shopify Plus only) Merchants using Shopify Plus can hide Affirm as a payment option when the customer's cart contains items with certain SKUs. - Go to the Shopify script editor - Click Create Script - Choose Payment Gateway for the script type - Choose Blank Template - Click Create Script - In the Title box, enter Affirm Hide Based on SKU as the script name - Click Code to open the Ruby source code console - Paste the following code into the console. Replace SKU-1234 with your SKUs and add as many as you need (this is a comma separated list). available_gateways = Input.payment_gateways cart = Input.cart SKUS_TO_HIDE = ["SKU-1234", "..."] cart.line_items.each do |item| item.variant.skus.each do |sku| if SKUS_TO_HIDE.include? sku available_gateways = available_gateways.delete_if do |payment_gateway| payment_gateway.name == "Affirm" end end end end Output.payment_gateways = available_gateways - Click Run Script - Click Save and Publish
https://docs.affirm.com/Integrate_Affirm/Platform_Integration/Shopify_Integration/Hide_Affirm_for_Certain_SKUs_(Shopify_Plus_only)
2019-08-17T13:58:30
CC-MAIN-2019-35
1566027313259.30
[]
docs.affirm.com
Translate Feed Items on the Spot Where: This change applies to Lightning communities accessed through Lightning Experience and Salesforce Classic in Essentials, Enterprise, Performance, Unlimited, and Developer editions. Why: A user can translate a feed item into the default language or select a different language. - To translate the feed item into the default language, click Translate with Google. For community members, the default language comes from the locale that’s set on the user’s profile. For guest users, it comes from the locale set in the user’s browser. - To select a language, click the menu icon. After translation, Translate with Google switches to View Original, so it’s easy to switch back to the original language. How: In Community Builder, go toand enter your Google Cloud Translation API key. After you validate and save your key, a Translate with Google menu appears on all feed items in your community.
https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_networks_translate.htm
2019-08-17T12:58:40
CC-MAIN-2019-35
1566027313259.30
[array(['release_notes/networks/images/rn_networks_translate.png', 'Translation settings on Settings|Languages tab'], dtype=object) array(['release_notes/networks/images/rn_networks_translate_menu.png', 'Translate with Google menu'], dtype=object) ]
docs.releasenotes.salesforce.com
Adding Harmony Binaries to the $PATH Environment Variable on GNU/Linux You can add the path to the Harmony binary files to the $PATH environment variable. This will allow you to run Harmony and its applications and utilities from a terminal by typing the name of the executable files, without having to type their full path. In a Terminal, navigate to the location where you previously extracted the Harmony package for installation—see Installing Harmony on GNU/Linux. $ cd ~/Downloads/name-of-package Run the installation script with the -e option. $ sudo ./install -e The Toon Boom Harmony license agreement prompt appears. - In the License Agreement dialog, take the time to carefully review the license agreement. Use the Up and Down keys to scroll through the text in the agreement and read it until the end. - Press Tab to switch to the AGREE and DISAGREE buttons. You can use the Left and Right keys to switch between selecting the AGREE or the DISAGREE button: - If you agree with the license agreement, select the AGREE button and press Enter. - If you disagree with the license agreement, select the DISAGREE button and press Enter. A dialog will prompt you to confirm that you want to add the path to the Harmony binaries to the $PATH environment variable. You can use the Left and Right arrow keys to select Yes or No. Select Yes and press Enter. The install script will add the path to the Harmony binaries to the $PATH environment variable. To verify that the change has been applied, open a new Terminal window and type the following command: $ echo $PATH The path to the Harmony bin folder should be included in the output, separated by other paths with a colon.
https://docs.toonboom.com/help/harmony-15/advanced/installation/basic/linux/add-to-path-linux.html
2019-08-17T12:54:29
CC-MAIN-2019-35
1566027313259.30
[array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Manage resource accounts in Microsoft Teams A resource account is also known as a disabled user object in Azure AD, and can be used to represent resources in general. In Exchange it might be used to represent conference rooms, for example, and allow them to have a phone number. A resource account can be homed in Microsoft 365 or on premises using Skype for Business Server 2019. In Microsoft Teams or Skype for Business Online, each Phone System call queue or auto attendant is required to have an associated resource account. Whether a resource account needs an assigned phone number will depend on the intended use of the associated call queue or auto attendant, as shown in the following diagram. You can also refer to the articles on call queues and auto attendants linked at the bottom of this article before assigning a phone number to a resource account. Note This article applies to both Microsoft Teams and Skype for Business Online. For resource accounts homed on Skype for Business Server 2019, see Configure resource accounts. Overview If your organization is already using at least one Phone System license, to assign a phone number to a Phone System call queue or auto attendant the process is: - Obtain a service number. - Obtain a free Phone System - Virtual User license or a paid Phone System license to use with the resource account or a Phone System license. - Create the resource account. An auto attendant or call queue is required to have an associated resource account. - Assign the Phone System or a Phone System - Virtual user license to the resource account. - Assign a service phone number to the resource account you just assigned licenses to. - Create a Phone System call queue or auto attendant - Link the resource account with a call queue or auto attendant. If the auto attendant or call queue is nested under a top level auto attendant, the associated resource account only needs a phone number if you want multiple points of entry into the structure of auto attendants and call queues. To redirect calls to people in your organization who are homed Online, they must have a Phone System license and be enabled for Enterprise Voice or have Office 365 Calling Plans. See Assign Microsoft Teams licenses. To enable them for Enterprise Voice, you can use Windows PowerShell. For example run: Set-CsUser -identity "Amos Marble" -EnterpriseVoiceEnabled $true Warning In order to avoid problems with the resource account, follow these steps in this order. If the Phone System call queue or auto attendant you're creating will be nested and won't need a phone number, the process is: - Create the resource account - Create a Phone System call queue or auto attendant - Associate the resource account with a Phone System call queue or auto attendant Create a resource account with a phone number A top-level auto attendant or call queue will require a phone number be linked to its auto attendant. To create a resource account that uses a phone number, the process is: Port or get a toll or toll-free service number. The number can't be assigned to any other voice services or resource accounts. Before you assign a phone number to a resource account, you need to get or port your existing toll or toll-free service numbers. After you get the toll or toll-free service phone numbers, they show up in Microsoft Teams admin center > Voice > Phone numbers, and the Number type will be listed as Service - Toll-Free. 
To get your service numbers, see Getting service phone numbers or if you want to transfer an existing service number, see Transfer phone numbers to Office 365. If you are assigning a phone number to a resource account you can now use the cost-free Phone System Virtual User license. This provides Phone System capabilities to phone numbers at the organizational level, and allows you to create auto attendant and call queue capabilities. Obtain a Phone System Virtual User license or a regular Phone System license. To get the Virtual User license, starting from the Microsoft 365 admin center, go to Billing > Purchase services > Add-on subscriptions and scroll to the end - you will see "Phone System - Virtual User" license. Select Buy now. There is a zero cost, but you still need to follow these steps to acquire the license. Create a new resource account. See Create a resource account in Microsoft Teams admin center or Create a resource account in Powershell Assign a Phone System - Virtual User license or Phone System License to the resource account. See Assign Microsoft Teams licenses and Assign licenses to one user. Assign the service number to the resource account. See Assign/Unassign phone numbers and services. Set up one of the following: Link the resource account to the auto attendant or call queue. See Assign/Unassign phone numbers and services Create a resource account without a phone number A nested auto attendant or call queue will require a resource account, but in many cases the corresponding resource account will not need a phone number and the licensing required to support a phone number. Creating a resource account that does not need a phone number would require performing the following tasks in the following order: - Create a new resource account. See Create a resource account in Microsoft Teams admin center or Create a resource account in Powershell - Set up one of the following: - Assign the resource account to the call queue or auto attendant. See Assign/Unassign phone numbers and services Create a resource account in Microsoft Teams admin center After you've bought a Phone System license, using Microsoft Teams admin center navigate to Org-wide settings > Resource accounts. To create a new resource account click + New account. In the pop-up, fill out the display name and user name for the resource account (the domain name should populate automatically) then click Save. Next, apply a license to the resource account in the O365 Admin center, as described in Assign licenses to users in Office 365 for business Edit resource account name You can edit the resource account display name using the Edit option. Click Save when you are done. Assign/Unassign phone numbers and services Once you've created the resource account and assigned the license, you can click on Assign/Unassign to assign a service number to the resource account, or assign the resource account to an auto attendant or call queue that already exists. Assigning a direct routing number can be done using Cmdlets only. If your call queue or auto attendant still needs to be created, you can link the resource account while you create it. Click Save when you are done. To assign a direct routing or hybrid number to a resource account you will need to use PowerShell, see the following section. Important If your resource account doesn't have a valid license, an internal check will cause a failure when you try to assign the phone number to the resource account. 
You won't be able to assign the number or associate the resource account with a call queue or auto attendant. Change an existing resource account to use a Virtual User license If you decide to switch the licenses on your existing resource account from a Phone system license to a Virtual User license, you'll need to acquire the free Virtual User license, then follow the linked steps in the Microsoft 365 Admin center to Move users to a different subscription. Warning Always remove a full Phone System License and assign the Virtual User license in the same license activity. If you remove the old license, save the account changes, add the new license, and then save the account settings again, the resource account may no longer function as expected. If this happens, we recommend you create a new resource account for the Virtual User license and remove the broken resource account. Create a resource account in Powershell Depending on whether your resource account is located online or on premises, you would need to connect to the appropriate Powershell prompt with Admin privileges. The following Powershell cmdlet examples presume the resource account is homed online using New-CsOnlineApplicationInstance to create a resource account that is homed online. For resource accounts homed on-premises in Skype For Business Server 2019 that can be used with Cloud Call Queues and Cloud Auto Attendants, see Configure Cloud Call Queues or Configure Cloud Auto Attendants. Hybrid implementations (numbers homed on Direct Routing) will use New-CsHybridApplicationEndpoint. The application ID's that you need to use while creating the application instances are: - Auto Attendant: ce933385-9390-45d1-9512-c8d228074e07 - Call Queue: 11cd3e2e-fccb-42ad-ad00-878b93575e07 Note If you want the call queue or auto attendant to be searchable by on-premise users, you should create your resource accounts on-premise, since online resource accounts are not synced down to Active Directory. - To create a resource account online for use with an auto attendant, use the following command. New-CsOnlineApplicationInstance -UserPrincipalName [email protected] -ApplicationId “ce933385-9390-45d1-9512-c8d228074e07” -DisplayName "Resource account 1" You will not be able to use the resource account until you apply a license to it. For how to apply a license to an account in the O365 admin center, see Assign licenses to users in Office 365 for business as well as Assign Skype for Business licenses. (Optional) Once the correct license is applied to the resource account you can set a phone number to the resource account as shown below. Not all resource accounts will require a phone number. If you did not apply a license to the resource account, the phone number assignment will fail. Set-CsOnlineVoiceApplicationInstance -Identity [email protected] -TelephoneNumber +14255550100 Get-CsOnlineTelephoneNumber -TelephoneNumber +14255550100 See Set-CsOnlineVoiceApplicationInstance for more details on this command. Note It's easiest to set the online phone number using the Microsoft Teams admin center, as described previously. 
To assign a direct routing or hybrid number to a resource account, use the following cmdlet: Set-CsOnlineApplicationInstance -Identity [email protected] -OnpremPhoneNumber +14250000000 Manage Resource account settings in Microsoft Teams admin center To manage Resource account settings in Microsoft Teams admin center, navigate to Org-wide settings > Resource accounts, select the resource account you need to change settings for, and then click on the Edit button. In the Edit resource account screen, you will be able to change these settings: - Display name for the account - Call queue or auto attendant that uses the account - Phone number assigned to the account When finished, click on Save. Delete a resource account Make sure you dissociate the telephone number from the resource account before deleting it, to avoid getting your service number stuck in pending mode. You can do that using the following cmdlet: Set-csonlinevoiceapplicationinstance -identity <Resource Account oid> -TelephoneNumber $null Once you do that, you can delete the resource account from the O365 admin portal, under the Users tab. Troubleshooting In case you do not see the phone number assigned to the resource account on the Teams Admin Center and you are unable to assign the number from there, please check the following: Get-MsolUser -UserPrincipalName "[email protected]" | fl objectID,department If the department attribute displays Skype for Business Application Endpoint, please run the cmdlet below: Set-MsolUser -ObjectId -Department "Microsoft Communication Application Instance" Note Refresh the Teams Admin center webpage after running the cmdlet, and you should be able to assign the number correctly. Related Information For implementations that are hybrid with Skype for Business Server: Plan Cloud auto attendants Configure on-prem resource accounts For implementations in Teams or Skype for Business Online: What are Cloud auto attendants? Set up a Cloud auto attendant Small business example - Set up an auto attendant Create a Cloud call queue New-CsHybridApplicationEndpoint New-CsOnlineApplicationInstance Phone System - Virtual User license
https://docs.microsoft.com/en-us/microsoftteams/manage-resource-accounts?wt.mc_id=MVP
2019-08-17T13:51:06
CC-MAIN-2019-35
1566027313259.30
[array(['media/resource-account.png', 'example of resource accounts and user licenses'], dtype=object) array(['media/r-a-master.png', 'Screenshot of the Resource accounts page'], dtype=object) array(['media/sfbcallout1.png', 'Icon of the number 1, referencing a callout in the previous screenshot'], dtype=object) array(['media/res-acct.png', 'Screenshot of the New resource account options'], dtype=object) array(['media/r-a-edit.png', 'Screenshot of the Edit resource account option'], dtype=object) array(['media/r-a-assign.png', 'Screenshot of the Assign/unassign options'], dtype=object)]
docs.microsoft.com
MoreGallery v1 User Guide This user guide is meant for the end-users of the MoreGallery add-on. For information about front-end implementation, please review the documentation. Please note that your designer or developer has the tools to customise MoreGallery to your exact needs. Because of that, not all information in this document may apply to your specific situation. MoreGallery is a special type of document that lets you manage image or photo galleries from an easy to use interface. It appears in the Resource tree with an icon representing an image. Creating a Gallery To create a new Gallery, we will first decide where it will sit within the hierarchy of the website. For example, you may want to put it under a "Galleries" or "Photos" menu item that automatically lists the different available galleries for your visitors. In that case, right click the resource or root of the website and choose Create > Create a Gallery Here. If you want it at the root of the website, you can also click the Gallery icon in the toolbar. In the new page, make sure the right template is selected (your designer or developer should have told you which one) and fill in the details as you would for other pages on the site. Be sure to at least add a title. Save the new document. You will now be able to add images to the gallery. Simply scroll to the bottom until you see the Gallery section, and the Upload button. Uploading Images Uploading images is really easy. Just click the Upload button, and choose the image from your computer that you want to add to the gallery. You can choose multiple images at the same time if you hold the Ctrl key. After choosing images, you will see them being added to the images overview. They are still being uploaded (if the images are really big, this can take some time), but when the spinner disappears and the bar turns green, it is done. Importing Images If the images you would like to add to a gallery have already been uploaded to the site, you can use the Import button instead of the Upload button. The file browser will then open in a window, allowing you to browse and select files on the system. Adding Video If enabled by your web development partner, you can also add videos to your Gallery. This works for videos hosted on YouTube and Vimeo. To add a video, click the Add Video button in the toolbar. In the window that pops up you can paste in the link to the video on YouTube or Vimeo. MoreGallery will automatically download the cover image from the video, as well as its name and description. Sorting Images To sort the images in a different order than the one they were uploaded in, you can drag them across the page. Click the image while holding down the mouse button, and move your mouse to where you want the image to appear. A black outline will show where the image will be placed if you release the mouse. Editing Image Information Each image has information associated with it. This includes at the very least a name, but may also contain a description, a URL (link) or other data that you can add. To edit this information, click the white bar below the image (the one with the name, filename and the modify icon). An edit pane and enlargement of the image will appear below it. When the edit pane is open, you can edit the information in the fields. When you are done changing the details, you can just close the edit pane with the close button; the changes are automatically saved. In this window you may also find the ability to tag images and to create crops or thumbnails of the image.
Publishing / Hiding and Removing images When hovering over an image in the main view, you can see a number of options. By clicking on the eye symbol it will hide the image from the gallery, allowing you to bring it back in view with another click. The trashcan icon will remove the image (and its relevant files) completely. Clicking anywhere else on the image will show you a large preview. Viewing large images To view a large version of an image in the Manager, click the small image preview. A modal window will pop up showing a large version of the image.
https://docs.modmore.com/en/MoreGallery/v1.x/User_Guide.html
2019-08-17T13:36:18
CC-MAIN-2019-35
1566027313259.30
[array(['https://assets.modmore.com/uploads/2016/03/2016-03-22-13.14.42-create-gallery.png', None], dtype=object) array(['https://modmore.com/assets/uploads/2013/mgupload3.gif', None], dtype=object) array(['https://modmore.com/assets/uploads/2013/recording2.gif', None], dtype=object) array(['https://modmore.com/assets/uploads/2013/editimage.gif', None], dtype=object) array(['images/user-guide_image-meta.png', 'Hovering over an image shows options to show or hide an image, as well as to remove it completely.'], dtype=object) array(['https://modmore.com/assets/uploads/2013/previewlarge3.gif', None], dtype=object) ]
docs.modmore.com
This page provides information on the Color Mapping rollout of the V-Ray Main render settings.
Overview
Color mapping (sometimes also called tone mapping) can be used to apply a transformation to the final image colors.
||V-Ray Main render settings|| > Color Mapping rollout
Parameters
Type – The type of transformation used. These are the possible types:
Linear Multiply – Simply multiplies the final image colors based on their brightness without applying any changes.
Exponential – Saturates the colors based on their brightness. This can be useful to prevent burn-outs in very bright areas (for example, around light sources, etc.). This mode clamps colors so that no value exceeds 255 (1.0 as a floating-point value).
HSV exponential – Very similar to the Exponential mode, but it also preserves the color hue and saturation, instead of washing out the color towards white.
Intensity exponential – Similar to the Exponential mode, but it preserves the ratio of the RGB color components and only affects the intensity of the colors.
Gamma correction – Applies a gamma curve to the colors. In this case, the Multiplier influences the colors before they are gamma-corrected. The Inverse gamma is the inverse of the gamma value (i.e. for gamma 2.2, the Inverse gamma must be 0.4545). This is a deprecated mode; do not use it. For more information, see Example: Linear Work Flow.
Intensity gamma – Applies a gamma curve to the intensity of the colors, instead of each channel (RGB) independently. This is a deprecated mode; do not use it.
Reinhard – A blend between exponential-style color mapping and linear mapping. If the Burn value is 1.0, the result is linear color mapping and if the Burn value is 0.0, the result is exponential-style mapping. The default settings for color mapping are such that V-Ray renders out the image in linear space (Reinhard color mapping with a Burn value of 1.0 produces a linear result).
Multiplier – Allows you to control the overall brightness by multiplying each RGB value with the value here.
Burn Value – Controls the Reinhard mapping. A value of 1 results in linear color mapping; a value of 0 results in exponential color mapping. Values between 1 and 0 blend the two color mapping modes.
Gamma – Allows you to control the gamma correction for the output image regardless of the color mapping mode.
Affect background – When disabled, color mapping does not affect colors belonging to the background.
Mode – The possible values for this option are:
Color mapping and gamma – Both color mapping and gamma are burned into the final image.
None – Neither color mapping nor gamma are burned into the final image. However, V-Ray proceeds with all its calculations as though color mapping and gamma are applied (e.g. the noise levels are corrected accordingly). This can be useful, for example, if you know that you apply some color correction to the image later on, but wish to keep the rendering itself in linear space for compositing purposes.
Color mapping only (no gamma) – Only color mapping is burned into the final image, but not the gamma correction. This is the default option. V-Ray still proceeds to sample the image as though both color mapping and gamma are applied, but only applies the color correction (Linear, Reinhard, etc.) to the final result.
Clamp output – When enabled, colors are clamped after color mapping. In some situations, this may be undesirable: for example, if you wish to anti-alias HDR parts of the image as well, turn clamping off.
Clamp level – Specifies the level at which color components are clamped if the Clamp output option is on.
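The descriptions above can be made concrete with a small numerical sketch. The Python snippet below is only a conceptual illustration of how a linear multiplier, an exponential-style curve, and output clamping behave differently on a bright (HDR) pixel; it is not V-Ray's internal implementation, and the exact curves V-Ray uses may differ.

import numpy as np

def linear_multiply(color, multiplier=1.0):
    # Scale colors by brightness; values above 1.0 are left to burn out.
    return color * multiplier

def exponential(color, multiplier=1.0):
    # Saturate colors based on brightness so no value exceeds 1.0.
    return 1.0 - np.exp(-color * multiplier)

def clamp_output(color, clamp_level=1.0):
    # Comparable in spirit to the Clamp output / Clamp level options.
    return np.minimum(color, clamp_level)

hdr_pixel = np.array([0.2, 1.5, 4.0])              # one very bright pixel
print(linear_multiply(hdr_pixel))                   # [0.2 1.5 4. ] - burns out
print(exponential(hdr_pixel))                       # roughly [0.18 0.78 0.98] - compressed
print(clamp_output(linear_multiply(hdr_pixel)))     # [0.2 1. 1. ] - clipped at 1.0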
Example: Color Mapping Modes
This example demonstrates the differences between the color mapping modes.
Linear color mapping
Exponential color mapping
HSV exponential color mapping
Example: Linear Work Flow
This example shows the same image rendered with 3 different settings for Gamma and Linear Workflow.
Gamma = 1; Mode = Color mapping only (no gamma correction)
Gamma = 2.2; Mode = Color mapping and gamma
Gamma = 2.2; Mode = Color mapping only (no gamma correction)
https://docs.chaosgroup.com/display/VRAY4MODO/Color+Mapping
2019-11-12T04:10:28
CC-MAIN-2019-47
1573496664567.4
[]
docs.chaosgroup.com
Set the Android SDK location in an environment variable in your ~/.bash_profile or ~/.bash_rc. You can do this by adding a line like export ANDROID_SDK=/Users/myuser/Library/Android/sdk.
Add platform-tools to your PATH in ~/.bash_profile or ~/.bash_rc by adding a line like export PATH=/Users/myuser/Library/Android/sdk/platform-tools:$PATH.
Make sure that you can run adb from your terminal.
To create and launch a virtual device, open Android Studio and go to Configure -> AVD Manager.
Multiple adb versions on your system can result in the error adb server version (xx) doesn't match this client (xx); killing...
To check the adb version on the system:
$ adb version
To check the adb version bundled with the Android SDK:
$ cd ~/Android/sdk/platform-tools
$ ./adb version
To use the same adb everywhere, copy adb from the Android SDK directory to the /usr/bin directory:
$ sudo cp ~/Android/sdk/platform-tools/adb /usr/bin
https://docs.expo.io/versions/latest/workflow/android-studio-emulator/
2019-11-12T04:13:36
CC-MAIN-2019-47
1573496664567.4
[]
docs.expo.io
Tracking your customers' subscriptions
Integrations
View these docs to set up subscription tracking with your billing system (i.e. Stripe, Recurly). To enable revenue and renewal tracking, navigate to the Subscribed Customers section in Account Settings.
Step 1 - select how MRR is tracked
The first question asks you to select how each customer's revenue is identified in your data. There are 2 options:
- via a customer trait (recommended) - Select this option if revenue is attached to customers as a trait when tracked with your analytics service - e.g. each customer has an mrr trait in Segment/Mixpanel.
- via an event (not recommended) - Select this option if you send an event when customers pay for your product - e.g. your analytics service sends a Plan purchased event with a price property when a customer subscribes to a paid plan.
When possible, we encourage using traits over events.
Step 2 - select data that identifies revenue
Depending on your choice in Step 1, you'll need to select the trait or event for your revenue data here:
- via a customer trait (recommended) - Select the trait that specifies each customer's revenue - e.g. the mrr trait.
- via an event (not recommended) - Select the event sent whenever a customer is charged for your product - e.g. the Plan purchased event. If you are using this option, you'll also need to select the property on the selected event that specifies the customer's revenue - e.g. the price property.
Step 3 - define the condition that determines paid customers from trials (optional)
Sometimes, you may send revenue data even if the customer isn't paying you yet. For example, let's say this is your setup:
For trials
- A customer can view your available pricing plans and choose one to start a 30-day trial on
- When a customer starts their trial with their selected plan, you send a Plan started event with 2 properties: planPrice and isTrial (set to true).
For trials converting to paid plans
- The trial can view your available pricing plans and select one to start paying for.
- When the customer submits their payment for their selected plan, you send a Plan started event with 3 properties: planPrice, isTrial (set to false), and renewsOn.
In this scenario, your revenue setup will use events, with Plan started specified as the revenue event and planPrice as the actual revenue. To keep trials from being counted as paying customers, the condition defined in this step would check the isTrial property - e.g. isTrial equals false.
Steps 4 & 5 - track the subscription period (optional)
Note: If a customer's renewal date passes without an update (e.g. we don't receive a new charge event), the customer will remain active in Vitally. In other words, they will not be marked as churned. We will, however, highlight these 'expired' renewal dates when viewing customers.
Step 6 - Monthly vs yearly subscriptions
This step simply allows you to specify whether revenue is always sent as MRR.
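As a concrete illustration of the trait-based option from Steps 1 and 2, the sketch below uses Segment's Python library to attach an mrr trait to a customer and, alternatively, to fire a charge event with a price property. The write key, user ID, extra plan trait, and exact property names are placeholders - adapt them to however your own tracking is set up.

import analytics

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder key

# Recommended: send revenue as a customer trait
analytics.identify("user_123", {
    "mrr": 99,         # this customer's monthly recurring revenue
    "plan": "growth",  # hypothetical extra trait
})

# Alternative: send revenue on an event
analytics.track("user_123", "Plan purchased", {
    "price": 99,
    "isTrial": False,
})

analytics.flush()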
https://docs.vitally.io/en/articles/25-tracking-your-customers-subscriptions
2019-11-12T02:50:33
CC-MAIN-2019-47
1573496664567.4
[array(['https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/VHk_A4Aq1JUEHwRnZEhf_VdZ-kzxjVQJBXJAvwwkEqU/Screen Shot 2018-06-18 at 12.07.28 PM-3Hc.png', 'https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/VHk_A4Aq1JUEHwRnZEhf_VdZ-kzxjVQJBXJAvwwkEqU/Screen Shot 2018-06-18 at 12.07.28 PM-3Hc.png https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/VHk_A4Aq1JUEHwRnZEhf_VdZ-kzxjVQJBXJAvwwkEqU/Screen Shot 2018-06-18 at 12.07.28 PM-3Hc.png'], dtype=object) array(['https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/LbxC5i9gqudxRnXjSS7BcJLWc0jz1ayoOnUVRAS8LeE/Screen Shot 2018-06-18 at 12.07.34 PM-d2k.png', 'https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/LbxC5i9gqudxRnXjSS7BcJLWc0jz1ayoOnUVRAS8LeE/Screen Shot 2018-06-18 at 12.07.34 PM-d2k.png https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/LbxC5i9gqudxRnXjSS7BcJLWc0jz1ayoOnUVRAS8LeE/Screen Shot 2018-06-18 at 12.07.34 PM-d2k.png'], dtype=object) array(['https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/bcKRMirbNnL1jMxBTeCHavxU7j-PmKarIhDLy11G-QQ/Screen Shot 2018-06-18 at 12.06.55 PM-6dA.png', 'https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/bcKRMirbNnL1jMxBTeCHavxU7j-PmKarIhDLy11G-QQ/Screen Shot 2018-06-18 at 12.06.55 PM-6dA.png https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/bcKRMirbNnL1jMxBTeCHavxU7j-PmKarIhDLy11G-QQ/Screen Shot 2018-06-18 at 12.06.55 PM-6dA.png'], dtype=object) array(['https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/dwOz2zA9aZ4LfZYwwpKnKeEJloVrimougFcNw76C0MA/Screen Shot 2018-06-18 at 12.11.58 PM-bqE.png', 'https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/dwOz2zA9aZ4LfZYwwpKnKeEJloVrimougFcNw76C0MA/Screen Shot 2018-06-18 at 12.11.58 PM-bqE.png https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/dwOz2zA9aZ4LfZYwwpKnKeEJloVrimougFcNw76C0MA/Screen Shot 2018-06-18 at 12.11.58 PM-bqE.png'], dtype=object) array(['https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/k5qMMaFItRq9PcoLyHwuYahMAgyqq8-1FZIdzz5xMjI/Screen Shot 2018-06-18 at 12.13.48 PM-gNw.png', 'https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/k5qMMaFItRq9PcoLyHwuYahMAgyqq8-1FZIdzz5xMjI/Screen Shot 2018-06-18 at 12.13.48 PM-gNw.png https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/k5qMMaFItRq9PcoLyHwuYahMAgyqq8-1FZIdzz5xMjI/Screen Shot 2018-06-18 at 12.13.48 PM-gNw.png'], dtype=object) array(['https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/jPKcH3hpSufPVv3G1h9Peajg0KpEsNCVQnhRaadqtsU/Screen Shot 2018-06-18 at 12.19.09 PM-TeM.png', 'https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/jPKcH3hpSufPVv3G1h9Peajg0KpEsNCVQnhRaadqtsU/Screen Shot 2018-06-18 at 12.19.09 PM-TeM.png https://cdn.elev.io/file/uploads/rsVNBydcI0oDOpH0-5LcIes9-tG1wQo8r1t7aqlWfxc/jPKcH3hpSufPVv3G1h9Peajg0KpEsNCVQnhRaadqtsU/Screen Shot 2018-06-18 at 12.19.09 PM-TeM.png'], dtype=object) ]
docs.vitally.io
The performance of the training process is sensitive to different constants called hyperparameters. DPP supports automatic hyperparameter optimization via the grid search method in order to evaluate performance under many different combinations of hyperparameter values.
Training with Hyperparameter Search
In order to train with hyperparameter search, you must replace the line model.begin_training() with a modified version containing the ranges of hyperparameters you would like to evaluate:
model.begin_training_with_hyperparameter_search(l2_reg_limits=[0.001, 0.005], lr_limits=[0.0001, 0.001], num_steps=4)
Here, you can see that we are searching over values for two hyperparameters: the L2 regularization coefficient (l2_reg_limits) and the learning rate (lr_limits). If you don't want to search over a particular hyperparameter, just set its limits to None and make sure you set it manually in your model (for example, with set_regularization_coefficient()). The values in brackets indicate the lowest and highest values to try, respectively. The range between the low and high values is divided into equal parts depending on the number of steps chosen. The parameter num_steps=4 means that the system will search over 4 values for each of the two hyperparameters, so in total 16 runs (4 x 4 combinations) will be executed. Please note that larger values for num_steps will increase the number of runs exponentially, which will increase the run time dramatically.
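For context, this call replaces the normal training call at the end of a model setup script. The sketch below shows roughly where it fits; the model class, image dimensions, batch size, and dataset loader are assumptions made for illustration and will differ in your own pipeline.

import deepplantphenomics as dpp

model = dpp.RegressionModel()            # assumed model type for this sketch
model.set_image_dimensions(128, 128, 3)  # illustrative values
model.set_batch_size(32)
model.set_maximum_training_epochs(25)

# Hyperparameters you are NOT searching over can still be fixed manually, e.g.:
# model.set_regularization_coefficient(0.004)

model.load_ippn_leaf_count_dataset_from_directory('./data')  # assumed loader

# Grid search over L2 regularization and learning rate (4 x 4 = 16 runs)
model.begin_training_with_hyperparameter_search(
    l2_reg_limits=[0.001, 0.005],
    lr_limits=[0.0001, 0.001],
    num_steps=4)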
https://deep-plant-phenomics.readthedocs.io/en/latest/Hyperparameter-Optimization/
2019-11-12T02:47:55
CC-MAIN-2019-47
1573496664567.4
[]
deep-plant-phenomics.readthedocs.io
This is by design: the template system is meant to express presentation, not program logic. Only the syntax listed below is supported by default (although you can add your own extensions to the template language as needed).
The most powerful – and thus the most complex – part of Django's template engine is template inheritance. Template inheritance allows you to build a base "skeleton" template that contains all the common elements of your site and defines blocks that child templates can override. Let's look at template inheritance by starting with an example. This template, which we'll call base.html, defines an HTML skeleton document that you might use for a simple two-column page:
<!DOCTYPE html>
<html lang="en">
<head>
    <link rel="stylesheet" href="style.css">
    <title>{% block title %}My amazing site{% endblock %}</title>
</head>
<body>
    <div id="sidebar">
        {% block sidebar %}
        <ul>
            <li><a href="/">Home</a></li>
            <li><a href="/blog/">Blog</a></li>
        </ul>
        {% endblock %}
    </div>
    <div id="content">
        {% block content %}{% endblock %}
    </div>
</body>
</html>
Data inserted using {{ block.super }} will not be automatically escaped (see the next section), since it was already escaped, if necessary, in the parent template. Variables created outside of a {% block %} using the template tag as syntax can't be used inside the block. For example, this template doesn't render anything:
{% trans "Title" as title %}
{% block content %}{{ title }}{% endblock %}
Auto-escaping can be undesirable when you are producing output that is not HTML – an email message, for instance. To control auto-escaping for a template, wrap the template (or a particular section of it) in the autoescape tag:
{% autoescape off %}
<h1>{% block title %}{% endblock %}</h1>
{% block content %}
{% endblock %}
{% endautoescape %}
https://django.readthedocs.io/en/latest/ref/templates/language.html
2019-11-12T03:32:24
CC-MAIN-2019-47
1573496664567.4
[]
django.readthedocs.io
WBEMTimeSpan::GetBSTR method
[The WBEMTimeSpan class is part of the WMI Provider Framework which is now considered in final state, and no further development, enhancements, or updates will be available for non-security related issues affecting these libraries. The MI APIs should be used for all new development.]
The GetBSTR method gets the time span as a BSTR in Date and Time format.
Syntax
BSTR GetBSTR() throw(CHeap_Exception);
Parameters
This method has no parameters.
Return Value
The time span is returned as a BSTR in Date and Time format.
Remarks
The calling method must call SysFreeString on the return value.
https://docs.microsoft.com/en-us/windows/win32/api/wbemtime/nf-wbemtime-wbemtimespan-getbstr?redirectedfrom=MSDN
2019-11-12T03:42:36
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
OSPF Area Configuration
To configure area-specific settings in OSPF, start in config-ospf mode and use the area <area-id> command to enter config-ospf-area mode.
tnsr(config-ospf)# area <area-id>
tnsr(config-ospf-area)#
config-ospf-area mode contains the following commands:
- authentication Enables authentication for this area. Communication from peers must contain the expected authentication information to be accepted, and outgoing packets will have authentication information added. When present on its own, the authentication mechanism used is simple passwords. Authentication passwords are configured in OSPF Interface Configuration mode using the authentication-key command.
- message-digest When present, enables MD5 HMAC authentication for this area. Much stronger authentication than simple passwords. The key is configured in OSPF Interface Configuration mode using the message-digest-key command.
- default-cost <cost> Sets the cost applied to default route summary LSA messages sent to stub areas.
- export-list <acl-name> Uses the given ACL to limit Type 3 summary LSA messages for intra-area paths that would otherwise be advertised. This behavior only applies if this router is the ABR for the area in question.
- filter-list (in|out) prefix-list <prefix-list-name> Similar to export-list and import-list but uses prefix lists instead of ACLs, and can work in either direction.
- import-list <acl-name> Similar to export-list, but for routes announced by other routers into this area.
- nssa [(no-summary|translate (always|candidate|never))] Configures this area as a Not-so-Stubby Area (NSSA), which does not contain external links but may contain static routes to non-OSPF destinations (see Area Types for more information on area types and behaviors).
- no-summary When present, the area will instead be considered an NSSA Totally Stub area (Area Types).
- translate (always|candidate|never) Configures NSSA-ABR translations, for converting between Type 5 and Type 7 LSAs.
- always Always translate messages.
- candidate Participate in NSSA-ABR candidate elections. Currently the default behavior.
- never Never translate messages.
- range <prefix> [cost <val>|not-advertise|substitute <sub-prefix>] Configures summarization of routes inside the given prefix. Instead of Type 1 (Router) and Type 2 (Network) LSAs, it creates Type 3 Summary LSAs.
- cost <val> Apply the specified cost to summarized routes for this prefix.
- not-advertise Disable advertisement for this prefix.
- substitute <sub-prefix> Instead of advertising the first prefix, advertise this prefix.
- shortcut (default|disable|enable) For use with abr-type shortcut (OSPF Server Configuration), this advertises the area as capable of supporting ABR shortcut behavior (draft-ietf-ospf-shortcut-abr-02).
- stub [no-summary] Configures this area as a Stub Area (Area Types).
- no-summary When present, the area will instead be considered a Totally Stub Area (Area Types).
- virtual-link <router-id> Configures a virtual link in this area between this router and the specified router. Both this router and the target router must be ABRs, and both must have a link to this (non-backbone) area. Additionally, the virtual link must be added on both ends. This command enters config-ospf-vlink mode which has a subset of commands available similar to OSPF Interface Configuration. The available commands are authentication-key, dead-interval, hello-interval, message-digest-key, retransmit-interval, and transmit-delay.
The usage of these commands is explained in OSPF Interface Configuration. The virtual link is used to exchange routing information directly between the routers involved, and can be used to deliver traffic via the peer if necessary. Such a relationship may be necessary to nudge traffic from an ABR with a single undesirable link to another ABR with a faster link to a common remote destination, when the path would otherwise be selected because it is shorter.
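As a hypothetical end-to-end example of the commands above, the following session configures area 10 as a stub area, sets the cost of the default route summary sent into it, and summarizes an internal prefix. The area ID, prefix, and cost values are invented for illustration:

tnsr(config-ospf)# area 10
tnsr(config-ospf-area)# stub
tnsr(config-ospf-area)# default-cost 20
tnsr(config-ospf-area)# range 10.2.0.0/16 cost 15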
https://docs.netgate.com/tnsr/en/latest/dynamicrouting/ospf/config-area.html
2019-11-12T04:20:35
CC-MAIN-2019-47
1573496664567.4
[]
docs.netgate.com
BlackPill F103C8 (128k)
blackpill_f103c8_128 is the ID for the board option in "platformio.ini" (Project Configuration File):
[env:blackpill_f103c8_128]
platform = ststm32
board = blackpill_f103c8_128
You can override default BlackPill F103C8 (128k) settings per build environment using the board_*** option, where *** is a JSON object path from the board manifest blackpill_f103c8_128.json. For example, board_build.mcu, board_build.f_cpu, etc.
[env:blackpill_f103c8_128]
platform = ststm32
board = blackpill_f103c8_128
; change microcontroller
board_build.mcu = stm32f103c8t6
; change MCU frequency
board_build.f_cpu = 72000000L
Uploading
BlackPill F103C8 (128k) supports the following uploading protocols: blackmagic, jlink, serial, stlink. The default protocol is stlink. You can change the upload protocol using the upload_protocol option, as shown in the example below.
BlackPill F103C8 (128k) does not have an on-board debug probe and IS NOT READY for debugging. You will need to use/buy one of the external probes listed below.
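For example, to upload with an external J-Link probe instead of the default ST-Link, the environment could be overridden as follows (a hypothetical override - pick whichever of the supported protocols matches your hardware):

[env:blackpill_f103c8_128]
platform = ststm32
board = blackpill_f103c8_128
upload_protocol = jlink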
https://docs.platformio.org/en/latest/boards/ststm32/blackpill_f103c8_128.html
2019-11-12T03:30:44
CC-MAIN-2019-47
1573496664567.4
[]
docs.platformio.org
Applies both force and torque to reduce both the linear and angular velocities to zero. The joint constantly tries to reduce both the Rigidbody2D.velocity and Rigidbody2D.angularVelocity to zero. Unlike contact friction, which requires two colliders to be in contact, force and torque here are applied continuously. You can control both the maximum force using maxForce and the maximum torque using maxTorque. Because you can use very high force or torque limits, you can essentially reduce an object's movement to almost zero. A typical usage for this joint might be to simulate top-down surface friction or to simulate stiff rotation of an object.
https://docs.unity3d.com/ScriptReference/FrictionJoint2D.html
2019-11-12T03:57:44
CC-MAIN-2019-47
1573496664567.4
[]
docs.unity3d.com
Creating an account
Creating an account in Barion
This article contains information about creating an account in the Barion system. To register, visit the Barion website and click the "Sign up now" button.
Individual, business or non-profit
You can register yourself as an individual, a business entity or a non-profit organization. Please make sure you have chosen the proper registration type (e.g. do not register as an individual if you have a company running your business). Failing to do so will result in difficulties validating yourself through our KYC process.
Different currencies
You can choose from the following available currencies upon registration:
- CZK (Czech crown)
- EUR (Euro)
- HUF (Hungarian forint)
- USD (U.S. dollar)
You must choose one currency to create your account with. You can manually add the other ones later on the "My currencies" page.
Anonymity and limits
When you create an account as an individual, you have the option to be completely anonymous in the system. In such a case, you only have to provide your e-mail address and a password. If you register as a business or non-profit organization, you must provide additional data related to your business. At the end of the registration process, your account will be in an unverified status. This means there are several limits applied to your account. These limits are the following:
- In one calendar year, the maximum sum of the topped up amount cannot be larger than HUF650,000 (roughly €2,100 or $2,250 - see actual conversion rates)
- Note: payments made in your shops through the Barion Smart Gateway count towards your top up amount!
- In one calendar year, the maximum sum of withdrawn money cannot be larger than HUF260,000 (roughly €840 or $900 - see actual conversion rates)
- In one calendar day, the maximum sum of e-money transfers must be lower than HUF3,600,000 (roughly €11,500 or $12,500 - see actual conversion rates)
- Bank transfers are inspected by the monitoring system and can be held back for several days
To disable these limits, you must undergo our KYC procedure to verify your account. Please note that if you verify your account, you cannot remain anonymous in the Barion system.
https://docs.barion.com/index.php?title=Creating_an_account&oldid=1445
2019-11-12T02:49:51
CC-MAIN-2019-47
1573496664567.4
[]
docs.barion.com
Changing Default Entity¶ In order to change the default item, use Set default option in the actions drop-down menu. NOTE: an entity cannot be deleted from an account-owned collection for as long as it is set as the default one. Only one entity at a time can be set as default in the context of a particular collection. Animation¶ If we take workflows as our case study, then a new default entry can be chosen as demonstrated in the animation below.
https://docs.exabyte.io/entities-general/actions/set-default/
2019-11-12T03:36:54
CC-MAIN-2019-47
1573496664567.4
[]
docs.exabyte.io
August 2016 Volume 31 Number 8 [Cutting Edge] Beyond CRUD: Commands, Events and Bus By Dino Esposito | August 2016 In recent installments of this column, I discussed what it takes to build a Historical create, read, update, delete (H-CRUD). An H-CRUD is a simple extension to classic CRUD where you use two conceptually distinct data stores to persist the current state of objects and all the events that happened during the lifetime of individual objects. If you simply limit your vision to the data store that contains the current state, then all is pretty much the same as with classic CRUD. You have your customer records, your invoices, orders and whatever else forms the data model for the business domain. The key thing that's going on here is that this summary data store isn't the primary data store you create, but is derived as a projection from the data store of events. In other words, the essence of building a historical CRUD is to save events as they happen and then infer the current state of the system for whatever UI you need to create. Designing your solution around business events is a relatively new approach that's gaining momentum, though there's a long way ahead for it to become the mainstream paradigm. Centering your design on events is beneficial because you never miss anything that happens in the system; you can reread and replay events at any time and build new projections on top of the same core data for, say, business intelligence purposes. Even more interesting, with events, as an architect you have the greatest chance to design the system around the business-specific ubiquitous language. Well beyond being a pillar of Domain-Driven Design (DDD), more pragmatically the ubiquitous language is a great help to understand the surrounding business domain and plan the most effective architectural diagram of cooperating parts and internal dynamics of tasks and workflows. The implementation of events you might have seen in my May (msdn.com/magazine/mt703431) and June 2016 (msdn.com/magazine/mt707524) columns was very simple and to some extent even simplistic. The main purpose, though, was showing that any CRUD could be turned into an H-CRUD with minimal effort and still gain some benefits from the introduction of business events. The H-CRUD approach has some obvious overlapping with popular acronyms and keywords of today such as CQRS and Event Sourcing. In this column, I'll take the idea of H-CRUD much further to make it merge with the core idea of Event Sourcing. You'll see how H-CRUD can turn into an implementation made of commands, buses and events that at first might look like an overly complex way to do basic reads and writes to a database. One Event, Many Aggregates In my opinion, one of the reasons software is sometimes hard to write on time and on budget is the lack of attention to the business language spoken by the domain expert. Most of the time, acknowledging requirements means mapping understood requirements to some sort of relational data model. The business logic is then architected to tunnel data between persistence and presentation, making any necessary adjustments along the way. While imperfect, this pattern worked for a long time and the number of cases where monumental levels of complexity made it impractical were numerically irrelevant and, anyway, brought to the formulation of DDD, which is still the most effective way to tackle any software project today.
Events are beneficial here because they force a different form of analysis of the domain, much more task-oriented and without the urgency of working out the perfect relational model in which to save data. When you look at events, though, cardinality is key. In H-CRUD examples I discussed in past columns, I made an assumption that could be quite dangerous if let go without further considerations and explanation. In my examples, I used a one-to-one event-to-aggregate association. In fact, I used the unique identifier of the aggregate being persisted as the foreign key to link events. To go with the example of the article, whenever a room was booked, the system logs a booking-created event that refers to a given booking ID. To retrieve all events for an aggregate (that is, the booking) a query on the events data store for the specified booking ID is sufficient to get all information. It definitely works, but it’s a rather simple scenario. The danger is that when aspects of a simple scenario become a common practice, you typically move from a simple solution to a simplistic solution. And this isn’t exactly a good thing. Aggregates and Objects The real cardinality of the event/aggregate association is written in the ubiquitous language of the business domain. At any rate, a one-to-many association is much more likely to happen than a simpler one-to-one association. Concretely, a one-to-many association between events and aggregates means that an event may sometimes be pertinent to multiple aggregates and that more than one aggregate may be interested in processing that event and may have its state altered because of that event. As an example, imagine a scenario in which an invoice is registered in the system as a cost of an ongoing job order. This means that in your domain model, you probably have two aggregates—invoice and job order. The event invoice registered captures the interest of the invoice aggregate because a new invoice is entered into the system, but it might also capture the attention of the JobOrder aggregate if the invoice refers to some activity pertinent to the order. Clearly, whether the invoice relates to a job order or not can be determined only after a full understanding of the business domain. There might be domain models (and applications) in which an invoice may stand on its own and domain models (and applications) in which an invoice might be registered in the accounting of a job order and subsequently alter the current balance. However, getting the point that events may relate to many aggregates completely changes the architecture of the solution and even the landscape of viable technologies. Dispatching Events Breaks Up Complexity At the foundation of CRUD and H-CRUD lies the substantial constraint that events are bound to a single aggregate. When multiple aggregates are touched by a business event, you write business logic code to ensure that the state is altered and tracked as appropriate. When the number of aggregates and events exceeds a critical threshold, the complexity of the business logic code might become hard and impractical to handle and evolve. In this context, the CQRS pattern represents a first step in the right direction as it basically suggests you reason separately on actions that “just read” or “just alter” the current state of the system. Event Sourcing is another popular pattern that suggests you log whatever happens in the system as an event. 
The entire state of the system is tracked and the actual state of aggregates in the system is built as a projection of the events. Put another way, you map the content of events to other properties that altogether form the state of objects usable in the software. Event Sourcing is built around a framework that knows how to save and retrieve events. An Event Sourcing mechanism is append-only, supports replay of streams of events and knows how to save related data that might have radically different layouts. Event store frameworks such as EventStore (bit.ly/1UPxEUP) and NEventStore (bit.ly/1UdHcfz) abstract away the real persistence framework and offer a super-API to deal in code directly with events. In essence, you see streams of events that are somewhat related and the point of attraction for those events is an aggregate. This works just fine. However, when an event has impact on multiple aggregates, you should find a way to give each aggregate the ability to track down all of its events of interest. In addition, you should manage to build a software infrastructure that, beyond the mere point of events persistence, allows all standing aggregates to be informed of events of interest. To achieve the goals of proper dispatching of events to aggregates and proper event persistence, H-CRUD is not enough. Both the pattern behind the business logic and the technology used for persisting event-related data must be revisited.
Defining the Aggregate
The concept of an aggregate comes from DDD and in brief it refers to a cluster of domain objects grouped together to match transactional consistency. Transactional consistency simply means that whatever is comprised within an aggregate is guaranteed to be consistent and up-to-date at the end of a business action. The following code snippet presents an interface that summarizes the main aspects of just any aggregate class. There might be more, but I dare say this is the absolute minimum:
public interface IAggregate
{
  Guid ID { get; }
  bool HasPendingChanges { get; }
  IList<DomainEvent> OccurredEvents { get; set; }
  IEnumerable<DomainEvent> GetUncommittedEvents();
}
At any time, the aggregate contains the list of occurred events and can distinguish between those committed and those uncommitted that result in pending changes. A base class to implement the IAggregate interface will have a non-public member to set the ID and implement the list of committed and uncommitted events. Furthermore, a base Aggregate class will also have some RaiseEvent method used to add an event to the internal list of uncommitted events. The interesting thing is how events are internally used to alter the state of an aggregate. Suppose you have a Customer aggregate and want to update the public name of the customer. In a CRUD scenario, it will simply be an assignment like this:
customer.DisplayName = "new value";
With events, it will be a more sophisticated route:
public void Handle(ChangeCustomerNameCommand command)
{
  var customer = _customerRepository.GetById(command.CompanyId);
  customer.ChangeName(command.DisplayName);
  _customerRepository.Save(customer);
}
Let's skip for a moment the Handle method and who runs it and focus on the implementation. At first, it might seem that ChangeName is a mere wrapper for the CRUD-style code examined earlier.
Well, not exactly:
public void ChangeName(string newDisplayName)
{
  var evt = new CustomerNameChangedEvent(this.Id, newDisplayName);
  RaiseEvent(evt);
}
The RaiseEvent method defined on the Aggregate base class will just append the event in the internal list of uncommitted events. Uncommitted events are finally processed when the aggregate is persisted.
Persisting the State via Events
With events deeply involved, the structure of repository classes can be made generic. The Save method of a repository designed to operate with aggregate classes described so far will simply loop through the list of the aggregate's uncommitted events and call a new method the aggregate must offer—the ApplyEvent method:
public void ApplyEvent(CustomerNameChangedEvent evt)
{
  this.DisplayName = evt.DisplayName;
}
The aggregate class will have one overload of the ApplyEvent method for each event of interest. The CRUD-style code you considered way back will just find its place here. There's still one missing link: How do you orchestrate front-end use cases, end-user actions with multiple aggregates, business workflows and persistence? You need a bus component.
Introducing a Bus Component
A bus component can be defined as a shared pathway between running instances of known business processes. End users act through the presentation layer and set instructions for the system to deal with. The application layer receives those inputs and turns them into concrete business actions. In a CRUD scenario, the application layer will call directly the business process (that is, workflow) responsible for the requested action. When aggregates and business rules are too numerous, a bus greatly simplifies the overall design. The bus can be a component you write yourself, the Azure Service Bus, MSMQ or a product such as Rebus (bit.ly/2cmTd1s), NServiceBus (bit.ly/2cruxDI) or MassTransit (bit.ly/2cmT2Dl). The application layer pushes a command or an event to the bus for listeners to react appropriately. Listeners are components commonly called "sagas" that are ultimately instances of known business processes. A saga knows how to react to a bunch of commands and events. A saga has access to the persistence layer and can push commands and events back to the bus. The saga is the class where the aforementioned Handle method belongs. You typically have a saga class per workflow or use case and a saga is fully identified by the events and commands it can handle. The overall resulting architecture is depicted in Figure 1.
Figure 1 Using a Bus to Dispatch Events and Commands
Finally, note that events must also be persisted and queried back from their source. This raises another nontrivial point: Is a classic relational database ideal to store events? Different events can be added at any time in the development and even post production. Each event, in addition, has its own schema. In this context, a non-relational data store fits in even though using a relational database still remains an option—at least an option to consider and rule out with strong evidence.
Wrapping Up
I dare say that most of the perceived complexity of software is due to the fact that we keep on thinking the CRUD way for systems that, although based on the fundamental four operations in the acronym (create, read, update, delete), are no longer as simple as reading and writing to a single table or aggregate.
This article was meant to be the teaser for more in-depth analysis of patterns and tools, which will continue next month when I present a framework that attempts to make this sort of development faster and sustainable. Thanks to the following technical expert for reviewing this article: Jon Arne Saeteras.
https://docs.microsoft.com/en-us/archive/msdn-magazine/2016/august/cutting-edge-beyond-crud-commands-events-and-bus
2019-11-12T03:19:22
CC-MAIN-2019-47
1573496664567.4
[array(['images/mt767692.esposito_figure1_hires%28en-us%2cmsdn.10%29.png', 'Using a Bus to Dispatch Events and Commands Using a Bus to Dispatch Events and Commands'], dtype=object) ]
docs.microsoft.com
Before You Install POSReady 2/16/2009 Before you install Windows Embedded POSReady 2009, review this topic to help you become familiar with POSReady and to consider the best way to deploy it. Minimum Hardware Requirements for POSReady The hardware requirements for POSReady depend on whether you choose to use virtual memory or not. Windows manages virtual memory by using a paging file, which is automatically calculated as the computer's RAM multiplied by 1.5. You can adjust the paging file size after installation. The requirements are: - Minimum of 480 MB of free disk space without a paging file. Actual requirements will vary based on the system configuration and the applications and features that you choose to install. Additional available disk space might be required if you are installing over a network. For more information about how to install over a network, see Installing POSReady by Using Remote Deployment. Note You cannot install POSReady on a dynamic disk. You must either convert your disk to a basic disk or delete the volume and create a basic disk. - Minimum of 333 megahertz CPU clock speed. - Minimum of 64 MB of RAM (512 MB of RAM is recommended) with a paging file. If you are not using a paging file, POSReady requires a minimum of 512 MB of RAM. What's on the POSReady DVD The POSReady DVD contains the following: - POSReady core operating system (OS) files - Multilingual User Interface Pack (MUI) files for 32 languages - Companion applications Choosing Your Installation Type POSReady provides two installation types: - Interactive Installation, where you supply step-by-step information as the Setup Wizard displays steps on the screen. - Unattended Installation, where you supply information in an answer file so that Setup can run automatically. You should consider this installation type if you are installing POSReady with the same configuration on many computers. For more information about how to create an answer file for unattended Setup, see Creating a POSReady Answer File. You can create an unattended answer file during an interactive installation by using a command-line switch. Command-line switches can also be used to create a log file for error-logging purposes. For more information, see POSReady Setup and the Command Line. Information to Collect Before Installation Before you install POSReady, collect information that will be required for both installation types. You are prompted to provide this information on Setup Wizard pages for interactive installations, or you must include it in your answer file for unattended installations. - If the computer requires third-party storage drivers, you must have them available before you can install POSReady. For more information about drivers, see Adding Drivers for Your POSReady Installation. - The product key, which is the 25-character code unique to your copy of POSReady. - The name of the user and the organization associated with the computer on which the product will be installed. - The name (if any) of the computer on which the product will be installed, and the password for the administrator. The password is required, and must include three of the following: uppercase characters, lowercase characters, numbers, or symbols. Password length must not exceed 95 characters. - The default language and standards to use for formatting currency, dates, and more. For more information about how to use a Multilingual User Interface Pack (MUI), see Installing the Multilingual User Interface Pack (MUI). 
- The geographical location and time zone of the computer on which the product will be installed. - The file system to use (NTFS is recommended). - The TCP/IP Settings, DNS server address, and WINS server for the computer on which the product will be installed (all three can be obtained automatically if the network supports queries for this information). - If the computer is part of a workgroup, you will need the workgroup name. If the computer is a member of a domain, you will need the domain name, and the user name and password of a user who has permission to add computers to the domain. See Also Tasks Installing the Multilingual User Interface Pack (MUI)
https://docs.microsoft.com/en-us/previous-versions/windows/embedded/dd458829(v%3Dwinembedded.20)
2019-11-12T03:11:40
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
This page provides a step-by-step guide to V-Ray Lights and how to texture them.
Introduction
In this Quick Start tutorial we will be taking a look at V-Ray lights and how to apply textures to them, using a modified version of the headphone scene that mimics a photographic studio.
1) Scene Observations
For this tutorial we have a studio lighting variation to our headphone scene. Render the scene so we can get a concept of how it looks. We can get a little more interest and variety into our lighting setup by applying some of V-Ray's textures to the area lights.
2) Create a second camera
Create a new camera and point it directly at one of the area lights so that you can see your changes directly. Then set this as your render camera in the main Modo render properties. Next, select the Area Light. In the Properties tab, enable both the Visible to Camera and Visible to Reflection Rays options so that the effect of the textures applied to the light will be visible during rendering. Run V-Ray RT to confirm that everything is working.
3) Apply a softbox texture to the light
Scroll to the bottom of the Shader Tree and expand the Lights section. Select the area light that the camera is pointing at, and then from the Shader Tree Add Layer menu, scroll down to V-Ray Textures and from the sub-menu select Add V-Ray Softbox. Initially no change will be visible, so you need to enable some of the options on the V-Ray Softbox. Enable the Spots On option, and then also enable the Radial Vignette On option. Click the Edit Radial Vignette Gradient button just below it to open the Gradient Editor. Middle click inside the gradient to add a color key at the 100% mark. Make sure the key is selected and the Color Picker set to black. This will create a gradation from white in the center to black at the edges of the area light. With the key at 100% selected, change the color from black to a dark grey. Next enable the Frame Vignette On option and once again click the Edit Radial Vignette Gradient button. Add a new color key at 100% set to black. This will add a soft frame around the edges of the area light and provide more realistic and interesting reflections.
4) Apply a grid texture to the light
Next you will add a grid texture to the light. In the Shader Tree go to Add Layer > V-Ray Textures Bercon > V-Ray Bercon Tile. You will need to increase the Scale to 0.5. Reduce the variation by setting the Pattern to 0,1,1 and match the Tile Height and Tile Width to 4. In order to get this texture to mix with the softbox texture, we'll need to apply this texture with the Multiply blending mode so that it is mixed with the V-Ray Softbox below it. The area light should look something like this:
5) Copy the light setup to the other light
Now that we have the light texture set up, we can copy the materials to our other light.
6) Check and render the scene
In the render properties, reset the camera back to the original scene camera and do a test render with V-Ray RT. Because of the additional texturing, the lighting will be visibly darker. Select the area light directly above the headphones and increase its Radiance from 4 to 8 to brighten the scene. Do a production render and compare it with the original untextured result. You should see a substantial improvement with the textured lighting providing much greater depth and interest.
https://docs.chaosgroup.com/pages/viewpage.action?pageId=21806012
2019-11-12T03:07:58
CC-MAIN-2019-47
1573496664567.4
[]
docs.chaosgroup.com
To achieve that, we use these questions to prioritize what gets done:
https://docs.opencollective.com/help/about/the-open-collective-way/core-contributors-guidelines
2019-11-12T02:53:53
CC-MAIN-2019-47
1573496664567.4
[]
docs.opencollective.com
How Do I Acquire this Data?
You must install and run Sensor on each web server that serves the content for your site to collect all of the requests that are seen by those servers. These requests make up 90% or more of the requests made to your site and 90% or more of the data that is needed for the complete analysis of your site's traffic. Page Tags should then be used to collect the remaining 10% or less of the traffic data that is not known to your web servers. The following, however, are valid configurations for the collection of web request data from your site, in order of preference, based on our operational experience:
- Sensor is installed on each web server that you control and that supports your site. Content from third-party sites, content served from cache, and certain types of dynamic content should be tagged, and such page tags should send the data that they collect to a web server at your location that is running Sensor. You may add an additional web server if the level of page tag request traffic justifies such, or in special cases, dedicate a web server to collect these page tag requests.
- Sensor is installed on two web servers, also referred to as data collection servers in this guide, at your location that are dedicated to collecting page tag request data from tagged pages. All content on your site is tagged and all page tags are directed to the two data collection servers.
- Sensor's data collection services are provided by an outsourcer that runs data collection servers to collect all of your web request data. In this case, all content on your site is tagged and the page tags send their data to the outsourced data collection servers.
For more information about Sensor, see the Data Workbench Sensor Guide.
https://docs.adobe.com/content/help/en/data-workbench/using/page-tagging/t-how-acq-data-.html
2019-11-12T02:47:01
CC-MAIN-2019-47
1573496664567.4
[]
docs.adobe.com
discovery.wbemEnumInstances
Performs a WBEM query on the target and returns a list of DiscoveredWBEMInstance DDD nodes.
discovery.wbemEnumInstances(target, class_name, properties, namespace, filter_locally)
properties – the list of properties to retrieve; it must be present though it may be empty.
namespace – the WBEM namespace to query, for example, root/cimv2. Used with the class_name to identify objects of interest.
filter_locally – when set, the function does not request specific properties so all are returned. Only the specified properties are stored as DDD. This parameter can be used if a particular system does not support the retrieval of specific properties. The default is false, that is, request just the specified properties.
A WBEM credential is required which matches the endpoint being scanned. This is not the host credential used to scan the endpoint.
The following example shows the filter_locally parameter set in order to retrieve all properties:
discovery.wbemEnumInstances(discovery_access, "HITACHI_SCSIPCForPort", [ "Dependent", "Antecedent" ], "root/hitachi/smis", filter_locally := true);
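As an additional illustration, the call below queries a standard CIM class from the default root/cimv2 namespace and requests only two properties. The class name, property names, and result variable are hypothetical choices for this example, not values required by the function:

os_instances := discovery.wbemEnumInstances(discovery_access, "Win32_OperatingSystem", ["Caption", "Version"], "root/cimv2");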
https://docs.bmc.com/docs/discovery/112/developing/the-pattern-language-tpl/pattern-overview/body/functions/discovery-functions/discovery-action-functions/discovery-wbemenuminstances
2019-11-12T04:48:58
CC-MAIN-2019-47
1573496664567.4
[]
docs.bmc.com
Uninstalling a plugin fails
Issue: When trying to remove a plugin, the application displays the error message: "An internal error has occurred, we're sorry about that".
Reason: All TrueSight Intelligence.
https://docs.bmc.com/docs/intelligence/troubleshooting-plugin-issues-738269939.html
2019-11-12T04:50:34
CC-MAIN-2019-47
1573496664567.4
[]
docs.bmc.com
Overview
Companies today spend a considerable amount of time on planning their marketing strategies but they quite often miss the inbound marketing factor that can help them reach their target customers using their own company-created internet content – something that end customers care about the most. That's where HubSpot takes the reins for all your worries. HubSpot is an inbound marketing and sales platform. It is a marketer and developer of the software products that help companies plan their inbound marketing strategies. HubSpot provides tools for social media marketing, content management, web analytics, search engine optimization, etc., for customized content to attract end users. Keeping these factors in mind to help online sellers, CedCommerce presents the HubSpot Integration for Magento 2 Extension. Using this extension by CedCommerce, sellers can not only create customized content for their customer base but can also sync comprehensive product details from their Magento 2 admin panel to HubSpot. The result? You can manage your business well with in-depth details of your products.
A glance at its features:
Product Sync – Sync the product along with its details such as Name, Image, Price, and Description to HubSpot for your e-commerce store.
Customer Sync – Sync all the customer details from your e-commerce store to HubSpot. The customer details that you may sync are – Email, First Name, Last Name, Company Name, Telephone Number, Street, City, Region, Country, Post Code, and Contact Stage.
Deal Sync – Get all the orders from your e-commerce store synced with HubSpot. The order details that can be synced using HubSpot E-Commerce Integration are – Deal Stage, Deal Name, Closed Won Reason, Closed Lost Reason, Close Date, Amount, Pipeline, Abandoned Cart URL, Discount Amount, Increment ID, Shipment IDs, Tax Amount, and Contact Ids.
Line Item Sync – Sync the Line Items to HubSpot and know in detail about products' performance – which is being ordered or is high in demand. The HubSpot E-Commerce Integration lets you sync the Product ID, Deal ID, Discount Amount, Quantity, Price, Name, and SKU.
E-Mail – With the HubSpot E-Commerce Integration, you may create the e-mail pattern with personalized content from the HubSpot panel itself and make your presence more prominent amongst your target clientèle.
Marketing Automation – Forget the hassles of e-mail marketing. Operate and experience the automated e-mail marketing from the HubSpot panel with the HubSpot E-Commerce Integration, and rely on the self-operating e-mail marketing.
Analytics – Understand your customers' behaviour and purchasing pattern by understanding the analytics from the HubSpot panel. Get the comprehensive details with HubSpot E-Commerce Integration, have deeper insights about your marketplace analytics, and turn it into your forte.
Abandoned Cart Recovery – HubSpot E-Commerce Integration enables you to send emails to the customers from your HubSpot panel in the case of an Abandoned Shopping Cart, to procure the recovery. After a specific time period, communicate with the shoppers through an email to persuade them to take the desired action.
Customer Welcome – On your customers' first purchase, send the welcome email to them within a specific time period. From your HubSpot panel, initiate the email communication with shoppers once they have completed the buying process, within a definite time period.
Customer Re-engagement – With the HubSpot E-Commerce Integration, re-engage with your customers and draw their attention towards your brand once again. The HubSpot E-Commerce Integration authorizes you to send emails to communicate with your users who haven’t made a purchase with you lately.
https://docs.cedcommerce.com/hubspot/hubspot-magento-2-extension-guide/
2019-11-12T02:53:05
CC-MAIN-2019-47
1573496664567.4
[array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/skins/bar/images/search.png', 'search_box'], dtype=object) array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/core/images/loader.gif', None], dtype=object) ]
docs.cedcommerce.com
Overview
Dropshipping simply means that when your customer makes a purchase from your site, you order the item from your Chinese supplier and they ship the item to your customer. Dropshipping from AliExpress requires you to first set up a store or have a place to sell your goods, like Amazon or eBay. AliExpress is the number one online retailer in Russia. It is the retail branch of the Alibaba umbrella. It is one of the top 10 websites that the Russians access. AliExpress Dropshipping WooCommerce by CedCommerce connects your WooCommerce store to one of the world's biggest wholesale suppliers, AliExpress, and sets you up for dropshipping. With this WooCommerce Dropship Extension, you can import the products you want to your WooCommerce store, list them the way you want, and sell them to your buyers, the easy way.
Key features are as follows:
- Import products from AliExpress to your WooCommerce store.
- Set the mark-up price you want for the products on your WooCommerce store.
- Import as many as 1000 products from AliExpress with a single click.
- Set up the Crons for updating the Price and Inventory in sync with AliExpress, automatically.
https://docs.cedcommerce.com/woocommerce/woocommerce-aliexpress-dropshipping-guide/
2019-11-12T04:25:12
CC-MAIN-2019-47
1573496664567.4
[array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/skins/bar/images/search.png', 'search_box'], dtype=object) array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/core/images/loader.gif', None], dtype=object) ]
docs.cedcommerce.com
Materials Bank¶ We collect unique copies of all materials in the "Materials Bank" collection. The general features of a Bank are discussed in the introductory page. We assert uniqueness by structural parameters and by the use of the standard primitive representation. Mapping Function¶ The Materials mapping function first calculates the standard primitive representation of the candidate structure. Next, the corresponding hash string is produced. The hash is then compared against those of existing Bank entries, in the same manner as for other Banks. Advanced Search¶ Advanced search functionality specific to Materials, also available on the Materials Bank page, is described here. Copy from Bank¶ The procedure for copying (or importing) Bank Materials into the Account-owned Materials collection is described here.
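As a purely illustrative aside on the mapping function described above, the uniqueness check can be pictured with a short sketch; get_standard_primitive() below is a hypothetical placeholder for the real conversion step, and the actual Bank hashing scheme is not exposed here.

import hashlib
import json

def get_standard_primitive(structure):
    # Hypothetical placeholder: the real implementation would reduce the
    # structure to a canonical primitive cell (lattice plus basis).
    return structure

def structure_hash(structure):
    # Reduce the candidate to its standard primitive representation,
    # then serialize deterministically so equivalent structures hash identically.
    primitive = get_standard_primitive(structure)
    canonical = json.dumps(primitive, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def is_new_bank_entry(structure, existing_hashes):
    # A candidate is treated as unique only if its hash is not already in the Bank.
    return structure_hash(structure) not in existing_hashes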
https://docs.exabyte.io/materials/bank/
2019-11-12T04:30:52
CC-MAIN-2019-47
1573496664567.4
[]
docs.exabyte.io
NTP Server¶ The NTP Daemon (ntpd), configured at Services > NTP, allows pfSense® software to act as a Network Time Protocol server for a network, and also keeps the clock in sync against remote NTP servers as an NTP client itself. Before enabling this service, ensure that the router’s clock keeps fairly accurate time. The ntp.org NTPD distribution of ntpd is used. By default the NTP server will bind to and act as an NTP server on all available IP addresses. This may be restricted using the Interface(s) selection on Services > NTP. This service should not be exposed publicly. Ensure inbound rules on WANs do not allow connections from the Internet to reach the NTP server on the firewall.
https://docs.netgate.com/pfsense/en/latest/services/ntp-server.html
2019-11-12T03:35:23
CC-MAIN-2019-47
1573496664567.4
[]
docs.netgate.com
How Do I Create a New Table Using a Snowflake Connector? 3. Fill out the 'Add New Wizard' modal.
- Name: The name of this Connector.
- Description: An explanation of this Connector.
- Wizard Type: Choose 'Connection'.
- Type: Choose 'Snowflake Export'.
- Beacon: The beacon to use with this Connector.
- Snowflake Account Name: The account name for Snowflake.
- User Name: The user name for the Snowflake account.
- Password: The password for the Snowflake account.
Click 'Next'.
http://docs.tmmdata.com/m/25949/l/676968?data-resolve-url=true&data-manual-id=25949
2019-11-12T04:32:33
CC-MAIN-2019-47
1573496664567.4
[]
docs.tmmdata.com
Create In App Purchases for Android Overview - Create your Google Play Store listing - Get your billing key - Create a manifest.json file, add billing key, and add to your app on myapppresser.com - Build your app with a release key on PhoneGap Build - Create a beta release on Google Play and upload your app .apk - Create your in app purchase For Android you must first create a store listing for your app on the Google Play store. Get Your Billing Key From the Google Play dashboard, click your app. Go to Development Tools => Services and APIs. Under Licensing and in-app billing, copy the billing key. You will need this later. If you do not see a billing key, you may need to setup your billing profile first. Manifest.json File Create a new file called manifest.json. Add this code: { "play_store_key": "MIIBIjANBgkqhkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" } Replace the key with your billing key. This file needs to go in the root www folder of your app before you upload to PhoneGap Build. We can help you with this. Upload .apk to Google Play store with billing permissions You must upload an .apk with billing permissions, to do that you just need to add the In App Purchase plugin to your app and rebuild for the app store. Add this code to your custom config in the Settings tab of your app customizer: <preference name="phonegap-version" value="cli-8.0.0" /> <plugin spec="" /> Note: the cli-8.0.0 line will be removed in a future release Build your app, it needs to be hooked up to PhoneGap Build. Build your app with a release key in PhoneGap Build. Download the apk file, this is what you will upload to the Google Play Store. Create a Beta Release In the Google Play store, click on your app, then go to Release Management => App Releases. Under Beta, click create release or manage releases. Create a new closed beta release, add your email to the list of testers, and upload your apk there. You can test the app by downloading it from the QR code in PhoneGap Build or in myapppresser.com. Create the In App Purchase Go to your publishing dashboard, and click on your app. Expand "Store Presence" and click on In App Products. You may be prompted to setup a billing profile, you will need to be approved before you can add a purchase. To create a subscription, you must click the "Subscriptions" tab. Click the blue button to create a purchase or subscription. For Product ID, it's easiest if you use the same ID as iOS. For example, "membersubscription" like we used above. It's ok if you used a different ID, just make sure you setup the form correctly in the app with 2 different IDs. Fill out the fields and set the purchase to Active. When you are testing in app purchases, you do not need to use the release version of your app. You can go to PhoneGap Build and change the signing key back to "No key selected." Proceed to add the purchase form in your app.
https://docs.apppresser.com/article/479-create-in-app-purchases-for-android
2019-11-12T04:29:52
CC-MAIN-2019-47
1573496664567.4
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/543577d6e4b01359b0ede64c/images/5c33a5a02c7d3a31944fc14d/file-Vuh6e5Tmx5.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/543577d6e4b01359b0ede64c/images/5c33a41404286304a71df5be/file-kC2EvZOhvH.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/543577d6e4b01359b0ede64c/images/5c34f5dc04286304a71e00bb/file-VG9dVusrLB.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/543577d6e4b01359b0ede64c/images/5c33a2ef04286304a71df5ad/file-IfODc1PITL.png', None], dtype=object) ]
docs.apppresser.com
Note Due to interdependency between the Oracle Real Application Clusters and Oracle Database patterns, you might need to run two scans of the related hosts to discover all components of the deployment. Note In TKU May 2019 Oracle RAC modelling was updated, for more details see "Cluster Awareness in BMC Discovery 11.x series" section. This product can be discovered by any edition of BMC Discovery. Download our free Community Edition to try it out, or [see what else it can discover] ! Extended Discovery pattern which allows to model Database Detail Nodes being managed by the Oracle Database Server is available for this product. Oracle RDBMS (a product of Oracle Corporation) is an enterprise-class relational database management system product. Oracle RDBMS is available on multiple platforms such as: Unix (Solaris, HP-UX, AIX, Mac OS X Server, Tru64), Linux, Microsoft Windows, z/OS, OpenVMS. Version numbering of Oracle products has been fairly inconsistent and seems to vary from product to product with marketing and actual product versions often used interchangeably. Known versions of this product are: Oracle RDBMS is available in a number of different editions all of which are built on the same common code base. The editions are likely provided for marketing and license-tracking reasons. Known editions of the current version of the product (11g) are: The patterns have been created in a manner that allows them to support Windows, Linux and Unix platforms from the same module. The patterns in this module will set the following value for the type attribute on a SI: The following components/processes are identified using the combination of pattern definitions and simple identity mappings. Simple identifiers for DatabaseServer Simple identifiers for DatabaseServer Express In case of Oracle Database Server, version information is currently collected using one of four possible methods (sql query, active, registry, path) while in case of Oracle Database Server Express (UNIX) version information is collected using one of 2 possible methods (active, package). The methods are tried in an order of precedence based on likely success and/or accuracy of the information that can be gathered. Once a result is obtained, the method lower in precedence is not attempted. In order of precedence the methods are: The pattern obtains version information from the SQL query (performed by the core pattern to determine the optimal IP / port information) by means of a regex: (?i)Release\s+(\d+(?:\.\d+)*) For versions Oracle 18+ the following command is used in order to determine the version: Unix: %ora_home%/bin/oraversion -compositeVersion Windows: %ora_home%\bin\oraversion -compositeVersion If the above failed, the method is attempts to run SQLPlus command only if the installation path (ora_home) has been determined: Unix: "echo exit | ORACLE_HOME=%ora_home% %ora_home%/bin/sqlplus /NOLOG" Windows: 'set "ORACLE_HOME=%ora_home%" && echo exit | "%ora_home%\\bin\\sqlplus" /NOLOG' The output from this command is then parsed via the regex, " Release\s+(\d+(?:\.\d+)*) ", to extract the version string. If version prior to Oracle 8 is detected, a code mapping is used to attempt to map the version obtained to the actual Oracle version because such versions are not directly linked to the version of the Oracle Database Server. If this does not succeed, then this approach will not set a version for the product. 
Note The disadvantage of this method is that the user permissions for the account Discovery uses need to allow execution of the SQLPlus binary, and that an assumption is made that the 'oratab' file is being actively maintained with the $ORACLE_HOME path being accurate. If the installation path is not obtained, versioning using this method will not be attempted. The content of the <install_path>/oracle.key file is extracted to give the Oracle Install Key: ( 'HKEY_LOCAL_MACHINE\\' + oracle_keyfile.content + '\\'). Once the Oracle Install Key is known, version_key is obtained from the Windows Registry by searching for "%oracle_install_key%\VERSION". Oracle Database Server Express Edition (available on Unix and Windows) can be installed from the .rpm package only. The host is queried for the 'oracle-xe-univ' package, and its version parameter provides the full version of the product. The regular expressions used to match the package name are: oracle-xe-univ$ oracle-database- The Path Regex functionality allows a regular expression to be applied against the process command line to derive a version number from the command path or arguments. For Database Server (other than Express Edition) running on a UNIX host the regular expression used is as follows: (?:(?:ora|orcl)[^/]*|(?:ora|orcl)[^ ]*product[^ /]*|/[Pp][Rr][Oo][Dd][Uu][Cc][Tt][s]?|/prod|/oracle)/(1?\d)\.?(\d?)\.?(\d?)\.?(\d?) For Database Server Express Edition (UNIX) the regular expression used is as follows: /oracle\S+/(\d+(?:\.\d+)*) This path may come from a file or the results of an active command. Windows Path Regular Expression: (?i)(?:Oracle|ora|or|product)(?:\\|)(1?\d)\.?(\d?)\.?(\d?)\.?(\d?) Versioning is achieved in this case up to depth x.x.x depending on the deployment pattern. Examples of paths that would be matched and versions extracted are: For clustered environments the version can also be obtained from the result of the "crs_stat -v" command. Versioning is performed using the path versioning approach. The Path Regex functionality allows a regular expression to be applied against the process command line to derive a version number from the command path or arguments. The regular expressions used are as follows: Unix Path Regex: (?:(?:ora|orcl)[^/]*|(?:ora|orcl)[^ ]*product[^ /]*|/[Pp][Rr][Oo][Dd][Uu][Cc][Tt][s]?|/prod|/oracle|/app)/(1?\d)\.?(\d?)\.?(\d?)\.?(\d?) Windows Path Regex: (?i)(?:Oracle|ora|or|product)(?:\\|)(1?\d)\.?(\d?)\.?(\d?)\.?(\d?) When these regular expressions fail, another multi-platform regular expression is employed. Multi-platform Regular Expression: (?i)[/\\](\d+(?:\.\d+)+)[/\\]bin[/\\]tnslsnr Versioning is achieved in this case to depth x.x, x.x.x or x.x.x.x - depending on the deployment pattern. Examples of paths that would be matched and versions extracted are: Note These files are viewed using an unprivileged account. It is possible to view them as a privileged user. To do this you must alter the PRIV_CAT variable in the platform scripts. Unix/Linux Detailed analysis of methods to obtain the Oracle RDBMS product version was undertaken by Engineering and Oracle RDBMS SMEs (internal and customer), and it was concluded that, due to the complexity of the product in terms of deployment and configuration, there is no single way to reliably obtain the Oracle RDBMS version in all instances. An Oracle database server instance comprises a set of operating system processes and memory structures that interact with the storage.
Additional to this are client connectivity components which enable database clients to communicate with the database server. Depending on how Oracle was installed (including licensing restrictions) the processes listed in the table of related processes above may not all be observed on a single host running Oracle RDBMS. Client processes in particular are likely to be running on multiple hosts. There are a few configuration options available for this product: oratabs :=[ "/etc/oratab", "/etc/oracle/oratab","/etc/opt/oracle/oratab" "/var/opt/oracle/oratab", "/var/opt/unix/oratab", "/shared/opt/oracle/oratab" ]; - List of Oracle oratab locations listener_ora_path := [](""); - List of custom locations to Oracle listener config file (ex: <CUSTOM_PATH>/listener.ora or <CUSTOM_PATH>/<ORA_SID>/listener.ora) By default each Oracle Instance has at least one "Oracle Net Services (TNS) Listener" which manages and distributes connections from Oracle client programs to specific "Oracle Database Server" Instance. But in advanced Oracle configurations (in clusters, HA systems etc) it is possible to have many-to-many relations between Oracle Database Servers and TNS Listeners: The DatabaseServer pattern triggers on either the Oracle System Monitor (ora_smon_<SID> or xe_smon_<SID>) or Oracle Database Server (oracle.exe, oracle73.exe or oracle80.exe) processes to identify an instance of Oracle Database Server. The TNSListener pattern triggers on the Oracle Net Services (TNS) Listener (tnslsnr or tnslsnr.exe) process to identify an instance of TNS Listener. On Unix the main process present for each instance of an Oracle database is Oracle System Monitor (SMON). An instance is also denoted by a SID which is observed on a command-line as the ending of the process name. e.g. ora_smon_SXQ1 The DatabaseServer pattern definitions use the Oracle System Monitor process (ora_smon_<SID> or xe_smon_<SID>) as the trigger process. The instance name is extracted from the command using the following regular expression '(?i)ora_smon_(\S+)' (or 'xe_smon_(.+)'for the Express edition). This name is used to create a unique Software Instance (SI). The prime process is then collected into a set of all other processes that also have the same SID in their command or command-line arguments (in case of Oracle Net Services (TNS) Listener process). On Windows, one or more Oracle Database Server processes are observed on hosts running Oracle RDBMS. Oracle Database instances can in certain cases be inferred from the command-line arguments but this pattern is not always observed and may be linked to the number of database instances being managed. It seems that if only one database instance is being managed, an SID is not required. More research is however required to be certain that this behavior is correct. If the pattern fails to obtain the SID from the command-line arguments, it will attempt to obtain the SID by searching for the Oracle Database Server service that corresponds to the trigger process pid and then extract the SID from the service name using the following regular expression: '(?i)OracleService(\S+)' If a SID is not obtained by the pattern a grouped (on version) SI is created in case of Oracle Database Server, while in case of Oracle Database Server Express (UNIX) and SI with a key based on type and host key is created (since only one instance of Express Edition can run on a host). 
The TNS Listener pattern will attempt to extract the Oracle database SID from the command-line arguments of the trigger process and use that to create a unique Software Instance (SI). If a SID is not obtained by the pattern a grouped (on oracle installation path extracted from the trigger process) SI is created. For BMC Discovery 11.x, cluster awareness has been incorporated into the pattern module. This means that a clustered Oracle Database Server Software Instance will be linked to the Cluster node that is logically hosting the Software Instance rather than the host it is directly running on. In addition, the SI key will be altered to include global_db_name in place of ora_sid. Oracle Database Server Software Instance "instance" attribute may be updated. Starting from TKU May 2019 a new Oracle Database Worker SI node type will be created. Oracle Database Worker represents the SI running locally on the host and it will be linked to the clustered Oracle Database Server Software Instance. A communication relationship is created between the Oracle Database Server and the Oracle Net Services (TNS) Listener (with exception of Oracle Database Express Edition on either Windows or Unix platform). This relationship is modeled both from the DatabaseServer and the TNSListener patterns. The DatabaseServer pattern then tries to associate all other related Oracle processes, running on the same host, to the SI. NOTE: We cannot determine appropriate (TNS) Listener for the Express Edition of Oracle Database Server, as the command line doesn't contain any attributes. In order to represent architecture of homogeneous distributed database system, which is a network of two or more Oracle Databases that reside on one or more machines, the pattern creates a client-server communication link between Oracle SIs which form such system. The central concept in distributed database systems is a database link which is a connection between two physical database servers that allows a client to access them as one logical database. The pattern tries to obtain a list of defined database links to other Oracle databases by means of SQL query: "SELECT host FROM dba_db_links ORDER BY host" where "host" column could be: linked_ora_host: regex '^(\S+?)[:/]' linked_ora_port: regex ':(\d+)' linked_ora_sid: regex '/(\S+)$' the "net service name" of remote database. Pattern searches related section for each "net service name" in <ORACLE_HOME>/network/admin/tnsnames.ora file and obtains the following information from each found section: linked_oracle_host: regex '(?i)HOST\s*=\s*([^\s\)]+)' linked_oracle_service: regex '(?i)SERVICE_NAME\s*=\s*([^\s\)]+)' linked_oracle_sid: regex '(?i)SID\s*=\s*([^\s\)]+)' linked_oracle_host: regex '(?i)HOST\s*=\s*([^\s\)]+)' linked_oracle_service: regex '(?i)SERVICE_NAME\s*=\s*([^\s\)]+)' linked_oracle_sid: regex '(?i)SID\s*=\s*([^\s\)]+)' Then pattern creates a link to each Oracle SI with attribute service_name = <linked_si_service> on host <linked_si_host>. The Oracle Database Server pattern raises a flag when acting in an Oracle E-Business Suite environment. This flag is stored in the Oracle Database Server SI as the ebs_suite attribute. The way the pattern determines whether it has discovered and instance of Oracle Database Server running as part of E-Business is to search for any TNS Listener process where its command-line args match a regular expression APPS_<ora SID> where <ora SID> is this Database Server SID. 
The flag is then checked for later in the pattern in order to determine the correct path to listener.ora file, as this path is different for Database Servers running as a component of Oracle E-Business. The database SID (instance name) is initially extracted from the trigger process command line arguments by using a regular expression, which varies depending on Operating System: from process.args using regex '(\S+)' from Windows Service Name which starts 'oracle.exe' process: regex '(?i)OracleService(\S+)' fromprocess.cmd using regex '(?i)ora_smon_(\S+)' NOTE: Oracle SID for Windows is always stored in upper case. The pattern extracts service name from the JDBC connection. The database name is used as service name, if the connection string matches the following regular expression: Otherwise, the service name is extracted using the following regular expression: This method is employed for the DatabaseServer pattern and uses these possible approaches. Windows: 1. From process.cmd using regex "(?i)^(\w:.+)\\bin\\oracle" 2. From command line of Windows Service which starts 'oracle.exe' process: regex "(?i)^(\w:.+)\\bin\\\w\.exe" 3. From listener.ora file. This approach works only if related Oracle Net Services (TNS) Listener is found and has listener_ora_file_path and Oracle SID is known.. Unix: 1. from process.cmd using regex "(?i)^(/.+)/bin/ora_smon_" 2. from oratab file: the pattern tries to open a file called 'oratab' by searching through a list of potential (user configurable) locations for it: Note These files are typically viewed using an unprivileged account (Discovery login account). It is possible to view them as a privileged user. To do this you must alter the PRIV_CAT variable in the platform scripts If the 'oratab' file is located, the file is parsed for the database with the SID in the process command-line via the following regular expression: "(?m)^\s*%norm_ora_sid%:([^:]+):(?:[YNW])?" Note In clustered environments db_unique_name is used instead of SID 3. Using pmap command (Solaris and Linux) On certain Unix platforms (Solaris and Linux), the pmap command can be used against the process id (pid) of the 'oracle' process: <path_to_pmap>/pmap %process.pid% | awk '{print $4}' | grep '/oracle$' | uniq Oracle installation path is then extracted using one of the following regular expressions: For Database Server: \W(/[^ ]*/)bin/oracle\W For Database Express: \(/\S*)/bin/oracle\W Note This approach works only in cases where the appliance has credentials that give the logged-in user elevated privileges via privilege escalation mechanism (e.g. 'sudo'). The approach is disabled by default due to the above requirement (majority of installations do not provide these level of privileges to the account used by the Discovery appliance). To enable this approach: 1. Both the priv_execution and pmap_enabled options in the configuration section should be set to 'true'. 2. The PRIV_RUNCMD function should be defined in the appropriate platform scripts to invoke the privilege escalation (e.g. through the use of 'sudo' or 'suexec') on the hosts being scanned. 3. The pmap_path option in the configuration section may be modified to point to an alternative path (the default being /usr/bin/pmap) for an instance of 'pmap' on the hosts being scanned 4. From listener.ora file. This approach works only if related Oracle Net Services (TNS) Listener is found and has listener_ora_file_path attribute and Oracle SID is known.. 
The patterns use the following approach to obtain the port and IP (if specifically set) the database server is listening on: from: command "lsnrctl status <LISTENER_NAME>" or <ORACLE_HOME>/network/admin/listener.ora - default installation <ORACLE_HOME>/network/admin/<ORA_SID>_<SHORT_HOSTNAME>/listener.ora - Oracle with E-Business Suite <ORACLE_HOME>/network/admin/<ORA_SID>/listener.ora - Oracle with E-Business Suite <ORACLE_BASE>/<ORA_SID>/listener.ora - Oracle with Oracle Active Data Guard <LISTENER_DIR_PATH_CUSTOM>/listener.ora <LISTENER_DIR_PATH_CUSTOM>/<ORA_SID>/listener.ora , where <LISTENER_DIR_PATH_CUSTOM>is configured in pattern configuration section Note updated OracleRDBMS pattern tries to resolve all named addresses to IP addresses, like for example 'host_alias' being resolved to '192.168.1.10' in the example above The pattern uses the following methods for obtaining Oracle 'service_names' and 'global_db_name': In clustered environment pattern obtains <db_unique_name> from from output of command '<crs_home>/bin/crs_stat -v' by means of regexes: for Oracle RAC 11: '(?i)^ora\.([^\.]+)\.db' against the configuration section corresponding to current <ORA_SID>. for Oracle RAC 10: '(?i)NAME=ora\.([^\.]+)\.<ora_sid_norm>\.inst' then pattern run the following command to receive full information about Oracle RAC: ORACLE_HOME=<ora_home>; export ORACLE_HOME; ulimit -c 0 && <ora_home>/bin/srvctl config database -d <db_unique_name> and tries to obtain from output <db_domain> and <service_names> PRIV_CAT <ORACLE_HOME>/dbs/spfile<ORACLE_SID>.ora | egrep '(db_name|db_unique_name|db_domain)' PRIV_CAT <ORACLE_HOME>/dbs/init<ORACLE_SID>.ora | egrep '(db_name|db_unique_name|db_domain)' cmd /C findstr "db_name db_unique_name db_domain" "<ORACLE_HOME>\\database\\spfile<ORACLE_SID>.ora" cmd /C findstr "db_name db_unique_name db_domain" "<ORACLE_HOME>\\database\\init<ORACLE_SID>.ora" cmd /C findstr "db_name db_unique_name db_domain" "<ORACLE_HOME>\\dbs\\spfile<ORACLE_SID>.ora" Please note that it is used to uniquely identify the database, when discovering clustered environments (RAC), check that all related components have similar permissions for consistency. Service_names attributes consists of the values in the following order: 5. from output of ORACLE_HOME=<ora_home>; export ORACLE_HOME; ulimit -c 0 && <ora_home>/bin/srvctl config database -d <db_unique_name> command for clustered Instances Note To run the SRVCTL command successfully, its version must be the same as the version of the Database. So it is recommended to run it from <ora_home>\bin. The user, which runs the command, should be in software owner's group. Service_name is the first element in <service_names> list. Each attribute has its own purpose despite it looks like attributes could duplicate each other, The pattern obtains all 'net service names which are set up in local Oracle client ( <ORACLE_HOME>/network/admin/tnsnames.ora ), which are pointed to local Oracle SI (by SID or by service_name). 
Each net service name is obtained by regex: (?is)\n\s*([^\s\(\)]+)\s*=\s*\(\s*DESCRIPTION from the section in 'tnsnames.ora' file which matches regexes: for SID: (?is)SID\s*=\s*<ORACLE_SID>[\s|\)] for Service_name: (?is)SERVICE_NAME\s*=\s*<Service_name>[\s|\)] The pattern uses the following methods for finding evidence that current oracle instance is configured in a clustered setup Windows: pattern tries to run the command ' <crs_home>/bin/crs_stat -v ' and looks for a string which matches regex: (?i)USR_ORA_INST_NAME@SERVERNAME\([^\)]+\)=%ora_sid_norm%\s, where <crs_home>is taken from Oracle Clusterware SI running on this host. If a clustered setup is detected, the Oracle Database SI has the 'clustered' attribute set to 'true'. Pattern tries to obtain the list of host nodes on which current Oracle RAC resides and for which current instance is part of using command <crs_home>/bin/crs_stat -v where pattern obtains list of related host nodes names from section correspondent for current Oracle SID using regex: regex '(?i)GEN_USR_ORA_INST_NAME@SERVERNAME((\S+))=' In order to make this list equal on all hosts it is additional sorted. Edition information is obtained using the following methods: (?i)(\S+)\sEdition HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\<Oracle service key>\ORACLE_BUNDLE_NAME Get edition information for Oracle 11.2 and above from "<ora_home>/inventory/response" directory: file name in "<ora_home>/inventory/response" directory: Windows: dir "<ora_home>\\inventory\\response" Unix: "PRIV_LS <ora_home>/inventory/response" (where file names examples: oracle.server_EE.rsp or oracle.server_SE.rsp or oracle.server_PE.rsp or oracle.crs_Complete.rsp) output is then parsed by regex: (?i)oracle\.(?:server|crs)_(\S+)\.rsp if no 'valid' edition was extracted, then its obtained from content of the file <ora_home>/inventory/response/oracle.crs_Complete.rsp (content examples oracle_install_db_InstallEdition="EE" or "STD"/"SE" or "PE"), Unix: "PRIV_CAT <ora_home>/inventory/response/<rsp_file_name> | grep oracle_install_db_InstallEdition" by means of regex '(?i)oracle_install_db_InstallEdition="(\S+)"' Get edition information for Oracle 11.1.X and below from '<ora_home>/inventory/Components21/oracle.rdbms/<version_subdir>/context.xml' file: First we need to know exactly name of the directory where 'context.xml' resides <version_subdir> is obtained from directory listing, if more than one directory is found, then the highest version is used: #1: PRIV_LS <ora_home>/inventory/Components21/oracle.rdbms #2: PRIV_LS <ora_home>/inventory/Components21/oracle.server then pattern run the following commands, which extracts lines from context.xml file where edition information is stored: 'PRIV_CAT <ora_home>/inventory/Components21/oracle.server/<version_subdir>/context.xml | grep -w s_serverInstallType' if no or 'Custom' edition is extracted then: 'PRIV_CAT <ora_home>/inventory/Components21/oracle.rdbms/<version_subdir>/context.xml | grep -w s_nameOfBundle' then, Edition is extracted using regex: 'VAL="(\S+)"' In all cases, edition information is only recorded if it can be fully determined. Quite often, the edition information is set to 'Custom' in the context.xml file and this is done by the Oracle installation script if the defaults of creating a database are not followed but a custom installation is performed. In those cases, edition information is not stored in the Software Instance. The method detailed abovecannot howeverbe used to identify Oracle RDBMS Express Edition. 
A separate pattern has been created to identify this edition as the trigger process is different (and distinct) - it is xe_smon_<SID> . In addition to this, Express Edition has 'XE' as its SID, the only one available for the installation; therefore, 'Express' edition can always be positively determined and the 'edition' attribute set. In order to perform extended database discovery the Oracle Database pattern has to collect 'listen_tcp_sockets' information from all TNS Listeners which serve current Oracle SID and obtain Oracle "service_name" information. The pattern then tries to run the following SQL query on each combination of found listen tcp socket AND ( Oracle SERVICE or Oracle SID): SELECT banner FROM v$version WHERE banner LIKE 'Oracle%' Once the query successfully run, pattern stores information of successful query into attribute "success_login_credential", examples: success_login_credential := [method, port, ip, service_name] where method = 'sid' or 'service' This information is used by all associated patterns which run SQL queries against the Oracle Database Server allowing them not to run those queries against all possible pairs of listen tcp sockets AND ( Oracle SID or Oracle SERVICE ). Note Running of this SQL query is enabled by default! Disabling this SQL query by setting 'false' value for 'extended_sql_discovery' variable in pattern configuration section disables this feature for all associated patterns which run SQL queries against the Oracle Database Server The list of affected patterns: A separate pattern has been created to query the Oracle Database in order to obtain schema and (optionally) database table details. For more information about this pattern, please refer to the relevant page A separate pattern is used to identify and model a selection of Installed Options in the Oracle Database. For more information on this approach, please refer to the relevant page. A separate pattern is used to identify and model a selection of Management Packs in the Oracle Database. For more information on this approach, please refer to the relevant page. A pattern to identify and model Oracle Pro*C pre-compiler (installed as an optional component of Oracle Database) has also been developed. For more information about this pattern, please refer to the relevant page. The set of simple identifiers for Oracle RDBMS running on Unix/Linux hosts, including approaches used to version the product have been developed from local knowledge on Oracle RDBMS as well as with input from JPMC SMEs. Testing to ensure the processes related to Oracle RDBMS have been correctly identified has been performed using Discovery record data from hosts running Solaris,AIXand Linux operating systems. Record data contained enough information to extract the version information using the Regex Path versioning approach. In addition to this, active version command approach was developed and initially evaluated both on in-house Oracle RDBMS installations and on customer sites. The approach taken was shown to work well, provided the environment conditions that this approach requires were met. Testing to ensure the processes related to Oracle RDBMS Express Edition have been correctly identified has been performed against in-house Oracle RDBMS Express installations running on Linux hosts. Product version obtained using active command approach, package query and path regex was proven to work. 
Testing to ensure the processes related to Oracle RDBMS have been correctly identified has been performed using both Discovery Record data and in-house Oracle RDBMS installations. Path Regex versioning approach was also tested against both in-house Oracle RDBMS installations and Discovery Record data and was deemed to work well unless constrained by the data returned from the hosts (e.g. Discovery typically cannot obtain process command-line on hosts running Windows NT or Windows 2000 Server. This limitation no longer exists on hosts running Windows XP or Windows 2003 server). Testing to ensure the processes related to Oracle RDBMS Express Edition have been correctly identified has been performed against in-house Oracle RDBMS Express installations running on Windows hosts. Product version obtained using active command approach, package query and path regex was proven to work. Oracle Version Numbering Oracle Database distributed database architecture On some specific Oracle Enterprise Linux hosts running the "opatch lsinventory" command with insufficient privileges may cause the command hanging on the appliance. You need to scan the host with the appropriate privileges in order to avoid this behavior. A workaround was added in TKU October 2015 to remove the hanging process for Oracle Enterprise Linux 6.0+. srvctl command to get the database information may cause Java dumps on Oracle Database 12.2 running on AIX 7.1. Created by: Rebecca Shalfield 30 Oct 2007 Updated by: Dmytro Ostapchuk 26 Nov 2015
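To illustrate the regex-based version extraction described earlier on this page, here is a minimal sketch; the banner text is an invented sample, and the expression is the Release\s+(\d+(?:\.\d+)*) pattern quoted above.

import re

# Invented sample output, shaped like the banner returned by
# "SELECT banner FROM v$version" or printed on a sqlplus login.
banner = "Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production"

# The same case-insensitive expression the pattern uses to pull the version string.
match = re.search(r"Release\s+(\d+(?:\.\d+)*)", banner, re.IGNORECASE)
if match:
    print(match.group(1))  # -> 12.1.0.2.0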
https://docs.bmc.com/docs/display/Configipedia/Oracle+Database
2019-11-12T04:42:50
CC-MAIN-2019-47
1573496664567.4
[]
docs.bmc.com
+= Operator (Visual Basic). Adds the value of an expression to the value of a variable or property and assigns the result to the variable or property; with String operands it concatenates instead. variableorproperty += expression Parts variableorproperty Required. Any numeric or String variable or property. expression Required. Any numeric or String expression. For details of the underlying operation, see + Operator (Visual Basic). Note When you use the += operator, you might not be able to determine whether addition or string concatenation will occur. Use the &= operator for concatenation to eliminate ambiguity and to provide self-documenting code. The following example combines the value of one variable with another. The first part uses += with numeric variables to add one value to another. The second part uses += with String variables to concatenate one value with another. In both cases, the result is assigned to the first variable. ' This part uses numeric variables. Dim num1 As Integer = 10 Dim num2 As Integer = 3 num1 += num2 ' This part uses string variables. Dim str1 As String = "10" Dim str2 As String = "3" str1 += str2 The value of num1 is now 13, and the value of str1 is now "103". See Also Concepts Reference + Operator (Visual Basic) Arithmetic Operators (Visual Basic) Concatenation Operators (Visual Basic) Operator Precedence in Visual Basic Operators Listed by Functionality
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/s7s8d7f4%28v%3Dvs.90%29
2019-11-12T04:06:37
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
CInstance::SetNull The SetNull method sets a property to NULL. Syntax Platform::Boolean SetNull( LPCWSTR name ); Parameters name Name of the property to set to NULL. Return Value Returns TRUE if the operation was successful and FALSE if an attempt was made to set a nonexistent property. More information is available in the log file, Framework.log.
https://docs.microsoft.com/en-us/windows/win32/api/instance/nf-instance-cinstance-setnull?redirectedfrom=MSDN
2019-11-12T04:24:03
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
VRRP Configuration¶ VRRP is configured on a per-interface basis from within config-interface mode. To define a new VR address, use ip vrrp-virtual-router <vrid> for IPv4 or ipv6 vrrp-virtual-router <vrid> for IPv6 when configuring an interface. The <vrid> must be an integer from 1-255. This identifier must be identical for all nodes in the same cluster using a specific VR address. The VR ID must also be different from VR IDs used for other VR addresses on any other VRRP router on the network segment connected to this interface.

Note The VR ID must only be unique on a single layer 2 network segment. The same VR ID may be used on different segments.

Note In situations where it is unclear whether or not there is other VRRP traffic on a segment, run packet captures looking for VRRP to see if any turns up. There would typically be at least one VRRP advertisement per second from other nodes on the network. A packet capture would also show which VR IDs are active on the segment and thus should be avoided.

Tip Though it is common to use the last octet of the VR address as the VR ID, this is not required.

Example which creates a new virtual router address: tnsr(config)# int TenGigabitEthernet6/0/0 tnsr(config-interface)# ip vrrp-virtual-router 220 tnsr(config-vrrp4)# This command enters config-vrrp4 (IPv4) or config-vrrp6 (IPv6) mode to configure the properties of the VR address. This mode includes the following commands:

- virtual-address <ip-address> The IPv4 or IPv6 address which will be shared by the virtual router. Also referred to as the "Virtual Router Address" or "VR Address". For the primary node, or owner, of this address (priority 255), the same IP address must be configured on an interface.

- accept-mode (true|false) Controls whether TNSR will accept packets delivered to this virtual address while in master state if it is not the IP address owner. The default is 'false'. Deployments that rely on pinging the virtual address or using it for services such as DNS or IPsec should enable this feature. Note IPv6 Neighbor Solicitations and Neighbor Advertisements MUST NOT be dropped when accept-mode is 'false'.

- preempt (true|false) Instructs TNSR whether or not to preempt a lower priority peer to become master. The default value is true, and the owner of a VR address will always preempt other nodes, no matter how this value is set. When set to false, a failed node will not take back over from the current master when it recovers, but will wait until a new election occurs.

- priority <priority> The priority for the VR address on this host. Higher values are preferred during the master election process, with the highest priority router currently operating winning the election. The primary node, which is the owner of the VR address, must use a priority of 255 and no other node should have that priority. Lower priority nodes should use unique priority values, evenly distributed throughout the 1-254 range, depending on the number of nodes. The default value is 100.

- v3-advertisement-interval <interval> The interval, specified in centiseconds (hundredths of a second), at which VRRP advertisements will be sent by this node. The default value is 100, or one second. The value may be in the range of 1-4095.
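Putting these options together, the session below sketches how a backup node for VR ID 220 might be configured; the interface name, address, and priority are illustrative values only and will differ per deployment:

tnsr(config)# int TenGigabitEthernet6/0/0
tnsr(config-interface)# ip vrrp-virtual-router 220
tnsr(config-vrrp4)# virtual-address 203.0.113.10
tnsr(config-vrrp4)# priority 150
tnsr(config-vrrp4)# accept-mode true
tnsr(config-vrrp4)# preempt true
tnsr(config-vrrp4)# v3-advertisement-interval 100

Because the priority here is below 255, this node acts as a backup; the owner of 203.0.113.10 would use priority 255 and have that address configured directly on its interface.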
https://docs.netgate.com/tnsr/en/latest/vrrp/config.html
2019-11-12T04:23:08
CC-MAIN-2019-47
1573496664567.4
[]
docs.netgate.com
In certain countries or regions, Fiscal Hosts are required to collect local taxes—for example, VAT in the EU. Please contact [email protected] if you need Collectives under your umbrella to charge taxes. We will work with you to conform to your local legislation. VAT only applies if the collective creates a SERVICE or PRODUCT tier, and for events. Once VAT is set up for a collective, we will start asking for the country of residency and an optional VAT number from everyone who orders tiers or tickets from the collective. VAT is enabled on a collective-by-collective basis by setting a country and a "VAT setting" for the collective in the collective edit page ({the_collective}/edit). If the collective has a VAT number and should be responsible for collecting VAT itself, you can enable that by following the exact same steps as before, except that you'll choose the option Use my own VAT number on the last step.
https://docs.opencollective.com/help/fiscal-hosts/local-tax
2019-11-12T03:12:24
CC-MAIN-2019-47
1573496664567.4
[]
docs.opencollective.com
public abstract class AbstractBrokerMessageHandler extends Object implements MessageHandler, ApplicationEventPublisherAware, SmartLifecycle

Abstract base class for a MessageHandler that brokers messages to registered subscribers. Notable members include the DEFAULT_PHASE constant and the getPhase() and isAutoStartup() methods from the SmartLifecycle contract (the default isAutoStartup() implementation returns true; see Lifecycle.start(), SmartLifecycle.getPhase(), LifecycleProcessor.onRefresh(), ConfigurableApplicationContext.refresh()), along with the protected methods publishBrokerAvailableEvent(), publishBrokerUnavailableEvent(), and getClientOutboundChannelForSession(String sessionId). Publish ordering can be preserved with preservePublishOrder=true.
https://docs.spring.io/spring-framework/docs/5.2.0.M3/javadoc-api/org/springframework/messaging/simp/broker/AbstractBrokerMessageHandler.html
2019-11-12T04:11:26
CC-MAIN-2019-47
1573496664567.4
[]
docs.spring.io
BatchListOutgoingTypedLinks Returns a paginated list of all the outgoing TypedLinkSpecifier information for an object inside a BatchRead operation. For more information, see ListOutgoingTypedLinks and BatchRead:Operations. Contents - The reference that identifies the object whose attributes will be listed. Type: ObjectReference object Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
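As a rough illustration (not taken from this reference page), a BatchRead call containing a ListOutgoingTypedLinks operation might look roughly like the following with the AWS SDK for Python; the directory ARN and object selector are placeholders, and the exact request shape should be verified against the SDK documentation.

import boto3

clouddirectory = boto3.client("clouddirectory")

# Placeholder identifiers: substitute a real directory ARN and object selector.
response = clouddirectory.batch_read(
    DirectoryArn="arn:aws:clouddirectory:us-east-1:111122223333:directory/EXAMPLE",
    Operations=[
        {
            "ListOutgoingTypedLinks": {
                # The reference that identifies the object whose outgoing
                # typed links will be listed (required).
                "ObjectReference": {"Selector": "/some/object"}
            }
        }
    ],
    ConsistencyLevel="EVENTUAL",
)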
https://docs.aws.amazon.com/clouddirectory/latest/APIReference/API_BatchListOutgoingTypedLinks.html
2019-11-12T04:10:55
CC-MAIN-2019-47
1573496664567.4
[]
docs.aws.amazon.com
The Brigade.js API

This document describes the public APIs typically used for writing Brigade.js. It does not describe internal libraries, nor does it list non-public methods and properties on these objects. A Brigade JavaScript file is executed inside of a cluster. It runs inside of a Node.js-like environment (with a few libraries blocked for security reasons). It uses Node 8. High-level Concepts A Brigade JS file is always associated with a project. A project defines contextual information, and also dictates the security parameters under which the script will execute. A project may associate the script to a repository, where a repository is typically a VCS reference (e.g. a git repository). Each job will, by default, have access to the project's repository. Brigade files respond to events. That is, Brigade scripts are typically composed of one or more event handlers. When the Brigade environment triggers an event, the associated event handler will be called. The brigadier Library The main library for Brigade is called brigadier. The Brigade runtime grants access to this library. The source code for this library is located in brigadecore/brigadier. const brigadier = require('brigadier') It is considered idiomatic to destructure the library on import: const { events, Job, Group } = require('brigadier') Some objects described in this document are not declared in brigadier, but are exposed via brigadier. The BrigadeEvent class The BrigadeEvent class describes an event. Typically, it is exposed to the script via a callback handler. events.on("pull", (brigadeEvent, project) => {}) An instance of a BrigadeEvent has the following properties: buildID: string: The unique ID for the build. This will change for each build. type: string: The event type ( push, exec, pull_request). provider: string: The name of the thing that triggered this event. revision: Revision: The revision details, if supplied, of the underlying VCS system. payload: string: Arbitrary data supplied by an event emitter. Each event emitter will describe its own payload. For example, the GitHub gateway emits events that contain GitHub's webhook objects. cause: Cause: If one event triggers another event, the causal chain is passed through the cause property. The revision object The revision object has the following properties: commit: string: The commit ID, if supplied, for the underlying VCS system. When this is supplied, each Job will have access to the VCS at this revision. ref: string: The symbolic ref name (e.g. refs/heads/master). If the revision object is not provided, it may be interpreted as master, or the head of the main branch. The default value is not guaranteed to be master in future versions. The Cause class A Cause is attached to a BrigadeEvent, and describes the event that caused this event. It has the following properties: event: BrigadeEvent: The causing event reason: any: The reason this event was caused. Typically this is an error object. trigger: string: The mechanism that triggered this event (e.g. "unhandled exception") The after and error built-in events will set a Cause on their BrigadeEvent objects. The events Object Within brigadier, the events object provides access to the main event handler. events.on(eventName: string, callback: (e: BrigadeEvent, p: Project) => {}) The events.on() function is the way event handlers are registered. An on() method takes two arguments: the name of the event and the callback that will be executed when the named event fires.
events.on("push", (e, p) => { console.log(p.name); }); events.has(eventName: string): boolean events.has is used to see if an event handler was registered already. The Group class The Group class provides both static methods and object methods for working with groups. The static runAll(Job[]): Promise<Result[]> method The runAll method runs all jobs in parallel, and returns a Promise that waits until all jobs are done and then returns the collected results. This is useful for running a batch of jobs in parallel, but waiting until they are complete before continuing with another operation. The static runEach(Job[]): Promise<Result[]> method This runs each of the given jobs in sequence, blocking on each job until it is complete. The Promise will return the collected results. The new Group(Job[]): Group constructor Create a new Group and optionally pass it some jobs. The add(Job...) method Adds one or more Job objects to the group. The length(): number method Return how many jobs are in the group. The runAll(): Promise<Result[]> method Runs all of the jobs in the group in parallel. When the Promise resolves, it will wrap all of the results. Functionally, this is equivalent to the static runAll method. The runEach method Runs each of the jobs in sequence (synchronously). When the Promise resolves, it will wrap all of the results. Functionally, this is equivalent to the static runEach method. The Job class The Job class describes a job that can be run. constructor new Job(name: string, image?: string, tasks?: string[], imageForcePull?: boolean): Job The constructor requires a name parameter, and this must be unique within your script. It must be composed of the characters a-z, A-Z, 0-9, and -. Additionally, the - cannot be the first or last character, and the name must be at least two characters. Optionally, you may specify the container image (e.g. node:8, alpine:3.4). The container image must be fetchable by the runtime (Kubernetes). If no container is specified here or with Job.image, a default image will be loaded. Optionally, you may specify a list of tasks to be run inside of the container. If no tasks are specified here or with Job.tasks, the container will be run with its defaults. These two are equivalent: var one = new Job("one"); one.image = "alpine:3.4"; one.tasks = ["echo hello"]; var two = new Job("two", "alpine:3.4", ["echo hello"]); Properties of Job name: string: The job name shell: string: The shell in which to execute the tasks ( /bin/sh) tasks: string[]: Tasks to be run in the job, in order. Tasks are concatenated together and, by default, packaged as a Bourne ( /bin/sh) shell script with set -e. If the Bourne Again Shell is used ( /bin/bash), set -eo pipefailwill be used. args: string[]: Arguments to pass to the container’s entrypoint. It is recommended, though not required, that implementors not use both argsand tasks. imageForcePull: boolean: Defines the container image pull policy: Alwaysif trueor IfNotPresentif false(defaults to false). env: {[key: string]:string}: Name/value pairs of environment variables. image: string: The container image to run imagePullSecrets: string[]: The names of the pull secrets (for pulling images from a secure remote repository) mountPath: string: The path where any resources should be mounted (e.g. where a Git repository will be cloned) (defaults to /src) timeout: number: Time to wait, in milliseconds, before the job is marked “failed” useSource: bool: If false, no external resource will be loaded (e.g. 
no git clone will be performed) privileged: bool: If this is true, the job will be executed in privileged mode, which allows it to do things like access a Docker socket. EXPERTS ONLY. host: JobHost: Preferences for the host that runs the job. cache: JobCache: Preferences for the job's cache storage: JobStorage: Preferences for the way this job attaches to the build storage docker: JobDockerMount: Preferences for mounting a Docker socket serviceAccount: string: The name of the service account to use (if you need to override the default). annotations: {[key: string]:string}: Name/value pairs of annotations to add to the job's pod resourceRequests: JobResourceRequest: CPU and memory request resources for the job pod container. resourceLimits: JobResourceLimit: CPU and memory limit resources for the job pod container. streamLogs: boolean: controls whether logs from the job Pod will be streamed to output (similar functionality to kubectl logs PODNAME -f). volumes: kubernetes.V1Volume[]: list of Kubernetes volumes to be attached to the job pod specification. See the Kubernetes type definition volumeMounts: kubernetes.V1VolumeMount[]: list of Kubernetes volume mounts to be attached to all containers in the job pod specification. See the Kubernetes type definition Setting execution resources to a job For some jobs it is good practice to set limits and guarantee some resources. In the following example, job pod container resource requests and limits are set. var job = new Job("huge-job"); // Our job uses a lot of resources, we set huge requests but set safe memory limits: job.resourceRequests.memory = "2Gi"; job.resourceRequests.cpu = "500m"; job.resourceLimits.memory = "3Gi"; job.resourceLimits.cpu = "1"; All are optional; for example, you could set only resourceLimits.memory = "3Gi". The job.podName() method This returns the name of the pod that was started during job.run(). It will return an empty string before run() is called. The job.run(): Promise<Result> method Run the job, returning a Promise that returns when the job is complete. The JobCache class A JobCache object provides preferences for a job's usage of a cache. Caches are disabled by default. Properties: enabled: boolean: If true, the cache is turned on for this job. size: string: The size, defaults to 5Mi. This value is only evaluated the first time a job is cached. To resize, the cache must be destroyed manually. path: string: A read-only attribute returning the path (in the container) in which the cache is available. The JobDockerMount class The JobDockerMount controls whether, and how, a Docker socket is mounted to the job. Docker sockets are used for building Docker images. Because they mount to the host, using a Docker socket is considered dangerous. Thus, to use the Docker mount, the job must be put into privileged mode. Properties: enabled: boolean: If true, the Docker socket will be mounted to the pod. The JobHost class A JobHost object provides preferences for the host upon which the job is executed. os: string: The name of the OS upon which the job should be run ( linux, windows). Not all clusters support all OSes. name: string: The name of the host (node) upon which the job will run. This is highly system dependent. The JobStorage class enabled: boolean: If set to true, the Job will mount the build storage. Build storage exposes a mounted volume at /mnt/brigade/share with storage that can be shared across jobs. path: string: The read-only path to the shared storage from within the container.
The KubernetesConfig class A KubernetesConfig object has the following properties: namespace: string: The namespace in which Kubernetes objects are created. vcsSidecar: string: The name of the sidecar image that fetches the repository. By default, this is the Git sidecar that fetches git repositories. buildStorageSize: string: The size of the build shared storage space used by the build jobs. The Result class This wraps the result of a Job run. The toString(): string method This returns the result as a string. The Project class Properties: id: string: The unique ID of the project name: string: The project name, typically org/name. kubernetes: KubernetesConfig: The object describing this project’s Kubernetes settings repo: Repository: Information on the upstream repository (if available). secrets: {[key: string]: string}: Key/value pairs of secret name and secret value. The security model may limit access to this property or its values. Secrets ( project.secrets) are passed from the project configuration into a Kubernetes Secret, then injected into Brigade. So helm install brigade-project --set secrets.foo=bar will add foo: bar to project.secrets. The Event object The Event object describes an event. Properties: type: The event type (e.g. push) provider: The entity that caused the event ( github) revision: The Revision object containing details for the commit that this script should operate on. payload: The object received from the event trigger. For GitHub requests, its the data we get from GitHub. The Job object To create a new job: j = new Job(name); Parameters: - A job name (alpha-numeric characters plus dashes). Properties: name: The name of the job image: A Docker image with optional tag. tasks: An array of commands to run for this job shell: The terminal emulator that job tasks will be executed under. By default, this is /bin/sh env: Key/value pairs or Kubernetes value references that will be injected into the environment. - If supplying key/value, the key is the variable name ( MY_VAR), and the value is the string value ( foo) - If you are referencing existing Secrets or ConfigMaps in your Kubernetes cluster, the envobject key will be your secret name, and the value will be a Kubernetes reference object. fieldRef, secretKeyRef, and configMapKeyRefare accepted. resourceFieldRefis technically supported but not advised, since resources are not generally specified for Brigade jobs. - Example: javascript myJob.env = { myOneOffSecret: "secret value", myConfigReference: { configMapKeyRef: { name: "my-configmap", key: "my-configmap-key" } }, mySecretReference: { secretKeyRef: { name: "my-secret", key: "my-secret-key" } } }; It is common to pass data from the e.env Event object into the Job object as is appropriate: events.push = function(e) { j = new Job("example"); j.env = { DB_PASSWORD: project.secrets.dbPassword }; //... j.run(); }; The above will make $DB_PASSWORD available to the “example” job’s runtime. Methods: run(): Run this job and wait for it to exit. background(): Run this job in the background. wait(): Wait for a backgrounded job to complete. The Repository Class The Repository class describes a project’s VCS repository (if provided). name: string: The name of the repo ( org/name) cloneURL: string: The URL that the VCS software can use to clone the repository.
https://docs.brigade.sh/topics/javascript/
2019-11-12T03:59:32
CC-MAIN-2019-47
1573496664567.4
[]
docs.brigade.sh
Security best practices for Microsoft Dynamics CRM

Applies To: Dynamics CRM 2015

Internet Information Services (IIS) is a mature web service that is included with Windows Server.

Important: Make sure all websites that are running on the same computer as the Microsoft Dynamics CRM website also have access to the CRM database.

If you use a domain user account, before you run Microsoft Dynamics CRM Server Setup, you may need to verify that the service principal name (SPN) is set correctly for that account, and if necessary, set the correct SPN. For more information about SPNs and how to set them, see How to use SPNs when you configure Web applications that are hosted on IIS and Service principal name management in Microsoft Dynamics CRM.

The Microsoft Dynamics CRM Server installer deploys role-specific services and web application pools that operate under user credentials specified during Setup. To review the complete list of these roles and their permission requirements, see Minimum permissions required for Microsoft Dynamics CRM Setup and services.

If you want to adhere to a least-privileged model, the safest approach for implementing SPNs in a hosted Microsoft Dynamics CRM deployment is to register them against a dedicated domain user account.

See Also: Security considerations for Microsoft Dynamics CRM; Administration best practices for on-premises deployments of Microsoft Dynamics CRM
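For example, the SPNs registered for such an account can be listed and, if missing, added with the setspn utility; the domain, account, and host names below are placeholders only:

REM List the SPNs currently registered for the service account (placeholder names).
setspn -L CONTOSO\crm-svc

REM Register an HTTP SPN for the CRM web address against that account.
REM The -S switch checks for duplicate SPNs before adding the entry.
setspn -S HTTP/crm.contoso.com CONTOSO\crm-svc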
https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2015/deployment-administrators-guide/hh699761%28v%3Dcrm.7%29
2019-11-12T04:08:52
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
\= Operator Divides the value of a variable or property by the value of an expression and assigns the integer result to the variable or property. variableorproperty \= expression Parts variableorproperty Required. Any numeric variable or property. expression Required. Any numeric expression. For further information on integer division, see \ Operator (Visual Basic). The following example divides one Integer variable by a second and assigns the integer result to the first variable. Dim var1 As Integer = 10 Dim var2 As Integer = 3 var1 \= var2 ' The value of var1 is now 3. See Also Concepts Reference \ Operator (Visual Basic) /= Operator (Visual Basic) Arithmetic Operators (Visual Basic) Operator Precedence in Visual Basic Operators Listed by Functionality
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/3b25bh2c%28v%3Dvs.90%29
2019-11-12T03:38:31
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
#include <wx/html/htmlwin.h> wxHtmlWindow is probably the only class you will directly use unless you want to do something special (like adding new tag handlers or MIME filters). The purpose of this class is to display rich content pages (either local file or downloaded via HTTP protocol) in a window based on a subset of the HTML standard. The width of the window is constant, given in the constructor and virtual height is changed dynamically depending on page size. Once the window is created you can set its content by calling SetPage() with raw HTML, LoadPage() with a wxFileSystem location or LoadFile() with a filename. wxHtmlWindow uses the wxImage class for displaying images, so you need to initialize the handlers for any image formats you use before loading a page. See wxInitAllImageHandlers and wxImage::AddHandler. This class supports the following styles: The following event handler macros redirect the events to member function handlers 'func' with prototypes like: Event macros for events emitted by this class: Default ctor. Constructor. The parameters are the same as wxScrolled::wxScrolled() constructor. Adds an input filter to the static list of available filters. These filters are present by default: The plain text filter will be used if no other filter matches. Appends HTML fragment to currently displayed text and refreshes the window. Retrieves the default cursor for a given HTMLCursor type. Returns pointer to the top-level container. Returns anchor within currently opened page (see wxHtmlWindow::GetOpenedPage). If no page is opened or if the displayed page wasn't produced by call to LoadPage(), empty string is returned. Returns full location of the opened page. If no page is opened or if the displayed page wasn't produced by call to LoadPage(), empty string is returned. Returns title of the opened page or wxEmptyString if the current page does not contain <TITLE> tag. Returns a pointer to the current parser. Returns the related frame. Moves back to the previous page. Only pages displayed using LoadPage() are stored in history list. Returns true if it is possible to go back in the history i.e. HistoryBack() won't fail. Returns true if it is possible to go forward in the history i.e. HistoryForward() won't fail. Clears history. Moves to next page in history. Only pages displayed using LoadPage() are stored in history list. Loads an HTML page from a file and displays it. Unlike SetPage() this function first loads the HTML page from location and then displays it. This method is called when a mouse button is clicked inside wxHtmlWindow. The default behaviour is to emit a wxHtmlCellEvent and, if the event was not processed or skipped, call OnLinkClicked() if the cell contains an hypertext link. Overloading this method is deprecated; intercept the event instead. This method is called when a mouse moves over an HTML cell. Default behaviour is to emit a wxHtmlCellEvent. Overloading this method is deprecated; intercept the event instead. Called when user clicks on hypertext link. Default behaviour is to emit a wxHtmlLinkEvent and, if the event was not processed or skipped, call LoadPage() and do nothing else. Overloading this method is deprecated; intercept the event instead. Also see wxHtmlLinkInfo. Called when an URL is being opened (either when the user clicks on a link or an image is loaded). The URL will be opened only if OnOpeningURL() returns wxHTML_OPEN. This method is called by wxHtmlParser::OpenURL. You can override OnOpeningURL() to selectively block some URLs (e.g. 
for security reasons) or to redirect them elsewhere. Default behaviour is to always return wxHTML_OPEN. The return value is: Called on parsing <TITLE> tag. This reads custom settings from wxConfig. It uses the path 'path' if given, otherwise it saves info into currently selected path. The values are stored in sub-path wxHtmlWindow. Read values: all things set by SetFonts(), SetBorders(). Selects all text in the window. Returns the current selection as plain text. Returns an empty string if no text is currently selected. Selects the line of text that pos points at. Note that pos is relative to the top of displayed page, not to window's origin, use wxScrolled::CalcUnscrolledPosition() to convert physical coordinate. Selects the word at position pos. Note that pos is relative to the top of displayed page, not to window's origin, use wxScrolled::CalcUnscrolledPosition() to convert physical coordinate. This function sets the space between border of window and HTML contents. See image: Sets the default cursor for a given HTMLCursor type. These cursors are used for all wxHtmlWindow objects by default, but can be overridden on a per-window basis. This function sets font sizes and faces. See wxHtmlDCRenderer::SetFonts for detailed description. Sets the source of a page and displays it, for example: If you want to load a document from some location use LoadPage() instead. Sets the frame in which page title will be displayed. format is the format of the frame title, e.g. "HtmlHelp : %s". It must contain exactly one %s. This %s is substituted with HTML page title. After calling SetRelatedFrame(), this sets statusbar slot where messages will be displayed. (Default is -1 = no messages.) Sets the associated statusbar where messages will be displayed. Call this instead of SetRelatedFrame() if you want statusbar updates only, no changing of the frame title. Sets default font sizes and/or default font size. See wxHtmlDCRenderer::SetStandardFonts for detailed description. Returns content of currently displayed page as plain text. Saves custom settings into wxConfig. It uses the path 'path' if given, otherwise it saves info into currently selected path. Regardless of whether the path is given or not, the function creates sub-path wxHtmlWindow. Saved values: all things set by SetFonts(), SetBorders().
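As a minimal, hedged C++ sketch of typical usage (the frame class, member function, page markup, and file path below are hypothetical):
#include <wx/wx.h>
#include <wx/html/htmlwin.h>

void MyFrame::CreateHtmlPane()
{
    wxInitAllImageHandlers();                    // register image handlers before loading pages with images

    wxHtmlWindow *html = new wxHtmlWindow(this);
    html->SetRelatedFrame(this, "My app: %s");   // frame title follows the page <TITLE>
    html->SetRelatedStatusBar(0);                // show link targets in statusbar slot 0

    // Either hand it markup directly...
    html->SetPage("<html><body><h1>Hello</h1><p>Inline content.</p></body></html>");
    // ...or load a file or URL instead:
    // html->LoadPage("docs/index.htm");
}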
https://docs.wxwidgets.org/trunk/classwx_html_window.html
2019-11-12T04:07:21
CC-MAIN-2019-47
1573496664567.4
[]
docs.wxwidgets.org
Certificate function? Third party plugins, patches, bugfixes - Newbie - Posts: 3 - Joined: Mon Aug 05, 2019 8:50 pm - Version: forma.lms 2.0 Certificate function? I'd like to attach an API call to run when a user passes a course and is awarded a certificate. This API just adds the username, name, email address, and some other fields to a Campaign Monitor list. Is there a way to do this without messing with core? Re: Certificate function? You need a plugin. If interested, contact me with a private message. I'm Jasmines, the One. If you need, you can contact me. Re: Certificate function? In the latest version 2.3 we introduced new web services to retrieve certificates released for users, and APIs to get user data are already available. -------------------------------------------------- Become a CONTRIBUTOR Support the project for FREE!
https://docs.formalms.org/forums/13/13382.html?p=21181
2020-11-24T03:05:26
CC-MAIN-2020-50
1606141171077.4
[]
docs.formalms.org
Plugins define how a compiled taxi document will be processed. Typically, this involves creating models in some language. Currently, there's only a single generator provided - the Kotlin generator. However, the plugin system of Taxi is evolving rapidly, and we intend to support loading externally provided plugins. Plugins can be used to output a taxi model in a specific language - or to modify the source from a generator. Plugins are either packaged internally with Taxi, or you can write your own, which can be downloaded and included. See Writing your own plugins. Plugins are declared in the plugins section of the taxi.conf file. Declare the name of the plugin to enable, followed by configuration options for that plugin:
plugins: {
   'taxi/kotlin' {   // Name of the plugin
      // plugin config goes here
   }
}
pluginSettings: {   // Optional
   repositories: ['']
   localCache: '~/.taxi.plugins'
}
Each plugin determines its own configuration. The PluginWithConfig<T> interface defines the type of config that a plugin will consume. For an example, check out the Kotlin plugin.
https://docs.taxilang.org/command-line-tools/plugins
2020-11-24T04:04:56
CC-MAIN-2020-50
1606141171077.4
[]
docs.taxilang.org
The Business Management menu offers two sub-menus. These are <Company Profile>, where you can update information about your organization, and <Business Contract>, where you can activate the smart contract that underpins all of your projects. Company Profile Company Profile offers the ability to update some basic information about your account. Business Contract Upon logging into the ToolChain environment for the first time, you are presented with the sandbox environment. This environment is watermarked as such to distinguish it from a production environment, which incurs a cost to you since it consumes TCC tied to the mainnet, versus the free TCC associated with the test-net. Click <Switch to Production> from the upper right dropdown menu, then click on “Console” -> <Business Contract> to initiate the smart contract and the associated ToolChain services. Then click on <Create New Contract>, establishing your data provenance services. It may take a few minutes for the new contract to be completed after submission. The creation of a new contract will consume TCC. TCC must be purchased prior to activation. You can purchase TCC under “Order Management” -> <Order List> -> <New Order>.
https://docs.vetoolchain.com/hc/en-us/articles/360048704071-Company-Setting
2020-11-24T03:58:58
CC-MAIN-2020-50
1606141171077.4
[array(['/hc/article_attachments/360065952572/image-1.png', None], dtype=object) array(['/hc/article_attachments/360066169151/image-2.png', None], dtype=object) ]
docs.vetoolchain.com
Identify Relevance Issues You can tune search result relevance with Coveo by analyzing the queries, keywords, and clicked items to identify the top occurrences with clicks on items that don’t appear at the top of the search results list. Identifying items that are poorly ranked for frequent queries helps you focus on important relevance issues to fix. To identify relevance issues Access the Administration Console Search Relevance explorer (in the navigation bar on the left, under Analytics, select Reports, and then doesn’t consider the frequency of the measured event. Consequently, events with highest Average Click Rank values (particularly those higher than 3), are sometimes for events that occurred only once or just a few times. Get more details on a user query: In the table, bring the mouse pointer on a user query with a low Relevance Index value that you would like to investigate. In the table cell, click) or the Document Title (click) dimension (depending on your organization settings) clear issues with the item titles? Are the search words emphasized in the item? Are people using different keywords to search for this item? Consider using the following methods to improve ranking: Add synonyms to better match terms used by visitors with terms used in the indexed content (see Manage Thesaurus Rules). Add stop words to ignore in queries to improve relevance (see Manage Stop Word Rules). Change the weights of a ranking factor (see Manage Ranking Weight Rules). A developer can also customize search interfaces to influence ranking using various methods:
https://docs.coveo.com/en/2017/
2020-11-24T04:23:45
CC-MAIN-2020-50
1606141171077.4
[]
docs.coveo.com
.NET Framework deployment guide for developers This topic provides information for developers who want to install any version of the .NET Framework from the .NET Framework 4.5 to the .NET Framework 4.7.2 with their apps. For download links, see the section Redistributable Packages. You can also download the redistributable packages and language packs from these Microsoft Download Center pages: .NET Framework 4.7.2 for all operating systems (web installer or offline installer) .NET Framework 4.7.1 for all operating systems (web installer or offline installer) .NET Framework 4.7 for all operating systems (web installer or offline installer) .NET Framework 4.6.2 for all operating systems (web installer or offline installer) .NET Framework 4.6.1 for all operating systems (web installer or offline installer) .NET Framework 4.6 for all operating systems (web installer or offline installer) .NET Framework 4.5.2 for all operating systems (web installer or offline installer) .NET Framework 4.5.1 for all operating systems (web installer or offline installer) - Important notes: Note The phrase "the .NET Framework 4.5 and its point releases" refers to the .NET Framework 4.5 and all later versions. Versions of the .NET Framework from the .NET Framework 4.5.1 through the .NET Framework 4.7.2 are in-place updates to the .NET Framework 4.5, which means they use the same runtime version, but the assembly versions are updated and include new types and members. The .NET Framework 4.5 and its point releases are built incrementally on the .NET Framework 4. When you install the .NET Framework 4.5 or its point releases on a system that has the .NET Framework 4 installed, the version 4 assemblies are replaced with newer versions. If you are referencing a Microsoft out-of-band package in your app, the assembly will be included in the app package. You must have administrator privileges to install the .NET Framework 4.5 and its point releases. The .NET Framework 4.5 is included in Windows 8 and Windows Server 2012, so you don't have to deploy it with your app on those operating systems. Similarly, the .NET Framework 4.5.1 is included in Windows 8.1 and Windows Server 2012 R2. The .NET Framework 4.5.2 isn't included in any operating systems. The .NET Framework 4.6 is included in Windows 10, the .NET Framework 4.6.1 is included in Windows 10 November Update, and the .NET Framework 4.6.2 is included in Windows 10 Anniversary Update. The .NET Framework 4.7 is included in Windows 10 Creators Update, the .NET Framework 4.7.1 is included in Windows 10 Fall Creators Update, and the .NET Framework 4.7.2 is included in Windows 10 October 2018 Update and Windows 10 April 2018 Update. For a full list of hardware and software requirements, see System Requirements. Starting with the .NET Framework 4.5, your users can view a list of running .NET Framework apps during setup and close them easily. This may help avoid system restarts caused by .NET Framework installations. See Reducing System Restarts. Uninstalling the .NET Framework 4.5 or one of its point releases also removes pre-existing .NET Framework 4 files. If you want to go back to the .NET Framework 4, you must reinstall it and any updates to it. (See Installing the .NET Framework 4.) Microsoft Download Center. For more information about this issue, see Microsoft Security Advisory 2749655. For information about how a system administrator can deploy the .NET Framework and its system dependencies across a network, see Deployment Guide for Administrators. 
Deployment options for your app. Redistributable Packages The .NET Framework is available in two redistributable packages: web installer (bootstrapper) and offline installer (stand-alone redistributable). The following table compares the two packages. * The offline installer is larger because it contains the components for all the target platforms. When you finish running setup, the Windows operating system caches only the installer that was used. If the offline installer is deleted after the installation, the disk space used is the same as that used by the web installer. If the tool you use (for example, InstallAware or InstallShield) to create your app's setup program provides a setup file folder that is removed after installation, the offline installer can be automatically deleted by placing it into the setup folder. ** If you're using the web installer. Deployment methods Four deployment methods are available: You can set a dependency on the .NET Framework. You can specify the .NET Framework as a prerequisite in your app's installation, using one of these methods: Use ClickOnce deployment (available with Visual Studio) Create an InstallAware project (free edition available for Visual Studio users). Setting a dependency on the .NET Framework If you use ClickOnce, InstallAware, InstallShield, or WiX to deploy your app, you can add a dependency on the .NET Framework so it can be installed as part of your app. ClickOnce deployment version of the .NET Framework that you've used to build your project. Choose an option to specify the source location for the prerequisites, and then choose OK. If you supply a URL for the .NET Framework download location, you can specify either the Microsoft Download Center site or a site of your own. If you are placing the redistributable package on your own server, it must be the offline installer and not the web installer. You can only link to the web installer on the Microsoft Download Center. The URL can also specify a disc on which your own app is being distributed. In the Property Pages dialog box, choose OK. InstallAware deployment InstallAware builds Windows app (APPX), Windows Installer (MSI), Native Code (EXE), and App-V (Application Virtualization) packages from a single source. Easily include any version of the .NET Framework in your setup, optionally customizing the installation by editing the default scripts. For example, InstallAware pre-installs certificates on Windows 7, without which the .NET Framework 4.7 setup fails. For more information on InstallAware, see the InstallAware for Windows Installer website. InstallShield deployment creating a setup and deployment project for the first time, choose Go to InstallShield or Enable InstallShield Limited Edition to download InstallShield Limited Edition for your version of Microsoft Visual Studio. Restart Visual Studio.. Windows Installer XML (WiX) deployment. Installing the .NET Framework manually In some situations, it might be impractical to automatically install the .NET Framework with your app. In that case, you can have users install the .NET Framework themselves. The redistributable package is available in two packages. In your setup process, provide instructions for how users should locate and install the .NET Framework. Chaining the .NET Framework installation to your app's setup installer or the offline installer. 
Each package has its advantages: If you use the web installer, the .NET Framework setup process will decide which installation package is required, and download and install only that package from the web. If you use the offline installer, you can include the complete set of .NET Framework installation packages with your redistribution media so that your users don't have to download any additional files from the web during setup. Chaining by using the default .NET Framework UI To silently chain the .NET Framework installation process and let the .NET Framework installer provide the UI, add the following command to your setup program: <.NET Framework redistributable> /q /norestart /ChainingPackage <PackageName> For example, if your executable program is Contoso.exe and you want to silently install the .NET Framework 4.5 offline redistributable package, use the command: dotNetFx45_Full_x86_x64.exe /q /norestart /ChainingPackage Contoso You can use additional command-line options to customize the installation. For example: To provide a way for users to close running .NET Framework apps to minimize system restarts, set passive mode and use the /showrmui option as follows: dotNetFx45_Full_x86_x64.exe /norestart /passive /showrmui /ChainingPackage Contoso This command allows Restart Manager to display a message box that gives users the opportunity to close .NET Framework apps before installing the .NET Framework. If you're using the web installer, you can use the /LCID option to specify a language pack. For example, to chain the .NET Framework 4.5 web installer to your Contoso setup program and install the Japanese language pack, add the following command to your app's setup process: dotNetFx45_Full_setup.exe /q /norestart /ChainingPackage Contoso /LCID 1041 If you omit the /LCID option, setup will install the language pack that matches the user's MUI setting. Note Different language packs may have different release dates. If the language pack you specify is not available at the download center, setup will install the .NET Framework without the language pack. If the .NET Framework is already installed on the user’s computer, the setup will install only the language pack. For a complete list of options, see the Command-Line Options section. For common return codes, see the Return Codes section. Chaining by Using a Custom UI If you have a custom setup package, you may want to silently launch and track the .NET Framework setup while showing your own view of the setup progress. If this is the case, make sure that your code covers the following: Check for .NET Framework hardware and software requirements. Detect whether the correct version of the .NET Framework is already installed on the user’s computer. Important In determining whether the correct version of the .NET Framework is already installed, you should check whether your target version or a later version is installed, not whether your target version is installed. In other words, you should evaluate whether the release key you retrieve from the registry is greater than or equal to the release key of your target version, not whether it equals the release key of your target version. Detect whether the language packs are already installed on the user’s computer. If you want to control the deployment, silently launch and track the .NET Framework setup process (see How to: Get Progress from the .NET Framework 4.5 Installer). If you’re deploying the offline installer, chain the language packs separately.
Customize deployment by using command-line options. For example, if you’re chaining the .NET Framework web installer, but you want to override the default language pack, use the /LCID option, as described in the previous section. - Detecting the .NET Framework The .NET Framework installer writes registry keys when installation is successful. You can test whether the .NET Framework 4.5 or later is installed by checking the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full folder in the registry for a DWORD value named Release. (Note that "NET Framework Setup" doesn't begin with a period.) The existence of this key indicates that the .NET Framework 4.5 or a later version has been installed on that computer. The value of Release indicates which version of the .NET Framework is installed. Important You should check for a value greater than or equal to the release keyword value when attempting to detect whether a specific version is present. Detecting the language packs You can test whether a specific language pack is installed by checking the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full\LCID folder in the registry for a DWORD value named Release. (Note that "NET Framework Setup" doesn't begin with a period.) LCID specifies a locale identifier; see supported languages for a list of these. For example, to detect whether the full Japanese language pack (LCID=1041) is installed, check for the following values in the registry: Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full\1041 Name: Release Type: DWORD To determine whether the final release version of a language pack is installed for a particular version of the .NET Framework from 4.5 through 4.7.2, check the value of the RELEASE key DWORD value described in the previous section, Detecting the .NET Framework. Chaining the language packs to your app setup The .NET Framework provides a set of stand-alone language pack executable files that contain localized resources for specific cultures. The language packs are available from the Microsoft Download Center: Important The language packs don't contain the .NET Framework components that are required to run an app; you must install the .NET Framework by using the web or offline installer before you install a language pack. Starting with the .NET Framework 4.5.1, the package names take the form NDP<version>-KB<number>-x86-x64-AllOS-<culture>.exe, where version is the version number of the .NET Framework, number is a Microsoft Knowledge Base article number, and culture specifies a country/region. An example of one of these packages is NDP452-KB2901907-x86-x64-AllOS-JPN.exe. Package names are listed in the Redistributable Packages section earlier in this article. To install a language pack with the .NET Framework offline installer, you must chain it to your app's setup. For example, to deploy the .NET Framework 4.5.1 offline installer with the Japanese language pack, use the following command: NDP451-KB2858728-x86-x64-AllOS-JPN.exe /q /norestart /ChainingPackage <ProductName> You do not have to chain the language packs if you use the web installer; setup will install the language pack that matches the user's MUI setting. If you want to install a different language, you can use the /LCID option to specify a language pack. For a complete list of command-line options, see the Command-Line Options section. Troubleshooting Return codes The following table lists the most common return codes for the .NET Framework redistributable installer.
The return codes are the same for all versions of the installer. For links to detailed information, see the next section. Download error codes See the following content: Other error codes See the following content: Uninstalling the .NET Framework Starting with Windows 8, you can uninstall the .NET Framework 4.5 or one of its point releases by using Turn Windows features on and off in Control Panel. In older versions of Windows, you can uninstall the .NET Framework 4.5 or one of its point releases by using Add or Remove Programs in Control Panel. Important For Windows 7 and earlier operating systems, uninstalling the .NET Framework 4.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, or 4.7.2 doesn't restore .NET Framework 4.5 files, and uninstalling the .NET Framework 4.5 doesn't restore .NET Framework 4 files. If you want to go back to the older version, you must reinstall it and any updates to it. Appendix Command-line options The following table lists options that you can include when you chain the .NET Framework 4.5 redistributable to your app's setup. Supported languages The following table lists .NET Framework language packs that are available for the .NET Framework 4.5 and its point releases.
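As a concrete companion to the Detecting the .NET Framework section above, the following is a minimal C# sketch of the registry check; 378389 is the documented Release value for the .NET Framework 4.5, and you would substitute the value for the version your app actually targets:
using Microsoft.Win32;

static bool IsNet45OrNewerInstalled()
{
    // Check for a Release value greater than or equal to the target version's value,
    // not for an exact match, as recommended above.
    using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
        @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"))
    {
        object release = (key == null) ? null : key.GetValue("Release");
        return release != null && (int)release >= 378389;
    }
}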
https://docs.microsoft.com/en-us/dotnet/framework/deployment/deployment-guide-for-developers
2019-03-18T15:46:46
CC-MAIN-2019-13
1552912201455.20
[]
docs.microsoft.com
Using Impersonation with Transport Security Impersonation is the ability of a server application to take on the identity of the client. It is common for services to use impersonation when validating access to resources. The server application runs using a service account, but when the server accepts a client connection, it impersonates the client so that access checks are performed using the client's credentials. Transport security is a mechanism both for passing credentials and securing communication using those credentials. This topic describes using transport security in Windows Communication Foundation (WCF) with the impersonation feature. For more information about impersonation using message security, see Delegation and Impersonation. Five Impersonation Levels Transport security makes use of five levels of impersonation, as described in the following table. The levels most commonly used with transport security are Identify and Impersonate. The levels None and Anonymous are not recommended for typical use, and many transports do not support using those levels with authentication. The Delegate level is a powerful feature that should be used with care. Only trusted server applications should be given the permission to delegate credentials. Using impersonation at the Impersonate or Delegate levels requires the server application to have the SeImpersonatePrivilege privilege. An application has this privilege by default if it is running on an account in the Administrators group or on an account with a Service SID (Network Service, Local Service, or Local System). Impersonation does not require mutual authentication of the client and server. Some authentication schemes that support impersonation, such as NTLM, cannot be used with mutual authentication. Transport-Specific Issues with Impersonation The choice of a transport in WCF affects the possible choices for impersonation. This section describes issues affecting the standard HTTP and named pipe transports in WCF. Custom transports have their own restrictions on support for impersonation. Named Pipe Transport The following items are used with the named pipe transport: The named pipe transport is intended for use only on the local machine. The named pipe transport in WCF explicitly disallows cross-machine connections. Named pipes cannot be used with the Impersonate or Delegate impersonation level. The named pipe cannot enforce the on-machine guarantee at these impersonation levels. For more information about named pipes, see Choosing a Transport. HTTP Transport The bindings that use the HTTP transport (WSHttpBinding and BasicHttpBinding) support several authentication schemes, as explained in Understanding HTTP Authentication. The impersonation level supported depends on the authentication scheme. The following items are used with the HTTP transport: The Anonymous authentication scheme ignores impersonation. The Basic authentication scheme supports only the Delegate level. All lower impersonation levels are upgraded. The Digest authentication scheme supports only the Impersonate and Delegate levels. The NTLM authentication scheme, selectable either directly or through negotiation, supports only the Delegate level on the local machine. The Kerberos authentication scheme, which can only be selected through negotiation, can be used with any supported impersonation level. For more information about the HTTP transport, see Choosing a Transport.
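To make the Impersonate level concrete, here is a minimal, hedged C# sketch; the contract, binding choice, and endpoint address are hypothetical, and a real service would also need hosting and configuration that is not shown here:
using System;
using System.Security.Principal;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void PlaceOrder(string item);
}

public class OrderService : IOrderService
{
    // Run this operation under the caller's Windows identity (Impersonate level).
    [OperationBehavior(Impersonation = ImpersonationOption.Required)]
    public void PlaceOrder(string item)
    {
        Console.WriteLine("Processing as: " + WindowsIdentity.GetCurrent().Name);
    }
}

public static class Client
{
    public static void Main()
    {
        // Client side: cap what the service may do with the caller's token.
        var factory = new ChannelFactory<IOrderService>(
            new NetTcpBinding(SecurityMode.Transport),           // transport security over TCP
            new EndpointAddress("net.tcp://localhost:8000/orders"));
        factory.Credentials.Windows.AllowedImpersonationLevel =
            TokenImpersonationLevel.Impersonation;                // allow Impersonate, not Delegate

        IOrderService proxy = factory.CreateChannel();
        proxy.PlaceOrder("example");
    }
}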
https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/using-impersonation-with-transport-security
2019-03-18T16:01:37
CC-MAIN-2019-13
1552912201455.20
[]
docs.microsoft.com
The network-control interface network-control enables the configuration of networking and network namespaces via ip netns, providing wide, privileged access to networking. Auto-connect: no Requires snapd version 2.20+. This is a snap interface. See Interface management and Supported interfaces for further details on how interfaces are used.
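Because network-control does not auto-connect, a snap that needs it has to declare the plug and the interface must then be connected manually (or via a store assertion). A hedged sketch, with the snap and app names as placeholders:
apps:
  netadmin:
    command: bin/netadmin
    plugs:
      - network-control

Once the snap is installed, connect the interface:
sudo snap connect my-snap:network-control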
https://docs.snapcraft.io/the-network-control-interface/7882
2019-03-18T16:31:04
CC-MAIN-2019-13
1552912201455.20
[]
docs.snapcraft.io
simple green industrial cleaner cleaning with all purpose to remove burnt on sawdust and crystal degreaser sds.
http://top-docs.co/simple-green-industrial-cleaner/simple-green-industrial-cleaner-cleaning-with-all-purpose-to-remove-burnt-on-sawdust-and-crystal-degreaser-sds/
2019-03-18T16:20:21
CC-MAIN-2019-13
1552912201455.20
[array(['http://top-docs.co/wp-content/uploads/2018/05/simple-green-industrial-cleaner-cleaning-with-all-purpose-to-remove-burnt-on-sawdust-and-crystal-degreaser-sds.jpg', 'simple green industrial cleaner cleaning with all purpose to remove burnt on sawdust and crystal degreaser sds simple green industrial cleaner cleaning with all purpose to remove burnt on sawdust and crystal degreaser sds'], dtype=object) ]
top-docs.co