Dataset columns: content (string), url (string), timestamp (timestamp[ms]), dump (string), segment (string), image_urls (string), netloc (string).
Metadata is how you track your instruments: things like names, asset numbers, locations, makes, and models. TetraScience allows you to input or copy instrument metadata into the program.

Edit Device Metadata - My Devices Screen
From the 'My Devices' screen, it is possible to batch edit instrument metadata. To do this:
1. Navigate to the 'Settings' page from the top navigation bar menu.
2. Click on "My Organization".
3. Navigate to the 'Devices' tab.
4. In the 'Devices' section (below 'Device groups'), click 'Edit All' on the far right.
5. This opens the 'Instrument Info Editor.' From this screen, you can edit device info as a spreadsheet.

Edit Device Metadata - Device Panels Screen
You can also edit individual device names from a device panel.
1. From 'Device Panels', find the panel for the device you want to edit.
2. Click the gear icon to bring up the settings panel.
3. Click the pencil icon to edit the device name, and click the save icon when you are done.
http://docs.tetrascience.com/settings-and-administration/edit-device-metadata
2018-10-15T14:59:13
CC-MAIN-2018-43
1539583509326.21
[]
docs.tetrascience.com
Created: 26/04/2017 Latest update: 01/06/2018 By: Villatheme
Thank you for purchasing our plugin. If you have any questions that are beyond the scope of this documentation, please feel free to request support at our Support Forum. Thanks so much!

What is WooCommerce Product Builder?
WooCommerce Product Builder is an essential plugin which helps you build a full product set from smaller components, exactly as you want. This powerful plugin can bring your website significant profit. With its friendly user interface, WooCommerce Product Builder allows:

System Requirements: Make sure you meet the following minimum requirements for a trouble-free installation: WordPress 4.2.x or higher (WordPress 4.5+ recommended). WooCommerce 3.2.x.

Download Plugin: Go to Plugins/ Add New/ Upload Plugin/ Choose File/ select the plugin zip file/ click "Install Now"/ click "Activate Plugin". Done! Let's start using the plugin.

Video: install and set up WooCommerce Product Builder

After installing and activating the plugin, create your first product builder page. Go to Dashboard/ Product Builders/ Add New to add a new product builder page. Enter the product builder page name and configure the products and categories for each step. That's it; you can now see it on the front-end.

On the front-end page, after finishing all steps, or by clicking the Preview button, customers will be redirected to the preview page. All products, quantities, prices and the total price will be listed here. Customers can choose to add all products to the cart and then check out, or send the complete product via email.

How to use the Compatible feature on WooCommerce Product Builder?
Please keep in mind that WooCommerce Product Builder uses attributes to detect which products from different steps are compatible. First, you need to create attributes and add these attributes to your products. To add new attributes, go to Dashboard/ Products/ Attributes. In this picture, I create a Socket attribute. Please take note that the plugin will not work with custom attributes. After successfully creating a new attribute, go to the product pages to add attributes to products. Now I will add the attribute "Socket" to 2 different products in 2 different steps of WooCommerce Product Builder. Click on the text "Custom product attribute", look for the attribute that you have just created, then click Add. After successfully adding an attribute to the product, click on "value(s)" to add a value for that attribute. You can select an existing value or create a new one. Repeat the process above with the other products. In this case, I added the value "LGA 1151" of the attribute "Socket" to some products in the "Main" and "CPU" categories. Go to the Product Builder edit page at Dashboard/ Product Builder/ *Your Product Builder*. On this page, I have the "Mainboard" categories at step 2, after the CPU categories at step 1. Scroll down the page and you will find the dependency section. Assign Main to CPU. Done. Now, after customers select a CPU with socket LGA 1151 at the first step, they will find only mainboards with socket LGA 1151 in the second step.

Attribute widgets help customers search products by attribute on the front-end. To display attribute widgets on the front-end, go to Dashboard/ Appearance/ Widgets and look for WC Product Builder Price Filter, WC Product Builder Rating Filter and WC Product Builder Attribute Filter. Drag and drop these widgets into the WC Product Builder Sidebar.
Take note that you may need to add the WC Product Builder Attribute Filter widget several times, because each attribute requires its own widget. In each widget's settings, you can edit the widget title and click on Attribute to select an attribute slug. All product builder pages will be listed in Dashboard/ Product Builders. You can add a new product builder or remove one here, and click on any product builder to edit it. Set up the WooCommerce Product Builder settings at Dashboard/ Product Builder/ Settings. Thank you for your attention! If you have any questions, please create a topic at our FORUM; we will respond within 24 hours.
http://docs.villatheme.com/woocommerce-product-builder/
2018-10-15T16:08:24
CC-MAIN-2018-43
1539583509326.21
[]
docs.villatheme.com
Meta-data guide¶ For each entry in org-id.guide, a full meta-data entry is maintained, following the schema. This meta-data:
- Helps data publishers to understand the nature of an organization identifier list, and how to find the identifiers it contains;
- Helps users to interpret identifiers, and locate additional sources of information linked to each id;
- Drives the quality and relevance rankings used in the org-id.guide frontend.

Assigning a Code¶ Each organisation list code is made up of two parts: a jurisdiction code, and a list name code.

1) Jurisdiction code¶ For any list which contains entries only from a given country, the ISO 3166-1 alpha-2 country code should be used. For lists that contain entries from multiple countries, one of the following codes should be used (NOTE: We rely here on the fact that in ISO 3166-1 the following alpha-2 codes can be user-assigned: AA, QM to QZ, XA to XZ, and ZZ. We avoid any widely used X codes.).

2) List name code¶ The list name code is manually assigned. In general, list name codes have been designed so that:
- They are between 2 and 7 characters long;
- They use a recognisable acronym or contraction of the name of the organisation list;
- Acronyms should be based on the local language version of the name;
- Codes should be memorable, allowing users to become familiar with codes.
However, there is no intention that the list name code portion of the prefix will carry any semantics (e.g. the type of organisations listed, etc.). Points 1 - 4 are guidance only. Breaking any of these principles shall not be grounds for revision of an existing code.

When a new organisation list is identified, the creator/proposer may suggest a list code. To confirm a code, the researcher should check:
- The proposed list code is based on a clear understanding of the name of the underlying organisation list or list provider.
- Any requirements for multi-lingual codes. For example, in Canada, acronyms should always be given in both their English and French forms. This is achieved using an underscore separator in the list code. (For example, CA-CRA_ACR for the Canada Revenue Agency / Agence du revenu du Canada.)
- The code does not duplicate an existing code, or AKA value.

If a list code is later replaced, the full deprecated prefix should be recorded in the `formerCodes` field. This will allow systems to warn about deprecated prefixes and to notify users of their replacements.

Entering meta-data¶

Name / Title¶ Provide the name of the identifier list, or, where the list is the primary list maintained by an organisation, the name of the organisation that maintains the list. For example, we use 'Companies House' to identify the owner of the UK Company Register. You can enter the name in English, and separately in a local language.

URL¶ Provide the root URL where information on the list can be found. This does not need to link directly to an interface to search the list.

Description¶ The description should focus on explaining the way in which organisations end up on the list. If a list covers multiple kinds of organisation, the description may highlight this, using text from the list's own website, or summarised from other relevant documentation. Descriptions may include Markdown. When including citations, use Markdown footnote notation:

This is some text which requires a citation at the end [1].

[1]: This is the footnote text.
Hint - Example Description: AU-ABN
"The Australian Business Number (ABN) enables businesses in Australia to deal with a range of government departments and agencies using a single identification number. The ABN is a public number which does not replace an organisations tax file number." "ABN registration details become part of the Australian Business Register (ABR)."
Each ABN should equate to a single 'business structure', although that structure may be used to carry out a range of business activities. A range of kinds of entity are issued ABNs, including individuals, corporations, partnerships, unincorporated associations, trusts and superannuation funds. Entities must be carrying on a business in, or in connection with, Australia to receive an ABN.

Geographic coverage¶ Enter each of the jurisdictions this identifier list covers. If the list is global, use one of XI (International), XM (Multilateral) or ZZ (Publisher created). If the list is regional, enter all the countries that the region covers.

Sub-national coverage¶ If this list only covers one or more sub-national territories, select these. (If the schema does not include the required ISO 3166-2 Subdivision Assigned Codes, open a GitHub issue to request that these are added.)

Legal structure¶ Select all the legal structures which this list covers. Note that legal structures are organised hierarchically in the dataset. So, for example, 'Sole Trader' is a kind of company. This is shown in the lookup list under the 'Parent' field. Please consult the research lead if you feel you need to add an extra category to legal structures. If the list is not specific to a particular kind of legal structure, leave this field blank.

Hint - Example: GB-COH
UK Companies House registers a number of different kinds of company, including:
- Public limited company (PLC)
- Private company limited by shares (Ltd, Limited)
- Private company limited by guarantee, typically a non-commercial membership body such as a charity
- Private unlimited company (either with or without a share capital)
- Limited liability partnership (LLP)
- Limited partnership (LP)
- Societas Europaea (SE): European Union-wide company structure
- Companies incorporated by Royal Charter (RC)
- Community interest company (CIC)
It is listed against the following specific company types: Partnership, Limited Company, Listed Company, Community Interest Company, and Charity. However, wider research tells us that whilst all Limited Companies, Listed Companies and CICs should have a registration in Companies House, not all charities will have a Companies House number.

Sector¶ If this list is specific to a particular sector, you can declare that here. If the list is not specific to a particular sector, leave this field blank.

Hint - Example: GB-UKPRN
The UK Register of Learning Providers covers only education institutes, so has 'Education' set in the sector field.

List type¶ This is one of the most important fields in the dataset. You will need to determine if this list is a primary identifier list or whether it has secondary, third-party or local status. Definitions of each category are provided above. Drawing on your research into how identifiers are created, and looking at a range of example entries in the list, make your determination. You can use the comments feature in AirTable to provide supporting reasons if you require. The following rule-of-thumb criteria may be useful.
Available online, and online availability details¶ Indicate whether this list is available online in any form, including only a partial search. Provide the URL that users should visit to access this list and a description of how to find identifiers.

How to locate identifiers¶ If users need to follow particular steps in order to carry out an identifier search, detail those here. This might include:
- Guidance on how to find search features on a complex website;
- Information on charged access to identifiers if no freely available online access is provided;
- Information on how to spot the actual identifier, and how to copy it for re-use;
- Information on formatting the identifiers.

Hint - Example: AU-ABN
It is possible to search for identifiers on the ABR website. The Australian Business Number (ABN) is a unique 11 digit identifier issued to all entities registered in the Australian Business Register (ABR). The 11 digit ABN is structured as a 9 digit identifier with two leading check digits. The identifiers are displayed on the website with spaces in the number. All the spaces should be removed when making use of the number within an identifier.

Access to data & Data access details¶ Check for bulk downloads, and API access to the data, and indicate if these are available. For official registers, check on the national data portal as well as the list website itself. Take note of whether the available data appears to be regularly updated, or is only a one-off data dump. Write brief notes on how the data can be accessed. Confirm the license information for the data.

Data features¶ Select all the features that apply to the information available through the list's website, or in APIs or bulk data products. The goal here is to be aware of all the possible additional available information that could be explored to disambiguate organisations, whether that is available as structured data or not.

Openly licensed and license details¶ Look for a license for the contents of the list. Indicate whether or not an open license can be found, and provide the name of the license (if common) or a short description of the license if it is not a common license.

In OpenCorporates?¶ If OpenCorporates has data for this list, include a link to the OpenCorporates page here.

Languages supported¶ Using two-letter ISO language codes, indicate which languages this list is available in.
http://docs.org-id.guide/en/latest/metadata/
2020-02-17T01:10:42
CC-MAIN-2020-10
1581875141460.64
[]
docs.org-id.guide
Using “Server” API¶ The “server” API contains modules supporting VO Cone Search's server-side operations, particularly to validate external Cone Search services for Simple Cone Search. A typical user should not need the validator. However, it could be used by VO service providers to validate their services. Currently, any service to be validated has to be registered in the STScI VAO Registry.

Validation for Simple Cone Search¶ astroquery.vo_conesearch.validator.validate validates VO services. Currently, only Cone Search validation is done using check_conesearch_sites(), which utilizes the underlying astropy.io.votable.validator library. A master list of all available Cone Search services is obtained from astroquery.vo_conesearch.validator.conf.conesearch_master_list, which is a URL query to the STScI VAO Registry by default. However, by default, only the ones in astroquery.vo_conesearch.validator.conf.conesearch_urls are validated (also see Default Cone Search Services), while the rest are skipped. There are also options to validate a user-defined list of services or all of them.

All Cone Search queries are done using the RA, Dec, and SR given by the testQuery fields in the registry, and maximum verbosity. In the uncommon case where testQuery is not defined for a service, a default search of RA=0&DEC=0&SR=0.1 is used.

The results are separated into the 4 groups below. Each group is stored as a JSON file of VOSDatabase:

conesearch_good.json - Passed validation without critical warnings and exceptions. This database, residing in astroquery.vo_conesearch.conf.vos_baseurl, is the one used by Simple Cone Search by default.
conesearch_warn.json - Has critical warnings but no exceptions. Users can manually set astroquery.vo_conesearch.conf.conesearch_dbname to use this at their own risk.
conesearch_exception.json - Has some exceptions. Never use this. For informational purposes only.
conesearch_error.json - Has network connection errors. Never use this. For informational purposes only.

HTML pages summarizing the validation results are stored in the 'results' sub-directory, which also contains downloaded XML files from individual Cone Search queries.

Warnings and Exceptions¶ A subset of astropy.io.votable.exceptions that is considered non-critical is defined by astroquery.vo_conesearch.validator.conf.noncritical_warnings, and these will not be flagged as bad by the validator. However, this does not change the behavior of astroquery.vo_conesearch.conf.pedantic, which still needs to be set to False for them not to be thrown out by conesearch(). Despite being listed as non-critical, the user is responsible for checking whether the results are reliable; they should not be used blindly.

Some units recognized by VizieR are considered invalid by Cone Search standards. As a result, they will give the warning 'W50', which is non-critical by default.

Users can also modify astroquery.vo_conesearch.validator.conf.noncritical_warnings to include or exclude any warnings or exceptions, as desired. However, this should be done with caution. Adding exceptions to the non-critical list is not recommended.

Building the Database from Registry¶ Each Cone Search service is a VOSCatalog in a VOSDatabase (see Catalog Manipulation and Database Manipulation). In the master registry, there are duplicate catalog titles with different access URLs, duplicate access URLs with different titles, duplicate catalogs with slightly different descriptions, etc. A Cone Search service is really defined by its access URL regardless of title, description, etc.
By default, from_registry() ensures each access URL is unique across the database. However, for user-friendly catalog listing, its title will be the catalog key, not the access URL. In the case of two different access URLs sharing the same title, each URL will have its own database entry, with a sequence number appended to their titles (e.g., 'Title 1' and 'Title 2'). For consistency, even if the title does not repeat, it will still be renamed to 'Title 1'. In the case of the same access URL appearing multiple times in the registry, the validator will store the first catalog with that access URL and throw out the rest. However, it will keep count of the number of duplicates thrown out in the 'duplicatesIgnored' dictionary key of the catalog kept in the database.

All the existing catalog tags will be copied over as dictionary keys, except 'access_url', which is renamed to 'url' for simplicity. In addition, new keys from validation are added:
validate_expected - Expected validation result category, e.g., "good".
validate_network_error - Indication of a connection error.
validate_nexceptions - Number of exceptions found.
validate_nwarnings - Number of warnings found.
validate_out_db_name - Cone Search database name this entry belongs to.
validate_version - Version of the validation software.
validate_warning_types - List of warning codes.
validate_warnings - Descriptions of the warnings.
validate_xmllint - Indication of whether xmllint passed.
validate_xmllint_content - Output from xmllint.

Configurable Items¶ These parameters are set via the Configuration System (astropy.config):
astroquery.vo_conesearch.validator.conf.conesearch_master_list - VO registry query URL that should return a VO table with all the desired VO services.
astroquery.vo_conesearch.validator.conf.conesearch_urls - Subset of Cone Search access URLs to validate.
astroquery.vo_conesearch.validator.conf.noncritical_warnings - List of VO table parser warning codes that are considered non-critical.
Also depends on properties in Simple Cone Search Configurable Items.

Examples¶ Validate the default Cone Search sites with multiprocessing and write the results in the current directory. Reading the master registry can be slow, so the default timeout is internally set to 60 seconds for it. In addition, all VO table warnings from the registry are suppressed because we are not trying to validate the registry itself but the services it contains:

>>> from astroquery.vo_conesearch.validator import validate
>>> validate.check_conesearch_sites()
Downloading...
|==========================================| 44M/ 44M (100.00%) 0s
INFO: Only 18/17832 site(s) are validated [...]
# ...
WARNING: 2 not found in registry! Skipped: # ...
INFO: good: 13 catalog(s) [astroquery.vo_conesearch.validator.validate]
INFO: warn: 2 catalog(s) [astroquery.vo_conesearch.validator.validate]
INFO: excp: 0 catalog(s) [astroquery.vo_conesearch.validator.validate]
INFO: nerr: 2 catalog(s) [astroquery.vo_conesearch.validator.validate]
INFO: total: 17 out of 19 catalog(s) [...]
INFO: check_conesearch_sites took 16.862793922424316 s on AVERAGE...
(16.862793922424316, None)

Validate only Cone Search access URLs hosted by 'stsci.edu' without verbose outputs (except warnings that are controlled by warnings) or multiprocessing, and write the results in a 'subset' sub-directory instead of the current directory. For this example, we use registry_db from the VO database examples:

>>> urls = registry_db.list_catalogs_by_url(pattern='stsci.edu')
>>> urls
['?', '?', .., '']
>>> validate.check_conesearch_sites(
...     destdir='./subset', verbose=False, parallel=False, url_list=urls)
# ...
INFO: check_conesearch_sites took 64.51968932151794 s on AVERAGE...
(64.51968932151794, None)

Add 'W24' from astropy.io.votable.exceptions to the list of non-critical warnings to be ignored and re-run the default validation. This is not recommended unless you know exactly what you are doing:

>>> from astroquery.vo_conesearch.validator import conf as validator_conf
>>> new_warns = validator_conf.noncritical_warnings + ['W24']
>>> with validator_conf.set_temp('noncritical_warnings', new_warns):
...     validate.check_conesearch_sites()

Validate all Cone Search services in the master registry (this will take a while) and write the results in an 'all' sub-directory:

>>> validate.check_conesearch_sites(destdir='./all', url_list=None)

To look at the HTML pages of the validation results in the current directory using the Firefox browser (the images shown are from the STScI server, but your own results should look similar):

firefox results/index.html

When you click on 'All tests' from the page above, you will see all the Cone Search services validated with a summary of validation results. When you click on any of the listed URLs from above, you will see detailed validation warnings and exceptions for the selected URL. When you click on the URL at the top of the page above, you will see the actual VO Table returned by the Cone Search query.

Inspection of Validation Results¶ astroquery.vo_conesearch.validator.inspect inspects results from Validation for Simple Cone Search. It reads in JSON files of VOSDatabase residing in astroquery.vo_conesearch.conf.vos_baseurl, which can be changed to point to a different location.

Configurable Items¶ This parameter is set via the Configuration System (astropy.config): astroquery.vo_conesearch.conf.vos_baseurl

Examples¶
>>> from astroquery.vo_conesearch.validator import inspect

Load Cone Search validation results from astroquery.vo_conesearch.conf.vos_baseurl (by default, the one used by Simple Cone Search):

>>> r = inspect.ConeSearchResults()
Downloading ...
Downloading ...
Downloading ...
Downloading ...

Print the tally. In this example, there are 16 Cone Search services that passed validation with non-critical warnings, 2 with critical warnings, 0 with exceptions, and 0 with network errors:

>>> r.tally()
good: 16 catalog(s)
warn: 2 catalog(s)
exception: 0 catalog(s)
error: 0 catalog(s)
total: 18 catalog(s)

Print a list of good Cone Search catalogs, each with title, access URL, warning codes collected, and individual warnings:

>>> r.list_cats('good')
Guide Star Catalog v2 1
W48,W50
.../vo.xml:136:0: W50: Invalid unit string 'pixel'
.../vo.xml:155:0: W48: Unknown attribute 'nrows' on TABLEDATA
# ...
USNO-A2 Catalogue 1
W17,W21,W42
.../vo.xml:4:0: W21: vo.table is designed for VOTable version 1.1 and 1.2...
.../vo.xml:4:0: W42: No XML namespace specified
.../vo.xml:15:15: W17: VOTABLE element contains more than one DESCRIPTION...

List Cone Search catalogs with warnings, excluding warnings that were ignored in astroquery.vo_conesearch.validator.conf.noncritical_warnings, and write the output to a file named 'warn_cats.txt' in the current directory. This is useful to see why the services failed validation:

>>> with open('warn_cats.txt', 'w') as fout:
...     r.list_cats('warn', fout=fout, ignore_noncrit=True)

List the titles of all good Cone Search catalogs:

>>> r.catkeys['good']
['Guide Star Catalog v2 1', 'SDSS DR8 - Sloan Digital Sky Survey Data Release 8 1', ..., 'USNO-A2 Catalogue 1']

Print the details of the catalog titled 'USNO-A2 Catalogue 1':

>>> r.print_cat('USNO-A2 Catalogue 1')
{
# ...
"cap_type": "conesearch",
"content_level": "research",
# ...
"waveband": "optical",
"wsdl_url": ""
}
Found in good

Load Cone Search validation results from a local directory named 'subset'. This is useful if you ran your own Validation for Simple Cone Search and wish to inspect the output databases. This example reads in the validation of STScI Cone Search services done in the Validation for Simple Cone Search Examples:

>>> from astroquery.vo_conesearch import conf
>>> with conf.set_temp('vos_baseurl', './subset/'):
...     r = inspect.ConeSearchResults()
...     r.tally()
good: 11 catalog(s)
warn: 3 catalog(s)
exception: 15 catalog(s)
error: 0 catalog(s)
total: 29 catalog(s)
>>> r.catkeys['good']
[u'Berkeley Extreme and Far-UV Spectrometer 1', u'Copernicus Satellite 1', ..., u'Wisconsin Ultraviolet Photo-Polarimeter Experiment 1']
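As a closing pointer, the 'good' database produced by the validator is what Simple Cone Search consumes by default. The snippet below is only a minimal sketch of that downstream use, not part of the validator module itself; the target name and the 0.1 degree radius are arbitrary example values:

>>> from astropy import units as u
>>> from astropy.coordinates import SkyCoord
>>> from astroquery.vo_conesearch import conesearch
>>> c = SkyCoord.from_name('M31')  # resolve an example target name
>>> result = conesearch.conesearch(c, 0.1 * u.degree)  # searches catalogs from conesearch_good.json by default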
https://astroquery.readthedocs.io/en/latest/vo_conesearch/validator.html
2020-02-17T01:05:12
CC-MAIN-2020-10
1581875141460.64
[array(['../_images/validator_html_1.png', 'Main HTML page of validation results'], dtype=object) array(['../_images/validator_html_2.png', 'All tests HTML page'], dtype=object) array(['../_images/validator_html_3.png', 'Detailed validation warnings HTML page'], dtype=object) array(['../_images/validator_html_4.png', 'VOTABLE XML page'], dtype=object) ]
astroquery.readthedocs.io
Enable the MRS Proxy endpoint for remote moves

The Mailbox Replication service (MRS) has a proxy endpoint that's required for cross-forest mailbox moves and remote move migrations between your on-premises Exchange organization and Office 365. You enable the MRS proxy endpoint in the Exchange Web Services (EWS) virtual directory settings in the Client Access (frontend) services on Exchange 2016 or Exchange 2019 Mailbox servers.

Where you enable the MRS Proxy endpoint depends on the type and direction of the mailbox move:

Cross-forest enterprise moves: For cross-forest moves that are initiated from the target forest (known as a pull move type), you need to enable the MRS Proxy endpoint on Mailbox servers in the source forest. For cross-forest moves that are initiated from the source forest (known as a push move type), you need to enable the MRS Proxy endpoint on Mailbox servers in the target forest.

Remote move migrations between an on-premises Exchange organization and Office 365: For both onboarding and offboarding remote move migrations, you need to enable the MRS Proxy endpoint on Mailbox servers in your on-premises Exchange organization.

Note: If you use the Exchange admin center (EAC) to move mailboxes, cross-forest moves and onboarding remote move migrations are pull move types, because you initiate the request from the target environment. Offboarding remote move migrations are push move types, because you initiate the request from the source environment.

If you have multiple Mailbox servers in your Exchange organization, you should enable the MRS Proxy endpoint in the Client Access services on each Mailbox server. If you add additional Mailbox servers, be sure to enable the MRS Proxy endpoint on the new servers. Cross-forest moves and remote move migrations can fail if the MRS Proxy endpoint isn't enabled on all Mailbox servers. If you don't perform cross-forest moves or remote move migrations, keep MRS Proxy endpoints disabled in the Client Access services on Mailbox servers to reduce the attack surface of your organization.

For information about keyboard shortcuts that may apply to the procedures in this topic, see Keyboard shortcuts in the Exchange admin center.

Tip: Having problems? Ask for help in the Exchange forums. Visit the forums at: Exchange Server, Exchange Online, or Exchange Online Protection.

Use the EAC to enable the MRS Proxy endpoint
In the EAC, go to Servers > Virtual Directories. Select the EWS virtual directory that you want to configure. You can use the Select server drop-down list to filter the Exchange servers by name. To only display EWS virtual directories, select EWS in the Select type drop-down list. After you've selected the EWS virtual directory that you want to configure, click Edit. On the properties page that opens, on the General tab, select the Enable MRS Proxy endpoint check box, and then click Save.

Use the Exchange Management Shell to enable the MRS Proxy endpoint
To enable the MRS Proxy endpoint, use this syntax:

Set-WebServicesVirtualDirectory -Identity "[<Server>\]EWS (Default Web Site)" -MRSProxyEnabled $true

This example enables the MRS Proxy endpoint in Client Access services on the Mailbox server named EXCH-SRV-01.

Set-WebServicesVirtualDirectory -Identity "EXCH-SRV-01\EWS (Default Web Site)" -MRSProxyEnabled $true

This example enables the MRS Proxy endpoint in Client Access services on all Mailbox servers in your Exchange organization.
Get-WebServicesVirtualDirectory | Set-WebServicesVirtualDirectory -MRSProxyEnabled $true

For detailed syntax and parameter information, see Set-WebServicesVirtualDirectory.

How do you know this worked?
To verify that you've successfully enabled the MRS Proxy endpoint, do any of these steps:

In the EAC, go to Servers > Virtual Directories > select the EWS virtual directory, and verify in the details pane that the MRS Proxy endpoint is enabled.

Run this command in the Exchange Management Shell, and verify that the MRSProxyEnabled property for the EWS virtual directory has the value True:

Get-WebServicesVirtualDirectory | Format-Table -Auto Identity,MRSProxyEnabled

Use the Test-MigrationServerAvailability cmdlet in the Exchange Management Shell to test communication with the remote servers that host the mailboxes that you want to move (or the servers in your on-premises Exchange organization for offboarding remote move migrations from Office 365). Replace <EmailAddress> with the email address of one of the mailboxes that you want to move, and run this command in the Exchange Management Shell:

Test-MigrationServerAvailability -ExchangeRemoteMove -Autodiscover -EmailAddress <EmailAddress> -Credentials (Get-Credential)

To run this command successfully, the MRS Proxy endpoint must be enabled. For detailed syntax and parameter information, see Test-MigrationServerAvailability.
https://docs.microsoft.com/en-us/Exchange/architecture/mailbox-servers/mrs-proxy-endpoint?view=exchserver-2016
2020-02-17T01:42:26
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Microsoft Defender ATP production deployment

Applies to:

Proper planning is the foundation of a successful deployment. In this deployment scenario, you'll be guided through the steps on:
- Tenant configuration
- Network configuration
- Onboarding using System Center Configuration Manager
- Endpoint detection and response
- Next generation protection
- Attack surface reduction

Note: For the purpose of guiding you through a typical deployment, this scenario will only cover the use of System Center Configuration Manager. Microsoft Defender ATP supports the use of other onboarding tools, but those scenarios are not covered in this deployment guide. For more information, see Onboard machines to Microsoft Defender ATP.

Tenant Configuration
When accessing Microsoft Defender Security Center for the first time, a setup wizard will guide you through some initial steps. At the end of the setup wizard, a dedicated cloud instance of Microsoft Defender ATP is created. The easiest method is to perform these steps from a Windows 10 client machine.

From a web browser, navigate to the Microsoft Defender Security Center. If going through a TRIAL license, go to the link (). Once the authorization step is completed, the Welcome screen will be displayed. Go through the authorization steps.

Set up preferences.
Data storage location - It's important to set this up correctly. Determine where the customer wants to be primarily hosted: US, EU or UK. You cannot change the location after this setup, and Microsoft will not transfer the data from the specified geolocation.
Data retention - The default is 6 months.
Enable preview features - The default is on; this can be changed later.
Select Next. Select Continue.

Network configuration
If the organization does not require the endpoints to use a proxy to access the Internet, skip this section.

The Microsoft Defender ATP sensor requires Microsoft Windows HTTP (WinHTTP) to report sensor data and communicate with the Microsoft Defender ATP service. The embedded Microsoft Defender ATP sensor runs in the system context using the LocalSystem account. The sensor uses Microsoft Windows HTTP Services (WinHTTP) to enable communication with the Microsoft Defender ATP cloud service. The WinHTTP configuration setting is independent of the Windows Internet (WinINet) internet browsing proxy settings and can only discover a proxy server by using the following auto-discovery methods:
- Transparent proxy
- Web Proxy Auto-discovery Protocol (WPAD)

If a transparent proxy or WPAD has been implemented in the network topology, there is no need for special configuration settings. For more information on Microsoft Defender ATP URL exclusions in the proxy, see the Appendix section in this document for the URL whitelisting, or see Microsoft Docs.

Disable Authenticated Proxy usage
Open the Group Policy Management Console. Create a policy or edit an existing policy based on your organizational practices. Edit the Group Policy and navigate to Administrative Templates > Windows Components > Data Collection and Preview Builds > Configure Authenticated Proxy usage for the Connected User Experience and Telemetry Service. Select Enabled. Select Disable Authenticated Proxy usage. Navigate to Administrative Templates > Windows Components > Data Collection and Preview Builds > Configure connected user experiences and telemetry. Select Enabled. Enter the Proxy Server Name.
The policy sets two registry values, TelemetryProxyServer as REG_SZ and DisableEnterpriseAuthProxy as REG_DWORD, under the registry key HKLM\Software\Policies\Microsoft\Windows\DataCollection. The registry value TelemetryProxyServer takes the following string format: <server name or ip>:<port> For example: 10.0.0

Proxy Configuration for down-level machines
Down-level machines include Windows 7 SP1 and Windows 8.1 workstations as well as Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, and versions of Windows Server 2016 prior to Windows Server CB 1803. These operating systems will have the proxy configured as part of the Microsoft Management Agent to handle communication from the endpoint to Azure. Refer to the Microsoft Management Agent Fast Deployment Guide for information on how a proxy is configured on these machines.

Proxy Service URLs:
Note: As a cloud-based solution, the IP range can change. It's recommended you move to a DNS resolving setting.

Onboarding using System Center Configuration Manager
Collection creation
To onboard Windows 10 devices with System Center Configuration Manager, the deployment can target either an existing collection, or a new collection can be created for testing. Onboarding, like the group policy or manual method, does not install any agent on the system. Within the Configuration Manager console, the onboarding process will be configured as part of the compliance settings. Any system that receives this required configuration will maintain that configuration for as long as the Configuration Manager client continues to receive this policy from the management point. Follow the steps below to onboard systems with Configuration Manager.

In the System Center Configuration Manager console, navigate to Assets and Compliance > Overview > Device Collections. Right-click Device Collection and select Create Device Collection. Provide a Name and Limiting Collection, then select Next. Select Add Rule and choose Query Rule. Click Next on the Direct Membership Wizard and click on Edit Query Statement. Select Criteria and then choose the star icon. Keep the criterion type as simple value, choose where as Operating System - build number, operator as is equal to and value 10240, and click on OK. Select Next and Close. Select Next. After completing this task, you now have a device collection with all the Windows 10 endpoints in the environment.

Endpoint detection and response
Windows 10
From within the Microsoft Defender Security Center it is possible to download the '.onboarding' policy that can be used to create the policy in System Center Configuration Manager and deploy that policy to Windows 10 devices.

From the Microsoft Defender Security Center portal, select Settings and then Onboarding. Under Deployment method, select the supported version of System Center Configuration Manager. Select Download package. Save the package to an accessible location. In System Center Configuration Manager, navigate to: Assets and Compliance > Overview > Endpoint Protection > Microsoft Defender ATP Policies. Right-click Microsoft Defender ATP Policies and select Create Microsoft Defender ATP Policy. Enter the name and description, verify Onboarding is selected, then select Next. Click Browse. Navigate to the location of the downloaded file from step 4 above. Click Next. Configure the Agent with the appropriate samples (None or All file types). Select the appropriate telemetry (Normal or Expedited), then click Next. Verify the configuration, then click Next.
Click Close when the Wizard completes. In the System Center Configuration Manager console, right-click the Microsoft Defender ATP policy you just created and select Deploy. On the right panel, select the previously created collection and click OK.

Previous versions of Windows Client (Windows 7 and Windows 8.1)
Follow the steps below to identify the Microsoft Defender ATP Workspace ID and Workspace Key, which will be required for onboarding previous versions of Windows.

From the Microsoft Defender Security Center portal, select Settings > Onboarding. Under operating system, choose Windows 7 SP1 and 8.1. Copy the Workspace ID and Workspace Key and save them. They will be used later in the process.

Before the systems can be onboarded into the workspace, the deployment scripts need to be updated to contain the correct information. Failure to do so will result in the systems not being properly onboarded. Depending on the deployment method, this step may have already been completed. Edit InstallMMA.cmd with a text editor, such as Notepad, update the following lines, and save the file. Edit ConfigureOMSAgent.vbs with a text editor, such as Notepad, update the following lines, and save the file.

Microsoft Monitoring Agent (MMA) is currently (as of January 2019) supported on the following Windows operating systems:
Server SKUs: Windows Server 2008 SP1 or newer
Client SKUs: Windows 7 SP1 and later

The MMA agent will need to be installed on Windows devices. To install the agent, some systems will need to download the Update for customer experience and diagnostic telemetry in order to collect the data with MMA. These system versions include but may not be limited to:
- Windows 8.1
- Windows 7
- Windows Server 2016
- Windows Server 2012 R2
- Windows Server 2008 R2

Specifically, for Windows 7 SP1, the following patches must be installed: Install either .NET Framework 4.5 (or later) or KB3154518. Do not install both on the same system.

To deploy the MMA with System Center Configuration Manager, follow the steps below to utilize the provided batch files to onboard the systems. When executed, the CMD file has the system copy files from a network share as the SYSTEM account; the system then installs MMA, installs the Dependency Agent, and configures MMA for enrollment into the workspace.

In the System Center Configuration Manager console, navigate to Software Library. Expand Application Management. Right-click Packages, then select Create Package. Provide a Name for the package, then click Next. Verify Standard Program is selected. Click Next. Enter a program name. Browse to the location of InstallMMA.cmd. Set Run to Hidden. Set Program can run to Whether or not a user is logged on. Click Next. Set the Maximum allowed run time to 720. Click Next. Verify the configuration, then click Next. Click Next. Click Close. In the System Center Configuration Manager console, right-click the Microsoft Defender ATP Onboarding Package just created and select Deploy. On the right panel, select the appropriate collection. Click OK.

Next generation protection
Microsoft Defender Antivirus is a built-in antimalware solution that provides next generation protection for desktops, portable computers, and servers. In the System Center Configuration Manager console, navigate to Assets and Compliance > Overview > Endpoint Protection > Antimalware Policies and choose Create Antimalware Policy.
Select Scheduled scans, Scan settings, Default actions, Real-time protection, Exclusion settings, Advanced, Threat overrides, Cloud Protection Service and Security intelligence updates, and choose OK. Certain industries and some select enterprise customers might have specific needs for how antivirus is configured, for example quick scan versus full scan and custom scan. For more details, see the Windows Security configuration framework. Right-click on the newly created antimalware policy and select Deploy. Target the new antimalware policy to your Windows 10 collection and click OK. After completing this task, you now have successfully configured Windows Defender Antivirus.

Attack Surface Reduction
The attack surface reduction pillar of Microsoft Defender ATP includes the feature set that is available under Exploit Guard: attack surface reduction (ASR) rules, Controlled Folder Access, Network Protection and Exploit Protection. All these features provide an audit mode and a block mode. In audit mode there is no end-user impact; the feature only collects additional telemetry and makes it available in the Microsoft Defender Security Center. The goal with a deployment is to move security controls into block mode step by step.

To set ASR rules in Audit mode:
In the System Center Configuration Manager console, navigate to Assets and Compliance > Overview > Endpoint Protection > Windows Defender Exploit Guard and choose Create Exploit Guard Policy. Select Attack Surface Reduction. Set rules to Audit and click Next. Confirm the new Exploit Guard policy by clicking on Next. Once the policy is created, click Close. Right-click on the newly created policy and choose Deploy. Target the policy to the newly created Windows 10 collection and click OK. After completing this task, you now have successfully configured ASR rules in audit mode.

Below are additional steps to verify whether ASR rules are correctly applied to endpoints. (This may take a few minutes.) From a web browser, navigate to the Microsoft Defender Security Center. Select Configuration management from the left side menu. Click Go to attack surface management in the Attack surface management panel. Click the Configuration tab in the Attack surface reduction rules reports. It shows an ASR rules configuration overview and the ASR rules status on each device. Clicking on each device shows the configuration details of the ASR rules. See Optimize ASR rule deployment and detections for more details.

To set Network Protection rules in Audit mode:
In the System Center Configuration Manager console, navigate to Assets and Compliance > Overview > Endpoint Protection > Windows Defender Exploit Guard and choose Create Exploit Guard Policy. Select Network protection. Set the setting to Audit and click Next. Confirm the new Exploit Guard Policy by clicking Next. Once the policy is created, click on Close. Right-click on the newly created policy and choose Deploy. Target the policy to the newly created Windows 10 collection and choose OK. After completing this task, you now have successfully configured Network Protection in audit mode.

To set Controlled Folder Access rules in Audit mode:
In the System Center Configuration Manager console, navigate to Assets and Compliance > Overview > Endpoint Protection > Windows Defender Exploit Guard and choose Create Exploit Guard Policy. Select Controlled folder access. Set the configuration to Audit and click Next. Confirm the new Exploit Guard Policy by clicking on Next. Once the policy is created, click on Close. Right-click on the newly created policy and choose Deploy.
Target the policy to the newly created Windows 10 collection and click OK. After completing this task, you now have successfully configured Controlled folder access in audit mode.
https://docs.microsoft.com/en-us/windows/security/threat-protection/microsoft-defender-atp/production-deployment
2020-02-17T01:23:39
CC-MAIN-2020-10
1581875141460.64
[array(['images/a22081b675da83e8f62a046ae6922b0d.png', 'Image of onboarding'], dtype=object) array(['images/09833d16df7f37eda97ea1d5009b651a.png', 'Image of onboarding'], dtype=object) array(['images/262a41839704d6da2bbd72ed6b4a826a.png', 'Image of System Center Configuration Manager console'], dtype=object) ]
docs.microsoft.com
2. Supplier Tutorial

2.1 User interface
The login page and the login information (username and password) will be sent to you by your client. Once you are logged in, a new window opens: your dashboard. The dashboard will tell you at a glance if you have any task waiting or if there is an active discussion. You can't create a task or a discussion; the client is the only one who can do it.

2.2 The Tasks
To select a task, just press it. A window will open. To access:
- the Supplier assistant: click on the 1st icon,
- the Raw material assistant: click on the 2nd icon.

How to change a task
Supplier's Assistant
You can fill in or change the requested information. For each step, click "Next"; to stop or to finish, click "Finish". You can change the data at any time.

Raw material creation assistant
You can fill in the information for the specific raw material, completing the fields as required. Each step must be validated by clicking "Next". Validate each step before pressing "Finish". It is possible to stop and to resume the data entry whenever you wish.

Finish the Task
Before ending the task, you can write a comment for your client (missing ingredient, physico-chemical results…). Once all tasks have been completed, the task list will be empty.

End the task
Supplier's information
If you click on "CANCEL", your progress will be saved. Only the information on the last window will be lost.
To complete the task "Création de la matière première":

Task sent back from a client
The client may have comments about the information you registered. The task will appear again on the supplier's dashboard. The client will have validated some information; this information is now locked and no longer editable. An icon (see below) indicates a comment written by the client. It is possible to read it and to answer.

2.5 Discussion
To facilitate the exchange of information, the client can create a discussion. A discussion enables the site's members to discuss all subjects at the same time and in the same space. Only the subjects related to your products will be visible on your dashboard. To answer and to consult a discussion, you only need to click on the subject and click "Answer". It will be possible to change your answer by clicking on "modifier" (edit).

How to answer: click on the discussion you are interested in:
http://docs.becpg.fr/en/utilization/supplier-portal-supplier-tuto.html
2020-02-17T00:17:17
CC-MAIN-2020-10
1581875141460.64
[array(['images/27_supplier-portal-supplier-tuto-1.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-2.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-3.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-4.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-5.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-6.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-7.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-8.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-9.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-10.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-11.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-12.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-13.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-14.png', None], dtype=object) array(['images/27_supplier-portal-supplier-tuto-15.png', None], dtype=object) ]
docs.becpg.fr
To purchase a VPS on Aruba, first visit their main page. Click on the banner which says "Cloud VPS, from 2.79€ / month": On the next page, scroll to the bottom, and choose the cheapest "€2.79+VAT/month" plan by clicking the green "Start now" button: You will be taken to a page where you choose the amount you wish to pay. Click "Continue" after entering the desired amount. You'll be taken to a page where you fill in your personal details. Fill in your personal details and click "CONFIRM" at the bottom of the page when you're done. Afterwards you will be taken to the Order summary page where you will be able to see the final amount with VAT added: Tick both boxes and press "Continue" to go to the Payment page: Choose your preferred method of payment and pay. You'll be able to continue to the customer area afterwards:

Click on the CLOUD tab to go to the section where you will create and configure your VPS server: NOTE: the text in the red box "AWI-XXXXXX" (XXXXXX is a sequence of numbers) is your username for the Control panel where you will set up your VPS. The password for the Control panel is sent to you in a text message to your phone. Click the "GO TO THE CONTROL PANEL" button and you will be taken to the Control panel login page: Enter your login details for the Control panel and click "SIGN IN". You will be taken to a page where you can select the region of your VPS. Pick a region from the marked row and click on it:

Now click on the "CREATE NEW SERVER" button to start configuring your VPS server: You will be taken to the server configuration page. First, choose the SMART option. Then enter your desired VPS name, and finally, click on "CHOOSE TEMPLATE". After you click on the "CHOOSE TEMPLATE" button you will see this window with a list of operating systems. Scroll down in the left part of the window until you see Ubuntu Server 16.04 LTS 64bit and click it. The right part of the window will now show the details of your chosen operating system: NOTE: Check that the icon pointed out on the picture above is green. This indicates IPv6 compatibility, which you NEED to run multiple masternodes. Click "CHOOSE THIS TEMPLATE" to finalize your choice of operating system. You will be returned to the server configuration page.

Now that you are back on the page, tick the box next to the "Configure also a public IPv6 address" text. This is very important, as it will make configuring the rest of the IPv6 addresses needed much simpler. Continue by entering your chosen server password into the fields marked below. DO NOT FORGET YOUR PASSWORD. SAVE IT SOMEWHERE BECAUSE YOU WON'T BE ABLE TO LOG INTO YOUR SERVER WITHOUT IT. Also note that the username for your server is "root" (without the quotes). Now choose the "Small" hardware configuration. Finally, click on "CREATE CLOUD SERVER" to finish the setup and create your VPS: You will be taken to your server overview. Here you can see the VPS server you created: The most important piece of information here is your server IP. Save your server IP because you will need it to access your server! Congratulations! You now have your own VPS on the Aruba hosting platform.
https://docs.fix.network/english/fix-masternodes/how-to-setup-vps-from-aruba
2020-02-17T02:23:27
CC-MAIN-2020-10
1581875141460.64
[]
docs.fix.network
Package org.opencv.tracking
Class TrackerKCF
- java.lang.Object
- org.opencv.core.Algorithm
- org.opencv.tracking.Tracker
- org.opencv.tracking.TrackerKCF

public class TrackerKCF extends Tracker

The KCF (Kernelized Correlation Filter) tracker. KCF is a novel tracking framework that utilizes properties of circulant matrices to enhance the processing speed. This tracking method is an implementation of CITE: KCF_ECCV which is extended to KCF with color-names features (CITE: KCF_CN). The original paper of KCF is available at <> as well as the matlab implementation. For more information about KCF with color-names features, please refer to <>.

Method Summary
Methods inherited from class org.opencv.core.Algorithm: clear, empty, getDefaultName, getNativeObjAddr, save

Field Detail
GRAY
public static final int GRAY
- See Also: Constant Field Values
CN
public static final int CN
- See Also: Constant Field Values
CUSTOM
public static final int CUSTOM
- See Also: Constant Field Values

Method Detail
__fromPtr__
public static TrackerKCF __fromPtr__(long addr)
create
public static TrackerKCF create()
Constructor
- Returns: automatically generated
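For orientation, below is a rough Java usage sketch. Only create() and the constants are documented on this page; the init and update calls are inherited from Tracker, and their exact bounding-box type (assumed here to be Rect2d), the input video name, and the initial region are illustrative assumptions, so check the Tracker javadoc for your OpenCV build.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Rect2d;
import org.opencv.tracking.TrackerKCF;
import org.opencv.videoio.VideoCapture;

public class KcfTrackingSketch {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        // Hypothetical input video and initial bounding box.
        VideoCapture capture = new VideoCapture("input.avi");
        Mat frame = new Mat();
        capture.read(frame);

        TrackerKCF tracker = TrackerKCF.create();
        Rect2d roi = new Rect2d(100, 100, 80, 80);  // assumed region containing the target
        tracker.init(frame, roi);                   // init/update are inherited from Tracker

        while (capture.read(frame)) {
            boolean located = tracker.update(frame, roi);  // roi is updated with the new location
            if (!located) {
                break;  // target lost
            }
        }
        capture.release();
    }
}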
https://docs.opencv.org/master/javadoc/org/opencv/tracking/TrackerKCF.html
2020-02-17T02:51:45
CC-MAIN-2020-10
1581875141460.64
[]
docs.opencv.org
The Global Clock¶ The global clock is an instance of the ClockObject class. There’s a single instance of it already initialized that you can access statically. To get the time (in seconds) since the last frame was drawn: double dt = ClockObject::get_global_clock()->get_dt(); Another useful function is the frame time (in seconds, since the program started): double frame_time = ClockObject::get_global_clock()->get_frame_time(); Note This section is incomplete. It will be updated soon.
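As a quick illustration of get_dt(), the sketch below scales a per-second speed by the elapsed frame time so that movement stays frame-rate independent. The NodePath and the speed value are assumed to come from your own scene setup:

#include "clockObject.h"
#include "nodePath.h"

// Advance a node at a constant speed (units per second), regardless of
// how long the previous frame took to render.
void advance_node(NodePath &node) {
  ClockObject *clock = ClockObject::get_global_clock();
  double dt = clock->get_dt();   // seconds elapsed since the last frame
  double speed = 5.0;            // assumed speed, in units per second
  node.set_y(node, speed * dt);  // move along the node's own Y (forward) axis
}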
https://docs.panda3d.org/1.10/cpp/programming/timing/global-clock
2020-02-17T00:30:57
CC-MAIN-2020-10
1581875141460.64
[]
docs.panda3d.org
Before installing your application on a device or submitting it to Tizen Store, it must be signed with a certificate profile. The signature verifies the source of the application and makes sure it has not been tampered with since its publication. A certificate profile is a combination of the certificates used for signing.

To select the certificates used to package your application:
In the Visual Studio menu, go to Tools > Options > Tizen > Certification. Define the certificates in one of the following ways:

Using the default certificates
If you do not need to upload your application to Tizen Store, you can use a default certificate and deploy your application in the Tizen Emulator for testing purposes. To use the default certificates, uncheck the "Sign the .TPK file using the following option" checkbox.

Using an existing certificate profile
If you have used Tizen Studio before and have already generated a certificate profile using the Tizen Certificate Manager, you can import the profile by selecting Use profile of Tizen Certificate Manager from the drop-down list. If you want to create a new certificate profile, see Creating a Certificate Profile.

Using your own certificates
If you already have author and distributor certificates from another application store, you can import them by selecting Direct registration from the drop-down list and entering the required information. Click OK.

Note: It is recommended to keep your certificates and password safe in the local repo to prevent them from being compromised.

A certificate profile consists of an author certificate and 1 or 2 distributor certificates. To distribute your application, you must create a certificate profile and sign the application with it. You can create a new certificate profile with the Certificate Manager:
In the Visual Studio menu, select Tools > Tizen > Tizen Certificate Manager. In the Certificate Manager window, click + to create a new profile. The certificate profile creation wizard opens. Enter a name for the profile and click Next. Add the author and distributor certificates: Select whether to create a new author certificate or use a previously created author certificate, and click Next. Define the existing author certificate or enter the required information for a new certificate, and click Next. You can use the default Tizen distributor certificate or another distributor certificate if you have one. In general, the default Tizen distributor certificate is used and you do not need to modify the distributor certificates. You can also select the privilege level of the distributor certificate (needed if the same certificate is used for signing native and Web applications). Click Finish.

You can view, edit, and remove the certificate profiles you have created in the Certificate Manager.
Figure: Managing certificate profiles
To manage a certificate profile:
To see the details of an individual certificate within the selected certificate profile, click the info button ( ).
Figure: Certificate information
To change the author or distributor certificate of the selected certificate profile, click the pencil button ( ).
Figure: Changing the certificate
To remove the selected certificate profile, click the trash button ( ).
To set the selected certificate profile as active, click the check button ( ). The active profile is used when you package your application. The active profile is also automatically set in Tools > Options > Tizen > Certification.
Figure: Removing the certificate profile or setting it active
https://docs.tizen.org/application/vstools/tools/certificate-manager
2020-02-17T01:54:01
CC-MAIN-2020-10
1581875141460.64
[]
docs.tizen.org
Gitomail¶ Gitomail is a tool for generating pretty inline-HTML emails for Git commits and sending them to the proper recipients. This page provides a short introduction to Gitomail's main features. First, some history¶ Many years before Git became popular or even existed, people were using mailing lists in order to collaborate on code changes. The unified diff format, now popularized, was used as the diff format in the plain-text-formatted emails. A text-based console email program such as mutt gave the user a convenient way to handle these diffs and import them into their source trees. Below is a fake example of such an email, based on a commit in the PostgreSQL project: (shown above: a screenshot from an old email reader) With the advent of sites like GitHub, email became under-used for reviewing changes, and in the Webmail era, emails containing diffs may appear somewhat arcane to developers of today. Diffs in the age of Webmail¶ Nowadays, the plain-text version of the email message seems outdated. This is where Gitomail comes into the picture. For example, the email from above, when sent by Gitomail, can appear like the following: Combined with full syntax highlighting, the HTML part of the email makes this appearance possible. Inline reply friendliness¶ Similarly to plain-text emails, it's possible to reply to changes inline: Branch change summaries¶ Gitomail tracks changes to branches and can describe what changed, dealing properly with fast-forwards and rebases. It's possible to specify how branches relate to each other so that summaries make sense. Fast forward example¶ Rebase example¶ Automatic recipients and code maintainership¶ Of course, it is not enough to format the emails. We would also like to designate their recipients, preferably in an automated way. Inspired by a wonderful script in the Linux kernel source tree named get_maintainers.pl, Gitomail supports its own Maintainers file format, which can specify rules to match people to certain files or directories. maintainer dan file.* maintainer jack Makefile reviewer mailinglist Using a very minimal specification language, formatted similarly to .gitignore, these Maintainers files can optionally be spread across the source tree, assigning code to maintainers. This is especially useful for a single repository with multiple code maintainers. These files are then used to automatically set the destination addresses of emails to the rightful maintainers, based on the code changed in the commit, working very similarly to get_maintainers.pl.
https://gitomail.readthedocs.io/en/stable/
2020-02-17T01:25:45
CC-MAIN-2020-10
1581875141460.64
[array(['./example2.png', 'example'], dtype=object) array(['./example5.png', 'example'], dtype=object) array(['example3.png', None], dtype=object) array(['./example4.png', 'example'], dtype=object)]
gitomail.readthedocs.io
All content with label api+cache+client+client_server+command-line+dist+documentation+gridfs+gui_demo+guide+infinispan+intro+jboss_cache+listener. Related Labels: podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, release, partitioning, query, deadlock, archetype, pojo_cache, jbossas, lock_striping, nexus, schema, amazon, s3, grid, memcached, test, jcache, xsd, ehcache, maven, wcm, youtube, userguide, write_behind, 缓存, ec2, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, fine_grained, concurrency, out_of_memory, import, index, events, configuration, hash_function, batch, buddy_replication, loader, pojo, write_through, cloud, remoting, mvcc, notification, tutorial, presentation, xml, read_committed, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, scala, installation, migration, non-blocking, jpa, filesystem, user_guide, article, eventing, shell, testng, infinispan_user_guide, standalone, snapshot, repeatable_read, hotrod, webdav, docs, consistent_hash, batching, store, whitepaper, jta, faq, as5, spring, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - api, - cache, - client, - client_server, - command-line, - dist, - documentation, - gridfs, - gui_demo, - guide, - infinispan, - intro, - jboss_cache, - listener ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/api+cache+client+client_server+command-line+dist+documentation+gridfs+gui_demo+guide+infinispan+intro+jboss_cache+listener
2020-02-17T01:38:43
CC-MAIN-2020-10
1581875141460.64
[]
docs.jboss.org
Azure Monitor libraries for .NET Overview Azure Monitor helps you track performance, maintain security, and identify trends. Learn more about Azure Monitor. Management library Install the NuGet package directly from the Visual Studio Package Manager console or with the .NET Core CLI. Visual Studio Package Manager: Install-Package Microsoft.Azure.Management.Monitor.Fluent .NET Core CLI: dotnet add package Microsoft.Azure.Management.Monitor.Fluent Samples Explore more sample .NET code you can use in your apps.
https://docs.microsoft.com/en-us/dotnet/api/overview/azure/monitor?view=azure-dotnet
2018-03-17T14:39:27
CC-MAIN-2018-13
1521257645177.12
[]
docs.microsoft.com
Importing Code¶ Populating the Database¶ Once joern has been installed, you can begin to import code into the database by simply pointing joern.jar to the directory containing the source code: java -jar $JOERN/bin/joern.jar $CodeDirectory or, if you want to ensure that the JVM has access to your heap memory, java -Xmx$SIZEg -jar $JOERN/bin/joern.jar $CodeDirectory where $SIZE is the maximum size of the Java heap in GB. As an example, you can import $JOERN/testCode. This will create a Neo4J database directory .joernIndex in your current working directory. Note that if the directory already exists and contains a Neo4J database, joern.jar will add the code to the existing database. You can thus import additional code at any time. If, however, you want to create a new database, make sure to delete .joernIndex prior to running joern.jar. Tainting Arguments (Optional)¶ Many times, an argument to a library function (e.g., the first argument to recv) is tainted by the library function. There is no way to statically determine this when the code of the library function is not available. Also, Joern does not perform inter-procedural taint analysis and therefore, by default, symbols passed to functions as arguments are considered used but not defined. To instruct Joern to consider arguments of a function to be tainted by calls to that function, you can use the tool argumentTainter. For example, by executing java -jar ./bin/argumentTainter.jar recv 0 from the Joern root directory, all first arguments to recv will be considered tainted and dependency graphs will be recalculated accordingly. Starting the Database Server¶ It is possible to access the graph database directly from your scripts by loading the database into memory on script startup. However, it is highly recommended to access data via the Neo4J server instead. The advantage of doing so is that the data is loaded only once for all scripts you may want to execute, allowing you to benefit from Neo4J's caching for increased speed. To install the Neo4J server, download version 1.9.7. Once downloaded, unpack the archive into a directory of your choice, which we will call $Neo4jDir in the following. Next, specify the location of the database created by joern in your Neo4J server configuration file in $Neo4jDir/conf/neo4j-server.properties: # neo4j-server.properties org.neo4j.server.database.location=/$path_to_index/.joernIndex/ For example, if your .joernIndex is located in /home/user/joern/.joernIndex, your configuration file should contain the line: # neo4j-server.properties org.neo4j.server.database.location=/home/user/joern/.joernIndex/ Please also make sure that org.neo4j.server.database.location is set only once. You can now start the database server by issuing the following command: $Neo4jDir/bin/neo4j console If your installation of Neo4J is more recent than the libraries bundled with joern, the database might fail to start and request an upgrade of the stored data. This upgrade can be performed on the fly by enabling allow_store_upgrade in neo4j.properties as follows: # neo4j.properties allow_store_upgrade=true The Neo4J server offers a web interface and a web-based API (REST API) to explore and query the database. Once your database server has been launched, point your browser to the server's web interface. Next, visit the URL of the reference node, which is the root node of the graph database. Starting from this node, the entire database contents can be accessed.
In particular, you can get an overview of all existing edge types as well as the properties attached to nodes and edges. Of course, in practice, you will not want to use your browser to query the database. Instead, you can use python-joern to access the REST API using Python as described in the following section.
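The REST API can also be queried from any HTTP client, not only a browser. Below is a minimal sketch, assuming the Neo4J server is listening on its default port 7474 on localhost and exposing the standard /db/data/ service root; adjust the base address if your configuration differs. For real analysis scripts, python-joern remains the recommended interface.
// Minimal sketch: fetch the Neo4J REST service root, which lists the available endpoints.
using System;
using System.Net.Http;
using System.Threading.Tasks;
class Neo4jRestCheck
{
    static async Task Main()
    {
        // Assumes the default Neo4J server address; change this to match your setup.
        using var client = new HttpClient { BaseAddress = new Uri("http://localhost:7474/") };
        string serviceRoot = await client.GetStringAsync("db/data/");
        Console.WriteLine(serviceRoot);
    }
}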
http://joern.readthedocs.io/en/latest/import.html
2018-03-17T14:44:28
CC-MAIN-2018-13
1521257645177.12
[]
joern.readthedocs.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Encrypts data on the server side with a new customer master key (CMK) without exposing the plaintext of the data on the client side. The data is first decrypted and then reencrypted. You can also use this operation to change the encryption context of a ciphertext. You can reencrypt data using CMKs in different AWS accounts. Unlike other operations, ReEncrypt is authorized twice, once as ReEncryptFrom on the source CMK and once as ReEncryptTo. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to ReEncryptAsync. Namespace: Amazon.KeyManagementService Assembly: AWSSDK.KeyManagementService.dll Version: 3.x.y.z Container for the necessary parameters to execute the ReEncrypt service method. The following example reencrypts data with the specified CMK. var response = client.ReEncrypt(new ReEncryptRequest { CiphertextBlob = new MemoryStream( ), // The data to reencrypt. DestinationKeyId = "0987dcba-09fe-87dc-65ba-ab0987654321" // The identifier of the CMK to use to reencrypt the data. You can use the key ID or Amazon Resource Name (ARN) of the CMK, or the name or ARN of an alias that refers to the CMK. }); MemoryStream ciphertextBlob = response.CiphertextBlob; // The reencrypted data. string keyId = response.KeyId; // The ARN of the CMK that was used to reencrypt the data. string sourceKeyId = response.SourceKeyId; // The ARN of the CMK that was used to originally encrypt the data. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
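On .NET Core, the same request shape is passed to ReEncryptAsync. The following is a minimal sketch; the destination key ID and the ciphertext file name are placeholders.
// Minimal sketch: reencrypt previously encrypted data under a different CMK (async form).
using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.KeyManagementService;
using Amazon.KeyManagementService.Model;
class ReEncryptExample
{
    static async Task Main()
    {
        var client = new AmazonKeyManagementServiceClient();
        var response = await client.ReEncryptAsync(new ReEncryptRequest
        {
            // Placeholder ciphertext; in practice this is the output of a previous Encrypt call.
            CiphertextBlob = new MemoryStream(File.ReadAllBytes("encrypted-data.bin")),
            // Placeholder identifier of the CMK to reencrypt the data under.
            DestinationKeyId = "0987dcba-09fe-87dc-65ba-ab0987654321"
        });
        Console.WriteLine($"Reencrypted under {response.KeyId} (source CMK: {response.SourceKeyId})");
    }
}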
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/KeyManagementService/MKeyManagementServiceReEncryptReEncryptRequest.html
2018-03-17T14:56:59
CC-MAIN-2018-13
1521257645177.12
[]
docs.aws.amazon.com
November 15, 2017 Update This release introduces the ability to edit transformation steps as plain text in the Transform Editor, as well as to use the Transform Builder. In Release 4.1 and later, the Transform Editor has been removed, in favor of an enhanced version of the Transform Builder. See Transform Builder. Dependencies Browser has been replaced In Release 4.0.1, you could explore dependencies between your datasets through the Dependencies Browser, which was accessible through a graph in the toolbar in the Transformer page. In Release 4.1 and later, it has been replaced by the Dataset Navigator. See Dataset Navigator. Key Bug Fixes New Known Issues March 9, 2017 Update This release features menu-based shortcuts for accessing common transformations, a visualization of relationships between datasets in flows, human-readable recipe steps, and a number of new functions and capabilities. Details are below. What's New - Recipe steps can now be displayed in natural language. See Data Grid Panel. - New column menu shortcuts allow you to quickly assemble recipe steps from menu selections, based on a column's data type. See Column Menus. - New column browser streamlines interactions involving multiple columns. See Column Browser Panel. Key Bug Fixes New Known Issues January 16, 2017 Update This release features the introduction of the following key features: - A completely redesigned execution engine (codename: Photon), which enables much better performance across larger samples in the Transformer page and faster execution. - The Transform Builder, a menu-driven interface for rapidly building transforms. - Fuzzy matching options for joins. - In-app live chat support. - Updated workspace terminology and organization. - Numerous other features and performance enhancements. Details are below. What's New Workspace: - Import to Wrangle in one step. See Import Dataset Page. Transformer Page: - Substantial increase in the size of samples in the Transformer page for better visibility into source data and more detailed profiling. - A newly designed interface helps you to quickly build transform steps. See Transform Builder. - Scrolling and loading improvements in the Transformer page. - Use the Dependencies Browser to review and resolve dependency errors between your datasets. See Dataset Navigator. - For more information on the implications, see Changes to the Object Model. Join tool now supports fuzzy join options. See Join Page. Job Execution and Performance: - Superior performance in job execution. Run jobs on the new execution engine at a faster rate. - Numerous performance improvements to the web application across many users. - Improved error message on job failure. Live Chat Support - Chat with the support team via in-app live chat during business hours Pacific time. Select Chat Support from the Help menu. Changes to System Behavior This section outlines changes to how the platform behaves that have resulted from features or bug fixes in this release. Object Model Architecture changes and related changes to the object model enable greater flexibility in asset reuse in current and future releases. Terminology and organization in the workspace have been updated as well. Changes to the language: - The multisplit transform has been replaced by a more flexible version of the split transform. For more information, see Split Transform. - A number of functions have been renamed to conform to common function names. See Changes to the Language. Key Bug Fixes New Known Issues August 11, 2016 Update This release includes minor bug fixes and usability improvements.
June 27, 2016 Update What's New Core Transform & Profiling: - Union: support for custom mapping, data refresh and multi-dataset selection - Join: Edit and update joins. See Join Page. - Row number and row-based transforms - Additional Datetime formats: - Support milliseconds. - Support for ISO 8601 time zones, non-delimited datetime values (for example, yyyymmdd), and the 12-hour clock (for example, 1pm or 11:13am). - See Supported Data Types. Proxy Errors: - If your environment connects to the public Internet through a proxy server that modifies certificates in use, please contact your IT administrator. Wrangle Language: - New pivottransform enables pivoting and aggregating your data through a single transform step. See Pivot Transform. - New flattentransform allows for simple unnesting of single columns of arrays into separate rows. Simpler syntax and usage than the unnesttransform. See Flatten Transform. - New iffunction supersedes ternary expressions and simplifies the syntax. While ternaries are still supported, you should use the iffunction for scripts in the future. See IF Function. New sourcerownumber()function can be used to reference the original sort order of the dataset. See SOURCEROWNUMBER Function. headertransform now accepts a sourcerownumberparameter, which references the row from the original sort order when the dataset was created. See Header Transform. - Expanded syntax to support deep key extraction for unnesttransform. See Unnest Transform. - New groupEveryparameter for the unpivottransform lets you specify the number of grouped output columns. See Unpivot Transform. - Regular expressions are now validated in the Transform Editor. For more information, see Supported Special Regular Expression Characters. Core User Experience: - Workspace: Streamlined dataset creation and enhanced Excel support. See Import Dataset Page . - Additional script performance improvements. - User experience enhancements: wrapping script lines, updated preview for replace and set, better transformer loading experience Performance: - Improvements in script performance for larger scripts in the following areas: - Click Add to Script when editing the Nth step of a script. Add a script step through the Transform Editor. Click a column name and wait for the suggestion cards. Documentation: Integrated search product documentation and Support Portal through the application. In the menu bar, click the ? icon. Improved Wrangle Language reference content is now available in product documentation. Includes detailed examples for each transform and function. See Wrangle Language. Changes to System Behavior This section outlines changes to how the platform behaves that have resulted from features or bug fixes in this release. More consistent lookups Prior to this release, a lookup into a file containing multiple matches of the lookup returned only a single match at execution time. Beginning in this release, such a lookup returns all matched values but limits the maximum number of matches to 3 per source value. Improvement to the Type System See Improvements to the Type System. Key Bug Fixes New Known Issues February 25, 2016 Update Key Bug Fixes The following important bug fixes have been addressed in this release. February 23, 2016 Update What's New Many new features in this release. Read on! Core features: - Newly supported data types: - Social Security Number (SSN) - Phone Number - Credit Card - Gender - Zip Code - See Supported Data Types. - New workflow for creating projects. See Create Flow Page. 
- Change datasources for your dataset. See Transformer Page. - The new Union tool enables concatenation of multiple datasets. See Union Page. Wrangle: - Identify null values using the isnullfunction in Wrangle language. - See Wrangle Language. - See Manage Null Values. - Hashtag (#) and mentions (@) now supported as . See Text Matching. Performance: - Improvements in initial column- and row-splitting inference - Improvements to UI performance and overall user experience - Improvements in Transformer page with long scripts - More consistent sorting across data types Upgrade Changes Script Upgrade Issues To support the upgrading of scripts to Release 3.0.1, some changes have been made to your Wrangle scripts. Changes to auto-generated column names: The names given to columns that are generated by your script steps has changed: Prior to Release 3.0, these column names were named in the following format: For Release 3.0.1 and later, these column names are generated according to the following method: If the transform includes a function reference, the function name is included in the new column. If the above step is applied again, a duplicate column is generated with the following name: If the transform does not contain a function reference, the old naming convention is used, as in the following examples: The above change has implications in script upgrades: When you edit a step in an upgraded script through the application, however, the new naming logic is applied. All subsequent steps in the script that apply to the affected column(s) are automatically updated for you. When you download a script, the new naming logic is applied to your exported script. The names in the script stored in the application are not changed with this operation. Required Changes to Transform Parameters In Release 3.0, improvements on how transform steps are checked against type have illuminated some issues, which may affect scripts created in previous versions. Please be aware of the following if you are upgrading from Release 2.7 or earlier. These issues must be fixed when you edit your scripts in Release 3.0 or later: merge: - In previous releases, you were permitted to use a or regex pattern as part when merging columns and strings. Beginning in Release 3.0, your previous scripts will still work, but when you edit them, you must replace any patterns in your merge with string values. extractlist transform: - Prior to Release 3.0, the extractlisttransform allowed use of patterns for the delimiterand quoteparameters. Beginning in Release 3.0, these values must be literal string values. Patterns are not supported. extractkv transforms: - In previous releases, some parameters were listed as pattern values for this transform in the Transform Editor, even though string values were accepted. When you edit upgraded scripts in Release 3.0 or later, patterns inserted as values for the valueafterand delimiterparameters must be changed to string values. Key Bug Fixes The following important bug fixes have been addressed in this release. Required Changes to Transform Parameters Improvements on how transform steps are checked against type have illuminated some issues, which may affect scripts created in previous versions. Please be aware of the following if you are upgrading from an earlier version. These issues must be fixed when you edit your scripts: merge: - In previous releases, you were permitted to use a or regex pattern as part when merging columns and strings. 
Beginning in this release, your previous scripts will still work, but when you edit them, you must replace any patterns in your merge with string values. extractlist transform: - In previous releases, the extractlisttransform allowed use of patterns for the delimiterand quoteparameters. Beginning in this release, these values must be literal string values. Patterns are not supported. extractkv transforms: - In previous releases, some parameters were listed as pattern values for this transform in the Transform Editor, even though string values were accepted. When you edit upgraded scripts in this release or later, patterns inserted as values for the valueafterand delimiterparameters must be changed to string values. New Known Issues The following issues are known to appear in this release. - Any Known Issues from prior releases that have been fixed are listed in the Key Bug Fixes section. - Unfixed Known Issues from earlier releases are listed in the Release Notes pages for the version where they were discovered. January 12, 2016 Update Happy New Year from! What's New - Updated Terms of Service. See the product for details. Key Bug Fixes None. November 18, 2015 Update What's New Support for server access through network proxy. Key Bug Fixes None. November 4, 2015 Update This update addresses the following issues. Key Bug Fixes Release 1.0Release 1.0 Welcome to the first release of. Key Features - Import data in text (such as CSVs, TSVs, log files, etc.), JSON, or Excel format. - Compressed GZIP files are automatically unzipped and processed. - Locate missing or mismatched data through graphical data quality bars for each column. Then, select missing or mismatched data to be prompted with a set of pre-defined transformations that can be immediately applied to the data. See Overview of Predictive Transformation. - Identify statistical outliers in each column using its data histogram. Select these values and make changes to them as needed. See Locate Outliers. - Unnest complex, nested data structures with a simple step. - Explore data details in individual columns, including frequency of values, standard deviation, and other statistical metrics. You can also select examples here to trigger suggestions. See Column Details Panel. - Merge your dataset with another or perform joins between datasets through the intuitive interface. - When ready, execute your script and generate results in a variety of formats, including Tableau Data Extract (TDE) format. - As needed, you can modify your transformation steps to fine-tune parameters using Wrangle, a purpose-built data transformation language. See Wrangle Language. - To get started using , see Workflow Basics. Known Issues The following issues are known to appear in this release.
https://docs.trifacta.com/pages/diffpagesbyversion.action?pageId=16318513&selectedPageVersions=40&selectedPageVersions=39
2018-03-17T14:25:45
CC-MAIN-2018-13
1521257645177.12
[]
docs.trifacta.com
OnClientRating The OnClientRating event is raised when you click an item of the RadRating control, just before the new value is applied. It is cancellable and precedes the OnClientRated event. If you cancel it, the OnClientRated event is not raised, nor is a postback initiated if the AutoPostBack property is true. You can use this event to prevent the rating operation based on certain criteria (e.g., the user has already rated). Canceling this event will prevent setting the value of the rating control. The event handler receives two arguments: Sender – the RadRating object that fired the event. Event arguments – an event arguments object that exposes the following properties and methods: Client-side methods of the event arguments object. Example 1: Using the OnClientRating event. <telerik:RadRating ID="RadRating1" runat="server" OnClientRating="OnClientRating"></telerik:RadRating> <script type="text/javascript"> var isRated = false; function OnClientRating(sender, args) { args.set_cancel(isRated); if (!isRated) isRated = true; } </script>
https://docs.telerik.com/devtools/aspnet-ajax/controls/rating/client-side-programming/events/onclientrating
2018-03-17T14:43:32
CC-MAIN-2018-13
1521257645177.12
[]
docs.telerik.com
Get Current Test Project Path I would like to get the path (in the context of the file system) to which the current test project belongs. Solution This is doable with a coded solution. Create a coded step inside of a test: string currentProjectFolder = this.ExecutionContext.DeploymentDirectory; Dim currentProjectFolder As String = Me.ExecutionContext.DeploymentDirectory This will store the current project path inside the string variable currentProjectFolder. The actual path string can be, for example: - C:\Users\smith\Documents\Test Studio Projects\TestProject1 However, when running tests from the Visual Studio Test View or Test Explorer, the above code will return a complex Out folder and not the project's containing folder. For instance: - C:\MySampleProject\TestResults\smith_Smith 2011-11-21 12_56_51\Out You might also want to get the path of the data source files attached to your test project. When a project contains a data source, it is stored in Project Folder\Data. For instance, your test might be data bound to an Excel file named excelSample.xlsx. This file will be stored under Project Folder\Data\excelSample.xlsx. Here's how to get this path in code: string dataSourcePath = this.ExecutionContext.DeploymentDirectory + @"\Data\excelSample.xlsx"; Dim dataSourcePath As String = Me.ExecutionContext.DeploymentDirectory + "\Data\excelSample.xlsx" It is still possible to edit or replace the data source file from that location in a coded step. This will work even if the test is data bound to the file in question.
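If you want the coded step to fail fast when the data source file is missing, you can combine the two snippets above. The following is a small illustrative sketch; the file name excelSample.xlsx is simply the example used above.
// Build the data source path from the deployment directory and verify the file exists.
string currentProjectFolder = this.ExecutionContext.DeploymentDirectory;
string dataSourcePath = System.IO.Path.Combine(currentProjectFolder, "Data", "excelSample.xlsx");
if (!System.IO.File.Exists(dataSourcePath))
{
    // Fail the step early with a clear message rather than a confusing data-binding error later.
    throw new System.IO.FileNotFoundException("Data source file not found: " + dataSourcePath);
}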
https://docs.telerik.com/teststudio/advanced-topics/coded-samples/general/get-project-path
2018-03-17T14:49:16
CC-MAIN-2018-13
1521257645177.12
[array(['/teststudio/img/advanced-topics/coded-samples/general/get-project-path/fig1.png', 'Current path'], dtype=object) ]
docs.telerik.com
Model Options¶ In addition to Django’s default Meta options, Django MongoDB Engine supports various options specific to MongoDB through a special class MongoMeta. class FooModel(models.Model): ... class MongoMeta: # Mongo options here ... Indexes¶ Django MongoDB Engine already understands the standard db_index and unique_together options and generates the corresponding MongoDB indexes on syncdb. To make use of other index features, like multi-key indexes and Geospatial Indexing, additional indexes can be specified using the indexes setting. class Club(models.Model): location = ListField() rating = models.FloatField() admission = models.IntegerField() ... class MongoMeta: indexes = [ [('rating', -1)], [('rating', -1), ('admission', 1)], {'fields': [('location', '2d')], 'min': -42, 'max': 42}, ] indexes can be specified in two ways: - The simple “without options” form is a list of (field, direction)pairs. For example, a single ascending index (the same thing you get using db_index) is expressed as [(field, 1)]. A multi-key, descending index can be written as [(field1, -1), (field2, -1), ...]. - The second form is slightly more verbose but takes additional MongoDB index options. A descending, sparse index for instance may be expressed as {'fields': [(field, -1)], 'sparse': True}. Capped Collections¶ Use the capped option and collection_size (and/or collection_max) to limit a collection in size (and/or document count), new documents replacing old ones after reaching one of the limit sets. For example, a logging collection fixed to 50MiB could be defined as follows: class LogEntry(models.Model): timestamp = models.DateTimeField() message = models.TextField() ... class MongoMeta: capped = True collection_size = 50*1024*1024
http://django-mongodb-engine.readthedocs.io/en/latest/reference/model-options.html
2018-03-17T14:03:56
CC-MAIN-2018-13
1521257645177.12
[]
django-mongodb-engine.readthedocs.io
Introduction Status Purpose. GROBID includes batch processing, a comprehensive RESTful API, a Java API, a relatively generic evaluation framework (precision, recall, etc.) and the semi-automatic generation of training data. GROBID can be considered production-ready. Deployments in production include ResearchGate, HAL Research Archive, the European Patent Office, INIST, Mendeley, CERN, ... The key aspects of GROBID are the following: - Written in Java, with JNI calls to native CRF libraries. - Speed - on a modern but low-profile MacBook Pro: header extraction from 4000 PDFs in 10 minutes (or from 3 PDFs per second with the RESTful API), parsing of 3000 references in 18 seconds. - Speed and Scalability: INIST recently scaled the GROBID REST service to extract the bibliographical references of 1 million PDFs in 1 day on a Xeon 10 CPU E5-2660 and 10 GB memory (3 GB used on average) with 9 threads - so around 11.5 PDFs per second. The complete processing of 395,000 PDFs (IOP) with full text structuring was performed in 12h46mn with 16 threads, 0.11s per PDF (~1.72s per PDF with a single thread). - Lazy loading of models and resources. Depending on the selected process, only the required data are loaded in memory. For instance, extracting only metadata headers from a PDF requires less than 2 GB memory in a multithreading usage, extracting citations uses around 3 GB and extracting the full PDF structure around 4 GB. - Robust and fast PDF processing based on Xpdf and dedicated post-processing. - Modular and reusable machine learning models. The extractions are based on Linear Chain Conditional Random Fields, which is currently the state of the art in bibliographical information extraction and labeling. The specialized CRF models are cascaded to build a complete document structure. - Full encoding in TEI, both for the training corpus and the parsed results. - Reinforcement of extracted bibliographical data via online calls to Crossref (optional), export in OpenURL, etc. for easier integration into Digital Library environments. - Rich bibliographical processing: fine-grained parsing of author names, dates, affiliations, addresses, etc. but also, for instance, quite reliable automatic attachment of affiliations and emails to authors. - "Automatic Generation" of pre-formatted training data based on new PDF documents, for supporting semi-automatic training data generation. - Support for CJK and Arabic languages based on customized Lucene analyzers provided by WIPO. The GROBID extraction and parsing algorithms use the Wapiti CRF library. The CRF++ library is not supported since GROBID version 0.4. The C++ libraries are transparently integrated as JNI with dynamic calls based on the current OS. GROBID should run properly "out of the box" on MacOS X, Linux (32 and 64 bits) and Windows. Credits The main author is Patrice Lopez ([email protected]).
Many thanks to: - Vyacheslav Zholudev (ResearchGate) - Luca Foppiano (Inria) - Christopher Boumenot (Microsoft) in particular for Windows support - Laurent Romary (Inria), as project promoter and TEI pope - Florian Zipser (Humboldt University) who developed the first version of the REST API in 2011 - the other contributors from ResearchGate: Michael Häusler, Kyryl Bilokurov, Artem Oboturov - Damien Ridereau (Infotel) - Bruno Pouliquen (WIPO) for the custom analyzers for Eastern languages - Thomas Lavergne, Olivier Cappé and François Yvon for Wapiti - Taku Kudo for CRF++ - Hervé Déjean and his colleagues from Xerox Research Centre Europe, for xml2pdf - and the other contributors: Dmitry Katsubo, Phil Gooch, Romain Loth, Maud Medves, Chris Mattmann, Sujen Shah, Joseph Boyd, Guillaume Muller, Achraf Azhar, ...
http://grobid.readthedocs.io/en/latest/Introduction/
2018-03-17T14:18:18
CC-MAIN-2018-13
1521257645177.12
[]
grobid.readthedocs.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the DeleteThingShadow operation. Deletes the thing shadow for the specified thing. For more information, see DeleteThingShadow in the AWS IoT Developer Guide. Namespace: Amazon.IotData.Model Assembly: AWSSDK.IotData.dll Version: 3.x.y.z The DeleteThingShadow
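A minimal usage sketch for this request class follows; the data endpoint URL and thing name are placeholders, and on .NET Core the call is made through DeleteThingShadowAsync.
// Minimal sketch: delete the shadow of a single thing via the IoT data-plane client.
using System;
using System.Threading.Tasks;
using Amazon.IotData;
using Amazon.IotData.Model;
class DeleteShadowExample
{
    static async Task Main()
    {
        // The IoT data client is configured with your account-specific data endpoint (placeholder below).
        var config = new AmazonIotDataConfig { ServiceURL = "https://example.iot.us-east-1.amazonaws.com" };
        var client = new AmazonIotDataClient(config);
        var response = await client.DeleteThingShadowAsync(new DeleteThingShadowRequest
        {
            ThingName = "myThing" // placeholder thing name
        });
        Console.WriteLine("Shadow deleted, HTTP status: " + response.HttpStatusCode);
    }
}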
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/IotData/TDeleteThingShadowRequest.html
2018-03-17T14:04:17
CC-MAIN-2018-13
1521257645177.12
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Lists all provisioning artifacts (also known as versions) for the specified product. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to ListProvisioningArtifactsAsync. Namespace: Amazon.ServiceCatalog Assembly: AWSSDK.ServiceCatalog.dll Version: 3.x.y.z Container for the necessary parameters to execute the ListProvisioningArtifacts service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
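A minimal usage sketch follows; the product identifier is a placeholder, and on .NET Core you would call ListProvisioningArtifactsAsync instead.
// Minimal sketch: list the provisioning artifacts (versions) of one product.
using System;
using Amazon.ServiceCatalog;
using Amazon.ServiceCatalog.Model;
class ListArtifactsExample
{
    static void Main()
    {
        var client = new AmazonServiceCatalogClient();
        var response = client.ListProvisioningArtifacts(new ListProvisioningArtifactsRequest
        {
            ProductId = "prod-examplep1a2b3c" // placeholder product identifier
        });
        foreach (var artifact in response.ProvisioningArtifactDetails)
        {
            Console.WriteLine(artifact.Id + ": " + artifact.Name);
        }
    }
}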
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/ServiceCatalog/MServiceCatalogListProvisioningArtifactsListProvisioningArtifactsRequest.html
2018-03-17T14:53:10
CC-MAIN-2018-13
1521257645177.12
[]
docs.aws.amazon.com
dbsize // Using callbacks (NodeJS or Web Browser) kuzzle.memoryStorage.dbsize(function (err, count) { // callback called once the action has completed }); // Using promises (NodeJS only) kuzzle.memoryStorage.dbsize() .then(count => { // resolved once the action has completed }); kuzzle.memoryStorage.dbsize(new ResponseListener<Long>() { public void onSuccess(int count) { // callback called once the action has completed } public void onError(JSONObject error) { } }); use \Kuzzle\Kuzzle; $kuzzle = new Kuzzle('localhost'); try { $count = $kuzzle->memoryStorage()->dbsize(); } catch (ErrorException $e) { } Callback response: 12 Returns the number of keys in the application database. dbsize([options], callback) Options Callback response Resolves to an integer containing the number of keys in the application database
http://docs.kuzzle.io/sdk-reference/memory-storage/dbsize/
2018-03-17T14:22:09
CC-MAIN-2018-13
1521257645177.12
[]
docs.kuzzle.io
Dfsdiag TestDFSIntegrity Applies To: Windows Server 2012, Windows 8 Checks the integrity of the Distributed File System (DFS) namespace by performing the following tests: Checks for DFS metadata corruption or inconsistencies between domain controllers. Validates the configuration of access-based enumeration to ensure that it is consistent between DFS metadata and the namespace server share. Detects overlapping DFS folders (links), duplicate folders, and folders with overlapping folder targets. For examples of how this command can be used, see Examples. Syntax DFSDiag /TestDFSIntegrity /DFSRoot:<DFS root path> [/Recurse] [/Full] Parameters Examples To check the integrity of the \\Contoso.com\MyNamespace namespace recursively and perform all available tests, type: DFSDiag /TestDFSIntegrity /DFSRoot:\\Contoso.com\MyNamespace /Recurse /Full
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh875508(v=ws.11)
2018-03-17T15:14:14
CC-MAIN-2018-13
1521257645177.12
[]
docs.microsoft.com
Installing Modules Included in Puppet Enterprise 3.2. A newer version is available; see the version menu above for details. Installing Modules - Windows nodes that pull configurations from a Linux or Unix puppet master can use any Forge modules installed on the master. Continue reading to learn how to use the module tool on your puppet master. - On Windows nodes which compile their own catalogs, you can install a Forge module by downloading and extracting the module’s release tarball new modules from the Forge. Its interface is similar to several common package managers and makes it easy to search for and install new modules from the command line. - Continue reading to learn how to install and manage modules from the Puppet Forge. - See “Module Fundamentals” to learn how to use and write Puppet modules. - See “Publishing Modules” to learn how to contribute your own modules to the Forge, including information about the puppet module tool’s buildand generateactions. - See introduces supported modules, which includes additional fields in the modules’ metadata.json file to indicate compatibility with PE versions and OSes. The puppet module tool (PMT) has been updated in PE 3.2 to look for PE version requirements in the metadata. If you are running PE 3.2,. Finding Modules Modules can be found by browsing the Forge’s web interface or by using the module tool’s search action. adobe-hbase Puppet module to d... @adobe apache adobe-zookeeper Puppet module to d... @adobe apache adobe-highavailability Puppet module to c... @adobe apache mon adobe-mon Puppet module to d... @adobe apache mon puppetmanaged-webserver Apache webserver m... @puppetmanaged apache ghoneycutt-apache Manages apache ser... @ghoneycutt apache web ghoneycutt-sites This module manage... @ghoneycutt apache web fliplap-apache_modules_sles11 Exactly the same a... @fliplap mstanislav-puppet_yum Puppet 2. @mstanislav apache mstanislav-apache_yum Puppet 2. @mstanislav apache jonhadfield-wordpress Puppet module to s... @jonhadfield apache php saz-php Manage cli, apache... @saz apache php pmtacceptance-apache This is a dummy ap... @pmtacceptance apache php pmtacceptance-php This is a dummy ph... @pmtacceptance apache php Once you’ve identified the module you need, you can install it by name as described above. Managing Modules Listing Installed Modules Use the module tool’s list action to see which modules you have installed (and which directory they’re installed in). - Use the --treeoption to view the modules arranged by dependency instead of by location on disk. Upgrading Modules Use the module tool’s upgrade action to upgrade an installed module to the latest version. The target module must be identified by its full name. - Use the --versionoption to specify a version. -.
https://docs.puppet.com/puppet/3/modules_installing.html
2018-03-17T14:45:03
CC-MAIN-2018-13
1521257645177.12
[array(['/images/windows-logo-small.jpg', 'Windows note'], dtype=object)]
docs.puppet.com
This document is for the Zen Grid Bridge version of the Ascent template. For more information on the Zen Grid Bridge framework please see this document. Common Overrides - ZGB based themes are a hybrid of the Zen Grid Framework and the One Web framework from Seth Warburton (Zen Grid Bridge themes use Seth's html overrides). So there may be some small differences in the styling of author, category, date etc when compared to the original release. Interface - ZGB themes use the standard Joomla interface in the administrator and no longer sport the ZGF interface that you will have used in Joomla 1.5 and Joomla 2.5. Logo control - ZGB themes use a module position called "logo" and do not provide any other control of the logo position except for font control. The font used for the logo module is set via the template admin while the content of the logo position is entirely set via whatever is published to that position. See below for more information. Advert1 to advert6 modules no longer available - The Advert1 to 6 module positions have been replaced by the above and below module positions. Layout control uses new settings - Please see below for a description of how layouts of modules can be set. jQuery is loaded via Joomla - ZGB themes use the standard Joomla 3 method of using jQuery on the page. Copyright is now in a module rather than the template - See below Social icons are rendered via the Zen Social module - See below Toggle Menu is the only mobile option - Previous Zen Grid Framework themes had the option of using a select menu; however, this is not available in the bridge themes. The content for the logo in all Zen Grid Bridge themes is created by publishing a custom HTML module to the logo position. Please note that this is a different method from the one that users of the previous version of this theme will be used to. It is still possible to create either Google-font-rendered text logos or image-based logos. Creating the logo position: 1. In your Joomla administrator navigate to the module manager via Extensions > Module Manager. 2. Click the New button on the top left of the screen. 3. Select Custom HTML module 4. Create a title for the module that you will easily recognise later eg Site Logo. 5. Assign the module to the "logo" position. 6. Ensure that the module title is set to hide. 7. Click on the "Custom Output" tab. 8. Add your logo content: a. Using a font for the logo text. If you are using just text for your logo you can add the logo content as per the following. <h2>My site logo</h2> <p><em>My tagline</em></p> - You can use any tag that you like here. - The font family for the h1,h2,h3,h4,h5,h6 headings is controlled via the template settings. You can assign a Google font to be used for your logo here. b. Using an image for your logo. Using a logo in this position is as simple as using the default Joomla editor image upload feature to insert an image into the module. The Zen Grid Bridge based themes use the Zen Social module to replicate the social icons seen in earlier versions of this template. There may be visual variations with the social icons used in the Zen Social module. This module uses font icons to render the social icons. The colour, size and position for the social icons can be controlled via the module settings. The Zen Social module must be published to the social module position to replicate the styling seen in the original Ascent template.
Copyright can be added to your site by publishing a custom HTML module to the copyright position. This will automatically be positioned to the right of the footer position at the very bottom of the page. Template Width Sets the maximum width for the template - can be a px or % value. Hilite Sets the colour scheme for the template. Loads a separate css file found in the css/ folder of the template. Navigation Colour Sets the background colour for the menu. Top Style Sets the background image and/or colour for the area above and including the banner. Middle Style Sets the background image and/or colour for the main content area below the banner row and above the footer row. Bottom Style Sets the background image and/or colour for the area below the main content area. Responsive Allows the user to enable or disable the responsive features of the template. Toggle Menu title If responsive is enabled, this sets the string used for the word that triggers the opening of the toggle menu on small screens. Menu alignment Sets the left, center or right alignment of the menu. The Ascent Zen Grid Bridge theme uses a toggle menu for displaying the menu on small screens. The mobile menu is a separate module position called "togglemenu" and is automatically displayed when the browser is narrower than 740px. To display a menu for small screens simply publish a new menu module to the togglemenu position.
http://docs.joomlabamboo.com/joomla-templates/2012-template/ascent-joomla-3-bridge
2018-03-17T14:23:33
CC-MAIN-2018-13
1521257645177.12
[]
docs.joomlabamboo.com
As well as the listed changes to json_decode, it appears that, in contrast to PHP 5.6, PHP 7's json_decode is stricter about control characters in the JSON input. They are invalid JSON, but I had literal tabs and newlines (that is, byte values 9 and 10, not the two-character escape sequences) in one string inadvertently, and they used to work in PHP 5.6 so I hadn't noticed; now they don't.
http://docs.php.net/manual/pt_BR/function.json-decode.php
2018-03-17T14:32:01
CC-MAIN-2018-13
1521257645177.12
[]
docs.php.net
Background Is it possible to associate a single RightScale account with Multiple AWS accounts? Or can I associate multiple RightScale accounts with the same AWS credentials? Answer An AWS account number can only be associated with a single RightScale account. Alternatively, you cannot have multiple RightScale accounts that use the same AWS credentials (i.e., Access Key, Secret Key, x509, etc) for a given AWS cloud. If your RightScale subscription supports the use of multiple RightScale accounts, each RightScale account must be uniquely tied to AWS credentials that are not associated with another RightScale account. However, if you want to consolidate your AWS accounts so that you only receive a single bill at the end of the month for your cloud usage costs, use Amazon's consolidated billing option. See Consolidated Billing for AWS Accounts. Questions? Concerns? Call us at (866) 787-2253 or feel free to send in a support request from the RightScale dashboard (Support > Email).
http://docs.rightscale.com/faq/Can_I_associate_multiple_AWS_accounts_with_a_single_RightScale_account.html
2018-03-17T14:25:56
CC-MAIN-2018-13
1521257645177.12
[]
docs.rightscale.com
Update Behaviors of Stack Resources When you submit an update, AWS CloudFormation updates resources based on differences between what you submit and the stack's current template. Resources that have not changed run without disruption during the update process. For updated resources, AWS CloudFormation uses one of the following update behaviors: - Update with No Interruption AWS CloudFormation updates the resource without disrupting operation of that resource and without changing the resource's physical ID. For example, if you update any property on an AWS::CloudTrail::Trail resource, AWS CloudFormation updates the trail without disruption. - Updates with Some Interruption AWS CloudFormation updates the resource with some interruption and retains the physical ID. For example, if you update certain properties on an AWS::EC2::Instance resource, the instance might have some interruption while AWS CloudFormation and Amazon EC2 reconfigure the instance. - Replacement AWS CloudFormation recreates the resource during an update, which also generates a new physical ID. AWS CloudFormation creates the replacement resource first, changes references from other dependent resources to point to the replacement resource, and then deletes the old resource. For example, if you update the Engineproperty of an AWS::RDS::DBInstance resource type, AWS CloudFormation creates a new resource and replaces the current DB instance resource with the new one. The method AWS CloudFormation uses depends on which property you update for a given resource type. The update behavior for each property is described in the AWS Resource Types Reference. Depending on the update behavior, you can decide when to modify resources to reduce the impact of these changes on your application. In particular, you can plan when resources must be replaced during an update. For example, if you update the Port property of an AWS::RDS::DBInstance resource type, AWS CloudFormation replaces the DB instance by creating a new DB instance with the updated port setting and deletes the old DB instance. Before the update, you might plan to do the following to prepare for the database replacement: Take a snapshot of the current databases. Prepare a strategy for how applications that use that DB instance will handle an interruption while the DB instance is being replaced. Ensure that the applications that use that DB instance take into account the updated port setting and any other updates you have made. Use the DB snapshot to restore the databases on the new DB instance. This example is not exhaustive; it's meant to give you an idea of the things to plan for when a resource is replaced during an update. Note If the template includes one or more nested stacks, AWS CloudFormation also initiates an update for every nested stack. This is necessary to determine whether the nested stacks have been modified. AWS CloudFormation updates only those resources in the nested stacks that have changes specified in corresponding templates.
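To make the replacement example concrete, here is an illustrative, deliberately incomplete template fragment (it omits required properties and is not a working template); changing the Engine or Port value of this resource in a stack update would cause AWS CloudFormation to replace the DB instance as described above.
{
  "Resources": {
    "MyDatabase": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {
        "Engine": "mysql",
        "Port": "3306",
        "DBInstanceClass": "db.t2.micro",
        "AllocatedStorage": "20"
      }
    }
  }
}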
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html
2018-03-17T14:49:16
CC-MAIN-2018-13
1521257645177.12
[]
docs.aws.amazon.com
The DynamoDB Session Handler is a custom session handler for PHP that allows developers to use Amazon DynamoDB as a session store. Using DynamoDB for session storage alleviates issues that occur with session handling in a distributed web application by moving sessions off of the local file system and into a shared location. DynamoDB is fast, scalable, easy to setup, and handles replication of your data automatically. The DynamoDB Session Handler uses the session_set_save_handler() function to hook DynamoDB operations into PHP's native session functions to allow for a true drop in replacement. This includes support for features like session locking and garbage collection which are a part of PHP's default session handler. For more information on the Amazon DynamoDB service, please visit the Amazon DynamoDB homepage. The first step is to instantiate and register the session handler. use Aws\DynamoDb\SessionHandler; $sessionHandler = SessionHandler::fromClient($dynamoDb, [ 'table_name' => 'sessions' ]); $sessionHandler->register(); Before you can actually use the session handler, you need to create a table in which to store the sessions. This can be done ahead of time through the AWS Console for Amazon DynamoDB, or using the SDK. Once the session handler is registered and the table exists, you can write to and read from the session using the $_SESSION superglobal, just like you normally do with PHP's default session handler. The DynamoDB Session Handler encapsulates and abstracts the interactions with Amazon DynamoDB and enables you to simply use PHP's native session functions and interface. // Start the session session_start(); // Alter the session data $_SESSION['user.name'] = 'jeremy'; $_SESSION['user.role'] = 'admin'; // Close the session (optional, but recommended) session_write_close(); You may configure the behavior of the session handler using the following options. All options are optional, but you should make sure to understand what the defaults are. table_name 'sessions'. hash_key 'id'. session_lifetime ini_get('session.gc_maxlifetime'). consistent_read GetItemoperation. This defaults to true. locking false. batch_config SessionHandler::garbageCollect(). max_lock_wait_time 10and is only used with session locking. min_lock_retry_microtime 10000and is only used with session locking. max_lock_retry_microtime 50000and is only used with session locking. To configure the Session Handler, you must specify the configuration options when you instantiate the handler. The following code is an example with all of the configuration options specified. $sessionHandler = SessionHandler::fromClient($dynamoDb, [ 'table_name' => 'sessions', 'hash_key' => 'id', 'session_lifetime' => 3600, 'consistent_read' => true, 'locking' => false, 'batch_config' => [], 'max_lock_wait_time' => 10, 'min_lock_retry_microtime' => 5000, 'max_lock_retry_microtime' => 50000, ]); Aside from data storage and data transfer fees, the costs associated with using Amazon DynamoDB are calculated based on the provisioned throughput capacity of your table (see the Amazon DynamoDB pricing details). Throughput is measured in units of Write Capacity and Read Capacity. The Amazon DynamoDB homepage says: A unit of read capacity represents one strongly consistent read per second (or two eventually consistent reads per second) for items as large as 4 KB. A unit of write capacity represents one write per second for items as large as 1 KB. 
Ultimately, the throughput and the costs required for your sessions table is going to correlate with your expected traffic and session size. The following table explains the amount of read and write operations that are performed on your DynamoDB table for each of the session functions. The DynamoDB Session Handler supports pessimistic session locking in order to mimic the behavior of PHP's default session handler. By default the DynamoDB Session Handler has this feature turned off since it can become a performance bottleneck and drive up costs, especially when an application accesses the session when using ajax requests or iframes. You should carefully consider whether or not your application requires session locking or not before enabling it. To enable session locking, set the 'locking' option to true when you instantiate the SessionHandler. $sessionHandler = SessionHandler::fromClient($dynamoDb, [ 'table_name' => 'sessions', 'locking' => true, ]); The DynamoDB Session Handler supports session garbage collection by using a series of Scan and BatchWriteItem operations. Due to the nature of how the Scan operation works and in order to find all of the expired sessions and delete them, the garbage collection process can require a lot of provisioned throughput. For this reason, we do not support automated garbage collection . A better practice is to schedule the garbage collection to occur during an off-peak time where a burst of consumed throughput will not disrupt the rest of the application. For example, you could have a nightly cron job trigger a script to run the garbage collection. This script would need to do something like the following: $sessionHandler = SessionHandler::fromClient($dynamoDb, [ 'table_name' => 'sessions', 'batch_config' => [ 'batch_size' => 25, 'before' => function ($command) { echo "About to delete a batch of expired sessions.\n"; } ] ]); $sessionHandler->garbageCollect(); You can also use the 'before' option within 'batch_config' to introduce delays on the BatchWriteItem operations that are performed by the garbage collection process. This will increase the amount of time it takes the garbage collection to complete, but it can help you spread out the requests made by the session handler in order to help you stay close to or within your provisioned throughput capacity during garbage collection. $sessionHandler = SessionHandler::fromClient($dynamoDb, [ 'table_name' => 'sessions', 'batch_config' => [ 'before' => function ($command) { $command['@http']['delay'] = 5000; } ] ]); $sessionHandler->garbageCollect(); 'batch_config'option to your advantage. To use the DynamoDB session handler, your configured credentials must have permission to use the DynamoDB table that you created in a previous step. The following IAM policy contains the minimum permissions that you need. To use this policy, replace the Resource value with the Amazon Resource Name (ARN) of the table that you created previously. For more information about creating and attaching IAM policies, see Managing IAM Policies in the AWS Identity and Access Management User Guide. { "Version": "2012-10-17", "Statement": [ { "Action": [ "dynamodb:GetItem", "dynamodb:UpdateItem", "dynamodb:DeleteItem", "dynamodb:Scan", "dynamodb:BatchWriteItem" ], "Effect": "Allow", "Resource": "arn:aws:dynamodb:<region>:<account-id>:table/<table-name>" } ] }
https://docs.aws.amazon.com/aws-sdk-php/v3/guide/service/dynamodb-session-handler.html
2018-03-17T14:49:52
CC-MAIN-2018-13
1521257645177.12
[]
docs.aws.amazon.com
JMS credentials

The JMS credentials type manages access to a Java Message Service (JMS). This credential type is available for Discovery and Orchestration. These fields are available in the Credentials form for JMS.

Name: Enter a unique and descriptive name for this credential.
Active: Enable or disable these credentials for use.
User name: Enter the user name to create in the Credentials table. Avoid leading or trailing spaces in user names. A warning appears if the platform detects leading or trailing spaces in the user name.
Order: Enter the order in which the instance tries this credential when logging on to devices. The smaller the number, the higher in the list this credential appears. Establish credential order when using large numbers of credentials or when security locks out users after three failed login attempts. If all the credentials have the same order number (or none), the instance tries the credentials in a random order.
https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/product/discovery/reference/r_JMSCredentialsForm.html
2018-03-17T14:23:06
CC-MAIN-2018-13
1521257645177.12
[]
docs.servicenow.com
In the data grid, you can review how the current recipe applies to the individual columns in your sample.

- The grid is the default view in the Transformer page.
- To open the data grid, click the Grid View icon in the Transformer bar at the top of the page.
- Click column headings to review suggestions for transforms to apply to the column. Select specific values in a column for suggestions on those strings.

Add or Edit:
- To add a selected suggestion card to your recipe, click Add.

Status Bar

Below the data grid, you can review summary information about the data in your currently selected sample.

Show only affected: When transform steps are previewed, you can use these checkboxes to display only the previewed changes for affected rows, columns, or both.

Find Column

In a wide dataset, it can be easier to use the Find Column bar to locate the column of interest.

- Above the data grid, click the Find Column textbox.
- Use the up and down arrows to view the list of the columns in the dataset.
- You can start typing a column name to filter the list.

Column Information

At the top of the column, you can review summary information about the column.

Selecting values

You can click and drag to select values in a column:
- Select a single value in the column to prompt a set of suggestions.
- Select multiple values in a single cell to receive a different set of suggestions.
- See Suggestion Cards.
https://docs.trifacta.com/pages/diffpagesbyversion.action?pageId=16318611&selectedPageVersions=16&selectedPageVersions=15
2018-03-17T14:21:56
CC-MAIN-2018-13
1521257645177.12
[]
docs.trifacta.com
With GIMP-2.8, the Save command saves images in XCF format only. The Export command is now used to store images to various file formats.

You can access this command from the image menubar through File → Export…, or from the keyboard by using the shortcut Ctrl+Shift+E.

With this file browser, you can edit the filename and extension directly in the name box (default is "Untitled.png") or by selecting a file in the name list. You must also set the image destination in Save in Folder. You can create a new folder if necessary.

Select File Type. If you expand this option, you can select an extension in the drop-down list for your file:

File format dialogs are described in Section 1, "Files".

When the file name and destination are set, click on Export. This opens the export dialog for the specified file format.

If you have loaded a non-XCF file, a new item appears in the File menu, allowing you to export the file in the same format, overwriting the original file.

If you modify an image that you already have exported, the Export command in the File menu is changed, allowing you to export the file again in the same format.
https://docs.gimp.org/2.8/en/gimp-export-dialog.html
2018-02-17T23:33:41
CC-MAIN-2018-09
1518891808539.63
[]
docs.gimp.org
Package bipartitegraph

Overview

Index

Package files

bipartitegraph.go bipartitegraphmatching.go

BipartiteGraph

type BipartiteGraph struct {
    Left  NodeOrderedSet
    Right NodeOrderedSet
    Edges EdgeSet
}

func NewBipartiteGraph

func NewBipartiteGraph(leftValues, rightValues []interface{}, neighbours func(interface{}, interface{}) (bool, error)) (*BipartiteGraph, error)

func (*BipartiteGraph) LargestMatching

func (bg *BipartiteGraph) LargestMatching() (matching EdgeSet)
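A short usage sketch based only on the signatures listed above (the values and the neighbour predicate are invented examples, not part of the generated docs):

package main

import (
	"fmt"

	"github.com/onsi/gomega/matchers/support/goraph/bipartitegraph"
)

func main() {
	left := []interface{}{1, 2, 3}
	right := []interface{}{2, 3, 4}

	// Example predicate: a right node is a neighbour of a left node when it is l+1.
	neighbours := func(l, r interface{}) (bool, error) {
		return r.(int) == l.(int)+1, nil
	}

	graph, err := bipartitegraph.NewBipartiteGraph(left, right, neighbours)
	if err != nil {
		panic(err)
	}

	// LargestMatching returns the edge set of a maximum matching between Left and Right.
	fmt.Println("largest matching:", graph.LargestMatching())
}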
http://docs.activestate.com/activego/1.8/pkg/github.com/onsi/gomega/matchers/support/goraph/bipartitegraph/
2018-02-17T22:55:54
CC-MAIN-2018-09
1518891808539.63
[]
docs.activestate.com
Variable::Magic - Associate user-defined magic to variables from Perl. - NAME - VERSION - SYNOPSIS - DESCRIPTION - FUNCTIONS - CONSTANTS - MGf_COPY - MGf_DUP - MGf_LOCAL - VMG_UVAR - VMG_COMPAT_SCALAR_LENGTH_NOLEN - VMG_COMPAT_SCALAR_NOLEN - VMG_COMPAT_ARRAY_PUSH_NOLEN - VMG_COMPAT_ARRAY_PUSH_NOLEN_VOID - VMG_COMPAT_ARRAY_UNSHIFT_NOLEN_VOID - VMG_COMPAT_ARRAY_UNDEF_CLEAR - VMG_COMPAT_HASH_DELETE_NOUVAR_VOID - VMG_COMPAT_CODE_COPY_CLONE - VMG_COMPAT_GLOB_GET - VMG_PERL_PATCHLEVEL - VMG_THREADSAFE - VMG_FORKSAFE - VMG_OP_INFO_NAME - VMG_OP_INFO_OBJECT - COOKBOOK - PERL MAGIC HISTORY - EXPORT - CAVEATS - DEPENDENCIES - SEE ALSO - AUTHOR - BUGS - SUPPORT NAME Variable::Magic - Associate user-defined magic to variables from Perl. VERSION Version 0.62 SYNOPSIS" } DESCRIPTION : Magic is not copied on assignment. You attach it to variables, not values (as for blessed references). Magic does not replace the original semantics.. Magic is multivalued. You can safely apply different kinds of magics to the same variable, and each of them will be invoked successively. Magic is type-agnostic. The same magic can be applied on scalars, arrays, hashes, subs or globs. But the same hook (see below for a list) may trigger differently depending on the type of the variable. Magic is invisible at Perl level. Magical and non-magical variables cannot be distinguished with ref, tiedor another trick. Magic is notably faster. : get This magic is invoked when the variable is evaluated. It is never called for arrays and hashes. set This magic is called each time the value of the variable changes. It is called for array subscripts and slices, but never for hashes. lenor grep). The length is returned from the callback as an integer. Starting from perl 5.12, this magic is no longer called by the lengthkeyword,. clear). free This magic is called when a variable is destroyed as the result of going out of scope (but not when it is undefined). It behaves roughly like Perl object destructors (i.e. DESTROYmethods), except that exceptions thrown from inside a free callback will always be propagated to the surrounding code. copy When applied to tied arrays and hashes, this magic fires when you try to access or change their elements. Starting from perl 5.17.0, it can also be applied to closure prototypes, in which case the magic will be called when the prototype is cloned. The "VMG_COMPAT_CODE_COPY_CLONE" constant is true when your perl support this feature. dup This magic is invoked when the variable is cloned across threads. It is currently not available. local When this magic is set on a variable, all subsequent localizations of the variable will trigger the callback. It is available on your perl if and only if MGf_LOCALis true. The following actions only apply to hashes and are available if and only if "VMG_UVAR" is true. They are referred to as uvar magics. fetch This magic is invoked each time an element is fetched from the hash. store This one is called when an element is stored into the hash. exists This magic fires when a key is tested for existence in the hash. delete This magic is triggered when a key is deleted in the hash, regardless of whether the key actually exists in it. You can refer to the tests to have more insight of where the different magics are invoked. FUNCTIONSwhen no private data constructor is supplied with the wizard). 
Other arguments depend on which kind of magic is involved : len $_[2]contains the natural, non-magical length of the variable (which can only be a scalar or an array as len magic is only relevant for these types). The callback is expected to return the new scalar or array length to use, or undefto default to the normal length. copy When the variable for which the magic is invoked is an array or an hash, $_[2]is a either an alias or a copy of the current key, and $_[3]is an alias to the current element (i.e. the value). Since $_[2]might be a copy, it is useless to try to change it or cast magic on it. Starting from perl 5.17.0, this magic can also be called for code references. In this case, $_[2]is always undefand $_[3]is a reference to the cloned anonymous subroutine. fetch, store, exists and delete $_[2]is an alias to the current key. Note that $_[2]may rightfully be readonly if the key comes from a bareword, and as such it is unsafe to assign to it. You can ask for a copy instead by passing copy_key => 1to "wizard" which, at the price of a small performance hit, allows you to safely assign to $_[2]in order to e.g. redirect the action to another key. Finally, if op_info => $numis also passed to wizard, then one extra element is appended to @_. Its nature depends on the value of $num: VMG_OP_INFO_NAME $_[-1]is the current op name. VMG_OP_INFO_OBJECT $_[-1]is the B::OPobject for the current op. Both result in a small performance hit, but just getting the name is lighter than getting the op object. These callbacks are always executed in scalar context. The returned value is coerced into a signed integer, which is then passed straight to the perl magic API. However, note that perl currently only cares about the return value of the len magic callback and ignores all the others. Starting with Variable::Magic 0.58, a reference returned from a non-len magic callback will not be destroyed immediately but will be allowed to survive until the end of the statement that triggered the magic. This lets you use this return value as a token for triggering a destructor after the original magic action takes place. You can see an example of this technique in the cookbook. Each callback can be specified as : a code reference, which will be called as a subroutine. a string reference, where the string denotes which subroutine is to be called when magic is triggered. If the subroutine name is not fully qualified, then the current package at the time the magic is invoked will be used instead. a reference; CONSTANTS_CODE_COPY_CLONE True for perls that call copy magic when a magical closure prototype is cloned.. COOKBOOK Associate an object to any perl variable! } Recursively cast magic on datastruct. 
Delayed magic actions Starting with Variable::Magic 0.58, the return value of the magic callbacks can be used to delay the action until after the original action takes place : my $delayed; my $delayed_aux = wizard( data => sub { $_[1] }, free => sub { my ($target) = $_[1]; my $target_data = &getdata($target, $delayed); local $target_data->{guard} = 1; if (ref $target eq 'SCALAR') { my $orig = $$target; $$target = $target_data->{mangler}->($orig); } return; }, ); $delayed = wizard( data => sub { return +{ guard => 0, mangler => $_[1] }; }, set => sub { return if $_[1]->{guard}; my $token; cast $token, $delayed_aux, $_[0]; return \$token; }, ); my $x = 1; cast $x, $delayed => sub { $_[0] * 2 }; $x = 2; # $x is now 4 # But note that the delayed action only takes place at the end of the # current statement : my @y = ($x = 5, $x); # $x is now 10, but @y is (5, 5) PERL MAGIC HISTORY The places where magic is invoked have changed a bit through perl history. Here is a little list of the most recent ones. 5.6.x p14416 : copy and dup magic. 5.8.9 p28160 : Integration of p25854 (see below). p32542 : Integration of p31473 (see below). 5.9.3 p25854 : len magic is no longer called when pushing an element into a magic array. p26569 : local magic. 5.9.5 p31064 : Meaningful uvar magic. p31473 : clear magic was not invoked when undefining an array. The bug is fixed as of this version. 5.10.0 Since PERL_MAGIC_uvaris uppercased, hv_magic_check()triggers copy magic on hash stores for (non-tied) hashes that also have uvar magic. 5.11.x p32969 : len magic is no longer invoked when calling lengthwith a magical scalar. p34908 : len magic is no longer called when pushing / unshifting an element into a magical array in void context. The pushpart was already covered by p25854. g9cdcb38b : len magic is called again when pushing into a magical array in non-void context. EXPORT The functions "wizard", "cast", "getdata" and "dispell" are only exported on request. All of them are exported by the tags ':funcs' and ':all'. All the constants are also only exported on request, either individually or by the tags ':consts' and ':all'. CAVEATS. DEPENDENCIES A C compiler. This module may happen to build with a C++ compiler as well, but don't rely on it, as no guarantee is made in this regard. Carp (core since perl 5), XSLoader (since 5.6.0). SEE ALSO perlguts and perlapi for internal information about magic. perltie and overload for other ways of enhancing objects. AUTHOR Vincent Pit, <perl at profvince.com>,. You can contact me by mail or on irc.perl.org (vincent). BUGS Please report any bugs or feature requests to bug-variable-magic at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes. SUPPORT You can find documentation for this module with the perldoc command. perldoc Variable::Magic This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
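Since the SYNOPSIS block earlier in this page is incomplete, here is a small illustrative sketch (not the module's original synopsis) that uses only the documented wizard, cast and dispell functions:

use Variable::Magic qw(wizard cast dispell);

# A wizard that reports every read and write of the variable it is cast on.
my $wiz = wizard(
    get => sub { print "getting!\n"; return },
    set => sub { print "now set to ${ $_[0] }!\n"; return },
);

my $x = 1;
cast $x, $wiz;      # attach the magic to $x
my $copy = $x;      # prints "getting!"
$x = 2;             # prints "now set to 2!"
dispell $x, $wiz;   # detach the magic again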
http://docs.activestate.com/activeperl/5.26/perl/lib/Variable/Magic.html
2018-02-17T22:59:49
CC-MAIN-2018-09
1518891808539.63
[]
docs.activestate.com
Contents PEP Acceptance This PEP was accepted by Nick Coghlan on the 15th of May, 2012. Motivation Several very good IP address modules for python already exist. The truth is that all of them struggle with the balance between adherence to Pythonic principals and the shorthand upon which network engineers and administrators rely. ipaddress aims to strike the right balance. Rationale The existence of several Python IP address manipulation modules is evidence of an outstanding need for the functionality this module seeks to provide. Background PEP 3144 and ipaddr have been up for inclusion before. The version of the library specified here is backwards incompatible with the version on PyPI and the one which was discussed before. In order to avoid confusing users of the current ipaddr, I've renamed this version of the library "ipaddress". The main differences between ipaddr and ipaddress are: - ipaddress *Network classes are equivalent to the ipaddr *Network class counterparts with the strict flag set to True. - ipaddress *Interface classes are equivalent to the ipaddr *Network class counterparts with the strict flag set to False. - The factory functions in ipaddress were renamed to disambiguate them from classes. - A few attributes were renamed to disambiguate their purpose as well. (eg. network, network_address) - A number of methods and functions which returned containers in ipaddr now return iterators. This includes, subnets, address_exclude, summarize_address_range and collapse_address_list. Due to the backwards incompatible API changes between ipaddress and ipaddr, the proposal is to add the module using the new provisional API status: Relevant messages on python-dev: Specification The ipaddr module defines a total of 6 new public classes, 3 for manipulating IPv4 objects and 3 for manipulating IPv6 objects. The classes are as follows: - IPv4Address/IPv6Address - These define individual addresses, for example the IPv4 address returned by an A record query for (74.125.224.84) or the IPv6 address returned by a AAAA record query for ipv6.google.com (2001:4860:4001:801::1011). - IPv4Network/IPv6Network - These define networks or groups of addresses, for example the IPv4 network reserved for multicast use (224.0.0.0/4) or the IPv6 network reserved for multicast (ff00::/8, wow, that's big). - IPv4Interface/IPv6Interface - These hybrid classes refer to an individual address on a given network. For example, the IPV4 address 192.0.2.1 on the network 192.0.2.0/24 could be referred to as 192.0.2.1/24. Likewise, the IPv6 address 2001:DB8::1 on the network 2001:DB8::/96 could be referred to as 2001:DB8::1/96. It's very common to refer to addresses assigned to computer network interfaces like this, hence the Interface name. All IPv4 classes share certain characteristics and methods; the number of bits needed to represent them, whether or not they belong to certain special IPv4 network ranges, etc. Similarly, all IPv6 classes share characteristics and methods. ipaddr makes extensive use of inheritance to avoid code duplication as much as possible. The parent classes are private, but they are outlined here: - _IPAddrBase - Provides methods common to all ipaddr objects. - _BaseAddress - Provides methods common to IPv4Address and IPv6Address. - _BaseInterface - Provides methods common to IPv4Interface and IPv6Interface, as well as IPv4Network and IPv6Network (ipaddr treats the Network classes as a special case of Interface). 
- _BaseV4 - Provides methods and variables (eg, _max_prefixlen) common to all IPv4 classes. - _BaseV6 - Provides methods and variables common to all IPv6 classes. Comparisons between objects of differing IP versions results in a TypeError [1]. Additionally, comparisons of objects with different _Base parent classes results in a TypeError. The effect of the _Base parent class limitation is that IPv4Interface's can be compared to IPv4Network's and IPv6Interface's can be compared to IPv6Network's. Reference Implementation The current reference implementation can be found at: Or see the tarball to include the README and unittests. More information about using the reference implementation can be found at:
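To make the three class families concrete, here is a brief example using the module as it ships in the standard library (the addresses are arbitrary documentation values):

import ipaddress

addr  = ipaddress.IPv4Address("192.0.2.1")        # a single address
net   = ipaddress.IPv4Network("192.0.2.0/24")     # a block of addresses
iface = ipaddress.IPv4Interface("192.0.2.1/24")   # an address on a network

print(addr in net)          # True: membership test
print(iface.network)        # 192.0.2.0/24
print(next(net.subnets()))  # 192.0.2.0/25; subnets() returns an iterator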
http://docs.activestate.com/activepython/3.6/peps/pep-3144.html
2018-02-17T23:01:08
CC-MAIN-2018-09
1518891808539.63
[]
docs.activestate.com
[incr Tcl]

NAME

delete - delete things in the interpreter

SYNOPSIS

itcl::delete option ?arg arg ...?

DESCRIPTION

The delete command is used to delete things in the interpreter. It is implemented as an ensemble, so extensions can add their own options and extend the behavior of this command. By default, the delete command handles the destruction of namespaces. The option argument determines what action is carried out by the command. The legal options (which may be abbreviated) are:

delete class name ?name...?
Deletes one or more [incr Tcl] classes called name. This deletes all objects in the class, and all derived classes as well. If an error is encountered while destructing an object, it will prevent the destruction of the class and any remaining objects. To destroy the entire class without regard for errors, use the "delete namespace" command.

delete object name ?name...?
Deletes one or more [incr Tcl] objects called name. An object is deleted by invoking all destructors in its class hierarchy, in order from most- to least-specific. If all destructors are successful, data associated with the object is deleted and the name is removed as a command from the interpreter. If the access command for an object resides in another namespace, then its qualified name can be used:

itcl::delete object foo::bar::x

delete namespace name ?name...?
Deletes one or more namespaces called name. This deletes all commands and variables in the namespace, and deletes all child namespaces as well. When a namespace is deleted, it is automatically removed from the import lists of all other namespaces.

KEYWORDS

namespace, proc, variable, ensemble

Copyright © 1989-1994 The Regents of the University of California. Copyright © 1994-1996 Sun Microsystems, Inc.
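An illustrative snippet (class and object names are invented) showing the two most common forms:

itcl::class Counter {
    variable count 0
    method bump {} { incr count }
}
Counter c1
Counter c2

itcl::delete object c1        ;# invokes destructors and removes only c1
itcl::delete class Counter    ;# destroys the class, c2 and any derived classes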
http://docs.activestate.com/activetcl/8.5/tcl/itcl/delete.n.html
2018-02-17T23:02:21
CC-MAIN-2018-09
1518891808539.63
[]
docs.activestate.com
platform - System identification support code and utilities

SYNOPSIS

package require platform ?1.0.10?

platform::generic
platform::identify
platform::patterns identifier

DESCRIPTION

The platform package provides several utility commands useful for the identification of the architecture of a machine running Tcl. Whilst Tcl provides the tcl_platform array to identify the current architecture, that information is not always sufficient on its own. This package can be used to allow an application to be shipped with multiple builds of a shared library, so that the same package works on many versions of an operating system.
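A sketch of typical usage (the printed values and the $basedir variable are illustrative assumptions, not taken from the manual page):

package require platform

puts [platform::generic]     ;# e.g. linux-x86_64
puts [platform::identify]    ;# e.g. linux-glibc2.27-x86_64

# Search per-platform directories of prebuilt libraries, most specific first.
foreach pattern [platform::patterns [platform::identify]] {
    set dir [file join $basedir $pattern]
    if {[file isdirectory $dir]} { lappend auto_path $dir }
}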
http://docs.activestate.com/activetcl/8.6/tcl/TclCmd/platform.html
2018-02-17T23:03:32
CC-MAIN-2018-09
1518891808539.63
[]
docs.activestate.com
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.

Class: Aws::Plugins::GlacierChecksums

Inherits: Seahorse::Client::Plugin
Defined in: aws-sdk-core/lib/aws-sdk-core/plugins/glacier_checksums.rb

Overview

Computes the :checksum of the HTTP request body for operations that require the X-Amz-Sha256-Tree-Hash header. This includes:

:complete_multipart_upload
:upload_archive
:upload_multipart_part

The :upload_archive and :upload_multipart_part operations accept a :checksum request parameter. If this param is present, then the checksum is assumed to be the proper tree hash of the file to be uploaded. If this param is not present, then the required tree hash checksum will be generated.

The :complete_multipart_upload operation does not accept a checksum, and this plugin will always compute the checksum of the HTTP request body on your behalf.

Defined Under Namespace

Constant Summary

CHECKSUM_OPERATIONS = [ :upload_archive, :upload_multipart_part, ]

Method Summary

Methods inherited from Seahorse::Client::Plugin: #add_handlers, #add_options, #after_initialize, after_initialize, before_initialize, #before_initialize, option

Methods included from Seahorse::Client::HandlerBuilder: #handle, #handle_request, #handle_response
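For context, a hedged example of an operation this plugin applies to; because no :checksum is supplied, the plugin computes the X-Amz-Sha256-Tree-Hash header itself (region, vault name, and file are placeholders):

require 'aws-sdk'

glacier = Aws::Glacier::Client.new(region: 'us-east-1')

resp = glacier.upload_archive(
  account_id: '-',                  # '-' means the caller's own account
  vault_name: 'my-vault',
  body: File.open('backup.tar.gz')  # no :checksum given, so the plugin computes it
)
puts resp.archive_id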
https://docs.aws.amazon.com/sdkforruby/api/Aws/Plugins/GlacierChecksums.html
2018-02-17T23:46:23
CC-MAIN-2018-09
1518891808539.63
[]
docs.aws.amazon.com
Event ID 108 — Windows Media Center Extender Connectivity Applies To: Windows Vista The Windows Media Center Extender connects to the Windows Media Center PC using UDP and TCP protocols. Event Details Resolve Resolution steps are not available Troubleshooting information specific to this issue is not available. For general troubleshooting information that might help you resolve the problem, see Firewalls and Media Center Extender on the Microsoft Web site (). Verify Using the event log, you can verify that the Windows Media Center Extender connected to the Windows Media Center PC. If the connection is successful, Event ID 113 with the McrMgr event source will be written to the event log. To perform this procedure, you must be a member of the local Administrators group, or you must have been delegated the appropriate authority. To verify that Event ID 113 is being written to the event log: - Click Start, and then click Control Panel. - Double-click Administrative Tools, and then click Event Viewer. - If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue. - Expand Application and Services Logs, and then click Media Center. - Look for an event with a Source named McrMgr and an Event ID of 113. Related Management Information Windows Media Center Extender Connectivity Windows Media Center Extender
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc733477(v=ws.10)
2018-02-18T00:07:16
CC-MAIN-2018-09
1518891808539.63
[array(['images/ee406008.red%28ws.10%29.jpg', None], dtype=object)]
docs.microsoft.com
Migration Store Encryption

Applies To: Windows 7

This topic discusses USMT options for migration store encryption to protect the integrity of user data during a migration.

USMT 4.0 Encryption Options

USMT 4.0 enables support for stronger encryption algorithms for the migration store. The encryption algorithm must be specified for both the ScanState and the LoadState commands, so that these commands can create or read the store during encryption and decryption. The new encryption algorithms can be specified on the ScanState and LoadState command lines by using the /encrypt:"encryptionstrength" and the /decrypt:"encryptionstrength" command-line options. All of the encryption application programming interfaces (APIs) used by USMT are available in the Windows® XP, Windows Vista®, and Windows® 7 operating systems; the /encrypt and /decrypt switches shown above are the new command-line encryption options in USMT 4.0.
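For illustration only (store path, key, and XML file names are placeholders, and you should check the USMT command-line reference for the exact algorithm names supported on your build):

scanstate \\server\share\migstore /i:migdocs.xml /i:migapp.xml /encrypt:AES_256 /key:"MyKeyString" /o
loadstate \\server\share\migstore /i:migdocs.xml /i:migapp.xml /decrypt:AES_256 /key:"MyKeyString"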
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-7/dd560765(v=ws.10)
2018-02-17T23:52:23
CC-MAIN-2018-09
1518891808539.63
[]
docs.microsoft.com
In this chapter we are going to create a simple game. Corona uses the Lua scripting language. If you've ever programmed in any language, you will find Lua an easy transition. Our Introduction to Lua guide provides an overview of Lua, or you can learn Lua on YouTube. In a very short time, you will find yourself writing that great app that you've dreamed of creating! If you're completely new to programming, Lua is still easy to learn and this guide will help you along.

In addition to Corona, you will need a text editor. There are several editors available and choosing one is much like choosing a car: everybody has their own preferences and you should explore which one suits you best. If you don't already have a favorite text editor, the following options are recommended:

Your first app is going to be very simple, but it will demonstrate some important concepts. We are going to make a simple tapping game to keep a balloon in the air. Each time the balloon is tapped, we will "push" it a little higher.

The best way to use this guide is to follow every step: type in the code, add the images, and see your project gradually come to life. It may take a little more time, but you will gain a much better understanding. Included with each chapter is a downloadable file which contains all of the images, audio files, and other assets related to that chapter. This chapter's source files are available here.

Creating a new project in Corona is easy. In just a few simple steps you'll be ready to make your first app!

Open the Corona Simulator. Click New Project from the welcome window or select New Project... from the File menu. For the application name, type BalloonTap and ensure that the Blank template option is selected. Leave the other settings at default and click OK (Windows) or Next (Mac). This will create the basic files for your first game in the location (folder) that you specified. This is also the folder in which you'll place all of your app files/assets, including images, program files, etc.

For this project, you will need three image files, placed within the BalloonTap project folder created above: To get going quickly, you can download and use the default images included with this chapter's source files. Inside the archive, you'll find the three images outlined above. If you choose to create your own images for this project or any other project, note these basic image guidelines:

The first image that we need to load is the background. Corona places everything on the screen from back to front in regards to layering, so the first image we load will exist behind other images that are loaded afterward. While there are ways to change the layering order of images and send them to the back or front of the display stack, we'll keep this project simple and load them in a logical order.

Using your chosen text editor, locate and open the main.lua file within your project folder. The main.lua file is the foundational file of every Corona project; it is where the program begins. In this main.lua file, type in the highlighted command:

-----------------------------------------------------------------------------------------
--
-- main.lua
--
-----------------------------------------------------------------------------------------

local background = display.newImageRect( "background.png", 360, 570 )

There are several aspects involved with this command. Let's break it down:

local is a Lua command indicating that the next word will be a variable. A variable, just like you learned in math class, is used to store a value. In this case, that value will be an image used as your background.
Note that local is always lowercase and it's used here to declare a variable; the first time you use a variable, you should add the word local in front of it.

background is the name of our variable. Any time we want to make changes to the image stored in background, we will use this variable name. Remember to always use different variable names each time you use a variable. Just as it gets confusing if everyone in a classroom is named "John," using the same variable name for all of your objects creates confusion in your program.

The = (equal sign) is used to assign the variable background to an image.

display.newImageRect() is one of the Corona APIs; it is special in that it can resize/scale the image (this will be explained in just a moment).

Inside the parentheses are the parameters which we pass to display.newImageRect(), sometimes referred to as arguments. The first parameter is the name of the image file that we want to load, including the file extension ( .png). The specified name must match the actual file name exactly, including case-sensitive matching! For instance, if the actual file name is background.png, do not enter it as "BackGround.PNG". The next two parameters, 360 and 570, specify the size that we want the background image to be. In this case, we'll simply use the image's pixel dimensions, although as noted above, display.newImageRect() allows you to resize/scale the image via these numbers.

The final step for the background is to position it at the correct location on the screen. Immediately following the line you just entered, add the two highlighted commands:

local background = display.newImageRect( "background.png", 360, 570 )
background.x = display.contentCenterX
background.y = display.contentCenterY

By default, Corona will position the center of an object at the coordinate point of 0,0, which is located in the upper-left corner of the screen. By changing the object's x and y properties, however, we can move the background image to a new location. For this project, we'll place the background in the center of the screen, but what if we don't know exactly which coordinate values represent the center? Fortunately, Corona provides some convenient shortcuts for this. When you specify the values display.contentCenterX and display.contentCenterY, Corona will set the center coordinates of the screen as the background.x and background.y properties.

Let's check the result of your code! Save your modified main.lua file and then, from within the Corona Simulator, "relaunch" it using the Relaunch command. If you get an error or can't see the background, there are a few possibilities as to the cause: check that the image file is located in the same folder as main.lua, and remember that the file name in your code must match the actual file name exactly, including case.

Time to load the platform. This is very similar to loading the background. Following the three lines of code you've already typed, enter the following highlighted commands:

local background = display.newImageRect( "background.png", 360, 570 )
background.x = display.contentCenterX
background.y = display.contentCenterY

local platform = display.newImageRect( "platform.png", 300, 50 )
platform.x = display.contentCenterX
platform.y = display.contentHeight-25

As you probably noticed, there is one minor change compared to the background: instead of positioning the platform in the vertical center, we want it near the bottom of the screen. By using the command display.contentHeight, we know the height of the content area. But remember that platform.y places the center of the object at the specified location.
So, because the height of this object is 50 pixels, we subtract 25 pixels from the value, ensuring that the entire platform can be seen on screen. Save your main.lua file and relaunch the Simulator to see the platform graphic. To load the balloon, we'll follow the same process. Below the previous commands, type these lines: local balloon = display.newImageRect( "balloon.png", 112, 112 ) balloon.x = display.contentCenterX balloon.y = display.contentCenterY In addition, to give the balloon a slightly transparent appearance, we'll reduce the object's opacity (alpha) slightly. On the next line, set the balloon's alpha property to 80% ( 0.8): local balloon = display.newImageRect( "balloon.png", 112, 112 ) balloon.x = display.contentCenterX balloon.y = display.contentCenterY balloon.alpha = 0.8 Save your main.lua file and relaunch the Simulator. There should now be a balloon in the center of the screen. Time to get into physics! Corona includes the Box2D physics engine for your use in building apps. While using physics is not required to make a game, it makes it much easier to handle many game situations. Including physics is very easy with Corona. Below the previous lines, add these commands: local physics = require( "physics" ) physics.start() Let's explain these two lines in a little more detail: The command local physics = require( "physics" ) physics for later reference. This gives you the ability to call other commands within the physics library using the physics namespace variable, as you'll see in a moment. physics.start() does exactly what you might guess — it starts the physics engine. If you save and relaunch you won't see any difference in your physics.addBody: local physics = require( "physics" ) physics.start() physics.addBody( platform, "static" ) This tells the physics engine to add a physical "body" to the image that is stored in platform. In addition, the second parameter tells Corona to treat it as a static physical object. What does this mean? Basically, static physical objects are not affected by gravity or other physical forces, so anytime you have an object which shouldn't move, set its type to "static". Now add a physical body to the balloon: local physics = require( "physics" ) physics.start() physics.addBody( platform, "static" ) physics.addBody( balloon, "dynamic", { radius=50, bounce=0.3 } ) In contrast to the platform, the balloon is a dynamic physical object. This means that it's affected by gravity, that it will respond physically to collisions with other physical objects, etc. In this case, the second parameter ( "dynamic") is actually optional because the default body type is already dynamic, but we include it here to help with the learning process. The final part of this physics.addBody command is used to adjust the balloon's body properties — in this case we'll give it a round shape and adjust its bounce/restitution value. Parameters must be placed in curly brackets ( {}) (referred to as a table in the Lua programming language). Because the balloon is a round object, we assign it a radius property with a value of 50. This value basically matches the size of our balloon image, but you may need to adjust it slightly if you created your own balloon image. The bounce value can be any 0 means that the balloon has no bounce, while a value of 1 will make it bounce back with 100% of its collision energy. A value of 0.3, as seen above, will make it bounce back with 30% of its energy. 
A bounce value greater than 1 will make an object bounce back with more than 100% of its collision energy. Be careful if you set values above 1 since the object may quickly gain momentum beyond what is typical or expected.

Even if you change the balloon's bounce property to 0, it will still bounce off the platform object because, by default, objects have a bounce value of 0.2. To completely remove bouncing in this game, set both the balloon and the platform to bounce=0.

Save your main.lua file and relaunch the Simulator. As a fun experiment, you can try adjusting the bounce value and relaunch the project to see the effect.

At this point, we have a balloon that drops onto a platform and bounces slightly. That's not very fun, so let's make this into a game!

For our balloon tap game to work, we need to be able to push the balloon up a little each time it's tapped. To perform this kind of feature, programming languages use what are called functions. Functions are short (usually) sections of code that only run when we tell them to, like when the player taps the balloon. Let's create our first function:

local function pushBalloon()
end

Functions are essential to developing apps with Corona, so let's examine the basic structure:

As before, we use the keyword local to declare the function. The keyword function tells Corona that this is a function and that its set of commands will be called by the name pushBalloon. The ending parentheses ( () ) are required. In later chapters we will put something inside these parentheses, but for now you can leave this as shown. As shown above, functions are closed with the keyword end. This tells Lua that the function is finished.

Excellent, we now have a function! However, it's currently an empty function, so it won't actually do anything if we run it. Let's fix that by adding the following line of code inside the function, between where we declare the function (its opening line) and the closing end keyword:

local function pushBalloon()
    balloon:applyLinearImpulse( 0, -0.75, balloon.x, balloon.y )
end

It's considered good programming practice to indent the code inside a function by at least one tab or a few spaces; this makes the structure easier to read.

balloon:applyLinearImpulse is a really cool command. When applied to a dynamic physical object like the balloon, it applies a "push" to the object in any direction. The parameters that we pass tell the physics engine how much force to apply (both horizontally and vertically) and also where on the object's body to apply the force.

The first two parameters, 0 and -0.75, indicate the amount of directional force. The first number is the horizontal, or x direction, and the second number is the vertical, or y direction. Since we only want to push the balloon upwards (not left or right), we use 0 as the first parameter. For the second parameter, with a value of -0.75, we tell the physics engine to push the balloon up a little bit. The value of this number determines the amount of force that is applied: the bigger the number, the higher the force.

The third and fourth parameters, balloon.x and balloon.y, tell the physics engine where to apply the force, relative to the balloon itself. If you apply the force at a location which is not the center of the balloon, it may cause the balloon to move in an unexpected direction or spin around. For this game, we will keep the force focused on the center of the balloon.

That's it! We could, if needed, add additional commands inside the pushBalloon() function, but for this simple game, we only need to push the balloon upward with a small amount of force.
Events are what create interactivity and, in many ways, Corona is an Adding an event listener is easy — do so now, following the function: local function pushBalloon() balloon:applyLinearImpulse( 0, -0.75, balloon.x, balloon.y ) end balloon:addEventListener( "tap", pushBalloon ) Let's inspect the structure of this new command: First, we must tell Corona which object is involved in the event listener. For this game, we want to detect an event related directly to the balloon object. Immediately following this, add a colon ( :), then addEventListener. In Lua, this is called an object method. Essentially, addEventListener, following the colon, tells Corona that we want to add an event listener to balloon, specified before the colon. Inside the parentheses are two parameters which complete the command. The first parameter is the event type which Corona will listen for, in this case "tap". The second parameter is the function which should be run (called) when the event occurs, in this case the pushBalloon() function which we wrote in the previous section. Essentially, we're telling Corona to run the pushBalloon() function every time the user taps the balloon. That is everything — you have a functional game now! If you save your main.lua file and relaunch the Simulator, it should be ready to go. Try your best to continue tapping/clicking the balloon and preventing it from touching the platform! Here is the complete program, just in case you missed something: ----------------------------------------------------------------------------------------- -- -- main.lua -- ----------------------------------------------------------------------------------------- local background = display.newImageRect( "background.png", 360, 570 ) background.x = display.contentCenterX background.y = display.contentCenterY local platform = display.newImageRect( "platform.png", 300, 50 ) platform.x = display.contentCenterX platform.y = display.contentHeight-25 local balloon = display.newImageRect( "balloon.png", 112, 112 ) balloon.x = display.contentCenterX balloon.y = display.contentCenterY balloon.alpha = 0.8 local physics = require( "physics" ) physics.start() physics.addBody( platform, "static" ) physics.addBody( balloon, "dynamic", { radius=50, bounce=0.3 } ) local function pushBalloon() balloon:applyLinearImpulse( 0, -0.75, balloon.x, balloon.y ) end balloon:addEventListener( "tap", pushBalloon ) Congratulations, you have created a basic game in just 30 lines of code! But there is something missing, isn't there? Wouldn't it be nice if the game kept track of how many times the balloon was tapped? Fortunately that's easy to add! First, let's create a local Lua variable to keep track of the tap count. You can add this at the very top of your existing code. In this case, we'll use it to store an integer instead of associating it with an image. Since the player should begin the game with no score, we'll initially set its value to 0, but this can change later. local tapCount = 0 local background = display.newImageRect( "background.png", 360, 570 ) background.x = display.contentCenterX background.y = display.contentCenterY Next, let's create a visual object to display the number of taps on the balloon. Remember the rules of layering discussed earlier in this chapter? New objects will be placed in front of other objects which were loaded previously, so this object should be loaded after you load the background (otherwise it will be placed behind the background and you won't see it). 
After the three lines which load/position the background, add the following highlighted command:

local tapCount = 0

local background = display.newImageRect( "background.png", 360, 570 )
background.x = display.contentCenterX
background.y = display.contentCenterY

local tapText = display.newText( tapCount, display.contentCenterX, 20, native.systemFont, 40 )

Let's inspect this command in more detail:

The command begins with local tapText, which declares a new local variable named tapText to hold the text object. display.newText() is another Corona API, but instead of loading an image as we did earlier, this command creates a text object. Because we are assigning the variable tapText to this object, we'll be able to make changes to the text during our game, such as changing the printed number to match how many times the balloon was tapped.

Inside the parentheses are the parameters which we pass to display.newText(). The first parameter is the initial printed value for the text, but notice that instead of setting a direct string value like "0", we actually assign the variable which we declared earlier ( tapCount). In Corona, it's perfectly valid to specify a variable as a parameter of an API, as long as it's a valid variable and the API accepts the variable's type as that parameter.

The second two parameters, display.contentCenterX and 20, are used to position this text object on the screen. You'll notice that we use the same shortcut of display.contentCenterX to position the object in the horizontal center of the screen, and 20 to set its vertical y position near the top of the screen.

The fourth parameter for this API is the font in which to render the text. Corona supports custom fonts across all platforms, but for this game we'll use the default system font by specifying native.systemFont.

The final parameter ( 40 ) is the intended size of the rendered text.

Let's check the result of this new code. Save your modified main.lua file and relaunch the Simulator. If all went well, the text object should now be showing, positioned near the top of the screen.

Continuing with our program: by default, text created with display.newText() will be white. Fortunately, it's easy to change this. Directly following the line you just added, type the highlighted command:

local tapText = display.newText( tapCount, display.contentCenterX, 20, native.systemFont, 40 )
tapText:setFillColor( 0, 0, 0 )

Simply put, this setFillColor() command modifies the fill color of the object tapText. The setFillColor() command accepts up to four numeric parameters (red, green, blue, and alpha), each in the range of 0 to 1. Here we set red, green, and blue to 0, which renders the text in black (alpha defaults to 1 and it can be omitted in this case).

Let's move on! The new text object looks nice, but it doesn't actually do anything. To make it update when the player taps the balloon, we'll need to modify our pushBalloon() function. Inside this function, following the balloon:applyLinearImpulse() command, insert the two highlighted lines:

local function pushBalloon()
    balloon:applyLinearImpulse( 0, -0.75, balloon.x, balloon.y )
    tapCount = tapCount + 1
    tapText.text = tapCount
end

Let's examine these lines individually:

The first new command, tapCount = tapCount + 1, increments the tapCount variable by 1 each time the balloon is tapped.

The second new command, tapText.text = tapCount, updates the text property of our tapText object. This allows us to quickly change text without having to create a new object each time. Look carefully: to update the on-screen text, we update a property of the text object, not the object itself.
In this case, we modify the text property of tapText by writing tapText.text. Because we increment the tapCount variable on the line directly before and then update the text object with that same variable value, the visual display will always mirror the internal tapCount value.

That's it! If you save your main.lua file and relaunch the Simulator, your game is essentially finished. Now, each time you tap the balloon, the counter at the top of the screen will increase by 1, effectively keeping score.

We've covered many concepts in this chapter. It may seem a bit overwhelming, but be patient, look at your code, and read through the sections again if necessary. If you need help, the Corona Forums are a friendly venue to communicate with other Corona developers and staff members.

Here's a quick overview of what you learned in this chapter: creating a new project, loading and positioning images, adding physics bodies, writing a function, and wiring it to a tap event listener.
http://docs.coronalabs.com.s3-website-us-east-1.amazonaws.com/guide/programming/01/index.html
2018-02-17T23:11:35
CC-MAIN-2018-09
1518891808539.63
[]
docs.coronalabs.com.s3-website-us-east-1.amazonaws.com
In order to report request queuing, New Relic agents depend on an HTTP header set by the front-end web server (such as Apache or Nginx) or load balancer (such as HAProxy or F5). These examples use the X-Request-Start header, since it has broader support across platforms. If this does not work with your server configuration, try using the X-Queue-Start header. The syntax should otherwise be the same.

Apache

Apache's mod_headers module includes a %t variable that is formatted correctly. To enable request queue reporting, add this code to your Apache config:

RequestHeader set X-Request-Start "%t"

Nginx

If you are using Nginx version 1.2.6 or higher and the latest version of the Ruby, Python, or PHP agent, Nginx can easily be configured to report queue time. (For Nginx versions 1.2.6 or lower, you must recompile Nginx with a module or patch.) Configuring with Nginx 1.2.6 or higher uses the ${msec} variable, which is a number in seconds with milliseconds resolution. Add the appropriate header-setting directive to your Nginx config (a sample configuration is sketched at the end of this section).

F5 load balancers

For F5 load balancers, use this configuration snippet:

when HTTP_REQUEST_SEND {
    # TCL 8.4 so we have to calculate the time in millisecond resolution
    # Calculation from:? fromgroups=#!topic/comp.lang.tcl/tV9H6TDv0t8
    set secs [clock seconds]
    set ms [clock clicks -milliseconds]
    set base [expr { $secs * 1000 }]
    set fract [expr { $ms - $base }]
    if { $fract >= 1000 } {
        set diff [expr { $fract / 1000 }]
        incr secs $diff
        incr fract [expr { -1000 * $diff }]
    }
    set micros [format "%d%03d000" $secs $fract]
    # Want this header inserted as if coming from the client
    clientside {
        HTTP::header insert X-Request-Start "t=${micros}"
    }
}

Network timing

Even with request queuing configured, the front-end server's setup can still affect network time in your Browser data. This is because the front-end server does not add the queuing time header until after it actually accepts and processes the request. The queuing time headers can never account for backlog in the listener socket used to accept requests. For example, if the front-end server's configuration results in a backlog of requests that queue in the listener socket, page load timing will show an increase in network time.

For more help

Additional documentation resources include:
- Request queuing and tracking front-end time (overview of viewing request queue information in the New Relic user interface and accounting for clock skew)
- Configuring request queue reporting (configuration instructions for New Relic agents)
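The Nginx configuration referenced above, sketched from the commonly documented approach for Nginx 1.2.6+ (treat this as a sketch and verify against current New Relic documentation); use the directive that matches how requests reach your app servers:

# When proxying to backend app servers (proxy_pass):
proxy_set_header X-Request-Start "t=${msec}";

# When using FastCGI (for example PHP-FPM):
fastcgi_param HTTP_X_REQUEST_START "t=${msec}";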
https://docs.newrelic.com/docs/apm/applications-menu/features/request-queue-server-configuration-examples
2018-02-17T22:58:24
CC-MAIN-2018-09
1518891808539.63
[]
docs.newrelic.com
JForm::getFieldAttribute

From Joomla! Documentation

Revision as of 18:08
https://docs.joomla.org/index.php?title=API17:JForm::getFieldAttribute&direction=next&oldid=56868
2015-06-30T03:35:31
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
How to create a custom button

From Joomla! Documentation

Revision as of 13:05, 18 June by Chris Davenport (talk | contribs)
https://docs.joomla.org/index.php?title=How_to_create_a_custom_button&oldid=7788
2015-06-30T04:22:36
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
Difference between revisions of "Enabling Search Engine Friendly (SEF) URLs on Apache" From Joomla! Documentation Revision as of 22:46, 11. won't see this unless you change default SEF Global Config settings above. use SEF Ugly links is to browse to the target page while editing the other. Even with the three settings above, this what you may get Admin is responsible for ensuring alias names are unique for each page of your site, otherwise visiting results prior search value history about that page unless you take more technical measures. Tim McCully. In Genesis God uses one word for the light of day, the best part of hat we call "day" now. 03:46, 12 February 2011 (UTC)
https://docs.joomla.org/index.php?title=Enabling_Search_Engine_Friendly_(SEF)_URLs_on_Apache&diff=37089&oldid=37088
2015-06-30T03:51:16
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
Information for "Joomla.whatsnew10.15" Basic information Display titleHelp15:Joomla.whatsnew10.15 Default sort keyJoomla.whatsnew10.15 Page length (in bytes)5,509 Page ID134208:42, 23 April 2008 Latest editorChris Davenport (Talk | contribs) Date of latest edit16:54, 22 March 2010 Total number of edits4 Total number of distinct authors3 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded template (1)Template used on this page: Template:Cathelp (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Help15:Joomla.whatsnew10.15&action=info
2015-06-30T03:45:57
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
Difference between revisions of "How do you create a custom module?"

From Joomla! Documentation

Revision as of 21:44, 23 October 2011

Quick Overview: To make a custom module, go to Extensions, then Module Manager, and click on the New icon. You'll see a list of available modules. Click on Custom HTML and begin editing the module.

Detail: A module is a special program, or even a special type of article, that can be displayed in the Module Positions available in your Joomla website's template. To see all of your template's available module positions, add the code ?tp=1 after one of your page URLs.

To see the list of all the modules on your site: In the administrator site, go to Extensions > Module Manager. Before you create a new one, look at an example: Login Form - it is of a particular type, mod_login.

Title - the title of this module
Show Title - no or yes, depending on whether you want the title displayed
Position - this relates to your available Module Positions referred to above
Order - the order the module is displayed in if there is more than one module in its position (you control this after saving the module and looking at all the modules for this position - see Order of the Modules for a position, below)

Menu Assignment: This is where you choose when this module will be displayed. All, None, Selected Menu Item(s). For the example Login Form, the Module will only display when the user is in the Home menu.

Module Parameters: Some modules have Parameters associated with them that are set from this screen.

Create your new 'Custom Module'

To display some information on some or all pages, in say the left column (a very common module position), you can go to the Module Manager and create a NEW custom module. In Module Manager, hit the New button and choose Custom HTML. Choose a title, whether to show it, enable your module, select a position, assign menus, then fill in your Custom Output in the edit screen below the Menu Assignment.

Parameters: Note that Module Class Suffix is a setting that relates to the Template you are using - it may not be set. Refer to the template's documentation to use this parameter.

Try it out: Put in a test module and see what you can achieve. Set Enabled to "No" to turn it off if you are not ready to display it to the world.

Order of the Modules for a position: In the Module Manager, restrict which modules you are looking at by choosing the position from the drop-down menu (Select Position). You can now use the green arrows to move Modules up or down, or re-number the modules and click on the small save icon (between Order and Access Level) to save your new order.

Access Level: You use this to control who can see a module. For example, the User Menu has a red Registered as its access level - this means that this Module is only seen when a registered user is logged in. As a Super Administrator, you can restrict the Access Level to Special, and then only you (and the other admins!) will see a module when you are logged in on the front of the site.
https://docs.joomla.org/index.php?title=How_do_you_create_a_custom_module%3F&diff=62711&oldid=30832
2015-06-30T04:35:03
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
Changes related to "Configuring Komodo Edit for Joomla Code Completion" ← Configuring Komodo Edit for Joomla Code Completion This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20130913184302&target=Configuring_Komodo_Edit_for_Joomla_Code_Completion
2015-06-30T03:34:59
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
@Target(value=METHOD)
@Retention(value=RUNTIME)
@Documented
public @interface Begin

Marks a method as beginning a long-running conversation, if none exists, and if the method returns a non-null value without throwing an exception. A null outcome never begins a conversation. If the method is of type void, a conversation always begins.

public abstract String[] ifOutcome
public abstract boolean nested
public abstract boolean join
public abstract String pageflow
public abstract String id
public abstract FlushModeType flushMode
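A hedged usage sketch (component and method names are invented; only annotation elements listed above are used):

import org.jboss.seam.annotations.Begin;
import org.jboss.seam.annotations.FlushModeType;
import org.jboss.seam.annotations.Name;

@Name("hotelSearch")
public class HotelSearchAction {

    // Begins a long-running conversation the first time a hotel is selected,
    // joining the current conversation if one is already active.
    @Begin(join = true, flushMode = FlushModeType.MANUAL)
    public String selectHotel() {
        // load the hotel and place it in conversation scope ...
        return "book";
    }
}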
http://docs.jboss.org/seam/2.2.1.CR3/api/org/jboss/seam/annotations/Begin.html
2015-06-30T03:36:25
CC-MAIN-2015-27
1435375091587.3
[]
docs.jboss.org
Bivariate spline s(x,y) of degrees kx and ky on the rectangle [xb,xe] x [yb,ye] calculated from a given set of data points (x,y,z).

See also:

bisplrep, bisplev - an older wrapping of FITPACK
UnivariateSpline - a similar class for univariate spline interpolation
SmoothBivariateSpline - to create a BivariateSpline through the given points

Methods
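A brief sketch using the related SmoothBivariateSpline subclass to fit and evaluate a spline surface (the data here is synthetic):

import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Scattered sample points of a smooth surface.
x = np.random.rand(200)
y = np.random.rand(200)
z = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3)  # cubic in both directions
print(spline.ev(0.25, 0.75))                         # evaluate the fitted surface at a point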
http://docs.scipy.org/doc/scipy-0.8.x/reference/generated/scipy.interpolate.BivariateSpline.html
2015-06-30T03:31:56
CC-MAIN-2015-27
1435375091587.3
[]
docs.scipy.org
Difference between revisions of "Do not use die to debug" From Joomla! Documentation Revision as of 19:07, 28 December 2008 Joomla! 1.5 includes the ability to optionally store the user session in the database. In PHP 5, because of the order in which it does things, a connection to the database will be closed before it fires the session handlers. As a result of this, the common-place practice of using the die( 'test1' ); function will result in a plethora of errors being thrown, similar to the following: Warning: mysqli_query() [<a href='function.mysqli-query'>function.mysqli-query</a>]: Couldn't fetch mysqli in C:\Apache2\htdocs\www_site\libraries\joomla\database\database\mysqli.php on line 147 or: Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: Access denied for user 'root'@'localhost' (using password: NO) in /var/www/html/libraries/joomla/database/database/mysql.php on line 105 In order to stop execution gracefully, you need to use the following code: echo 'Test'; $mainframe->close(); If you are developing your own component, you might like to include your own utility function to provide this functionality: function stop($msg = '') { global $mainframe; echo $msg; $mainframe->close(); }
https://docs.joomla.org/index.php?title=Do_not_use_die_to_debug&diff=12343&oldid=12342
2015-06-30T04:30:18
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
Information for "Component" Basic information: Display title: Talk:Component; Default sort key: Component; Page length (in bytes): 700; Page ID: 35; Page creator: Clogan (Talk | contribs); Date of page creation: 16:37, 22 March 2009; Latest editor: Clogan (Talk | contribs); Date of latest edit: 16:37, 22 March 2009; Total number of edits: 1; Total number of distinct authors: 1; Recent number of edits (within past 30 days): 0; Recent number of distinct authors: 0
https://docs.joomla.org/index.php?title=Talk:Component&action=info
2015-06-30T03:54:17
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
Editing a template with Template Manager From Joomla! Documentation Revision as of 20:10, 23 June 2013 by Tom Hutchison (Talk | contribs) To edit a template with the Template Manager you must first access the Template manager.
https://docs.joomla.org/index.php?title=J3.x:Editing_a_template_with_Template_Manager&oldid=100979
2015-06-30T04:23:27
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
Difference between revisions of "JDocumentHTML/getHeadData" From Joomla! Documentation Latest revision as of 06:30, 27 December 2008 Returns the data destined for the HTML <head> section in array form. Note: this only returns the data added by the set Methods here. It will not return data about files linked directly from html in the template head. Syntax array getHeadData() The array returned contains the following entries:
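As an illustrative sketch (in a Joomla 1.5 template or extension), the returned array could be inspected like this; the 'title' key shown here is an assumption about the array contents, which are not listed above:

```php
<?php
// Get the current document and read back the head data set so far
$document = JFactory::getDocument();
$head     = $document->getHeadData();

// $head is an associative array; e.g. print the page title if present
if (isset($head['title'])) {
    echo $head['title'];
}
```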
https://docs.joomla.org/index.php?title=JDocumentHTML/getHeadData&diff=12327&oldid=12326
2015-06-30T04:47:21
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
Changes related to "Talk:How do you add a new module position?" ← Talk:How do you add a new module position? This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20131118081255&target=Talk%3AHow_do_you_add_a_new_module_position%3F
2015-06-30T03:58:31
CC-MAIN-2015-27
1435375091587.3
[]
docs.joomla.org
Document Bank The working materials in the NRDC Document Bank are listed in reverse chronological order. For additional policy materials including reports and issue papers, see the Issues section of the main NRDC site. Analysis of New York City Department of Sanitation Curbside Recycling and Refuse Costs This analysis of the economics of recycling in New York City was prepared for NRDC by DSM Environmental Services.
http://docs.nrdc.org/recycling/
2014-12-18T03:56:25
CC-MAIN-2014-52
1418802765610.7
[]
docs.nrdc.org
The change to the next panel during the installation wizard can be controlled using validators. This can be achieved by a nested definition in the panel definition in the installer descriptor. Built-in validators can be used:
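A minimal sketch of what such a nested validator definition might look like in the installer descriptor (the panel and validator class names are illustrative assumptions, not taken from the page above):

```xml
<panels>
    <panel classname="UserInputPanel">
        <!-- The validator is run before the wizard is allowed to advance -->
        <validator classname="com.example.installer.MyDataValidator"/>
    </panel>
</panels>
```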
http://docs.codehaus.org/pages/diffpages.action?pageId=228183140&originalId=230397355
2014-12-18T04:14:39
CC-MAIN-2014-52
1418802765610.7
[]
docs.codehaus.org
4:00: Microsoft SQL Server uses a considerable amount of disk space
http://docs.blackberry.com/en/admin/deliverables/20839/TS_Setting_up_user_accnts_header_632881_11.jsp
2014-12-18T04:16:33
CC-MAIN-2014-52
1418802765610.7
[]
docs.blackberry.com
GeoTools 2.4.x Stable Release The stable branch is recommended for almost all users - please report bugs and submit patches for this version. This will be the last Java 1.4 release for the project. Most of the developers are working directly with maven - patch releases will only be made in the case of a serious problem. The 2.4.x branch of GeoTools is the last release available with org.geotools.feature.Feature and Java 1.4. GeoTools 2.5.x Development This is where all the really exciting development is taking place. There are three significant improvements for the 2.5.x branch: the change to Java 5; the switch to the GeoAPI Feature model (performed during the FOSS4G 2007 Code Sprint); and the creation of a User Guide. There is plenty of hard work happening here; from a reimplementation of ArcSDE support to the return of swing widgets to the library. This work is only available via our maven repository at this time. Planning If your organization is making use of GeoTools please talk to us about project goals, timeline and upcoming releases. More information is available on the About page.
http://docs.codehaus.org/pages/viewpage.action?pageId=75366885
2014-12-18T04:10:37
CC-MAIN-2014-52
1418802765610.7
[array(['/download/attachments/593/geotools_banner_small.gif?version=2&modificationDate=1125015458988&api=v2', None], dtype=object) ]
docs.codehaus.org
Installation Guide We will show you how to download and install the different versions of the XAP platform. For a list of the supported platforms and programming language versions please consult the release notes. Please note that the XAP Lite edition is limited to a single partition. Java Installation Download the latest version of XAP from the downloads page. Unzip it into a directory of your choice: * On Windows, you might unzip it into c:\tools\, which will create c:\tools\gigaspaces-xap-premium-9.7.0-ga. * On Unix, you might unzip it into /usr/local/, which will create /usr/local/gigaspaces-xap-premium-9.7.0-ga. You'll also need to grant execution permissions to the scripts in the bin folder. JAVA_HOME Set the JAVA_HOME environment variable to point to the JDK root directory. Setting up your IDE Open your favorite Java IDE (Eclipse, IntelliJ IDEA, etc), create a new project, and add all the jars from gigaspaces-xap-premium-9.7.0-ga/lib/required to the project's classpath. .NET installation. Other Installation Options GigaSpaces XAP.NET offers more installation scenarios and customizations. For example: - Command-line installation. - Packaging XAP.NET in another installation package. - Side-by-side installations. - Using a different jvm. For more information see Advanced Installation Scenarios. License installation GigaSpaces will send you a license to the email address you provided when you downloaded XAP. Enter this information in the gslicense.xml file that is located under the GS_HOME directory.
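For example, on a Unix system the steps above might look like the following shell session (the paths and JDK location are illustrative assumptions):

```bash
# Unzip the distribution and grant execute permissions to the launch scripts
unzip gigaspaces-xap-premium-9.7.0-ga.zip -d /usr/local/
chmod +x /usr/local/gigaspaces-xap-premium-9.7.0-ga/bin/*.sh

# Point JAVA_HOME at the JDK root directory
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
export PATH="$JAVA_HOME/bin:$PATH"
```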
https://docs.gigaspaces.com/xap/9.7/dev-java/installation-guide.html
2021-06-12T13:57:25
CC-MAIN-2021-25
1623487584018.1
[]
docs.gigaspaces.com
JPA Relationships Overview GigaSpaces JPA relationships model is different than Relational Databases model. In GigaSpaces relationships are owned, which means that an owner of a relationship holds the owned entities within itself in Space. For instance, if an Author has a One-to-many relationship with Book, in Space all the Book instances relevant for a specific Author will reside within a Collection in Author. When defining a One-to-one/One-to-many relationship the cascading type should be set to CascadeType.ALL using the relationship’s annotation cascade attribute since no-cascading is unsupported. Setting cascading globally can also be done in orm.xml: <persistence-unit-metadata> <persistence-unit-defaults> <cascade-persist/> </persistence-unit-defaults> </persistence-unit-metadata> Further information can be found on the Modeling your data page. Embedded In the following example we have a Store entity which has an embedded Address property. In this case, the Address property is saved as is within Store. // An Embeddable Address object @Embeddable public class Address implements Serializable { private String street; private String city; private String country; public Address() { } public String getStreet() { return this.street; } public String getCity() { return this.city; } public String getCountry() { return this.country; } /* Additional Getters & Setters */ } // A Store entity with an embedded Address property @Entity public class Store { private Integer id; private Address address; public Store() { } @Id @SpaceId public Integer getId() { return this.id; } @Embedded @SpaceIndex(path = "city") // Address.city is indexed public Address getAddress() { return this.address; } /* Additional Getters & Setters */ } We created an Embeddable Address object and used it as a property in our Store object. Please note that Embeddable classes must be Serializable since they’re transferred over the network. It’s possible to query a Store entity by an Address property in the following way: EntityManager em = emf.createEntityManager(); Query query = em.createQuery("SELECT store FROM com.gigaspaces.objects.Store store WHERE s.address.city = 'London'"); List<Store> result = (List<Store>) query.getResultList(); One-to-one GigaSpaces JPA One-to-one relationship is very similar to an embedded relationship except for the fact that when querying the owner entity it is possible to use an Inner Join. As with Embeddable classes, owned entities in a relationship should always be Serializable since they are transferred over the network. In the following example we show a One-to-one relationship between two entities, Order & Invoice: @Entity public class Order { private Long id; private Date date; private Invoice invoice; public Order() { } @Id @SpaceId public Long getId() { return this.id; } public Date getDate() { return this.date; } @OneToOne(cascade = CascadeType.ALL) @SpaceIndex(path = "sum", type = SpaceIndexType.EXTENDED) // Invoice.sum is indexed public Invoice getInvoice() { return this.invoice; } // Additional Getters & Setters... } // An Invoice entity which is owned in the relationship and // therefore should implement Serializable @Entity public class Invoice implements Serializable { private Long id; private Double sum; public Invoice() { } @Id @SpaceId public Long getId() { return this.id; } public Double getSum() { return this.sum; } // Additional Getters & Setters... 
} For One-to-one relationship we can use an Inner Join for querying: EntityManager em = emf.createEntityManager(); Query query = em.createQuery("SELECT order FROM com.gigaspaces.objects.Order order JOIN o.invoice invoice WHERE invoice.sum > 499.99"); List<Order> orders = (List<Order>) query.getResultList(); We defined an extended index on Invoice.sum and therefore the above query takes advantage of the defined index. One-to-many GigaSpaces JPA One-to-many relationship means that the owner of the relationship stores the owned entities in a collection within itself. As with One-to-one, owned entities in a relationship should always be Serializable since they are transferred over the network. Lets examine the following example: // An Author entity which will be the owner of a relationship. @Entity public class Author { private Integer id; private String name; private List<Book> books; public Author() { } @Id @SpaceId public Integer getId() { return this.id; } @SpaceRouting // Routing is determined by the author's name public String getName() { return this.name; } @OneToMany(cascade = CascadeType.ALL) @SpaceIndex(path = "[*].id") // Books are indexed by their id public List<Book> getBooks() { return this.books; } // Additional Getters & Setters.. } // A Book entity which is owned in a relationship // Book shouuld implement Serializable since its transferred over the network @Entity public class Book implements Serializable { private Integer id; private String name; public Book() { } @Id @SpaceId public Integer getId() { } public String getName() { } // Additional Getters & Setters.. } We can use a JPQL Inner Join for querying an Author by a specific Book id: EntityManager em = emf.createEntityManager(); Query query = em.createQuery("SELECT author FROM com.gigaspaces.objects.Author author JOIN author.books book WHERE book.id = 100"); Author result = (Author) query.getSingleResult(); We defined an index on Book.id and therefore the above query takes advantage of the defined index. Limitations Working with embedded entities limitations When working with embedded entities its not possible to call JPA methods on owned entities. Owned many-to-many relationship Owned many-to-many relationship is not supported since GigaSpaces data model doesn’t permit it. It is possible to implement such a relation by explicitly setting the Ids for each of the relationship participants. Unowned relationships Unowned relationships where each part of the relation is represented as a Data Type in space is not supported.
https://docs.gigaspaces.com/xap/9.7/dev-java/jpa-relationships.html
2021-06-12T13:32:25
CC-MAIN-2021-25
1623487584018.1
[]
docs.gigaspaces.com
Set a default host for a Splunk instance An event host value is the IP address, host name, or fully qualified domain name of the physical device on the network from which the event originates. Because Splunk software assigns a host value at index time for every event it indexes, host value searches enable you to easily find data originating from a specific device. Default host assignment If you have not specified other host rules for a source (using the information in subsequent topics in this chapter), the default host value for an event is the hostname or IP address of the server running the Splunk instance (forwarder or indexer) consuming the event data. When the event originates on the server on which the Splunk instance is running, that host assignment is correct and there's no need to change anything. However, if all your data is being forwarded from a different host or if you're bulk-loading archive data, you might want to change the default host value for that data. To set the default value of the host field, you can use Splunk Web or edit inputs.conf. Set the default host value using Splunk Web 1. In Splunk Web, click Settings. 3. On the Settings page, click General settings. 4. On the General settings page, scroll down to the Index settings section and change the Default host name. 5. Save your changes. This sets the default value of the host field for all events coming into that Splunk instance. You can override the value for invidividual sources or events, as described later in this chapter. Set the default host value using inputs.conf The default host assignment is set in inputs.conf during installation. You can modify the host value by editing that file in $SPLUNK_HOME/etc/system/local/ or in your own custom application directory in $SPLUNK_HOME/etc/apps/. The host assignment is specified in the [default] stanza. This is the format of the default host assignment in inputs.conf: [default] host = <string> Set <string> to your chosen default host value. <string> defaults to the IP address or domain name of the host where the data originated. Warning: Do not put quotes around the <string> value: host=foo, not host="foo". input If you are running Splunk Enterprise on a central log archive, or you are working with files an file or directory input in this manual. Override the default host value using event data Some situations require you to assign host values by examining the event data. For example, If you have a central log host sending events to your Splunk deployment, you might have several host servers feeding data to that main log server. To ensure that each event has the host value of its originating server, you need to use the event's data to determine the host value. For more information, see Set host values based on event data, 7.0.11, 7.0.13 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/Splunk/6.6.12/Data/SetadefaulthostforaSplunkserver
2021-06-12T14:05:18
CC-MAIN-2021-25
1623487584018.1
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
CDP installation worksheet The primary configuration distribution peer (CDP) is a required server component that you install after the repository. The initial grid is added when you install the primary CDP. Before installing the CDP, use this worksheet to record information specific to your system. The installation parameters in this worksheet correspond to the parameters in the GUI installation and the options file. You can install the optional BMC Atrium Orchestrator Operator Control Panel (OCP), with the CDP. To install the OCP with the CDP, use the OCP worksheet with the CDP installation worksheet. Note If you are installing a non-English version of the Operator Control Panel, you must install it separately from the CDP. You must install the non-English OCP and CDP on separate Tomcat servers. Feature Selection panel Root Directory Selection panel Root Port Selection panel Primary CDP Selection panel Authentication panel CDP Server Auto Start panel Installation Choices Summary panel
https://docs.bmc.com/docs/AtriumOrchestratorPlatform/79/cdp-installation-worksheet-592708069.html
2021-06-12T13:48:23
CC-MAIN-2021-25
1623487584018.1
[]
docs.bmc.com
When you move your on-premises workloads to the cloud, you can continue to rely on ONTAP event monitoring. EMS messages, NAS native auditing, FPolicy, and SNMP are all available in the cloud. If you are already using ONTAP System Manager or Active IQ Unified Manager for on-premises performance monitoring, you can continue to do so in the cloud. Both System Manager and Unified Manager provide detailed reporting and alerting of Cloud Volumes ONTAP health, capacity, and performance.
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-ontap-cloud/GUID-F133D327-E0A6-4E78-843A-437E3E26298A.html
2021-06-12T14:53:03
CC-MAIN-2021-25
1623487584018.1
[]
docs.netapp.com
Crate avr_config[−][src] Foundational crate for retrieving details about the target AVR being compiled for. This crate currently exposes the CPU frequency that is being configured for. The AVR_CPU_FREQUENCY_HZ environment variable All crates that depend on this crate (with the cpu-frequency feature enabled) require $AVR_CPU_FREQUENCY_HZ when targeting AVR. The frequency will then be available in the CPU_FREQUENCY_HZ constant. It is not necessary to set this variable when AVR is not being targeted, for example, when running integration tests on the host machine.
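A small sketch of how the constant might be consumed in a crate that depends on avr-config (the helper function and the u32 type of the constant are assumptions for illustration):

```rust
use avr_config::CPU_FREQUENCY_HZ;

/// Convert clock cycles to microseconds using the frequency that was
/// supplied via the AVR_CPU_FREQUENCY_HZ environment variable at build time.
fn cycles_to_us(cycles: u32) -> u32 {
    cycles / (CPU_FREQUENCY_HZ / 1_000_000)
}

fn main() {
    let us = cycles_to_us(16_000);
    let _ = us;
}
```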
https://docs.rs/avr-config/2.0.1/avr_config/
2021-06-12T14:46:06
CC-MAIN-2021-25
1623487584018.1
[]
docs.rs
Crate pcomb[−][src] Expand description pcomb: parser combinators for the masses This is a tiny parser combinator library for rust. Combinators allow the ability to easily compose several parsing functions to produce a much larger parser with easy control over output types and control flow. At the moment, this library only supports parsing from string slices, and requires that output types support std::iter::Extend. Crucially, it only allows statically generating combinators via the use of rust’s expressive generics. Dynamically composing combinators is in the works, but currently, trait objects do not function.
https://docs.rs/pcomb/0.1.0/pcomb/
2021-06-12T14:38:15
CC-MAIN-2021-25
1623487584018.1
[]
docs.rs
https://docs.splunk.com/Documentation/Splunk/8.2.0/Security/SecuringSplunkEnterprisewithFIPS
2021-06-12T13:49:36
CC-MAIN-2021-25
1623487584018.1
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
UpdateVPCEConfiguration Updates information about an Amazon Virtual Private Cloud (VPC) endpoint configuration. Request Syntax { "arn": " string", "serviceDnsName": " string", "vpceConfigurationDescription": " string", "vpceConfigurationName": " string", "vpceServiceName": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - arn The Amazon Resource Name (ARN) of the VPC endpoint configuration you want to update. Type: String Length Constraints: Minimum length of 32. Maximum length of 1011. Pattern: ^arn:.+ Required: Yes - serviceDnsName The DNS (domain) name used to connect to your private service in your VPC. The DNS name must not already be in use on the internet. Type: String Length Constraints: Minimum length of 0. Maximum length of 2048. Required: No - vpceConfigurationDescription An optional description that provides details about your VPC endpoint configuration. Type: String Length Constraints: Minimum length of 0. Maximum length of 2048. Required: No - vpceConfigurationName The friendly name you give to your VPC endpoint configuration to manage your configurations more easily. Type: String Length Constraints: Minimum length of 0. Maximum length of 1024. Required: No - vpceServiceName The name of the VPC endpoint service running in your AWS account that you want Device Farm to test. Type: String Length Constraints: Minimum length of 0. Maximum length of 2048. Required: No:
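Filling the documented request syntax with illustrative values (the ARN, service name, and DNS name below are made up), a request body might look like:

```json
{
  "arn": "arn:aws:devicefarm:us-west-2:111122223333:vpceconfiguration:example-id",
  "vpceConfigurationName": "internal-test-endpoint",
  "vpceConfigurationDescription": "Endpoint for the staging environment",
  "vpceServiceName": "com.amazonaws.vpce.us-west-2.vpce-svc-0example",
  "serviceDnsName": "internal.example.com"
}
```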
https://docs.aws.amazon.com/devicefarm/latest/APIReference/API_UpdateVPCEConfiguration.html
2021-06-12T14:37:00
CC-MAIN-2021-25
1623487584018.1
[]
docs.aws.amazon.com
Changelog for package mir_msgs 1.0.4 (2019-05-06) Update mir_msgs and mir_actions to MiR 2.3.1 The following changes were made to the actual mir_msgs: * rename mirMsgs -> mir_msgs * rename proximity -> Proximity * rename serial -> Serial * keep MirStatus msg (was replaced by RobotStatus in MiR software 2.0) Contributors: Martin Günther 1.0.3 (2019-03-04) mir_msgs: Compile new msgs + rename mirMsgs -> mir_msgs mir_msgs: Add geometry_msgs dependency Now that we have an actual msg package dependency, we don't need the std_msgs placeholder any more. mir_msgs: Add new messages on kinetic Contributors: Martin Günther 1.0.2 (2018-07-30) 1.0.1 (2018-07-17) 1.0.0 (2018-07-12) Initial release Contributors: Martin Günther
https://docs.ros.org/en/lunar/changelogs/mir_msgs/changelog.html
2021-06-12T14:27:35
CC-MAIN-2021-25
1623487584018.1
[]
docs.ros.org
PATROL KM for Log Management The PATROL KM for Log Management monitors text, script, named pipe, and binary files in your environment. The KM provides the following monitoring features: - Automatically monitors key log files - Monitors files that do not currently exist on the system - Monitors log files with dynamic names using wild card characters - Monitors the size of log files - Monitors the growth rate of log files - Monitors the content of log files - Monitors the state of log files - Monitors the age of the log files - Monitors log files using numeric comparisons The PATROL KM for Log Management also provides the following management features: - Triggers alerts when a log file exceeds a specified size - Triggers alert when a text string or regular expression is discovered within a log file - Creates automated recovery actions when a log file exceeds an acceptable size or growth rate - Configures log searches to - Ignore subsequent alerts for a specified number of polling cycles if the search finds a matching string or regular expression in a log file - Override an ignored alert if the search finds a matching string or regular expression more than n times before the ignore setting is completed - Specify the number of log scan cycles after which a WARN or ALARM state is automatically changed to OK - Creates robust searches by using NOT and AND statements with the text strings or regular expressions in the log search - Alerts for log file age - Sets multiple schedules for multiple polling cycles per log file - Disables/enables default log monitoring You can set up the following predefined recovery actions to execute when monitored log files exceed a specified size or growth rate. - Clear and back up log files - Delete files - Run in attended and unattended modes To get started with the PATROL KM for Log Management, see Configuring PATROL for Log Management. For detailed instructions, see the BMC PATROL for Log Management 2.7.30 documentation.
https://docs.bmc.com/docs/PATROL4Windows/51/patrol-km-for-log-management-758870334.html
2021-06-12T15:14:00
CC-MAIN-2021-25
1623487584018.1
[]
docs.bmc.com
2.5.16. Configurators¶ For advanced users or plugins writers, the configurators key is available, and holds a list of buildbot.interfaces.IConfigurator. Configurators will run after the master.cfg has been processed, and will modify the config dictionary. Configurator implementers should make sure that they are interoperable with each other, which means carefully modifying the config to avoid overriding a setting already made by the user or by another configurator. Configurators are run (thus prioritized) in the order of the configurators list. 2.5.16.1. JanitorConfigurator¶ Buildbot stores historical information in its database. In a large installation, these can quickly consume disk space, yet in many cases developers never consult this historical information. JanitorConfigurator creates a builder and Nightly scheduler which will regularly remove old information. At the moment it only supports cleaning of logs, but it will contain more features as we implement them. from buildbot.plugins import util from datetime import timedelta # configure a janitor which will delete all logs older than one month, # and will run on sundays at noon c['configurators'] = [util.JanitorConfigurator( logHorizon=timedelta(weeks=4), hour=12, dayOfWeek=6 )] Parameters for JanitorConfigurator are: logHorizon - a timedeltaobject describing the minimum time for which the log data should be maintained hour, dayOfWeek, … - Arguments given to the Nightlyscheduler which is backing the JanitorConfigurator. Determines when the cleanup will be done. With this, you can configure it daily, weekly or even hourly if you wish. You probably want to schedule it when Buildbot is less loaded.
https://docs.buildbot.net/0.9.9.post1/manual/cfg-configurators.html
2021-06-12T14:26:35
CC-MAIN-2021-25
1623487584018.1
[]
docs.buildbot.net
Distributed algorithm for circular formations with air-to-air communications. For more details we refer to Add to your firmware section: Parameters (prefix: DCF_): MAX_NEIGHBORS value: 4; GAIN_K value: 10; RADIUS value: 80; TIMEOUT value: 1500; BROAD_TIME value: 200. These initialization functions are called once on startup. Whenever the specified datalink message is received, the corresponding handler function is called. The following headers are automatically included in modules.h
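The module is normally pulled in from the airframe file's firmware section; a minimal sketch might look like this (the firmware name is an illustrative assumption):

```xml
<firmware name="fixedwing">
  <!-- Load the distributed circular formation module with its default defines -->
  <module name="distributed_circular_formation"/>
</firmware>
```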
https://docs.paparazziuav.org/v5.18/module__distributed_circular_formation.html
2021-06-12T14:39:33
CC-MAIN-2021-25
1623487584018.1
[]
docs.paparazziuav.org
16. Working with Mesh Data¶ 16.1. What’s a mesh?¶ A mesh is an unstructured grid usually with temporal and other components. The spatial component contains a collection of vertices, edges and faces in 2D or 3D space: vertices - XY(Z) points (in the layer’s coordinate reference system) edges - connect pairs of vertices faces - a face is a set of edges forming a closed shape - typically a triangle or a quadrilateral (quad), rarely polygons with more vertices. 16.2. Supported formats¶ () Some examples of mesh datasets can be found at To load a mesh dataset into QGIS, use the Mesh tab in the Data Source Manager dialog. Read Loading a mesh layer for more details. 16.3. Mesh Dataset Properties¶ 16.3.1. Information Properties¶ The Information tab is read-only and represents an interesting place to quickly grab summarized information and metadata on the current layer. Provided information are (based on the provider of the layer) uri, vertex count, face count and dataset groups count. 16.3.2. Source Properties¶ The Source tab displays basic information about the selected mesh, including: the Layer name to display in the Layers panel setting the Coordinate Reference System: Displays the layer’s Coordinate Reference System (CRS). You can change the layer’s CRS by selecting a recently used one in the drop-down list or clicking on Select CRS button (see Coordinate Reference System Selector). Use this process only if the CRS applied to the layer is wrong or if none was applied. Use the Assign Extra Dataset to Mesh button to add more groups to the current mesh layer. 16.3.3. Symbology Properties¶ Click the Symbology button to activate the dialog as shown in the following image: Symbology properties are divided in several tabs: 16.3.3.1. General¶ The tab presents the following items: groups available in the mesh dataset dataset in the selected group(s), for example, if the layer has a temporal dimension metadata if available blending mode available for the selected dataset. The slider , the combo box and the |<, <, >, >| buttons allow to explore another dimension of the data, if available. As the slider moves, the metadata is presented accordingly. See the figure Mesh groups below as an example. The map canvas will display the selected dataset group as well. You can apply symbology to each group using the tabs. 16.3.3.2. Contours Symbology¶ Under Groups, click on to show contours with default visualization parameters. In the tab you can see and change the current visualization options of contours for the selected group, as shown in Fig. 16.7 below:.. 16.3.3.3. Vectors Symbology¶ In the tab , click on to display vectors if available. The map canvas will display the vectors in the selected group with default parameters. Click on the tab to change the visualization parameters for vectors as shown in the image below:: Defined by Min and Max: You specify the minimum and maximum length for the vectors, QGIS will adjust their visualization accordingly Scale to magnitude: You specify the (multiplying) factor to use Fixed: all the vectors are shown with the same length 16.3.3.4. Rendering¶ In the tab , QGIS offers two possibilities to display the grid, as shown in Fig. 16.9: Native Mesh Renderingthat shows quadrants Triangular Mesh Renderingthat display triangles The line width and color can be changed in this dialog, and both the grid renderings can be turned off.
https://docs.qgis.org/testing/en/docs/user_manual/working_with_mesh/mesh_properties.html
2021-06-12T14:32:03
CC-MAIN-2021-25
1623487584018.1
[]
docs.qgis.org
Crate material_yew[−][src] Expand description A Material components library for Yew. It wraps around Material Web Components exposing Yew components. Example usage: use material_yew::MatButton; use yew::html; html! { <MatButton label="Click me!" /> } All the main components from the modules are re-exported. The specialized components used for populating slots and models can be accessed from their respective modules. More information can be found on the website and in the GitHub README
https://docs.rs/material-yew/0.1.0/material_yew/
2021-06-12T14:34:50
CC-MAIN-2021-25
1623487584018.1
[]
docs.rs
Named and Keyed Services¶ Autofac provides three typical ways to identify services. The most common is to identify by type: builder.RegisterType<OnlineState>().As<IDeviceState>(); Services can be further identified using a service name. Using this technique, the Named() registration method replaces As(). builder.RegisterType<OnlineState>().Named<IDeviceState>("online"); To retrieve a named service, the ResolveNamed() method is used: var r = container.ResolveNamed<IDeviceState>("online"); Named services are simply keyed services that use a string as a key, so the techniques described in the next section apply equally to named services. Keyed Services¶ Using strings as component names is convenient in some cases, but in others we may wish to use keys of other types. Keyed services provide this ability. For example, an enum may describe the different device states in our example: public enum DeviceState { Online, Offline } Each enum value corresponds to an implementation of the service: public class OnlineState : IDeviceState { } The enum values can then be registered as keys for the implementations as shown below. var builder = new ContainerBuilder(); builder.RegisterType<OnlineState>().Keyed<IDeviceState>(DeviceState.Online); builder.RegisterType<OfflineState>().Keyed<IDeviceState>(DeviceState.Offline); // Register other components here Resolving Explicitly¶ The implementation can be resolved explicitly with ResolveKeyed(): var r = container.ResolveKeyed<IDeviceState>(DeviceState.Online); This does however result in using the container as a Service Locator, which is discouraged. As an alternative to this pattern, the IIndex type is provided. Resolving with an Index¶ Autofac.Features.Indexed.IIndex<K,V> is a relationship type that Autofac implements automatically. Components that need to choose between service implementations based on a key can do so by taking a constructor parameter of type IIndex<K,V> (see the sketch below). Resolving with Attributes¶ The metadata feature of Autofac provides a KeyFilterAttribute that allows you to mark constructor parameters with an attribute specifying which keyed service should be used. The attribute usage looks like this: public class ArtDisplay : IDisplay { public ArtDisplay([KeyFilter("Painting")] IArtwork art) { ... } } See the metadata documentation for more info on how to get this set up.
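The usage example that normally accompanies the IIndex section is not shown above; a minimal sketch of a component taking an IIndex<K,V> constructor parameter might look like this (the Modem and IHardwareDevice names are illustrative assumptions):

```csharp
using Autofac.Features.Indexed;

public class Modem : IHardwareDevice
{
    private readonly IIndex<DeviceState, IDeviceState> _states;
    private IDeviceState _currentState;

    public Modem(IIndex<DeviceState, IDeviceState> states)
    {
        _states = states;
        SwitchOn();
    }

    private void SwitchOn()
    {
        // Pick the implementation that was registered with the Online key
        _currentState = _states[DeviceState.Online];
    }
}
```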
https://autofaccn.readthedocs.io/en/latest/advanced/keyed-services.html
2020-10-20T00:16:15
CC-MAIN-2020-45
1603107867463.6
[]
autofaccn.readthedocs.io
User Guide¶ Units¶ Units used in CEASIOMpy are normally given in International System of Units, except if something else is mentioned. CPACS format¶ CEASIOMpy is based on the CPACS format. CPACS is an XML file format which allows defining an aircraft geometry and a lot of discipline-specific parameters. CEASIOMpy convention¶ The conventions used in CEASIOMpy are based on the CPACS conventions. The following values are stored in the CPACS file (at /cpacs/vehicles/aircraft/model/reference) * Reference length: … * Reference area: … * Reference points (x,y,z): … TODO: continue
https://ceasiompy.readthedocs.io/en/latest/user_guide/detailed_user_guide.html
2020-10-20T00:42:20
CC-MAIN-2020-45
1603107867463.6
[]
ceasiompy.readthedocs.io
Limitations and behavior for trick play Limitations for trick play mode: - The master playlist must contain Iframe-only segments. Only the key frames from the Iframe track are displayed on the screen. - The audio track and closed captions are disabled. - Play and pause are enabled. - After starting trick play mode, ad breaks are ignored and no ad events are fired. - The timeline exposed by TVSDK to the player is not modified even if ad breaks are skipped. - The current time value jumps forward (on fast forward) or backward (on fast rewind) with the duration of the skipped ad break. This jump behavior for the current time allows the stream duration to remain unmodified during trick play. Your player can track the time relative only to the main content. No time jumps are performed on the values returned for the local time when skipping an ad. - The MediaPlayerEvent.AD_BREAK_SKIPPED event is dispatched immediately before an ad break is about to be skipped. Your player can use this event to implement custom logic related to the skipped ad breaks. - Exiting trick play invokes the same ad playback policy as when exiting seek. As with seeking, the behavior depends on whether your application's playback policy is different from the default. The default is that the last skipped ad break is played at the point where you come out of trick play.
https://docs.adobe.com/content/help/en/primetime/programming/tvsdk-2-7-for-android/content-playback-options/implement-fast-forward/c-psdk-android-2_7-trick-play-limitations.html
2020-10-20T01:59:42
CC-MAIN-2020-45
1603107867463.6
[]
docs.adobe.com
- - - Multiple-Firewall Environment! Multiple-Firewall Environment In a multiple-firewall environment, the Citrix ADC appliance is placed between two sets of firewalls, the external set connecting to the public Internet, and the internal set connecting to the internal private network. The external set typically handles the egress traffic. These firewalls mainly implement access control lists to allow or deny access to external resources. The internal set typically handles the ingress traffic. These firewalls implement security to safeguard the intranet from malicious attacks apart from load-balancing the ingress traffic. The multiple-firewall environment allows you to load-balance traffic coming from another firewall. By default, the traffic coming from a firewall is not load balanced on the other firewall across a Citrix ADC appliance. Having firewall load balancing enabled on both the sides of Citrix ADC improves the traffic flow in both the egress and ingress direction and ensures faster processing of the traffic. The following figure shows a multiple-firewall load balancing environment Figure 1. Firewall Load Balancing (multiple-firewall) With a configuration like the one shown in Figure 1, you can configure the Citrix ADC to load balance the traffic through the an internal firewall even if it is load balanced by an external firewall. For example, with this feature configured, the traffic coming from the external firewalls (firewalls 1, 2, and 3) is load balanced on the internal firewalls (firewalls 4, 5, and 6) and vice versa. Firewall load balancing is supported only for MAC mode LB virtual server. The service type ANY configures the Citrix ADC to accept all traffic. To avail benefits related to HTTP and TCP, configure the service and virtual server with type HTTP or TCP. For FTP to work, configure the service with type FTP. Configuring the Citrix ADC in a Multiple-Firewall Environment To configure a Citrix ADC appliance in a multiple-firewall environment, you have to enable the load balancing feature, configure a virtual server to load balance the egress traffic across the external firewalls, configure a virtual server to load balance the ingress traffic across the internal firewalls, and enable firewall load balancing on the Citrix ADC appliance. To configure a virtual server to load balance traffic across a firewall in the multiple-firewall environment, you need to: - Configure a wildcard service for each firewall - Configure a monitor for each wildcard service - Configure a wildcard virtual server to load balance the traffic sent to the firewalls - Configure the virtual server in MAC rewrite mode - Bind firewall services to the wildcard virtual server Enabling the load balancing feature To configure and implement load balancing entities such as services and virtual servers, you need to enable the load balancing feature on the Citrix ADC device. To enable load balancing by using the CLI: At the command prompt, type the following command to enable load balancing and verify the configuration: enable ns feature <featureName> show ns feature Example: enable ns feature LoadBalancing Done show ns feature Feature Acronym Status ------- ------- ------ 1) Web Logging WL OFF 2) Surge Protection SP ON 3) Load Balancing LB ON . . . 24) NetScaler Push push OFF Done To enable load balancing by using the GUI: - In the navigation pane, expand System, and then click Settings. - In the Settings pane, under Modes and Features, click Change basic features. 
- In the Configure Basic Features dialog box, select the Load Balancing check box, and then click Ok. Configuring a wildcard service for each firewall To accept traffic from all the protocols, you need to configure wildcard service for each firewall by specifying support for all the protocols and ports. To configure a wildcard service for each firewall by using the CLI: At the command prompt, type the following command to configure support for all the protocols and ports: add service <name>@ <serverName> <serviceType> <port_number> Example: add service fw-svc1 10.102.29.5 ANY * To configure a wildcard service for each firewall by using the GUI: Navigate to Traffic Management > Load Balancing > Services. In the details pane, click Add. In the Create Services dialog box, specify values for the following parameters as shown: - Service Name—name - Server—serverName -* A required parameter In Protocol, select Any and in Port, select *. Click Create, and then click Close. The service you created appears in the Services pane. Configuring a monitor for each service A PING monitor is bound by default to the service. You will need to configure a transparent monitor to monitor hosts on the trusted side through individual firewalls. firewall is UP but one of the next hop devices from that firewall is down, the appliance includes the firewall while performing load balancing and forwards the packet to the firewall. However, the packet is not delivered to the final destination because one of the next hop devices is down. By binding a transparent monitor, if any of the devices (including the firewall) are down, the service is marked as DOWN and the firewall is not included when the appliance performs firewall load balancing. Binding a transparent monitor will override the PING monitor. To configure a PING monitor in addition to a transparent monitor, after you create and bind a transparent monitor, you need to bind a PING monitor to the service. To configure a transparent monitor by using the CLI: At the command prompt, type the following commands to configure a transparent monitor and verify the configuration: add lb monitor <monitorName> <type> [-destIP <ip_addr|ipv6_addr|*>] [-transparent (YES | NO )] bind lb monitor <monitorName> <serviceName> Example: add monitor monitor-HTTP-1 HTTP -destip 10.10.10.11 -transparent YES bind monitor monitor-HTTP-1 fw-svc1. To configure a receive string by using the CLI: At the command prompt, type the following command: add lb monitor <monitorName> <type> [-destIP <ip_addr|ipv6_addr|*>] [-transparent (YES | NO )] [-send <string>] [-recv <string>] Example: add lb monitor monitor-udp-1 udp-ecv -destip 10.10.10.11 -transparent YES –send "test message" –recv "site_is_up" To create and bind a transparent monitor by using the GUI: Navigate to Traffic Management > Load Balancing > Monitors. In the details pane, click Add. In the Create Monitor dialog box, specify values for the following parameters as shown: - Name* - Type*—type - Destination IP - Transparent -* A required parameter Click Create, and then click Close. In the Monitors pane, select the monitor that you just configured and verify that the settings displayed at the bottom of the screen are correct. Configuring a virtual server to load balance the traffic sent to the firewalls To load balance any kind of traffic, you need to configure a wildcard virtual server specifying the protocol and port as any value. 
To configure a virtual server to load balance the traffic sent to the firewalls by using the CLI: At the command prompt, type the following command: add lb vserver <name>@ <serviceType> <IPAddress> <port_number> Example: add lb vserver Vserver-LB-1 ANY * * To configure a virtual server to load balance the traffic sent to the firewalls by using the GUI: - Navigate to Traffic Management > Load Balancing > Virtual Servers. - In the details pane, click Add. - In Protocol, select Any, and in IP Address and Port, select *. - Click Create, and then click Close. The virtual server you created appears in the Load Balancing Virtual Servers pane. Configuring the virtual server to MAC rewrite mode To configure the virtual server to use MAC address for forwarding the incoming traffic, you need to enable the MAC rewrite mode. To configure the virtual server in MAC rewrite mode by using the CLI: At the command prompt, type the following command: set lb vserver <name>@ -m <RedirectionMode> Example: set lb vserver Vserver-LB-1 -m MAC To configure the virtual server in MAC rewrite mode by using the GUI: - Navigate to Traffic Management > Load Balancing > Virtual Servers. - In the details pane, select the virtual server for which you want to configure the redirection mode (for example, Vserver-LB1), and then click Open. - On the Advanced tab, under the Redirection Mode mode, click Open. - Click Ok. Binding firewall services to the virtual server To access a service on Citrix ADC appliance, you need to bind it to a wildcard virtual server. To bind firewall services to the virtual server by using the CLI: At the command prompt, type the following command: bind lb vserver <name>@ <serviceName> Example: bind lb vserver Vserver-LB-1 Service-HTTP-1 To bind firewall services to the virtual server by using the GUI: - Navigate to Traffic Management > Load Balancing > Virtual Servers. - In the details pane, select the virtual server for which you want to configure the redirection mode (for example, Vserver-LB1), and then click Open. - In the Configure Virtual Server (Load Balancing) dialog box, on the Services tab, select the Active check box next to the service that you want to bind to the virtual server(for example, Service-HTTP-1 ). - Click Ok. Configuring the multiple-firewall load balancing on the Citrix ADC appliance To load balance traffic on both the sides of a Citrix ADC using firewall load balancing, you need to enable mulitpl-firewall load balancing by using the vServerSpecificMac parameter. To configure multiple-firewall load balancing by using the CLI: At the command prompt, type the following command: set lb parameter -vServerSpecificMac <status> Example: set lb parameter -vServerSpecificMac ENABLED To configure multiple-firewall load balancing by using the GUI: - Navigate to Traffic Management > Load Balancing > Virtual Servers. - In the details pane, select the virtual server for which you want to configure the redirection mode (for example, Configure Load Balancing parameters). - In the Set Load Balancing Parameters dialog box, select the Virtual Server Specific MAC check box. - Click Ok. Saving and Verifying the Configuration When you’ve finished the configuration tasks, be sure to save the configuration. You should also check to make sure that the settings are correct. 
To save and verify the configuration by using the CLI: At the command prompt, type the following commands to configure a transparent monitor and verify the configuration: - save ns config - show vserver Example: save config show lb vserver FWLBVIP2 FWLBVIP2 (*:*) - ANY Type: ADDRESS State: UP Last state change was at Mon Jun 14 07:22:54 2010 Time since last state change: 0 days, 00:00:32.760 Effective State: UP Client Idle Timeout: 120 sec Down state flush: ENABLED Disable Primary Vserver On Down : DISABLED No. of Bound Services : 2 (Total) 2 (Active) Configured Method: LEASTCONNECTION Current Method: Round Robin, Reason: A new service is bound Mode: MAC Persistence: NONE Connection Failover: DISABLED 1) fw-int-svc1 (10.102.29.5: *) - ANY State: UP Weight: 1 2) fw-int-svc2 (10.102.29.9: *) - ANY State: UP Weight: 1 Done show service fw-int-svc1 fw-int-svc1 (10.102.29.5:*) - ANY State: DOWN Last state change was at Thu Jul 8 14:44:51 2010 Time since last state change: 0 days, 00:01:50.240 Server Name: 10.102.29.5 Server ID : 0 1) Monitor Name: monitor-HTTP-1 State: DOWN Weight: 1 Probes: 9 Failed [Total: 9 Current: 9] Last response: Failure - Time out during TCP connection establishment stage Response Time: 2000.0 millisec 2) Monitor Name: ping State: UP Weight: 1 Probes: 3 Failed [Total: 0 Current: 0] Last response: Success - ICMP echo reply received. Response Time: 1.275 millisec Done To save and verify the configuration by using the GUI: - In the details pane, click Save. - In the Save Config dialog box, click Yes. - Navigate to Traffic Management > Load Balancing > Virtual Servers. - In the details pane, select the virtual server that you created in step 5 and verify that the settings displayed in the Details pane are correct. - Navigate to Traffic Management > Load Balancing > Services. - In the details pane, select the service that you created in step 5 and verify that the settings displayed in the Details pane are correct. Monitoring a Firewall Load Balancing Setup in a Multiple-Firewall Environment After the configuration is up and running, you should view the statistics for each service and virtual server to check for possible problems. command line interface To display a summary of the statistics for all the virtual servers currently configured on the Citrix ADC appliance, or for a single virtual server, at the command prompt, type: stat lb vserver [-detail] [<name>] Example: >stat lb vserver -detail Virtual Server(s) Summary vsvrIP port Protocol State Req/s Hits/s One * 80 HTTP UP 5/s 0/s Two * 0 TCP DOWN 0/s 0/s Three * 2598 TCP DOWN 0/s 0/s dnsVirtualNS 10.102.29.90 53 DNS DOWN 0/s 0/s BRVSERV 10.10.1.1 80 HTTP DOWN 0/s 0/s LBVIP 10.102.29.66 80 HTTP UP 0/s 0/s Done To display virtual server statistics by using the GUI: - Navigate to Traffic Management > Load Balancing > Virtual Servers > Statistics. - If you want to display the statistics for only one virtual server, in the details pane, select the virtual server, and > Statistics. - If you want to display the statistics for only one service, select the service, and click Statistics. Multiple-Firewall.
https://docs.citrix.com/en-us/citrix-adc/current-release/firewall-load-balancing/multiple-firewall-environment.html
2020-10-19T23:56:12
CC-MAIN-2020-45
1603107867463.6
[array(['/en-us/citrix-adc/media/multiple-flb.png', 'Multiple-Firewall Environment'], dtype=object)]
docs.citrix.com
Configuring Server-Initiated Connections For each user logged on to Citrix can be logged on to Citrix Gateway then: following are the different types of server-initiated connections that Citrix Gateway supports: For TCP or UDP server-initiated connections, the server has prior knowledge about the user device’s IP address and port and makes a connection to it. Citrix Citrix Gateway to support applications, such as active FTP connections. Port command. This is used in an active FTP and in certain Voice over IP protocols. Connections between plug-ins. Citrix Gateway supports connections between plug-ins by using Citrix the.
https://docs.citrix.com/en-us/citrix-gateway/12-1/install/ng-advanced-server-initiated-connections-con.html
2020-10-19T23:31:35
CC-MAIN-2020-45
1603107867463.6
[]
docs.citrix.com
The mParticle Command Line Interface (CLI) can be used to communicate with various mParticle services and functions through simple terminal commands. Through the CLI, an engineer can directly interface with many of mParticle’s services without needing to make requests directly, (such as commands via cUrl or Postman). Also, many of these requests can be integrated in any Continuous Integration/Continuous Deployment (CI/CD) systems. The CLI installs as a simple npm package. Simply install globally and then type mp once installed. $ npm install -g @mparticle/cli $ mp [COMMAND] running command... $ mp (-v|--version|version) @mparticle/cli/1.X.X darwin-x64 node-v10.XX.X $ mp --help [COMMAND] USAGE $ mp COMMAND To verify your installation and version, use the mp --version $ mp --version @mparticle/cli/1.X.X darwin-x64 node-v10.XX.X Simply use mp help to view a list of the available commands. $ mp help mParticle Command Line Interface VERSION @mparticle/cli/1.X.X darwin-x64 node-v10.XX.X USAGE $ mp [COMMAND] COMMANDS autocomplete display autocomplete installation instructions help display help for mp planning Manages Data Planning As a convenience, we provide a simple autocomplete feature, where you can type in part of a command, then press <TAB> to autocomplete a command. Simply type mp autocomplete for instructions on configuring this feature. Simply use npm install -g @mparticle/cli to upgrade to the latest version. To perform commands on the CLI, you pass in flags such as authentication credentials or record identifiers. Some of these parameters can be added to an optional configuration file, mp.config.json, to be shared between commands or other mParticle applications. The CLI will automatically search in the current working directory for a valid json filed named mp.config.file. Alternatively, a json file can be passed in with the --config=/path/to/config flag. For example, if you need to store configs for multiple projects, you could store them in a central location and pass in either a relative or absolute path to the cli: $> mp planning:data-plan-versions:fetch --config=~/.myconfigs/custom.config.json It is recommended to have a single mp.config.json file at the root of your project and always run the CLI from the root. If you are using our data planning linters, you must name your file mp.config.json and keep it at the root of your folder. { "global": { "workspaceId": "XXXXX", "clientId": "XXXXXX", "clientSecret": "XXXXXXXXX" }, "planningConfig": { "dataPlanVersionFile": "./path/to/dataPlanVersionFile.json" } } This contains settings that would pertain to your account credentials and application. workspaceId: The workspace identifier for your team’s workspace clientId: A unique Client Identification string provided by your Customer Success Manager clientSecret: A secret key provided by your Customer Success Manager It is recommended that you always have these three credentials in your configuration as they are used by other Platform API services, such as Data Planning These are configurations pertaining to your project’s Data Master resources, such as data plans and data plan versions. planningConfig is required if you use our data plan linting tools, which you can learn more about here. Note that from the UI under Data Master/Plans, the json you download is a specific data plan version. 
dataPlanVersionFile: A relative or absolute path file to your desired data plan version (used in place of dataPlanFileand versionNumber) dataPlanId: The ID of your current Data Plan dataPlanFile: A relative or absolute path to your data plan file (must be used with versionNumberbelow) versionNumber: The Current Version Number for your Data Plan (must be used with dataPlanFile) At its core, the CLI exposes services in a manner that is consistent with our REST APIs. Each command will offer a unique set of sub commands, as well as arguments and flags. The CLI also provides universal command flags for global functions, such as --help or --outfile. The CLI command structure is as follows: mp [COMMAND]:[SUBCOMMAND]:[subcommand] --[flag]=[value][args...] By default, every command will output to the CLI’s standard out. By adding a flag of --outFile=/path, the user can output the response to a log file (or json file) depending on the use case. The CLI provides a --help flag which reveals all acceptable parameters and flags for a command, as well as a list of commands. Furthermore, top level commands will reveal their respective sub commands upon execution. Any CLI command that requires any mParticle HTTP API resources allows two options for authentication. You can pass credentials via either 1. command line or 2. an mp.config.json file in the root of your project. Both of these methods will internally generate a bearer token on your behalf, as describe in Platform API Authentication. Credentials Required: Simply pass your authentication credentials via the following CLI flags: $ mp [COMMAND]:[SUBCOMMAND] --workspaceID=XXXX --clientId=XXXXX --clientSecret=XXXXXX To integrate with various services, we recommend adding an mp.config.json file to the root of your project. This will allow you to set various properties related to your mParticle account as well as other project settings, such as your data plan directory. For more information on mp.config.json. For example, to authenticate, make sure the following is in your mp.config.json file: // mp.config.json { "global": { "workspaceId": "XXXXX", "clientId": "XXXXXX", "clientSecret": "XXXXXXXXX" } } This configuration file can then be referenced via the cli flag --config. Additionally, the cli will search your current working directory for mp.config.json. For customers subscribed to Data Master, the CLI exposes commands to allow for Creating, Fetching, Updating, and Deleting data plans, as well as validating your events against a downloaded Data Plan. Please be aware that all of these services require Platform API authentication credentials via mp.config.json or via CLI arguments: clientId, clientSecret and workspaceId as well as Data Planning access. Fetching a Data Plan requires that a data plan exists on the server. Simply pass the dataPlanId as a flag to fetch this resource. The Resource must exist on the server, otherwise this will fail. $ mp planning:data-plans:fetch --dataPlanId=XXXXXX To fetch a Data Plan Version, simply use mp planning:data-plan-versions:fetch and pass a dataPlanId and versionNumber. Use the following command to create a Data Plan Resource (or Data Plan Version) on the server. $ mp planning:data-plans:create --dataPlan="{ // Data plan as string //}" You can also replace dataPlan with dataPlanFile to use a path to a locally stored data plan if that is more convenient. 
For example: $ mp planning:data-plans:create --dataPlanFile=/path/to/dataplan/file.json To create a Data Plan Version, simply use mp planning:data-plan-versions:create and pass a dataPlanId as a secondary flag. To edit an existing Data Plan (or Data Plan Version) on the server, use the following: $ mp planning:data-plans:update --dataPlanId=XXXX --dataPlan="{ // Data plan as string //}" You can also replace dataPlan with dataPlanFile to use a path to a locally stored data plan if that is more convenient. For example: $ mp planning:data-plans:update --dataPlanId=XXXXX --dataPlanFile=/path/to/dataplan/file To create a Data Plan Version, simply use mp planning:data-plan-versions:update and pass a dataPlanId as a secondary flag. To delete a data plan, simply pass the dataPlanId into the delete command. $ mp planning:data-plans:delete --dataPlanId=XXXX Deleting a Data Plan version is similar, only requiring an additional flag of versionNumber $ mp planning:data-plans:delete --dataPlanId=XXXXX --versionNumber=XX Validating an Event Batch is a more complex task and the CLI provides flexibility by allowing validation to be done either locally or via the server, depending on your needs. Running a validation locally does not make a request on our servers, and as such is faster and ideal for a CI/CD environment. $ mp planning:batches:validate --batch="{ // batch as string}" --dataPlanVersion="{ // data plan version as string }" This will locally run your batch against a data plan version and return any validation results to the console. This command also supports an --outfile flag that will write the validation results to a file in your local directory, in case you’d like to save the results for future use. Both batch and dataPlanVersion support a batchFile and dataPlanVersionFile parameter (as well as dataPlan/ dataPlanFile and versionNumber) options for less verbose validation commands. Was this page helpful?
https://docs.mparticle.com/developers/cli/
2020-10-19T23:46:42
CC-MAIN-2020-45
1603107867463.6
[]
docs.mparticle.com
Integrations

RadiumOne Mobile Analytics is a complete business intelligence solution for understanding the full user experience of your mobile app–from installation, to activity, to conversion.

mParticle’s integration forwards the following event types to RadiumOne Mobile Analytics:

To identify devices and users, mParticle forwards the following information with each forwarded event, if available:

mParticle’s Connect integration supports custom mappings. You can map your events and attributes for RadiumOne Mobile Analytics.

In order to enable mParticle’s integration with RadiumOne Mobile Analytics, you will need a RadiumOne Mobile Analytics account to obtain your Application ID. Once logged into your analytics account, your Application ID is available by clicking Settings.
https://docs.mparticle.com/integrations/radiumone/
2020-10-20T00:50:43
CC-MAIN-2020-45
1603107867463.6
[]
docs.mparticle.com
Wireless

Configure and manage access points, wireless networks, and devices.

Network requirements

To use any access point with Sophos Wireless, the access point has to be able to communicate with Sophos Central. Therefore, the following requirements have to be fulfilled:

- DHCP and DNS servers are configured to provide an IP address to the access point and answer its DNS requests (IPv4 only).
- The access point can reach Sophos Central without requiring a VLAN to be configured on the access point for this connection.
- Communication on ports 443, 123, and 80 to any internet server is possible.
- There is no HTTPS proxy on the communication path.

Warning: Don’t disconnect your access point from the power outlet when the lights blink rapidly. This means that a firmware flash is in progress. For example, a firmware flash after a scheduled firmware update.

For more details about planning and setting up your wireless network, see the following video:
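If you want to sanity-check these requirements before plugging in an access point, you can test from another device on the same network segment. This is only an illustrative sketch, not part of the Sophos documentation; the target hostnames are placeholders, and the UDP check for NTP (port 123) is best-effort only:

    # TCP reachability for HTTPS (443) and HTTP (80) to an arbitrary internet host
    nc -vz example.com 443
    nc -vz example.com 80
    # NTP uses UDP port 123; -u switches netcat to UDP (result is indicative only)
    nc -vzu pool.ntp.org 123
    # Confirm DNS resolution works on this segment
    nslookup example.com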
https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/Wireless.html
2020-10-20T00:32:14
CC-MAIN-2020-45
1603107867463.6
[]
docs.sophos.com
An exported flow can be imported into Trifacta®. To import a flow, you must import a ZIP file containing the JSON definition.

Steps:

- Export the flow from the source system. See Export Flow.
- Login to the import system, if needed.
- Click Flows.
- From the context menu in the Flow page, select Import Flow.
- Select the ZIP file containing the exported flow. Click Open.
- If there are issues with the import, click the Download link to review the missing or malformed objects. The flow is imported and available for use in the Flows page.

Import into Prod instance

After creating any import rules in your Prod instance, please do the following.

Steps:

- Export the flow from the source system. See Export Flow.
- Login to the Prod instance. The Deployment Manager is displayed.
- If there are issues with the import, click the Download link to review the missing or malformed objects.

Tip: After you import, you should open the flow in Flow View and run a job to verify that the import was successful and the rules were applied. See Flow View Page.
https://docs.trifacta.com/display/r071/Import+Flow
2020-10-20T00:55:01
CC-MAIN-2020-45
1603107867463.6
[]
docs.trifacta.com
Create a Prototype API with an Inline Script

Generally, you would need to create APIs with inline scripts for testing purposes. You can deploy a new API or a new version of an existing API as a prototype. This provides subscribers an early implementation of the API that they can try out and test without a subscription or monetization, and provide feedback to improve the API. After a period of time, the publishers can make the changes that the users requested and publish the API.

Let's create a prototyped API with an inline script, deploy it as a prototype, and invoke it using the API Console, which is integrated in the API Developer Portal.

Step 1 - Create a Prototype API with an Inline Script

Sign in to the API Publisher (https://<hostname>:9443/publisher) using admin as the username and password.

Click CREATE API, and click Design a new REST API.

Provide the API name, context, and version. Click CREATE. For this example, provide the following values. You are directed to the API Overview page.

Click Resources to navigate to the Resources page. Select the HTTP Verb, provide the URI Pattern, and click on + to add a new resource.

Tip: After the new resource is added, delete the default resources (/*) by clicking on the Delete button [1] of each resource, or select all the resources at once by clicking on the Select all for Delete [2] button.

Expand the newly added resource. Note that the path parameter named town is set in the Parameters section.

Tip: To specify multiple parameters in the API resource, separate the parameters with a forward slash in the URI Pattern: {param1}/{param2}

Click SAVE to save the API.

Click Endpoints to navigate to the Endpoints page. Select Prototype Implementation in the Prototype Endpoint card, which is in the Select an Endpoint Type to Add page, and click ADD. Note that this page has been prompted because no endpoints have been added to the API yet. You will be directed to the endpoints page.

Note: The inline JavaScript engine does not provide support for SOAP APIs. If you opt for the endpoint implementation method instead of inline, you need to provide an endpoint to a prototype API. For example, <>

Expand the GET method and enter the following as the script. This script reads the path parameter that the user sends with the API request and returns it as a JSON value. The value mc is the message context.

    mc.setProperty('CONTENT_TYPE', 'application/json'); // Set the content type of the payload to the message context
    var town = mc.getProperty('uri.var.town'); // Get the path parameter 'town' and store it in a variable
    mc.setPayloadJSON('{ "Town" : "'+town+'"}'); // Set the new payload to the message context

Click SAVE to save the API.

Step 2 - Deploy the API as a prototype

- Click Lifecycle to navigate to the Lifecycle page.
- Click DEPLOY AS A PROTOTYPE to deploy the API as a prototype.

Step 3 - Invoke the API

Click View in Dev Portal to navigate to the API Developer Portal after the API is deployed.

Tip: You can invoke prototyped APIs without signing in to the API Developer Portal or subscribing to the API. The purpose of a prototype API is to advertise and provide an early implementation of the API for users to test.

The Location API opens in the Developer Portal. Click Try Out to navigate to the API Console. Expand the GET method and click Try it out. Give any value for the town (e.g., London) and click Execute to invoke the API. Note that the payload that you gave as a JSON output appears in the response.
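If you prefer to call the prototyped API from the command line instead of the API Console, a request like the following should work once the prototype is deployed. This is a sketch only: it assumes the API was created with the context /location and version 1.0.0, and that the Gateway listens on the default HTTPS port 8243 with a self-signed certificate; adjust these values to match what you entered in Step 1.

    curl -k https://<hostname>:8243/location/1.0.0/London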
You have successfully created an API with an inline script, deployed it as a prototype, and invoked it via the integrated API Console. An API can also be prototyped by moving the API to the PROTOTYPED state by changing the API lifecycle state and providing the prototype endpoints. For more information, see the Deploy and Test Prototype APIs tutorial.
https://apim.docs.wso2.com/en/3.0.0/learn/design-api/mock-api/create-a-mock-api-with-an-inline-script/
2020-10-20T00:44:07
CC-MAIN-2020-45
1603107867463.6
[array(['https://apim.docs.wso2.com/en/3.0.0/assets/img/learn/create-api-prototype-lc-page.png', None], dtype=object) ]
apim.docs.wso2.com
Recording Crypto Server 8.5.006.64 Release Notes

What's New

This release includes only resolved issues.

Resolved Issues

This release contains the following resolved issues:

Recording Crypto Server (RCS) now connects to the Primary Configuration Server in a scenario consisting of the following sequence of events:

- Backup Configuration Server starts up first, to assume Primary role
- RCS connects to Backup Configuration Server in Primary role
- Primary Configuration Server starts up next to assume Backup role
- Backup Configuration Server, in Primary role, either shuts down or fails over to Primary

(GIR-1510)

Recording Crypto Server, when in backup mode, now loads new keys from the keystore after being copied. (GIR-1325)

Upgrade Notes

No special procedure is required to upgrade to release 8.5.006.64.

This page was last edited on May 20, 2015, at 11:33.
https://docs.genesys.com/Documentation/RN/latest/rcs85rn/rcs8500664
2020-10-20T01:22:59
CC-MAIN-2020-45
1603107867463.6
[]
docs.genesys.com
Configure authentication to access Sensu

Example LDAP configuration (minimal attributes): in the minimal form, the group_search and user_search objects each specify only a base_dn of dc=acme,dc=org, and the provider metadata name is openldap.

Example LDAP configuration (all attributes):

    type: ldap
    api_version: authentication/v2
    metadata:
      name: openldap
    spec:
      groups_prefix: ldap
      username_prefix: ldap
      servers:
      - binding:
          user_dn: cn=binder,dc=acme,dc=org
          password: YOUR_PASSWORD
        group_search:
          base_dn: dc=acme,dc=org
          attribute: member
          name_attribute: cn
          object_class: groupOfNames
        user_search:
          base_dn: dc=acme,dc=org
          attribute: uid
          name_attribute: cn
          object_class: person

LDAP specification

If you are not using LDAP over TLS/SSL, make sure to set the value of the security attribute to "insecure" for plaintext communication.

Error message: certificate signed by unknown authority

If you are using a self-signed certificate, make sure to set the insecure attribute to true. This will bypass verification of the certificate's signing authority.

Error message: failed to bind: ...

The first step for authenticating a user with the LDAP provider is to bind to the LDAP server using the service account specified in the binding object. Make sure the user_dn specifies a valid DN and that its password is correct.

Error message: user <username> was not found

The user search failed. No user account could be found with the given username. Check the user_search object and make sure that:

- The specified base_dn contains the requested user entry DN
- The specified attribute contains the username as its value in the user entry
- The object_class attribute corresponds to the user entry object class

Error message: ldap search for user <username> returned x results, expected only 1

The user search returned more than one user entry.

LDAP users and LDAP groups can be referred to as subjects of a cluster role or role binding. The resulting subject names depend on the groups_prefix and username_prefix configuration attribute values of the LDAP provider. For example, for the groups_prefix ldap and the group dev, the resulting group name in Sensu is ldap:dev.

Issue: Permissions are not granted via the LDAP group(s)

During authentication, the LDAP provider will print in the logs all groups found in LDAP (for example, found 1 group(s): [dev]). Keep in mind that this group name does not contain the groups_prefix at this point.

The Sensu backend logs each attempt made to authorize an RBAC request. This is useful for determining why a specific binding didn't grant the request. For example:

    [...] the user is not a subject of the ClusterRoleBinding cluster-admin [...]
    [...] could not authorize the request with the ClusterRoleBinding system:user [...]
    [...] could not authorize the request with any ClusterRoleBindings [...]
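As an illustration of how these prefixed names are used (this sketch is not part of the original configuration examples, and the binding name is arbitrary), a cluster role binding that grants the ldap:dev group cluster-admin access could be declared like this and applied with sensuctl create:

    type: ClusterRoleBinding
    api_version: core/v2
    metadata:
      name: ldap-dev-cluster-admins
    spec:
      role_ref:
        type: ClusterRole
        name: cluster-admin
      subjects:
      - type: Group
        name: ldap:dev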
https://docs.sensu.io/sensu-go/5.21/operations/control-access/auth/
2020-10-20T00:19:40
CC-MAIN-2020-45
1603107867463.6
[]
docs.sensu.io
Sets the animator in recording mode, and allocates a circular buffer of size frameCount. After this call, the recorder starts collecting up to frameCount frames in the buffer. Note it is not possible to start playback until a call to StopRecording is made. See Also: StopRecording, recorderStartTime, recorderStopTime, StartPlayback, StopPlayback, playbackTime.
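As a usage sketch (the frame count and the playback offset are arbitrary illustration values, not part of the scripting reference itself), recording and playback might be wired up like this in a MonoBehaviour:

    using UnityEngine;

    public class RecordingExample : MonoBehaviour
    {
        private Animator animator;

        void Start()
        {
            animator = GetComponent<Animator>();
            // Begin recording into a circular buffer that holds up to 1000 frames.
            animator.StartRecording(1000);
        }

        void StopAndReplay()
        {
            // Recording must be stopped before playback can start.
            animator.StopRecording();
            animator.StartPlayback();
            // Jump to the earliest recorded frame and let the recorded data drive the Animator.
            animator.playbackTime = animator.recorderStartTime;
        }
    }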
https://docs.unity3d.com/kr/2017.2/ScriptReference/Animator.StartRecording.html
2020-10-20T01:20:42
CC-MAIN-2020-45
1603107867463.6
[]
docs.unity3d.com
Animation

jsPlumb offers a simple animate function, which inserts a callback for jsPlumb to repaint whatever it needs to at each step. You could of course do this yourself; it's a convenience method really.

The method signature is:

    jsPlumb.animate(el, properties, options)

The arguments are as follows:

- el - element id, or element object from the library you're using.
- properties - properties for the animation, currently only left and top are supported.
- options - options for the animation, such as callbacks etc.
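For instance, a call could look like the following sketch; the element id is hypothetical, and the duration and complete options are assumed to be passed straight through to the underlying animation engine:

    // Move a connected element and let jsPlumb repaint its connections at each step
    jsPlumb.animate("node1", { left: 250, top: 80 }, {
      duration: 500,
      complete: function () {
        console.log("animation finished");
      }
    });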
https://docs.jsplumbtoolkit.com/community/current/articles/animation.html
2020-10-19T23:29:54
CC-MAIN-2020-45
1603107867463.6
[]
docs.jsplumbtoolkit.com
Creating Content From PFG

Description

This how-to covers simple creation of portal content from PloneFormGen. We'll create web pages from sample form submissions.

A question that's come up frequently on Gitter and the Plone forum is "How do I create an event, news item, page, or some other content item from PloneFormGen?" It's common that there's some security need or extra content needed that prevents just using Plone's "add item."

This is actually very easy if you know a little Python and are willing to learn something about the content items you want to create. Please note that I'm not going to show you how to create new content types here. Just how to use PFG to create content objects from existing types. If you want to create new content types, learn to use Dexterity.

Your first step should be to determine the attributes you want to set in the new content item and how they'll map from your form fields. In this case, we're going to use the sample contact form created when you first create a form folder to create a page (Document). Our mapping of form fields to content attributes will look like this:

- topic -> title
- comments -> text (the page body)
- replyto -> description

Note that for each form field, we've determined its ID in the form. We'll use those to look up the field in the form submission.

Next, we need to learn the methods that are used to set our attributes on a Document object. How do you learn these? It's always nice to read the source, but when I'm working fast, I usually just use DocFinderTab and look for "set*" methods matching the attributes.

Now, determine where you want to put the new content. That's your target folder. It's convenient to locate that folder in a parent folder of the form object, as you may then use the magic of acquisition to find it without learning how to traverse the object database.

Now, in the form folder, we add a "Custom Script Adapter" - which is just a very convenient form of Python script. Then, just customize the script to look something like the following:

    # Find our target folder from the context. The ID of
    # our target folder is "submissions"
    target = context.submissions

    # The request object has the form input in its 'form'
    # attribute, which acts like a dictionary
    form = request.form

    # Generate a unique ID for the new page; milliseconds
    # since the epoch is one easy way to do that
    from DateTime import DateTime
    uid = str(DateTime().millis())

    # Create the new Document in the target folder
    target.invokeFactory("Document", id=uid, title=form['topic'])

    # Find our new object in the target folder
    obj = target[uid]

    # Set its format, content and description
    obj.setFormat('text/plain')
    obj.setText(form['comments'])
    obj.setDescription(form['replyto'])

    # Force it to be reindexed with the new content
    obj.reindexObject()

That's it. This will really work.

Security

At the moment, the person that submits your form will need to be logged in as a user that has the right to add pages to the target folder, then change their attributes. You may need to allow other users (even anonymous ones) to submit the form. That's where the Proxy role setting of the custom script adapter comes in. You may change this setting to Manager, and the script will run as if the user has the manager role - even if they're anonymous.

I hope it's obvious that you want to be very, very careful writing a script that will run with the Manager role. Review it, and review it again to make sure it will do only what you want. Never trust unchecked form input to determine target or content ids. If I'm doing this trick with a form that will be exposed to the public, I often will use a Python script rather than the custom script adapter, as it allows me to determine the proxy role for the script more precisely than choosing between None and Manager. I may even create a new role with minimal privileges, and those only in the target folder.

Credit!
Note: A big thanks to Mikko Ohtamaa for contributing the Custom Script Adapter to PloneFormGen.
https://docs.plone.org/working-with-content/managing-content/ploneformgen/creating_content.html
2020-10-19T23:59:38
CC-MAIN-2020-45
1603107867463.6
[]
docs.plone.org
Scout supports the last 3 versions of iOS. For example, if iOS 13 is the latest Apple OS, we would support iOS 11, 12 and 13.

Android: Scout Staff / Customer Mobile Apps

Any device running Android 4.4+
https://docs.scoutforpets.com/en/articles/431204-system-requirements
2020-10-20T00:21:06
CC-MAIN-2020-45
1603107867463.6
[]
docs.scoutforpets.com
Kinematic 3D Retro-Deformation of Fault Blocks Picked from 3D Seismics

Tanner, David C.; Lohr, Tina; Krawczyk, Charlotte M.; Oncken, Onno; Endres, Heike; Samiee, Ramin; Trappe, Henning; Kukla, Peter A.

Universitätsverlag Göttingen
Article in Anthology
Publisher's version (Verlagsversion)
German (Deutsch)
11. Symposium "Tektonik, Struktur- und Kristallingeologie"

Tanner, David C.; Lohr, Tina; Krawczyk, Charlotte M.; Oncken, Onno; Endres, Heike; Samiee, Ramin; Trappe, Henning; Kukla, Peter A., 2006-03: Kinematic 3D Retro-Deformation of Fault Blocks Picked from 3D Seismics. In: Philipp, S.; Leiss, B; Vollbrecht, A.; Tanner, D.; Gudmundsson, A. (eds.): 11. Symposium "Tektonik, Struktur- und Kristallingeologie"; 2006, Univ.-Verl. Göttingen, p. 226 - 228., DOI.

Movement on fault planes causes a large amount of smaller-scale deformation, ductile or brittle, in the area surrounding the fault. Much of this deformation is below the resolution of reflection seismics (i.e. sub-seismic, <10 m displacement), but it is important to determine this deformation, since it can make up a large portion of the total bulk strain, for instance in a developing sedimentary basin. Calculation of the amount of sub-seismic strain around a fault by 3-D geometrical kinematic retro-deformation can also be used to predict the orientation and magnitude of these smaller-scale structures. However, firstly a 3-D model of the fault and its faulted horizons must be constructed at a high enough resolution to be able to preserve fault and horizon morphology with a grid spacing of less than 10 m. Secondly, the kinematics of the fault need to be determined, and thirdly a suitable deformation algorithm chosen to fit the deformation style. Then by restoring the faulted horizons to their pre-deformation state (a ‘regional’), the moved horizons can be interrogated as to the strain they underwent. Since strain is commutative, the deformation demonstrated during this retro-deformation is equivalent to that during the natural, forward deformation...
https://e-docs.geo-leo.de/handle/11858/00-1735-0000-0001-3447-A
2020-10-20T00:10:20
CC-MAIN-2020-45
1603107867463.6
[]
e-docs.geo-leo.de
Sendbird Calls enables real-time voice and video calls between users within your Sendbird-integrated app. The Calls SDK for JavaScript is used to initialize, configure, and build voice and video calling functionality into your JavaScript application. This quick start shows a brief overview of the Calls SDK’s structure and features, then goes through the preliminary steps of implementing the Calls SDK into your own project.

Our sample app demonstrates an implementation of the core features of the Sendbird Calls SDK. Download the app from our GitHub repository to get an idea of what you can do with the actual SDK and to get started building your own project.

Note: The fastest way to see our Calls SDK in action is to build your app on top of our sample app. Make sure to replace the application ID with your own when initializing the Calls SDK. Go to the Create a Sendbird application from your dashboard section to learn more.

Sendbird Calls can only be used by your Sendbird application users. When a user logs in, the user can send and receive direct calls with other users on the same Sendbird application.

Direct call in the Calls SDK refers to a one-to-one call. To make a direct call, the caller first initializes a call request to the callee. The callee will receive incoming call notifications on all logged-in devices. When the callee accepts the call on any of these devices, a media connection is established between the caller and callee, which marks the start of a direct call.

Call-related events are delivered through call event handlers. The event handlers include onRinging(), onConnected(), onEnded(), and other event callbacks. By using the event callbacks of the handlers, your app can implement appropriate responses and actions, such as updating the call status on the UI level.

The requirements for the Calls SDK for JavaScript are:

- Node
- npm (or yarn)
- WebRTC APIs

If you are ready to integrate Calls to your app, follow the step-by-step instructions below.

A Sendbird application comprises everything required in a Calls service. Users within the same Sendbird application can make and receive calls with one another without any further setup. It should be noted that all data are limited to the scope of a single application, and users in different Sendbird applications can't make and receive calls to each other.

Note: For application administrators, the Sendbird Dashboard provides call logs to keep track of all calls made on the application.

Installing the Calls SDK is simple if you're familiar with using external libraries or SDKs in your projects. You can install the Calls SDK with npm or yarn by entering the command below on the command line.

Note: For npm, you can automatically download our Calls SDK with just an npm install command by adding the sendbird-calls dependency to the package.json file of your project. For yarn, a similar method can be applied to download the Calls SDK.

    # npm
    npm install sendbird-calls

    # yarn
    yarn add sendbird-calls

Import the Calls SDK as an ES6 module as shown below:

    import SendBirdCall from "sendbird-calls";

    SendBirdCall.init(APP_ID)
    ...

Note: If you are using TypeScript, set the --esModuleInterop option to true for default imports, or use import * as SendBirdCall from "sendbird-calls".

Or add the following code in the header to install and initialize the Calls SDK.

    <script type="text/javascript" src="SendBirdCall.min.js"></script>
    <script type="text/javascript">
        SendBirdCall.init(APP_ID)
    </script>

The Calls SDK requires access permissions to the microphone and camera to communicate with Sendbird server.
To grant these permissions, call the SendBirdCall.useMedia() function. This will allow the user to retrieve a list of available media devices or to retrieve any actual media streams.

Note: When a user makes or receives a call for the first time through your JavaScript application running in a browser, the browser might prompt the user to grant microphone and camera access permissions.

The Calls SDK adds call features to your client app with a few simple steps. To make your first call, do the following steps:

First, initialize the SendBirdCall instance by passing the APP_ID of your Sendbird application to the SendBirdCall.init() method.

    SendBirdCall.init(APP_ID);

Note: You can implement both the Chat and Calls SDKs in your client app. The two SDKs can work on the same Sendbird application so that they share users. In this case, you can allow Calls to retrieve a list of users in the client app by using the Chat SDK's method or the Chat API.

To make and receive calls, authenticate a user to Sendbird server by using their user ID through the authenticate() method. To receive calls, the SendBirdCall instance should be connected with Sendbird server. Connect the socket by using the SendBirdCall.connectWebSocket() method after the user's authentication has been completed.

    // Authentication
    const authOption = { userId: USER_ID, accessToken: ACCESS_TOKEN };

    SendBirdCall.authenticate(authOption, (res, error) => {
        if (error) {
            // Authentication failed
        } else {
            // Authentication succeeded
        }
    });

    // Establishing websocket connection
    SendBirdCall.connectWebSocket()
        .then(/* Succeeded to connect */)
        .catch(/* Failed to connect */);

Add a device-specific SendBirdCallListener event handler using the SendBirdCall.addListener() method. Once the event handler is added, responding to device events such as incoming calls can be managed as shown below:

    // The UNIQUE_HANDLER_ID below is a unique user-defined ID for a specific event handler.
    SendBirdCall.addListener(UNIQUE_HANDLER_ID, {
        onRinging: (call) => {
            ...
        }
    });

Note: If a SendBirdCallListener event handler isn't registered, a user can't receive an onRinging callback event, so it is recommended to add this handler at the initialization of the app. Also, a SendBirdCallListener event handler is automatically removed when the app closes by default.

Initiate a call by providing the callee's user ID to the SendBirdCall.dial() method. Use the CallOption object to choose the initial call configuration, such as audio or video capabilities, video settings, and mute settings.

    const dialParams = {
        userId: CALLEE_ID,
        isVideoCall: true,
        callOption: {
            localMediaView: document.getElementById('local_video_element_id'),
            remoteMediaView: document.getElementById('remote_video_element_id'),
            audioEnabled: true,
            videoEnabled: true
        }
    };

    const call = SendBirdCall.dial(dialParams, (call, error) => {
        if (error) {
            // Dialing failed
        }
        // Dialing succeeded
    });

    call.onEstablished = (call) => { ... };
    call.onConnected = (call) => { ... };
    call.onEnded = (call) => { ... };
    call.onRemoteAudioSettingsChanged = (call) => { ... };
    call.onRemoteVideoSettingsChanged = (call) => { ... };

Note: A media viewer is an HTMLMediaElement such as <audio> and <video> used to display a media stream. The remoteMediaView is required for the remote media stream to be displayed. It is also recommended to set a media viewer's autoplay property to true.

    <video id="remote_video_element_id" autoplay>

Note: Media viewers can also be set using the call.setLocalMediaView() or call.setRemoteMediaView() method.
    // Setting media viewers lazily
    call.setLocalMediaView(document.getElementById('local_video_element_id'));
    call.setRemoteMediaView(document.getElementById('remote_video_element_id'));

Register event handlers to the call object before accepting, so that the callee can respond to events happening during the call through its callback methods.

    SendBirdCall.addListener(UNIQUE_HANDLER_ID, {
        onRinging: (call) => {
            call.onEstablished = (call) => { ... };
            call.onConnected = (call) => { ... };
            call.onEnded = (call) => { ... };
            call.onRemoteAudioSettingsChanged = (call) => { ... };
            call.onRemoteVideoSettingsChanged = (call) => { ... };

            const acceptParams = {
                callOption: {
                    localMediaView: document.getElementById('local_video_element_id'),
                    remoteMediaView: document.getElementById('remote_video_element_id'),
                    audioEnabled: true,
                    videoEnabled: true
                }
            };

            call.accept(acceptParams);
        }
    });

Note: If media viewer elements have been set by the call.setLocalMediaView() and call.setRemoteMediaView() methods, make sure that the same media viewers are set in the acceptParams' callOption. If not, they will be overridden while executing the call.accept() method.

The callee's client app receives an incoming call through the connection with Sendbird server established by the SendBirdCall.connectWebSocket() method.
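Not shown in the snippets above: either party can end an ongoing direct call with the call object's end() method, after which the onEnded() callback fires on both sides. A minimal sketch, with the button element assumed rather than taken from the quick start:

    // End the current call, for example from an "End call" button handler (assumed UI element)
    endCallButton.addEventListener('click', () => {
        call.end();
    });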
https://docs.sendbird.com/javascript/calls_quick_start
2020-10-20T00:46:05
CC-MAIN-2020-45
1603107867463.6
[]
docs.sendbird.com
Managed product logs contain information about the performance of your managed products. You can obtain information for specific products or groups of products administered by the parent or child server. With Control Manager’s data query on logs and data filtering capabilities, administrators can focus on the information they need. More logs mean abundant information about the Control Manager network. However, these logs occupy disk space. You must balance the need for information with your available system resources. Managed products generate different kinds of logs depending on their function.
http://docs.trendmicro.com/en-us/enterprise/control-manager-60-service-pack-3/ch_ag_monitor_logs/understand_logs/understand_logs_part3.aspx
2019-06-16T05:39:32
CC-MAIN-2019-26
1560627997731.69
[]
docs.trendmicro.com
When you run an in-place upgrade from SharePoint 2007 to SharePoint 2010 using the In-place Upgrade method, the following Bamboo products migrate to SP2010 with no errors or additional steps:

NOTE: If you are using a Bamboo product that is not on the above list and you are considering using the In-Place Upgrade method to migrate your farm, refer to the specific migration information for your product to learn more about possible limitations before proceeding with a migration.
https://docs.bamboosolutions.com/document/how_to_migrate_bamboo_web_part_from_sharepoint_2007_to_sharepoint_2010_using_the_in-place_upgrade_method/
2019-06-16T05:53:07
CC-MAIN-2019-26
1560627997731.69
[]
docs.bamboosolutions.com
Instagram is a photo and video-sharing social networking platform. Given this, visual industries such as fashion and makeup outlets, furniture producers, and travel agencies will naturally benefit most from using it. The good news is that Instagram can be integrated with WordPress in several ways. However, one of the most popular approaches is adding an Instagram profile feed to WordPress.

The Instagram API requires authentication - requests explicitly made on behalf of a user. Authenticated requests require an access_token. These tokens are unique to a user and should be stored securely. Access tokens may expire at any time in the future.

The Instagram Access Token is a long string of characters unique to your account that grants other applications access to your Instagram feed. Without the token, your website will be unable to talk to the Instagram servers. The token provides a secure way for a website to ask Instagram’s permission to access your profile and display its images.

Follow the steps below to obtain an Instagram Access Token:

- First, you’ll need to sign in with an Instagram account if you’re not already logged in. Navigate to the Instagram Developer Page to get started.
- Press Manage Clients, then click the Register a New Client button.
- Complete the following options:
  Application Name — Enter an appropriate name, which fits Instagram requirements.
  Description — Optionally, enter a short description.
  Company Name — Specify your company name.
  Website URL, Valid redirect URIs — Enter your store URL (e.g.).
  Contact email — Enter your contact email address.
- Move to the Security tab. Uncheck the Disable implicit OAuth and Enforce signed requests checkboxes. Enter the reCAPTCHA words and press the Register button.
- Paste the following URL into your browser address bar:
- Navigate to the Manage Clients page and copy your Client ID. Replace the YOUR_CLIENT_ID_HERE text with your actual Client ID. Note that the client_id= parameter should be followed by your Client ID with no sign like + at the beginning. You should get something like this:…/496516440&redirect_uri=
- Paste your store URL instead of the placeholder text and hit Enter.
- Click on the Authorize button.
- Once the page loading is done, look for the address bar within your browser. You’ll see your store URL with your access token indicated after the #access_token= tag.…/cc3a56f8b588fed
- Copy the key shown in the address bar to enter it in your WordPress admin panel so your site can access the Instagram APIs.

Log in to your WordPress Dashboard. Click the Conj PowerPack menu. From the sidebar on the left, select the 3rd Party API tab. Locate the Instagram Access Token text field and paste your token in. Click the Save Changes button.
https://docs.conj.ws/3rd-party-api/generate-instagram-access-token
2019-06-16T04:29:26
CC-MAIN-2019-26
1560627997731.69
[]
docs.conj.ws
This core skills course will get you building projects with Jitter right away. We recommend keeping an open patcher ready to start patching along with the text. By the end, you will have learned and practiced the basic skills to create a variety of projects that incorporate audio-driven graphics, video capture and playback, 3D geometry, and more.

- Display a Video — Play your first video
- Live Capture — Grab live video
- Control Jitter with Messages — Controlling Jitter Objects
- Adding 3D Objects — 3 dimensional programming
- Jitter Matrix Exploration - Part 1 — The Jitter Matrix, Part 1
- Jitter Matrix Exploration - Part 2 — The Jitter Matrix, Part 2
- Generating Geometry — Create 3d geometry using the matrix
- Audio into a matrix — Trigger visuals from audio input
- Building live video effects — transform video in real time
- Composing the screen — your screen is a canvas
https://docs.cycling74.com/max8/vignettes/video_and_graphics_tutorials_topic
2019-06-16T05:13:05
CC-MAIN-2019-26
1560627997731.69
[]
docs.cycling74.com
The Subscribers feature in Ghost allows you to capture email addresses of people who would like to subscribe to your publication's content. Email addresses can be viewed and edited from Ghost admin, but no emails will be sent without further integrations with an email tool.

Enabling subscribers

This feature can be enabled in the "Labs" settings menu in Ghost admin. When it is enabled, a subscribe button will appear on your site, if your theme supports it.

Managing subscribers

Integrations

From here, you can integrate your Ghost publication with your favourite email client such as Mailchimp or ActiveCampaign. For example, you can automatically pass the subscriber list to a dedicated email tool using Zapier, and then set up automated emails or campaigns for your subscribers.
https://docs.ghost.org/faq/enable-subscribers-feature/
2019-06-16T05:00:14
CC-MAIN-2019-26
1560627997731.69
[array(['https://docs.ghost.io/content/images/2018/10/subscription-feature-in-Casper.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2019/04/subscribers.png', None], dtype=object) ]
docs.ghost.org
1 Introduction

After you have installed the Mendix software on your on-premises server and have deployed your first app (for details, see Deploying Mendix on Microsoft Windows), it is time to activate your license. This how-to will guide you through this process.

This how-to will teach you how to do the following:

- Activate a Mendix license on a Microsoft Windows server

2 Prerequisites

Before starting with this how-to, make sure you have completed the following prerequisites:

- To activate a Mendix instance on-premises you need an on-premises license (call your Customer Success Manager for more information)
- Install Mendix on your Microsoft Windows server (for more information, see Deploying Mendix on Microsoft Windows)
- Be registered as the technical contact for the license
  - This is usually done in the license request process
  - If you are not the technical contact, ask him or her to follow this how-to to activate the license
- Have your MxID and password ready
- Have login access and access to the Mendix Service Console on the server

3 Retrieve the Server ID

In this section, you will retrieve the server ID from your Mendix server, which is used in the license activation process. These steps should be executed on the Microsoft Windows server.

- Start the Mendix Service Console.
- Select your app in the overview on the left side of the console.
- The app needs to be running in order for you to be able to activate the license. If the app is not running, click Start service to start the app.
- Click Advanced and select the Show or add license… option.
- Next to Server ID, click Copy to clipboard.

4 Obtain a License Key from Mendix Support

In this section, you will submit your server ID in the Mendix Support Portal to request a license key for your server.

- Open your browser and navigate to.
- Do one of the following:
  - For a new app, use the Request New App Node app – see Licensing Apps for more information
  - For an existing app, create a ticket using the Standard change: Change On-Prem Licensed Node template
- Mendix support will use the supplied server ID to generate a license key for your server.

5 Insert the License Key on the Server

In this section, you will enter the license key into the Mendix server, thus completing the license activation process.

- Return to the Mendix Service Console License dialog box (as described in 3 Retrieve the Server ID).
- Paste your license key into the License key text box.
- Click Activate license.
- Congratulations! Your license has been activated.
https://docs.mendix.com/developerportal/deploy/activate-a-mendix-license-on-microsoft-windows
2019-06-16T05:18:24
CC-MAIN-2019-26
1560627997731.69
[]
docs.mendix.com