Dataset columns: content (string, 0–557k), url (string, 16–1.78k), timestamp (timestamp[ms]), dump (string, 9–15), segment (string, 13–17), image_urls (string, 2–55.5k), netloc (string, 7–77).
Authentication of SOAP requests (Web Products): WS-Security and HTTPS required. About WS-Security WS-Security, which is officially called Web Services Security: SOAP Message Security, is an open standard published by OASIS that defines mechanisms for signing and encrypting SOAP messages. The License Service supports version 1.0 of the WS-Security specification. For more information and a link to the WS-Security 1.0 specification, go to the OASIS-Open web site for WS-Security. Tip The easiest way to comply with the WS-Security requirements is to use a SOAP toolkit that supports WS-Security 1.0 and X.509 certificates. What Needs to Be Signed You must sign the Timestamp element, and if you're using WS-Addressing, we recommend you also sign the Action header element. Alternatively, you can instead sign Timestamp, Body, the Action header element, and the To header element. For information about WS-Addressing, see the WS-Addressing specification. Message Expiration.
http://docs.aws.amazon.com/AmazonDevPay/latest/DevPayDeveloperGuide/LSAPI_Auth_SOAP.html
2017-11-18T03:19:48
CC-MAIN-2017-47
1510934804518.38
[]
docs.aws.amazon.com
Instances are the copies of object resources which have been placed in a room to interact and do things to make your game come to life. The following functions exist that deal with instances. - instance_change - instance_copy - instance_create - instance_destroy - instance_exists - instance_find - instance_furthest - instance_nearest - instance_number - instance_place - instance_position
http://docs.yoyogames.com/source/dadiospice/002_reference/objects%20and%20instances/instances/instance%20functions/index.html
2017-11-18T02:54:39
CC-MAIN-2017-47
1510934804518.38
[]
docs.yoyogames.com
Derived column syntax reference Use derived columns to define new calculated columns after data has been imported. You can define the derived columns with D language functions, and they must return integer (long) or decimal (double) type values. This document provides a detailed overview of the language features available when coding a derived column function. See Create a Derived Column and Derived column examples for more information about creating and using derived columns. Derived Columns must return an Integer, String, or Decimal Derived columns must return an integer (technically a long), string, or decimal (double) value. Derived Columns use the D programming language The D programming language is a compiled language with a feel similar to C/C++. When coding a D function for use in an Interana derived column, keep in mind the following limitations: - Your function must comply with the @safe annotation (statically checked to exhibit no possibility of undefined behavior) - Your function must comply with the pure keyword (cannot access global or static, mutable state except through its arguments) For performance and security reasons, many standard D library functions are not available. Available D Libraries We do not currently support any D libraries. Tips for working with derived columns - You must escape "." in the column name or it will not compile. You can either rename the column or surround these columns with the c("<column_name>")function. - Derived columns reference the friendly column name. If you change that name after creating the derived column, the derived column will no longer work. - Avoid using D language reserved characters in column names. For example, Derived Columns cannot reference columns named "c", a reserved character in D. See the D language Lexical topic for more information. Interana built-in functions that operate on columns Within your derived column, you can reference the following custom functions: Arithmetic operations with derived columns (get and getd functions) In version 2.23, we added the get function for derived columns. Use these when the data you're referencing contains null values, and you want a safe way to compute arithmetic operations on multiple fields. This supports long and double values. The syntax for these is long get(string field_name, long on_null); and double getd(string field_name, double on_null);. When the fetched value is null (does not exist), the value you specify for on_null will be returned instead. This allows you to perform arithmetic operations, where performing those operations on null values would result in errors. For example, you can set on_null to 0 to compute the sum of two fields x and y: return get("x", 0) + get("y", 0); If you are performing a multiplication operation on two fields ( x and y), set on_null to 1: return get("x", 1) * get("y", 1); For example, you can create a derived column based on screen width and height values that accounts for possible null values: double return_screen_area(){ return get("screen_width", 0.0) * get("screen_height", 0.0); } So if screen_width has a NULL, the computed value will be: 0.0 * screen_height_value = 0.0 Referencing columns Within your D function, you can reference Interana columns of type: - int - string - int_set - string_set You can also reference columns containing "." characters. You must surround these columns with the c("<column_name>") function. Referencing lookup columns As of version 2.18, you can reference a lookup column when defining a derived column. 
You can use the columns in the lookup table as normal columns, or use the lc() function to get the value from that column. Use the syntax lc("column_name") to reference a lookup column. Unsupported references Interana does not support the following references: - time columns (for example, of type milli_time) - named expressions, including cohorts, sessions, metrics, funnels - other derived columns set_size and get_item functions Note that when you import set columns to Interana, the import process preserves the order of items in the column and does not de-duplicate items in the set. Interana also does not distinguish between null and empty data.
https://docs.interana.com/Guides/Interana_Guides_2.23/Reference/Derived_column_syntax_reference
2017-11-18T02:58:30
CC-MAIN-2017-47
1510934804518.38
[]
docs.interana.com
Restarting or cancelling a workflow Gravity Forms Administrators can cancel the Workflow for an entry or restart the Step or the entire Workflow. On the Workflow detail page, administrators will see a box with the available admin tasks for the current entry. If a Step or Workflow is restarted, all emails will be resent just as if the step were starting for the first time. Users without Gravity Forms Administrator capabilities will not see the box with the admin task options.
http://docs.gravityflow.io/article/25-restarting-or-cancelling-a-workflow
2017-11-18T02:40:46
CC-MAIN-2017-47
1510934804518.38
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e4a9be4b027e1978e1d69/images/557320d1e4b01a224b4295ba/file-sxFceAigMq.png', None], dtype=object) ]
docs.gravityflow.io
Overview An online auction system is an online auctioning system where all the buyers, auction users, or participants place competitive bids for the auctioned products or services. With the Online Auction System add-on, the admin can enable the bidding feature on the products that the admin wants to put up for auction in the Magento 2 store. It is an online marketplace where the auction participants can submit their bids online. The winner of the auction is the one who places the highest bid. Further, the product or service bills are paid through online payment. Note: Only customers who are logged in can place a bid on products. Features are as follows: - Admin can create and manage auctions for the required products - Admin can set up the start bidding price, start date and end date, and so on - Admin can track all the bid information - Admin can set up the incremental bidding price - Admin can extend the auction time - An automated mail is sent to the winner of the auction
https://docs.cedcommerce.com/magento-2/online-auction-system-magento-2-admin-guide
2017-11-18T02:49:53
CC-MAIN-2017-47
1510934804518.38
[array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/skins/bar/images/search.png', None], dtype=object) ]
docs.cedcommerce.com
A cloud category type reservation provides access to the provisioning services of a cloud service account for a particular vRealize Automation business group. Available cloud reservation types include Amazon, OpenStack, vCloud Air, and vCloud Director. A reservation is a share of the memory, CPU, networking, and storage resources of one compute resource allocated to a particular vRealize Automation business group. A business group can have multiple reservations on one endpoint or reservations on multiple endpoints. The allocation model for a reservation depends on the allocation model in the associated datacenter. Available allocation models are Allocation Pool, Pay As You Go, and Reservation Pool. For information about allocation models, see the vCloud Director or vCloud Air documentation. In addition to defining the share of fabric resources allocated to the business group, a reservation can define policies, priorities, and quotas that determine machine placement.
https://docs.vmware.com/en/vRealize-Automation/7.0/com.vmware.vrealize.automation.doc/GUID-3B132E6E-934A-48F9-8936-68938903A8D0.html
2017-11-18T03:07:31
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
Exception handling catches any errors that occur when a schema element runs. Exception handling defines how the schema element behaves when the error occurs. All elements in a workflow, except for decisions and start and end elements, contain a specific output parameter type that serves only for handling exceptions. If an element encounters an error during its run, it can send an error signal to an exception handler. Exception handlers catch the error and react according to the errors they receive. If the exception handlers you define cannot handle a certain error, you can bind an element's exception output parameter to an Exception element, which ends the workflow run in the failed state. Exceptions act as a try and catch sequence within a workflow element. If you do not need to handle a given exception in an element, you do not have to bind that element's exception output parameter. The output parameter type for exceptions is always an errorCode object.
https://docs.vmware.com/en/vRealize-Orchestrator/6.0/com.vmware.vrealize.orchestrator-dev.doc/GUID6E763B22-3ED2-4FA5-8977-271111713B66.html
2017-11-18T03:07:43
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
Changes related to "Creating a Database for Joomla!" ← Creating a Database for Joomla! This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&limit=50&target=Creating_a_Database_for_Joomla!
2015-04-18T05:50:50
CC-MAIN-2015-18
1429246633799.48
[]
docs.joomla.org
Difference between revisions of "Extensions Language Manager Content" From Joomla! Documentation Latest revision as of 18:40, 28 April - Num./ - Published. Whether the item has been published or not. You can change the Published state by clicking on the icon in this column. - Options window where settings such as default parameters can be edited. See Options. -
https://docs.joomla.org/index.php?title=Help16:Extensions_Language_Manager_Content&diff=85573&oldid=29065
2015-04-18T05:24:28
CC-MAIN-2015-18
1429246633799.48
[]
docs.joomla.org
jdoc statements From Joomla! Documentation jdoc statements are included in every Joomla template and indicate where the output from other parts of Joomla or its extensions should be positioned in the overall web page. A typical jdoc statement looks like this: <jdoc:include ... />. Contents jdoc:include The <jdoc:include /> statement is a Joomla! template's method of displaying content specific to the page being viewed. There are various <jdoc:include /> statements, each returning a different part of a Joomla! page. (The part returned is determined by the type attribute.) Component <jdoc:include type="component" /> This element should only appear once in the <body> element of the Template to render the main content of the page with respect to the current page being viewed. Head <jdoc:include type="head" /> This element should only appear once in the <head> element of the Template to render the content of the style, script and meta elements associated with the current page. Installation Module style attribute.
https://docs.joomla.org/index.php?title=Jdoc_statements&oldid=63022
2015-04-18T05:35:33
CC-MAIN-2015-18
1429246633799.48
[]
docs.joomla.org
These pages provide the components for the overall layout of boo.codehaus.org. WARNING: Do not edit casually.
http://docs.codehaus.org/pages/viewpage.action?pageId=228170925
2015-04-18T05:15:35
CC-MAIN-2015-18
1429246633799.48
[]
docs.codehaus.org
Difference between revisions of "Screen.contactmanager.15" From Joomla! Documentation Revision as of 17:48, 1 Select Components → Contacts → Contacts from the drop-down menu on the back-end of your Joomla! installation. Or select the 'Contacts' link from the Contacts Manager - Categories. Description. You can change the order by entering the sequential order and clicking the 'Save Order' icon in the column heading. - Access Level. Who has access to this item. Current options are: - Public: Everyone has access - Registered: Only registered users have access - Special: Only users with author status or higher have access - You can change an item's Access Level by clicking on the icon in the column. - Category. The Category this item belongs to. Clicking on the Category title opens the Category for editing. See Category Manager - Edit. -. See Contacts Parameters section below. - Help. Opens this Help Screen. Global Configuration Click the Parameters button to open the Contacts Global Configuration window. This window allows you to set default parameters for Contacts, as shown below. - Save. Press Save to save your settings. - Cancel. Press Cancel to cancel your changes. - e-mail.. Quick Tips.
https://docs.joomla.org/index.php?title=Help15:Screen.contactmanager.15&diff=5772&oldid=5771
2015-04-18T05:46:17
CC-MAIN-2015-18
1429246633799.48
[]
docs.joomla.org
Information for "Unable to delete modules after updating" Basic information Display titleJ3.x:Unable to delete modules after updating Default sort keyUnable to delete modules after updating Page length (in bytes)793 Page ID30020:57, 16 November 2013 Latest editorTom Hutchison (Talk | contribs) Date of latest edit09:44, 10 November 2014 Total number of edits’
https://docs.joomla.org/index.php?title=J3.x:Unable_to_delete_modules_after_updating&action=info
2015-04-18T05:32:22
CC-MAIN-2015-18
1429246633799.48
[]
docs.joomla.org
Deploying an Update Server From Joomla! Documentation Marked as needs review as it needs some differentiating or integrating into the managing component updates tutorial. Currently it's too similar and will just confuse people - as far as I'm aware, the process for components and all other extensions is the same anyway.
https://docs.joomla.org/index.php?title=Talk:Deploying_an_Update_Server&oldid=86238
2015-04-18T05:02:07
CC-MAIN-2015-18
1429246633799.48
[]
docs.joomla.org
Does anyone find these two statements contradictory? Am I not understanding something, or are these statements actually contradicting each other? Statement ONE from output_handler: "output_handler must be empty if this [zlib.output_compression] is set 'On' ! Instead you must use zlib.output_handler." Statement TWO from zlib.output_handler: "You cannot specify additional output handlers if zlib.output_compression is activated ..." Statement ONE says you have to use zlib.output_handler, if zlib.output_compression is turned ON. Statement TWO says that, if zlib.output_compression is turned ON, you cannot use zlib.output_handler. what the heck?
http://docs.php.net/manual/en/zlib.configuration.php
2015-04-18T05:01:54
CC-MAIN-2015-18
1429246633799.48
[]
docs.php.net
CreateVpnGateway Creates a virtual private gateway. A virtual private gateway is the endpoint on the VPC side of your VPN connection. You can create a virtual private gateway before creating the VPC itself. Request Parameters - AmazonSideAsn A private Autonomous System Number (ASN) for the Amazon side of a BGP session. If you're using a 16-bit ASN, it must be in the 64512 to 65534 range. If you're using a 32-bit ASN, it must be in the 4200000000 to 4294967294 range. Default: 64512 Type: Long Required: No - AvailabilityZone The Availability Zone for the virtual private gateway. - TagSpecification.N The tags to apply to the virtual private gateway. Type: Array of TagSpecification objects Required: No - Type The type of VPN connection this virtual private gateway supports. Type: String Valid Values: ipsec.1 Required: Yes Response Elements The following elements are returned by the service. - requestId The ID of the request. Type: String - vpnGateway Information about the virtual private gateway. Type: VpnGateway object Errors For information about the errors that are common to all actions, see Common client error codes. Examples Example 1 This example creates a virtual private gateway. Sample Request &Type=ipsec.1 &AUTHPARAMS Sample Response <CreateVpnGatewayResponse xmlns=""> <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId> <vpnGateway> <vpnGatewayId>vgw-fe4aa197</vpnGatewayId> <state>available</state> <type>ipsec.1</type> <amazonSideAsn>64512</amazonSideAsn> <attachments/> </vpnGateway> </CreateVpnGatewayResponse> Example 2 This example creates a virtual private gateway and specifies a private ASN of 65001 for the Amazon side of the gateway. Sample Request &Type=ipsec.1 &AmazonSideAsn=65001 &AUTHPARAMS Sample Response <CreateVpnGatewayResponse xmlns=""> <requestId>fe90b404-d4e5-4153-8677-31dexample</requestId> <vpnGateway> <vpnGatewayId>vgw-f74aa19e</vpnGatewayId> <state>available</state> <type>ipsec.1</type> <amazonSideAsn>65001</amazonSideAsn> <attachments/> </vpnGateway> </CreateVpnGatewayResponse> See Also For more information about using this API in one of the language-specific Amazon SDKs, see the following:
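For comparison with the Query API examples above, here is a minimal boto3 sketch of the same CreateVpnGateway call. The region name and credentials are assumptions (not part of this page); Type and AmazonSideAsn map directly to the request parameters described above.

import boto3

# Assumed region for the China partition documented at docs.amazonaws.cn; adjust as needed.
ec2 = boto3.client("ec2", region_name="cn-north-1")

# Type is required and must be "ipsec.1"; AmazonSideAsn is optional (defaults to 64512).
response = ec2.create_vpn_gateway(Type="ipsec.1", AmazonSideAsn=65001)

gateway = response["VpnGateway"]
print(gateway["VpnGatewayId"], gateway["State"], gateway.get("AmazonSideAsn"))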
https://docs.amazonaws.cn/en_us/AWSEC2/latest/APIReference/API_CreateVpnGateway.html
2021-09-16T16:19:49
CC-MAIN-2021-39
1631780053657.29
[]
docs.amazonaws.cn
Response Codes Whenever a D&B Direct web service request is unsuccessful, one of the following response codes will be returned. *This column displays the corresponding HTTP status code that will be returned for REST API calls. **Data Exchange calls will return the HTTP status that was received from the partner.
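As an illustration of how a client might surface these codes, here is a hedged Python sketch; the endpoint path, identifiers, and Authorization header are placeholders rather than values taken from this page, and only the status-code handling pattern is the point.

import requests

# Placeholder endpoint and token -- substitute a real D&B Direct 2.0 resource and credentials.
response = requests.get(
    "https://direct.dnb.com/V2.0/organizations/<DUNS>/products/<product-code>",
    headers={"Authorization": "<application-token>"},
    timeout=30,
)

if response.ok:
    print(response.json())
else:
    # An unsuccessful request carries one of the documented response codes; the HTTP
    # status corresponds to the starred column described above.
    print("Request failed:", response.status_code, response.text)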
https://docs.dnb.com/direct/2.0/en-US/response-codes
2021-09-16T15:48:28
CC-MAIN-2021-39
1631780053657.29
[]
docs.dnb.com
CDistributedSmoothNodeBase¶ - class CDistributedSmoothNodeBase¶ This class defines some basic methods of DistributedSmoothNodeBase which have been moved into C++ as a performance optimization. Inheritance diagram - CDistributedSmoothNodeBase(CDistributedSmoothNodeBase const&) = default¶ - void broadcast_pos_hpr_full(void)¶ Examines the complete pos/hpr information to see which of the six elements have changed, and broadcasts the appropriate messages. - void broadcast_pos_hpr_xy(void)¶ Examines only X and Y of the pos/hpr information, and broadcasts the appropriate messages. - void broadcast_pos_hpr_xyh(void)¶ Examines only X, Y, and H of the pos/hpr information, and broadcasts the appropriate messages. - void initialize(NodePath const &node_path, DCClass *dclass, CHANNEL_TYPE do_id)¶ Initializes the internal structures from some constructs that are normally stored only in Python. Also reads the current node’s pos & hpr values in preparation for transmitting them via one of the broadcast_pos_hpr_*() methods. - void set_clock_delta(PyObject *clock_delta)¶ Tells the C++ instance definition about the global ClockDelta object. - void set_repository(CConnectionRepository *repository, bool is_ai, CHANNEL_TYPE ai_id)¶ Tells the C++ instance definition about the AI or Client repository, used for sending datagrams.
https://docs.panda3d.org/1.10/cpp/reference/panda3d.direct.CDistributedSmoothNodeBase
2021-09-16T15:34:15
CC-MAIN-2021-39
1631780053657.29
[]
docs.panda3d.org
nsb = new NServiceBusRoleEntrypoint(); public override bool OnStart() { nsb.Start(); return base.OnStart(); } public override void OnStop() { nsb.Stop(); base.OnStop(); } } Next, define the endpoint behavior. The role has been named AsA_Worker. Specify the transport and persistence using the UseTransport and UsePersistence methods. public class EndpointConfig : IConfigureThisEndpoint, AsA_Worker { public void Customize(BusConfiguration busConfiguration) { // Configure transport, persistence, etc. } } This will integrate and configure the default infrastructure: - Configuration settings will be read from the app.config file and merged with the settings from the service configuration file. var busConfiguration = new BusConfiguration(); busConfiguration.AzureConfigurationSource(); busConfiguration.UseTransport<AzureStorageQueueTransport>(); busConfiguration.UsePersistence<AzureStoragePersistence>(); var startableBus = Bus.Create(busConfiguration); var bus = startableBus.Start(); A short explanation of each: AzureConfigurationSource: overrides any settings known to the NServiceBus Azure configuration section within the app.config file with settings from the service configuration file. - public class Bootstrapper : IWantToRunWhenBusStartsAndStops { public void Start() { // Do startup actions here. } public void Stop() { // Do cleanup actions here. } }
https://docs.particular.net/nservicebus/hosting/cloud-services-host/?version=cloudserviceshost_6
2021-09-16T15:29:46
CC-MAIN-2021-39
1631780053657.29
[]
docs.particular.net
Linux Virtuozzo Templates Virtuozzo application templates are RPM packages which, when installed on a node, allow easy deployment of an application in as many Containers as required, saving a lot of critical system resources like disk space. You can obtain the Plesk templates at the Plesk website, or download them using the Virtuozzo command-line utility call " vzup2date -z" (Virtuozzo 4 and later) or by means of yum on Virtuozzo. Versioned and version-free templates Starting with Plesk 10.4, Plesk ships two sets of EZ templates for each Plesk release: major-version templates, and version-free templates. Both provide the same software components, the only difference between them is that template updates are installed when a Container is updated: - Provider-controlled versioned templates automatically get all the latest updates and upgrades released for the major version of Plesk. For example, if the versioned template of Plesk 10 ( pp10) is installed in a Container, the vzpkg update <CT_ID>command will update it to the latest released version of Plesk 10.x.x, be it 10.0.1 or 10.1.0. More specifically, versioned templates perform upgrades allowed by a typical Plesk license. This license allows you to perform upgrades within the second major version number. For example, from 10.1 to 10.2, but not from 10.4 to 11.0. - Version-free auto-upgrade templates get all updates and upgrades regardless of the Plesk license key. In other words, such templates automatically update to the last available Plesk version regardless of its number once this version in released. For example, this can be an upgrade from 10.3 to 10.4 or from 10.4 to 11.0. Note that if your Plesk license does not allow complex upgrades, you will need to obtain a new license key after each such upgrade. For example, if the version-free base template of Plesk ( pp) is installed in a Container (for example, 10.x.x), the vzpkg update <CT_ID>command will update it to the latest released version of Plesk x.x.x, be it 10.x.x or 11.x.x. You can tell versioned templates and version-free ones apart from their names: The name prefix of the first type contains the major version ( pp12), while the latter does not contain any version numbers ( pp). Toggling auto-detection of EZ templates Virtuozzo 4.0 and later versions can discover EZ templates in a container and perform automatic actions depending on the templates. This feature provides opportunities for business automation software (like PBAs) to automatically find products installed in a container and start billing the container owner. The discovery algorithm is straightforward: If the system finds all packages included in an EZ template, it considers the template to be installed. The major drawback of this approach is that Plesk 9.x and SMB are very close to each other in terms of packages, so the auto-detection engine can make incorrect decisions. For example, if only one of the applications is present in a container, the system considers that both templates are installed. The most noticeable outcome of this detection problem is that the system fails to update both applications and set proper billing for them. It is possible to stop the auto-detection if you use the billing automation software or if you want to install tightly bound Plesk products. To do this, modify the /etc/vztt/vztt.conf file by setting APP_TEMPLATE_AUTODETECTION=no. 
Shipped templates Since both versioned and version-free sets of templates provide the same components, we will list only the versioned ones for simplicity's sake. The following EZ templates are shipped for Plesk 12.5:
https://docs.plesk.com/en-US/12.5/deployment-guide/66140/
2021-09-16T16:02:01
CC-MAIN-2021-39
1631780053657.29
[]
docs.plesk.com
How to Find Your Recording in Chorus Can’t find the call you swore was recorded in Chorus? We’re sorry, we know the feeling -- so we created an "Unrecorded Meetings" page that shows all meetings that were not recorded over the last 30 days, and provides the reason they weren’t recorded. 1. To access your list of unrecorded meetings, first click on the "My Recordings" Page. 2. Then click on the ‘Why wasn’t my meeting recorded?’ link at the top of the My Recordings page. PRO TIP: To make sure your upcoming meetings are recorded, use the Chorus Calendar also on the "My Recordings" page. The Chorus Calendar is on the right side of the screen, and allows you to toggle recordings on.
https://docs.chorus.ai/hc/en-us/articles/360033985714-Where-Can-I-Find-My-Unrecorded-Meetings-
2020-02-17T00:11:31
CC-MAIN-2020-10
1581875141460.64
[array(['/hc/article_attachments/360043860074/cant_find_recording_1.png', 'cant_find_recording_1.png'], dtype=object) array(['/hc/article_attachments/360043860274/find_recording_2.png', 'find_recording_2.png'], dtype=object) array(['/hc/article_attachments/360043860734/chorus_calendar.png', 'chorus_calendar.png'], dtype=object) ]
docs.chorus.ai
Helping You Recover Your Work in Office 2010 My name is Nitie and I work in the Office Reliability Team. My team's goal is to improve Office reliability and your experience using our software. When thinking about how we can improve Office, we usually aim to make the product do exactly what you ask it to do, but in this case I want to highlight a new feature where we are doing something automatically in order to help protect you from accidentally losing your work. How do we protect you from accidentally not saving a document?
https://docs.microsoft.com/en-us/archive/blogs/office2010/helping-you-recover-your-work-in-office-2010
2020-02-17T02:31:29
CC-MAIN-2020-10
1581875141460.64
[array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/prod.evol.blogs.technet.com/telligent.evolution.components.attachments/13/7730/00/00/03/28/24/89/image1.png', 'Version list in the backstage view, and image of the save dialog Version list in the backstage view, and image of the save dialog'], dtype=object) array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/prod.evol.blogs.technet.com/telligent.evolution.components.attachments/13/7730/00/00/03/28/24/90/image2.png', 'Recent documents and general Backstage view version information Recent documents and general Backstage view version information'], dtype=object) ]
docs.microsoft.com
Tracking and event logging for your Azure Data Box and Azure Data Box Heavy A Data Box or Data Box Heavy order goes through the following steps: order, set up, data copy, return, upload to Azure and verify, and data erasure. Corresponding to each step in the order, you can take multiple actions to control the access to the order, audit the events, track the order, and interpret the various logs that are generated. The following table shows a summary of the Data Box or Data Box Heavy order steps and the tools available to track and audit the order during each step. This article describes in detail the various mechanisms or tools available to track and audit Data Box or Data Box Heavy order. The information in this article applies to both, Data Box and Data Box Heavy. In the subsequent sections, any references to Data Box also apply to Data Box Heavy. Set up access control on the order You can control who can access your order when the order is first created. Set up Role-based Access Control (RBAC) roles at various scopes to control the access to the Data Box order. An RBAC role determines the type of access – read-write, read-only, read-write to a subset of operations. The two roles that can be defined for the Azure Data Box service are: - Data Box Reader - have read-only access to an order(s) as defined by the scope. They can only view details of an order. They can’t access any other details related to storage accounts or edit the order details such as address and so on. - Data Box Contributor - can only create an order to transfer data to a given storage account if they already have write access to a storage account. If they do not have access to a storage account, they can't even create a Data Box order to copy data to the account. This role does not define any Storage account related permissions nor grants access to storage accounts. To restrict access to an order, you can: - Assign a role at an order level. The user only has those permissions as defined by the roles to interact with that specific Data Box order only and nothing else. - Assign a role at the resource group level, the user has access to all the Data Box orders within a resource group. For more information on suggested RBAC use, see Best practices for RBAC. Track the order You can track your order through the Azure portal and through the shipping carrier website. The following mechanisms are in place to track the Data Box order at any time: To track the order when the device is in Azure datacenter or your premises, go to your Data Box order > Overview in Azure portal. To track the order while the device is in transit, go to the regional carrier website, for example, UPS website in US. Provide the tracking number associated with your order. Data Box also sends email notifications anytime the order status changes based on the emails provided when the order was created. For a list of all the Data Box order statuses, see View order status. To change the notification settings associated with the order, see Edit notification details. Query activity logs during setup Your Data Box arrives on your premises in a locked state. You can use the device credentials available in the Azure portal for your order. When a Data Box is set up, you may need to know who all accessed the device credentials. To figure out who accessed the Device credentials blade, you can query the Activity logs. Any action that involves accessing Device details > Credentials blade is logged into the activity logs as ListCredentialsaction. 
Each sign into the Data Box is logged real time. However, this information is only available in the Audit logs after the order is successfully completed. View error log during data copy During the data copy to Data Box or Data Box Heavy, an error file is generated if there are any issues with the data being copied. Error.xml file Make sure that the copy jobs have finished with no errors. If there are errors during the copy process, download the logs from the Connect and copy page. - If you copied a file that is not 512 bytes aligned to a managed disk folder on your Data Box, the file isn't uploaded as page blob to your staging storage account. You will see an error in the logs. Remove the file and copy a file that is 512 bytes aligned. - If you copied a VHDX, or a dynamic VHD, or a differencing VHD (these files are not supported), you will see an error in the logs. Here is a sample of the error.xml for different errors when copying to managed disks. <file error="ERROR_BLOB_OR_FILE_TYPE_UNSUPPORTED">\StandardHDD\testvhds\differencing-vhd-022019.vhd</file> <file error="ERROR_BLOB_OR_FILE_TYPE_UNSUPPORTED">\StandardHDD\testvhds\dynamic-vhd-022019.vhd</file> <file error="ERROR_BLOB_OR_FILE_TYPE_UNSUPPORTED">\StandardHDD\testvhds\insidefixedvhdx-022019.vhdx</file> <file error="ERROR_BLOB_OR_FILE_TYPE_UNSUPPORTED">\StandardHDD\testvhds\insidediffvhd-022019.vhd</file> Here is a sample of the error.xml for different errors when copying to page blobs. <file error="ERROR_BLOB_OR_FILE_SIZE_ALIGNMENT">\PageBlob512NotAligned\File100Bytes</file> <file error="ERROR_BLOB_OR_FILE_SIZE_ALIGNMENT">\PageBlob512NotAligned\File786Bytes</file> <file error="ERROR_BLOB_OR_FILE_SIZE_ALIGNMENT">\PageBlob512NotAligned\File513Bytes</file> <file error="ERROR_BLOB_OR_FILE_SIZE_ALIGNMENT">\PageBlob512NotAligned\File10Bytes</file> <file error="ERROR_BLOB_OR_FILE_SIZE_ALIGNMENT">\PageBlob512NotAligned\File500Bytes</file> Here is a sample of the error.xml for different errors when copying to block blobs. 
<file error="ERROR_CONTAINER_OR_SHARE_NAME_LENGTH">\ab</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\invalid dns name<">\testdirectory-~!@#$%^&()_+{}</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\test__doubleunderscore</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_IMPROPER_DASH">\-startingwith-hyphen</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\Starting with Capital</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\_startingwith_underscore</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_IMPROPER_DASH">\55555555--4444--3333--2222--111111111111</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_LENGTH">\1</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\11111111-_2222-_3333-_4444-_555555555555</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_IMPROPER_DASH">\test--doublehyphen</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTI5Ni3vv70=</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTMwMS3vv70=</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTMwMy3vv70=</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_CONTROL">\InvalidUnicodeFiles\Ã.txt</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTMwNS3vv70=</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTI5OS3vv70=</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTMwMi3vv70=</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTMwNC3vv70=</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTI5OC3vv70=</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTMwMC3vv70=</file> <file error="ERROR_BLOB_OR_FILE_NAME_CHARACTER_ILLEGAL" name_encoding="Base64">XEludmFsaWRVbmljb2RlRmlsZXNcU3BjQ2hhci01NTI5Ny3vv70=</file> Here is a sample of the error.xml for different errors when copying to Azure Files. 
<file error="ERROR_BLOB_OR_FILE_SIZE_LIMIT">\AzFileMorethan1TB\AzFile1.2TB</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\testdirectory-~!@#$%^&()_+{}</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_IMPROPER_DASH">\55555555--4444--3333--2222--111111111111</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_IMPROPER_DASH">\-startingwith-hyphen</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\11111111-_2222-_3333-_4444-_555555555555</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_IMPROPER_DASH">\test--doublehyphen</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_LENGTH">\ab</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\invalid dns name</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\test__doubleunderscore<">\_startingwith_underscore</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_LENGTH">\1</file> <file error="ERROR_CONTAINER_OR_SHARE_NAME_ALPHA_NUMERIC_DASH">\Starting with Capital</file> In each of the above cases, resolve the errors before you proceed to the next step. For more information on the errors received during data copy to Data Box via SMB or NFS protocols, go to Troubleshoot Data Box and Data Box Heavy issues. For information on errors received during data copy to Data Box via REST, go to Troubleshoot Data Box Blob storage issues. Inspect BOM during prepare to ship During prepare to ship, a list of files known as the Bill of Materials (BOM) or manifest file is created. - Use this file to verify against the actual names and the number of files that were copied to the Data Box. - Use this file to verify against the actual sizes of the files. - Verify that the crc64 corresponds to a non-zero string. For more information on the errors received during prepare to ship, go to Troubleshoot Data Box and Data Box Heavy issues. BOM or manifest file The BOM or manifest file contains the list of all the files that are copied to the Data Box device. The BOM file has file names and the corresponding sizes as well as the checksum. A separate BOM file is created for the block blobs, page blobs, Azure Files, for copy via the REST APIs, and for the copy to managed disks on the Data Box. You can download the BOM files from the local web UI of the device during the prepare to ship. These files also reside on the Data Box device and are uploaded to the associated storage account in the Azure datacenter. BOM file format BOM or manifest file has the following general format: <file size = "file-size-in-bytes" crc64="cyclic-redundancy-check-string">\folder-path-on-data-box\name-of-file-copied.md</file> Here is a sample of a manifest generated when the data was copied to the block blob share on the Data Box. 
<file size="10923" crc64="0x51c78833c90e4e3f">\databox\media\data-box-deploy-copy-data\connect-shares-file-explorer1.png</file> <file size="15308" crc64="0x091a8b2c7a3bcf0a">\databox\media\data-box-deploy-copy-data\get-share-credentials2.png</file> <file size="53486" crc64="0x053da912fb45675f">\databox\media\data-box-deploy-copy-data\nfs-client-access.png</file> <file size="6093" crc64="0xadb61d0d7c6d4deb">\databox\data-box-cable-options.md</file> <file size="6499" crc64="0x080add29add367d9">\databox\data-box-deploy-copy-data-via-nfs.md</file> <file size="11089" crc64="0xc3ce6b13a4fe3001">\databox\data-box-deploy-copy-data-via-rest.md</file> <file size="7749" crc64="0xd2e346a4588e307a">\databox\data-box-deploy-ordered.md</file> <file size="14275" crc64="0x260296e5d1b1608a">\databox\data-box-deploy-copy-data.md</file> <file size="4077" crc64="0x2bb0a170225bceec">\databox\data-box-deploy-picked-up.md</file> <file size="15447" crc64="0xcec0ca8527720b3c">\databox\data-box-portal-admin.md</file> <file size="9126" crc64="0x820856b5a54321ad">\databox\data-box-overview.md</file> <file size="10963" crc64="0x5e9a14f9f4784fd8">\databox\data-box-safety.md</file> <file size="5941" crc64="0x8631d62fbc038760">\databox\data-box-security.md</file> <file size="12536" crc64="0x8c8ff93e73d665ec">\databox\data-box-system-requirements-rest.md</file> <file size="3220" crc64="0x7257a263c434839a">\databox\data-box-system-requirements.md</file> The BOM or manifest files are also copied to the Azure storage account. You can use the BOM or manifest files to verify that files uploaded to the Azure match the data that was copied to the Data Box. Review copy log during upload to Azure During the data upload to Azure, a copy log is created. Copy log For each order that is processed, the Data Box service creates copy log in the associated storage account. The copy log has the total number of files that were uploaded and the number of files that errored out during the data copy from Data Box to your Azure storage account. A Cyclic Redundancy Check (CRC) computation is done during the upload to Azure. The CRCs from the data copy and after the data upload are compared. A CRC mismatch indicates that the corresponding files failed to upload. By default, logs are written to a container named copylog. The logs are stored with the following naming convention: storage-account-name/databoxcopylog/ordername_device-serial-number_CopyLog_guid.xml. The copy log path is also displayed on the Overview blade for the portal. Upload completed successfully The following sample describes the general format of a copy log for a Data Box upload that completed successfully: <?xml version="1.0"?> -<CopyLog Summary="Summary"> <Status>Succeeded</Status> <TotalFiles>45</TotalFiles> <FilesErrored>0</FilesErrored> </CopyLog> Upload completed with errors Upload to Azure may also complete with errors. 
Here is an example of a copy log where the upload completed with errors: <ErroredEntity Path="iso\samsungssd.iso"> <Category>UploadErrorCloudHttp</Category> <ErrorCode>409</ErrorCode> <ErrorMessage>The blob type is invalid for this operation.</ErrorMessage> <Type>File</Type> </ErroredEntity><ErroredEntity Path="iso\iSCSI_Software_Target_33.iso"> <Category>UploadErrorCloudHttp</Category> <ErrorCode>409</ErrorCode> <ErrorMessage>The blob type is invalid for this operation.</ErrorMessage> <Type>File</Type> </ErroredEntity><CopyLog Summary="Summary"> <Status>Failed</Status> <TotalFiles_Blobs>72</TotalFiles_Blobs> <FilesErrored>2</FilesErrored> </CopyLog> Upload completed with warnings Upload to Azure completes with warnings if your data had container/blob/file names that didn't conform to Azure naming conventions and the names were modified to upload the data to Azure. Here is an example of a copy log where the containers that did not conform to Azure naming conventions were renamed during the data upload to Azure. The new unique names for containers are in the format DataBox-GUID, and the data for the container is put into the new renamed container. The copy log specifies the old and the new container name for the container. <ErroredEntity Path="New Folder"> <Category>ContainerRenamed</Category> <ErrorCode>1</ErrorCode> <ErrorMessage>The original container/share/blob has been renamed to: DataBox-3fcd02de-bee6-471e-ac62-33d60317c576 :from: New Folder :because either the name has invalid character(s) or length is not supported</ErrorMessage> <Type>Container</Type> </ErroredEntity> Here is an example of a copy log where blobs or files that did not conform to Azure naming conventions were renamed during the data upload to Azure. The new blob or file names are converted to a SHA256 digest of the relative path to the container and are uploaded to a path based on the destination type. The destination can be block blobs, page blobs, or Azure Files. The copy log specifies the old and the new blob or file name and the path in Azure. <ErroredEntity Path="TesDir028b4ba9-2426-4e50-9ed1-8e89bf30d285\Ã"> <Category>BlobRenamed</Category> <ErrorCode>1</ErrorCode> <ErrorMessage>The original container/share/blob has been renamed to: ...</ErrorMessage> </ErroredEntity> <ErroredEntity Path="...9856b9ab-6acb-4bc3-8717-9a898bdb1f8c\Ã"> <Category>BlobRenamed</Category> <ErrorCode>1</ErrorCode> <ErrorMessage>The original container/share/blob has been renamed to: ...</ErrorMessage> </ErroredEntity> <ErroredEntity Path="AzureFilef92f6ca4-3828-4338-840b-398b967d810b\Ã"> <Category>BlobRenamed</Category> <ErrorCode>1</ErrorCode> <ErrorMessage>The original container/share/blob has been renamed to: ...</ErrorMessage> </ErroredEntity> Get chain of custody logs after data erasure After the data is erased from the Data Box disks as per the NIST SP 800-88 Revision 1 guidelines, the chain of custody logs are available. These logs include the audit logs and the order history. The BOM or manifest files are also copied with the audit logs. Audit logs Audit logs contain information on how to power on and access shares on the Data Box or Data Box Heavy when it is outside of the Azure datacenter. These logs are located at: storage-account/azuredatabox-chainofcustodylogs Here is a sample of the audit log from a Data Box: 9/10/2018 8:23:01 PM : The operating system started at system time 2018-09-10T20:23:01.497758400Z. 9/10/2018 8:23:42 PM : An account was successfully logged on.
Subject: Security ID: S-1-5-18 Account Name: WIN-DATABOXADMIN Account Domain: Workgroup Logon ID: 0x3E7 Logon Information: Logon Type: 3 Restricted Admin Mode: - Virtual Account: No Elevated Token: No Impersonation Level: Impersonation New Logon: Security ID: S-1-5-7 Account Name: ANONYMOUS LOGON Account Domain: NT AUTHORITY Logon ID: 0x775D5 Linked Logon ID: 0x0 Network Account Name: - Network Account Domain: - Logon GUID: {00000000-0000-0000-0000-000000000000} Process Information: Process ID: 0x4 Process Name: Network Information: Workstation Name: - Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: NfsSvr Authentication Package:MICROSOFT_AUTHENTICATION_PACKAGE_V1_0. 9/10/2018 8:25:58 PM : An account was successfully logged on. Download order history Order history is available in Azure portal. If the order is complete and the device cleanup (data erasure from the disks) is complete, then go to your device order and navigate to Order details. Download order history option is available. For more information, see Download order history. If you scroll through the order history, you see: - Carrier tracking information for your device. - Events with SecureErase activity. These events correspond to the erasure of the data on the disk. - Data Box log links. The paths for the audit logs, copy logs, and BOM files are presented. Here is a sample of the order history log from Azure portal: ------------------------------- Microsoft Data Box Order Report ------------------------------- Name : gus-poland StartTime(UTC) : 9/19/2018 8:49:23 AM +00:00 DeviceType : DataBox ------------------- Data Box Activities ------------------- Time(UTC) | Activity | Status | Description 9/19/2018 8:49:26 AM | OrderCreated | Completed | 10/2/2018 7:32:53 AM | DevicePrepared | Completed | 10/3/2018 1:36:43 PM | ShippingToCustomer | InProgress | Shipment picked up. Local Time : 10/3/2018 1:36:43 PM at AMSTERDAM-NLD 10/4/2018 8:23:30 PM | ShippingToCustomer | InProgress | Processed at AMSTERDAM-NLD. Local Time : 10/4/2018 8:23:30 PM at AMSTERDAM-NLD 10/4/2018 11:43:34 PM | ShippingToCustomer | InProgress | Departed Facility in AMSTERDAM-NLD. Local Time : 10/4/2018 11:43:34 PM at AMSTERDAM-NLD 10/5/2018 8:13:49 AM | ShippingToCustomer | InProgress | Arrived at Delivery Facility in BRIGHTON-GBR. Local Time : 10/5/2018 8:13:49 AM at LAMBETH-GBR 10/5/2018 9:13:24 AM | ShippingToCustomer | InProgress | With delivery courier. Local Time : 10/5/2018 9:13:24 AM at BRIGHTON-GBR 10/5/2018 12:03:04 PM | ShippingToCustomer | Completed | Delivered - Signed for by. Local Time : 10/5/2018 12:03:04 PM at BRIGHTON-GBR 1/25/2019 3:19:25 PM | ShippingToDataCenter | InProgress | Shipment picked up. Local Time : 1/25/2019 3:19:25 PM at BRIGHTON-GBR 1/25/2019 8:03:55 PM | ShippingToDataCenter | InProgress | Processed at BRIGHTON-GBR. Local Time : 1/25/2019 8:03:55 PM at LAMBETH-GBR 1/25/2019 8:04:58 PM | ShippingToDataCenter | InProgress | Departed Facility in BRIGHTON-GBR. Local Time : 1/25/2019 8:04:58 PM at BRIGHTON-GBR 1/25/2019 9:06:09 PM | ShippingToDataCenter | InProgress | Arrived at Sort Facility LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:06:09 PM at LONDON-HEATHROW-GBR 1/25/2019 9:48:54 PM | ShippingToDataCenter | InProgress | Processed at LONDON-HEATHROW-GBR. Local Time : 1/25/2019 9:48:54 PM at LONDON-HEATHROW-GBR 1/25/2019 10:30:20 PM | ShippingToDataCenter | InProgress | Departed Facility in LONDON-HEATHROW-GBR. 
Local Time : 1/25/2019 10:30:20 PM at LONDON-HEATHROW-GBR 1/28/2019 7:11:35 AM | ShippingToDataCenter | InProgress | Arrived at Delivery Facility in AMSTERDAM-NLD. Local Time : 1/28/2019 7:11:35 AM at AMSTERDAM-NLD 1/28/2019 9:07:57 AM | ShippingToDataCenter | InProgress | With delivery courier. Local Time : 1/28/2019 9:07:57 AM at AMSTERDAM-NLD 1/28/2019 1:35:56 PM | ShippingToDataCenter | InProgress | Scheduled for delivery. Local Time : 1/28/2019 1:35:56 PM at AMSTERDAM-NLD 1/28/2019 2:57:48 PM | ShippingToDataCenter | Completed | Delivered - Signed for by. Local Time : 1/28/2019 2:57:48 PM at AMSTERDAM-NLD 1/29/2019 2:18:43 PM | PhysicalVerification | Completed | 1/29/2019 3:49:50 PM | DeviceBoot | Completed | Appliance booted up successfully. 1/29/2019 3:49:51 PM | AnomalyDetection | Completed | No anomaly detected. 2/12/2019 10:37:03 PM | DataCopy | Resumed | 2/13/2019 12:05:15 AM | DataCopy | Resumed | 2/15/2019 7:07:34 PM | DataCopy | Completed | Copy Completed. 2/15/2019 7:47:32 PM | SecureErase | Started | 2/15/2019 8:01:10 PM | SecureErase | Completed | Azure Data Box:<Device-serial-no> has been sanitized according to NIST 800-88 Rev 1. ------------------ Data Box Log Links ------------------ Account Name : gusacct Copy Logs Path : databoxcopylog/gus-poland_<Device-serial-no>_CopyLog_<GUID>.xml Audit Logs Path : azuredatabox-chainofcustodylogs\<GUID>\<Device-serial-no> BOM Files Path : azuredatabox-chainofcustodylogs\<GUID>\<Device-serial-no> Next steps - Learn how to Troubleshoot issues on your Data Box and Data Box Heavy. Feedback
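As a worked example of reading these logs, below is a minimal Python sketch that summarizes a downloaded copy log of the shape shown earlier; the local file name is a placeholder, since the real log lives in the storage account under databoxcopylog/ with the naming convention described above.

import re
import xml.etree.ElementTree as ET

def summarize_copy_log(path="copylog.xml"):  # placeholder local path
    with open(path, encoding="utf-8") as handle:
        raw = handle.read()
    # The log can contain several top-level elements (ErroredEntity entries followed by
    # the CopyLog summary), so strip any XML declaration and wrap everything in one root.
    raw = re.sub(r"^\s*<\?xml[^>]*\?>", "", raw)
    root = ET.fromstring(f"<log>{raw}</log>")
    summary = root.find("CopyLog")
    status = summary.findtext("Status") if summary is not None else "Unknown"
    entries = [
        (entity.get("Path"), entity.findtext("Category"), entity.findtext("ErrorMessage"))
        for entity in root.findall("ErroredEntity")
    ]
    return status, entries

status, entries = summarize_copy_log()
print(f"Upload status: {status}; entries with errors or warnings: {len(entries)}")
for path, category, message in entries:
    print(f"  [{category}] {path}: {message}")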
https://docs.microsoft.com/en-us/azure/databox/data-box-logs
2020-02-17T01:31:21
CC-MAIN-2020-10
1581875141460.64
[array(['media/data-box-logs/copy-log-path-1.png', 'Path to copy log in Overview blade when completed'], dtype=object) array(['media/data-box-logs/copy-log-path-2.png', 'Path to copy log in Overview blade when completed with errors'], dtype=object) array(['media/data-box-logs/copy-log-path-3.png', 'Path to copy log in Overview blade when completed with warnings'], dtype=object) ]
docs.microsoft.com
TLS/SSL and PyMongo PyMongo supports connecting to MongoDB over TLS/SSL. This guide covers the configuration options supported by PyMongo. See the server documentation to configure MongoDB. Dependencies The required dependencies are listed below. Python 2.x The ipaddress module is required on all platforms. When using CPython < 2.7.9 or PyPy < 2.5.1: - On Windows, the wincertstore module is required. - On all other platforms, the certifi module is required.
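A minimal connection sketch follows; the host name and CA file path are placeholders. With recent stable PyMongo releases, tls and tlsCAFile are the documented option names (older releases accept the ssl-prefixed equivalents).

from pymongo import MongoClient

client = MongoClient(
    "mongodb://db.example.com:27017/",  # hypothetical server
    tls=True,                           # encrypt the connection with TLS/SSL
    tlsCAFile="/path/to/ca.pem",        # hypothetical CA bundle used to verify the server
    serverSelectionTimeoutMS=5000,
)
print(client.admin.command("ping"))     # fails fast if the TLS handshake is rejected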
https://pymongo.readthedocs.io/en/stable/examples/tls.html
2020-02-17T02:10:38
CC-MAIN-2020-10
1581875141460.64
[]
pymongo.readthedocs.io
Prevents past days from being included in events with the 'weekly' recurrence. scheduler.config.repeat_precise = true; By default, when the user specifies the 'weekly' recurrence, the scheduler includes the current week in this recurrence, regardless of whether the user creates an event after, between or before the included day(s). For example, the user creates an event on Thursday and sets the 'weekly' repetition on Monday and Wednesday. The saved event will contain the current week, i.e. the past Monday and Wednesday, even though it was created on Thursday. If you set the repeat_precise option to true, the start date of a recurring event will be the date of the first real occurrence, i.e. in our example it will be Monday of the next week.
https://docs.dhtmlx.com/scheduler/api__scheduler_repeat_precise_config.html
2020-02-17T02:05:23
CC-MAIN-2020-10
1581875141460.64
[]
docs.dhtmlx.com
Welcome to the API (v1.0) for Dynamics 365 Business Central With Dynamics 365 you can create Connect apps. A Connect app establishes a point-to-point connection between Dynamics 365 Business Central and a third-party solution or service and is typically created using standard REST APIs to interchange data. Any coding language capable of calling REST APIs can be used to develop your Connect app. For more information about getting started with Connect apps, see Developing Connect Apps for Dynamics 365 Business Central. Before you start using the Business Central APIs, please familiarize yourself with the Microsoft APIs Terms of Use. Tip For information about enabling APIs for Dynamics NAV, see Enabling the APIs for Dynamics 365 Business Central. See Also Microsoft APIs Terms of Use Enabling the APIs for Dynamics 365 Business Central Development in AL Developing Connect Apps for Dynamics 365 Business Central OpenAPI Specification
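Because any language that can call REST APIs will do, here is a minimal Python sketch of a Connect-app style request; the API root, bearer token, and tenant details are placeholders and not taken from this page, while companies is one of the standard API entities.

import requests

BASE_URL = "https://api.businesscentral.dynamics.com/v1.0"  # assumed/illustrative API root
TOKEN = "<azure-ad-access-token>"                            # placeholder

response = requests.get(
    f"{BASE_URL}/companies",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()
for company in response.json().get("value", []):
    print(company["id"], company["name"])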
https://docs.microsoft.com/de-de/dynamics-nav/api-reference/v1.0/
2020-02-17T01:54:58
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Migrate your organization data to Office 365 Enterprise. Migrate email to Office 365 - Migrate with Exchange Hybrid using the Exchange Deployment Assistant. (Administrator) - Learn more about the different ways to migrate email to Office 365. - Find alternative ways people in your organization can migrate their own email, contacts, and calendars. Migrate files and folders - Migrate to SharePoint Online and OneDrive. (Administrator) - SharePoint Server hybrid configuration roadmaps. (Administrator) Migrate Skype for Business users - Migrate to Skype for Business Online. (Administrator) - Download the Skype for Business meeting update tool and run it on each workstation. (Administrator and/or end user) Need to talk to Support? Contact support for business products.
https://docs.microsoft.com/en-us/office365/enterprise/migrate-data-to-office-365
2020-02-17T01:08:33
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Product Administration Guide. For user management, see Working with Users, Roles and Permissions in the WSO2 Product Administration Guide. The topics given below explain how these protocols are enabled and configured for WSO2 MB: - Advanced Message Queuing Protocol (AMQP) - Message Queuing and Telemetry Transport (MQTT) Note that WSO2 Message Broker does not require the transports enabled through the Carbon Kernel. Configuring multitenancy You can create multiple tenants in your product server, which will allow you to maintain tenant isolation in a single server/cluster. For instructions on configuring multiple tenants for your server, see Working with Multiple Tenants in the WSO2 Product Administration Guide. For performance tuning recommendations that are specific to WSO2 MB functionality, see the topics given below: - Clustering Performance - Database Performance - Message Publisher and Consumer Performance - Tuning Flow Control For instructions on changing the default ports, see Changing the Default Ports in the WSO2 Product Administration Guide. For instructions on applying patches (issued by WSO2), see WSO2 Patch Application Process in the WSO2 Product Administration Guide. Monitoring the server Monitoring is an important part of maintaining a product server. Listed below are the monitoring capabilities that are available for WSO2 MB. WSO2 MB is shipped with JVM Metrics, which allows you to monitor statistics of your server using Java Metrics. For instructions on setting up and using Carbon metrics for monitoring, see Using WSO2 Metrics in the WSO2 Product Administration Guide. JMX-based Monitoring For information on monitoring your server using JMX, see JMX-based monitoring in the WSO2 Product Administration Guide. Troubleshooting WSO2 MB For details on how you can troubleshoot and trace errors that occur in your WSO2 MB server, see Troubleshooting WSO2 MB.
https://docs.wso2.com/pages/viewpage.action?pageId=53121914
2020-02-17T01:53:36
CC-MAIN-2020-10
1581875141460.64
[]
docs.wso2.com
[This is preliminary documentation and is subject to change.] Gets a value indicating whether this instance needs collision. Entities not marked explicitly to need collision will still get collision; however, if other entities need collision, entities not marked as needing collision may lose it in favor of those. Namespace: Rage Assembly: RagePluginHook (in RagePluginHook.dll) Version: 0.0.0.0 (0.56.1131.11510) Syntax Property Value Type: Boolean; true if this entity needs collision; otherwise, false. See Also
http://docs.ragepluginhook.net/html/P_Rage_Entity_NeedsCollision.htm
2020-02-17T01:50:32
CC-MAIN-2020-10
1581875141460.64
[]
docs.ragepluginhook.net
Control and AutomationControl
The WinDriver virtual user has two classes for handling window controls. A control represents a window, dialog, button, edit box, etc.
Control
This class is used for objects that are identified by a unique Windows handle value. The class makes C++ Windows API calls and sends Windows messages to query or update the control. It works well on legacy Windows controls written in C, C++, Visual Basic, Delphi and other languages or frameworks, including many .NET applications where the controls are based on standard Windows controls. Searching for and interacting with WinDriver Control objects is generally fast and has a low overhead. The class has limitations when the controls are not derived from standard Windows classes, for instance certain "custom controls", controls built with other GUI frameworks such as Windows Presentation Foundation (WPF), or controls displayed in web browsers. For these controls you can try the AutomationControl.
AutomationControl
This class wraps the Windows Automation Element object, so it is very similar in usage to the WinDriver Control class. It enables scripts to work with controls created using frameworks such as WPF, Silverlight, HTML pages (IE7 and later), Firefox and even some Java UI frameworks. You can use a tool such as Microsoft's UISpy to identify the controls and their properties. Windows Automation Element objects have a large set of methods and properties for working with the control, and these have a consistency that applies to many different control types. Searching for AutomationControl objects can have a high overhead, particularly when there are a large number of objects in the Windows control hierarchy (although the default MaxSearchDepth of 3 will limit the elapsed time).
The AutomationControl class was developed to allow working with objects that were not visible using the basic WinDriver Control class. For consistency, it has almost the same methods. The Control class method ToAutomationControl is a convenient way to convert a Control object to an AutomationControl object. This is useful when a parent object can be quickly found using Control methods but its contents can only be found using AutomationControl methods.
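As a rough sketch (the wrapper method name and the way the Control is first located are assumptions, not part of the documented WinDriver API), the conversion described above might be wrapped like this in a C# virtual user script:

// Sketch only: 'legacyControl' is assumed to have already been found with the
// WinDriver virtual user's normal Control lookup calls (not shown here).
static AutomationControl ReachWpfContent(Control legacyControl)
{
    // ToAutomationControl() is the documented conversion method. The resulting
    // AutomationControl exposes almost the same methods, but can also reach WPF,
    // Silverlight and browser-hosted elements that the plain Control class cannot see.
    return legacyControl.ToAutomationControl();
}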
http://docs.eggplantsoftware.com/epp/9.0.0/ePP/wdvuscontrol_and_automationcontrol.htm
2020-02-17T01:05:26
CC-MAIN-2020-10
1581875141460.64
[]
docs.eggplantsoftware.com
Composes an email message and immediately queues it for sending. In order to send email using the SendEmail operation, your message must meet the following requirements: Warning.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
send-email
[--destination <value>]
[--message <value>]
[--reply-to-addresses <value>]
[--return-path <value>]
[--source-arn <value>]
[--return-path-arn <value>]
[--tags <value>]
[--configuration-set-name <value>]
--from <value>
[--to <value>]
[--cc <value>]
[--bcc <value>]
[--subject <value>]
[--text <value>]
[--html <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
--destination (structure) The destination for this email, composed of To:, CC:, and BCC: fields.
Shorthand Syntax: ToAddresses=string,string,CcAddresses=string,string,BccAddresses=string,string
JSON Syntax: { "ToAddresses": ["string", ...], "CcAddresses": ["string", ...], "BccAddresses": ["string", ...] }
--message (structure) The message to be sent.
Shorthand Syntax: Subject={Data=string,Charset=string},Body={Text={Data=string,Charset=string},Html={Data=string,Charset=string}}
JSON Syntax: { "Subject": { "Data": "string", "Charset": "string" }, "Body": { "Text": { "Data": "string", "Charset": "string" }, "Html": { "Data": "string", "Charset": "string" } } }
--reply-to-addresses (list) The reply-to email address(es) for the message. If the recipient replies to the message, each reply-to address will receive the reply.
Syntax: "string" "string" ...
--return-path (string)
--source-arn (string)
--return-path-arn (string)
--tags (list) A list of tags, in the form of name/value pairs, to apply to an email that you send using SendEmail. Tags correspond to characteristics of the email that you define, so that you can publish email sending events.
Shorthand Syntax: Name=string,Value=string ...
JSON Syntax: [ { "Name": "string", "Value": "string" } ... ]
--configuration-set-name (string) The name of the configuration set to use when you send an email using SendEmail.
--from (string) The email address that is sending the email.
--to (string) The email addresses of the primary recipients. You can specify multiple recipients as space-separated values.
--cc (string) The email addresses of copy recipients (Cc). You can specify multiple recipients as space-separated values.
--bcc (string) The email addresses of blind-carbon-copy recipients (Bcc). You can specify multiple recipients as space-separated values.
--subject (string) The subject of the message.
--text (string) The raw text body of the message.
--html (string) The HTML body of the message.
To send a formatted email using Amazon SES
The following example uses the send-email command to send a formatted email:
aws ses send-email --from [email protected] --destination file://destination.json --message file://message.json
Output:
{ "MessageId": "EXAMPLEf3a5efcd1-51adec81-d2a4-4e3f-9fe2-5d85c1b23783-000000" }
The destination and the message are JSON data structures saved in .json files in the current directory. These files are as follows:
destination.json:
{ "ToAddresses": ["[email protected]", "[email protected]"], "CcAddresses": ["[email protected]"], "BccAddresses": [] }
message.json:
{ "Subject": { "Data": "Test email sent using the AWS CLI", "Charset": "UTF-8" }, "Body": { "Text": { "Data": "This is the message body in text format.", "Charset": "UTF-8" }, "Html": { "Data": "This message body contains HTML formatting.
It can, for example, contain links like this one: <a class=\"ulink\" href=\"\" target=\"_blank\">Amazon SES Developer Guide</a>.", "Charset": "UTF-8" } } } Replace the sender and recipient email addresses with the ones you want to use. Note that the sender's email address must be verified with Amazon SES. Until you are granted production access to Amazon SES, you must also verify the email address of each recipient unless the recipient is the Amazon SES mailbox simulator. For more information on verification, see Verifying Email Addresses and Domains in Amazon SES in the Amazon Simple Email Service Developer Guide. The Message ID in the output indicates that the call to send-email was successful. If you don't receive the email, check your Junk box. For more information on sending formatted email, see Sending Formatted Email Using the Amazon SES API in the Amazon Simple Email Service Developer Guide.
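For a quick test that skips the JSON files, the simplified options listed in the synopsis above (--from, --to, --subject, --text) can be combined in a single call. The addresses below are placeholders; while your account is in the Amazon SES sandbox, both the sender and the recipient must be verified.

aws ses send-email \
  --from [email protected] \
  --to [email protected] \
  --subject "Test email sent using the AWS CLI" \
  --text "This is the message body in text format."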
https://docs.aws.amazon.com/cli/latest/reference/ses/send-email.html
2020-02-17T00:25:20
CC-MAIN-2020-10
1581875141460.64
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Get-CFGComplianceSummaryByConfigRule -Select <String>

Get-CFGComplianceSummaryByConfigRule -Select ComplianceSummary.NonCompliantResourceCount

CapExceeded CappedCount
----------- -----------
False       9

This sample returns the number of Config rules that are non-compliant.
AWS Tools for PowerShell: 2.x.y.z
https://docs.aws.amazon.com/powershell/latest/reference/items/Get-CFGComplianceSummaryByConfigRule.html
2020-02-17T01:06:45
CC-MAIN-2020-10
1581875141460.64
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Get-SES2DomainStatisticsReport -Domain <String> -EndDate <DateTime> -StartDate <DateTime> -Select <String> -PassThru <SwitchParameter>
The EndDate that you specify has to be less than or equal to 30 days after the StartDate.
AWS Tools for PowerShell: 2.x.y.z
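A hypothetical invocation that respects the 30-day constraint described above (the domain name is a placeholder):

# Fetch deliverability statistics for the last 7 days, well within the
# 30-day window allowed between -StartDate and -EndDate.
Get-SES2DomainStatisticsReport -Domain example.com `
    -StartDate (Get-Date).AddDays(-7) `
    -EndDate (Get-Date)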
https://docs.aws.amazon.com/powershell/latest/reference/items/Get-SES2DomainStatisticsReport.html
2020-02-17T01:32:28
CC-MAIN-2020-10
1581875141460.64
[]
docs.aws.amazon.com
The Bearer Client has been deprecated. Below you'll find information on its usage; however, we suggest using the Bearer Agent for your API consumption and usage going forward.
The Bearer universal API client is available in many languages (and more coming...). All of our API clients have the ability to call any APIs using the same syntax. In addition, some of them have additional features, such as:
- Receive webhooks.
- Add a connect button to trigger the OAuth flow.
- Add a setup component to ask for a user's API credentials.
Here is a comparison of what Bearer features are supported in each language:
Note: alongside these languages, we provide an Express middleware (for Node.js), Rails support and React components. Learn more about how to use Bearer in each language below:
https://docs.bearer.sh/deprecated/bearer-client/integration-clients
2020-02-17T00:53:41
CC-MAIN-2020-10
1581875141460.64
[]
docs.bearer.sh
You can think of flows as a folder in which you can put bot dialogs that are related to the same topic. Keep in mind that flows, just like the connections between dialog states, are simply a way of organizing your bot. They do not restrict the movement of users across your bot. Users can jump from one flow to another by using intents. Bot builders can set-up a next bot dialog to another flow Some tips in choosing how to split flows: Group all flows that have a functional relation. In our Choo Choo example, you could group all bot dialogs that are meant to help book a ticket, all general questions about trains, and all support flows (e.g. I lost my bag on the train). Reserve one flow for general questions, such as your offloading settings and your not understood bot dialog. There are four kinds of bot dialogs, each with its own color and functionalities. Every message that a bot will send to a user is a bot message. This includes text messages, buttons, quick replies, etc. Use this bot dialog type to gather input from your users. Every bot dialog type has a settings and NLP tab, which stay the same throughout the different types. In Chatlayer.ai you can have two views on your dialog states, where you configure what the bot will answer to a user. The flow view: a visual representation of all your dialog states, where you can easily see which bot dialogs are related to each other. The table view: the same information about your dialog states, but in a table, making it easier to filter, search and sort on dialog states. View your flows as a tree representation. This view is helpful to see how your flows are constructed. The parent child relations between bot dialogs is being used to visually organise the bot dialog states. Changing the parent child relation will not change the way your conversation flows work: it is purely for organising the bot dialogs in a logical matter on the canvas. The user is only redirected by intent recognition or click on UI components such as buttons and quick replies. View your bot dialog states in a table which is helpful for searching, filtering and sorting dialog states. This is the ID associated with the dialog state. You can use this to debug the bot in the Emulator. The name associated to this dialog state. This is also the label that is shown in the tree view, the list view or the translations module. You can enter any name you like and change it as often as you wish. The bot dialog flow determines in which flow the bot dialog you are editing or creating will live. The Parent Dialog State can be used to visually organise the dialog states. Changing the Parent Dialog State will not restrict the conversation flows: it is purely for organising the dialog states in a logical matter on the canvas. You can only choose a parent that is present in the flow you have selected. Every bot dialog can be linked to an Intent. When a user is entering free-form text, it is analysed by the NLP model. If the NLP model recognises an intent with a high enough accuracy (above the threshold), the user will reach the bot dialog coupled to the intent. Multiple bot dialogs can reuse the same intent by using different Context settings. A dialog state can only be linked to one Intent. To restrict the usage of an intent in a bot dialog to certain points in the dialog flow, you can use input contexts. 
If you specify an input context and the linked intent is recognised, the bot will check if the input context is active and act accordingly: combine multiple required contexts for sub flows in flows. To set a required context for a certain bot dialog, type the name of your context in the Search or create required context field. If it doesn't exist a new context is created. An existing context is re-used. You can also click on the available contexts to select one. If the required context is not active, this dialog state will not be reached, even though the intent linked to it was recognised. If the required context is active, the dialog state will be reached. To learn more about using context, see the Context concepts documentation To set an output context for a certain bot dialog, type the name of your context in the Search or create output context field. If it doesn't exist a new context is created. An existing context is re-used. You can also click on the available contexts to select one. The number on the left of the context name is the lifetime of the context. For example if you specify a lifetime of 3, this context will remain active for 3 user messages. After the user has entered three messages, this context will not be active anymore. Combine multiple input contexts for sub flows in flows.
https://docs.chatlayer.ai/bot-answers/dialog-state
2020-02-17T01:02:36
CC-MAIN-2020-10
1581875141460.64
[]
docs.chatlayer.ai
Microsoft 365 usage analytics Microsoft 365 usage analytics is also available for Microsoft 365 US Government Community. Overview of Microsoft 365 usage analytics Use Microsoft 365 usage analytics within Power BI to gain insights on how your organization is adopting the various services within Microsoft 365 to communicate and collaborate. You can visualize and analyze Microsoft 365 usage data, create custom reports and share the insights within your organization and gain insights into how specific regions or departments are utilizing Microsoft 365. Microsoft 365 usage analytics is a template app template app includes user attributes from Active Directory, enabling the ability to pivot in certain reports. The following Active Directory attributes are included: location, department and organization. See Enable Microsoft 365 usage analytics to start collecting data. Microsoft 365 usage analytics contains a number of reports detailed in the following sections. You can access detailed reports for each area by selecting the data tables. You can view all pre-built reports by selecting the tabs at the bottom of the site, once you are viewing the reports. For more detailed instructions, read Navigating and utilizing the reports in Microsoft 365 usage analytics and Customizing the reports in Microsoft 365 usage analytics. Executive summary The executive summary is a high-level, at-a-glance view of Microsoft 365 for Business adoption, usage, mobility, communication, collaboration, and storage reports, and is meant for business decision makers. It provides a view into how some individual services are being used, based on all the users who have been enabled and those who are active. All values of the month shown on the report refer to the latest complete month. This summary lets you quickly understand usage patterns in Office and how and where your employees are collaborating. Overview The Microsoft 365 overview report contains the following reports. You can view them by choosing the tab on top of the report page. All values of the month shown on the top section of the report refer to the latest complete month. Adoption – Offers an all-up summary of adoption trends. Use the reports in this section to learn how your users have adopted Microsoft 365, as well as how overall usage of the individual services has changed month over month. You can see how may users are enabled, how many people in your organization are actively using Microsoft 365, how many are returning users, and how many are using the product for the first time. Usage – Offers a drill-down view into the volume of active users and the key activities for each product for the last 12 months. Use the reports in this section to learn how people in your organization are using Microsoft 365. Communication – You can see at a glance whether people in your organization prefer to stay in touch by using Teams, Yammer, email, or Skype calls. You can observe if there are shifts in patterns in the use of communication tools among your employees. Collaboration – See how people in your organization use OneDrive and SharePoint to store documents and collaborate with each other, and how these trends evolve month over month. You can also see how many documents are shared internally or externally and how many SharePoint sites or OneDrive accounts are actively being used, broken out by owners and other collaborators. Storage – Use this report to track cloud storage for mailboxes, OneDrive, and SharePoint sites. 
Mobility – Track which clients and devices people use to connect to email, Teams, Skype, or Yammer. Activation and licensing The activation and license page offers reports on Microsoft 365 activation; that is, how many users have downloaded and activated Office apps and how many licenses have been assigned by your organization. The month value towards the top refers to the current month, and the metrics reflect values aggregated from the beginning of the month to the current date. Activation – Track service plan (for example, Microsoft 365 ProPlus, Project, and Visio) activations in your organization. Each person with an Office license can install products on up to five devices. You can also use reports in this section to see the devices on which people have installed Office apps. Note that to activate a plan, a user must install the app and sign in with their account. Licensing – This report contains an overview of license types, the count of users who were assigned each license type, and the license assignment distribution for each month. The month value towards the top refers to the current month, and the metrics reflect values aggregated from the beginning of the month to the current date. Product usage This report contains a separate report for each Microsoft 365 service, including Exchange, Microsoft 365 groups, OneDrive, SharePoint, Skype, Teams, and Yammer. Each report contains total enabled vs. total active user reports, counts of entities such as mailboxes, sites, groups, and accounts, as well as activity type reports where appropriate. All values of the month shown on the top section of the report refer to the latest complete month. User activity User activity reports are available for certain individual services. These reports provide user-level detail usage data joined with Active Directory attributes. In addition, the Department Adoption report lets you slice by Active Directory attributes so that you can see active users across all individual services. All metrics are aggregated for the latest complete month. Is this template app going to be available through purchase or will it be free? It is not free, you will need a Power BI Pro license. For details see prerequisites for installing, customizing, and distributing a template app. To share the dashboards with others, please see more at Share dashboards and reports. Who can connect to Microsoft 365 usage analytics? You have to be either a Global admin, Exchange admin, Skype for Business admin, SharePoint admin, Global reader or Report reader in order to establish the connection to the template app. See About admin roles for more information. Who can customize the usage analytics reports? Only the user who made the initial connection to the template app can customize the reports or create new reports in the Power BI web interface. See Customizing the reports in Microsoft 365 usage analytics for instructions. Can I only customize the reports from the Power BI web interface? In addition to customizing the reports from the Power BI web interface, users can also use Power BI Desktop to connect directly to the Microsoft 365 reporting service to build their own reports. How can I get the pbit file that this dashboard is associated with? You can access to the pbit file from the Microsoft Download center. Who can view the dashboards and reports? If you connected to the template app, template app with a group of people? Yes. To enable a group of admins to work together on the same template app, template app? 
The data in the template app currently covers the same set of activity metrics available in the Activity Reports. As reports are added to the activity reports, they will be added to the template app in a future release. How does the data in the template app differ from the data in the usage reports? The underlying data you see in the template app matches the data you see in the activity reports in the Microsoft 365 admin center. The key differences are that in the admin center data is available for the last 7/30/90/180 days while the template app presents data on a monthly basis for up to 12 months. In addition, user level details in the template app are only available for the last complete month for users who were assigned a product license and performed an activity. When should I use the template app and when the usage reports? The Activity Reports are a good starting point to understand usage and adoption of Microsoft 365. The template app combines the Microsoft 365 usage data and your organization’s Active Directory information and enables admins to analyze the data set using the visual analytics capabilities of Power BI. This enables admins to not just visualize and analyze Microsoft 365 usage data, but also slice it by Active Directory properties such as departments, location etc. They can also create custom reports and share the insights within their organization. How often is the data refreshed? When you connect to the template app for the first time, it will automatically populate with your data for the previous 12 months. After that, the template app data will refresh weekly. Customers can choose to modify the refresh schedule if their use of this data demands a different update rhythm. The back-end Microsoft 365 service will refresh data on a daily basis and provides data that is between 5-8 days latent from the current date. The Content date column in each dataset represents the freshness date of the data in the template app. How is an active user defined? The definition of active user is the same as the definition of active user in the activity reports. What SharePoint site collections are included in the SharePoint reports? The current version of the template app includes file activity from SharePoint team sites and SharePoint group sites. Which groups are included in the Microsoft 365 Groups usage report? The current version of the template app includes usage from Outlook groups, Yammer groups, and SharePoint groups. It does not include groups related to Microsoft Teams or Planner. When will an updated version of the template app become available? Major changes to the template app will be released twice a year which may include new reports or new data. Minor changes to the reports may be released on a more frequent basis. Is it possible to integrate the data from the template app into existing solutions? The data in the template app can be retrieved through the Microsoft 365 APIs (in preview). When they ship to production they will be merged within the Microsoft Graph reporting APIs. Are there plans to expand the template app to show usage data from other Microsoft products? This is considered for future improvements. Check the Microsoft 365 Roadmap for updates. How can I pivot by company information in Active Directory? Company information is included one of the Active Directory fields in the template app template app across multiple subscriptions? 
At this time, the template app is for a single subscription, as it is associated with the credentials that was used to initially connect to it. Is it possible to see usage by plan (i.e. E1, E3)? In the template app, usage is represented at the per product level. Data about the various subscriptions that are assigned to users are provided, however it is not possible to correlate user activity to the subscription assigned to user. Is it possible to integrate other data sets into the template app? You can use Power BI Desktop to connect to the Microsoft 365 APIs (in preview) to bring additional data sources to combine with the template app data. For more information see the Customize document. Is it possible to see the "Top Users" reports for a specific timeframe? All user level reports present aggregated data for the previous month. Will the template app be localized? This is currently not on the roadmap. I have a specific question about the data I'm seeing for my organization. Who can I reach out to? You can use the feedback button in the admin center activity overview page, or you can open a support case to get help with the template app. How can partners access the data? If a partner has delegated admin rights, he or she can connect to the template app on behalf of their customer. Can I hide identifiable information such as user, group, and site names in reports? Yes, see Make the collected data anonymous.
https://docs.microsoft.com/en-us/office365/admin/usage-analytics/usage-analytics?cid=kerryherger&view=o365-worldwide
2020-02-17T00:51:06
CC-MAIN-2020-10
1581875141460.64
[array(['../media/office365usage-exec-summary.png?view=o365-worldwide', 'Image of the Microsoft 365 usage executive summary.'], dtype=object) ]
docs.microsoft.com
ShowcaseInfo.json Generation Adding content in SSE is easy. For end users, you can run everything via the GUI. For partners, and SSE authors, you do need to interact with the JSON a bit. Schema The schema is thoroughly documented in the dedicated SSE Schema section. Architecture The beating heart of SSE is ShowcaseInfo.json. This is what contains almost all of the data that is of use to SSE. At the core of nearly every SSE dashboard is either a call to the rest endpoint /services/SSEShowcaseInfo or a Splunk search for | sseanalytics which immediately calls that rest endpoint on the back end. The rest endpoint is defined as such: web.conf: [expose:SSEShowcaseInfo] methods = GET,POST pattern = SSEShowcaseInfo restmap.conf: [script:sseshowcaseinfo] match = /SSEShowcaseInfo script = generateShowcaseInfo.py scripttype = persist handler = generateShowcaseInfo.ShowcaseInfo requireAuthentication = true output_modes = json passPayload = true passHttpHeaders = true passHttpCookies = true When generateShowcaseInfo.py is called, it then uses the URL to pull out configuration parameters, and then walks through the steps below to pull in the core ShowcaseInfo.json, add in any custom content, and enrich it with a bunch of files and lookups. Steps Initialize Connections Initialize the service object that will be used by API calls. It also pulls the URL for splunkd in case the kvstore API is failing and it needs to fallback to a GET to the API directly. (Rare, but sometimes kvstore can be finicky.) Checking Enabled Content Pull all of the contents of essentials_updates.conf via the API (aggregated across all apps). Then use that to pull out which apps are disabled vs enabled. Pulling Updated MITRE ATT&CK We have a kvstore dedicated to storing large JSON files. That is handled via pullJSON, another rest endpoint. pullJSON supports localization (not relevant for MITRE), and has the ability to grab the file if the data isn’t in the kvstore. The overall flow for updating these kvstore items is (as described in Partner Integration - Posting Content Online, which uses a similar method) that the user’s web browser will ask the kvstore when the last MITRE check was. If it’s been over a day, it will then go and look for an updated version of MITRE – if it finds it, it will stash it in the kvstore and pullJSON will render that file instead of whatever is on the file system. Pulling SSE kvstores Simple Pulls The following kvstore collections are pulled in with no modifications: - bookmark - local_search_mappings - data_source_check Custom Logic The following have minimal processing: - custom_content - inputs are cleaned as described in Partner Integration - Security and Data Cleaning - data_inventory_products - the inputs are pulled in and tossed into an array of scores per DSC, so that they can be easily combined at the product level (where there may be multiple DSCs). Also the DSC to productId match is created. Grab Core Files ShowcaseInfo.json ShowcaseInfo.json is all the shipped content in SSE. Search Builders Each of the search builder files are pulled in and tossed into a file. If there were naming conflicts (e.g., “Detect Bad - Live” in two different JSON files) it will silently override, but that shouldn’t happen. 
Other Supporting Files There are a variety of other supporting files that are pulled in for enrichment as well: - data_inventory.json - MITRE ATT&CK and MITRE Pre-ATT&CK – these will actually never be grabbed directly (this code should be rationalized out) as the pullJSON call above will never fail to deliver some MITRE content. Enrich Custom Content Next we add the custom content in to the main ShowcaseInfo as if we grabbed it from the JSON file itself. MITRE Parsing Grab the MITRE configurations and spin them into a variety of objects for usage later. Parsing out MITRE ATT&CK from the JSON is a bit of a pain. The short version of it, for the purposes that SSE cares about: - Tactics are objects of type x-mitre-tactic - Techniques are objects of type attack-pattern - Groups are objects of type intrusion-set All of these are related to each other through objects of type relationship and relationship_type uses. All objects have an ID, which maps to the source_ref and target_ref fields. The implementation of parsing this out in SSE is probably not ideal (it feels like a classic CompSci exam question) but it works and is fast enough. Clear out invalid characters There should never be invalid characters in an ID, but if there are it would break things downstream, so let’s clear them out. Enrichment and Processing A variety of enrichments occur here. It’s frankly easiest just to read the code, as they’re mostly tactical. Enrich with Searches This step looks through all of the search builder JSONs (showcase_simple_search.json etc.). It then goes through all of the Showcases in ShowcaseInfo.json, for each with any examples it goes through each example and looks for it in the list of all the search builder JSONs. if there is a match, it adds that as ShowcaseInfo['summaries']['my_summary_id']['examples'][NUM]['showcase'] = my_search_builder_obj Local Search Mappings This checks each piece of content against the list of savedsearch.conf mappings in local_search_mappings and then sets the search_title to the equivalent search. (Known Limitation: only one search can be provided per piece of SSE content. Customers with multiple searches that are similar to the same object would need to create custom content in SSE to complete the mapping. The UI doesn’t explicitly prevent this behavior.) Data Availability Enrichment Here we take each piece of content and look it up in the rendered Data Availability matrix to add the fields data_available and data_available_numeric. MITRE Enrichment Here we take the MITRE ATT&CK Framework variables (parsed above) and enrich each piece of content with the: - Tactic and Technique Name - Technique Description - Which MITRE Matrix it came from (ATT&CK, Pre-ATT&CK) - Threat Groups - Search Keywords (e.g., mimikatz) Defaulting Missing Fields Many parts of SSE expect certain fields to be present, and while those fields generally are, we don’t want downstream code to fail due to unreliable content configuration. So for simplicity, we have a variety of ways to clean out content, from setting it to an empty string, setting the string “None”, etc. Those are all configured here. Excluding Disabled Channels This will iterate through all of the content in ShowcaseInfo, then check to see if the channel is in the exclusions list ( channel_exclusion[...]) and pull the content if it should be excluded. This can be overridden by supplying ignoreChannelExclusion=true in the request. 
Exporting to sse_content_exported There is a lookup (sse_content_exported) used for mapping savedsearch.conf names to the metadata inside of SSE. Whenever you run ShowcaseInfo it will check to make sure the content in that lookup is correct, and fix it if not. The fields included in that export are listed in the fields = [...] line at the top of this section, and are mirrored in collections.conf and transforms.conf. Optional Minifying A few pages (particularly the main Security Contents page) can request a Mini version of ShowcaseInfo by adding fields=mini to the URL. This strips out all the fields not specified in mini_fields and dramatically reduces the amount of data transferred back. The goal here is to make it so that the user doesn’t have to download 1+ MB over a bad internet connection. (There was at one point a series of bugs that ballooned that to multiple megabytes in size – but while analyzing that bug, the option to shrink down the file volume was added.) As of this writing, the raw JSON downloaded is about 1.2 MB which is automatically compressed by Splunk down to 202 KB of network. Return Data is returned in a JSON format. Error Handling At no point should ShowcaseInfo return an error (this was the case in early versions of the rest endpoint, but that was problematic). Instead, there are two elements added into the SSE output, debug and throwError. throwError will fire if there is a critical error – effectively if an exception is caught on an important step in this process (which is almost all the steps). The debug object will contain a variety of debugging information (particularly: timing checks), and any exceptions will also be added into this debug. From the Security Contents page, the ShowcaseInfo response is added to the window object, so you can open up the JavaScript console and run: console.log("throwError Status", ShowcaseInfo.throwError); console.log("debug Status", ShowcaseInfo.debug); throwError will hopefully be false, and here is an (abridged) output fo the debug status: [ "Stage -5 Time Check:9.05990600586e-06", "Stage -4 Time Check:8.20159912109e-05", "Stage -3 Time Check:9.20295715332e-05", "Stage -4 Time Check:0.000141143798828", { "localecheck": "" }, "Not going cached! Prepare for a long ride.", "Stage -1 Time Check:0.000144004821777", { "channel_exclusion": { "mitrepreattack": false, "custom": false, "Splunk_App_for_Enterprise_Security": false, "mitreattack": false, "Enterprise_Security_Content_Update": false, "Splunk_Security_Essentials": false, "Splunk_User_Behavior_Analytics": false }, "override": false, "msg": "Final Channel Exclusion" }, "Stage 0 Time Check:0.106211185455", "Stage 1 Time Check:1.20219802856", "Stage 2 Time Check:1.27426409721", { "store": "bookmark", "message": "I got a kvstore request" }, "Stage 3 Time Check:1.29301118851", { "store": "local_search_mappings", "message": "I got a kvstore request" }, "... abridged ...", "Stage 25 Time Check:1.62445807457" ] Accessing via JavaScript From a Splunk dashboard, running the following in the browser will get you the resulting object: require(['json!' + $C['SPLUNKD_PATH'] + '/services/SSEShowcaseInfo?bust=' + Math.random()], function(showcase){ window.debug_showcase = showcase; console.log("Got Showcase", showcase); }) Accessing via Splunk Search You can access the output of all of the summaries via the | sseanalytics command. It helpfully breaks out the key fields separately, and automatically converts the inline-multi-value pipe-separated fields into native Splunk multi-value fields. 
You can also ask for the full JSON output, identical to what is shared when doing a direct REST call. | sseanalytics include_json=true | search channel=ButtercupLabs | table id summaries *
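Going back to the JavaScript access pattern shown above, the trimmed-down payload described under Optional Minifying can be requested by adding fields=mini to the same endpoint (sketch; the window variable name is arbitrary):

// Same pattern as the earlier require() call, but asking for the reduced "mini" field set.
require(['json!' + $C['SPLUNKD_PATH'] + '/services/SSEShowcaseInfo?fields=mini&bust=' + Math.random()], function(showcaseMini){
    window.debug_showcase_mini = showcaseMini;
    console.log("Got mini Showcase", showcaseMini);
})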
https://docs.splunksecurityessentials.com/developing/showcaseinfo/
2020-02-17T00:27:23
CC-MAIN-2020-10
1581875141460.64
[]
docs.splunksecurityessentials.com
Indicates the number of redirects which this UnityWebRequest will follow before halting with a “Redirect Limit Exceeded” system error. If you wish to disable redirects altogether, set this property to zero - this UnityWebRequest will then refuse to follow redirects. If a redirect is encountered while redirects are disabled, the request will halt with a “Redirect Limit Exceeded” system error. If you do not wish to limit the number of redirects, you may set this property to any negative number. This is not recommended. If the redirect limit is disabled and the UnityWebRequest encounters a redirect loop, the UnityWebRequest will consume processor time until Abort is called. Default value: 32.
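As a minimal sketch (the URL is a placeholder), disabling redirects for a single request inside a MonoBehaviour coroutine looks like this:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class NoRedirectExample : MonoBehaviour
{
    IEnumerator Start()
    {
        UnityWebRequest request = UnityWebRequest.Get("https://example.com/"); // placeholder URL
        // Refuse to follow redirects: any redirect response halts the request
        // with a "Redirect Limit Exceeded" system error.
        request.redirectLimit = 0;

        yield return request.SendWebRequest();

        if (request.isNetworkError || request.isHttpError)
            Debug.Log("Request halted: " + request.error);
        else
            Debug.Log("Downloaded " + request.downloadHandler.text.Length + " characters");
    }
}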
https://docs.unity3d.com/kr/2019.1/ScriptReference/Networking.UnityWebRequest-redirectLimit.html
2020-02-17T02:09:48
CC-MAIN-2020-10
1581875141460.64
[]
docs.unity3d.com
Copy the anura-{version}.jar to {home}/appserver/lib and update your {home}/appserver/conf/custom.properties. The simplest possible configuration looks like this: anura.license={delivered by brix} anura.1.name=first anura.1.userId=123 Then restart your appserver. To test if it works, point your browser to - there you should see some basic JSON about the endpoint: { "version": 2.5, "status": 200, "data": { "apiVersion": 2.5, "celumVersion": "5.12.4", "userId": 123, ... } } If it doesn't work, check the appserver.log for ch.brix.anura-messages complaining about invalid configuration properties or license problems. To be configured in {home}/appserver/conf/custom.properties type: String, required: yes, default: - The license for this plugin (determines validity, expiration date and how many endpoints you can add). This is delivered by brix after you supply the customer's name (xxx in {home}/appserver/conf/xxx.license.dat) type: String, required: no, default: 0 0/5 * * * ? How often to run the cleanup system tasks. This removes old cached entries and temporary files. type: String, required: no, default: 0 0 0 * * ? How often to run the cache flush task. This removes everything from all caches and is mostly there to force refreshes (when you want to push an update right away). type: String, required: yes, default: - The name (whatever you like) where this endpoint can be reached at. This defines the URL that you'll use in the front end, e.g. anura.1.name=foo ->, and is the way that anura tells the different endpoints apart. type: long, required: yes, default: - The ID of the CELUM user that is used to evaluate all permissions of this endpoint. If you can't see something in Anura, make sure the user that you've specified has the appropriate permissions in CELUM. type: long, required: no, default: 3600 Practically everything gets cached internally for performance reasons. This setting defines how long an individual cache entry lasts (in seconds) until it is either reloaded automatically (node structures) or evicted (everything else).or nginx proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;) and turn on the remote IP valve CELUM's Appserver (in /opt/celum/appserver/conf/server.xml, add <Valve className="org.apache.catalina.valves.RemoteIpValve" />to the <Host>-section. type: boolean, required: no, default: false Force even single file downloads to be delivered in a ZIP type: boolean, required: no, default: false Relay redirects internally instead of forwarding. Usually requests to preview files etc. are relayed to the storage server with a HTTP redirect. This can cause problems when you employ caching reverse proxies, load balancers or use CDN services for that purpose (such as cloudflare, akamai etc.). When set to true, redirects will be followed internally, so every response appears to be coming from the appserver (and hence can be cached). Cache poisons: callbackand depending on your use case also token(this will bypass authentications that use token!). You should exclude these GET parameters when generating the cache ID (as these may change but represent the same content). If you want to cache binary files for a different amount of time than the JSON responses (and you can't use the MIME type), the relevant parameters of asset.do are: thmb, previewand download.. type: List of long (comma separated), required: no, default: 103 Node type IDs to show in the detail view's path information. 
This basically behaves like the "keyword paths" feature in CELUM 4 and mostly exists for backwards compatibility. type: boolean, required: no, default: true Stops recursing on tree calls when there are no children, because there's no point in showing loads of empty children. Only works when the asset_count argument is provided (otherwise it doesn't count at all). type: String, required: no, default: link Name of the relation to consider being a "linked asset", see {home}/appserver/spring/asset-relations.xml -> property name="your-id". type: int, required: no, default: 5 The maximum number of linked assets to load. type: bean name, required: no, default: - Custom bean that loads additional information in node referencing infofields, such as information fields on the referenced node. Must implement the NodeInfoProvider interface: package ch.brix.anura.provider; public interface NodeInfoProvider { String getInfo(InformationFieldValue<?, ?> info, Node node, Locale locale); } type: bean name, required: no, default: anuraDefaultDownloader Custom bean to handle asset downloads differently (e.g. prompt for login or reason etc). Known implementations: anura.loginFilterDownloader.requireLogin- list of download format IDs to require a login for, e.g. 1,2,3. Since 2.6.8 you can also pass -1to require a login for every format. anura.1.mailInputReason- also require a reason (text area) to be filled out. Default is false. anura.1.mailInputCss- Custom CSS to add to the mail input, e.g. .something {foo: bar;}. Must implement the DownloadHandler interface: package ch.brix.anura.download; public interface DownloadHandler { void download(AnuraRequest request, AnuraResponse response, AnuraConfig config, List<DownloadRequest> download, Locale locale) throws Exception; } This property is subject to license restrictions. If you've configured it but it doesn't do anything, check the appserver.logfor license errors - the property might have been dropped. type: bean name, required: no, default: - Custom bean to resolve where the video file comes from (e.g. some CDN) when using the built-in video player. Must implement the VideoStreamProvider interface. If none is provided (and no videoPlayerProvider is configured), the video preview from the storage server is used. Known implementations: anura.infofieldStreamProvider.sourceInfofieldId- ID of the information field to read the file URL part from, e.g. 101 anura.infofieldStreamProvider.prefix- Static prefix for the URL, e.g. anura.infofieldStreamProvider.suffix- Static suffix for the URL, e.g. .mp4 Must implement the VideoStreamProvider interface: package ch.brix.anura.provider; public interface VideoStreamProvider { String getVideoUrl(AssetId assetId, AnuraConfig config); } type: bean name, required: no, default: - Custom bean to resolve what video player URL (e.g. vimeo) to use. When configured, this will take precedence over the videoStreamProvider. Known implementations: anura.infofieldPlayerProvider.sourceInfofieldId- ID of the information field to read the player URL part from, e.g. 101 anura.infofieldPlayerProvider.prefix- Static prefix for the URL, e.g. anura.infofieldPlayerProvider.suffix- Static suffix for the URL, e.g. 
?autoplay=true anura.1.videoProviderStageHandlerId=123 Must implement the VideoPlayerProvider interface: package ch.brix.anura.provider; public interface VideoPlayerProvider { String getVideoPlayer(AssetId assetId, AnuraConfig config, boolean autoplay); } type: string, required: no, default: - Performs arbitrary search & replace on generated video URLs (only works together with videoReplaceString). Use Case: Some players pass their player ID (look and feel) in their public URL, but it's always the same. This way you can smiply override this on a per-dispatcher basis. type: string, required: no, default: - Performs arbitrary search & replace on generated video URLs (only works together with videoSearchRegex). type: List of String (comma separated), required: no, default: - Blacklists certain asset properties from being delivered in the asset details response - one of name, asset_type, created, modified, extension, filesize, downloadable, dpi, duration, dimensions, aspect_ratio, colorspace, profile, codec, pages, vector, raster, duration, scanType, frameRate, channel, bitRate, sampleRate, artist, trackTitle, albumTitle, trackNumber, year, genre, original_name type: List of long (comma separated), required: no, default: - Blacklist certain download formats (because of the SDK issue where you can only get download format permissions based on the file extension, rather than based on the actual asset) type: String, required: no, default: - Custom CSS to add to the built-in video player page. type: List of String (comma separated), required: no, default: - Specify alternative preview images if the asset doesn't have one. By default, the generic CELUM placeholder image is used. You can override this by file category, e.g. default=/images/default-dummy.jpg,image=/images/image-dummy.jpg,video=/images/video-dummy.jpg type: List of long (comma separated), required: no, default: - IDs of additional asset information fields to send with every asset response. Less is more, but it may be useful for asset markers etc. type: List of long (comma separated), required: no, default: - IDs of additional node information fields to send with every tree response. Less is more, but it may be useful for asset markers etc. type: bean name, required: no, default: - Custom bean to do verify access tokens sent via the token parameter. This is useful when Anura is running in a login-protected CMS environment. In that case the CMS would create/store tokens (here's an example using JWT) and pass it as &token=.... In your custom verifier you'd then go ask the CMS if it knows a given token (and cache that for a bit!). Known implementations: anura.1.staticTokenproperty (built-in). The corresponding JS for the front-end would be $.anura.tokenProvider = function () {return 'sameStaticTokenAsInTheProperty';}; anura.1.statusCodeTokenEndpoint(e.g.) with the provided token. HTTP 200 indicates success. since 2.8 anura-login-token.jarextension and an interceptor on the front-end. Note that the flow is slightly different in this case: Custom implementations must implement the CustomSearchProvider interface: package ch.brix.anura.verifier; import ch.brix.anura.model.AnuraConfig; public interface TokenVerifier { boolean isValid(AnuraConfig config, String token); // go ask the CMS (or whatever) if a given token is valid (and please cache it!) } This property is subject to license restrictions. If you've configured it but it doesn't do anything, check the appserver.logfor license errors - the property might have been dropped. 
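As an illustration of the TokenVerifier contract quoted above, a custom implementation could look roughly like the sketch below. The class name and endpoint URL are made up, the caching that the interface comment recommends is omitted, and the Spring bean wiring needed to expose it via the tokenVerifier property is not shown.

package ch.brix.anura.verifier;

import ch.brix.anura.model.AnuraConfig;

import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Minimal sketch of a custom TokenVerifier; not a drop-in implementation.
public class ExampleCmsTokenVerifier implements TokenVerifier {

    private static final String VERIFY_URL = "https://cms.example.com/api/verify-token"; // placeholder

    @Override
    public boolean isValid(AnuraConfig config, String token) {
        if (token == null || token.isEmpty()) {
            return false;
        }
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(
                    VERIFY_URL + "?token=" + URLEncoder.encode(token, "UTF-8")).openConnection();
            conn.setRequestMethod("GET");
            // Mirrors the statusCodeTokenEndpoint behaviour described above:
            // HTTP 200 means the CMS accepts the token.
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false;
        }
    }
}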
type: bean name, required: no, default: -, since: 2.7.0 Custom search parser based on the optional search_custom parameter. Through this mechanism you can implement your own business logic and return an SDK AssetFilter that will then be applied in addition to all other search parameters (AND). There are no known public implementations. Custom implementations must implement the CustomSearchProvider interface: package ch.brix.anura.provider.search; import ch.brix.anura.model.AnuraConfig; public interface CustomSearchProvider { AssetFilter parseCustomSearch(String request, AnuraConfig anuraConfig, Locale locale); } type: String, required: no, default: -, since: 2.7.3 Enforces an additional, global view restriction in every response, expressed through a search filter. This filter gets applied to whatever else is happening through an AND operation. This way the asset either doesn't show up in a list response, or it triggers a not found (404) in direct queries. The syntax is the same as the API's search, but without the search_ prefix. Example: anura.1.globalFilter=infofield=137,null,now - filters on the information field with the ID 137 (a date field) and looks for a date between whenever and now. This simulates the Asset Availability feature, but with a custom information field. type: bean name, required: no, default: - What method to use to provide faceted search. The only known implementation is anuraSolrSearchRequestHandler, which requires you to have setup your SOLR server accordingly. type: boolean, required: no, default: true Enables on-the-fly ZIP generation as soon as the first download is ready. Turn this of to wait for all conversions to have finished instead (as in 2.7 and before). since 2.8 Released 2015-10-16 Released 2016-05-16 Released 2016-07-27 Released 2017-01-03 Released 2017-04-05 Released 2017-07-18 Released 2018-04-05 Released 2019-06-19 anura.loginFilterDownloader.requireLogin +instead of , tokenVerifierand added the staticTokenVerifier customSearchProvider Released 2020-01-13 anura.1.globalFilter anuraStatusCodeTokenVerifieras a new TokenVerifier anura.1.zipStreamingEnabled
https://docs.brix.ch/de/anura/backend
2020-02-17T00:06:29
CC-MAIN-2020-10
1581875141460.64
[]
docs.brix.ch
An Easy Way to Create OLE DB Connection Strings This article may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist. To maintain the flow of the article, we've left these URLs in the text, but disabled the links. An Easy Way to Create OLE DB Connection Strings This content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release.by Sean Kavanagh One of the aspects of ADO that can be confusing at first is the use of the Connection object. Building connection strings correctly, with all the proper OLE DB provider settings, can be cumbersome. Fortunately, this process can be simplified greatly. In this article, we'll show you how to use the Data Link API to set up data source connections easily. Overview You can think of the Data Link API as a front-end to your connection string. The result is a Microsoft Data Link file, which has a UDL extension. You can then extract the connection string from the UDL and incorporate it directly into your application, or you can simply reference the UDL when opening connections. Create a UDL file The way you create a UDL file will depend on your Windows installation. Chances are you'll be able to create the file directly from the shortcut menu. First, right-click on the Windows desktop, or in the folder where you want to create the file. Next, select New from the shortcut menu. If you see a choice for Microsoft Data Link, select it--that's all there is to it. If you're using Windows 2000, you most likely won't find a Microsoft Data Link choice. If you don't see Microsoft Data Link listed in the shortcut menu, you can still create a UDL file, but you'll have to do a little more work. Creating a UDL when Microsoft Data Link isn't an available choice For an alternative way of creating a UDL file, right-click on the Windows desktop and choose New/Text Document from the shortcut menu. Next, ensure that Windows is set up to display file extensions. If you don't see the TXT extension included in the new text file's name, you'll need to change your Windows configuration. To do so, open any folder and choose View/Folder Options from the menu bar. Then, click on the View tab, clear the Hide File Extensions For Known File Types check box, and click OK. Next, right-click on the text file you just created, select Rename from the shortcut menu, and change the TXT extension to UDL. Finally, press [Enter] and click Yes when Windows asks if you're sure you want to change the extension. Building the connection At this point, double-click on the UDL file you created to launch the Data Link API. The first property sheet of the Data Link Properties dialog box is where you select the type of OLE DB provider you need to connect to the database. Simply choose from the list of available providers, as shown in Figure A, and click Next. Figure A: The first sheet of the Data Link Properties dialog box displays a list of the available providers. .gif) The Connection property sheet of the Data Link Properties dialog box is context sensitive--it only shows options that are relevant to the provider selected on the previous sheet. For instance, Figure B shows an example of a UDL configured for the Northwind database, using the Jet 4.0 provider. Figure B: This Connection property sheet shows the settings available when setting up a Jet 4.0 connection. 
In contrast, a connection to a SQL Server database requires more information, as shown in Figure C.
Figure C: The options available on the Connection property sheet depend on the provider selected on the first property sheet.
Regardless of the provider you're using, one of the nicest aspects of working with the Data Link API is that you can easily verify that you've configured everything correctly. Simply click the Test Connection button, and you'll receive either a positive confirmation or an appropriate error message.
The Advanced property sheet of the Data Link Properties dialog box allows you to specify additional network and access permission settings. As you can see in Figure D, which shows a UDL file configured for the Jet 4.0 provider, only the options relevant to the selected provider will be enabled.
Figure D: The Advanced property sheet lets you specify access permissions and network settings, if the options are relevant to the selected provider.
The final sheet of the Data Link Properties dialog box, shown in Figure E, provides a summary of the initialization properties for the database connection you've set up. You can edit any of the properties directly from this sheet by double-clicking on the property name or selecting the name and clicking the Edit Value button.
Figure E: You can edit any initialization properties from this property sheet.
Working with the finished UDL file
Once you've configured the connection settings, click OK to close the Data Link Properties dialog box. You can either reference this data link file from an application, or you can copy the connection string that it generates directly into your Access application.
Referencing the UDL file
To open a connection (we've named it cnn) to a database using a UDL file, use the syntax:
cnn.ConnectionString = "File Name=path\filename.udl;"
cnn.Open
One drawback to this technique is that you need to ensure that the UDL file is distributed with your application.
Copying the connection string into your code
To get the connection string created by the data link file, rename the UDL file so that it has a TXT extension. Then, open the text file, preferably with WordPad. (For some reason, although Notepad seemed to open the files correctly in our Windows NT environment, opening the UDL with Notepad in Windows 95/98 produced strange results.) Once you've opened the file, you'll find text resembling what was generated by our previous SQL Server example:
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Password=imnottelling; _
Persist Security Info=True;User ID=zorroadmin; _
Initial Catalog=Emissions_Data.MDF; _
Data Source=zorro;Extended Properties="Trusted_ Connection=yes"; _
Network Library=DBMSSOCN
You can now simply copy and paste the OLE DB connection string into your application, using the syntax
cnn.Open "connection string"
Conclusion
If you don't already, you'll probably soon find that you have to start using ADO in your Access applications. Learning a new language is a difficult task, and constructing complex OLE DB connection strings doesn't make it any easier. Fortunately, you can use the Data Link API to simplify the process significantly.
Copyright © 2000 Element K Content LLC. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of Element K Content LLC is prohibited.
Element K is a service mark of Element K LLC.
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-xp/aa140076(v=office.10)?redirectedfrom=MSDN
2020-02-17T02:24:05
CC-MAIN-2020-10
1581875141460.64
[array(['images%5caa140076.ima0086a(en-us,office.10', '[ Figure A ] [ Figure A ]'], dtype=object) array(['images%5caa140076.ima0086b(en-us,office.10', '[ Figure B ] [ Figure B ]'], dtype=object) array(['images%5caa140076.ima0086c(en-us,office.10', '[ Figure C ] [ Figure C ]'], dtype=object) array(['images%5caa140076.ima0086d(en-us,office.10', '[ Figure D ] [ Figure D ]'], dtype=object) array(['images%5caa140076.ima0086e(en-us,office.10', '[ Figure E ] [ Figure E ]'], dtype=object) ]
docs.microsoft.com
An Evas object is the most basic visual entity used in Evas. Everything, be it a single line or a complex list of UI components, is an Evas object. Primitive objects are the base upon which to build a complex interface: rectangles, lines, polygons, images, textblocks, and texts. There is only one function to deal with rectangle objects. However, the rectangle is manipulated using the generic Evas object functions. The Evas rectangle serves a number of key functions when working on Evas programs. A common requirement of Evas programs is to have a solid color background, which can be accomplished with the following code. Evas_Object *bg = evas_object_rectangle_add(evas_canvas); /* Set the rectangle's red, green, blue and opacity levels */ /* Opaque white background */ evas_object_color_set(bg, 255, 255, 255, 255); /* Covers full canvas */ evas_object_resize(bg, WIDTH, HEIGHT); evas_object_show(bg); When debugging visual issues with Evas programs, the rectangle is a useful tool. The rectangle’s simplicity means that it is easier to pinpoint issues with it than with more complex objects. A common technique to use when writing an Evas program and not getting the desired visual result is to replace an object with a solid color rectangle and seeing how it interacts with the other elements. This often allows us to notice clipping, parenting, or positioning issues. Once the issues are identified and corrected, the rectangle can be replaced with the original object, and in all likelihood any remaining issues are specific to that object’s type. Clipping serves 2 main functions: An Evas text object shows a basic single-line single-style text. Evas_Object *text = evas_object_text_add(evas_canvas); evas_object_text_text_set(text, "some text"); evas_object_color_set(text, 127, 0, 0, 127); evas_object_show(text); To set the text, use the evas_object_text_text_set() function. You can get the current text with the evas_object_text_text_get() function. To manage the text style: To set the font, use the evas_object_text_font_set() function with the following parameters: text: The text object font: The font name you want to use size: The font size you want to use. To query the current font, use the evas_object_text_font_get() function. To set the text style, use the evas_object_text_style_set() function with the style as the second parameter. To query the current style, use the evas_object_text_style_get() function. If the text does not fit, make an ellipsis on it by using the evas_object_text_ellipsis_set() function. The (float) value specifies, which part of the text is shown. 0.0: The beginning is shown and the end trimmed. 1.0: The beginning is trimmed and the end shown. -1.0: Ellipsis is disabled. To query the current ellipsis value, use the evas_object_text_ellipsis_get() function. When the text style is set to glow, set the glow color using the evas_object_text_glow_color_set(), function, where the second, third, fourth, and fifth parameters are respectively the red, green, blue, and alpha values. The effect is placed at a short distance from the text but not touching it. For glows set right at the text, use the evas_object_text_glow2_color_set() function. To query the current color, use the evas_object_text_glow_color_get() and evas_object_text_glow2_color_get() functions. If the text style is set to display a shadow, use the evas_object_text_shadow_color_set() function, where the second, third, fourth, and fifth parameters are respectively the red, green, blue, and alpha values. 
To query the current color, use the evas_object_text_shadow_color_get() function. If the text style is set to display an outline, use the evas_object_text_outline_color_set() function, where the second, third, fourth, and fifth parameters are respectively the red, green, blue, and alpha values. To query the current color, use the evas_object_text_outline_color_get() function. A smart object is a special Evas object that provides custom functions to handle automatically clipping, hiding, moving, resizing color setting and more on child elements, for the smart object’s user. They can be, for example, a group of objects that move together, or implementations of whole complex UI components, providing some intelligence and extension to simple Evas objects. A container is a smart object that holds children Evas objects in a specific fashion. A table is a smart object that packs children using a tabular layout. In the following example, a non-homogeneous table is added to the canvas with its padding set to 0. 4 different colored rectangles are added with different properties. To create a table, use the evas_object_table_add() function. table = evas_object_table_add(evas); evas_object_table_homogeneous_set(table, EVAS_OBJECT_TABLE_HOMOGENEOUS_NONE); evas_object_table_padding_set(table, 0, 0); evas_object_resize(table, WIDTH, HEIGHT); evas_object_show(table); rect = evas_object_rectangle_add(evas); evas_object_color_set(rect, 255, 0, 0, 255); evas_object_size_hint_min_set(rect, 100, 50); evas_object_show(rect); evas_object_table_pack(table, rect, 1, 1, 2, 1); rect = evas_object_rectangle_add(d.evas); evas_object_color_set(rect, 0, 255, 0, 255); evas_object_size_hint_min_set(rect, 50, 100); evas_object_show(rect); evas_object_table_pack(table, rect, 1, 2, 1, 2); rect = evas_object_rectangle_add(d.evas); evas_object_color_set(rect, 0, 0, 255, 255); evas_object_size_hint_min_set(rect, 50, 50); evas_object_show(rect); evas_object_table_pack(table, rect, 2, 2, 1, 1); rect = evas_object_rectangle_add(d.evas); evas_object_color_set(rect, 255, 255, 0, 255); evas_object_size_hint_min_set(rect, 50, 50); evas_object_show(rect); evas_object_table_pack(table, rect, 2, 3, 1, 1); To set the table layout, use the evas_object_table_homogeneous_set() function. The following values can be homogeneous: EVAS_OBJECT_TABLE_HOMOGENEOUS_NONE: This default value has columns and rows calculated based on hints of individual cells. This is flexible, but much heavier on computations. EVAS_OBJECT_TABLE_HOMOGENEOUS_TABLE: The table size is divided equally among children, filling the whole table area. If the children have a minimum size that is larger than this (including padding), the table overflows and is aligned respecting the alignment hint, possibly overlapping sibling cells. EVAS_OBJECT_TABLE_HOMOGENEOUS_ITEM: The greatest minimum cell size is used: if no element is set to expand, the contents of the table are the minimum size and the bounding box of all the children is aligned relatively to the table object using the evas_object_table_align_get()function. If the table area is too small to hold this minimum bounding box, the objects keep their size and the bounding box overflows the box area, still respecting the alignment. To get the current mode, use the evas_object_table_homogeneous_get()function. The table’s content alignment is set using the evas_object_table_align_set() function, where the second and third parameters ( horizontal and vertical) are floating values. 
To see the current values, use the evas_object_table_align_get() function. To set the padding, use the evas_object_table_padding_set() function. To see the current value, use the evas_object_table_padding_get() function. To see the current column and row count, use the evas_object_table_col_row_size_get() function. A grid is a smart object that packs its children as with a regular grid layout. Grids are added to the canvas with the evas_object_grid_add() function. To change a grid’s virtual resolution, use the evas_object_grid_size_set() function, and to get the current value, use the evas_object_grid_size_get() function. To add an object, use the evas_object_grid_pack() function, where the third, fourth, fifth, and sixth parameters are the following: x: Virtual x coordinate of the child y: Virtual y coordinate of the child w: Virtual width of the child h: Virtual height of the child A box is a simple container that sets its children objects linearly. To add a box to your canvas, use the evas_object_box_add() function. To add a child to the box, use the following functions: evas_object_box_append(): The child is appended. evas_object_box_insert_after(): The child is added after the reference item. evas_object_box_insert_before(): The child is added before the reference item. evas_object_box_insert_at(): The child is added at the specified position. To set the alignment, use the evas_object_box_align_set() function with the following values. horizontal: 0.0 means aligned to the left, 1.0 means to the right vertical: 0.0 means aligned to the top, 1.0 means to the bottom Evas has the following predefined box layouts available: evas_object_box_layout_horizontal() evas_object_box_layout_vertical() evas_object_box_layout_homogeneous_horizontal() evas_object_box_layout_homogeneous_vertical() evas_object_box_layout_homogeneous_max_size_horizontal() evas_object_box_layout_homogeneous_max_size_vertical() evas_object_box_layout_flow_horizontal() evas_object_box_layout_flow_vertical() evas_object_box_layout_stack() Using Evas, you can create and manipulate image objects. Evas supports image loaders of various formats as plug-in modules. The image formats that Evas supports include bmp, edj, gif, ico, jpeg, pmaps, png, psd, svg, tga, tiff, wbmp, webp, and xpm. Figure: Evas image loader Evas has over 70 image object functions. The following functions are discussed in this document: Evas_Object *evas_object_image_add(Evas *e); void evas_object_image_file_set(Evas_Object *obj, const char *file, const char *key); void evas_object_image_fill_set(Evas_Object *obj, int x, int y, int w, int h); void evas_object_image_filled_set(Evas *e, Eina_Bool setting); Evas_Object *evas_object_image_filled_add(Evas *e); void evas_object_image_smooth_scale_set(Evas_Object *obj, Eina_Bool smoothscale); void evas_object_image_load_size_set(Evas_Object *obj, int w, int h); void evas_object_image_data_set(Evas_Object *obj, void *data); void *evas_object_image_data_get(const Evas_Object *obj, Eina_Bool for_writing); void evas_object_image_size_set(Evas_Object *obj, int w, int h); void evas_object_image_data_update_add(Evas_Object *obj, int x, int y, int w, int h); Eina_Bool evas_object_image_save(const Evas_Object *obj, const char *file, const char *key, const char *flags); A common use case of an image object is to set a file as the image data source. In the following example, the main() function creates an image object and displays it on a window. The image object size is 300x300 and the source image resolution is 100x127. 
The image is scaled into 300 by 300 to fill the image object area using the evas_object_image_fill_set() function. #include <Elementary.h> int main(int argc, char **argv) { elm_init(argc, argv); /* Create a window object */ Evas_Object *win = elm_win_add(NULL, "main", ELM_WIN_BASIC); evas_object_resize(win, 400, 400); evas_object_show(win); /* Return Evas handle from window */ Evas *e = evas_object_evas_get(win); /* Create an image object */ Evas_Object *img = evas_object_image_add(e); /* Set a source file to fetch pixel data */ evas_object_image_file_set(img, "./logo.png", NULL); /* Set the size and position of the image on the image object area */ evas_object_image_fill_set(img, 0, 0, 300, 300); evas_object_move(img, 50, 50); evas_object_resize(img, 300, 300); evas_object_show(img); elm_run(); elm_shutdown(); return 0; } Figure: Image object display To manage image objects in Evas: Limiting visibility Evas always supports the image file type it was compiled with. Check your software packager for the information and use the evas_object_image_extension_can_load_get() function. Create the image object. Set a source file on it, so that the object knows where to fetch the image data. Define how to fill the image object area with the given pixel data. You can use a sub-region of the original image, or have it tiled repeatedly on the image object. img = evas_object_image_add(canvas); evas_object_image_file_set(img, "path/to/img", NULL); evas_object_image_fill_set(img, 0, 0, w, h); If the entire source image is to be displayed on the image object, stretched to the destination size, use the evas_object_image_filled_set() function helper that you can use instead of the evas_object_image_fill_set() function: evas_object_image_filled_set(img, EINA_TRUE); Scaling images Resizing image objects scales the source images to the image object size, if the source images are set to fill the object area using the evas_object_image_filled_set() function. Control the aspect ratio of an image for different sizes with functions to load images scaled up or down in memory. Evas has a scale cache, which caches scaled versions of images used often. You can also have Evas rescale the images smoothly, however, that is computationally expensive. You can decide how to fill the image object area with the given image pixel data by setting the position, width, and height of the image using the evas_object_image_fill_set() function. Without setting this information, the image is not displayed. If the size of the image is bigger than the image object area, only a sub-region of the original image is displayed. If the image is smaller than the area, images are tiled repeatedly to fill the object area. Figure: Image scaling The evas_object_image_filled_set() function scales the image to fit the object area. Resizing the image object automatically triggers an internal call to the evas_object_image_fill_set() function. The evas_object_image_filled_add() function creates a new image object that automatically scales its bound image to the object area. This is a helper function around the evas_object_image_add() and evas_object_image_filled_set() functions. A scaled image’s quality can vary depending on the scaling algorithm. Smooth scaling improves the image quality in the process of size reducing or enlarging. Evas runs its own smooth scaling algorithm by default and provides an API for you to disable the function. The algorithm is implemented using the SIMD (Single Instruction Multiple Data) vectorization for software rendering. 
It is optimized for Intel and ARM CPU through the MMX and NEON instruction sets respectively. There is a trade-off between image smoothness and rendering performance. The load gets bigger as the image gets bigger. Users can avoid such scaling overload by using the same size of the image object and the source image. In the following example, 2 image objects are created to show the effects of smooth scaling. The one with smooth scaling applied appears softer on the screen. ); evas_object_image_file_set(img, "./logo.png", NULL); evas_object_move(img, 0, 0); evas_object_resize(img, 200, 200); evas_object_show(img); /* Create another image object */ Evas_Object *img2 = evas_object_image_filled_add(e); evas_object_image_file_set(img2, "./logo.png", NULL); /* Disable smooth scaling */ evas_object_image_smooth_scale_set(img2, EINA_FALSE); evas_object_move(img2, 200, 0); evas_object_resize(img2, 200, 200); evas_object_show(img2); elm_run(); elm_shutdown(); return 0; } Figure: Smooth scaling effects Evas caches scaled image data and reuses them. You can save the memory by loading the image in the scaled size to the memory at the beginning. This option is available only for jpeg format at the moment. The following example shows how to load the image in the scaled size. ); /* Load the image scaled into the object size before evas_object_image_file_set() is called */ evas_object_image_load_size_set(img, 300, 300); evas_object_image_file_set(img, "./logo.png", NULL); evas_object_move(img, 50, 50); evas_object_resize(img, 300, 300); evas_object_show(img); elm_run(); elm_shutdown(); return 0; } You can set raw data to the image object manually using the evas_object_image_data_set() function instead of setting an image file as the data source. The image data must be in raw data form. For a 200x200 sized image with alpha channel enabled (32 bits per pixel), the size of the image data is 14000 (=200*200*4) bytes. Image objects fetch metadata such as width or height from the header of the image files. Since the raw data does not have the metadata, you must set the size of the image using the evas_object_image_size_set() function. The evas_object_image_data_get() function returns the data pointer of an image object and requires a parameter to determine whether the data is modified or not. If you pass EINA_TRUE for for_writing, Evas updates the image pixels in the next rendering cycle. The evas_object_image_data_update_add() helps to mark the updated area for rendering efficiency. The following example code and figure show how to specify the area to update: evas_object_image_data_update_add(image, 100, 100, 50, 50); evas_object_image_data_update_add(image, 180, 100, 50, 50); evas_object_image_data_update_add(image, 85, 200, 160, 80); Figure: Partial image update The following code creates an image object and sets a source file on it. Then it implements the blur effect to the pixel data and saves them using the evas_object_image_save() function. 
#include <Elementary.h> void image_blur(Evas_Object *img) { unsigned char *img_src = evas_object_image_data_get(img, EINA_TRUE); int w; int h; evas_object_image_size_get(img, &w, &h); int blur_size = 4; int x; int y; int xx; int yy; for (y = 0; y < h; y++) { for (x = 0; x < w; x++) { int avg_color[3] = {0, 0, 0}; int blur_pixel_cnt = 0; for (xx = x; (xx < x + blur_size) && (xx < w); xx++) { for (yy = y; (yy < y + blur_size) && (yy < h); yy++) { int idx = (yy * w * 4) + (xx * 4); avg_color[0] += img_src[idx + 0]; avg_color[1] += img_src[idx + 1]; avg_color[2] += img_src[idx + 2]; ++blur_pixel_cnt; } } avg_color[0] /= blur_pixel_cnt; avg_color[1] /= blur_pixel_cnt; avg_color[2] /= blur_pixel_cnt; for (xx = x; (xx < x + blur_size) && (xx < w); xx++) { for (yy = y; (yy < y + blur_size) && (yy < h); yy++) { int idx = (yy * w * 4) + (xx * 4); img_src[idx + 0] = avg_color[0]; img_src[idx + 1] = avg_color[1]; img_src[idx + 2] = avg_color[2]; } } } } evas_object_image_data_update_add(img, 0, 0, w, h); } int main(int argc, char **argv) { elm_init(argc, argv); Evas_Object *win = elm_win_add(NULL, "main", ELM_WIN_BASIC); evas_object_resize(win, 200, 200); evas_object_show(win); Evas *e = evas_object_evas_get(win); Evas_Object *img = evas_object_image_filled_add(e); evas_object_image_file_set(img, "./logo.png", NULL); evas_object_resize(img, 200, 200); evas_object_show(img); image_blur(img); evas_object_image_save(img, "logo2.png", NULL, "quality=100 compress=8"); elm_run(); elm_shutdown(); return 0; } Figure: Blur effect In image viewer applications, you can display an image in full size. The navigation to the adjacent images on your album must be fluid and fast. Thus, while displaying a given image, the program can load the next and previous image in the background to be able to immediately repaint the screen with a new image. Evas addresses this issue with image preloading: prev = evas_object_image_filled_add(canvas); evas_object_image_file_set(prev, "/path/to/prev", NULL); evas_object_image_preload(prev, EINA_FALSE); next = evas_object_image_filled_add(canvas); evas_object_image_file_set(next, "/path/to/next", NULL); evas_object_image_preload(next, EINA_FALSE); If you are loading an image which is too big, set its loading size smaller. Load a scaled down version of the image in the memory if that is the size you are displaying (this can speed up the loading considerably): img = evas_object_image_filled_add(canvas); evas_object_image_file_set(img, "/path/to/next", NULL); evas_object_image_load_scale_down_set(img, 2); /* Loading image size is img/2 */ If you know you are showing a sub-set of the image pixels, you can avoid loading the complementary data: evas_object_image_load_region_set(img, x, y, w, h); With Evas, you can specify image margins to be treated as borders. The margins then maintain their aspects when the image is resized. This makes setting frames around other UI objects easier. The following figure illustrates the border behavior when the image is resized. Figure: Borders in Evas Unlike basic text objects, a textblock handles complex text, managing multiple styles and multiline text based on HTML-like tags. However, these extra features are heavier on memory and processing cost. The textblock objects is an object that shows big chunks of text. Textblock supports many features, including text formatting, automatic and manual text alignment, embedding items (icons, for example). Textblock has 3 important parts: the text paragraphs, the format nodes and the cursors. 
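Before going into the markup and style details below, here is a minimal sketch of how those pieces fit together in code. It assumes the standard Evas textblock functions (evas_object_textblock_add(), evas_textblock_style_new() and related calls) and an existing canvas handle named evas; treat it as an illustration rather than a complete program.

/* Create a textblock object on the canvas */
Evas_Object *tb = evas_object_textblock_add(evas);

/* Define a named "DEFAULT" style; the format keys are described below */
Evas_Textblock_Style *st = evas_textblock_style_new();
evas_textblock_style_set(st, "DEFAULT='font=Sans font_size=16 color=#000000 wrap=word'");
evas_object_textblock_style_set(tb, st);

/* Set marked-up text; tags such as <br/> and <font_size=...> are parsed by the textblock */
evas_object_textblock_text_markup_set(tb, "Plain line<br/><font_size=30>Bigger line</font_size>");

evas_object_move(tb, 0, 0);
evas_object_resize(tb, 400, 100);
evas_object_show(tb);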
To set markup to format text, use for example <font_size=50>Big!</font_size>. Set more than one style directive in one tag with <font_size=50 color=#F00>Big and Red!</font_size>. Note that </font_size> is used although the format also included color. This is because the first format determines the matching closing tag’s name. You can use anonymous tags, such as <font_size=30>Big</>, which pop any type of format, but it is advisable to use the named alternatives instead. Textblock supports the following formats: font: Font description in fontconfig such as format, for example "Sans:style=Italic:lang=hi" or "Serif:style=Bold". font_weight: Overrides the weight defined in font. For example, font_weight=Boldis the same as font=:style=Bold. The supported weights are normal, thin, ultralight, light, book, medium, semibold, bold, ultrabold, black, and extrablack. font_style: Overrides the style defined in font. For example, font_style=Italicis the same as font=:style=Italic. The supported styles are normal, oblique, and italic. font_width: Overrides the width defined in font. For example, font_width=Condensedis the same as font=:style=Condensed. The supported widths are normal, ultracondensed, extracondensed, condensed, semicondensed, semiexpanded, expanded, extraexpanded, and ultraexpanded. lang: Overrides the language defined in font. For example, lang=heis the same as font=:lang=he. font_fallbacks: A comma delimited list of fonts to try if finding the main font fails. font_size: The font size in points. font_source: The source of the font, for example an eet file. color: The text color in one of the following formats: #RRGGBB, #RRGGBBAA, #RGB, or #RGBA. underline_color: The color in one of the following formats: #RRGGBB, #RRGGBBAA, #RGB, or #RGBA. underline2_color: The color in one of the following formats: #RRGGBB, #RRGGBBAA, #RGB, or #RGBA. outline_color: The color in one of the following formats: #RRGGBB, #RRGGBBAA, #RGB, or #RGBA. shadow_color: The color in one of the following formats: #RRGGBB, #RRGGBBAA, #RGB, or #RGBA. glow_color: The color in one of the following formats: #RRGGBB, #RRGGBBAA, #RGB, or #RGBA. glow2_color: The color in one of the following formats: #RRGGBB, #RRGGBBAA, #RGB, or #RGBA. strikethrough_color: The color in one of the following formats: #RRGGBB, #RRGGBBAA, #RGB, or #RGBA. align: The text alignment in one of the following formats: auto(according to text direction), left, right, center, or middle, which take a value between 0.0 and 1.0 or a value between 0% to 100%. valign: The vertical text alignment in one of the following formats: top, bottom, middle, center, baseline, or base, which take a value between 0.0 and 1.0 or a value between 0% to 100%. wrap: The text wrap in one of the following formats: word, char, mixed, or none. left_margin: Either resetor a pixel value indicating the margin. right_margin: Either resetor a pixel value indicating the margin. underline: The style of underlining in one of the following formats: on, off, single, or double. strikethrough: The style of text that is either onor off. backing_color: The background color in one of the following formats: #RRGGBB, #RRGGBBAA, #RGB, or #RGBA. backing: The background color enabled or disabled: onor off. style: The style of the text in one of the following formats: off, none, plain, shadow, outline, soft_outline, outline_shadow, outline_soft_shadow, glow, far_shadow, soft_shadow, or far_soft_shadow. 
The direction is selected by adding bottom_right, bottom, bottom_left, left, top_left, top, top_right, or right. For example, style=shadow,bottom_right. tabstops: The pixel value for tab width. linesize: To force a line size in pixels. linerelsize: Either a floating point value or a percentage indicating the wanted size of the line relative to the calculated size. linegap: To force a line gap in pixels. linerelgap: Either a floating point value or a percentage indicating the wanted size of the line relative to the calculated size. item: Creates an empty space that is filled by an upper layer. Use size, abssize, or relsizeto define the item’s size, and an optional vsize = full/ascent to define the item’s position in the line. linefill: Either a float value or percentage indicating how much to fill the line. ellipsis: A value between 0.0 and 1.0 to indicate the type of ellipsis, or -1.0 to indicate that an ellipsis is not wanted. password: Either onor off, this is used to specifically turn replacing chars with the password mode (that is, replacement char) on and off. An Evas object can be clipped – in other words, its visible area is restricted with the clipper object. It is often necessary to show only parts of an object, and while it may be possible to create an object that corresponds only to the part that must be shown (which is not always possible), it is usually easier to use a clipper. A clipper is a rectangle that defines what is visible and what is not. To do this, create a solid white rectangle (by default, so you need not use the evas_object_color_set() function) and give it a position and size of what is wanted visible. The following code example shows how to show the center half of my_evas_object: Evas_Object *clipper = evas_object_rectangle_add(evas_canvas); evas_object_move(clipper, my_evas_object_x / 4, my_evas_object_y / 4); evas_object_resize(clipper, my_evas_object_width / 2, my_evas_object_height / 2); evas_object_clip_set(my_evas_object, clipper); evas_object_show(clipper); A solid white clipper does not produce a change in the color of the clipped object, only hides what is outside the clipper’s area. Changing the color of an object is accomplished by using a colored clipper. Clippers with color function by multiplying the colors of the clipped object. The following code shows how to remove all the red from an object. Evas_Object *clipper = evas_object_rectangle_add(evas); evas_object_move(clipper, my_evas_object_x, my_evas_object_y); evas_object_resize(clipper, my_evas_object_width, my_evas_object_height); evas_object_color_set(clipper, 0, 255, 255, 255); evas_object_clip_set(obj, clipper); evas_object_show(clipper); Evas allows different transformations to be applied to all kinds of objects. These are applied by means of UV mapping. With UV mapping, 1 map points in the source object to a 3D space positioning at target. This allows rotation, perspective, scale, and many other effects depending on the map that is used. A map consists of a set of points, but currently only 4 are supported. Each of these points contains a set of canvas coordinates x and y that are used to alter the geometry of the mapped object, and a z coordinate that indicates the depth of that point. This last coordinate does not normally affect the map, but is used by several of the utility functions to calculate the right position of the point given other parameters. The coordinates for each point are set with the evas_map_point_coord_set() function. 
In the following example, there is a rectangle whose coordinates are (100, 100) and (300, 300).

Evas_Object *object = evas_object_rectangle_add(evas);
evas_object_move(object, 100, 100);
evas_object_resize(object, 200, 200);

Evas_Map *map = evas_map_new(4);
evas_map_point_coord_set(map, 0, 100, 100, 0);
evas_map_point_coord_set(map, 1, 300, 100, 0);
evas_map_point_coord_set(map, 2, 300, 300, 0);
evas_map_point_coord_set(map, 3, 100, 300, 0);

To ease the process, you can use the evas_map_util_points_populate_from_geometry() function, where the map coordinates are set to the given rectangle and the last parameter is the Z coordinate:

evas_map_util_points_populate_from_geometry(map, 100, 100, 200, 200, 0);

You can also use the evas_map_util_points_populate_from_object() function:

Evas_Object *object = evas_object_rectangle_add(evas);
evas_object_move(object, 100, 100);
evas_object_resize(object, 200, 200);

Evas_Map *map = evas_map_new(4);
evas_map_util_points_populate_from_object(map, object);

You can also use evas_map_util_points_populate_from_object_full(), where the last parameter is the Z coordinate:

evas_map_util_points_populate_from_object_full(map, object, 0);

Several effects are applied to an object by setting each point of the map to the right coordinates. The following example creates a simulated perspective:

evas_map_point_coord_set(map, 0, 300, 100, 0);
evas_map_point_coord_set(map, 1, 450, 120, 0);
evas_map_point_coord_set(map, 2, 450, 260, 0);
evas_map_point_coord_set(map, 3, 300, 300, 0);

The Z coordinate is not used when setting points by hand, and thus its value is not important.

Regardless of the specific way you create a map, to apply it to a specific object, use the following functions:

evas_object_map_set(object, map);
evas_object_map_enable_set(object, EINA_TRUE);

Evas provides utility functions for common transformations:

- evas_map_util_rotate(): This function performs a rotation of angle degrees around the center point with the coordinates (cx, cy).
- evas_map_util_zoom(): This function performs a zoomx and zoomy zoom in the X and Y directions respectively, with the center point at the coordinates (cx, cy).

For example, the following code rotates an object around its center.

int x;
int y;
int w;
int h;
evas_object_geometry_get(object, &x, &y, &w, &h);

Evas_Map *map = evas_map_new(4);
evas_map_util_points_populate_from_object(map, object);
evas_map_util_rotate(map, 45, x + (w / 2), y + (h / 2));
evas_object_map_set(object, map);
evas_object_map_enable_set(object, EINA_TRUE);
evas_map_free(map);

The following code rotates an object around the center of the window.

int w;
int h;
evas_output_size_get(evas, &w, &h);

Evas_Map *map = evas_map_new(4);
evas_map_util_points_populate_from_object(map, object);
evas_map_util_rotate(map, 45, w / 2, h / 2);
evas_object_map_set(object, map);
evas_object_map_enable_set(object, EINA_TRUE);
evas_map_free(map);

Evas provides utility functions for 3D transformations. To make a 3D rotation, use the evas_map_util_3d_rotate() function. With this code, you can set the Z coordinate of the rotation center, and the angles to rotate through around all axes. Rotating in the 3D space does not look natural. A more natural look is achieved by adding perspective to the transformation, which is done with the evas_map_util_3d_perspective() function on the map after its position has been set. Use the following parameters:

- px and py specify the "infinite distance" point in the 3D conversion, where all lines converge to.
- z0 specifies the Z value at which there is a 1:1 mapping between spatial coordinates and screen coordinates: any points on this Z value do not have their X and Y coordinates modified in the transform, while those further away (Z value higher) shrink into the distance, and those less than this value expand.
- focal determines the "focal length" of the camera: this is the distance in reality between the camera lens plane (the rendering results are undefined at or closer than this) and the z0 value; this function allows for some "depth" control.

Each point in a map can be set to a color, which is multiplied with the object's own color and linearly interpolated in between adjacent points. To do this, use evas_map_point_color_set(map, index, r, g, b, a) for each point of the map, or evas_map_util_points_color_set() to set every point to the same color.

To add lighting for the objects, which is useful with 3D transforms, use the evas_map_util_3d_lighting() function with the following parameters:

- lightx, lighty and lightz are the local light source coordinates;
- lightr, lightg and lightb are the local light source colors;
- ambientr, ambientg and ambientb are the ambient light colors.

Evas sets the color of each point based on the distance to the light source, the angle with which the object is facing the light and the ambient light. The orientation of each point is important. If the map is defined counter-clockwise, the object faces away from you and becomes obscured, since no light reflects from it.

Note: Except as noted, this content is licensed under LGPLv2.1+.
https://docs.tizen.org/application/native/guides/ui/efl/evas-objects
2020-02-17T00:38:44
CC-MAIN-2020-10
1581875141460.64
[]
docs.tizen.org
Playback position in seconds. Use this to read the current playback time or to seek to a new playback time.

See Also: timeSamples variable.

The following example restarts playback of the attached clip when Return is pressed, and logs the current playback time every frame:

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    AudioSource audioSource;

    void Start()
    {
        audioSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Return))
        {
            audioSource.Stop();
            audioSource.Play();
        }

        Debug.Log(audioSource.time);
    }
}
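Because the property is writable, it can also be used to seek. For example, this small addition (not part of the original example) jumps playback to the 5-second mark of the clip:

// Seek 5 seconds into the clip; playback continues from there
audioSource.time = 5.0f;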
https://docs.unity3d.com/kr/2018.4/ScriptReference/AudioSource-time.html
2020-02-17T02:03:17
CC-MAIN-2020-10
1581875141460.64
[]
docs.unity3d.com
This page describes the automated tests for the Murano project.

The Murano project has a separate CI server, which runs tests for all commits and verifies that new code does not break anything. Murano CI uses the OpenStack QA cloud for its testing infrastructure. Anyone can log in to that server using Launchpad credentials. There you can find jobs for each repository: one for murano and another one for murano-dashboard. Other jobs build and test the Murano documentation and perform other useful work to support the Murano CI infrastructure. All jobs are run on a fresh installation of the operating system, and all components are installed on each run.

The Murano project has a web user interface, and all possible user scenarios should be tested. The UI tests live in the murano-dashboard repository. Automated tests for the Murano web UI are written in Python using the Selenium library, which is used to automate web browser interaction from Python. For more information, see the Selenium documentation.

First of all, make sure that all additional components are installed and that the test configuration contains the following settings:

[murano]
horizon_url =
murano_url =
user = ***
password = ***
tenant = ***
keystone_url =

All tests are kept in sanity_check.py and divided into 5 test suites:

- TestSuiteSmoke - verification of Murano panels; checks that they can be opened without errors.
- TestSuiteEnvironment - verification that all operations with an environment finish successfully.
- TestSuiteImage - verification of operations with images.
- TestSuiteFields - verification of custom field validators.
- TestSuitePackages - verification of operations with Murano packages.
- TestSuiteApplications - verification of the Application Catalog page and of the application creation process.

To specify which tests or suites to run, pass test/suite names on the command line:

- to run all tests: nosetests sanity_check.py
- to run a single suite: nosetests sanity_check.py:<test suite name>
- to run a single test: nosetests sanity_check.py:<test suite name>.<test name>

In case of SUCCESS, you should see something like this:

.........................
Ran 34 tests in 1.440s
OK

In case of FAILURE, a folder with screenshots of the last operation of the tests that finished with errors is created. It is located in the muranodashboard/tests/functional folder.

There are also a number of command line options that can be used to control the test execution and generated outputs. For more details about nosetests, try:

$ nosetests -h

All Murano services have tempest-based automated tests, which allow verifying API interfaces and deployment scenarios. The following Python files contain basic test suites for the different Murano components. Murano API tests are run on the devstack gate, and Murano engine tests are run on Murano CI.
https://murano.readthedocs.io/en/stable-kilo/articles/test_docs.html
2020-02-17T00:26:02
CC-MAIN-2020-10
1581875141460.64
[]
murano.readthedocs.io
Example: Launching a Load-Balancing, Autoscaling Environment with Public Instances in a VPC

You can deploy an Elastic Beanstalk application in a load-balancing, autoscaling environment in a single public subnet. Use this configuration if you have a single public subnet without any private resources associated with your Amazon EC2 instances. In this configuration, Elastic Beanstalk assigns public IP addresses to the Amazon EC2 instances so that each can directly access the Internet through the VPC Internet gateway. You do not need to create a network address translation (NAT) configuration in your VPC.

To deploy an Elastic Beanstalk application in a load-balancing, autoscaling environment in a single public subnet, you need the ID of a public subnet in your VPC. You can find the subnet ID by clicking Subnets in the Amazon VPC console.

Deploying with the AWS Toolkits, AWS CLI, EB CLI, or Elastic Beanstalk

When you create your configuration file with your option settings, you need to specify the following configuration options in the aws:ec2:vpc namespace:

- VPCId: The identifier of your VPC.
- Subnets: The identifier(s) of the subnet(s) to launch the instances in. You can specify multiple identifiers by separating them with a comma.
- AssociatePublicIpAddress: Specifies whether to launch instances in your VPC with public IP addresses. Instances with public IP addresses do not require a NAT device to communicate with the Internet. You must set the value to true if you want to include your load balancer and instances in a single public subnet.

The following is an example of the option settings you could set when deploying your Elastic Beanstalk application inside a VPC.

option_settings:
  aws:ec2:vpc:
    VPCId: "vpc_id"
    Subnets: "instance_subnet, etc"
    AssociatePublicIpAddress: "true"
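If you prefer to supply these settings outside a configuration file (for example, to a command-line deployment tool), the same three options can be expressed as an option-settings JSON document using the Namespace/OptionName/Value structure. The identifiers below are placeholders; check the tool's reference documentation for the exact way to pass the file.

[
  {
    "Namespace": "aws:ec2:vpc",
    "OptionName": "VPCId",
    "Value": "vpc-xxxxxxxx"
  },
  {
    "Namespace": "aws:ec2:vpc",
    "OptionName": "Subnets",
    "Value": "subnet-xxxxxxxx"
  },
  {
    "Namespace": "aws:ec2:vpc",
    "OptionName": "AssociatePublicIpAddress",
    "Value": "true"
  }
]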
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc-no-nat.html
2017-10-17T04:23:56
CC-MAIN-2017-43
1508187820700.4
[array(['images/aeb-vpc-apip-topo.png', 'Elastic Beanstalk and VPC Topology'], dtype=object) array(['images/vpc-one-subnet-pub.png', 'Subnet ID for your VPC'], dtype=object) ]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.

Deletes all the files in this directory as well as this directory.

Namespace: Amazon.S3.IO
Assembly: AWSSDK.S3.dll
Version: 3.x.y.z

public virtual void Delete()

.NET Framework: Supported in: 4.5, 4.0, 3.5
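A minimal usage sketch is shown below (it is not part of the original reference page). The bucket name and key prefix are placeholders, and the constructor overload taking a client, bucket, and key is assumed from the Amazon.S3.IO namespace; note that deleting a directory this way removes every object under that prefix.

using Amazon.S3;
using Amazon.S3.IO;

// Credentials and region are resolved from the environment/profile
var client = new AmazonS3Client();

// Treat the "logs/2017" prefix in the bucket as a directory
var directory = new S3DirectoryInfo(client, "my-example-bucket", "logs/2017");

// Deletes all the files in this directory as well as the directory itself
directory.Delete();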
http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/S3/MS3IOS3DirectoryInfoDelete.html
2017-10-17T04:19:42
CC-MAIN-2017-43
1508187820700.4
[]
docs.aws.amazon.com
C++ Coding Guidelines

Naming

- Class names are CamelCase, beginning with a capital letter.
- Constant names are ALL_CAPITALS with underscores separating words.
- Method, method parameter and local variable names are lowercase with underscores separating words, e.g. method_name.
- Member variable names are lowercase with underscores separating words and also begin with an underscore, e.g. _member_variable.

C++ Feature Use

- We assume a C++11-compliant compiler, so C++11 features are allowed with some exceptions.
- No use of auto type declarations.
- No use of using namespace ...; if the namespace is particularly lengthy, consider using namespace aliasing (e.g. namespace po = boost::program_options).
- Avoid using Boost (or similar) libraries that return special library-specific pointers, to minimize "infection" of the code-base. Consider using the C++11 equivalents instead.

Formatting

C++ code contributed to Clearwater should be formatted according to the following conventions:

- Braces on a separate line from function definitions, if statements, etc.
- Two-space indentation
- Pointer operators attached to the variable type (i.e. int* foo rather than int *foo)
- if blocks must be surrounded by braces

For example:

if (x)
  int *foo = do_something();

will be replaced with

if (x)
{
  int* foo = do_something();
}

It's possible to fix up some code automatically using astyle, with the options astyle --style=ansi -s2 -M80 -O -G -k1 -j -o. This fixes up a lot of the most common errors (brace style, indentation, overly long lines), but isn't perfect - there are some cases where breaking the rules makes the code clearer, and some edge cases (e.g. around switch statements and casts on multiple lines) where our style doesn't always match astyle's.

Commenting

Where it is necessary to document the interface of classes, this should be done with Doxygen-style comments - three slashes and appropriate @param and @returns tags.

/// Apply first AS (if any) to initial request.
//
// See 3GPP TS 23.218, especially s5.2 and s6, for an overview of how
// this works, and 3GPP TS 24.229 s5.4.3.2 and s5.4.3.3 for
// step-by-step details.
//
// @Returns whether processing should stop, continue, or skip to the end.
AsChainLink::Disposition AsChainLink::on_initial_request(CallServices* call_services,
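The short illustrative class below is not taken from the Clearwater codebase; it simply pulls the naming, formatting, and commenting rules above together in one place.

/// Tracks how many requests a node has processed.
class RequestCounter
{
public:
  static const int MAX_REQUESTS = 1000;

  /// Increments the counter by the supplied amount.
  ///
  /// @param increment_by Number of requests to add (ignored if not positive).
  void increment(int increment_by)
  {
    if (increment_by > 0)
    {
      _request_count += increment_by;
    }
  }

  /// @returns the number of requests counted so far.
  int get_count() const
  {
    return _request_count;
  }

private:
  int _request_count = 0;
};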
http://docs.projectclearwater.org/en/stable/Clearwater_CPP_Coding_Guidelines.html
2017-10-17T03:45:09
CC-MAIN-2017-43
1508187820700.4
[]
docs.projectclearwater.org
Creating and discovering plugins¶ Often when creating a Python application or library you’ll want the ability to provide customizations or extra features via plugins. Because Python packages can be separately distributed, your application or library may want to automatically discover all of the plugins available. There are three major approaches to doing automatic plugin discovery: Using naming convention¶ If all of the plugins for your application follow the same naming convention, you can use pkgutil.iter_modules() to discover all of the top-level modules that match the naming convention. For example, Flask uses the naming convention flask_{plugin_name}. If you wanted to automatically discover all of the Flask plugins installed: import importlib import pkgutil flask_plugins = { name: importlib.import_module(name) for finder, name, ispkg in pkgutil.iter_modules() if name.startswith('flask_') } If you had both the Flask-SQLAlchemy and Flask-Talisman plugins installed then flask_plugins would be: { 'flask_sqlachemy': <module: 'flask_sqlalchemy'>, 'flask_talisman': <module: 'flask_talisman'>, } Using naming convention for plugins also allows you to query the Python Package Index’s simple API for all packages that conform to your naming convention. Using namespace packages¶ Namespace packages can be used to provide a convention for where to place plugins and also provides a way to perform discovery. For example, if you make the sub-package myapp.plugins a namespace package then other distributions can provide modules and packages to that namespace. Once installed, you can use pkgutil.iter_modules() to discover all modules and packages installed under that namespace: import importlib import pkgutil import myapp.plugins def iter_namespace(ns_pkg): # Specifying the second argument (prefix) to iter_modules makes the # returned name an absolute name instead of a relative one. This allows # import_module to work without having to do additional modification to # the name. return pkgutil.iter_modules(ns_pkg.__path__, ns_pkg.__name__ + ".") myapp_plugins = { name: importlib.import_module(name) for finder, name, ispkg in iter_namespace(myapp.plugins) } Specifying myapp.plugins.__path__ to iter_modules() causes it to only look for the modules directly under that namespace. For example, if you have installed distributions that provide the modules myapp.plugin.a and myapp.plugin.b then myapp_plugins in this case would be: { 'a': <module: 'myapp.plugins.a'>, 'b': <module: 'myapp.plugins.b'>, } This sample uses a sub-package as the namespace package ( myapp.plugin), but it’s also possible to use a top-level package for this purpose (such as myapp_plugins). How to pick the namespace to use is a matter of preference, but it’s not recommended to make your project’s main top-level package ( myapp in this case) a namespace package for the purpose of plugins, as one bad plugin could cause the entire namespace to break which would in turn make your project unimportable. For the “namespace sub-package” approach to work, the plugin packages must omit the __init__.py for your top-level package directory ( myapp in this case) and include the namespace-package style __init__.py in the namespace sub-package directory ( myapp/plugins). This also means that plugins will need to explicitly pass a list of packages to setup()’s packages argument instead of using setuptools.find_packages(). Warning Namespace packages are a complex feature and there are several different ways to create them. 
It’s highly recommended to read the Packaging namespace packages documentation and clearly document which approach is preferred for plugins to your project. Using package metadata¶ Setuptools provides special support for plugins. By providing the entry_points argument to setup() in setup.py plugins can register themselves for discovery. For example if you have a package named myapp-plugin-a and it includes in its setup.py: setup( ... entry_points={'myapp.plugins': 'a = myapp_plugin_a'}, ... ) Then you can discover and load all of the registered entry points by using pkg_resources.iter_entry_points(): import pkg_resources plugins = { entry_point.name: entry_point.load() for entry_point in pkg_resources.iter_entry_points('myapp.plugins') } In this example, plugins would be : { 'a': <module: 'myapp_plugin_a'>, } Note The entry_point specification in setup.py is fairly flexible and has a lot of options. It’s recommended to read over the entire section on entry points.
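Whichever discovery approach you use, the host application still needs a convention for what each plugin exposes once it has been imported or loaded. As a purely hypothetical sketch, if every plugin module is expected to provide a register(app) function, the discovered modules could be wired in like this:

def load_plugins(app, plugins):
    """Call the (hypothetical) register() hook on every discovered plugin.

    `plugins` is a dict of name -> module/object, as built in the examples above.
    """
    for name, plugin in plugins.items():
        register = getattr(plugin, "register", None)
        if callable(register):
            register(app)
        else:
            print("Plugin {!r} has no register() hook; skipping".format(name))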
http://python-packaging-user-guide.readthedocs.io/guides/creating-and-discovering-plugins/
2017-10-17T04:00:40
CC-MAIN-2017-43
1508187820700.4
[]
python-packaging-user-guide.readthedocs.io
Clearwater Automatic Clustering and Configuration Sharing

Clearwater has a feature that allows nodes in a deployment to automatically form the correct clusters and share configuration with each other. This makes deployments much easier to manage. For example:

- It is easy to add new nodes to an existing deployment. The new nodes will automatically join the correct clusters according to their node type, without any loss of service. The nodes will also learn the majority of their config from the nodes already in the deployment.
- Similarly, removing nodes from a deployment is straightforward. The leaving nodes will leave their clusters without impacting service.
- It makes it much easier to modify configuration that is shared across all nodes in the deployment.

This feature uses etcd as a decentralized data store, a clearwater-cluster-manager service to handle automatic clustering, and a clearwater-config-manager to handle configuration sharing.

Etcd masters and proxies

Clearwater nodes can run either as an etcd master or an etcd proxy. When deploying a node, you can choose whether it acts as a master or proxy by filling in either the etcd_cluster or etcd_proxy config option in /etc/clearwater/local_config (see the configuration options reference for more details). There are some restrictions on which nodes can be masters or proxies:

- There must always be at least 3 etcd masters in the cluster
- The first node to be deployed in a site must be an etcd master

The automated and manual install instructions will both create a deployment with all nodes running as etcd masters.
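As an illustration only (the authoritative file format is covered by the configuration options reference), the relevant lines of /etc/clearwater/local_config might look like one of the following, with placeholder IP addresses standing in for the existing etcd masters:

# On a node that should join the etcd cluster as a master:
etcd_cluster=10.0.0.1,10.0.0.2,10.0.0.3

# Or, on a node that should run as an etcd proxy instead:
etcd_proxy=10.0.0.1,10.0.0.2,10.0.0.3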
http://docs.projectclearwater.org/en/stable/Automatic_Clustering_Config_Sharing.html
2017-10-17T03:46:24
CC-MAIN-2017-43
1508187820700.4
[]
docs.projectclearwater.org
Outputs¶ The model structure used in MOSFiT makes it ammenable to producing outputs from models that need not be fit against any particular transient. In this section we walk through how the user can extract various data products. Light curve options¶ By default, MOSFiT will only compute model observations at the times a particular transient was observed using the instrument for which it was observed at those times. If a transient is sparsely sampled, this will likely result in a choppy light curve with no prediction for intra-observation magnitudes/fluxes. Smooth light curves¶ A smooth output light curve can be produced using the -S option, which when passed no argument returns the light curve with every instrument’s predicted observation at all times. If given an argument (e.g. -S 100), MOSFiT will return every instrument’s predicted observation at all times plus an additional \(S\) observations between the first and last observation. Extrapolated light curves¶ If the user wishes to extrapolate beyond the first and last observations, the -E option will extend the predicted observations by \(E\) days both before and after the first/last detections. Predicted observations that were not observed¶ The user may wish to generate light curves for a transient in instruments/bands for which the transient was not observed; this can be accomplished using the --extra-bands, extra-instruments, extra-bandsets, and extra-systems options. For instance, to generate LCs in Hubble’s UVIS filter F218W in the Vega system in addition to the observed bands, the user would enter: mosfit -e LSQ12dlf -m slsn --extra-instruments UVIS --extra-bands F218W --extra-systems Vega Mock light curves in a magnitude-limited survey¶ Generating a light curve from a model in MOSFiT is achieved by simply not passing any event to the code with the -e option. The command below will dump out a default number of parameter draws to a walkers.json file in the products folder: mosfit -m slsn By default, these light curves will be the exact model predictions, they will not account for any observational error. If Gaussian Processes were used (by default they are enabled for all models), the output predictions will include an e_magnitude value that is set by the variance predicted by the GP model; if not, the variance parameter from maximum likelihood is used. If the user wishes to produce mock observations for a given instrument, they should use the -l option, which sets a limiting magnitude and then randomly draws observations based upon the flux error implied by that limiting magnitude (the second argument to -l sets the variance of the limiting magnitude from observation to observation). For example, if the user wishes to generate mock light curves as they might be observed by LSST assuming a limiting magnitude of 23 for all bands, they would execute: mosfit -m slsn -l 23 0.5 --extra-bands u g r i z y --extra-instruments LSST Saving the chain¶ Because the chain can be quite large (a full chain for a model with 15 free parameters, 100 walkers, and 20000 iterations will occupy ~120 MB of disk space), by default MOSFiT does not output the full chain to disk. Doing so is achieved by passing MOSFiT the -c option: mosfit -m slsn -e LSQ12dlf -c Note that the outputted chain includes both the burn-in and post-burn-in phases of the fitting procedure. The position of each walker in the chain as a function of time can be visualized using the included mosfit.ipynb Jupyter notebook. 
Memory can be quite scarce on some systems, and storing the chain in memory can sometimes lead to out of memory errors (it is the dominant user of memory in MOSFiT). This can be mitigated to some extent by automatically thinning the chain if it gets too large with the -M option, where the argument to -M is in MB. Below, we limit the chain to a gigabyte, which should be sufficient for most modern systems: mosfit -m slsn -e LSQ12dlf -M 1000 Arbitrary outputs¶ Internally, MOSFiT is storing the outputs of each module in a single dictionary that is handed down through the execution tree like a hot potato. This dictionary behaves like a list of global variables, and when a model is executed from start to finish, it will be filled with values that were produced by all modules included in that module. The user can dump any of these variables to a supplementary file extras.json by using the -x option, followed by the name of the variable of interest. For instance, if the user is interested in the spectral energy distributions and bolometric luminosities associated with the SLSN model of a transient, they can simply pass the seds and dense_luminosities keys to -x: mosfit -m slsn -x seds dense_luminosities
http://mosfit.readthedocs.io/en/latest/outputs.html
2017-10-17T03:41:32
CC-MAIN-2017-43
1508187820700.4
[]
mosfit.readthedocs.io
Compressing a Stream

Telerik RadZipLibrary can significantly facilitate your efforts in compressing a stream, for example to send it over the internet. The library provides the CompressedStream class, which is designed to compress and decompress streams.

API Overview

The CompressedStream class allows you to compress and decompress a stream. You need to initialize the class using one of the constructor overloads:

CompressedStream(Stream baseStream, StreamOperationMode mode, CompressionSettings settings)

CompressedStream(Stream baseStream, StreamOperationMode mode, CompressionSettings settings, bool useCrc32, EncryptionSettings encryptionSettings)

The parameters accepted by the constructors serve the following functions:

- Stream baseStream: A reference to a stream where the compressed result will be written when compressing data, or the compressed stream that needs to be decompressed when decompressing data.
- StreamOperationMode mode: Specifies the operation mode of the compressed stream – Write for compressing data and Read for decompressing.
- CompressionSettings settings: The settings used for the compression. The compression settings can be of type DeflateSettings, LzmaSettings and StoreSettings. You can read more on the topic in the Compression Settings article.
- bool useCrc32: Indicates whether to use the CRC32 (true) or Adler32 (false) checksum algorithm.
- EncryptionSettings encryptionSettings: Specifies the encryption settings that will be used. If a null value is passed, encryption is not performed. More information on the topic is available in the Protect ZipArchive article.

Compressing a Stream

You can create a compressed stream by initializing a new instance of the CompressedStream class and passing as a parameter the stream in which the compressed data will be written. You need to specify the operation mode as Write and the compression settings that should be used.

[C#] Example 1: Write to compressed stream

using (CompressedStream compressedStream = new CompressedStream(outputStream, StreamOperationMode.Write, new DeflateSettings()))
{
    // write to compressed stream
}

[VB.NET] Example 1: Write to compressed stream

Using compressedStream As New CompressedStream(outputStream, StreamOperationMode.Write, New DeflateSettings())
    ' write to compressed stream
End Using

If you want to compress a specific stream (inputStream), you need to copy it to the compressed stream that you've created.

[C#] Example 2: Write stream to compressed stream

using (CompressedStream compressedStream = new CompressedStream(outputStream, StreamOperationMode.Write, new DeflateSettings()))
{
    inputStream.CopyTo(compressedStream);
    compressedStream.Flush();
}

[VB.NET] Example 2: Write stream to compressed stream

Using compressedStream As New CompressedStream(outputStream, StreamOperationMode.Write, New DeflateSettings())
    inputStream.CopyTo(compressedStream)
    compressedStream.Flush()
End Using

Decompressing a Stream

Decompressing a stream is just as simple as compressing it. All you need to do is create a new instance of the CompressedStream class and pass it the stream from which the compressed data will be extracted, operation mode Read, and the compression settings that need to be used.

[C#] Example 3: Decompress stream

using (CompressedStream compressedStream = new CompressedStream(inputStream, StreamOperationMode.Read, new DeflateSettings()))
{
    compressedStream.CopyTo(outputStream);
}

[VB.NET] Example 3: Decompress stream

Using compressedStream As New CompressedStream(inputStream, StreamOperationMode.Read, New DeflateSettings())
    compressedStream.CopyTo(outputStream)
End Using

CompressedStream Properties

CompressedStream derives from the Stream class and therefore it supports all its properties. In addition, it exposes a set of properties that provide further information about the compressed stream:

- BaseStream: Property of type Stream, which obtains the stream that is compressed.
- Checksum: Numeric value representing the checksum of the compressed stream.
- CompressedSize: The size of the compressed stream.
- Length: The uncompressed size of the stream.
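To show how the pieces fit together end to end, the following sketch compresses a byte buffer into a MemoryStream and then decompresses it again, using only the constructor and properties described above. It assumes the Telerik.Windows.Zip and System.IO namespaces are imported; treat it as an illustration rather than production code.

byte[] original = System.Text.Encoding.UTF8.GetBytes("Some data worth compressing...");

// Compress into an in-memory stream
using (MemoryStream compressed = new MemoryStream())
{
    using (CompressedStream compressedStream = new CompressedStream(compressed, StreamOperationMode.Write, new DeflateSettings()))
    {
        compressedStream.Write(original, 0, original.Length);
        compressedStream.Flush();
        // CompressedSize and Length report the compressed and uncompressed sizes
    }

    // Rewind and decompress back into another stream
    compressed.Seek(0, SeekOrigin.Begin);
    using (MemoryStream decompressed = new MemoryStream())
    using (CompressedStream decompressStream = new CompressedStream(compressed, StreamOperationMode.Read, new DeflateSettings()))
    {
        decompressStream.CopyTo(decompressed);
    }
}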
https://docs.telerik.com/devtools/document-processing/libraries/radziplibrary/features/compress-stream
2017-10-17T04:12:40
CC-MAIN-2017-43
1508187820700.4
[]
docs.telerik.com
What is CiviEngage? CiviEngage is a Drupal module that is downloaded with CiviCRM when you install it on a Drupal site. When enabled, it enhances CiviCRM's core functions for non-profits focused on community organising and civic engagement work. Because CiviEngage is a Drupal module, it is not available if your site is built with Joomla! or WordPress. Scenario: Canvassing.
https://docs.civicrm.org/user/en/4.6/civic-engagement/what-is-civiengage/
2017-03-23T08:13:20
CC-MAIN-2017-13
1490218186841.66
[]
docs.civicrm.org
Investigating the release of coprecipitated uranium from iron oxides Categories: Environmental geochemistry (general) Abstract The removal of uranium (VI) from zerovalent iron permeable reactive barriers and wetlands can be explained by its association with iron oxides. The long-term stability of immobilized U is yet to be addressed. The present study investigates the remobilization of U(VI) from iron oxides via diverse reaction pathways (acidification, reduction, complex formation). Beforehand, uranium coprecipitation experiments were conducted under various conditions. The addition of various amounts of a pH-shifting agent (pyrite), an iron-complexing agent (EDTA) or an iron(III)-reducing agent (TiCl3) resulted in uranium remobilization, with concentrations above the US EPA maximum contaminant level (MCL = 30 µg/l). This study demonstrates that U(VI) release in nature strongly depends on the conditions and the mechanism of its fixation by geological materials.
http://e-docs.geo-leo.de/handle/11858/00-1735-0000-0001-33E6-A
2017-03-23T08:16:09
CC-MAIN-2017-13
1490218186841.66
[]
e-docs.geo-leo.de
The Docker4Drupal bundle consists of the following containers: Supported Drupal versions: 6, 7, 8. Requirements - Install Docker (Linux, Docker for Mac or Docker for Windows (10+ Pro)) - For Linux, additionally install docker compose Must know before you start - To make sure you don't lose your MariaDB data, DO NOT use docker-compose down (Docker will destroy volumes); instead use docker-compose stop. Alternatively, you can specify a manual volume for /var/lib/mysql (see compose file); this way your data will always persist - To avoid potential problems with permissions between your host and containers, please follow these instructions - For macOS users: Out of the box, Docker for Mac has poor performance on macOS. However, there's a workaround based on the docker-sync project; read the instructions here Usage Feel free to adjust volumes and ports in the compose file for your convenience. - Download the docker-compose.yml file from the docker4drupal repository and put it in your Drupal project codebase directory. This directory will be mounted to the PHP and Nginx containers - Depending on your Drupal version, make sure you're using the correct tags (versions) of the Nginx and PHP images - Make sure you have the same database credentials in your settings.php file and the MariaDB service definition in the compose file - Optional: import an existing database - Optional: add additional services (Varnish, Apache Solr, Memcached, Node.js) by uncommenting the corresponding lines in the compose file - Optional: configure domains - Run containers: docker-compose up -d - That's it! Your Drupal website should be up and running. You can stop containers by executing: docker-compose stop Also, read how to access containers and how to get logs Status We're actively working on these instructions and containers. More options will be added soon. If you have a feature request or found a bug, please submit an issue on GitHub or join us on Slack. We update containers from time to time by releasing new image tags. License This project is licensed under the MIT open source license.
http://docs.docker4drupal.org/en/latest/
2017-03-23T08:07:44
CC-MAIN-2017-13
1490218186841.66
[]
docs.docker4drupal.org
Varnish container Integration Drupal 7 A blog post on how to spin up a container with Redis cache and use it as the default cache storage is coming. Customization See the list of environment variables available for customization at wodby/drupal-varnish.
http://docs.docker4drupal.org/en/latest/containers/varnish/
2017-03-23T08:08:45
CC-MAIN-2017-13
1490218186841.66
[]
docs.docker4drupal.org
Getting Started From BaseX Documentation This page is one of the Main Sections of the documentation. It gives a quick introduction on how to start, run, and use BaseX. Overview - First Steps - Startup: How to get BaseX running - Command-Line Options - User Interfaces - Graphical User Interface (see available Shortcuts) - Database Server: The client/server architecture - Standalone Mode: The command-line interface - Web Application: The HTTP server - DBA: Browser-based database administration - General Info - Databases: How databases are created, populated and deleted - Parsers: How different input formats can be converted to XML - Commands: Full overview of all database commands - Options: Listing of all database options - Integration BaseX: Introduction - BaseX for Dummies. Written by Paul Swennenhuis: Part I, Part I (files), Part II. - BaseX Adventures. Written by Neven Jovanović. - Tutorial. Written by Imed Bouchrika. - XQuery pour les Humanités Numériques. Written by Farid Djaïdja (French). XML and XQuery - XML Technologies. Our university course on XML, XPath, XQuery, XSLT, Validation, Databases, etc. - XQuery Tutorial. From W3 Schools. - XQuery: A Guided Tour. From the book "XQuery from the Experts". - XQuery Summer Institute. Exercises and Answers. BaseX: Talks, Questions - Our Annual User Meetings. Slides and videos. - Our Mailing List. Join and contribute. - GitHub Issue Tracker. Please use our mailing list before entering new issues. - Stack Overflow. Questions on BaseX.
http://docs.basex.org/wiki/Getting_Started
2017-03-23T08:13:18
CC-MAIN-2017-13
1490218186841.66
[]
docs.basex.org
This post illustrates how to enable the automated transmission of meter readings from your Accuenergy AcuREV20XX meters to the Wattics Energy Analytics Dashboard in nine easy steps. For the initial installation of the meter, refer to the Accuenergy AcuREV20XX user manual. STEP 3: Provide network access to your meter Connect your AcuREV20XX meter to your Ethernet LAN with a cat5 cable. STEP 4: Configure the meter time clock Set the meter time clock to your local time via the meter display menu: - Select SETTINGS at the main screen and press OK. - Enter the meter password (default is 0000) and press OK. - Press the left cursor until you reach the TIME SET tab. - Set the local time and press OK. STEP 5: Retrieve your meter IP address The meter IP address can be retrieved via the meter display menu: - Select NET at the main screen and press OK. - Enter the meter password (default is 0000) and press OK. - Press the right cursor until you reach the IP ADDRESS tab. - Write the IP address of your meter down (e.g. 192.168.1.254). STEP 6: Access the meter web interface Open the meter IP address in your web browser, click on the 'Settings' menu tab and submit the password (default password is '12345678'). STEP 7: Configure the network parameters After the password has been entered, a 'Network Parameter Configuration' page is displayed; set the network parameters to match your LAN (DHCP is the most typical setup). STEP 8: Configure the data upload to Wattics Set up the meter's HTTP data push towards the Wattics platform and click on 'Submit' to complete the configuration. Reboot your meter (power down and up again); your meter is now configured to upload new data readings to the Wattics platform every five minutes.
http://docs.wattics.com/2016/02/16/how-to-connect-your-accuenergy-acurev20xx-meter-to-wattics/
2017-03-23T08:16:09
CC-MAIN-2017-13
1490218186841.66
[array(['/wp-content/uploads/2016/02/AcuREV-dash.jpg', None], dtype=object) array(['/wp-content/uploads/2016/02/AcuREV-time.jpg', None], dtype=object) array(['/wp-content/uploads/2016/02/AcuREV-IP.jpg', None], dtype=object) array(['/wp-content/uploads/2016/02/urlupload.png', None], dtype=object) array(['/wp-content/uploads/2016/02/Accuenergy_WebServer.png', None], dtype=object) array(['/wp-content/uploads/2016/02/Accuenergy_password.png', None], dtype=object) array(['/wp-content/uploads/2016/02/Accuenergy_NetworkParameter.png', None], dtype=object) array(['/wp-content/uploads/2016/02/Accuenergy_HTTPpush.png', None], dtype=object) array(['/wp-content/uploads/2016/02/Home.png', None], dtype=object)]
docs.wattics.com
1.3. History of Wireless LANs Although wireless communications are nothing new, Norman Abramson, as a professor at the University of Hawaii, developed what is acknowledged as the first computer network using wireless communications in 1970. Known as ALOHAnet, it enabled wireless communication between a small set of islands and pioneered today's wireless networks, as well as lending concepts to Ethernet development. More information can be found at the ALOHAnet page at Wikipedia. Wireless LANs under the IEEE 802.11 specifications did not become widely used until the introduction of the 802.11b standard in 1999. With more available devices, higher data rates and cheaper hardware, wireless access has now become widespread. The IEEE recently ratified the 802.11n standard. This standard addresses several performance and security issues and is discussed later in this guide.
https://docs.fedoraproject.org/en-US/Fedora/12/html/Wireless_Guide/sect-Wireless_Guide-Introduction-History_Of_Wireless_LANs.html
2017-03-23T08:11:04
CC-MAIN-2017-13
1490218186841.66
[]
docs.fedoraproject.org
Retrieving Your Lost or Forgotten Passwords or Access Keys For security reasons, you cannot retrieve console passwords or the secret access key part of an access key pair after you create it. If you lose one of these, it cannot be recovered and you must have your administrator reset your password or create a new access key for you, as appropriate. If you have the permissions needed to create your own access keys, you can find instructions for creating a new one at Creating, Modifying, and Viewing Access Keys (AWS Management Console). You should follow best practice and periodically change your password and AWS access keys. In AWS, you change access keys by rotating them. This means that you create a new one, configure your application(s) to use the new key, and then delete the old one. You are allowed to have two access key pairs active at the same time for just this reason. For more information, see Rotating Access Keys (AWS CLI, Tools for Windows PowerShell, and AWS API).
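If you script this rotation rather than using the console, the same flow can be expressed with an AWS SDK. The sketch below uses Python and boto3, which this page does not mention, so treat it purely as an illustration; it assumes your current credentials are allowed to manage access keys for the given user name, and in practice you would pause between the steps while applications are reconfigured.

import boto3

def rotate_access_key(user_name, old_access_key_id):
    # Create a new access key, then deactivate and delete the old one.
    iam = boto3.client("iam")

    # Step 1: create the replacement key (at most two keys may exist per user).
    new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
    print("New access key id:", new_key["AccessKeyId"])

    # Step 2: once your applications use the new key, deactivate the old key...
    iam.update_access_key(UserName=user_name, AccessKeyId=old_access_key_id, Status="Inactive")

    # Step 3: ...and finally delete it when nothing depends on it any more.
    iam.delete_access_key(UserName=user_name, AccessKeyId=old_access_key_id)
    return new_key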
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys_retrieve.html
2017-03-23T08:12:24
CC-MAIN-2017-13
1490218186841.66
[]
docs.aws.amazon.com
Billing is the process of collecting usage, aggregating it, applying the required usage and subscription charges, and finally generating invoices for the customers. The billing process also includes receiving and recording payments from the customers. Usage charges are the charges taken from the customers based on service utilization, typically calls and SMS messages. Subscription charges include both recurring and one-time charges paid for: - Plans - bundles covering certain destinations with a bulk of minutes included or at a special rate - Packages - derived from plans, extended by charges for DIDs and PBX seats - DIDs - service charges for virtual numbers - Products - all other items offered by the provider The main functions of the voipswitch billing system can be grouped as below: - Rating & billing: involves rating the product or service usage and producing monthly bills. Charging is real-time, which means that events are taken as they occur and charged immediately. - Payment processing: involves posting of the customer payments to the customer's account. - Pre-pay and post-pay services: This involves supporting both pre-paid and post-paid customer bases. - Multiple currencies: multiple currency support is required if you have multinational customers. Accounts are associated with currencies through tariffs. - Products & services: This involves providing a flexible way to maintain various products and services and sell them individually or in packages. - Discount applications: This involves defining various discount schemes in order to reduce customer churn and attract and grow the customer base. Page: 1.6.1 Prepaid and postpaid Page: 1.6.2 Tariffs Page: 1.6.3 Plans Page: 1.6.4 Packages Page: 1.6.5 Invoicing Page: 1.6.6 Vouchers Page: 1.6.7 Products Page: 1.6.8 Currencies
http://docs.voipswitch.com/display/doc/1.6+Billing
2017-03-23T08:12:06
CC-MAIN-2017-13
1490218186841.66
[]
docs.voipswitch.com
Working Groups All Production Working Groups - Documentation · Bug Squad · JavaScript · Search · Translations How can you help Questions
http://docs.joomla.org/index.php?title=Working_Groups&diff=12797&oldid=6876
2014-04-16T09:39:03
CC-MAIN-2014-15
1397609521558.37
[]
docs.joomla.org
Note that Linux systems must have the libaio library installed. After you have downloaded the EUM installer: Change permissions on the downloaded installer script to make it executable, as follows: Run the script as follows: On Windows: Run the installer: Click Next. In the AppDynamics End User Monitoring Setup screen: With the initial configuration information gathered, the installer completes the setup of the EUM Server. When finished, the EUM Server is running. After installing the EUM Server, you must perform the following additional post-installation tasks: Configure the Events Service properties in the eum.properties file Secure the EUM Server by setting up a custom keystore Update the JVM options in the $APPDYNAMICS_HOME\EUM\eum-processor\bin\eum-processor-launcher.vmoptions file. Update the JVM options in the $APPDYNAMICS_HOME/EUM/eum-processor/bin/eum-processor file. Follow the provisioning instructions for your deployment: Provision EUM Licenses for Multi-Tenant Controllers To configure the Events Service properties in the eum.properties file: Go to the bin directory in the EUM/eum-processor directory. Open the eum.properties file for editing. In the eum.properties file, enter the values as follows: The <eum_key> is the Events Service key that appears as the appdynamics.es.eum.key value in the Administration Console: The configuration should appear similar to the following example: After you save the eum.properties file, restart the EUM Server. To connect the EUM Server with the AppDynamics Controller: On the machine on which you will run the EUM installer, update the configuration with the following: Modify the values of the installation parameters based on your own environment and requirements. In particular, ensure that the directory paths and passwords match your environment. Run the installer with the following command: On Windows, use:
https://docs.appdynamics.com/plugins/viewsource/viewpagesrc.action?pageId=45485753
2020-08-03T12:30:34
CC-MAIN-2020-34
1596439735810.18
[]
docs.appdynamics.com
Import your unique codes Code lists are where your unique codes are stored. You can import your set of codes from a CSV file or by pasting your codes (one on each line). Start by creating your list under the Code Lists tab: Click the import codes button to add your set of unique codes. If you want to use a CSV file to import your codes you can download this example file to see what the file should look like. The important thing is that the file must contain a Code column. If you need to remove any non-delivered codes, rename the list, delete it or export any codes you can do this using the Export/Manage menu:
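If you need to produce that CSV yourself, a small script can generate the codes in the expected format. The Python sketch below is not part of Coupon Carrier; the code length, prefix, and file name are arbitrary choices, the only requirement being the Code column header:

import csv
import secrets
import string

def generate_codes(count, length=8, prefix="SAVE-"):
    # Return `count` unique random codes such as SAVE-7G2KQ9XD.
    alphabet = string.ascii_uppercase + string.digits
    codes = set()
    while len(codes) < count:
        codes.add(prefix + "".join(secrets.choice(alphabet) for _ in range(length)))
    return sorted(codes)

with open("codes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Code"])  # the import expects a Code column
    for code in generate_codes(500):
        writer.writerow([code])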
https://docs.couponcarrier.io/article/25-import-your-unique-codes
2020-08-03T12:44:38
CC-MAIN-2020-34
1596439735810.18
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/59db2e40042863379ddc7f89/file-CPyTY512Gg.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/59db2ecc2c7d3a40f0ed4a69/file-NDyUxDqybU.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/59db2f182c7d3a40f0ed4a6b/file-5bqZFJRIEr.png', None], dtype=object) ]
docs.couponcarrier.io
Insert Calculated Field Dialog - 2 minutes to read The Insert Calculated Field dialog allows end-users to add new calculated fields to the Pivot Table, as well as modify or remove the existing ones. End-users can invoke this dialog by clicking the Calculated Field... item in the Fields, Items, & Sets drop-down menu. Add the Calculations ribbon group to enable this menu (see the Getting Started topic for details on how to provide a Ribbon UI for the SpreadsheetControl). In the Insert Calculated Field dialog's Name and Formula boxes, end-users can specify the name (PivotField.Name) and formula (PivotField.Formula) for a new calculated field. The formula can contain constants and references to other fields in the PivotTable report. End-users can select the desired field in the Fields list and click the Insert Field button to include a field reference in the formula. Clicking the Add button adds the new field to the data area of the PivotTable report. End-users can modify the existing calculated field by selecting the desired field in the Name drop-down list and changing its formula. The calculated field's name cannot be edited from the dialog. Clicking the Delete button removes the selected calculated field. TIP Call one of the PivotCalculatedFieldCollection.Add method overloads to create a calculated field using the Spreadsheet API. Refer to the How to: Create a Calculated Field example for more details.
https://docs.devexpress.com/WindowsForms/118750/controls-and-libraries/spreadsheet/visual-elements/dialogs/insert-calculated-field-dialog
2020-08-03T11:50:03
CC-MAIN-2020-34
1596439735810.18
[array(['/WindowsForms/images/xtraspreadsheet_insertcalculatedfielddialog128674.png', 'XtraSpreadsheet_InsertCalculatedFieldDialog'], dtype=object) array(['/WindowsForms/images/xtraspreadsheet_insertcalculatedfielddialog_ribbon128673.png', 'XtraSpreadsheet_InsertCalculatedFieldDialog_Ribbon'], dtype=object) ]
docs.devexpress.com
Providing initial data for models It's sometimes useful to pre-populate your database with hard-coded data when you're first setting up an app. You can provide initial data with fixtures or migrations. A fixture can be written as JSON, XML or YAML (with PyYAML installed). When loading fixtures, you can also specify a path to a fixture file, which overrides searching the usual directories. See also: Fixtures are also used by the testing framework to help set up a consistent test environment.
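As a concrete illustration of the migrations route, the sketch below is a data migration that inserts one row. The app label blog, the model Question, its question_text field, and the migration file name are made-up names for this example; adapt them to your project:

# blog/migrations/0002_initial_data.py
from django.db import migrations

def add_initial_rows(apps, schema_editor):
    # Use the historical model so the migration keeps working
    # even if the model class changes later.
    Question = apps.get_model("blog", "Question")
    Question.objects.get_or_create(question_text="What's new?")

def remove_initial_rows(apps, schema_editor):
    Question = apps.get_model("blog", "Question")
    Question.objects.filter(question_text="What's new?").delete()

class Migration(migrations.Migration):
    dependencies = [("blog", "0001_initial")]
    operations = [migrations.RunPython(add_initial_rows, remove_initial_rows)]

Fixtures, by contrast, live in a fixtures/ directory inside the app and are loaded on demand with manage.py loaddata <fixturename>.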
https://docs.djangoproject.com/en/1.11/howto/initial-data/
2020-08-03T11:54:55
CC-MAIN-2020-34
1596439735810.18
[]
docs.djangoproject.com
How To Setup Kernel Modules A loadable kernel module is a way to add or remove code from the kernel at runtime. It is an ideal way to install device drivers and other optional kernel functionality without rebuilding the kernel. References: - Kernel Modules Tutorial to walk you through the step-by-step instructions of building an .mdef for your kernel module and updating your system. - Find detailed specifications on adding a kernel module to your target in our Kernel Module Definition .mdef reference. These tutorials will walk you through real-world examples of kernel modules and demonstrate how to set up: - application dependencies - kernel module dependencies - install and remove scripts - bundled binary files with scripts
https://docs.legato.io/latest/howToKMod.html
2020-08-03T11:52:55
CC-MAIN-2020-34
1596439735810.18
[]
docs.legato.io
Session memory – who's this guy named Max and what's he doing with my memory? This one comes courtesy of SQL Server MVP Jonathan Kehayias (blog). In a previous post (Option Trading: Getting the most out of the event session options) I covered the buffer math, which suggests a simple relationship: max memory / # of buffers = buffer size. If it was that simple I wouldn't be writing this post. I'll take "boundary" for 64K, Alex. Note: This test was run on a 2 core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.) As you can see, there are 21 "steps" within this range, and max_memory values below 192 KB fall below the 64K per buffer limit, so they generate an error when you attempt to specify them. Clarification: Max approximates True as memory approaches 64K. - Mike
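To make the boundary behavior concrete, here is a small Python sketch of the arithmetic: a requested size is snapped down to a 64 KB boundary, which is why the observed sizes move in discrete steps rather than tracking every possible max_memory value. The snapping rule below is a simplification for illustration, not the exact internal algorithm:

BOUNDARY_KB = 64

def snap_to_boundary(requested_kb):
    # Largest multiple of 64 KB that fits in the requested size.
    return (requested_kb // BOUNDARY_KB) * BOUNDARY_KB

# Many different requested values collapse onto the same effective size,
# producing the "steps" described above.
for requested in (64, 100, 127, 128, 191, 192, 255, 256):
    print(requested, "KB requested ->", snap_to_boundary(requested), "KB effective")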
https://docs.microsoft.com/en-us/archive/blogs/extended_events/session-memory-whos-this-guy-named-max-and-whats-he-doing-with-my-memory
2020-08-03T12:54:27
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
Add a Dynamic Date in the Footer Bottom Since Ocean Extra plugin version 1.1.8, you can easily add a dynamic date in the footer bottom for your copyright. What is a dynamic date? A dynamic date is a date that changes automatically every year. How do I add a dynamic date in the footer? A shortcode has been created specially for this use. In your Customizer (Appearance > Customize), go to Footer Bottom and enter the following shortcode in the Copyright textarea: [oceanwp_date] This shortcode will display the current year, so: 2020. If you want to display the year of creation of your site in combination with the current year, here is the shortcode to insert: [oceanwp_date year="2015"] This shortcode will display: 2015 - 2020.
https://docs.oceanwp.org/article/367-add-a-dynamic-date-in-the-footer-bottom
2020-08-03T12:27:34
CC-MAIN-2020-34
1596439735810.18
[]
docs.oceanwp.org
The msearch command is designed to be used as a tool for the onboarding and troubleshooting of metrics data and the exploration of metrics indexes. See msearch in the Search Reference manual. Do not use msearch for large-scale searches of metrics data. Such searches will be very slow to complete. Use mstats for large metrics searches instead. Keep the following in mind when you search metrics data: - You cannot use automatic lookups with metrics data. This is because automatic lookups are applied to individual events, whereas metrics are analyzed as an aggregate. - You cannot perform search-time extractions. - You can enrich metrics with the equivalent of custom indexed fields, which are treated as dimensions. - You can use reserved fields such as "source", "sourcetype", or "host" as dimensions. However, when extracted dimension names are reserved names, the name is prefixed with "extracted_" to avoid name collision. For example, if a dimension name is "host", search for "extracted_host" to find it. - Dimensions that start with underscore ( _ ) are not indexed, so they are not searchable. As of release 8.0.0 of the Splunk platform, metrics indexing and search is case sensitive. This means, for example, that metrics search commands treat the following as three distinct metrics: cap.gear, CAP.GEAR, and Cap.Gear. Search examples To list all metric names in all metrics indexes: | mcatalog values(metric_name) WHERE index=* To list all dimensions in all metrics indexes: | mcatalog values(_dims) WHERE index=* To list counts of metric names over 10-second intervals: | mstats count where metric_name=* span=10s BY metric_name To perform a simple count of a dimension: | mstats count where index=mymetricsdata metric_name=aws.ec2.CPUUtilization To calculate an average value of measurements for every 30-second interval: | mstats avg(_value) WHERE index=mymetricdata AND metric_name=aws.ec2.CPUUtilization span=30s You can also display results in a chart. The following example uses a wildcard search and group by: | mstats avg(_value) prestats=t WHERE index=mymetricindex AND metric_name="cpu.*" span=1m by metric_name | timechart avg(_value) as "Avg" span=1m by metric_name This type of search can be used to stack different CPU metrics that add up to 100%. This search shows an example of using an EVAL statement: | mstats avg(_value) as "Avg" WHERE metric_name="memory.free.value" span=5s | eval mem_gb = Avg / 1024 / 1024 / 1024 | timechart max("mem_gb") span=5s Use the REST API to list metrics data You can also use the Metrics Catalog REST API endpoints to enumerate metrics data: - Use the GET /services/catalog/metricstore/metrics endpoint to list metric names. - Use the GET /services/catalog/metricstore/dimensions endpoint to list dimension names. - Use the GET /services/catalog/metricstore/dimensions/{dimension-name}/values endpoint to list values for given dimensions. You can also use filters with these endpoints to limit results by index, dimension, and dimension values. See Metrics Catalog endpoint descriptions in the REST API Reference Manual. This documentation applies to the following versions of Splunk® Enterprise: 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5
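Outside of Splunk Web, the same catalog information can be pulled with any HTTP client. The Python sketch below is an illustration only; the management port 8089, the credentials, and the output_mode parameter are assumptions about a typical default deployment, so adjust them for your environment:

import requests

SPLUNKD = "https://localhost:8089"   # splunkd management port (assumed default)
AUTH = ("admin", "changeme")         # replace with real credentials or token-based auth

def list_metric_names():
    # Call the Metrics Catalog endpoint and return the metric names it reports.
    resp = requests.get(
        SPLUNKD + "/services/catalog/metricstore/metrics",
        params={"output_mode": "json"},
        auth=AUTH,
        verify=False,  # self-signed certificates are common on test instances
    )
    resp.raise_for_status()
    return [entry["name"] for entry in resp.json().get("entry", [])]

for name in list_metric_names():
    print(name)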
https://docs.splunk.com/Documentation/Splunk/8.0.5/Metrics/Search
2020-08-03T13:09:00
CC-MAIN-2020-34
1596439735810.18
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Product Index Sci-Fi stories always need a place to store Sci-Fi objects, and now you can with the Sci-Fi Starship Cargo Hold. Store all your toxic, harmful, radioactive, and dangerous materials here, out of harm's way, where they can be kept and studied. It comes with a transporter room as well, so you can beam cargo in directly. The Sci-Fi Starship Cargo Hold comes with 3 cargo props, three separate transporter beam props, and a forcefield prop for the doors. It also comes with 123 texture, height, normal and roughness maps, plus 2 HDRI planet maps.
http://docs.daz3d.com/doku.php/public/read_me/index/62941/start
2020-08-03T11:50:12
CC-MAIN-2020-34
1596439735810.18
[]
docs.daz3d.com
Product Index Z Photo Booth and Poses is the ultimate fun small environment where your characters can have fun! Various colorful materials have been added to add more variety to the look of the scene. The Poses have been carefully adjusted for Genesis 3 and Genesis 8.
http://docs.daz3d.com/doku.php/public/read_me/index/63733/start
2020-08-03T12:18:14
CC-MAIN-2020-34
1596439735810.18
[]
docs.daz3d.com
This will create a number-only input field with increase/decrease buttons. Object defining HTML attributes for the control's root element. Callback that will be called when the value changes. If set to true, all trailing zeroes after the decimal separator are trimmed. (default: true). Maximum allowed value. (default: 9999) Minimum allowed value. (default: 1) If true, the control width will adjust to the parent width; otherwise it will be a fixed width. (default: false) Number of digits after the decimal dot. (default: 0) If true, then when the value is at max and the increase button is pressed, the value will change to min instead of stopping. (default: false) Value by which the control's value will change when the increase/decrease buttons are pressed. (default: 1) Style class to add on the root element of the control. Units name to be rendered on the right of the field (ex. 'px' for pixels). Initial control value. Set initial visibility of the control. (default: true)
http://docs.site.pro/interfaces/_sizeselector_.sizeselectordef.html
2020-08-03T12:06:09
CC-MAIN-2020-34
1596439735810.18
[]
docs.site.pro
Contributor Tools - IDE - Maven Targets and Plugins - Modifying a Thrift RPC definition - Modifying a Protocol Buffer Message - Usage of ./bin/alluxio IDE We recommend using either Eclipse or IntelliJ IDEA to contribute to Alluxio. If you are using IntelliJ IDEA, you may need to change the Maven profile to 'developer' in order to avoid import errors. You can do this by going to View > Tool Windows > Maven Projects. Maven Targets and Plugins The build uses checkstyle, findbugs, and other plugins. To speed up compilation you may use the command: mvn -T 2C compile -DskipTests -Dmaven.javadoc.skip -Dfindbugs.skip -Dcheckstyle.skip -Dlicense.skip This command will skip many of our checks that are in place to help keep our code neat. We recommend running all checks before committing. You may replace the compile target in the above command with any other valid target to skip checks as well. The targets install, verify, and compile will be most useful. Creating a Local Install If you want to test your changes with a compiled version of the repository, you may generate the jars with the Maven install target. mvn install -DskipTests After the install target executes, you may configure and start a local cluster with the following commands: If you haven't configured or set up a local cluster yet, run the following commands cp conf/alluxio-site.properties.template conf/alluxio-site.properties echo "alluxio.master.hostname=localhost" >> conf/alluxio-site.properties ./bin/alluxio format Once you've run those configuration commands, you can start a local cluster. To run a single unit test, such as AlluxioFSTest#createFileTest, pass it to Maven with the -Dtest option. To test the working of some APIs in an interactive manner, you may leverage the Scala shell, as discussed in this blog. The fuse tests are ignored if the libfuse library is missing. To run those tests, please install the correct libraries mentioned on this page. Modifying a Thrift RPC definition Alluxio uses Thrift 0.9.3 for RPC communication between clients and servers. The .thrift files that define these RPCs live in the source tree; if you change them, the Java code must be regenerated. On macOS you can install Thrift 0.9.3 through Homebrew and force-link it: brew link --force thrift@0.9.3 Then to regenerate the Java code, run bin/alluxio thriftGen Modifying a Protocol Buffer Message Alluxio uses Protocol Buffers 2.5.0 to read and write journal messages. The .proto files are defined in core/protobuf/src/proto/; if you change them, the code must be regenerated. On macOS you can install Protocol Buffers 2.5 through Homebrew and force-link it: brew link --force protobuf@2.5 Then to regenerate the Java code, run bin/alluxio protoGen Usage of ./bin/alluxio Most commands in bin/alluxio are for developers. The following table explains the description and the syntax of each command. In addition, these commands have different prerequisites. The prerequisite for the format, formatWorker, journalCrashTest, readJournal, version, validateConf and validateEnv commands is that you have already built Alluxio (see Build Alluxio Master Branch about how to build Alluxio manually). Further, the prerequisite for the fs, loadufs, logLevel, runTest and runTests commands is that you have a running Alluxio system.
https://docs.alluxio.io/os/user/1.8/en/contributor/Developer-Tools.html
2020-08-03T13:00:52
CC-MAIN-2020-34
1596439735810.18
[]
docs.alluxio.io
As a project manager, you have set up a Tasks List for the team to work on their assignments. Every time a Tasks List item is created or updated, you want to send an alert message to the person to whom the task is assigned. This use case uses the standard SharePoint Tasks List with the Assigned To field defined as a lookup field to the SharePoint User Profile where the e-mail address is stored. Step 1 – Create Alert Event - Select the Tasks List from the drop-down list. - Select the trigger action for when the item is either created or modified. - Select mail frequency for when you want the alert e-mails to go out. - For filter criteria, select All Items in the list. Step 2 – Create Recipients The Assigned To column in the Tasks list is a lookup field into SharePoint’s User Profile Information. We will set Alert Plus to use this field to resolve the recipient e-mail address. That way each e-mail alert will go to the individual that is assigned the task. - Select “Lookup an E-Mail Address in the Alert List” from the first drop-down list. - Select the “Assigned to” column which is the index field to the user profile information. - Select “in SharePoint User Profiles” as the source of e-mail addresses. Step 3 – Create E-Mail Message Create your notification e-mail message.
https://docs.bamboosolutions.com/document/alert_on_a_tasks_list_and_send_alerts_for_every_task_to_the_assigned_person/
2020-08-03T12:21:07
CC-MAIN-2020-34
1596439735810.18
[array(['/wp-content/uploads/2017/06/hw05076.jpg', 'hw05076.jpg'], dtype=object) array(['/wp-content/uploads/2017/06/hw05078.jpg', 'hw05078.jpg'], dtype=object) array(['/wp-content/uploads/2017/06/hw05080.jpg', 'hw05080.jpg'], dtype=object) ]
docs.bamboosolutions.com
How to create and use Autogenerated Code Lists Regular Code Lists are managed manually. This means that you need to generate a list of codes and import these into the list. Once all the codes have been delivered, you need to generate new codes and import them again. Autogenerated Code Lists create codes automatically when needed for supported platforms. For example, if a Code Email configuration is triggered because of a new email subscriber, an autogenerated code list will generate a new code and then send it to the subscriber. The benefit of this is that the autogenerated code lists can apply expiry dates relative to the time each code was created. A popular option is to send out codes to new email subscribers that are only valid for 24-72 hours. We currently support the following e-commerce platforms: - Shopify - WooCommerce - Coupon Carrier - Generate random codes that can be used for our scanner service and mark-as-used button. Read more about this feature. How to create a new autogenerated code list Go to the "Code lists" tab in Coupon Carrier and choose to "Create a new Code List". You can then choose to either create a "Manual list" if you'd like to import your own set of codes or choose "Autogenerated List" to connect to your e-commerce store. Once selected, you'll be given the option to choose which type of platform you want to connect to and then provide the credentials needed to connect Coupon Carrier to the store. Once you've connected to your store and the list has been created, you can configure the code generation depending on your needs. The screenshot below shows the options available for Shopify, but the options for WooCommerce are very similar. Advanced settings If you need more control over the code generation, you can use the "Advanced settings", which allow you to specify the exact properties that we use to create the code in Shopify. This requires that you read their documentation so that you can correctly override the default values. Using these settings you can create codes that only work for specific products, collections, etc. Contact us if you need help with this.
https://docs.couponcarrier.io/article/13-what-are-autogenerated-code-lists
2020-08-03T12:47:34
CC-MAIN-2020-34
1596439735810.18
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/5e5523842c7d3a7e9ae83fc2/file-DyiCrWwiiL.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/5e55242604286364bc95d0fb/file-639zC1RMhd.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/5e5525a52c7d3a7e9ae83fe9/file-HTa9z8WsXB.png', None], dtype=object) ]
docs.couponcarrier.io
General availability deployment After a release wave is generally available, all environments will be automatically turned on to receive mandatory updates, which will enable the early access features and the generally available features of a release. Tip Check out Dynamics 365 and Power Platform Release Plans to learn more about new features to be released in the release waves. Throughout a release wave, your environments will be updated during one of the weekend maintenance windows based on your environments' region. The specific dates when the updates will occur will be published to the Message Center. Each notification will include the dates, the maintenance window, and the Release Plan reference for the list of optimizations, fixes, and enhancements. Each environment should see the new features and build numbers by Monday morning, local time. See Policies and communications. Note If you have enabled the early access updates in your environments, you'll continue getting updates throughout the release wave. If you did not opt in for the early access updates in your environments, your environment will be automatically updated to receive the new release based on the general availability deployment schedule for your region. Deployment schedule The deployment schedule will be updated soon. See also Dynamics 365 release schedule Dynamics 365 and Power Platform Release Plans Policies and communications
https://docs.microsoft.com/en-us/power-platform/admin/general-availability-deployment
2020-08-03T13:23:59
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
Technical Blog Technical and Product News and Insights from Rackspace Use Microsoft Flow to provision sites in Sharepoint Office 365 Microsoft® (MS) Flow is a cloud-based solution that effectively automates and simplifies business processes by creating automated workflows. This post describes how to create a fully automated bulk solution to provision a sub-site in SharePoint® Office 365® (SPO) for any site creation requests that come through a custom list. You can achieve this goal by using a scheduled MS Flow workflow and the REST API features. When you create a workflow, keep the following considerations in mind: Create new requests in a custom list. The approval workflow might not run on the newly created item. A scheduled MS Flow (Power Automate) workflow picks up all the approved requests from the list and provisions a site. The scheduling time depends on your requirements. After you provision the site, you should update the value in a custom list so that the system won't pick up the request when the next workflow runs. Steps to build the solution: Create a custom list Create and schedule a workflow Create a SharePoint list Create a custom SharePoint list, Site Creation Request, with the first eight columns in Figure 1. Figure 1: Set of columns Now add items to the Site Creation Request list. For example, add two items to the list, one without a unique permission and another with a unique permission, as shown in Figure 2. The IsUniquePermission column indicates whether the requested site inherits permissions from its parent or has unique permissions. The Site Template column contains the Site Template ID in the following format: - Teams: STS#3 - Communication Site: SITEPAGEPUBLISHING#0 Figure 2: Add an item to the site creation request list Create and schedule a workflow Use the following steps to create and schedule an MS Flow workflow: Step 1: Build a scheduled workflow Build a scheduled flow, as shown in Figure 3, pass all the parameters, and click Create. Figure 3: Create the scheduled workflow Step 2: Add a variable action In the next screen, after the recurrence step, add two Initialize Variable actions for the List Name and IsUniquePermission variables, as shown in Figure 4. Figure 4: Add the initialize variable action Step 3: Add Get Items action Add a Get Items action to fetch all the records from the Site Creation list based on the condition, Approved is equal to Yes and Site Created is equal to No, as shown in Figure 5. Note: The Filter Query parameter accepts only an OData query. Figure 5: Add the get items action Step 4: Add an Apply to Each action Add the Apply to each action and select the value from the previous Get Items action, as shown in Figure 6. Figure 6: Add the Apply to each action Step 5: Add a Compose action Inside the Apply to each block, add a Compose action to get the Site Template ID of the current item in the loop. Split the selected Site Template Id and get only the Site ID from the value by using the following command, as shown in Figure 7. split(item()['SiteTemplate']?['Value'],'-') Figure 7: Add the Compose action Step 6: Add a Send HTTP action to provision site Inside the Apply to each block, add a Send an HTTP request to SharePoint action to construct and execute a SharePoint REST API call to provision a site based on the parameters, as shown in Figure 8. The details of the map request follow: - Site Address: Maps to the RootSiteURL column.
- Method: - URI: /_api/web/webinfos/add - Accept header: application/json;odata=verbose - Content Type header: application/json;odata=verbose - Body: { ‘parameters’: { ‘__metadata’: { ‘type’: ‘SP.WebInfoCreationInformation’ }, ‘Url’:‘@{items(‘Apply_to_each’)[‘SubSite’]}‘, ‘Title’:‘@{items(‘Apply_to_each’)[‘SubSite’]}‘, ‘Description’:‘My Description’, ‘Language’:‘1033’, ‘WebTemplate’:‘@{trim(outputs(‘Get_Site_Template_Id’)[1])}‘, ‘UseUniquePermissions’:‘@{items(‘Apply_to_each’)[‘isUniquePermission’]}’ } } The body parameter details include the following elements: Title: Maps to the column SubSite. WebTemplate: Gets the output from the Site Template ID action by using the command: trim(outputs('Get_Site_Template_Id')[1]) UseUniquePermissions: Maps to the column IsUniquePermission. Figure 8: Add the HTTP request to provision the site Step 7: Add a Send HTTP action to update column Add a Send an HTTP request to SharePoint to update the Site Created column of the current item to YES, as shown in Figure 9. The details of the map request follow: - Site Address: Maps to the RootSiteURL column. - Method: - URI: _api/web/lists/GetByTitle(‘@{variables(‘ListName’)}‘)/items(@{items(‘Apply_to_each’)[‘ID’]}) - Syntax: _api/web/lists/GetByTitle(‘ListName’)/items(ID) - Accept header: application/json;odata=verbose - Content Type header: application/json;odata=verbose - Body: { ‘__metadata’: { ‘type’: ‘SP.Data.Site_x0020_Creation_x0020_RequestListItem’}, ‘SiteCreated’: true } Note: The highlighted value in Figure 9 is the static name of the Site Creation list. Figure 9: Add the HTTP request to update an item Step 8: Complete the Apply to each action Complete the Apply to each action, as shown in Figure 10. Figure 10: Apply to each block Step 9: Complete the scheduled workflow Complete the scheduled workflow, as shown in Figure 11. Figure 11: Complete the workflow for site creation Conclusion I hope this post helps you understand how MS Flow and the REST API work together with SharePoint sites and list-based operations. One of the most significant advantages of Flow is that it is incredibly easy to use, and even people with no technical background can create workflows without trouble. Use the Feedback tab to make any comments or ask questions. You can also chat now to start the conversation. Learn more about Microsoft Office 365.
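The same REST call that Flow issues in Step 6 can also be exercised from any HTTP client, which is handy when debugging the request body. The Python sketch below is only an illustration: the add_subsite helper, the site URL, and the bearer token are assumptions, and obtaining a real access token (for example through an Azure AD app registration) is outside the scope of this post.

import requests

def add_subsite(site_url, access_token, subsite_name, template="STS#3", unique_permissions=False):
    # Call the SharePoint REST endpoint _api/web/webinfos/add to create a sub-site.
    payload = {
        "parameters": {
            "__metadata": {"type": "SP.WebInfoCreationInformation"},
            "Url": subsite_name,
            "Title": subsite_name,
            "Description": "Created outside Flow for testing",
            "Language": 1033,
            "WebTemplate": template,
            "UseUniquePermissions": unique_permissions,
        }
    }
    headers = {
        "Authorization": "Bearer " + access_token,
        "Accept": "application/json;odata=verbose",
        "Content-Type": "application/json;odata=verbose",
    }
    resp = requests.post(site_url + "/_api/web/webinfos/add", json=payload, headers=headers)
    resp.raise_for_status()
    return resp.json()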
https://docs.rackspace.com/blog/use-microsoft-flow-to-provision-sites-in-sharepoint-office-365/
2020-08-03T11:56:32
CC-MAIN-2020-34
1596439735810.18
[array(['Picture1.png', None], dtype=object) array(['Picture2.png', None], dtype=object) array(['Picture3.png', None], dtype=object) array(['Picture4.png', None], dtype=object) array(['Picture5.png', None], dtype=object) array(['Picture6.png', None], dtype=object) array(['Picture7.png', None], dtype=object) array(['Picture8.png', None], dtype=object) array(['Picture9.png', None], dtype=object) array(['Picture10.png', None], dtype=object) array(['Picture11.png', None], dtype=object)]
docs.rackspace.com
DeepPavlov tutorials Introduction to DeepPavlov Jupyter notebook | slides Install the library and understand a simple "Hello World!" bot written in 7 lines of code. Experiment with a basic pattern-matching rule-based bot. Data preparation in DeepPavlov Learn how to read and prepare data for trainable components. Named Entity Recognition with DeepPavlov Jupyter notebook | slides | video Build a simple convolutional neural network to solve the named entity recognition task. Master data downloading, preprocessing and batching, then train and score the model. Task-oriented bot with DeepPavlov Jupyter notebook | slides | video. Chit-chat bot with DeepPavlov Jupyter notebook | slides | video Implement in DeepPavlov a sequence-to-sequence encoder-decoder model with attention mechanism and teacher forcing for chit-chat.
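For readers who want a feel for the 7-line "Hello World!" bot from the first tutorial, here is a sketch in the spirit of the pattern-matching example. Treat the module paths and class names as assumptions; they have moved between DeepPavlov releases, so check the linked notebook for the version you have installed:

# A pattern-matching "Hello World!" bot, in the spirit of the introductory tutorial.
from deeppavlov.skills.pattern_matching_skill import PatternMatchingSkill
from deeppavlov.agents.default_agent.default_agent import DefaultAgent
from deeppavlov.agents.processors.highest_confidence_selector import HighestConfidenceSelector

hello = PatternMatchingSkill(responses=["Hello world!"], patterns=["hi", "hello", "good day"])
bye = PatternMatchingSkill(responses=["Goodbye!", "See you around"], patterns=["bye", "see you"])
fallback = PatternMatchingSkill(responses=["I don't understand, sorry"])

agent = DefaultAgent([hello, bye, fallback], skills_selector=HighestConfidenceSelector())
print(agent(["Hello", "Bye", "Or something else"]))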
https://docs.deeppavlov.ai/en/0.0.6.5/intro/tutorials.html
2019-11-12T04:35:07
CC-MAIN-2019-47
1573496664567.4
[]
docs.deeppavlov.ai
Copy File This action allows you to copy a file to a specified path. - File Identifier. It can be: the file ID (Example: 77), the file path (Example: assets/Cars/Ford.jpg) or the file path from the current portal (Example: Portals/0/assets/Cars/Ford.jpg). Supports My Tokens. - Destination Folder. This field supports expressions; pass the path of the destination folder.
https://docs.dnnsharp.com/actions/dnn/copy-file.html
2019-11-12T02:43:55
CC-MAIN-2019-47
1573496664567.4
[array(['http://static.dnnsharp.com/documentation/copy_file.png', None], dtype=object) ]
docs.dnnsharp.com
Upgrade an Azure Kubernetes Service (AKS) cluster As part of the lifecycle of an AKS cluster, you often need to upgrade to the latest Kubernetes version. It is important that you apply the latest Kubernetes security releases, or upgrade to get the latest features. This article shows you how to upgrade the master components or a single, default node pool in an AKS cluster. For AKS clusters that use multiple node pools or Windows Server nodes (currently in preview in AKS), see Upgrade a node pool in AKS. Before you begin This article requires that you are running the Azure CLI version 2.0.65 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI. Warning An AKS cluster upgrade triggers a cordon and drain of your nodes. If you have a low compute quota available, the upgrade may fail. See increase quotas for more information. Check for available AKS cluster upgrades To check which Kubernetes releases are available for your cluster, use the az aks get-upgrades command. The following example checks for available upgrades to the cluster named myAKSCluster in the resource group named myResourceGroup: az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table Note When you upgrade an AKS cluster, Kubernetes minor versions cannot be skipped. For example, upgrades between 1.12.x -> 1.13.x or 1.13.x -> 1.14.x are allowed, however 1.12.x -> 1.14.x is not. To upgrade from 1.12.x -> 1.14.x, first upgrade from 1.12.x -> 1.13.x, then upgrade from 1.13.x -> 1.14.x. The following example output shows that the cluster can be upgraded to versions 1.13.9 and 1.13.10: Name ResourceGroup MasterVersion NodePoolVersion Upgrades ------- ---------------- --------------- ----------------- --------------- default myResourceGroup 1.12.8 1.12.8 1.13.9, 1.13.10 If no upgrade is available, you will get: ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info. Upgrade an AKS cluster With a list of available versions for your AKS cluster, use the az aks upgrade command to upgrade. During the upgrade process, AKS adds a new node to the cluster that runs the specified Kubernetes version, then carefully cordons and drains one of the old nodes to minimize disruption to running applications. When the new node is confirmed as running application pods, the old node is deleted. This process repeats until all nodes in the cluster have been upgraded. The following example upgrades a cluster to version 1.13.10: az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.13.10 It takes a few minutes to upgrade the cluster, depending on how many nodes you have. Note There is a total allowed time for a cluster upgrade to complete. This time is calculated by taking the product of 10 minutes * total number of nodes in the cluster. For example, in a 20 node cluster, upgrade operations must succeed in 200 minutes or AKS will fail the operation to avoid an unrecoverable cluster state. To recover on upgrade failure, retry the upgrade operation after the timeout has been hit.
To confirm that the upgrade was successful, use the az aks show command: az aks show --resource-group myResourceGroup --name myAKSCluster --output table The following example output shows that the cluster now runs 1.13.10: Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn ------------ ---------- --------------- ------------------- ------------------- --------------------------------------------------------------- myAKSCluster eastus myResourceGroup 1.13.10 Succeeded myaksclust-myresourcegroup-19da35-90efab95.hcp.eastus.azmk8s.io Next steps This article showed you how to upgrade an existing AKS cluster. To learn more about deploying and managing AKS clusters, see the set of tutorials.
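If you want to run the same check from a script rather than typing the commands by hand, a thin wrapper around the Azure CLI is enough. The sketch below shells out to az and parses its JSON output; the exact shape of that JSON is not documented here, so the code simply prints it for inspection:

import json
import subprocess

def get_aks_upgrades(resource_group, cluster_name):
    # Return the parsed JSON from `az aks get-upgrades` for one cluster.
    result = subprocess.run(
        ["az", "aks", "get-upgrades",
         "--resource-group", resource_group,
         "--name", cluster_name,
         "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

upgrades = get_aks_upgrades("myResourceGroup", "myAKSCluster")
print(json.dumps(upgrades, indent=2))  # inspect the available target versions here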
https://docs.microsoft.com/en-us/azure/aks/upgrade-cluster
2019-11-12T04:21:49
CC-MAIN-2019-47
1573496664567.4
[]
docs.microsoft.com
class in UnityEngine.Playables / Extensions for all the types that implement IPlayableOutput. Extension methods are static methods that can be called as if they were instance methods on the extended type. using UnityEngine; using UnityEngine.Playables; public class ExamplePlayableBehaviour : PlayableBehaviour { void Start() { PlayableGraph graph = PlayableGraph.Create(); ScriptPlayableOutput scriptOutput = ScriptPlayableOutput.Create(graph, "MyOutput"); // Calling method PlayableOutputExtensions.SetWeight on ScriptPlayableOutput as if it was an instance method. scriptOutput.SetWeight(10); // The line above is the same as calling PlayableOutputExtensions.SetWeight directly, but it is more compact and readable. PlayableOutputExtensions.SetWeight(scriptOutput, 10); } }
https://docs.unity3d.com/ScriptReference/Playables.PlayableOutputExtensions.html
2019-11-12T04:22:13
CC-MAIN-2019-47
1573496664567.4
[]
docs.unity3d.com
One of the functions provided in the tools class is tools.estimate_rosette_leaf_count(). This implements a pre-trained convolutional neural network (which is accessible directly at networks.rosetteLeafRegressor()) to count the number of leaves on a rosette-type plant. This guide reviews the basic process which was used to train the regression model to perform this leaf-counting task. It is intended to help users who wish to train their own models for similar tasks. The full code for this model appears in the models/leaf_counter_regressor.py source file. Gathering the Training Data The data used to train the leaf counter comes from the IPPN dataset of top-view arabidopsis rosette images. These images come with a CSV file called Leaf_counts.csv which provides the ground-truth number of leaves corresponding to each image. Setting Up Model Parameters Let's break down the setup of our model. See the documentation on model options for more information about these settings. import deepplantphenomics as dpp model = dpp.RegressionModel(debug=True, save_checkpoints=False, tensorboard_dir='/home/user/tensorlogs', report_rate=20) These lines import the DPP library and start a new model for regression problems. We specify debug=True to see console output, save_checkpoints=False prevents the saving of checkpoints during training (it will still save the model at the end), and tensorboard_dir specifies the location to write Tensorboard accumulators so we can visualize the training process. report_rate=20 means that we will report results for one training batch and one testing batch every 20 batches. # 3 channels for colour, 1 channel for greyscale channels = 3 # Setup and hyperparameters model.set_batch_size(4) model.set_number_of_threads(8) model.set_image_dimensions(128, 128, channels) model.set_resize_images(True) These lines tell us about the input images. In this case, we are going to use batches of 4 examples for each iteration of training (since this is a very small dataset). We are going to use 8 threads for each Tensorflow input producer. This is useful if a single producer thread can't keep up with the GPU. It normally doesn't matter, but we're training on a machine with a lot of cores so why not use them? Since the size of images varies in this dataset, we are going to choose to resize them to 128x128. We could also choose to resize them by cropping or padding instead. model.set_num_regression_outputs(1) model.set_test_split(0.2) model.set_validation_split(0.0) model.set_learning_rate(0.0001) model.set_weight_initializer('xavier') model.set_maximum_training_epochs(500) These are hyper-parameters to use for training. The first line specifies that we are doing a regression problem with one output: the number of leaves. We are going to use 20% of the examples for testing and none of them for validation, meaning that 80% of the examples are used for training. We are not using any regularization. We will use an initial learning rate of 0.0001. We are going to initialize our layer weights using the Xavier (Glorot) initialization scheme. We will train until 500 epochs - i.e. until we have seen all of the examples in the training set 500 times. Specifying Augmentation Options Since the size of the dataset is extremely small (165 images), it is necessary to use data augmentation. This means that we are going to artificially expand the size of the dataset by applying random distortions to some of the training images. 
The augmentations we are going to use are: randomly skewing the brightness and/or contrast, randomly flipping the images horizontally and/or vertically, and applying a random crop to the images. The brightness/contrast augmentations are probably not needed as all of the images are taken under the same scene conditions, but they may help the trained network generalize to other datasets. # Augmentation options model.set_augmentation_brightness_and_contrast(True) model.set_augmentation_flip_horizontal(True) model.set_augmentation_flip_vertical(True) model.set_augmentation_crop(True) At test time, the images will be cropped to center in order to maintain the same input size. To illustrate the importance of data augmentation, here are test regression loss results showing the difference adding each augmentation makes: A function is included specifically for loading the data for this task. # Load all data for IPPN leaf counting dataset model.load_ippn_leaf_count_dataset_from_directory('./data/Ara2013-Canon') For other tasks, your own images and labels can be loaded via loaders for directories and CSV files. For example, if you had your images in a directory called data and a CSV file data/my_labels.csv where the first column is the filename and the second column is the number of leaves, you could do this instead: # ALTERNATIVELY - Load labels and images model.load_multiple_labels_from_csv('./data/my_labels.csv', id_column=0) model.load_images_with_ids_from_directory('./data') Building the Network Architecture We are going to use a small convolutional neural network for this task. It is composed of four convolutional layers. There are no fully connected layers except the output layer. Each convolutional layer is followed by a pooling layer. # Define a model architecture model.add_input_layer() model.add_convolutional_layer(filter_dimension=[5, 5, channels, 32], stride_length=1, activation_function='tanh') model.add_pooling_layer(kernel_size=3, stride_length=2) model.add_convolutional_layer(filter_dimension=[5, 5, 32, 64], stride_length=1, activation_function='tanh') model.add_pooling_layer(kernel_size=3, stride_length=2) model.add_convolutional_layer(filter_dimension=[3, 3, 64, 64], stride_length=1, activation_function='tanh') model.add_pooling_layer(kernel_size=3, stride_length=2) model.add_convolutional_layer(filter_dimension=[3, 3, 64, 64], stride_length=1, activation_function='tanh') model.add_pooling_layer(kernel_size=3, stride_length=2) model.add_output_layer() Depending on your task, you may have better results with larger or smaller networks. Don't assume that a large model is better, especially with small datasets! Try a few different configurations with different feature extractors (the convolutional layers and accompanying machinery) and classifiers (the fully connected layers). Training We begin training the model by simply calling the training function. # Begin training the regression model model.begin_training() The model will train until 500 epochs. We will see updates both in the console as well as in Tensorboard. At the end, loss statistics will be reported for the entire test set. 09:40AM: Results for batch 32980 (epoch 499) - Loss: 0.19386, samples/sec: 871.05 09:40AM: Stopping due to maximum epochs 09:40AM: Saving parameters... 09:40AM: Computing total test accuracy/regression loss... 09:40AM: Mean loss: -0.0272586610582 09:40AM: Loss standard deviation: 0.624978633174 09:40AM: Mean absolute loss: 0.480917639203 09:40AM: Absolute loss standard deviation: 0.400074431613 09:40AM: Min error: -1.19493865967 09:40AM: Max error: 1.5458946228 09:40AM: MSE: 0.391341326526 09:40AM: R^2: 0.904088812561 09:40AM: All test labels: 09:40AM: [ 9. 6. 6. 7. 9. 7. 10. 7. 9. 7. 9. 8. 11. 8. 9. 10. 13. 8. 9. 11. 13. 10. 11. 13. 7. 7. 8. 8. 6. 7. 6. 7. 9. 6. 6. 7.]
09:40AM: All predictions: 09:40AM: [ 8.1905098 6.8377347 6.05786324 6.85530901 9.53642273 6.90101051 9.07618999 7.18060684 9.11283112 7.32292271 10.06754875 9.54589462 10.39970398 8.09113407 8.87572861 9.58766937 11.90369415 7.8541441 8.67022324 11.41111469 11.82732868 10.79200935 11.04158878 11.80506134 6.51270151 7.24674559 7.92943382 8.56169319 5.93615294 6.48214674 6.16266203 7.30149126 8.1905098 6.8377347 6.05786324 6.85530901] 09:40AM: Histogram of L2 losses: 09:40AM: [2 0 0 1 0 0 0 0 0 1 0 0 0 0 2 0 0 0 0 0 0 1 0 0 1 1 0 0 1 0 0 1 0 0 0 0 0 0 3 2 0 2 0 0 0 3 1 1 0 1 1 0 1 0 1 1 0 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 2 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1] 09:40AM: Shutdown requested, ending session... For regression problems, the loss value is the L2 norm of the ground truth label subtracted from the regression output. This means that for a one-dimensional output, like leaf count, we can interpret the loss as the absolute difference in count. Also, for one-dimensional output, notice that the L2 norm is reported as the "absolute" loss, while the relative difference is also reported. This is useful in cases (such as leaf counting) where we are interested in over- and under-prediction. For multi-dimensional outputs, the mean/std and absolute mean/std will be identical, since the L2 norm is never negative. An error histogram is output as a vector of frequencies for 100 bins. Note that the min and max loss are also reported. The first bin corresponds to the interval (-inf, min] and the last bin corresponds to the interval [max, inf). The area between these bins is divided into 98 bins of equal size. MSE (mean squared error) and R squared are also provided. For smaller test sets, the whole set of ground truth and predicted values is provided so that you can calculate whatever other statistics you need. My Model's Not Converging, What Can I Do? This model seems to do quite well on this task, as you can see the loss rapidly decreasing until it settles around a particular value. In other cases, your model may thrash around, never improving. There are a few things you can try to encourage convergence. - Lower the learning rate by an order of magnitude. - Tune DropOut rates, or remove DropOut layers. - Try a larger model. It may not have enough representational capacity for the problem. - Get more data!
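Once a model like this has been trained, or when using the pre-trained network shipped with the library, leaf counts for new images can be obtained through the tools function mentioned at the top of this page. The exact signature is not shown in this tutorial, so the argument below, a list of image file paths, is an assumption made for illustration:

import glob
import deepplantphenomics as dpp

# Collect the top-view rosette images we want leaf counts for.
image_files = sorted(glob.glob('./new_images/*.png'))

# Assumed usage: pass a list of image paths, get back one estimated count per image.
leaf_counts = dpp.tools.estimate_rosette_leaf_count(image_files)

for path, count in zip(image_files, leaf_counts):
    print(path, '-> approximately', round(count, 1), 'leaves')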
https://deep-plant-phenomics.readthedocs.io/en/latest/Tutorial-Training-The-Leaf-Counter/
2019-11-12T03:31:12
CC-MAIN-2019-47
1573496664567.4
[array(['../leaf-counter-augmentation.png', 'augmentation-results'], dtype=object) ]
deep-plant-phenomics.readthedocs.io
0385 Gelato Messina

General information
- Submitterʼs name - Gelato Messina
- Accommodation and Food Services
- Manufacturing
- Agriculture, Forestry and Fishing
- Do you have a particular regional interest? If required, you may select multiple regions.
- No.
- 225113 Marketing Specialist
- 351112 Pastrycook

Please outline the evidence or data that would support these occupations being added to the MLTSSL.
- Gelato Messina is a primary producer (dairy), bespoke food manufacturer, restaurateur operating a fine dining operation, and a gelato and cake retailer. We employ approximately 200 people across three States in Australia. From farm gate to finished product, we make everything ourselves. Based on our experience in the hospitality industry, we strongly believe that 351112 Pastrycook & 225113 Marketing Specialist should be added to the MLTSSL. With reference to the 351112 Pastrycook position, we find it incredibly hard to fill available vacancies from the domestic marketplace alone. Traditional advertisements yield only 10 - 15 candidates per campaign, with the majority being foreign applicants seeking sponsorship. 4-5 of those applicants will be Australian-based local employees, 2 will turn up to interviews with no qualifications or work experience, and often 1-2 other Australian applicants will simply not present for an interview. While we continue to seek out local skill sets and take on apprentices, we continue to need access to foreign skilled pastry chefs who have been trained in Australia by reputable training institutions like Le Cordon Bleu and William Angliss. Discussions with the reputable training providers that we partner with reveal a troubling statistic: 90% - 95% of their pastry students are foreigners, with few examples of local students enrolling in Patisserie and Other Culinary courses. With the 351112 Pastrycook position on the STSOL list, these providers are reporting a decline in foreign enrolments in their courses in favour of occupations on the MLTSSL. This will exacerbate the skills shortage as the foreign cohort, who make up the majority of pastry students enrolled with these providers, opt instead for careers on the MLTSSL. The shortage is acute, and in many instances we need access to overseas Senior Pastry Chefs in order to get the depth of experience to carry out the work we need and, importantly, to train and mentor our locals.
https://docs.employment.gov.au/0385-gelato-messina
2019-11-12T02:52:40
CC-MAIN-2019-47
1573496664567.4
[]
docs.employment.gov.au
Upgrading From v1 to v2
Version 2 was created because the typehints in version 1 were holding the package back in some cases (like multiple select, which requires an array of values instead of the string that was assumed). Luckily, bumping the version number in composer.json and running composer update should be non-breaking. Here are some caveats to look out for:
- The package now ships with a html() function by default, which returns an instance of the Html builder class. If you’ve defined your own method, you’ll need to remove it.
- Various type hints have been removed throughout the package. If you’ve extended a class to override its methods, you’ll need to update them accordingly (everything still behaves the same!)
https://docs.spatie.be/laravel-html/v2/upgrading/
2019-11-12T03:48:56
CC-MAIN-2019-47
1573496664567.4
[]
docs.spatie.be
Count 'publisher' records by filter criteria.

GET Response Notes
The number of 'publisher' records matching the filter.
Returns: integer

Related Models
- Accounts referenced as account
- AdNetworks referenced as ad_network
https://tune.docs.branch.io/management/advertiser-publishers-count/
2019-11-12T03:12:45
CC-MAIN-2019-47
1573496664567.4
[]
tune.docs.branch.io
Server administration overview

This documentation explains how to administer the Scene7 Image Rendering server. Image Rendering consists of two major components:
- A Java package is deployed with the Image Serving Platform Server and manages client connections, caching, and material catalogs.
- A native code module is deployed as an extension library for the Image Server and implements the core image rendering functionality.
Both components are collectively called the Render Server. Image Rendering shares many server facilities with Image Serving, and all options are configured by editing a configuration file. Additional configuration attributes are provided by the default catalog (default.ini) or specific material catalogs. See Material Catalogs for details.
The Image Rendering install folder (install_folder) is [install_root/ImageRendering]. On Windows, the default install_root is C:\Program Files\Scene7. A different folder may be specified during installation. On Linux, install_root must always be /usr/local/scene7. Symbolic links may be used. All file paths are case-sensitive on UNIX and case-insensitive on Windows.
https://docs.adobe.com/content/help/en/dynamic-media-developer-resources/image-serving-api/image-rendering-api/server-administration/c-ir-server-overview.html
2019-11-12T03:43:04
CC-MAIN-2019-47
1573496664567.4
[]
docs.adobe.com
Performing QA Steps

Steps to help you perform QA before approving and deploying your Target implementation. The following sections contain more information:

Previewing DTM Changes on Your Production Website
Changes saved within DTM are immediately available in DTM’s staging library and should be QA’d before they are published. At most, they will take a minute or two to roll out across the Akamai network.
To QA DTM changes on your production website:
- Load your production website in your browser.
- Use the DTM Switch tool or console commands to force DTM to load your staging library.
- Reload the page to preview the changes made in DTM.
When you use the DTM Switch or console statements to load the staging library, you should see “-staging” appended to the end of the file via your developer tool’s Network tab:
If you are ever concerned that you are not seeing the latest changes, you can run the console statement “_satellite.buildDate” to confirm the timestamp of the build you are loading.

Ensuring that the Target Tool is Loading Properly
- Open your web browser’s developer console.
- Turn on “Debug” mode with the DTM Switch plugin.
When the Target Tool loads, the following statements display in the console:
Also, using the Network tab, you can verify that the mbox.js file has loaded. It will look something like this:

Ensuring that Target Mboxes are Firing Properly
In the Network tab, you can verify that your mboxes are firing (they typically begin with “standard?” or “ajax?”):
The Adobe Marketing Cloud Debugger is also tremendously helpful for verifying the calls and parameters:
https://docs.adobe.com/content/help/pt-BR/dtm/implementing/deploying/performing-qa-steps.html
2019-11-12T03:32:54
CC-MAIN-2019-47
1573496664567.4
[array(['/content/dam/help/dtm.pt-BR/help/target/qa-approval-deployal-steps/assets/network_tab.png', None], dtype=object) array(['/content/dam/help/dtm.pt-BR/help/target/qa-approval-deployal-steps/assets/sat_build_date.png', None], dtype=object) array(['/content/dam/help/dtm.pt-BR/help/target/qa-approval-deployal-steps/assets/console.png', None], dtype=object) array(['/content/dam/help/dtm.pt-BR/help/target/qa-approval-deployal-steps/assets/network_tab_2.png', None], dtype=object) array(['/content/dam/help/dtm.pt-BR/help/target/qa-approval-deployal-steps/assets/network_tab_3.png', None], dtype=object) array(['/content/dam/help/dtm.pt-BR/help/target/qa-approval-deployal-steps/assets/mc_debugger.png', None], dtype=object) ]
docs.adobe.com
Overview

WooCommerce is a free e-commerce plugin for WordPress that enables users to sell their products online. It is designed to handle all sorts of online stores and supports businesses of all sizes: small, medium, and large.

CedCommerce Amazon-WooCommerce Integration helps sellers feed product data from their WooCommerce store to amazon.com, connecting it with one of the biggest players in the e-commerce industry. To cater to these businesses successfully, CedCommerce offers Amazon-WooCommerce Integration to help store owners sell WooCommerce products on Amazon. This powerful extension enables synchronization of product data, orders, and stock units.

Key Features are as follows:
- Profile Based Products Upload: Admin can create a Profile and assign it to products to automate the upload procedure.
- Product Data Validation: The extension validates product information in accordance with Amazon standards and values.
- Easy product upload: Admin can upload products with a single click when they are ready to upload.
- Bulk Uploading: To facilitate uploading a large number of products and to minimize manual work, the Amazon marketplace API integration extension enables sellers to upload products in bulk.
- Feed status: All API requests made to Amazon are logged as feeds, through which the details of each feed can be viewed.
- Inventory Management: Real-time synchronization of inventory.
- Pricing: Enables admin to set Variable Pricing.
https://docs.cedcommerce.com/woocommerce/amazon-woocommerce-integration-guide-0-0-1/
2019-11-12T02:51:30
CC-MAIN-2019-47
1573496664567.4
[array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/skins/bar/images/search.png', 'search_box'], dtype=object) array(['https://docs.cedcommerce.com/wp-content/plugins/documentor/core/images/loader.gif', None], dtype=object) ]
docs.cedcommerce.com
The Build tool is used for creating the final terrain object, which is optimized for runtime performance. To enable the Build tool, click on the Build tab in the Inspector or press F7:
• Chunk Size: Size of each terrain chunk. Choose an appropriate value to take advantage of LODs and light map baking.
• Mark Static: Should it mark the terrain and its children as static for baking?
• Generate LOD: Should it generate a lower-resolution version of this terrain to reduce rendering overhead?
• LOD Transition: Transition between LODs.
After hitting the Build button, the final version of the terrain will be created as child game objects. To continue editing, you should click on the Fix button at the top of the Inspector GUI:
https://docs.pinwheel.studio/polaris/advanced-terrain-system/build-final-terrain
2019-11-12T04:35:23
CC-MAIN-2019-47
1573496664567.4
[]
docs.pinwheel.studio
HttpClientMethods

The methods in HttpClientMethods are intended for use in Groovy code only and are, as the class name implies, for performing HTTP-related operations.

For Groovy only
Gloop already offers what we call HTTP client services, which are a special type of Gloop service that allows application developers to send HTTP requests and consume HTTP responses. You can also use the http method in the HttpMethods class to make ad-hoc HTTP requests.

Methods

Response as String
The methods in HttpClientMethods return the response content as a String. Depending on the service's response return type, you may need to decode it as XML or JSON. Gloop has JsonMethods and XmlMethods classes for accomplishing such tasks.

Usage
The code snippets below show how to use some of the methods in HttpClientMethods. You can find more details in the Javadocs.
Using the http(String uri) method:
Using the http(String uri, String body) method:

JSON as request body
When sending a JSON request body using the method above, you need to encode it as a String, including new lines and tabs. Otherwise, the method will still send a request; however, the requested service may give an Invalid JSON Payload error (or similar).
https://docs.torocloud.com/martini/latest/developing/one-liner/http-client-methods/
2019-11-12T04:24:56
CC-MAIN-2019-47
1573496664567.4
[]
docs.torocloud.com
Snark is a serverless cloud-native platform for large-scale data ETL, distributed machine learning training, and inference. Snark manages your data and models in persistent storage on the cloud and allows you to run parallel tasks across a fleet of cloud instances, with data streaming to your instances on the fly. Snark's serverless framework lets you scale any local machine learning pipeline (R/H2O/Sklearn/Xgboost/etc) to the cloud with zero friction.

Step 1. Go to lab.snark.ai to sign up. Please verify your account.

Step 2. Install snark with pip3
> pip3 install snark
In case of difficulties with installation, take a look at the troubleshooting section.

Step 3. Sign in through the CLI
> snark login

Step 4. Copy your code and data to persistent storage
Use our CLI to copy your code and data to Snark Storage. Snark Storage will be mounted as /snark when your code runs in cloud instances. For a single small file, you can also upload through the web UI at lab.snark.ai, in the Storage section, with the Upload button.
> cd /path/to/project/
> git clone
> snark cp -r /path/to/project/examples snark://project/

Step 5. Create a YAML description
Snark uses YAML training workflow descriptions. Below is a basic MNIST example.
version: 1
experiments:
  mnist_dev_test:
    image: pytorch/pytorch:latest
    hardware:
      gpu: k80
    command:
      - cd /snark/project/examples/mnist
      - python main.py
This YAML file describes an MNIST experiment which uses PyTorch on a single K80 GPU. The workflow runs as a combination of the commands described by the file. Save it as mnist.yml.

Step 6. Start the workflow
Use snark up to start the workflow.
> snark up -f mnist.yml
snark up starts the experiment described by the YAML file. It spins up a cluster with 1 K80 GPU. In general, workflows executed by Snark can be of different natures, such as distributed ML training, hyperparameter search, batch inference, and deployment.

Step 7. List Experiments
The snark ps command will list the experiments along with their IDs and states.
> snark ps
Experiment IDs are used to tear down the workflows.

Step 8. Tear Down Experiments
Snark workflows can be torn down using the snark down command:
> snark down {experiment_id}
The snark down command shuts down all cloud resources utilized by the experiment workflow.
http://docs.snark.ai/en/latest/
2019-11-12T02:43:13
CC-MAIN-2019-47
1573496664567.4
[]
docs.snark.ai
Salt is available in the FreeBSD ports at sysutils/py-salt. By default salt is packaged using python 2.7, but if you build your own packages from FreeBSD ports, either by hand or with poudriere, you can instead package it with your choice of python.
Add a line to /etc/make.conf to choose your python flavour:
echo "DEFAULT_VERSIONS+= python=3.6" >> /etc/make.conf
Then build the port and install:
cd /usr/ports/sysutils/py-salt
make install
https://docs.saltstack.com/en/2018.3/topics/installation/freebsd.html
2019-11-12T03:19:40
CC-MAIN-2019-47
1573496664567.4
[]
docs.saltstack.com
For some reason an internal 3ds Max process reports this error when tyFlow is present. It appears to be benign and can simply be ignored. Since the message is not reported by tyFlow itself, the exact source of the message is unknown. This large error message popup can appear on machines that do not have the 3ds Max root directory added to the system PATH environment variable, for whatever reason. Adding the 3ds Max root directory to the system PATH environment variable manually should suppress the error.
http://docs.tyflow.com/faq/miscellaneous/
2020-10-20T02:31:22
CC-MAIN-2020-45
1603107869785.9
[]
docs.tyflow.com
No, D-Tools will not affect your QuickBooks account in any way you don't ask it to. We know your accounting is precious, and the last thing we want to do is mess it up.
D-Tools Cloud only allows for pushing Estimates from Projects, not invoices. If you push an estimate you didn't mean to, or later decide you don't want to use it, the client will never have seen it, and you will be able to delete the estimate in QuickBooks without breaking anything in your accounting software. If you are happy with the estimate, you can turn that estimate into an invoice in QuickBooks Online.
Accounts and Products are only pushed to QuickBooks when you choose to push them. Pushing will not affect any products in existing estimates and invoices and will only affect products in future estimates.
Integrate with QuickBooks
https://docs.d-tools.cloud/en/articles/1686662-can-d-tools-affect-my-quickbooks
2020-10-20T03:51:05
CC-MAIN-2020-45
1603107869785.9
[]
docs.d-tools.cloud
Sales Documents In addition to the email messages related to a sale, your store generates invoices, packing slips, and credit memos in both HTML and PDF formats. Before your store goes live, make sure to update these documents with your logo and store address. You can customize the address format and include additional information for reference. - Invoices - Packing Slips - Credit Memos
https://docs.magento.com/user-guide/v2.3/marketing/sales-communications.html
2020-10-20T03:59:39
CC-MAIN-2020-45
1603107869785.9
[]
docs.magento.com
Getting started with SQL Native Client [Chris Lee]

As Acey Bunch explained in April, SQL Native Client meets the needs of developers wanting to take advantage of new features in SQL Server 2005 from ADO, ODBC and OLE DB applications. For those of you who haven't looked at SQL Native Client yet, we now need to start the education process of how to use it. The good news is that it's very simple. We have implemented a very small number of new interfaces for OLE DB, but most new features are implemented via connection or statement attributes, which you already know how to use.

I'll start at a very basic level and talk about how to convert existing applications to use SQL Native Client. This comes in three stages: first, getting existing code running; second, preparing to exploit new features; third, using new features. In this post I'll deal with the first two of these. The third will be covered in my next post, using Multiple Active Result Sets (MARS) as an example.

If you have SQL Server 2005 installed on your machine, SQL Native Client is already installed. If not, SQLNCLI.msi is included with the SQL Server 2005 distribution but isn't copied when SQL Server 2005 is installed, so just copy it from the distribution in the \Setup folder.

Stage 1: Getting existing code running

For ADO you change your connection string to use SQLNCLI as the provider, and add a keyword to enable SQL Server 2000 data type compatibility, so “…;Provider=SQLOLEDB;…” becomes “…;Provider=SQLNCLI;DataTypeCompatibility=80;…”

ADO is a generic data access API that is now part of the Windows Platform and is not part of SQL Native Client, so the ADO on your machine did not change when you installed SQL Native Client. Therefore, it doesn't have any specific knowledge of SQL Server in general, much less the new datatypes introduced for SQL Server 2005. For this reason, when using ADO we have to tell SQL Native Client to map new SQL Server 2005 data types to data types that ADO does understand. I'll explain this in a later article; for now, accept that we need this and that it doesn't get in the way of using other new features of SQL Server 2005.

For ODBC you change the driver name from 'SQL Server' to 'SQL Native Client'. If your application uses a DSN you need to create a new DSN and select 'SQL Native Client' as the driver. If you use DSN-less connections just update the connection string in your application.

For OLE DB you simply change the provider name from 'SQLOLEDB' to 'SQLNCLI', or use CLSID_SQLNCLI instead of CLSID_SQLOLEDB.

Stage 2: Preparing to use new features

For ADO: If you're using ADO, you're already good to go.

ODBC and OLE DB applications need to use sqlncli.h to gain access to new features. You also need to be using Visual Studio .NET. Sqlncli.h is a new common header file for both ODBC and OLE DB and is typically installed to C:\Program Files\Microsoft SQL Server\90\SDK\Include.

For ODBC, sqlncli.h is a straight replacement for odbcss.h. If you're using bcp API calls alongside ODBC calls then you need to link with sqlncli.lib instead of odbcbcp.lib.

For OLE DB you can add the #include for sqlncli.h after the #include for sqloledb.h if you need to use both old and new providers (sqlncli.h doesn't contain the CLSIDs for SQLOLEDB), or you can replace the #include for sqloledb.h with the #include for sqlncli.h if you don't need the old CLSIDs. If you need both headers, the #include for sqloledb.h must come first.
Since sqlncli contains symbols for both ODBC and OLE DB, there's a chance you may get a name clash between one of your own symbols and a symbol defined for use by the 'other' API (the one you're not going to be using). In this case you can add a #define to get rid of the symbols for the API you don't need. If you're using OLE DB you #define _SQLNCLI_OLEDB_, and if you're using ODBC you #define _SQLNCLI_ODBC_.

Next steps …

You're now prepared to start using the new features of SQL Server 2005 available with SQL Native Client. Consult SQL Native Client Programming in the SQL Server Programming Reference in Books Online for details of these features. The documentation for SQL Native Client has been updated quite a lot recently, so you need to use the latest build available to you. Most of the new features are very simple to program and are controlled by connection or statement properties. In my next post I'll take a look at Multiple Active Result Sets (MARS).

Chris Lee
Program Manager, DataWorks

Disclaimer: This posting is provided "AS IS" with no warranties, and confers no rights.
https://docs.microsoft.com/en-us/archive/blogs/dataaccess/getting-started-with-sql-native-client-chris-lee
2020-10-20T04:31:22
CC-MAIN-2020-45
1603107869785.9
[]
docs.microsoft.com
Customer Segment Attributes Magento Commerce only The content on this page is for Magento Commerce only. Learn more Customer segments are defined in a manner similar to shopping cart and catalog price rules. For an attribute to be used in a customer segment condition, the Use in Customer Segment property must be set to Yes. Customer segment conditions can incorporate the following types of attributes:
https://docs.magento.com/user-guide/v2.3/marketing/customer-segment-attributes.html
2020-10-20T03:59:17
CC-MAIN-2020-45
1603107869785.9
[]
docs.magento.com
Troubleshooting - What could be preventing items from getting added to cart? - What are partial payments and how do I handle them? - How do I hide certain items that I don’t want to sell online? - WP Engine Cache Exclusion for Checkout Page - Troubleshooting - Where do the payments from Online Order go? - How do I get customers name and address to show on the receipt(s) - Api Error or Order Type Error - Smart Online Order - How to copy the clover data from development site to production site. - How to fix Scrolling issue when using Nice Scroll - Why are my delivery orders not taxed? - How do I change the order of how items are being displayed? - Why are items showing “Out of Stock”?
https://docs.smartonlineorder.com/category/73-troubleshooting
2020-10-20T03:30:21
CC-MAIN-2020-45
1603107869785.9
[]
docs.smartonlineorder.com
Build a query in the search window

The search window toolbar includes quick access to all these groups of operations for data querying.

Operations over columns window

The Operations Over Columns window opens when you select one of the operations mentioned above. This is where you define the required function and select the arguments needed for your query.

The Create Column and Aggregate Function tabs contain the same fields. Both types of operations create a new column to contain the results of the selected operation performed on the selected argument(s), or columns. For example, the capture below shows an aggregation that will add a new column called HTTPrequests containing the count of grouped values in the user column. Note that you must group your data before performing an aggregation operation, so the Aggregate function tab will not be visible if your data is not grouped.

The Create Column tab includes buttons to filter the list of operations according to their case sensitivity. Some operations have a case-sensitive and a case-insensitive version, so you can use these buttons to show only the version you need.

- The Filter Data and Or tabs contain different fields and options because a filter doesn't add a column. Just like the Create column tab, the Filter data tab includes buttons to show only the case-sensitive or case-insensitive versions of those operations that have both options.
- The Group by tab contains a selector where you can choose the time period by which you want to group your data. Furthermore, you can also select No temporal if you don't want to group by time. In the capture below we are grouping the data in the uri and method columns every 15 minutes.

In most of the tabs, you need to select an Operation from the drop-down list, then click New Argument to activate the field where you identify the necessary arguments. These two fields are interdependent. That is to say, the system will automatically validate or reject certain arguments based on the operation you have selected. Similarly, the system will identify valid operations in green and invalid operations in orange based on any arguments you have selected. For example, the capture below shows that for the selected argument eventdate, the operations that can be performed on that type of field are in green, while the invalid operations are shown in orange.

Each operation requires a specific number or type of argument(s), regardless of the option selected. You can select the default option in your User preferences, and Admin users can do the same for all the users in the domain in their Domain preferences.

Alternatively, you can create a filter in one of the following ways:
- If you select a cell from the data table and press ENTER, the Operations over columns window will open in the Filter data tab, with the Equal (eq, =) operation selected. The cell selected and the column it belongs to will be automatically added as arguments of the filter.
- Select the arrow icon that appears when hovering over a column header to see the list of distinct values in that column, then click a value name. The Operations over columns window will open in the Filter data tab, with the Equal (eq, =) operation selected. The column and value selected will be automatically added as arguments of the filter.

Using collections

If there is any running collection in your domain, you will see an additional set of buttons that allow you to display only default filter operations (standard), running collections (custom), or both (all).
Learn more about collections and how to use them in this article.

Filter column data using the OR selector

You can also filter the data using the OR selector, which lets you apply several filter conditions at once. In the example, we also cloned the 400 status code and applied a different operation to the cloned version. You can also apply an OR filter directly in the top 10 values list. The Operations Over Columns window will appear once you select the first value.

Group data

Events in a data table can easily be grouped to facilitate analysis. The result of grouping is a data table presenting all the different row value combinations of the grouped columns. Grouping is also required in order to subsequently apply aggregation operations on the data.
- Select the icon in the query window toolbar and the Operations Over Columns window appears with the Group By option selected.
- Choose the time period you want to use to group the events and the arguments you want to use to define the groups.
- Select Group by. The result will be a row for each unique combination of arguments and time period (see the code analogy at the end of this section).
After grouping the data, you can select Additional tools → Edit Client Period.

Aggregate

Create columns

You can create new columns in your data tables based on other data already present. For example, apply a geolocation operation to an existing IP address column to create a new column that identifies the country. Devo comes with a set of default operations for creating columns. Note that if you have grouped or aggregated your data, you will only have access to standard operations, and the buttons will not be visible, no matter the option selected. You can select the default option in your User preferences, and Admin users can do the same for all the users in the domain in their Domain preferences.
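For readers who find grouping and aggregation easier to follow in code, here is a rough analogy of the behaviour described above. It is not part of the Devo product; it uses the pandas library with made-up sample values, and borrows the column names (eventdate, uri, method, user), the 15-minute period, and the HTTPrequests aggregation from the examples in this section.

import pandas as pd

# Hypothetical events table using the columns from the examples above.
events = pd.DataFrame({
    "eventdate": pd.to_datetime([
        "2019-11-12 03:01", "2019-11-12 03:07", "2019-11-12 03:20",
    ]),
    "uri": ["/login", "/login", "/home"],
    "method": ["GET", "POST", "GET"],
    "user": ["alice", "bob", "alice"],
})

# "Group by" uri and method every 15 minutes, then "Aggregate" by counting
# the values in the user column into a new HTTPrequests column.
grouped = (
    events
    .groupby([pd.Grouper(key="eventdate", freq="15min"), "uri", "method"])
    .agg(HTTPrequests=("user", "count"))
    .reset_index()
)
print(grouped)

The grouped result has one row per unique combination of time period, uri, and method, which is what the Group by and Aggregate function tabs produce in the search window.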
https://docs.devo.com/confluence/ndt/searching-data/building-a-query/build-a-query-in-the-search-window
2020-10-20T03:39:58
CC-MAIN-2020-45
1603107869785.9
[]
docs.devo.com
The transporter is used mainly to carry flow items from one object to another. It has a fork lift that will raise to the position of a flow item if it is picking up or dropping off to a rack. It can also carry several flow items at a time if needed. The transporter is a task executer. It implements offset travel in two ways. First, if there is an involved flow item for the travel operation, then it travels so that the front of the fork lift is positioned at the destination x/y location and raises its fork to the z height of the destination location. Second, if there is no involved flow item for the offset travel operation, then it travels so that its x/y center and z base arrive at the destination location. The transporter uses the standard events that are common to all task executers. See Task Executer Concepts - Events for an explanation of these events. This object uses the task executer states. See Task Executer Concepts - States for more information. The transporter uses the standard statistics that are common to all task executers. See Task Executer Concepts - Statistics for an explanation of these statistics. The transporter object has six tabs with various properties. The last five tabs are the standard tabs that are common to all task executers (except the dispatcher). For more information about the properties on those tabs, see: Only the Transporter tab is unique to the transporter object. However, most of the properties on this tab are the same as the properties on The Task Executer Tab. Only the first two properties are unique to the Transporter tab. This property determines how fast the lift raises and lowers. When this box is checked, the transporter object will use the default animations for this object. If you want to use your own custom animations, you should clear this box. See The Task Executer Tab for an explanation of the remaining properties.
https://docs.flexsim.com/en/20.0/Reference/3DObjects/TaskExecuters/Transporter/Transporter.html
2020-10-20T03:21:04
CC-MAIN-2020-45
1603107869785.9
[]
docs.flexsim.com
Restore Data from GCS Using BR

This document describes how to restore the TiDB cluster data backed up using TiDB Operator in Kubernetes. BR is used to perform the restore.

The restore method described in this document is implemented based on Custom Resource Definition (CRD) in TiDB Operator v1.1 or later versions. This document shows an example in which the backup data stored in the specified path on Google Cloud Storage (GCS) is restored to the TiDB cluster.

Required database account privileges
- The SELECT and UPDATE privileges of the mysql.tidb table: Before and after the restoration, the Restore CR needs a database account with these privileges to adjust the GC time.

Prerequisites
Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the test2 namespace:
kubectl apply -f backup-rbac.yaml -n test2

Create the gcs-secret secret, which stores the credential used to access GCS:
kubectl create secret generic gcs-secret --from-file=credentials=./google-credentials.json -n test1
The google-credentials.json file stores the service account key that you download from the GCP console. Refer to GCP Documentation for details.

Create the restore-demo2-tidb-secret secret, which stores the root account and password needed to access the TiDB cluster:
kubectl create secret generic restore-demo2-tidb-secret --from-literal=user=root --from-literal=password=<password> --namespace=test2

Process of restore
(The head of the Restore manifest is truncated in the source.)
  name: ...restore-gcs
  namespace: test2
spec:
  # backupType: full
  br:
    cluster: demo2
    clusterNamespace: test2
    # logLevel: info
    # statusAddr: ${status-addr}
    # concurrency: 4
    # rateLimit: 0
    # checksum: true
    # sendCredToTikv: true
  to:
    host: ${tidb_host}
    port: ${tidb_port}
    user: ${tidb_user}
    secretName: restore-demo2-tidb-secret
  gcs:
    projectId: ${project-id}
    secretName: gcs-secret
    bucket: ${bucket}
    prefix: ${prefix}
    # location: us-east1
    # storageClass: STANDARD_IA
    # objectAcl: private

After creating the Restore CR, execute the following command to check the restore status:
kubectl get rt -n test2 -owide

This example restores the backup data stored in the spec.gcs.prefix folder of the spec.gcs.bucket bucket on GCS to the TiDB cluster spec.to.host. For more information on the configuration items of BR and GCS, refer to backup-gcs.yaml.

More descriptions of fields in the Restore CR are as follows:
.spec.metadata.namespace: The namespace where the Restore CR is located.
.spec.to.host: The address of the TiDB cluster to be restored.
.spec.to.port: The port of the TiDB cluster to be restored.
.spec.to.user: The accessing user of the TiDB cluster to be restored.
.spec.to.tidbSecretName: The secret containing the password of the .spec.to.user in the TiDB cluster.
.spec.to.tlsClientSecretName: The secret of the certificate used during the restore. If TLS is enabled for the TiDB cluster, but you do not want to restore data using the ${cluster_name}-cluster-client-secret as described in Enable TLS between TiDB Components, you can use the .spec.to.tlsClientSecretName parameter to specify a secret for the restore. To generate the secret, run the following command:
kubectl create secret generic ${secret_name} --namespace=${namespace} --from-file=tls.crt=${cert_path} --from-file=tls.key=${key_path} --from-file=ca.crt=${ca_path}
.spec.tableFilter: BR only restores tables that match the table filter rules. This field can be ignored by default. If the field is not configured, BR restores all schemas except the system schemas.
Note: To use the table filter to exclude db.table, you need to add the *.* rule to include all tables first. For example:
tableFilter:
- "*.*"
- "!db.table"

In the examples above, some parameters in .spec.br can be ignored, such as logLevel, statusAddr, concurrency, rateLimit, checksum, timeAgo, and sendCredToTikv.
.spec.br.cluster: The name of the cluster to be backed up.
.spec.br.clusterNamespace: The namespace of the cluster to be backed up.
.spec.br.logLevel: The log level (info by default).
.spec.br.statusAddr: The listening address through which BR provides statistics. If not specified, BR does not listen on any status address by default.
.spec.br.concurrency: The number of threads used by each TiKV process during backup. Defaults to 4 for backup and 128 for restore.
.spec.br.rateLimit: The speed limit, in MB/s. If set to 4, the speed limit is 4 MB/s. The speed limit is not set by default.
.spec.br.checksum: Whether to verify the files after the backup is completed. Defaults to true.
.spec.br.timeAgo: Backs up the data before timeAgo. If the parameter value is not specified (empty by default), it means backing up the current data. It supports data formats such as "1.5h" and "2h45m". See ParseDuration for more information.
.spec.br.sendCredToTikv: Whether the BR process passes its GCP privileges to the TiKV process. Defaults to true.

Troubleshooting
If you encounter any problem during the restore process, refer to Common Deployment Failures.
https://docs.pingcap.com/tidb-in-kubernetes/stable/restore-from-gcs-using-br/
2020-10-20T03:36:37
CC-MAIN-2020-45
1603107869785.9
[]
docs.pingcap.com