Dataset columns: content: string (0 to 557k chars); url: string (16 to 1.78k chars); timestamp: timestamp[ms]; dump: string (9 to 15 chars); segment: string (13 to 17 chars); image_urls: string (2 to 55.5k chars); netloc: string (7 to 77 chars).
5 Tips About Google Docs You Can Use Today. Replying to @davidleskis1: Glad to hear it's sorted now! Reach out if you have any other Google account questions in the future. Docs: word processing for teams. Create and edit text documents right in the browser—no dedicated software required. Multiple people can work at the same time, and every change is saved automatically. The new Sites isn't able to render links published from services like Smartsheet anymore. ...Am I missing how it's done in the new Sites? More than letters and words: Google Docs brings your documents to life with smart editing and styling tools to help you easily format text and paragraphs. Choose from many fonts; add links, images, and drawings. What types of files can I import and convert to Docs? The following file types can be converted to Docs: . I use this app a lot and it works very well. Great for on the go. Put complete thoughts "on paper" wherever you happen to be. Scheduled Release will follow two weeks later. Please read this email carefully, as you may need to take action to make sure the new Sites works in your organization. Jeremy Seifert: brilliant! I use comments to do this a lot, so this just seems to formalize the same idea. Great option. Replying to @brinkeguthrie: Hmm. Could you let us know if it's happening with a specific file or several files? Keep us updated. Explore in Slides generates design suggestions based on the content of your slide. Apply suggestions with a single click—no cropping, resizing or reformatting required. Like having a designer by your side. It's a neat way to switch back and forth; I can always have my work around. I use it as a convenient way to carry on working from my laptop over to mobile and back. Full Review Aditya Agarwal July 9, 2017. Never hit "save" again: your changes are automatically saved as you type. You can also use revision history to see old versions of the same document, sorted by date and who made the change. Works with Word. Compared to the old Sites, I also really miss the ability to embed Hangouts (or whatever videoconferencing tool - again, collaboration in focus!), and the separation between Google+ and Google Photos features is quite obvious and now causes problems. In 2011, the company had announced plans to build three data centers at a cost of more than $200 million in Asia (Singapore, Hong Kong and Taiwan) and said they would be operational within two years.
http://google-docs27924.shotblogs.com/5-tips-about-google-docs-you-can-use-today-2935658
2017-09-19T16:52:03
CC-MAIN-2017-39
1505818685912.14
[]
google-docs27924.shotblogs.com
Connecting to Informatica Cloud Setting up a MemSQL connection in Informatica Cloud involves four simple steps: - Install and Configure An Informatica Cloud Secure Agent - Add Agent IP To a MemSQL Cloud Security Group - Create a Connection to MemSQL In Informatica Cloud - Configure MemSQL To Use Prepared Statements Step 1: Installing And Configuring An Informatica Cloud Secure Agent The first step is to install and configure the Informatica Secure Agent. Note that this agent may be installed on a machine of your choosing, and does not need to be co-located with your MemSQL cluster. A single Secure Agent may be used to connect to multiple MemSQL Cloud clusters. The first step in this process is to download the most recent Secure Agent. To do so, select the “Configure” button from the top toolbar and choose “Runtime Environments”. Click on the “Download Secure Agent” button to download the most recent version of the agent. Once you have downloaded the agent, detailed steps on how to install it may be found here. To enable your Secure Agent to use the MemSQL Cloud CA to communicate using SSL, please see the Informatica Docs. Once installed and configured, your agent will be displayed in the list of Runtime Environments. Step 2: Add Agent IP To MemSQL Cloud Security Group For Your Cluster After installing and configuring your Secure Agent, you will need to add the IP address of the installation machine to a MemSQL Cloud Security Group to grant the agent network access to the MemSQL Cloud cluster. For more information on how to do this, please see Managing Security Groups. Step 3: Create A Connection To MemSQL In Informatica Cloud Once you have successfully configured your secure agent, the next step is to create a MemSQL connection. Instructions to do so are below, and a video tutorial of how to do this may be found here. Click on the “Configure” option in the top menu bar and select “Connections” to navigate to the connections page. Select “New” to add a new connection. Give your cluster a name, and choose “MySQL” as the Type. Enter the connection properties for your cluster. Note that for “Runtime Environment” you should choose the Environment created in the previous steps. For Code Page you should select “UTF-8”. Click “Test” to test your connection. Step 4: Configure MemSQL To Use Prepared Statements In order to use Informatica Cloud with MemSQL Cloud, you must set the global variable enable_binary_protocol=true. Please contact MemSQL support to help you enable this feature.
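The following is a minimal sketch, not part of the MemSQL documentation, of how the Step 4 setting could be checked and enabled from any MySQL-protocol client, assuming your user has the privileges to change global variables (in MemSQL Cloud this is normally done with the help of support). The host name, credentials, and CA path are placeholders.

# Sketch: check and enable the prepared-statement protocol from a MySQL-compatible client.
import pymysql  # any MySQL driver works, since MemSQL speaks the MySQL protocol

conn = pymysql.connect(
    host="your-cluster.memsql.cloud",               # placeholder host name
    port=3306,
    user="admin",                                    # placeholder credentials
    password="your-password",
    ssl={"ca": "/path/to/memsql-cloud-ca.pem"},      # the MemSQL Cloud CA referenced in Step 1
)

with conn.cursor() as cur:
    # Inspect the current value of the variable required by Informatica Cloud.
    cur.execute("SHOW GLOBAL VARIABLES LIKE 'enable_binary_protocol'")
    print(cur.fetchall())

    # Enable the binary (prepared statement) protocol if it is not already on.
    cur.execute("SET GLOBAL enable_binary_protocol = true")

conn.close()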
https://docs.memsql.com/memsql-cloud/latest/connecting-to-informatica-cloud/
2017-09-19T17:01:33
CC-MAIN-2017-39
1505818685912.14
[array(['/images/adf36f4-24815df-INFA_1.png', 'image'], dtype=object)]
docs.memsql.com
This article describes the various elements that you use to plan transportation routes in Microsoft Dynamics 365 for Finance and Operations, Enterprise edition. You can use route plans and route guides for complex transportation routes that have multiple stops. If the same route will be used on a regular basis, you can set up a scheduled route. Route plans A route plan contains route segments that provide information about the stops that are visited on the route and the carriers that are used for each segment. You must define the stops on the route as hubs. A hub can be a vendor, a warehouse, a customer, or even just a reloading place where you change carrier. For each segment, you can define “spot rates” for various charges. Here are some examples: - Charges for travelling to the given segments - Charges for picking up the goods - Charges for dropping off the goods Each route plan must be associated with a route guide. Route guides A route guide defines the criteria for matching a load to a specific route plan. For example, you can specify an origin hub and a destination hub, limits for the container volume or weight, and a shipping carrier, service, or group. Route guides are available on the Rate route workbench page, where loads can be matched to routes either manually or automatically. If the route guide is for a scheduled route, it's also available on the Load building workbench page. Scheduled routes A scheduled route is a predefined route plan that has a schedule for the shipping dates. Scheduled routes and non-scheduled routes differ in the way that loads are assigned to them. If you assign a non-scheduled route by using the Rate route workbench, only the load and the route guide are validated. If you assign a scheduled route, the dates and addresses from the orders and the hubs, and the date on the route plan, are also considered. You don't have to use the Rate route workbench page to manually assign loads to a scheduled route. Instead, you can use the Load building workbench to suggest that loads be built based on the customer addresses and delivery dates from sales orders for a given scheduled route. For scheduled routes, the route plan will have fixed origin and destination hubs. Typically, the shipping carrier and service will be the same for all segments, but they can differ. The destination hubs are created by using the postal codes of the customers that are visited on the route. Several route schedules can be defined for one route plan. The route plan must be associated with a route guide. However, for scheduled routes, the plan can be associated with only one route guide. The route schedule is used only to create the actual routes on the Route schedule page. You can use the default load template when you propose loads on the Load building workbench. Load building workbench The Load building workbench uses the customer addresses and delivery dates from sales orders, and the scheduled routes that are available, to propose a load. By default, the values from the route are entered on the workbench. However, you can select a "from" date that is earlier than the "from" date on the route. When a load is proposed, the delivery address and delivery date of all open sales orders are checked. If the postal code of the delivery address matches the postal code of a hub in the route plan, and if the delivery date is within the range that is selected in the criteria, the sales order is proposed for the load. The capacity of the load template is also considered.
Only one load is proposed at a time. If you have a sales order that isn't included, you might have to use a different load template (for example, a load template for a bigger truck or container) or plan an extra delivery.
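As an illustration of the matching rule described above, here is a hypothetical Python sketch (not Dynamics 365 code) of how a load proposal could be derived from open sales orders: a sales order is proposed when its delivery postal code matches a hub on the route plan and its delivery date falls within the selected range, subject to the load template's capacity.

# Hypothetical sketch of the Load building workbench matching rule; all data structures are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class SalesOrder:
    order_id: str
    postal_code: str
    delivery_date: date
    volume: float

def propose_load(orders, hub_postal_codes, date_from, date_to, template_capacity):
    proposed, used = [], 0.0
    for order in orders:
        if order.postal_code not in hub_postal_codes:
            continue                                  # no matching hub on the route plan
        if not (date_from <= order.delivery_date <= date_to):
            continue                                  # outside the selected date range
        if used + order.volume > template_capacity:
            continue                                  # the load template capacity is also considered
        proposed.append(order.order_id)
        used += order.volume
    return proposed                                   # only one load is proposed at a time

orders = [
    SalesOrder("SO-1", "2100", date(2020, 10, 30), 4.0),
    SalesOrder("SO-2", "2100", date(2020, 11, 15), 3.0),  # outside the date range
    SalesOrder("SO-3", "8000", date(2020, 10, 29), 2.0),  # no matching hub
]
print(propose_load(orders, {"2100"}, date(2020, 10, 25), date(2020, 10, 31), 10.0))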
https://docs.microsoft.com/en-us/dynamics365/unified-operations/supply-chain/transportation/plan-freight-transportation-routes-multiple-stops
2017-09-19T17:08:14
CC-MAIN-2017-39
1505818685912.14
[]
docs.microsoft.com
RealTimeUndefined Real Time Undefined elements are special elements that represent the absence of a value. These values are not actually part of the model, but rather are returned when the user asks for an element that does not exist, either by its path or from its parent element. undefined The value of a RealTimeUndefined is always undefined. You cannot set the value of a RealTimeUndefined element. Events This element has no events since it never represents an attached value.
https://docs.convergence.io/guide/models/data/real-time-undefined.html
2017-12-11T07:42:04
CC-MAIN-2017-51
1512948512584.10
[]
docs.convergence.io
DescribeSchemas Returns information about the schema for the specified endpoint. Request Syntax { "EndpointArn": "string", "Marker": "string", "MaxRecords": number } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - EndpointArn The Amazon Resource Name (ARN) string that uniquely identifies the endpoint. Type: String Required: Yes - Marker An optional pagination token provided by a previous request. Response Syntax { "Marker": "string", "Schemas": [ "string" ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. Errors For information about the errors that are common to all actions, see Common Errors. - InvalidResourceStateFault The resource is in a state that prevents it from being used for database migration. HTTP Status Code: 400 - ResourceNotFoundFault The resource could not be found. HTTP Status Code: 400 Example: DescribeSchemas Sample Request { "EndpointArn": "arn:aws:dms:us-east-1:152683116123:endpoint:WKBULDZKUDQZIHPOUUSEH34EMU", "MaxRecords": 0, "Marker": "" } Sample Response HTTP/1.1 200 OK x-amzn-RequestId: <RequestId> Content-Type: application/x-amz-json-1.1 Content-Length: <PayloadSizeBytes> Date: <Date> { "Schemas": [ "testDB", "tmp" ] } See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
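For reference, here is a sketch of the same call made through the AWS SDK for Python (boto3), paging through results with the Marker token; the endpoint ARN is the placeholder value from the sample request above.

# Sketch: list all schemas on a DMS endpoint, following the Marker pagination token.
import boto3

dms = boto3.client("dms", region_name="us-east-1")

endpoint_arn = "arn:aws:dms:us-east-1:152683116123:endpoint:WKBULDZKUDQZIHPOUUSEH34EMU"
schemas, marker = [], None

while True:
    kwargs = {"EndpointArn": endpoint_arn, "MaxRecords": 100}
    if marker:
        kwargs["Marker"] = marker                 # continue from the previous page
    response = dms.describe_schemas(**kwargs)
    schemas.extend(response.get("Schemas", []))
    marker = response.get("Marker")
    if not marker:
        break                                     # no more pages

print(schemas)  # e.g. ['testDB', 'tmp']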
http://docs.aws.amazon.com/dms/latest/APIReference/API_DescribeSchemas.html
2017-12-11T07:57:16
CC-MAIN-2017-51
1512948512584.10
[]
docs.aws.amazon.com
Synopsis ntpkeygen [-M] Description This program generates the keys used in NTP’s symmetric key cryptography. The program produces a file containing ten pseudo-random printable ASCII strings suitable for the MD5 message digest algorithm included in the distribution. It also produces an additional ten hex-encoded random bit strings suitable for the SHA-1 and other message digest algorithms. The message digest keys file must be distributed and stored using secure means beyond the scope of NTP itself. Besides the keys used for ordinary NTP associations, additional keys can be defined as passwords for the ntpq utility program. Command Line Options - -M, --md5key Dummy option for backward compatibility in old scripts. This program always runs in -M mode. Running the program The safest way to run the ntpkeygen program is logged in directly as root. The recommended procedure is to change to the keys directory, usually /usr/local/etc, and then run the program. Key file access and location The ntpd(8). Random Seed File All key generation schemes must have means to randomize the entropy seed used to initialize the internal pseudo-random number generator used by the library routines. It is important to understand that entropy must be evolved for each generation, for otherwise the random number sequence would be predictable. Various means dependent on external events, such as keystroke intervals, can be used to do this and some systems have built-in entropy sources. This implementation uses Python’s random.SystemRandom class, which relies on os.urandom(). The security of os.urandom() is improved in Python 3.5+. Cryptographic Data Files The ntpkeygen program generates a file of symmetric keys ntpkey_MD5key_hostname.filestamp. Since the file contains private shared keys, it should be visible only to root and distributed by secure means to other subnet hosts. The NTP daemon loads the file ntp.keys, so ntpkeygen installs a soft link from this name to the generated file. Subsequently, similar soft links must be installed by manual or automated means on the other subnet hosts. This file is needed to authenticate some remote configuration commands used by the ntpq(1) utility. Comments may appear in the file, and are preceded by the # character. Following any headers, the keys are entered one per line in the format:
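The following Python sketch is a simplified illustration, not the NTPsec source, of the kind of key material described above: ten printable ASCII keys usable with MD5 and ten hex-encoded keys usable with SHA-1, drawn from random.SystemRandom. The exact character set and the key-number/type/key line layout shown here are assumptions for illustration only.

# Simplified illustration of ntpkeygen-style symmetric key material (not the real program).
import random
import string

rng = random.SystemRandom()          # CSPRNG backed by os.urandom()
printable = string.ascii_letters + string.digits + "!$%&'()*+,./:;<=>?@[]^_`{|}~"

lines = []
for keyno in range(1, 11):           # ten MD5-style printable ASCII keys (assumed 16 chars)
    key = "".join(rng.choice(printable) for _ in range(16))
    lines.append(f"{keyno} MD5 {key}")
for keyno in range(11, 21):          # ten hex-encoded keys for SHA-1 and friends (20 random bytes)
    key = "".join(rng.choice("0123456789abcdef") for _ in range(40))
    lines.append(f"{keyno} SHA1 {key}")

print("\n".join(lines))              # one key per line, assumed "keyno TYPE key" layout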
https://docs.ntpsec.org/latest/ntpkeygen.html
2017-12-11T07:43:50
CC-MAIN-2017-51
1512948512584.10
[array(['pic/sx5.gif', 'pic/sx5.gif'], dtype=object)]
docs.ntpsec.org
Modern sport is evolving faster than ever. Even if sport is a game, the business rules are almost overtaking the sport values. In this article, we are going to discuss sports in which the game, the rules and/or organisational changes were made to attract more customers (spectators, media, and sponsors). In order to illustrate this, and to get different examples, two sports will be taken into account: - Table Tennis - Football Why table tennis and football? Those sports have seen their rules change, but for different reasons. I/- Table Tennis 1) The acceleration rule 2) Ball size & set duration II/- Football 1) The Bosman ruling 2) Rules changes Conclusion
https://www.docs-en-stock.com/business-comptabilite-gestion-management/evolution-sport-rules-125933.html
2017-12-11T07:41:33
CC-MAIN-2017-51
1512948512584.10
[]
www.docs-en-stock.com
The “Selections to make” function enables you to choose which fields ReportWriter uses and compares in selecting the records for your report. You can define up to 25 selection criteria per report. We’ll continue the example from Creating a question field, in which we defined two question fields (START and END) to be used as selection criteria for our report. We need to compare the Order initiated date field to the values the user enters in the START (Start date) and END (End date) fields to determine which records will be included in our report. We want to include all records for which the order date is greater than or equal to the user‑defined start date and less than or equal to the user‑defined end date. Connect If this is the first selection criterion you define, skip this field. If this is the second (or subsequent) selection criterion, enter the connection operator (AND or OR) to indicate how this criterion is connected to the first criterion. AND designates that the two criteria must both be true for a record to match. OR designates that a record will be considered to match if one or the other criterion is true. Comparisons occur from the top of the list to the bottom. For example, a AND b OR c is not the same as b OR c AND a To define parenthetical type selection criteria like (a AND b) OR (c AND d), see Evaluation order for parenthetical selection statements. This field is blank for our first selection criterion. Once we’ve defined the first criterion (that is, that the order date is greater than or equal to the value the user enters in the START field), we’ll select AND as the connection operator, because we want both criteria to be true (not one or the other). Field Enter the name of the field you want to use as a selection field. Note that the fields you specify as selection criteria don’t have to be fields to print. To display the list of available fields that can be used in the conditional, select Field functions > List selections. You can select fields from the list of available fields. See Choosing Fields for more information about selecting fields. If you select an arrayed field, the Field Subscript window is displayed. See Selecting an arrayed field for details on entering subscript information for an arrayed field. To use only part of a field in a selection, you can specify a range. See Using only part of a field for comparison (ranging). For our example, we selected ORD_DATE from the list of available fields. (See figure 1.) Compare Select one of the comparison operators: EQ = Equal to NE = Not equal to LE = Less than or equal to LT = Less than GE = Greater than or equal to GT = Greater than For the start date we selected GE. Value Enter either a specific value or the name of a field to which data in your records will be compared. For a detailed explanation of the rules that apply to comparisons and the type of data you can enter in this field, see Completing the Value field. We selected START from the list of available fields for this criterion. Completing the Value field The following rules apply to comparisons. If the field is numeric… The value can be one of the following: To display the list of available fields, select Field functions > List selections. See Choosing Fields for more information about selecting fields. Decimal values are displayed in selection criteria in parentheses—for example, (1) or (FLD:F1). 
If the field is alpha, user, or enumerated… The value can be one of the following: When used in a string that’s enclosed in double quotes, an asterisk is treated as a regular character, not a wildcard character. (For example, if you enter “*abc”, ReportWriter searches for that exact string: *abc.) One form of wildcard matching is supported for case‑sensitive strings. For example, “abc”* represents any character string that starts with abc, and the comparison is case sensitive. Asterisks and question marks can be used together. For example, ?ab* represents any string whose second and third characters are a and b. When used in a string that’s enclosed in quotation marks, a question mark is treated as a regular character. (For example, if you enter “?abc”, ReportWriter searches for that exact string: ?abc.) Alpha values are displayed in selection criteria in curly braces (for example, {ABC} or {FLD:F2}). Additionally, if the field is user‑defined and you enter a string value (rather than a field), this value is passed to the RPS_DATA_METHOD subroutine. RPS_DATA_METHOD is a user‑replaceable subroutine in which you can reformat the input data. ReportWriter also stores the data in that format for future selection comparisons. You can enter it in a variety of ways. For example, if today is August 6, 2013, the following date formats are all valid: 08‑06‑13 8‑6‑13 08/06/93 8/6/93 080613 Aug 6 2013 Aug 6 13 The default date is today’s date. If you omit any portion of the date, ReportWriter fills it in for you. For example, if you omit the year, ReportWriter fills in the current year; if you omit the month, ReportWriter fills in the current month; and so on. Therefore, formats like the following are also valid for the date August 6, 2013: RETURN (no entry) 6 //2013 8/6 /6/13 Aug Aug 6 The RPTDATE environment variable can be set to change the default order used when entering dates. If RPTDATE is not set or is set to 0, the order is month, day, year. If RPTDATE is set to 1, the order is day, month, year. If RPTDATE is set to 2, the order is year, month, day. Asterisks (*) and question marks (?) are not allowed in date fields. Date values are displayed in selection criteria in parentheses, for example, (1/1/13) or (FLD:F3). If the value is a time… You must enter the actual storage format. Time values are displayed in selection criteria in parentheses, for example, (123000). If you press enter without typing any data… The following default values are entered in the specified types of fields: Evaluation order for parenthetical selection statements When specifying selection criteria, you can define parenthetical type selection criteria such as (a AND b) OR (c AND d). As with other selection statements, ReportWriter performs comparisons from the top of the list to the bottom. For example, this: is not the same as this: (CMCLNT.CCHOUR).EQ.(1) OR (CMCLNT.CCHOUR).EQ.(2) OR (CMCLNT.CCHOUR).EQ.(3) AND (CMCLNT.CCOPEN).GT.(6/01/13) If you use the first example as a selection statement, a record is selected if any of the following are true: If you use the second example, a record is selected only when the time difference is one, two, or three hours and the account was opened after June 1, 2013. The same precedence rules apply to conditional statements that are assigned to print fields, calculation fields, and break lines. You can use the copy function to add a selection that’s similar to an existing one.
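To make the top-to-bottom evaluation rule concrete, here is a small hypothetical Python sketch (not ReportWriter code) that applies connectors strictly in the order the criteria are listed, with no AND-over-OR precedence, and shows why a AND b OR c is not the same as b OR c AND a.

# Hypothetical sketch of top-to-bottom connector evaluation, as described above.
def evaluate_top_to_bottom(criteria):
    """criteria: list of (connector, value); the first connector is None."""
    result = None
    for connector, value in criteria:
        if result is None:
            result = value                 # first criterion has no connector
        elif connector == "AND":
            result = result and value
        elif connector == "OR":
            result = result or value
    return result

a, b, c = False, True, True

# a AND b OR c evaluates as ((a AND b) OR c)
print(evaluate_top_to_bottom([(None, a), ("AND", b), ("OR", c)]))   # True

# b OR c AND a evaluates as ((b OR c) AND a)
print(evaluate_top_to_bottom([(None, b), ("OR", c), ("AND", a)]))   # False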
http://docs.synergyde.com/rw/rwChap9Definingselectioncriteria.htm
2017-12-11T07:43:18
CC-MAIN-2017-51
1512948512584.10
[]
docs.synergyde.com
You might have used workarounds to solve problems with Netscape rendering in releases prior to PowerBuilder 7.0.2 C3. Some of these workarounds might not work correctly in later releases because of improvements to Netscape rendering. Specifically, if you used computed fields or text fields containing only spaces, the Web DataWindow generator now creates a table entry for these fields, making the table display twice as wide. If you see this behavior, delete these placeholder fields and use a more standard layout.
https://docs.appeon.com/pb2019r2/upgrading_pb_apps/ch19s07.html
2020-10-23T21:32:44
CC-MAIN-2020-45
1603107865665.7
[]
docs.appeon.com
The Transfer Order - InboundShipments add-on supports item aliases. When you create a transfer order, select an item alias associated with the Amazon SKU for each line item. You can have a single item in NetSuite and have multiple variations or aliases for it. When you select the parent item, all the associated aliases are populated. In the Parent SKU field, select the Amazon SKU for the particular item. While processing the item receipt, the integration app checks for the Item ID. If no match is found, it then checks for the item alias in the alias record. Before you select an item alias, modify the saved search in NetSuite as follows: - Log in to your NetSuite account. - In the global search, enter “Celigo Amazon Transfer Order Export Search.” - On the “Celigo Amazon Transfer Order Export Search: Results” page, click Edit this Search. - Go to the Results > Columns tab and add a new column. - Add new entry as follows: - Click Save. Note: Repeat steps 1-6 for the “Celigo Amazon Child Transfer Order Export Search” saved search. In your integration app, configure or add the following mappings: - Go to the Shipments (TO) section. - Next to the : Read more about related solutions:
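The item-matching behavior described above (exact Item ID first, then the alias record) can be pictured with a small hypothetical Python sketch; the SKUs and data structures below are invented for illustration and are not integration app code.

# Hypothetical sketch of the receipt-matching fallback: Item ID first, then the alias record.
def resolve_item(amazon_sku, items_by_id, alias_to_item_id):
    if amazon_sku in items_by_id:                   # direct Item ID match
        return items_by_id[amazon_sku]
    item_id = alias_to_item_id.get(amazon_sku)      # fall back to the item alias record
    if item_id is not None:
        return items_by_id[item_id]
    raise LookupError(f"No item or alias found for SKU {amazon_sku!r}")

items_by_id = {"WIDGET-PARENT": {"name": "Widget"}}
alias_to_item_id = {"WIDGET-AMZ-US": "WIDGET-PARENT"}   # invented Amazon SKU alias
print(resolve_item("WIDGET-AMZ-US", items_by_id, alias_to_item_id))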
https://docs.celigo.com/hc/en-us/articles/360049266752-Sync-item-aliases-to-NetSuite-transfer-order
2020-10-23T21:27:36
CC-MAIN-2020-45
1603107865665.7
[]
docs.celigo.com
identityProvider resource type Namespace: microsoft.graph Important APIs under the /beta version in Microsoft Graph are subject to change. Use of these APIs in production applications is not supported. To determine whether an API is available in v1.0, use the Version selector. Represents identity providers with External Identities for both an Azure Active Directory tenant and an Azure AD B2C tenant. For Azure AD B2B scenarios in an Azure AD tenant, the identity provider type can be Google or Facebook. Configuring an identity provider in your Azure AD tenant enables new Azure AD B2B guest scenarios. For example, an organization has resources in Microsoft 365 that need to be shared with a Gmail user. The Gmail user will use their Google account credentials to authenticate and access the documents. In an Azure AD B2C tenant, the identity provider type can be Microsoft, Google, Facebook, Amazon, LinkedIn, Twitter, or any openIdConnectProvider. The following identity providers are in preview: Weibo, QQ, WeChat, and GitHub. Configuring an identity provider in your Azure AD B2C tenant enables users to sign up and sign in using a social account or a custom OpenID Connect supported provider in an application. For example, an application can use Azure AD B2C to allow users to sign up for the service using a Facebook account or their own custom identity provider that complies with the OIDC protocol. If it is a custom OpenID Connect identity provider with OpenIDConnect as the type, it is represented using the openIdConnectProvider resource type, which inherits from the identityProvider resource type. Methods Properties Where to get the client ID and secret Each identity provider has a process for creating an app registration. For example, users create an app registration with Facebook at developers.facebook.com. The resulting client ID and client secret can be passed to create identityProvider. Then, each user object in the directory can be federated to any of the tenant's identity providers for authentication. This enables the user to sign in by entering credentials on the identity provider's sign-in page. The token from the identity provider is validated by Azure AD before the tenant issues a token to the application. JSON representation The following is a JSON representation of the resource. { "id": "String", "type": "String", "name": "String", "clientId": "String", "clientSecret": "String" }
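As a sketch of how the resource above could be created, the following Python example POSTs the documented JSON shape to the beta identityProviders collection using the requests library. The bearer token, client ID, and client secret are placeholders, and token acquisition is out of scope here.

# Sketch: create an identityProvider via Microsoft Graph (beta); values are placeholders.
import requests

ACCESS_TOKEN = "<bearer-token-with-required-permissions>"

payload = {
    "type": "Facebook",
    "name": "Login with Facebook",
    "clientId": "<app-id-from-developers.facebook.com>",
    "clientSecret": "<app-secret>",
}

response = requests.post(
    "https://graph.microsoft.com/beta/identityProviders",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
)
response.raise_for_status()
print(response.json())  # echoes the created resource, including its id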
https://docs.microsoft.com/en-us/graph/api/resources/identityprovider?view=graph-rest-beta
2020-10-23T22:38:55
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
Hooks This document provides information on the mhPayPal snippet hooks, their properties, and some example usages. If you are looking for more information about the mhPayPal package in which the snippet is included, please visit ?mhPayPal instead. If you are looking for more information on the mhPayPal snippet itself, please visit ?mhPayPal.Snippet Usage. Introduction By means of the &preHooks, &postHooks and &postPaymentHooks, the mhPayPal snippet allows you to customize the flow and add functionality to the mhPayPal core, without having to wreck upgrade paths. A hook is either included in mhPayPal or is a snippet installed in your MODX set up. Built-in Hooks The email and email2 hooks do exactly the same thing, allowing you to send out two entirely different emails to different email addresses. They both support the same properties and their behavior is the same. You assign the properties mentioned below to the mhPayPal snippet itself. When using the email2 hook, make sure to append a "2" to the property, e.g. emailTo becomes emailTo2. Redirect The redirect hook can be used to redirect the user to a different page. You would most likely want to do this after the payment was completed, by sending the user to a different "Thank you" style page. Developing Custom Hooks Custom hooks may not yet be fully implemented, but will be in a future release. Simply call a snippet in one of the hooks properties, and it will be executed. All the data available at that point is available in the $scriptProperties and as $variables. Configuration properties. $mhpp class. Examples Nothing yet. Sorry!
https://docs.modx.com/current/en/extras/mhpaypal/mhpaypal.snippet-usage/hooks
2020-10-23T21:30:51
CC-MAIN-2020-45
1603107865665.7
[]
docs.modx.com
InstallShield 2019 Project • This information applies to the following project types: Once you have determined that a major upgrade is the best upgrade solution for you, you can begin to create a major upgrade in the Upgrades view. Note that a major upgrade signifies significant change in functionality and warrants a change in the ProductCode property; you can update this property in the General Information view. Note • When you change the product code, Windows Installer treats your latest and previous product versions as unrelated, even though the ProductName values are likely the same. If you want both versions of your product to be installable on the same system, you can simply change the product code and the main installation directory (often INSTALLDIR). Essentially, a major upgrade is two operations rolled into one installation package. The major upgrade either installs the new version of the product and then silently uninstalls the older version or silently uninstalls the older version and then installs the newer version. The sequence of these two separate operations depends on how you configure the installation package of the newer version of the product. For more information on creating a major upgrade using InstallShield, refer to this section of the InstallShield Help Library. See Also Preventing the Current Installation from Overwriting a Future Major Version of the Same Product Upgrades View
https://docs.revenera.com/installshield25helplib/helplibrary/CreatingMajorUpgrades.htm
2020-10-23T21:53:43
CC-MAIN-2020-45
1603107865665.7
[]
docs.revenera.com
Under the Export tab, you can export customized maps of your Mix. This option lets you control exactly how your Mix is exported. Mixer supports multi-channel packing and several image options. By default, the Export tab will be populated with default settings for a standard export. Image Options Maps can be exported to several file formats, and each format supports a different set of channels and bit depth. Bit-depth The bit depth controls the range of values that can be used. For example, 8-bit depth supports 256 values per channel, which is often enough for most applications. However, Mixer supports up to 32-bit depth, which can store a far wider range of values. Using a higher bit depth will result in maps with a greater size on disk. Multi-channel Packing Depending on how much information you want to export on a single map, you can choose between 1 (Greyscale), 3 (RGB), and 4 (RGBA) channels. Each channel can contain a different set of information. File Format Comparison Color Space Mixer exports support both linear and sRGB (gamma) space values. Linear space values will be mathematically accurate, and are used by default for the normal map and other greyscale values like gloss and displacement. On the other hand, sRGB values will be ‘corrected’ for visual accuracy, and are used by default for the albedo/diffuse and specular maps. What you choose will depend on what the maps will later be used in, and how that engine/renderer/DCC deals with linear and sRGB color spaces. In general, unless you require linear maps, sRGB maps are the more common option for most tools. Map Options Each channel can be filled with a certain set of information about the Mix: - Value: Enter a constant value to fill the channel with. - Diffuse/Albedo (R/G/B): The red, green, or blue color values of the surface. - Specular (R/G/B): The red, green, or blue values of the specular reflections map. - Gloss: The smoothness values of the surface. - Roughness: The roughness values of the surface. - Occlusion: The occlusion map values of the surface. - Metalness: The metallic values of the surface. The Normal and Displacement maps have additional options when selected: - Normal (X/Y/Z): The X, Y, or Z values of the normal vectors of the surface. - An Invert toggle inverts the normals. - Displacement: The vertical displacement values of the surface. - A Normalize toggle normalizes the displacement vectors. - A Center toggle centers all displacement values. - A 0-1 range input field specifies the range of your Mix’s displacement in centimeters. Export Options - Surface Name: Specify the name of the Mix. - Export Location: Specify the location where you want the Mix and its corresponding files to be exported. - Create subfolder: Select to create a subfolder with the Mix’s name to hold the exported maps. - Add resolution to file names: Select to append the resolution to the name of each file. - Export Resolution: Specify the texture resolution at which you want to export your Mix. - Export Format: Specify the file format for your maps (e.g. PNG, TGA, EXR, etc). Maps In this section, you get granular control over each map. You can adjust the map name, format, and bit depth. Following are the adjustments you can make in this section: - Add or remove a map. - From the hamburger menu on the top right you can: - Revert to default values. - Save your export settings as a preset for later use. Map Properties After adding a new map, there are properties that can be customized based on your needs. - 1: Name of the map file.
- 2: Toggle to show or hide settings. - 3: Select the file format. Each format has a different set of channels and bit depths available. For a comparison of the available file formats, see the comparison above. - 4: Number of channels to include in the map. Select between Greyscale (single channel), RGB (three channels), or RGBA (four channels). - 5: Bit depth of the map file. Select between 8, 16, and 32-bit. - 6: Select the data for a certain channel. For more information, see the Map Options above. - If Normal or Displacement is selected, additional toggles will appear. - 7: Type of data to fill the channel with (Linear or sRGB). For more information, see Color Space above. Once you’re satisfied with your export options, click the Export [x] maps button to export your Mix to the specified location.
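As a conceptual illustration of multi-channel packing outside of Mixer, the following Python sketch combines separate greyscale maps into a single RGBA texture with Pillow. The file names are placeholders for maps you might have exported from the Export tab; this is not a Mixer feature.

# Sketch: pack four greyscale maps (same resolution) into one 8-bit RGBA texture.
from PIL import Image

roughness = Image.open("mix_roughness.png").convert("L")        # -> R channel
metalness = Image.open("mix_metalness.png").convert("L")        # -> G channel
occlusion = Image.open("mix_occlusion.png").convert("L")        # -> B channel
displacement = Image.open("mix_displacement.png").convert("L")  # -> A channel

packed = Image.merge("RGBA", (roughness, metalness, occlusion, displacement))
packed.save("mix_packed_rmoa.png")   # one 4-channel map on disk instead of four files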
http://docs.quixel.com/mixer/1/en/topic/export
2020-10-23T21:04:35
CC-MAIN-2020-45
1603107865665.7
[]
docs.quixel.com
Unity Game iOS Integration Guide This doc introduces the integration steps for Unity games exported to the iOS platform. Install Unity Unity Version: 2019.3.6f1 Install Unity Modules In Unity Hub: Tools - Unity (game engine) - Visual Studio (IDE) - C# (logic) - required: Xcode Create Your Games at the Portal CelerX Game Developer Portal Import CelerX iOS Plugin - Download the zip file and place everything into the plugin/ios folder in your Unity project - Drop the CelerUnityBridge.cs file anywhere in your workspace; that will be your bridging file that communicates with the Celer Game SDK Export Your Project - In most cases, Unity will embed every dependency with your project; if not, go ahead and embed everything in the framework/plugin/ios folder, with one exception - Change Celersdk.framework to Do not embed (or Link for older versions of Xcode) Extra Setup - You can either set up your permissions with your Unity plugin, or modify the iOS project exported by Unity Permissions Bitcode Unfortunately, you will have to disable bitcode if you are using Unity version 2019.3.6f1 or above, since newer versions of Unity have bitcode enabled by default. Disable bitcode in the exported Xcode project Integrate CelerX APIs (C#) InitializeDelegate() Provide a welcome scene in your game. Call the launchCelerXUI API in the welcome scene script when the start game button is triggered. SetDelegate(ICelerXMatchListener callback) Sets the listener for your game life cycle. public class MatchInfo { public string shouldLaunchTutorial; public string matchId; public double sharedRandomSeed; public GamePlayer currentPlayer; public List<GamePlayer> players; } public class GamePlayer { public string playerId; public string fullName; public string playIndex; } LevelLoaded() Once the game has been loaded and rendering has finished, developers must call this ready function to confirm that the game can start; CelerX will then show a ready button in the match view for the player. public interface ICelerXMatchListener { /* * This function will be called back into C# after the CelerX platform has finished matching players. Developers can get any match information from the `MatchInfo` parameter, such as player information and sharedRandomSeed. Developers must render the game UI in this callback. */ void onMatchJoined(MatchInfo mathInfo); /* * Called after the player clicks the ready button and the CelerX platform has confirmed everything is ready; the game can start now. */ void onMatchReadyToStart(); } onMatchJoined(MatchInfo mathInfo) This is called when a match is generated for the game. Use the random seed to generate your level and preload your game; when your game has been loaded, call CelerUnityBridge.LevelLoaded() to notify the SDK. onMatchReadyToStart This method will be called when the "Ready" button has been clicked and CelerX has confirmed that everything is ready. It indicates that the game can start now. Users will see your game scene here. Put your game start logic here. submit() When the game is over, call this API to submit the final score to the CelerX platform. Sequence Diagram Notes - For a pure Objective-C project targeting iOS versions before 12.2, please enable the Always Embed Swift Standard Libraries setting - The minimum SDK requirement is iOS 11
https://docs.celerx.app/docs/unity-ios-game
2020-10-23T21:50:46
CC-MAIN-2020-45
1603107865665.7
[array(['https://user-images.githubusercontent.com/9431599/93301941-c2944300-f82b-11ea-8639-7e63c70721bc.png', 'modules_install'], dtype=object) array(['https://user-images.githubusercontent.com/9431599/93202419-40514380-f785-11ea-96f6-351d55184a77.png', 'disableBitCode'], dtype=object) array(['https://user-images.githubusercontent.com/9431599/93562604-edf86880-f9b8-11ea-885d-68bb3b6e1699.png', 'Screen Shot 2020-09-18 at 2 11 08 PM'], dtype=object) ]
docs.celerx.app
How to cancel the Pipeline Wizard.With Copado v13 we have introduced the Pipeline Wizard that will help you to complete the Setup of your Pipeline. The wizard will ask you to enter some details related to the Git Repository you are going to use, production org and pipeline template. If at some point of the process there is an issue, the wizard might get stuck and anytime you click on the Pipeline Manager tab you will be redirected to the same point of the process which is never completed. You also may need to cancel the Wizard because you already completed the Pipeline or you just do not want to complete it and instead, use an existing one. How do you cancel the wizard? 1. Click on the Pipeline tab. 2. Click on New. The pop up below appears. 3. Click on the "Start New" button. You will be redirected to the initial Wizard page. 4. Close that tab. The Wizard in progress is now cancelled.
https://docs.copado.com/article/hwzdr6uxfc-how-to-cancel-the-pipeline-wizard
2020-10-23T22:10:02
CC-MAIN-2020-45
1603107865665.7
[]
docs.copado.com
A Reflection Probe is rather like a camera that captures a spherical view of its surroundings in all directions. The captured image is then stored as a Cubemap that can be used by objects with reflective materials. Several reflection probes can be used in a given scene, and objects can be set to use the cubemap produced by the nearest probe. The result is that the reflections on the object can change convincingly according to its environment. There are two buttons at the top of the Reflection Probe Inspector that let you edit the probe directly in the Scene view. To make an object use a particular probe, set its Reflection Probes property (Simple, Blend Probes or Blend Probes and Skybox) and drag the chosen probe onto its Anchor Override property. See the Reflection Probes section in the manual for further details about principles and usage.
https://docs.unity3d.com/2020.1/Documentation/Manual/class-ReflectionProbe.html
2020-10-23T22:40:42
CC-MAIN-2020-45
1603107865665.7
[]
docs.unity3d.com
Let's walk through a complete example, using the Liquidity Bootstrapping use case. First, we give the token a symbol and name, set the basic pool parameters, and determine the permissions. All we really need to be able to do is change the weights, so we can set all the other permissions false. As noted earlier, setting the permissions as strict as possible minimizes the trust investors need to place in the pool creator. Liquidity providers for this pool can rest assured that the fee can never be changed, no tokens can be added or removed, and they cannot be prevented from adding liquidity (e.g., by being removed from the whitelist, or having the cap lowered).

// XYZ and DAI are addresses
// XYZ is the "project token" we're launching
const poolParams = {
  tokenSymbol: 'LBT',
  tokenName: 'Liquidity Bootstrapping Token',
  tokens: [XYZ, DAI],
  startBalances: [toWei('4000'), toWei('1000')],
  startWeights: [toWei('32'), toWei('8')],
  swapFee: toWei('0.005'), // 0.5%
}
const permissions = {
  canPauseSwapping: false,
  canChangeSwapFee: false,
  canChangeWeights: true,
  canAddRemoveTokens: false,
  canWhitelistLPs: false,
  canChangeCap: false,
  canRemoveAllTokens: false,
};

Next, we use these structs to deploy a new Configurable Rights Pool (and the underlying Core Pool).

// If deploying locally; otherwise use the published addresses
// This is the factory for the underlying Core BPool
bFactory = await BFactory.deployed();
// This is the Smart Pool factory
crpFactory = await CRPFactory.deployed();
// Static call to get the return value
const crpContract = await crpFactory.newCrp.call(
  bFactory.address,
  poolParams,
  permissions,
);
// Transaction that actually deploys the CRP
await crpFactory.newCrp(
  bFactory.address,
  poolParams,
  permissions,
);
// Wait for it to get mined
const crp = await ConfigurableRightsPool.at(crpContract);
// Creating the pool transfers collateral tokens
// Must allow the contract to spend them
await dai.approve(crp.address, MAX);
await xyz.approve(crp.address, MAX);
// Create the underlying pool
// Mint 1,000 LBT pool tokens; pull collateral into BPool
// Override with fast block wait times for testing purposes
// (Defaults are 2 hours min delay / 2 weeks min duration)
await crp.createPool(toWei('1000'), 10, 10);

At this point we have an initialized pool. The admin account has 1,000 LPTs, and the underlying BPool is holding the tokens. The pool is enabled for public trading and adding liquidity. To facilitate the token launch - with low slippage, low initial capital, and stable prices over time, per the paper referenced above - we want to gradually "flip" the weights over time. We start with the project token at a high weight (32/(32+8), or 80%), and collateral DAI at 20%. At the end of the launch, we want XYZ at 20%, and DAI at 80%. We accomplish this by calling updateWeightsGradually; we're allowed to do this because the canChangeWeights permission was set to true.

// Start changing the weights in 100 blocks
const block = await web3.eth.getBlock('latest');
const startBlock = block.number + 100;
const blockRange = 10000;
// "Flip" the weights linearly, over 10,000 blocks
const endBlock = startBlock + blockRange;
// Set the endWeights to the reverse of the startWeights
const endWeights = [toWei('8'), toWei('32')];
// Kick off the weight curve
await crp.updateWeightsGradually(endWeights, startBlock, endBlock);

Of course, smart contracts can't change state by themselves. What updateWeightsGradually actually does is put the contract into a state where it will respond to the pokeWeights call by setting all the weights according to the point on the "weight curve" corresponding to the current block. (It will revert before the start block, and set the weights to their final values if called after the end block.)

// Sample code to print the current weights,
// Then call pokeWeights to update them
for (i = 0; i < blockRange; i++) {
  weightXYZ = await controller.getDenormalizedWeight(XYZ);
  weightDAI = await controller.getDenormalizedWeight(DAI);
  block = await web3.eth.getBlock("latest");
  console.log('Block: ' + block.number + '. Weights -> XYZ: ' +
    (fromWei(weightXYZ)*2.5).toFixed(4) + '%\tDAI: ' +
    (fromWei(weightDAI)*2.5).toFixed(4) + '%');
  // Actually cause the weights to change
  await crp.pokeWeights();
}

There are many subtleties; for instance, you could implement a non-linear bootstrapping weight curve by calculating the weights off-chain and setting them directly. For a comprehensive set of tests that demonstrate all features of the Configurable Rights Pool, see our GitHub.
https://docs.balancer.finance/guides/liquidity-bootstrapping
2020-10-23T21:26:26
CC-MAIN-2020-45
1603107865665.7
[]
docs.balancer.finance
The Celigo Magento connector does not import tax from Magento to NetSuite when creating Sales Orders. The tax configuration in both NetSuite and Magento needs to match in order to ensure that tax will be calculated correctly on both sides. For NetSuite to compute taxes, the items need to be set as taxable, as well as the customer. NetSuite will then perform a lookup of the appropriate tax percentage to be applied. If either the customer or the item is not set as taxable, then NetSuite will not apply tax, which may result in a tax variance. The connector will record the tax variance under the Magento tab of the Sales Order.
https://docs.celigo.com/hc/en-us/articles/228192008-Magento-Tax-Calculation
2020-10-23T21:17:07
CC-MAIN-2020-45
1603107865665.7
[array(['/hc/en-us/article_attachments/215150848/mgtaxvar.jpg', None], dtype=object) ]
docs.celigo.com
Use the VM Configuration dashboard to view the overall configuration of virtual machines in your environment, especially for the areas that need attention. Design Considerations See the Configuration Dashboards page for common design considerations among all the dashboards for configuration management. As there are many configurations to be verified, if you have a larger screen, add additional checks as you deem fit, or add legends to the pie-charts. How to Use the Dashboard - Click the row to select a data center from the data center table. - In a large environment, loading thousands of VMs increases the web page loading time. As a result, the VM is grouped by data center. In addition, it might make sense to review the VM configuration per data center. - For a small environment, vSphere World is provided, so you can view all the VMs in the environment. The VM Configuration dashboard is organized into three sections for ease of use. All the three sections display the VM configuration for the selected data center. - The first section covers limits, shares, and reservations. - Their values can easily become inconsistent among VMs, especially in an environment with multiple vCenter Servers. - Shares should be mapped to a service level, to provide a larger proportion of shared resources to those VMs who pay more. This means that you should only have as many shares as your service levels. If your IaaS provides gold, silver, and bronze, then you should have only three types of shares. - Value of the shares and reservation is relative. If you move a VM from one cluster to another (in the same or different vCenter Server), you might have to adjust the shares. - Reservation impacts your capacity. Memory reservation works differently from CPU reservation, and it is more permanent. - The second section covers VMware Tools. - VMware Tools is a key component of any VM, and should be kept running and up to date. - The third section covers other key VM configurations. - Keep the configurations consistent by minimizing the variants. This helps to reduce complexity. - VM Network Cards widget. If you suspect that your environment might have a VM with no NIC, consider adding it as a dedicated bucket. - The last section of the dashboard is collapsed by default. - You can view all the VMs with their key configurations. - You can sort the columns and export the results into a spreadsheet for further analysis. Points to Note - The number of buckets in the pie-chart or bar-chart are balanced between the available screen estate, ease of use, and functionality. Modify the buckets to either reflect your current situation or your desired ideal state. - No data to display does not imply that there is something wrong with data collection by vRealize Operations Cloud. It might signify that none of the objects meet the filtering criteria of the widget, and as a result there is nothing to display. - To view the content of a slice in a pie-chart or a bucket in a bar-chart, click on it. The list cannot be exported. Clicking an object name, takes you to the object summary page. The page provides key configuration information, with other summary information. - The pie-chart and bar-chart cannot drive other widgets. For example, you cannot select one of the pie-slices or buckets, and expect it to act as a filter to a list or a table. - You can apply a specific color in a pie-chart or distribution chart for a specific numeric value, but not string value. For example, you cannot apply the color red to the value Not Installed.
https://docs.vmware.com/en/VMware-vRealize-Operations-Cloud/services/config-guide/GUID-45EACC5A-906E-431B-8EAE-E67B17EC96A3.html
2020-10-23T22:32:41
CC-MAIN-2020-45
1603107865665.7
[]
docs.vmware.com
SIOS DataKeeper allows the administrator to specify which IP addresses should be used as mirror end-points. This allows the replicated data to be transmitted across a specific network which permits the user to segment mirrored traffic away from the client network if desired. Dedicated LAN for Replication While it is not required, a dedicated (private) network between the two servers will provide performance benefits and not adversely affect the client network.
http://docs.us.sios.com/sps/8.7.1/en/topic/specifying-network-cards-for-mirroring
2020-10-23T22:24:27
CC-MAIN-2020-45
1603107865665.7
[]
docs.us.sios.com
datatest — Testing tools for data preparation Datatest extends the standard library’s unittest package to provide testing tools for asserting data correctness. To understand the basics of datatest, please see Introduction to Datatest. To use datatest effectively, users should be familiar with Python’s standard unittest library and with the data they want to test. (If you’re already familiar with datatest, you might want to skip to the list of assert methods.) Quick Install pip install datatest For installation details, see the README.rst file included with the source distribution.
https://datatest.readthedocs.io/en/0.6.0.dev1/
2020-10-23T22:20:50
CC-MAIN-2020-45
1603107865665.7
[]
datatest.readthedocs.io
RecommendationFeedbackSummary Information about recommendation feedback summaries. Contents - Reactions List for storing reactions. Reactions are utf-8 text code for emojis. Type: Array of strings Array Members: Minimum number of 0 items. Maximum number of 1 item. Valid Values: ThumbsUp | ThumbsDown Required: No - RecommendationId The recommendation ID that can be used to track the provided recommendations. Later on it can be used to collect the feedback. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Required: No - UserId The ID of the user that gave the feedback. The UserIdis an IAM principal that can be specified as an AWS account ID or an Amazon Resource Name (ARN). For more information, see Specifying a Principal in the AWS Identity and Access Management User Guide. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/codeguru/latest/reviewer-api/API_RecommendationFeedbackSummary.html
2020-10-23T22:05:05
CC-MAIN-2020-45
1603107865665.7
[]
docs.aws.amazon.com
ReminderBase.Snooze(TimeSpan) Method Notifies the scheduler to defer the triggering of a reminder by the specified interval. Namespace: DevExpress.XtraScheduler Assembly: DevExpress.XtraScheduler.v20.1.Core.dll Declaration public bool Snooze( TimeSpan remindAfter ) Public Function Snooze( remindAfter As TimeSpan ) As Boolean Parameters Returns Remarks The Snooze method defers the reminder's alert time by the time interval specified. The time of the alert can be obtained via the ReminderBase.AlertTime property. If an appointment occurs in the past, then the following behavior is implemented: If an outdated appointment is of AppointmentType.Normal type, then AlertTime=Now+Snooze. If it is a member of a recurring appointment series, then the current valid appointment in a chain will hold the reminder. See Also Feedback
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraScheduler.ReminderBase.Snooze(System.TimeSpan)
2020-10-23T22:25:18
CC-MAIN-2020-45
1603107865665.7
[]
docs.devexpress.com
DescribeRecommendationFeedback Describes the customer feedback for a CodeGuru Reviewer recommendation. Request Syntax GET /feedback/CodeReviewArn?RecommendationId=RecommendationId&UserId=UserId - RecommendationId The recommendation ID that can be used to track the provided recommendations and then to collect the feedback. Length Constraints: Minimum length of 1. Maximum length of 64. Required: Yes - UserId Optional parameter to describe the feedback for a given user. If this is not supplied, it defaults to the user making the request. The UserId is an IAM principal that can be specified as an AWS account ID or an Amazon Resource Name (ARN). For more information, see Specifying a Principal in the AWS Identity and Access Management User Guide. Length Constraints: Minimum length of 1. Maximum length of 256. Request Body The request does not have a request body. Response Syntax HTTP/1.1 200 Content-type: application/json { "RecommendationFeedback": { "CodeReviewArn": "string", "CreatedTimeStamp": number, "LastUpdatedTimeStamp": number, "Reactions": [ "string" ], "RecommendationId": "string", "UserId": "string" } } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - RecommendationFeedback The recommendation feedback given by the user. Type: RecommendationFeedback object
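Here is a sketch of the same operation through the AWS SDK for Python (boto3); the code review ARN and recommendation ID are placeholders, and UserId is omitted so it defaults to the caller.

# Sketch: fetch feedback for one recommendation via the CodeGuru Reviewer API.
import boto3

client = boto3.client("codeguru-reviewer", region_name="us-east-1")

response = client.describe_recommendation_feedback(
    CodeReviewArn="arn:aws:codeguru-reviewer:us-east-1:123456789012:code-review/example",  # placeholder
    RecommendationId="example-recommendation-id",                                          # placeholder
    # UserId="arn:aws:iam::123456789012:user/reviewer",  # optional; defaults to the caller
)

feedback = response["RecommendationFeedback"]
print(feedback.get("Reactions"), feedback.get("UserId"))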
https://docs.aws.amazon.com/codeguru/latest/reviewer-api/API_DescribeRecommendationFeedback.html
2020-10-23T22:20:28
CC-MAIN-2020-45
1603107865665.7
[]
docs.aws.amazon.com
Password field is empty when a NetworkCredential object is deserialized at the WCF service This article helps you work around the problem that the password field is empty when you deserialize a NetworkCredential object that's passed as a parameter to a Windows Communication Foundation (WCF) service operation. Original product version: Microsoft .NET Framework 4.5 Original KB number: 3021166 Symptoms When you deserialize a NetworkCredential object that was passed as a parameter to a WCF service operation, you discover that the password field is empty. For example, you have a WCF Contract defined as follows: [ServiceContract] public interface IService { [OperationContract] string GetData(NetworkCredential myCredential); } When the GetData operation is called from a client that passes a NetworkCredential object, the myCredential.Password value is empty. Cause It's a known issue that was introduced in the .NET Framework 4.0, when the new SecurePassword property was added to NetworkCredential. This property overwrites the original password string when the NetworkCredential object is deserialized on the service side. Workaround To work around this issue, pass the user name and password as strings, and then create a NetworkCredential object at the service.
https://docs.microsoft.com/en-us/troubleshoot/dotnet/framework/password-empty-networkcredential-deserialized-wcf
2020-10-23T22:36:01
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
Diffusion monitoring console A web console for monitoring the Diffusion™ server. About The Diffusion monitoring console is an optional publisher, provided as console.dar. It is deployed by default and can be undeployed in the same manner as any DAR file. It exists to give you an easy way to monitor your Diffusion solution using a web browser. Dependencies The console requires the latest version of a modern browser such as Chrome, Firefox, Edge, or Safari. Internet Explorer is no longer supported. Logging in The console is available in a fresh local installation at. The console is secured by a principal (username) and password. The principal you use to log in must have permissions to view and act on information on the Diffusion server, for example by having the ADMINISTRATOR role. - principal: 'admin' - password: 'password' This user has the correct permissions to use all of the console's capabilities. For more information, see Pre-defined users. Video tour An introductory video tour of the Diffusion console is available on the Push Technology YouTube channel. Features: Overview tab The Overview tab of the console contains panels providing key information about the server. Changing the panel layout You can edit the panels on the Overview screen. - Grab a panel header and drag it to move a panel. - Click the X icon to remove a panel. - Click on the wrench icon to configure a panel. Sourcing monitoring metrics While configuring a panel, you can add any topic in the topic tree to the metrics that the panel tracks (including both built-in metrics and topics you have created). Use the Topics tab to find topics. You can add topics to a panel using the Add to Overview button in the Topics tab. Features: Sessions tab The Sessions tab shows a live list of the sessions connected to the Diffusion server in the Open sessions section, including session ID, IP address, connection and transport type, and total session time. You can use the Metric Collectors section of this tab to configure a session metric collector. These enable you to gather information on a subset of all sessions. The Metrics section displays the output of your session metric collectors. Each session metric collector provides information about the number of sessions (open, connected, peak and total), as well as inbound and outbound traffic in both bytes and number of messages. You can optionally group the sessions within a collector by session properties. In the Metric Collectors section, specify the sessions to include using the session filter syntax. Enter session properties as a comma-separated list. Make sure to include the $ symbol in front of each one. For example: $Roles, $ClientType, $Connector. For more information about metric collectors, see Metrics and Configuring metrics. Features: Topics tab You can use this section to browse and interact with the Diffusion topic tree. You can browse the live topic tree, subscribe to topics and add/delete topics. This tab also enables you to create topic metric collectors and topic views. Use the menu icon (three horizontal lines) at right to subscribe to or delete topics. The icon also offers Subscribe Recursive and Delete Recursive options. These act on all the topics below the selected topic in the topic tree. Once you have subscribed to a topic, you can view its type and value in the Subscriptions section of this tab. Note that you must be logged in to the console using a principal with the correct permissions to successfully subscribe to, add or delete topics. 
You can use the Metric Collectors section of this tab to create topic metric collectors, and view them in the Metrics section. Each topic metric collector provides information on a subset of the topics in the topic tree. In the Metric Collectors section, specify the topics to include using the topic selector syntax. You can optionally choose to group by topic type. For more information about metric collectors, see Metrics and Configuring metrics. In the Topic Views section you can create a topic view using a topic view definition. Features: Logs tab The Logs tab shows a live color-coded display of log entries emitted by the server at the levels of INFO, WARN, and ERROR. Features: Security tab The Security tab shows a live list of security principals and roles that are configured on the Diffusion server. For more information about security, see Security. Create, edit, or delete principals: The Principals table shows a list of the principals that the system authentication handler is configured to allow to connect to the Diffusion server. The table also shows the roles that are assigned to any client session that authenticates with the principal. Click the + button to add a new principal and define its associated password and roles. Click the spanner icon next to an existing principal to edit its password or roles. Click the X icon next to an existing principal to delete that principal. Edit authentication policy and roles for anonymous users: The Anonymous sessions table shows the authentication decision for client sessions that connect anonymously to the Diffusion server. You can choose to ALLOW or DENY anonymous connections or to ABSTAIN from the authentication decision, which then passes to the next configured authentication handler. Click the spanner icon to edit the authentication decision for anonymous connections and, if that decision is ALLOW, edit any roles that are assigned to anonymous sessions. Edit authentication policy and roles for named sessions: The Named sessions table enables you to edit the authentication policy for named sessions. Create, edit, or delete roles: The Roles table shows a list of roles that have been configured in the security store of the Diffusion server. These are the roles that you can choose to assign to any principals that connect to the Diffusion server. Click the + button to add a new role and define its permissions and any roles it inherits from. Click the spanner icon next to an existing role to edit its permissions and any roles it inherits from. Click the X icon next to an existing role to delete that role. This page last modified: 2019/09/04
https://docs.pushtechnology.com/docs/6.3.2/manual/html/administratorguide/systemmanagement/r_diffusion_monitoring_console.html
2020-10-23T21:26:31
CC-MAIN-2020-45
1603107865665.7
[array(['console/console_default_layout.png', 'Screenshot of the Overview tab showing panels.'], dtype=object) array(['console/console_topics_hamburg.png', 'Screenshot of the Topics tab showing subscribe/delete controls.'], dtype=object) array(['console/console_security_tab.png', 'Screenshot of the security tab.'], dtype=object)]
docs.pushtechnology.com
The value for a calculated field is determined by the field's formula. These formulas can reference other fields' values or literal values. More complex results are possible using built-in functions. To reference the value of a field, enclose the field's reference name in brackets. For example: [fieldRef] Formulas can also include the basic mathematical operators; for example, [subTotal] + [subTotal] * [taxRate] (where subTotal and taxRate are hypothetical field reference names) combines two field references with multiplication and addition.
https://docs.resultra.org/doku.php?id=formulas
2020-10-23T21:08:13
CC-MAIN-2020-45
1603107865665.7
[]
docs.resultra.org
Correlation search overview for ITSI A correlation search is a recurring search of your data that generates notable events when its results meet specific conditions. Do not create correlation searches by manually editing $SPLUNK_HOME/etc/apps/itsi/local/savedsearches.conf. The search will not appear on the correlation search lister page. Always create correlation searches directly in the IT Service Intelligence app. Predefined correlation searches The following correlation searches are delivered with ITSI. They are all disabled by default except Splunk App for Infrastructure Alerts. You can enable them and modify them to meet your needs.
https://docs.splunk.com/Documentation/ITSI/4.4.5/Configure/Correlationsearchoverview
2020-10-23T22:26:15
CC-MAIN-2020-45
1603107865665.7
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
U1db Elements - Database - Database implements on-disk storage for documents and indexes. - Document - Document proxies a single document stored in the Database. - Index - An Index defines what fields can be filtered using Query. - Query - Query filters documents based on the query and index. - Synchronizer - Synchronizer handles synchronizing between two databases.
https://phone.docs.ubuntu.com/en/apps/api-qml-current/U1db
2020-10-23T22:30:45
CC-MAIN-2020-45
1603107865665.7
[]
phone.docs.ubuntu.com
Use Zendesk As a Content Source This article explains how to index the posts, articles, and tickets stored in your Zendesk instance. Establish a Connection - Navigate to Content Sources. - Click Add new content source. - Select Zendesk. - Enter a name. - Insert your Zendesk instance web address in Client URL. - Select an authentication method. - Press Connect. Set Up Crawl Frequency - Click to fire up a calendar and select a date. Only the posts, articles, and tickets created or edited after the selected date will be indexed. - Use the Frequency dropdown to select how often SearchUnify should index the Zendesk data. - Click Set. Select Fields for Indexing You can index your entire Zendesk data, or only a subset of it. SearchUnify supports three content types out-of-the-box: blogs, articles, and tickets. - Click to select a content type. - Add content fields one at a time. Each field is a property of blogs, articles, and tickets. - OPTIONAL. SearchUnify assigns each field a label, type, and either an isSearchableor isFilterabletag. The values don't require a change, but advanced users can edit them. - Press Save. - Repeat the steps 2-4 for the remaining two content types. - Navigate to By Topics. - Use the index to find your topics and check enable for each one of it. - Press Save. You have successfully installed Zendesk as a content source. Last updated: Friday, September 25, 2020
https://docs.searchunify.com/Content/Content-Sources/Zendesk.htm
2020-10-23T21:22:05
CC-MAIN-2020-45
1603107865665.7
[]
docs.searchunify.com
Macs that have the following Sophos products installed can be managed by SafeGuard Enterprise and/or report status information. The status information is displayed in the SafeGuard Management Center: Sophos SafeGuard File Encryption for Mac 6.1 and later Sophos SafeGuard Disk Encryption for Mac 6.1 / Sophos SafeGuard Native Device Encryption 7.0 Sophos SafeGuard Disk Encryption for Mac 6 - only reporting
https://docs.sophos.com/esg/sgn/8-0/admin/en-us/webHelp/concepts/ManageMacsAbout.htm
2020-10-23T21:49:12
CC-MAIN-2020-45
1603107865665.7
[]
docs.sophos.com
Why it is important If you have different groups of users that have different levels of access within your platform or who use different functionalities, then you would want them to see only the userlanes that are relevant to them. Segmentation will allow you to divide your users into different groups, and show them only the userlanes that are relevant to them, providing them with a more personalized onboarding experience. This article will help you to create a logical and coherent concept to base your segmentation upon. How it works The segmentation depends mainly on the information you have about your users. The more data you have the more granular you can segment your users. Follow these steps to create a good concept to segment your users: 1. Define groups and the characteristics that set them apart: Think about your groups of users and the information that you normally gather about them in your application (e.g. trial/paying, last login, etc.). Do you have enough? Do you need extra information? If you think you need more information talk with your developers to see if this is possible to do. 2. Decide which userlanes are relevant for everyone and which ones are specific to certain groups: Is most of your application important to the majority of your customers? Or are there many parts that are only used by specific groups? You can segment single userlanes or whole chapters. How do you know what would work best for you? Here you have some ideas (based on segmentation): User roles: - Relevant if: you have a rights management system with different user roles such as administrator, regular user, etc. - Recommended segmentation: chapter level Departments: - Relevant if: users from different departments use your software differently - Recommended segmentation: there is probably a greater overlap between the userlanes required for the different user groups. Hence, it is often easier to apply the segmentation to single userlanes Purchased package / Feature sets: - Relevant if: you sell different versions of your product with different feature sets - Recommended segmentation: if you organize the userlanes in chapters based on the purchased package, then chapter level. If the overall chapter structure stays the same for every package but depending on the included features, a user might need more or fewer userlanes, then segment single userlanes Trial status: - Relevant if: your product has a trial period - Recommended segmentation: different chapters for your trial and converted users As a final piece of advice, we recommend to start with a broader segmentation and then refine the segmentation based on the analytics and the feedback of your users. This would mean that you start just using chapter segmentation and only then continue to refine your segmentation applying it to single userlanes within those chapters. You should think about this - You can connect different conditions for your user segment with AND or OR. This also allows you to create nested conditions. For example, you can create a user segment that contains all users that are admins OR paying users AND that are 'first time active' within the last 14 days. - When a specific segment is applied to a chapter, this segmentation will affect all userlanes within this chapter. If you apply another segment to a single userlane in this chapter those conditions will apply additionally.
https://docs.userlane.com/en/articles/2405842-best-practices-create-a-solid-segmentation-concept
2020-10-23T22:12:59
CC-MAIN-2020-45
1603107865665.7
[]
docs.userlane.com
Drill down within a statement item You can drill down within a statement item to visualize a subset of its data. Before you beginRole required: service_charging_analyst About this task Drill down within a statement item to see the entity or the key field that has the data retrieved from the source table. You can also see the basis on which the mapping is done to the particular field that has the relevant data to retrieve. You can also edit and change the drilldown method and use the weighted method. In such a case, the system uses the weighted metric to retrieve data from the source table. Procedure Navigate to Financial Reporting > Administration > Statement Item Drilldowns. Click New to create a statement item or click the name of an existing statement item drilldown that you want to edit. Click the type of statement item that you want to drill down. Based on the type of statement item drilldown that you select, fill in the relevant form fields. Table 1. Statement Item Drilldown form Field Description Name Name of the statement item drilldown. Drilldown basis Based on your option, the system drills down based on the field mapped or using the weighted metric method.For more information, see Allocation metrics in Cost Transparency application. Table The source table that has the statement item information. Type The type of the statement item, determined based on the source from where the information is retrieved.You cannot edit the field as you have already selected the type of statement item drilldown that you want to perform. Mapping field Maps to the field in the table which has the drilldown data. Weighted Metric The drilldown is done on calculations based on an aggregate value from a segment. Cost model Select the cost model for which the drilldown can be applied. Click Submit to enter a record or Update if you have edited an existing record. What to do next After you define the statement items, associate the statement items to the showback statements. You can use the showback statement to report consumed services to the business unit head, which displays the detailed service charge lines that the unit has utilized as a part of the business service. For example, Email service is a business service. When a business unit uses the email service, then the service charges for consuming the email services are reported as a showback statement to the business unit head or the department head.
https://docs.servicenow.com/bundle/jakarta-it-business-management/page/product/it-finance/task/drill-down-within-statement-item.html
2018-01-16T13:39:05
CC-MAIN-2018-05
1516084886436.25
[]
docs.servicenow.com
Expressions can appear in SQL statements and clauses. Syntax for many statements and expressions includes the term Expression, or a term for a specific kind of expression such as TableSubquery. Expressions are allowed in these specified places within statements. Some locations allow only a specific type of expression or one with a specific property. Of course, many other statements include these elements as building blocks, and so allow expressions as part of these elements. The following sections list all the possible SQL expressions and indicate where the expressions are allowed. General expressions are expressions that might result in a value of any type. Boolean expressions are expressions that result in boolean values. Most general expressions can result in boolean values. Boolean expressions commonly used in a WHERE clause are made of operands operated on by SQL operators. See SQL Boolean Operators. Character expressions are expressions that result in a CHAR or VARCHAR value. Most general expressions can result in a CHAR or VARCHAR value.
http://gemfirexd.docs.pivotal.io/docs/1.3.1/userguide/reference/language_ref/ref-sql-expressions.html
2017-05-23T01:04:48
CC-MAIN-2017-22
1495463607245.69
[]
gemfirexd.docs.pivotal.io
Widgets are controls which you drop on the page (in page content editing mode) and configure to display already existing content. You can configure the widgets to display different parts of content, by combining widgets and by tagging and classifying content. Technically speaking, the concept for a Sitefinity CMS widget is the same as the one for an ASP.NET control. You can also use Sitefinity CMS Thunder to create and customize widgets.
http://docs.sitefinity.com/widgets-add-content-and-functionality-to-pages
2017-05-23T01:21:06
CC-MAIN-2017-22
1495463607245.69
[]
docs.sitefinity.com
Like any other higher-level kernel-mode driver, a storage filter driver (SFD) must have one or more Dispatch routines to handle every IRP_MJ_XXX request for which the underlying storage driver supplies a Dispatch entry point. Depending on the nature of its device, the Dispatch entry point of an SFD might do one of the following for any given request: For a request that requires no special handling, set up the I/O stack location in the IRP for the next-lower driver, possibly call IoSetCompletionRoutine to set up its IoCompletion routine for the IRP, and pass the IRP on for further processing by lower drivers with IoCallDriver. For a request already handled by a storage class driver, modify the SRB in the I/O stack location of the IRP before setting up the I/O stack location, possibly set an IoCompletion routine, and pass the IRP to the next-lower driver with IoCallDriver. Set up a new IRP with an SRB and CDB for its device, call IoSetCompletionRoutine so the SRB (and the IRP if the driver calls IoAllocateIrp or IoBuildAsynchronousFsdRequest) can be freed, and pass the IRP on with IoCallDriver An SFD is most likely to set up new IRPs with the major function code IRP_MJ_INTERNAL_DEVICE_CONTROL. Processing requests For requests that require no special handling, the Dispatch routine of an SFD usually calls IoSkipCurrentIrpStackLocation with an input IRP and then calls IoCallDriver with pointers to the class driver's device object and the IRP. Note that an SFD seldom sets its IoCompletion routine in IRPs that require no special handling both because a call to the IoCompletion routine is unnecessary and because it degrades I/O throughput for the driver's devices. If an SFD does set an IoCompletion routine, it calls IoCopyCurrentIrpStackLocationToNext instead of IoSkipCurrentIrpStackLocation and then calls IoSetCompletionRoutine before calling IoCallDriver. For requests that do require special handling, the SFD can do the following: Create a new IRP with IoBuildDeviceIoControlRequest, IoAllocateIrp, IoBuildSynchronousFsdRequest, or IoBuildAsynchronousFsdRequest, usually specifying an I/O stack location for itself. Check the returned IRP pointer for NULL and return STATUS_INSUFFICIENT_RESOURCES if an IRP could not be allocated. If the driver-created IRP includes an I/O stack location for the SFD, call IoSetNextIrpStackLocation to set up the IRP stack location pointer. Then, call IoGetCurrentIrpStackLocation to get a pointer to its own I/O stack location in the driver-created IRP and set up it up with state to be used by its own IoCompletion routine. Call IoGetNextIrpStackLocation to get a pointer to the next-lower driver's I/O stack location in the driver-created IRP and set it up with the major function code IRP_MJ_SCSI and an SRB (see Storage Class Drivers). Translate data to be transferred to the device into a device-specific, nonstandard format if necessary. Call IoSetCompletionRoutine if the driver allocated any memory, such as memory for an SRB, SCSI request-sense buffer, MDL, and/or IRP with a call to IoAllocateIrp or IoBuildAsynchronousFsdRequest, or if the driver must translate data transferred from the device in a device-specific, nonstandard format. Pass the driver-created IRP to (and through) the next-lower driver with IoCallDriver. Handling SRB formats Starting with Windows 8, an SFD filtering between the class driver and the port driver must check for the supported SRB format. Specifically, this involves detecting the SRB format and accessing the members of the structure correctly. 
The SRB in the IRP is either an SCSI_REQUEST_BLOCK and or an STORAGE_REQUEST_BLOCK. A filter driver can determine ahead of time which SRBs are supported by the port driver below by issuing an IOCTL_STORAGE_QUERY_PROPERTY request and specifying the StorageAdapterProperty identifier. The SrbType and AddressType values returned in the STORAGE_ADAPTER_DESCRIPTOR structure indicate the SRB format and addressing scheme used by the port driver. Any new SRBs allocated and sent by the filter driver must be of the type returned by the query. Similarly, starting with Windows 8, SFDs supporting only SRBs of the SCSI_REQUEST_BLOCK type must check that the SrbType value returned in the STORAGE_ADAPTER_DESCRIPTOR structure is set to SRB_TYPE_SCSI_REQUEST_BLOCK. To handle the situation when SrbType is set to SRB_TYPE_STORAGE_REQUEST_BLOCK instead, the filter driver must set a completion routine for IOCTL_STORAGE_QUERY_PROPERTY when the StorageAdapterProperty identifier is set in the request sent by drivers above it. In the completion routine, the SrbType member in the STORAGE_ADAPTER_DESCRIPTOR is modified to SRB_TYPE_SCSI_REQUEST_BLOCK to correctly set the supported type. The following is an example of a filter dispatch routine which handles both SRB formats. NTSTATUS FilterScsiIrp( PDEVICE_OBJECT DeviceObject, PIRP Irp ) { PFILTER_DEVICE_EXTENSION deviceExtension = DeviceObject->DeviceExtension; PIO_STACK_LOCATION irpStack = IoGetCurrentIrpStackLocation(Irp); NTSTATUS status; PSCSI_REQUEST_BLOCK srb; ULONG srbFunction; ULONG srbFlags; // // Acquire the remove lock so that device will not be removed while // processing this irp. // status = IoAcquireRemoveLock(&deviceExtension->RemoveLock, Irp); if (!NT_SUCCESS(status)) { Irp->IoStatus.Status = status; IoCompleteRequest(Irp, IO_NO_INCREMENT); return status; } srb = irpStack->Parameters.Scsi.Srb; if (srb->Function == SRB_FUNCTION_STORAGE_REQUEST_BLOCK) { srbFunction = ((PSTORAGE_REQUEST_BLOCK)srb)->SrbFunction; srbFlags = ((PSTORAGE_REQUEST_BLOCK)srb)->SrbFlags; } else { srbFunction = srb->Function; srbFlags = srb->SrbFlags; } if (srbFunction == SRB_FUNCTION_EXECUTE_SCSI) { if (srbFlags & SRB_FLAGS_UNSPECIFIED_DIRECTION) { // ... // filter processing for SRB_FUNCTION_EXECUTE_SCSI // ... } } IoMarkIrpPending(Irp); IoCopyCurrentIrpStackLocationToNext(Irp); IoSetCompletionRoutine(Irp, FilterScsiIrpCompletion, DeviceExtension->DeviceObject, TRUE, TRUE, TRUE); IoCallDriver(DeviceExtension->TargetDeviceObject, Irp); return STATUS_PENDING; } Setting up requests Like a storage class driver, an SFD might have BuildRequest or SplitTransferRequest routines to be called from the driver's Dispatch routines, or might implement the same functionality inline. For more information about BuildRequest and SplitTransferRequest routines, see Storage Class Drivers. For more information about general requirements for Dispatch routines, see Writing Dispatch Routines.
https://docs.microsoft.com/en-us/windows-hardware/drivers/storage/storage-filter-driver-s-dispatch-routines
2017-05-23T02:24:50
CC-MAIN-2017-22
1495463607245.69
[]
docs.microsoft.com
Basic Setup¶ Use or create a model for storing images and/or files. For simplicity here we will use the models in file_picker.uploads, Image and File. Use or create another model to contian the text field(s) to be inserted in by the picker. Here we will use the Post model from the sample_project.article. Which has two text fields, Body and Teaser. To use the pickers on both the teaser and body fields use a formfield_override to override the widget with the file_picker.widgets.SimpleFilePickerWidget: import file_picker from django.contrib import admin from sample_project.article import models as article_models class PostAdmin(admin.ModelAdmin): formfield_overrides = { models.TextField: { 'widget': file_picker.widgets.SimpleFilePickerWidget(pickers={ 'image': "images", # a picker named "images" from file_picker.uploads 'file': "files", # a picker named "files" from file_picker.uploads }), }, } class Media: js = ("",) admin.site.register(article_models.Post, PostAdmin) Simple File Picker Widget¶ To use the simple file picker widget override the desired form field’s widget. It takes in a dictionary with expected keys “image” and/or “file” these define which link to use “Add Image” and/or “Add File”. For an example of usage look at the.
http://django-file-picker.readthedocs.io/en/latest/setup.html
2017-05-23T01:09:22
CC-MAIN-2017-22
1495463607245.69
[]
django-file-picker.readthedocs.io
This topic helps IT administrators simplify Windows enrollment for their Windows 10 devices in Intune when adding their work account to their personally-owned devices or joining their corporate-owned devices to your Azure Active Directory. Step 1: Create CNAMEs If there is more than one verified domain, create a CNAME record for each domain. The CNAME resource records must contain the following information: EnterpriseEnrollment-s.manage.microsoft.com – Supports a redirect to the Intune service with domain recognition from the email's domain name. If your company uses multiple domains for user credentials, create CNAME records for each domain. For example, if your company's website is contoso.com, you would create a CNAME in DNS that redirects EnterpriseEnrollment.contoso.com to EnterpriseEnrollment-s.manage.microsoft.com. Changes to DNS records might take up to 72 hours to propagate. You cannot verify the DNS change in Intune until the DNS record propagates. Step 2: Verify CNAME (optional) In the Intune administration console, choose Admin > Mobile Device Management > Windows. Enter the URL of the verified domain of the company website in the Specify a verified domain name box, and then choose Test Auto-Detection. Tell users how to enroll Windows devices Tell your users how to enroll their Windows devices and what to expect after they're brought into management. For end-user enrollment instructions, see Enroll your Windows device in Intune. You can also send users to What can my IT admin see on my device. For more information about end-user tasks, see Resources about the end-user experience with Microsoft Intune. See also Prerequisites for enrolling devices in Microsoft Intune
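One quick way to confirm that the CNAME has propagated before running Test Auto-Detection is to resolve it yourself, for example with the third-party dnspython package in Python (the contoso.com domain below is a placeholder, and the resolve() call assumes dnspython 2.x):
```python
import dns.resolver  # pip install dnspython (2.x); older versions use dns.resolver.query

# Replace with your own verified domain.
name = "enterpriseenrollment.contoso.com"

try:
    answers = dns.resolver.resolve(name, "CNAME")
    for rdata in answers:
        print(f"{name} -> {rdata.target}")
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("CNAME not found yet; the record may still be propagating.")
```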
https://docs.microsoft.com/en-us/intune-classic/deploy-use/set-up-windows-device-management-with-microsoft-intune
2017-05-23T01:09:35
CC-MAIN-2017-22
1495463607245.69
[]
docs.microsoft.com
Amazon Redshift and PostgreSQL JDBC and ODBC. For more information about drivers and configuring connections, see JDBC and ODBC Drivers for Amazon Redshift in the Amazon Redshift Cluster Management Guide. To avoid client-side out-of-memory errors when retrieving large data sets using JDBC, you can enable your client to fetch data in batches by setting the JDBC fetch size parameter. For more information, see Setting the JDBC Fetch Size Parameter. Amazon Redshift does not recognize the JDBC maxRows parameter. Instead, specify a LIMIT clause to restrict the result set. You can also use an OFFSET clause to skip to a specific starting point in the result set.
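The same batching and LIMIT/OFFSET ideas apply outside JDBC as well. As a rough sketch (not using the JDBC driver discussed above), the following Python example uses psycopg2, which works because Amazon Redshift speaks the PostgreSQL wire protocol; the connection details and table name are placeholders:
```python
import psycopg2

conn = psycopg2.connect(
    host="examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="awsuser", password="...",
)

# A named (server-side) cursor streams rows in batches, similar in spirit to
# setting the JDBC fetch size, so the client never holds the full result set.
cur = conn.cursor(name="batch_cursor")
cur.itersize = 1000

# LIMIT restricts the result set; OFFSET skips to a starting point.
cur.execute("SELECT * FROM sales ORDER BY saletime LIMIT 10000 OFFSET 0")
for row in cur:
    pass  # process each row here

cur.close()
conn.close()
```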
http://docs.aws.amazon.com/redshift/latest/dg/c_redshift-postgres-jdbc.html
2017-05-23T01:11:51
CC-MAIN-2017-22
1495463607245.69
[]
docs.aws.amazon.com
Ticket #773 (closed enhancement: community) Qemu should support -net usb -redir ...... Description The qemu-neo1973 emulator should be able to fake a host with USB networking such that the qemu user networking stack can allow a redirected connection without needing a linux host with the dummy_hcd and gadgetfs modules loaded. Change History comment:1 Changed 10 years ago by balrogg@… - Status changed from new to closed - Cc balrogg@… added - Resolution set to invalid comment:2 Changed 10 years ago by hozer@… - Status changed from closed to reopened - rep_platform changed from PC to Other - Resolution invalid deleted - op_sys changed from Linux to All? comment:3 Changed 10 years ago by balrogg@… It would work in Windows and MacOS if they didn't lack gadgetfs support - this lack should be compensated for in these projects instead of in qemu. Ofcourse if someone implements a workaround in qemu (which shouldn't be difficult because it would mainly consist of copying and pasting the kernel code into qemu) it can be merged into qemu-neo1973 tree, but not upstream. See the problems qemu already has with maintaining slirp in our cvs - this would add a very similar issue of maintaining code that belongs in a separate project. For networking, fixing the USB NIC from may be easier. Other possibilities are networking over emulated bluetooth or GPRS but they will have the same problems of a lot of code not fitting in qemu. comment:4 Changed 9 years ago by john_lee@… - Status changed from reopened to new - Owner changed from dodji@… to balrogg@… Andrzej, please decide what to do with this. If we leave it to the community please reassign to michael@… comment:5 Changed 9 years ago by balrogg@… - Owner changed from balrogg@… to michael@…..
http://docs.openmoko.org/trac/ticket/773
2017-05-23T01:10:26
CC-MAIN-2017-22
1495463607245.69
[]
docs.openmoko.org
Ticket #1931 (closed task: worksforme) Ringtone disappeared Description After I installed 2008.8 it worked; then, after 1 or 2 reboots, it disappeared. Now I can't hear the ringtone or the "tick" when I touch the screen, while I can hear music without problems. Change History Resolved as 'worksforme' now. If you see this again, please kindly reopen. Note: See TracTickets for help on using tickets.
http://docs.openmoko.org/trac/ticket/1931
2017-05-23T01:05:13
CC-MAIN-2017-22
1495463607245.69
[]
docs.openmoko.org
California Public Utilities Commission 505 Van Ness Ave., San Francisco _______________________________________________________________________________ FOR IMMEDIATE RELEASE PRESS RELEASE Media Contact: Terrie Prosper, 415.703.1366, [email protected] Docket #: R.08-12-009 CPUC LAUNCHES PLAN TO MODERNIZE ELECTRIC GRID SAN FRANCISCO, June 24, 2010 - The California Public Utilities Commission (CPUC) today embarked on a momentous path toward modernizing the state's electric grid from one based on industrial age technology to one based on the technology of the information age. The decision adopted today sets out a framework and an overall vision for a Smart Grid in California and requires the state's investor-owned utilities to begin the transformation of the electric grid into a safer, more reliable, efficient, affordable, and interoperable system. The California legislature and Governor have enshrined the importance of modernizing the state's electric grid through the enactment of Senate Bill 17 (Padilla), signed into law on October 11, 2009. "Moving to a Smart Grid will allow utilities to help customers save money by reducing their electricity demand, provide consumers with more control over their energy use and help deploy clean, renewable energy sources like wind and solar around the state," said Governor Schwarzenegger. "I applaud the Public Utilities Commission for taking action and working with utilities to update and modernize our electric grid." Added Senator Alex Padilla (D-Pacoima), "Thoughtful planning is the key to success. I am pleased to see that the CPUC is providing the utilities with a roadmap that will lead to the modernization of the electric grid and provide consumers with real benefits." A Smart Grid is characterized by the ability to use real-time information to anticipate, detect, and respond to system problems. One example of new technology being deployed for smarter grid operations is measurements that are taken every two or four seconds, offering an almost continuous view into the power system. That real-time updating of the transmission system allows the grid to respond instantly to outages by scheduling electricity around constrained or downed areas across the grid, limiting the size and scope of outages. Real-time operations can also ease congestion and bottlenecks and reduce line losses (where electricity is "lost" due to heat and other factors), all of which result in savings to consumers by transporting electricity more efficiently. "The current grid has been operating in much the same way for over 100 years, and as such lacks the flexibility to adapt to new supply resources and increasing consumer demands," said CPUC President Michael R. Peevey. "The roadmap we provided today for the utilities will ensure that we have the best available information to create Smart Grid policies." Said Commissioner Nancy E. Ryan, the Commissioner assigned to this proceeding, "Smart Grid technology offers California energy consumers many potential benefits such as fewer new power plants and transmission lines, safer and more reliable service, cleaner air, and lower bills. This rigorous planning process will ensure that utility customers actually see these benefits." 
To ensure that the state's utilities follow a common outline in preparing their Smart Grid deployment plans, today's decision provides a roadmap of issues the utilities must address in their plans in order to bring the best results at the lowest possible cost to consumers, including: · Smart Grid Vision Statement to help orient a utility's efforts to upgrade its electrical system to meet today's requirements and tomorrow's needs using the latest technologies. · Smart Grid Strategy to consider whether using existing communications infrastructure can reduce the costs of deploying the Smart Grid. · Grid Security and Cyber Security Strategy to ensure that these issues are considered explicitly at the planning stage. · Cost Estimates of Smart Grid technologies and infrastructure investments that a utility expects to make in the next five years, and provisional cost ranges for potential Smart Grid technologies and investments for the following five years. · Metrics that permit the assessment of progress. Today's decision is the culmination of more than two years of work by the CPUC, utilities, consumer advocates, technology companies, and other interested parties to modernize the electric grid and bring savings to consumers. It also kicks off the next phase of the process as the decision directs parties to continue to address issues surrounding security, privacy, and third party access to customer information. The proposal voted on today is available at. For more information on the CPUC, please visit. ###
http://docs.cpuc.ca.gov/PUBLISHED/NEWS_RELEASE/119756.htm
2015-01-30T12:26:29
CC-MAIN-2015-06
1422115855561.4
[]
docs.cpuc.ca.gov
java.lang.Object → org.springframework.core.io.support.PropertiesLoaderSupport → org.springframework.beans.factory.config.PropertiesFactoryBean public class PropertiesFactoryBean Allows for making a properties file from a classpath location available as a Properties instance in a bean factory. Can be used to populate any bean property of type Properties via a bean reference. By default a shared singleton Properties instance is returned. public PropertiesFactoryBean() public final void setSingleton(boolean singleton) Set whether a shared 'singleton' Properties instance should be created, or rather a new Properties instance on each request. public final Object getObject() throws IOException Return an instance (possibly shared or independent) of the object managed by this factory. If this method returns null, the factory will consider the FactoryBean as not fully initialized and throw a corresponding FactoryBeanNotInitializedException. Specified by: getObject in interface FactoryBean Returns: an instance of the bean (should not be null; a null value will be considered as an indication of incomplete initialization) Throws: IOException protected Object createInstance() throws IOException Invoked on initialization of this FactoryBean in case of a singleton; else, on each getObject() call. Throws: IOException - if an exception occurred during properties loading See Also: getObject(), PropertiesLoaderSupport.mergeProperties()
http://docs.spring.io/spring/docs/1.2.9/api/org/springframework/beans/factory/config/PropertiesFactoryBean.html
2015-01-30T13:03:48
CC-MAIN-2015-06
1422115855561.4
[]
docs.spring.io
Media Manager Media Files - Media Files - Upload - Search Upload to pictures Sorry, you don't have enough rights to upload files. File - Date: - 2017/10/31 10:32 - Filename: - dns-create-dnsview-view.png - Format: - PNG - Size: - 13KB - Width: - 874 - Height: - 276 - References for: - how_to_use_dns_plugin
https://docs.fusiondirectory.org/start?tab_files=upload&do=media&tab_details=view&image=en%3Apictures%3Aplugin%3Adns%3Adns-create-dnsview-view.png&ns=pictures
2019-12-05T16:55:40
CC-MAIN-2019-51
1575540481281.1
[]
docs.fusiondirectory.org
Before You BeginBefore You Begin Before you discover a System in Sunshower.io, you will need to create an account with your cloud service provider with the appropriate permissions. Below is a set of guides for all the supported clouds. Amazon Web Services Identity Access Management (IAM)Amazon Web Services Identity Access Management (IAM) If you're using Amazon Web Services, you will need to create an IAM Role before you can discover and optimize your System. This guide assumes a basic familiarity with AWS IAM roles and the AWS Console. If you're not familiar with the AWS Console or AWS IAM, please contact us and we can help you get started. Step 1: Log into AWSStep 1: Log into AWS Navigate to and select the large Orange button to the top right. This should take you to the AWS signin portal. Use your AWS credentials to sign into the console: Step 2: Locate IAM Management ConsoleStep 2: Locate IAM Management Console You'll be redirected to the AWS Management Console. In the Find Services field, search for "IAM": Select the entry from the dropdown; you'll be taken to the IAM Dashboard: Step 3: Create IAM RoleStep 3: Create IAM Role In the left-hand menu, select Users. You'll be taken to the user management page Add User to create a new user. The user details page will prompt you for a username, it's best to make it descriptive and related to sunshower (e.g. sunshower-io-readonly). Select the Access Type: Programmatic Access checkbox and proceed by clicking Next: Permissions This will bring you to the Permissions Page: Before you select Create Group, read over the options below for your Sunshower.io IAM Role: Sunshower.io can be run in - Read-Only mode or - Management mode depending on what you want to use it for. Read-Only ModeRead-Only Mode Read-only mode grants Sunshower.io only enough access to run its optimizations. Certain features (discussed below) will not be available. Unless you specifically want one of Sunshower.io's management features, such as lifecycle scheduling, we recommend running it in Read-Only mode. Create a Read-Only IAM CredentialCreate a Read-Only IAM Credential Create Group. This will open the Group Creation dialog that we will use to assign Sunshower.io read-only permissions. Before entering your group name, select Create Policy, which will open a new browser tab to the policy definition page. Select the JSON tab and paste the following JSON document into the text-area: read-only-policy.json { "Version": "2012-10-17", "Statement": [ { "Action": [ "autoscaling:Describe*", "ec2:Describe*", "cloudwatch:Describe*", "cloudwatch:Get*", "cloudwatch:List*" ], "Effect": "Allow", "Resource": "*" } ] } Enter the policy name and description. You should see something like: Click Create Policy. You'll be redirected to a list of policies. In your browser, return to the Create Group tab. Click Refresh to load the policy that you just created. Enter the policy name in the Search box to find the Sunshower.io policy: Select the policy and enter the group name, then click Create Group. This will return you to the Add user to group page with the correct group selected. Click Next: Tags You shouldn't need to add anything here. Click Next: Review. You'll be presented with a summary: Create: User. This will present you with your new IAM Credential. IMPORTANT Do not close this window yet. 
You will have to repeat part of the process if you close it too early. We're not fully blurring ours out so you can see what they look like, but the ones presented here are not active: Under the Secret access key column, click Show to reveal your Secret access key. The Access key ID and Secret access key need to be saved securely as they're required for System Discovery. Congrats! That was the hard part! Continue to Discovery to create your system. Management ModeManagement Mode If you're using Sunshower.io to modify your infrastructure or run infrastructure schedules, you should opt for Management mode, keeping in mind that it does allow us more access to your infrastructure. AWS IAM is very granular, so depending on how much access you give us we may or may not be able to perform an operation. The process is exactly the same as in Read-Only Mode (above), except that when you're creating your IAM Policy Document you should use:
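Once the Access key ID and Secret access key are saved, a quick way to sanity-check that they work before running discovery is an STS call from Python with boto3; the key values and region below are placeholders:
```python
import boto3

session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEEXAMPLE",         # placeholder
    aws_secret_access_key="exampleSecretKeyValue",  # placeholder
)

# Confirms the credential is valid and shows which IAM user it belongs to.
identity = session.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])

# With the read-only policy above, Describe* calls like this one should succeed.
ec2 = session.client("ec2", region_name="us-east-1")
print(len(ec2.describe_instances()["Reservations"]))
```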
https://docs.sunshower.io/pages/en/guide/before-you-begin.html
2019-12-05T18:01:07
CC-MAIN-2019-51
1575540481281.1
[array(['/assets/img/signin.38ec7873.png', 'AWS Console Page'], dtype=object) array(['/assets/img/aws-signin-portal.ce2bde67.png', 'AWS Console Page'], dtype=object) array(['/assets/img/iam-search.d41e2457.png', 'AWS IAM Search'], dtype=object) array(['/assets/img/iam-dashboard.691bd995.png', 'AWS IAM Dashboard'], dtype=object) array(['/assets/img/iam-user-page.5ecc765d.png', 'AWS IAM Dashboard'], dtype=object) array(['/assets/img/iam-user-details.bf7584b3.png', 'AWS IAM Dashboard'], dtype=object) array(['/assets/img/iam-create-group.e276a320.png', 'AWS IAM Dashboard'], dtype=object) array(['/assets/img/iam-create-policy.645c035a.png', 'AWS IAM Dashboard'], dtype=object) array(['/assets/img/iam-sunshower-readonly-policy.5a4b6089.png', 'AWS IAM Dashboard'], dtype=object) array(['/assets/img/iam-sunshower-create-group.265a5c5d.png', 'AWS IAM Dashboard'], dtype=object) array(['/assets/img/iam-add-user-to-group.4e81f69a.png', 'AWS IAM Dashboard'], dtype=object) array(['/assets/img/iam-user-summary.402c95a8.png', 'AWS IAM Dashboard'], dtype=object) array(['/assets/img/iam-credential-created-page.d492eafc.png', 'AWS IAM Dashboard'], dtype=object) ]
docs.sunshower.io
Machine translation¶ Machine translation setup¶ Weblate has built-in support for several machine translation services and it's up to the administrator to enable them. The services have different terms of use, so please check whether you are allowed to use them before enabling them in Weblate. The individual services are enabled using MACHINE_TRANSLATION_SERVICES. The source language can be configured at Project configuration. Amagama¶ Special installation of tmserver run by the Virtaal authors. To enable this service, add trans.machine.tmserver.AmagamaTranslation to MACHINE_TRANSLATION_SERVICES. See also Amagama Translation Memory server Apertium¶ A free/open-source machine translation platform providing translation to a limited set of languages. The recommended way to use Apertium is to run your own Apertium-APy server. Alternatively you can use the Apertium server, but you should get an API key from them, otherwise the number of requests is rate limited. To enable this service, add trans.machine.apertium.ApertiumAPYTranslation to MACHINE_TRANSLATION_SERVICES. Glosbe¶ Free dictionary and translation memory for almost every living language. The API is free to use, subject to the indicated data source license. There is a limit on the number of calls that may be made from one IP in a fixed period of time, to prevent abuse. To enable this service, add trans.machine.glosbe.GlosbeTranslation to MACHINE_TRANSLATION_SERVICES. See also Google Translate¶ Machine translation service provided by Google. This service uses the Translation API and you need to obtain an API key and enable billing on the Google API console. To enable this service, add trans.machine.google.GoogleTranslation to MACHINE_TRANSLATION_SERVICES. See also MT_GOOGLE_KEY, Google translate documentation Microsoft Translator¶ Deprecated since version 2.10. Note This service is deprecated by Microsoft and needs to be replaced by Microsoft Cognitive Services Translator. Machine translation service provided by Microsoft, also known as Bing Translator. You need to register at the Azure market and use the Client ID and secret from there. To enable this service, add trans.machine.microsoft.MicrosoftTranslation to MACHINE_TRANSLATION_SERVICES. Microsoft Cognitive Services Translator¶ New in version 2.10. Note This is the replacement service for Microsoft Translator. Machine translation service provided by Microsoft in the Azure portal as one of the Cognitive Services. You need to register at the Azure portal and use the key you obtain there. To enable this service, add trans.machine.microsoft.MicrosoftCognitiveTranslation to MACHINE_TRANSLATION_SERVICES. MyMemory¶ Huge translation memory with machine translation. Free, anonymous usage is currently limited to 100 requests/day, or to 1000 requests/day when you provide a contact email in MT_MYMEMORY_EMAIL. You can also ask them for more. To enable this service, add trans.machine.mymemory.MyMemoryTranslation to MACHINE_TRANSLATION_SERVICES. tmserver¶ You can run your own translation memory server, which is bundled with the Translate Toolkit, and let Weblate talk to it. You can also use it with an amaGama server, which is an enhanced version of tmserver. First you will want to import some data to the translation memory. To enable this service, add trans.machine.tmserver.TMServerTranslation to MACHINE_TRANSLATION_SERVICES and point the MT_TMSERVER setting to the URL of your tmserver instance. Weblate¶ Weblate can be a source of machine translation as well. There are two services that provide results - one does an exact search for a string, the other finds all similar strings.
The first one is useful for full string translations, the second one for finding individual phrases or words to keep the translation consistent. To enable these services, add trans.machine.weblatetm.WeblateSimilarTranslation (for similar string matching) and/or trans.machine.weblatetm.WeblateTranslation (for exact string matching) to MACHINE_TRANSLATION_SERVICES. Note For similarity matching, it is recommended to have Whoosh 2.5.2 or later; earlier versions can cause infinite loops in some situations. Custom machine translation¶ You can also implement your own machine translation services using a few lines of Python code. The following example implements translation to a fixed list of languages using the dictionary Python module: # -*- coding: utf-8 -*- # # Copyright © 2012 - 2016. from weblate.trans.machine.base import MachineTranslation import dictionary class SampleTranslation(MachineTranslation): ''' Sample machine translation interface. ''' name = 'Sample' def download_languages(self): ''' Returns list of languages your machine translation supports. ''' return set(('cs',)) def download_translations(self, source, language, text, unit, user): ''' Returns tuple with translations. ''' return [(t, 100, self.name, text) for t in dictionary.translate(text)] You can list your own class in MACHINE_TRANSLATION_SERVICES and Weblate will start using it.
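Putting it together, enabling several of the services above amounts to listing their classes in MACHINE_TRANSLATION_SERVICES in your settings.py, roughly like this (only include services you have actually configured; the key and email values are placeholders):
```python
# settings.py (fragment)
MACHINE_TRANSLATION_SERVICES = (
    'trans.machine.apertium.ApertiumAPYTranslation',
    'trans.machine.glosbe.GlosbeTranslation',
    'trans.machine.google.GoogleTranslation',
    'trans.machine.mymemory.MyMemoryTranslation',
    'trans.machine.weblatetm.WeblateSimilarTranslation',
    'trans.machine.weblatetm.WeblateTranslation',
)

# Service-specific settings, only needed by the services that use them.
MT_GOOGLE_KEY = 'your-google-api-key'          # placeholder
MT_MYMEMORY_EMAIL = 'translator@example.com'   # placeholder
```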
https://docs.weblate.org/en/weblate-2.10.1/admin/machine.html
2019-12-05T17:24:53
CC-MAIN-2019-51
1575540481281.1
[]
docs.weblate.org
External integration Some rich text editors, such as CKEditor or TinyMCE, allow you to specify a URL-based location of plugins outside of the normal plugins directory. This option is useful when loading the rich text editor from a CDN or when you want to keep the editor directory separate from your custom integrations. In case you are using a programming language for which MathType Integrations are not available, this is also a good option. You can install the MathType integration as an external integration.
https://docs.wiris.com/en/mathtype/mathtype_web/integrations/external-plugin?do=login&sectok=64b4c97bf1e54b67625e20467a667dbc
2020-02-17T01:14:41
CC-MAIN-2020-10
1581875141460.64
[]
docs.wiris.com
Spike sorting with klusta Let's see how to run klusta on your data. Preparing your files You need to specify three pieces of information in order to run klusta: - The raw data file(s) - The probe (PRB) file - The parameters (PRM) file Note: you shouldn't run more than one spike sorting session in a given directory. Use a different directory for every dataset. Raw data Your raw data needs to be stored in one or several flat binary files with no header (support for files with headers is coming soon). The extension is generally .dat but it could be something else. Typically, the data type is int16 or uint16. The bytes need to be arranged as follows: t0ch0 t0ch1 t0ch2 ... t0chN t1ch0 t1ch1 t1ch2 ...``` Channel moves first, time second. Using the Python/NumPy convention, this corresponds to a (n_samples, n_channels) array stored in C order. NumPy can read this file efficiently with np.memmap(dat_path, dtype=dtype, shape=(n_samples, n_channels)) (memory mapping allows to only load what you need in RAM). You can have several successive files (also known as recordings in klusta) for your experiment. They will be virtually concatenated by klusta. The offsets will be preserved in the output files. Probe file You need to specify the layout of your probe in a Python file. This file contains a few fields: - The list of channels: typically this is just 0, 1, 2, ..., N, but you can omit dead or ignored channels. - The adjacency graph of channels that are closed to each other on the probe. This will be used in the spike detection process. - The 2D coordinates of the channels on the probe (optional, only used for visualization purposes). Here is an example: channel_groups = { # Shank index. 0: { # List of channels to keep for spike detection. 'channels': list(range(32)), # Adjacency graph. Dead channels will be automatically discarded # by considering the corresponding subgraph. 'graph': [ (0, 1), (0, 2), (1, 2), (1, 3), ... ], # 2D positions of the channels, only for visualization purposes. # The unit doesn't matter. 'geometry': { 0: (0, 0), 1: (10, 20), ... } } } The full example is here. This is a 32-channel staggered Buzsaki probe. It is already included in klusta, so you can just specify it by its name. We plan to include more built-in probes in the future. If your probe is not included in klusta, you can just create a new PRB file. Since a PRB file is just a Python file, you can programmatically generate the list of channels or the adjacency graph in the file, instead of typing everything by hand. Parameters file The parameters file is also a Python file. It contains the paths to your raw data files and your PRB file, as well as the metadata related to your data and the spike sorting process. Here is an example: experiment_name = 'hybrid_10sec' prb_file = '1x32_buzsaki' # or the path to your PRB file traces = dict( raw_data_files=['myrecording.dat', ], # path to your .dat file(s) sample_rate=20000, # sampling rate in Hz n_channels=32, # number of channels in the .dat files dtype='int16', # the data type used in the .dat files ) # Parameters for the spike detection process. spikedetekt = dict( ) # Parameters for the automatic clustering process. klustakwik2 = dict( num_starting_clusters=100, ) You can specify custom parameters for spike detection and automatic clustering. The default parameters can be found here: Launching the spike sorting process We recommend that you use a dedicated directory for every experiment. 
This directory should contain: - Your PRM file - Your PRB file (optional) Your raw data files can be stored in the directory, or elsewhere, in which case you need to specify the full absolute paths in the PRM file. Once your directory is ready, launch the spike sorting session with: $ klusta yourfile.prm This will generate a .kwik file and a .kwx file with the results. Type the following to get the list of all options: $ klusta --help Here are common options: --output-dir: the output directory containing the resulting kwik file --detect-only: only do spike detection. --cluster-only: only do automatic clustering (spike detection needs to have been done before). --overwrite: overwrite all previous results. --debug: display more information about the sorting process. Using the GUI Type the following to open the KlustaViewa GUI on your files once spike sorting has been done: klustaviewa yourfile.kwik
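As a small sanity check before launching klusta, you can open the raw file exactly as described above; the file name, channel count, and dtype below are placeholders for the values in your own PRM file:
```python
import numpy as np

dat_path = 'myrecording.dat'   # placeholder, matches raw_data_files in the PRM
n_channels = 32                # placeholder, matches n_channels in the PRM
dtype = np.int16               # placeholder, matches dtype in the PRM

# Infer n_samples from the file size: total values / number of channels.
n_samples = np.memmap(dat_path, dtype=dtype, mode='r').size // n_channels

# (n_samples, n_channels) array in C order -- channel moves first, time second.
traces = np.memmap(dat_path, dtype=dtype, mode='r',
                   shape=(n_samples, n_channels))
print(traces.shape, traces[:10, 0])  # first 10 samples of channel 0
```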
https://klusta.readthedocs.io/en/latest/sorting/
2020-02-17T00:16:24
CC-MAIN-2020-10
1581875141460.64
[]
klusta.readthedocs.io
Remove-ADResource Property List Member Syntax Remove-ADResourcePropertyListMember [-WhatIf] [-Confirm] [-AuthType <ADAuthType>] [-Credential <PSCredential>] [-Identity] <ADResourcePropertyList> [-Members] <ADResourceProperty[]> [-PassThru] [-Server <String>] [<CommonParameters>] Description The Remove-ADResourcePropertyListMember cmdlet can be used to remove one or more resource properties from a resource property list in Active Directory. Examples -------------------------- EXAMPLE 1 -------------------------- C:\PS>Remove-ADResourcePropertyListMember "Global Resource Property List" -Members Country Description Removes the resource property specified as a list member ("Country") from the specified resource property list ("Global Resource Property List"). -------------------------- EXAMPLE 2 -------------------------- C:\PS>Remove-ADResourcePropertyListMember "Corporate Resource Property List" Department,Country Description Removes the resource properties named 'Department' and 'Country' from the resource property list ("Corporate Resource Property List"). -------------------------- EXAMPLE 3 -------------------------- C:\PS>Get-ADResourcePropertyList -Filter "Name -like 'Corporate*'" | Remove-ADResourcePropertyListMember Department,Country Description Gets the resource property lists that have a name that begins with "Corporate" and then pipes it to Remove-ADResourcePropertyListMember, which then removes the resource properties with the name 'Department' and 'Country' from it. object that are defined in the current Windows PowerShell session as input for the parameter. -Members $rpObject1, $rpObject2 You cannot pass objects through the pipeline to this parameter.ResourcePropertyList An ADResourcePropertyList object is received by the Identity parameter. Outputs when using this cmdlet. Feedback
https://docs.microsoft.com/en-us/powershell/module/activedirectory/Remove-ADResourcePropertyListMember?view=winserver2012-ps
2020-02-17T01:21:28
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
panda3d.direct.CMetaInterval¶ - class CMetaInterval¶ This interval contains a list of nested intervals, each of which has its own begin and end times. Some of them may overlap and some of them may not. Inheritance diagram __init__(name: str) → None setPrecision(precision: float) → None¶ Indicates the precision with which time measurements are compared. For numerical accuracy, all floating-point time values are converted to integer values internally by scaling by the precision factor. The larger the number given here, the smaller the delta of time that can be differentiated; the limit is the maximum integer that can be represented in the system. getPrecision() → float¶ Returns the precision with which time measurements are compared. See setPrecision(). pushLevel(name: str, rel_time: float, rel_to: RelativeStart) → int¶ Marks the beginning of a nested level of child intervals. Within the nested level, a RelativeStart time of RS_level_begin refers to the start of the level, and the first interval added within the level is always relative to the start of the level. The return value is the index of the def entry created by this push. addCInterval(c_interval: CInterval, rel_time: float, rel_to: RelativeStart) → int¶ Adds a new CInterval to the list. The interval will be played when the indicated time (relative to the given point) has been reached. The return value is the index of the def entry representing the new interval. addExtIndex(ext_index: int, name: str, duration: float, open_ended: bool, rel_time: float, rel_to: RelativeStart) → int¶ Adds a new external interval to the list. This represents some object in the external scripting language that has properties similar to a CInterval (for instance, a Python Interval object). The CMetaInterval object cannot play this external interval directly, but it records a placeholder for it and will ask the scripting language to play it when it is time, via isEventReady()and related methods. The ext_index number itself is simply a handle that the scripting language makes up and associates with its interval object somehow. The CMetaInterval object does not attempt to interpret this value. The return value is the index of the def entry representing the new interval. popLevel(duration: float) → int¶ Finishes a level marked by a previous call to pushLevel(), and returns to the previous level. If the duration is not negative, it represents a phony duration to assign to the level, for the purposes of sequencing later intervals. Otherwise, the level’s duration is computed based on the intervals within the level. setIntervalStartTime(name: str, rel_time: float, rel_to: RelativeStart) → bool¶ Adjusts the start time of the child interval with the given name, if found. This may be either a C++ interval added via addCInterval(), or an external interval added via addExtIndex(); the name must match exactly. If the interval is found, its start time is adjusted, and all subsequent intervals are adjusting accordingly, and true is returned. If a matching interval is not found, nothing is changed and false is returned. getIntervalStartTime(name: str) → float¶ Returns the actual start time, relative to the beginning of the interval, of the child interval with the given name, if found, or -1 if the interval is not found. getIntervalEndTime(name: str) → float¶ Returns the actual end time, relative to the beginning of the interval, of the child interval with the given name, if found, or -1 if the interval is not found. 
getNumDefs() → int¶ Returns the number of interval and push/pop definitions that have been added to the meta interval. getDefType(n: int) → DefType¶ Returns the type of the nth interval definition that has been added. - Return type DefType getCInterval(n: int) → CInterval¶ Return the CInterval pointer associated with the nth interval definition. It is only valid to call this if get_def_type(n) returns DT_c_interval. - Return type - getExtIndex(n: int) → int¶ Return the external interval index number associated with the nth interval definition. It is only valid to call this if get_def_type(n) returns DT_ext_index. isEventReady() → bool¶ Returns true if a recent call to priv_initialize(), priv_step(), or priv_finalize() has left some external intervals ready to play. If this returns true, call getEventIndex(), getEventT(), and popEvent()to retrieve the relevant information. getEventIndex() → int¶ If a previous call to isEventReady()returned true, this returns the index number (added via add_event_index()) of the external interval that needs to be played. getEventT() → float¶ If a previous call to isEventReady()returned true, this returns the t value that should be fed to the given interval. getEventType() → EventType¶ If a previous call to isEventReady()returned true, this returns the type of the event (initialize, step, finalize, etc.) for the given interval. - Return type EventType popEvent() → None¶ Acknowledges that the external interval on the top of the queue has been extracted, and is about to be serviced by the scripting language. This prepares the interval so the next call to isEventReady()will return information about the next external interval on the queue, if any. - enum RelativeStart¶
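A minimal usage sketch of the API above, in Python. The two child intervals (ival_a, ival_b) are assumed to already exist as CInterval objects, and the enum spellings (RSLevelBegin, RSPreviousEnd) reflect the usual Panda3D Python bindings but may differ by version:
from panda3d.direct import CMetaInterval

meta = CMetaInterval("demo-sequence")
meta.setPrecision(1000.0)  # larger value -> finer time resolution for comparisons

# Group the children in one nested level.
meta.pushLevel("main", 0.0, CMetaInterval.RSLevelBegin)
meta.addCInterval(ival_a, 0.0, CMetaInterval.RSLevelBegin)   # starts when the level begins
meta.addCInterval(ival_b, 0.5, CMetaInterval.RSPreviousEnd)  # 0.5 s after ival_a ends
meta.popLevel(-1.0)  # negative duration: compute the level's duration from its children

print(meta.getNumDefs())  # push + two intervals + pop = 4 definition entries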
https://docs.panda3d.org/1.10/cpp/reference/panda3d.direct.CMetaInterval
2020-02-17T01:01:28
CC-MAIN-2020-10
1581875141460.64
[]
docs.panda3d.org
Timetables¶ Timetables are added to Add trip pattern modifications. For Add trip pattern modifications, speed and dwell time values can be set for each timetable, either at the segment level or as overall average values. The user interface also displays travel times derived from these values. While segment-level running-time values can be modified, the speed values are what Conveyal Analysis actually saves and uses for calculation. Recalculated travel time values may differ slightly from explicitly entered values, due to rounding of speed values or if segment lengths change. Before analyzing scenarios, we recommend re-opening modifications with timetables and double-checking that values reflect desired travel times. If the exact schedule is known, a timetable specifying exact times for each trip can be used. In this case, a single fixed timetable will be created, with the first departure at the start time, and then additional departures with exactly the specified frequency until (but not including) the end time. For example, in the scenario given above, the vehicles would be scheduled to depart at exactly 9:00, 9:15, 9:30 and so on until 6:45 (but not at 7:00, because the end time is not included). If the schedule is not known, but it is known that the schedules of two lines will be interrelated (e.g. using timed transfers or pulsed schedules), the Phasing feature may be enabled. Copying Timetables¶ Timetable entries can be copied between Add trip pattern and Convert to frequency modifications. Some users may find it convenient to use a single template Add trip pattern modification that specifies commonly used service windows and frequencies. For example, you could create a “Base Timetables” modification and deactivate it from all scenarios. You could then add multiple timetables to this template. When creating a new trip pattern, the appropriate timetables could then be copied from this template by clicking Copy timetable.
https://analysis-ui.readthedocs.io/en/latest/edit-scenario/timetable.html
2020-02-17T01:44:53
CC-MAIN-2020-10
1581875141460.64
[array(['../img/new-timetable.png', 'add timetable'], dtype=object)]
analysis-ui.readthedocs.io
Software Download Directory Live Forms v8.1 is no longer supported. Please visit Live Forms Latest for our current Cloud Release. Earlier documentation is available too. To design/modify a form step in the flow designer, first click it to select it; it turns light blue. Click the pencil icon Form Designer documentation.to launch the Form Designer. For more information on designing forms, please see the When a form is added to your flow it is added as a copy. Editing it, saves changes to the copy and will not affect the original form that was dragged in from the palette. However, you can download a form step from a workflow as a standalone form by clicking the forms icon. On This Page: There are two design patterns to be considered when designing your workflow. Choosing one design pattern over the other really depends on the purpose of your workflow. The choices are: For example, if you were creating a workflow from a fifty tab Excel spreadsheet, you can create forms for each tab then drag the individual forms into your flow for your steps. Workflows where one form gets routed to a lot of people and they all have to work on it collaboratively, typically use the Linked steps approach. When you create a flow, Live Forms creates an XSD schema of the flow that combines all the fields in all the forms in the flow. For example, if you go to the Flows home page and click to download or open a flow's schema, you'll see that it contains elements for all the controls in each form in the flow. When designing the forms you want to use in a flow, be aware that if controls in the different forms have the same name, their data will merge in the XML document that Live Forms generates when the flow runs and is submitted. While the Form Designer automatically prevents you from giving two controls the same name within the same form, it doesn't prevent you from giving controls in different forms the same name. For example, suppose one or more forms in a flow have a text control named FirstName. On one form, this might be an employee's first name; on another, it might be a manager's or customer's first name. When the flow runs and is submitted, the two first names will merge in the resulting XML file. To avoid this, you should give the fields unique names — such as EMPFirstName' or MGRFirstName' — so they'll be separate elements in the flow's XSD schema and separate pieces of data in the submitted flow's XML file. Another way to avoid data merging is to "nest" controls with the same name within Section controls (which control nesting) that have different names in different forms. For example, suppose you have one form with a Section Control named Employee' that contains a text field named FirstName. A second form has a Section control named Manager that also contains a text field named FirstName. If you use both of these forms in a flow, the FirstName data does not merge because the two controls are nested — at the same level — within Sections that have different names. See Section Controls in Designing Forms for more information on using these controls. Be aware that Section controls can themselves contain Section controls, and that this affects the nesting level of the controls they contain. The same data merge consideration is true for form data generated by schemas. For example, suppose you're designing an employee performance review flow that includes two forms. Form 1 contains past performance review data (perhaps read-only) from a database; Form 2 is the same but blank for the current review. 
If both forms update the same database table, they may be generated from the same XSD created by a database connector. The issue is that data in Form 2 (current review) will merge with the Form 1 values (past review) if their schemas are the same. One solution for this problem would be to write two different queries in the database connector so that the past review and current review schemas have different namespaces, which would prevent them from merging. Digital and Wet Signatures: Signed Sections named the same in more than one form used in a flow will not be merged. However, identically named signature controls in two different steps in a flow share the same wet signature.
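To make the Section-nesting behavior described above concrete, here is a simplified illustration of the submitted flow XML; the element names and values are hypothetical and not the exact schema frevvo generates:
<flow>
  <Employee>                 <!-- Section named "Employee" in Form 1 -->
    <FirstName>Ada</FirstName>
  </Employee>
  <Manager>                  <!-- Section named "Manager" in Form 2 -->
    <FirstName>Grace</FirstName>
  </Manager>
</flow>
Because the two FirstName elements sit under differently named parent sections, their data does not merge when the flow is submitted.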
https://docs.frevvo.com/d/display/frevvo81/Designing+Steps+in+a+Flow
2020-02-17T00:39:23
CC-MAIN-2020-10
1581875141460.64
[array(['/d/images/icons/linkext7.gif', None], dtype=object)]
docs.frevvo.com
Announcing Compatibility Certification of Windows 8, Microsoft Office 2013 and Internet Explorer 10 with Dynamics AX 2009 SP1 Dynamics AX Sustained Engineering is proud to announce the following compatibility between a released version of Dynamics AX and Windows 8, Microsoft Office 2013 and Internet Explorer 10. The System requirements of Dynamics AX 2009 SP1 have been updated. Follow the Dynamics AX Sustained Engineering blog for the latest updates. For further information, feel free to contact [email protected].
https://docs.microsoft.com/en-us/archive/blogs/dynamicsaxse/announcing-compatibility-certification-of-windows-8-microsoft-office-2013-and-internet-explorer-10-with-dynamics-ax-2009-sp1
2020-02-17T02:21:10
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Microsoft SDL and Microsoft Tag Tag! you’re it. Have you played with Microsoft Tag yet? If not check out. It’s great for providing follow up information for events, business cards, marketing collateral. You simply download the phone app from and snap the image below. Once you snap the image, it takes you to the URL or resource you’ve designated behind the image. In this case,. I think it’s pretty cool, so check it out.
https://docs.microsoft.com/en-us/archive/blogs/georgeop/microsoft-sdl-and-microsoft-tag
2020-02-17T02:32:04
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Deletes a monitoring schedule. Also stops the schedule if it had not already been stopped. This does not delete the job execution history of the monitoring schedule. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. delete-monitoring-schedule --monitoring-schedule-name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --monitoring-schedule-name (string) The name of the monitoring schedule.
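For example, the following call deletes a schedule named my-monitoring-schedule (a placeholder name used here for illustration):
aws sagemaker delete-monitoring-schedule \
    --monitoring-schedule-name my-monitoring-schedule
The command produces no output on success.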
https://docs.aws.amazon.com/cli/latest/reference/sagemaker/delete-monitoring-schedule.html
2020-02-17T01:43:31
CC-MAIN-2020-10
1581875141460.64
[]
docs.aws.amazon.com
Take a peer offline Use the CLI splunk offline command to take a peer offline. Depending on your needs, you can take a peer offline temporarily or permanently; a temporary offline is usually done to perform an upgrade or other maintenance for a short period of time. By using the offline command, you minimize any disruption to your searches. During and after a peer goes offline, the cluster performs remedial actions to regain its valid and complete states: - A valid cluster has primary copies of all its buckets and can therefore handle search requests across its entire set of data. - A complete cluster has a full complement of bucket copies, as determined by its replication and search factors. Note: The peer goes down after a maximum of 5-10 minutes, even if searches are still in progress. If you are performing an operation that involves taking many peers offline, you should first estimate the cluster recovery time. Estimate the cluster recovery time when a peer goes offline When a peer goes offline for a period that exceeds restart_timeout, the cluster initiates remedial bucket-fixing activities to return the cluster to a complete state.
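For example, run the following on the peer you want to take down; depending on your Splunk version, permanent removal uses the --enforce-counts flag, which makes the cluster finish its bucket fixup before the peer goes down:
# Temporary maintenance (fast shutdown, minimal up-front bucket fixing):
splunk offline

# Permanent removal: wait until the cluster has fixed up its buckets first
splunk offline --enforce-counts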
https://docs.splunk.com/Documentation/Splunk/6.0/Indexer/Takeapeeroffline
2020-02-17T02:14:39
CC-MAIN-2020-10
1581875141460.64
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
The topics in this section provide an overview of how to prepare and integrate the Dashboard Control into a desktop or web application. eXpressApp Framework applications provide a ready-to-use UI for designing and viewing dashboards in WinForms and ASP.NET applications. Users can create dashboards at runtime and persist them in the application database. A list of application dashboards, which can be invoked from the navigation panel, is accompanied by actions used to manage dashboards (create, view, modify). Refer to the Dashboards Module section to learn more.
https://docs.devexpress.com/Dashboard/117113/designer-and-viewer-applications
2020-02-17T01:15:47
CC-MAIN-2020-10
1581875141460.64
[]
docs.devexpress.com
Tier 1 Switch Support Read This First - If a switch release is not shown in this table, Genesys does not support it. - Information on supported hardware and third-party software required to run Genesys applications is available in the Genesys Supported Operating Environment Reference Guide Wiki. - We upgrade the switch to the latest switch version and test it with the latest GA version of T-Server. - We upgrade the switch to the latest link version and test it with the latest GA version of T-Server. - We announce the support based on the latest switch version. Customers may choose the latest link version or older link versions. Any compatibility issues discovered between the latest switch version and link version need to be addressed with the switch vendor. - The latest GA version of T-Server will then support all previous versions of the switch in compatibility mode. - If you need to upgrade the switch version, the latest GA version of T-Server must be used together with any previous Genesys suite versions. The only Genesys migration activity needed is a T-Server upgrade. - We recommend that you upgrade the PBX version first, and the T-Server second. If you do not plan to upgrade the PBX, there is no clear need to upgrade the T-Server.
https://docs.genesys.com/Documentation/System/latest/SMI/Tier1SwitchSupport
2020-02-17T01:36:36
CC-MAIN-2020-10
1581875141460.64
[]
docs.genesys.com
Server Calls¶ Server calls are calls made by your server to planviewer.nl. Note: when a call returns a 403 HTTP status code, two things are possible: you are not using the right key/secret combination, or you are connecting over plain http instead of https. - Application - Viewers - Layers - List the layers in a viewer - Sort the layers in a viewer - Create a new layer for a viewer - Upload a new vector layer for a viewer - Get a layer’s details - Update a layer’s details - Delete a layer - Get the feature data for a vector layer - Upload the SLD for a vector layer - Remove the SLD for a vector layer - Check if a vector layer SLD exists - Upload new data for an existing vector layer - Get the legend for a layer - Add feature info and a geometry to a vector layer - List all properties and geometry of a vector layer - Remove properties and geometry of a vector layer - Update properties and the geometry of a vector layer - Field mappings
https://docs.planviewer.nl/mapsapi/server_calls/index.html
2020-02-17T00:48:07
CC-MAIN-2020-10
1581875141460.64
[]
docs.planviewer.nl
RedHat Linux (64-bit)¶ This document describes how to install Safespring BaaS on RedHat Enterprise Linux (64-bit). There are two ways of installing BaaS: * Manually signing up nodes, * Automatically signing up nodes In both cases, the software is distributed through RPM repositories and the first parts of the installation are identical. 1. Configure the repository¶ The original instructions on the repositories are found at Github. They are replicated here for simplicity. The repositories are located at (though this page is currently not indexed). EL6¶ CentOS 6.7 and RedHat EL 6.7 are tested. curl -o /etc/pki/rpm-gpg/RPM-GPG-KEY-IPnett \ rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-IPnett curl -o /etc/yum.repos.d/ipnett-el6.repo \ The commands will: - Download our RPM GPG signing key, - Import the key, - Install the repo file for use by the package management system. EL7¶ CentOS 7 and RedHat EL 7 are tested. curl -o /etc/pki/rpm-gpg/RPM-GPG-KEY-IPnett \ rpmkeys --import /etc/pki/rpm-gpg/RPM-GPG-KEY-IPnett curl -o /etc/yum.repos.d/ipnett-el7.repo \ The commands will: - Download our RPM GPG signing key, - Import the key, - Install the repo file for use by the package management system. 2. Installation of software¶ Now you must decide whether you want to install a package which allows for automatic node registration or if you prefer, for one reason or another, to manage the node registration yourself. 2.a) Installation with automatic node registration¶ 2.a.1) Installation of software¶ The following command will install a package that contains an enrollment-script, and it depends on the regular ipnett-baas package, which in turn depends on the TSM software: yum install ipnett-baas-setup 2.a.2) Automatic enrollment¶ After successfully having installed the software, the service can be automatically enrolled with by using the ipnett-baas-setup program. Brief usage instructions are listed below: # ipnett-baas-setup /usr/bin/ipnett-baas-setup [-a application] [-c ON|OFF] [-C cost_center] [-d ON|OFF] [-D ON|OFF] [-e ON|OFF] [-f credentials_file] [-H host_name] [-i host_description] [-m mail_address] [-p platform] [-t auth_token] Also, see man ipnett-baas-setup for more information The program requires an authentication token to communicate with the API. It is recommended that an API key with enrollment capabilities are used to automatically enroll hosts, since these have limited permissions due to risk of key misplacement. The API-key can be given to the program in two ways. A) ipnett-baas-setup -f $path-to-file, expects a YaML formatted file: access_key_id: $the_access_key secret_access_key: $the_secret_key B) ipnett-baas-setup -t $token, expects a base64 encoding of the key and secret key: echo -n $the_access_key:$the_secret_key | openssl enc -base64 -e A typical invocation of the script would be something like: ipnett-baas-setup -f /root/ipnett-baas-credentials.yaml -m [email protected] -C $costcenter -p RHEL-6 The platform switch, -p, is RHEL-6 for EL6-like systems and RHEL-7 for EL7-like systems. More detailed usage instructions are found in the man-page, man ipnett-baas-setup. Please study the man page to see all the available instructions. 2.a.3) Service activation¶ ipnett-baas-setup will, after a fully successful invocation, on EL6/7 both activate and launch the dsmcad service. ipnett-baas It will install TSM and prepare it for operations with the service (i.e. install CA certificates, etc). But it will not itself register the client with the service. 
The manual routine for node registration is thus described below. 2.b.1) Create a node in the BaaS Portal¶ You must first create a node (backup client entitlement) in the BaaS Portal (or using the API). When you create a node, you receive both a nodename and a password.) Enable TSM autostart¶ EL6: chkconfig dsmcad on EL7: systemctl enable dsmcad 2.b.5) Start TSM¶ EL6: service dsmcad start EL7:. TODO: Add examples here
https://docs.safespring.com/backup/install/rhel/
2020-02-17T00:37:31
CC-MAIN-2020-10
1581875141460.64
[]
docs.safespring.com
Life Cycle Event to Game Server A lifecycle event is a mechanism for notifying the game server that an event occurred for an application. The following events are currently provided. See also here for more information. Event Description addapp An application was installed removeapp An application was uninstalled joingroup A user joined an official group of the game leavegroup A user left the official group of the game postdiary A user wrote a diary entry related to the game Revision History 03/15/2013 Document migrated
https://docs.mobage.com/display/JPSPBP/LCE+to+GameServer
2020-02-17T00:33:34
CC-MAIN-2020-10
1581875141460.64
[]
docs.mobage.com
The Funny QnA Valentine K!
https://docs.microsoft.com/en-us/archive/blogs/betsya/the-funny-qna-valentine
2020-02-17T02:43:18
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
New! Get a 14 day free trial by clicking here! Integrate your Lead Manager with BombBomb to send video emails to leads from within the Lead Manager. In the Lead Manager, you will be able to track your leads actions: opening the email, watching the video. This information is available in the leads Activity History. BombBomb also has a section on their integrations page with a screencast on how to use Real Geeks integration:- Now your website is connected with BombBomb. Visit a lead page on your Lead Manager and you'll see a BombBomb widget on right column to record and send a video to that lead. You will need an API Key provided by BombBomb to enable the integration. To find this: Note that if you click Reset my API Key your key will change, and your integration will stop working. You will then have to replace your old key over at Real Geeks to repair it. New leads (shortly after their first assignment to an agent) will be automatically imported into your BombBomb account as a contact. This includes the leads: The lead will also automatically be added to a list in BombBomb based on their source. This includes leads generated from other sources like Zillow, Realtor.com and Zapier. For example, leads that sign up via a property search on your website will be added to a list titled “RG Website Property Search”. Likewise, a lead imported to the lead manager through zillow will be added to a BombBomb list titled like “RG Zillow”. What about Existing Leads? Once the integration is enabled a widget will be available on the right column of a lead page in your Lead Manager Just click Send BombBomb to record a video and send the email to this lead. Every email you send will be automatically saved in your BombBomb account under the Emails tab. Activities will be created when the lead opens your email and watches the video. It's also possible to see if a lead unsubscribes from your BombBomb list. Note that if a lead unsubscribes from your BombBomb emails it does not affect their property updates emails or any other emails sent by the Lead Manager. Is BombBomb connected to one user, or all of them? Can I have individual accounts for each user? At present, the BombBomb integration is one account per Lead Manager. This means that an account personalized to a single user will show these personalizations for all users who use that BombBomb link. This may change in the future, but at the current time it isn't possible to change this. If I update the lead on the Lead Manager will it update on BombBomb and vice-versa? No. At this moment changing the lead on the Lead Manager will not modify the associated BombBomb contact. We are working on a solution to this. I'm on a trial, and BombBomb isn't sending e-mails! BombBomb will only allow you to e-mail the first fifty contacts that you add to your trial account. You will need to purchase a paid plan to e-mail the other, “oversubscribed” contacts before they can be messaged. BombBomb does not notify Real Geeks on updates to contact details, so changes to BombBomb will not affect your lead on the Lead Manager
https://docs.realgeeks.com/bombbomb
2020-02-17T01:14:11
CC-MAIN-2020-10
1581875141460.64
[]
docs.realgeeks.com
A New Home on the Web for the Windows Communication Foundation and the .NET Framework 3.0.
https://docs.microsoft.com/en-us/archive/blogs/clemensv/a-new-home-on-the-web-for-the-windows-communication-foundation-and-the-net-framework-3-0
2020-02-17T02:15:50
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Prepare Active Directory for Skype for Business Server Summary: Learn how to prepare your Active Directory domain for an installation of Skype for Business Server. Download a free trial of Skype for Business Server from the Microsoft Evaluation Center. Skype for Business Server works closely with Active Directory. You must prepare the Active Directory domain to work with Skype for Business Server. This process is accomplished in the Deployment Wizard and is only done once for the domain. This is because the process creates groups and modifies the domain, and you need to do that only once. You can do steps 1 through 5 in any order. However, you must do steps 6, 7, and 8 in order, and after steps 1 through 5, as outlined in the diagram. Preparing Active Directory is step 4 of 8. For more information about planning for Active Directory, see Environmental requirements for Skype for Business Server or Server requirements for Skype for Business Server 2019. Prepare Active Directory Skype for Business Server is tightly integrated with Active Directory Domain Services (AD DS). Before Skype for Business Server can be installed for the first time, Active Directory must be prepared. The section of the Deployment Wizard titled Prepare Active Directory prepares the Active Directory environment for use with Skype for Business Server. Note Skype for Business Server uses (AD DS) to track and communicate with all of the servers in a topology. The majority of these servers must be joined to the domain so that Skype for Business Server can work properly. Keep in mind that servers such as Edge and Reverse Proxy should not be domain joined. Important The Prepare Active Directory procedure should be run only once for each domain in the deployment. Watch the video steps for Prepare Active Directory: Prepare Active Directory from the Deployment Wizard Log on as a user with Schema Admins credentials for the Active Directory domain. Open Skype for Business Server Deployment Wizard. Tip If you want to review the log files that are created by the Skype for Business Server Deployment Wizard, you can find them on the computer where the Deployment Wizard was run, in the Users directory of the AD DS user who ran the step. For example, if the user logged on as the domain administrator in the domain, contoso.local, the log files are located in: C:\Users\Administrator.Contoso\AppData\Local\Temp. Click the Prepare Active Directory link. Step 1: Prepare schema a. Review the prerequisites information for Step 1 which can be accessed by clicking the drop-down under the Step 1 title. b. Click Run in Step 1 to launch the Prepare Schema wizard. c. Take note that the procedure should be run only once for each deployment, and then click Next. d. Once the schema has been prepared, you can view the log by clicking View Log. e. Click Finish to close the Prepare Schema wizard, and return to the Prepare Active Directory steps. Step 2: Verify replication of schema partition a. Log on to the domain controller for the domain. b. Open ADSI Edit from the Tools drop-down menu in Server Manager. c. On the Action menu, click Connect to. d. In the Connection Settings dialog box under Select a well known Naming Context, select Schema, and then click OK. e. a. Review the prerequisites information for Step 3 which can be accessed by clicking the drop-down under the Step 3 title. b. Click Run in Step 3 to launch the Prepare Current Forest wizard. c. Take note that the procedure should be only run once per deployment, and then click Next. d. 
Specify the domain where the universal groups will be created. If the server is part of the domain, you can choose Local domain, and click Next. e. Once the forest has been prepared, you can view the log by clicking View Log. f. Click Finish to close the Prepare Current Forest wizard, and return to the Prepare Active Directory steps. g. Click Skype for Business Server Management Shell from the Apps page to launch PowerShell. h. Type the command Get-CsAdForest, and press Enter. i. If the result is LC_FORESTSETTINGS_STATE_READY, the forest has successfully been prepared, as shown in the figure. Step 4: Verify replication of the global catalog a. On a domain controller (preferably in a remote site from the other domain controllers), in the forest where the Forest Preparation was run, open Active Directory Users and Computers. b. In Active Directory Users and Computers, expand the domain name of your forest or a child domain. c. Click the Users container on the left side pane, and look for the Universal group CsAdministrator in the right side pane. If CsAdministrator (among other new Universal groups that begin with Cs) is present, Active Directory replication has been successful. d. If the groups are not yet present, you can force the replication, or wait 15 minutes and refresh the right side pane. When the groups are present, replication is complete. Step 5: Prepare the current domain a. Review the prerequisites information for Step 5. b. Click Run in Step 5 to launch the Prepare Current Domain wizard. c. Take note that the procedure should only be run once for each domain in the deployment, and then click Next. d. Once the domain has been prepared, you can view the log by clicking View Log. e. Click Finish to close the Prepare Current Domain wizard, and return to the Prepare Active Directory steps. These steps must be completed in every domain where Skype for Business Server objects are found, otherwise services might not start. This includes any type of Active Directory object, such as users, contact objects, administrative groups, or any other type of object. You can use Set-CsUserReplicatorConfiguration -ADDomainNamingContextList to add only the domains with Skype for Business Server objects, if needed. Step 6: Verify replication in the domain a. Click the Skype for Business Server Management Shell from the Apps page to launch PowerShell. b. Use the command Get-CsAdDomain to verify replication within the domain. Get-CsAdDomain [-Domain <Fqdn>] [-DomainController <Fqdn>] [-GlobalCatalog <Fqdn>] [-GlobalSettingsDomainController <Fqdn>] Note If you do not specify the Domain parameter, the value is set to the local domain. Example of running the command for the contoso.local domain: Get-CsAdDomain -Domain contoso.local -GlobalSettingsDomainController dc.contoso.local Note forest. If the global settings are in the Configuration container (which is typical with new deployments or upgrade deployments where the settings have been migrated to the Configuration container), you define any domain controller in the forest. If you do not specify this parameter, the cmdlet assumes that the settings are stored in the Configuration container and refers to any domain controller in Active Directory. c. If the result is LC_DOMAINSETTINGS_STATE_READY, the domain has successfully replicated. Step 7: Add users to provide administrative access to the Skype for Business Server Control Panel a. Log on as a member of the Domain Admins group or the RTCUniversalServerAdmins group. b. 
Open Active Directory Users and Computers, expand your domain, click the Users container, right-click CSAdministrator, and choose Properties. c. In CSAdministrator Properties, click the Members tab. d. On the Members tab, click Add. In Select Users, Contacts, Computers, Service Accounts, or Groups, locate the Enter the object names to select. Type the user name(s) or group name(s) to add to the group CSAdministrators. Click OK. e. On the Members tab, confirm that the users or groups that you selected are present. Click OK. Caution The Skype for Business Server Control Panel is a role-based access control tool. Membership in the CsAdministrator group gives a user who is using the Skype for Business Server Control Panel full control for all configuration functions available. There are other roles available that are designed for specific functions. For details on the roles available, see Environmental requirements for Skype for Business Server or Server requirements for Skype for Business Server 2019. Note that users do not have to be enabled for Skype for Business Server in order to be made members of the management groups. Caution To help retain security and role-based access control integrity, add users to the groups that define what role the user performs in management of the Skype for Business Server deployment. Log off, and then log back on to Windows so that your security token is updated with the new Skype for Business Server security group, and then reopen the Deployment Wizard. Verify that you see a green checkmark next to Prepare Active Directory to confirm success, as shown in the figure. See also Active Directory Domain Services for Skype for Business Server 2015
https://docs.microsoft.com/en-us/skypeforbusiness/deploy/install/prepare-active-directory?redirectedfrom=MSDN
2020-02-17T02:15:08
CC-MAIN-2020-10
1581875141460.64
[array(['../../sfbserver/media/2c52d307-7859-4009-9489-024b2e130bb3.png', 'overview diagram'], dtype=object) ]
docs.microsoft.com
IDragSourceHelper2 interface Exposes a method that adds functionality to IDragSourceHelper. This method sets the characteristics of a drag-and-drop operation over an IDragSourceHelper object. Inheritance The IDragSourceHelper2 interface inherits from IDragSourceHelper. IDragSourceHelper2 also has these types of members: Methods The IDragSourceHelper2 interface has these methods. Remarks This interface also provides the methods of the IDragSourceHelper interface, from which it inherits. If you want to adjust the behavior of the drag image by calling IDragSourceHelper2::SetFlags, that call should be made before you call InitializeFromWindow or InitializeFromBitmap.
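A hedged C++ sketch of the call ordering described in the remarks above; it assumes COM is already initialized and that hwndSource, ptOffset, and pDataObject already exist in the caller:
#include <windows.h>
#include <shobjidl.h>

IDragSourceHelper2 *pHelper = nullptr;
HRESULT hr = CoCreateInstance(CLSID_DragDropHelper, nullptr, CLSCTX_INPROC_SERVER,
                              IID_PPV_ARGS(&pHelper));
if (SUCCEEDED(hr))
{
    // Adjust drag-image behavior before initializing the drag image.
    pHelper->SetFlags(DSH_ALLOWDROPDESCRIPTIONTEXT);
    pHelper->InitializeFromWindow(hwndSource, &ptOffset, pDataObject);
    pHelper->Release();
}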
https://docs.microsoft.com/en-us/windows/win32/api/shobjidl/nn-shobjidl-idragsourcehelper2?redirectedfrom=MSDN
2020-02-17T02:00:37
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Getting Started¶ Add the latest web3j version to your project build configuration. Maven¶ Java 8: <dependency> <groupId>org.web3j</groupId> <artifactId>core</artifactId> <version>4.5.5</version> </dependency> Android: <dependency> <groupId>org.web3j</groupId> <artifactId>core</artifactId> <version>4.2.0-android</version> </dependency> Start a client¶ Start up an Ethereum client if you don’t already have one running, such as Geth: $ geth --rpcapi personal,db,eth,net,web3 --rpc --rinkeby $ parity --chain testnet Or use Infura, which provides free clients running in the cloud: Web3j web3 = Web3j.build(new HttpService("")); For further information refer to Using Infura with web3j. Instructions on obtaining Ether to transact on the network can be found in the testnet section of the docs. When you no longer need a Web3j instance you need to call the shutdown method to close resources used by it. web3.shutdown() Start sending requests¶ To send synchronous requests: Web3j web3 = Web3j.build(new HttpService()); // defaults to Web3ClientVersion web3ClientVersion = web3.web3ClientVersion().send(); String clientVersion = web3ClientVersion.getWeb3ClientVersion(); To send asynchronous requests using a CompletableFuture (Future on Android): Web3j web3 = Web3j.build(new HttpService()); // defaults to Web3ClientVersion web3ClientVersion = web3.web3ClientVersion().sendAsync().get(); String clientVersion = web3ClientVersion.getWeb3ClientVersion(); To use an RxJava Flowable: Web3j web3 = Web3j.build(new HttpService()); // defaults to web3.web3ClientVersion().flowable().subscribe(x -> { String clientVersion = x.getWeb3ClientVersion(); ... }); IPC¶ web3j also supports fast inter-process communication (IPC) via file sockets to clients running on the same host as web3j. To connect simply use the relevant IpcService implementation instead of HttpService when you create your service: // OS X/Linux/Unix: Web3j web3 = Web3j.build(new UnixIpcService("/path/to/socketfile")); ... // Windows Web3j web3 = Web3j.build(new WindowsIpcService("/path/to/namedpipefile")); ... Note: IPC is not available on web3j-android. Working with smart contracts web3j’s Command Line Tools: web3j solidity generate . Transactions¶ web3j provides support for both working with Ethereum wallet files (recommended) and Ethereum client admin commands for sending transactions. To send Ether to another party using your Ethereum wallet file:(); Or if you wish to create your own custom transaction: Web3j web3 = Web3j.build(new HttpService()); // defaults to Credentials credentials = WalletUtils.loadCredentials("password", "/path/to/walletfile"); // get the next available nonce EthGetTransactionCount ethGetTransactionCount = web3j.ethGetTransactionCount( address, DefaultBlockParameterName.LATEST).send(); BigInteger nonce = ethGetTransactionCount.getTransactionCount(); // create our transaction RawTransaction rawTransaction = RawTransaction.createEtherTransaction( nonce, <gas price>, <gas limit>, <toAddress>, <value>); // sign & send our transaction byte[] signedMessage = TransactionEncoder.signMessage(rawTransaction, credentials); String hexValue = Numeric.toHexString(signedMessage); EthSendTransaction ethSendTransaction = web3j.ethSendRawTransaction(hexValue).send(); // ... Although it’s far simpler using web3j’s Transfer for transacting with Ether. 
Using an Ethereum client’s admin commands (make sure you have your wallet in the client’s keystore): Admin web3j = Admin.build(new HttpService()); // defaults to PersonalUnlockAccount personalUnlockAccount = web3j.personalUnlockAccount("0x000...", "a password").sendAsync().get(); if (personalUnlockAccount.accountUnlocked()) { // send a transaction } If you want to make use of Parity’s Personal or Trace, or Geth’s Personal client APIs, you can use the org.web3j:parity and org.web3j:geth modules respectively. Publish/Subscribe (pub/sub)¶ Parity() Command line tools¶ A web3j fat jar is distributed with each release providing command line tools. The command line tools allow you to use some of the functionality of web3j from the command line: - Wallet creation - Wallet password management - Transfer of funds from one wallet to another - Generate Solidity smart contract function wrappers Please refer to the documentation for further information. Further details¶ In the Java 8 build: - web3j provides type safe access to all responses. Optional or null responses are wrapped in Java 8’s Optional type. - Asynchronous requests are wrapped in a Java 8 8.
https://web3j.readthedocs.io/en/latest/getting_started.html
2020-02-17T00:20:47
CC-MAIN-2020-10
1581875141460.64
[]
web3j.readthedocs.io
Print a Copy of a PO Purchasers can access and print off signed copies of their Purchase Orders the day following the PO approval. - Click the green menu button near the upper-left ( ). - Click Purchasing - Click My Purchase Order (for your own POs) or Purchase Order (for other staff’s POs in your purchasing groups) - Click the arrow button to open the PO ( ). - Click Deliveries on the left side - Click the down arrow button, then Download Purchase Order
https://docs.glenbard.org/index.php/uncategorized/print-a-copy-of-a-po/
2021-04-10T18:39:36
CC-MAIN-2021-17
1618038057476.6
[array(['https://docs.glenbard.org/wp-content/uploads/2020/04/Purchase-Order-Deliveries-2020-04-14-10-20-15.png', None], dtype=object) ]
docs.glenbard.org
Text Search Operators¶
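A minimal example of the $text query operator covered on this page, assuming a text index exists on the collection (collection and field names are placeholders):
db.articles.createIndex( { subject: "text" } )
db.articles.find( { $text: { $search: "coffee shop" } } )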
https://docs.mongodb.com/v4.0/core/text-search-operators/
2021-04-10T19:01:32
CC-MAIN-2021-17
1618038057476.6
[]
docs.mongodb.com
New Zealand’s National Climate Database, CliFlo holds data from about 6500 climate stations, with observations dating back to 1850. CliFlo returns raw data at ten minute, hourly, and daily frequencies. CliFlo also returns statistical summaries, inclusive of about eighty different types of monthly and annual statistics and six types of thirty−year normals. The clifro package is designed to minimise the hassle in downloading data from CliFlo. It does this by providing functions for the user to log in, easily choose the appropriate datatypes and stations, and then query the database. Once the data have been downloaded, they are stored as specific objects in R with the primary aim to ensure data visualisation and exploration is done with minimal effort and maximum efficiency. This package extends the functionality of CliFlo by returning stations resulting from simultaneous searches, the ability to visualise where these climate stations are by exporting to KML files, and elegant plotting of the climate data. The vignettes and help files are written with the intention that even inexperienced R users can use clifro easily. Exporting the climate data from R is fairly easy and for more experienced useRs, automated updating of spreadsheets or databases can be made much easier. A current CliFlo subscription is recommended for clifro, otherwise data from only one station is available. The subscription is free and lasts for 2 years or 2,000,000 rows without renewal, which enables access to around 6,500 climate stations around New Zealand and the Pacific. Note this package requires internet access for connecting to the National Climate Database web portal. # Install the latest CRAN release install.packages("clifro") # Or the latest development version if(!require(devtools)) install.packages("devtools") devtools::install_github("ropensci/clifro") # Then load the package library(clifro) The following small example shows some of the core functionality in clifro. We can search for climate stations anywhere in New Zealand and return the station information in the form of a KML file. For example, we can return all the climate stations (current and historic) in the greater Auckland region. all.auckland.st = cf_find_station("Auckland", search = "region", status = "all") cf_save_kml(all.auckland.st, "all_auckland_stations") Note the open stations have green markers and the closed stations have red markers. The only station available for unlimited public access to climate data is the Reefton electronic weather station (EWS). We can download the 2014 wind and rain data and easily visualise the results very easily. public.cfuser = cf_user() # Choose the datatypes daily.wind.rain.dt = cf_datatype(c(2, 3), c(1, 1), list(4, 1), c(1, NA)) # Choose the Reefton EWS station reefton.st = cf_station() # Send the query to CliFlo and retrieve the data daily.datalist = cf_query(user = public.cfuser, datatype = daily.wind.rain.dt, station = reefton.st, start_date = "2012-01-01 00", end_date = "2013-01-01 00") #> connecting to CliFlo... #> reading data... 
#> UserName is = public #> Number of charged rows output = 0 #> Number of free rows output = 732 #> Total number of rows output = 732 #> Copyright NIWA 2020 Subject to NIWA's Terms and Conditions #> See: #> Comments to: [email protected] # Have a look at what data is now available daily.datalist #> List containing clifro data frames: #> data type start end rows #> df 1) Surface Wind 9am only (2012-01-01 9:00) (2012-12-31 9:00) 366 #> df 2) Rain Daily (2012-01-01 9:00) (2012-12-31 9:00) 366 # Plot the data using default plotting methods. The clifro package is released with a contributor code of conduct. By participating in this project you agree to abide by its terms. To cite package ‘clifro’ in publications use: Seers B and Shears N (2015). “New Zealand's Climate Data in R - An Introduction to clifro.” The University of Auckland, Auckland, New Zealand. <URL:>. A BibTeX entry for LaTeX users is @TechReport{, title = {New Zealand's Climate Data in R --- An Introduction to clifro}, author = {Blake Seers and Nick Shears}, institution = {The University of Auckland}, address = {Auckland, New Zealand}, year = {2015}, url = {}, }
https://docs.ropensci.org/clifro/
2021-04-10T19:45:13
CC-MAIN-2021-17
1618038057476.6
[]
docs.ropensci.org
Additional resources As you develop your expertise in authentication and security, we recommend the following ThoughtSpot U course: See other training resources at
https://docs.thoughtspot.com/5.1/admin/setup/configure-SAML-with-tscli.html
2021-04-10T20:10:06
CC-MAIN-2021-17
1618038057476.6
[array(['/5.1/images/ts-u.png', 'ThoughtSpot U'], dtype=object)]
docs.thoughtspot.com
Persisted Grants¶ Many grant types require persistence in IdentityServer. These include authorization codes, refresh tokens, reference tokens, and remembered user consents. Internally in IdentityServer, the default storage for these grants is in a common store called the persisted grants store. Persisted Grant¶ The persisted grant is the data type that maintains the values for a grant. It has these properties: Key - The unique identifier for the persisted grant in the store. Type - The type of the grant. SubjectId - The subject id to which the grant belongs. ClientId - The client identifier for which the grant was created. Description - The description the user assigned to the grant or device being authorized. CreationTime - The date/time the grant was created. Expiration - The expiration of the grant. ConsumedTime - The date/time the grant was “consumed” (see below). Data - The grant specific serialized data. Note The Data property contains a copy of all of the values (and more) and is considered authoritative by IdentityServer, thus the above values, by default, are considered informational and read-only. The presence of the record in the store without a ConsumedTime and while still within the Expiration represents the validity of the grant. Setting either of these two values, or removing the record from the store effectively revokes the grant. Grant Consumption¶. Persisted Grant Service¶ Working with the grants store directly might be too low level. As such, a higher level service called IPersistedGrantService is provided. It abstracts and aggregates the different grant types into one concept, and allows querying and revoking the persisted grants for a user. It contains these APIs: GetAllGrantsAsync - Gets all the grants for a user based upon subject id. RemoveAllGrantsAsync - Removes grants from the store based on the subject id and optionally a client id and/or a session id.
https://identityserver4.readthedocs.io/en/latest/topics/persisted_grants.html
2021-04-10T18:50:50
CC-MAIN-2021-17
1618038057476.6
[]
identityserver4.readthedocs.io
The Vertica sink connector is used to export data produced by the Avro console producer to a Vertica database. Note Before you begin, start the Vertica database and manually create a table using the same name as the Kafka topic, and with the same schema as used for the data in the Kafka topic. The Vertica target table name needs to match the Kafka topic name and you cannot override this table name with a different naming strategy. auto.create is not supported at this moment. Start the Vertica connector by loading its configuration with the following command: Caution You must include a double dash ( -- ) between the connector name and your flag. For more information, see this post. confluent local load VerticaSinkConnector -- -d vertica-sink-connector.properties
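For the prerequisite above (manually creating the target table), here is a hypothetical example for a topic named orders; the column names and types must match your topic's record schema:
CREATE TABLE orders (
    id       INT,
    product  VARCHAR(256),
    quantity INT,
    price    FLOAT
);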
https://docs.confluent.io/5.4.1/connect/kafka-connect-vertica/sink/index.html
2021-04-10T19:44:52
CC-MAIN-2021-17
1618038057476.6
[]
docs.confluent.io
PowerSchool Security setup To set up a staff member in PowerSchool, please follow the directions below. When setting up a staff member, make sure one full day has passed after the staff member has been added to FirstClass. This will help to ensure that the account is ready for PowerSchool. - Create the staff member or search for the staff member - Click on Security Settings - Check Sign in to PowerTeacher - Click LDAP Lookup - A popup window will appear. - If it does not appear, make sure the browser is not blocking popups - Also check to see if the popup opened behind or underneath the current window - Scroll to the bottom of the popup and check the box next to Update Username for Teacher and Admin - If this box does not appear, then there were no matches - If it has not been a full day after the staff member has been entered into PowerSchool, they will not be found - Check the staff member's spelling. If FirstClass and PowerSchool have different spellings, the staff member may not be found - Review the options and select the correct person - Click the select next to the correct match - Adjust the School Affiliations section - Add the school the staff member is a part of, if it is not already present - If this is a traveling teacher, add additional schools - The Home School will be the school the teacher mostly attends - If attendance is equal, then the home school will be where the teacher eats lunch - Click Submit - If the staff member needs PowerSchool Admin, click Admin Access and Roles - Starting with the 2014-2015 school year, all teachers will have access to PowerSchool Admin - Add the schools the staff member needs to access - Only traveling teachers and district staff will have more than one school - Click Submit
https://docs.glenbard.org/index.php/ps-2/admin-ps/general-admin-ps/powerschoo-account-setup/
2021-04-10T19:29:09
CC-MAIN-2021-17
1618038057476.6
[]
docs.glenbard.org
Import all text into a string buffer. More... #include <XMLStringBufferImportContext.hxx> Import all text into a string buffer. Paragraph elements (<text:p>) are recognized and cause a return character (0x0a) to be added. Definition at line 32 of file XMLStringBufferImportContext.hxx. Definition at line 30 of file XMLStringBufferImportContext.cxx. Referenced by createFastChildContext(). Definition at line 38 of file XMLStringBufferImportContext.cxx. This method is called for all characters that are contained in the current element. The default is to ignore them. Reimplemented from SvXMLImportContext. Definition at line 48 of file XMLStringBufferImportContext.cxx. References rTextBuffer. Reimplemented from SvXMLImportContext. Definition at line 42 of file XMLStringBufferImportContext.cxx. References SvXMLImportContext::GetImport(), rTextBuffer, and XMLStringBufferImportContext(). endFastElement is called before a context will be destructed, but after an elements context has been parsed. It may be used for actions that require virtual methods. The default is to do nothing. Reimplemented from SvXMLImportContext. Definition at line 53 of file XMLStringBufferImportContext.cxx. References rTextBuffer, TEXT, u, XML_ELEMENT, and xmloff::token::XML_P. Definition at line 34 of file XMLStringBufferImportContext.hxx. Referenced by characters(), createFastChildContext(), and endFastElement().
https://docs.libreoffice.org/xmloff/html/classXMLStringBufferImportContext.html
2021-04-10T18:23:32
CC-MAIN-2021-17
1618038057476.6
[]
docs.libreoffice.org
Replica Set Arbiter¶ Important Do not run an arbiter on systems that also host the primary or the secondary members of the replica set. To add an arbiter, see Add an Arbiter to Replica Set. For the following MongoDB versions, pv1 increases the likelihood of w:1 rollbacks compared to pv0 for replica sets with arbiters. Security¶ Authentication¶ When running with authorization, arbiters exchange credentials with other members of the set to authenticate. MongoDB encrypts the authentication process, and the MongoDB authentication exchange is cryptographically secure.
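For reference, adding an arbiter from the mongo shell connected to the primary looks like this (the hostname is a placeholder):
rs.addArb("arbiter.example.net:27017")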
https://docs.mongodb.com/v4.0/core/replica-set-arbiter/
2021-04-10T18:14:39
CC-MAIN-2021-17
1618038057476.6
[]
docs.mongodb.com
You're reading the documentation for a version of ROS 2 that has reached its EOL (end-of-life), and is no longer officially supported. If you want up-to-date information, please have a look at Foxy. Implement a custom memory allocator¶ Table of Contents This tutorial will teach you how to integrate a custom allocator for publishers and subscribers so that the default heap allocator is never called while your ROS nodes are executing. The code for this tutorial is available here. Background¶ Suppose you want to write real-time safe code, and you’ve heard about the many dangers of calling “new” during the real-time critical section, because the default heap allocator on most platforms is nondeterministic. By default, many C++ standard library structures will implicitly allocate memory as they grow, such as std::vector. However, these data structures also accept an “Allocator” template argument. If you specify a custom allocator to one of these data structures, it will use that allocator for you instead of the system allocator to grow or shrink the data structure. Your custom allocator could have a pool of memory preallocated on the stack, which might be better suited to real-time applications. In the ROS 2 C++ client library (rclcpp), we are following a similar philosophy to the C++ standard library. Publishers, subscribers, and the Executor accept an Allocator template parameter that controls allocations made by that entity during execution. Writing an allocator¶ To write an allocator compatible with ROS 2’s allocator interface, your allocator must be compatible with the C++ standard library allocator interface. The C++11 library provides something called allocator_traits. The C++11 standard specifies that a custom allocator only needs to fulfil a minimal set of requirements to be used to allocate and deallocate memory in a standard way. allocator_traits is a generic structure that fills out other qualities of an allocator based on an allocator written with the minimal requirements. For example, the following declaration for a custom allocator would satisfy allocator_traits (of course, you would still need to implement the declared functions in this struct): template <class T> struct custom_allocator { using value_type = T; custom_allocator() noexcept; template <class U> custom_allocator (const custom_allocator<U>&) noexcept; T* allocate (std::size_t n); void deallocate (T* p, std::size_t n); }; template <class T, class U> constexpr bool operator== (const custom_allocator<T>&, const custom_allocator<U>&) noexcept; template <class T, class U> constexpr bool operator!= (const custom_allocator<T>&, const custom_allocator<U>&) noexcept; You could then access other functions and members of the allocator filled in by allocator_traits like so: std::allocator_traits<custom_allocator<T>>::construct(...) To learn about the full capabilities of allocator_traits, see . However, some compilers that only have partial C++11 support, such as GCC 4.8, still require allocators to implement a lot of boilerplate code to work with standard library structures such as vectors and strings, because these structures do not use allocator_traits internally. 
Therefore, if you’re using a compiler with partial C++11 support, your allocator will need to look more like this: template<typename T> struct pointer_traits { using reference = T &; using const_reference = const T &; }; // Avoid declaring a reference to void with an empty specialization template<> struct pointer_traits<void> { }; template<typename T = void> struct MyAllocator : public pointer_traits<T> { public: using value_type = T; using size_type = std::size_t; using pointer = T *; using const_pointer = const T *; using difference_type = typename std::pointer_traits<pointer>::difference_type; MyAllocator() noexcept; ~MyAllocator() noexcept; template<typename U> MyAllocator(const MyAllocator<U> &) noexcept; T * allocate(size_t size, const void * = 0); void deallocate(T * ptr, size_t size); template<typename U> struct rebind { typedef MyAllocator<U> other; }; }; template<typename T, typename U> constexpr bool operator==(const MyAllocator<T> &, const MyAllocator<U> &) noexcept; template<typename T, typename U> constexpr bool operator!=(const MyAllocator<T> &, const MyAllocator<U> &) noexcept; Writing an example main¶ Once you have written a valid C++ allocator, you must pass it as a shared pointer to your publisher, subscriber, and executor. auto alloc = std::make_shared<MyAllocator<void>>(); auto publisher = node->create_publisher<std_msgs::msg::UInt32>("allocator_example", 10, alloc); auto msg_mem_strat = std::make_shared<rclcpp::message_memory_strategy::MessageMemoryStrategy<std_msgs::msg::UInt32, MyAllocator<>>>(alloc); auto subscriber = node->create_subscription<std_msgs::msg::UInt32>( "allocator_example", 10, callback, nullptr, false, msg_mem_strat, alloc); std::shared_ptr<rclcpp::memory_strategy::MemoryStrategy> memory_strategy = std::make_shared<AllocatorMemoryStrategy<MyAllocator<>>>(alloc); rclcpp::executors::SingleThreadedExecutor executor(memory_strategy); You will also need to use your allocator to allocate any messages that you pass along the execution codepath. auto alloc = std::make_shared<MyAllocator<void>>(); Once you’ve instantiated the node and added the executor to the node, it’s time to spin: uint32_t i = 0; while (rclcpp::ok()) { msg->data = i; i++; publisher->publish(msg); rclcpp::utilities::sleep_for(std::chrono::milliseconds(1)); executor.spin_some(); } Passing an allocator to the intra-process pipeline¶ Even though we instantiated a publisher and subscriber in the same process, we aren’t using the intra-process pipeline yet. The IntraProcessManager is a class that is usually hidden from the user, but in order to pass a custom allocator to it we need to expose it by getting it from the rclcpp Context. The IntraProcessManager makes use of several standard library structures, so without a custom allocator it will call the default new. auto context = rclcpp::contexts::default_context::get_global_default_context(); auto ipm_state = std::make_shared<rclcpp::intra_process_manager::IntraProcessManagerState<MyAllocator<>>>(); // Constructs the intra-process manager with a custom allocator. context->get_sub_context<rclcpp::intra_process_manager::IntraProcessManager>(ipm_state); auto node = rclcpp::Node::make_shared("allocator_example", true); Make sure to instantiate publishers and subscribers AFTER constructing the node in this way. Testing and verifying the code¶ How do you know that your custom allocator is actually getting called? 
The obvious thing to do would be to count the calls made to your custom allocator’s allocate and deallocate functions and compare that to the calls to new and delete. Adding counting to the custom allocator is easy: T * allocate(size_t size, const void * = 0) { // ... num_allocs++; // ... } void deallocate(T * ptr, size_t size) { // ... num_deallocs++; // ... } You can also override the global new and delete operators: void operator delete(void * ptr) noexcept { if (ptr != nullptr) { if (is_running) { global_runtime_deallocs++; } std::free(ptr); ptr = nullptr; } } void operator delete(void * ptr, size_t) noexcept { if (ptr != nullptr) { if (is_running) { global_runtime_deallocs++; } std::free(ptr); ptr = nullptr; } } where the variables we are incrementing are just global static integers, and is_running is a global static boolean that gets toggled right before the call to spin. The example executable prints the value of the variables. To run the example executable, use: allocator_example or, to run the example with the intra-process pipeline on: allocator_example intra-process You should get numbers like: Global new was called 15590 times during spin Global delete was called 15590 times during spin Allocator new was called 27284 times during spin Allocator delete was called 27281 times during spin We’ve caught about 2/3 of the allocations/deallocations that happen on the execution path, but where do the remaining 1/3 come from? As a matter of fact, these allocations/deallocations originate in the underlying DDS implementation used in this example. Proving this is out of the scope of this tutorial, but you can check out the test for the allocation path that gets run as part of the ROS 2 continuous integration testing, which backtraces through the code and figures out whether certain function calls originate in the rmw implementation or in a DDS implementation: Note that this test is not using the custom allocator we just created, but the TLSF allocator (see below). The TLSF allocator¶ ROS 2 offers support for the TLSF (Two Level Segregate Fit) allocator, which was designed to meet real-time requirements: For more information about TLSF, see Note that the TLSF allocator is licensed under a dual-GPL/LGPL license. A full working example using the TLSF allocator is here:
https://docs.ros.org/en/eloquent/Tutorials/Allocator-Template-Tutorial.html
2021-04-10T19:54:21
CC-MAIN-2021-17
1618038057476.6
[]
docs.ros.org
Domain separation in Service Level Management This is an overview of domain separation in Service Level Management (SLM). Service Level Management helps customers monitor, measure, and report on agreed service level agreements (SLAs); SLA definitions encapsulate these agreements. Users can see only content in the domain to which they have access. How domain separation works in Service Level Management The intention of SLM is to provide customers with an expectation of service within a known timescale and the ability to monitor when service levels are not being met. To learn specific terms and definitions, see Service Level Management concepts. SLA definitions and task SLAs have domain fields. However, a task SLA is created only in the domain of its attached task record. SLA definitions must be defined in a tenant domain (or global) in order for task SLAs to be created and attached to a given task (or extensions). Task SLAs attach to a task if an SLA definition exists in the task record's domain or in an ancestor domain. A task SLA always inherits the domain of its attached task record, which includes the workflow running on the task SLA record. If a task record ever changes domain (flips), the task SLA flips with it. If an SLA definition exists in an ancestor's domain, the definition can be overridden in a sub-domain (delegated administration). Domain-separated tables SLA definition [contract_sla] Task SLA [task_sla] Use cases An ESS user in the ACME domain logs in and creates an incident, at which point an SLA is attached. The SLA is created in the domain of the associated task record (incident), which is the ACME domain. The ESS user is not able to read SLA records. These are restricted to the following roles: Administrator ITIL SLA Administrator SLA Manager An ITIL user in the Acme domain logs in and creates an incident. The process above is the same, except that the ITIL user can read the SLA record attached to the incident. If an SLA definition exists in the Acme domain and doesn't meet the needs of an Acme sub-domain (Acme child), an SLA Administrator can remediate. SLA Administrators can navigate to the ACME SLA definition when their session domain is ACME child, make the relevant changes, and save them. The SLA Administrator is alerted that an override has been created. An ITIL user sets the session domain to Acme child and creates an incident. The task SLA is created using the SLA definition from Acme child. Related topics: Domain separation
https://docs.servicenow.com/bundle/orlando-it-service-management/page/product/service-level-management/concept/domain-separation-sla.html
2021-04-10T18:42:23
CC-MAIN-2021-17
1618038057476.6
[]
docs.servicenow.com
OEApplyStateFromRef This function can be used to change the tautomer state or fix a broken valence state in a 3D molecule, based on a reference state (not required to be 3D). The function matches the substructures and transfers the state of the reference, such as bond orders, formal charge assignments, and hydrogen assignments, to the input molecule. The input molecule can be smaller than the reference molecule, as this is intended to work for small molecule crystal structures where part of the molecule can have been degraded in the experiment. It also works for molecules that have been covalently bound to a protein, such that there is an R-group in place of the broken covalent bond. Due to symmetries in the substructure graph match that are not unique in 3D, the function returns an iterator of output molecules where the state of the reference has been applied. A simple example would be a 3D molecule with two carboxylic acid groups on either side of a benzene ring. If the state being applied has one neutral carboxylic acid and one negatively charged, two output molecules would have to be generated. While they are symmetric, if the 3D molecule is bound in a protein pocket, the two groups are not identical and one very likely fits the local environment better than the other, which will need to be checked. Note: The state of hydrogens being implicit or explicit in the output will match that of the state of the reference molecule. The function currently ignores chirality of the two molecules, since this cannot be changed in a 3D molecule without conformational changes. OESystem::OEIterBase<OEMolBase> * OEApplyStateFromRef(const OEMolBase& input, const OEMolBase& refMol) OESystem::OEIterBase<OEMolBase> * OEApplyStateFromRef(const OEMolBase& input, const std::string& refSmiles) The SMILES version uses OEMolToSmiles to convert the incoming SMILES to a molecule usable for the substructure search. See also - OEMolToSmiles function - OESubSearch function
https://docs.eyesopen.com/toolkits/java/oechemtk/OEChemFunctions/OEApplyStateFromRef.html
2021-04-10T18:49:03
CC-MAIN-2021-17
1618038057476.6
[]
docs.eyesopen.com
Contents: This page contains a set of tips for how to improve the overall performance of job execution. Filter data early If you know that you are dropping some rows and columns from your dataset, add these transform steps early in your recipe. This reduction simplifies working with the content through the application and, at execution, speeds the processing of the remaining valid data. Since you may be executing your job multiple times before it is finalized, it should also speed your development process. - To drop columns: - Select Drop from the column drop-down for individual columns. See Column Menus. - Use the drop transform to remove multiple discrete columns or ranges of columns. See Drop Transform. - To delete rows: Use the delete transform with a row parameter value to identify the rows to remove. For example, the following removes all rows that lack a value for the id column: delete row:ISMISSING(id) You can paste Wrangle steps into the Transformer Page. Similarly, you can use the keep transform to retain the rows of interest, dropping the rows that do not match. For example, the following transform keeps all rows that have a value in the colA column: keep row:NOT(ISMISSING(colA)) Perform joins early Join operations should be performed early in your recipe. These steps bring together your data into a single consistent dataset. By doing them early in the process, you reduce the chance of having changes to your join keys impacting the results of your join operations. See Join Page. Perform unions late Union operations should be performed later in the recipe so that there is less chance of changes to the union operation, including dataset refreshes, affecting the recipe and the output. See Union Page. Run jobs on the default running environment When configuring a job, Trifacta Wrangler Enterprise analyzes the size of your dataset to determine the best of the available running environments on which to execute the job. This option is presented as the default option in the dialog. Unless you have specific reasons for doing otherwise, you should accept the default suggestion.
https://docs.trifacta.com/display/r050/Optimize+Job+Processing
2021-04-10T19:46:30
CC-MAIN-2021-17
1618038057476.6
[]
docs.trifacta.com
Troubleshooting Application Server startup issues A TrueSight Server Automation Application Server fails to start. This topic helps you locate and review the Application Server logs to determine the root cause of the problem, and either identify and resolve the issue or create a BMC Customer Support case. Issue symptoms The Application Server fails to start using any of the documented methods. The following symptoms might be observed: - On a Windows Application Server, the "BladeLogic Application Server" Windows Service fails to start. - On a Linux Application Server, the /etc/init.d/blappserv start command fails with an error. - The Application Server fails to start when attempted from the "Configuration - Infrastructure Management - Application Servers" node in the TrueSight Server Automation Console. - The following message is displayed while attempting to connect to the Application Server from the TrueSight Server Automation Console: "Could not connect to service:authsvc.bladelogic:blauth://appserver:port" Issue scope The problem may affect all or specific Application Servers in your environment. Diagnosing and reporting an issue Resolutions for common issues
https://docs.bmc.com/docs/tssa89/troubleshooting-application-server-startup-issues-956765408.html
2021-04-10T19:34:28
CC-MAIN-2021-17
1618038057476.6
[]
docs.bmc.com
What's New Correction—Updated 26 March 2021 “Jamf Cloud Distribution Service (JCDS) 1.4.2 Enhancements” was incorrectly announced as available in this release of Jamf Pro. It will be available in an upcoming release of Jamf Pro. Compatibility with macOS, iOS, iPadOS, and tvOS Jamf Pro now provides compatibility for the following: macOS 11.3 iOS 14.5 iPadOS 14.5 tvOS 14.5 This includes compatibility for the following management workflows: Enrollment and inventory reporting Configuration profiles App distribution Self Service installation Self Service launches and connections App distribution via Self Service Policies Restricted software Compatibility and new feature support are based on testing with the latest Apple beta releases. Apple Push Notification Service (APNs) HTTP/2 Communication Protocol As announced in the Apple Push Notification Service Update, Apple will no longer support the legacy binary protocol for Apple Push Notification service (APNs) connections. To address this change, beginning with Jamf Pro 10.28.0, HTTP/2 is the default protocol for connections to APNs. Note: If your environment is hosted on-premise and you want to continue to use the binary protocol, you must change the MDM Push Notification Certificate settings. Navigate to Settings > Global Management > Push Certificates and click "MDM Push Notification Certificate". Click Edit and select "Binary" for the protocol in the connection settings. For related information, see the following documentation from Apple: Jamf Protect Deployment Enhancement You can now automatically deploy the Jamf Protect package to computers in the scope of a plan configuration profile. This allows you to skip the manual process of downloading and uploading the Jamf Protect package and using a policy to deploy it. To use this deployment method, you need the following: A Jamf Protect subscription One or more plans in Jamf Protect Registration of your Jamf Protect tenant in Jamf Pro To enable this feature, navigate to Settings > Jamf Applications > Jamf Protect and select the Automatically deploy the Jamf Protect PKG with plans checkbox. For more information, see the Deploying Jamf Platform Products Using Jamf Pro to Connect, Manage, and Protect Mac Computers technical paper. Additional Reporting Capabilities for Computers You can create a smart computer group or an advanced search based on the following criteria: Availability of the RestartDevice MDM Command via the Jamf Pro API You can now use the RestartDevice MDM command to immediately restart computers in your environment. This command is available using the Jamf Pro API. When combined with configuration profiles in Jamf Pro, this command includes the functionality to manage required legacy kernel extensions in macOS 11. You can also enable a macOS notification that requests users to restart the computer at their convenience. For more information, see the Manage Legacy Kernel Extensions in macOS 11 Using Jamf Pro Knowledge Base article. 
Mobile Device Configuration Profiles The following table provides an overview of the mobile device configuration profile enhancements in this release, organized by payload: Additional Remote Commands for Mobile Devices The following remote commands for mobile devices have been added to Jamf Pro: Additional Reporting Capabilities for Mobile Devices You can create a smart mobile device group or an advanced search based on the following criteria: Transitive Groups for Azure Single Sign-On and Cloud Identity Provider When single sign-on (SSO) with Azure is configured in Jamf Pro, you can now enforce transitive membership in the user and group directory lookups when Azure is added as a cloud identity provider. This ensures that all Azure groups that a group is a member of are included in a directory lookup. There is no need to run recursive queries to list groups for which a user is a member. The term "transitive" is used by Microsoft to describe relationships in Active Directory. For more information, see Glossary in the Active Directory Technical Specification from Microsoft. Important: Including transitive membership in lookups may affect Jamf Pro privileges granted for the user account or group. Jamf Pro combines the privileges added for each group the account is a member of. To access this feature, navigate to Settings > System Settings > Cloud Identity Providers and click the Azure instance you want to edit. Click Edit and select the Transitive groups for SSO checkbox. Note: The transitive groups for Azure single sign-on and cloud identity provider feature is not enabled by default. Server Name for LDAP Users or Groups When adding a new LDAP user or group in Jamf Pro, you can now see which directory server configured in Jamf Pro the user or group originates from. The new Server column now displays in the Add LDAP User or Group table in the Add LDAP Account and Add LDAP Group assistants. Active Directory Certificate Services (AD CS) Enhancements Jamf Pro 10.28.0 includes performance enhancements that allow for a larger volume of certificate requests. In addition, the default frequency for the renewal monitor has been changed from 24 hours to 6 hours. Deleting a DigiCert Certificate Authority You can now delete DigiCert certificate authorities (CA) from Jamf Pro. To access this feature, navigate to Settings > Global Management > PKI Certificates, click View on the DigiCert CA that you want to delete, and then click at the bottom of the page. For more information, see the Integrating with DigiCert Using Jamf Pro technical paper. Self Service for macOS Branding Enhancement The main header space in the Self Service for macOS navigation bar has been increased and now adjusts to two lines to support longer organization names. This change was made in response to community feedback and is part of a larger redesign project. Future releases will continue to iterate on the redesign of Self Service for macOS. New URL Scheme for Self Service for iOS If you have the Microsoft Endpoint Manager integration enabled, you can now direct your users to the Register with Microsoft item in Self Service 10.10.5 or later using the following URL scheme: selfserviceios://registerdc Note: Self Service 10.10.5 will be available in the App Store when it is approved by Apple. Volume Purchasing Debug Mode You can now enable debug mode logging for Volume Purchasing in Jamf Pro. This allows you to view the debug logs specific to Volume Purchasing directly in the Jamf Pro user interface. 
In addition, you can enable the Volume Purchasing traffic logs to view the communication logs between Jamf Pro and Apple's servers. To access this feature, navigate to Settings > Jamf Pro Information > Jamf Pro Server Logs > Volume Purchasing tab. Changes to the Jamf Pro Server Actions Privileges The following changes and updates have been made to the privileges in the Jamf Pro Server Actions category of a Jamf Pro user account. These changes only impact functionality in the Jamf Pro API: The Send Mobile Device Shared Device Command privilege has been added and replaces the functionality associated with the Send Mobile Device Quota Size Command, including the functionality with the MaximumResidentUsers MDM command. As a result, the Send Mobile Device Quota Size Command privilege has been removed. Note: When upgrading to Jamf Pro 10.28.0 or later, the Send Mobile Device Shared Device Command privilege will automatically be enabled if the Send Mobile Device Quota Size Command privilege was enabled prior to upgrading. The following privileges have been added: Send Disable Bootstrap Token Command Send Enable Bootstrap Token Command Send Application Attributes Command Send Application Configuration Command Send Set Timezone Command Session Expiration Improvements Jamf Pro now uses a shared user session token for all browser tabs. For example, logging in or out on one tab of Jamf Pro will do the same in all other open tabs of Jamf Pro as well. When presented with a session expiration warning, clicking Continue Session will extend the session for all tabs. Jamf Pro now displays session expiration warnings by dynamically updating the title of the browser tab and animating the favicon. This provides a convenient way to be notified that the session is about to expire even when you are not focused on the tab. Note: The favicon animation is not supported in Internet Explorer 11 and Safari. Other Changes and Enhancements When the default Enforce value for the Kerberos User setup delay setting is included in the computer Single Sign-On Extensions payload, the value for the delayUserSetup key in the configuration profile is now set to false. This better reflects the expected behavior of the User setup delay setting when it is sent to a computer in scope. The Send Update button was removed from the Conditional Access settings. You can still send an update from a computer's inventory information by navigating to History > macOS Intune Logs > Send Update. "Mappings" has been removed from the column titles in the Test table for cloud identity providers. Jamf Pro API Changes and Enhancements The Jamf Pro API is open for user testing. The base URL for the Jamf Pro API is /api. You can access documentation for both the Jamf Pro API and the Classic API from the new API landing page. To access the landing page, append "/api" to your Jamf Pro URL. For example: The following endpoints were added: GET /v1/notifications DELETE /v1/notifications/{type}/{id} PUT /v1/jamf-protect DELETE /v1/pki/venafi/{id} The following endpoints were deprecated: GET /notifications/alerts DELETE /notifications/alerts/{type}/{id} For more information on these changes, see the Jamf Pro API documentation. Further Considerations Feature requests implemented in this release can be viewed at: See Product Documentation for a list of new and recently updated Jamf Pro guides and technical papers. Privileges associated with new features in Jamf Pro are disabled by default. 
It is recommended that you clear your browser's cache after upgrading Jamf Pro to ensure that the Jamf Pro interface displays correctly.
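To exercise the new Jamf Pro API endpoints listed above, a small Python sketch like the following can be used; the server URL and token value are placeholders, and the bearer-token header reflects common Jamf Pro API usage rather than anything specific to this release:
import requests

JAMF_URL = "https://yourserver.jamfcloud.com"  # placeholder Jamf Pro server
TOKEN = "eyJhbGciOi..."  # placeholder bearer token obtained beforehand

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/json",
}

# GET /v1/notifications (one of the endpoints added in this release)
response = requests.get(f"{JAMF_URL}/api/v1/notifications", headers=headers)
response.raise_for_status()
print(response.json())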
https://docs.jamf.com/10.28.0/jamf-pro/release-notes/What's_New.html
2021-04-10T19:46:00
CC-MAIN-2021-17
1618038057476.6
[array(['images/download/attachments/81950058/Screen_Shot_2021-02-19_at_4.23.22_PM.png', 'images/download/attachments/81950058/Screen_Shot_2021-02-19_at_4.23.22_PM.png'], dtype=object) ]
docs.jamf.com
Assigning App Licenses to Users You can distribute app licenses purchased through Apple School Manager to users using managed distribution. With managed distribution, users are invited to receive apps through their Apple ID. This license can be revoked at any time and transferred to another user. If an app requires in-app purchases, you must assign app licenses to users to let them make in-app purchases. Inviting Users In Jamf School, navigate to Users > Users in the sidebar. Search for the user you want to invite to Apple School Manager. Click +Invite User. Click Send Invite to send the invite to the users e-mail address. You can view the invitation status by navigating to Users > Users. Users with a status of “Associated” have accepted the invitation and can receive app licenses. Distributing App Licenses to Users Requirements To distribute app licenses to users, you need: Mobile devices with iOS 7 or later A personal Apple ID assigned to each user. Note: The user must be signed in to the mobile device with their Apple ID to receive apps purchased through Apple School Manager. A service token (VPP token) in Jamf School (For more information, see Integrating Jamf School with Apple School Manager.) Procedure Depending on the type of purchase, apps purchased through Apple School Manager automatically display in either Apps or Documents in the sidebar in Jamf School. In Jamf School, navigate to Apps or Documents in the sidebar. Search for the app or book you want to distribute and click on the name. You can see how many licenses are purchased and how many are still available. Select the Auto-grant VPP licenses to users in scope and Auto-revoke VPP licenses from out-of-scope users checkboxes. Click + to scope the licenses to a group of devices assigned to users you want to assign the license to. For more information on how to create device groups, see Device Groups. Click Save. Revoking App Licenses Revoking a Single License From a User In Jamf School, navigate to User & Groups > Overview in the sidebar. Search for the user that has the license you want to revoke and click on the name. Click VPP Licenses. Click Revoke license for the app you want to revoke the license from. Revoking All Licenses From a User Retiring a user account reclaims any assigned licenses and disassociates the account from your Apple School Manager account. In Jamf School, navigate to User & Groups > Overview in the sidebar. Search for the user that has the license you want to revoke and click on the name. Click Retire user in the VPP section.
https://docs.jamf.com/jamf-school/deploy-guide-docs/Assigning_App_Licenses_to_Users.html
2021-04-10T18:24:57
CC-MAIN-2021-17
1618038057476.6
[]
docs.jamf.com
Spring Integration namespace support: you can choose any name after "xmlns:"; the "int" prefix is simply the convention used throughout this reference. First, you may want to control the central TaskScheduler instance. You can do so by providing a single bean with the name "taskScheduler". This is also defined as a constant: IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME. By default Spring Integration relies on an instance of ThreadPoolTaskScheduler, as described in the Task Execution and Scheduling section of the Spring Framework reference manual. That default TaskScheduler will start up automatically with a pool of 10 threads. If you provide your own TaskScheduler instance instead, you can set the 'autoStartup' property to false, and/or you can provide your own pool size value. A queue-backed message channel is declared with a nested queue element:
<int:channel id="exampleChannel">
    <int:queue/>
</int:channel>
The @MessageEndpoint annotation is itself annotated with Spring's @Component annotation and is therefore recognized automatically as a bean definition when using Spring component-scanning. Even more important annotations are available in Spring Integration; the behavior of each is described in its own chapter or section within this reference. A service activator can reference an inner bean without naming a target method:
<int:service-activator
    <bean class="org.bar.Foo"/>
</int:service-activator>
If the referenced bean exposes more than one candidate method, the target method can be specified explicitly:
<int:service-activator
    <bean class="org.bar.Foo"/>
</int:service-activator>
Now there is no ambiguity since the configuration explicitly maps to the 'bar' method which has no name conflicts.
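Returning to the scheduler configuration described at the top of this section, a minimal sketch of a replacement "taskScheduler" bean might look as follows; the pool size shown is an illustrative value, not a recommendation:
<bean id="taskScheduler"
      class="org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler">
    <!-- overrides the default pool of 10 threads mentioned above -->
    <property name="poolSize" value="20"/>
</bean>
Because the bean name matches IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME, Spring Integration uses this instance instead of creating its own default scheduler.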
https://docs.spring.io/spring-integration/docs/2.2.0.M4/reference/html/configuration.html
2017-10-17T06:03:07
CC-MAIN-2017-43
1508187820927.48
[]
docs.spring.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Describes the specified Systems Manager document. For PCL this operation is only available in asynchronous form. Please refer to DescribeDocumentAsync. Namespace: Amazon.SimpleSystemsManagement Assembly: AWSSDK.SimpleSystemsManagement.dll Version: 3.x.y.z
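As a rough C# sketch of the asynchronous call described above (the document name is a placeholder and error handling is omitted):
using System;
using System.Threading.Tasks;
using Amazon.SimpleSystemsManagement;
using Amazon.SimpleSystemsManagement.Model;

class Program
{
    static async Task Main()
    {
        // Uses the default credential and region resolution of the SDK.
        var client = new AmazonSimpleSystemsManagementClient();

        var response = await client.DescribeDocumentAsync(new DescribeDocumentRequest
        {
            Name = "MyExampleDocument" // placeholder Systems Manager document name
        });

        Console.WriteLine(response.Document.Name);
        Console.WriteLine(response.Document.DocumentVersion);
    }
}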
http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SSM/MSSMSSMDescribeDocumentString.html
2017-10-17T06:11:23
CC-MAIN-2017-43
1508187820927.48
[]
docs.aws.amazon.com
The User Directory Web Part provides two different methods for searching for people within your organization: the Simple Search and the Advanced Search. For SharePoint 2007: If you configured User Directory for the SharePoint user database and selected to search using the MOSS Profile Index, you must have already configured your SharePoint installation to use the MOSS Profile Index in order to use this feature with User Directory.
https://docs.bamboosolutions.com/document/how_to_search_for_people_with_user_directory/
2021-09-17T03:05:36
CC-MAIN-2021-39
1631780054023.35
[]
docs.bamboosolutions.com
A critical aspect in stream processing is the notion of time, and how it is modeled and integrated. For example, some operations such as windowing are defined based on time boundaries. Kafka Streams supports the following notions of time: event-time: the point in time when an event or data record occurred, that is, when it was originally created at the source. In some setups an event-time timestamp is not available, perhaps because the data producers don't embed timestamps (such as with older versions of Kafka's Java producer client) or the producer cannot assign timestamps directly (for example, it does not have access to a local clock). Timestamps: the time a Kafka Streams application advances by is driven by the timestamps of the records it processes; we call it the event-time of the application to differentiate it from the wall-clock-time when this application is actually executing. Other aspects of time
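Because the notion of time ultimately comes down to how a record's timestamp is obtained, the usual hook is a custom TimestampExtractor; in the sketch below the embedded field name is an assumption, and the fallback keeps the timestamp already attached to the record:
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Reads event-time from a field embedded in the record value (assumed to be a Map).
public class EmbeddedFieldTimestampExtractor implements TimestampExtractor {

    @Override
    public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
        final Object value = record.value();
        if (value instanceof Map) {
            final Object ts = ((Map<?, ?>) value).get("eventTimeMs"); // assumed field name
            if (ts instanceof Number) {
                return ((Number) ts).longValue();
            }
        }
        // Fall back to the timestamp set by the producer or broker for this record.
        return record.timestamp();
    }
}
Such an extractor would typically be registered through the default.timestamp.extractor configuration property of the Streams application.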
https://docs.confluent.io/5.3.1/streams/concepts.html
2021-09-17T03:06:08
CC-MAIN-2021-39
1631780054023.35
[]
docs.confluent.io
Beforehand, make sure that you have created a Google Cloud Function in your Google Cloud project. When creating the connector, choose Google Cloud Function as the type (you may add the additional Authorization parameter as stated in the beginning of this doc). Click on save to save the connector. Visit the alert source (view) whose incidents should trigger your serverless function. Navigate to the Incident actions tab and click on the Create incident action button. Choose Google Cloud Function as the type and select your previously created connector. Enter a name and the URL targeting your public function. You may also customize the HTTP request body that is used to invoke your function. Click on save to create the incident action; you may test the connection in the following screen.
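For reference, a minimal HTTP-triggered Cloud Function that could receive such an incident action call might look like the Python sketch below; the fields read from the payload depend entirely on the request body you configure in iLert and are assumptions here:
# main.py - deployed with an HTTP trigger, for example:
#   gcloud functions deploy ilert_incident --runtime python39 --trigger-http --allow-unauthenticated
def ilert_incident(request):
    """Handles the HTTP POST sent by the iLert incident action."""
    payload = request.get_json(silent=True) or {}

    # These keys mirror whatever you put into the customized request body in iLert;
    # they are illustrative, not a fixed iLert schema.
    summary = payload.get("summary", "unknown incident")
    status = payload.get("status", "unknown")

    print(f"Received iLert incident: {summary} (status: {status})")
    return ("ok", 200)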
https://docs.ilert.com/integrations/gcf/
2021-09-17T05:09:37
CC-MAIN-2021-39
1631780054023.35
[]
docs.ilert.com
PATRIC Groups Overview In PATRIC, “Groups” are custom collections of selected genomes or features. They are particularly useful for organizing and managing data sets of interest for further exploration and analysis. See also: Creating and Accessing Groups on the PATRIC Website A group can be created by selecting a set of desired items (genomes or features) in a table in PATRIC and clicking the Group button on the vertical green Action Bar on the right side of the table. This opens a pop-up window that lets you create a new group containing the selected items or add the selected items to an existing group in the Workspace. Once created, the new group will appear in the home Workspace. By default, Genome Groups will appear in the “Genome Groups” folder, and likewise, Feature Groups will appear in the “Feature Groups” folder. Using Workspace Groups in the PATRIC Website Many PATRIC features are available to work with data in groups, including analyzing the items in the group with PATRIC’s tools. For example, after creating a Genome Group, you could use the Phylogenetic Tree Building Service to build a phylogenetic tree using the genomes in the group by selecting the group from the “select genome group” dropdown list. All the genome groups you have created will appear in this list. Managing Groups in the Workspace An initial set of directory folders is provided as default locations for groups based on data type, including Genome Groups and Feature Groups. Double-clicking the folder displays a list of the groups in that folder. Clicking the group name selects it, and information about the group is provided in the Information Panel on the right-hand side. See Workspace for more information. Group Comparison The PATRIC workspace provides a Venn Diagram tool for comparing the membership of items in groups. Selecting 2 or 3 groups and clicking the Venn Diagram button displays an interactive Venn Diagram showing the selected groups and the counts of items from each group in the intersecting and non-intersecting sections. Clicking one of the sections (or multi-selecting sections) selects the corresponding items. Action Bar After selecting one or more groups in the Workspace, a set of options becomes available in the vertical green Action Bar on the right side of the table. These include: Hide: Toggles (hides) the right-hand side Details Pane. Genomes: Displays the Genomes Table, listing the genomes that correspond to the selected group. Venn Diagram: Displays an interactive Venn diagram showing the intersection of up to 3 genome groups. Available only when more than one group is selected. Download: Downloads the selected item. Delete: Deletes the selected items (rows). Rename: Allows renaming the selected item. Copy: Creates copies of the selected items and allows the copies to be put into another folder in the Workspace. Move: Allows moving of the selected item(s) into another folder in the Workspace.
https://docs.patricbrc.org/user_guides/workspaces/groups.html
2021-09-17T03:15:37
CC-MAIN-2021-39
1631780054023.35
[array(['../../_images/create_group.png', 'Creating a Group'], dtype=object) array(['../../_images/genome_group.png', 'Genome Group'], dtype=object) array(['../../_images/phylo_tree_genome_group.png', 'Phylogenetic Tree Using Genome Group'], dtype=object) array(['../../_images/venn_diagram_action.png', 'Venn Diagram Button'], dtype=object) array(['../../_images/venn_diagram.png', 'Venn Diagram'], dtype=object)]
docs.patricbrc.org
RSK2RSK.m Arguments Input -Required- RSK -Optional- outputdir: directory for output rsk file, current directory as default - suffix : string to append to output rsk file name, default is current time in format of YYYYMMDDTHHMM Output newfile- file name of output rsk file RSK2RSK writes a new RSK file containing the data and various metadata from the Matlab rsk structure. It is designed to store post-processed data in a sqlite file that is readable by Ruskin. The new rsk file is in "EPdesktop" format, which is the simplest Ruskin table schema. RSK2RSK effectively provides a convenient method for Matlab users to easily share post-processed RBR logger data with others without recourse to CSV, MAT, or ODV files. The tables created by RSK2RSK include: - channels - data - dbinfo - deployments - downloads - epochs - errors - events - instruments - region - regionCast - regionComment - regionGeoData - regionProfile - schedules Example using RSK2RSK as below: rsk = RSKopen('rsk_file.rsk'); rsk = RSKreadprofiles(rsk); rsk = RSKaddmetadata(rsk,'profile',1:3,'latitude',[45,44,46],'longitude',[-25,-24,-23]); outputdir = '/Users/Tom/Jerry'; newfile = RSK2RSK(rsk,'outputdir',outputdir,'suffix','processed');
https://docs.rbr-global.com/rsktools/export/rsk2rsk-m
2021-09-17T04:36:00
CC-MAIN-2021-39
1631780054023.35
[]
docs.rbr-global.com
Bid price The bid price for your campaign can have a significant impact on performance. You may need to increase your bid to generate more referrals and grow share of voice (SOV) for an audience. You may need to decrease the bid to ensure you're not spending more than you want to acquire a new customer. There are two ways to change your bid: - Directly from the Campaign Details page. This is useful when you only need to change your maximum bid. - From the audience wizard. This is useful when you need to change the bid price and other settings. For both ways of changing the bid, go to the Campaign Overview page. Select a campaign to drill down into its linked audiences. #Changing bid price from the Campaign Details page From the Campaign Details page, click Edit Max bid from the Max bid column of the audience you would like to edit. Enter your new bid price and click Update to save the new bid price. #Changing bid price from the audience wizard On the Campaign Details page, click Edit campaign in the Actions column for the desired audience. The audience wizard appears, and you can edit the audience targeting. The audience wizard experience is identical to audience creation, except the wizard is prepopulated with the audience’s current settings. Go to the Bid step to change your bid. Click Save to save your edits to the audience.
https://docs.rokt.com/docs/user-guides/rokt-ads/audiences/bid-price
2021-09-17T03:53:07
CC-MAIN-2021-39
1631780054023.35
[]
docs.rokt.com
Overview Rokt's ability to match offer referrals to conversion events can drastically improve your campaign performance on the Rokt platform. Our algorithms learn from every conversion and constantly make adjustments to improve campaign targeting and bidding. Better conversion data also improves analytics and reporting, helping you make informed decisions with your advertising budget. It’s critical that you choose a reliable method of sharing conversion data with Rokt to realize these benefits. We offer a variety of options for integration based on your company’s needs and capabilities. Each option has trade-offs, so take a look at each and work with your account manager to choose what’s best for your business. #Choosing an integration We offer a number of options to choose from. For best results, Rokt endorses integrating with both the Event API and Web SDK. This combination mitigates the chance of dropped conversions due to browser restrictions or systems failures. #Event API The Event API is a flexible and secure integration that allows your company’s server to talk directly to Rokt’s. The Event API supports a variety of attributes and gives your company total control over what data you share. Learn more about the steps required to implement the Event API. #Web SDK The Web SDK is a single line of JavaScript code that you add to your site’s confirmation page. When the page loads, the Web SDK sends the conversion data to Rokt. You can configure the Web SDK to share a variety of attributes related to conversion. The Web SDK can sometimes encounter coverage issues due to browser updates, third-party integrations, or customer-installed extensions, which is why we recommend using both the Web SDK and Event API together. Learn more about the steps required for Web SDK integration. #Third-party measurement providers Rokt provides the option to record conversion events using third-party measurement providers like Tune and AppsFlyer. However, third-party providers can be less accurate than a direct connection like the Event API or Web SDK. Extra steps are necessary to validate data sent through third-party providers, so speak to your account manager if you are interested in this method. Learn more about what third-party providers are supported and how to set up a connection. #Secure file transfer You can set up scheduled Secure File Transfer Protocol (SFTP) requests to transfer conversion data from your company’s database to Rokt’s. If you choose this method, you will lose some benefits around real-time reporting. Learn more about setting up an SFTP transfer. #Manual You can manually upload a batch list of conversion data into One Platform. Keep in mind that this method requires frequent manual uploads, and you will miss out on benefits associated with real-time reporting. Learn more about manual import. #Shopify application Shopify store owners can take advantage of the Rokt Ecommerce Shopify app to share conversion data with Rokt. Install the app and start sharing conversion data right away. Contact your account manager if you'd like to explore the Shopify option. #Identifying conversions Regardless of the implementation method you choose, Rokt strongly suggests that you share a personal identifier so that we can correctly match a conversion to a campaign referral. 
A raw email address is the preferred identifier, but you should include at least one of the following: - Phone number (hashed or raw) - Rokt Tracking ID To support customer data security, Rokt supports both hashed and raw personal identifiers. A hash is a computed value, based upon an input (like an email address), that is virtually impossible to reverse without the original input. SHA256 hashing is strongly recommended, although we can also support MD5 hashing if required for compatibility reasons. Advertisers that are unable to share either email or phone number should consider using the Rokt Tracking ID. There are extra steps required to set up the Rokt Tracking ID, and it is not as reliable as using email or phone number. In general, the more data about the transaction that you share, the better Rokt’s machine learning algorithms can optimize your campaigns. Some useful attributes include: transaction amount, currency, quantity, or payment card identifier. See the full list of attributes you can share with Rokt. #Defining attribution window An attribution window refers to the maximum time (in days or monthly cohort periods) that can elapse between a Rokt referral and a conversion. You need to define an attribution window so that conversions can be associated to the right campaign referral. Rokt only attributes conversions to a campaign if a customer opted in to the offer within the set attribution window; conversions resulting from an impression are not considered attributable to Rokt. For example, a customer could view your offer, not click the call to action, but convert anyway because they've been made aware of your offer. For these situations, Rokt can only provide ad hoc reports on conversion and impression data. #Viewing conversion data Once you set up a conversion attribution mechanism and begin sharing data, you can see these conversions in your campaign, audience, and creative reports. All conversions are counted against a particular creative. All your creatives under an audience are then aggregated to show total conversion numbers for that audience. Then all audiences under a campaign are aggregated to create total conversion numbers for your campaign. When looking at your conversion data over a date range, you see the number of conversions and transactions that happened in that date range, regardless of when the interaction with the Rokt offer occurred. note If you’re looking at more recent date ranges, you may see higher numbers for cost per acquisition (CPA). This is because your most recent Rokt referrals haven't had a full chance to convert and the conversion window is still open.
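Coming back to the hashed identifiers mentioned above, producing the SHA-256 hash of an email address is straightforward; the normalization step (trimming and lowercasing before hashing) is a common convention rather than a stated Rokt requirement, so confirm the expected format with your account manager:
import hashlib

def hash_email(email: str) -> str:
    """Return the SHA-256 hash of a normalized email address as a hex string."""
    normalized = email.strip().lower()  # assumed normalization convention
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(hash_email("Jane.Doe@example.com"))  # prints a 64-character hexadecimal digest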
https://docs.rokt.com/docs/user-guides/rokt-ads/conversions/overview
2021-09-17T04:53:39
CC-MAIN-2021-39
1631780054023.35
[]
docs.rokt.com
Adding Rokt to your site Adding a Rokt placement to your site is the first step for Rokt Ecommerce partners. Placements are flexible iframes that are used to display any type of Rokt campaign. To set up a placement on your website, you can use the Rokt Web SDK (instructions below). Rokt also offers a range of mobile SDKs to set up placements on native Android, iOS, and React Native applications. Shopify stores can set up a Rokt placement in seconds using the Rokt Ecommerce app—no coding needed! Single page applications #1. Get your unique Rokt snippet Your account manager may provide your snippet, or you can find it in One Platform. Your snippet includes a customer identifier (we recommend raw email customer address) and contextual data. You can read more about why Rokt asks for personal identifiers and contextual attributes here. #Sample snippet caution If you are copying the below example, ensure roktAccountid is replaced with your account's unique ID. You can get your roktAccountid from your account manager or in One Platform. ({ // Required email: "", // Suggested - Transaction amount: "", currency: "", quantity: "", paymenttype: "", ccbin: "", margin: "", confirmationref: "", // Suggested - Customer firstname: "", lastname: "", mobile: "", title: "", gender: "", dob: "", age: "", language: "", // Suggested - Address zipcode: "", city: "", state: "", country: "", });}); #2. Enable the preparative iframe Choose a page that occurs earlier in the customer journey, ideally where a customer might spend a little more time. A shipping or payment details page works well. Add this code snippet anywhere in the page HTML: <iframe aria-</iframe> This snippet will load and cache Rokt assets earlier in the customer journey. Then when you are ready to show a Rokt placement, Rokt assets will be ready to go, resulting in a faster load time. Rokt’s preparative iframe is secure, and has no access to your page or site data. You can read more about the benefits of the preparative iframe here. #3. Add the Rokt snippet to your site Add the Rokt snippet from Step 1 between the HTML <head></head> tags of any page where you want to display a Rokt placement. When you add the snippet to your page, make sure to populate the customer and transactional data. Make sure you configure customer email address so that Rokt can identify customers and choose a relevant offer. Populate any contextual attributes about the transaction to help Rokt better personalize what offers the customer sees. Rokt recommends a direct integration as a best practice, but the option to integrate with a tag manager is available. #4. Set up pages and placements The Rokt team will set up relevant pages and placements for you in One Platform. We can customize your placement to match your brand guidelines and UX needs. #Embedded placements If you are planning to add an embedded placement to your site, you need to specify the HTML element that the placement should be anchored to. For example: <div id="rokt-placeholder"></div>. Let the Rokt team know what element the placement should target. #5. Test your integration Ensure that the Rokt Web SDK is loading on the right page and includes the correct attributes. Read our guide on testing your integration. #More information - General Web SDK reference - Web SDK security - Two-step data integration - Encrypting attributes in transit - Single page application integrations Native mobile app integrations Add a Rokt placement to your iOS, Android, or React Native mobile applications.
https://docs.rokt.com/docs/developers/integration-guides/getting-started/adding-rokt-to-your-site
2021-09-17T04:27:24
CC-MAIN-2021-39
1631780054023.35
[]
docs.rokt.com
🚧 This site is under construction and not finalized yet 🚧 This is a simple guide to explain how to create your own relayer and apply to add it to Typhoon. It assumes you have basic IT knowledge and are experienced in running servers. Currently, the requirement to apply as a relayer is to hold xxxx TYPH tokens. This might get revised in the future, so make sure to check back. The basic steps to create your own relayer are: Decide a fee you want to charge: As a relayer you can charge what you think is right for the service. The relayer provided by us charges 1%, but you can decide on what you think is reasonable. Get the necessary TYPH tokens: The relayer wallet has to hold the required TYPH tokens in order to get listed on the site. Set up your own node: You can use anything you like to build a new relayer from scratch, or you can use the Golang-based one that we provide. Apply for official listing: Go to the typhoon-feedback GitHub repository and open a new topic/issue with your proposal.
https://docs.typhoon.network/relayers/apply-as-relayer
2021-09-17T03:07:07
CC-MAIN-2021-39
1631780054023.35
[]
docs.typhoon.network
scipy.optimize¶ Functions in the optimize module can be called by prepending them by scipy.optimize.. The module defines the following three functions: Note that routines that work with user-defined functions still have to call the underlying python code, and therefore, gains in speed are not as significant as with other vectorised operations. As a rule of thumb, a factor of two can be expected, when compared to an optimised python implementation. bisect¶ scipy: bisect finds the root of a function of one variable using a simple bisection routine. It takes three positional arguments, the function itself, and two starting points. The function must have opposite signs at the starting points. Returned is the position of the root. Two keyword arguments, xtol, and maxiter can be supplied to control the accuracy, and the number of bisections, respectively. # code to be run in micropython from ulab import scipy as spy def f(x): return x*x - 1 print(spy.optimize.bisect(f, 0, 4)) print('only 8 bisections: ', spy.optimize.bisect(f, 0, 4, maxiter=8)) print('with 0.1 accuracy: ', spy.optimize.bisect(f, 0, 4, xtol=0.1)) 0.9999997615814209 only 8 bisections: 0.984375 with 0.1 accuracy: 0.9375 Performance¶ Since the bisect routine calls user-defined python functions, the speed gain is only about a factor of two, if compared to a purely python implementation. # code to be run in micropython from ulab import scipy as spy def f(x): return (x-1)*(x-1) - 2.0 def bisect(f, a, b, xtol=2.4e-7, maxiter=100): if f(a) * f(b) > 0: raise ValueError rtb = a if f(a) < 0.0 else b dx = b - a if f(a) < 0.0 else a - b for i in range(maxiter): dx *= 0.5 x_mid = rtb + dx mid_value = f(x_mid) if mid_value < 0: rtb = x_mid if abs(dx) < xtol: break return rtb @timeit def bisect_scipy(f, a, b): return spy.optimize.bisect(f, a, b) @timeit def bisect_timed(f, a, b): return bisect(f, a, b) print('bisect running in python') bisect_timed(f, 3, 2) print('bisect running in C') bisect_scipy(f, 3, 2) bisect running in python execution time: 1270 us bisect running in C execution time: 642 us fmin¶ scipy: The fmin function finds the position of the minimum of a user-defined function by using the downhill simplex method. Requires two positional arguments, the function, and the initial value. Three keyword arguments, xatol, fatol, and maxiter stipulate conditions for stopping. # code to be run in micropython from ulab import scipy as spy def f(x): return (x-1)**2 - 1 print(spy.optimize.fmin(f, 3.0)) print(spy.optimize.fmin(f, 3.0, xatol=0.1)) 0.9996093749999952 1.199999999999996 newton¶ scipy: newton finds a zero of a real, user-defined function using the Newton-Raphson (or secant or Halley’s) method. The routine requires two positional arguments, the function, and the initial value. Three keyword arguments can be supplied to control the iteration. These are the absolute and relative tolerances tol, and rtol, respectively, and the number of iterations before stopping, maxiter. The function retuns a single scalar, the position of the root. # code to be run in micropython from ulab import scipy as spy def f(x): return x*x*x - 2.0 print(spy.optimize.newton(f, 3., tol=0.001, rtol=0.01)) 1.260135727246117
https://micropython-ulab.readthedocs.io/en/stable/scipy-optimize.html
2021-09-17T03:25:10
CC-MAIN-2021-39
1631780054023.35
[]
micropython-ulab.readthedocs.io
Toil API¶ This section describes the API for writing Toil workflows in Python. Job methods¶ Jobs are the units of work in Toil which are composed into workflows. - class toil.job. Job(memory=None, cores=None, disk=None, preemptable=None, unitName=None, checkpoint=False)[source]¶ Class represents a unit of work in toil. __init__(memory=None, cores=None, disk=None, preemptable=None, unitName=None, checkpoint=False). addService(service, parentService=None)[source]¶ Add a service. The toil.job.Job.Service.start()method of the service will be called after the run method has completed but before any successors are run. The service’s toil.job.Job.Service.stop()method will be called once the successors of the job have been run. Services allow things like databases and servers to be started and accessed by jobs in a workflow. addChild. - static wrapFn(fn, *args, **kwargs)[source]¶ Makes a Job out of a function. Convenience function for constructor of toil.job.FunctionWrappingJob. - static wrapJobFn(fn, . rv(*path)[source]¶ Creates a promise ( toil.job.Promise) representing a return value of the job’s run method, or, in case of a function-wrapping job, the wrapped function’s return value. prepareForPromiseRegistration(jobStore)[source]¶.! checkNewCheckpointsAreLeafVertices()[source]¶. defer(function, *args, **kwargs)[source]¶ Register a deferred function, i.e. a callable that will be invoked after the current attempt at running this job concludes. A job attempt is said to conclude when the job function (or the. Job.FileStore. open(*args, *=None)[source]¶ Downloads a file described by fileStoreID from the file store to the local directory. If a user path is specified, it is used as the destination. If a user path isn’t specified, the file is stored in the local temp directory with an encoded name.. Job.Runner¶ The Runner contains the methods needed to configure and start a Toil run. - class Job. Runner[source]¶ Used to setup and run Toil workflow. - static addToilOptions(parser)[source]¶ Adds the default toil options to an optparseor argparseparser object. - static startToil(job, options)[source]¶ Deprecated by toil.common.Toil.run. Toil¶ The Toil class provides for a more general way to configure and start. restart()[source]¶ Restarts a workflow that has been interrupted. This method should be called if and only if a workflow has previously been started and has not finished. - classmethod getJobStore(locator)[source]¶ Create an instance of the concrete job store implementation that matches the given locator. - static createBatchSystem(config) Job.Service¶ The Service class allows databases and servers to be spawned within a Toil workflow. - class Job. Service(memory=None, cores=None, disk=None, preemptable=None, unitName=None)[source]¶ Abstract class used to define the interface to a service. __init__(memory=None, cores=None, disk=None, preemptable=None, unitName=None)[source]¶ Memory, core and disk requirements are specified identically to as in toil.job.Job.__init__().) EncapsulatedJob¶ The subclass of Job for encapsulating a job, allowing a subgraph of jobs to be treated as a single job. - class toil.job. EncapsulatedJob(job)[source]¶' = A.encapsulate() A'.addChild(B) # B will run after A and all its successors have completed, A and its subgraph of # successors in effect appear to be just one job.())) Exceptions¶ Toil specific exceptions. - exception toil.job. 
JobGraphDeadlockException(string)[source]¶ An exception raised in the event that a workflow contains an unresolvable dependency, such as a cycle. See toil.job.Job.checkJobGraphForDeadlocks(). - exception toil.jobStores.abstractJobStore. ConcurrentFileModificationException(jobStoreFileID)[source]¶ Indicates that the file was attempted to be modified by multiple processes at once. - exception toil.jobStores.abstractJobStore. JobStoreExistsException(locator)[source]¶ Indicates that the specified job store already exists. - exception toil.jobStores.abstractJobStore. NoSuchFileException(jobStoreFileID, customName=None)[source]¶ Indicates that the specified file does not exist. - exception toil.jobStores.abstractJobStore. NoSuchJobException(jobStoreID)[source]¶ Indicates that the specified job does not exist.
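Tying the pieces above together (Job.wrapJobFn, the Toil context manager that supersedes Job.Runner.startToil, and the Runner helpers), a minimal workflow script might look like the following sketch; Job.Runner.getDefaultOptions is assumed to be available alongside the Runner methods shown above, and the job-store path and function body are illustrative:
from toil.common import Toil
from toil.job import Job


def hello(job, name):
    # Runs as a single Toil job; the return value is delivered back to the leader process.
    return "Hello, %s!" % name


if __name__ == "__main__":
    # File-based job store in the current directory; any supported job store locator works here.
    options = Job.Runner.getDefaultOptions("./toilWorkflowRun")
    options.logLevel = "INFO"

    root = Job.wrapJobFn(hello, "world")

    with Toil(options) as workflow:
        output = workflow.start(root)  # use workflow.restart() to resume an interrupted run
    print(output)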
https://toil.readthedocs.io/en/3.10.0/developingWorkflows/toilAPI.html
2021-09-17T04:06:06
CC-MAIN-2021-39
1631780054023.35
[]
toil.readthedocs.io
Writing documentation so search can find it

One of the keys to writing good documentation is to make it findable. Readers use a combination of internal site search and external search engines such as Google or duckduckgo. To ensure Ansible documentation is findable, you should:

1. Use headings that clearly reflect what you are documenting.
2. Use numbered lists for procedures or high-level steps where possible.
3. Avoid linking to github blobs where possible.

Using clear headings in documentation

We all use simple English when we want to find something. For example, the title of this page could have been any one of the following:

- Search optimization
- Findable documentation
- Writing for findability

What we are really trying to describe is - how do I write documentation so search engines can find my content? That simple phrase is what drove the title of this section. When you are creating your headings for documentation, spend some time to think about what you would type in a search box to find it, or more importantly, how someone less familiar with Ansible would try to find that information. Your heading should be the answer to that question.

One word of caution - you do want to limit the size of your headings. A full heading such as "How do I write documentation so search engines can find my content?" is too long. Search engines would truncate anything over 50-60 characters. Long headings would also wrap on smaller devices such as a smartphone.

Using numbered lists for zero position snippets

Google can optimize the search results by adding a feature snippet at the top of the search results. This snippet provides a small window into the documentation on that first search result that adds more detail than the rest of the search results, and can occasionally answer the reader's questions right there, or at least verify that the linked page is what the reader is looking for.

Google returns the feature snippet in the form of numbered steps. Where possible, you should add a numbered list near the top of your documentation page. The steps can be the exact procedure a reader would follow, or a high-level introduction to the documentation topic, such as the numbered list at the top of this page.

Problems with github blobs on search results

Search engines do not typically return github blobs in search results, at least not in higher ranked positions. While it is possible and sometimes necessary to link to github blobs from documentation, the better approach is to copy that information into an .rst page in the Ansible documentation.

Other search hints

While it may not be possible to adapt your documentation to all search optimizations, keep the following in mind as you write your documentation:

- Search engines don't parse beyond the `#` in an html page. So for example, all the subheadings on this page are appended to the main page URL. As such, when I search for 'Using numbered lists for zero position snippets', the search result would be a link to the top of this page, not a link directly to the subheading I searched for. Using local TOCs helps alleviate this problem, as the reader can scan for the header at the top of the page and click to the section they are looking for. For critical documentation, consider creating a new page that can be a direct search result page.
- Make your first few sentences clearly describe your page topic. Search engines return not just the URL, but a short description of the information at the URL. For Ansible documentation, we do not have description metadata embedded on each page. Instead, the search engines return the first couple of sentences (140 characters) on the page. That makes your first sentence or two very important to the reader who is searching for something in Ansible.
https://docs.ansible.com/ansible/latest/dev_guide/style_guide/search_hints.html
2021-09-17T03:28:46
CC-MAIN-2021-39
1631780054023.35
[]
docs.ansible.com
Operator configuration

The operator for Cloud Native PostgreSQL is configured through a set of variables, including INHERITED_ANNOTATIONS and INHERITED_LABELS. By default, these variables are not set.

Values in INHERITED_ANNOTATIONS and INHERITED_LABELS support path-like wildcards. For example, the value example.com/* will match both the value example.com/one and example.com/two.

Defining an operator config map

The example below customizes the behavior of the operator by defining a default license key (namely a company key) and the label/annotation names to be inherited by the resources created by any Cluster object that is deployed at a later time.

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-operator-controller-manager-config
  namespace: postgresql-operator-system
data:
  INHERITED_ANNOTATIONS: categories
  INHERITED_LABELS: environment, workload, app
https://docs.enterprisedb.io/cloud-native-postgresql/1.8.0/operator_conf/
2021-09-17T03:05:55
CC-MAIN-2021-39
1631780054023.35
[]
docs.enterprisedb.io
$accumulator (aggregation)

Definition

$accumulator

New in version 4.4.

Defines a custom accumulator operator. Accumulators are operators that maintain their state (e.g. totals, maximums, minimums, and related data) as documents progress through the pipeline. Use the $accumulator operator to execute your own JavaScript functions to implement behavior not supported by the MongoDB Query Language. See also $function.

$accumulator is available in these stages:

Important: Executing JavaScript inside of an aggregation operator may decrease performance. Only use the $accumulator operator if the provided pipeline operators cannot fulfill your application's needs.

Syntax

The $accumulator operator has this syntax:

Behavior

The following steps outline how the $accumulator operator processes documents:

1. The operator begins at an initial state, defined by the init function.
2. For each document, the operator updates the state based on the accumulate function. The accumulate function's first argument is the current state, and additional arguments can be specified in the accumulateArgs array.
3. When the operator needs to merge multiple intermediate states, it executes the merge function. For more information on when the merge function is called, see Merge Two States with $merge.
4. If a finalize function has been defined, once all documents have been processed and the state has been updated accordingly, finalize converts the state to a final output.

Merge Two States with $merge

As part of its internal operations, the $accumulator operator may need to merge two separate, intermediate states. The merge function specifies how the operator should merge two states. For example, $accumulator may need to combine two states when:

- $accumulator is run on a sharded cluster. The operator needs to merge the results from each shard to obtain the final result.
- A single $accumulator operation exceeds its specified memory limit. If you specify the allowDiskUse option, the operator stores the in-progress operation on disk and finishes the operation in memory. Once the operation finishes, the results from disk and memory are merged together using the merge function.

Javascript Enabled

To use $accumulator, you must have server-side scripting enabled. If you do not use $accumulator (or $function, $where, or mapReduce), disable server-side scripting.

Examples

Use $accumulator to Implement the $avg Operator

This example walks through using the $accumulator operator to implement the $avg operator, which is already supported by MongoDB. The goal of this example is not to implement new functionality, but to illustrate the behavior and syntax of the $accumulator operator with familiar logic.

In mongosh, create a sample collection named books with the following documents:

The following operation groups the documents by author, and uses $accumulator to compute the average number of copies across books for each author:

Result

This operation returns the following result:

Behavior

The $accumulator defines an initial state where count and sum are both set to 0. For each document that the $accumulator processes, it updates the state by:

- Incrementing the count by 1 and
- Adding the value of the document's copies field to the sum.

The accumulate function can access the copies field because it is passed in the accumulateArgs field. With each document that is processed, the accumulate function returns the updated state.
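A minimal mongosh sketch of the grouping operation described above (the books collection, the author and copies fields, and the init/accumulate/accumulateArgs/merge/finalize/lang structure follow the surrounding text; the sample documents and the avgCopies output field name are illustrative assumptions, not the original values):

// Illustrative sample data; the original documents are not reproduced here.
db.books.insertMany([
  { title: "Book A", author: "Dante", copies: 2 },
  { title: "Book B", author: "Dante", copies: 1 },
  { title: "Book C", author: "Homer", copies: 10 }
])

db.books.aggregate([
  {
    $group: {
      _id: "$author",
      avgCopies: {                      // output field name is an assumption
        $accumulator: {
          init: function() {            // initial state: count and sum start at 0
            return { count: 0, sum: 0 }
          },
          accumulate: function(state, numCopies) {
            // increment the count and add the document's copies value to the sum
            return { count: state.count + 1, sum: state.sum + numCopies }
          },
          accumulateArgs: ["$copies"],  // makes the copies field available to accumulate
          merge: function(state1, state2) {
            // combine two intermediate states, for example from different shards
            return {
              count: state1.count + state2.count,
              sum: state1.sum + state2.sum
            }
          },
          finalize: function(state) {   // convert the final state to the average
            return state.sum / state.count
          },
          lang: "js"
        }
      }
    }
  }
])

The lang field is required; "js" is currently the only supported value.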
Once all documents have been processed, the finalize function divides the sum of the copies by the count of documents to obtain the average. This removes the need to keep a running computed average, since the finalize function receives the cumulative sum and count of all documents.

Comparison with $avg

This operation is equivalent to the following pipeline, which uses the $avg operator:

Use initArgs to Vary the Initial State by Group

You can use the initArgs option to vary the initial state of $accumulator. This can be useful if you want to, for example:

- Use the value of a field which is not in your state to affect your state, or
- Set the initial state to a different value based on the group being processed.

In mongosh, create a sample collection named restaurants with the following documents:

Suppose an application allows users to query this data to find restaurants. It may be useful to show more results for the city where the user lives. For this example, we assume that the user's city is stored in a variable called userProfileCity.

The following aggregation pipeline groups the documents by city. The operation uses $accumulator to display a different number of results from each city, depending on whether the restaurant's city matches the city in the user's profile:

Results

If the value of userProfileCity is Bettles, this operation returns the following result:

If the value of userProfileCity is Onida, this operation returns the following result:

If the value of userProfileCity is Pyote, this operation returns the following result:

If the value of userProfileCity is any other value, this operation returns the following result:

Behavior

The init function defines an initial state containing max and restaurants fields. The max field sets the maximum number of restaurants for that particular group. If the document's city field matches userProfileCity, that group contains a maximum of 3 restaurants. Otherwise, if the city does not match userProfileCity, the group contains at most a single restaurant. The init function receives both the city and userProfileCity arguments from the initArgs array.

For each document that the $accumulator processes, it pushes the name of the restaurant to the restaurants array, provided that name would not put the length of restaurants over the max value. With each document that is processed, the accumulate function returns the updated state.

The merge function defines how to merge two states. The function concatenates the restaurants arrays from each state together, and the length of the resulting array is limited with the slice() method to ensure that it does not exceed the max value.

Once all documents have been processed, the finalize function modifies the resulting state to return only the names of the restaurants. Without this function, the max field would also be included in the output, which is not needed by the application.
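To make the initArgs pattern above concrete, here is a comparable mongosh sketch. It assumes the restaurants documents carry name and city fields and that userProfileCity is an ordinary shell variable; the grouping key and the exact output shape are assumptions rather than a copy of the original pipeline.

// Assumed shell variable holding the user's city.
const userProfileCity = "Bettles"

db.restaurants.aggregate([
  {
    $group: {
      _id: "$city",
      restaurants: {
        $accumulator: {
          init: function(city, userCity) {
            // show up to 3 results for the user's own city, otherwise 1
            return { max: city === userCity ? 3 : 1, restaurants: [] }
          },
          initArgs: ["$city", userProfileCity],  // per-group arguments passed to init
          accumulate: function(state, restaurantName) {
            // only push the name if it keeps the array within max
            if (state.restaurants.length < state.max) {
              state.restaurants.push(restaurantName)
            }
            return state
          },
          accumulateArgs: ["$name"],
          merge: function(state1, state2) {
            // concatenate and trim with slice() so the limit still holds
            return {
              max: state1.max,
              restaurants: state1.restaurants
                .concat(state2.restaurants)
                .slice(0, state1.max)
            }
          },
          finalize: function(state) {
            // drop the max field and return only the restaurant names
            return state.restaurants
          },
          lang: "js"
        }
      }
    }
  }
])

Passing "$city" through initArgs is what lets the init function react to a per-group value that is never stored in the state itself.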
https://docs.mongodb.com/v5.0/reference/operator/aggregation/accumulator/
2021-09-17T03:31:20
CC-MAIN-2021-39
1631780054023.35
[]
docs.mongodb.com