content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Setting Up TIBCO ComputeDB ODBC Driver
Note
This is currently tested and supported only on Windows 10 (32-bit and 64-bit systems).
Download and Install Visual C++ Redistributable for Visual Studio 2013
Step 1: Install the TIBCO ComputeDB ODBC Driver
Download the TIBCO ComputeDB 1.2.0 Enterprise Edition.
Click ODBC INSTALLERS to download the TIB_compute-odbc_1.2.0_win.zip file.
Follow steps 1 and 2 to install the TIBCO ComputeDB ODBC driver.
Step 2: Create TIBCO ComputeDB DSN from ODBC Data Sources 64-bit/32-bit
To create TIBCO ComputeDB DSN from ODBC Data Sources:
Open the ODBC Data Source Administrator window:
a. On the Start page, type ODBC Data Sources, and select Set up ODBC data sources from the list or select ODBC Data Sources in the Administrative Tools.
b. Based on your Windows installation, open ODBC Data Sources (64-bit) or ODBC Data Sources (32-bit).
In the ODBC Data Source Administrator window, select either the User DSN or System DSN tab.
Click Add to view the list of installed ODBC drivers on your machine.
From the list of drivers, select TIBCO ComputeDB ODBC Driver and click Finish.
The TIBCO ComputeDB ODBC Configuration dialog is displayed. Enter the following details to create a DSN:
Note
ODBC driver cannot connect to the locator and must connect directly to one of the servers. Therefore, in cases where you start a cluster with multiple nodes on different machines and if the server and locator are collocated on a specific machine, then the port number of the server would be higher than that of the locator port which will be 1528. In case the locator is not collocated with the server on a machine then the server port will be 1527. Ensure that you provide the IP Address/Host Name and Port number of the data server. If you provide the details of the locator, the connection fails.
Enabling SSL
The following instructions describe how to configure SSL in a DSN:
- Select the Enable SSL checkbox.
- To allow authentication using self-signed trusted certificates, specify the full path of the PEM file containing the self trusted certificate. For a self-signed trusted certificate, a common host name should match.
- To configure two-way SSL verification, select the Two Way SSL checkbox and then do the following:
- In the Trusted Certificate file field, specify the full path of the PEM file containing the CA-certificate.
- In the Client Certificate File field, specify the full path of the PEM file containing the client's certificate.
- In the Client Private Key File field, specify the full path of the file containing the client's private key.
- In the Client Private key password field, provide the private key password.
- Enter the ciphers that you want to use. This is an optional input. If left empty then default ciphers are
"ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH"
For information about connecting Tableau using TIBCO ComputeDB ODBC Driver, refer to Connect Tableau using ODBC Driver | https://tibco-computedb.readthedocs.io/en/enterprise_docv1.2.0/setting_up_odbc_driver-tableau_desktop/ | 2022-06-25T11:10:52 | CC-MAIN-2022-27 | 1656103034930.3 | [] | tibco-computedb.readthedocs.io |
What is CampaignChain?¶
CampaignChain is open-source campaign management software to plan, execute and monitor digital marketing campaigns across multiple online communication channels, such as Twitter, Facebook, Google Analytics or third-party CMS, e-commerce and CRM tools.
For developers, CampaignChain is a platform to integrate key marketing campaign management functions with data from multiple channels. It is implemented in PHP on top of the Symfony framework..
This is a high-level overview of the features and architecture:
.
Operations and Locations¶
Locations are created when connecting to a new Channel or by an Operation. Upon creation by a Channel, the URL of the Location is usually known and can be stored in the system when creating the new Location. For example, when connecting a new Twitter user stream to CampaignChain, the user’s URL on Twitter will be accessible (e.g.).
This is different when it comes to Operations. An Operation could well create a Location stub without the URL and only provide the URL after the Operation has been executed. For example, the URL of a scheduled tweet will only be generated by Twitter once the tweet has been posted. Hence, CampaignChain allows Operations to create Locations without a URL, but requires them to provide a URL when the Operation gets executed.
Visual Summary¶
The following diagram explains the relationship between the various entities.
It should be clear from this diagram an Activity is never related directly to a Channel. The relationship is always Channel -> Location -> Activity -> Operation.
A more concrete example of this relationship is illustrated below.
Modules and Hooks¶
CampaignChain has been designed so that it does not require you to replace existing digital marketing applications. Instead, it serves as a platform for integrating such applications and acts as a cockpit for managing digital marketing campaigns.
Modules¶
Due to CampaignChain’s modules architecture, any online channel along with its locations can be integrated. Furthermore, custom campaigns, milestones, activities and operations can be developed. Given that CampaignChain is built on top of the Symfony framework, modules can use functionality provided by other modules.
Hooks¶
Hooks are reusable components that provide common functionality and can be used across modules to configure campaigns, milestones, channels, locations, activities and operations. CampaignChain already provides a number of hooks and developers can easily add new ones.
For example, CampaignChain comes with an assignee hook, which makes it possible to assign specific channels or activities to members of a marketing team. Similarly, CampaignChain’s due date hook can be used to specify a due date for a Twitter post activity; the same hook can be reused to define a due date for a campaign milestone.
Call to Action¶
CampaignChain allows tracking Calls to Action across various Channels and Locations to understand which Operations had the highest impact. Imagine the following conversion funnel:
- A Twitter post links to a landing page on a website.
- The landing page includes a registration form to download something.
- All the personal data collected in the form will be saved as leads in a CRM.
With CampaignChain, you will be able to understand how many leads have been generated by that specific Twitter post.
Learn more about the details of CampaignChain’s Call to Action (CTA) Tracking.
User Interface¶
CampaignChain’s Web-based user interface has been implemented with Bootstrap 3. Thus, it is responsive and works on desktop computers as well as mobile devices such as tablets and smartphones. | https://campaignchain-docs.readthedocs.io/en/latest/ce/developer/book/overview.html | 2022-06-25T11:52:28 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['../../../_images/architecture.png',
'../../../_images/architecture.png'], dtype=object)
array(['../../../_images/components-conceptual.png',
'../../../_images/components-conceptual.png'], dtype=object)
array(['../../../_images/components-realized.png',
'../../../_images/components-realized.png'], dtype=object)] | campaignchain-docs.readthedocs.io |
NtQueryInformationProcess function (winternl.h)
[NtQueryInformationProcess may be altered or unavailable in future versions of Windows. Applications should use the alternate functions listed in this topic.]
Retrieves information about the specified process.
Syntax
__kernel_entry NTSTATUS NtQueryInformationProcess( [in] HANDLE ProcessHandle, [in] PROCESSINFOCLASS ProcessInformationClass, [out] PVOID ProcessInformation, [in] ULONG ProcessInformationLength, [out, optional] PULONG ReturnLength );
Parameters
[in] ProcessHandle
A handle to the process for which information is to be retrieved.
[in] ProcessInformationClass
The type of process information to be retrieved. This parameter can be one of the following values from the PROCESSINFOCLASS enumeration.
[out] ProcessInformation:
typedef struct _PROCESS_BASIC_INFORMATION { NTSTATUS ExitStatus; PPEB PebBaseAddress; ULONG_PTR AffinityMask; KPRIORITY BasePriority; ULONG_PTR UniqueProcessId; ULONG_PTR InheritedFromUniqueProcessId; } PROCESS_BASIC_INFORMATION;
ULONG_PTR
When the ProcessInformationClass parameter is ProcessWow64Information, the buffer pointed to by the ProcessInformation parameter should be large enough to hold a ULONG_PTR. If this value is nonzero, the process is running in a WOW64 environment. Otherwise, the process is not running in a WOW64 environment.
Use the IsWow64Process2.
[in] ProcessInformationLength
The size of the buffer pointed to by the ProcessInformation parameter, in bytes.
[out, optional] ReturnLength
A pointer to a variable in which the function returns the size of the requested information. If the function was successful, this is the size of the information written to the buffer pointed to by the ProcessInformation parameter . See Logging Errors for more details.
CheckRemoteDebuggerPresent | https://docs.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntqueryinformationprocess?redirectedfrom=MSDN | 2022-06-25T12:02:33 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.microsoft.com |
Introduction
The BOLD Multi Currency app auto-switches conversion in your store based on customer geolocation.
Within the enabled integration with Bold Multi Currency, your customers can see their local currency for products in Searchanise widgets.
Instructions
Important info
- Before integrating the Bold Multi Currency app with the Smart Search & Filter app, make sure both apps are installed in your store.
- The integration with Bold Multi Currency won’t work if Shopify multicurrency support is enabled in the app (the Smart Search & Filter control panel > Preferences section > Products tab > Enable Shopify multicurrency support option).
To integrate the Smart Search & Filter app with the Bold Multi Currency app, follow these steps:
- Go to the Smart Search & Filter control panel > Integrations section > Multicurrency part.
- Set the toggle for BOLD Multi Currency to On.
- Apply the changes
That’s it! Our app now synchronizes with BOLD Multi Currency, and your customers can see their local currency for products in Searchanise widgets.
Enjoying your experience with Searchanise?
We’d appreciate it if you could take some time to leave a review. | https://docs.searchanise.io/bold-multi-currency-integration-shopify/ | 2022-06-25T11:45:36 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.searchanise.io |
Type Converter
This example demonstrates serialization of data types that do not have built-in support in GigaSpaces c++ API. It shows you how to add your own code to convert your data type to the supported type, and vice versa.
Supported types are stated in the
gs.xml file. For more details, refer to the C++ Mapping File section.
The code for this example is located at
<XAP Root>\cpp\examples\PocoUserPackaging. This path will be referred to as
<Example Root> in this page.
This example can be built and run on Windows OS only. If you use Visual Studio open the solution
PocoUserPackaging.sln located in
<XAP Root>\cpp\examples\PocoUserPackaging\. It is recommended to set your solution configuration to
Release and do a rebuild that will generate all related files.
In this example, we translate an object of type
std::map (which is not supported by GigaSpaces c++ API) into two buffers of type
std::vector (which is a supported type).
By implementing this conversion code, we are able to read and write the original object type to and from the space just like any other supported type. In other words, we “teach” the space how to deal with our object, and as a result, we don’t need to worry about conversion in the future.
To write and implement your conversion code:
Step 1. Add the attribute
[custom-serialization="true"] to the
class element in the
gs.xml file:
<class name="UserMessage" custom-
Step 2. Write the type conversion code (use
<Example Root>\UserMessageSerializerPackaging.cpp as an example):
#define DONT_INCLUDE_SERIALIZER_EXPORTS #include "UserMessageSerializer.cpp" void UserMessageSerializer::PreSerialize(const IEntry* ptr,bool isTemplate) { if(!isTemplate) { UserMessage* p = (UserMessage*)ptr; p->buffer1.clear(); p->buffer2.clear(); std::map<std::string,std::string>::iterator iter = p->userMap.begin(); for( ;iter!=p->userMap.end();iter++) { p->buffer1.push_back(iter->first); // push the key p->buffer2.push_back(iter->second);// push the data } } }; void UserMessageSerializer::PostDeserialize(IEntry* pNewObject) { UserMessage* p = (UserMessage*)pNewObject; p->userMap.clear(); for(int i=0;i<p->buffer1.size();i++) { p->userMap[p->buffer1[i]] = p->buffer2[i];// extract the data into the map } p->buffer1.clear(); p->buffer2.clear(); };
These functions extend the functionality of the generated serializer (
UserMessageSerializer.cpp). The first function
PreSerialize converts the map field to two vectors before the serialization. The second function
PostDeserialize converts the two vectors back to a map field after the de serialization.
Step 3. Handle the c++ serializer code generation, build the shared library (DLL) from your extension code, and place the library in the appropriate directory.
As shown in the previous example, you can use a custom build with the supplied makefile (at
<Example Root>/makefileSerializer.mk).
Step 4. Rebuild and run your code.
The console will have the following output:
Retrieved a space proxy Did snapshot for UserMessage class Wrote a UserMessage to space. Content of the UserMessage's map (the unsupported field): map[key1] = ALPHA map[key2] = BETA Wrote another UserMessage to space. Content of the UserMessage's map (the unsupported field): map[key1] = ALPHA map[key2] = BETA map[key3] = GAMMA Took a UserMessage from the space. Content of the UserMessage's map (the unsupported field): map[key1] = ALPHA map[key2] = BETA map[key3] = GAMMA Read a UserMessage from the space. Content of the UserMessage's map (the unsupported field): map[key1] = ALPHA map[key2] = BETA Press Enter to end this example... | https://docs.gigaspaces.com/xap/10.2/dev-java/cpp-type-converter.html | 2022-06-25T10:30:17 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.gigaspaces.com |
Announcing Updates to the REST API Experience
This post was written by Jessie Huang, Program Manager on the docs.microsoft.com team.
Today, we are happy to announce the release of two key improvements to the REST experience on docs.microsoft.com:
- The preview version of the REST API Browser--an easy way to search and discover REST APIs developed by Microsoft.
- REST Try It--an interactive experience allowing you to try REST APIs directly in your favorite web browser.
Discover REST APIs in One Location
If you are already familiar with any of our existing API Browser experiences, such as that for .NET, PowerShell, Python or JavaScript, you will feel right at home with this new release - it allows you to quickly get documentation for a wide variety of REST APIs without having to rely on a less focused search experience. The API Browser is all about getting you on your way quickly by finding the right API in seconds.
Previously, if you wanted to find specific REST API documentation on docs.microsoft.com, you may have had to look for it through our site-wide search, or use any of the navigation components on content pages, such as the table of contents. That may be time-consuming, especially in situations when you know exactly what you are looking for. We wanted to make that easier.
Starting today, you can discover Microsoft REST APIs in a much more efficient manner, from one centralized location -. Simply select a service from the dropdown to find all available service APIs:
Need to save a click? For some of the most frequently used APIs, you can always count on Quick Filters - clicking on one of them will instantly bring up the comprehensive set of APIs for a given service, for its latest release:
We designed the search experience in a way that allows you to find the necessary API information whether you know the exact keywords or not - or even if you don't know the service it's in! By default, we will search across all APIs documented on docs.microsoft.com. Only when a service is selected, the search will happen within the scope of that service.
Need to search for multiple keywords at once? You can do that too!
Try Azure REST APIs in Your Browser
Found an Azure API of interest, but don't know if it's quite what you need to help you build out your scenario? Worry not, because starting today you can try Azure REST APIs directly in your browser, on docs.microsoft.com!
All you need to do is press the green Try It button:
Once you select Try It for a REST API, you will be taken to a focused view, allowing you to experiment with different parameter values or request body. You can just as easily change the article context by using the Contents hamburger menu in the top left corner of the page:
We Want Your Feedback
There are many improvements we plan on implementing in the near-term, so this experience is by no means final, for example we will enable Try it not only for Azure REST APIs. We want you to tell us how we can make our REST experience better - just open a new issue, and we will get back to you as soon as possible.
You can also follow our Twitter account for latest updates and notifications!
Dreaming of making the world better for developers? Join our team! | https://docs.microsoft.com/en-US/teamblog/announcing-rest-improvements | 2022-06-25T12:34:45 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['media/announcing-rest-improvements/products.gif',
'Discover REST APIs by selecting products'], dtype=object)
array(['media/announcing-rest-improvements/quickfilters.gif',
'Discover REST APIs via in-page quick filter'], dtype=object)
array(['media/announcing-rest-improvements/search.gif',
'Searching REST APIs by a single keyword'], dtype=object)
array(['media/announcing-rest-improvements/searchmultiplewords.gif',
'Searching REST APIs by multiple keywords'], dtype=object)
array(['media/announcing-rest-improvements/try-it-demo.gif',
'REST Try It - Launching the experience'], dtype=object)
array(['media/announcing-rest-improvements/try-it-context.gif',
'REST Try It - Changing the article context'], dtype=object)] | docs.microsoft.com |
12.2 Conic Optimization¶
Conic optimization is an extension of linear optimization (see Sec. 12.1 (Linear Optimization)) allowing conic domains to be specified for subsets of the problem variables. A conic optimization problem to be solved by MOSEK.
The set \(\K\) is a Cartesian product of convex cones, namely \(\K = \K_1 \times \cdots \times \K_p\). Having the domain restriction \(x \in \K\), is thus equivalent to
where \(x = (x^1, \ldots , x^p)\) is a partition of the problem variables. Please note that the \(n\)-dimensional Euclidean space \(\real^n\) is a cone itself, so simple linear variables are still allowed. The user only needs to specify subsets of variables which belong to non-trivial cones.
In this section we discuss the formulations which apply to the following cones supported by MOSEK:
The set \(\real^n\).
The zero cone \(\{(0,\ldots,0)\}\).
Quadratic cone\[\Q^n = \left\lbrace x \in \real^n: x_1 \geq \sqrt{\sum_{j=2}^n x_j^2} \right\rbrace.\]
Rotated quadratic cone\[\Qr^n = \left\lbrace x \in \real^n: 2 x_1 x_2 \geq \sum_{j=3}^n x_j^2,\quad x_1 \geq 0,\quad x_2 \geq 0 \right\rbrace.\]
Primal exponential cone\[\EXP = \left\lbrace x \in \real^{3}: x_1\geq x_2\exp(x_3/x_2),\quad x_1,x_2 \geq 0 \right\rbrace\]
as well as its dual\[\EXP^* = \left\lbrace x \in \real^{3}: x_1 \geq -x_3 e^{-1}\exp(x_2/x_3), \quad x_3\leq 0,x_1 \geq 0 \right\rbrace.\]
Primal power cone (with parameter \(0< \alpha< 1\))\[\POW^{\alpha,1-\alpha}_n = \left\lbrace x \in \real^{n}: x_1^\alpha x_2^{1-\alpha}\geq \sqrt{\sum_{j=3}^{n} x_j^2},\quad x_1,x_2 \geq 0 \right\rbrace\]
as well as its dual\[(\POW^{\alpha,1-\alpha}_n )^* = \left\lbrace x \in \real^{n}: \left(\frac{x_1}{\alpha}\right)^\alpha \left(\frac{x_2}{1-\alpha}\right)^{1-\alpha} \geq \sqrt{\sum_{j=3}^{n} x^2_j}, \quad x_1, x_2 \geq 0 \right\rbrace.\]
MOSEK supports also the cone of positive semidefinite matrices. Since that is handled through a separate interface, we discuss it in Sec. 12.3 (Semidefinite Optimization).
12.2.1 Duality for Conic Optimization¶
Corresponding to the primal problem (12.8), there is a dual problem
where the dual cone \(\K^*\) is a Cartesian product of the cones dual to \(\K_t\). In practice this means that \(s_n^x\) has one entry for each entry in \(x\). Please note that the dual problem of the dual problem is identical to the original primal problem.
If a bound in the primal problem is plus or minus infinity, the corresponding dual variable is fixed at 0, and we use the convention that the product of the bound value and the corresponding dual variable is 0. This is equivalent to removing variable \((s_l^x)_j\) from the dual problem. In other words:
A solution
to the dual problem is feasible if it satisfies all the constraints in (12.9). If (12.9) has at least one feasible solution, then (12.9) is (dual) feasible, otherwise the problem is (dual) infeasible.
A solution
is denoted a primal-dual feasible solution, if \((x^*)\) is a solution to the primal problem (12.8) and \((y^*,(s_l^c)^*,(s_u^c)^*,(s_l^x)^*,(s_u^x)^*,(s_n^x)^*)\) is a solution to the corresponding dual problem (12.9)., under some non-degeneracy assumptions that exclude ill-posed cases, a conic optimization problem has an optimal solution if and only if there exist feasible primal-dual solution so that the duality gap is zero, or, equivalently, that the complementarity conditions
are satisfied.
If (12.8) has an optimal solution and MOSEK solves the problem successfully, both the primal and dual solution are reported, including a status indicating the exact state of the solution.
12.2.2 Infeasibility for Conic Optimization¶
12.2.2.1 Primal Infeasible Problems¶
If the problem (12.8).12) is unbounded, and that (12.8) is infeasible.
12.2.2.2 Dual Infeasible Problems¶
If the problem (12.9).13) is unbounded, and that (12.9) is infeasible.
In case that both the primal problem (12.8) and the dual problem (12.9) are infeasible, MOSEK will report only one of the two possible certificates — which one is not defined (MOSEK returns the first certificate found).
12.2.3 Minimalization vs. Maximalization¶
When the objective sense of problem (12.8) is maximization, i.e.
the objective sense of the dual problem changes to minimization, and the domain of all dual variables changes sign in comparison to (12.2). The dual problem thus takes the form
This means that the duality gap, defined in (12.10).13) such that \(c^Tx>0\). | https://docs.mosek.com/latest/toolbox/prob-def-conic.html | 2022-06-25T11:55:00 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.mosek.com |
Depending on whether you use a Stencil or a Blueprint theme, the way to add the Instant Search Widget to your storefront will vary.
Stencil themes
Stencil themes are installed for new stores automatically. BigCommerce Cornerstone theme is the default Stencil theme. Searchanise widgets are added to your store automatically upon installing the app, so there’s no need to take further action from your side.
If you notice that Searchanise widgets are taking too much time to load, you can paste the Searchanise widgets’ code into your theme. Look here for instructions on how to do that.
Blueprint themes
You need to paste the Searchanise widgets’ code into your theme. To do so, follow these steps:
- Go to BigCommerce admin panel > Storefront > My themes > Current Theme.
- Click the Edit HTML/CSS link to open the in-browser template file editor.
- Open the HTMLHead.html file in the template file editor.
- Insert the following Searchanise widgets’ code before the </head> tag:
<script src="//searchserverapi.com/widgets/bigcommerce/init.js?api_key=[api_key]"></script>Important info
Replace [api_key] with the API Key of the Searchanise engine for your store. You can find it in the Searchanise control panel > Dashboard section.
- Save the changes.
We’d appreciate it if you could take some time to leave a review. | https://docs.searchanise.io/add-instant-search-widget-to-storefront-bigcommerce/ | 2022-06-25T10:11:25 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.searchanise.io |
TOPICS×
CNAME and Adobe Target
Instructions for working with Adobe Client Care to implement CNAME (Canonical Name) support in Adobe Target. To best handle ad blocking issues, or ITP-related cookie policies, a CNAME is used so calls are made to a domain owned by the customer rather than a domain owned by Adobe.
Request CNAME support
Perform the following steps to request CNAME support in Target:
- Determine the list of hostnames you need for your SSL certificate (see FAQ).
- For each hostname, create a CNAME record in your DNS pointing to your regular Target hostname clientcode.tt.omtrdc.net .For example, if your client code is "cnamecustomer" and your proposed hostname is target.example.com , your DNS CNAME record should look something like (aka BYOC), please fill out these additional details:
- Certificate organization (example: Example Company Inc):
- Certificate organizational unit (optional, example: Marketing):
- Certificate country (example: US):
- Certificate state/region (example: California):
- Certificate city (example: San Jose):
- If Adobe is purchasing the certificate, Adobe will work with DigiCert to purchase and deploy your certificate on Adobe's production servers.If the customer is purchasing the certificate (BYOC), Adobe Client Care will send you the certificate signing request (CSR), which you will need to use when purchasing the certificate through your certificate authority of choice. After the certificate is issued, you must send a copy of the certificate and any intermediate certificates back to Adobe Client Care for deployment.Adobe Client Care will notify you when your implementation is ready.
- After completing the preceding tasks and Adobe Client Care has notified you that the implementation is ready, you must update the serverDomain to the new CNAME in at.js.
Frequently Asked Questions
The following information answers frequently asked questions about requesting and implementing CNAME support in Target:
Can I provide my own certificate (aka bring-your-own-certificate or BYOC)?
Yes, you can provide your own certificate; however, it is not recommended. Management of the SSL certificate lifecycle is significantly easier for both Adobe and you when Adobe purchases and controls the certificate. SSL certificates must be renewed every year, which means Adobe Client Care must contact you every year to send Adobe a new certificate in a timely manner. Some customers might have difficulty producing a renewed certificate in a timely manner every year, which jeopardizes their Target implementation because browsers will refuse connections when the certificate expires.
Be aware that if you request a Target bring-your-own-certificate CNAME implementation, you are responsible for providing renewed certificates to Adobe Client Care every year. Allowing your CNAME certificate to expire before Adobe can deploy a renewed certificate will result in an outage for your specific Target implementation.
What hostnames should I choose? How many hostnames per domain should I choose?
Target CNAME implementations require only one hostname per domain on the SSL certificate and in the customer's DNS, so that's what we recommend. Some customers might require additional hostnames per domain for their own purposes (testing in staging, for example), which is supported.
Most customers choose a hostname like target.example.com , so that's what we recommend, but the choice is ultimately yours. Be sure not to request a hostname of an existing DNS record as that would cause a conflict and delay time to resolution of your Target CNAME request.
I already have a CNAME implementation for Adobe Analytics, can we use the same certificate or hostname?
No, Target requires a separate hostname and certificate.
Is my current implementation of Target impacted by ITP 2.x?'ll need a separate Target CNAME only in the case of ad-blocking scenarios where Target is blocked.
For more information about ITP, see Apple Intelligent Tracking Prevention (ITP) 2.x .
What kind of service disruptions can I expect when my CNAME implementation is deployed?
There is no service disruption when the certificate is deployed (including certificate renewals). However, when you change the hostname in your Target implementation code ( serverDomain in at.js) to the new CNAME hostname ( target.example.com ), web browsers will treat returning visitors as new visitors and their profile data will be lost because the previous cookie will be inaccessible under the old hostname ( clientcode.tt.omtrdc.net ) due to browser security models. This is a one-time disruption only on the initial cut-over to the new CNAME. Certificate renewals do not have the same effect since the hostname doesn't change.
What key type and certificate signature algorithm will be used for my CNAME implementation?
All certificates are RSA SHA-256 and keys are RSA 2048-bit, by default. Key sizes larger than 2048-bit are not currently supported.
How can I validate my CNAME implementation is ready for traffic?
Use the following set of commands (in the MacOs or Linux command-line terminal, using bash and curl 7.49+):
- First paste this bash function into your terminal:
function validateEdgeFpsslSni { domain=$1 for edge in mboxedge{31,32,{34..38}}.tt.omtrdc.net; do echo "$edge: $(curl -sSv --connect-to $domain:443:$edge:443 2>&1 | grep subject:)" done }
- Next paste this command (replacing target.example.com with your hostname):
validateEdgeFpsslSni target.example.comIf the implementation is ready, you should see output like below. The important part is that all lines show CN=target.example.com , which matches our desired hostname. If any of them show, you might need to wait for your DNS updates to fully propagate. DNS records have an associated TTL (time-to-live) that dictates cache expiration time for DNS replies of those records, so you may need to wait at least as long as your TTLs. You can use the dig target.example.com command or the G Suite Toolbox to look up your specific TTLs.
Known limitations
- QA mode will not be sticky when you have CNAME and at.js 1.x because it is based on a third-party cookie. The workaround is to add the preview parameters to each URL you navigate to. QA mode is sticky when you have CNAME and at.js 2.x.
- Currently the overrideMboxEdgeServer setting doesn't work properly with CNAME when using at.js versions prior to at.js 1.8.2 and at.js 2.3.1. If you are using an older version of at.js this should be set as false in order to avoid failing requests. Alternatively, you should consider updating at.js to a newer, supported version.
- When using CNAME it becomes more likely that the size of the cookie header for Target calls will increase. We recommend keeping the cookie size under 8KB. | https://docs.adobe.com/content/help/en/target/using/implement-target/before-implement/implement-cname-support-in-target.html | 2020-10-23T22:40:34 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.adobe.com |
Contents
- Before you install the integration app
- Install integrator.io bundle in NetSuite
- Install Amazon bundle in NetSuite
- Configure your NetSuite account
-
- Install the integration app
The Amazon Seller Central-NetSuite integration app helps online retailers combine the powerful Amazon online marketplace platform with the proven back-office features of NetSuite and keep the orders, customers, fulfillments, pricing & inventory levels in sync.
Before you install the integration app
Before you install the Amazon Seller Central-NetSuite integration app, perform the following:
- Perform the steps mentioned in the Prerequisites and Pre-installation tasks articles.
- Install the integrator.io and Amazon bundles in NetSuite.
Install integrator.io bundle in NetSuite
- Login to your NetSuite account.
- Go to Customization > SuiteBundler > Search & Install Bundles.
- On the "Search & Install Bundles" page, in the Keywords field, enter Celigo integrator.io (bundle name) or 20038 (bundle ID).
- Click Search.
- Click Celigo.integrator.io.
- On the "Bundle Details" page, click Install.
- On the permission window, read through and click OK.
Your bundle will now be installed and you can find the same in the Search & Installed Bundles page (Customization > SuiteBundler > Search & Install Bundles > List).
Install Amazon bundle in NetSuite
- In NetSuite, go to Customization > SuiteBundler > Search & Install Bundles.
- On the "Search & Install Bundles" page, in the Keywords field, enter Celigo Amazon Connector [IO] (bundle name) or 169116 (bundle ID).
- Click Search.
- Click Celigo Amazon Connector [IO].
- On the "Bundle Details" page, click Install.
- On the permission window, read through and click OK.
Your bundle will now be installed and you can find the same in the Search & Installed Bundles page (Customization > SuiteBundler > Search & Install Bundles > List).
Configure your NetSuite account
Enable Token-Based authentication in your NetSuite account
- Log in to your NetSuite account as an Administrator.
- Go to Setup > Company > Enable Features.
- On the "Enable Features" page, click SuiteCloud.
- In the Manage Authentication section, check the TOKEN-BASED AUTHENTICATION checkbox.
- Click Save.
Create a custom role in NetSuite
- In NetSuite, go to Setup > Users/Roles > Manage Roles.
- On the "Manage Roles" page, next to the "Celigo eTail SmartConnectors" role, click Customize.
- On the "Role" page, in the Name field, enter a different name to clone the role.
- In the "Permissions" tab, configure the permissions as per your business needs as needed.
- Click Save.
Select NetSuite user and assign a role
Select the NetSuite user account that will be used to connect your Amazon SmartConnector.
- In NetSuite, go to Setup > Users/Roles > Manage Users.
- On the "Manage Users" page, click on the appropriate user to connect your Amazon integration app.
- On the selected Employee page, click Edit.
- Go to Access tab > Roles sub-tab.
- Select the role that was created in the "Create a custom role in NetSuite" step.
- Click Save.
Generate NetSuite Access Tokens
- In NetSuite, go to Setup > Users/Roles > Access Tokens > New.
- In the APPLICATION NAME drop-down box, select eTail Connectors (Token-Based Auth).
- In the USER drop-down list box, select the user name that you have edited in the previous step.
- In the ROLE drop-down list box, select the role that was assigned to the user.
- The TOKEN NAME populates automatically. Modify the name as needed.
- Click Save.
- Token ID & Token Secret will be displayed. Save the tokens in a place where you can copy it into your Celigo connection as described in the next section.
Install the integration app
Prerequisite: Be sure that you have a valid subscription license to install the integration app.
- Login to your integrator.io account.
- In the left pane, click Marketplace.
- On the Marketplace page, click Amazon MWS.
- Click Install.
Note: If you see Contact Sales, instead of the Install, contact your Account Executive to check the status of your integration app license.
- On the "My Integrations" page, you can now the Amazon MWS-NetSuite integration app. On the tile, click Continue setup.
- Configure your connections:
Note: You must obtain your Seller ID, Marketplace ID, and MWS Authentication Token (account identifiers) first before you can complete this step. The process to generate these account identifiers is described in the following section: Enable Developer Access for Celigo in Amazon Seller Central (Registration).
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/115004621768-Install-the-Amazon-Seller-Central-NetSuite-integration-app | 2020-10-23T20:58:31 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.celigo.com |
- You can find the integrator.io release notes for August 2020 here.
- In the integrator.io current UI, you will now find all the advanced settings in the "Settings" tab.
- The settings in the “Opportunity” section are further divided into three sub-tabs: Opportunity, Notes and Files, and Team Selling.
Upgrade your integration app
We've upgraded the existing infrastructure. Be sure that your integration app is updated to the following infrastructure versions:
- Integration App v1.6.0
- SuiteApp version 1.6.0
- Celigo integrator.io (NetSuite Bundle 20038) 1.13.0.0
- Integrator Distributed Adaptor Package v1.15.0
- Celigo NetSuite Package v1.7.0
The release will be available in phases, starting from the third week of August. If you are using a NetSuite Sandbox account, please upgrade NetSuite Bundles. We recommend you to upgrade your Celigo NetSuite Package to v1.7.0 to ensure a smoother release.
What’s new
Support for “team selling” feature
Note: This feature is available only in the premium edition of the integration app.
- The “opportunity team” feature must be enabled in your Salesforce account.
- The “team selling” feature must be enabled in your NetSuite account to sync the team details using the integration app. Opportunity splits are available only if you enable this feature.
The integration app now allows you to sync the information related to the opportunity splits bi-directionally between Salesforce and NetSuite. You can find two types of opportunity splits:
- Revenue splits: Use this split type to credit team members who are directly responsible for opportunity revenue that always total 100% of the opportunity amount.
- Overlay splits: Use this split type to credit supporting team members that can total any percentage of the opportunity amount, including percentages over 100%.
For more information, see Understand the Salesforce "team selling" feature.
Sync records from multiple Salesforce instances to one NetSuite instance
The integration app is now enhanced to sync records from multiple Salesforce instances to a single NetSuite instance using a multi-integration tile approach. The integration app lets you sync data between:
- One Salesforce instance to one NetSuite instance or
- Multiple Salesforce instances to one NetSuite instance.
To sync records between multiple Salesforce instances and one NetSuite instance, you should install multiple Salesforce integration apps connected to different Salesforce accounts.
For more information, see Sync data from multiple Salesforce instances to a single NetSuite instance.
What’s enhanced
Install the integration app
The integration app installation steps are now enhanced in the following sequence:
- Configure Salesforce connection
- Install integrator.io package in Salesforce
- Install the NetSuite package in Salesforce
- Configure your NetSuite connection
Note: It is recommended to configure your NetSuite connection using the Token Based Auth (Automatic) option. For more information, refer to Set up a connection to NetSuite.
- Install the integrator.io bundle in NetSuite
- Install the Salesforce SuiteApp in NetSuite
For Salesforce connection, the JWT Oauth flow type can be used for new installations in addition to Refresh token. For existing installations, we will continue to support the Refresh token mechanism.
For more information, refer to Install your Salesforce - NetSuite integration app.
Updates a sales order that is past the “Pending Approval” state in NetSuite
The “Salesforce Opportunity to NetSuite sales order Add/Update” flow is now enhanced to sync the updated opportunity details after the sales order is past the “Pending Approval” state in NetSuite. A new checkbox Allow updates to approved Sales Orders are introduced in the Settings > Opportunity > Opportunity tab. When you check this checkbox, you can still make the changes to the opportunity in Salesforce and sync to the NetSuite sales order.
By default, the checkbox is unchecked. If you uncheck the checkbox and update the opportunity in Salesforce and try to sync to NetSuite, an error message “Sales Order ##### can't be updated once it is passed the “pending approval” state" is displayed.
Sync enhanced notes and files to NetSuite sales order
As the “Document” tab is unavailable in the Salesforce Lightning experience and existing notes are enhanced to support rich text, the integration app is enhanced to sync enhanced notes and files. The “Sync Enhanced Notes to NetSuite Sales Orders” flow syncs the Salesforce enhanced notes that are based on the rich text formatting.
- All the settings are now categorized in the “Notes and Files” tab in the Settings > Opportunity section.
- Documents cannot be in the Salesforce Lightning Experience.
Improved uninstaller
Prerequisite: To uninstall the “contract renewals” add-on, the NetSuite connection should be online.
You can now uninstall the integration app and the add-on without any errors in the following scenarios:
- If you refresh the page during uninstallation.
- If you partially installed the integration app (with or without configuring the connections)
- If the NetSuite connection is offline in your integration app.
- If the installation or uninstallation is stuck in between.
- If the integration app is on the older version.
- If the bundle is removed as part of your NetSuite sandbox account refresh.
What’s fixed
Incorrect discount calculation of Item groups on the NetSuite sales order
If the Salesforce opportunity with an item group has a discount applied and when you run the “Salesforce Opportunity to NetSuite Sales Order” flow, the flow now syncs the correct calculation without any double calculation for discount.
Salesforce Opportunity amount: 30
Item group Quantity: 1
Discount: 10% on line amount
Amount synced on NetSuite sales order: 24
The correct amount to be synced on NetSuite sales order: 27
Unique key to be displayed as “Name”
The Unique key for an item in NetSuite setting in the Settings > General section now displays the fields related to all NetSuite items. The setting option is renamed to “Name” from “Item Name/Number.”
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360047661932-Salesforce-NetSuite-IO-release-notes-v1-6-0-August-2020 | 2020-10-23T21:26:56 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.celigo.com |
Django 1.9 release notes¶
December 1, 2015
Welcome 1.9 requires Python 2.7, 3.4, or 3.5. We highly recommend and only officially support the latest release of each series.
The Django 1.8 series is the last to support Python 3.2 and 3.3.
What’s new in Django 1.9¶
Performing actions after a transaction commit¶
The new
on_commit() hook allows performing actions
after a database transaction is successfully committed. This is useful for
tasks such as sending notification emails, creating queued tasks, or
invalidating caches.
This functionality from the django-transaction-hooks package has been integrated into Django.
Password validation¶
Django now offers password validation to help prevent the usage of weak
passwords by users. The validation is integrated in the included password
change and reset forms and is simple to integrate in any other code.
Validation is performed by one or more validators, configured in the new
AUTH_PASSWORD_VALIDATORS setting.
Four validators are included in Django, which can enforce a minimum length, compare the password to the user’s attributes like their name, ensure passwords aren’t entirely numeric, or check against an included list of common passwords. You can combine multiple validators, and some validators have custom configuration options. For example, you can choose to provide a custom list of common passwords. Each validator provides a help text to explain its requirements to the user.
By default, no validation is performed and all passwords are accepted, so if
you don’t set
AUTH_PASSWORD_VALIDATORS, you will not see any
change. In new projects created with the default
startproject
template, a simple set of validators is enabled. To enable basic validation in
the included auth forms for your project, you could set, for example:', }, ]
See Password validation for more details.
Permission mixins for class-based views¶
Django now ships with the mixins
AccessMixin,
LoginRequiredMixin,
PermissionRequiredMixin, and
UserPassesTestMixin to provide the
functionality of the
django.contrib.auth.decorators for class-based views.
These mixins have been taken from, or are at least inspired by, the
django-braces project.
There are a few differences between Django’s and
django-braces’
implementation, though:
- The
raise_exceptionattribute can only be
Trueor
False. Custom exceptions or callables are not supported.
- The
handle_no_permission()method does not take a
requestargument. The current request is available in
self.request.
- The custom
test_func()of
UserPassesTestMixindoes not take a
userargument. The current user is available in
self.request.user.
- The
permission_requiredattribute supports a string (defining one permission) or a list/tuple of strings (defining multiple permissions) that need to be fulfilled to grant access.
- The new
permission_denied_messageattribute allows passing a message to the
PermissionDeniedexception.
New styling for
contrib.admin¶
The admin sports a modern, flat design with new SVG icons which look perfect on HiDPI screens. It still provides a fully-functional experience to YUI’s A-grade browsers. Older browser may experience varying levels of graceful degradation.
Running tests in parallel¶
The
test command now supports a
--parallel option to run a project’s tests in multiple processes in parallel.
Each process gets its own database. You must ensure that different test cases don’t access the same resources. For instance, test cases that touch the filesystem should create a temporary directory for their own use.
This option is enabled by default for Django’s own test suite provided:
- the OS supports it (all but Windows)
- the database backend supports it (all the built-in backends but Oracle)
Minor features¶
django.contrib.admin¶
- Admin views now have
model_adminor
admin_siteattributes.
- The URL of the admin change view has been changed (was at
/admin/<app>/<model>/<pk>/by default and is now at
/admin/<app>/<model>/<pk>/change/). This should not affect your application unless you have hardcoded admin URLs. In that case, replace those links by reversing admin URLs instead. Note that the old URL still redirects to the new one for backwards compatibility, but it may be removed in a future version.
ModelAdmin.get_list_select_related()was added to allow changing the
select_related()values used in the admin’s changelist query based on the request.
- The
available_appscontext variable, which lists the available applications for the current user, has been added to the
AdminSite.each_context()method.
AdminSite.empty_value_displayand
ModelAdmin.empty_value_displaywere added to override the display of empty values in admin change list. You can also customize the value for each field.
- Added jQuery events when an inline form is added or removed on the change form page.
- The time picker widget includes a ‘6 p.m’ option for consistency of having predefined options every 6 hours.
- JavaScript slug generation now supports Romanian characters.
django.contrib.admindocs¶
- The model section of the
admindocsnow also describes methods that take arguments, rather than ignoring them.
django.contrib.auth¶
- The default iteration count for the PBKDF2 password hasher has been increased by 20%. This backwards compatible change will not affect users who have subclassed
django.contrib.auth.hashers.PBKDF2PasswordHasherto change the default value.
- The
BCryptSHA256PasswordHasherwill now update passwords if its
roundsattribute is changed.
AbstractBaseUserand
BaseUserManagerwere moved to a new
django.contrib.auth.base_usermodule so that they can be imported without including
django.contrib.authin
INSTALLED_APPS(doing so raised a deprecation warning in older versions and is no longer supported in Django 1.9).
- The permission argument of
permission_required()accepts all kinds of iterables, not only list and tuples.
- The new
PersistentRemoteUserMiddlewaremakes it possible to use
REMOTE_USERfor setups where the header is only populated on login pages instead of every request in the session.
- The
password_reset()view accepts an
extra_email_contextparameter.
django.contrib.contenttypes¶
- It’s now possible to use
order_with_respect_towith a
GenericForeignKey.
django.contrib.gis¶
- All
GeoQuerySetmethods have been deprecated and replaced by equivalent database functions. As soon as the legacy methods have been replaced in your code, you should even be able to remove the special
GeoManagerfrom your GIS-enabled classes.
- The GDAL interface now supports instantiating file-based and in-memory GDALRaster objects from raw data. Setters for raster properties such as projection or pixel values have been added.
- For PostGIS users, the new
RasterFieldallows storing GDALRaster objects. It supports automatic spatial index creation and reprojection when saving a model. It does not yet support spatial querying.
- The new
GDALRaster.warp()method allows warping a raster by specifying target raster properties such as origin, width, height, or pixel size (amongst others).
- The new
GDALRaster.transform()method allows transforming a raster into a different spatial reference system by specifying a target
srid.
- The new
GeoIP2class allows using MaxMind’s GeoLite2 databases which includes support for IPv6 addresses.
- The default OpenLayers library version included in widgets has been updated from 2.13 to 2.13.1.
django.contrib.postgres¶
- Added support for the
rangefield.contained_bylookup for some built in fields which correspond to the range fields.
- Added
JSONField.
- Added PostgreSQL specific aggregation functions.
- Added the
TransactionNowdatabase function.
django.contrib.sessions¶
- The session model and
SessionStoreclasses for the
dband
cached_dbbackends are refactored to allow a custom database session backend to build upon them. See Extending database-backed session engines for more details.
django.contrib.sites¶
get_current_site()now handles the case where
request.get_host()returns
domain:port, e.g.
example.com:80. If the lookup fails because the host does not match a record in the database and the host has a port, the port is stripped and the lookup is retried with the domain part only.
Cache¶
django.core.cache.backends.base.BaseCachenow has a
get_or_set()method.
django.views.decorators.cache.never_cache()now sends more persuasive headers (added
no-cache, no-store, must-revalidateto
Cache-Control) to better prevent caching. This was also added in Django 1.8.8.
CSRF¶
- The request header’s name used for CSRF authentication can be customized with
CSRF_HEADER_NAME.
- The CSRF referer header is now validated against the
CSRF_COOKIE_DOMAINsetting if set. See How it works for details.
- The new
CSRF_TRUSTED_ORIGINSsetting provides a way to allow cross-origin unsafe requests (e.g.
POST) over HTTPS.
Database backends¶
- The PostgreSQL backend (
django.db.backends.postgresql_psycopg2) is also available as
django.db.backends.postgresql. The old name will continue to be available for backwards compatibility.
File Storage¶
Storage.get_valid_name()is now called when the
upload_tois a callable.
Filenow has the
seekable()method when using Python 3.
Forms¶
ModelFormaccepts the new
Metaoption
field_classesto customize the type of the fields. See Overriding the default fields for details.
- You can now specify the order in which form fields are rendered with the
field_orderattribute, the
field_orderconstructor argument , or the
order_fields()method.
- A form prefix can be specified inside a form class, not only when instantiating a form. See Prefixes for forms for details.
- You can now specify keyword arguments that you want to pass to the constructor of forms in a formset.
SlugFieldnow accepts an
allow_unicodeargument to allow Unicode characters in slugs.
CharFieldnow accepts a
stripargument to strip input data of leading and trailing whitespace. As this defaults to
Truethis is different behavior from previous releases.
- Form fields now support the
disabledargument, allowing the field widget to be displayed disabled by browsers.
- It’s now possible to customize bound fields by overriding a field’s
get_bound_field()method.
Generic Views¶
- Class-based views generated using
as_view()now have
view_classand
view_initkwargsattributes.
method_decorator()can now be used with a list or tuple of decorators. It can also be used to decorate classes instead of methods.
Internationalization¶
- The
django.views.i18n.set_language()view now properly redirects to translated URLs, when available.
- The
django.views.i18n.javascript_catalog()view now works correctly if used multiple times with different configurations on the same page.
- The
django.utils.timezone.make_aware()function gained an
is_dstargument to help resolve ambiguous times during DST transitions.
- You can now use locale variants supported by gettext. These are usually used for languages which can be written in different scripts, for example Latin and Cyrillic (e.g.
be@latin).
- Added the
django.views.i18n.json_catalog()view to help build a custom client-side i18n library upon Django translations. It returns a JSON object containing a translations catalog, formatting settings, and a plural rule.
- Added the
name_translatedattribute to the object returned by the
get_language_infotemplate tag. Also added a corresponding template filter:
language_name_translated.
- You can now run
compilemessagesfrom the root directory of your project and it will find all the app message files that were created by
makemessages.
makemessagesnow calls xgettext once per locale directory rather than once per translatable file. This speeds up localization builds.
blocktranssupports assigning its output to a variable using
asvar.
- Two new languages are available: Colombian Spanish and Scottish Gaelic.
Management Commands¶
- The new
sendtestemailcommand lets you send a test email to easily confirm that email sending through Django is working.
- To increase the readability of the SQL code generated by
sqlmigrate, the SQL code generated for each migration operation is preceded by the operation’s description.
- The
dumpdatacommand output is now deterministically ordered. Moreover, when the
--outputoption is specified, it also shows a progress bar in the terminal.
- The
createcachetablecommand now has a
--dry-runflag to print out the SQL rather than execute it.
- The
startappcommand creates an
apps.pyfile. Since it doesn’t use
default_app_config(a discouraged API), you must specify the app config’s path, e.g.
'polls.apps.PollsConfig', in
INSTALLED_APPSfor it to be used (instead of just
'polls').
- When using the PostgreSQL backend, the
dbshellcommand can connect to the database using the password from your settings file (instead of requiring it to be manually entered).
- The
djangopackage may be run as a script, i.e.
python -m django, which will behave the same as
django-admin.
- Management commands that have the
--noinputoption now also take
--no-inputas an alias for that option.
Migrations¶
Initial migrations are now marked with an
initial = Trueclass attribute which allows
migrate --fake-initialto more easily detect initial migrations.
Added support for serialization of
functools.partialand
LazyObjectinstances.
When supplying
Noneas a value in
MIGRATION_MODULES, Django will consider the app an app without migrations.
When applying migrations, the “Rendering model states” step that’s displayed when running migrate with verbosity 2 or higher now computes only the states for the migrations that have already been applied. The model states for migrations being applied are generated on demand, drastically reducing the amount of required memory.
However, this improvement is not available when unapplying migrations and therefore still requires the precomputation and storage of the intermediate migration states.
This improvement also requires that Django no longer supports mixed migration plans. Mixed plans consist of a list of migrations where some are being applied and others are being unapplied. This was never officially supported and never had a public API that supports this behavior.
The
squashmigrationscommand now supports specifying the starting migration from which migrations will be squashed.
Models¶
QuerySet.bulk_create()now works on proxy models.
- Database configuration gained a
TIME_ZONEoption for interacting with databases that store datetimes in local time and don’t support time zones when
USE_TZis
True.
- Added the
RelatedManager.set()method to the related managers created by
ForeignKey,
GenericForeignKey, and
ManyToManyField.
- The
add()method on a reverse foreign key now has a
bulkparameter to allow executing one query regardless of the number of objects being added rather than one query per object.
- Added the
keep_parentsparameter to
Model.delete()to allow deleting only a child’s data in a model that uses multi-table inheritance.
Model.delete()and
QuerySet.delete()return the number of objects deleted.
- Added a system check to prevent defining both
Meta.orderingand
order_with_respect_toon the same model.
Date and timelookups can be chained with other lookups (such as
exact,
gt,
lt, etc.). For example:
Entry.objects.filter(pub_date__month__gt=6).
- Time lookups (hour, minute, second) are now supported by
TimeFieldfor all database backends. Support for backends other than SQLite was added but undocumented in Django 1.7.
- You can specify the
output_fieldparameter of the
Avgaggregate in order to aggregate over non-numeric columns, such as
DurationField.
- Added the
datelookup to
DateTimeFieldto allow querying the field by only the date portion.
- Added the
Greatestand
Leastdatabase functions.
- Added the
Nowdatabase function, which returns the current date and time.
Transformis now a subclass of Func() which allows
Transforms to be used on the right hand side of an expression, just like regular
Funcs. This allows registering some database functions like
Length,
Lower, and
Upperas transforms.
SlugFieldnow accepts an
allow_unicodeargument to allow Unicode characters in slugs.
- Added support for referencing annotations in
QuerySet.distinct().
connection.queriesshows queries with substituted parameters on SQLite.
- Query expressions can now be used when creating new model instances using
save(),
create(), and
bulk_create().
Requests and Responses¶
- Unless
HttpResponse.reason_phraseis explicitly set, it now is determined by the current value of
HttpResponse.status_code. Modifying the value of
status_codeoutside of the constructor will also modify the value of
reason_phrase.
- The debug view now shows details of chained exceptions on Python 3.
- The default 40x error views now accept a second positional parameter, the exception that triggered the view.
- View error handlers now support
TemplateResponse, commonly used with class-based views.
- Exceptions raised by the
render()method are now passed to the
process_exception()method of each middleware.
- Request middleware can now set
HttpRequest.urlconfto
Noneto revert any changes made by previous middleware and return to using the
ROOT_URLCONF.
- The
DISALLOWED_USER_AGENTScheck in
CommonMiddlewarenow raises a
PermissionDeniedexception as opposed to returning an
HttpResponseForbiddenso that
handler403is invoked.
- Added
HttpRequest.get_port()to fetch the originating port of the request.
- Added the
json_dumps_paramsparameter to
JsonResponseto allow passing keyword arguments to the
json.dumps()call used to generate the response.
- The
BrokenLinkEmailsMiddlewarenow ignores 404s when the referer is equal to the requested URL. To circumvent the empty referer check already implemented, some Web bots set the referer to the requested URL.
Templates¶
- Template tags created with the
simple_tag()helper can now store results in a template variable by using the
asargument.
- Added a
Context.setdefault()method.
- The django.template logger was added and includes the following messages:
- A
DEBUGlevel message for missing context variables.
- A
WARNINGlevel message for uncaught exceptions raised during the rendering of an
{% include %}when debug mode is off (helpful since
{% include %}silences the exception and returns an empty string).
- The
firstoftemplate tag supports storing the output in a variable using ‘as’.
Context.update()can now be used as a context manager.
- Django template loaders can now extend templates recursively.
- The debug page template postmortem now include output from each engine that is installed.
- Debug page integration for custom template engines was added.
- The
DjangoTemplatesbackend gained the ability to register libraries and builtins explicitly through the template
OPTIONS.
- The
timesinceand
timeuntilfilters were improved to deal with leap years when given large time spans.
- The
includetag now caches parsed templates objects during template rendering, speeding up reuse in places such as for loops.
Tests¶
- Added the
json()method to test client responses to give access to the response body as JSON.
- Added the
force_login()method to the test client. Use this method to simulate the effect of a user logging into the site while skipping the authentication and verification steps of
URLs¶
- Regular expression lookaround assertions are now allowed in URL patterns.
- The application namespace can now be set using an
app_nameattribute on the included module or object. It can also be set by passing a 2-tuple of (<list of patterns>, <application namespace>) as the first argument to
include().
- System checks have been added for common URL pattern mistakes.
Validators¶
- Added
django.core.validators.int_list_validator()to generate validators of strings containing integers separated with a custom character.
EmailValidatornow limits the length of domain name labels to 63 characters per RFC 1034.
- Added
validate_unicode_slug()to validate slugs that may contain Unicode characters.
Backwards incompatible changes in 1.9¶
Warning
In addition to the changes outlined in this section, be sure to review the Features removed in 1.9 for the features that have reached the end of their deprecation cycle and therefore been removed. If you haven’t updated your code within the deprecation timeline for a given feature, its removal may appear as a backwards incompatible change.
Database backend API¶
A couple of new tests rely on the ability of the backend to introspect column defaults (returning the result as
Field.default). You can set the
can_introspect_defaultdatabase feature to
Falseif your backend doesn’t implement this. You may want to review the implementation on the backends that Django includes for reference (#24245).
Registering a global adapter or converter at the level of the DB-API module to handle time zone information of
datetimevalues passed as query parameters or returned as query results on databases that don’t support time zones is discouraged. It can conflict with other libraries.
The recommended way to add a time zone to
datetimevalues fetched from the database is to register a converter for
DateTimeFieldin
DatabaseOperations.get_db_converters().
The
needs_datetime_string_castdatabase feature was removed. Database backends that set it must register a converter instead, as explained above.
The
DatabaseOperations.value_to_db_<type>()methods were renamed to
adapt_<type>field_value()to mirror the
convert_<type>field_value()methods.
To use the new
datelookup, third-party database backends may need to implement the
DatabaseOperations.datetime_cast_date_sql()method.
The
DatabaseOperations.time_extract_sql()method was added. It calls the existing
date_extract_sql()method. This method is overridden by the SQLite backend to add time lookups (hour, minute, second) to
TimeField, and may be needed by third-party database backends.
The
DatabaseOperations.datetime_cast_sql()method (not to be confused with
DatabaseOperations.datetime_cast_date_sql()mentioned above) has been removed. This method served to format dates on Oracle long before 1.0, but hasn’t been overridden by any core backend in years and hasn’t been called anywhere in Django’s code or tests.
In order to support test parallelization, you must implement the
DatabaseCreation._clone_test_db()method and set
DatabaseFeatures.can_clone_databases = True. You may have to adjust
DatabaseCreation.get_test_db_clone_settings().
Default settings that were tuples are now lists¶
The default settings in
django.conf.global_settings were a combination of
lists and tuples. All settings that were formerly tuples are now lists.
is_usable attribute on template loaders is removed¶.
Filesystem-based template loaders catch more specific exceptions¶
When using the
filesystem.Loader
or
app_directories.Loader
template loaders, earlier versions of Django raised a
TemplateDoesNotExist error if a template source existed
but was unreadable. This could happen under many circumstances, such as if
Django didn’t have permissions to open the file, or if the template source was
a directory. Now, Django only silences the exception if the template source
does not exist. All other situations result in the original
IOError being
raised.
HTTP redirects no longer forced to absolute URIs¶
Relative redirects are no longer converted to absolute URIs. RFC 2616
required the
Location header in redirect responses to be an absolute URI,
but it has been superseded by RFC 7231 which allows relative URIs in
Location, recognizing the actual practice of user agents, almost all of
which support them.
Consequently, the expected URLs passed to
assertRedirects should generally
no longer include the scheme and domain part of the URLs. For example,
self.assertRedirects(response, '') should be
replaced by
self.assertRedirects(response, '/some-url/') (unless the
redirection specifically contained an absolute URL, of course).
In the rare case that you need the old behavior (discovered with an ancient
version of Apache with
mod_scgi that interprets a relative redirect as an
“internal redirect”), you can restore it by writing a custom middleware:
class LocationHeaderFix(object): def process_response(self, request, response): if 'Location' in response: response['Location'] = request.build_absolute_uri(response['Location']) return response
Dropped support for PostgreSQL 9.0¶
Upstream support for PostgreSQL 9.0 ended in September 2015. As a consequence, Django 1.9 sets 9.1 as the minimum PostgreSQL version it officially supports.
Dropped support for Oracle 11.1¶
Upstream support for Oracle 11.1 ended in August 2015. As a consequence, Django 1.9 sets 11.2 as the minimum Oracle version it officially supports.
Template
LoaderOrigin and
StringOrigin are removed¶
In previous versions of Django, when a template engine was initialized with
debug as
True, an instance of
django.template.loader.LoaderOrigin or
django.template.base.StringOrigin was set as the origin attribute on the
template object. These classes have been combined into
Origin and is now always set regardless of the
engine debug setting. For a minimal level of backwards compatibility, the old
class names will be kept as aliases to the new
Origin class until
Django 2.0.
Changes to the default logging configuration¶
To make it easier to write custom logging configurations, Django’s default
logging configuration no longer defines
django.request and
django.security loggers. Instead, it defines a single
django logger,
filtered at the
INFO level, with two handlers:
console: filtered at the
INFOlevel and only active if
DEBUG=True.
mail_admins: filtered at the
ERRORlevel and only active if
DEBUG=False.
If you aren’t overriding Django’s default logging, you should see minimal
changes in behavior, but you might see some new logging to the
runserver
console, for example.
If you are overriding Django’s default logging, you should check to see how your configuration merges with the new defaults.
HttpRequest details in error reporting¶
It was redundant to display the full details of the
HttpRequest each time it appeared as a stack frame
variable in the HTML version of the debug page and error email. Thus, the HTTP
request will now display the same standard representation as other variables
(
repr(request)). As a result, the
ExceptionReporterFilter.get_request_repr() method and the undocumented
django.http.build_request_repr() function were removed.
The contents of the text version of the email were modified to provide a
traceback of the same structure as in the case of AJAX requests. The traceback
details are rendered by the
ExceptionReporter.get_traceback_text() method.
Removal of time zone aware global adapters and converters for datetimes¶
Django no longer registers global adapters and converters for managing time
zone information on
datetime values sent to the database as
query parameters or read from the database in query results. This change
affects projects that meet all the following conditions:
- The
USE_TZsetting is
True.
- The database is SQLite, MySQL, Oracle, or a third-party database that doesn’t support time zones. In doubt, you can check the value of
connection.features.supports_timezones.
- The code queries the database outside of the ORM, typically with
cursor.execute(sql, params).
If you’re passing aware
datetime parameters to such
queries, you should turn them into naive datetimes in UTC:
from django.utils import timezone param = timezone.make_naive(param, timezone.utc)
If you fail to do so, the conversion will be performed as in earlier versions (with a deprecation warning) up until Django 1.11. Django 2.0 won’t perform any conversion, which may result in data corruption.
If you’re reading
datetime values from the results, they
will be naive instead of aware. You can compensate as follows:
from django.utils import timezone value = timezone.make_aware(value, timezone.utc)
You don’t need any of this if you’re querying the database through the ORM,
even if you’re using
raw()
queries. The ORM takes care of managing time zone information.
Template tag modules are imported when templates are configured¶
The
DjangoTemplates backend now
performs discovery on installed template tag modules when instantiated. This
update enables libraries to be provided explicitly via the
'libraries'
key of
OPTIONS when defining a
DjangoTemplates backend. Import
or syntax errors in template tag modules now fail early at instantiation time
rather than when a template with a
{% load %} tag is first
compiled.
django.template.base.add_to_builtins() is removed¶
Although it was a private API, projects commonly used
add_to_builtins() to
make template tags and filters available without using the
{% load %} tag. This API has been formalized. Projects should now
define built-in libraries via the
'builtins' key of
OPTIONS when defining a
DjangoTemplates backend.
simple_tag now wraps tag output in
conditional_escape¶
In general, template tags do not autoescape their contents, and this behavior is
documented. For tags like
inclusion_tag, this is not a problem because
the included template will perform autoescaping. For
assignment_tag, the output will be escaped
when it is used as a variable in the template.
For the intended use cases of
simple_tag,
however, it is very easy to end up with incorrect HTML and possibly an XSS
exploit. For example:
@register.simple_tag(takes_context=True) def greeting(context): return "Hello {0}!".format(context['request'].user.first_name)
In older versions of Django, this will be an XSS issue because
user.first_name is not escaped.
In Django 1.9, this is fixed: if the template context has
autoescape=True
set (the default), then
simple_tag will wrap the output of the tag function
with
conditional_escape().
To fix your
simple_tags, it is best to apply the following practices:
- Any code that generates HTML should use either the template system or
format_html().
- If the output of a
simple_tagneeds escaping, use
escape()or
conditional_escape().
- If you are absolutely certain that you are outputting HTML from a trusted source (e.g. a CMS field that stores HTML entered by admins), you can mark it as such using
mark_safe().
Tags that follow these rules will be correct and safe whether they are run on Django 1.9+ or earlier.
Paginator.page_range¶
Paginator.page_range is
now an iterator instead of a list.
In versions of Django previous to 1.8,
Paginator.page_range returned a
list in Python 2 and a
range in Python 3. Django 1.8 consistently
returned a list, but an iterator is more efficient.
Existing code that depends on
list specific features, such as indexing,
can be ported by converting the iterator into a
list using
list().
Implicit
QuerySet
__in lookup removed¶
In earlier versions, queries such as:
Model.objects.filter(related_id=RelatedModel.objects.all())
would implicitly convert to:
Model.objects.filter(related_id__in=RelatedModel.objects.all())
resulting in SQL like
"related_id IN (SELECT id FROM ...)".
This implicit
__in no longer happens so the “IN” SQL is now “=”, and if the
subquery returns multiple results, at least some databases will throw an error.
contrib.admin browser support¶
The admin no longer supports Internet Explorer 8 and below, as these browsers have reached end-of-life.
CSS and images to support Internet Explorer 6 and 7 have been removed. PNG and GIF icons have been replaced with SVG icons, which are not supported by Internet Explorer 8 and earlier.
The jQuery library embedded in the admin has been upgraded from version 1.11.2 to 2.1.4. jQuery 2.x has the same API as jQuery 1.x, but does not support Internet Explorer 6, 7, or 8, allowing for better performance and a smaller file size. If you need to support IE8 and must also use the latest version of Django, you can override the admin’s copy of jQuery with your own by creating a Django application with this structure:
app/static/admin/js/vendor/ jquery.js jquery.min.js
SyntaxError when installing Django setuptools 5.5.x¶
When installing Django 1.9 or 1.9.1 with setuptools 5.5.x, you’ll see:
Compiling django/conf/app_template/apps.py ... File "django/conf/app_template/apps.py", line 4 class {{ camel_case_app_name }}Config(AppConfig): ^ SyntaxError: invalid syntax Compiling django/conf/app_template/models.py ... File "django/conf/app_template/models.py", line 1 {{ unicode_literals }}from django.db import models ^ SyntaxError: invalid syntax
It’s safe to ignore these errors (Django will still install just fine), but you
can avoid them by upgrading setuptools to a more recent version. If you’re
using pip, you can upgrade pip using
pip install -U pip which will also
upgrade setuptools. This is resolved in later versions of Django as described
in the Django 1.9.2 release notes.
Miscellaneous¶
- The jQuery static files in
contrib.adminhave been moved into a
vendor/jquerysubdirectory.
- The text displayed for null columns in the admin changelist
list_displaycells has changed from
(None)(or its translated equivalent) to
-(a dash).
django.http.responses.REASON_PHRASESand
django.core.handlers.wsgi.STATUS_CODE_TEXThave been removed. Use Python’s stdlib instead:
http.client.responsesfor Python 3 and httplib.responses for Python 2.
ValuesQuerySetand
ValuesListQuerySethave been removed.
- The
admin/base.htmltemplate no longer sets
window.__admin_media_prefix__or
window.__admin_utc_offset__. Image references in JavaScript that used that value to construct absolute URLs have been moved to CSS for easier customization. The UTC offset is stored on a data attribute of the
<body>tag.
CommaSeparatedIntegerFieldvalidation has been refined to forbid values like
',',
',1', and
'1,,2'.
- Form initialization was moved from the
ProcessFormView.get()method to the new
FormMixin.get_context_data()method. This may be backwards incompatible if you have overridden the
get_context_data()method without calling
super().
- Support for PostGIS 1.5 has been dropped.
- The
django.contrib.sites.models.Site.domainfield was changed to be
unique.
- In order to enforce test isolation, database queries are not allowed by default in
SimpleTestCasetests anymore. You can disable this behavior by setting the
allow_database_queriesclass attribute to
Trueon your test class.
ResolverMatch.app_namewas changed to contain the full namespace path in the case of nested namespaces. For consistency with
ResolverMatch.namespace, the empty value is now an empty string instead of
None.
- For security hardening, session keys must be at least 8 characters.
- Private function
django.utils.functional.total_ordering()has been removed. It contained a workaround for a
functools.total_ordering()bug in Python versions older than 2.7.3.
- XML serialization (either through
dumpdataor the syndication framework) used to output any characters it received. Now if the content to be serialized contains any control characters not allowed in the XML 1.0 standard, the serialization will fail with a
ValueError.
CharFieldnow strips input of leading and trailing whitespace by default. This can be disabled by setting the new
stripargument to
False.
- Template text that is translated and uses two or more consecutive percent signs, e.g.
"%%", may have a new msgid after
makemessagesis run (most likely the translation will be marked fuzzy). The new
msgidwill be marked
"#, python-format".
- If neither
request.current_appnor
Context.current_appare set, the
urltemplate tag will now use the namespace of the current request. Set
request.current_appto
Noneif you don’t want to use a namespace hint.
- The
SILENCED_SYSTEM_CHECKSsetting now silences messages of all levels. Previously, messages of
ERRORlevel or higher were printed to the console.
- The
FlatPage.enable_commentsfield is removed from the
FlatPageAdminas it’s unused by the application. If your project or a third-party app makes use of it, create a custom ModelAdmin to add it back.
- The return value of
setup_databases()and the first argument of
teardown_databases()changed. They used to be
(old_names, mirrors)tuples. Now they’re just the first item,
old_names.
- By default
LiveServerTestCaseattempts to find an available port in the 8081-8179 range instead of just trying port 8081.
- The system checks for
ModelAdminnow check instances rather than classes.
- The private API to apply mixed migration plans has been dropped for performance reasons. Mixed plans consist of a list of migrations where some are being applied and others are being unapplied.
- The related model object descriptor classes in
django.db.models.fields.related(private API) are moved from the
relatedmodule to
related_descriptorsand renamed as follows:
ReverseSingleRelatedObjectDescriptoris
ForwardManyToOneDescriptor
SingleRelatedObjectDescriptoris
ReverseOneToOneDescriptor
ForeignRelatedObjectsDescriptoris
ReverseManyToOneDescriptor
ManyRelatedObjectsDescriptoris
ManyToManyDescriptor
- If you implement a custom
handler404view, it must return a response with an HTTP 404 status code. Use
HttpResponseNotFoundor pass
status=404to the
HttpResponse. Otherwise,
APPEND_SLASHwon’t work correctly with
DEBUG=False.
Features deprecated in 1.9¶
assignment_tag()¶
Django 1.4 added the
assignment_tag helper to ease the creation of
template tags that store results in a template variable. The
simple_tag() helper has gained this same
ability, making the
assignment_tag obsolete. Tags that use
assignment_tag should be updated to use
simple_tag.
{% cycle %} syntax with comma-separated arguments¶
The
cycle tag supports an inferior old syntax from previous Django
versions:
{% cycle row1,row2,row3 %}
Its parsing caused bugs with the current syntax, so support for the old syntax will be removed in Django 1.10 following an accelerated deprecation.
ForeignKey and
OneToOneField
on_delete argument¶
In order to increase awareness about cascading model deletion, the
on_delete argument of
ForeignKey and
OneToOneField will be required
in Django 2.0.
Update models and existing migrations to explicitly set the argument. Since the
default is
models.CASCADE, add
on_delete=models.CASCADE to all
ForeignKey and
OneToOneFields that don’t use a different option. You
can also pass it as the second positional argument if you don’t care about
compatibility with older versions of Django.
Field.rel changes¶
Field.rel and its methods and attributes have changed to match the related
fields API. The
Field.rel attribute is renamed to
remote_field and many
of its methods and attributes are either changed or renamed.
The aim of these changes is to provide a documented API for relation fields.
GeoManager and
GeoQuerySet custom methods¶
All custom
GeoQuerySet methods (
area(),
distance(),
gml(), …)
have been replaced by equivalent geographic expressions in annotations (see in
new features). Hence the need to set a custom
GeoManager to GIS-enabled
models is now obsolete. As soon as your code doesn’t call any of the deprecated
methods, you can simply remove the
objects = GeoManager() lines from your
models.
Template loader APIs have changed¶
Django template loaders have been updated to allow recursive template
extending. This change necessitated a new template loader API. The old
load_template() and
load_template_sources() methods are now deprecated.
Details about the new API can be found in the template loader
documentation.
Passing a 3-tuple or an
app_name to
include()¶
The instance namespace part of passing a tuple as an argument to
include()
has been replaced by passing the
namespace argument to
include(). For
example:
polls_patterns = [ url(...), ] urlpatterns = [ url(r'^polls/', include((polls_patterns, 'polls', 'author-polls'))), ]
becomes:
polls_patterns = ([ url(...), ], 'polls') # 'polls' is the app_name urlpatterns = [ url(r'^polls/', include(polls_patterns, namespace='author-polls')), ]
The
app_name argument to
include() has been replaced by passing a
2-tuple (as above), or passing an object or module with an
app_name
attribute (as below). If the
app_name is set in this new way, the
namespace argument is no longer required. It will default to the value of
app_name. For example, the URL patterns in the tutorial are changed from:
urlpatterns = [ url(r'^polls/', include('polls.urls', namespace="polls")), ... ]
to:
urlpatterns = [ url(r'^polls/', include('polls.urls')), # 'namespace="polls"' removed ... ]
app_name = 'polls' # added urlpatterns = [...]
This change also means that the old way of including an
AdminSite instance
is deprecated. Instead, pass
admin.site.urls directly to
url():
from django.conf.urls import url from django.contrib import admin urlpatterns = [ url(r'^admin/', admin.site.urls), ]
URL application namespace required if setting an instance namespace¶
In the past, an instance namespace without an application namespace would serve the same purpose as the application namespace, but it was impossible to reverse the patterns if there was an application namespace with the same name. Includes that specify an instance namespace require that the included URLconf sets an application namespace.
current_app parameter to
contrib.auth views¶
All views in
django.contrib.auth.views have the following structure:
def view(request, ..., current_app=None, ...): ... if current_app is not None: request.current_app = current_app return TemplateResponse(request, template_name, context)
As of Django 1.8,
current_app is set on the
request object. For
consistency, these views will require the caller to set
current_app on the
request instead of passing it in a separate argument.
django.contrib.gis.geoip¶
The
django.contrib.gis.geoip2 module supersedes
django.contrib.gis.geoip. The new module provides a similar API except that
it doesn’t provide the legacy GeoIP-Python API compatibility methods.
Miscellaneous¶
- The
weakargument to
django.dispatch.signals.Signal.disconnect()has been deprecated as it has no effect.
- The
check_aggregate_support()method of
django.db.backends.base.BaseDatabaseOperationshas been deprecated and will be removed in Django 2.0. The more general
check_expression_support()should be used instead.
django.forms.extrasis deprecated. You can find
SelectDateWidgetin
django.forms.widgets(or simply
django.forms) instead.
- Private API
django.db.models.fields.add_lazy_relation()is deprecated.
- The
django.contrib.auth.tests.utils.skipIfCustomUser()decorator is deprecated. With the test discovery changes in Django 1.6, the tests for
django.contribapps are no longer run as part of the user’s project. Therefore, the
@skipIfCustomUserdecorator is no longer needed to decorate tests in
django.contrib.auth.
- If you customized some error handlers, the view signatures with only one request parameter are deprecated. The views should now also accept a second
exceptionpositional parameter.
- The
django.utils.feedgenerator.Atom1Feed.mime_typeand
django.utils.feedgenerator.RssFeed.mime_typeattributes are deprecated in favor of
content_type.
Signernow issues a warning if an invalid separator is used. This will become an exception in Django 1.10.
django.db.models.Field._get_val_from_obj()is deprecated in favor of
Field.value_from_object().
django.template.loaders.eggs.Loaderis deprecated as distributing applications as eggs is not recommended.
- The
callable_objkeyword argument to
SimpleTestCase.assertRaisesMessage()is deprecated. Pass the callable as a positional argument instead.
- The
allow_tagsattribute on methods of
ModelAdminhas been deprecated. Use
format_html(),
format_html_join(), or
mark_safe()when constructing the method’s return value instead.
- The
enclosurekeyword argument to
SyndicationFeed.add_item()is deprecated. Use the new
enclosuresargument which accepts a list of
Enclosureobjects instead of a single one.
- The
django.template.loader.LoaderOriginand
django.template.base.StringOriginaliases for
django.template.base.Originare deprecated.
Features removed in 1.9¶
These features have reached the end of their deprecation cycle and are removed in Django 1.9. See Features deprecated in 1.7 for details, including how to remove usage of these features.
django.utils.dictconfigis removed.
django.utils.importlibis removed.
django.utils.tzinfois removed.
django.utils.unittestis removed.
- The
syncdbcommand is removed.
django.db.models.signals.pre_syncdband
django.db.models.signals.post_syncdbis removed.
- Support for
allow_syncdbon database routers is removed.
- Automatic syncing of apps without migrations is removed. Migrations are compulsory for all apps unless you pass the
migrate --run-syncdboption.
- The SQL management commands for apps without migrations,
sql,
sqlall,
sqlclear,
sqldropindexes, and
sqlindexes, are removed.
- Support for automatic loading of
initial_datafixtures and initial SQL data is removed.
- All models need to be defined inside an installed application or declare an explicit
app_label. Furthermore, it isn’t possible to import them before their application is loaded. In particular, it isn’t possible to import models inside the root package of an application.
- The model and form
IPAddressFieldis removed. A stub field remains for compatibility with historical migrations.
AppCommand.handle_app()is no longer supported.
RequestSiteand
get_current_site()are no longer importable from
django.contrib.sites.models.
- FastCGI support via the
runfcgimanagement command is removed.
django.utils.datastructures.SortedDictis removed.
ModelAdmin.declared_fieldsetsis removed.
- The
utilmodules that provided backwards compatibility are removed:
django.contrib.admin.util
django.contrib.gis.db.backends.util
django.db.backends.util
django.forms.util
ModelAdmin.get_formsetsis removed.
- The backward compatible shims introduced to rename the
BaseMemcachedCache._get_memcache_timeout()method to
get_backend_timeout()is removed.
- The
--naturaland
-noptions for
dumpdataare removed.
- The
use_natural_keysargument for
serializers.serialize()is removed.
- Private API
django.forms.forms.get_declared_fields()is removed.
- The ability to use a
SplitDateTimeWidgetwith
DateTimeFieldis removed.
- The
WSGIRequest.REQUESTproperty is removed.
- The class
django.utils.datastructures.MergeDictis removed.
- The
zh-cnand
zh-twlanguage codes are removed.
- The internal
django.utils.functional.memoize()is removed.
django.core.cache.get_cacheis removed.
django.db.models.loadingis removed.
- Passing callable arguments to querysets is no longer possible.
BaseCommand.requires_model_validationis removed in favor of
requires_system_checks. Admin validators is replaced by admin checks.
- The
ModelAdmin.validator_classand
default_validator_classattributes are removed.
ModelAdmin.validate()is removed.
django.db.backends.DatabaseValidation.validate_fieldis removed in favor of the
check_fieldmethod.
- The
validatemanagement command is removed.
django.utils.module_loading.import_by_pathis removed in favor of
django.utils.module_loading.import_string.
ssiand
urltemplate tags are removed from the
futuretemplate tag library.
django.utils.text.javascript_quote()is removed.
- Database test settings as independent entries in the database settings, prefixed by
TEST_, are no longer supported.
- The cache_choices option to
ModelChoiceFieldand
ModelMultipleChoiceFieldis removed.
- The default value of the
RedirectView.permanentattribute has changed from
Trueto
False.
django.contrib.sitemaps.FlatPageSitemapis removed in favor of
django.contrib.flatpages.sitemaps.FlatPageSitemap.
- Private API
django.test.utils.TestTemplateLoaderis removed.
- The
django.contrib.contenttypes.genericmodule is removed. | https://docs.djangoproject.com/en/1.10/releases/1.9/ | 2020-10-23T22:36:01 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.djangoproject.com |
Frequently asked questions about classification and labeling in Azure Information Protection
Applies to: Azure Information Protection, Office 365.
Have a question about Azure Information Protection that is specifically about classification and labeling? See if it's answered here.
Which client do I install for testing new functionality?
Currently, there are two Azure Information Protection clients for Windows:
The Azure Information Protection unified labeling client that downloads labels and policy settings from one of the following admin centers: Office 365 Security & Compliance Center, Microsoft 365 security center, Microsoft 365 compliance center. This client is now in general availability, and might have a preview version for you to test additional functionality for a future release.
The Azure Information Protection client (classic) that downloads labels and policy settings from the Azure portal. This client builds on previous general availability versions of the client.
We recommend you test with the unified labeling client if its current feature set and functionality meet your business requirements. If not, or if you have configured labels in the Azure portal that you haven't yet migrated to the unified labeling store, use the classic client. For more information, including a feature and functionality comparison table, see Choose which Azure Information Protection client to use.
The Azure Information Protection client is supported on Windows only. To classify and protect documents and emails on iOS, Android, macOS, and the web, use Office apps that support built-in labeling.
Where can I find information about using sensitivity labels for Office apps?
See the following documentation resources:
Use sensitivity labels in Office apps
Enable sensitivity labels for Office files in SharePoint and OneDrive
Apply sensitivity labels to your documents and email within Office
For information about other scenarios that support sensitivity labels, see Common scenarios for sensitivity labels.
Can a file have more than one classification?
Users can select just one label at a time for each document or email, which often results in just one classification. However, if users select a sublabel, this actually applies two labels at the same time; a primary label and a secondary label. By using sublabels, a file can have two classifications that denote a parent\child relationship for an additional level of control.
For example, the label Confidential might contain sublabels such as Legal and Finance. You can apply different classification visual markings and different Rights Management templates to these sublabels. A user cannot select the Confidential label by itself; only one of its sublabels, such as Legal. As a result, the label that they see set is Confidential \ Legal. The metadata for that file includes one custom text property for Confidential, one custom text property for Legal, and another that contains both values (Confidential Legal).
When you use sublabels, don't configure visual markings, protection, and conditions at the primary label. When you use sublevels, configure these setting on the sublabel only. If you configure these settings on the primary label and its sublabel, the settings at the sublabel take precedence.
How do I prevent somebody from removing or changing a label?
Although there's a policy setting that requires users to state why they are lowering a classification label, removing a label, or removing protection, this setting does not prevent these actions. To prevent users from removing or changing a label, the content must already be protected and the protection permissions do not grant the user the Export or Full Control usage right.
When an email is labeled, do any attachments automatically get the same labeling?
No. When you label an email message that has attachments, those attachments do not inherit the same label. The attachments remain either without a label or retain a separately applied label. However, if the label for the email applies protection, that protection is applied to Office attachments.
How can DLP solutions and other applications integrate with Azure Information Protection?
Because Azure Information Protection uses persistent metadata for classification, which includes a clear-text label, this information can be read by DLP solutions and other applications.
For more information about this metadata, see Label information stored in emails and documents.
For examples of using this metadata with Exchange Online mail flow rules, see Configuring Exchange Online mail flow rules for Azure Information Protection labels.
Can I create a document template that automatically includes the classification?
Yes. You can configure a label to apply a header or footer that includes the label name. But if that doesn't meet your requirements, for the Azure Information Protection client (classic) only, you can create a document template that has the formatting you want and add the classification as a field code.
As an example, you might have a table in your document's header that displays the classification. Or, you use specific wording for an introduction that references the document's classification.
To add this field code in your document:
Label the document and save it. This action creates new metadata fields that you can now use for your field code.
In the document, position the cursor where you want to add the label's classification and then, from the Insert tab, select Text > Quick Parts > Field.
In the Field dialog box, from the Categories dropdown, select Document Information. Then, from the Fields names dropdown, select DocProperty.
From the Property dropdown, select Sensitivity, and select OK.
The current label's classification is displayed in the document and this value will be refreshed automatically whenever you open the document or use the template. So if the label changes, the classification that is displayed for this field code is automatically updated in the document.
How is classification for emails using Azure Information Protection different from Exchange message classification? mobile mail applications, the label classification and corresponding label markings are automatically added.
You can use this same technique to use your labels with Outlook on the web and these mobile mail applications.
Note that there's no need to do this if you're using Outlook on the web with Exchange Online, because this combination supports built-in labeling when you publish sensitivity labels from the Office 365 Security & Compliance Center, Microsoft 365 security center, or Microsoft compliance center.
If you cannot use built-in labeling with Outlook on the web, see the configuration steps for this workaround: Integration with the legacy Exchange message classification | https://docs.microsoft.com/en-us/azure/information-protection/faqs-infoprotect | 2020-10-23T23:01:07 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.microsoft.com |
Requires Unity 2017.2+
Unity 5.6 and earlier are not supported. The plugin uses
UnityWebRequestfeatures introduced in 2017.1, and
Screen.safeAreafor mobile notch handling introduced in 2017.2.
Supports Android 6+, iOS 7.0+ (Android 8+ and iOS 10.0+ for native SDK)
Installation
You can install plugin from Github. Import the provided .unitypackage in your project, or paste the Monetizr folder from the Assets folder in your project.
Note about provided code examples
All plugin code is contained within the Monetizr namespace. If given code examples are not working make sure you have
using Monetizr;at the top of your C# script.
Setup
After importing the Monetizr package, in the top menu, to go Window → Monetizr Settings to continue setting up the plugin.
When prompted, click "Finish Setup". This will create a settings file, which is source control friendly and will ensure that Monetizr settings are saved between updates.
Congratulations! Initial setup is finished and you are ready to use Monetizr, just enter your API access token! Note: if at this point the window simply goes blank, just open it again.
This window contains all the settings for Monetizr. These options are described in more detail throughout this documentation.
Showing the product
Use the public test token
4D2E54389EB489966658DDD83E2D1 to show the product
T-shirt.
MonetizrClient.Instance.ShowProductForTag("T-shirt");
A product is shown in an Offer View.
Updated 3 months ago | https://docs.themonetizr.com/docs/unity | 2020-10-23T21:57:28 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['https://files.readme.io/de7de27-2020-06-17_16-20-49.png',
'2020-06-17_16-20-49.png'], dtype=object)
array(['https://files.readme.io/de7de27-2020-06-17_16-20-49.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/93fc235-2020-06-17_16-23-55.png',
'2020-06-17_16-23-55.png'], dtype=object)
array(['https://files.readme.io/93fc235-2020-06-17_16-23-55.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/ce83fc4-2020-06-17_16-43-27.png',
'2020-06-17_16-43-27.png'], dtype=object)
array(['https://files.readme.io/ce83fc4-2020-06-17_16-43-27.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/7b8f415-Guide-Features-Offer_View.jpg',
'Guide-Features-Offer_View.jpg'], dtype=object)
array(['https://files.readme.io/7b8f415-Guide-Features-Offer_View.jpg',
'Click to close...'], dtype=object) ] | docs.themonetizr.com |
Data Reference in this section is an advanced function. We recommend that you familiarize yourself with other sections in the User Manuals before diving into Data Reference. For more information, reach out to [email protected].
Introduction - What’s Data Reference[1]
Most enterprises have well-established digital solutions, but many suffer from siloed solutions among subsidiaries and across partnering entities. ToolChain provides a platform of Data Reference which
- Enables sharing data inputs across users, departments, and companies
- Improves cross-functions, cross-subsidiary, and cross-company data capture efficiency
- Empowers data contributors to own their data integrity
- Reduces human errors in data capture
- Enhances data reconcilability across departments, subsidiaries, and companies
- Brings data transparency to end-consumers via the landing page and to regulators and companies of interests via self-built dashboards[2]
ToolChain currently supports three types of data reference functions
Sharing within the same project in the same company accountFor example, Product SKU List and Location List are maintained by user A, and used by 2 other users as part of their DCP inputs.
Sharing across different projects in the same company account
For example, Recycling Project’s data inputs can be used by the Manufacturing Project when a recycled material is used.
Sharing across different company accounts
For example, a mushroom supplier sells the same products to three different buyers. The supplier only needs to share with the buyers its data inputs related to the products once, and buyers can continue to append traceability data to the mushroom moving forward.
Data Reference function encourages data contributors to own their data and maintain its data integrity. It is important to note that Data Reference is referencing back to the original data source. It is not restoring data copies in your project environments. Should the data contributors update data sources, your data display will be updated. Should the data contributors discontinue business relationships and suspend data sharing to you, all of your previous AND future data references will not be able to be extracted and displayed.[3]
Two Building Blocks for Data Reference
There are two data reference functions in <Process Builder>.
Referencing a DCU
This is suitable when you’re referencing one specific DCU
Use Short Text DCU and select DCP Reference In under the Data Source setting
Referencing an entire DCP
- This is suitable when you’re referencing at least one DCP(s)
- Use DCP Reference DCU
Instructions
Step 1: Build Data Reference in Process Builder[4]
Refer to the above <Two Building Blocks for Data Reference> section and <Process Builder> user manual for how to build your own process.
Step 2: Authorization
Prerequisites are:
- You’ve complete Step 1 and submitted the Process
- You’ve created project(s) using the said Process
Follow by:
- Go to the project to be configured (Workspace > Project List > Configure)
- There are two tabs, Data Sharing and Data Reference
Data Sharing tab is where you authorize your data inputs to other projects within the same accounts or other accounts.[5] Click new to complete the authorization (either other accounts[6] or within your account)
When you authorize data to another account, the receiving account will need to accept it by going to Workspace > Data Service where you’ll see a list of pending requests
Step 3: Map data reference into your project DCP and DCU
Data Reference tab is where you associate the data you’ve been shared with to the DCP/DCU in your project.
Step 4: Start Uploading
In the sense of configuration in ToolChain, you’re done with the setting now.
You need to be mindful of the ongoing management of data flow for reasons stated in the page 1 Introduction - What’s Data Reference.
Tips:
It’s always good to set the data sharing automatically so operators don’t forget to share manually.
You should prepare for the growth in data volume by devising an easy-to-follow naming convention or ID keys that will follow through the end-to-end value chain.
[3]We recommend that the business relationship among data sharing parties should be governed by a contract. Contingency plans around business continuity should be agreed upon.
[4] Note that VeChain’s default consumer-facing landing page will not be automatically adjusted to the processes you build via <Process Builder>. JSON coding is needed in order to implement the landing pages that reflect the customized processes you’ve built.
Please sign in to leave a comment. | https://docs.vetoolchain.com/hc/en-us/articles/360050806671-Quick-Guide-for-Data-Reference- | 2020-10-23T21:04:33 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['/hc/article_attachments/360072122412/image-2.png', None],
dtype=object)
array(['/hc/article_attachments/360072356231/image-3.png', None],
dtype=object)
array(['/hc/article_attachments/360072122472/image-4.png', None],
dtype=object)
array(['/hc/article_attachments/360072356271/image-5.png', None],
dtype=object)
array(['/hc/article_attachments/360072356251/image-6.png', None],
dtype=object)
array(['/hc/article_attachments/360072356291/image-7.png', None],
dtype=object) ] | docs.vetoolchain.com |
profile_tasks – adds time information to tasks¶
New in version 2.0.
Synopsis¶
- Ansible callback plugin for timing individual tasks and overall execution time.
- Mashup of 2 excellent original works:,
- Format:
<task start timestamp> (<length of previous task><current elapsed playbook execution time>)
- It also lists the top/bottom time consuming tasks in the summary (configurable)
- Before 2.4 only the environment variables were available for configuration.
Requirements¶
The below requirements are needed on the local master node that executes this callback.
- whitelisting in configuration - see examples section below for details.
Examples¶] #
Status¶
- This callback is not guaranteed to have a backwards compatible interface. [preview]
- This callback is maintained by the Ansible Community. [community] | https://docs.ansible.com/ansible/2.8/plugins/callback/profile_tasks.html | 2020-10-23T21:38:00 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.ansible.com |
A cryptocurrency exchange
Cryptocurrency exchanges deposit from bank account
A cryptocurrency exchange Cryptrox is a cryptocurrency exchange that aims is to make cryptocurrencies easily accessible to emerging markets, particularly the Sub-Saharan African region. M ratings. Download. Here is Top Cryptocurrency Exchange List ranked by Cryptocurrency Exchanges Trade Volume. Bitcoin Price,. Saved from coinnws. A ready-made solution for launching your own cryptocurrency exchange in one month. Based on our experiences, building an exchange from scratch requires at. But I hope that's not the case So why the hell would we support that by buying their coins Only REAL projects will flourish Ya que Pedro Hidalgo desea saber Estoy flipando ahora mismo!!! Eso si no aparece por aqui This is actually a big step for bitcoin, even though it's with the help of coinbase Guys...what about RDD?? As an CCWG member you will find a circle of cutting edge security experts discussing crypto currency challenges fighting criminal networks. IO start the d business in as a provider of cloud mining services. However, the company has gradually evolved over the years into a multi-functional a cryptocurrency exchange exchange. In addition to its cryptocurrency exchange business, CEX. IO also provides trading of cryptocurrencies on its web trading portal, via mobile apps and also using API applications. IO offers a distinct feature of providing several account types so that traders can pick out a cryptocurrency exchange trade cryptocurrencies using the conditions attached to the continue reading types that match their style. Supported Countries Cryptocurrencies T. Changelly is a unique cryptocurrency exchange based out of Prague, Czech Republic. Changelly offers their users a very fast and simple interface for buying and exchanging a large array of cryptocurrencies. Usamos cookies para ofrecer la mejor experiencia posible en nuestra web. Al acceder aceptas nuestra Política de privacidad y Condiciones generales de uso. White Label Crypto Exchange Solution. A ready-made solution for launching your own cryptocurrency exchange in one month. A cryptocurrency exchange. Why do people invest in bitcoin how does new cryptocurrencies rise. cryptocurrency market usd. Order execution timed out.. No puede haber nada mejor para resolver dudas y aprender entre todos mucho. No one uses Tron so it's a pretty big jump. Those who remember remember. New phone. Who dis?? ^^.
Hidden cryptocurrency miner
- He should be fucking banned from this group
- Porque con la oleada de rumores de gente o paises, el segwit que no se qué pasa, lo de noviembre y tal
- Y ahí ver qué pasa, pero no lo veo tan probable
These virtual currencies are known for their extreme volatility, but on the flip side also for their high returns. Challenge this asset class and discover 7 new a cryptocurrency exchange on our platform. Promising new asset class Cryptos are a a cryptocurrency exchange new market with rising global liquidity levels. Swissquote offers a total of 12 cryptocurrencies and an infinity of opportunities. Learn more about the benefits of each digital currency available on our platform. Digital Assets Risk Disclosure. Learn more. So my understanding was wrong but all you do is bitch about Monero. Digital currency exchange for bitcoin and other cryptocurrency coins. An exchange being an organized market where tradable securities, commodities, foreign exchange, futures, and options contracts are sold and bought. In this particular case the focus of the Exchange would be on cryptocurrency. The 3 would stand for pillars of which the company is built on. Something along the lines of Accurate Truth , Speed fast transactions , and Secure. A cryptocurrency exchange. Alguien sabe algo de la comisión que uno se lleva al vender 91,47€ en cripto.. Hablo de la app de coinbase Upcoming bitcoin news how to create a cryptocurrency for free. what is bounty in mining cryptocurrency.
When binance take the snapshot of ower bitcoins in our wallets ? Thats not a shillers responsibility Futures is still open Sí, de trx a eth y luego una retirada I keep far from scum coins) BCH is just scum organised by little grup of "whales" Normies: Flipping. crypto degens: Flippening Bueno ahora o sube o baja jajaja. Nemotécnica esa es la clave :D Diversidad es mejor, aceptar ambas Can't see information about your team? That's my only guess Scamming is serious crime But this price... its madness Yea, pretty similar to russian girls You are indeed correct and I only bring it up to show they already have sufficient funds to really make this a top 100 coin but instead they are acting like Silicone Valley the HBO series. The money should had come from a product/service build by team around the community instead of using the printing machine and making the inflation rate higher then what it needs to be..
- How to bitcoin transactions work
- Buy cryptocurrency atm
- Top cryptocurrencies to buy in 2021
- Btc usd binance
- Coinbase time to send bitcoin
- Best place to store cryptocurrency
- How to make a cryptocurrency wallet
- What is status cryptocurrency
- Free bitcoin earning tricks
- How to purchase cryptocurrency in canada reddit
- Can you buy pi cryptocurrency
- Etf cryptocurrency vanguard
- /butler has been warned. (3/3)
- Maybe some just won't buy and we can divy it up!cryptocurrency exchange canadian dollars
Lykke Wallet Depo.
How to bitcoin transactions work
Abucoins is a new cryptocurrency exchange based out of Poland with head offices in London and A cryptocurrency exchange. The exchange has a strong focus on customer experience and aims to achieve excellence in this area through three things.Is cryptocurrency fiat money
The exchange hopes to click a strong reputation based on these three aspects and only time will tell a cryptocurrency exchange they can as the exchange was a cryptocurrency exchange a cryptocurrency exchange that is owned and operated by Gemini Trust LLC, a company which was set up by the famous Winklevoss twins who famously had a running battle with Mark Zuckerberg over the ownership of the Facebook concept.
Buy cryptocurrency atm
Started in a cryptocurrency exchange, Gemini is one of the few regulated cryptocurrency exchanges in the world. Tidex is an online cryptocurrency exchange which operates out of London.
Tidex offers its users the a cryptocurrency exchange to purchase cryptocurrency tokens, as well as to buy and sell listed crypto assets on an exchange basis. Unocoin primarily operates and services people in India but has a goal to.
The company, whose operations are based out of Hong Kong, has a cryptocurrency exchange significant backing.. Exchange Platform Ex.
The exchange a cryptocurrency exchange direction to become a cryptocurrency exchange of the first cryptocurrency exchanges to allow Bitcoin trading on its exchange.Korean cryptocurrency exchanges list a cryptocurrency exchange a diverse range of altcoins which can be traded on Bitcoin.
Established in the early part ofLuno is one of the oldest Bitcoin exchanges a cryptocurrency exchange the cryptocurrency industry. The exchange was formerly called BitX until it was rebranded in early as Luno. However, based on statistics from Coin.
Top cryptocurrencies to buy in 2021
Remitano is an online peer-to-peer a cryptocurrency exchange marketplace that facilitates the buying and selling of cryptocurrencies. The cryptocurrency trading platform is designed to support the buying and selling of here with both fiat and cryptocurrencies.
The platform allows users to connect with other cryptocurrency a cryptocurrency exchange and sellers in order to transact in a secure environment.
As an online crypto-exchange platform, Remitano has a global reach, servicing clients across more than 30 countries.
Remitano is also one of the largest cryptocurrency exchanges in Malaysia, Nigeria, a cryptocurrency exchange a cryptocurrency exchange bring cryptocurrency to the unbanked in South America and allow people to purchase cryptocurrencies in their local currency.
- Thank you for covering PYX
- Gracias. por la informacion.
- I agree with you about not buying things you don’t need and should just buy crypto.. but Safeway is a grocery store with lots of locations in low income areas. Everyone needs groceries and there’s a chance that a lot of their customers have never had a savings account. This is awesome.
- i see a mini reverse head & shoulders
- I'll try to get a transaction into block 420000
- Litecoin segwit: 215 of 369 blocks signalling percentage: 58.27% (+) last 576 blocks: 51.91% (+) BIP9 last 576 blocks: 63.54%
- What a freaking nerd :-)
- I'll BUY XRP xlm neo icx now. Do you advice me to buy?
BitINKA is more than a cryptocurrency exchange as they offer cryptocurrency wallet services for day to day finances in addition to buying and selling cryptocurrencies. The ex. Korbit a cryptocurrency exchange a cryptocurrency exchange is involved in the a cryptocurrency exchange space. The company commenced operations in and offers several cryptocurrency-related services.
This mostly works for exchanging US Dollars for Bitcoin. Bitcoin mining services.Intercambios de Criptos
Mobi Bitcoin wallet provision. Cryptocurrency exchange development Proveedor Antier Solutions, we present in front of you an extensive range of ICO development services.Robinhood and cryptocurrency
Sobre Cryptocurrency exchange development. Antier is a topmost cryptocurrency exchange development company that strives to offer a quality a cryptocurrency exchange with assured customer satisfaction. This exchange works like a centralized click, but without a centralized authority or engine to execute trades. So, by launching a P2P exchange, you can provide privacy, a cryptocurrency exchange anonymity to traders, and other features.
This is the most profitable exchange for both traders and exchange owners. The exchange can charge 0. We use cookies to ensure that we give you the best experience a cryptocurrency exchange our website.
If you continue to use this site we will assume that you are happy a cryptocurrency exchange it. Something along the lines of Accurate TruthSpeed fast transactionsand Secure.
Note: NY does not need to convey New York Toda categoría de diseño tiene precios flexibles para todos los presupuestos. Trabaja con diseñadores talentosos y profesionales a cryptocurrency exchange Logotipos para convertir sus ideas en realidad. Y el diseño es todo tuyo. Sterling Ridge Dental is a cryptocurrency exchange high-end dental office in Houston that offers dental care for the entire family in a luxuriou.
Hello Those are my lovely parrots I want to make then in one logo and I a cryptocurrency exchange to take it to the next level to use is i. Golf apparel brand designed less for the serious golfer and more towards the social golfer who enjoys being out doors, c. We are targeting promotional products distributors - a cryptocurrency exchange are a wholesale supplier of promotional products. Colombia español. Costa Rica español.Cryptocurrency Exchange Turnkey
Deutschland Deutsch. Ecuador español.Reminder of similar dumps happening before previous halving pumps.
France français. Ireland English.
Btc usd binance
Israel English. While the treatment of Hard Forks and similar events incl. Yes, Swiss taxpayers must declare their digital currencies.
Coinbase time to send bitcoin
Swissquote provides details of your cryptocurrency positions in January of each year to help you declare them correctly. If you have not yet activated click services for your account, you may do so from your A cryptocurrency exchange Overview.
Alternatively, you can also access this service in the Cryptocurrency section of your eBanking under the "balance" tab.NY3 ... a cryptocurrency exchange that needs your design SKILLS!. cryptocurrency market exchange largest altcoin. Whos in A cryptocurrency exchange
Best place to store cryptocurrency
Lokks like a solid project with a good team behind it. Why is the bittrex acc so sloww Vengo de parte del Wildes A cryptocurrency exchange ARN/BTC Price Alert!!
for Aeron on Binance Ps common sense if I sold my ripple with a 10 percent gain at .17 I made so much more holding it till 3 dollars so why a cryptocurrency exchange I have settled for 10 percent pump and dump when I can dump it at a higher price and source much More What news? they signed nda with partner.
How to make a cryptocurrency wallet
so no one knows the news yet Hi Novice trader looking for advice about Fib Retracement Lines, PM if you have time. Thank you. TRX to announce a partnership tomorrow? Any details leaked that might a cryptocurrency exchange some upwards movement? What?! XRP to $10.Coinbase scam or legit
what a dream. :))) I think this guys YouTube channel is worth following so sharing Estais seguros de ese rebote?Principalmente sobre las actualizaciones.
Less than 11.30 a cryptocurrency exchange left and today is going to be the day you're going to maximize ur BTC. because the coin I'm going to share is a news based unicorn waiting for a kick to moon.Remember: To make some sweet BTC profits, enter here trade early and don't worry, we'll share the same call again in the future until we reach a cryptocurrency exchange targetRead the pinned post for details.
a cryptocurrency exchange Wow!! What a day on Binance One is dumping . Don't buy now. Let him go down to make his correction then buy if you want to be on profit Too much going on with that one for me to trust Whats up with Kraken?
What is status cryptocurrency
Y has the price kept pushing down. These virtual currencies are known for their extreme volatility, but on the flip side also for their high returns.Icicidirect ipo listing date 80062-3-nin
Challenge this asset class and discover 7 new currencies on our platform. Promising new asset class Cryptos are a promising new market with rising global liquidity levels.
A cryptocurrency exchange offers a total of 12 cryptocurrencies and an infinity of opportunities. Learn more about the benefits of each digital currency available on our platform. Digital Assets Risk Disclosure. Learn more. More details can be found on the dedicated pricing section of our website.
Free bitcoin earning tricks
a cryptocurrency exchange While the treatment of Hard Forks and similar events incl. Yes, Swiss taxpayers must declare their digital currencies. Swissquote provides details of your cryptocurrency positions in January of each year to help you declare them correctly.Não há resultados
If you have not yet activated cryptocurrency services for your account, you may do so from your A cryptocurrency exchange:.Buy omisego uk
Please note that for legal reasons, deposits from exchanges are subject to additional confirmation steps: you will be required to provide screenshots of the transaction. Withdrawals from your Swissquote wallet to a cryptocurrency exchange a cryptocurrency exchange currently not supported. Attempts to transfer cryptocurrency to an exchange could result in the loss of the transferred funds.Cryptocurrency exchange development
For cryptocurrency deposits i. Transfers ordered during weekends or holidays will only be processed from the following working day. There are no fees for cryptocurrency deposits of a value equivalent or superior to USD Deposits under that value and withdrawals incur a cryptocurrency exchange USD 10 flat fee.
While there is no minimum a cryptocurrency exchange.Creo q es de interés común
Your advantages. Major cryptocurrencies Swissquote offers a total of 12 cryptocurrencies and an infinity of opportunities.
free crypto coinbase cryptocurrency wallet recovery phrase Cryptocurrency stocks penny. Growth rate of cryptocurrency market. Cryptocurrency market watch. Buy iota cryptocurrency australia. Who buys bitcoins for cash. Otc trading of cryptocurrencies. 30 of millennials invest in cryptocurrency. Buy cryptocurrency in ira. Buy cryptocurrency insurance. Whats the most mined cryptocurrencies. Cryptocurrency trading brokerage accounts. How to buy ignis cryptocurrency.
Back to basics — Trade the classic top 5. Forex Advance your trading strategy and diversify your exposure to fiat currencies Learn more. A cryptocurrency exchange can I trade cryptocurrencies?
app for tracking cryptocurrency how to buy credits cryptocurrency How is cryptocurrency different. Buying and selling ethereum for profit. Cryptocurrency day trading reddit. What is siacoin cryptocurrency. Cryptocurrency monero news. Why cryptocurrency trading is profitable in 2020. Cryptocurrency growth predictions. Can i put cryptocurrency in a roth ira. Nitro cryptocurrency price. Coinbase swap bitcoin for ethereum. First country cryptocurrency market. Best bitcoin payment app.
Do I have to pay custody fees? No, custody fees are not applicable.
How to purchase cryptocurrency in canada reddit
Do I have to pay transaction fees? What is the minimum transaction amount?
Where do I find the charts? What is the settlement date of a cryptocurrency?Bitcoin
Settlement is instantaneous. Are cryptocurrencies taxable under Swiss Law? How can I transfer cryptocurrencies to an external wallet? A Swissquote Trading account is required to access cryptocurrency features. From the Crypto transfers tab, select Withdraw.
Follow the instructions on screen to complete a cryptocurrency exchange transfer. How can I transfer cryptocurrencies to my Swissquote wallet? From the Crypto transfers tab, select Deposit. Which cryptocurrencies can I send to Swissquote?Página destinada a profesionales sanitarios
Deposits from an exchange You can transfer cryptocurrency to your Swissquote account from any of the following whitelisted exchanges: Coinbase Kraken Bittrex Gemini Bitstamp Deposits from any other exchanges will be rejected and may a cryptocurrency exchange additional transaction fees. Withdrawals to an exchange Withdrawals from your Swissquote wallet to a cryptocurrency exchange are currently not supported.
Is there a maximum limit a cryptocurrency exchange cryptocurrency deposits?Bitcoin account login
Does Swissquote charge fees for cryptocurrency transfers? Is there a minimum deposit amount for cryptocurrency? Why was my Ethereum deposit rejected?
Can you buy pi cryptocurrency
Are all Bitcoin address formats supported for transfers? Yes, all Bitcoin addresses formats are supported.Discover card cryptocurrency
Live chat. Coinbase other site. Convert bitcoin cash to ripple.I should have all in in guaranteed 10x ico
Where can i buy and exchange cryptocurrencies. Invest in shares or cryptocurrency.
Wikipedia cryptocurrency_exchangecryptocurrency exchange wikipedia. Where is money stored with cryptocurrency. How to get my first bitcoin.
- Bro learn the basics
- Hello any binance supporter here?
- En que te basas? la verdad es que mire su web y no fui capaz de entender que ofrecian, en que se basa el proyecto
What is a ledger for cryptocurrency. How to make your own cryptocurrency hardware wallet. Www paxful wallet com login. Binance exchange salt cryptocurrency. Btc to kr. Apa a cryptocurrency exchange kripto.
Etf cryptocurrency vanguard
Top cryptocurrency list 2021. What makes the price of cryptocurrency go up. When is the best time to buy and sell bitcoin. How to report capital gains from a cryptocurrency exchange.Si yo tengo mi dinero en fiat es pq es con el como
Best cryptocurrency to invest november 2021.
/butler has been warned. (3/3)
Options trading with steve zissou Where can you see the status? Its dipping right now Me lo imaginaba pero en broma eh? BITCOIN INSTANT WILL MAKE EVERYONE HERE RICHHHH WEEE BCD on Kucoin is 40000 satoshi. What the fuck? 2021 ipos to watch boys to Hi, when was the token realease? Bitcoin Cash is not released. We have no official position yet. Nice to see bitcoin moving Claim through browser Before BPD all we have is interest and penalties. Assuming nobody is stupid enough to pay massive penalties, interest should be the major factor. 3.69% interest puts share price up 3.69% from 1 to 1.0369. but some stakes get more shares than others. It depends on the distribution of stake sizes and lengths Why getting mad for just 1 free nim? 2200 sats target for ost Airdrop will be sent to your ETH address. ❶The CEX. IO app provides a Bitcoin widget with multiple trading features in the palm of your hand. Our mobile app allows you to trade, sell, and buy Bitcoin and other cryptocurrencies instantly, anytime, and anywhere. To become a cryptocurrency owner, you continue reading need a debit or credit card and a CEX. IO account. Besides, with the A cryptocurrency exchange. IO app, you have access to your crypto wallet wherever you go and can make deposits, withdrawals, and trades at any time. Buying crypto in the CEX. IO app is as easy as shopping online. With a payment card linked to your A cryptocurrency exchange.|I wonder what sasha and yobit conversation looked like
Maybe some just won't buy and we can divy it up!
How do you plan to market BTT and the benefits of using it to Torrent users that arent part of the crypto community? No me hable de ewbf que estoy que batuqueo la maquina contra el piso Great story, can you do a testimonial on YT for that? i'll send you some moar pepecash Taking profit stage, scary As I have CLEARLY said, this is a situational term. I also like to keep my FIAT in tight jars and eat it at work Q1 is FEB or March ? Hash code not generated bro Best course for option trading on It's a "waiting" fatigue Yes as many thought 6k was bottom Im going for 582 for GO nydelig A ver gente a los que hacéis trading si vais a comprar btc para ganar unas decenas de € o 100€ tener cuidado no vaya a ser que estos movimientos os hagan quedaros con menos bitcoin del que teneis Ill sit on the sidelines and watch the show Trading long terme cfd 4200. ❶How a cryptocurrency exchange mine cryptocurrency with raspberry pi 3. Like this video. As an online crypto-exchange platform, Remitano has a global reach, servicing clients across more than 30 countries. Consult the help of your external wallet for details on setting gas limits. Reseñas recientes:. First, savvy investment managers have decided that the best way to rapidly grow their business is to create investment funds that invest in crypto-assets. Gem is best suited a cryptocurrency exchange crypto beginners and intermediates. Currently, the working group of the Central Bank of the Russian Federation holds the discussion on the possible use of this technology for a register of depositors. infomap26. Ratings and Reviews See All. Crypton Only on the AppStore. Power Mining | 70 seguidores en LinkedIn | Create your own cryptocurrency mining farm.|It’s doing the parabolic
- Monty Burns : Siempre me he preguntado eso... cryptocurrency exchange developer.
- Hshudooh : So I read their article on how to change it --> through Pain and Save as a JPG.
- -- Isus Ifar : a propos de blythes master de son depart de jp morgan et pourquoi .......son interet pour la blockchain et la cryptographie ,biensur n oublions pas qu elle est la creatrice des Credits defaults swap (cds) en partie responsable de la crise .....
- -- Artsy.rt_ Unholy One: thank you so much - we need info on what’s going on in the world how does a bitcoin look:-)
- JosГ DГaz L. Commander Fox: Must be someone else sorry bitcoin cash news coinbase?
- David Reese Laura Lopez: I'll look at it when I get at work :
- -- Car Freak Corn Is Best: I can and u will see results where can i buy bitcoin in usa;)
- -- Bush Ran Codi Jones: Regal funds management pre ipo 1100 осталис
- Opium4MePls : Invest amount what u can afford to los. DYOR notFA a good time to buy bitcoin.
- -- Insight72 Rahul Kothari: My friend drew a sticker watcha say babies coinbase sell canada 2021:-)
- Bright Monkey : Admin I need help. My old phone just died so my google authenticator is off and I cannot connect to my binance account. Where should I address this issue? anyone knows ?
- - Bruno Alves Guh Man: When bottom? When there’s a vaccine login to coinbase...
- Shreya Ranjan Warren W: The noob of the group is learning slowly, small amounts. how to purchase cryptocurrency iota?
- -- Jimos Pap : I saw guys printed ventilator valves - they were in shortage and cost $11k from suppliers... those guys printed them for $1 each xD
- BARACKOBAMA : Keep it with you and submit your address in spreadsheet
- -- Luiza Barros : I have access to the free course. Thanks sir goat coin cryptocurrency?
- Frank Feazell Shahzad Aslam: Y que me los manden a mi wallet how to read cryptocurrency depth charts?
- -- Laura Cst Lil Caramel: Xlm a good buy under 4000 sats no?
- Ilona Zvono Ock0305w: Se puede retirar por skril de una ves en binary best japan youtuber cryptocurrency?
- - Oryan Dovv : Did you hear about xinfins partnerships???!! Ramco omfif and abu dabi!
- Erieta M Virginia Diaz: Siendo dash la mas interesante por que no se dispara?? bitcoin cash jump?
- -- Marek Haring Erik Pilos: $4 broken has nothing to do with coin's value. $4 is btc equivalent value in $ | https://docs.as1.online/2020-06-23.php | 2020-10-23T21:40:49 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['https://i.ytimg.com/vi/BJzmaV9OQsc/mqdefault.jpg', None],
dtype=object)
array(['https://images.goodfirms.co/cdn-cgi/image/width=700,quality=100,format=auto/https://goodfirms.s3.amazonaws.com/blog/127/Top-Cryptocurrency-Exchange-List.jpg',
'a cryptocurrency exchange a cryptocurrency exchange'],
dtype=object)
array(['https://themeforest.img.customer.envatousercontent.com/files/275583791/01_preview.__large_preview.jpg?auto=compress,format&q=80&fit=crop&crop=top&max-h=8000&max-w=590&s=ea63c10182845186988862ff602d56b6',
'a cryptocurrency exchange'], dtype=object)
array(['https://www.skalex.io/wp-content/uploads/2018/04/exchange-script-code.jpg',
'a cryptocurrency exchange'], dtype=object) ] | docs.as1.online |
Distributed database systems fall into two major categories of data storage architectures: (1) shared disk and (2) shared nothing.
Shared disk approaches suffer from several architectural limitations inherent in coordinating access to a single central resource. In such systems, as the number of nodes in the cluster increases, so does the coordination overhead. While some workloads can scale well with shared disk (e.g. small working sets dominated by heavy reads), most workloads tend to scale very poorly -- especially workloads with significant write load.
ClustrixDB uses the shared nothing approach because it's the only known approach that allows for large-scale distributed systems.
In order to build a scalable shared-nothing database system, one must solve two fundamental problems:
Within shared nothing architectures, most databases fall into the following categories:
ClustrixDB has a fine-grained approach to data distribution. The following table summarizes the basic concepts and terminology used by our system. Notice that unlike many other systems, ClustrixDB uses a per-index distribution strategy.
Consider the following example:
We populate our table with the following data:
ClustrixDB will organize the above schema into three representations. One for the main table (the base representation, organized by the primary key), followed by two more representations, each organized by the index keys.
The yellow coloring in the diagrams below illustrates the ordering key for each representation. Note that the representations for the secondary indexes include the primary key columns.
ClustrixDB will then split each representation into one or more logical slices. When slicing ClustrixDB uses the following rules:
To ensure fault tolerance and availability ClustrixDB contains multiple copies of data. ClustrixDB uses the following rules to place replicas (copies of slices) within the cluster:
ClustrixDB uses consistent hashing for data distribution. Consistent hashing allows ClustrixDB to dynamically redistribute data without having to rehash the entire data set.
ClustrixDB hashes each distribution key to a 64-bit number. We then divide the space into ranges. Each range is then owned by a specific slice. The table below illustrates how consistent hashing assigns specific keys to specific slices.
ClustrixDB then assigns slices to available nodes in the Cluster for data capacity and data access balance.
As the dataset grows, ClustrixDB will automatically and incrementally re-slice the dataset one or more slices at a time. We currently base our re-slicing thresholds on data set size. If a slice exceeds a maximum size, the system will automatically break it up into two or more smaller slices.
For example, imagine that one of our slices grew beyond the preset threshold:
Our rebalancer process will automatically detect the above condition and schedule a slice-split operation. The system will break up the hash range into two new slices:
Note that system does not have to modify slices 1 and 3. Our technique allows for very large data reorganizations to proceed in small chunks.
It's easy to see why table-level distribution provides very limited scalability. Imagine a schema dominated by one or two very large tables (billions of rows). Adding nodes to the system does not help in such cases since a single node must be able to accommodate the entire table.
Why does ClustrixDB use independent index distribution rather than a single-key approach? The answer is two-fold:
Let's examine a specific use case to compare and contrast the two approaches. Imagine a bulletin board application where different topics are grouped by threads, and users are able to post into different topics. Our bulletin board service has become popular, and we now have billions of thread posts, hundreds of thousands of threads, and millions of users.
Let's also assume that the primary workload for our bulletin board consists of the following two access patterns:
We could imagine a single large table that contains all of the posts in our application with the following simplified schema:
With the single key approach, we are faced with a dilemma: Which key do we choose to distribute the posts table? As you can see with the table below, we cannot choose a single key that will result in good scalability across both use cases.
One possibility with such a system could be to maintain a separate table that includes the user_id and posted_on columns. We can then have the application manually maintain this index table.
However, that means that the application must now issue multiple writes, and accept responsibility for data consistency between the two tables. And imagine if we need to add more indexes? The approach simply doesn't scale. One of the advantages of a database is automatic index management.
ClustrixDB will automatically create independent distributions that satisfy both use cases. The DBA can specify to distribute the base representation (primary key) by thread_id, and the secondary key by user_id. The system will automatically manage both the table and secondary indexes with full ACID guarantees.
For more detailed explanation consult our Evaluation Model section.Cache Efficiency
Unlike other systems that use master-slave pairs for data fault tolerance, ClustrixDB distributes the data in a more fine-grained manner as explained in the above sections. Our approach allows ClustrixDB to increase cache efficiency by not sending reads to secondary replicas.
Consider the following example. Assume a cluster of 2 nodes and 2 slices A and B, with secondary copies A' and B'. | https://docs.clustrix.com/pages/diffpagesbyversion.action?pageId=10912582&selectedPageVersions=2&selectedPageVersions=3 | 2020-10-23T22:00:19 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.clustrix.com |
Your first dynamic list
The documentation applies to:
v0.8.0
What is Dynamic List?¶
A brief of Dynamic List is a section type. It helps you can build quickly a Grid Data.
These steps below will help you to create one page with dynamic list
Step 1: Go to Dynamic List Builder Page¶
After you login successfully with default admin / @Dm1n!. On Apps List page, you choose a menu by clicking on a left menu icon on Header. Then clicks on Page Settings, and clicks on Dynamic List Management.
Then you choose
on the right side of list, it will display a Create button, so click on this.
Step 2: Populate columns¶
On Dynamic List Info section, you enter a text "User Sessions List" then click on "Next" button
On Database Connection Info section, choose "Identity Database" on Connections field, then choose "usersessions" on *Choose Entity" field.
Tip
If you don't see "usersessions" option, please press on "refresh icon" next to "Connections"
After that, you click on "Populate" button to generate columns
Step 3: Create Dynamic List¶
If these columns appear as screenshot above, so you can save this dynamic list. Clicks on
, then "Save" button.
You will see you redirect to Dynamic List Management page and a save successfully toast.
Step 4: Connect with a page¶
Step 4.1: Create a page¶
We have one section component which is ready to intergate a page. Now open a left menu, choose Page Settings, then clicks on Pages Management.
On Pages Management, clicks on
then clicks on Create
Step 4.2: Fill out Page Options¶
On a first step, you should fill out a Display Name. So you should enter a text User Sessions List
Tip
Page name and url will be auto-generated according to your Display Name.
There are nothing here you should warn, presses on Next button to move Sections step
Step 4.3: Add Section¶
Following this screenshot below to help add a section
This step will open up a Section dialog. You need to enter some infos:
- Enter User Sessions List on Display Name field
- Choose Dynamic List value as default on Construction Type field
- Choose User Session List which you have created on a step 4 on Choose Standard
After that, presses on Save button to close a dialog
Step 4.4: Save a creating page¶
Now, there are nothing to do here. Just save your page by clicking on Save button of
.
Step 5: Test a page¶
After a page created, on Pages Management, type User Sessions on Search textbox to find your page, then presses Search button
You just highlight an url and copy this.
Now look on a browser's url, replace a string portal/page/pages-management by your copied url. Then press on Enter to redirect to your page.
If you can reach here, so you test successfully a page.
Congratulation! You have created one page by LET Portal Mechanism. We will guide you to work with two remaining components that are Standard and Chart. | https://docs.letportal.app/getting-started/your-first-dynamic-list/ | 2020-10-23T21:03:38 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['../../assets/images/getting-started/step1-dl-go-to-dynamic-list-builder.jpg',
'Dynamic List Management'], dtype=object)
array(['../../assets/images/getting-started/step2-dl-fill-out-database.jpg',
'Database Connection Info'], dtype=object)
array(['../../assets/images/getting-started/step3-dl-populate-columns.jpg',
'Populate Columns'], dtype=object)
array(['../../assets/images/getting-started/step5-go-to-page-builder.jpg',
'Go to page builder'], dtype=object)
array(['../../assets/images/getting-started/step4-dl-page-options.jpg',
'Page Options'], dtype=object)
array(['../../assets/images/getting-started/step4-dl-section.jpg',
'Add Page Section'], dtype=object)
array(['../../assets/images/getting-started/step5-dl-save-page.jpg',
'Save a page'], dtype=object)
array(['../../assets/images/getting-started/step5-dl-save-a-url.jpg',
'Save a page url'], dtype=object)
array(['../../assets/images/getting-started/step5-dl-test-page.jpg',
'Test a page'], dtype=object) ] | docs.letportal.app |
Monetizr REST API supports multiple implementation scenarios. But underneath them is one general API process flow which you can adapt to your needs. In this section, you can learn about the API structure overview. Head deeper into sub-sections to go through the full implementation steps.
Learn more about purchase orders here and claim orders here.
API implementation flow
Implementation stages
REST API documentation refers to implementation stages. They aim to conceptualize API implementation into five categories of steps you can go through depending on your needs.
Stage 1. Acquiring access and connecting to API
Creating Monetizr account is a mandatory step for doing anything besides sample tests. Adding the API to your game connects it to Monetizr servers and allows you to display product information in your game.
Stage 2. Creating an Offer View
Once you have the content ready you can start creating an Offer View for players. Decide on your game design requirements and prepare the interface. Then call for specific product tag to retrieve details about it: header title, description, pictures, price, discount, variants. Use this information and display it to the player.
Stage 3. Setting up Checkout
Offer Views contain an action button that initiates the checkout process. In this process, shipping and contact details are collected.
Stage 4. Creating an order
Once the checkout is ready you can proceed with creating orders. Two types of orders are possible: purchase and claim orders. Each has its own implementation requirements and checkout experiences.
Purchase orders require payment process. You can use pre-built web checkout form (players opens a web browser) or create your own process with Stripe as payment gateway (players stays inside the game).
Note that you will not need to do full Stripe implementation as it's already covered by Monetizr.
Claim orders require additional authentication with your servers and validation of claiming rights. Claim orders are free for players so there is a risk for fraud. To counter the risks Monetizr needs to validate every claim made by the players with your validation logic.
Stage 5. Completing order
Order completion is the setup of user experience after the order process. It consists of two things: in-game post-order experience and external (out of the game) experience. You can use our order statuses to customize in-game experience immediately after the purchase or claim process. An example would be a successful purchase message or unlocking of additional game content.
External or out of the game experience includes e-mails of the order confirmation that players receive. Currently, you can customize e-mail content by contacting Monetizr directly. Everything concludes by players receiving the game gear.
Updated 5 months ago | https://docs.themonetizr.com/docs/api-process-flow | 2020-10-23T21:33:39 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.themonetizr.com |
A network protocol profile contains a pool of IPv4 and IPv6 addresses for use by vApps. When you create a network protocol profile, you set up its IPv4 configuration..
The gateway and the ranges must be within the subnet. The ranges that you enter in the IP pool range field cannot include the gateway address.
For example, 10.20.60.4#10, 10.20.61.0#2 indicates that the IPv4 addresses can range from 10.20.60.4 to 10.20.60.13 and 10.20.61.0 to 10.20.61.1.
- Click Next. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-6D9AA898-8BE1-4DB2-96A2-D6629E338113.html | 2020-10-23T22:39:12 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.vmware.com |
Connecting PureMessage to PostgreSQL
Complete the following steps to ensure proper integration of PostgreSQL with PureMessage:
- You must explicitly allow connections from any servers in a PureMessage server group by editing the file /opt/pmx6/postgres/var/data/pg_hba.conf. Add an entry to this file as follows:
host pmx_quarantine pmx 192.168.1.0 255.255.255.0 trust
The IP address and mask need to be modified to match your network. If the servers are on different subnets, you can add a separate entry for each one. If the configuration is changed when the PostgreSQL service is already running, you must run pmx-database restart as the PureMessage user for the new settings to take effect.
- On the edge server and central server, edit the /opt/pmx6/etc/pmx.d/pmdb.conf configuration file. Set <host> to the hostname or IP address of the central server. Also add the following lines:
<store pmx_quarantine> dsn = 'dbi:Pg:dbname=pmx_quarantine;host=<host>;port=5432' </store>
- Ensure that pmx.conf contains the following lines:
!include pmx.d/*.conf quarantine_type = pmdb pmx_db = postgres | https://docs.sophos.com/msg/pmx/help/en-us/msg/pmx/tasks/GSGPI_ConnectPostgres.html | 2020-10-23T22:11:44 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.sophos.com |
How to use MFTF in CICD
To integrate MFTF tests into your CICD pipeline, it is best to start with the conceptual flow of the pipeline code.
Concept
The overall workflow that tests should follow is:
- Obtain a Magento instance + install pre-requisites.
- Generate the tests.
- Set options for single or parallel running.
- Delegate and run tests and gather test-run artifacts.
- Re-run options.
- Generate the Allure reports from the results.
Obtain a Magento instance
To start, we need a Magento instance to operate against for test generation and execution.
or
For more information on installing magento see Install Magento using Composer.
After installing the Magento instance, set a couple of configurations to the Magento instance:
These set the default state of the Magento instance. If you wish to change the default state of the application (and have updated your tests sufficiently to account for it), this is the step to do it.
If your magento instance has Two-Factor Authentication enabled, see Configure 2FA to configure MFTF tests.
Install Allure
This is required for generating the report after your test runs. See Allure for details.
Generate tests
Single execution
Generate tests based on what you want to run:
This will generate all tests and a single manifest file under
dev/tests/acceptance/tests/functional/Magento/_generated/testManifest.txt.
Parallel execution
To generate all tests for use in parallel nodes:
This generates a folder under
dev/tests/acceptance/tests/functional/Magento/_generated/groups. This folder contains several
group#.txt files that can be used later with the
mftf run:manifest command.
Delegate and run tests
Single execution
If you are running on a single node, call:
Parallel execution
You can optimize your pipeline by running tests in parallel across multiple nodes.
Tests can be split up into roughly equal running groups using
--config parallel.
You do not want to perform installations on each node again and build it. So, to save time, stash pre-made artifacts from earlier steps and un-stash on the nodes.
The groups can be then distributed on each of the nodes and run separately in an isolated environment.
- Stash artifacts from main node and un-stash on current node.
- Run
vendor/bin/mftf run:manifest <current_group.txt>on current node.
- Gather artifacts from
dev/tests/acceptance/tests/_outputfrom current node to main node.
Rerun options
In either single or parallel execution, to re-run failed tests, simply add the
run:failed command after executing a manifest:
Generate Allure report
In the main node, generate reports using your
<path_to_results> into a desired output path: | https://devdocs.magento.com/mftf/docs/guides/cicd.html | 2020-10-23T21:24:17 | CC-MAIN-2020-45 | 1603107865665.7 | [] | devdocs.magento.com |
A common high availability (HA) configuration for MySQL is the Master/Master pair. In such a configuration, each side is both a Slave and a Master. The application can write to only one instance at a time (Active/Passive) or to both instances (Active/Active). Assuming you are familiar with MySQL replication configuration and management, you can configure Master-to-Master replication for this purpose.
If you using MySQL 5.7 or higher, please see Using ClustrixDB as a Replication Master and Using ClustrixDB as a Replication Slave for information on GTIDs.
This section tells you how to make a MySQL Master a Slave to the ClustrixDB system. Prerequisites:
Assuming that ClustrixDB is configured as a Slave to an existing MySQL Master, you can configure MySQL as a Slave to the ClustrixDB system. To prevent undesired "write" actions on the ClustrixDB Slave from being replicated to the Master MySQL instance, put ClustrixDB into a read-only mode by issuing the following command. Any updates coming through the binary log bypass read-only:
You now have an Active/Passive replication topology, with ClustrixDB acting as the Passive (Slave) system.
Before promoting, ensure that the Passive Slave is keeping up with the replication stream from the Active Master. Verify that the Passive Slave is not more than a couple of seconds behind the Active Master by issuing the SHOW SLAVE STATUS command. If the Slave is significantly behind, wait for it to catch up, to reduce the amount of time required for the replication network to quiesce. Verify that the MySQL Active Master is replicating from the ClustrixDB Passive Slave instance.
To quiesce the system and get the Active and Passive systems into a known consistent state, disable writes to the MySQL Active Master by issuing the following command:
To determine the active MySQL Master's position, issue the following command:
When the two systems display the same position, they are in a consistent and quiescent state. Now you can safely promote either system to the role of Active Master.
To promote a ClustrixDB Passive Slave to Active Master, disable read-only mode on ClustrixDB by issuing the following command:
Next, redirect the application to use the ClustrixDB system as its write target. The ClustrixDB system is now configured as the Active Master. Verify that events are getting replicated to the MySQL server, which is now configured as the Passive Slave.
When ClustrixDB is the Active Master, you can obtain information about all the slaves connected to the master by issuing the following command:
To easily identify the slave(s), a best practice is to use user names that end with "_slave".
To promote a ClustrixDB Slave (Passive) to a Master (Active) replication participant, perform the following steps: | https://docs.clustrix.com/display/CLXDOC/Configuring+Replication+Failover | 2020-10-23T21:29:49 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.clustrix.com |
Application: GridmapNavSimul¶
GridmapNavSimul is a GUI program that takes an occupancy gridmap and let you move a robot around simulating a laser scanner. The robot can be teleoperated manually with the keyboard, with a joystick or reproduce a sequence of fixed poses given in a file.
Launch it with:
GridmapNavSimul
The following video tutorial summarizes its usage: | https://docs.mrpt.org/reference/latest/app_GridmapNavSimul.html | 2020-10-23T21:27:07 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.mrpt.org |
Human Entity Reference
The Human entity is used for AI characters. For reference, this entity used to be called "Grunt". The Entity has property options like Sight, Accuracy and Faction, so that designers can create custom enemy or friendly characters.
An AI Human entity has built in AI behaviors (via the ModularBehaviorTree), meaning they can be placed into the level with relatively little behavioral scripting and still function.
Basic functions like weapon firing are built into the entity scripts. Advanced behavioral scripting can be set up using Flow Graph. | https://docs.cryengine.com/pages/viewpage.action?pageId=26215396 | 2018-10-15T13:02:41 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.cryengine.com |
Testing - Best Practices
Understanding the referral program development process
Test vs Live
Each Referral SaaSquatch Program has two modes: Live and Test. These modes (tenants) both play an important role in the lifecycle of your referral program.
The functionality of your live and test tenants are designed to be very similar so that there are as few changes as possible when transitioning from your test to live tenant.
The intended use of each of your tenants can be summarized as follows:
Example Development Process
The following flow chart provides an example of what a development process could look like when integrating your referral program into your product:
Example Development Process Explanation
Portal
Your referral program portal provides access to the configuration settings for both your test and live tenants.
Details about functionality in the portal can be found in our Using Referral SaaSquatch article.
Debugging
Your referral SaaSquatch program makes dubugging on your Test tenant easier by exposing additional logging.
Our Squatch.js Library provides the following debugging tools to help you work out issues as you impliment your referral program:
Error codes displayed in the widget help to identify any problem as when displaying the widget
Status updates, and error messages, are displayed on the console to help troubleshoot as you call squatch.js methods:
Test Data
Your test tenant you also provides the ability to delete test data.
Data that WILL be deleted:
- Users, accounts, and referrals
- Analytics events
The following data WILL NOT be deleted:
- Any data stored in external payment systems (e.g. Stripe/Recurly)
- Your SaaSquatch Account (including accounts of your other team members
- Your theme and widget customizations
- Your program settings, like reward settings, and API keys
To access this functionality:
- Select Settings under the Setup menu in the Sidebar
- Click Delete Test Data
- Toggle the I would like to delete my test data toggle from No to Yes
- Click Delete.
Please note: Data on the Live tenant cannot be deleted.
Additional Resources
If you would like to dive deeper into the world of Referral Programs and our platform, we recommend checking out the following articles:
- Learn more about the structure of a referral program in our How Referral SaaSquatch Works article.
-. | https://docs.referralsaasquatch.com/developer/testing/ | 2018-10-15T13:24:59 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['http://images.contentful.com/s68ib1kj8k5n/4VuAPt2Vfau2o0eMkqQ4KQ/483cb07d3f95e4067aa171210ad7287a/Development_Process.png',
'Development Process'], dtype=object)
array(['http://images.contentful.com/s68ib1kj8k5n/6E1fNUOvL2uWAIoGuQAUYm/dcc44f54cb0755027ff11419f0b40eae/Test_vs_Live.png',
'Test vs Live'], dtype=object) ] | docs.referralsaasquatch.com |
If a child object is in the instance it must have a parent in the instance list that can be associated with that has the same structural key.
If one does not exist an error message will be displayed at some stage, probably when the instance list needs to be visualized as tree control.
For example, if an EMPLOYEE, identified by the fully key ADM-0-01-0-A1001, needs to be visualized in a Visual LANSA tree then a search is made for its parent SECTION.
This is done by looking, using the AKey1-NKey1-AKey2-NKey2-AKey3-NKey3-AKey4-NKey5-AKey5-NKey5 structured key, first for SECTIONS-ADM (which would not found), then SECTIONS-ADM-0 (which would also not be found) then SECTIONS-ADM-0-01, which would be found, thus identifying the parent correctly. | https://docs.lansa.com/14/en/lansa048/content/lansa/lansa048_3210.htm | 2018-10-15T13:26:43 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.lansa.com |
Goals
- Reduce the amount of flownodes necessary to script the idle actions by automating much of the micromanagement logic needed to manually control the character and handle all the different scenarios that can occur during gameplay (for example, canceling the actions when the agent gets alerted or resuming the actions when he goes back to idle).
- Provide a context to the individual actions that an agent performs when they are manually controlled by level design. This simplifies the AI code and makes it more stable.
- Add a validation step to the level setup, minimizing the occurrence of bugs and providing better debug information when they happen.
Concept
An AI Sequence is a chain of actions to be performed by the AI entities. The and AISequence:End define the beginning and the end of the sequence, while the AI sequence action nodes, like AISequence:Move or AISequence:Animation define the order of the actions to be executed. Only one sequence can be active for an individual agent and it can only perform one action at a time.
Below is a simple example which will make the agent walk to "tag point 10", then go to "tag point 4" and play the animation AI_idleBreakBored and finally run to "tag point 3":
They can be looped as well, like in this example which will make the agent continuously move between tag points 10 and 3:
Sequences can also have conjoined sync points. In this example, both agents will start by doing a move action and will only continue to the next action when both have arrived at their first destinations:
Interrupting / Resuming
With the interruptible flag set to true, the AI sequence will automatically stop executing as soon as any AI entity leaves the idle state, taking care of stopping any action that could be active or triggered after that.
If it is also set to resume after interruption, the sequence will also be resumed when the AI goes back to the idle state. It will resume from the start or from the latest bookmark. Bookmarks are defined with the AISequence:Bookmark flownode and can be placed in the middle of a sequence and will basically work as checkpoints from where the sequence should resume from.
In this example, if the agent is interrupted while moving to the second tag point and then goes back to idle, instead of starting from the beginning, it will resume the sequence by going directly again to the second tag point.
Valid Flownodes
To minimize possible bugs and unexpected behavior, only flownodes supported by the AI Sequence system can be used to define an AI sequence chain and there will be an error message when the AI sequence is triggered if an unsupported node is used.
The currently supported nodes are:
- All AISequence flownodes.
- All Logic flownodes.
- All GameToken flownodes.
- Sound:PlayDialog.
- Sound:Dialog flownodes.
- Sound:PlaySoundEvent.
- Actor:AliveCheck.
- Debug:DisplayMessage.
- Debug:DisplayTag.
- Debug:DisplayTagAdv.
- Entity:BeamEntity.
- Entity:CheckDistance.
- Entity:EntitiesInRange.
- Entity:GetPos.
- Entity:Pos.
- Movement:MoveEntityTo.
- Movement:RotateEntity.
- Movement:RotateEntityEx.
- Physics:ActionImpulse.
If necessary, other flownodes can be added to this list as long as they are compatible or changed to be compatible with the AI Sequence system. | https://docs.cryengine.com/display/CEMANUAL/AI+Sequences | 2018-10-15T13:24:23 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['/download/attachments/1048791/SimpleSequence.jpg?version=1&modificationDate=1366897709000&api=v2',
None], dtype=object)
array(['/download/attachments/1048791/LoopingSequence.jpg?version=1&modificationDate=1366897709000&api=v2',
None], dtype=object)
array(['/download/attachments/1048791/SyncPointSequence.jpg?version=1&modificationDate=1366897709000&api=v2',
None], dtype=object)
array(['/download/attachments/1048791/BookmarkSequence.jpg?version=1&modificationDate=1366897709000&api=v2',
None], dtype=object) ] | docs.cryengine.com |
Creating an IAM Policy to Access AWS KMS Resources
Aurora can access AWS Key Management Service keys used for encrypting their database backups. However, you must first create an IAM policy that provides the permissions that allow Aurora to access KMS keys.
The following policy adds the permissions required by Aurora to access KMS keys on your behalf.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:Decrypt", ], "Resource": "arn:aws:kms:
<region>:
<123456789012>:key/
<key-ID>" } ] }
You can use the following steps to create an IAM policy that provides the minimum required permissions for Aurora to access KMS keys on your behalf.
To create an IAM policy to grant access to your KMS keys
Open the IAM console.
In the navigation pane, choose Policies.
Choose Create policy.
On the Visual editor tab, choose Choose a service, and then choose KMS.
In Actions, expand Write, and then choose Decrypt.
Choose Resources, and choose Add ARN.
In the Add ARN(s) dialog box, enter the following values:
Region – Type the AWS Region, such as
us-west-2.
Account – Type the user account number.
Log Stream Name – Type the KMS key ID.
In the Add ARN(s) dialog box, choose Add
Choose Review policy.
Set Name to a name for your IAM policy, for example
AmazonRDSKMSKey.. | https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.html | 2018-10-15T13:05:08 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.aws.amazon.com |
GanttProject Developer Guide
This page provides instruction for building GanttProject from the master branch.
The instructions assume that you’re using Ubuntu-based Linux distribution. The process on other distros and Windows/Mac OSX should be similar, modulo differences in packages/paths and the way command line terminal works.
Overview of the technologies and frameworks used in the build process
GanttProject build process uses Java, Kotlin and Google Protocol Buffers compilers. Orchestrating them is not trivial, so be prepared to have some fun with setting up the things..
The remaining required artifacts will be downloaded during the build process.
Checking out the sources
The source code is stored in GitHub repository. You can clone the repository using
git clone
The rest of this page assumes that you checked out the sources using one of the ways into
/tmp/ganttproject directory.
Branches
This document assumes that you work with
master branch or with your own branch forked from
master
Downloading the required libraries
Before building you need to download some required libraries and frameworks. Run the following command to fetch them from Maven repositories::
cd /tmp/ganttproject/ganttproject-builder gradle updateLibs<< | https://docs.ganttproject.biz/development/build/ | 2018-10-15T12:43:00 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['https://docs.ganttproject.biz/img/development/idea-gradle-import.png',
'idea gradle import'], dtype=object)
array(['https://docs.ganttproject.biz/img/development/idea-run-ganttproject.png',
'idea run ganttproject'], dtype=object) ] | docs.ganttproject.biz |
Rename
Rename existing column names
How to Access This Feature
From + (plus) Button
- Click "+" button and select "Rename".
From Column Menu
- You can also select "Rename" from column menu of the column to rename.
Rename
- Select the column to rename from "Column" dropdown list.
- Type in the new name for the column in "New Column Name" field.
- Click "Run" to rename the column. | https://docs.exploratory.io/wrangling/rename.html | 2018-10-15T12:42:10 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['images/command-rename-df-menu.png', None], dtype=object)
array(['images/command-rename-column-menu.png', None], dtype=object)
array(['images/rename.png', None], dtype=object)] | docs.exploratory.io |
WebSocket¶
New in version 0.4.
Valum support WebSocket via libsoup-2.4/Soup.WebsocketConnection implementation if libsoup-2.4 (>=2.50) is installed.
Note
Not all server protocols support WebSocket. It is at least guaranteed to work with the HTTP server and for other, it should only a matter of implementation.
The
websocket middleware can be used in the context of a
GET method. It
will perform the handshake and promote the underlying Connection
to perform WebSocket message exchanges.
The first argument is a list of supported protocols, which can be left empty. The second argument is a forward callback that will receive the WebSocket connection.
app.get ("/", websocket ({}, (req, res, next, ctx, ws) => { ws.send_text (); return true; }));
Since the middleware actually steal the connection, body streams are rendered useless and futher communications should be done exclusively via the WebSocket connection. | http://docs.valum-framework.org/en/latest/middlewares/websocket/ | 2018-10-15T13:33:50 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.valum-framework.org |
目次を表示
1. Execute the login WebRoutine in iiiAppLogin in the browser. Log in as user ADMIN.
2. From the menu bar select Applications / Employees / Enquiry to execute WebRoutine begin in WAM iiiEmpEnquiry. With the Enquiry WAM running check that the Applications menu is shown. Close the browser.
3. Execute the begin WebRoutine in iiiEmpEnquiry in the browser. Your application should transfer to the log in WAM so that a session can be established. The transfer is handled via the event handling routine for SessionInvalid.
目次を表示 | https://docs.lansa.com/14/ja/lansa087/content/lansa/wamengt4_0740.htm | 2018-10-15T13:55:39 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.lansa.com |
Geofencing for apps only works on iOS devices that have Location Services running. In order for location services to function, the device must be connected to either a cellular network or a Wi-Fi hotspot. Otherwise, the device must have integrated GPS capabilities.
For Wi-Fi only devices, GPS data is reported when the device is on, unlocked, and the agent is open and being used. For cellular devices, GPS data is reported when the device changes cell towers. VMware Browser and Content Locker reports GPS data when the end user opens and uses them.
Devices in an "airplane mode" result in location services (and therefore Geofencing) being deactivated.
The following requirements must all be met for the GPS location to be updated.
- The device must have the AirWatch Agent running.
- Privacy settings must allow GPS location data to be collected ( Groups & Settings > All Settings > Devices & Users > General > Privacy).
The Apple iOS Agent settings must enable “Collect Location Data” ( Groups & Settings > All Settings > Devices & Users > Apple > Apple iOS > Agent Settings).
Set the Agent SDK settings to either Default SDK settings or any other SDK settings instead of "None." | https://docs.vmware.com/en/VMware-AirWatch/9.2/aw-mdm-guide-92/GUID-014MDM-GeofencingSupportoniOSDevices.html | 2018-10-15T13:33:34 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.vmware.com |
Identify the virtual machines that waste resources because of idle, oversized, or powered-off virtual machine states or because of snapshots.
Procedure
- In the left pane of vRealize Operations Manager, click Environment.
- Select vSphere World.
- Click the Heat Map tab under the Details tab.
- Select the For each datastore, which VMs have the most wasted disk space? heat map.
- In the heat map, point to each virtual machine to view the waste statistics.
- If a color other than green indicates a potential problem, click Details for the virtual machine in the pop-up window and investigate the disk space and I/O resources.
What to do next
Identify the red, orange, or yellow virtual machines with the highest amount of wasted space. | https://docs.vmware.com/en/vRealize-Operations-Manager/6.7/com.vmware.vcom.user.doc/GUID-D49A57A3-A527-4082-A2B2-681A8CFCD8A4.html | 2018-10-15T13:31:22 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.vmware.com |
How To Email Unique Codes To a ClickFunnels Contact using Zapier
Coupon Carrier is a promotional tool to help you gain email subscribers or foot traffic by delivering unique time-sensitive discounts and coupons — Connects directly to your favorite email service provider.
Using Zapier, you can connect ClickFunnels and Coupon Carrier to distribute unique codes to your contacts. In this example, we’ll send out an email with a unique code when someone completes our funnel in ClickFunnels.
- 1
- Create a new Code Email configuration in Coupon Carrier and select Zapier as the trigger.
- 2
- Customize the email content by adding your logo, message, and optionally add merge tags if you want to include information from ClickFunnels like the contact’s name, etc. In this case, we’ll add {{ merge_fields.name }}, which we’ll later add in Zapier to provide the name of the contact.
- 3
- Select the source select an existing code list. We also need to save and activate this configuration so that we can use it in Zapier.
- 4
- Next up, log in to your Zapier account and create a new Zap. Select ClickFunnels and the New Contact Activity trigger. Choose your funnel and the step that you want to trigger this Zap. We’ll select the thank you page in our test funnel.
- 5
- Add another step and select our Coupon Carrier app and the action named “Send a Code Email”. This allows us to provide an email address (which we can get from the previous ClickFunnels step) and the Code Email configuration that we want to trigger. We can also add additional merge fields that will be available in the Code Email content. In an earlier step, we added the name tag, so we’ll add it here so that we can access the name in the email.
- 6
- We’re not ready to save it and test the Zap. You can then try it by completing the funnel to the step you configured as the trigger.
For more information on Coupon Carrier, check out our Getting Started section. | https://docs.couponcarrier.io/article/89-how-to-email-unique-codes-to-a-clickfunnels-contact-using-zapier | 2022-06-25T02:30:46 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.couponcarrier.io |
This section provides instructions on how to setup Single Sign-on (SSO) on the 5.3.3+ Delphix Engine. Delphix Engines (Masking and Virtualization) version 5.3.3+ support authentication via the SAML 2.0 standard (SP initiated and Idp Initiated). SLO (Single log-out) is not supported, that is logging out of a Delphix Engine will not terminate sessions on other Delphix Engines and will not terminate the IdP session.
- Identity Provider Configuration
- New Engine Configuration
- Existing Engine configuration
- User Management When SSO is Enabled
- API Access
- Troubleshooting SSO | https://docs.delphix.com/docs535/system-installation-configuration-and-management/installation-and-initial-system-configuration/configuring-single-sign-on | 2022-06-25T01:33:17 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.delphix.com |
Why is there an API failure? API Failure, the first thing that you should do is to check the Code Step, as well as the Code logs. You can select the Code Step, and you can check the following in the logs -
- Check the name of the users added, and make sure they are in alphabetical form and not numeric.
- Make sure the userType is correctly defined in the code.
If you have checked these and yet you are facing the same issue, please contact our Support team at [email protected]. | https://docs.haptik.ai/why-is-there-an-api-failure | 2022-06-25T01:36:55 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.haptik.ai |
Release date : Sep 11, 2021
Displays the version of the Product on all the pages of JIFFY.ai application.
Link, Join, Dataset Schema, and Datasheet Schedule options are removed on SQL Editor.
Ability to navigate to one or more Presentations from a specific grid row in the current Presentation.
All the existing docker containers on the Cognitive server has been migrated to Kubernetes. Also, the functionator module on the core server has been moved to Kubernetes.
Refinement of the existing Jiffy Table and Document table in the Dataset page with functionalities including filter, sort, pagination, reorder, resize, etc.
Few functions are enhanced to run on server as against the current behavior where all the functions run on client.
Indentations are corrected to enhance the usability and visibility of the Actions and Variable in the Action tab.
The limitation of allowing only five levels of IF/Loop conditions in UI nodes is removed.
Doc reader accepts File Server ID as the input and executes in server. Users can avoid bot machines completely.
Engine changes for supporting around 300 documents in five minutes. This achieved by moving all ML models and algorithms into Kubernetes components.
The New version of Doc reader node is introduced, that can process and extract data from documents having more than 1000 pages.
The predefined function W2 Split identifies the W2 samples on each page and split them across the PDF.
The predefined function Copy file from DocubeFS to JFS copies the files from the Docube file server to Jiffy server to enable processing the files on server.
The predefined function Delete records from Jiffy File server deletes the files Jiffy file server.
This feature supports data cleansing of table data in document tables. Unlike Field level for table data, the data cleansing rules are applied at the column level. For all the documents that come under the same category, the extraction logic will be applied.
The option to select the required area by dragging the square edges is provided to extract the selected text area.
Audit log provides the insight into the activities that are performed on the system and provides the options to monitor and evaluate later to understand any data breach.
JFS-5135: Presentation filters and filter card is improvised to have the date pickers for dates and the between filter for numbers to provide range values.
JFS-4736: The license page has been facilitated to have an expiry date to be displayed indicating the date of expiration.
The email invite to the user during user creation displays the details of the Username which helps the user for log in.
The limit to download the records for both the Jiffy Table and the Document Table on the Presentation is set to 1000.
JFS-5099: The logo on the login page is updated to the latest Jiffy logo.
The text of the filter in the presentation has been made to be more meaningful.
The options Edit and Add of Jiffy Table records are disabled when the Presentation is in edit mode.
Provided confirmation window while deleting the Dataset or Presentation from its respective listing page.
The error message "Process exited with Error" is corrected to display the actual error why the execution failed.
Previously, when a record is deleted from a JiffyTable and Document Table, the corresponding attachments were not getting deleted. New API deletes all the related files from JFS and DocubeFS.
Republish of the SQL dataset in case of modifying the query is handled.
When the filter “not in” been selected in the presentation with no values provided under the search window, the blank page was getting displayed which is fixed now.
JFS-5268: The issue with the import/export of an Hyperapp is addressed.
JFS-4580: The issue of using Jiffy Select node with OR condition along with the Update Column was fetching all the rows available in the database and updates the selected column is fixed.
JFS-5027: The clarity in the document preview on Jiffy table form getting blurred after zooming, is addressed.
JFS-4432: The task getting triggered twice upon double-clicking on a button of a form is addressed.
When 500 records are added to the form table, the page going blank is fixed.
JFS-5302: Email browser failing with errors: Using the Email browser node, during download of an attachment name having special characters that are not allowed by windows, the node fails with an error. As a solution, the special characters that are not allowed in windows are replaced by an underscore(_).
JFS-5389: The Email Browser node was stuck in In Progress state for a long time is rectified.
JFS-5120: Facilitated capturing of the errored screenshots when the user uses a dynamic script.
JFS-4908: In Rest API node, while authenticating the configuration, username length had a restriction of 60 characters. Now it increased to 255 characters.
JFS-4407: In PDF processing, after clearing the ML detected value is not getting saved in the document table.
JFS-5009: After deleting one vendor category, it was getting attached to another category (a different one) and it is now coming as a New category.
JFS-5058: For a few multipage documents, correct data in the document table was not extracting as expected and it is fixed.
JFS-4839: Added space for the functions Convert Image PDF to Text PDF and Get Document OCR Confidence.
The documents are getting failed to extract data if it is pre-processed using handwritten node.
The base documents are not getting opened in migrated apps, if it is approved in older versions. | https://docs.jiffy.ai/whats_new/release-4.5/release-4.5/ | 2022-06-25T00:51:06 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.jiffy.ai |
Evaluate performance improvements
Evaluate possible changes to indexes. Determine the impact of changes to queries and indexes. Explore Query Store hints.
Learning objectives
After completing this module, you will be able to:
- Determine when changing indexes or defining new ones can affect performance
- Evaluate wait statistics as an aid in finding areas for performance improvement
- Understand how query hints work, and when to use them
Prerequisites
- Ability to use tools for running queries against a Microsoft SQL database, either on-premises on cloud-based.
- Ability to write code in the SQL language, particularly the Microsoft T-SQL dialect, at a basic level.
- Basic understanding of structure and usage of SQL Server indexes.
- Basic understanding of relational database concepts.
- Introduction min
-
-
-
-
- Knowledge check min
- | https://docs.microsoft.com/en-us/learn/modules/evaluate-performance-improvements/?WT.mc_id=api_CatalogApi | 2022-06-25T01:30:51 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.microsoft.com |
Duo
Single sign-on with Duo
This tutorial will explain configuring Duo for single sign-on to Pritunl. Users will authenticate through Duo when downloading VPN profiles and before each VPN connection. Although Duo can be used independently for best security it should be used in combination with another single sign-on provider. If Duo is used in combination with another provider the user will need to authenticate with Duo when downloading VPN profiles and before each VPN connection. VPN re-connections will not require a Duo authentication, this can be changed with the Two-Step Authentication Cache settings.
Create Pritunl App on Duo
In the Duo admin interface select Applications then click Protect an Application and search for OpenVPN. Then click Protect this Application. Once the the application has been created set the Name to
Pritunl and set Username normalization to Simple. Then click Save Changes.
Configure Pritunl
Once the Duo app has been configured open the Pritunl settings and set Single Sign-On to Duo or one the combinations including Duo. Then copy the Integration key to Duo Integration Key, Secret key to Duo Secret Key and API hostname to Duo API Hostname.
Select Duo Mode
Pritunl supports several Duo modes. The Push mode will send a push authentication request to the users mobile device. The Phone Callback mode will call the users phone and ask the user to approve the authentication request. The Passcode mode will require the user to enter the passcode from the Duo mobile app or a hardware token from Duo. The Push + Phone Callback mode will use a phone callback if the user does not have the Duo mobile app installed.
Updated over 5 years ago | https://docs.pritunl.com/docs/duo | 2022-06-25T02:36:59 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['https://files.readme.io/aaf6ca4-duo0.png', 'duo0.png'],
dtype=object)
array(['https://files.readme.io/aaf6ca4-duo0.png', 'Click to close...'],
dtype=object)
array(['https://files.readme.io/717c8ff-duo1.png', 'duo1.png'],
dtype=object)
array(['https://files.readme.io/717c8ff-duo1.png', 'Click to close...'],
dtype=object)
array(['https://files.readme.io/2681c77-duo2.png', 'duo2.png'],
dtype=object)
array(['https://files.readme.io/2681c77-duo2.png', 'Click to close...'],
dtype=object) ] | docs.pritunl.com |
Fri May 6th, 2022, 11:35 am: HPSS scheduler upgrade also finished.
Thu May 5th, 2022, 7:45 pm: Upgrade of the scheduler has finished, with the exception of HPSS.
Thu May 5th, 2022, 7:00 am - 3:00 pm EDT (approx): Starting from 7:00 am EDT, an upgrade of the scheduler of the Niagara, Mist, and Rouge clusters will be applied. This requires the scheduler to be down for about 5-6 hours, and all compute and login nodes to be rebooted.
Jobs cannot be submitted during this maintenance, but jobs submitted beforehand will remain in the queue. For most of the time, the login nodes of the clusters will be available so that users may access their files on the home, scratch, and project file systems.
Monday May 2nd, 2022, 9:30 - 11:00 am EDT: the Niagara login nodes, the jupyter hub, and nia-datamover2 will get rebooted for updates. In the process, any login sessions will get disconnected, and servers on the jupyterhub will stop. Jobs in the Niagara queue will not be affected.
Tue Apr 26, 11:20 AM EDT: A Rolling update of the Mist cluster is taking a bit longer than expected, affecting logins to Mist.
Previous messages | https://docs.scinet.utoronto.ca/index.php?title=Main_Page&oldid=3815 | 2022-06-25T00:50:19 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.scinet.utoronto.ca |
Step 1: Set up a Hadoop Virtual Machine instance
The easiest way to get started sampling searches in Hunk is to install a Virtual Machine that comes preconfigured with Hadoop.
For this tutorial we are using using the Cloudera Quickstart VM for VMware. See System and Software requirements for the full list of supported Hadoop distributions and versions.
Setting up your Virtual Machine for this tutorial
The examples in this tutorial use a Cloudera Quickstart Virtual Machine. If you are using another VM with Hadoop instance, see that product's directions for installation and setup. You may also need to modify the examples for your particular configuration.
If you wish to try out Hunk using YARN, we recommend you try using the Hortonworks Sandbox 2.0 here.
If you using Cloudera Quickstart for VM:
1. untar the Cloudera Quickstart VM on your computer:
2. Start and access the virtual machine.
3. Import the OVF file from VMware Fusion.
4. Start the VM and open the terminal to find the IP address of your virtual machine.
Using this tutorial with YARN
If you want to try out Hunk using YARN, we recommend you try out the tutorial using the Hortonworks Sandbox 2.0.
Note: You can also use one of the virtual machines provided by Hortonworks here:.! | https://docs.splunk.com/Documentation/Hunk/6.4.11/Hunktutorial/SetupyourVM | 2022-06-25T01:23:57 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Configure TA-SMTP-Reputation
The Splunk Add-ons for Microsoft Exchange must be configured before you can deploy them to Exchange Server hosts. This is because you must specifically enable support for the version of Exchange Server and Windows Server that you run.
Each add-on within the Splunk Add-ons for Microsoft Exchange package includes an
inputs.conf file that has all of the data inputs that are necessary to get Exchange Server data. These inputs are disabled by default.
To get a reputation for a particular VM then the user has to add VM IP in
reputation.conf.
Download and unpack the TA-Exchange-SMTP-Reputation add-on
- Download the Splunk Add-ons for Microsoft Exchange package from Splunkbase.
- Unpack the add-on bundle to an accessible location.
Create and edit
inputs.conf
- Open a PowerShell window, command prompt, or Explorer window.
- Create a local directory within the
TA-SMTP-Reputation add-on.
- Copy inputs.conf from the
TA-SMTP-Reputation\defaultdirectory to the
TA-SMTP-Reputation\local directory.
- Use a text editor such as Notepad to open the
TA-SMTP-Reputation\local\inputs.conffile for editing.
- Modify the
inputs.conffile so that the common data inputs that you run are enabled. Do this by changing
disabled = true to disabled = false for all input stanzas<c/ode>. See the example
inputs.conflater in this topic.
- After you update the
inputs.conf file, save it and close it.
Distribute the add-ons
If you do not have a deployment server to distribute apps and add-ons, set one up. A deployment server greatly reduces the overhead in distributing apps and add-ons to hosts. You can make one change on the deployment server and push that change to all universal forwarders in your Splunk App for Microsoft Exchange deployment. The Splunk App for Microsoft Exchange manual uses deployment server extensively in its setup instructions.
- Copy the TA-SMTP-Reputation add-on to the
%SPLUNK_HOME%\etc\deployment-appsdirectory on the deployment server.
- Push the add-on to all hosts in this server class.
This documentation applies to the following versions of Splunk® App for Microsoft Exchange (EOL): 3.5.2, 4.0.0, 4.0.1, 4.0.2, 4.0.3
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/MSExchange/3.5.2/Add-Ons/ConfigureTA-SMTP-Reputation | 2022-06-25T01:06:18 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
The backend module¶
The setup check screen¶
This screen has already been mentioned in the Installation chapter. It is mostly useful when setting up the Scheduler, as it indicates whether the CLI script is executable or not. When everything is running fine, it contains mostly one useful piece of information: when the last run took place, when it ended and whether it was started manually (i.e. from the BE module) or automatically (i.e. from the command line).
The information screen¶
This screen shows a list of all available tasks in the current TYPO3 installation. When the Scheduler has just been installed, this will be limited to the two base tasks provided by the extension. The screen shows the name and description of the task. Clicking on the “Add” icon on the right side of each row will open up the new task registration screen, with the task class already predefined.
List of available tasks in the Scheduler’s information screen
The scheduled tasks screen¶
This is the main screen when administering tasks. At first it will be empty and just offer a link to add a new task. When such registered tasks exists, this screen will show a list with various pieces of information.
Main screen of the Scheduler BE module
Disabled tasks have a gray label sign near the task name. A disabled task is a task that will not be run automatically by the command-line script, but may still be executed from the BE module.
A late task will appear with an orange label sign near the task name:
A late task in the main screen of the Scheduler BE module
The task list can be sorted by clicking the column label. With every click it switches between ascending and descending order of the items of the associated column.
The table at the center of the above screenshot shows the following:
- The first column contains checkboxes. Clicking on a checkbox will select that particular scheduled task for immediate execution. Clicking on the icon at the top of the column will toggle all checkboxes. To execute the selected tasks, click on the “Execute selected tasks” button. Read more in “Manually executing a task” below.
- The second column simply displays the id of the task.
- The third column contains the name of the task, the extension it is coming from and any additional information specific to the task (for example, the e-mail to which the “test” task will send a message or the sleep duration of the “sleep” task). It also show a summary of the task’s status with an icon.
- The fourth column shows whether the task is recurring or will run only a single time.
- The fifth column shows the frequency.
- The sixth columns indicates whether parallel executions are allowed or not.
- The seventh column shows the last execution time and indicates whether the task was launched manually or was run via the command-line script (cron).
- The eighth column shows the planned execution time. If the task is overdue, the time will show up in bold, red numbers. A task may have no future execution date if it has reached its end date, if it was meant to run a single time and that execution is done, or if the task is disabled. The next execution time is also hidden for running tasks, as this information makes no sense at that point in time.
- The last column contains possible actions, mainly editing, disable or deleting a task. There are also buttons for running the task on the next cron job or run it directly. The actions will be unavailable for a task that is currently running, as it is unwise to edit or delete it a task in such a case. Instead a running task will display a “stop” button (see “Stopping a task” below).
Note that all dates and times are displayed in the server’s time zone. The server time appears at the bottom of the screen.
At the top of the screen is a link to add a new task. If there are a lot of tasks that appear late, consider changing the frequency at which the cron job is running (see “Choosing a frequency” above).
Occasionally the following display may appear:
A scheduled task missing its corresponding class
This will typically happen when a task provided by some extension was registered, then the extension was uninstalled, but the task was not deleted beforehand. In such a case, this task stays but the Scheduler doesn’t know how to handle it anymore. The solution is either to install the related extension again or delete the registered task. | https://docs.typo3.org/c/typo3/cms-scheduler/11.5/en-us/Administration/BackendModule/Index.html | 2022-06-25T02:16:09 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['../../_images/InformationScreen.png',
'Scheduler information screen'], dtype=object)
array(['../../_images/BackendModuleMainView.png', 'Scheduler main screen'],
dtype=object)
array(['../../_images/LateTask.png',
'A late task in the Scheduler main screen'], dtype=object)
array(['../../_images/MissingTaskClass.png', 'A broken task'],
dtype=object) ] | docs.typo3.org |
AWS::WAF::Web.
Contains the
Rules a Amazon CloudFront distribution to identify the requests that you want AWS WAF to filter.
If you add more than one
Rule to a
WebACL, a request needs to match only one of the specifications
to be allowed, blocked, or counted.
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
JSON
{ "Type" : "AWS::WAF::WebACL", "Properties" : { "DefaultAction" :
WafAction, "MetricName" :
String, "Name" :
String, "Rules" :
[ ActivatedRule, ... ]} }
YAML
Type: AWS::WAF::WebACL Properties: DefaultAction:
WafActionMetricName:
StringName:
StringRules:
- ActivatedRule
Properties
DefaultAction
The action to perform if none of the
Rulescontained in the
WebACLmatch. The action is specified by the
WafActionobject.
Required: Yes
Update requires: No interruption
MetricName
The name.
Required: Yes
Type: String
Minimum:
1
Maximum:
128
Pattern:
.*\S.*
Update requires: Replacement
Name
A friendly name or description of the
WebACL. You can't change the name of a
WebACLafter you create it.
Required: Yes
Type: String
Minimum:
1
Maximum:
128
Pattern:
.*\S.*
Update requires: Replacement
Rules
An array that contains the action for each
Rulein a
WebACL, the priority of the
Rule, and the ID of the
Rule.
Required: No
Type: List of ActivatedRule
Update requires: No interruption
Return values
Ref
When you pass the logical ID of this resource to the intrinsic
Ref function,
Ref returns the resource name, such as 1234a1a-a1b1-12a1-abcd-a123b123456.
For more information about using the
Ref function, see Ref.
Examples
Create a Web ACL
The following example defines a web ACL that allows, by default, any web request. However, if the request matches any rule, AWS WAF blocks the request. AWS WAF evaluates each rule in priority order, starting with the lowest value.
JSON
" } } ] } }
YAML"
Associate a Web ACL with a Amazon CloudFront Distribution
The follow example associates the
MyWebACL web ACL with a Amazon CloudFront distribution.
The web ACL restricts which requests can access content served by Amazon CloudFront.
JSON
" } } } }
YAML" | https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-waf-webacl.html | 2022-06-25T03:11:35 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.aws.amazon.com |
$(nproc) X 1/2 MiB.
Due to Red Hat Enterprise Linux (RHEL) behavior, a container on a node with high CPU usage might seem to consume more memory than expected. The higher memory consumption could be caused by the
kmem_cache in the RHEL kernel. The RHEL kernel creates a
kmem_cache for each cgroup. For added performance, the
kmem_cache contains a
cpu_cache, and a node cache for any NUMA nodes. These caches all consume kernel memory.
The amount of memory stored in those caches is proportional to the number of CPUs that the system uses. As a result, a higher number of CPUs results in a greater amount of kernel memory being held in these caches. Higher amounts of kernel memory in these caches can cause OpenShift Container Platform containers to exceed the configured memory limits, resulting in the container being killed.
To avoid losing containers due to kernel memory issues, ensure that the containers request sufficient memory. You can use the following formula to estimate the amount of memory consumed by the
kmem_cache, where
nproc is the number of processing units available that are reported by the
nproc command. The lower limit of container requests should be this value plus the container memory requirements:
$(nproc) X 1/2 MiB | https://docs.openshift.com/container-platform/4.8/nodes/containers/nodes-containers-using.html | 2022-06-25T02:16:17 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.openshift.com |
Policy Sentry Documentation
Policy Sentry is an AWS IAM Least Privilege Policy Generator, auditor, and analysis database. It compiles database tables based on the AWS IAM Documentation on Actions, Resources, and Condition Keys and leverages that data to create least-privilege IAM policies.
Organizations can use Policy Sentry to:
- Limit the blast radius in the event of a breach: If an attacker gains access to user credentials or Instance Profile credentials, access levels and resource access should be limited to the least amount needed to function. This can help avoid situations such as the Capital One breach, where after an SSRF attack, data was accessible from the compromised instance because the role allowed access to all S3 buckets in the account. In this case, Policy Sentry would only allow the role access to the buckets necessary to perform its duties.
- Scale creation of secure IAM Policies: Rather than dedicating specialized and talented human resources to manual IAM reviews and creating IAM policies by hand, organizations can leverage Policy Sentry to write the policies for them in an automated fashion.
Policy Sentry's policy writing templates are expressed in YAML and include the following:
- Name and Justification for why the privileges are needed
- CRUD levels (Read/Write/List/Tagging/Permissions management)
- Amazon Resource Names (ARNs), so the resulting policy only points to specific resources and does not grant access to
*resources.
Policy Sentry can also be used to:
- Query the IAM database to reduce manual search time
- Generate IAM Policies based on Terraform output
- Write least-privilege IAM Policies based on a list of IAM actions (or CRUD levels) | https://policy-sentry.readthedocs.io/en/latest/ | 2022-06-25T01:28:55 | CC-MAIN-2022-27 | 1656103033925.2 | [] | policy-sentry.readthedocs.io |
SaaS pricing models explained
In this section we will focus on pricing models. The Pricing Model configuration section is one of the most important aspects of product configuration. Since it directly influences your product sales, it is important to understand which the most common pricing models available on the platform.
You can deeply customize the pricing model for your product in Cloudesire (for example, you can add extra resources to your product), but the choice of the right pricing model is not always easy. That's why in the following paragraphs you will find an introduction to the most used pricing models in SaaS businesses.
Available pricing modelsAvailable pricing models
There are many different basic pricing models that you can implement in Cloudesire. To define a pricing model, you need at least one product into your catalog. Select the product, then "Edit" and go to the "Plans" tab. You can edit or add new plans to your product.
Freemium modelFreemium model
The Freemium model is a good way to find new customers for your application. With this model, you allow customers to use your application with basic features for free and switch to premium features in paid plans.
The challenging aspect is that you need to find ways to push customers to switch to premium versions: this means that you need to define the pricing model with at least two product versions and understand which features of your product could push users to pay. For example, premium version could include more users, more credits, more storage or more services than the free version: you need to understand which is the right hook.
Examples of SaaS business using this model are Hootsuite and Dropbox.
Pay-as-you-go modelPay-as-you-go model
Pay as you go model means that your users pay only for what they use, so it's a good choice if you want to reduce the entry barrier for customers. This offers flexibility to customers, but your product needs to have an effective metering system. Also, you need to understand how to segment your offer. Usually, you lower the price of items for larger volumes, so that users are pushed to buy more items to save money.
iStock is an example of Pay as you go pricing model
Pay-per-user modelPay-per-user model
Pay-per-user model means that your customers move up and down the tier automatically, so the cost scales according to how they use the product. For example, you pay 30$/month up to 10 users and 50$/month up to 15 users. The challenging aspect of this model is that you do not have a constant flow of revenues, they can move up and down and it's not predictable.
Slack is an example of a SaaS business that follows this model.
Fixed-Pricing modelFixed-Pricing model
The Fixed Pricing model is a good way to offer different versions of your product to different targets and generate consistent revenues. If you have a product that can target different customers and you can define different pricing tiers, you could benefit from the fixed pricing model.
For example, you could create a silver version for freelance professionals, a gold version for small companies and a platinum version for large companies.
Hubspot is a successful example of business that offers a fixed pricing model.
Features-Based modelFeatures-Based model
Feature Based model means that you create different pricing tiers based on features. The challenging aspect of this model is that you need to identify which features add to each tier, so that customers are willing to pay up to get them.
Salesforce is an example of a business that offers a feature based model.
| https://docs.cloudesire.com/docs/pricing-models.html | 2019-04-18T14:24:06 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['/img/docs/freemium-model.png', 'Pricing Models: freemium'],
dtype=object)
array(['/img/docs/pay-as-you-go-model.png',
'Pricing Models: pay as you go'], dtype=object)
array(['/img/docs/pay-per-user-model.png',
'Pricing Models: pay per user model'], dtype=object)
array(['/img/docs/fixed-price-model.png',
'Pricing Models: fixed price model'], dtype=object)
array(['/img/docs/feature-based-model.png',
'Pricing models: Feature Based model'], dtype=object)] | docs.cloudesire.com |
Making new payment plan on PayWhirl is quick and easy. From the dashboard, click on "My Plans"
On the plan page, click "Create a New Plan"
Fill out your plan name, the setup fee (if you'd like one), and specify the number installment (or leave on no limit). The Setup Fee will only be charged once at the time of sign up. Payments with a set number of installments will eventually end for the customer (after they pay the last installment). For example: if you had a product that is $100, but wanted customers to be able to pay in two separate $50 installments, you would set the Plan Amount to $50, and the Installments to 2.
Next, set your Billing Charge Amount and Interval, which is how much the subscriber will pay, and when/how the system should bill them. You can also set your currency and whether or not you will require the customer to enter a shipping address.
Note: Don't include special characters in the price such as a '$' or this will prevent the plan from saving. It only accepts numbers such as 10.99 not $10.99
You can optionally set a trial period if you would like to offer the plan for a certain time without charging the subscriber. Note: Trial periods are specified in days and cannot be edited once they are created.
For the "Enabled" setting, choose "Yes" if you would like to start offering the plan right away. Choose "No" if you would like it to remain hidden from the main widget. Disabled plans can still be selected in the Widget Builder are hidden from the main widget or customer's subscription manager.
For the "Starting Day of The Month" option, you can set which day you would like the subscriber to begin being charged. For example: if you set your Starting Date to the 15th, and a customer signs up on the 7th, that customer will not be charged until the 15th. Also, if a customer signed up on the 16th, that customer would not be charged until the 15th of the following month. This can be helpful for companies that only bill once a month on the same day of the month.
If you Don't use Shopify, click "Save" and you're done.If you DO use Shopify you will have some additional settings
The "Plan SKU" and "Fulfillment Method" spaces are for Shopify Users Only.
If you use a Shopify fulfillment solution, set the SKU for your product here. I MUST match the SKU you use for fulfillment exactly.
Additionally, there is a setting that let's you create layaway type products on Shopify using orders... You can choose to have orders placed after EACH installment is made... OR... Only after the LAST installment has been paid (like layaway).
Note: When plans are set to "no limit" orders will be placed on each installment.
Finally, just save your plan!
TROUBLESHOOTING: If you cannot save a plan because the button is greyed out or not working you most likely have something in the form that PayWhirl cannot accept. Usually it's a $ in the price field. Just remove and special characters like $ and you should be able to save the plan. | https://docs.paywhirl.com/PayWhirl/classic-system-support-v1/getting-started-and-setup/v1-creating-a-payment-plan | 2019-04-18T15:18:20 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://uploads.intercomcdn.com/i/o/19830122/c400e9d189bd61d5ed47feb1/note.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/12671429/e7f8e1b22aa83e8104cc2361/PW_Dash_plans.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/12671449/7c51465928ce7dccc6dca348/create_plan.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/12671458/3c003d892e8420a37e3de187/Plan_setup.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/12671464/ed4b573d8e5150319b2050b1/billing_amount_and_int.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/12671475/b6112d71c25afe83c380a48e/trial.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/12671505/1d88b7a03b1f051c22ca4cfc/plan_sku.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/12671510/f1a364aac456b2f9a057281c/installment_orders.png',
None], dtype=object) ] | docs.paywhirl.com |
Edit Volume
You can edit a volume to make changes to name and description of the volume, the size of the volume, and to make it bootable or non-bootable.
Note: In order to change the size of a volume it must be unattached, and in the “available” state.
You must be a self-service user or an administrator to perform this operation.
You can add more metadata to a volume by editing the volume.
To edit a volume, follow the steps given below.
- Log in to Clarity.
- Click Volumes and Snapshots in the left panel.
- Select the check box for the volume to edit from the list of volumes.
- Click Edit Volume on the toolbar seen above the list of volumes.
- Click Basic to make changes related to name, description and bootability of the volume.
- Click Update Volume after making the required changes.
- Click Extend Volume to increase the total size of the volume. Click Extend Volume after making the required changes.
- Click Tags and to add or remove metadata. Click Update Tags after making the required changes. | https://docs.platform9.com/user-guide/volumes/edit-volume/ | 2019-04-18T14:28:48 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.platform9.com |
Backups
ApiOmat uses a MongoDB for storing configuration and application data. Several backup machanisms can be used, depending on load, DB setup and backup hardware.
Using the restore module
The Restore Module backups each application data once a day, if configured during setup. All application databases are backed up using mongodump and saved in a directory provided in yambas.conf.
Each app administrator has the independent option to restore the backup from the last ten days. While this is straightforward, the backup capability is limited; it neither supports point in time recovery, nor does it backup the whole Apiomat configuration.
The guide Configure Backup / Restore Functionality contains further details on how to set up the restore module. | http://docs.apiomat.com/32/Backups.html | 2019-04-18T14:48:57 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.apiomat.com |
Creating a Lyrebird app
To create a new app, go to the Developer page, and click on New OAuth App.
The New application form will prompt you for:
- Your application name, description and homepage: this public information will be shown to your users when they are prompted to consent to give you access to their voice.
- Your application redirect URI: the URI to which your users will be redirected to after they consent to give you access to their voice, as part of the OAuth2 authentication flow. If you're not sure what to put here, put a placeholder URL like, then read this whole documentation and update it when you're done and understand the OAuth2 flow.
By clicking on your app after saving it, you can access the application's OAuth client credentials to be used in the next step:
- Your application Client ID: a public ID that uniquely identifies your application in the OAuth2 flow.
- Your application Client Secret: a private identifier that must stay private and acts as your application password to be used in the OAuth2 flow.
Next, you need to get an OAuth2 access token in order to use your user's digital voice. | http://docs.lyrebird.ai/avatar_api/10_creating_app.html | 2019-04-18T15:17:01 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.lyrebird.ai |
Searching¶
A search page is available by default at the
/search/ URL, which can be customized in the
urls.py file in your project. To enable a search bar in the navigation bar, check
Settings > Layout > Search box. Search results are paginated; to specify the number of results
per page, edit the value in Settings > General > Search Settings.
Search result formatting¶
Each search result is rendered using the template at
coderedcms/pages/search_result.html.
The template can be overridden per model with the
search_template attribute.
Search result filtering¶
To enable additional filtering by page type, add
search_filterable = True to the page model.
The
search_name and
search_name_plural fields are then used to display the labels for
these filters (defaults to
verbose_name and
verbose_name_plural if not specified).
For example, to enable search filtering by Blog or by Products in addition to All Results:
class BlogPage(CoderedArticlePage): search_filterable = True search_name = 'Blog Post' search_name_plural = 'Blog' class Product(CoderedWebPage): search_filterable = True search_name = 'Product' search_name_plural = 'Products'
Would enable the following filter options on the search page: All Results, Blog, Products.
Search fields¶
If using the Wagtail DatabaseSearch backend (default), only page Title and Search Description
fields are searched upon. This is due to a limitation in the DatabaseSearch backend;
other backends such as PostgreSQL and Elasticsearch will search on additional specific fields
such as body, article captions, etc. To enable more specific searching while still using the
database backend, the specific models can be flagged for inclusion in search by setting
search_db_include = True on the page model. Note that this must be set on every type of page
model you wish to include in search. When setting this flag, search is performed independently on
each page type, and the results are combined. So you may want to also specify
search_db_boost (int)
to control the order in which the pages are searched. Pages with a higher
search_db_boost
are searched first, and results are shown higher in the list. For example:
class Article(CoderedArticlePage): search_db_include = True search_db_boost = 10 ... class WebPage(CoderedWebPage): search_db_include = True search_db_boost = 9 ... class FormPage(CoderedFormPage): ...
In this example, Article search results will be shown before WebPage results when using the
DatabaseSearch backend. FormPage results will not be shown at all, due to the absence
search_db_include. If no models have
search_db_include = True, All CoderedPages
will be searched by title and description. When using any search backend other than database,
search_db_* variables are ignored. | https://docs.coderedcorp.com/cms/stable/getting_started/searching.html | 2019-04-18T15:17:10 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.coderedcorp.com |
Trans-Docs Services© has been providing high quality, professional translation, transcription, proofreading, interpretation and editing services for over 5 years. We have customers in most industry segments.
Our driving mission is delivering high-quality services at affordable prices. We provide a unique personalized service to ensure your every need is accommodated.
Our simple working principle is: “Customer Satisfaction First” Our customers use transcription services for their important phone calls, academic research, video production, and content marketing. Anytime you are listening to important information or need to review a conversation later, transcription guarantees that you won’t miss any meaningful parts of the recorded audio.
We use an entirely ONLY humans process, supported by a stringent quality policy. The quality of your translated and/or transcribed documents is our number one concern!
Our team of professionals is ready to get to work as soon as you submit your order. Our transcription experts are available to accurately convert your audio to text today.
Whatever you need to be transcribed or translated you can count on us to get it done. We offer quick turnaround on transcripts any day of the week. Contact us today to get started on your files. | https://www.trans-docs.com/ | 2019-04-18T15:13:19 | CC-MAIN-2019-18 | 1555578517682.16 | [] | www.trans-docs.com |
CloudMailin allows you to receive any volume of incoming email via a Webhook..
In this guide we're going to cover the basics of receiving your first email with CloudMailin. The first step is to head to and signup.
As soon as you sign up, you will be presented with the option to create your first CloudMailin address. Here you can enter the URL that CloudMailin should send your email to. We call this url the target or destination. CloudMailin will use an HTTP POST to deliver all of your email to this URL. This target URL must be a URL that CloudMailin can access over the public internet, don't worry though, it supports HTTPS and you can pass basic authentication to keep things secure. If you're developing locally you can also see our Local Development and Testing Guide once you're done here.
After you click submit, an email address will be generated for you and you are ready to start sending you email. Click the manage this address button to view the details for this address. Here you can manage all aspects of the address but for now we're mainly interested in the email address we've been allocated and the message history (also known as delivery statuses).
Let's go ahead an send an email to our new CloudMailin address. Let's go ahead and click the 'Send a message now' link and send an email to your site. Within a couple of seconds we should be able to see the message within the message history. If the delivery status shows 404 this means that your website hasn't been setup to receive the emails yet and is throwing a standard
404 - Not Found error. This is perfectly normal, CloudMailin intelligently uses your HTTP status codes to determine what to do with the message (more details can be found at HTTP Status Codes).
If we click on the details tab more information about this message should be available, if the message was unsuccessful this page will also show the http response given by your website to help you debug.
Now we need to write the code for the website to receive the email. For this demo it will we can just to make the destination URL return a status code of 201. Once you've written some code and your website response with a 201 status (or any 2xx status code) the delivery status will be green and this is considered a successful message. CloudMailin won't bounce this or try to send it again as your app has indicated that everything went well. | http://docs.cloudmailin.com/getting_started/ | 2019-04-18T15:25:53 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.cloudmailin.com |
8x8 Call Parking allows you to put a call on hold and pick it up from another extension on your 8x8 VoIP phone system. The Call Parking feature is included free with your 8x8 Virtual Office phone system. On parking a call, the Virtual Office system automatically assigns a parking extension number to the call and announces the number to you. You can then dial this number to retrieve and resume the call from any telephone set, or communicate the number to another extension user. You can park a call if you are in a conversation at your desk, and would like to continue the call from a different location or on the go.
The Call Park feature is also either. So you park the call in its very own automatically-numbered parking space. Then, while the caller listens to the hold music, you can intercom, instant message or tell your co-workers that there’s a call waiting. The next available employee dials the number of the parked call and promptly assists the caller.
To park a call:
Note: Call Parking is valid within a single 8x8 VoIP phone system (PBX), but it can be used between multiple extension users. | https://docs.8x8.com/8x8WebHelp/VirtualOfficeOnlineWebHelp/Content/VOOCallParking.htm | 2019-04-18T14:42:28 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.8x8.com |
OBS is unavailable when an error is reported stating "Time difference is longer than 15 minutes between the client and server."
For security purpose, OBS checks the time difference between OBS Console and the server. When the time difference is longer than 15 minutes, such an error is reported and you need to correctly set the local time. | https://docs.otc.t-systems.com/en-us/usermanual/obs/en-us_topic_0085237883.html | 2019-04-18T15:41:54 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.otc.t-systems.com |
SOAtest can test RESTful services from a WADL, or RAML definition, or from a URL.
In this section:
Testing RESTful Services When a WADL is Available:
- Ensure that the locally-hosted ParaBank application (introduced in Setting Up ParaBank) is running so that you can access the WADL.
- Right-click on any of the Test Suite: Test Suite Test Case Explorer nodes you created in a previous tutorial and choose Add New > Test Suite.
- Select REST> WADL, then click Next.
- In the WADL URL field, enter, check Create tests to validate and enforce policies on the WADL, then click Finish.
- Notice that 3 tests were added to check the WADL:
-.
- Notice that a new Test Suite has been created, with sub test suites and REST Client tools corresponding to the services provided by the WADL.
- Double-click the new REST Client tool that was created under Test Suite: accounts. All of the Path Parameters have been automatically propagated; however, to retrieve a valid result from ParaBank, we must provide a valid account ID.
- Provide a valid account ID by going to the Path table, then entering
12345under accountId.
Note that 12345 is now appended to the URL.
- Save and run the REST client.
- Double-click the Traffic Viewer attached to the REST Client and open the Response tab. You will see that the ParaBank server returned the account details for Account #12345.
Generating Tests from an OpenAPI/Swagger or RAML Definition
See Creating Tests from an OpenAPI/Swagger Definition and Creating Tests From a RAML Definition.
Testing RESTful Services with a URL
This example also serves to show how you can test RESTful services when you do not have access to a WADL describing the full application capability.
To test a RESTful service with a URL:
- Select one of your Test Suite: Test Suite Test Case Explorer nodes, then click the Add test suite toolbar button.
- In the Add Test Suite wizard, select Empty, then click Finish.
- Double-click the newly-created Test Suite, change the Name field to
REST Example, then save and close the Test Suite editor.
- Select the Test Suite: REST Example Test Case Explorer node, then click the Add test or output toolbar button.
- With Standard Test selected on the left, select REST Client on the right, then click Finish. A new REST Client tool will be added and its editor will be opened.
- Rename the tool
Loan Request (JSON Response).
- Note that Service Definition is set to None by default. Most RESTful services are purely GET-based services that are composed of a URL and a query; they usually respond with either an XML or JSON response payload.
- Provide the request to the RESTful service by entering the URL the URL field—then choosing POST from the Method dropbox.
- Notice that SOAtest has automatically populated the table with path template and query parameters. If you wanted to add additional parameters, you could do so here.
- In the HTTP Options tab, choose the HTTPHeaders option, click Add, then add a new header with name
Acceptand value
application/json. If this header is omitted, the service response will be in XML. Once added, the service response will be in JSON.
- Save the REST Client editor.
- Run the test and review the traffic in the Traffic Viewer. Note that you can switch from Literal to Tree view if you want to see a graphical representation of the JSON message.
Literal view:
Tree view: | https://docs.parasoft.com/pages/viewpage.action?pageId=36810990 | 2019-04-18T14:23:12 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.parasoft.com |
pwnlib.util.cyclic — Generation of unique sequences¶
pwnlib.util.cyclic.
cyclic(length = None, alphabet = None, n = None) → list/str[source]¶
A simple wrapper over
de_bruijn(). This function returns at most length elements.
If the given alphabet is a string, a string is returned from this function. Otherwise a list is returned.
Notes
The maximum length is len(alphabet)**n.
The default values for alphabet and n restrict the total space to ~446KB.
If you need to generate a longer cyclic pattern, provide a longer alphabet, or if possible a larger n.
Example
Cyclic patterns are usually generated by providing a specific length.
>>> cyclic(20) 'aaaabaaacaaadaaaeaaa'
>>> cyclic(32) 'aaaabaaacaaadaaaeaaafaaagaaahaaa'
The alphabet and n arguments will control the actual output of the pattern
>>> cyclic(20, alphabet=string.ascii_uppercase) 'AAAABAAACAAADAAAEAAA'
>>> cyclic(20, n=8) 'aaaaaaaabaaaaaaacaaa'
>>> cyclic(20, n=2) 'aabacadaeafagahaiaja'
The size of n and alphabet limit the maximum length that can be generated. Without providing length, the entire possible cyclic space is generated.
>>> cyclic(alphabet = "ABC", n = 3) 'AAABAACABBABCACBACCBBBCBCCC'
>>> cyclic(length=512, alphabet = "ABC", n = 3) Traceback (most recent call last): ... PwnlibException: Can't create a pattern length=512 with len(alphabet)==3 and n==3
The alphabet can be set in context, which is useful for circumstances when certain characters are not allowed. See
context.cyclic_alphabet.
>>> context.>> cyclic(10) 'AAAABAAACA'
The original values can always be restored with:
>>> context.clear()
The following just a test to make sure the length is correct.
>>> alphabet, n = range(30), 3 >>> len(alphabet)**n, len(cyclic(alphabet = alphabet, n = n)) (27000, 27000)
pwnlib.util.cyclic.
cyclic_find(subseq, alphabet = None, n = None) → int[source]¶
Calculates the position of a substring into a De Bruijn sequence.
Examples
Let’s generate an example cyclic pattern.
>>> cyclic(16) 'aaaabaaacaaadaaa'
Note that ‘baaa’ starts at offset 4. The cyclic_find routine shows us this:
>>> cyclic_find('baaa') 4
The default length of a subsequence generated by cyclic is 4. If a longer value is submitted, it is automatically truncated to four bytes.
>>> cyclic_find('baaacaaa') 4
If you provided e.g. n=8 to cyclic to generate larger subsequences, you must explicitly provide that argument.
>>> cyclic_find('baaacaaa', n=8) 3515208
We can generate a large cyclic pattern, and grab a subset of it to check a deeper offset.
>>> cyclic_find(cyclic(1000)[514:518]) 514
Instead of passing in the byte representation of the pattern, you can also pass in the integer value. Note that this is sensitive to the selected endianness via context.endian.
>>> cyclic_find(0x61616162) 4 >>> cyclic_find(0x61616162, endian='big') 1
You can use anything for the cyclic pattern, including non-printable characters.
>>> cyclic_find(0x00000000, alphabet=unhex('DEADBEEF00')) 621
pwnlib.util.cyclic.
cyclic_metasploit(length = None, sets = [ string.ascii_uppercase, string.ascii_lowercase, string.digits ]) → str[source]¶
A simple wrapper over
metasploit_pattern(). This function returns a string of length length.
Example
>>> cyclic_metasploit(32) 'Aa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab' >>> cyclic_metasploit(sets = ["AB","ab","12"]) 'Aa1Aa2Ab1Ab2Ba1Ba2Bb1Bb2' >>> cyclic_metasploit()[1337:1341] '5Bs6' >>> len(cyclic_metasploit()) 20280
pwnlib.util.cyclic.
cyclic_metasploit_find(subseq, sets = [ string.ascii_uppercase, string.ascii_lowercase, string.digits ]) → int[source]¶
Calculates the position of a substring into a Metasploit Pattern sequence.
Examples
>>> cyclic_metasploit_find(cyclic_metasploit(1000)[514:518]) 514 >>> cyclic_metasploit_find(0x61413161) 4
pwnlib.util.cyclic.
de_bruijn(alphabet = None, n = None) → generator[source]¶
Generator for a sequence of unique substrings of length n. This is implemented using a De Bruijn Sequence over the given alphabet.
The returned generator will yield up to
len(alphabet)**nelements.
pwnlib.util.cyclic.
metasploit_pattern(sets = [ string.ascii_uppercase, string.ascii_lowercase, string.digits ]) → generator[source]¶
Generator for a sequence of characters as per Metasploit Framework’s Rex::Text.pattern_create (aka pattern_create.rb).
The returned generator will yield up to
len(sets) * reduce(lambda x,y: x*y, map(len, sets))elements. | https://docs.pwntools.com/en/stable/util/cyclic.html | 2019-04-18T15:07:59 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.pwntools.com |
Thursday, June 09, 2016
This section of docs.zen-cart.com is for documenting some features specific to v1.5.5 code.
As content is added, you will see various sections on the left menu, for easy access to those topics.
More content will be added over time.
You can contribute to this documentation by forking the zencart/documentation repository on github and issuing Pull Requests. | https://docs.zen-cart.com/Developer_Documentation/1.5.5 | 2019-04-18T15:26:28 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.zen-cart.com |
{"_id":"56ca81146b58fb0b00c6d2fc","__v":2,"project":"56bc8e679afb8b0d00d62dcf","},"category":{"_id":"56c4180d70187b17005f43b4","__v":5,"project":"56bc8e679afb8b0d00d62dcf","version":"56bc8e689afb8b0d00d62dd2","pages":["56c418c854b6030d00ec29a2","56c418e670187b17005f43b7","56c418efc4796b0d007ef03a","56c418f94040602b0064cea1","56ca81146b58fb0b00c6d2fc"],"sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-02-17T06:49:49.187Z","from_sync":false,"order":1,"slug":"buddy-basics","title":"Buddy Basics"},"githubsync":"","parentDoc":null,"user":"56b98db7bb36440d0001f492","updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-02-22T03:31:32.930Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"### Permission Scopes\n\nBuddy supports two permissions scopes:\n\n* `app`: Any app user has the access in question\n* `user`: Only the owning user has access\n\n### Permissions on Objects\n\nMost Buddy objects support two types of standard permissions, available as parameters on the object's _create_ or _update_ operation:\n\n* `readPermissions`\n* `writePermissions`\n\n This allows your application to create objects that can be seen or modified either by any app user or just by the user that created the object. \n\n**Note:** In all cases _only the user that created an object can delete it_.\n\nObject owners can also grant read or write permissions to specific users, also known as sharing, see below for details.\n\n### [](#sharing)Sharing with Other Users\n\nBuddy also supports the ability to grant read or write permissions to a specific user via the `sharing` APIs.\n\nFor each Buddy object type, there is a `sharing` route that supports setting or clearing permissions for another user ID. This operation can only be called by the owner of the object.\n\nThe general format is:\n\n PUT /[object type]/[object ID]/sharing/[user ID]\n {\n permissions: '[permissions]'\n }\n\nWhere:\n\n* **object type**: The type of object such as `blobs`, `users`, `checkins`, etc.\n* **object ID**: The ID of the object to set permissions on\n* **user ID**: The ID of the user to grant permissions to\n* **permissions**: The permissions to set as `Read`, `Write`, `Read,Write`, or `None`\n\nNote that permissions `None` is equivalent to:\n\n DELETE /[object type]/[object ID]/sharing/[user ID]\n\n\nAs an example, if there is a picture with ID `pic1234`, with `User` permissions, only the owning user can view or modify it. But the owner would like to make the picture visible to another user, say a user with the ID `user5678`.\n\n PUT /pictures/pic1234/sharing/user5678\n { \n permissions: 'Read'\n }\n\nNow, user `user5678` will be able to successfully call:\n\n GET /pictures/pic1234\n\n#### Setting Bulk Permissions\n\nYou can also set permissions for more than one user at a time, using the following call:\n\n POST /pictures/pic1234/sharing\n {\n userIds: ['user5678', 'user8765'],\n permissions: 'Read,Write'\n }\n\n\n\n### Metadata Visibility\n\nFor Buddy's [Metadata](doc:metadata), the same `app` or `user` values are available via a parameter called `visibility`. \n\nThis works similarly to object permissions but is for both read and write. However, the difference is that `user` visibility means that the value is available per-user. In other words every user in your app can add the same metadata key and they will not conflict. \n\nFor example, each user could specify their own category on a picture, and the metadata system would make this possible. Conversely, if you wanted any user to be able to \"like\" a photo, such that the count of likes is viewable by all users, you could use an `app` visibility metadata value.","excerpt":"","slug":"access-permissions","type":"basic","title":"Access Permissions"} | http://docs.buddy.com/docs/access-permissions | 2019-04-18T14:40:51 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.buddy.com |
Dependencies¶
This driver depends on the Register and Bus Device libraries. Please ensure they are also available on the CircuitPython filesystem. This is easily achieved by downloading a library and driver bundle.
Usage Notes¶
Of course, you must import the library to use it:
import adafruit_bno055DA, SCL i2c = I2C(SCL, SDA)
Once you have the I2C object, you can create the sensor object:
sensor = adafruit_bno055.BNO055(i2c)
And then you can start reading the measurements:
print(sensor.temperature) print(sensor.euler) print(sensor.gravity)no055 - | https://circuitpython.readthedocs.io/projects/bno055/en/latest/ | 2019-04-18T15:17:26 | CC-MAIN-2019-18 | 1555578517682.16 | [] | circuitpython.readthedocs.io |
Checklist.
And as always, if you have any challenges with your checklist that you can't figure out, reach out to us at [email protected] and we'll help sort things out! | https://docs.appcues.com/article/358-checklist-diagnostics | 2019-04-18T15:01:52 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5ba2bf022c7d3a16370f4da0/file-pdEZyDJjYW.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5b6cb29c2c7d3a03f89d8947/file-ft19q0JsqS.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5b6cb47b2c7d3a03f89d8963/file-Gs3wyJl73p.png',
None], dtype=object) ] | docs.appcues.com |
CodeRed CMS 0.13.0 release notes¶
New features¶
New CMS installations will now cache 301/302 redirects and 404 pages. Existing CMS installations should switch to new wagtail-cache middleware to gain this behavior (see upgrade considerations).
Settings > General > From Email Address now allows specifying sender name in “Sender Name <[email protected]>” format.
Settings > Tracking > Track button clicks now tracks ALL anchor clicks, not just Button Blocks.
Bug fixes¶
Minor bug fixes to form page template.
Minor bug fixes to search page template.
Fixed bug that prevented previewing Event page types.
Fixed Structured Data error on Event page types.
Body previews now properly render HTML entities (apostrophes, non-breaking spaces, etc.).
Images in Rich Text Blocks are now properly positioned left/right/full-width on the front-end.
Fixed AMP rendering issues with images in Rich Text Blocks.
Upgrade considerations¶
Robots.txt settings were REMOVED from Settings > General. If you had custom custom robots.txt specified here, move your robots.txt content to
website/templates/robots.txtfile before upgrading.
Existing installations must add the new wagtail-cache middleware to
MIDDLEWAREin the Django settings file as described in wagtail-cache documentation after upgrading.
You will need to run
python manage.py makemigrations websiteand
python manage.py migrateafter upgrading. | https://docs.coderedcorp.com/cms/stable/releases/v0.13.0.html | 2019-04-18T14:24:17 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.coderedcorp.com |
Deployment Guide
This deployment guide can be used to install the Genesys WebRTC Service in your environment. It includes the following information:
- New In This Release—An overview of new features and improvements included with each release of the Genesys WebRTC Service.
- Product Overview—An overview of the Genesys WebRTC Service.
- Deploying WebRTC Service—Step-by-step guide to installing and deploying the Genesys WebRTC Service on your computer.
- Deployment Models—A description of how the Genesys WebRTC Service can be deployed in a production environment, taking into account a variety of typical deployment types.
- Log Events Reference—Descriptions of the log events generated by the Genesys WebRTC Gateway.
- Configuration Options Reference—Descriptions of the configuration options available for Genesys WebRTC applications.
- Hardware Sizing Information—Useful hardware sizing and performance information.
Next Steps
After you have successfully installed the Genesys WebRTC Service, you might want to read the Genesys WebRTC Service Developer's Guide to find general guidelines and programming advice when working with your Genesys WebRTC Service deployment.
This page was last modified on March 24, 2017, at 06:04.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/WRTC/latest/Deployment/Welcome | 2019-04-18T14:17:21 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.genesys.com |
Geek of All Trades: The Instant Cloud - Your Office 365 Proof of Concept
Office 365 is exceptionally well-suited for small businesses looking to leap into cloud computing, with a full range of productivity and collaboration features.
Greg Shields
The subject of cloud computing seems to have no middle ground. Nearly everyone I speak with falls into either the “hate it and don’t trust it” crowd or the group who “loves it and uses it all the time.” I side with the latter group. The dichotomy reminds me of an old friend’s saying about sushi: “You either love it, or you haven’t really tried it yet.”
I can almost predict a person’s response to cloud computing based on the size of their company. Those in larger companies distrust cloud computing. Small businesses and entrepreneurs are much more eager to embrace it for the cost savings.
The big-company response makes sense in many ways. Larger firms are subject to additional regulation, security control and issues around corporate culture. To IT professionals supporting large enterprises, “in the cloud” also means “I have limited control.” This translates to “I don’t trust it,” which leads to “it isn’t for me.”
The extent of this distrust is at complete odds with needs of the little guys. To the small shop, “Is it working?” is usually more important than, “Is it within my control?”
That’s one reason why I’m particularly excited about the new cloud-based Microsoft Office 365 platform. There are editions designed for companies both large and small. Office 365 gathers the Microsoft office productivity and collaboration tools within an Internet-accessible interface. With Office 365, you get Microsoft Office, Exchange, SharePoint and Lync bundled in a strikingly seamless package.
What Is Office 365?
When I first saw a demo of Office 365—which is currently in limited beta—I asked to be guided through the features targeted to small business owners (as they’re this column’s primary audience). Considering the raw complexity involved in properly architecting, installing and maintaining these products, Office 365 represents a much-needed lifeline for the Jack-of-all-trades IT professional.
What does this lifeline look like? Imagine a team in your office needs to communicate about an upcoming project. There are a few e-mails sent back and forth, and the group decides they need to set up a quick meeting. The presence capabilities built into Office 365 indicate who is and who is not available. Clicking on names in Outlook lets the team send instant messages to each other and confirm availability. Once everyone is ready, the team can immediately convert the conversation thread into an online meeting (see Figure 1).
Figure 1 Creating an online meeting directly within Outlook using Office 365
During the meeting, the team can simultaneously co-edit Office documents from SharePoint while communicating through video chat. They’re not just screen-sharing—each team member is updating the document at the same time (see Figure 2). They’re making changes to the document in real time, with each person watching as the document evolves.
Figure 2 Collaborating on a document while video chatting using Office 365
It gets better. Small businesses are always working together on collaborative projects and always need to share data. Office 365 lets you grant outside users upload and download rights to a SharePoint site. You can also grant permissions to edit rich client documents. You’ll appreciate this feature if your business has dealt with the pain of transferring data via FTP or through not-always-trustworthy file-sharing Web sites.
It’s just as painful—if not more so—for a small business to lose data as it is for a large enterprise. More than one small company has shuttered its doors when “the laptop that contained our important data” went missing. Seeing the pain of fellow entrepreneurs makes me glad for the seamless and automatic backup and restore capabilities of Office 365.
The Office 365 data-protection story is multilayered. Deleted data is sent to a SharePoint recycle bin, instead of being completely wiped out. A secondary administrative recycle bin extends the deletion cycle. Outlook users enjoy the same protections with their Deleted Items folder. You can even bring deleted mailboxes back to life up to 30 days after deletion.
Search and legal hold features are also part of the package. While not currently available in the private beta, you’ll eventually be able to search across mailboxes when necessary.
Using a simple control panel (see Figure 3), the “accidental IT pro” in any small business can easily accomplish basic administration tasks without the usual complexity. Creating users, mailboxes and distribution lists—as well as managing Exchange, SharePoint and Lync—are all exposed in the interface. You can even create your own Web site using a WYSIWYG interface linked directly into SharePoint.
Figure 3 Managing the administrative services in Office 365
While the simplicity is nice, it’s what isn’t there that’s fundamentally cool. None of these capabilities are technically new. Exchange has long been a fully featured collaboration platform, SharePoint has always been a document repository, and Lync (formerly Office Communications Server) has been the place where real-time communication occurs. The value proposition here isn’t with the feature set alone—it’s in the fact that all these pieces are already integrated for you. If you’ve ever tried to connect these three massive packages yourself, you know that doing it correctly takes time and experience. Doing it with 99.9 percent guaranteed uptime usually requires expensive outside help.
Decide for Yourself
After the demonstration, I decided to start my own Proof of Concept for my company, Concentrated Technology. We’re a small analyst firm with a handful of users spread throughout the country. With little in the way of central IT infrastructure, a service like this seemed perfect for our needs. I signed up for the Office 365 beta.
Getting started requires a bit of demographic data as well as creating a company URL. While Web sites in the private beta are limited to sub-domains on Microsoft.com, I’m told there will be domain-transfer capabilities added prior to it being released to manufacturing.
Building the initial set of services takes a few minutes. As you wait, Microsoft provides a quick start guide that explains the basics of administering the interface. You can migrate existing data from an Exchange server all at once or in batches of mailboxes. It also supports migrating mail from IMAP servers. However, IMAP migrations will not automatically migrate contacts and calendar entries. You can also migrate SharePoint data, but automating the process requires third-party tools.
The next step is installing Microsoft Office, the Microsoft Online Services Connector and Lync 2010 to local desktops. You know what Office does. The connector extends the reach of Microsoft Office from the local desktop to the Office 365 cloud services. Lync completes the installation, adding collaboration features such as instant messaging, audio, video and online meetings. Once everything is ready, ActiveSync for Android, iPhone, BlackBerry and Windows Phone integrates mobile devices into the workspace.
Limited, for Now
Office 365 is currently in limited beta, which means Microsoft will limit the number of people using the service as the company works out the kinks. Microsoft should be releasing a public beta “in the months to come.”
While it’s perfect for smaller businesses, bigger companies aren’t left out in the cold. Microsoft is building its Enterprise Edition of Office 365 for organizations larger than 25 people. That will include expanded features that are more in line with the traditional Exchange, SharePoint and Lync experience.
Enterprises that already have these pieces integrated and fully functioning with acceptable uptime may see little reason to jump to a cloud service. For the rest of us, using a cloud service like Office 365 quickly bestows small businesses with the collaboration tools enjoyed by the largest enterprises. And, most importantly, while you might not control everything, it works.
Greg Shields*, MVP, is a partner at Concentrated Technology. Get more of Shields’ Jack-of-all-trades tips and tricks at ConcentratedTech.com.*
SIDEBAR:? Shields’ “Top 20 IT Tips” will appear in an upcoming TechNet Magazine issue. He’ll recognize the top 20 smartest IT JOATs in the industry alongside their game-changing tip or trick. Submit yours today! Get your name published, extol your virtues and remind everyone why you’re the ones that get the real work done. Send your tips to [email protected]. Every submitted tip will get a response. | https://docs.microsoft.com/en-us/previous-versions/technet-magazine/gg675927%28v%3Dmsdn.10%29 | 2019-04-18T15:04:24 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['images/gg675927.fig_1_creating_an_online_meeting_directly_within_outlook_using_office_365%28en-us%2cmsdn.10%29.jpg',
None], dtype=object)
array(['images/gg675927.fig_2_collaborating_on_a_document_while_video_chatting_using_office_365%28en-us%2cmsdn.10%29.jpg',
None], dtype=object)
array(['images/gg675927.fig_3_managing_the_administrative_services_in_office_365%28en-us%2cmsdn.10%29.jpg',
None], dtype=object)
array(['images/ff404193.greg_shields%28en-us%2cmsdn.10%29.jpg',
'Greg Shields Greg Shields'], dtype=object) ] | docs.microsoft.com |
This article covers the various performance metrics displayed by the Performance Analysis Dashboard and Performance Analysis Overview, and how to interpret different metric values and combinations of values across different metrics for APS Sentry.
Note: For Mode: S = Sample and H = History.
Note: We are actively adding information to the performance metrics tables to complete data for undefined metrics. | https://docs.sentryone.com/help/aps-sentry-performance-metrics | 2019-04-18T15:06:21 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.sentryone.com |
View port mirroring session details, including status, sources, and destinations.
Procedure
- In the vSphere Web Client, navigate to the distributed switch.
- On the Configure tab, expand Settings and click Port mirroring.
- Select a port mirroring session from the list to display more detailed information at the bottom of the screen. Use the tabs to review configuration details.
- (Optional) Click New to add a new port mirroring session.
- (Optional) Click Edit to edit the details for the selected port mirroring session.
- (Optional) Click Remove to delete the selected port mirroring session. | https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-DDF75CE0-FD2E-4272-941D-61A3D8D3A4BF.html | 2019-04-18T15:06:03 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.vmware.com |
Middlewares¶
Middlewares works in between the master channel and slave channels, they look through messages and statuses delivered between channels, passing them on, make changes or discarding them, one after another.
Like channels, middlewares will also each have an instance
per EFB session, managed by the coordinator. However, they
don’t have centrally polling threads, which means if a
middleware wants to have a polling thread or something
similar running in the background, it has to stop the thread
using Python’s
atexit or otherwise.
Message and Status Processing¶
Each middleware by default has 2 methods,
process_message()
which processes message objects, and
process_status()
which processes status objects. If they are not overridden,
they will not touch on the object and pass it on as is.
To modify an object, just override the relative method and
make changes to it. To discard an object, simply return
None.
When an object is discarded, it will not be passed further
to other middlewares or channels, which means a middleware
or a channel should never receive a
None message or
status.
Other Usages¶
Having rather few limitation compare to channels, middlewares are rather easy to write, which allows it to do more than just intercept messages and statuses.
Some ideas:
- Periodic broadcast to master / slave channels
- Integration with chat bots
- Automated operations on vendor-specific commands / additional features
- Share user session from slave channel with other programs
- etc… | https://ehforwarderbot.readthedocs.io/en/latest/guide/middleware.html | 2019-04-18T15:34:41 | CC-MAIN-2019-18 | 1555578517682.16 | [] | ehforwarderbot.readthedocs.io |
Selection quantification¶
BASELINe quantifies selection pressure by calculating the posterior probability density function (PDF) based on observed mutations compared to expected mutation rates derived from an underlying SHM targeting model. Selection is quantified via the following steps:
- Calculate the selection scores for individual sequences.
- Group by relevant fields for comparison and convolve individual selection PDFs.
- Plot and compare selection scores of different groups of sequences.
Example data¶
A small example Change-O database is included in the
alakazam package.
The example dataset consists of a subset of Ig sequencing data from an
influenza vaccination study (Laserson and Vigneault et al., PNAS, 2014). The
data include sequences from multiple time-points before and after the subject
received an influenza vaccination. Quantifying selection requires the following
fields (columns) to be present in the Change-O database:
SEQUENCE_ID
SEQUENCE_IMGT
GERMLINE_IMGT_D_MASK
# Load example data library(shazam) data(ExampleDb, package="alakazam")
Calculate selection PDFs for individual sequences¶
Selection scores are calculated with the
calcBaseline function. This can
be performed with a single call to
calcBaseline, which performs all
required steps. Alternatively, one can perform each step separately for
greater control over the analysis parameters.
Constructing clonal consensus sequences¶
Individual sequences within clonal groups are not, strictly speaking,
independent events and it is generally appropriate to only analyze selection
pressures on an effective sequence for each clonal group. The
collapseClones
function provides one strategy for generating an effective sequences for
each clone. It reduces the input database to one row per clone and appends
CLONAL_SEQUENCE and
CLONAL_GERMLINE columns which contain the
consensus sequences for each clone.
# Collapse clonal groups into single sequences clones <- collapseClones(ExampleDb, regionDefinition=IMGT_V, method="thresholdedFreq", minimumFrequency=0.6, includeAmbiguous=FALSE, breakTiesStochastic=FALSE, nproc=1)
Calculating selection in multiple steps¶
Following construction of an effective sequence for each clone, the observed
and expected mutation counts are calculated for each sequence in the
CLONAL_SEQUENCE column relative to the
CLONAL_GERMLINE.
observedMutations
is used to calculate the number of observed mutations and
expectedMutations calculates the expected frequency of mutations.
The underlying targeting model for calculating expectations can be specified
using the
targetingModel parameter. In the example below, the default
HH_S5F is used. Column names for sequence and germline sequence may
also be passed in as parameters if they differ from the Change-O defaults.
Mutations are counted by these functions separately for complementarity
determining (CDR) and framework (FWR) regions. The
regionDefinition
argument defines whether these regions are handled separately, and where
the boundaries lie. There are two built-in region definitions
in the
shazam package, both dependent upon the V segment
being IMGT-gapped:
IMGT_V: All regions in the V segment, excluding CDR3, grouped as either CDR or FWR.
IMGT_V_BY_REGIONS: The CDR1, CDR2, CDR3, FWR1, FWR and FWR3 regions in the V segment (no CDR3) treated as individual regions.
Users may define other region sets and boundaries by creating a custom
RegionDefinition object.
# Count observed mutations and append MU_COUNT columns to the output observed <- observedMutations(clones, sequenceColumn="CLONAL_SEQUENCE", germlineColumn="CLONAL_GERMLINE", regionDefinition=IMGT_V, nproc=1) # Count expected mutations and append MU_EXPECTED columns to the output expected <- expectedMutations(observed, sequenceColumn="CLONAL_SEQUENCE", germlineColumn="CLONAL_GERMLINE", targetingModel=HH_S5F, regionDefinition=IMGT_V, nproc=1)
The counts of observed and expected mutations can be combined to test for selection
using
calcBaseline. The statistical framework used to test for selection based
on mutation counts can be specified using the
testStatistic parameter.
# Calculate selection scores using the output from expectedMutations baseline <- calcBaseline(expected, testStatistic="focused", regionDefinition=IMGT_V, nproc=1)
Calculating selection in one step¶
It is not required for
observedMutation and
expectedMutations to be run prior to
calcBaseline. If the output of these two steps does not appear in the input
data.frame, then
calcBaseline will call the appropriate functions prior to
calculating selection scores.
# Calculate selection scores from scratch baseline <- calcBaseline(clones, testStatistic="focused", regionDefinition=IMGT_V, nproc=1)
Using alternative mutation definitions and models¶
The default behavior of
observedMutations and
expectedMutations, and
by extension
calcBaseline, is to define a replacement mutation in the usual
way - any change in the amino acid of a codon is considered a replacement
mutation. However, these functions have a
mutationDefinition argument which
allows these definitions to be changed by providing a
MutationDefinition
object that contains alternative replacement and silent criteria.
shazam
provides the following built-in MutationDefinitions:
CHARGE_MUTATIONS: Amino acid mutations are defined by changes in side chain charge class.
HYDROPATHY_MUTATIONS: Amino acid mutations are defined by changes in side chain hydrophobicitity class.
POLARITY_MUTATIONS: Amino acid mutations are defined by changes in side chain polarity class.
VOLUME_MUTATIONS: Amino acid mutations are defined by changes in side chain volume class.
The default behavior of
expectedMutations is to use the human 5-mer mutation model,
HH_S5F. Alternative SHM targeting models can be provided using the
targetingModel argument.
# Calculate selection on charge class with the mouse 5-mer model baseline <- calcBaseline(clones, testStatistic="focused", regionDefinition=IMGT_V, targetingModel=MK_RS5NF, mutationDefinition=CHARGE_MUTATIONS, nproc=1)
Group and convolve individual selection distributions¶
To compare the selection scores of groups of sequences, the sequences must
be convolved into a single PDF representing each group. In the example dataset,
the
SAMPLE field corresponds to samples taken at different time points
before and after an influenza vaccination and the
ISOTYPE field specifies
the isotype of the sequence. The
groupBaseline function convolves the BASELINe
PDFs of individual sequences/clones to get a combined PDF. The field(s) by
which to group the sequences are specified with the
groupBy parameter.
The
groupBaseline function automatically calls
summarizeBaseline to
generate summary statistics based on the requested groupings, and populates
the
stats slot of the input
Baseline object with the number of sequences
with observed mutations for each region, mean selection scores, 95% confidence
intervals, and p-values with positive signs indicating the presence of positive
selection and/or p-values with negative signs indicating the presence of negative
selection. The magnitudes of the p-values (without the signs) should be
interpreted as analagous to a t-test.
Grouping by a single annotation¶
The following example generates a single selection PDF for each unique
annotation in the
SAMPLE column.
# Combine selection scores by time-point grouped_1 <- groupBaseline(baseline, groupBy="SAMPLE")
Subsetting and grouping by multiple annotations¶
Grouping by multiple annotations follows the sample proceedure as a
single annotation by simply adding columns to the
groupBy argument.
Subsetting the data can be performed before or
after generating selection PDFs via
calcBaseline. However, note
that subsetting may impact the clonal representative sequences
generated by
collapseClones. In the following example subsetting
precedes the collapsing of clonal groups.
# Subset the original data to switched isotypes db_sub <- subset(ExampleDb, ISOTYPE %in% c("IgM", "IgG")) # Collapse clonal groups into single sequence clones_sub <- collapseClones(db_sub, regionDefinition=IMGT_V, method="thresholdedFreq", minimumFrequency=0.6, includeAmbiguous=FALSE, breakTiesStochastic=FALSE, nproc=1) # Calculate selection scores from scratch baseline_sub <- calcBaseline(clones_sub, testStatistic="focused", regionDefinition=IMGT_V, nproc=1) # Combine selection scores by time-point and isotype grouped_2 <- groupBaseline(baseline_sub, groupBy=c("SAMPLE", "ISOTYPE"))
Convolving variables at multiple levels¶
To make selection comparisons using two levels of variables, you
would need two iterations of groupings, where the first iteration of
groupBaseline groups on both variables, and the second iteration groups
on the “outer” variable. For example, if a data set has both case and control
subjects, annotated in
STATUS and
SUBJECT columns, then
generating convolved PDFs for each status would be performed as:
# First group by subject and status subject_grouped <- groupBaseline(baseline, groupBy=c("STATUS", "SUBJECT")) # Then group the output by status status_grouped <- groupBaseline(subject_grouped, groupBy="STATUS")
Testing the difference in selection PDFs between groups¶
The
testBaseline function will perform signifance testing between two
grouped BASELINe PDFs, by region, and return a data.frame with the
following information:
REGION: The sequence region, such as “CDR” and “FWR”.
TEST: The name of the two groups compared.
PVALUE: Two-sided p-value for the comparison.
FDR: FDR corrected p-value.
testBaseline(grouped_1, groupBy="SAMPLE")
## REGION TEST PVALUE FDR ## 1 CDR -1h != +7d 0.05019208 0.08610636 ## 2 FWR -1h != +7d 0.08610636 0.08610636
Plot and compare selection scores for groups¶
plotBaselineSummary plots the mean and confidence interval of selection scores
for the given groups. The
idColumn argument specifies the field that contains
identifiers of the groups of sequences. If there is a secondary field by which
the sequences are grouped, this can be specified using the
groupColumn. This
secondary grouping can have a user-defined color palette passed into
groupColors or can be separated into facets by setting the
facetBy="group".
The
subsetRegions argument can be used to visualize selection of specific
regions. Several examples utilizing these different parameters are provided
below.
# Set sample and isotype colors sample_colors <- c("-1h"="seagreen", "+7d"="steelblue") isotype_colors <- c("IgM"="darkorchid", "IgD"="firebrick", "IgG"="seagreen", "IgA"="steelblue") # Plot mean and confidence interval by time-point plotBaselineSummary(grouped_1, "SAMPLE")
# Plot selection scores by time-point and isotype for only CDR plotBaselineSummary(grouped_2, "SAMPLE", "ISOTYPE", groupColors=isotype_colors, subsetRegions="CDR")
# Group by CDR/FWR and facet by isotype plotBaselineSummary(grouped_2, "SAMPLE", "ISOTYPE", facetBy="group")
plotBaselineDensity plots the full
Baseline PDF of selection scores for the
given groups. The parameters are the same as those for
plotBaselineSummary.
However, rather than plotting the mean and confidence interval, the full density
function is shown.
# Plot selection PDFs for a subset of the data plotBaselineDensity(grouped_2, "ISOTYPE", groupColumn="SAMPLE", colorElement="group", colorValues=sample_colors, sigmaLimits=c(-1, 1))
Editing a field in a Baseline object¶
If, for any reason, one needs to edit the existing values in a field in a
Baseline object, one can do so via
editBaseline. In the following example,
we remove results related to IgA in the relevant fields from
grouped_2.
When the input data is large and it takes a long time for
calcBaseline to run,
editBaseline could become useful when, for instance, one would like to exclude
a certain sample or isotype, but would rather not re-run
calcBaseline after
removing that sample or isotype from the original input data.
# Get indices of rows corresponding to IgA in the field "db" # These are the same indices also in the matrices in the fileds "numbOfSeqs", # "binomK", "binomN", "binomP", and "pdfs" # In this example, there is one row of IgA for each sample dbIgMIndex <- which(grouped_2@db$ISOTYPE == "IgA") grouped_2 <- editBaseline(grouped_2, "db", grouped_2@db[-dbIgMIndex, ]) grouped_2 <- editBaseline(grouped_2, "numbOfSeqs", grouped_2@numbOfSeqs[-dbIgMIndex, ]) grouped_2 <- editBaseline(grouped_2, "binomK", grouped_2@binomK[-dbIgMIndex, ]) grouped_2 <- editBaseline(grouped_2, "binomN", grouped_2@binomN[-dbIgMIndex, ]) grouped_2 <- editBaseline(grouped_2, "binomP", grouped_2@binomP[-dbIgMIndex, ]) grouped_2 <- editBaseline(grouped_2, "pdfs", lapply(grouped_2@pdfs, function(pdfs) {pdfs[-dbIgMIndex, ]} )) # The indices corresponding to IgA are slightly different in the field "stats" # In this example, there is one row of IgA for each sample and for each region grouped_2 <- editBaseline(grouped_2, "stats", grouped_2@stats[grouped_2@stats$ISOTYPE!="IgA", ]) | https://shazam.readthedocs.io/en/version-0.1.11_a/vignettes/Baseline-Vignette/ | 2019-04-18T15:21:30 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['../figure/Baseline-Vignette-11-1.png',
'plot of chunk Baseline-Vignette-11'], dtype=object)
array(['../figure/Baseline-Vignette-11-2.png',
'plot of chunk Baseline-Vignette-11'], dtype=object)
array(['../figure/Baseline-Vignette-11-3.png',
'plot of chunk Baseline-Vignette-11'], dtype=object)
array(['../figure/Baseline-Vignette-12-1.png',
'plot of chunk Baseline-Vignette-12'], dtype=object)] | shazam.readthedocs.io |
Grouping Customer Profiles
This page helps you to learn how you can group customer profiles.
Overview
In the following example, the requirement is to enable an "account" or "family" view of multiple customer profiles. If a new object is created, the profile entity and attribute Group is set to true. If a multi-valued profile extension points to a group member profile, id-keys can be used. It then becomes possible to either attach services to the group, or to the member depending on the scenario.
Examples
The following examples are similar with the difference being the way the profiles are created. Relationship between profiles and group are either "direct link" from entity account owner to other profiles, or "indirect link" where each profile belongs to a family profile.
Account
There are many Profiles. Some are Admin for one or more other Profiles. For example, a telephony provider has a single account (billing account) for several persons. One of the persons is an admin for the account and has rights to change options for each cellular.
Example data:
- Profile with Id="XXXXXXXXXXX-JOHN":
{ CustomerId="XXXXXXXXXXX-JOHN", LastName:"Doe", FirstName:"John", Cellular:"555-123456", EmailAddress:"[email protected]", GroupAdmin: [ {AdminForProfile:"XXXXXXXXXXXX-JANE"}, {AdminForProfile:"XXXXXXXXXXXX-PETER"} ], ProviderOptions: { AccountInfo:"account number", MemberLevel:"High" } }
- Profile with Id="XXXXXXXXXXX-JANE":
{ CustomerId="XXXXXXXXXXX-JANE", LastName:"Doe", FirstName:"Jane", Cellular:"555-987654", EmailAddress:"[email protected]", ProviderOptions: { AccountInfo:"account number", MemberLevel:"Untouchable" } }
- Profile with Id="XXXXXXXXXX-PETER":
{ CustomerId="XXXXXXXXXXX-PETER", LastName:"Doe", FirstName:"Peter", Cellular:"555-654321", EmailAddress:"[email protected]", ProviderOptions: { AccountInfo:"account number", MemberLevel:"Untouchable" } }
- Two extensions for the example:
- Single-valued extension ProviderOptions with any type of attributes used for the example to customize Cellular options.
- Multi-valued extension GroupAdmin with attribute AdminForProfile
- Identification keys :
- on attribute EmailAddress from Core Profile
- on attribute Cellular from Core Profile
- on attribute LastName+FirstName from Core Profile
- on attribute AdminForProfile of extension GroupAdmin
- Example scenario:
- John calls to update his cellular account.
- He is identified by his Cellular="555-123456".
- The system knows that he his and admin for Jane and Peter because of the GroupAdmin extension.
- He is asked "Do you want to change settings for your account #1, for Jane's account #2 or for Peter's account #3?".
- If he enters #2 or #3, the system picks the correct Profile by the Id and can retrieve specific cellular options.
- ...
- Peter calls to change some options.
- He is identified by Cellular="555-654321".
- The system knows he is not Admin.
- Depending on the requests, the system may "identify" who is admin for the cellular by the id-key on GroupAdmin.AdminForProfile="XXXXXXXXXX-PETER".
- The system might fail the request stating "need admin rights".
Family
Each member of a family has its own profile. They belong to the Family Group (same house hold). This result in slightly different from the previous example because the Family itself is identified as a profile.
Note: The Account example can also be implemented this way.
Example data:
- Profile with Id="XXXXXXXXXXX-JOHN":
{ CustomerId="XXXXXXXXXXX-JOHN", LastName:"Doe", FirstName:"John", Cellular:"555-123456", EmailAddress:"[email protected]", GroupFamily: { ProfileFamily:"XXXXXXXXXXXX-DOE" } }
- Profile with Id="XXXXXXXXXXX-JANE":
{ CustomerId="XXXXXXXXXXX-JANE", LastName:"Doe", FirstName:"Jane", Cellular:"555-987654", EmailAddress:"[email protected]", GroupFamily: { ProfileFamily:"XXXXXXXXXXXX-DOE" } }
- Family = Profile 'XXXXXXXXXXXX-DOE':
{ CustomerId="XXXXXXXXXXXX-DOE", LastName:"Doe", PhoneNumber:"555-1592648", EmailAddress:"[email protected]", PostalAddress: { Address:"5, This Road", ZipCode:65536 } }
- Extensions:
- Single-valued extension PostalAddress with attributes like 'Zip Code', 'State', etc. This extension may have values only for the Family since all profiles are to live at the same place.
- Single-valued extension GroupFamily with attribute ProfileFamily pointing to the main family profile.
- Identification keys:
- on attribute EmailAddress from Core Profile.
- on attribute Cellular from Core Profile.
- on attribute PhoneNumber from Core Profile.
- on attributes LastName+FirstName from Core Profile.
- on attribute ProfileFamily from extension GroupFamily.
- Example scenario:
- Assuming John sends the request from e-mail or cellular:
- He is identified by id-key Profile.Cellular="555-123456" or Profile.EmailAddress="[email protected]".
- Then his family information is gathered from querying Profile with Id GroupFamily.ProfileFamily="XXXXXXXXXXXX-DOE".
- Assuming John calls from Home:
- His Family information is matched by id-key Profile.PhoneNumber="555-1592648".
- Members of the family can be identified by id-key on GroupFamily.ProfileFamily="XXXXXXXXXXXX-DOE".
- The IVR might question "Who are you? Jane or John?".
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/CS/latest/Developer/GroupingCustomerProfiles | 2018-10-15T11:19:00 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.genesys.com |
Australian Government Guide to Regulation
If you have trouble accessing this document, please contact the Department to request a copy in a format you can use.
View this document as… approach stakeholders can look forward to a future with substantially less red tape and Australia’s economy continuing to grow and prosper.
This item is in the Department of the Prime Minister and Cabinet’s online resources library, as it is shared with the Office of Best Practice Regulation (OBPR).
Last modified on Wednesday 7 March 2018 [44021|107051] | https://docs.jobs.gov.au/documents/australian-government-guide-regulation | 2018-10-15T10:57:37 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.jobs.gov.au |
The Capability System
Capabilities allow exposing features in a dynamic and flexible way, without having to resort to directly implementing many interfaces.
In general terms, each capability provides a feature in the form of an interface, alongside with a default implementation which can be requested, and a storage handler for at least this default implementation. The storage handler can support other implementations, but this is up to the capability implementor, so look it up in their documentation before trying to use the default storage with non-default implementations.
Forge adds capability support to TileEntities, Entities, ItemStacks, Worlds and Chunks, which can be exposed either by attaching them through an event or by overriding the capability methods in your own implementations of the objects. This will be explained in more detail in the following sections.
Forge-provided Capabilities
Forge provides three capabilities: IItemHandler, IFluidHandler and IEnergyStorage
IItemHandler exposes an interface for handling inventory slots. It can be applied to TileEntities (chests, machines, etc.), Entities (extra player slots, mob/creature inventories/bags), or ItemStacks (portable backpacks and such). It replaces the old
IInventory and
ISidedInventory with an automation-friendly system.
IFluidHandler exposes an interface for handling fluid inventories. It can also be applied to TileEntities Entities, or ItemStacks. It replaces the old
IFluidHandler with a more consistent and automation-friendly system.
IEnergyStorage exposes an interface for handling energy containers. It can be applied to TileEntities, Entities or ItemStacks. It is based on the RedstoneFlux API by TeamCoFH.
Using an Existing Capability
As mentioned earlier, TileEntities, Entities, and ItemStacks implement the capability provider feature, through the
ICapabilityProvider interface. This interface adds two methods,
hasCapability and
getCapability, which can be used to query the capabilities present in the objects.
In order to obtain a capability, you will need to refer it by its unique instance. In the case of the Item Handler, this capability is primarily stored in
CapabilityItemHandler.ITEM_HANDLER_CAPABILITY, but it is possible to get other instance references by using the
@CapabilityInject annotation.
@CapabilityInject(IItemHandler.class) static Capability<IItemHandler> ITEM_HANDLER_CAPABILITY = null;
This annotation can be applied to fields and methods. When applied to a field, it will assign the instance of the capability (the same one gets assigned to all fields) upon registration of the capability, and left to the existing value (
null), if the capability was never registered. Because local static field accesses are fast, it is a good idea to keep your own local copy of the reference for objects that work with capabilities. This annotation can also be used on a method, in order to get notified when a capability is registered, so that certain features can be enabled conditionally.
Both the
hasCapability and
getCapability methods have a second parameter, of type EnumFacing, which can be used in the to request the specific instance for that one face. If passed
null, it can be assumed that the request comes either from within the block, or from some place where the side has no meaning, such as a different dimension. In this case a general capability instance that does not care about sides will be requested instead. The return type of
getCapability will correspond to the type declared in the capability passed to the method. For the Item Handler capability, this is indeed
IItemHandler.
Exposing a Capability
In order to expose a capability, you will first need an instance of the underlying capability type. Note that you should assign a separate instance to each object that keeps the capability, since the capability will most probably be tied to the containing object.
There’s two ways to obtain such an instance, through the Capability itself, or by explicitly instantiating an implementation of it. The first method is designed to use a default implementation, if those default values are useful for you. In the case of the Item Handler capability, the default implementation will expose a single slot inventory, which is most probably not what you want.
The second method can be used to provide custom implementations. In the case of
IItemHandler, the default implementation uses the
ItemStackHandler class, which has an optional argument in the constructor, to specify a number of slots. However, relying on the existence of these default implementations should be avoided, as the purpose of the capability system is to prevent loading errors in contexts where the capability is not present, so instantiation should be protected behind a check testing if the capability has been registered (see the remarks about
@CapabilityInject in the previous section).
Once you have your own instance of the capability interface, you will want to notify users of the capability system that you expose this capability. This is done by overriding the
hasCapability method, and comparing the instance with the capability you are exposing. If your machine has different slots based on which side is being queried, you can test this with the
facing parameter. For Entities and ItemStacks, this parameter can be ignored, but it is still possible to have side as a context, such as different armor slots on a player (top side => head slot?), or about the surrounding blocks in the inventory (west => slot on the left?). Don’t forget to fall back to
super, otherwise the attached capabilities will stop working.
@Override public boolean hasCapability(Capability<?> capability, EnumFacing facing) { if (capability == CapabilityItemHandler.ITEM_HANDLER_CAPABILITY) { return true; } return super.hasCapability(capability, facing); }
Similarly, you will want to provide the interface reference to your capability instance, when requested. Again, don’t forget to fall back to
super.
@Override public <T> T getCapability(Capability<T> capability, EnumFacing facing) { if (capability == CapabilityItemHandler.ITEM_HANDLER_CAPABILITY) { return (T) inventory; } return super.getCapability(capability, facing); }
It is strongly suggested that direct checks in code are used to test for capabilities instead of attempting to rely on maps or other data structures, since capability tests can be done by many objects every tick, and they need to be as fast as possible in order to avoid slowing down the game.
Attaching Capabilities
As mentioned, attaching capabilities to entities and itemstacks can be done using
AttachCapabilitiesEvent. The same event is used for all objects that can provide capabilities.
AttachCapabilitiesEvent has 5 valid generic types providing the following events:
AttachCapabilitiesEvent<Entity>: Fires only for entities.
AttachCapabilitiesEvent<TileEntity>: Fires only for tile entities.
AttachCapabilitiesEvent<ItemStack>: Fires only for item stacks.
AttachCapabilitiesEvent<World>: Fires only for worlds.
AttachCapabilitiesEvent<Chunk>: Fires only for chunks.
The generic type cannot be more specific than the above types. For example: If you want to attach capabilities to
EntityPlayer, you have to subscribe to the
AttachCapabilitiesEvent<Entity>, and then determine that the provided object is an
EntityPlayer before attaching the capability.
In all cases, the event has a method
addCapability, which can be used to attach capabilities to the target object. Instead of adding capabilities themselves to the list, you add capability providers, which have the chance to return capabilities only from certain sides. While the provider only needs to implement
ICapabilityProvider, if the capability needs to store data persistently it is possible to implement
ICapabilitySerializable<T extends NBTBase> which, on top of returning the capabilities, will allow providing NBT save/load functions.
For information on how to implement
ICapabilityProvider, refer to the Exposing a Capability section.
Creating Your Own Capability
In general terms, a capability is declared and registered through a single method call to
CapabilityManager.INSTANCE.register(). One possibility is to define a static
register() method inside a dedicated class for the capability, but this is not required by the capability system. For the purpose of this documentation we will be describing each part as a separate named class, although anonymous classes are an option.
CapabilityManager.INSTANCE.register(capability interface class, storage, default implementation factory);
The first parameter to this method, is the type that describes the capability feature. In our example, this will be
IExampleCapability.class.
The second parameter is an instance of a class that implements
Capability.IStorage<T>, where T is the same class we specified in the first parameter. This storage class will help manage saving and loading for the default implementation, and it can, optionally, also support other implementations.
private static class Storage implements Capability.IStorage<IExampleCapability> { @Override public NBTBase writeNBT(Capability<IExampleCapability> capability, IExampleCapability instance, EnumFacing side) { // return an NBT tag } @Override public void readNBT(Capability<IExampleCapability> capability, IExampleCapability instance, EnumFacing side, NBTBase nbt) { // load from the NBT tag } }
The last parameter is a callable factory that will return new instances of the default implementation.
private static class Factory implements Callable<IExampleCapability> { @Override public IExampleCapability call() throws Exception { return new Implementation(); } }
Finally, we will need the default implementation itself, to be able to instantiate it in the factory. Designing this class is up to you, but it should at least provide a basic skeleton that people can use to test the capability, if it’s not a fully usable implementation on itself.
Persisting Chunk and TileEntity capabilities
Unlike Worlds, Entities and ItemStacks, Chunks and TileEntities are only written to disk when they have been marked as dirty. A capability implementation with persistent state for a Chunk or a TileEntity should therefore ensure that whenever its state changes, its owner is marked as dirty.
ItemStackHandler, commonly used for inventories in TileEntities, has an overridable method
void onContentsChanged(int slot) designed to be used to mark the TileEntity as dirty.
public class MyTileEntity extends TileEntity { private final IItemHandler inventory = new ItemStackHandler(...) { @Override protected void onContentsChanged(int slot) { super.onContentsChanged(slot); markDirty(); } } ... }
Synchronizing Data with Clients
By default, Capability data is not sent to clients. In order to change this, the mods have to manage their own synchronization code using packets.
There are three different situation in which you may want to send synchronization packets, all of them optional:
- When the entity spawns in the world, or the block is placed, you may want to share the initialization-assigned values with the clients.
- When the stored data changes, you may want to notify some or all of the watching clients.
- When a new client starts viewing the entity or block, you may want to notify it of the existing data.
Refer to the Networking page for more information on implementing network packets.
Persisting across Player Deaths
By default, the capability data does not persist on death. In order to change this, the data has to be manually copied when the player entity is cloned during the respawn process.
This can be done by handling the
PlayerEvent.Clone event, reading the data from the original entity, and assigning it to the new entity. In this event, the
wasDead field can be used to distinguish between respawning after death, and returning from the End. This is important because the data will already exist when returning from the End, so care has to be taken to not duplicate values in this case.
Migrating from IExtendedEntityProperties
Although the Capability system can do everything IEEPs (IExtendedEntityProperties) did and more, the two concepts don’t fully match 1:1. In this section, I will explain how to convert existing IEEPs into Capabilities.
This is a quick list of IEEP concepts and their Capability equivalent:
- Property name/id (
String): Capability key (
ResourceLocation)
EntityConstructing): Attaching (
AttachCapabilitiesEvent<Entity>), the real registration of the Capability happens during pre-init.
- NBT read/write methods: Does not happen automatically. Attach an
ICapabilitySerializablein the event, and run the read/write methods from the
serializeNBT/
deserializeNBT.
Features you probably will not need (if the IEEP was for internal use only):
- The Capability system provides a default implementation concept, meant to simplify usage by third party consumers, but it doesn’t really make much sense for an internal Capability designed to replace an IEEP. You can safely return
nullfrom the factory if the capability is be for internal use only.
- The Capability system provides an
IStoragesystem that can be used to read/write data from those default implementations, if you choose not to provide default implementations, then the IStorage system will never get called, and can be left blank.
The following steps assume you have read the rest of the document, and you understand the concepts of the capability system.
Quick conversion guide:
- Convert the IEEP key/id string into a
ResourceLocation(which will use your MODID as a domain).
- In your handler class (not the class that implements your capability interface), create a field that will hold the Capability instance.
- Change the
EntityConstructingevent to
AttachCapabilitiesEvent, and instead of querying the IEEP, you will want to attach an
ICapabilityProvider(probably
ICapabilitySerializable, which allows saving/loading from NBT).
- Create a registration method if you don’t have one (you may have one where you registered your IEEP’s event handlers) and in it, run the capability registration function. | https://mcforge.readthedocs.io/en/latest/datastorage/capabilities/ | 2018-10-15T11:35:14 | CC-MAIN-2018-43 | 1539583509170.2 | [] | mcforge.readthedocs.io |
Using Notifications for Amazon SES Email Receiving
When you receive an email, Amazon SES executes the rules in the active receipt rule set. You can configure receipt rules to send you notifications using Amazon SNS. Your receipt rules can send two different types of notifications:
Notifications sent from SNS actions – When you add an SNS action to a receipt rule, it sends information about the email. If the message is 150KB or smaller, this notification type also includes the complete MIME body of the email.
Notifications sent from other action types – When you add any other action type (including Bounce, Lambda, Stop Rule Set, or WorkMail actions) to a receipt rule, you can optionally specify an Amazon SNS topic. If you do, you will receive notifications when these actions are performed. These notifications contain information about the email, but do not contain the content of the email.
This section describes the contents of these notifications, and provides an example of each type of notification. | https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-notifications.html | 2018-10-15T10:44:16 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.aws.amazon.com |
Historical Data Volumes View
Use the Volumes view to display interaction volumes within imported or collected historical data. This view provides standard date-selection controls and a grid that shows statistics for days or timesteps. See the toolbar image here and the button descriptions below.
The following sections cover:
- Displaying the Volumes view.
- Setting the data display properties and date range.
- Reading the data.
- Editing weekly totals.
- Save and Calculation options.
- Find events.
Displaying the Volumes View
To display the Volumes view:
- From the Home menu on the toolbar, select Forecast.
- From the Forecast menu on the toolbar, select Historical Data.
- From the Historical Data menu on the toolbar, select Volumes.
- In the Objects tree, select an activity, Site, Business Unit, or Enterprise.
The view displays a graph above a table, each containing the same statistics, and controls that set the data display properties for the graph and table. Master.
Save and Calculation Options
You can use the following buttons on the toolbar (these commands also appear in the Actions menu):
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/PSAAS/latest/WMCloud/HDVlmsVw | 2018-10-15T11:23:41 | CC-MAIN-2018-43 | 1539583509170.2 | [array(['/images/d/d8/WM_851_toolbar-master-forecast-historical-data-volumes.png',
'WM 851 toolbar-master-forecast-historical-data-volumes.png'],
dtype=object) ] | docs.genesys.com |
zipfile — Work with ZIP archives¶
Source:
-file(' the string data to the archive;.directories..
New in version 3.6.
Changed in version 3.6.2: The filename parameter accepts a path-like object. string.. | https://docs.python.org/3/library/zipfile.html | 2018-10-15T11:12:58 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.python.org |
8 Comment Box
This snip implements the comment boxes that you see in DrRacket.
Overrides make-editor in editor-snip:decorated%.Makes an instance ofOverrides make-snip in editor-snip:decorated%.Returns an instance of the comment-snip% class.Overrides get-corner-bitmap in editor-snip:decorated-mixin.Returns the semicolon bitmap from the file
(build-path (collection-path "icons") "semicolon.gif")Overrides get-position in editor-snip:decorated-mixin.Returns 'left-topReturns the same string as the super method, but with newlines replaced by newline-semicolon-space.Overrides get-menu in editor-snip:decorated-mixin.Returns a menu with a single item to change the box into semicolon comments.
The snip-class% object used by comment-box:snip%. | https://docs.racket-lang.org/framework/Comment_Box.html | 2018-10-15T11:19:01 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.racket-lang.org |
Contents IT Service Management Previous Topic Next Topic Create a problem template ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Create a problem template Templates simplify the process of submitting new records by populating fields automatically. Using templates saves time by reducing how long it takes to fill out the form. Before you beginRole required: admin About this taskAfter a template is defined, it can be used from the form, a record producer, or a module. Procedure Navigate to System Definition > Templates and click New. Complete the form as described in Create a template using the Template form. Use the following information as an example to create a problem template to fix performance issues. Name: Performance Issues Table: Problem Global: Selected (true). Selecting this option makes the template available to all users. Short Description: Performance Issues Template: Select each field and enter the following values. Description: Significant performance issues have affected the configuration item. Impact: 2 - Medium Urgency 1 - High Contact Type: Self-service Click Submit. Use a template from a moduleThis example demonstrates how to place the Performance Issues template in a module in the Self-Service application to enable end users to directly file the problem using the template.Create a record producer with a templateIf a predefined template for a problem exists, it can be used with the record producer to fill in standard information. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/problem-management/task/t_CreateAProblemTemplate.html | 2018-10-15T11:04:52 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.servicenow.com |
Contents Now Platform Capabilities Previous Topic Next Topic Create a panel demo ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Create a panel demo How to create the first two panels and add variables. In the Wizard Panels related list, click New. Select the panel Type of Prompts user to answer questions. Enter the Name, then right-click the header and select Save. In the Variables related list, click Edit.... Using the slushbucket, select and arrange the variables as listed in the table. Table 1. Panel variables table Name Add variables 1 Contract Screen Associate assets? Which type of contract? Start Date End Date Enter short description for contract 2 Asset Screen Asset Listing Related TasksCreate the wizardDefine a variableCreate the third panel and add a field setterCreate) On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/wizards/task/t_CreateAPanelDemo.html | 2018-10-15T11:05:17 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.servicenow.com |
Near to Far Field Spectra
We demonstrate Meep's near-to-far-field transformation feature using two examples. which is described in Section 4.2.1 in Chapter 4 ("Electromagnetic Wave Source Conditions") of the book Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology..
Radiation Pattern of an Antenna
In this example, we use the near-to-far-field transformation feature to compute the radiation pattern of an antenna. This involves an electric-current point source as the emitter in.
We use the
get_farfield routine to compute the far fields by looping over a set of points along the circumference of the circle. The simulation script is in antenna-radiation.py.
import meep as mp import math resolution = 50 sxy = 4 dpml = 1 cell = mp.Vector3(sxy+2*dpml,sxy+2*dpml,0) pml_layers = mp.PML(dpml) fcen = 1.0 df = 0.4 src_cmpt = mp.Ez sources = mp.Source(src=mp.GaussianSource(fcen,fwidth=df), center=mp.Vector3(), component=src_cmpt) if src_cmpt == mp.Ex: symmetries = [mp.Mirror(mp.Y)] elif src_cmpt == mp.Ey: symmetries = [mp.Mirror(mp.X)] elif src_cmpt == mp.Ez: symmetries = [mp.Mirror(mp.X), mp.Mirror(mp.Y)] sim = mp.Simulation(cell_size=cell, resolution=resolution, sources=[sources], symmetries=symmetries, boundary_layers=[pml_layers]) nearfield = sim.add_near2far(fcen, 0, 1, mp.Near2FarRegion(mp.Vector3(0, 0.5*sxy), size=mp.Vector3(sxy)), mp.Near2FarRegion(mp.Vector3(0, -0.5*sxy), size=mp.Vector3(sxy), weight=-1.0), mp.Near2FarRegion(mp.Vector3( 0.5*sxy), size=mp.Vector3(0,sxy)), mp.Near2FarRegion(mp.Vector3(-0.5*sxy), size=mp.Vector3(0,sxy), weight=-1.0)) sim.run(until_after_sources=mp.stop_when_fields_decayed(50, src_cmpt, mp.Vector3(), 1e-8)) r = 1000*(1/fcen) # 1000 wavelengths out from the source npts = 100 # number of points in [0,2*pi) range of angles for n in range(npts): ff = sim.get_farfield(nearfield, mp.Vector3(r*math.cos(2*math.pi*(n/npts)), r*math.sin(2*math.pi*(n/npts)))) print("farfield: {}, {}, ".format(n,2*math.pi*n/npts), end='') print(", ".join([str(f).strip('()').replace('j', 'i') for f in ff]))
We compute the far fields at a single frequency corresponding to a wavelength of 1 μm (
fcen) for three different polarizations of the point source by running three separate times and setting the
src_cmpt parameter to E, E, and E. The output consists of eight columns containing the far-field points' index (integer), angle (radians), followed by the six field components (E, E, E, H, H, H). Note that the far fields computed analytically using
near2far are always complex even though the near fields are real as in this example. From the far fields at each point , we can compute the in-plane flux: , where P and P and J sources produce dipole radiation patterns while J has a monopole pattern. These plots were generated using this Jupyter notebook and output file.
Far-Field Intensity of a Cavity
For this demonstration, we will compute the far-field spectra of a resonant cavity mode in a holey waveguide; a structure we had explored in Tutorial/Resonant Modes and Transmission in a Waveguide Cavity. See cavity-farfield.py. The structure is shown at the bottom of the left image below.
To do this, we simply remove the last portion of holey-wvg-cavity.py, beginning right after the line:
sim.symmetries.append(mp.Mirror(mp.Y, phase=-1)) sim.symmetries.append(mp.Mirror(mp.X, phase=-1))
and insert the following lines:
d1 = 0.2 sim = mp.Simulation(cell_size=cell, geometry=geometry, sources=[sources], symmetries=symmetries, boundary_layers=[pml_layers], resolution=resolution) nearfield = sim.add_near2far( fcen, 0, 1, mp.Near2FarRegion(mp.Vector3(0, 0.5 * w + d1), size=mp.Vector3(2 * dpml - sx)), mp.Near2FarRegion(mp.Vector3(-0.5 * sx + dpml, 0.5 * w + 0.5 * d1), size=mp.Vector3(0, d1), weight=-1.0), mp.Near2FarRegion(mp.Vector3(0.5 * sx - dpml, 0.5 * w + 0.5 * d1), size=mp.Vector3(0, d1)) ) as the computational cell is surrounded by PMLs, and output the far-field spectra over a rectangular area that lies outside of the computational cell:
sim.run(until_after_sources=mp.stop_when_fields_decayed(50, mp.Hz, mp.Vector3(0.12, -0.37), 1e-8)) d2 = 20 h = 4 sim.output_farfields(nearfield, "spectra-{}-{}-{}".format(d1, d2, h), mp.Volume(mp.Vector3(0, (0.5 * w) + d2 + (0.5 * h)), size=mp.Vector3. This file will includes the far-field spectra for all six field components, including real and imaginary parts.
We run the above modified control file and in post-processing create an image of the real and imaginary parts of H over the far-field region which is shown in insets (a) above. For comparison, we compute the steady-state fields using a larger computational cell that contains within it the far-field region. This involves a continuous source and complex fields. Results are shown in figure (b) above.. This indicates that discretization effects are irrelevant.. | https://meep.readthedocs.io/en/latest/Python_Tutorials/Near_to_Far_Field_Spectra/ | 2018-10-15T11:40:39 | CC-MAIN-2018-43 | 1539583509170.2 | [array(['../../images/Near2far_simulation_geometry.png', None],
dtype=object)
array(['../../images/Source_radiation_pattern.png', None], dtype=object)
array(['../../images/N2ff_comp_cell.png',
'center|Schematic of the computational cell for a holey waveguide with cavity showing the location of the "near" boundary surface and the far-field region.'],
dtype=object) ] | meep.readthedocs.io |
Versioning
By default, the
HEAD operation retrieves metadata from the current version of an
object. If the current version is a delete marker, Amazon S3 behaves as if the
object
was deleted. To retrieve metadata from a different version, use the
versionId subresource.
For more information, see Versions in the Amazon Simple Storage Service Developer Guide.
To see sample requests that use versioning, see Sample Request Getting Metadata from a Specified Version of an Object. | https://docs.aws.amazon.com/AmazonS3/latest/API/headVersions.html | 2018-10-15T10:47:31 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.aws.amazon.com |
Application.Nz Method
You can use the Nz function to return zero, a zero-length string (" "), or another specified value when a Variant is Null. For example, you can use this function to convert a Null value to another value and prevent it from propagating through an expression.
Syntax
expression.Nz(Value, ValueIfNull)
expression A variable that represents an Application object.
Parameters
Return Value
Variant
Remarks.
In the next example, the Nz function provides the same functionality as the first expression, and the desired result is achieved in one step rather than two..
In the next example, the optional argument supplied to the Nz function provides the string to be returned if
varFreight is Null.
Example
The following example evaluates a control on a form and returns one of two strings based on the control's value. If the value of the control is Null, the procedure uses the Nz function to convert a Null value to a zero-length string. | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/bb148937(v=office.12) | 2018-10-15T10:26:48 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.microsoft.com |
Cassowary¶
A pure Python implementation of the Cassowary constraint-solving algorithm. Cassowary is the algorithm that forms the core of the OS X and iOS visual layout mechanism.
Quickstart¶
Cassowary is compatible with both Python 2 or Python 3. To install Cassowary in your virtualenv, run:
$ pip install cassowary
Then, in your Python code, you can create and solve constraint systems. See the documentation for examples of what this looks like in practice.
Documentation¶
- Solving constraint systems
- Examples
- Reference
- Contributing to Cassowary
- Cassowary Roadmap
- Release History
Community¶
Cassowary¶
If you experience problems with Cassowary, log them on GitHub. If you want to contribute code, please fork the code and submit a pull request. | http://cassowary.readthedocs.io/en/latest/ | 2016-08-31T14:21:04 | CC-MAIN-2016-36 | 1471982290634.12 | [] | cassowary.readthedocs.io |
JBoss.orgCommunity Documentation
Abstract
The User manual is an in depth manual on all aspects of HornetQ, very high performance, clustered, asynchronous messaging system.
HornetQ is an example of Message Oriented Middleware (MoM) For a description of MoMs and other messaging concepts please see the Chapter 4, Messaging Concepts.
For answers to more questions about what HornetQ is and what 6+ runtime, that's everything from Windows desktops to IBM mainframes.
Amazing, clean-cut design with minimal third party dependencies. Run HornetQ stand-alone, run it in integrated in your favourite JEE application server, or run it embedded inside your own product. It's up to you. available from
Red Hat kindly employs developers to work full time on HornetQ, they are:
Clebert Suconic (project lead)
Andy Taylor
Howard Gao has a RESTful interface. You can find documentation for it outside of this manual. See the HornetQ distribution or website for more information.
Stomp is a very simple text protocol for interoperating with messaging systems. It defines a wire format, so theoretically any Stomp client can work with any messaging system that supports Stomp. Stomp clients are available in many different programming languages.
Please see Section 46.1, “Stomp” for using STOMP with HornetQ. server, HornetQ and Application Server Cluster Configuration.. In fact, communicating with a JMS messaging system directly, without using JCA would be illegal according to the JEE specification. Queue, Topic Queue, Topic 48
stop.bat
To run on Unix/Linux type
./stop.sh
To run on Windows type
stop.bat
Please note that HornetQ requires a Java 6 or later runtime to.
hornetq-configuration.xml. This is the main HornetQ
configuration file. All the parameters in this file are described in Chapter 48, Configuration Reference. Please see Section 6.9, “The main configuration file.” for more information on this file. basic.core.remoting.impl.spi.core.security follow.
The JMS API doc provides several connection factories for applications. HornetQ JMS users
can choose to configure the types for their connection factories. Each connection factory
has a
signature attribute and a
xa parameter, the
combination of which determines the type of the factory. Attribute
signature
has three possible string values, i.e. generic,
queue and topic;
xa is a boolean
type parameter. The following table gives their configuration values for different
connection factory interfaces.
As an example, the following configures an XAQueueConnectionFactory:
<configuration xmlns="urn:hornetq" xmlns: <connection-factory <xa>true</xa> <connectors> <connector-ref </connectors> <entries> <entry name="ConnectionFactory"/> </entries> </connection-factory> </configuration>.'s a very common JMS usage pattern to lookup JMS Administered Objects (that's JMS Queue, Topic and ConnectionFactory instances)Factory via the HornetQJMSClient Utility class, note we need to provide connection parameters and specify which transport we are using, for more information on connectors please see Chapter 16, Configuring the Transport.
TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName()); ConnectionFactory cf = HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF,transportConfiguration);
We also create the JMS Queue object via the HornetQJMSClient Utility class:
Queue orderQueue = HornetQJMSClient.create..62, it is very highly optimised for the specific messaging use cases. extremely good performance and runs on any platform where there's a Java 6+ runtime.
The second implementation uses a thin native code wrapper to talk to the Linux asynchronous IO library (AIO). With AIO, HornetQ). a
different thread to that which the message arrived on. This gives the best
overall throughput and scalability, especially on multi-core machines.
However it also introduces some extra latency due to the extra context
switch required. If you want the lowest latency and the possible expense of
some reduction in throughput then you can make sure
direct-deliver to true. The default value for this parameter
is
true. If you are willing to take some small extra hit
on latency but want the highest throughput set this parameter.
key-store-password. This is the password for the client
certificate key store on the client..39, .39, .14, .43, -durable-send and
block-on-non-durable durable.api.core.client.SendAcknowledgementHandler and set a handler
instance on your
ClientSession.
Then, you just send messages as normal using your
ClientSession, and as messages reach the server, the server will send
back an acknowledgement.52, .16, .21, larger.();.
If you specify the boolean property
compress-large-message on
the
server locator or
ConnectionFactory The
system will use the ZIP algorithm to compress the message body as the message is
transfered to the server's side. Notice that there's no special treatment at the
server's side, all the compressing and uncompressing is done at the client.); ... }
Please see Section 11.1.29, ..41, .api.HDR_SCHEDULED_DELIVERY_TIME).
The specified value must be a positive.50, .30, ..34, “Message Group” for an example which shows how message groups are configured and used with JMS.
See Section 11.1.35,
However there is another case which is not supported by JMS:.42, , HornetQ and Application Server Cluster Configuration)
Chapter 38, HornetQ and Application Server Cluster Configuration) Chapter 38, HornetQ and Application Server Cluster Configuration).31, ");.32, .33, <=5445<>> <entry> <key>jnp.timeout</key> <value>5000</value> </entry> <entry> <key>jnp.sotimeout</key> <value>5000<..27, .
Setting this parameter to
-1 disables any buffering and prevents
any re-attachment from occurring, forcing reconnect instead. The default value for this
parameter is
-1. (Which means by default no auto re-attachment will occur).17, way> <ha>true</ha> "/> <user>foouser</user> <password>foopassword</password> < address.
ha. This optional parameter determines whether or not this
bridge should support high availability. True means it will connect to any available
server in a cluster and support failover. The default value is
false.
max-size-bytesto.api.core, HornetQ and Application Server Cluster Configuration. provides only shared store in this release. Replication will be available in the next release.
Only persistent message data will survive failover. Any non persistent message data will not be available after failover.
When using a shared store, both live and backup servers share the same entire data directory using a shared file system. This means the paging directory, journal directory, large messages and binding journal.
When failover occurs and, HornetQ and Application Server Cluster Configuration, HornetQ and Application Server Cluster Configuration. Alternatively, the clients can explicitly connect to a specific server and download the current servers and backups see Chapter 38, HornetQ and Application Server Cluster Configuration. doesn't learn about the full topology until after the first
connection is made there is a window where it doesn.65, “Transaction Failover” and Section 11.1.40, “Non-Transaction Failover With Server Data Replication”.
HornetQ does not replicate full server state between.
It is possible to provide full state machine replication using.
The caveat to this rule is when XA is used either via JMS or through the core API.
If 2 phase commit is used and prepare has already ben.65, . durable.3, ,);.Logger.setDelegateFactory(new Log4jLogDelegateFactory())
Where
Log4jLogDelegateFactory is the implementation of
org.hornetq.sp. 48.19, “Embedded” for an example which shows how to setup and run HornetQ embedded with JMS.
You may also choose to use a dependency injection framework such as JBoss Micro Container™ or Spring Framework™. See Chapter 44, 43..api.core.interceptor;.25, “Interceptor” for an example which shows how to use interceptors to add properties to a message on the server.
Stomp is a text-orientated wire protocol that allows Stomp clients to communicate with Stomp Brokers..
Please note that the STOMP protocol does not contain any heartbeat frame. It is therefore the user's responsibility to make sure data is sent within connection-ttl or the server will assume the client is dead and clean up server side resources.. custom durable messages. By default JMS messages are durable. If you don't really... | http://docs.jboss.org/hornetq/2.2.2.Final/user-manual/en/html_single/ | 2016-08-31T15:30:19 | CC-MAIN-2016-36 | 1471982290634.12 | [] | docs.jboss.org |
java.lang.Object
org.modeshape.graph.connector.base.Repository<NodeType,WorkspaceType>org.modeshape.graph.connector.base.Repository<NodeType,WorkspaceType>
NodeType- the node type
WorkspaceType- the workspace type
@ThreadSafe public abstract class Repository<NodeType extends Node,WorkspaceType extends Workspace>
A representation of a repository as a set of named workspaces. Workspaces can be
created or
destroyed, though the exact type of
Workspace is dictated by the
Transaction. All workspaces contain a root
node with the same
UUID.
Note that this class is thread-safe, since a
BaseRepositorySource will contain a single instance of a concrete subclass
of this class. Often, the Workspace objects are used to hold onto the workspace-related content, but access to the content is
always done through a
transaction.
protected final BaseRepositorySource source
protected final UUID rootNodeUuid
protected final ExecutionContext context
protected Repository(BaseRepositorySource source)
MapRepositorywith the given repository source name, root node UUID, and a default workspace with the given name.
source- the repository source to which this repository belongs
IllegalArgumentException- if the repository source is null, if the source's
BaseRepositorySource.getRepositoryContext()is null, or if the source name is null or empty
protected void initialize()
Due to the ordering restrictions on constructor chaining, this method cannot be called until the repository is fully
initialized. This method MUST be called at the end of the constructor by any class that implements
MapRepository
.
public ExecutionContext getContext()
protected String getDefaultWorkspaceName()()
public Set<String> getWorkspaceNames()
public WorkspaceType getWorkspace(Transaction<NodeType,WorkspaceType> txn, String name)
txn- the transaction attempting to get the workspace, and which may be used to create the workspace object if needed; may not be null
name- the name of the workspace to return
public WorkspaceType createWorkspace(Transaction<NodeType,WorkspaceType> txn, String name, CreateWorkspaceRequest.CreateConflictBehavior existingWorkspaceBehavior, String nameOfWorkspaceToClone) throws InvalidWorkspaceException
This method will first check to see if a workspace already exists with the given name. If no such workspace exists, the
method will create a new workspace with the given name, add it to the
#workspaces workspaces map, and return it. If
a workspace with the requested name already exists and the
behavior is
CreateWorkspaceRequest.CreateConflictBehavior.DO_NOT_CREATE
, this method will return
null without modifying the state of the repository. If a workspace with the requested
name already exists and the
behavior is
CreateWorkspaceRequest.CreateConflictBehavior.CREATE_WITH_ADJUSTED_NAME, this method will
generate a unique new name for the workspace, create a new workspace with the given name, added it to the
#workspaces workspaces map, and return it.
If
nameOfWorkspaceToClone is given, this method will clone the content in this original workspace into the new
workspace. However, if no workspace with the name
nameOfWorkspaceToClone exists, the method will create an empty
workspace.
txn- the transaction creating the workspace; may not be null
name- the requested name of the workspace. The name of the workspace that is returned from this method may not be the same as the requested name; may not be null
existingWorkspaceBehavior- the behavior to use in case a workspace with the requested name already exists in the repository
nameOfWorkspaceToClone- the name of the workspace from which the content should be cloned; be null if the new workspace is to be empty (other than the root node)
nameOfWorkspaceToCloneor
nullif a workspace with the requested name already exists in the repository and
behavior == CreateConflictBehavior#DO_NOT_CREATE.
InvalidWorkspaceException- if the workspace could not be created
public boolean destroyWorkspace(String name)
#workspaces workspaces map.
name- the name of the workspace to remove
trueif a workspace with that name previously existed in the map
public RequestProcessor createRequestProcessor(Transaction<NodeType,WorkspaceType> txn)
txn- the transaction; may not be null
public abstract Transaction<NodeType,WorkspaceType> startTransaction(ExecutionContext context, boolean readonly)
committedor
rolled back.
context- the context in which the transaction is to be performed; may not be null
readonly- true if the transaction will not modify any content, or false if changes are to be made
Transaction.commit(),
Transaction.rollback() | http://docs.jboss.org/modeshape/2.8.0.Final/api/org/modeshape/graph/connector/base/Repository.html | 2016-08-31T14:22:47 | CC-MAIN-2016-36 | 1471982290634.12 | [] | docs.jboss.org |
numpy.ndarray.setflags¶
- ndarray.setflags(write=None, align=None, uic=None)¶
Set array flags WRITEABLE, ALIGNED, and UPDATEIFCOPY, respectively.
These Boolean-valued flags affect how numpy interprets the memory area used by a (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The UPDATEIFCOPY flag.)
Notes
Array flags provide information about how the memory area used for the array is to be interpreted. There are 6 Boolean flags in use, only three of which can be changed by the user: UPDATEIFCOPY, WRITEABLE, and ALIGNED.
WRITEABLE (W) the data area can be written to;
ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler);
UPDATEIFCOPY (U) this array is a copy of some other array (referenced by .base). When this array is deallocated, the base array will be updated with the contents of this array.
All flags can be accessed using their first (upper case) letter as well as the full name.
Examples
>>> y array([[3, 1, 7], [2, 0, 0], [8, 5, 9]]) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False >>> y.setflags(write=0, align=0) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : False ALIGNED : False UPDATEIFCOPY : False >>> y.setflags(uic=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: cannot set UPDATEIFCOPY flag to True | http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.setflags.html | 2016-08-31T14:25:28 | CC-MAIN-2016-36 | 1471982290634.12 | [] | docs.scipy.org |
CIDER
CIDER is the Clojure(Script) Interactive Development Environment that Rocks!
CIDER extends Emacs with support for interactive programming in Clojure. The
features are centered around cider-mode, an Emacs minor-mode that complements
clojure-mode. While
clojure-mode supports editing Clojure source files,
cider-mode adds support for interacting with a running Clojure process for
compilation, debugging, definition and documentation lookup, running tests and
so on.
Please consider supporting financially its ongoing development.
Overview
CIDER aims to provide an interactive development experience similar to the one you’d get when programming in Emacs Lisp, Common Lisp (with SLIME or Sly), Scheme (with Geiser) and Smalltalk.
Programmers are expected to program in a very dynamic and incremental manner, constantly re-evaluating existing Clojure definitions and adding new ones to their running applications. You never stop/start a Clojure application while using CIDER - you’re constantly interacting with it and changing it.
You can find more details about the typical CIDER workflow in the Interactive Programming section. While we’re a bit short on video tutorials, you can check out this introduction to CIDER to get a feel about what do we mean by an "Interactive Development Environment"..
Features
CIDER packs plenty of features. Here are some of them (in no particular order):
Interactive code evaluation
Powerful REPL
Code completion
Code reloading
Definition & documentation lookup
Enhanced error reporting
clojure.testintegration
clojure.specintegration
Interactive debugger
ClojureScript support
And many many more… The rest of this manual will be exploring CIDER’s features in great detail.
CIDER in Action
Below you can see a typical CIDER session.
Here the user checked the documentation for
clojure.core/merge straight from the source buffer
and then jumped to a REPL buffer to try out something there.
What’s Next?
So, what to do next? While you can peruse the documentation in whatever way you’d like, here are a few recommendations:
Install CIDER and get it up and running
Get familiar with interactive programming and cider-mode
Configure CIDER to your liking
Explore the additional packages that can make you more productive | https://docs.cider.mx/cider/1.1/index.html | 2021-05-06T08:52:13 | CC-MAIN-2021-21 | 1620243988753.91 | [array(['_images/cider_architecture.png', 'cider architecture'],
dtype=object)
array(['_images/cider-overview.png', 'CIDER Screenshot'], dtype=object)] | docs.cider.mx |
Babashka
Babashka is highly compatible with Clojure, so it works with CIDER out of the box. All you need to do is start its bundled nREPL server:
$ bb --nrepl-server
And connect to it afterwards using C-c C-x c j (
cider-connect-clj).
Babashka’s nREPL server supports all core nREPL operations, plus code completion, so you’ll get all of CIDER’s basic functionality with it.
Currently you can’t use
cider-jack-in with Babashka, but this may change down the road.
Differences with Clojure
There are a few differences between Babashka and Clojure that you should keep in mind:
Built-in vars (e.g.
clojure.core/map) don’t have definition location metadata. In practice this means you can’t navigate to their definitions in CIDER.
The
javadoc(
clojure.java.javadoc/javadoc) REPL utility function is not currently available in Babashka. | https://docs.cider.mx/cider/1.1/platforms/babashka.html | 2021-05-06T08:54:14 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.cider.mx |
TS-8100-4200
Contents
- 1 Overview
- 2 Getting Started
- 3 Debian Configuration
- 4 Backup / Restore
- 5 Software Development
- 6 Features
- 7 External Interfaces
- 8 Further Resources
- 9 Revisions and Changes
-
Other options include:
2.2 Booting up the board
The TS-8100-4200 accepts 5-28VDC input connected to the two terminal blocks.
A typical power supply for just the TS-8100-4200 Mar 9 2011 09:59:23 >> Copyright (c) 2011, Technologic Systems >> Booting from SD100 | bzip2 > backup.dd.bz Kernel Compile: #; }
The TS-4200 has 4 IRQs that can be used by external devices. See page 30 of the CPU manual for a complete listing of all of the available IRQs.
The TS-8100 has a baseboard ID of 7 which can be used to detect the baseboard in code. Implementations should refer to "ts4700ctl --baseboard" which detect the baseboard ID.
6.8 CAN
The TS-4200 does not support CAN.
6.9 USB
6.
30000000); for(;;) { // This feeds the watchdog for 16s. syscon[0x10/2] = 2; sleep(8); } return 0; }
6.15 values
## Fast value peekpoke 16 0x30000020 0x181 ## Slow value (for older PC104 devices) # peekpoke 0x30000020 0xf0ff
6.16 TS-8100 Header
The TS-8100 baseboard includes 2 standard 5-pin USB headers. These can be connected to USB adapters such as the CB-USB-AF5P which allow for simple mounting in custom enclosures.> //); /*init();.8..9 SATA Connector
This controller does not support SATA.
7.10 Second Ethernet
The TS-8100 supports an optional second Ethernet port through a USB SMSC chip onboard. In linux this creates an eth1 interface. For more information see the #Network Configuration section.
7.11 Revisions and Changes
9.1 TS-8100 PCB Revisions
10 Product Notes
10.1 Errata
Description:
On the REV A only board the white dot indicating pin 1 was on the wrong side. This was corrected in REV B, but on both boards the 5V pin is near the ethernet connector. Refer to the #USB Header section for further details.. | https://docs.embeddedarm.com/index.php?title=TS-8100-4200&printable=yes | 2021-05-06T09:10:21 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.embeddedarm.com |
OpenBlu Introduction
Introduction to OpenBluOpenBlu is a public repository of OpenVPN servers that are publicly accessible and updated regularly. You can use the official Web Application found at or via the API.
All the VPN servers found in OpenBlu are hosted by volunteers world-wide thus making OpenBlu a decentralized solution because the servers are not hosted by Intellivoid. These servers do not require you to authenticate to connect to any of them.
How do I connect to these servers?You can connect to these servers using a standard OpenVPN client, by obtaining a .ovpn file from OpenBlu you can connect to the server by providing the .ovpn file to the OpenVPN Client
Can these servers see what i do?Yes and no. It is possible that the admins of these servers can see what hosts you connect to but if your connection is encrypted (such as using standard SSL encryption) they cannot see what you send and receive.
Yes it is possible that the admins keep logs but at the end of the day the only information you are sending to these servers is what IP address you are using to connect to these VPN servers. While most known VPN providers claims that they do not keep logs, often most of them requires you to
- Create an account while providing your personal details such as your Email
- Authenticate with your personal account to connect to these servers
- Download and install proprietary software or "open source" software with blobs to connect to these servers
OpenBlu being decentralized means that most of the servers found in OpenBlu is self-hosted in residential locations making it a more effective solution to stay anonymous.
Some servers won't work, why's that?One of the downsides of decentralization is the inconsistency of server availability and network infrastructures. OpenBlu does not guarantee that all servers will continue to work after a week or two, but here's some ways to make sure that the the probability of you connecting to a available server is high.
- Connect to servers that has been updated recently
- Connect to servers with a high active session count
- Avoid servers that hasn't been updated in more than a week
Obtaining a API Key
Have a suggestion? | https://docs.intellivoid.net/openblu/introduction | 2021-05-06T10:09:53 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.intellivoid.net |
.
The following instructions demonstrate the process of creating a configurable product using a product template, required fields, and basic settings. Each required field is marked with a red asterisk (
*). When you finish the basics, you can complete the advanced settings and other settings as needed.
Configurable Product
Part 1: Creating a configurable product
Although a configurable product uses more SKUs and may initially take a little longer to set up, it can save you time in the long run. If you plan to grow your business, the configurable product type is a good choice for products with multiple options.
Before you begin, prepare an attribute set that includes an attribute that is set to one of the allowable input types for each product variation. For example, the attribute set might include drop-down attributes for color and size.
The properties of each attribute that is used for a configurable product variation must have the following settings:
Product variation attribute requirements
Step 1: Choose the product type
On the Admin sidebar, go to Catalog > Products.
On the Add Product (
) menu at the upper-right corner, choose Configurable Product.
Add Configurable Product
Step 2: Choose the attribute set
The attribute set determines the selection of fields that are used in the product. The attribute set that is used in the following example has attributes for color and size. The name of the attribute set is indicated at the top of the page and is initially set to
Default.
To choose the attribute set for the product, click the field at the top of the page and do one of the following:
- For Search, enter the name of the attribute set.
- In the list, choose the attribute set that you want to use.
The form is updated to reflect the change.
If you need to add an additional attribute to the attribute set, click Add Attribute and follow the instructions in Adding an Attribute.
Choose Template
Step 3: Complete the required settings
Enter the product
The Quantity is determined by the product variations, so you can leave it blank.
Leave the Stock Status as set.
The Stock Status of a configurable product is determined by each associated configuration. Because the product was saved without entering a quantity, the Stock Status is set to
Out of Stock.
Enter the product Weight. attributes that are used to describe the product. The selection varies by attribute set, and you can complete them later.
Step 5: Save and continue
This is a good time to save your work. In the upper-right corner, click Save. In the next series of steps, you’ll set up the configurations for each variation of the product.
Part 2: Adding configurations
The following example shows how to add configurations for three colors and three sizes. In all, nine simple products will be created with unique SKUs to cover every possible combination of variations. By default, the product name and SKU for each variation is based on the attribute value and either the parent product name or SKU.
The progress bar at the top of the page shows where you are in the process and guides you through each step.
Progress Bar
Step 1: Choose the attributes
Continuing from above, scroll down to the Configurations section and click Create Configurations.
Configurations
Select the checkbox of each attribute that you want to include as a configuration.
For this example, we choose
colorand
size.
The list includes all attributes from the attribute set that can be used in a configurable product.
Select Attributes
If you need to add a new attribute, click Create New Attribute and do the following:
Complete the attribute properties.
Click Save Attribute.
Select the checkbox to select the attribute.
In the upper-right corner, click Next.
Step 2: Enter the attribute values
For each attribute, select the checkbox of the values that apply to the product.
To rearrange the attributes, grab the Change Order (
) icon and move the section to a new position.
The order determines the position of the drop-down lists on the product page.
In the progress bar, click Next.
Step 3: Configure the.
Choose the configuration options that apply.
Use one of the following methods to configure the images:
Method 1: Apply a single set of images to all SKUs
Select Apply single set of images to all SKUs.
Browse to each image that you want to include in the product gallery, or drag them to the box.
Use Same Images for All SKUs
Method 2: Apply unique images for each SKU
Because we already uploaded an image for the parent product, we’ll use this option to upload an image of each color. This is the image that will appear in the shopping cart when someone buys the shirt in a specific color.
Select Apply unique images by attribute to each SKU.
Select the attribute that the images illustrate, such as
color.
For each attribute value, either browse to the images that you want to use for that configuration or drag them to the box.
If you drag the an image to a value box, it also appears in the sections for the other values. If you want to delete an image, click the trashcan (
) icon.
Unique Images per SKU
Use one of the following methods to configure the prices:
- Method 1: Apply the same price to all SKUs
If the price is the same for all variations, select Apply single price to all SKUs.
Enter the Price.
Same Price per SKU
- Method 2: Apply a different price for each SKU
If the price differs for each or for some variations of the product, select Apply unique prices by attribute to each SKU.
Select the attribute that is the basis of the price difference.
Enter the price for each attribute value. In this example, the XL size costs more.
Unique Price per SKU
Use one of the following methods to configure the quantity:
- Method 1: Apply the same quantity to all SKUs
If the quantity is the same for all SKUs, select Apply single quantity to each SKU.
Enter the Quantity.
Same Quantity for All SKUs
If needed, apply the Same Quantity to All SKUs (Inventory Management).
For Multi Source merchants using Inventory Management, assign sources and add quantities for all generated product variants:
Select the Apply single quantity to each SKUs option.
To add a source, click Assign Sources.
Browse or search for a source you want to add. Select the checkbox next to the source(s) you want to add for the product.
Enter an on-hand inventory amount per source.
Same Quantity for All SKUs
- Method 2: Apply Different Quantity by Attribute
If the quantity is the different for each SKU, select Apply unique quantity by attribute to each SKU.
Enter the Quantity for each.
Different Quantities per Attribute
When configuration for images, price, and quantity are complete, click Next in the upper-right corner.
Step 4: Generate the product configurations
Wait a moment for the list of products to appear and do one of the following:
If you are satisfied with the configurations, click Next.
To make corrections, click Back.
Summary
The current product variations appear at the bottom of the Configuration section.
Current Configurations
Step 5: Add a product image
Scroll down and expand
the Images and Videos section.
Click the Camera tile and browse to the main image that you want to use for the configurable product.
For more information, see Images and Video.
Step 6: Complete the product information
Scroll down and complete the information in the following sections as needed:
Step 7: Publish the product
If you are ready to publish the product in the catalog, set Enable Product to
Yes 8: Configure the cart thumbnails
If you have a different image for each variation you can set the configuration to use the correct image for the shopping cart thumbnail.
On the Admin sidebar, go to Stores > Settings > Configuration.
In the left panel, expand Sales and choose Checkout underneath.
Expand
the Shopping Cart section.
Set Configurable Product Image to
Product Thumbnail Itself.
When complete, click Save Config.
Shopping Cart - Configurable Product Image
Things to remember
A configurable product allows the shopper to choose options from drop-down, multiple select, visual swatch and text swatch input types. Each option is a separate, simple product.
The attributes that are used for product variations must have a global scope and the customer must be required to choose a value. The product variation attributes must be included in the attribute set that is used as a template for the configurable product.
The attribute set that is. | https://docs.magento.com/user-guide/v2.3/catalog/product-create-configurable.html | 2021-05-06T09:13:38 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.magento.com |
Triangulating before baking
3D Meshes can be defined with polygons with multiple border edges per face. Usually via quads (4 edges), sometimes more (n-gons).
Software however transform those polygons into Triangles later because it's easier to manage and perform computation with (especially on the GPU).
How can triangulation affect a mesh ?
There are no standard solutions to convert Quad/N-Gons into triangles. As demonstrated on the image above, multiple choices are valid.
The bakers are unlikely to triangule meshes like a game engine would do because we choose a specific algorithm over an other.
Why triangulating before baking ?
The baking process will read the geometry and then encode information into textures.
Because those information are based on UVs and sometimes on the mesh topology, other software could decode the information incorrectly if they don't read the geometry the same way as when they apply the texture.
On the image below, you can see the low-poly mesh at the top left and the high-poly mesh at the top right.
At the bottom is the low-poly with the normal map baked from the high-poly. The mesh on the left use a triangulation identical to the one used by Substance Painter when baking. The mesh on the right doesn't and display black artifacts. This is because there is a mismatch between how the normal map was baked and how the mesh is currently triangulated. This can be fixed by updating the mesh and/or rebaking. | https://docs.substance3d.com/bake/triangulating-before-baking-159451841.html | 2021-05-06T09:05:30 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.substance3d.com |
With the emergence of cloud-native technologies, traditional approaches do not adequately satisfy data protection requirements. As a result, a new approach and technology are needed for addressing business continuity and disaster recovery strategies for environments that have adopted and are running cloud-native applications. TrilioVault is a backup and recovery as a service tool for cloud platforms, including OpenStack, RHV, and now Kubernetes.
This guide covers the evolution of data protection in a cloud-native world and how TrilioVault addresses these requirements. This guide also covers the architecture, installation, operations and usage of TrilioVault for Kubernetes.
This guide serves as end-user and technical documentation for TrilioVault for Kubernetes. The intended audience for this guide is all users of the product that want to understand the value, operations, and nuances for protecting their cloud-native applications with TrilioVault for Kubernetes. | https://docs.trilio.io/kubernetes/ | 2021-05-06T09:50:42 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.trilio.io |
. Parent topic: Upgrade SD-WAN Orchestrator with DR Deployment | https://docs.vmware.com/en/VMware-SD-WAN/4.1/vmware-sd-wan-operator-guide/GUID-D929A3AF-A582-4911-B441-0273C47E1399.html | 2021-05-06T10:56:23 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.vmware.com |
Leave Feedback
From Xojo Documentation
Thanks for helping us improve the Xojo documentation.
Send your quick feedback in an email to [email protected] and include the name (or URL) of the page in the message.
For larger changes or suggestions, please create a Feedback case.
Thank you for taking the time to help us improve the documentation. | https://docs.xojo.com/index.php?title=Xojo_Documentation:Leave_Feedback&oldid=77439 | 2021-05-06T09:19:35 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.xojo.com |
A newer version of this page is available. Switch to the current version.
Context Menu
The Context Menu is activated by right-clicking within the grid. The control provides different context menu types for the following elements: row, column, footer, group footer, and group panel.
The table below lists the main members that affect the appearance of the common menu, as well as the appearance of specific menus.
The text and image of menu items can be specified via the ASPxGridView.SettingsText and ASPxGridView.Images properties, respectively.
See Also
Feedback | https://docs.devexpress.com/AspNet/17125/aspnet-webforms-controls/grid-view/visual-elements/context-menu?v=19.2 | 2021-05-06T09:44:43 | CC-MAIN-2021-21 | 1620243988753.91 | [array(['/AspNet/images/aspxgridview_contextmenu12678.png?v=19.2',
'ASPxGridView_ContextMenu'], dtype=object) ] | docs.devexpress.com |
Quantify is a Windows-based program that enables you to use the skills that you already have. We designed these hardware requirements to work with your existing infrastructure and minimize your IT costs.
Quantify has an auto-update mechanism that keeps end-users up to date with the latest features and enhancements.
Supported Operating Systems*:
Windows 10
Quantify has passed Microsoft's compatibility tests for 10.
To verify certification status search for quantify in the Windows Compatibility Center. *64-bit operating system required to run Quantify 64-bit.
Processor*:
400 MHz processor (Minimum)
1GHz processor or greater (Recommended).
*64-bit processor required to run Quantify 64-bit.
RAM:
256 MB (Minimum)
512 MB (Recommended)
Hard Disk:
Up to 500 MB of available space may be required
CD or DVD Drive:
Not required
Display:
1024 x 768, 256 colors (Minimum)
1024 x 768 high color (Recommended)
Internet:
Internet connection required for installation and automatic updates.
Windows 10
Windows 8.1
Windows 8
Windows Server 2019
Windows Server 2016
Windows Server 2012
Windows Server 2012 R2
Windows Server 2008 R2
Windows Server 2008 SP2
1 GHz processor (Minimum) 32 Bit or 1.4 GHz processor (Minimum) 64 Bit
2 GHz processor or greater (Recommended)
512 MB (Minimum);
1 GB or more (Recommended)
6 GB or more of available space
Not required
Super-VGA 800 x 600 or higher resolution
Internet connection required for installation and automatic updates
High speed internet required
Required minimum speeds: 10 Mbps upload, 50 Mbps download
To keep the installation and update download size to a minimum, the prerequisites that Quantify requires are not included in the initial installer. If your computer does not have any prerequisites installed, a Prerequisites Wizard automatically appears during installation. It will notify you of the files needed and step you through the process of downloading and installing them. After all prerequisites have been installed, Quantify installation will begin. | https://docs.avontus.com/plugins/viewsource/viewpagesrc.action?pageId=134874132 | 2021-05-06T10:39:22 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.avontus.com |
Prerequisites for creating VM images
In this document, the scope of Azure Marketplace is China, and “mirror image” or “image” mean the same thing.
1. Download and install the tools
Before starting, install the following tools:
Depending on the operating system, install the Azure PowerShell command or Linux command line interface tool (Azure CLI) from the Azure Download page. For information about management tools that you can use, please see Azure PowerShell overview and Install Azure CLI 2.0.
Install Azure Storage Explorer
Download and run the Certification Test Tool for Azure Certified: Certification Test Tool for Azure Certified. You need a Windows machine to run the Authentication Tool. If you do not have a Windows machine, then you can run it on a Windows Azure virtual machine.
2. Supported platforms
You can develop Azure VMs using Windows or Linux, but there are some issues during the publication process—for example if you are creating an Azure-compatible virtual hard disk (VHD), the tools used and steps depend on the operating system you use:
If you use Linux, please refer to Azure-approved Linux release.
If you are creating a Windows image, ensure that you are using the correct base VHD.
The VM image operating system VHD must be based on an Azure-approved base image (including Windows Server or SQL Server). When starting, create the VM from an image located on the Azure portal. These images can also be found on the Azure Marketplace Windows Server and SQL Server.
The Windows operating system VHD in the VM image should be created as a 128 GB fixed-format VHD. If the size is less than 128 GB, then the VHD should be sparse. The base Windows and SQL Server image already meets these requirements, so do not change the format or size of the VHD obtained.
If you are creating a Windows image, please install the latest Windows patches.
The base image includes the latest patches up to the release date. Before publishing the created operating system VHD, ensure that Windows Update has been run and the latest “Critical” and “Important” security updates have been installed. Please refer to the document Prepare a Windows VHD or VHDX to upload to Azure “Install Windows Updates” section.
If you are creating a Windows image, you can implement other configurations or schedule tasks as needed. If you need another configuration, consider running scheduled tasks at startup, so that after the VM is deployed it can execute the latest changes:
A best practice is to have it delete itself after successfully executing.
Configurations should not rely on disks other than C or D because these two disks are the only ones guaranteed to exist. Disk C is the operating system disk and disk D is the temporary local disk.
To create an image with a high level of security, please refer to Security Recommendations for Azure Marketplace Images.
3. Product image requirements
The product image must meet the following requirements. Ensure that:
- The image is suitable for use with production environments, because the Azure Marketplace does not put up test versions of products on principle.
- The image is a self-contained image that contains all the software components on which it depends, including the client.
- The image does not contain any known defects, malicious software, or viruses.
- The image has undergone stringent testing to ensure its usability.
- The root logon is disabled by default for Linux images.
- For Linux images, the image does not contain any user authentication key details.
- VHD image files must be between 1GB and 1TB in size.
4. Azure subscription
If you don’t have an Azure subscription, please refer to the Azure Marketplace Publisher Guide.
Next steps
Feedback
- If you have any questions about this documentation, please submit user feedback in the Azure Marketplace.
- You can also look for solutions in the FAQs. | https://docs.azure.cn/en-us/articles/azure-marketplace/imageguide | 2021-05-06T09:00:39 | CC-MAIN-2021-21 | 1620243988753.91 | [] | docs.azure.cn |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.