XQuery Update From BaseX Documentation. This article describes the XQuery Update Facility (XQUF) as implemented in BaseX and addresses a few problems that frequently arise due to the nature of the language; these are stated in the Concepts paragraph.

Features

delete: delete node //n. The example query deletes all <n> elements in your database. Note that, in contrast to other updating expressions, the delete expression allows multiple nodes as a target.

Non-Updating Expressions, transform, update: for $item in db:open('data')//item return $item update delete node text(). The update expression is a convenience operator for writing simple transform expressions. Similar to the XQuery 3.0 map operator, the value of the first expression is bound as the context item, and the second expression performs updates on this item. The updated item is returned as the result. Please note that update is not part of the official XQuery Update Facility yet; it is currently being discussed in the W3C Bug Tracker, and your feedback is welcome.

Functions

fn:put: fn:put() is also part of the XQUF and enables the user to serialize XDM instances to secondary storage. It is executed at the end of a snapshot, so serialized documents reflect all changes made effective during a query.

Database Functions: Some additional updating database functions exist in order to perform updates on the document and database level.

Pending Update List: The most important thing to keep in mind when using XQuery Update is the Pending Update List (PUL). Updating statements are not executed immediately, but are first collected as update primitives within a set-like structure. At the end of a query, after some consistency checks and optimizations, the update primitives will be applied in the following order:
- Backups (1): db:create-backup()
- XQuery Update: insert before, delete, replace, rename, replace value, insert attribute, insert into first, insert into, insert into last, insert, insert after, put
- Documents: db:add(), db:store(), db:replace(), db:rename(), db:delete(), db:optimize(), db:flush()
- Users: user:grant(), user:password(), user:drop(), user:alter(), user:create()
- Databases: db:copy(), db:drop(), db:alter(), db:create()
- Backups (2)
As a consequence, a <b/> element that is inserted within the same snapshot is therefore not yet visible to the user.

Returning Results: By default, it is not possible to mix different types of expressions in a query result. The outermost expression of a query must either be a collection of updating or of non-updating expressions. But there are two ways out:
- The BaseX-specific db:output() function bridges this gap: it caches the results of its arguments at runtime and returns them after all updates have been processed. The following example performs an update and returns a success message: db:output("Update successful."), insert node <c/> into doc('factbook')/mondial
- With the MIXUPDATES option, all updating constraints will be turned off. Returned nodes will be copied before they are modified by updating expressions. An error is raised if items are returned within a transform expression.
If you want to modify nodes in main memory, you can use the transform expression.

Function Declaration: To use updating expressions within a function, the %updating annotation has to be added to the function declaration. A correct declaration of a function that contains updating expressions (or one that calls updating functions) looks like this (the function name here is a placeholder): declare %updating function local:example() { ... };

Effects: Changes to documents that were not opened from a database are only written back to the original files when the WRITEBACK option is turned on; otherwise, an information message indicates that files are not written back.

Indexes: Index structures are discarded after update operations when UPDINDEX is turned off (which is the default). More details are found in the article on Indexing.

Error Messages: Along with the Update Facility, a number of new error codes and messages have been added to the specification and to BaseX. All errors are listed in the XQuery Errors overview.

Changelog:
- Version 8.0: Added the MIXUPDATES option for returning results in updating expressions; added an information message if files are not written back.
- Version 7.8
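For reference, here is a minimal sketch of the transform (copy/modify/return) expression mentioned in the sections above; the database, element, and attribute names are made up for illustration:

(: copy a node, modify the copy, and return the result;
   the original document is left untouched :)
copy $c := doc('factbook')/mondial/country[@name = 'Japan']
modify (
  rename node $c as 'nation',
  insert node attribute updated { string(current-date()) } into $c
)
return $c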
http://docs.basex.org/wiki/Updates
2015-08-28T00:08:43
CC-MAIN-2015-35
1440644060103.8
[]
docs.basex.org
Setting Up Apache Sqoop Using the Command Line; Upgrading the RPM; Viewing the Sqoop 1 Documentation. For additional documentation, see the Sqoop user guides.
https://docs.cloudera.com/documentation/enterprise/5/latest/topics/cdh_ig_sqoop_installation.html
2019-11-11T22:32:01
CC-MAIN-2019-47
1573496664439.7
[]
docs.cloudera.com
ArithmeticException Class

Definition: The exception that is thrown for errors in an arithmetic, casting, or conversion operation.

C++/CLI: public ref class ArithmeticException : SystemException
C#: [System.Runtime.InteropServices.ComVisible(true)] [System.Serializable] public class ArithmeticException : SystemException
F#: type ArithmeticException = class inherit SystemException
Visual Basic: Public Class ArithmeticException Inherits SystemException

Remarks

ArithmeticException is the base class for the following exceptions:
- DivideByZeroException, which is thrown in integer division when the divisor is 0. For example, attempting to divide 10 by 0 throws a DivideByZeroException exception.
- NotFiniteNumberException, which is thrown when an operation is performed on or returns Double.NaN, Double.NegativeInfinity, Double.PositiveInfinity, Single.NaN, Single.NegativeInfinity, or Single.PositiveInfinity, and the programming language used does not support those values.
- OverflowException, which is thrown when the result of an operation is outside the bounds of the target data type; that is, it is less than the type's MinValue property or greater than its MaxValue property. For example, attempting to assign 200 + 200 to a Byte value throws an OverflowException exception, since 400 is greater than 255, the upper bound of the Byte data type.

Your code should not handle or throw this exception. Instead, you should either handle or throw one of its derived classes, since it more precisely indicates the exact nature of the error. For a list of initial property values for an instance of ArithmeticException, see the ArithmeticException constructors. ArithmeticException uses the HRESULT COR_E_ARITHMETIC, which has the value 0x80070216.
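As a brief illustration of the guidance above (catch the derived types rather than the base class), here is a small C# sketch; it is not part of the original reference page, and the variable values are chosen only to trigger the two derived exceptions at run time:

using System;

class ArithmeticExamples
{
    static void Main()
    {
        int ten = 10, zero = 0;
        try
        {
            Console.WriteLine(ten / zero);          // integer division by zero
        }
        catch (DivideByZeroException ex)            // derived from ArithmeticException
        {
            Console.WriteLine($"Division error: {ex.Message}");
        }

        int a = 200, b = 200;
        try
        {
            byte sum = checked((byte)(a + b));      // 400 does not fit in a Byte (max 255)
            Console.WriteLine(sum);
        }
        catch (OverflowException ex)                // also derived from ArithmeticException
        {
            Console.WriteLine($"Overflow error: {ex.Message}");
        }
    }
}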
https://docs.microsoft.com/en-us/dotnet/api/system.arithmeticexception?redirectedfrom=MSDN&view=netframework-4.8
2019-11-11T23:37:51
CC-MAIN-2019-47
1573496664439.7
[]
docs.microsoft.com
8.0.1 Fixes
- Fixes indexing issues when a filename contains Unicode characters

8.0.0 Fixes:
- Issue with search results that query subject fields
- CLI bug when an LD_LIBRARY_PATH environment variable is present

Non-Backwards Compatible Changes: With the formalization of Subjects in the Flywheel hierarchy in EM 5.0.0, the metadata value of `age` on a Subject moved to the Session. This reflects that `age` is the subject's age at the time of the session, not their current age. A subject's birthdate is metadata that is inherent to the subject and does not change over time, so that metadata field stayed at the Subject level. To ease the transition, using the search term `subject.age` returned the same results as searching the term `session.age`. However, with EM 8.0.0 and beyond, the `subject.age` search field is removed and no longer indexed. The user must instead search by `session.age`, the actual location of the subject's age at the time of scan.
https://docs.flywheel.io/hc/en-us/articles/360022580514-EM-8-0-x-Release-Notes-
2019-11-11T22:47:52
CC-MAIN-2019-47
1573496664439.7
[]
docs.flywheel.io
MaximumHeight Property

Specifies the maximum size in pixels to which the control add-in can dynamically increase its height.

Applies to: Control add-in objects. Value Type: Integer.

Property Values: The default is the integer's maximum value. If VerticalStretch is true but MaximumHeight is not set, the control add-in can expand indefinitely.

Dependent Property: This setting only applies if VerticalStretch is set to true.

Remarks: Use this property when the visual content of the add-in is no longer usable or no longer visually appealing beyond a certain size.

Code Example: RequestedHeight = 300; VerticalStretch = true; MaximumHeight = 500;
https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/properties/devenv-maximumheight-property
2019-11-11T22:37:55
CC-MAIN-2019-47
1573496664439.7
[]
docs.microsoft.com
getElementsByTagName Method (DOMDocument) (Windows CE 5.0)

Returns a collection of elements that have the specified name.

[Script] Script Syntax: var objXMLDOMNodeList = oXMLDOMDocument.getElementsByTagName(tagName);
Script Parameters: tagName is a String specifying the element name to find. The tagName "*" returns all elements in the document.
Script Return Value: Object. Points to a collection of elements that match the specified name.

[C/C++] C/C++ Syntax: HRESULT getElementsByTagName(BSTR tagName, IXMLDOMNodeList** resultList);
C/C++ Parameters: tagName [in] is the element name to find; the tagName "*" returns all elements in the document. resultList [out, retval] is the address of a collection of elements that match the specified name.
C/C++ Return Values: S_OK is returned if successful.

Requirements: OS Versions: Windows CE .NET 4.0 and later. Header: Msxml2.h, Msxml2.idl. Link Library: Uuid.lib.

General Remarks: This method is only valid if the XML Query Language (XQL) feature has been included in the operating system (OS). If a call to this method is made and XQL is not supported, an error message will be returned. The elements in the collection are returned in the order in which they would be encountered in a preorder traversal of the document tree. In a preorder traversal, the parent root node is visited first and each child node from left to right is then traversed. The returned IXMLDOMNodeList object is live and immediately reflects changes to the nodes that appear in the list. More complex searches can be performed using the selectNodes method, which can also be faster in some cases. This method applies to the following objects and interfaces: DOMDocument and IXMLDOMNodeList.
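For orientation, here is a small script sketch of the call described above; the XML content, the "Msxml2.DOMDocument" ProgID, and the use of WScript.Echo (a Windows Script Host convenience, not a Windows CE API) are illustrative assumptions rather than part of the original reference page:

var doc = new ActiveXObject("Msxml2.DOMDocument");   // assumed ProgID for the DOMDocument object
doc.async = false;
doc.loadXML("<inventory><item>bolt</item><item>nut</item></inventory>");
var items = doc.getElementsByTagName("item");        // live IXMLDOMNodeList
for (var i = 0; i < items.length; i++) {
    WScript.Echo(items.item(i).text);                // prints "bolt", then "nut"
}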
https://docs.microsoft.com/en-us/previous-versions/windows/embedded/aa515033%28v%3Dmsdn.10%29
2019-11-11T23:28:01
CC-MAIN-2019-47
1573496664439.7
[]
docs.microsoft.com
frr – Use frr cliconf to run commands on the Free Range Routing platform

New in version 2.8.

Synopsis
- This frr plugin provides low-level abstraction APIs for sending and receiving CLI commands from FRR network devices.

Status
- This cliconf is not guaranteed to have a backwards compatible interface. [preview]
- This cliconf is maintained by the Ansible Community. [community]
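As a usage sketch (the inventory group and the command below are made up; the plugin is selected by setting ansible_network_os to frr over the network_cli connection):

- hosts: frr_routers
  connection: network_cli
  gather_facts: no
  vars:
    ansible_network_os: frr
  tasks:
    - name: run a show command on the FRR device
      cli_command:
        command: show version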
https://docs.ansible.com/ansible/latest/plugins/cliconf/frr.html
2019-11-11T23:44:51
CC-MAIN-2019-47
1573496664439.7
[]
docs.ansible.com
Expressions are allowed in different places in the structure definition. For example, the size of a bit field, the number of items in an array, and a pointer offset are all specified using expressions. Expressions are also used in calculating constant values and enumeration values.

An expression is a combination of immediate values, constants, enumeration values, function calls, and field references. All these elements are connected with one or more operators. An expression may be as simple as:

5 // evaluates to integer "5"

or as complex as:

5 + 7*(10 - 2) // calculate an expression
info.bmiHeader.sel.header.biSizeImage // take the value of a field several scopes deep
bfTypeAndSignature.bfType == 'BM' && bfReserved1 == 0 && bfReserved2 == 0 // verify a condition
RvaToVa(OptionalHeader.DataDirectory[i].VirtualAddress) // access a field in a nested structure and array and call an external function

Below is a table of supported operators. Operators are sorted by their precedence, from highest to lowest. Operators in the same row have the same precedence value and are evaluated from left to right.

All expressions are evaluated at the time a structure file is compiled. If an expression is successfully evaluated to a constant value (that is, it does not contain any field references), the calculated value is used instead of the expression. It is used each time a type is bound to the data, thus greatly minimizing bind time. You can take advantage of this optimization performed by Device Monitoring Studio: instead of

const SecondsInHour = 3600; // 60 * 60
const SecondsInDay = 86400; // 60 * 60 * 24

use

const SecondsInHour = 60*60;
const SecondsInDay = SecondsInHour * 24;

as the result will be the same after the source file is compiled. Device Monitoring Studio is also capable of optimizing sub-expressions:

struct A
{
    int size;
    byte data[size * (sizeof(int) - 1)];
};

An important note here is that a constant sub-expression must be enclosed in parentheses in order to be optimized. Otherwise, Device Monitoring Studio will not optimize it, because it considers a variable as having an arbitrary type:

var i = ...
var j = i + (5 + 3); // will be optimized to i + 8
var j = i + 5 + 3; // will not be optimized (consider the case when i is a string, for example, which results in i + "53")
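To make the first sentence concrete, a bit-field size could be written with an expression along these lines; this is a hand-written sketch in the same C-like structure syntax as the examples above, with made-up field names, not an excerpt from the product documentation:

struct PacketHeader
{
    int version : 3;             // bit-field size given by a constant expression
    int flags   : (8 - 3);       // constant sub-expression in parentheses, folded at compile time
    int payloadLength;
    byte payload[payloadLength]; // array size given by a field reference, evaluated at bind time
};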
https://docs.hhdsoftware.com/dms/advanced-features/protocols/language-reference/expressions/overview.html
2019-11-11T22:28:19
CC-MAIN-2019-47
1573496664439.7
[]
docs.hhdsoftware.com
Important: The example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

In This Section
- Sales and Marketing Scenario: Describes the Adventure Works Cycles sales and marketing environment and customers.
- Product Scenario: Describes the products produced by Adventure Works Cycles.
- Purchasing and Vendor Scenario: Describes Adventure Works Cycles purchasing needs and vendor relationships.
- Manufacturing Scenario: Describes the Adventure Works Cycles manufacturing environment.
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms124825(v=sql.100)?redirectedfrom=MSDN
2019-11-11T23:18:42
CC-MAIN-2019-47
1573496664439.7
[]
docs.microsoft.com
Inspections. The trick to doing this right is understanding that the order in which the package deploys the artifacts is important. Look at the image below, which shows a fairly representative simple data model for a SharePoint solution. The screenshot is from VS2012/SP2013, but it works exactly the same in 2010. It shows fields, content types, and lists, deployed in that order. Now, the Projects list has a lookup to the Clients list. If you put that lookup in the SiteColumns element, it will fail, because it will have been provisioned in the wrong order. The same goes for different features. The practice that makes this work is to put the Field element for the lookup directly under the ListInstance element it depends on (or into the same feature), as sketched below. By doing this there is no possibility that the lookup will be provisioned out of order, because SharePoint deploys the contents of an individual elements file in the order it appears in the file. Resolution: Consider putting lookup fields in the same feature along with the related lists.
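A minimal sketch of what such an elements file might look like; the list names, template type, and field ID below are made up for illustration and are not taken from the screenshots:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- the list the lookup points to is provisioned first, in the same elements file -->
  <ListInstance Title="Clients" Url="Lists/Clients" TemplateType="100" />
  <!-- lookup field placed directly after the ListInstance it depends on -->
  <Field Type="Lookup"
         ID="{11111111-2222-3333-4444-555555555555}"
         Name="Client"
         DisplayName="Client"
         List="Lists/Clients"
         ShowField="Title" />
</Elements>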
http://docs.subpointsolutions.com/resp/inspections/xml/resp515114.html
2017-07-20T16:48:11
CC-MAIN-2017-30
1500549423269.5
[http://derekgusoff.files.wordpress.com/2013/04/package.jpg, http://derekgusoff.files.wordpress.com/2013/04/elements.jpg]
docs.subpointsolutions.com
Christian-Based Treatment of Eating Disorders: Reconciling Self, Life and God Start Date : April 28, 2017 End Date : April 28, 2017 Time : 8:15 am to 12:30 pm Location : The Westin Galleria Houston 5060 West Alabama Street Houston, TX 77056 Description The Renfrew Center Foundation is pleased to present a half-day seminar for health and behavioral health professionals addressing eating disorders within the Christian community and innovative treatment strategies. Registration Info Admission : Go to URL or call 555-5555 for more info... Organized by jmccormick, The Renfrew Center, 475 Spring Lane, Philadelphia, PA 19128 Event Categories: Family Practice, Health & Nutrition, Internal Medicine, Pediatrics, and Psychiatry.
http://meetings4docs.com/event/christian-based-treatment-of-eating-disorders-reconciling-self-life-and-god-2/
2017-07-20T16:34:22
CC-MAIN-2017-30
1500549423269.5
[]
meetings4docs.com
Addictive Disorders and Alcoholism 2017 Start Date : July 3, 2017 End Date : July 4, 2017 Time : 6:00 am to 9:00 am Phone : 6502689744 Location : Kuala Lumpur, Malaysia Description You should enter description content for your listing. Registration Info Admission : Go to URL or call 555-5555 for more info... Organized by program manager, 2360 Corporate Circle. Tel: 6502689744 Mobile: 6502689744 Website: Event Categories: Neurology and Psychiatry.
http://meetings4docs.com/event/addictive-disorders-and-alcoholism-2017/
2017-07-20T16:22:37
CC-MAIN-2017-30
1500549423269.5
[]
meetings4docs.com
Getting started - Console App Provision Project Template

Getting started with the M2 VS Console provision (Console Provision project template). This template bootstraps a new .NET console application with a few predefined classes and code snippets. The project is aimed at providing a jump start for a simple console provision of SPMeta2-based models. Refer to the General concepts document for more context and information about definitions, models and provision services.

When creating a project based on this template, you will be required to provide a project name and location and then, similarly to the 'SPMeta2 Intranet Model' template, a target SharePoint runtime as per the following screen. Here are more details on the settings:

|Parameter Name | Sample value | Description |
|------------- |------------- |------------- |

and the pre-generated classes. The pre-generated project has the following folders, files and classes predefined:

Folders
- /PSScripts - put your scripts here, as you need them
- /Utils - houses useful project-specific utils

Classes
- ConsoleUtils - partial CSOMConsoleUtils, SSOMConsoleUtils or O365ConsoleUtils

The following program is pre-generated to bootstrap the provision. A default "" url is used as the target site collection so that it works well with SPAutoInstaller.

static void Main(string[] args)
{
    var siteUrl = "";
    var consoleUtils = new ConsoleUtils();

    consoleUtils.WithCSOMContext(siteUrl, context =>
    {
        // replace it with your M2 models
        var siteModel = default(ModelNode);
        var rotWebModel = default(ModelNode);

        // create a provision service - CSOMProvisionService or StandardCSOMProvisionService
        var provisionService = new CSOMProvisionService();

        // little nice thing, tracing the progress
        consoleUtils.TraceDeploymentProgress(provisionService);

        // deploy!
        provisionService.DeploySiteModel(context, siteModel);
        provisionService.DeployWebModel(context, rotWebModel);
    });
}

Depending on the SharePoint provision runtime you selected earlier, the ConsoleUtils class will have WithXXX() methods such as the following:
- WithO365Context()
- WithCSOMContext()
- WithSSOMContext()

Console app configuration and tracing

The console application comes with a pre-generated app.config file. We update two sections to add SPMeta2 tracing to the console and the log file, as well as add some app-level settings. By default, SPMeta2 uses the standard .NET trace listeners infrastructure with the source name "SPMeta2", so the following diagnostic sources config can be used as a jump starter. The default level is 'Information', which is more than enough to see the progress in the console application.
<system.diagnostics>
  <sources>
    <!-- SPMeta2 logging -->
    <source name="SPMeta2" switchName="sourceSwitch" switchType="System.Diagnostics.SourceSwitch">
      <listeners>
        <add name="SPMeta2.ConsoleLog" type="System.Diagnostics.ConsoleTraceListener">
        </add>
        <add name="SPMeta2.DelimitedLog" type="System.Diagnostics.DelimitedListTraceListener" delimiter=":" initializeData="spmeta2.delimited.txt" traceOutputOptions="ProcessId, DateTime, Timestamp" />
        <!--
        <add name="SPMeta2.TextLog" traceOutputOptions="Timestamp" type="System.Diagnostics.TextWriterTraceListener" initializeData="spmeta2.log">
        </add>
        <add name="SPMeta2.XmlLog" type="System.Diagnostics.XmlWriterTraceListener" initializeData="spmeta2.xml.log" traceOutputOptions="ProcessId, DateTime, Timestamp" />
        <add name="SPMeta2.WebPageLog" type="System.Web.WebPageTraceListener, System.Web, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
        -->
      </listeners>
    </source>
  </sources>
  <switches>
    <add name="sourceSwitch" value="Information" />
  </switches>
</system.diagnostics>

The app settings are updated with the following values; we aim to use these values to drive the code-based provision of the needed SPMeta2 models or smaller bits:

<appSettings>
  <!-- generic settings START -->
  <add key="IntranetUrl" value="" />
  <!-- generic settings END -->

  <!-- site level provision START -->
  <add key="ShouldDeployTaxonomy" value="false" />
  <add key="ShouldDeploySandboxSolutions" value="false" />
  <add key="ShouldDeploySiteFeatures" value="false" />
  <add key="ShouldDeploySiteSecurity" value="false" />
  <add key="ShouldDeployFieldsAndContentTypes" value="false" />
  <!-- site level provision END -->

  <!-- root web level provision START -->
  <add key="ShouldDeployRootWeb" value="false" />
  <add key="ShouldDeployStyleLibrary" value="false" />
  <!-- root web level provision END -->

  <!-- sub webs level provision START -->
  <add key="ShouldDeployFinanceWeb" value="false" />
  <add key="ShouldDeploySalesWeb" value="false" />
  <add key="ShouldDeployTesmsWeb" value="false" />
  <!-- sub webs level provision END -->
</appSettings>

App configuration can be auto-generated with the T4 template. Just use "Add new item -> SPMeta2 -> AppSettings", name it 'AppSetting' and click 'Ok'. A new file 'AppSetting.tt' will be added to your solution. Once the file is opened and saved, a corresponding *.cs file with the class name 'AppSetting' is generated. All properties are driven by the app.config with the following naming convention:
- ShouldXXX -> converted to boolean props
- XXXCount -> converted to integer
- the rest -> converted to strings

Altogether, the console project template, the pre-generated utils, app.config and the T4 config item template help bootstrap a basic provision console application in a few clicks.
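The naming convention above amounts to roughly the following; this is a hand-written C# sketch of the idea, not the actual code emitted by the T4 template:

using System.Configuration;

internal static class AppSettingSketch
{
    // Should* keys are exposed as boolean properties
    public static bool ShouldDeployTaxonomy
    {
        get { return bool.Parse(ConfigurationManager.AppSettings["ShouldDeployTaxonomy"]); }
    }

    // *Count keys would be exposed as integers; everything else is exposed as a string
    public static string IntranetUrl
    {
        get { return ConfigurationManager.AppSettings["IntranetUrl"]; }
    }
}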
http://docs.subpointsolutions.com/spmeta2-vs/getting-started/consoleprovisionprojecttemplate
2017-07-20T16:31:00
CC-MAIN-2017-30
1500549423269.5
[_img/m2newprojectwizard.png, _img/m2console.newproject.png, _img/m2console.wizard.png, _img/m2console.projectstructure.png, _img/m2console.appsettingsitem.png, _img/m2console.appsettingsitemt4.png, _img/m2console.appsettingsitemt4generation.png]
docs.subpointsolutions.com
Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012

This reference topic for the IT professional summarizes common Windows logon and sign-in scenarios. The Windows operating systems require all users to log on to the computer with a valid account to access local and network resources. Windows-based computers secure resources by implementing the logon process, in which users are authenticated. After a user is authenticated, authorization and access control technologies implement the second phase of protecting resources: determining if the authenticated user is authorized to access a resource. The contents of this topic apply to versions of Windows designated in the Applies to list at the beginning of this topic.

In addition, applications and services can require users to sign in to access those resources that are offered by the application or service. The sign-in process is similar to the logon process, in that a valid account and correct credentials are required, but logon information is stored in the Security Account Manager (SAM) database on the local computer and in Active Directory where applicable. Sign-in account and credential information is managed by the application or service, and optionally can be stored locally in Credential Locker. To understand how authentication works, see Windows Authentication Concepts. This topic describes the following scenarios:

Interactive logon

The logon process begins either when a user enters credentials in the credentials entry dialog box, when the user inserts a smart card into the smart card reader, or when the user interacts with a biometric device. Users can perform an interactive logon by using a local user account or a domain account to log on to a computer. The following diagram shows the interactive logon elements and logon process (Windows Client Authentication Architecture).

Local and domain logon

Credentials that the user presents for a domain logon contain all the elements necessary for a local logon, such as account name and password or certificate, and Active Directory domain information. A local logon grants a user permission to access Windows resources on the local computer; it requires that the user has a user account in the SAM database on the local computer. A network logon grants a user permission to access Windows resources on the local computer in addition to any resources on networked computers, as defined by the credential's access token. Both a local logon and a network logon require that the user and computer have been granted permission to access and to use domain resources.

An interactive logon can also be performed remotely, through Terminal Services or Remote Desktop Services (RDS), in which case the logon is further qualified as remote interactive. After an interactive logon, Windows runs applications on behalf of the user, and the user can interact with those applications.

A local logon grants a user permission to access resources on the local computer or resources on networked computers. If the computer is joined to a domain, then the Winlogon functionality attempts to log on to that domain. A domain logon grants a user permission to access local and domain resources. A domain logon requires that the user has a user account in Active Directory. The computer must have an account in the Active Directory domain and be physically connected to the network. Users must also have the user rights to log on to a local computer or a domain.
Domain user account information and group membership information are used to manage access to domain and local resources.

Remote logon

In Windows, accessing another computer through remote logon relies on the Remote Desktop Protocol (RDP). Because the user must already have successfully logged on to the client computer before attempting a remote connection, interactive logon processes have successfully finished. RDP manages the credentials that the user enters by using the Remote Desktop Client. Those credentials are intended for the target computer, and the user must have an account on that target computer. In addition, the target computer must be configured to accept a remote connection. The target computer credentials are sent to attempt to perform the authentication process. If authentication is successful, the user is connected to local and network resources that are accessible by using the supplied credentials.

Network logon

A network logon can only be used after user, service, or computer authentication has taken place. During network logon, the process does not use the credentials entry dialog boxes to collect data. Instead, previously established credentials or another method to collect credentials is used. This process confirms the user's identity to any network service that the user is attempting to access. This process is typically invisible to the user unless alternate credentials have to be provided. To provide this type of authentication, the security system includes these authentication mechanisms:
- Kerberos version 5 protocol
- Public key certificates
- Secure Sockets Layer/Transport Layer Security (SSL/TLS)
- Digest
- NTLM, for compatibility with Microsoft Windows NT 4.0-based systems
For information about the elements and processes, see the interactive logon diagram above.

Smart card logon

Smart cards can be used to log on only to domain accounts, not local accounts. Smart card authentication requires the use of the Kerberos authentication protocol. Introduced in Windows 2000 Server, a public key extension to the Kerberos protocol's initial authentication request is implemented in Windows-based operating systems. In contrast to shared secret key cryptography, public key cryptography is asymmetric; that is, two different keys are needed: one to encrypt, another to decrypt. Together, the keys that are required to perform both operations make up a private/public key pair.

To initiate a typical logon session, a user must prove his or her identity by providing information known only to the user and the underlying Kerberos protocol infrastructure. The secret information is a cryptographic shared key derived from the user's password. A shared secret key is symmetric, which means that the same key is used for both encryption and decryption. The following diagram shows the elements and processes required for smart card logon (Smart Card credential provider architecture). When a smart card is used instead of a password, a private/public key pair stored on the user's smart card is substituted for the shared secret key. For more information, see how smart card sign-in works in Windows.

Biometric logon

A device is used to capture and build a digital characteristic of an artifact, such as a fingerprint. This digital representation is then compared to a sample of the same artifact, and when the two are successfully compared, authentication can occur. Computers running any of the operating systems designated in the Applies to list at the beginning of this topic can be configured to accept this form of logon.
However, if biometric logon is only configured for local logon, the user needs to present domain credentials when accessing an Active Directory domain.

Additional resources

For information about how Windows manages credentials submitted during the logon process, see Credentials Management in Windows Authentication. Windows Logon and Authentication Technical Overview
https://docs.microsoft.com/en-us/windows-server/security/windows-authentication/windows-logon-scenarios
2017-07-20T17:09:35
CC-MAIN-2017-30
1500549423269.5
[../media/windows-logon-scenarios/authn_lsa_architecture_client.gif (Diagram showing the interactive logon elements and logon process), ../media/windows-logon-scenarios/smartcardcredarchitecture.gif (Diagram showing the elements and processes required for smart card logon)]
docs.microsoft.com
Hello" to greet anyone who submits his or her name through a web browser: Your implementation of the Web service interface will look, host the POJO as a component in Mule, then use the simple front-end client with its CXF inbound endpoint.. Creating a Simple Front-end Web Service A simple front end allows you to create web services which don’t require annotation. First, you write the service interface. As in the example above, you could write an operation called "sayHello" will see the WSDL that CXF generates. Advanced Configuration Validation of Messages The following code enables schema validation for incoming messages by adding a validationEnabled attribute to your service declaration: Changing the Data Binding You can use the databinding property on an endpoint to configure the databinding that will be: Optional Annotations.
https://docs.mulesoft.com/mule-user-guide/v/3.4/building-web-services-with-cxf
2017-07-20T16:32:05
CC-MAIN-2017-30
1500549423269.5
[]
docs.mulesoft.com
Using Placeholders in Email Templates When composing an email template you can use placeholders to save typing or to customize it for each recipient or recipient group. A few important placeholders are predefined by the system but you can define an arbitrary number of your own placeholders. This article is organized as follows: - What is a Placeholder - System Placeholders - Custom Placeholders - Using Flow Control Clauses - Dynamically Including HTML - Creating Separate HTML Tags for Array Items What is a Placeholder A placeholder is a Mustache tag that is dynamically replaced with text when sending the email to each recipient. It has the following structure: {{placeholder}} System placeholders are valid in all templates' contexts. Custom placeholders are valid only in the context of the template where they are declared. System Placeholders The Email Notifications service comes with a set of predefined placeholders called system placeholders. These are automatically replaced with values when sending emails. You don't need to set the values in the template context. The system placeholders can be divided into several categories: The email header system placeholders are populated with built-in variables that you can manage on Emails > Settings (or if you have added the User Management service, you can also use Users > Email Settings to set the same values). They include: - {{DefaultFromEmail}} set using Default from email - {{DefaultFromName}} set using Default from name - {{DefaultReplyToEmail}} set using Default ReplyTo email These come in handy when filling in the respective email header fields, as shown in the next figure, but you can also use them in the message body: User Management Placeholders The user management placeholders are only allowed in the context of the ResetPasswordEmail, PasswordChangedEmail, VerifyAccountEmail, and WelcomeEmail system templates. They allow you to access data from the Users content type. They must be typed in the following format: {{User.fieldname}} where fieldname is a name of the Users content type field that you want to access. For example: - {{User.DisplayName}} is replaced with the user's display name - {{User.Username}} is replaced with the user's username In addition, you get a pair of user management placeholders that do not read data from the Users content type, but a closely related to each user: - {{VerifyAccountURL}} is replaced with the default user account verification URL for the user - {{PasswordResetURL}} is replaced with default password reset URL for the user App Data Placeholders The app data placeholders allow you to access data about your Telerik Platform app. These include: - {{Application.Title}} is replaced with the app name - {{Application.Name}} is replaced with the internal autogenerated name of your app. This name is also used by default in the DefaultFromEmail and looks like this: aBackendServicesd83ed520219646ffb017ab38e79e1180 - {{Application.Description}} is replaced with the app description Custom Placeholders You can customize an email message in many ways using placeholders. A simple example is inserting a product name that your recipient has just purchased: Thank you for purchasing {{ProductName}}. which appears like this in the email: Thank you for purchasing Space Rocket 3000. You can use placeholders for more elaborate tasks such as inserting dynamic HTML code or automatically generating HTML code for an array of elements. See the following sections for examples. 
Besides inserting the placeholder into the template, you need to set its value. You do this in the context payload parameter when sending the email. For details, see the respective article: Using Flow Control Clauses For greater flexibility you can use flow control (conditional) clauses in your template: If Variable Exists To display a certain piece of text only when a value for the placeholder is set in the context, use this construct: {{#variableFromTheContext}} Text to display {{/variableFromTheContext}} If Variable Does Not Exist To display a certain piece of text only when a value for the placeholder is not set in the context, use this construct: {{^variableFromTheContext}} Text to display {{/variableFromTheContext}} If Values are Equal To display a certain piece of text only when a pair of values are equal, use the next construct. Each of the values can be: - Variable from the context, formatted as: variableFromTheContext - Number, formatted as: N - String, enclosed in single quotes: 'string' {{#if (eq val1 val2)}} Text to display 1 {{else if (eq val3 val4)}} Text to display 2 {{else}} Text to display 3 {{/if}} Examples In this example, different text is displayed depending on whether you've set {{PremiumUser}} in the context. Context: "PremiumUser": "John Doe" Message Body: {{#PremiumUser}} View out special offers for premium users! {{/PremiumUser}} {{^PremiumUser}} Become a paying member to get the full benefits! {{/PremiumUser}} The following example checks if it is April 1st and prints a customized messages if it is: Context: "CurrentDate": "04.01" "CurrentYear": "2016" Message Body: {{#if (eq CurrentDate '04.01')}} Happy April Fool's Day! {{else}} Today is {{CurrentYear}}.{{CurrentDate}}. {{/if}} Dynamically Including HTML When composing an email body using HTML, you can insert placeholders that can be dynamically replaced with HTML code when sending the email later. To dynamically insert HTML into your template: Insert a template placeholder enclosed in a <pre></pre> tag where you want the dynamic HTML to appear. In contrast to previous placeholders, this placeholder type uses triple braces. For example if you want to insert a customized HTML-formatted greeting into your emails, use a placeholder such as: <pre>{{{CustomizedGreeting}}}</pre> Then, in the Context object, provide a value for the placeholder. For example: var context = { "CustomizedGreeting": "<div><span> ... </span><img ... ></div>" }; Creating Separate HTML Tags for Array Items Suppose you have an array of items, each of which you want to display as a separate HTML tag. You can do this using nothing more than template syntax. For example, the Urls variable contains an array of five URL: By adding the code shown below to the HTML view of your template, you will get five images in the resulting email: {{#Urls}} <img src="{{.}}" /> {{/Urls}} If you need to access the properties of objects contained in an array use the same dot ( .) notation followed by the property name. Assume that the array of objects looks like the following: By adding the property name after the dot ( .), you will get the Url values in the resulting email: {{#Urls}} <img src="{{.Url}}" /> {{/Urls}}
http://docs.telerik.com/platform/backend-services/rest/email-templates/email-notifications-use-placeholders
2017-07-20T16:31:05
CC-MAIN-2017-30
1500549423269.5
[images/email-template-complete.png (System placeholders in use)]
docs.telerik.com
sp_dropremotelogin (Transact-SQL)

Removes a remote login mapped to a local login used to execute remote stored procedures against the local server running SQL Server.

Important: This feature will be removed in the next version of Microsoft SQL Server. Do not use this feature in new development work, and modify applications that currently use this feature as soon as possible. Use linked servers and linked-server stored procedures instead.

Applies to: SQL Server (SQL Server 2008 through current version). Transact-SQL Syntax Conventions.

Permissions: Requires membership in the sysadmin or securityadmin fixed server roles.

Examples

A. Dropping all remote logins for a remote server. The following example removes the entry for the remote server ACCOUNTS and, therefore, removes all mappings between logins on the local server and remote logins on the remote server.
EXEC sp_dropremotelogin 'ACCOUNTS';

B. Dropping a login mapping. The following example removes the entry for mapping remote logins from the remote server ACCOUNTS to the local login Albert.
EXEC sp_dropremotelogin 'ACCOUNTS', 'Albert';

C. Dropping a remote user. The following example removes the login for the remote login Chris on the remote server ACCOUNTS that was mapped to the local login salesmgr.
EXEC sp_dropremotelogin 'ACCOUNTS', 'salesmgr', 'Chris';

See Also: Security
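As a companion step not covered on this page, you can list the remote logins that are currently defined for a server before dropping them; sp_helpremotelogin is the related system procedure:

EXEC sp_helpremotelogin 'ACCOUNTS';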
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-dropremotelogin-transact-sql
2017-07-20T17:41:38
CC-MAIN-2017-30
1500549423269.5
[../../includes/media/yes.png (yes), ../../includes/media/no.png (no), ../../includes/media/no.png (no), ../../includes/media/no.png (no)]
docs.microsoft.com
Are External Network Connections Allowed application control policy rule

Description: This rule specifies whether an application can make external network connections. You can configure this rule to prevent the application from sending or receiving any data on a BlackBerry® device using an external protocol (such as WAP or TCP). You can also configure this rule so that an application prompts a user before it makes external connections through the BlackBerry device firewall.
http://docs.blackberry.com/en/admin/deliverables/7229/External_netwk_connections_allowed_651555_11.jsp
2014-10-20T12:13:52
CC-MAIN-2014-42
1413507442497.30
[]
docs.blackberry.com
Unit tests are what keep your code on the straight and narrow path. A Lack of Unit Tests is analogous to the moral Deadly Sin of sloth, which is sometimes called moral laziness, and sometimes defined as not doing the things you should do. Unit tests help keep bugs and regressions from slipping into production code. And when you make a change to existing code, they help you know that you didn't break it. They are your backstop in refactoring, and give you confidence, when you're eliminating duplications or reducing complexity, that you haven't just thrown a monkey wrench into the works. If you're dealing with legacy code that doesn't have unit tests, it likely wasn't written with unit tests in mind. In that case, don't be intimidated by the volume of work you'll need to do to add tests. Instead, focus on covering your new code, and add tests for existing code as you can. Data on unit tests and code coverage can be displayed on a project dashboard. By adding the SCM Activity plugin, code coverage on new code (added or modified) is displayed.
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=229738278&selectedPageVersions=9&selectedPageVersions=10
2014-10-20T11:46:49
CC-MAIN-2014-42
1413507442497.30
[]
docs.codehaus.org
How to create a strategy

A strategy is a combination of technical indicators and candlestick patterns that work together to provide entries into the market and also exits (depending upon how you configure it). Cryptohopper shows multiple technical analyses every week; click here to read them and get inspired by our analysts! Do you want to play around first with technical indicators in a chart to check when a buy signal will be given? We explain how charting works here. Cryptohopper also sends technical analyses by mail, so don't forget to subscribe to our newsletter.

Let's start by designing our own strategy. Click on "New Strategy" on the top right corner to get started.

Naming: On the left-hand side, you have the possibility of giving your strategy a name and description. We have named our strategy: "Cryptohopper Example Strategy" and wrote in the description: "An example strategy". After this step has been completed, click "save". If you click on the little green arrow just right of the Save button, two options will pop up: Save & close: use this if you would like to save the strategy and close it to continue working on it another time. Save as copy: use this if you would like to create a copy of your current strategy.

Image: You can now choose an image for your strategy by clicking on the "select image" button. You can either choose a default image or upload your own. To upload your own image, you have to select "upload image" and then "upload logo". Please make sure that your image has the following size: "600px X 430px".

Indicators: It is now time to choose the right indicators for your strategy. Technical indicators are chart analysis tools that can help traders enter a position. At Cryptohopper, we offer more than 36 customizable indicators. To select an indicator, click on Indicators and search for the indicator you desire. If you would like to use an indicator to enter a position, make sure the signal is set to "buy". If you would like to use the indicator to exit a position, make sure to set the signal to "sell". The Strategy Designer supports up to 16 Technical Indicators and/or Candle Patterns.

Candlestick Patterns: Aside from "indicators", Cryptohopper also offers 91 candlestick patterns. These are technical analysis tools that are used to predict price direction. Unlike the indicators, the candlestick patterns cannot be changed from buy to sell and vice versa. They are either a position entry pattern or a position exit pattern.

Building a strategy: When you wish to build a strategy, it is vital to have a clear picture in mind of what you would like to build. Let's create a strategy that buys the dips of an uptrending market. To identify a bullish market, we can use the MESA on the daily chart. This is how the MESA looks on a chart when using the "Charts" tab: The MESA will indicate a bullish market as long as the cloud is green. This is how it looks when using Cryptohopper's "Charts" tab:

Chart Period: This is the timeframe the indicator operates on (in our case, 1 day). You can choose any timeframe between 1 day and 1 minute.

Signal: If you would like to use the indicator for an entry point, make sure it is set to "buy". If you would like to use the indicator as an exit point, make sure to set it to "sell".

OHLCV Value, Fast Limit, Slow Limit: These are unique settings that you can set for this indicator (each indicator has specific editable settings).

Keep signal for: This option is used if you would like to keep a signal for a specific number of candles.
In our case, it is not necessary for the MESA.

Necessary signal: We have marked the signal as necessary because we want this indicator to indicate a buy each time we enter a position. This option is useful, for example, when you have a strategy made up of 6 indicators, but you only need 4 of them signaling a buy simultaneously to enter a trade. If you would like to have one of the 4 indicators active at all times, just tick the option "necessary signal".

Let's now add an indicator that can find the bottoms of this uptrend. Williams %R is an oscillator that can find overbought or oversold zones. We will add this indicator on the 1-hour chart to find the oversold zones of the uptrend. This is how Williams %R looks on a chart under "Charts": The indicator will identify a bottom each time the black line falls below the lower band at -80. The green circles indicate the bottom on the graph. Williams %R can find some great buy points in a bullish market. However, it has some significant drawbacks. In the case of a market crash, the indicator would indicate a buy continuously. As a result, you may enter a position, only to have the market continue falling a lot further. Therefore, we will have to combine it with yet another indicator. This is how the indicator looks on our platform under "Charts":

Let's now add an indicator that can complement the major flaws found in Williams %R. A crossover of the 1 and 15 EMA can work well in identifying when the trend is bullish again. This indicator produces "buy signals" each time the fast EMA crosses over the slow EMA, as can be seen on the chart. This is how the EMA crossover looks on the chart when configured under "Charts": This is how the EMA crossover looks on Cryptohopper's "Charts" tab: This is how the strategy currently looks on the chart on the "Charts" tab:

We have chosen to keep the Williams %R signal for 10 candles, as it is very rare for Williams %R to be oversold while, at the same time, the EMA crossover indicates a bullish signal. The vertical green line at the left represents the moment the MESA turned bullish on the daily chart. We have set the minimum "buy" signals to 3 out of 3. This means that all of the indicators are necessary for your hopper to make a trade. This feature is useful if you have many indicators, but not all are essential to the strategy, and some are substitutes. For example, if you consider the MACD a substitute for the EMA and not essential to the strategy, then you will have 3 out of 4 minimum buy signals. You will also still have the MESA and Williams %R set as necessary, while the EMA will no longer be necessary. In the end, you will need the MESA and Williams %R to indicate a buy signal, and then you need either the MACD or the EMA to indicate a buy for your hopper to open up a position.

Let's now also add a sell indicator to this strategy to close out positions with it. The MACD on the hourly chart should work well here. To create a sell indicator instead of a buy one, simply click on the green "buy" button in the menu of the particular indicator; this will change your signal to "sell". This is how the indicator should look on Cryptohopper's "Charts" tab. This is how the final version of the strategy should look on Cryptohopper's "Charts" tab: This is how the final version of the strategy looks on the "Charts" tab:

Buttons and Symbols

Let's now look at the symbols on the right-hand side of each indicator. Their meaning, in order from left to right: The "!".
This is the symbol of a necessary signal. As long as this is green, a signal from this indicator is required to open up a position. You can use this if you have a strategy comprising multiple signals, with some being essential and others not. You can thus mark the essential signals as necessary.

The gears symbol. This opens up the menu for a specific indicator. You can also open up this menu by clicking on the title of the indicator.

This duplicates an indicator, with the specific settings that you have customized. For example, this feature is useful if you want to have the same version of the MACD as both a buy signal and a sell signal.

You can use the "X" to remove an indicator/candlestick pattern.

The hamburger button on the left of an indicator can be used to drag strategies higher or lower. This will not have any effect on the strategy's performance. It is just for aesthetics and to help you order your strategies however you see fit.

You can select an indicator by ticking the empty box to the left of its signal type. When ticked, this box should become blue. You can then use the buttons at the top to either duplicate the indicator or remove it.

Test: You can use the Test button to quickly backtest your strategy. Please note that this testing feature is not as in-depth and customizable as the one on the backtesting page. You have the possibility of choosing your exchange and market (coin pair, for example, USDT - BTC). After choosing your exchange and market, click "Test" to get started! On the bottom left, you can see the total number of both buy and sell signals generated by the strategy. On the bottom right, you can see each individual signal given. And if you click on TA, you can see the values of all of the indicators. To get back to the "designer" menu, where we see an overview of all of our indicators and candlestick patterns, simply click on "Designer".

Code: You can use the Code feature to edit your indicators using the JSON code. Please be careful when using this feature, as you may disrupt your strategy if you make a mistake while coding. To get back to the designer, simply click the Designer button again.

Delete: You can use the "Delete" button to delete your strategy if you no longer need it. Please bear in mind that this action cannot be undone, and you will need to create it again.

Close: Finally, when you are satisfied with your strategy, you can click the "close" button to get back to the designer. Make sure you save before closing!
https://docs.cryptohopper.com/docs/en/Strategy%20Designer/how-to-create-a-strategy/
2021-02-24T23:06:16
CC-MAIN-2021-10
1614178349708.2
[https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/strategy-designer/howto/1.+Creating+A+Strategy.gif, https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/strategy-designer/howto/2.+Uploading+Custom+Image.gif, https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/strategy-designer/howto/3.+Choosing+Indicators.gif, https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/strategy-designer/howto/Schermafbeelding+2020-07-24+om+09.00.19.png, https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/strategy-designer/howto/4.+MACD+Sell.gif, https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/strategy-designer/howto/5.+Hamburger+Menu.gif, https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/strategy-designer/howto/6.+Duplicate+and+Delete.gif, https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/exploref/strategy-designer/howto/7.+Save+and+Close.gif]
docs.cryptohopper.com
Astra Documentation Astra is a Kubernetes application data lifecycle management service that simplifies operations for stateful applications. Easily back up Kubernetes apps, migrate data to a different cluster, and instantly create working application clones. Understand how Astra works. Learn what's new with Astra. Start managing your apps from Astra. Create recovery points for your apps in case of failure. Migrate an app to another cluster. Set up billing for the Premium Plan. Watch videos that show how to use Astra. Deploy MariaDB from a Helm chart and register it with Astra. Deploy MySQL from a Helm chart and register it with Astra. How Astra provides robust storage, easy-to-consume data services, and application and data portability. A curated collection of blog postings to learn more about Kubernetes and persistent storage.
https://docs.netapp.com/us-en/astra/
2021-02-25T00:09:10
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
You can use a function in your workflows for a complex task that has to be completed during the planning phase of the workflow. You can write functions by using the MVFLEX Expression Language (MVEL). You can use functions to put together commonly used logic as well as more complex logic in a named function and reuse it as values for command parameters or filter parameters. You can write a function once and use it across workflows. You can use functions to handle repetitive tasks and tasks that might be complex, such as defining a complex naming convention. Functions might use other functions during their execution.
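As an illustration of the kind of reusable logic described above, here is a minimal MVEL sketch; the function name, parameters, and naming convention are invented for the example and are not part of the product documentation:

def volumeName(projectName, purpose) {
    // build a volume name from a simple, project-specific naming convention
    return projectName.toLowerCase() + "_" + purpose.toLowerCase() + "_vol";
}

// example: volumeName("Finance", "Archive") evaluates to "finance_archive_vol"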
https://docs.netapp.com/wfa-42/topic/com.netapp.doc.onc-wfa-wdg/GUID-463370BC-82A9-4551-9FF6-15558C37AA81.html
2021-02-25T00:42:05
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
This guide reflects the old console for Amazon SES. For information about the new console for Amazon SES, see the new Amazon Simple Email Service Developer Guide.

Integrating Amazon SES with Postfix

Postfix is an alternative to the widely used Sendmail Message Transfer Agent (MTA). For information about Postfix, go to the Postfix website. Postfix is a third-party application, and isn't developed or supported by Amazon Web Services. The procedures in this section are provided for informational purposes only, and are subject to change without notice.

Prerequisites

Before you complete the procedures in this section, you have to perform the following tasks:
- Uninstall Sendmail, if it's already installed on your system. The procedure for completing this step varies depending on the operating system you use.
- Install Postfix. The procedure for completing this step varies depending on the operating system you use.
- Install a SASL authentication package. The procedure for completing this step varies depending on the operating system you use. For example, if you use a RedHat-based system, you should install the cyrus-sasl-plain package. If you use a Debian- or Ubuntu-based system, you should install the libsasl2-modules package.
- Verify an email address or domain to use for sending email. For more information, see Verifying email addresses in Amazon SES.
- If your account is still in the sandbox, you can only send email to verified email addresses. For more information, see Moving out of the Amazon SES sandbox.

Configuring Postfix

Complete the following procedures to configure your mail server to send email through Amazon SES using Postfix.

To configure Postfix

At the command line, type the following command:
sudo postconf -e "relayhost = [email-smtp.us-west-2.amazonaws.com]:587"
Note: If you use Amazon SES in an AWS Region other than US West (Oregon), replace email-smtp.us-west-2.amazonaws.com in the preceding command with the SMTP endpoint of the appropriate region. For more information, see Regions and Amazon SES.

In a text editor, open the file /etc/postfix/master.cf. Search for the following entry:
-o smtp_fallback_relay=
If you find this entry, comment it out by placing a # (hash) character at the beginning of the line. Save and close the file. Otherwise, if this entry isn't present, proceed to the next step.

In a text editor, open the file /etc/postfix/sasl_passwd. If the file doesn't already exist, create it. Add the following line to /etc/postfix/sasl_passwd:
[email-smtp.us-west-2.amazonaws.com]:587 SMTPUSERNAME:SMTPPASSWORD
Note: Replace SMTPUSERNAME and SMTPPASSWORD with your SMTP username and password, respectively. Your SMTP user name and password aren't the same as your AWS access key ID and secret access key. For more information about credentials, see Obtaining your Amazon SES SMTP credentials. If you use Amazon SES in an AWS Region other than US West (Oregon), replace email-smtp.us-west-2.amazonaws.com in the example above with the SMTP endpoint of the appropriate region. For more information, see Regions and Amazon SES.
Save and close sasl_passwd.

At a command prompt, type the following command to create a hashmap database file containing your SMTP credentials:
sudo postmap hash:/etc/postfix/sasl_passwd

(Optional) The /etc/postfix/sasl_passwd and /etc/postfix/sasl_passwd.db files you created in the previous steps aren't encrypted. Because these files contain your SMTP credentials, we recommend that you modify the files' ownership and permissions in order to restrict access to them.
To restrict access to these files: At a command prompt, type the following command to change the ownership of the files: sudo chown root:root /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db At a command prompt, type the following command to change the permissions of the files so that only the root user can read or write to them: sudo chmod 0600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db Tell Postfix where to find the CA certificate (needed to verify the Amazon SES server certificate). The command you use in this step varies based on your operating system. If you use Amazon Linux, Red Hat Enterprise Linux, or a related distribution, type the following command: sudo postconf -e 'smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt' If you use Ubuntu or a related distribution, type the following command: sudo postconf -e 'smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt' If you use macOS, you can generate the certificate from your system keychain. To generate the certificate, type the following command at the command line: sudo security find-certificate -a -p /System/Library/Keychains/SystemRootCertificates.keychain > /etc/ssl/certs/ca-bundle.crt After you generate the certificate, type the following command: sudo postconf -e 'smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt' Type the following command to start the Postfix server (or to reload the configuration settings if the server is already running): sudo postfix start; sudo postfix reload Send a test email by typing the following at a command line, pressing Enter after each line. Replace [email protected] with your From email address. The From address has to be verified for use with Amazon SES. Replace [email protected] with the destination address. If your account is still in the sandbox, the recipient address also has to be verified. Finally, the last line of the message has to contain a single period (.) with no other content. sendmail -f [email protected] [email protected] From: Sender Name <[email protected]> Subject: Amazon SES Test This message was sent using Amazon SES. . Check the mailbox associated with the recipient address. If the email doesn't arrive, check your junk mail folder. If you still can't locate the email, check the mail log on the system that you used to send the email (typically located at /var/log/maillog) for more information. Advanced usage example This example shows how to send an email that uses a configuration set, and that uses MIME-multipart encoding to send both a plain text and an HTML version of the message, along with an attachment. It also includes a link tag, which can be used for categorizing click events. The content of the email is specified in an external file, so that you do not have to manually type the commands in the Postfix session. To send a multipart MIME email using Postfix In a text editor, create a new file called mime-email.txt.
In the text file, paste the following content, replacing the placeholder values with the appropriate values for your account: X-SES-CONFIGURATION-SET: ConfigSet From: Sender Name <[email protected]> Subject: Amazon SES Test MIME-Version: 1.0 Content-Type: multipart/mixed; Using the Amazon SES SMTP Interface to Send Email</a> in the <em>Amazon SES Developer Guide</em>.</p> </body> </html> --3NjM0N2QwMTE4MWQ0ZTg2NTYxZQ-- --YWVhZDFlY2QzMGQ2N2U0YTZmODU Content-Type: application/octet-stream MIME-Version: 1.0 Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="customers.txt" SUQsRmlyc3ROYW1lLExhc3ROYW1lLENvdW50cnkKMzQ4LEpvaG4sU3RpbGVzLENh bmFkYQo5MjM4OSxKaWUsTGl1LENoaW5hCjczNCxTaGlybGV5LFJvZHJpZ3VleixV bml0ZWQgU3RhdGVzCjI4OTMsQW5heWEsSXllbmdhcixJbmRpYQ== --YWVhZDFlY2QzMGQ2N2U0YTZmODU-- Save and close the file. At the command line, type the following command. Replace [email protected] with your email address, and replace [email protected] with the recipient's email address. sendmail -f [email protected] [email protected] < mime-email.txt If the command runs successfully, it exits without providing any output. Check your inbox for the email. If the message wasn't delivered, check your system's mail log.
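If you prefer to script the test message instead of typing it into sendmail, the same email can be handed to the local Postfix relay with Python's standard smtplib module. This is a minimal sketch, not part of the AWS documentation; the addresses are placeholders and are subject to the same SES verification requirements as the sendmail example above.

import smtplib
from email.message import EmailMessage

# Build the same test message that the sendmail example sends.
msg = EmailMessage()
msg["From"] = "Sender Name <sender@example.com>"   # placeholder; must be verified with Amazon SES
msg["To"] = "recipient@example.com"                # placeholder; must also be verified while in the sandbox
msg["Subject"] = "Amazon SES Test"
msg.set_content("This message was sent using Amazon SES.")

# Hand the message to the local Postfix instance, which relays it to Amazon SES.
with smtplib.SMTP("localhost") as smtp:
    smtp.send_message(msg)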
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/postfix.html
2021-02-24T23:24:20
CC-MAIN-2021-10
1614178349708.2
[]
docs.aws.amazon.com
Symmetric Keys on User Databases Applies to: SQL Server (all supported versions) This rule checks whether keys that have a length of less than 128 bytes do not use the RC2 or RC4 encryption algorithm. Best Practices Recommendations Use AES 128 bit or larger to create symmetric keys for data encryption. If AES is not supported by your operating system, use 3DES. For More Information Choose an Encryption Algorithm See Also Monitor and Enforce Best Practices by Using Policy-Based Management
https://docs.microsoft.com/en-us/sql/relational-databases/policy-based-management/symmetric-keys-on-user-databases?view=sql-server-ver15
2021-02-25T00:33:23
CC-MAIN-2021-10
1614178349708.2
[]
docs.microsoft.com
End-of-Life (EoL) Configure SSL Forward Proxy To enable the firewall to perform SSL Forward Proxy decryption, you must set up the certificates required to establish the firewall as a trusted third party to the session between the client and the server. The firewall can use self-signed certificates or certificates signed by an enterprise certificate authority (CA) as forward trust certificates to authenticate the SSL session with the client. - (Recommended) Enterprise CA-signed Certificates: An enterprise CA can issue a signing certificate which the firewall can use to sign the certificates for sites requiring SSL decryption. When the firewall trusts the CA that signed the certificate of the destination server, the firewall can then send a copy of the destination server certificate to the client signed by the enterprise CA. - Self-signed Certificates: When a client connects to a server with a certificate that is signed by a CA that the firewall trusts, the firewall can sign a copy of the server certificate to present to the client and establish the SSL session. You can use self-signed certificates for SSL Forward Proxy decryption if your organization does not have an enterprise CA or if you intend to only perform decryption for a limited number of clients. After setting up the forward trust and forward untrust certificates required for SSL Forward Proxy decryption, add a decryption policy rule to define the traffic you want the firewall to decrypt. SSL tunneled traffic matched to the decryption policy rule is decrypted to clear text traffic. The clear text traffic is blocked and restricted based on the decryption profile attached to the policy and the firewall security policy, and traffic is then re-encrypted. Depending on whether the server certificate is signed by a trusted CA, either (Recommended) use an enterprise CA-signed certificate as the forward trust certificate, or use a self-signed certificate as the forward trust certificate. - Generate a Certificate Signing Request (CSR) for the enterprise CA to sign and validate: - Select Device > Certificate Management > Certificates and click Generate. - Enter a Certificate Name, such as my-fwd-proxy. - In the Signed By drop-down, select External Authority (CSR). - (Optional) If your enterprise CA requires it, add Certificate Attributes to further identify the firewall details, such as Country or Department. - Click OK to generate the CSR; after the enterprise CA signs it, import the certificate onto the firewall in the next step. - Import the enterprise CA-signed certificate onto the firewall: - Select Device > Certificate Management > Certificates and click Import. - Enter the pending Certificate Name exactly (in this case, my-fwd-proxy). - Click the certificate name, in this case my-fwd-proxy, to enable it as a Forward Trust Certificate to be used for SSL Forward Proxy decryption. - Click OK to save the enterprise CA-signed forward trust certificate. - Generate a new certificate: - Select Device > Certificate Management > Certificates. - Click Generate at the bottom of the window. - Enter a Certificate Name, such as my-fwd-trust. - Enter a Common Name, such as 192.168.2.1. This should be the IP or FQDN that will appear in the certificate. In this case, we are using the IP of the trust interface. Avoid using spaces in this field. - Leave the Signed By field blank. - Click the Certificate Authority check box to enable the firewall to issue the certificate. Selecting this check box creates a certificate authority (CA) on the firewall that is imported to the client browsers, so clients trust the firewall as a CA. - Generate the certificate.
- Click the new certificate my-fwd-trust to modify it and enable the certificate to be a Forward Trust Certificate. - Click OK to save the self-signed forward trust certificate. - Distribute the forward trust certificate to client system certificate stores. If you do not install the forward trust certificate on client systems, users will see certificate warnings for each SSL site they visit. If you are using an enterprise CA-signed certificate as the forward trust certificate for SSL Forward Proxy decryption, and the client systems already have the enterprise CA added to the local trusted root CA list, you can skip this step. On a firewall configured as a GlobalProtect portal: This option is supported with Windows and Mac client OS versions, and requires GlobalProtect agent 3.0.0 or later to be installed on the client systems. Without GlobalProtect: Export the forward trust certificate for import into client systems by highlighting the certificate and clicking Export at the bottom of the window. Choose PEM format, and do not select the Export private key option. Then import it into the browser trusted root CA list on the client systems in order for the clients to trust it. When importing to the client browser, ensure the certificate is added to the Trusted Root Certification Authorities certificate store. On Windows systems, the default import location is the Personal certificate store. You can also simplify this process by using a centralized deployment, such as an Active Directory Group Policy Object (GPO). - Select Network > GlobalProtect > Portals and then select an existing portal configuration or Add a new one. - Select Agent and then select an existing agent configuration or Add a new one. - Add the SSL Forward Proxy forward trust certificate to the Trusted Root CA section. - Select Install in Local Root Certificate Store so that the GlobalProtect portal automatically distributes the certificate and installs it in the certificate store on GlobalProtect client systems. - Click OK twice. - Configure the forward untrust certificate. - Click Generate at the bottom of the certificates page. - Enter a Certificate Name, such as my-fwd-untrust. Click the new certificate to modify it and enable the Forward Untrust Certificate option. Do not export the forward untrust certificate for import into client systems. If the forward untrust certificate is imported on client systems, the users will not see certificate warnings for SSL sites with untrusted certificates. - Click OK to save. - (Optional) Set the key size of the SSL Forward Proxy certificates that the firewall presents to clients. By default, the firewall determines the key size to use based on the key size of the destination server certificate. - Create a Decryption Policy Rule to define traffic for the firewall to decrypt. - Select Policies > Decryption, Add or modify an existing rule, and define the traffic to be decrypted. - Select Options and: - Set the rule Action to Decrypt matching traffic. - Set the rule Type to SSL Forward Proxy. - (Optional) Select decryption exceptions to disable decryption for certain types of traffic.
https://docs.paloaltonetworks.com/pan-os/7-1/pan-os-admin/decryption/configure-ssl-forward-proxy.html
2021-02-25T00:08:54
CC-MAIN-2021-10
1614178349708.2
[]
docs.paloaltonetworks.com
The application used by Teradata DWM to connect to the Teradata Database. A collection of callable service routines that provide an interface between an application and the MTDP (for network-attached clients) or TDP (for mainframe-attached clients). CLI builds parcels that are sent to Teradata Database and provides the application with a pointer to each of the parcels returned from Teradata Database. When used with workstation-attached clients, CLIv2 contains the following components: - CLI (Call-Level Interface) - MTDP (Micro Teradata Director Program) - MOSI (Micro Operating System Interface)
https://docs.teradata.com/r/e5CVO7qajjNgUMRt1m8Ylw/viFXbqsymzJrFVqLA0sJ9w
2021-02-25T00:10:46
CC-MAIN-2021-10
1614178349708.2
[]
docs.teradata.com
Version 9.3.2 Important!! Read This Document Before Attempting To Install Or Use This Product! New Features of SIOS Protection Suite for Linux Version 9 Bug Fixes The following is a list of the latest bug fixes and enhancements. Client Platforms and Browsers The SPS web client can run on any platform that provides support for Java Runtime Environment JRE 8 update 51. The currently supported browsers are Firefox (Firefox 51 or earlier).
https://docs.us.sios.com/spslinux/9.3.2/en/topic/sios-protection-suite-for-linux-release-notes
2021-02-25T00:08:01
CC-MAIN-2021-10
1614178349708.2
[]
docs.us.sios.com
Empty Set Dollar Basics What is Empty Set Dollar? Empty Set Dollar (ESD) is an algorithmic stablecoin built to be the reserve currency of Decentralized Finance. It has three key features: - Stability - ESD uses an algorithmic approach to maintaining price stability around a 1 USDC target. This approach relies on a tuned incentivization mechanism to reward actors who promote stability within the protocol. - Composability - Even with a dynamic system supply, Empty Set Dollar adheres to the ERC-20 token standard. This makes it work seamlessly across the decentralized finance infrastructure and reduces the likelihood of unforeseen bugs in integrated protocols. - Decentralisation - Since day one Empty Set Dollar has had completely decentralized on-chain governance. Additionally, the protocol launched with 0 initial supply and no pre-mine for the anonymous founding team. How does ESD differ from other stablecoins? Empty Set Dollar's protocol was designed by taking elements from numerous pre-existing protocols to produce a balanced protocol that avoids the pitfalls of other protocol designs. The resulting protocol sidesteps the centralisation risks of USDC, USDT, & TUSD, attempts to avoid AMPL & BASED’s "death spirals" and the over-collateralisation requirements of sUSD & DAI, and, most importantly, integrates seamlessly with existing DeFi protocols. How does ESD become a sustainably useful token? For ESD to become a sustainably useful stablecoin like USDT or DAI, it must begin to be accepted as currency by DeFi and other applications on the Ethereum protocol. In periods of volatility, the token's utility may be diminished. However, as the protocol matures, volatility will decline, increasing its utility. Who created Empty Set Dollar? The original founding team is anonymous. However, if you'd like to contact them you can email them here: [email protected] Who controls Empty Set Dollar? Since launch, Empty Set Dollar has had on-chain governance. This means that any changes or upgrades to the protocol need to be voted on by the community of token holders before they are enacted.
https://docs.emptyset.finance/faqs/basics
2021-02-24T22:46:21
CC-MAIN-2021-10
1614178349708.2
[array(['/esd-comparison.png', None], dtype=object)]
docs.emptyset.finance
The name of the current network Scene. This is populated if the NetworkManager is doing Scene management. This should not be changed directly. Calls to ServerChangeScene() cause this to change. New clients that connect to a server will automatically load this Scene.
https://docs.unity3d.com/2018.3/Documentation/ScriptReference/Networking.NetworkManager-networkSceneName.html
2021-02-25T00:10:05
CC-MAIN-2021-10
1614178349708.2
[]
docs.unity3d.com
7. Modes of Operation¶ The core supports the following standard modes of operation: Debug: This is the highest level of operation, which provides access to all features of the core and the system. Typically, this mode is used for software development and bring-up phases. This mode is available only if the debugging option is enabled at design time. Machine: This is the highest mode of software execution and is mandatory in all variants generated by the core. The code running in machine mode is inherently trusted and has access to all implementation resources. User: This is the lowest mode of operation, where user applications are executed.
https://chromitem-soc.readthedocs.io/en/0.9.9/modes.html
2021-02-24T23:57:53
CC-MAIN-2021-10
1614178349708.2
[]
chromitem-soc.readthedocs.io
This guide reflects the old console for Amazon SES. For information about the new console for Amazon SES, see the new Amazon Simple Email Service Developer Guide. Using dedicated IP addresses with Amazon SES When you create a new Amazon SES account, your emails are sent from IP addresses that are shared with other Amazon SES users. For an additional monthly charge, you can lease dedicated IP addresses that are reserved for your exclusive use. If you don't plan to send large volumes of email on a regular and predictable basis, we recommend that you use shared IP addresses. If you use dedicated IP addresses in situations where you are sending low volumes of mail, or if your sending patterns are highly irregular, you might experience deliverability issues. Ease of Setup If you choose to use shared IP addresses, then you don't need to perform any additional configuration. Your Amazon SES account is ready to send email as soon as you verify an email address and move out of the sandbox. If you choose to lease dedicated IP addresses, you have to submit a request and optionally configure dedicated IP pools. Reputation Managed by AWS IP address reputations are based largely on historical sending patterns and volume. An IP address that sends consistent volumes of email over a long period of time typically has a good reputation. Shared IP addresses are used by several Amazon SES customers. Together, these customers send a large volume of email. AWS carefully manages this outbound traffic in order to maximize the reputations of the shared IP addresses. If you use dedicated IP addresses, it is your responsibility to maintain your sender reputation by sending consistent and predictable volumes of email. If you would like to see Smart Network Data Services (SNDS) data for your dedicated IPs, see SNDS metrics for dedicated IPs for more information. Predictability of Sending Patterns An IP address with a consistent history of sending email has a better reputation than one that suddenly starts sending out large volumes of email with no prior sending history. If your email sending patterns are irregular—that is, they don't follow a predictable pattern—then shared IP addresses are probably a better fit for your needs. When you use shared IP addresses, you can increase or decrease your email sending patterns as the situation demands. If you use dedicated IP addresses, you must warm up those addresses by sending an amount of email that gradually increases every day. The process of warming up new IP addresses is described in Warming up Dedicated IP Addresses. Once your dedicated IP addresses are warmed up, you must then maintain a consistent sending pattern. Volume of Outbound Email Dedicated IP addresses are best suited for customers who send large volumes of email. Most internet service providers (ISPs) only track the reputation of a given IP address if they receive a significant volume of mail from that address. For each ISP with which you want to cultivate a reputation, you should send several hundred emails within a 24-hour period at least once per month. In some cases, you may be able to use dedicated IP addresses if you don't send large volumes of email. For example, dedicated IP addresses may work well if you send to a small, well-defined group of recipients whose mail servers accept or reject email using a list of specific IP addresses, rather than IP address reputation. Additional Costs The use of shared IP addresses is included in the standard Amazon SES pricing.
Leasing dedicated IP addresses incurs an extra monthly cost beyond the standard costs associated with sending email using Amazon SES. Each dedicated IP address incurs a separate monthly charge. For pricing information, see the Amazon SES pricing page Control over Sender Reputation When you use dedicated IP addresses, your Amazon SES account is the only one that is able to send email from those addresses. For this reason, the sender reputation of the dedicated IP addresses that you lease is determined by your email sending practices. Ability to Isolate Sender Reputation By using dedicated IP addresses, you can isolate your sender reputation for different components of your email program. If you lease more than one dedicated IP address for use with Amazon SES, you can create dedicated IP pools—groups of dedicated IP addresses that can be used for sending specific types of email. For example, you can create one pool of dedicated IP addresses for sending marketing email, and another for sending transactional email. To learn more, see Creating Dedicated IP Pools. Known, Unchanging IP Addresses When you use dedicated IP addresses, you can find the values of the addresses that send your mail in the Dedicated IPs page of the Amazon SES console. Dedicated IP addresses don't change. With shared IP addresses, you don't know the IP addresses that Amazon SES uses to send your mail, and they can change at any time.
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/dedicated-ip.html
2021-02-24T23:22:23
CC-MAIN-2021-10
1614178349708.2
[]
docs.aws.amazon.com
Tracks in Locus Map generally refer to both tracks and routes. Tracks and routes are sets of GPS information designed to help you in your navigation. There are a few key differences to keep in mind when using routes and tracks: Tracks act like breadcrumb trails, allowing you to see where you or another individual traveled in the past. Tracks contain track points, not waypoints or points of interest. They provide a record of where you have been, and when, so you can later determine your path and speed. Locus Map can record tracks and import already recorded tracks from other sources. Routes are generally made up of a series of significant points along your path. Locus Map will tell you the bearing and distance to the next point in sequence as you navigate along your route. Each point is usually named (in fact, a route is usually just a sequence of waypoints). Routes can be planned directly in Locus Map point by point, or Locus Map can calculate them for you. Locus Map can also transform a previously recorded track into a route and navigate along it.
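The same distinction is visible in the GPX files that apps like Locus Map import and export: tracks carry recorded track points, while routes carry an ordered list of (usually named) points. The sketch below uses the third-party gpxpy library purely for illustration (it is not part of Locus Map, and the file name is a placeholder):

import gpxpy

# Parse a GPX file exported from a mapping app; the filename is a placeholder.
with open("my_trip.gpx") as gpx_file:
    gpx = gpxpy.parse(gpx_file)

# Tracks: breadcrumb trails of recorded track points (position plus time).
for track in gpx.tracks:
    for segment in track.segments:
        print(f"Track {track.name!r}: segment with {len(segment.points)} recorded points")

# Routes: ordered sequences of significant, usually named, points to navigate between.
for route in gpx.routes:
    names = [point.name for point in route.points]
    print(f"Route {route.name!r} via {names}")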
https://docs.locusmap.eu/doku.php?id=manual:user_guide:tracks:about
2021-02-25T00:14:53
CC-MAIN-2021-10
1614178349708.2
[]
docs.locusmap.eu
Install the .pak file of the management pack for storage devices to add the management pack as a solution. - On the Solutions page, click the Add icon. - On the Select a Solution to Install page from the Add Solution wizard, browse to the .pak file of the vRealize Operations Manager Management Pack for Storage Devices and click Upload. When the Storage Devices management pack file is uploaded, you see details about the management pack. - After the upload is complete, click Next. - On the End User License Agreement page, accept the license agreement and click Next. The installation of the management pack starts. You see its progress on the Install Solution page. - After the installation is complete, click Finish on the Install Solution page. Results The Management Pack for Storage Devices solution appears on the Solutions page of the vRealize Operations Manager user interface.
https://docs.vmware.com/en/VMware-Validated-Design/4.3/com.vmware.vvd.sddc-deploya.doc/GUID-94532850-4981-446E-B3CC-DB30BBC30AD7.html
2021-02-25T00:17:57
CC-MAIN-2021-10
1614178349708.2
[]
docs.vmware.com
Extending BMC Remedy Developer Studio Developer Studio is composed of Eclipse plug-ins, which are modules of code that perform various functions. Some of these plug-ins have public extension points, which are ports through which they expose their functionality to other plug-ins and indicate which class or method to call to use that functionality. To add functionality to Developer Studio, you can create custom plug-ins with extensions that hook into these extension points. Through these connections, custom plug-ins can exchange API calls with Developer Studio and the AR System server. Important To create plug-ins for Developer Studio, you must be familiar with Eclipse plug-in development and Java. Although BMC Customer Support is available to answer questions about BMC plug-ins and APIs, it cannot provide help with general Eclipse and Java issues that you encounter while developing custom plug-ins. This section contains information about adding custom functionality to BMC Remedy Developer Studio: - Creating plug-ins to extend BMC Remedy Developer Studio - Prerequisites for creating plug-ins - Extension points for BMC Remedy Developer Studio - Developer Studio Java API - Installation directory for the BMC Remedy Developer Studio plug-ins Note This feature does not apply to BMC Remedy AR System release 7.5.00 or earlier.
https://docs.bmc.com/docs/ars1908/extending-bmc-remedy-developer-studio-866349668.html
2020-07-02T19:51:46
CC-MAIN-2020-29
1593655879738.16
[]
docs.bmc.com
toyplot.data module¶ Classes and functions for working with raw data. - class toyplot.data.Table(data=None, index=False)¶ Encapsulates an ordered, heterogeneous collection of labelled data series. matrix()¶ Convert the table to a matrix (2D numpy array). The data type of the returned array is chosen based on the types of the columns within the table. Tables containing a homogeneous set of column types will return an array of the same type. If the table contains one or more string columns, the results will be an array of strings. toyplot.data.minimax(items)¶ Compute the minimum and maximum of an arbitrary collection of scalar- or array-like items. The items parameter must be an iterable containing any combination of None, scalars, numpy arrays, or numpy masked arrays. None, NaN, masked values, and empty arrays are all handled correctly. Returns (None, None) if the inputs don’t contain any usable values. toyplot.data.read_csv(fobj, convert=False)¶ Load a CSV (delimited text) file. Notes read_csv() is a simple tool for use in demos and tutorials. For more full-featured delimited text parsing, you should consider the csv module included in the Python standard library, or functionality provided by numpy or Pandas.
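A short usage sketch of the pieces documented above; the column names and values are made up, and read_csv is fed an in-memory file object here only to keep the example self-contained (a path to a real file works the same way):

import io
import numpy
import toyplot.data

# Build a table from labelled columns, then view it as a 2D numpy array.
table = toyplot.data.Table()
table["length"] = numpy.arange(5)
table["width"] = numpy.arange(5) * 2.0
matrix = table.matrix()  # homogeneous numeric columns yield a numeric array

# Compute the global minimum and maximum across a mixed collection of items.
items = [None, 3, numpy.array([1.5, 7.0]), numpy.ma.MaskedArray([4.0, 9.0], mask=[False, True])]
low, high = toyplot.data.minimax(items)

# Load a small CSV (read_csv is intended for demos and tutorials).
csv_file = io.StringIO("x,y\n1,2\n3,4\n")
demo = toyplot.data.read_csv(csv_file, convert=True)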
https://toyplot.readthedocs.io/en/latest/toyplot.data.html
2020-07-02T18:44:55
CC-MAIN-2020-29
1593655879738.16
[]
toyplot.readthedocs.io
Release workloads After a collection of workloads and their supporting assets have been deployed to the cloud, they must be prepared before they can be released. In this phase of the migration effort, the collection of workloads is load tested and tested with the business. The workloads are then optimized and documented. Once the business and IT teams have reviewed and signed off on workload deployments, those workloads can be released or handed off to governance, security, and operations teams for ongoing operations. The objective of "release workloads" is to prepare migrated workloads for promotion to production usage. Definition of done The optimization process is complete when a workload has been properly configured, sized, and deployed to production. Accountability during optimization The cloud adoption team is accountable for the entire optimization process. However, members of the cloud strategy team, the cloud operations team, and the cloud governance team should also be responsible for activities within this process. Responsibilities during optimization In addition to the high-level accountability, there are actions that an individual or group needs to be directly responsible for. The following are a few activities that require assignments to responsible parties: - Business testing. Resolve any compatibility issues that prevent the workload from completing its migration to the cloud. - Power users from within the business should participate heavily in testing of the migrated workload. Depending on the degree of optimization attempted, multiple testing cycles may be required. - Business change plan. Development of a plan for user adoption, changes to business processes, and modification to business KPIs or learning metrics as a result of the migration effort. - Benchmark and optimize. Study of the business testing and automated testing to benchmark performance. Based on usage, the cloud adoption team refines sizing of the deployed assets to balance cost and performance against expected production requirements. - Ready for production. Prepare the workload and environment for the support of the workload's ongoing production usage. - Promote. Redirect production traffic to the migrated and optimized workload. This activity represents the completion of a release cycle. In addition to core activities, there are a few parallel activities that require specific assignments and execution plans: - Decommission. Generally, cost savings can be realized from a migration when the previous production assets are decommissioned and properly disposed of. - Retrospective. Every release creates an opportunity for deeper learning and adoption of a growth mindset. When each release cycle is completed, the cloud adoption team should evaluate the processes used during migration to identify improvements. Next steps With a general understanding of the optimization process, you are ready to begin the process by establishing a business change plan for the candidate workload.
https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/migrate/migration-considerations/optimize/
2020-07-02T20:15:30
CC-MAIN-2020-29
1593655879738.16
[]
docs.microsoft.com
Support Departments
https://docs.whmcs.com/index.php?title=Support_Departments&oldid=27302
2020-07-02T19:26:06
CC-MAIN-2020-29
1593655879738.16
[]
docs.whmcs.com
The NServiceBus Azure Host package will be deprecated as of version 9. Instead use self-hosting in Azure Cloud Services. Refer to the upgrade guide for further details. Handling critical errors Versions 6.2.2 and above The Azure host is terminated on critical errors by default. When the host is terminated, the Azure Fabric controller will restart the host automatically. Versions 6.2.1 and below The Azure host is not terminated on critical errors by default and only shuts down the bus. This would cause the role not to process messages until the role host is restarted. To address this (probably undesired) behavior, implement a critical errors action that shuts down the host process instead. // Configuring how NServiceBus handles critical errors endpointConfiguration.DefineCriticalErrorAction( onCriticalError: context => { var output = $"Critical exception: '{context.Error}'"; log.Error(output, context.Exception); if (Environment.UserInteractive) { // so that user can see on their screen the problem Thread.Sleep(10000); } var fatalMessage = $"Critical error:\n{context.Error}\nShutting down."; Environment.FailFast(fatalMessage, context.Exception); return Task.CompletedTask; });
https://particular-docs.azurewebsites.net/nservicebus/hosting/cloud-services-host/critical?version=cloudserviceshost_7
2020-07-02T19:23:29
CC-MAIN-2020-29
1593655879738.16
[]
particular-docs.azurewebsites.net
NOTES: includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT). The Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA Support: Included with the Installation and installed by default: The development and target platforms have the same requirements, but you can select different components during the installation, based on your intended use. Hardware NOTE: Intel® Arria 10 FPGA (Mustang-F100-A10) SG1 is no longer supported. If you use Intel® Vision Accelerator Design with an Intel® Arria 10 FPGA (Mustang-F100-A10) Speed Grade 1, we recommend continuing to use the Intel® Distribution of OpenVINO™ toolkit 2020.1 release. NOTE: Intel® Arria® 10 GX FPGA Development Kit is no longer supported. For the Intel® Arria® 10 FPGA GX Development Kit configuration guide, refer to the 2019 R1.1 documentation. Processor Notes: Operating Systems: This guide provides step-by-step instructions on how to install the Intel® Distribution of OpenVINO™ toolkit with FPGA Support. Links are provided for each type of compatible hardware including downloads, initialization and configuration steps. The following steps will be covered: Download the Intel® Distribution of OpenVINO™ toolkit package file from Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA Support. Select the Intel® Distribution of OpenVINO™ toolkit for Linux with FPGA Support package from the dropdown menu. Downloads directory: l_openvino_toolkit_fpga_p_<version>.tgz. l_openvino_toolkit_fpga_p_<version> directory. l_openvino_toolkit_fpga_p_<version> directory: /home/<user>/inference_engine_samples /home/<user>/openvino_models Installation Notes: /opt/intel/openvino_fpga_2019.<version>/. /opt/intel/openvino/. The first core components are installed. Continue to the next section to install additional dependencies. These dependencies are required for: install_dependencies directory: The dependencies are installed. Intel Distribution of OpenVINO toolkit. You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The Intermediate Representation is a pair of files that describe the whole model: .xml: Describes the network topology .bin: Contains the weights and biases binary data For more information about the Model Optimizer, refer to the Model Optimizer Developer Guide. IMPORTANT: Internet access is required to execute the following steps successfully. If you have access to the Internet through the proxy server only, please make sure that it is configured in your environment. You can choose to either configure all supported frameworks at once OR configure one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies. NOTE: If you installed the Intel® Distribution of OpenVINO™ to the non-default install directory, replace /opt/intel with the directory in which you installed the software. Option 1: Configure all supported frameworks at the same time Option 2: Configure each framework separately Configure individual frameworks separately ONLY if you did not select Option 1 above. You are ready to compile the samples by running the verification scripts. To verify the installation and compile two samples, run the verification applications provided with the product on the CPU: car.png image in the demo directory.
When the verification script completes, you will have the label and confidence for the top-10 categories: Run the Inference Pipeline verification script: This verification script builds the Security Barrier Camera Demo application included in the package. This verification script uses the car_1.bmp image in the demo directory to show an inference pipeline using three of the pre-trained models. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute. First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate. When the verification script completes, you will see an image that displays the resulting frame with detections rendered as bounding boxes, and text: To learn about the verification scripts, see the README.txt file in /opt/intel/openvino/deployment_tools/demo. For a description of the Intel Distribution of OpenVINO™ pre-trained object detection and object recognition models, see Overview of OpenVINO™ Toolkit Pre-Trained Models. You have completed all required installation, configuration and build steps in this guide to use your CPU to work with your trained models. To use other hardware, see Install and Configure your Compatible Hardware below. Install your compatible hardware from the list of supported components below. NOTE: Once you've completed your hardware installation, you'll return to this guide to finish installation and configuration of the Intel® Distribution of OpenVINO™ toolkit. Links to install and configure compatible hardware Congratulations, you have finished the Intel® Distribution of OpenVINO™ toolkit installation for FPGA. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and other resources are provided below. Refer to the OpenVINO™ with FPGA Hello World Face Detection Exercise. Additional Resources To learn more about converting models, go to:
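Once the Model Optimizer has produced the .xml and .bin Intermediate Representation described above, it can be loaded and run from Python. The sketch below uses the IECore Inference Engine API that shipped with OpenVINO releases of this era; the file names, device string, and dummy input are placeholders, and newer releases expose a different (openvino.runtime) API:

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Load the Intermediate Representation produced by the Model Optimizer (placeholder paths).
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")  # or e.g. "HETERO:FPGA,CPU"

# Run inference on a dummy input shaped like the network's first input.
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
result = exec_net.infer({input_name: np.zeros(input_shape, dtype=np.float32)})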
https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux_fpga.html
2020-07-02T20:31:05
CC-MAIN-2020-29
1593655879738.16
[]
docs.openvinotoolkit.org
Support¶ The Toyplot documentation: Visit our GitHub repository for access to source code, issue tracker, and the wiki: We also have a continuous integration server that runs the Toyplot regression test suite anytime changes are committed to GitHub: And here are our test coverage stats, also updated automatically when modifications are committed: For Toyplot questions, comments, or suggestions, get in touch with the team at: Otherwise, you can contact Tim directly: - Timothy M. Shead - [email protected]
https://toyplot.readthedocs.io/en/latest/support.html
2020-07-02T19:15:25
CC-MAIN-2020-29
1593655879738.16
[array(['_images/toyplot.png', '_images/toyplot.png'], dtype=object)]
toyplot.readthedocs.io
- bump - dark - language - material - migrate - migrating - multiresolution - particlemtl - scannedmtl - simbiontmtl - utility To add a label to the list of required labels, choose '+ labelname' from Related Labels. To remove a label from the required labels, choose '- labelname' from above. - There are no pages at the moment.
https://docs.chaosgroup.com/label/VRAY4MAYA/bump+dark+language+material+migrate+migrating+multiresolution+particlemtl+scannedmtl+simbiontmtl+utility
2020-07-02T17:58:19
CC-MAIN-2020-29
1593655879738.16
[]
docs.chaosgroup.com
- direct - environm - environment - light - lighting - lightselect - probabilistic - rawlightin - rawtotallighting - variables To add a label to the list of required labels, choose '+ labelname' from Related Labels. To remove a label from the required labels, choose '- labelname' from above. - There are no pages at the moment.
https://docs.chaosgroup.com/label/direct+environm+environment+light+lighting+lightselect+probabilistic+rawlightin+rawtotallighting+variables
2020-07-02T19:32:32
CC-MAIN-2020-29
1593655879738.16
[]
docs.chaosgroup.com
New logo to the TechNet Wiki Ninjas pages on Facebook
https://docs.microsoft.com/en-us/archive/blogs/wikininjas/new-logo-to-the-technet-wiki-ninjas-pages-on-facebook
2020-07-02T19:06:03
CC-MAIN-2020-29
1593655879738.16
[]
docs.microsoft.com
Custom Domain You're always more than welcome to use a sub-domain at tadabase.io, for example: demo.tadabase.io to host your apps. But if you'd like to host your apps under your own domain, here's how you do that: - Within your app, click on Settings > Domain Settings - In the custom domain section enter the domain name you wish to use for this app. Keep in mind that app.mysite.com is not the same as mysite.com That's all you need to do on the app; the next step is to log onto wherever your DNS is hosted (i.e. GoDaddy, etc.) and create a new CNAME record and forward it to sites.tadabase.io Here are links with instructions on how to do this on a few popular hosting sites: - GoDaddy: - Google Domains: - Wix: Important Notes: If you'd like to have an SSL certificate assigned to your domain or would like to forward the root domain (mysite.com - vs app.mysite.com ) please email [email protected] and we'll give you the details necessary to accomplish that. Please note: SSL Certificates are not issued during trial periods. Feel free to use the CNAME of sites.tadabase.io instead during your trial. IMPORTANT: Never use our IP address in your A record. We use load balancing and auto-scaling technology, which will crash your site the moment anything changes. When creating a CNAME record, make sure to only put in your subdomain as the host, not the full domain. If you're planning on using app.example.com, only enter 'app' for the host value.
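Once the CNAME record exists, you can confirm that it resolves to sites.tadabase.io before pointing your app at it. The check below uses the third-party dnspython package, which is an assumption here (Tadabase does not require it), and app.example.com is a placeholder hostname:

import dns.resolver  # pip install dnspython

# Look up the CNAME for the subdomain you configured at your DNS host.
answers = dns.resolver.resolve("app.example.com", "CNAME")  # placeholder hostname

for record in answers:
    target = str(record.target).rstrip(".")
    print(f"app.example.com -> {target}")
    if target == "sites.tadabase.io":
        print("CNAME points at Tadabase as expected.")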
https://docs.tadabase.io/categories/manual/article/custom-domain
2020-07-02T19:42:51
CC-MAIN-2020-29
1593655879738.16
[]
docs.tadabase.io
museotoolbox.cross_validation.SpatialLeaveOneSubGroupOut.save_to_vector¶ SpatialLeaveOneSubGroupOut.save_to_vector(vector, field, group=None, out_vector=None)¶ Save each fold from the cross-validation to vector files. - Parameters vector (str.) – Path where the vector is stored. field (str.) – Name of the field containing the label. group (str, or None.) – Name of the field containing the group/subgroup (or None). out_vector (str.) – Path and filename to save the different results. - Returns list_of_output – List containing the number of folds * 2 outputs: a train and a validation vector for each fold. - Return type list
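A usage sketch of the signature documented above. The constructor arguments of SpatialLeaveOneSubGroupOut are not described on this page and are left as a placeholder, and the field and file names are invented for illustration:

from museotoolbox.cross_validation import SpatialLeaveOneSubGroupOut

# Placeholder constructor call: replace ... with the distance/group configuration
# required by SpatialLeaveOneSubGroupOut (not documented on this page).
cv = SpatialLeaveOneSubGroupOut(...)

# Write one train vector and one validation vector per fold.
outputs = cv.save_to_vector(
    vector="samples.gpkg",    # path where the labelled vector is stored (placeholder)
    field="class",            # field containing the label (placeholder)
    group="stand_id",         # field containing the group/subgroup (placeholder)
    out_vector="folds.gpkg",  # path and filename used for the results (placeholder)
)
print(len(outputs))  # number of folds * 2: train + validation for each fold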
https://museotoolbox.readthedocs.io/en/latest/modules/SpatialLeaveOneSubGroupOut/museotoolbox.cross_validation.SpatialLeaveOneSubGroupOut.save_to_vector.html
2020-07-02T20:24:30
CC-MAIN-2020-29
1593655879738.16
[]
museotoolbox.readthedocs.io
Knowledge base article describing how to extract .msu files and automate installing them The .NET Framework 3.5 installs service packs for the .NET Framework 2.0 and 3.0 behind the scenes as prerequisites. On Windows Vista and Windows Server 2008, the .NET Framework 2.0 and 3.0 are installed as OS components, which means that the 2.0 and 3.0 service packs are delivered as .msu files that contain OS update metadata files and payload. I was asked by a colleague today about how to view and extract the contents of the .NET Framework 2.0 SP1 and 3.0 SP1 .msu files and then automate the installation if needed. An .msu file is essentially a .zip file, so in the past I have used standard .zip viewing tools (such as WinZip) to view the contents. Fortunately, my colleague found a useful knowledge base article describing how to automate this type of scenario and I wanted to post a link to it here to hopefully make it easier to find. You can check out the article at this location: To summarize the information in that article, you can use the following syntax to extract the contents of an .msu file. I am using the .NET Framework 2.0 SP1 .msu file that is included as a prerequisite for the .NET Framework 3.5 in the below examples: expand -f:* "NetFX2.0-KB110806-v6000-x86.msu" %temp%\netfx20sp1 After extracting the contents of the .msu file, you can install it by using Package Manager (pkgmgr.exe) with a command line like the following: pkgmgr.exe /n:%temp%\netfx20sp1\Windows6.0-KB110806-v6000-x86.xml Alternatively, you can use Windows Update Standalone Installer (wusa.exe) to directly install a .msu file without extracting it by using a command line like the following: wusa.exe "NetFX2.0-KB110806-v6000-x86.msu" /quiet /norestart
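The same extract-and-install flow can be scripted. The sketch below simply drives the commands shown above from Python's standard subprocess module; it is illustrative rather than part of the knowledge base article, and reuses the file names from the examples:

import os
import subprocess

msu = "NetFX2.0-KB110806-v6000-x86.msu"
dest = os.path.join(os.environ["TEMP"], "netfx20sp1")
os.makedirs(dest, exist_ok=True)

# Extract the contents of the .msu file (equivalent to the expand command above).
subprocess.run(["expand", "-f:*", msu, dest], check=True)

# Install the extracted package with Package Manager.
xml_path = os.path.join(dest, "Windows6.0-KB110806-v6000-x86.xml")
subprocess.run(["pkgmgr.exe", f"/n:{xml_path}"], check=True)

# Or skip extraction entirely and hand the .msu to the Windows Update Standalone Installer:
# subprocess.run(["wusa.exe", msu, "/quiet", "/norestart"], check=True)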
https://docs.microsoft.com/en-us/archive/blogs/astebner/knowledge-base-article-describing-how-to-extract-msu-files-and-automate-installing-them
2020-07-02T20:26:48
CC-MAIN-2020-29
1593655879738.16
[]
docs.microsoft.com
Documentation | Tutorials | Porting map project Modern Tablets and Phones are very powerful devices. But regardless of what producers may say about capabilities, they are simply not as powerful as desktop computers. It is all about power consumption: do not expect that a low-powered device can really be compared to a workstation.
https://docs.tatukgis.com/DK11/guides:tutorials:portingmapproject
2020-07-02T18:18:28
CC-MAIN-2020-29
1593655879738.16
[]
docs.tatukgis.com
DeleteDataflowEndpointGroup Deletes a dataflow endpoint group. Request Syntax DELETE /dataflowEndpointGroup/dataflowEndpointGroupId HTTP/1.1 URI Request Parameters The request uses the following URI parameters. - dataflowEndpointGroupId UUID of a dataflow endpoint group. Required: Yes Request Body The request does not have a request body. Response Syntax HTTP/1.1 200 Content-type: application/json { "dataflowEndpointGroupId": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - dataflowEndpointGroupId UUID of a dataflow endpoint group. Type: String
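For reference, the AWS SDK for Python (boto3) exposes this action as delete_dataflow_endpoint_group on the Ground Station client. A minimal sketch, with a placeholder UUID:

import boto3

client = boto3.client("groundstation")

# Delete a dataflow endpoint group by its UUID (placeholder value).
response = client.delete_dataflow_endpoint_group(
    dataflowEndpointGroupId="11111111-2222-3333-4444-555555555555"
)
print(response["dataflowEndpointGroupId"])  # UUID of the deleted group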
https://docs.aws.amazon.com/ground-station/latest/APIReference/API_DeleteDataflowEndpointGroup.html
2020-07-02T20:33:26
CC-MAIN-2020-29
1593655879738.16
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Instantiates PublishRequest with the parameterized properties Namespace: Amazon.SimpleNotificationService.Model Assembly: AWSSDK.SimpleNotificationService.dll Version: 3.x.y.z The topic you want to publish to. If you don't specify a value for the TopicArn parameter, you must specify a value for the PhoneNumber or TargetArn parameters. The message you want to send. If you are publishing to a topic and action
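For comparison, the equivalent publish call in the AWS SDK for Python (boto3) takes the topic ARN, message, and subject as named parameters; the values below are placeholders:

import boto3

sns = boto3.client("sns")

# Publish a message to a topic; ARN, body, and subject are placeholder values.
response = sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:MyTopic",
    Message="Hello from Amazon SNS.",
    Subject="Test message",
)
print(response["MessageId"])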
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SNS/MPublishRequestctorStringStringString.html
2020-07-02T19:44:44
CC-MAIN-2020-29
1593655879738.16
[]
docs.aws.amazon.com
MKR Sharky mkr_sharky ID for board option in “platformio.ini” (Project Configuration File): [env:mkr_sharky] platform = ststm32 board = mkr_sharky You can override default MKR Sharky settings per build environment using the board_*** option, where *** is a JSON object path from the board manifest mkr_sharky.json. For example, board_build.mcu, board_build.f_cpu, etc. [env:mkr_sharky] platform = ststm32 board = mkr_sharky ; change microcontroller board_build.mcu = stm32wb55cg ; change MCU frequency board_build.f_cpu = 64000000L Uploading¶ MKR Sharky supports the following upload protocols: blackmagic dfu jlink mbed serial The default protocol is mbed. You can change the upload protocol using the upload_protocol option: [env:mkr_sharky] platform = ststm32 board = mkr_sharky upload_protocol = dfu MKR Sharky does not have an on-board debug probe and IS NOT READY for debugging. You will need to use/buy one of the external probes listed below.
https://docs.platformio.org/en/latest/boards/ststm32/mkr_sharky.html
2020-07-02T19:12:46
CC-MAIN-2020-29
1593655879738.16
[]
docs.platformio.org
About Voila Voila is a toolbox container that allows you to: - Manage a Voila environment that wraps all your resources and targets a specific Kubernetes cluster. - Prepare all the values and certificates that will be injected in the Helm charts to deploy the Microsegmentation Console. - Secure your certificates and other secrets. - Once activated, wrap Kubernetes and Helm commands to target your Kubernetes cluster. Your Voila environment contains the following: activate script: activates your Voila environment aporeto.yaml file: contains all the settings used for the deployment certs folder: contains all generated certificates conf.d folder: contains service configurations conf.voila file: contains the Voila settings By default, aporeto.yaml and all certificates are encrypted when the environment is not activated. A set of commands is available to perform administrative operations. See all commands available using: list-cmds The main command is doit. This is a wrapper tool that will just do it with the default configuration. This command will check your current setup and adapt the configuration, apply it and trigger the installation/upgrade if needed. It is idempotent and calls other commands under the hood, such as: upconf: the tool that keeps your environment settings up to date. snap: a tool that will analyze your current deployment and handle the install/update for you. apostate: checks the status of the current deployment. All the settings for your deployment are handled through YAML files that are then fed to the Helm charts to generate the Kubernetes resources to create. There are two commands to help you read and write those configurations: get_value set_value You may need to use Voila in a non-interactive way, for instance, to create a new Voila environment and deploy automatically. Consult the output of docker run -ti gcr.io/prismacloud-cns/voila:release-5.0.8 create -h to see what you can configure using environment variables. To execute a command or a script against an existing Voila environment: export VOILA_ENV_KEY=<KEY> cd ./microseg && ./activate run <cmd or script> Where: <KEY> is the Voila environment key used to unlock it. <cmd or script> is a command or script containing commands to run.
https://docs.aporeto.com/5.0/start/install-console/about-voila/
2021-11-27T02:17:23
CC-MAIN-2021-49
1637964358078.2
[]
docs.aporeto.com
Discussion Board Plus supports emoji for discussion topics and replies. By default, emoji are not enabled. Check the Enable Emoji box in the configuration tool pane to allow users to post topics and replies by either using the emoji picker control or by typing an emoji command into the body field. For a full list of emoji commands, click here. NOTE: Emoji are not visible in the Forum and Management views.
https://docs.bamboosolutions.com/document/enable_emoji/
2021-11-27T03:00:16
CC-MAIN-2021-49
1637964358078.2
[array(['/wp-content/uploads/2017/06/addemoji-a.png', 'addemoji-a.png'], dtype=object) array(['/wp-content/uploads/2017/06/addemoji-1.png', 'addemoji-1.png'], dtype=object) ]
docs.bamboosolutions.com
Inking¶ The first thing to realize about inking is that unlike anatomy, perspective, composition or color theory, you cannot compensate for lack of practice with study or reasoning. This is because all the magic in drawing lines happens from your shoulder to your fingers, very little of it happens in your head, and your lines improve with practice. On the other hand, this can be a blessing. You don’t need to worry about whether you are smart enough, or are creative enough to be a good inker. Just dedicated. Doubtlessly, inking is the Hufflepuff of drawing disciplines. That said, there are a few tips to make life easy: Pose¶ Notice how I mentioned up there that the magic happens between your shoulders and fingers? A bit weird, no? But perhaps you have heard of people talking about adopting a different pose for drawing. You can, in fact, make different strokes depending on which muscles and joints you use to make the movement: the fingers, the wrist and lower-arm muscles, the elbow and upper-arm muscles, or the shoulder and back muscles. Generally, the lower down the arm, the easier it is to make precise strokes, but also the less durable the joints are for long-term use. We tend to start off using our fingers and wrist a lot during drawing, because it’s easier to be precise this way. But it’s difficult to make long strokes, and furthermore, your fingers and wrist get tired far quicker. Your shoulders and elbows, on the other hand, are actually quite good at handling stress, and if you use your whole hand you will be able to make long strokes far more easily. People who do calligraphy need shoulder-based strokes to make those lovely flourishes (personally, I can recommend improving your handwriting as a way to improve inking), and train their arms so they can do both big and small strokes with the full arm. To control pressure effectively in this state, you should press your pinky against the tablet surface as you make your stroke. This will allow you to precisely judge how far the pen is removed from the tablet surface while leaving the position up to your shoulders. The pressure should then be applied by your elbow. So, there are not any secret rules to inking, but if there is one, it would be the following: The longer your stroke, the more of your arm you need to use to make the stroke. Stroke smoothing¶ So, if the above is the secret to drawing long strokes, that would be why people have been inking lovely drawings for years without any smoothing. Then, surely, it is decadence to use something like stroke smoothing, a short-cut for the lazy? Example of how a rigger brush can smooth the original movement (here in red)¶ Not really. To both, actually. Inkers have had a real-life tool that made it easier to ink: it’s called a rigger brush, which is a brush with very long hairs. Due to this length it sorta smooths out shakiness, and it is thus a favoured brush when inking at three in the morning. With some tablet brands, the position events being sent aren’t very precise, which is why we have basic smoothing to apply the tiniest bit of smoothing on tablet strokes. On the other hand, doing too much smoothing during the whole drawing can make your strokes very mechanical in the worst way. Having no jitter or tiny bumps removes a certain humanity from your drawings, and it can make it impossible to represent fabric properly. Therefore, it’s wise to train your inking hand, yet not to be too hard on yourself and refuse to use smoothing at all, as we all get tired, cold or have a bad day once in a while.
Stabilizer set to 50 or so should provide a little comfort while keeping the little irregularities. Bezier curves and other tools¶ So, you may have heard of a French curve. If not, it’s a piece of plastic representing a stencil. These curves are used to make perfectly smooth curves on the basis of a sketch. In digital painting, we don’t have the luxury of being able to use two hands, so you can’t hold a ruler with one hand and adjust it while inking with the other. For this purpose, we have instead Bezier curves, which can be made with the Bezier Curve Tool. You can even make these on a vector layer, so they can be modified on the fly. The downside of these is that they cannot have line-variation, making them a bit robotic. You can also make small bezier curves with the Assistant Tool, amongst the other tools there. Then, in the freehand brush tool options, you can tick Snap to Assistants and start a line that snaps to this assistant. Presets¶ So here are some things to consider with the brush-presets that you use: Anti-aliasing versus jagged pixels¶ You can turn any pixel brush into an aliased brush, by pressing the F5 key and ticking Sharpness. Texture¶ Do you make smooth ‘wet’ strokes? Or do you make textured ones? For the longest time, smooth strokes were preferred, as that would be less of a headache when entering the coloring phase. Within Krita there are several methods to color these easily, the colorize mask being the prime example, so textured becomes a viable option even for the lazy amongst us. Left: No texture, Center: Textured, Right: Predefined Brush tip.¶ Pressure curve¶ Of course, the nicest lines are made with pressure sensitivity, so they dynamically change from thick to thin. However, different types of curves on the pressure give different results. The typical example is a slightly concave line to create a brush that more easily makes thin lines. Ink_Gpen_25 is a good example of a brush with a concave pressure curve. This curve makes it easier to make thin lines.¶ Conversely, here’s a convex brush. The strokes are much rounder.¶ Fill_circle combines both into an s-curve; this allows for very dynamic brush strokes.¶ Pressure isn’t the only thing you can do interesting things with; adding an inverse convex curve to speed can add a nice touch to your strokes.¶ Preparing sketches for inking¶ So, you have a sketch and you wish to start inking it. Assuming you’ve scanned it in, or drawn it, you can try the following things to make it easier to ink. Opacity down to 10%¶ Put a white (just press the Backspace key) layer underneath the sketch. Turn down the opacity of the sketch to a really low number and put a layer above it for inking. Make the sketch colored¶ Put a layer filled with a color you like between the inking and sketch layer. Then set that layer to ‘screen’ or ‘addition’; this will turn all the black lines into the color! If you have a transparent background, or put this layer into a group, be sure to tick the alpha-inherit symbol! Make the sketch colored, alternative version¶ Or, on the layer, go to layer properties, and untick ‘blue’. This works more easily with a single-layer sketch, while the above works best with multi-layer sketches. Super-thin lines¶ If you are interested in super-thin lines, it might be better to make your ink at double or even triple the size you usually work at, and only use an aliased pixel brush. Then, when the ink is finished, use the fill tool to fill in flats on a separate layer, split the layer, and then resize to the original size.
This might be a slightly odd way of working, but it does make drawing thin lines trivial, and it's cheaper to buy RAM so you can make HUGE images than to spend hours trying to color the thin lines precisely, especially as the colorize mask will not be able to deal with thin anti-aliased lines very well. Tip David Revoy made a set of his own inking tips for Krita and explains them in this YouTube video.
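To make the pressure curve section above a bit more concrete, here is a small, self-contained Python sketch (an illustration only, not Krita code) that maps raw pen pressure through a "concave", a "convex" and an s-shaped response, in the spirit of the presets shown above:

def concave(p, gamma=2.0):
    # Output stays low until pressure is high: the thin-line-friendly shape.
    return p ** gamma

def convex(p, gamma=2.0):
    # Output rises quickly at low pressure: rounder, fuller strokes.
    return p ** (1.0 / gamma)

def s_curve(p):
    # Slow start, fast middle, slow end (smoothstep): very dynamic strokes.
    return p * p * (3.0 - 2.0 * p)

for p in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"pressure={p:.2f} concave={concave(p):.2f} convex={convex(p):.2f} s-curve={s_curve(p):.2f}")

The exponent of 2.0 is an arbitrary choice; the point is only that a response that stays low for most of the pressure range keeps lines thin, while one that rises early gives rounder strokes.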
https://docs.krita.org/en/tutorials/inking.html
2021-11-27T01:56:39
CC-MAIN-2021-49
1637964358078.2
[array(['../_images/Stroke_fingers.gif', 'Finger movement.'], dtype=object) array(['../_images/Stroke_wrist.gif', 'Wrist movement.'], dtype=object) array(['../_images/Stroke_arm.gif', 'Arm movement.'], dtype=object) array(['../_images/Stroke_shoulder.gif', 'Stroke shoulder movement.'], dtype=object) array(['../_images/Stroke_rigger.gif', 'Rigger brush demonstration.'], dtype=object) array(['../_images/Inking_patterned.png', 'Type of strokes.'], dtype=object) array(['../_images/Ink_gpen.png', 'Pressure curve for Ink Gpen preset brush.'], dtype=object) array(['../_images/Ink_convex.png', 'Convex inking brush.'], dtype=object) array(['../_images/Ink_fill_circle.png', 'Ink fill circle preset brush.'], dtype=object) array(['../_images/Ink_speed.png', 'Inverse convex to speed parameter.'], dtype=object) array(['../_images/Inking_aliasresize.png', 'Aliased resize.'], dtype=object) ]
docs.krita.org
LexUtils Lex Utils currently contains two utility functions that can be helpful in preparing content for use with Salience. These are language detection and HTML extraction. Like Salience, Lex Utils is provided as a .so (Linux) or .dll (Windows) with wrappers in C, Java, Python and .NET. The Lex Utils objects are not thread safe, but they are small and one can be created on each thread to support multithreaded environments. Language Detection LexLanguageUtilities provides the ability to classify text into one of the [languages supported by Salience]. Language detection only works for supported languages; content in other languages will not be correctly identified. First, a LexLanguageUtilities object must be created: If you wish to perform language detection on a machine that will not have Salience installed, please contact [Lexalytics Support] to obtain a languages.bin file that can be provided to all constructors in place of the data directory path. Once you have languages.bin, just give the full path to that file in place of the path to the Salience data directory. Once a session has been opened, a LanguageRecommendation object can be obtained for any text: The results are split into a best match and a list of how each possible language scored. Each language is provided as an (internal) code number, a language string, the score for that language, and what the optimal score for this text would have been. Text that scores very low compared to the optimal may be gibberish or an unsupported language. If you have text in multiple languages, you will get multiple language results with similar scores, with the ratio of score to perfect score approximating the ratio of each language. .NET: LanguageResult, LanguageRecommendation. Java: LanguageResult, LanguageRecommendation. Python. C: lxaLanguageRecommendation. C provides an additional function, lxaGetLanguageName(int nLanguageCode, char** acOutText), which transforms a language code into its name. Html Extraction LexHtmlUtilities removes HTML tags from a document and attempts to strip out unrelated content like ads and sidebars. Stripping out unrelated content will sometimes remove some of the article text, but not significant portions. This is particularly noticeable if you provide non-HTML content: in that case you'll get the text back with some sentences removed for being 'off topic', so separating out HTML and non-HTML content before using the HTML extractor is recommended. First, a LexHtmlUtilities object must be created: Then simply pass in content to the extraction function to get stripped text back:
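The object-creation and recommendation calls referred to above did not survive extraction on this page. As a rough orientation only, the following Python sketch shows the general shape of the workflow; the module name, constructor arguments and method/attribute names are assumptions inferred from the class names mentioned above, not the documented API of the Python wrapper, so check the wrapper shipped with your Salience installation for the exact signatures:

# All names below are hypothetical placeholders; consult the shipped Python wrapper.
from lexutils import LexLanguageUtilities, LexHtmlUtilities  # assumed module name

lang = LexLanguageUtilities("/path/to/salience/data")        # or the path to languages.bin
rec = lang.get_language_recommendation("Ceci n'est pas une pipe.")  # assumed method name
print(rec.best.language, rec.best.score)                     # best match
for result in rec.candidates:                                # per-language scores
    print(result.code, result.language, result.score, result.ideal_score)

html = LexHtmlUtilities("/path/to/salience/data")
print(html.extract("<html><body><p>Article text...</p></body></html>"))  # assumed method name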
https://salience-docs.lexalytics.com/docs/lexutils
2021-11-27T02:56:18
CC-MAIN-2021-49
1637964358078.2
[]
salience-docs.lexalytics.com
The following information applies if an application chooses single-buffering and a request is a single statement request. CLI will fetch the response in single-buffering mode only, even if double buffering is set and a reposition of the response is requested. Application - Supplies the session id and request id of the response required. - Supplies the length of and a pointer to the response area, if the request was submitted with the Move Mode processing option. - Sets the DBCAREA fields positioning_action, positioning_value, and positioning_stmt_number, as required. - Sets the function code to FETCH(DBFFET). - Calls DBCHCL. DBCHCL - The RCB for the active request is located. - The waitdata options flag and other cursor repositioning flags are passed to MTDP with the session and request identifiers. - MTDP is called to check for or wait for completion of the request. - An error is returned if keep_resp was not set to 'P' when the request was initiated, if the request did not complete successfully, or if the requested reposition is invalid. Otherwise, DBCHCL places a PclROWPOSITION/PclOFFSETPOSITION parcel in the request buffer for the request, supplies the session id and request id of the specified request in the MTDPCB, sets the MTDPCB function code to MTDPCONT, and calls MTDP. MTDP MTDP sends a Continue Request message to the database. It then returns control to DBCHCL. DBCHCL A success or error message is generated in the message field of the DBCAREA and DBCHCL returns to the application. Application If the return code or the error flag in the DBCAREA is not normal, the application makes the appropriate changes and re-submits the DBCAREA to DBCHCL, as above. Otherwise, it performs a fetch as for the non-reposition case and consumes the unit of response. Typically, the application loops back for another unit of response until the EndRequest parcel is obtained.
https://docs.teradata.com/r/bh1cB~yqR86mWktTVCvbEw/xrGhytWszEwFCJ48BwkEgQ
2021-11-27T03:48:57
CC-MAIN-2021-49
1637964358078.2
[]
docs.teradata.com
Camera Data Node The Camera Data node is used to get information about the position of the object relative to the camera. This could be used, for example, to change the shading of objects further away from the camera, or to make custom fog effects. Inputs This node has no input sockets. Properties This node has no properties. Outputs - View Vector A camera-space vector from the camera to the shading point. - View Z Depth The distance each pixel is away from the camera. - View Distance Distance from the camera to the shading point.
https://docs.blender.org/manual/it/dev/render/shader_nodes/input/camera_data.html
2021-11-27T03:20:50
CC-MAIN-2021-49
1637964358078.2
[array(['../../../_images/render_shader-nodes_input_camera-data_node.png', '../../../_images/render_shader-nodes_input_camera-data_node.png'], dtype=object) ]
docs.blender.org
Measuring and Understanding the Performance of Your SSIS Packages in the Enterprise (SQL Server Video) Video Summary. Video Acknowledgements Thank you to Thomas Kejser for contributing to the material for the series, SSIS: Designing and Tuning for Performance SQL Server Video Series. This video is the first one in the series. Thank you to Carla Sabotta and Douglas Laudenschlager for their guidance and valuable feedback. See Also
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/dd795223(v=sql.100)
2021-11-27T03:05:34
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
Anti-ransomware overview Contributors Beginning with ONTAP 9.10.1, the anti-ransomware feature uses workload analysis in NAS (NFS and SMB) environments to proactively detect and warn about abnormal activity that might indicate a ransomware attack. When an attack is suspected, anti-ransomware also creates new Snapshot backups, in addition to existing protection from scheduled Snapshot copies. The anti-ransomware feature requires the Multi-tenant Encryption Key Management (MT_EK_MGMT) license, which is available in the security and compliance bundle. ONTAP ransomware protection strategy An effective ransomware detection strategy should include more than a single layer of protection. An analogy would be the safety features of a vehicle. You wouldn’t want to rely on a single feature, such as a seatbelt, to completely protect you in an accident. Air bags, anti-lock brakes, and forward-collision warning are all additional safety features that will lead to a much better outcome. Ransomware protection should be viewed in the same way. While ONTAP includes features like FPolicy, Snapshot copies, SnapLock, and Active IQ Digital Advisor to help protect from ransomware, the focus of this content is the ONTAP anti-ransomware on-box feature with machine-learning capabilities. To learn more about ONTAP’s other anti-ransomware features, see: TR-4572: NetApp Solution for Ransomware. What ONTAP anti-ransomware detects There are two types of ransomware attacks: Denial of service to files by encrypting data. The attacker withholds access to this data unless a ransom is paid. Theft of sensitive proprietary data. The attacker threatens to release this data to the public domain unless a ransom is paid. ONTAP ransomware protection addresses the first type, with an anti-ransomware detection mechanism that is based on: Identification of the incoming data as encrypted or plaintext. Analytics, which detects High data entropy (an evaluation of the randomness of data in a file) A surge in abnormal volume activity with data encryption An extension that does not conform to the normal extension type How to recover data in ONTAP after a ransomware attack When an attack is suspected, the system takes a volume Snapshot copy at that point in time and locks that copy. If the attack is confirmed later, the volume can be restored to this proactively taken snapshot, minimizing the data loss. Locked Snapshot copies cannot be deleted by normal means. However, if you decide later to mark the attack as a false positive, the locked copy will be deleted. With the knowledge of the affected files and the time of attack, it is possible to selectively recover the affected files from various Snapshot copies, rather than simply reverting the whole volume to one of the snapshots. Anti-ransomware thus builds on proven ONTAP data protection and disaster recovery technology to respond to ransomware attacks. See the following topics for more information on recovering data.
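As a side note on the entropy signal listed above: encrypted data looks close to random, so its byte-level entropy approaches the 8-bits-per-byte maximum, while ordinary plaintext scores noticeably lower. The following Python sketch is a toy illustration of that idea only, not ONTAP's detection logic:

import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    # Shannon entropy in bits per byte (0.0 to 8.0).
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in Counter(data).values())

print(byte_entropy(b"hello hello hello hello"))  # repetitive plaintext: low entropy
print(byte_entropy(os.urandom(4096)))            # random bytes, like ciphertext: close to 8.0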
https://docs.netapp.com/us-en/ontap/anti-ransomware/
2021-11-27T02:32:28
CC-MAIN-2021-49
1637964358078.2
[]
docs.netapp.com
Increase the size of a FlexGroup volume Contributors You can increase the size of a FlexGroup volume either by adding more capacity to the existing constituents of the FlexGroup volume or by expanding the FlexGroup volume with new constituents. Sufficient space must be available in the aggregates. If you want to add more space, you can increase the collective size of the FlexGroup volume. Increasing the size of a FlexGroup volume resizes the existing constituents of the FlexGroup volume. If you want to improve performance, you can expand the FlexGroup volume. You might want to expand a FlexGroup volume and add new constituents in the following situations: New nodes have been added to the cluster. New aggregates have been created on the existing nodes. The existing constituents of the FlexGroup volume have reached the maximum FlexVol size for the hardware, and therefore the FlexGroup volume cannot be resized. In releases earlier than ONTAP 9.3, you must not expand FlexGroup volumes after a SnapMirror relationship is established. If you expand the source FlexGroup volume after breaking the SnapMirror relationship in releases earlier than ONTAP 9.3, you must perform a baseline transfer to the destination FlexGroup volume once again. Starting with ONTAP 9.3, you can expand FlexGroup volumes that are in a SnapMirror relationship. Increase the size of the FlexGroup volume by increasing the capacity or performance of the FlexGroup volume, as required: Whenever possible, you should increase the capacity of a FlexGroup volume. If you must expand a FlexGroup volume, you should add constituents in the same multiples as the constituents of the existing FlexGroup volume to ensure consistent performance. For example, if the existing FlexGroup volume has 16 constituents with eight constituents per node, you can expand the existing FlexGroup volume by 8 or 16 constituents. Example of increasing the capacity of the existing constituents The following example shows how to add 20 TB space to a FlexGroup volume volX: cluster1::> volume modify -vserver svm1 -volume volX -size +20TB If the FlexGroup volume has 16 constituents, the space of each constituent is increased by 1.25 TB. Example of improving performance by adding new constituents The following example shows how to add two more constituents to the FlexGroup volume volX: cluster1::> volume expand -vserver vs1 -volume volX -aggr-list aggr1,aggr2 The size of the new constituents is the same as that of the existing constituents.
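The 20 TB example above spreads the added capacity evenly across the constituents; here is a one-line Python illustration of that arithmetic (not an ONTAP tool, just the math):

def per_constituent_increase_tb(added_tb: float, constituents: int) -> float:
    # Each constituent of the FlexGroup grows by an equal share of the added space.
    return added_tb / constituents

print(per_constituent_increase_tb(20, 16))  # 1.25 TB added to each of the 16 constituents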
https://docs.netapp.com/us-en/ontap/flexgroup/increase-capacity-task.html
2021-11-27T03:37:34
CC-MAIN-2021-49
1637964358078.2
[]
docs.netapp.com
.../data/themes The themes directory contains the files that control theme extraction at all levels: documents, entities, and collections. The following files may be customized by users in the themes section of a user directory. Click the name of the file for more information below. rules.ptn This controls the POS rules that determine if a combination of words is a theme or not. It uses the Pattern File format. stopwords.dat This file is used to eliminate phrases that would match the POS rules contained within rules.ptn but are too common to be considered useful ('last week', for example). The file is a single-column .dat file. It can contain both single words and phrases (multi-word). Single words will act as a stop on any phrase containing them: 'hello' will stop any phrase appearing that contains the word 'hello'. Phrases will act as a stop on that particular phrase: 'next week' will stop 'next week'; it will not stop 'sometimes next week'. NOTE: stopwords.dat is case insensitive. normalization.dat NOTE: Salience does NOT ship with a normalization.dat by default. If you create a normalization.dat, it is possible to normalize multiple different themes into the same theme. This is useful if you want to do some sort of roll-up. For example, you could normalize poor sound, great sound and good quality speakers into 'audio quality'. To enable theme normalization, create a normalization.dat under /data/user/themes with each entry in the format: - [theme][normalized_form] NOTE: theme can be either the unstemmed or stemmed form.
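As a quick illustration of the [theme][normalized_form] entries described above, the following Python snippet writes an example normalization.dat; the themes are the audio-quality examples from this page, and the two-column, tab-separated layout is an assumption (confirm the exact delimiter for your release before relying on it):

# Writes an example normalization.dat; the tab delimiter is an assumption.
entries = [
    ("poor sound", "audio quality"),
    ("great sound", "audio quality"),
    ("good quality speakers", "audio quality"),
]

with open("normalization.dat", "w", encoding="utf-8") as f:  # place under /data/user/themes
    for theme, normalized in entries:
        f.write(f"{theme}\t{normalized}\n")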
https://salience-docs.lexalytics.com/docs/datathemes
2021-11-27T02:01:42
CC-MAIN-2021-49
1637964358078.2
[]
salience-docs.lexalytics.com
What's New with the 22 April 2015 Release With Clover Mobile and Clover Mini rolling out, the team has been hard at work tuning up our platform and building out improvements geared toward helping 3rd party developers succeed in building, testing, launching, and tracking their apps. API Updates In order to prevent service delays there are new constraints for certain API calls: - Items Reports queries are limited to a 9 week span - V3 Tax report queries are limited to a 92 day span - Cash log queries without explicit time filters will return results for the last 30 days POST requests to create LineItems will now honor "note" field values Dashboard Updates - We have improved our systems for validating uploaded APK packages. Please review the updated documentation about creating and naming your APK - In order for a developer's test merchant account to allow credit card processing, that merchant must be pointed at a "black hole" payment gateway. This is now done by default on all new test merchant accounts. - All apps are now free for sales demo devices - The developer Charges table now supports filtering by any column - including status, amount and type.
https://docs.clover.com/docs/whats-new-with-the-22-april-release
2022-06-25T08:45:03
CC-MAIN-2022-27
1656103034877.9
[]
docs.clover.com
mars.tensor.vstack# - mars.tensor.vstack(tup)[source]# Stack tensors in sequence vertically (row wise). This is equivalent to concatenation along the first axis after 1-D tensors of shape (N,) have been reshaped to (1,N). Rebuilds tensors divided by vsplit. This function makes most sense for tensors with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions concatenate, stack and block provide more general stacking and concatenation operations. - Parameters tup (sequence of tensors) – The tensors must have the same shape along all but the first axis. 1-D tensors must have the same length. - Returns stacked – The tensor formed by stacking the given tensors, will be at least 2-D. - Return type Tensor See also stack Join a sequence of tensors along a new axis. hstack Stack tensors in sequence horizontally (column wise). dstack Stack tensors in sequence depth wise (along third dimension). concatenate Join a sequence of tensors along an existing axis. vsplit Split tensor into a list of multiple sub-arrays vertically. block Assemble tensors from blocks. Examples >>> import mars.tensor as mt >>> a = mt.array([1, 2, 3]) >>> b = mt.array([2, 3, 4]) >>> mt.vstack((a,b)).execute() array([[1, 2, 3], [2, 3, 4]]) >>> a = mt.array([[1], [2], [3]]) >>> b = mt.array([[2], [3], [4]]) >>> mt.vstack((a,b)).execute() array([[1], [2], [3], [2], [3], [4]])
https://docs.pymars.org/en/latest/reference/tensor/generated/mars.tensor.vstack.html
2022-06-25T08:32:05
CC-MAIN-2022-27
1656103034877.9
[]
docs.pymars.org
Sysadmin¶ Installation¶ Libreant is written in Python and uses Elasticsearch as the underlying search engine. In the following sections there are step-by-step guides to install Libreant on different Linux-based operating systems: Debian & Ubuntu¶ System dependencies¶ Install Elasticsearch¶ The recommended way of installing Elasticsearch on a Debian-based distro is through the official APT repository. Note If you have any problems installing Elasticsearch, try to follow the official deb installation guide. In order to follow the Elasticsearch installation steps we need to install some common packages: sudo apt-get update && sudo apt-get install apt-transport-https wget gnupg ca-certificates Download and install the Public Signing Key for the Elasticsearch repo: wget -qO - | sudo apt-key add - Add the Elasticsearch repository: echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list And finally you can install the Elasticsearch package with: sudo apt-get update && sudo apt-get install openjdk-8-jre-headless procps elasticsearch Note The procps package provides the ps command that is required by the Elasticsearch startup script. Arch¶ Install all necessary packages: sudo pacman -Sy python2 python2-setuptools python2-virtualenv grep procps elasticsearch Note The procps and grep packages are required by the Elasticsearch startup script. Create a virtual env: virtualenv2 -p /usr/bin/python2 ve Install libreant and all Python dependencies: ./ve/bin/pip install libreant Execution¶ Start the Elasticsearch service: sudo service elasticsearch start Note If you want to automatically start Elasticsearch during bootup: sudo systemctl enable elasticsearch To execute libreant: ./ve/bin/libreant Upgrading¶ Generally speaking, to upgrade libreant you just need to: ./ve/bin/pip install -U libreant And restart your instance (see the Execution section). Some versions, however, could need additional actions. We will list them all in this section. Upgrade to version 0.5¶ libreant now supports Elasticsearch 2. If you were already using libreant 0.4, you were using Elasticsearch 1.x. You can continue using it if you want. The standard upgrade procedure is enough to have everything working. However, we suggest you upgrade to Elasticsearch 2 sooner or later. Step 2: upgrade elasticsearch¶ Just apply the steps in the Installation section as if it were a brand-new installation. Note If you are using Arch Linux, you've probably made pacman ignore elasticsearch package updates. In order to install the new Elasticsearch version you must remove the IgnorePkg elasticsearch line in /etc/pacman.conf before trying to upgrade. Step 3: upgrade DB contents¶ Libreant ships a tool that will take care of the upgrade. You can run it with ./ve/bin/libreant-db upgrade. This tool will give you information on the current DB status and ask you for confirmation before proceeding to real changes. This means that you can run it without worries; you can still answer "no" if you change your mind. The upgrade tool will ask you about converting entries to the new format, and upgrading the index mapping (in Elasticsearch jargon, this is somewhat similar to what a TABLE SCHEMA is in SQL).
https://libreant.readthedocs.io/en/latest/sysadmin.html
2022-06-25T08:00:30
CC-MAIN-2022-27
1656103034877.9
[]
libreant.readthedocs.io
PartiQL Insert Statements for DynamoDB Use the INSERT statement to add an item to a table in Amazon DynamoDB. You can only insert one item at a time; you cannot issue a single DynamoDB PartiQL statement that inserts multiple items. For information on inserting multiple items, see Performing Transactions with PartiQL for DynamoDB or Running Batch Operations with PartiQL for DynamoDB. Syntax Insert a single item. INSERT INTO table VALUE item Parameters table (Required) The table where you want to insert the data. The table must already exist. item (Required) A valid DynamoDB item represented as a PartiQL tuple. You must specify only one item. Each attribute name in the item is case sensitive and can be denoted with single quotation marks ('...') in PartiQL. String values are also denoted with single quotation marks ('...') in PartiQL. Return value This statement does not return any values. If the DynamoDB table already has an item with the same primary key as the primary key of the item being inserted, DuplicateItemException is returned. Examples INSERT INTO "Music" value {'Artist' : 'Acme Band','SongTitle' : 'PartiQL Rocks'}
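From Python, the same statement can be sent through the DynamoDB ExecuteStatement API; the boto3 sketch below uses parameter placeholders with the values from the example (the region is an assumption):

import boto3

client = boto3.client("dynamodb", region_name="us-east-1")  # region chosen for illustration

client.execute_statement(
    Statement="INSERT INTO \"Music\" VALUE {'Artist': ?, 'SongTitle': ?}",
    Parameters=[{"S": "Acme Band"}, {"S": "PartiQL Rocks"}],
)

If an item with the same primary key already exists, the call fails with a DuplicateItemException, as described above.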
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.insert.html
2022-06-25T08:29:13
CC-MAIN-2022-27
1656103034877.9
[]
docs.aws.amazon.com
AccountSettings The Amazon QuickSight settings associated with your AWS account. Contents In the following list, the required parameters are described first. - AccountName The "account name" you provided for the Amazon QuickSight subscription in your AWS account. You create this name when you sign up for Amazon QuickSight. It is unique in all of AWS and it appears only when users sign in. Type: String Required: No - DefaultNamespace The default Amazon QuickSight namespace for your AWS account. Type: String Length Constraints: Maximum length of 64. Pattern: ^[a-zA-Z0-9._-]*$ Required: No - Edition The edition of Amazon QuickSight that you're currently subscribed to: Enterprise edition or Standard edition. Type: String Valid Values: STANDARD | ENTERPRISE Required: No - NotificationEmail The main notification email for your Amazon QuickSight subscription. Type: String Required: No - PublicSharingEnabled A boolean that indicates whether or not public sharing is enabled on an Amazon QuickSight account. For more information about enabling public sharing, see UpdatePublicSharingSettings. Type: Boolean Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
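For instance, with the AWS SDK for Python (boto3), these settings can be read back through DescribeAccountSettings; the account ID and region below are placeholders:

import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")  # region is an assumption

settings = quicksight.describe_account_settings(AwsAccountId="111122223333")["AccountSettings"]
for field in ("AccountName", "Edition", "DefaultNamespace", "NotificationEmail", "PublicSharingEnabled"):
    print(field, settings.get(field))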
https://docs.aws.amazon.com/quicksight/latest/APIReference/API_AccountSettings.html
2022-06-25T09:08:39
CC-MAIN-2022-27
1656103034877.9
[]
docs.aws.amazon.com
Follow Path Constraint The Follow Path constraint places its owner onto a curve target object, and makes it move along this curve (or path). It can also affect its owner's rotation to follow the curve's bends, when the Follow Curve option is enabled. It could be used for complex camera travelling, a train on its rails (most other vehicles can also use "invisible" tracks), the links of a bicycle chain, etc. The owner is always evaluated in the global (world) space: Its location (as shown in the Transform panel) is used as an offset from its normal position on the path. E.g. if you have an owner with the (1.0, 1.0, 0.0) location, it will be one unit away from its normal position on the curve, along the X and Y axis. Hence, if you want your owner on its target path, clear its location Alt-G! This location offset is also proportionally affected by the scale of the target curve. Taking the same (1.0, 1.0, 0.0) offset as above, if the curve has a scale of (2.0, 1.0, 1.0), the owner will be offset two units along the X axis (and one along the Y one)… When the Follow Curve option is enabled, its rotation is also offset to the one given by the curve. E.g. if you want the Y axis of your object to be aligned with the curve's direction, it must be in its rest, non-constrained state, aligned with the global Y axis. Here again, clearing your owner's rotation Alt-R might be useful… The movement of the owner along the target curve/path may be controlled in two different ways: The simplest is to define the number of frames of the movement, in the Path Animation panel of the Curve tab, via the Frames number field, and its start frame via the constraint's Offset option (by default, start frame: 1 [= offset of 0], duration: 100). The second way, much more precise and powerful, is to define an Evaluation Time interpolation curve for the Target path (in the Graph Editor). See the Graph Editor chapter to learn more about F-Curves. If you do not want your owner to move along the path, you can give the target curve a flat Speed F-Curve (its value will control the position of the owner along the path). Follow Path is another constraint that works well with the Locked Track one. One example is a flying camera on a path. To control the camera's roll angle, you can use a Locked Track and a target object to specify the up direction, as the camera flies along the path. Note Follow Path & Clamp To. Do not confuse these two constraints. Both of them constrain the location of their owner along a curve, but Follow Path is an "animation-only" constraint, inasmuch as the position of the owner along the curve is determined by the time (i.e. current frame), whereas the Clamp To constraint determines the position of its owner along the curve using one of its location properties' values. Note You also need to keyframe Evaluation Time for the Path. Select the path, go to the Path Animation panel in the curve properties, set the overall frame to the first frame of the path (e.g. frame 1), set the value of Evaluation time to the first frame of the path (e.g. 1), right-click on Evaluation time, select create keyframe, set the overall frame to the last frame of the path (e.g. frame 100), set the value of Evaluation time to the last frame of the path (e.g. 100), right-click on Evaluation time, select create keyframe (for a scripted version of these steps, see the Python sketch at the end of this page). Options - Target Data ID used to select the constraint's target, which must be a curve object; the constraint is not functional (red state) when it has none.
See common constraint properties for more information. - Offset The number of frames to offset from the "animation" defined by the path (by default, from frame 1). - Forward Axis The axis of the object that has to be aligned with the forward direction of the path (i.e. tangent to the curve at the owner's position). - Up Axis The axis of the object that has to be aligned (as much as possible) with the world Z axis. In fact, with this option activated, the behavior of the owner shares some properties with the one caused by a Locked Track constraint, with the path as "axle" and the world Z axis as "magnet". - Fixed Position The object will stay locked to a single point somewhere along the length of the curve, regardless of time. - Curve Radius Objects are scaled by the curve radius. See Curve Editing. - Follow Curve If this option is not activated, the owner's rotation is not modified by the curve; otherwise, it is affected depending on the Forward and Up Axes. - Animate Path Adds an F-Curve with options for the start and end frame. ToDo: from above. - Influence Controls the percentage of effect the constraint has on the object. See common constraint properties for more information.
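The keyframing steps in the note above can also be scripted from Blender's Python console. This is a minimal sketch, assuming the scene already contains a constraint owner named "Cube" and a path object named "NurbsPath" (both names, and the frame numbers, are placeholders):

import bpy

obj = bpy.data.objects["Cube"]        # the constraint owner (placeholder name)
path = bpy.data.objects["NurbsPath"]  # the target curve (placeholder name)

# Add and configure the Follow Path constraint.
con = obj.constraints.new(type='FOLLOW_PATH')
con.target = path
con.use_curve_follow = True           # same as the Follow Curve option

# Keyframe the curve's Evaluation Time, as described in the note above.
curve = path.data
curve.use_path = True
curve.path_duration = 100
curve.eval_time = 1
curve.keyframe_insert(data_path="eval_time", frame=1)
curve.eval_time = 100
curve.keyframe_insert(data_path="eval_time", frame=100)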
https://docs.blender.org/manual/pt/dev/animation/constraints/relationship/follow_path.html
2022-06-25T08:08:12
CC-MAIN-2022-27
1656103034877.9
[array(['../../../_images/animation_constraints_relationship_follow-path_panel.png', '../../../_images/animation_constraints_relationship_follow-path_panel.png'], dtype=object) ]
docs.blender.org
Do not use locals(). Example: LOG.debug("volume %(vol_name)s: creating size %(vol_size)sG" % locals()) # BAD LOG.debug("volume %(vol_name)s: creating size %(vol_size)sG" % {'vol_name': vol_name, 'vol_size': vol_size}) # OKAY Use ‘raise’ instead of ‘raise e’ to preserve original traceback or exception being reraised: except Exception as e: ... raise e # BAD except Exception: ... raise # OKAY. For more information on creating unit tests and utilizing the testing infrastructure in OpenStack Ec2api, please read ec2api/testing/README.rst. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/ec2-api/rocky/hacking.html
2022-06-25T07:38:44
CC-MAIN-2022-27
1656103034877.9
[]
docs.openstack.org
mars.tensor.reshape# - mars.tensor.reshape(a, newshape, order='C')[source]# Gives a new shape to a tensor without changing its data. - Parameters a (array_like) – Tensor to be reshaped. newshape (int or tuple of ints) – The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D tensor of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the tensor and remaining dimensions. order ({'C', 'F', 'A'}, optional) – Read the elements of a using this index order, and place the elements into the reshaped array using this index order. ‘C’ means to read / write the elements using C-like index order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to read / write the elements using Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of indexing. ‘A’ means to read / write the elements in Fortran-like index order if a is Fortran contiguous in memory, C-like order otherwise. - Returns reshaped_array – This will be a new view object if possible; otherwise, it will be a copy. - Return type Tensor See also Tensor.reshape Equivalent method. Notes It is not always possible to change the shape of a tensor without copying the data. If you want an error to be raised when the data is copied, you should assign the new shape to the shape attribute of the array: >>> import mars.tensor as mt >>> a = mt.arange(6).reshape((3, 2)) >>> a.execute() array([[0, 1], [2, 3], [4, 5]]) You can think of reshaping as first raveling the tensor (using the given index order), then inserting the elements from the raveled tensor into the new tensor using the same kind of index ordering as was used for the raveling. >>> mt.reshape(a, (2, 3)).execute() array([[0, 1, 2], [3, 4, 5]]) >>> mt.reshape(mt.ravel(a), (2, 3)).execute() array([[0, 1, 2], [3, 4, 5]]) Examples >>> a = mt.array([[1,2,3], [4,5,6]]) >>> mt.reshape(a, 6).execute() array([1, 2, 3, 4, 5, 6]) >>> mt.reshape(a, (3,-1)).execute() # the unspecified value is inferred to be 2 array([[1, 2], [3, 4], [5, 6]])
https://docs.pymars.org/en/latest/reference/tensor/generated/mars.tensor.reshape.html
2022-06-25T08:46:23
CC-MAIN-2022-27
1656103034877.9
[]
docs.pymars.org
and higher Required and optional parameters for Android are listed below. Returns The variation the user was bucketed into, or null if setForcedVariation failed to force the user into the variation. Example import com.optimizely.ab.config.Variation; Variation variation = optimizelyClient.getForcedVariation("my_experiment_key", "user_123"); Source files The language/platform source file containing the implementation for Android is OptimizelyClient.java.
https://docs.developers.optimizely.com/experimentation/v3.1.0-full-stack/docs/get-forced-variation-android
2022-06-25T07:08:31
CC-MAIN-2022-27
1656103034877.9
[]
docs.developers.optimizely.com
Links¶ - Substrate.dev - Starting point for learning about Substrate, a Rust-based framework for developing blockchains. Moonbeam is developed using Substrate and uses many of the modules that come with it. - Polkadot.network - Learn about Polkadot, including the vision behind the network and how the system works, i.e., staking, governance, etc. - Polkadot-JS Apps - A web-based interface for interacting with Substrate based nodes, including Moonbeam. - Solidity Docs - Solidity is the main smart contract programming language supported by Ethereum and Moonbeam. The Solidity docs site is very comprehensive. - Remix - Web-based IDE for Solidity smart contract development that is compatible with Moonbeam. - Truffle - Development tools for Solidity, including debugging, testing, and automated deployment that is compatible with Moonbeam.
https://docs.moonbeam.network/learn/platform/links/
2022-06-25T07:46:44
CC-MAIN-2022-27
1656103034877.9
[]
docs.moonbeam.network
Smart Patch Tool¶ The smart patch tool allows you to seamlessly remove elements from the image. It does this by letting you draw over the area that contains the element you wish to remove; it will then attempt to use patterns already existing in the image to fill in the blank. You can see it as a smarter version of the clone brush. The smart patch tool has the following tool options: Accuracy¶ Accuracy indicates how many samples are taken, and thus how often the algorithm is run. A low accuracy will take few samples, but will also run the algorithm fewer times, making it faster. Higher accuracy will take many samples, making the algorithm run more often and give more precise results, but because it has to do more work, it is slower. Patch size¶ Patch size determines how large the pattern that gets sampled is. This is best explained with some testing, but if the surrounding image has mostly small elements, like branches, a small patch size will give better results, while a big patch size will be better for images with big elements, so they get reused as a whole.
https://docs.krita.org/en/reference_manual/tools/smart_patch.html
2022-06-25T08:48:19
CC-MAIN-2022-27
1656103034877.9
[array(['../../_images/Smart-patch.gif', '../../_images/Smart-patch.gif'], dtype=object) ]
docs.krita.org
Hi. I'm migrating from winForm to WPF and I have the following code which works very well: printDialog = new PrintDialog(); if (DialogResult.OK == printDialog.ShowDialog()) { try { PrintDocument pd = new PrintDocument(); pd.PrintPage += new PrintPageEventHandler(PrintImage); pd.PrinterSettings = printDialog.PrinterSettings; pd.Print(); } catch { } } Now in wpf it is indicated that there is an error in the line: pd.PrinterSettings = printDialog.PrinterSettings; So to test if the rest of the code works I commented it and it works very well, but obviously it always prints on the printer that the PC has configured by default. I tried to investigate in other threads how to solve this problem and the solution is supposedly the following: PrintDocument pd = new PrintDocument(); PrinterSettings ps = new PrinterSettings(); pd.PrintPage += new PrintPageEventHandler(PrintImage); printDialog.PrintQueue = new PrintQueue(new PrintServer(),"The exact name of my printer"); pd.Print(); My question is: How can I get the name of the printer? Before winForms I didn't need to do that, I imagine that printDialog took care of that at the moment that the user chose the printer. Any comments or suggestions are welcome.
https://docs.microsoft.com/en-us/answers/questions/17883/print-with-a-printer-that-is-not-the-one-configure.html
2022-06-25T08:50:04
CC-MAIN-2022-27
1656103034877.9
[]
docs.microsoft.com
3.15. Tracking saved posts¶ Dashboard Page of the plugin is designed to track saved, updated, and deleted posts. Using this page, you can also see which sites are currently active, the time when the next crawling will occur, and many more. Please refer to Dashboard Page for more information. The following video demonstrates Dashboard Page.
https://docs.wpcontentcrawler.com/1.11/guides/tracking-saved-posts.html
2022-06-25T07:38:53
CC-MAIN-2022-27
1656103034877.9
[]
docs.wpcontentcrawler.com
Key management AWS IoT SiteWise cloud key management By default, AWS IoT SiteWise uses AWS managed keys to protect your data in the AWS Cloud. You can update your settings to use a customer managed key to encrypt some data in AWS IoT SiteWise. You can create, manage, and view your encryption key through AWS Key Management Service (AWS KMS). AWS IoT SiteWise supports server-side encryption with customer managed keys stored in AWS KMS to encrypt the following data: Asset property values Aggregate values Other data and resources are encrypted using the default encryption with keys managed by AWS IoT SiteWise. This key is stored in the AWS IoT SiteWise account. For more information, see What is AWS Key Management Service? in the AWS Key Management Service Developer Guide. Enable encryption using customer managed keys To use customer managed keys with AWS IoT SiteWise, you need to update your AWS IoT SiteWise settings. To enable encryption using KMS keys Navigate to the AWS IoT SiteWise console . Choose Account Settings and choose Edit to open the Edit account settings page. For Encryption key type, choose Choose a different AWS KMS key. This enables encryption with customer managed keys stored in AWS KMS. Note Currently, you can only use customer managed key encryption for asset property values and aggregate values. Choose your KMS key with one of the following options: To use an existing KMS key – Choose your KMS key alias from the list. To create a new KMS key – Choose Create an AWS KMS key. Note This opens the AWS KMS dashboard. For more information about creating a KMS key, see Creating keys in the AWS Key Management Service Developer Guide. Choose Save to update your settings. AWS IoT Greengrass gateway key management AWS IoT SiteWise gateways run on AWS IoT Greengrass, and AWS IoT Greengrass core devices use public and private keys to authenticate with the AWS Cloud and encrypt local secrets, such as OPC-UA authentication secrets. For more information, see Key management in the AWS IoT Greengrass Version 1 Developer Guide.
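Coming back to the cloud-side setting described above, the same change can also be made through the AWS IoT SiteWise API. Here is a minimal boto3 sketch (the key ARN, account ID and region are placeholders) that switches the account to a customer managed key and reads the configuration back:

import boto3

sitewise = boto3.client("iotsitewise", region_name="us-east-1")  # region is an assumption

# Switch to a customer managed key (placeholder ARN).
sitewise.put_default_encryption_configuration(
    encryptionType="KMS_BASED_ENCRYPTION",
    kmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
)

# Verify the current configuration.
config = sitewise.describe_default_encryption_configuration()
print(config["encryptionType"], config.get("kmsKeyArn"))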
https://docs.aws.amazon.com/iot-sitewise/latest/userguide/key-management.html
2022-06-25T09:28:24
CC-MAIN-2022-27
1656103034877.9
[]
docs.aws.amazon.com
The Bitnami Helper Tool is a command line tool designed for executing frequently run commands in Bitnami stacks. This tool is located in the installation directory of the stack at /opt/bitnami. Run the Bitnami Helper Tool The Bitnami Helper Tool is included in every Bitnami Stack released since October 18th 2019. To check whether your stack includes it, check if it is present at /opt/bitnami/bnhelper-tool. To run the Bitnami Helper Tool, follow the instructions below: Connect to the server through SSH. Run these commands to run the Bitnami Helper Tool: $ cd /opt/bitnami $ sudo ./bnhelper-tool Custom configurations Automatically run the tool on login You can configure your server to run the Bitnami Helper Tool every time you access it. This way, you won't need to run any command to start the tool, and you can run the frequently used commands easily when you log in to the server. Follow the steps below: - Connect to the server through SSH. - Create a .ssh/rc file in the HOME folder of the user with the following content: sudo /opt/bitnami/bnhelper-tool - Exit the console and access the server console again. The Bitnami Helper Tool should run automatically. Add a custom action to the Bitnami Helper Tool The Bitnami Helper Tool gets the list of available commands from the /opt/bitnami/bnhelper/bnhelper.json file. - Edit the file and add a custom action as shown below: ... }, { "Title": "Custom action title", "Cli": "Custom action command", "Description": "Custom action description", "Success": "Custom action success message", "Fail": "Custom action fail message" }, ... Run the Bitnami Helper Tool again: $ sudo /opt/bitnami/bnhelper-tool
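If you prefer to script the custom-action step above rather than editing the JSON by hand, a small Python sketch like the following can append an entry with the fields shown; the action values are placeholders, it assumes the file's top-level structure is a plain list of action objects (which is how the excerpt above reads, but verify on your own stack), and you should back the file up first:

import json

path = "/opt/bitnami/bnhelper/bnhelper.json"

with open(path, "r", encoding="utf-8") as f:
    actions = json.load(f)  # assumed to be a list of action objects

actions.append({
    "Title": "Custom action title",
    "Cli": "Custom action command",
    "Description": "Custom action description",
    "Success": "Custom action success message",
    "Fail": "Custom action fail message",
})

with open(path, "w", encoding="utf-8") as f:
    json.dump(actions, f, indent=2)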
https://docs.bitnami.com/vmware-marketplace/how-to/understand-bnhelper/
2022-06-25T07:35:48
CC-MAIN-2022-27
1656103034877.9
[]
docs.bitnami.com
See also: Render Menu
https://docs.blender.org/manual/ru/latest/render/output/introduction.html
2022-06-25T07:12:18
CC-MAIN-2022-27
1656103034877.9
[]
docs.blender.org
Platform Security June 25, 2014 Windows operating systems can be found on a variety of devices, including desktops, laptops, tablets, convertibles, and smartphones. All current Windows operating systems have a consistent look and feel. Regardless of whether you’re using a 32-bit operating system, a 64-bit operating system, or an ARM based operating system, the user experience is similar. A shared Windows platform for security Windows-based devices share many security features that are not only identical in name but increasingly common at a code level. The following table lists security features that are common across all current Windows operating systems. A key advantage to these and other common security features in Windows operating systems is the predictability and uniformity of security configuration. You can use the same types of security policies and settings to enforce the same level of security, regardless of the device used. The security capabilities of Windows operating systems provide an advantage over other operating system families, which often have different security implementations for desktops and laptops versus tablets and smartphones. Windows also offers a common operating system distribution for each hardware vendor and device, whereas competing operating systems may be fragmented into many variations. This lack of consistency in operating system distributions can result in security challenges that just aren’t an issue on the Windows platform. Security improvements in Windows Phone 8.1 The following lists security related improvements to Windows Phone 8.1 from Windows Phone 8: Trustworthy Hardware Operating system security in the modern world requires capability that is derived from security-related hardware, and Windows Phone is no exception to that rule. Windows Phone takes advantage of the latest standards-based security hardware components to help protect devices and the information stored on them. EUFI UEFI is a modern, standards-based replacement for the traditional BIOS found in most devices. UEFI provides the same functionality as BIOS while adding security features and other advanced capabilities. Like BIOS, UEFI initializes hardware devices, and then starts the Windows Phone boot loader, but unlike BIOS, UEFI ensures that the operating system loader is secure, tamper free, and prevents jail-breaking which can enable an attacker, or even a user, to tamper with the system and install unauthorized apps. Current implementations of UEFI run internal integrity checks that verify the firmware’s digital signature before running it. These checks also extend to any optional ROM components on the device. Because only the hardware manufacturer of the device has access to the digital certificate required to create a valid firmware signature, UEFI has protection from firmware and master boot record rootkits (or bootkits). From a security perspective, UEFI enables the chain of trust to transition from the hardware to the software itself. TPM A TPM is a tamper-resistant security processor capable of creating and protecting cryptographic keys and hashes. In addition, a TPM can digitally sign data using a private key that software cannot access. Essentially, a TPM is a crypto-processor and secure storage place that both UEFI and the operating system can use to store integrity data, meaning hashes (which verify that firmware and critical files have not been changed) and keys (which verify that a digital signature is genuine). 
Among other functions, Windows Phone uses the TPM for cryptographic calculations and to protect the keys for BitLocker storage encryption, virtual smart cards, and certificates. All Windows Phone 8.1 devices include a TPM. Lifecycle. Apps Securing the Windows Phone operating system core is the first step in providing a defense-in-depth approach to securing Windows Phone devices. Securing the apps running on the device is equally important, because attackers could potentially use apps to compromise Windows Phone operating system security and the confidentiality of the information stored on the device. Windows Phone can mitigate these risks by providing a secured and controlled mechanism for users to acquire trustworthy apps. In addition, the Windows Phone Store app architecture isolates (or sandboxes) one app from another, preventing a malicious app from affecting another app running on the device. Also, the Windows Phone Store app architecture prevents apps from directly accessing critical operating system resources, which helps prevent the installation of malware on devices. Windows Phone Store Downloading and running apps that contain malware is a common concern for all organizations. One of the most common methods that enables malware to make its way onto devices is users downloading and running apps that are unsupported or unauthorized by the organization. Downloading and using apps published in the Windows Phone Store dramatically reduces the likelihood that a user can download an app that contains malware. All Windows Phone Store apps go through a careful screening process and scanning for malware and viruses before being made available in the store. The certification process checks Windows Phone Store apps for inappropriate content, store policies, and security issues. Finally, all apps must be signed during the certification process before they can be installed and run on Windows Phone devices. In the event that a malicious app makes its way through the process and is later detected, the Windows Phone Store can revoke access to the app on any devices that have installed it. In the end, the Windows Store app-distribution process and the app sandboxing capabilities of Windows Phone 8.1 will dramatically reduce the likelihood that users will encounter malicious apps on the system. Note Windows Phone Store apps built by organizations (also known as line-of-business [LOB] apps) that are distributed through sideloading processes need to be reviewed internally to help ensure they meet organizational security requirements. For more information, see the "Line-of-business apps" section later in this guide. You can manage Windows Phone Store apps by using policies that are supported for Windows Phone. These policies allow you to completely disable access to the Windows Phone Store, disable app sideloading, allow or block apps, and configure other security settings. Many Windows Phone Store apps require sensitive information from users or may want to access confidential information stored on the device, such as user credentials or the user's physical location. To pass certification, apps obtained from the Windows Phone Store must notify users when such sensitive information or device resources are requested.
This notification helps users know when they are granting access to this information. AppContainer The Windows Phone security model is based on the principle of least privilege and uses isolation to achieve it. Every app and even large portions of the operating system itself run inside their own isolated sandbox called an AppContainer. An AppContainer is a secured isolation boundary that an app and its process can run within. Each AppContainer is defined and implemented using a security policy. The security policy of a specific AppContainer defines the operating system capabilities to which the processes have access within the AppContainer. A capability is a Windows Phone device resource such as geographical location information, camera, microphone, networking, or sensors. By default, a basic set of permissions is granted to all AppContainers, including access its own isolated storage location. In addition, access to other capabilities can be declared within the app code itself. Access to additional capabilities and privileges cannot be requested at runtime, as can be done with traditional desktop applications. The AppContainer concept is advantageous for the following reasons: **Attack surface reduction.**Apps get access only to capabilities that are declared in the application code and are needed to perform their functions. **User consent and control.**Capabilities that apps use are automatically published to the app details page in the Windows Phone Store. Access to capabilities that may expose sensitive information, such as geographic location, automatically prompt the user to acknowledge and provide consent. **Isolation.**Unlike desktop style apps, which have unlimited access to other apps, communication between Windows Phone apps is tightly controlled. Apps are isolated from one another and can only communicate using predefined communications channels and data types. Like the Windows Store security model, all Windows Store apps follow the security principal of least privilege. Apps receive the minimal privileges they need to perform their legitimate tasks only, so even if an attacker exploits an app, the damage the exploit can do is severely limited and should be contained within the sandbox. The Windows Phone Store displays the exact permissions that the app requires along with the app’s age rating and publisher. Operating system app protection Although applications built for Windows Phone are designed to be secure and free of defects, the reality is that as long as human beings are writing code, vulnerabilities will always be discovered. When identified, malicious users and software may attempt to exploit the vulnerability in the hopes of a successful exploit. To mitigate these risks, Windows Phone includes core improvements to make it more difficult for malware to perform buffer overflow, heap spraying, and other low-level attacks.. For example, Windows Phone includes ASLR and DEP, which dramatically reduce the likelihood that newly discovered vulnerabilities will result in a successful exploit. Technologies like ASLR and DEP act as another level in the defense-in-depth strategy for Window Phone. **Address space layout randomization.**One of the most common techniques for gaining access to a system is to find a vulnerability in a privileged process that is already running, or guess or find a location in memory where important system code and data have been placed, and then overwrite that information with a malicious payload. 
In the early days of operating systems, any malware that could write directly to system memory could pull off such an exploit: The malware would simply overwrite system memory within well-known and predictable locations. Because all Windows Phone Store apps run in an AppContainer and with fewest necessary privileges, most apps are unable to perform this type of attack outside of one app. It is conceivable that an app from the Window Phone Store might be malicious, but the AppContainer severely limits any damage that the malicious app might do, as apps are also unable to access critical operating system components. The level of protection AppContainers provide is one of the reasons that their functionality was brought into Windows 8.1 client operating systems. However, ASLR provides an additional defense in-depth to help further secure apps and the core operating system. **Data execution prevention.**Malware depends on its ability to put a malicious payload into memory with the hope that it will be executed later. ASLR makes that much more difficult, but wouldn’t it be great if Windows Phone could prevent that malware from running if it writes to an area that has been allocated solely for the storage of information? DEP does exactly that by substantially reducing the range of memory that malicious code can use for its benefit. DEP uses the eXecute Never (XN) bit on the ARM processors in Windows Phone devices to mark blocks of memory as data that should never be executed as code. Therefore, even if an attacker succeeds in loading the malware code into memory, to the malware code will not execute. DEP is automatically active in Windows Phone because all devices have ARM processors that support the XN bit. Line-of-business apps With Windows Phone, organizations can register with Microsoft to obtain the tools to privately sign and distribute custom LOB apps directly to their users. This means that organizations are not required to submit business apps to the Windows Phone Store before deploying them. After registration, organizations (or contracted vendors) can use a validated process to privately develop, package, sign, and distribute apps. These LOB apps are identical in architecture to apps obtained from the Windows Phone Store. The only difference is the method that is used to deploy these apps and that they are for private rather than public consumption. Management of these LOB apps is identical to managing Windows Phone Store apps and can be done by using Windows Phone policies Warning . Potentially, a user could sideload apps onto their device by using a development environment. To disable this ability, use the Disable development unlock (side loading) policy in your MDM system. Company portal Many MDM systems, such as Microsoft System Center 2012 R2 Configuration Manager and Windows Intune, have a company portal app that allows users to install LOB and Window’s Phone Store apps. A company portal app coupled with a properly designed MDM system can help reduce the likelihood of users downloading apps that have malware, because the company portal list only those apps that the organization trusts and has approved. Internet Explorer Windows Phone includes Internet Explorer 11 for Windows Phone. Internet Explorer helps to protect the user because it runs in an isolated AppContainer and prevents web apps from accessing the system and other app resources. 
In addition, Internet Explorer on Windows Phone supports a browser model without plug-ins, so plug-ins that compromise the user experience or perform malicious actions cannot be installed (just like the Windows Store version of Internet Explorer in Windows 8.1). The SmartScreen URL Reputation filter is also available in Internet Explorer for Windows Phone. This technology blocks or warns users of websites that are known to be malicious or are suspicious. Internet Explorer on Windows Phone can also use SSL to encrypt communication, just as in other Windows operating systems. This is discussed in more detail in the “Communication encryption” section later in this guide. Information Protection Although it is extremely important to protect the Windows Phone operating system and the apps running on the device, it is even more important to protect the information that these apps access. Windows Phone supports several technologies that help protect this information. Internal storage encryption Windows Phone 8.1 performs device encryption, which is based on BitLocker technology, to encrypt the internal storage of devices with Advanced Encryption Standard (AES) 128-bit encryption. This helps ensure that data is always protected from unauthorized users, even when they have physical possession of the phone. The encryption key is protected by the TPM to ensure that the data cannot be accessed by unauthorized users, even if the internal storage media is physically removed from the device. With both PIN-lock and device encryption enabled, the combination of data encryption and device lock would make it extremely difficult for an attacker to recover sensitive information from a device. The Require Device Encryption policy prevents users from disabling device encryption and forces encryption of internal storage. Additional security can be included when the Device wipe threshold policy has been implemented to wipe the device when a brute-force attack on the PIN lock is detected. For more information about this policy, see “Security-related policy settings” later in this guide. Removable storage protection Many Windows Phone devices have an SD card slot that allows users to store apps and data on an SD card (the installation of apps on an SD card is a new feature in Windows Phone 8.1).. S/MIME signing and encryption. Use the following policies in your MDM system or Exchange Server infrastructure to configure S/MIME support in Windows Phone: Require signed S/MIME messages Require encrypted S/MIME messages Require signed S/MIME algorithm Require encrypted S/MIME algorithm Allow S/MIME encrypted algorithm negotiation Allow S/MIME SoftCerts S/MIME uses certificates that your MDM system manages or even virtual smart cards to perform encryption and signing. Communication encryption. VPN..
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-phone/dn756283(v=technet.10)?redirectedfrom=MSDN
2022-06-25T08:14:29
CC-MAIN-2022-27
1656103034877.9
[]
docs.microsoft.com
Cause: The ERS resource corresponding to resource “{resource tag}” is in-service and maintaining backup locks on a remote system. Bringing resource “{resource tag}” in-service on “{system}” would result in a loss of the backup lock table. Please bring resource “{resource tag}” in-service on the system where the corresponding ERS resource is currently in-service in order to maintain consistency of the lock table. In order to force resource “{resource tag}” in-service on “{system}”, either (i) run the command ‘/opt/LifeKeeper/bin/flg_create -f sap_cs_force_restore “{resource tag}”’ as root on “{system}” and reattempt the in-service operation, or (ii) take the corresponding ERS resource out of service on the remote system. Warning: Both of these actions will result in a loss of the backup lock table. Action: Additional information is available in the LifeKeeper and system logs.
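For reference, option (i) comes down to two steps run as root on the target system; a minimal sketch follows, where the resource tag is a placeholder for your own tag and the final in-service step is performed through whatever interface you normally use (LifeKeeper GUI or CLI), so treat this as an illustration rather than the definitive procedure.

TAG="SAP-SID_ASCS00"    # placeholder resource tag -- substitute your own
# WARNING: as noted above, forcing the restore discards the ERS backup lock table
/opt/LifeKeeper/bin/flg_create -f sap_cs_force_restore "${TAG}"
# then reattempt the in-service operation for ${TAG} on the target system and
# check the LifeKeeper and system logs for the result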
https://docs.us.sios.com/spslinux/9.4.1/en/topic/112086-ersremotelyisprestorefailure-ref
2022-06-25T08:14:05
CC-MAIN-2022-27
1656103034877.9
[]
docs.us.sios.com
Wearable Time Wearable Time is any time that has been automatically tracked on behalf of a user via Crowdkeep IoT Gateway and Wearables. There are three different versions of Crowdkeep Wearables: Crowdkeep ID Crowdkeep ID Holder Crowdkeep Keychain Important: Wearable Time cannot be modified by anybody. Time Entry Time entries can be created via three different strategies based on your account policy. Strategy 1: Time Entries generated via Wearable Time (Recommended) Requires every employee to carry a Crowdkeep ID on them while at work. Let Crowdkeep generate and utilize data pattern matching and machine learning to create Time Entries on behalf of other team members. Employees and administrators will have a chance to review the results before they go through an approval process or integration with account and payroll. Strategy 2: Time Entries are submitted by each employee Requires every employee to install and have access to iOS, Android, or Web App. Every employee at the organization will have Crowdkeep access to submit their own Time Entries manually. If using Wearables, the Wearable Time will be displayed alongside the employees' Time Entries. Strategy 3: Time Entries are submitted by each administrator on behalf of another employee Does not require every employee to have Crowdkeep ID or access to Crowdkeep. Time Entries can only be created by administrators. If using Wearables, the Wearable Time will be shown alongside the employees' time entries. Administrators will manually submit Time Entries on behalf of another employee.
http://docs.crowdkeep.com/en/articles/490551-wearable-time-vs-time-entry-explained
2019-11-12T05:25:26
CC-MAIN-2019-47
1573496664752.70
[array(['https://downloads.intercomcdn.com/i/o/29772107/f9ae7f84f6adf26b61a958a7/wearableTime.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/29770303/3c370da3408cd4af3f4fd8cf/timeEntry.png', None], dtype=object) ]
docs.crowdkeep.com
XenDesktop 4 May 28, 2016 You can transfer data and settings from a XenDesktop 4 farm to a XenDesktop 7.x Site using the Migration Tool, which is available in the Support > Tools > MigrationTool folder on the XenDesktop installation media. The tool includes: - The export tool, XdExport, which exports XenDesktop 4 farm data to an XML file (default name: XdSettings.xml). The XML file schema resides in the file XdFarm.xsd. - The import tool, XdImport, which imports the data by running the PowerShell script Import-XdSettings.ps1. To successfully use the Migration Tool, both deployments must have the same hypervisor version (for example, XenServer 6.2), and Active Directory environment. You cannot use this tool to migrate XenApp, and you cannot migrate XenDesktop 4 to XenApp. Tip: You can upgrade XenDesktop 5 (or later XenDesktop versions) to the current XenDesktop version; see Upgrade a deployment. Limitations Not all data and settings are exported. The following configuration items are not migrated because they are exported but not imported: - Administrators - Delegated administration settings - Desktop group folders - Licensing configuration - Registry keys These use cases are not directly supported in migration: - Merging settings of policies or desktop group or hosting settings. - Merging private desktops into random Delivery Groups. - Adjusting existing component settings through the migration tools. For more information, see What is and is not migrated . Migration steps The following figure summarizes the migration process. The migration process follows this sequence: - In the Studio console on the XenDesktop 4 Controller, turn on maintenance mode for all machines to be exported. - Export data and settings from your XenDesktop 4 farm to an XML file using XdExport; see Export from a XenDesktop 4 farm. - Edit the XML file so that it contains only the data and settings you want to import into your new XenDesktop Site; see Edit the Migration Tool XML file. - Import the data and settings from the XML file to your new XenDesktop Site using XdImport; see Import XenDesktop 4 data. - To make additional changes, repeat steps 3 and 4. After making changes, you might want to import additional desktops into existing Delivery Groups. To do so, use the Mergedesktops parameter when you import. - Complete the post-migration tasks; see Post-migration tasks. Before migrating Complete the following before beginning a migration: - Make sure you understand which data can be exported and imported, and how this applies to your own deployment. See What is and is not migrated. - Citrix strongly recommends that you manually back up the Site database so that you can restore it if any issues are discovered. - Install the XenDesktop 7.x components and create a Site, including the database. - To migrate from XenDesktop 4 , all VDAs must be at a XenDesktop 5.x level so that they are compatible with both XenDesktop 4 and XenDesktop 7.x controllers. After the Controller infrastructure is fully running XenDesktop 7.x, Windows 7 VDAs can be upgraded to XenDesktop 7.x. For details, see Migration examples. Migrate XenDesktop 4
https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-6-long-term-service-release/xad-upgrade-existing-environment/xad-migrate-xd4-intro.html
2019-11-12T07:14:57
CC-MAIN-2019-47
1573496664752.70
[array(['/en-us/xenapp-and-xendesktop/7-6-long-term-service-release/media/cds-migrateXD4.png', 'Migrate'], dtype=object) ]
docs.citrix.com
Warning! This page documents an earlier version of Flux, which is no longer actively developed. Flux v0.50 is the most recent stable version of Flux. Flux transformation functions transform or shape your data in specific ways. There are different types of transformations, categorized below: Aggregates Aggregate functions take values from an input table and aggregate them in some way. The output table contains a single row with the aggregated value. Selectors Selector functions return one or more records based on function logic. The output table is different from the input table, but individual row values are not changed. Type conversions Type conversion functions convert the _value column of the input table into a specific data type.
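To make the distinction concrete, here is a minimal sketch in Flux 0.7 syntax; the bucket and measurement names are assumptions for illustration and are not taken from this page.

// Aggregate: the output table contains a single row holding the mean of _value
from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> mean()

// Selector: the output table contains an existing row (the maximum _value), unmodified
from(bucket: "telegraf/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> max()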
https://docs.influxdata.com/flux/v0.7/functions/transformations/
2019-11-12T06:18:00
CC-MAIN-2019-47
1573496664752.70
[]
docs.influxdata.com
The following document specifies the Flux language and query execution. Note: This document is a living document and may not represent the current implementation of Flux. Any section that is not currently implemented is commented with a [IMPL#XXX]where XXXis an issue number tracking discussion and progress towards implementation. The Flux language is centered on querying and manipulating time series data. Notation The syntax of the language case. Lexical tokens are enclosed in double quotes ( "") or back quotes (``).
https://docs.influxdata.com/flux/v0.7/language/
2019-11-12T06:15:21
CC-MAIN-2019-47
1573496664752.70
[]
docs.influxdata.com
Installation You can install Kong on most Linux distributions and macOS. We even provide the source so you can compile yourself.Install Kong → 5-minute Quickstart Learn how to start Kong, add your API, enable plugins, and add consumers in under thirty seconds.Start using Kong → Clustering If you are starting more than one node, you must use clustering to make sure all the nodes belong to the same Kong cluster.Read the clustering reference → Admin API reference Ready to learn the underlying interface? Browse the Admin API reference to learn how to start making requests.Explore the interface → Write your own plugins Looking for something Kong does not do for you? Easy: write it as a plugin. Learn how to write your own plugins for Kong.Read the plugin development Proxy reference Learn every way to configure Kong to proxy your APIs and discover tips for your production setup.Read the Proxy Reference →
https://docs.konghq.com/0.6.x/
2019-11-12T06:12:03
CC-MAIN-2019-47
1573496664752.70
[]
docs.konghq.com
You know what they say about business apps.. it’s all about location, location, location. 🙂 If you like, you can turn on location settings in apps built with Flow. Based on the device (whether it has GPS, for example), turning on location populates the $User system value with geo-location details like current latitude/longitude or even current speed. Here is how you can turn on location data for your apps: - Create a new player. Give the player a name. For the sake of simplicity, let’s call the player Location in this example. - Scroll down the code, till you come to the manywho.settings.initialize block (Ln 121 in this case). - Time to add a couple of lines of code. These lines, to be exact: This is what the player code looks like now: - Click Save to save the player. You will get a success message saying Player saved successfully. Use this player to run the flow where you want location settings enabled. Here is one example: Turning the location setting on will populate the following location-specific properties of the $User system value: - $User/Current Latitude - $User/Current Latitude - $User/Current Longitude - $User/Location Accuracy - $User/Current Altitude - $User/Altitude Accuracy - $User/Current Heading - $User/Current Speed - $User/Location Timestamp NOTE: These values are device-dependent. For example, if a mobile phone does not have a GPS or an altimeter built in, some of these values will not be available.
https://docs.manywho.com/turning-on-location-in-flows/
2019-11-12T05:32:43
CC-MAIN-2019-47
1573496664752.70
[array(['https://docs.manywho.com/wp-content/uploads/2018/09/Screen-Shot-2018-09-18-at-9.30.11-PM-minishadow-1024x236.png', None], dtype=object) ]
docs.manywho.com
Zulip in Production¶ - Requirements - Installing a production server - Troubleshooting - Customize Zulip - Mobile push notification service - Maintain, secure, and upgrade - Monitoring - Scalability - Sections that have moved - API and your Zulip URL - Memory leak mitigation - Management commands - Hosting multiple Zulip organizations - Upgrade or modify Zulip - Upgrading to a release - Upgrading from a git repository - Troubleshooting and rollback - Preserving local changes to configuration files - Upgrading the operating system - Modifying Zulip - Making changes - Applying changes from master - Contributing patches - Security Model - Authentication methods - Plug-and-play SSO (Google, GitHub) - SAML - LDAP (including Active Directory) - Apache-based SSO with REMOTE_USER - Adding more authentication backends - Development only - Backups, export and import - Postgres database details - File upload backends - Installing SSL Certificates - Manual install - Certbot (recommended) - Self-signed certificate - Troubleshooting - Outgoing email - Deployment options - Incoming email integration
https://zulip.readthedocs.io/en/latest/production/index.html
2019-11-12T05:55:03
CC-MAIN-2019-47
1573496664752.70
[]
zulip.readthedocs.io
Installing faucet for the first time¶ This tutorial will run you through the steps of installing a complete faucet system for the first time. We will be installing and configuring the following components: This tutorial was written for Ubuntu 16.04, however the steps should work fine on any newer supported version of Ubuntu or Debian. Package installation¶ - Add the faucet official repo to our system - Install the required packages, we can use the faucet-all-in-onemetapackage which will install all the correct dependencies.sudo apt-get install faucet-all-in-one Configure prometheus¶ We need to configure prometheus to tell it how to scrape metrics from both the faucet and gauge controllers. To help make life easier faucet ships a sample configuration file for prometheus which sets it up to scrape a single faucet and gauge controller running on the same machine as prometheus. The configuration file we ship looks like: #: - "faucet.rules.yml" # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'prometheus' static_configs: - targets: ['localhost:9090'] - job_name: 'faucet' static_configs: - targets: ['localhost:9302'] - job_name: 'gauge' static_configs: - targets: ['localhost:9303'] To learn more about what this configuration file does you can look at the Prometheus Configuration Documentation. The simple explanation is that it includes an additional faucet.rules.yml file that performs some automatic queries in prometheus for generating some additional metrics as well as setting up scrape jobs every 15 seconds for faucet listening on localhost:9302 and gauge listening on localhost:9303. Steps to make prometheus use the configuration file shipped with faucet: - Change the configuration file prometheus loads by editing the file /etc/default/prometheusto look like: - Restart prometheus to apply the changes:sudo systemctl restart prometheus Configure grafana¶ Grafana running in it’s default configuration will work just fine for our needs. We will however need to make it start on boot, configure prometheus as a data source and add our first dashboard: - Make grafana start on boot and then start it manually for the first time:sudo systemctl daemon-reload sudo systemctl enable grafana-server sudo systemctl start grafana-server - To finish setup we will configure grafana via the web interface. First load your web browser (by default both the username and password are admin). - The web interface will first prompt us to add a: - - - Configure faucet¶ For this tutorial we will configure a very simple network topology consisting of a single switch with two ports. - Configure faucet We need to tell faucet about our topology and VLAN information, we can do this by editing the faucet configuration /etc/faucet/faucet.yamlto look like:vlans: office: vid: 100 description: "office network" dps: sw1: dp_id: 0x1 hardware: "Open vSwitch" interfaces: 1: name: "host1" description: "host1 network namespace" native_vlan: office 2: name: "host2" description: "host2 network namespace" native_vlan: office Note Tabs are forbidden in the YAML language, please use only spaces for indentation. This will create a single VLAN and a single datapath with two ports. - Verify configuration The check_faucet_configcommand can be used to verify faucet has correctly interpreted your configuration before loading it. 
This can avoid shooting yourself in the foot by applying configuration with typos. We recommend either running this command by hand or with automation each time before loading configuration.check_faucet_config /etc/faucet/faucet.yaml This script will either return an error, or in the case of successfully parsing the configuration it will return a JSON object containing the entire faucet configuration that would be loaded (including any default settings), for example:[{'advertise_interval': 30, 'arp_neighbor_timeout': 30, 'cache_update_guard_time': 150, 'combinatorial_port_flood': False, 'cookie': 1524372928, 'description': 'sw1', 'dot1x': None, 'dp_acls': None, 'dp_id': 1, 'drop_broadcast_source_address': True, 'drop_spoofed_faucet_mac': True, 'egress_pipeline': False, 'fast_advertise_interval': 5, 'faucet_dp_mac': '0e:00:00:00:00:01', 'global_vlan': 0, 'group_table': False, 'hardware': 'Open vSwitch', 'high_priority': 9001, 'highest_priority': 9099, 'idle_dst': True, 'ignore_learn_ins': 10, 'interface_ranges': OrderedDict(), 'interfaces': {'host1': {'acl_in': None, 'acls_in': None, 'description': 'host': 1, 'opstatus_reconf': True, 'output_only': False, 'permanent_learn': False, 'receive_lldp': False, 'stack': OrderedDict(), 'tagged_vlans': [], 'unicast_flood': True}, 'host2': {'acl_in': None, 'acls_in': None, 'description': 'host': 2, 'opstatus_reconf': True, 'output_only': False, 'permanent_learn': False, 'receive_lldp': False, 'stack': OrderedDict(), 'tagged_vlans': [], 'unicast_flood': True}}, 'lacp_timeout': 30, 'learn_ban_timeout': 51, 'learn_jitter': 51, 'lldp_beacon': OrderedDict(), 'low_priority': 9000, 'lowest_priority': 0, 'max_host_fib_retry_count': 10, 'max_hosts_per_resolve_cycle': 5, 'max_resolve_backoff_time': 64, 'max_wildcard_table_size': 1280, 'metrics_rate_limit_sec': 0, 'min_wildcard_table_size': 32, 'multi_out': True, 'nd_neighbor_timeout': 30, 'ofchannel_log': None, 'packetin_pps': None, 'priority_offset': 0, 'proactive_learn_v4': True, 'proactive_learn_v6': True, 'stack': None, 'strict_packet_in_cookie': True, 'table_sizes': OrderedDict(), 'timeout': 300, 'use_classification': False, 'use_idle_timeout': False}] - Reload faucet To apply this configuration we can reload faucet which will cause it to compute the difference between the old and new configuration and apply the minimal set of changes to the network in a hitless fashion (where possible).sudo systemctl reload faucet - Check logs To verify the configuration reload was successful we can check /var/log/faucet/faucet.logand make sure faucet successfully loaded the configuration we can check the faucet log file /var/log/faucet/faucet.log:faucet INFO Loaded configuration from /etc/faucet/faucet.yaml faucet INFO Add new datapath DPID 1 (0x1) faucet INFO Add new datapath DPID 2 (0x2) faucet INFO configuration /etc/faucet/faucet.yaml changed, analyzing differences faucet INFO Reconfiguring existing datapath DPID 1 (0x1) faucet.valve INFO DPID 1 (0x1) skipping configuration because datapath not up faucet INFO Deleting de-configured DPID 2 (0x2) If there were any issues (say faucet wasn’t able to find a valid pathway from the old config to the new config) we could issue a faucet restart now which will cause a cold restart of the network. Configure gauge¶ We will not need to edit the default gauge configuration that is shipped with faucet as it will be good enough to complete the rest of this tutorial. 
If you did need to modify it the path is /etc/faucet/gauge.yaml and the default configuration looks like: # This default configuration will setup a prometheus exporter listening on port 0.0.0.0:9303 and write all the different kind of gauge metrics to this exporter. We will however need to restart the current gauge instance so it can pick up our new faucet configuration: sudo systemctl restart gauge Connect your first datapath¶ Now that we’ve set up all the different components let’s connect our first switch (which we call a datapath) to faucet. We will be using Open vSwitch for this which is a production-grade software switch with very good OpenFlow support. - Add WAND Open vSwitch repo The bundled version of Open vSwitch in Ubuntu 16.04 is quite old so we will use WAND’s package repo to install a newer version (if you’re using a more recent debian or ubuntu release you can skip this step). Note If you’re using a more recent debian or ubuntu release you can skip this stepsudo apt-get install apt-transport-https echo "deb $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/wand.list sudo curl -o /etc/apt/trusted.gpg.d/wand.gpg sudo apt-get update - Install Open vSwitchsudo apt-get install openvswitch-switch - Add network namespaces to simulate hosts We will use two linux network namespaces to simulate hosts and this will allow us to generate some traffic on our network. First let’s define some useful bash functions by coping and pasting the following definitions into our bash terminal:# Run command inside network namespace as_ns () { NAME=$1 NETNS=faucet-${NAME} shift sudo ip netns exec ${NET as_ns ${NAME} ip link set dev lo up [ -n "${IP}" ] && as_ns ${NAME} ip addr add dev veth0 ${IP} as_ns ${NAME} ip link set dev veth0 up } NOTE: all the tutorial helper functions can be defined by sourcing helper-funcsinto your shell enviroment. Now we will create host1and host2and assign them some IPs:create_ns host1 192.168.0.1/24 create_ns host2 192.168.0.2/24 - Configure Open vSwitch We will now configure a single Open vSwitch bridge (which will act as our datapath) and add two ports to this bridge:sudo ovs-vsctl add-br br0 \ -- set bridge br0 other-config:datapath-id=0000000000000001 \ -- set bridge br0 other-config:disable-in-band=true \ -- set bridge br0 fail_mode=secure \ -- add-port br0 veth-host1 -- set interface veth-host1 ofport_request=1 \ -- add-port br0 veth-host2 -- set interface veth-host2 ofport_request=2 \ -- set-controller br0 tcp:127.0.0.1:6653 tcp:127.0.0.1:6654 The Open vSwitch documentation is very good if you wish to find out more about configuring Open vSwitch. - Verify datapath is connected to faucet At this point everything should be working, we just need to verify that is the case. If we now load up some of the grafana dashboards we imported earlier, we should see the datapath is now listed in the Faucet Inventorydashboard. If you don’t see the new datapath listed you can look at the faucet log files /var/log/faucet/faucet.logor the Open vSwitch log /var/log/openvswitch/ovs-vswitchd.logfor clues. - Generate traffic between virtual hosts With host1and host2we can now test our network works and start generating some traffic which will show up in grafana. 
Let’s start simple with a ping:as_ns host1 ping 192.168.0.2 If this test is successful this shows our Open vSwitch is forwarding traffic under faucet control, /var/log/faucet/faucet.logshould now indicate those two hosts have been learnt:faucet.valve INFO DPID 1 (0x1) L2 learned 22:a6:c7:20:ff:3b (L2 type 0x0806, L3 src 192.168.0.1, L3 dst 192.168.0.2) on Port 1 on VLAN 100 (1 hosts total) faucet.valve INFO DPID 1 (0x1) L2 learned 36:dc:0e:b2:a3:4b (L2 type 0x0806, L3 src 192.168.0.2, L3 dst 192.168.0.1) on Port 2 on VLAN 100 (2 hosts total) We can also use iperf to generate a large amount of traffic which will show up on the Port Statisticsdashboard in grafana, just select sw1as the Datapath Name and Allfor the Port.sudo apt-get install iperf3 as_ns host1 iperf3 --server --pidfile /run/iperf3-host1.pid --daemon as_ns host2 iperf3 --client 192.168.0.1 Further steps¶ Now that you know how to setup and run faucet in a self-contained virtual environment you can build on this tutorial and start to make more interesting topologies by adding more Open vSwitch bridges, ports and network namespaces. Check out the faucet Configuration document for more information on features you can turn on and off. In future we will publish additional tutorials on layer 3 routing, inter-VLAN routing, ACLs. You can also easily add real hardware into the mix as well instead of using a software switch. See the Vendor-specific Documentation section for information on how to configure a wide variety of different vendor devices for faucet.
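As a sketch of that next step, a third host could be added by reusing the commands from this tutorial (the IP address, port number, and names below are illustrative assumptions, and create_ns is the helper function sourced earlier):

# create another namespace/host and attach it to the bridge on OpenFlow port 3
create_ns host3 192.168.0.3/24
sudo ovs-vsctl add-port br0 veth-host3 -- set interface veth-host3 ofport_request=3

# describe the new port in /etc/faucet/faucet.yaml under sw1 -> interfaces, e.g.
#   3:
#     name: "host3"
#     description: "host3 network namespace"
#     native_vlan: office

# verify and apply the change
check_faucet_config /etc/faucet/faucet.yaml
sudo systemctl reload faucet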
https://docs.faucet.nz/en/latest/tutorials/first_time.html
2019-11-12T05:34:30
CC-MAIN-2019-47
1573496664752.70
[]
docs.faucet.nz
Known issues Log errors - Why am I seeing a 503 Service Unavailableerror in my meta node logs? - Why am I seeing a 409error in some of my data node logs? - Why am I seeing hinted handoff queue not emptyerrors in my data node logs? - Why am I seeing error writing count stats ...: partial writeerrors in my data node logs? - Why am I seeing queue is fullerrors in my data node logs? - Why am I seeing unable to determine if "hostname" is a meta nodewhen I try to add a meta node with influxd-ctl join? Other Where can I find InfluxDB Enterprise logs? On systemd operating systems service logs can be accessed using the journalctl command. Meta: journalctl -u influxdb-meta Data : journalctl -u influxdb Enterprise console: journalctl -u influx-enterprise The journalctl output can be redirected to print the logs to a text file. With systemd, log retention depends on the system’s journald settings. Why am I seeing a 503 Service Unavailable error in my meta node logs? This is the expected behavior if you haven’t joined the meta node to the cluster. The 503 errors should stop showing up in the logs once you join the meta node to the cluster. Why am I seeing a 409 error in some of my data node logs? When you create a Continuous Query (CQ) on your cluster every data node will ask for the CQ lease. Only one data node can accept the lease. That data node will have a 200 in its logs. All other data nodes will be denied the lease and have a 409 in their logs. This is the expected behavior. Log output for a data node that is denied the lease: [meta-http] 2016/09/19 09:08:53 172.31.4.132 - - [19/Sep/2016:09:08:53 +0000] GET /lease?name=continuous_querier&node_id=5 HTTP/1.2 409 105 - InfluxDB Meta Client b00e4943-7e48-11e6-86a6-000000000000 380.542µs Log output for the data node that accepts the lease: [meta-http] 2016/09/19 09:08:54 172.31.12.27 - - [19/Sep/2016:09:08:54 +0000] GET /lease?name=continuous_querier&node_id=0 HTTP/1.2 200 105 - InfluxDB Meta Client b05a3861-7e48-11e6-86a7-000000000000 8.87547ms Why am I seeing hinted handoff queue not empty errors in my data node logs? [write] 2016/10/18 10:35:21 write failed for shard 2382 on node 4: hinted handoff queue not empty This error is informational only and does not necessarily indicate a problem in the cluster. It indicates that the node handling the write request currently has data in its local hinted handoff queue for the destination node. Coordinating nodes will not attempt direct writes to other nodes until the hinted handoff queue for the destination node has fully drained. New data is instead appended to the hinted handoff queue. This helps data arrive in chronological order for consistency of graphs and alerts and also prevents unnecessary failed connection attempts between the data nodes. Until the hinted handoff queue is empty this message will continue to display in the logs. Monitor the size of the hinted handoff queues with ls -lRh /var/lib/influxdb/hh to ensure that they are decreasing in size. Note that for some write consistency settings, InfluxDB may return a write error (500) for the write attempt, even if the points are successfully queued in hinted handoff. Some write clients may attempt to resend those points, leading to duplicate points being added to the hinted handoff queue and lengthening the time it takes for the queue to drain. If the queues are not draining, consider temporarily downgrading the write consistency setting, or pause retries on the write clients until the hinted handoff queues fully drain. 
Why am I seeing error writing count stats ...: partial write errors in my data node logs? [stats] 2016/10/18 10:35:21 error writing count stats for FOO_grafana: partial write The _internal database collects per-node and also cluster-wide information about the InfluxEnterprise cluster. The cluster metrics are replicated to other nodes using consistency=all. For a write consistency of all, InfluxDB returns a write error (500) for the write attempt even if the points are successfully queued in hinted handoff. Thus, if there are points still in hinted handoff, the _internal writes will fail the consistency check and log the error, even though the data is in the durable hinted handoff queue and should eventually persist. Why am I seeing queue is full errors in my data node logs? This error indicates that the coordinating node that received the write cannot add the incoming write to the hinted handoff queue for the destination node because it would exceed the maximum size of the queue. This error typically indicates a catastrophic condition for the cluster - one data node may have been offline or unable to accept writes for an extended duration. The controlling configuration settings are in the [hinted-handoff] section of the file. max-size is the total size in bytes per hinted handoff queue. When max-size is exceeded, all new writes for that node are rejected until the queue drops below max-size. max-age is the maximum length of time a point will persist in the queue. Once this limit has been reached, points expire from the queue. The age is calculated from the write time of the point, not the timestamp of the point. Why am I seeing unable to determine if "hostname" is a meta node when I try to add a meta node with influxd-ctl join? Meta nodes use the /status endpoint to determine the current state of another metanode. A healthy meta node that is ready to join the cluster will respond with a 200 HTTP response code and a JSON string with the following format (assuming the default ports): "nodeType":"meta","leader":"","httpAddr":"<hostname>:8091","raftAddr":"<hostname>:8089","peers":null} If you are getting an error message while attempting to influxd-ctl join a new meta node, it means that the JSON string returned from the /status endpoint is incorrect. This generally indicates that the meta node configuration file is incomplete or incorrect. Inspect the HTTP response with curl -v "http://<hostname>:8091/status" and make sure that the hostname, the bind-address, and the http-bind-address are correctly populated. Also check the license-key or license-path in the configuration file of the meta nodes. Finally, make sure that you specify the http-bind-address port in the join command, e.g. influxd-ctl join hostname:8091.
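Pulling the checks above into one place, a quick triage pass on a data node and a prospective meta node might look like the following; the hostname is a placeholder and the paths assume a default installation.

# tail the data node service logs on a systemd operating system
journalctl -u influxdb --since "1 hour ago"

# watch the hinted handoff queues; the per-node directories should be shrinking
ls -lRh /var/lib/influxdb/hh

# confirm a meta node is healthy before influxd-ctl join
# (expect a 200 response and the JSON string shown above)
curl -v "http://cluster-meta-01:8091/status"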
https://docs.influxdata.com/enterprise_influxdb/v1.5/troubleshooting/frequently_asked_questions/
2019-11-12T05:35:17
CC-MAIN-2019-47
1573496664752.70
[]
docs.influxdata.com
Table of Contents - Introduction - Auth0 IDP Configuration - Kong Configuration - Downstream configuration Introduction This guide covers an example OpenID Connect plugin configuration to authenticate headless service consumers using Auth0’s identity provider. Auth0 IDP Configuration This configuration will use a client credentials grant as it is non-interactive, and because we expect clients to authenticate on behalf of themselves, not an end-user. To do so, you will need to create an Auth0 API and a non-interactive client. API Configuration When creating your API, you will need to specify an Identifier. Using the URL that your consumer makes requests to is generally appropriate, so this will typically be the hostname/path combination configured as an API in Kong. After creating your API, you will also need to add the openid scope at<API ID>/scopes. Client Configuration You will need to authorize your client to access your API. Auth0 will prompt you to do so after client creation if you select the API you created previously from the client’s Quick Start menu. After toggling the client to Authorized, expand its authorization settings and enable the openid scope. Kong Configuration If you have not done so already, create an API to protect. The url configuration should match the Identifier you used when configuring Auth0. Add an OpenID plugin configuration using the parameters in the example below using an HTTP client or Kong Manager. Auth0’s token endpoint requires passing the API identifier in the audience parameter, which must be added as a custom argument: $ curl -i -X POST<API name>/plugins --data name="openid-connect" \ --data config.auth_methods="client_credentials" \ --data config.issuer="https://<auth0 API name>.auth0.com/.well-known/openid-configuration" \ --data config.token_post_args_names="audience" \ --data config.token_post_args_values="" Downstream configuration The service accessing your resource will need to pass its credentials to Kong. It can do so via HTTP basic authentication, query string arguments, or parameters in the request body. Basic authentication is generally preferable, as credentials in a query string will be present in Kong’s access logs (and possibly in other infrastructure’s access logs, if you have HTTP-aware infrastructure in front of Kong) and credentials in request bodies are limited to request methods that expect client payloads. For basic authentication, use your client ID as the username and your client secret as the password. For the other methods, pass them as parameters named client_id and client_secret, respectively.
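As a concrete sketch of the downstream side, a consumer could call the protected API as follows; the hostname and route are placeholders rather than values from this guide, and the credentials are the Auth0 client ID and secret described above.

# client credentials via HTTP basic authentication (preferred)
curl -i -u "<client_id>:<client_secret>" https://kong.example.com/my-api/resource

# the same credentials passed as request body parameters instead
curl -i -X POST https://kong.example.com/my-api/resource \
  --data client_id="<client_id>" \
  --data client_secret="<client_secret>"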
https://docs.konghq.com/enterprise/0.34-x/oidc-auth0/
2019-11-12T06:38:12
CC-MAIN-2019-47
1573496664752.70
[]
docs.konghq.com
, Meta is, by default, available only through the Escape key. But the preferences dialog (in the General Editing-f : move forward one character C-b : move backward one character M-f : move forward one word M-b : move backward one word C-v : move forward one page M-v : move backward one page M-< : move to beginning of file M-> : move to end of file C-a : move to beginning of line (left) C-e : move to end of line (right) C-n : move to next line (down) C-p : move to previous line (up) M-C-f : move forward one S-expression M-C-b : move backward one S-expression M-C-u : move up out of an S-expression M-C-d : move down into a nested S-expression M-C-SPACE : select forward S-expression M-C-p : match parentheses backward M-C-left : move backwards to the nearest editor box A-C-left : move backwards to the nearest editor box M-C-right : move forward to the nearest editor box A-C-right : move forward to the nearest editor box M-C-up : move up out of an embedded editor A-C-up : move up out of an embedded editor M-C-down : move down into an embedded editor A-C-down : move down into an embedded editor C-d : delete forward one character C-h : delete backward one character M-d : delete forward one word M-DEL : delete backward one word C-k : delete forward to end of line M-C-k : delete forward one S-expression M-w : copy selection to clipboard C-w : delete selection to clipboard (cut) C-y : paste from clipboard (yank) C-t : transpose characters M-C-t : transpose sexpressions M-C-m : toggle dark green marking of matching parenthesis M-C-k : cut complete sexpression M-( : wrap selection in parentheses M-[ : wrap selection in square brackets M-{ : wrap selection in curly brackets M-S-L : wrap selection in (lambda () ...) and put the insertion point in the v : Make the nearby ASCII art rectangles taller.For example, if the insertion point is just above the the middle line of this rectangle:then the keystroke will turn it into this one: C-x r c : Centers the contents of the current line inside the enclosing cell of the enclosing ASCII art rectangle. - C-x r o : Toggles the ASCII art rectangle editing mode. When the mode is enabled, key strokes that would normally break the rectangles instead enlarge them. Specifically: Return and enter add a line to the enclosing rectangle and put the insertion point at the first column of the enclosing cell. When in overwrite mode, if a key would overwrite one of the walls of the cell, the wall is first moved over to accomodate the new key When not in overwrite mode, inserting a character will always widen the containing cell.
https://docs.racket-lang.org/drracket/Keyboard_Shortcuts.html?q=keybinding
2019-11-12T05:15:55
CC-MAIN-2019-47
1573496664752.70
[]
docs.racket-lang.org
Intermediate - How to Go Through the Facebook Review Process - How to fix Revive Old Post not posting - How to Update your PHP version - How to use custom fields (Content + URL+hashtags) in Revive Old Post - Fix Error: (#200) Requires either publish_to_groups permission and app being installed in the group, or manage_pages and publish_pages as an admin with sufficient administrative permission - How to use excerpt field for shares in Revive Old Posts
https://docs.revive.social/category/626-intermediate
2019-11-12T05:13:09
CC-MAIN-2019-47
1573496664752.70
[]
docs.revive.social
Arping Package arping is a utility to test the reachability and responsiveness of hosts using ARP. It is effectively like ICMP ping, except that it uses ARP instead. This is beneficial in circumstances where the host has a firewall enabled (every host, even a firewalled one, will respond to ARP), or where there is no layer 3 connectivity to the IP subnet of the host, so it cannot be pinged, but layer 2 connectivity exists. The arping package can be very useful when trying to pick an unused IP address on a subnet to which there is not yet a route or link, but which is connected at layer 2. See also Visit the arping website for more information. Package Support This package is currently supported by Netgate Global Support for those with an active support subscription.
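A minimal run looks like the following; the address and interface name are placeholders, and option letters vary between arping implementations, so consult arping(8) on your system before relying on anything beyond the count flag.

# send 4 ARP requests to a host on a directly connected segment
arping -c 4 198.51.100.10

# some builds also allow pinning the probe to a specific interface
arping -c 4 -i igb0 198.51.100.10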
https://docs.netgate.com/pfsense/en/latest/packages/arping-package.html
2020-09-18T21:25:24
CC-MAIN-2020-40
1600400188841.7
[]
docs.netgate.com
We set up AD Connect to begin syncing devices. This set up an SCP record in AD. We are testing the setup, so following the controlled validation setup, we cleared the SCP record property and used a GPO. We also use ADFS. Can someone please provide insight into whether what we are seeing is normal/expected, or abnormal. On-premises devices with the GPO linked to the device OU and ADFS server will perform an autoenrollment in Azure and appear as hybrid device joined. AD Connect does not initially sync any computer objects to Azure. If I create a computer object in an OU which is synced, AD Connect will not add the device to Azure. It appears that the device must perform the enrollment action to be added to Azure. This occurs via the scheduled task \Microsoft\Windows\Workplace Join\Automatic-Device-Join and is only triggered at logon. Only after the device self-enrolls will AD Connect begin managing it. While this is great and seamless for any on-premises clients, this isn't working for off-premises hosts. If I connect in via VPN, I can pick up the GPO configuration bits and my client is ready to go, but the task doesn't trigger unless I log in. If I reboot and am disconnected from the VPN, the scheduled task runs but does NOT enroll, as it seems to need line of sight to AD. On my test client I perform the "Access Work or School" connection, but the device now only appears as registered, not hybrid, even after any AD Connect sync job ran. Should AD Connect be syncing computer objects regardless of the clients' self-enrollment? (Maybe our admin did something wrong.) Should off-premises clients be able to auto-enroll seamlessly like on-premises clients? (The GPO has the settings that would normally only be in AD; what else is at play?) Are there other methods for off-premises clients to complete the hybrid join setup? These existing clients are SCCM managed; we are looking to set up hybrid join so that they can begin to leverage Intune to pick up Windows updates. While registered devices can potentially do this, I feel like this is the wrong approach and may present future issues in which we can't do Windows Hello or take advantage of other services/features.
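For anyone triaging the same scenario, the usual client-side checks are to trigger the join task manually and then read the device registration state; run these from an elevated prompt on the test client. The task path is the one quoted above, and dsregcmd is standard Windows tooling rather than anything specific to this environment.

rem trigger the hybrid join attempt without waiting for the next logon
schtasks /run /tn "\Microsoft\Windows\Workplace Join\Automatic-Device-Join"

rem inspect the result; AzureAdJoined and DomainJoined should both read YES for a hybrid join
dsregcmd /status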
https://docs.microsoft.com/en-us/answers/questions/21452/hybrid-device-join-off-premise.html
2020-09-18T21:33:46
CC-MAIN-2020-40
1600400188841.7
[]
docs.microsoft.com
Consult the following topics for guidance on when to deploy multiple BMC TrueSight Infrastructure Management Servers and best practices for deployment: You use Central Monitoring Administration to manage BMC PATROL configuration across one or more Infrastructure Management Servers. Central Monitoring Administration is installed with the Presentation Server and launched from the TrueSight console. BMC recommends implementing one of the following architectural deployments for multiple Infrastructure Management servers: Multiple Infrastructure Management Servers connected to a single Presentation Server with Central Monitoring Administration Multiple Infrastructure Management Servers connected to multiple Presentation Servers with Central Monitoring Administration Important: Updates to and deletion of existing policies that are already in production must use the policy export/import capability for creation, testing, and deployment to production. For more information, see Staging Integration Service host deployment and policy management for development, test, and production best practices. These implementation architecture options are not installation options; they are choices for where you install the servers and how you connect them. Note: All BMC TrueSight Infrastructure Management Server versions must be the same in a single environment. The Infrastructure Management solution cannot support mixed versions of Infrastructure Management Servers and requires the same version of the Presentation Server with Central Monitoring Administration. For more information, see Infrastructure Management interoperability. The first option is to implement a single Presentation Server with Central Monitoring Administration for scenarios that require multiple Infrastructure Management Servers, such as multiple tenants or separate environments for development, test, and production. This option requires fewer infrastructure nodes and components. Only a single staging Integration Service host is needed, and only a single Central Monitoring Administration instance is used. This option might not be supported at some sites where the necessary connections between the development, test, and production or separate tenant environments are not available or are not allowed over the network. Because the tool makes applying configuration so easy, it is also easier for administrators to unintentionally apply policies to production; however, this possibility can be managed. The following figure illustrates a typical high-level logical architecture for an Infrastructure Management deployment using multiple BMC TrueSight Infrastructure Management Servers connected to a single BMC TrueSight Presentation Server with Central Monitoring Administration. High-level architecture for multiple-server deployment with a single Presentation Server and Central Monitoring Administration In the single Central Monitoring Administration architecture, a single staging Integration Service node is used in the agent deployment process for all agents. All Infrastructure Management Servers leverage the single Central Monitoring Administration instance for managing all policies. For the list of ports required in a multiple-server deployment, see Network ports. The second option is to implement a separate Presentation Server and Central Monitoring Administration instance in each of the development, test, and production or tenant environments.
Creating, testing, and deploying monitoring policies into production or across multiple tenant environments requires additional effort because you have to export and import policy data from development to the test environment and from test to the production environments. Policies can get out of sync across development, test, and production environments if not managed properly. Keeping them updated is more of a manual process supported by the export/import utility. This deployment requires additional infrastructure nodes and components. Development, test, and production environments must each have a dedicated staging Integration Service host and a dedicated instance of Central Monitoring Administration. The following diagram illustrates the high-level architecture for multiple Central Monitoring Administration instances in the development, test, and production environments. High-level architecture for multiple-server deployment with multiple Presentation Servers and CMA In this architecture, each environment has its own dedicated Central Monitoring Administration instance and a staging Integration Service . All policy application between environments is supported by the policy export/import utility. In previous releases, the parent Infrastructure Management Server in a multiple-server deployment was known as the Central Server. In this release, the Central Server has been replaced by the Presentation Server for most deployments. However, a Central Server is still required for the following deployment use cases: Distributed service models—When the number of CIs in a service model exceeds the maximum limit of CIs allowed in a single server, it is possible to distribute the model across multiple Infrastructure Management Servers. This is optional. Distributed service models are enabled by the web services installed with the Infrastructure Management Servers. The Central Server acts as a single point of entry for users and provides a common point to access to service models. For the deployment architecture, see Infrastructure Management deployment with BMC Atrium CMDB. For information on sizing the deployment, see Sizing charts and guidelines for event and impact management. BMC Cloud Lifecycle Management Integration—The integration allows IT Operations to monitor cloud constructs provisioned by BMC Cloud Lifecycle Management. This integration requires a Central Server to fetch cloud topology from BMC Cloud Lifecycle Management and push cloud constructs to the appropriate Infrastructure Management child servers. For the deployment architecture, see Infrastructure Management deployment integrated with BMC Cloud Lifecyle Management. For information on how to configure a Central Server, see Configuring a BMC TrueSight Infrastructure Management multiple-server deployment. Staging Integration Service host deployment and policy management for development, test, and production best practices
https://docs.bmc.com/docs/display/public/tsim10/Infrastructure+Management+multiple-server+deployment+architecture
2020-09-18T21:09:44
CC-MAIN-2020-40
1600400188841.7
[]
docs.bmc.com
InterSystems IRIS Connector for Power BI This article describes how to work with the InterSystems IRIS Connector for Power BI. It contains these sections: Introduction to the Connector The InterSystems IRIS Connector for Power BI is a custom connector for InterSystems IRIS® data platform. The InterSystems IRIS Connector for Power BI allows you to access and create reports on regular relational tables as well as InterSystems IRIS Business Intelligence cube data from Microsoft Power BI, and includes full DirectQuery support when querying either type of data. The Connector is included with Power BI Desktop, starting with Microsoft’s April 2019 release of Power BI Desktop. Connecting to InterSystems IRIS Prior to connecting to InterSystems IRIS from Power BI Desktop, ensure that you have an InterSystems IRIS ODBC driver installed on your system. In order to connect to InterSystems IRIS from Power BI Desktop, do the following: Open Power BI Desktop and click Get Data > More... > InterSystems IRIS (Beta). Click Connect. Enter connection information for your InterSystems IRIS instance. Here, Host (IP Address) is the IP address of the host for your InterSystems IRIS instance, Port is the instance’s superserver port, and Namespace is the namespace where your Business Intelligence data is located. Accept all other options as default. Upon your first connection to an instance of InterSystems IRIS, an authentication dialog will appear. Choose Basic and enter your InterSystems IRIS credentials. Browse Your Data If you have successfully connected to InterSystems IRIS, Power BI will display the database Navigator dialog. You can browse relational tables by selecting Tables. You can expand packages in the left pane to select tables and/or views that you want to include in your Power BI report. Alternatively, you can view available InterSystems IRIS BI cubes by selecting Cubes in the left pane. Expanding the Cubes option lists all available InterSystems IRIS Business Intelligence cubes in the current namespace. Note that cubes or subject areas with certain features that cannot be supported through SQL access, such as programmatic filters, are excluded from the list. When you expand a cube, you will see the star schema representation of the cube, including regular dimensions and a fact table with all regular measures for the cube. Note that some columns with internal identifiers are removed. Troubleshooting the Connector Missing Tables Symptom There are missing tables in the Navigator. Diagnosis The InterSystems IRIS Connector for Power BI excludes system tables and tables associated with InterSystems IRIS Business Intelligence cubes from the regular Tables menu. Scrubbed and annotated versions of the latter are available through the Cubes menu. If you need access to a table or a field not listed in the Navigator, you can add it manually with a custom query or use Power BI’s generic ODBC connector. Missing Cubes Symptom There are missing cubes in the Navigator. Diagnosis The InterSystems IRIS Connector for Power BI leverages the relational projects of InterSystems Business Intelligence cubes to make them available for use in Power BI. Some cube features, like programmatic filters, cannot be supported through these projections and are therefore left out of the list. Please contact the WRC if you encounter a cube where this behavior is not appropriate. Dimension Hierarchy Won’t Show Up in the Report Designer Symptom A dimension hierarchy is not showing up in the report designer. 
Diagnosis Power BI does not currently allow seeding dimension information from a connector. Multilevel Dimension Hierarchy Not Functioning Correctly Symptom A multilevel dimension hierarchy is not functioning correctly. Diagnosis When a dimension has multiple levels, these levels are usually represented by separate dimension tables (snowflake schema). While foreign key relationships exist between the fact table and each dimension level and between the different levels of the dimension, Power BI can only choose one path from a fact table to a higher dimension level as the “active relationship”, and may choose the wrong one, leading to unexpected query results. To fix the active relationship, click Manage Relationships in Power BI Desktop and de-activate the direct links between a fact table and higher-level dimension tables. Then, activate the correct relationships one by one. For more information, see the Microsoft documentation.
https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=APOWER
2020-09-18T19:54:51
CC-MAIN-2020-40
1600400188841.7
[]
docs.intersystems.com
Export Messages When you want to use message data in another format (e.g. in a spreadsheet), you can export messages exchanged between you and your contacts to a CSV file. For large contact lists, you might want to export messages for each contact status as a separate file or limit the date range you export. For Business Suite and Professional users, please see your site owner (broker or team leader) to export messages. To Export Messages - Log into the Market Leader Admin interface. - In the navigation list, click on Contacts. - In the action links, click Import/Export. - Click the Export Messages tab. The Export Messages text includes a description of the fields and order of the exported data. - In the Contact Status drop down list, select a status type. - Click the All Time or Choose a date range radio option. For the Choose a date range option, enter valid dates in the From and To fields. - Click Export. - If the browser prompts you, depending on your preference, choose to save or open the CSV file.
http://docs.marketleader.com/display/ohml/Export+Messages
2020-09-18T19:45:54
CC-MAIN-2020-40
1600400188841.7
[]
docs.marketleader.com
We want to enable guest users for a particular domain to log in with their G Suite accounts. We set up the direct federation, but invitations are not redeeming. We can see that when the user accepts the invitation, the user is passed to G Suite, authenticated, and passed back to Azure, but then gets the message: Invitation redemption failed An error has occurred. Please retry again shortly. It seems that the SAML response from G Suite to Azure is broken. Either the SAML response is malformed or Azure isn't processing the response correctly. Any ideas?
https://docs.microsoft.com/en-us/answers/questions/12342/setup-of-g-suite-idp-for-saml-direct-federation-fo.html
2020-09-18T21:26:04
CC-MAIN-2020-40
1600400188841.7
[]
docs.microsoft.com
Session.commitTransaction() Definition Session.commitTransaction() New in version 4.0. Saves the changes made by the operations in the multi-document transaction and ends the transaction. Availability Write Concern When committing the transaction, the session uses the write concern specified at the transaction start. See Session.startTransaction(). If you commit using "w: 1" write concern, your transaction can be rolled back during the failover process. Atomicity. Retryable If the commit operation encounters an error, MongoDB drivers retry the commit operation a single time regardless of whether retryWrites is set to false. For more information, see Transaction Error Handling.
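To place the method in context, here is a minimal mongo shell sketch; it assumes a replica set (required for multi-document transactions in MongoDB 4.0) and uses made-up database and collection names.

// start a session and a transaction with an explicit write concern
var session = db.getMongo().startSession();
session.startTransaction({ writeConcern: { w: "majority" } });
var orders = session.getDatabase("test").orders;
try {
    orders.insertOne({ sku: "abc123", qty: 1 });
    // saves the changes made in the transaction and ends it
    session.commitTransaction();
} catch (e) {
    // discard the changes if anything failed before the commit
    session.abortTransaction();
    throw e;
} finally {
    session.endSession();
}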
https://docs.mongodb.com/manual/reference/method/Session.commitTransaction/
2020-09-18T21:01:53
CC-MAIN-2020-40
1600400188841.7
[]
docs.mongodb.com
Script Editor View Menu The Script Editor view menu gives you access to all of the commands needed for editing and testing scripts in the Script Editor view—see Script Editor View. - Add the Script Editor view to your workspace by doing one of the following: - In the top-right corner of an existing view, click on the Add View button and select Script Editor. - In the top menu, select Windows > Script Editor. - In the top-left corner of the Script Editor view, click on the Menu button.
https://docs.toonboom.com/help/harmony-17/premium/reference/menu/view/script-editor-view-menu.html
2020-09-18T21:11:22
CC-MAIN-2020-40
1600400188841.7
[array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
ApisCP provides an automated migration system to assist you in moving accounts from system to system and platform to platform. There are a few prerequisites to confirm before migrating: - ✅ You have root (administrative) access on both servers - ✅ Public key authentication is configured on the destination server - ✅ Both source + destination run ApisCP - ✅ Source + destination server have DNS configured (optional) As long as the account fits this checklist, you're golden! To initiate migration, issue the following command: apnscp_php /usr/local/apnscp/bin/scripts/transfersite.php -s <newserver> <domain> Where <newserver> is the destination server (a fully-qualified domain isn't necessary) and <domain> is the domain name, site identifier (siteXX), or invoice grouping to migrate. For example, if a server has 3 accounts run by Ted with the billing grouping "tedsite" via billing,invoice / billing,parent_invoice in its service definition, then the following will kick off 3 migrations in serial for Ted's sites to the server with the hostname newsvr.mydomain.com: apnscp_php /usr/local/apnscp/bin/scripts/transfersite.php -s newsvr.mydomain.com tedsite As long as DNS is configured for the site and the target server has the same DNS provider configured, then migration will be fully automated with an initial stage to prep files and give a 24 hour window to preview the domain. If DNS isn't configured for a site (dns,provider=builtin), then an optional parameter, --stage, can be provided to set the migration stages. - Stage 0: initial creation - Stage 1: second sync - Stage 2: site completed Stage 2 is a no-op; the site is considered migrated. apnscp_php bin/scripts/transfersite.php --stage=0 -s newsvr.mydomain.com tedsite To skip creation of the site, for example if an intermediate stage fails, --no-create can be specified to skip creation when the stage is 0. # Migration components ApisCP migrates sites by component. Available components may be enumerated using --components. Stages - users - passwords - sql_databases - sql_users - mysql_users - pgsql_users - sql_schema - addon_domains - subdomains - mailing_lists - files - mailboxes - crons - vmount - dns - http_custom_config - letsencrypt_ssl - mysql_schema - pgsql_schema Some components accept arguments, such as files, in which case typical ApisCP syntax applies. Component arguments are delimited by a comma: apnscp_php bin/scripts/transfersite.php --do=files,'[/var/www]' mydomain.com Reruns file migration on /var/www for mydomain.com. Upon completion the stage won't be updated. Multiple stages can be run by specifying --do multiple times. apnscp_php bin/scripts/transfersite.php --do=addon_domains --do=subdomains mydomain.com # Overriding configuration Site configuration can be overridden during stage 0 (account creation). This is useful, for example, if you are changing VPS providers while retaining the respective provider's DNS service. -c is used to specify site parameters, as is commonly repeated in cPanel imports or site creation. apnscp_php bin/scripts/transfersite.php -c='dns,provider=linode' -c='dns,key=abcdef1234567890' mydomain.com On the source server, mydomain.com may continue to use DigitalOcean as its DNS provider while on the target server mydomain.com will use Linode's DNS provider. Once mydomain.com completes its initial stage (stage 0), be sure to update the nameservers for mydomain.com. # Skipping suspension An account after migration completes is automatically suspended on the source side.
In normal operation, this poses no significant complications as DNS TTL is reduced to 2 minutes or less during stage one migration. --no-suspend disables suspension following a successful migration. # Migration internals ApisCP uses DNS + atd to manage migration stages. A TXT record named __acct_migration with the unix timestamp is created on the source server. This is used internally by ApisCP to track migration. ApisCP creates an API client on both the target and source servers. A 24 hour delay is in place between migration stages to allow DNS to propagate and sufficiently prep, including preview, for a final migration. This delay can be bypassed by specifying --force. All resolvers obey TTL, so don't force a migration until the minimum TTL time has elapsed! Migration TTLs are adjusted on the target server to 60 seconds. If you are changing DNS providers during migration, this will allow you to make nameserver changes without affecting your site. On its inital migration (stage 0), ApisCP copies all DNS records verbatim to the target. At the end of the second migration stage (stage 1), all records that match your old hosting IP address are updated to your new IP address. All other records are not altered. Additionally, __acct_migration is removed from the source DNS server and account put into a suspended state. When both source and target share the same nameserver, only TTL is reflected at the end of stage 0 and IP address changed at the end of stage 1. At the end of stage 1, TTL is reset to the default TTL setting. TIP Setting records, TTL adjustments on the target machine allows you to proactively update nameservers before a migration finalizes if you are unable to modify DNS records on the source machine. The initial records during stage 1 will reflect the source server while stage 2 records reflect the target server. # Further reading - Migrating to another server (kb.apiscp.com)
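Putting the pieces documented above together, an end-to-end run for a hypothetical domain could look like the following; the hostnames, domain, and DNS key are placeholders, and only flags described on this page are used.

# stage 0: create the site on the target and run the first sync, overriding the DNS provider
apnscp_php /usr/local/apnscp/bin/scripts/transfersite.php -s newsvr.mydomain.com \
  -c='dns,provider=linode' -c='dns,key=abcdef1234567890' mydomain.com

# rerun a single component later if needed (here: files under /var/www)
apnscp_php /usr/local/apnscp/bin/scripts/transfersite.php --do=files,'[/var/www]' mydomain.com

# once the TTL window has passed, finish without waiting the full 24 hours
# and leave the source account unsuspended
apnscp_php /usr/local/apnscp/bin/scripts/transfersite.php -s newsvr.mydomain.com \
  --force --no-suspend mydomain.com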
https://docs.apnscp.com/admin/Migrations%20-%20server/
2020-09-18T20:06:09
CC-MAIN-2020-40
1600400188841.7
[]
docs.apnscp.com
DynamicFont Inherits: Font < Resource < Reference < Object DynamicFont renders vector font files at runtime. Description DynamicFont renders vector font files (such as TTF or OTF) at runtime and supports fallback fonts, which will be used when displaying a character not supported by the main font. DynamicFont uses the FreeType library for rasterization. var dynamic_font = DynamicFont.new() dynamic_font.font_data = load("res://BarlowCondensed-Bold.ttf") dynamic_font.size = 64 $"Label".set("custom_fonts/font", dynamic_font) Note: DynamicFont doesn't support features such as kerning. Enumerations enum SpacingType: - SPACING_TOP = 0 --- Spacing at the top. - SPACING_BOTTOM = 1 --- Spacing at the bottom. - SPACING_CHAR = 2 --- Character spacing. - SPACING_SPACE = 3 --- Space spacing. Property Descriptions Extra spacing at the bottom in pixels. Extra character spacing in pixels. Extra space spacing in pixels.
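Building on the snippet above, one or more fallback fonts can be attached to the same DynamicFont; the font paths here are assumptions for illustration.

var dynamic_font = DynamicFont.new()
dynamic_font.font_data = load("res://BarlowCondensed-Bold.ttf")
dynamic_font.size = 64
# characters missing from the main font are drawn with this fallback instead
dynamic_font.add_fallback(load("res://NotoSansCJKsc-Regular.otf"))
$"Label".set("custom_fonts/font", dynamic_font)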
https://docs.godotengine.org/fi/latest/classes/class_dynamicfont.html
2020-09-18T20:56:26
CC-MAIN-2020-40
1600400188841.7
[]
docs.godotengine.org