tcom -- Access COM objects from Tcl

The tcom package provides commands to access COM objects through IDispatch and IUnknown derived interfaces. These commands return a handle representing a reference to a COM object through an interface pointer. The handle can be used as a Tcl command to invoke operations on the object. In practice, you should store the handle in a Tcl variable or pass it as an argument to another command. References to COM objects are automatically released. If you store the handle in a local variable, the reference is released when execution leaves the variable's scope. If you store the handle in a global variable, you can release the reference by unsetting the variable, setting the variable to another value, or exiting the Tcl interpreter.

The createobject subcommand creates an instance of the object (see the short Tcl sketch at the end of this section). The -inproc option requests the object be created in the same process. The -local option requests the object be created in another process on the local machine. The -remote option requests the object be created on a remote machine. The progID parameter is the programmatic identifier of the object class. Use the -clsid option if you want to specify the class using a class ID instead. The hostName parameter specifies the machine where you want to create the object instance. The getactiveobject subcommand gets a reference to an already existing object. The getobject subcommand returns a reference to a COM object from a file. The pathName parameter is the full path and name of the file containing the object.

This command compares the interface pointers represented by two handles for COM identity, returning 1 if the interface pointers refer to the same COM object, or 0 if not.

This command invokes a method on the object represented by the handle. The return value of the method is returned as a Tcl value. A Tcl error will be raised if the method returns a failure HRESULT code. Parameters with the [in] attribute are passed by value. For each parameter with the [out] or [in, out] attributes, pass the name of a Tcl variable as the argument. After the method returns, the variables will contain the output values. In some cases where tcom cannot get information about the object's interface, you may have to use the -method option to specify you want to invoke a method. Use the -namedarg option to invoke a method with named arguments. This only works with objects that implement IDispatch. You specify arguments by passing name and value pairs.

This command gets or sets a property of the object represented by the handle. If you supply a value argument, this command sets the named property to the value; otherwise it returns the property value. For indexed properties, you must specify one or more index values. The command raises a Tcl error if you specify an invalid property name or if you try to set a value that cannot be converted to the property's type. In some cases where tcom cannot get information about the object's interface, you may have to use the -get or -set option to specify you want to get or set a property, respectively.

The ::tcom::foreach command implements a loop where the loop variable(s) take on values from a collection object represented by collectionHandle. In the simplest case, there is one loop variable, varname. The body argument is a Tcl script. For each element of the collection, the command assigns the contents of the element to varname, then calls the Tcl interpreter to execute body. In the general case, there can be more than one loop variable.
During each iteration of the loop, the variables of varlist are assigned consecutive elements from the collection. Each element is used exactly once. The total number of loop iterations is large enough to use up all the elements from the collection. On the last iteration, if the collection does not contain enough elements for each of the loop variables, empty values are used for the missing elements. The break and continue statements may be invoked inside body, with the same effect as in the for command. The ::tcom::foreach command returns an empty string.

The ::tcom::bind command specifies a Tcl command that will be executed when events are received from an object. The command will be called with additional arguments: the event name and the event arguments. By default, the event interface is the default event source interface of the object's class. Use the eventIID parameter to specify the IID of another event interface. If an error occurs while executing the command, then the bgerror mechanism is used to report the error. This command tears down all event connections to the object that were set up by the ::tcom::bind command.

Objects that implement the IDispatch interface allow some method parameters to be optional. This command returns a token representing a missing optional argument. In practice, you would pass this token as a method argument in place of a missing optional argument.

This command returns a handle representing a description of the interface exposed by the object. The handle supports the following commands. This command returns an interface identifier code. This command returns a list of method descriptions for methods defined in the interface. Each method description is a list: the first element is the member ID, the second element is the return type, the third element is the method name, and the fourth element is a list of parameter descriptions. This command returns the interface's name. This command returns a list of property descriptions for properties defined in the interface. Each property description is a list: the first element is the member ID, the second element is the property read/write mode, the third element is the property data type, and the fourth element is the property name. If the property is an indexed property, there is a fifth element which is a list of parameter descriptions.

This command sets and retrieves options for the package. This option sets the concurrency model, which can be apartmentthreaded or multithreaded. The default is apartmentthreaded. You must configure this option before performing any COM operations such as getting a reference to an object. After a COM operation has been done, changing this option has no effect.

Use the ::tcom::import command to convert type information from a type library into Tcl commands to access COM classes and interfaces. The typeLibrary argument specifies a type library file. By default, the commands are defined in a namespace named after the type library, but you may specify another namespace by supplying a namespace argument. This command returns the library name stored in the type library file. For each class in the type library, ::tcom::import defines a Tcl command with the same name as the class. The class command creates an object of the class and returns a handle representing an interface pointer to the object. The command accepts an optional hostName argument to specify the machine where you want to create the object. You can use the returned handle to invoke methods and access properties of the object.
In practice, you should store this handle in a Tcl variable or pass it as an argument to a Tcl command. For each interface in the type library, ::tcom::import defines a Tcl command with the same name as the interface. The interface command queries the object represented by handle for an interface pointer to that specific interface. The command returns a handle representing the interface pointer. You can use the returned handle to invoke methods and access properties of the object. In practice, you should store this handle in a Tcl variable or pass it as an argument to a Tcl command.

The ::tcom::import command generates a Tcl array for each enumeration defined in the type library. The array name is the enumeration name. To get an enumerator value, use an enumerator name as an index into the array.

Each Tcl value has two representations: a string representation, and an internal representation that can be manipulated more efficiently. For example, a Tcl list is represented as an object that holds the list's string representation as well as an array of pointers to the objects for each list element. The two representations are a cache of each other and are computed lazily. That is, each representation is only computed when necessary, is computed from the other representation, and, once computed, is saved. The internal representations built into Tcl include boolean, integer and floating point types.

When invoking COM object methods, tcom tries to convert each Tcl argument to the parameter type specified by the method interface. For example, if a method accepts an int parameter, tcom tries to convert the argument to that type. If the parameter type is a VARIANT, the conversion has an extra complication because a VARIANT is designed to hold many different data types. One approach might be to simply copy the Tcl value's string representation to a string in the VARIANT and hope the method's implementation can correctly interpret the string, but this doesn't work in general because some implementations expect certain VARIANT types. Tcom uses the Tcl value's internal representation type as a hint to choose the resulting VARIANT type.

[Table: Tcl value to VARIANT mapping]

The internal representation of a Tcl value may become significant when it is passed to a VARIANT parameter of a method. For example, the standard interface for COM collections defines the Item method for getting an element by specifying an index. Many implementations of the method allow the index to be an integer value (usually based from 1) or a string key. If the index parameter is a VARIANT, you must account for the internal representation type of the Tcl argument passed to that parameter. A command such as

    set element [$collection Item "1"]

passes a string consisting of the single character "1" to the Item method. The method may return an error because it can't find an element with that string key. Now consider the following loop:

    for {set i 1} {$i <= $numElements} {incr i} {    ;# line 1
        set element [$collection Item $i]             ;# line 2
    }

In line 1, the for command sets the internal representation of $i to an int type as a side effect of evaluating the condition expression {$i <= $numElements}. The command in line 2 passes the integer value in $i to the Item method, which should succeed if the method can handle integer index values.
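To make the handle-based workflow concrete, here is a minimal Tcl sketch (hedged: it assumes Microsoft Excel is installed, and the Visible, Workbooks, Add and Name members come from the standard Excel automation object model rather than from the tcom documentation above):

    package require tcom

    ;# Create an Excel instance and obtain a handle to it.
    set application [::tcom::ref createobject "Excel.Application"]
    $application Visible 1                    ;# set a property
    set workbooks [$application Workbooks]    ;# read a property that returns an object
    $workbooks Add                            ;# invoke a method
    ::tcom::foreach workbook $workbooks {     ;# iterate a COM collection
        puts [$workbook Name]
    }
    unset application                         ;# unsetting the variable releases the reference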
http://docs.activestate.com/activetcl/8.4/tcom/tcom.n.html
Documentation Working Group/Intro

Documentation Working Group

The Documentation Team is part of the Production Working Group. It is responsible for creating and maintaining all the official Joomla! documentation, including that for end-users, administrators and third-party developers. We also maintain the help screens for the Joomla! releases. The official documentation language is British English. Translations of the official documentation into other languages are the responsibility of the Documentation Translation Team (JDT).
https://docs.joomla.org/Documentation_Working_Group/Intro
Cache Directive

A cache directive defines a path that should be cached by HDFS. Paths can be either directories or files.
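For illustration, cache directives are managed with the hdfs cacheadmin command-line tool (the path and pool names below are placeholders):

    # Add a cache directive for a directory to an existing cache pool
    hdfs cacheadmin -addDirective -path /user/hive/warehouse/sales -pool sales-pool

    # List the current cache directives
    hdfs cacheadmin -listDirectives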
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.3.0/bk_hdfs_admin_tools/content/caching_terminology.html
Mixed Ordered and Orderless Data

Issues with using deltaxml:ordered="false"

The deltaxml:ordered="false" attribute indicates that the element is orderless, i.e. its child elements can appear in any order and the order does not affect the meaning. When XML Compare compares such elements, it treats the child elements as a set rather than an ordered list. This creates a problem when only some of the elements are orderless. For example, you might have a list of phone numbers in a contact record like this:

    <records>
      <contact>
        <name>John Smith</name>
        <addressLine>25 Green Lane</addressLine>
        <addressLine>London</addressLine>
        <addressLine>UK</addressLine>
        <phone type="office">+44 200 1234 567</phone>
        <phone type="fax">+44 200 1234 568</phone>
        <phone type="mobile">+44 200 1234 569</phone>
      </contact>
      ...
    </records>

If we place a deltaxml:ordered="false" attribute on the contact record, then all the elements within it will be considered to occur in any order. In this case, we consider the addressLine elements to be ordered and only the phone elements to be orderless, so adding this attribute will not work.

Managing ordered and orderless sub-elements

There are a number of ways to handle the combination of ordered and orderless sub-elements correctly. One would be to create a new element to contain all the phone elements and then assign deltaxml:ordered="false" to this element; the container can be stripped out later, after we have finished the comparison (a sketch of this container approach appears at the end of this section). The problem with this solution is that it introduces false elements into the input files which then need to be stripped out later.

Another method is simply to sort these elements before the comparison process, based on some suitable value. In this case, the type attribute provides a possible value. This can be achieved with XSLT code along the following lines:

    <xsl:template match="phone[1]">
      <xsl:for-each select="../phone">
        <xsl:sort select="@type"/>
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:for-each>
    </xsl:template>

    <xsl:template match="phone"/>

What is happening here is that as soon as the first phone element is found, all phone elements are collected together, sorted and output in the sorted order. The second template ensures that phone elements are not duplicated in the output file.

We could also choose to make the type attribute into a key so that phone elements with the same type will always be compared - this would be a good idea if the type value is unique in the context of the contact element. This is achieved as follows:

    <xsl:template match="phone[1]">
      <xsl:for-each select="../phone">
        <xsl:sort select="@type"/>
        <xsl:call-template name="copyWithKey">
          <xsl:with-param name="key" select="@type"/>
        </xsl:call-template>
      </xsl:for-each>
    </xsl:template>

And the template for copying and adding the key would be:

    <!-- requires the deltaxml namespace to be declared on the stylesheet -->
    <xsl:template name="copyWithKey">
      <xsl:param name="key"/>
      <xsl:variable name="keyValue" select="$key"/>
      <xsl:copy>
        <xsl:if test="string($keyValue)">
          <xsl:attribute name="deltaxml:key">
            <xsl:value-of select="$keyValue"/>
          </xsl:attribute>
        </xsl:if>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:template>

Many variations of this are possible. In this example, all phone elements are sorted, but we could select only those in the contact element. Another variation would be to have a secondary key to be used if the primary one was not present. The advantage of this approach is that minimal changes are made to the input file and therefore it is easier to understand the differences generated.
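For the container-element approach mentioned above, the input would be restructured along these lines before comparison (a hedged sketch; the phoneNumbers wrapper name is invented for illustration):

    <contact>
      <name>John Smith</name>
      <addressLine>25 Green Lane</addressLine>
      <addressLine>London</addressLine>
      <addressLine>UK</addressLine>
      <phoneNumbers deltaxml:ordered="false">
        <phone type="office">+44 200 1234 567</phone>
        <phone type="fax">+44 200 1234 568</phone>
        <phone type="mobile">+44 200 1234 569</phone>
      </phoneNumbers>
    </contact>

The wrapper keeps the addressLine elements ordered while letting the phone elements be compared as a set; it must then be stripped out again after the comparison, which is exactly the extra bookkeeping the sorting approach avoids.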
https://docs.deltaxml.com/xml-compare/11.0/Mixed-Ordered-and-Orderless-Data.2666145225.html
Setting up the database Opening a database Open statement To open a new database, use DB.open: Opening a database val db = DB.open("path/to/db") By default, Kodein-DB will create the database if it does not exist. If you want to modify this behaviour, you can use: OpenPolicy.Open: fails if the database does not already exist OpenPolicy.Create: fails if the database already exists Opening an existing database val db = DB.open("path/to/db", OpenPolicy.Open) Defining the serializer If you are targeting JVM only, then Kodein-DB will find the serializer by itself, so you don’t need to define it. However, when targeting Multiplatform, you need to define the KotlinX serializer and the serialized classes manually: Opening an existing database val db = DB.open("path/to/db", KotlinxSerializer { (1) +User.serializer() (2) +Address.serializer() (2) } )
https://docs.kodein.org/kodein-db/0.6/core/setup-database.html
New Seasonal Anomalies package feature: Historical Reference Data

New feature for the meteoblue Packages API (Seasonal Anomalies Forecast): Historical Reference Data. The Seasonal Anomalies Forecast data package now contains the mean climate reference from the seasonal ECMWF forecast, to which the corresponding ECMWF seasonal anomalies may be applied. More information about this package can be found here.
https://docs.meteoblue.com/blog/2021/11/19/new-seasonal-anomalies-package-feature-historical-reference-data
Read Binary Files#

The Read Binary Files node is used to read multiple files from the host machine.

- File Selector field: Specifies the type of files to be read. For example, *.jpg.
- Property Name field: Name of the binary property to which to write the data of the read files.

It is also possible to select files from a certain directory, by specifying the path in the File Selector field. For example, /data/folder/*.jpg.

Example Usage#

This workflow allows you to read multiple files from the host machine using the Read Binary Files node. You can also find the workflow on the website. This example usage workflow would use the following two nodes.

- Start
- Read Binary Files

The final workflow should look like the following image.

1. Start node#

The start node exists by default when you create a new workflow.

2. Read Binary Files node#

- Enter the type of files you want to read in the File Selector field.
- Click on Execute Node to run the workflow.
https://docs.n8n.io/integrations/core-nodes/n8n-nodes-base.readbinaryfiles/
Changing an integration system icon

Change the icon of an integration system to differentiate it from the other integration systems that your data model uses. Using an icon that is easily recognizable as representing a system makes the system easy to find, for example, on the Integration Designer dashboard. The following file formats are supported: .svg (preferred), .jpg, .jpeg, and .png.

- In the navigation panel of Dev Studio, click.
- Locate the system whose icon you want to change, and then click its row in the table.
- Click Change to select a different icon to represent this system. Either:
  - Select one of the displayed icons and click Change icon, or
  - Click Choose File to upload a custom icon:
    - Navigate to the icon file that you want to upload.
    - Click Open.
- Click Save.
https://docs.pega.com/data-management-and-integration/84/changing-integration-system-icon
Before starting out with the Unity Editor, you might want to familiarize yourself with the platforms that you can create your projects in. Unity supports most of the leading desktop, web, and mobile platforms. Note: If you are a developer with access to Closed platforms, you may see more platform options when you download and register your preferred platform setup. For further information, see Platform Module Installation and the Unity forums. For a full list of the platforms Unity supports and their system requirements, see the System Requirements documentation.
https://docs.unity3d.com/cn/2021.1/Manual/PlatformSpecific.html
Tenant API overview

Contember has built-in user and permissions management; this part is called the Tenant API. There is always a single Tenant API running on an instance, exposed as a GraphQL API.

Terms

- Identity - Holds information about roles and project memberships.
- Person - Has an assigned identity and credentials (email and password), which are used to authenticate and claim that identity.
- Authorization key/token - Represents a permanent (for applications) or session (for users) authorization of a particular identity. It is verified using a Bearer token.

Authorization tokens

Like the Content API, the Tenant API also needs an authorization token for each request - even for a login. The login key is defined using the CONTEMBER_LOGIN_TOKEN env variable. For local development, you can find this key in docker-compose.yaml. You use this token for the sign-in (using either email/password or IdP) and password reset mutations. Besides special tokens like the login token, there are two basic kinds of authorization tokens:

- a permanent API token, for e.g. applications where you don't authenticate users. You can generate it using Tenant API mutations.
- a session token, which a user obtains after signing in. You use it e.g. in administration for each action that the given user makes.

Which token should I use?

It is sometimes a bit confusing which token should be used for an action, so let's walk through an example: you, as a project administrator, want to create an API token for an application so that the application can read data from the Content API.

1) Find the login token.
2) Use this token as a Bearer token and run the sign-in mutation.
3) You will receive another token; this is your session token with limited validity.
4) Run the createApiKey mutation against the Tenant API, but now with your personal session token.
5) The mutation returns a new permanent token with the permissions you have set.
6) Now you can use this permanent API token to run queries against the Content API.
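A hedged sketch of steps 2-3 in GraphQL (the createApiKey mutation is named in the text above; the signIn field shapes shown here are illustrative assumptions, not the exact Tenant API schema):

    # Sent with the login token as the Bearer token:
    mutation {
      signIn(email: "admin@example.com", password: "secret") {
        ok
        result {
          token   # the session token with limited validity
        }
      }
    }

Step 4 then runs the createApiKey mutation the same way, but with the session token in the Bearer header.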
https://docs.contember.com/reference/engine/tenant/overview/
AppointmentBaseCollection Class

A collection of appointments.

Namespace: DevExpress.XtraScheduler
Assembly: DevExpress.XtraScheduler.v21.2.Core.dll

Declaration

    public class AppointmentBaseCollection : SchedulerCollectionBase<Appointment>

    Public Class AppointmentBaseCollection Inherits SchedulerCollectionBase(Of Appointment)

Remarks

The AppointmentBaseCollection class represents a collection of appointments. This collection is returned by various properties and methods (SchedulerControl.SelectedAppointments, SchedulerStorageBase.GetAppointments, Appointment.GetExceptions, AppointmentConflictsCalculator.CalculateConflicts, etc.); see the short usage sketch at the end of this section.

Related GitHub Examples

The following code snippets (auto-collected from DevExpress Examples) contain references to the AppointmentBaseCollection class. Note: The algorithm used to collect these code examples remains a work in progress. Accordingly, the links and snippets below may produce inaccurate results. If you encounter an issue with the code examples below, please use the feedback form on this page to report the issue.
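A brief usage sketch (hedged: it assumes a WinForms SchedulerControl named schedulerControl1; the member names follow the documented XtraScheduler API):

    // Iterate the currently selected appointments.
    AppointmentBaseCollection selected = schedulerControl1.SelectedAppointments;
    foreach (Appointment apt in selected)
    {
        Console.WriteLine("{0}: {1} - {2}", apt.Subject, apt.Start, apt.End);
    }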
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraScheduler.AppointmentBaseCollection
GlobalProtect for Internal HIP Checking and User-Based Access

When used in conjunction with User-ID and/or HIP checks, an internal gateway provides a secure, accurate method of identifying and controlling traffic by user and/or device state, replacing other network access control (NAC) services. Internal gateways are useful in sensitive environments that require authenticated access to critical resources. In a configuration with only internal gateways, all endpoints must be configured with User-Logon (Always On); On-Demand mode is not supported. It is also recommended that you configure all client configurations to use single sign-on (SSO). In addition, since internal hosts do not need to establish a tunnel connection with the gateway, the IP address of the physical network adapter on the endpoint is used.

In this quick config, the internal gateways enforce group-based policies that allow users in the Engineering group access to the internal source control and bug databases and users in the Finance group access to the CRM applications. All authenticated users have access to internal web resources. In addition, HIP profiles configured on the gateway check each host to ensure compliance with internal maintenance requirements, such as whether the latest security patches are installed, whether disk encryption is enabled, or whether the required software is installed.

Use the following steps to configure a GlobalProtect internal gateway.

- In this configuration, you must set up interfaces on each firewall hosting a portal and/or a gateway. Because this configuration uses internal gateways only, you must configure the portal and gateways on interfaces in the internal network. Use the default virtual router for all interface configurations to avoid creating inter-zone routing. On each firewall hosting a portal/gateway:
  - Select an Ethernet port to host the portal/gateway, and then configure a Layer 3 interface with an IP address in the l3-trust Security Zone (Network > Interfaces > Ethernet).
  - Enable User Identification on the l3-trust zone.
- If any of your end users will be accessing the GlobalProtect app on their mobile devices, or if you plan on using HIP-enabled security policy, purchase and install a GlobalProtect subscription for each firewall hosting an internal gateway. After you purchase the GlobalProtect subscriptions and receive your activation code, install the GlobalProtect subscriptions on the firewalls hosting your gateways, as follows (contact your Palo Alto Networks Sales Engineer or Reseller if you do not have the required licenses; for more information on licensing, see About GlobalProtect Licenses):
  - Select Device > Licenses.
  - Select Activate feature using authorization code.
  - When prompted, enter the Authorization Code, and then click OK.
  - Verify that the license was activated successfully.
- Obtain server certificates for the GlobalProtect portal and each GlobalProtect gateway. In order to connect to the portal for the first time, the endpoints must trust the root CA certificate used to issue the portal server certificate. You can either use a self-signed certificate on the portal and deploy the root CA certificate to the endpoints before the first portal connection, or obtain a server certificate for the portal from a trusted CA. You can use self-signed certificates on the gateways. The recommended workflow is as follows:
  - On the firewall hosting the portal:
  - On each firewall hosting an internal gateway, deploy the self-signed server certificates.
- Define how you will authenticate users to the portal and gateways. You can use any combination of certificate profiles and/or authentication profiles as necessary to ensure the security of your portal and gateways. Portals and individual gateways can also use different authentication schemes. See the following sections for step-by-step instructions; you must then reference the certificate profile and/or authentication profiles that you defined in the portal and gateway configurations.
  - Set Up External Authentication (authentication profile)
  - Set Up Client Certificate Authentication (certificate profile)
  - Set Up Two-Factor Authentication (token- or OTP-based)
- Create the HIP profiles you need to enforce security policies on gateway access.
  - Create the HIP objects to filter the raw host data collected by the app. For example, if you want to prevent users that are not up-to-date with required patches from connecting, you might create a HIP object to match on whether the patch management software is installed and that all patches with a given severity are up-to-date.
  - For example, if you want to ensure that only Windows users with up-to-date patches can access your internal applications, you might attach a HIP profile that matches hosts that do NOT have a missing patch.
- Configure the internal gateways. Select Network > GlobalProtect > Gateways, and then select an existing internal gateway or Add a new gateway. Configure the following gateway settings (it is not necessary to configure the client settings in the gateway configurations, unless you want to set up HIP notifications, because tunnel connections are not required; see Configure a GlobalProtect Gateway for step-by-step instructions on creating the gateway configurations):
  - Interface
  - IP Address
  - Server Certificate
  - Authentication Profile and/or Configuration Profile
- Configure the GlobalProtect portals. Although all of the previous configurations can use the User-logon (Always On) or On-demand (Manual user initiated connection) connect methods, an internal gateway configuration must always be on, and therefore requires the User-logon (Always On) connect method. Select Network > GlobalProtect > Portals, and then select an existing portal or Add a new portal. Configure the portal as follows:
  - App Updates on the Portal.
- Create the HIP-enabled and/or user/group-based security rules on your gateway(s). Add the following security rules for this example:
  - Select Policies > Security, and click Add.
  - On the Source tab, set the Source Zone to l3-trust.
  - On the User tab, add the HIP profile and user/group to match.
  - Click Add in the HIP Profiles area, and select the MissingPatch HIP profile.
  - Add the Source User group (Finance or Engineering, depending on which rule you are creating).
  - Click OK to save the rule.
- Commit the configuration.
https://docs.paloaltonetworks.com/globalprotect/10-0/globalprotect-admin/globalprotect-quick-configs/globalprotect-for-internal-hip-checking-and-user-based-access
Random Numbers

randomSeed()

    randomSeed(newSeed);

Parameters: newSeed - the new random seed

The pseudorandom numbers produced by the firmware are derived from a single value - the random seed. The value of this seed fully determines the sequence of random numbers produced by successive calls to random(). Using the same seed on two separate runs will produce the same sequence of random numbers, and in contrast, using different seeds will produce a different sequence of random numbers.

On startup, the default random seed is set by the system to 1. Unless the seed is modified, the same sequence of random numbers would be produced each time the system starts. Fortunately, when the device connects to the cloud, it receives a very randomized seed value, which is used as the random seed. So you can be sure the random numbers produced will be different each time your program is run.

Disable random seed from the cloud

When the device receives a new random seed from the cloud, it's passed to this function:

    void random_seed_from_cloud(unsigned int seed);

The system implementation of this function calls randomSeed() to set the new seed value. If you don't wish to use random seed values from the cloud, you can take control of the random seeds set by adding this code to your app:

    void random_seed_from_cloud(unsigned int seed) {
        // don't do anything with this. Continue with existing seed.
    }

In the example, the seed is simply ignored, so the system will continue using whatever seed was previously set. In this case, the random seed will not be set from the cloud, and setting the seed is left up to you.
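If you would rather combine the cloud seed with local entropy instead of discarding it, a hedged variation (micros() is part of the standard Wiring API available in the Particle firmware):

    void random_seed_from_cloud(unsigned int seed) {
        // Mix the cloud-supplied seed with the microsecond counter before applying it.
        randomSeed(seed ^ micros());
    }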
https://docs.particle.io/cards/firmware/random-numbers/randomseed/
IMFSample::ConvertToContiguousBuffer method

Converts a sample with multiple buffers into a sample with a single buffer.

Syntax

    HRESULT ConvertToContiguousBuffer(
      IMFMediaBuffer **ppBuffer
    );

Parameters

ppBuffer - Receives a pointer to the IMFMediaBuffer interface. The caller must release the interface.

Return Value

The method returns an HRESULT. Possible values include, but are not limited to, those in the following table.

Remarks

If the sample contains more than one buffer, this method copies the data from the original buffers into a new buffer, and replaces the original buffer list with the new buffer. The new buffer is returned in the ppBuffer parameter. If the sample contains a single buffer, this method returns a pointer to the original buffer. In typical use, most samples do not contain multiple buffers.
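A short usage sketch (hedged: error handling is abbreviated, and pSample is assumed to be a valid IMFSample obtained elsewhere):

    IMFMediaBuffer *pBuffer = NULL;
    HRESULT hr = pSample->ConvertToContiguousBuffer(&pBuffer);
    if (SUCCEEDED(hr))
    {
        BYTE *pData = NULL;
        DWORD cbCurrent = 0;
        hr = pBuffer->Lock(&pData, NULL, &cbCurrent);  // map the contiguous data
        if (SUCCEEDED(hr))
        {
            // ... process cbCurrent bytes at pData ...
            pBuffer->Unlock();
        }
        pBuffer->Release();  // the caller must release the interface
    }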
https://docs.microsoft.com/en-us/windows/win32/api/mfobjects/nf-mfobjects-imfsample-converttocontiguousbuffer
Thread Pools

Thread Pool Architecture

The following applications can benefit from using a thread pool:

- An application that is highly parallel and can dispatch a large number of small work items asynchronously (such as distributed index search or network I/O).
- An application that creates and destroys a large number of threads that each run for a short time. Using the thread pool can reduce the complexity of thread management and the overhead involved in thread creation and destruction.
- An application that processes independent work items in the background and in parallel (such as loading multiple tabs).
- An application that must perform an exclusive wait on kernel objects or block on incoming events on an object. Using the thread pool can reduce the complexity of thread management and increase performance by reducing the number of context switches.
- An application that creates custom waiter threads to wait on events.

The original thread pool has been completely rearchitected in Windows Vista. The new thread pool is improved because it provides a single worker thread type (supports both I/O and non-I/O), does not use a timer thread, provides a single timer queue, and provides a dedicated persistent thread. It also provides clean-up groups, higher performance, multiple pools per process that are scheduled independently, and a new thread pool API.

The thread pool architecture consists of the following:

- Worker threads that execute the callback functions
- Waiter threads that wait on multiple wait handles
- A work queue
- A default thread pool for each process
- A worker factory that manages the worker threads

Best Practices

The new thread pool API provides more flexibility and control than the original thread pool API. However, there are a few subtle but important differences. In the original API, the wait reset was automatic; in the new API, the wait must be explicitly reset each time. The original API handled impersonation automatically, transferring the security context of the calling process to the thread. With the new API, the application must explicitly set the security context.

The following are best practices when using a thread pool:

The threads of a process share the thread pool. A single worker thread can execute multiple callback functions, one at a time. These worker threads are managed by the thread pool. Therefore, do not terminate a thread from the thread pool by calling TerminateThread on the thread or by calling ExitThread from a callback function.

An I/O request can run on any thread in the thread pool. Canceling I/O on a thread pool thread requires synchronization because the cancel function might run on a different thread than the one that is handling the I/O request, which can result in cancellation of an unknown operation. To avoid this, always provide the OVERLAPPED structure with which an I/O request was initiated when calling CancelIoEx for asynchronous I/O, or use your own synchronization to ensure that no other I/O can be started on the target thread before calling either the CancelSynchronousIo or CancelIoEx function.

Clean up all resources created in the callback function before returning from the function. These include TLS, security contexts, thread priority, and COM registration. Callback functions must also restore the thread state before returning.

Keep wait handles and their associated objects alive until the thread pool has signaled that it is finished with the handle.
Mark all threads that are waiting on lengthy operations (such as I/O flushes or resource cleanup) so that the thread pool can allocate new threads instead of waiting for this one.

Before unloading a DLL that uses the thread pool, cancel all work items, I/O, wait operations, and timers, and wait for executing callbacks to complete.

Avoid deadlocks by eliminating dependencies between work items and between callbacks, by ensuring a callback is not waiting for itself to complete, and by preserving the thread priority.

Do not queue too many items too quickly in a process with other components using the default thread pool. There is one default thread pool per process, including Svchost.exe. By default, each thread pool has a maximum of 500 worker threads. The thread pool attempts to create more worker threads when the number of worker threads in the ready/running state falls below the number of processors.

Avoid the COM single-threaded apartment model, as it is incompatible with the thread pool. STA creates thread state which can affect the next work item for the thread. STA is generally long-lived and has thread affinity, which is the opposite of the thread pool.

Create a new thread pool to control thread priority and isolation, create custom characteristics, and possibly improve responsiveness. However, additional thread pools require more system resources (threads, kernel memory). Too many pools increases the potential for CPU contention.

If possible, use a waitable object instead of an APC-based mechanism to signal a thread pool thread. APCs do not work as well with thread pool threads as other signaling mechanisms because the system controls the lifetime of thread pool threads, so it is possible for a thread to be terminated before the notification is delivered.

Use the thread pool debugger extension, !tp. This command has the following usage:

- pool address flags
- obj address flags
- tqueue address flags
- waiter address
- worker address

For pool, waiter, and worker, if the address is zero, the command dumps all objects. For waiter and worker, omitting the address dumps the current thread. The following flags are defined: 0x1 (single-line output), 0x2 (dump members), and 0x4 (dump pool work queue).
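To ground the new (Vista) thread pool API described above, a minimal sketch of creating and submitting a work object (a hedged example; production code would check errors and typically use a cleanup group):

    #include <windows.h>

    VOID CALLBACK WorkCallback(PTP_CALLBACK_INSTANCE instance, PVOID context, PTP_WORK work)
    {
        // One small unit of work runs here on a pool worker thread.
    }

    int main(void)
    {
        PTP_WORK work = CreateThreadpoolWork(WorkCallback, NULL, NULL);
        if (work == NULL) return 1;
        SubmitThreadpoolWork(work);                   // queue one execution of the callback
        WaitForThreadpoolWorkCallbacks(work, FALSE);  // FALSE: wait rather than cancel pending callbacks
        CloseThreadpoolWork(work);
        return 0;
    }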
https://docs.microsoft.com/en-us/windows/win32/procthread/thread-pools
$ oc create clusterrole route-editor --verb=update --resource=routes.route.openshift.io/custom-host

[Egress router pod definition: an egress-router-proxy container using the registry.access.redhat.com/openshift3/ose-egress image, with EGRESS_SOURCE 192.168.12.99/24, EGRESS_GATEWAY 192.168.12.1, an EGRESS_DESTINATION list, and EGRESS_ROUTER_MODE http-proxy.]

As a cluster administrator, you can assign specific, static IP addresses to projects, so that traffic is externally easily recognizable. This is different from the default egress router, which is used to send traffic to specific destinations. Recognizable IP traffic increases cluster security by ensuring the origin is visible. Once enabled, all outgoing external connections from the specified project will share the same, fixed source IP, meaning that any external resources can recognize the traffic. Unlike the egress router, this is subject to EgressNetworkPolicy firewall rules.

The ovs-subnet and ovs-multitenant plug-ins have their own legacy models of network isolation and do not support Kubernetes NetworkPolicy. When using the ovs-multitenant plug-in, traffic from the routers is automatically allowed into all namespaces. This is because the routers are usually in the default namespace, and all namespaces allow connections from pods in that namespace. With the ovs-networkpolicy plug-in, router access must be allowed explicitly:

Add a label to the default namespace.

    $ oc label namespace default name=default

Create policies allowing connections from that namespace.

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-from-default-namespace
    spec:
      podSelector:
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              name: default
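A hedged sketch of the static egress IP assignment described above (in OpenShift 3.9, egress IPs are tracked on NetNamespace and HostSubnet objects; the project, node, and address below are placeholders):

    # Assign an egress IP to a project
    $ oc patch netnamespace project1 --type=merge -p '{"egressIPs": ["192.168.12.99"]}'

    # Allow a node to host that egress IP
    $ oc patch hostsubnet node1 --type=merge -p '{"egressIPs": ["192.168.12.99"]}'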
https://docs.openshift.com/container-platform/3.9/admin_guide/managing_networking.html
These metrics indicate in which country and in which region a page is hosted. These values become increasingly important for the local search results.
https://docs.linkresearchtools.com/en/02-concepts/seo-metrics/695778-link-source-country-city
Debug Node

The Debug Node allows inspection of the current payload at any point during a workflow. It is extremely useful when initially constructing a workflow, to make sure that all the various components are acting on the payload as expected.

Configuration

The Debug Node takes an optional message (a string template) to include in the debug messages that get displayed. In the above example, the message is "Before Mutate". The node also allows for only printing a single property from the payload, as defined by a payload path. If the property is defined in the configuration, and that property does not exist on the payload, the debug output will print undefined.

Viewing Debug Output

Whenever a workflow runs and the run passes through one or more Debug Nodes, the workflow run will be visualized in two different ways.

Debug Panel

Every time a Debug Node is hit, the timestamp and payload value will be written as a new message in the Debug tab. New messages appear at the top of the list. If multiple Debug Nodes are hit as part of a single workflow run, each Debug Node will get an entry in the panel; this can be helpful in determining how your payload changes over the course of a run. To view that payload's path through the workflow run, hover your mouse over the debug message. You will see the nodes that were part of the run highlight, and the Debug Node that generated the message will be called out.

Live Stream

Optionally, you may also enable the Live Debug Stream by clicking the button in the top right corner of the workflow stage. When live streaming is enabled, you will see workflow runs highlight in real time as they fire. Note that, should multiple workflow runs occur per second, some of the runs may not be visualized on the stage. Also, if you have unsaved changes in your workflow, live streams will automatically be disabled until the changes are saved.

Behavior by Flow Type

If the workflow is an application workflow or experience workflow, debug node output is generated for any runs of the workflow whenever you are looking at the workflow. For edge workflows, however, by default no debug output is generated. When you want debug output for an edge workflow, you must first select an Edge Compute device that the workflow is deployed to and running on. At that point, Losant will tell the Edge Agent on that device to start debugging that workflow, and the Edge Agent will start reporting debugging information. Of course, for an edge device to report workflow debug information, it must be connected to the internet and Losant. If an edge device is not online or connected, no debug information will appear even if the device is selected. In addition, debug messages from an edge device are batched and throttled - a batch of debug messages will be sent up to Losant no more than once per second, with no more than five debug messages. So if a workflow is running quite frequently on an edge device, you might not see all the debug messages in the debug log.
https://docs.losant.com/workflows/outputs/debug/
Active Directory Lightweight Directory Services Overview Applies To: Windows Server 2012 Active Directory.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831593(v=ws.11)
Custom field

Syntax

    <txp:custom_field />

The custom_field tag is a single tag and is used to display the contents of a custom field. Custom fields are useful when you need to output content having a consistent structure, usually in context to a particular type of article. Custom fields are defined in the Preferences panel, and used in the Write panel. There are conditions to be aware of in each case, so be sure to read the following sections, respectively:

- Defining custom fields
- Adding custom field data

Also see the if_custom_field conditional tag, which provides more flexibility and power using custom fields.

Attributes

Tag will accept the following attributes (case-sensitive):

default="value" - Default value to use when field is empty.
escape="html" - Escape HTML entities such as <, > and & prior to echoing the field contents. Values: See the tag escaping documentation for all possible values. Default: html.
name="fieldname" - Display specified custom field.

Examples

Example 1: Book reviews

You might, for example, publish book reviews (for which you add the author, the title of the book, the publishing company and the year of publication), with:

- a custom field named Book_Author containing J.R.R. Tolkien,
- a custom field named Book_Title containing The Lord of the Rings,
- a custom field named Book_Publisher containing HarperCollins,
- a custom field named Book_Year containing 2004.

and an 'article' type form like the following:

    <p>
      <txp:custom_field name="Book_Author" />: <txp:custom_field name="Book_Title" /><br>
      Published by <txp:custom_field name="Book_Publisher" /> in <txp:custom_field name="Book_Year" />.
    </p>

HTML returned would be:

    <p>
      J.R.R. Tolkien: The Lord of the Rings<br>
      Published by HarperCollins in 2004.
    </p>

Example 2: Power a linklog

With an article title of Textpattern, an excerpt of Textpattern is awesome!, a custom field named link containing a URL, and an 'article' type form like the following:

    <article class="linklog-entry">
      <h1>
        <a href="<txp:custom_field name="link" />"><txp:title /></a>
      </h1>
      <p>
        <time datetime="<txp:posted format="iso8601" />" itemprop="datePublished">
          <txp:posted format="%d %b %Y" />
        </time>
      </p>
      <txp:excerpt />
    </article>

HTML returned would be:

    <article class="linklog-entry">
      <h1>
        <a href="">Textpattern</a>
      </h1>
      <p>
        <time datetime="2005-08-14T15:08:12Z" itemprop="datePublished">14 Aug 2005</time>
      </p>
      <p>Textpattern is awesome!</p>
    </article>

Other tags used: title, posted, excerpt.

Example 3: Unescaping HTML output

With a custom field named foo containing: <a href="../here/"> using the following:

    <txp:custom_field name="foo" />

will return this hunk of HTML, escaped so it displays as literal markup:

    &lt;a href="../here/"&gt;

whereas using:

    <txp:custom_field name="foo" escape="" />

will render the URL as you'd expect, exactly as written in the custom field itself. Thus, it will be rendered as a link by the browser.
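Since the page references if_custom_field, a short hedged sketch of guarding the linklog markup with it (attribute usage follows the documented if_custom_field tag):

    <txp:if_custom_field name="link">
      <a href="<txp:custom_field name="link" />"><txp:title /></a>
    <txp:else />
      <txp:title />
    </txp:if_custom_field>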
https://docs.textpattern.com/tags/custom_field
Object Editing Commands Bar

- Click here to move the current object/selection set. The movement takes place in reference to the positions of the first helper and the architect.
- Click here to rotate the current object/selection set after typing the rotation angle in the command entry box. The architect is the center of the rotation. The rotation angle is given using the command line (press any number to open it).
- Click here to delete the selected objects from the file. After deletion, these objects are deposited in a recycle bin. They can be restored by dragging them from the bin into the objects list within the class tree pane. (The recycle bin is available under 'Edit' in the main menu. This bin is cleared at the end of each session.)

Press F1 inside the application to read context-sensitive help directly in the application itself.
http://docs.teamtad.com/doku.php/object_editing_commands_bar
Overview

BRDFSSS2Complex is a material that is primarily designed to render translucent materials like skin, marble and wax. BRDFSSS2Complex is a complete material with diffuse and specular components that can be used directly, without the need of a Blend material.

Subsurface Scattering

Scale – Additionally scales the subsurface scattering radius.

Index of Refraction – Specifies the index of refraction for the material. Most water-based materials like skin have an IOR of about 1.3.

Overall Color – Controls the overall coloration for the material. This color serves as a filter for both the diffuse and the sub-surface component.

Opacity – Specifies how opaque or transparent the material is.

Example: Scale – Renders at Scale = 1, 10 and 100. The Marble (white) preset was used for all images.

Diffuse and SSS Layers

Diffuse Color – Specifies the diffuse color of the material.

Sub-Surface Color – Specifies the general color of the material below its surface.

Scatter Color – Specifies the internal scattering color.

Scatter Radius (cm) – Determines how far light scatters below the surface, in centimeters.

Phase Function – Controls the directionality of the scattering: negative values scatter light backward, 0 is isotropic, and positive values scatter light forward.

Example: Sub-Surface Color – For all three renders, the Scatter color is set to green. Renders at Sub-Surface Color = Red, Green and Blue. Note: The "Happy Buddha" model is from the Stanford scanning repository.

Example: Scatter Color – The Sub-surface color is set to green for all of the following renders. Renders at Scatter Color = Red, Green and Blue.

Example: Scatter Radius – The cube in the lower left corner has a size of 1cm. Renders at Scatter Radius = 1.0cm, 2.0cm and 4.0cm.

Example: Phase Function – Renders at Phase Function = -0.5 (backward scattering: more light exits the object), 0 (isotropic scattering) and 0.5 (forward scattering: more light is absorbed).

Example: Phase Function: Light Source – Renders at Phase Function = -0.9, 0 and 0.9.

Specular Layer

Reflections – Enables the calculations of reflections. When disabled, only specular highlights will be calculated.

Color – Determines the specular color for the material.

Amount – Determines the specular amount for the material. Note that there is an automatic Fresnel falloff applied to the specular component, based on the IOR of the material.

Glossiness – Determines the glossiness (highlights shape). A value of 1.0 produces sharp reflections; lower values produce more blurred reflections and highlights.

Reflection Depth – Specifies the number of reflection bounces for the material.

Scattering Options

The options in this rollout allow you to control the method used to calculate the subsurface effect and the quality of the final result.

Multiple Scatter – Specifies the method used to calculate the subsurface scattering effect.

Prepass-based illum. map – Uses an approach similar to the irradiance map to approximate the subsurface scattering effect. It requires a prepass, and the quality of the final result depends on the Prepass rate parameter.

Object-based illum. map – Similar to the prepass-based illumination map, but the illumination samples are distributed over the surface of the object itself.

Raytraced – Uses true raytracing inside the volume of the geometry to get the subsurface scattering effect. This method is physically accurate and produces the best results.

None (diffuse approx.) – Does not calculate the multiple scattering effect and uses a diffuse approximation instead.

Single Scatter – Controls how the single scattering component is calculated. For more information, please see the Single Scatter Presets example below.
The Raytraced (refractive) option also produces transparent shadows.

Scatter GI – Controls whether the material accurately scatters global illumination. When disabled, the GI is calculated using a simple diffuse approximation on top of the subsurface scattering. When enabled, the GI is included as part of the surface illumination map for multiple scattering. The latter is more accurate, especially for highly translucent materials, but may slow down the rendering quite a bit.

Refraction Depth – Determines the depth of refraction rays when the single scatter parameter is set to Raytraced (solid).

Prepass Map Options

This rollout is available when the Multiple Scatter parameter is set to Prepass-based illumination map, Object-based illumination map or None.

Prepass Mode – This parameter is similar to the Mode parameter of the irradiance map and controls the way V-Ray handles the illumination map for the subsurface scattering.

New Map Every Frame – Calculates a new map for every frame of the animation.

Save Every Frame – Calculates a new map and saves it on the hard drive for every frame of the animation.

Load Every Frame – Looks for and loads a previously saved illumination map for each frame of the animation.

Save Map for First Frame – Calculates a new map for the first frame of the animation.

Load Map for First Frame – Loads a previously saved illumination map for the first frame of the animation.

Prepass File Name – Specifies a file name for the illumination map to be saved to or loaded from.

Prepass Rate – Controls the rate (resolution) of the prepass that computes the illumination map; the quality of the final result depends on this parameter.

Prepass ID – Allows several BRDFSSS2Complex materials to share the same illumination map. This can be useful if different BRDFSSS2Complex materials are applied on the same object - either through a Multi/Sub-Object material, or inside a VRayBlendMtl material. If the Prepass ID is 0, the material will compute its own local illumination map. If it is greater than 0, all materials with the specified ID will share the same map.

Auto Calculate Density – When enabled, V-Ray automatically assigns the number of samples to be used for each square unit of surface on the geometry. Enabling this option disables the Samples per unit area parameter.

Samples Per Unit Area – This parameter has effect only when the Auto calculate density check box is disabled. It allows you to control the number of samples that are going to be taken for each square unit of the geometry surface. The size of one unit is controlled by the scene units setup. Increasing this value means that more samples are going to be taken, which produces higher quality results at the cost of increased render times.

Samples Offset – To prevent artifacts, each sample is taken a tiny distance away from the actual surface in the direction of the normal. This parameter controls that offset.

Prepass Blur – Controls whether the material uses a simplified diffuse version of the multiple scattering when the prepass rate for the direct lighting map is too low to adequately approximate it. A value of 0.0 disables this substitution. A separate accuracy value controls the quality of the approximation of the multiple scattering effect when the type is Prepass-based illumination map or Object-based illumination map: larger values produce more accurate results but are slower to render; lower values render faster, but values that are too low may produce blocky artifacts on the surface.

Example: Prepass Rate

This example shows the effect of the Prepass rate parameter. To better show the effect, the Prepass blur parameter is set to 0.0 for these images, so that BRDFSSS2Complex does not fall back to the simplified diffuse version of the multiple scattering.
Renders at Prepass rates of -3, -1, 0 and 1 for Scatter Radius = 1cm, and again at Prepass rates of -3, -1, 0 and 1 for Scatter Radius = 4cm.

Example: Single Scatter Presets

This example shows the effect of the Single scatter mode parameter. For relatively opaque materials, the different Single scatter modes produce quite similar results (except for render times). In the first set of images, the Scatter radius is set to 0.5 cm; renders at Preset = Simple, Ray Traced (Solid) and Ray Traced (Refractive). In the second set of images, the Scatter radius is set to 50.0 cm; renders again at Preset = Simple, Ray Traced (Solid) and Ray Traced (Refractive). In this case, the material is quite transparent, and the difference between the different Single scatter modes is apparent. Note also the transparent shadows with the Raytraced (refractive) mode. Note: The "Happy Buddha" model is from the Stanford scanning repository.

Multipliers

Mode – Specifies one of the following methods for adjusting textures:

Multiply – Multipliers can be specified to adjust colors and textures.

Blend Amount – Blend amounts can be specified to adjust colors and textures.

Opacity – Controls the intensity of the Opacity value, which determines how opaque or transparent the overall material is.

Overall Color – Controls the intensity of the material's Overall Color.

Diffuse Color – Controls the intensity of the material's diffuse color.

Diffuse Amount – Blends between an assigned texture (if any) and the color.

Sub-Surface Color – Controls the intensity of the material's sub-surface color.

Scatter Color – Controls the intensity of the internal scattering color.

Specular Color – Controls the intensity of the material's specular color.

Specular Glossiness – Controls the sharpness of the material's specular highlights, which affects the highlight shape.

Notes

- The BRDFSSS2Complex material computes sub-surface scattering only during the final image rendering. During other GI calculation phases (e.g. light cache or photon mapping), the material is calculated as a diffuse one.
- For the reason explained above, BRDFSSS2Complex will render as a diffuse material with the progressive path tracing mode of the light cache.
- You can layer several BRDFSSS2Complex materials using a VRayBlendMtl material in order to recreate more complex sub-surface scattering effects. In this case, any raytraced single scattering will only be calculated for the base material, but multiple scattering, reflections, etc. will work correctly for any layer. It might be helpful to use the Prepass ID parameter to make the materials share the same illumination map so that some of the calculations are reused.
https://docs.chaosgroup.com/display/VRAYSKETCHUP/Subsurface+Scattering+Material+%7C+BRDFSSS2Complex
2019-07-15T20:32:50
CC-MAIN-2019-30
1563195524111.50
[]
docs.chaosgroup.com
Customizing Blocked HTML Template Page When Web Safety detects a prohibited access attempt, it replaces the page contents with a "403 Forbidden" HTTP error page. The template for this page is stored in the /opt/websafety/etc/blocked_page.html file. This is an ordinary HTML file which can be customized according to your needs. Upon blocking, some macros in this file are replaced with actual values. To always show the detected information, clear the [ ] Hide explicit scan results on the blocked page checkbox in Safety / Policies / Policy / Advanced. - VERSION - Version of the product. - SERVERNAME - Name of the server where detection occurred. - TIMESTAMP - Date and time of block. Note When the AdBlock or Online Privacy module detects a forbidden URL and the browser that sent the request seems to be expecting an image back, Web Safety uses the transparent gif located at /opt/websafety/etc/transparent.gif as the blocked placeholder. It hides the advertised images nicely in some browsers and improves the browsing experience of proxy users.
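As a sketch only, a customized template might wrap those macros in your own markup, something like the fragment below. The macro delimiter syntax here is an assumption; copy the exact placeholder syntax from the shipped blocked_page.html:

<html>
  <body>
    <h1>Access blocked</h1>
    <!-- SERVERNAME, TIMESTAMP and VERSION are the macros listed above -->
    <p>Blocked by SERVERNAME at TIMESTAMP (Web Safety VERSION)</p>
  </body>
</html>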
https://docs.diladele.com/administrator_guide_6_4/web_filter/policies/customizing_blocked_page.html
2019-07-15T20:03:46
CC-MAIN-2019-30
1563195524111.50
[array(['../../../_images/webui_blocked_page10.png', '../../../_images/webui_blocked_page10.png'], dtype=object)]
docs.diladele.com
You don't have permissions to assign roles on Azure Permissions errors occur when the user is not authorized to perform role assignment. Error: When fulfilling prerequisites for an app-based Cloudbreak credential, you must register an application and assign the Contributor role to it. If you are not authorized to perform role assignment, you will get the following error: You don't have enough permissions to assign roles, please contact with your administrator If you skip the role assignment step you will get the following error when creating an app-based credential: Failed to verify the credential: Status code 403, {"error":{"code":"AuthorizationFailed","message": "The client 'someid' with object id 'someid' does not have authorization to perform action 'Microsoft.Storage/storageAccounts/read' over scope '/subscriptions/someid'."}} Solution: To solve the problem, ask your Azure administrator to perform the step of assigning the "Contributor" role to your application:
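As an illustrative sketch, the administrator could do this with the Azure CLI; the application and subscription IDs below are placeholders, and the same assignment can be made in the Azure Portal instead:

# Assign the Contributor role to the registered application
az role assignment create \
  --assignee <your-app-id> \
  --role Contributor \
  --scope /subscriptions/<your-subscription-id>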
https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.1/troubleshoot/content/cb_problems-with-iam-permissions-assignment.html
2019-07-15T21:02:55
CC-MAIN-2019-30
1563195524111.50
[array(['../how-to/images/cb_azure-appbased03.png', None], dtype=object)]
docs.hortonworks.com
How does Stripe Connect affect me? Restrict Content Pro supports Stripe Connect, an easier and more secure way of connecting your Stripe account to your website. Stripe Connect makes it very easy to connect your Stripe account, and prevents issues that can arise when copying and pasting account details from Stripe into your Restrict Content Pro settings page. With Stripe Connect, you'll be ready to go with just a few clicks. Don't have a Stripe account yet? No problem, the Connect with Stripe button will walk you through the account creation process. It only takes a few minutes. If you already have a Stripe account and have already set up Stripe by copying and pasting your API keys into the settings page, you may see the following notice asking you to connect your Stripe account. When you click the link, you'll go to the Restrict Content Pro settings page with the Payments tab pre-selected. From there, click the Connect with Stripe button to start the process. If you've already set up your Stripe account in Restrict Content Pro, you may have questions. - Does this change my account? No. Nothing is changed. This just connects your Restrict Content Pro installation with your Stripe account and automatically sets up RCP with the API keys Stripe Connect gives it. - Do I lose active subscriptions? No. You retain all your current customers, subscriptions, plans, etc. The only thing that changes is how Restrict Content Pro communicates with your Stripe account. - Does it change the cost of my plan? No. There are no changes in cost to you. - Do I pay anything for this? No. There are no charges for using Stripe Connect. Still have questions? We're happy to help. Send us a message.
https://docs.restrictcontentpro.com/article/2033-how-does-stripe-connect-affect-me
2019-07-15T21:10:57
CC-MAIN-2019-30
1563195524111.50
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/5ad8b777042863075092982c/file-nHmq9Sg2mQ.png', None], dtype=object) ]
docs.restrictcontentpro.com
rcp_process_manual_signup Triggers after a "manual" payment signup is complete, but right before the user is redirected to the success page. Parameters: - $member (RCP_Member) - Member object. - $payment_id (int) - ID number of the pending payment that was just created. - $gateway (RCP_Payment_Gateway_Manual) - Manual payment gateway object, which extends the RCP_Payment_Gateway class. Example: This example will automatically change the pending payment status to "complete" and activate the account. Note this means memberships will be activated before payment is verified. function ag_rcp_auto_activate_manual_payments( $member, $payment_id, $gateway ) { /** * @var RCP_Payments $rcp_payments_db */ global $rcp_payments_db; // Change payment status to "complete". This also activates the membership. $rcp_payments_db->update( $payment_id, array( 'status' => 'complete' ) ); } add_action( 'rcp_process_manual_signup', 'ag_rcp_auto_activate_manual_payments', 10, 3 );
https://docs.restrictcontentpro.com/article/2067-rcp-process-manual-signup
2019-07-15T21:12:53
CC-MAIN-2019-30
1563195524111.50
[]
docs.restrictcontentpro.com
This section will include step-by-step companion guides to videos created to help show how to create common simulations in Phoenix for Max. QuickStart 01 - Basic Liquids Covers a simple liquid simulation from a Preset and manually step-by-step QuickStart 02 - Gasoline Explosion Covers the basic workflow for creating a gasoline explosion simulation QuickStart 03A - Solids & Non-Solid Bodies Discusses the fundamentals of Solid and Non-Solid properties.
https://docs.chaosgroup.com/display/PHX3MAX/QuickStart+Guides
2019-07-15T19:55:03
CC-MAIN-2019-30
1563195524111.50
[]
docs.chaosgroup.com
Category: 03 Blacklist Monitor 03.01 RBL Info (15) Add Your First Blacklist Monitor We […] Add An IP Range Start by heading to your Blacklist Monitors page. If you have IP ranges which need to be monitored, you don't have to add each IP individually; you can simply add an entire range, all at once, as shown in the image below. If your range contains an IP address which you're already monitoring, then that […] Add An IP Block The […] Add A Domain Name Adding a domain blacklist monitor is very easy and similar to adding an IPv4 blacklist monitor. Start by going to the Blacklist Monitors page, and clicking the "Add Monitor" button. In the pop-up that comes up, simply input your domain name. There's no need to select whether it's a domain name or an IP address. Our […] Add multiple entries at once You can easily add multiple IPs/Domains/Ranges etc. to be monitored at once, with just one click of the mouse. Start by going to your Blacklist Monitors page from your dashboard: Click on the "Add Monitor" button, located on the top right side of the page: In the newly opened modal, simply click "add multiple": Now, […] Pagination & Monitors/Page When […] Sorting Blacklist Monitors By default, your blacklist monitors are ordered by the time they were added to your account, in descending order (newest at the top). But you can also order them by IP Address, in ascending order. Or by Report, in descending order, if you wish to see the top blacklisted IPs really quickly. […] Blacklist Monitors Actions Next to each one of your monitored IPs, you have an "Action" button from where you can either access your White Label report for that IP address, or delete that IP address from your blacklist monitors list. In order to access the regular (non White Label) blacklist report for any IP address, you simply click […] Group Actions Besides […]
https://docs.hetrixtools.com/category/blacklist-monitor/
2019-07-15T20:31:09
CC-MAIN-2019-30
1563195524111.50
[]
docs.hetrixtools.com
Structure of the image catalog Use this information to create a valid image catalog. The image catalog JSON file includes the following two high-level sections: images: Contains information about the created images. The burned images are stored in the base-images section. versions: Contains the cloudbreak entry, which includes mapping between Cloudbreak versions and the image identifiers of burned images available for these Cloudbreak versions. The images section The burned images are stored in the base-images sub-section of images. The base-images section stores one or more image "records". Every image "record" must contain the date, description, images, os, os_type, and uuid fields. The versions section The versions section includes a single "cloudbreak" entry, which maps the uuids to a specific Cloudbreak version:
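As a rough sketch of the overall shape (all values below are invented placeholders, not real image identifiers, and the per-provider layout inside the inner images field is an assumption):

{
  "images": {
    "base-images": [
      {
        "date": "2019-05-09",
        "description": "Example base image record",
        "images": { "aws": { "eu-west-1": "ami-00000000000000000" } },
        "os": "amazonlinux",
        "os_type": "redhat6",
        "uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
      }
    ]
  },
  "versions": {
    "cloudbreak": [
      {
        "versions": [ "2.9.1" ],
        "images": [ "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" ]
      }
    ]
  }
}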
https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.1/advanced-cluster-options/content/cb_structure-of-the-image-catalog-json-file.html
2019-07-15T21:01:07
CC-MAIN-2019-30
1563195524111.50
[]
docs.hortonworks.com
Advanced Link Counts Each domain name resolves to a unique IP address, under which the respective website is accessible. The IP address, in turn, consists of several blocks (AAA.BBB.CCC.DDD). A high C-class popularity means that the website has backlinks that originate from many different IP addresses with different C-classes. What is a Top-Level Domain / Domain? A Top-Level Domain is, for example, .org. It is virtually the top level of a domain. A top domain (the referring root domain) is, for example, wikipedia.org. A Domain, for instance, is en.wikipedia.org.
https://docs.linkresearchtools.com/en/02-concepts/seo-metrics/695782-advanced-link-counts
2019-07-15T20:45:54
CC-MAIN-2019-30
1563195524111.50
[]
docs.linkresearchtools.com
Alpha Filter This topic documents a feature of Visual Filters and Transitions, which is deprecated as of Windows Internet Explorer 9. Adjusts the opacity of the content of the object. Syntax Possible Values Members Table The following table lists the members exposed by the Alpha object. Remarks You can set the opacity as uniform or graded, in a linear or radial fashion. The following list of allowable Style property values provides more information on how the Alpha filter properties support each style of filtered output. - 0—Uniform—Applies the Opacity value evenly across the object. - 1—Linear—Applies an even opacity gradient, beginning with the Opacity value on a line from StartX to StartY and ending with the FinishOpacity value on a line from FinishX to FinishY. - 2—Radial—Applies an even opacity gradient, beginning in the center with the Opacity value and ending at the middle of the sides of the object with the FinishOpacity value. The corners of the object are not affected by the opacity gradient. - 3—Rectangular—Applies an even opacity gradient, beginning at the sides of the object with the Opacity value and ending at the center of the object with the FinishOpacity value. The following example uses the Alpha filter and the Opacity property to change the appearance of a button. <STYLE> INPUT.aFilter {filter:progid:DXImageTransform.Microsoft.Alpha(opacity=50);} </STYLE> <INPUT TYPE="button" VALUE="Button" CLASS="aFilter"> Applies To See Also Scripting Filters, Filter Design Considerations
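For a graded effect, the gradient-related properties listed above can be combined. The following sketch (values chosen purely for illustration) fades a DIV from fully opaque at the top to fully transparent at the bottom using the Linear style:

<STYLE>
DIV.gradFilter {filter:progid:DXImageTransform.Microsoft.Alpha(opacity=100, finishOpacity=0, style=1, startX=0, startY=0, finishX=0, finishY=100);}
</STYLE>
<!-- The element needs layout (e.g. an explicit width) for the filter to render -->
<DIV CLASS="gradFilter" STYLE="width:200px; height:100px">This text fades out toward the bottom.</DIV>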
https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/ms532967(v=vs.85)
2019-07-15T21:17:06
CC-MAIN-2019-30
1563195524111.50
[]
docs.microsoft.com
JavaScript in TYPO3 Backend Attention Since TYPO3 7, the TYPO3 backend relies primarily on Bootstrap and jQuery. Since TYPO3 7.4, Prototype and Scriptaculous are no longer packaged with the Core. If you need them for your projects, you need to take care of shipping them yourself, preferably by use of RequireJS. Since TYPO3 8, ExtJS is being removed step by step; most parts are now ExtJS-free, replaced with mostly pure JavaScript components. Contents: - AJAX in TYPO3 Backend - RequireJS in the TYPO3 Backend
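As a quick sketch of the RequireJS approach in the backend (Modal is a core backend module, but treat the exact call as an assumption and check the API of your TYPO3 version):

require(['TYPO3/CMS/Backend/Modal'], function (Modal) {
  // Open a simple confirmation dialog from custom backend JavaScript
  Modal.confirm('Delete record', 'Do you really want to delete this record?');
});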
https://docs.typo3.org/m/typo3/reference-coreapi/master/en-us/ApiOverview/JavaScript/Index.html
2019-07-15T20:59:02
CC-MAIN-2019-30
1563195524111.50
[]
docs.typo3.org
Version History 1.0.7 - Replaced internal timing functions with a simpler, more versatile timing system - Added automatic checks to preserve custom text alignment when drawing VNgen - Previously it was required to manually reset draw_set_halign or draw_set_valign before drawing VNgen. This is no longer necessary, as VNgen properly handles both functions. - (Requires GameMaker Studio v2.2.1.375 or newer) - Added automatic option results logging. Setting an option block ID in vngen_get_option now returns user selections from any previous option! - Option selection data is now also saved/loaded with VNgen file functions - Added vngen_file_load_map for and vngen_set_prop functions to vngen_perspective_modify_pos for and vngen_label_replace_ext (GMS2 only) - Fixed overdraw in wipe transitions - Fixed vngen_goto performing certain skipped actions when performing skipped events is disabled - Fixed vngen_button_select and vngen_log_button_select causing crashes if run when buttons do not exist - Fixed typo causing errors in vngen_label_create_ext 1.0.3 - Added ‘auto’ keyword support to vngen_text_create_* actions to better facilitate NVL-style presentation - Using ‘auto’ ^, instead of , - This change was necessary to escape markup in GMS2, but was changed in both versions for interoperability - Fixed vngen_goto and vngen_room_goto sometimes functions to VNgen options for added hover/select animation and stylization - This also replaced the extra color functionality previously added to vngen_option_create_ext, resulting in a simpler *_ext function - Added on-screen buttons as a new entity type - Log buttons have been rewritten to match the new button standard and can now be used to execute arbitrary code, not just scroll the log - The existing vngen_type_button macro now refers to both log buttons and standard buttons and can be used to check both in property functions - Added support for deformations of any number of subdivisions - Deformation columns and rows can now be set on a per-deformation basis - Updated the included def_wave deformation to display an actual sine wave - Added new underwater-like wave shader - Added vngen_count to replace individual entity *_count functions with a universal function - Added ‘previous’ keyword support to character face coordinates in character replace actions - Added ‘full’ keyword support to vngen_audio_modify loop clip settings - Added vngen_event_get_label to complement vngen_event_get_index - Added vngen_script_execute_ext to perform scripts as VNgen actions, including when the running event is skipped - Updated vngen_event_get_index to optionally return the index of an event by label (rather than the current event) - Replaced per-entity text speed with vngen_set_speed, which sets speed for all text entities. - Replaced vngen_instance_create with vngen_instance_change - The existing script was not really necessary and functioned more closely to the built-in instance_change function anyhow - Replaced bracket escape character with ^. Markup can now be drawn literally as ^[ instead_goto is run - Fixed non-looped sounds failing to fade in on create - Fixed mouse cursor getting stuck in hover state if vngen_goto is run while in this state - Fixed scaling not updating when launched in fullscreen - Miscellaneous additional fixes and improvements 0.9.9 (Early Access) - Added vngen_goto_unread to and trans_spin_out transitions -_execute argument limit to 32. - Fixed compatibility issues with YYC. 
YYC is now fully supported - Fixed audio fade transitions not being skipped if vngen_goto is run - Improved HTML5 compatibility - Fixed broken mouse/touch hotspots when using scaling - Reduced renderlevel 2 surface usage to mitigate texture swapping glitches - Miscellaneous additional fixes and improvements 0.9.8 (Early Access) - Added per-entity shader support and a selection of included shaders - Support for custom uniforms coming in a future update - Added make_color_rgb_to_hex to complement make_color_hex, which has now been renamed make_color_hex_to_rgb for consistency - Added optional “perform” argument to vngen_goto to allow disabling performing skipped events - Added optional “id” argument to vngen_option_select to allow selecting a specific option directly without navigation - Fixed wrong colors being used when auto labels are replaced with the ‘previous’ color 0 0.9.5 (Early Access) - Added new tiled scenes system with support for rotation, gradient color blending, wipe transitions, and deforms - Added paragraph alignment support to text (labels coming soon) - Added animation blending support to transforms and deforms - Added color gradient and wipe transition support to deforms - Updated file functions to save/load text alignment and language settings - Improved replace fade transitions to better support transparent entities - Improved text auto linebreak accuracy - Fixed character flipping in vngen_char_replace_ext 0.9.2 (Early Access) - Fixed an issue causing touch scrolling to sometimes scroll the backlog infinitely - Fixed manual linebreaks being removed from backlog text - Fixed backlog entries being listed in literal order rather than historical order. - Added vngen_log_get_index script to return historical log entry index from an entry’s on-screen order. - Added new proportion scaling modes which scale relatively to changes in display scale - Added new properties functions (vngen_get_*) to return the calculated width, height, x, y, xscale, yscale, and rotation of VNgen entities, factoring in animations and modifications - Updated vngen 0.9.1 (Early Access) - 0.9.0 (Early Access) - Initial release
https://docs.xgasoft.com/assets/vngen/?section=version-history
2019-07-15T20:30:51
CC-MAIN-2019-30
1563195524111.50
[]
docs.xgasoft.com
Overview List Priorities allow you to override the default order of your list. Without any priorities, children will appear in the list in the order they were added. If you use priorities, the list will first be sorted by priority, then by added date. In the example below you can see that the first two children in the list have a red priority marker; they appear above the 3rd child even though they have a later added date. Setup On the home page, click on the Priorities link under Features (Home > Features > Priorities). To add a priority, click Add Priority. Here you can select one of the two Automatic Priorities (see below for more information on these) or you can create your own. After you've added a priority you'll be taken back to the Priorities screen where you can add more or set their order. The order is important, as a child with a first-order priority will be placed above a child with a second or lower priority. Usage Set Priority on a Child On the child edit page, in the waitlist section there is a drop down listing all priorities that you have selected. Automatic Priorities There are two priorities that will not appear in the priority drop down on the child page. These are set based on other data within the system. - Family of Staff - This priority will be set when a child's parent has the Staff Member checkbox checked. - Family member enrolled - For this priority to be set, a child must have a sibling that was on the waitlist and has been removed with a status of Enrolled. For brand new lists you may have to enter a sibling who has already been enrolled and immediately remove them for this to work.
http://docs.daycarewaitlist.com/tiki-index.php?page=List+Priorities&structure=Main
2019-07-15T20:48:37
CC-MAIN-2019-30
1563195524111.50
[]
docs.daycarewaitlist.com
When used as a proper name, use the capitalization of the product, such as GNUPro, Source-Navigator, and Ansible Tower. When used as a command, use lowercase as appropriate, such as "To start GCC, type gcc." Note "vi" is always lowercase. This is often used to mean "because", but has other connotations, for example, parallel or simultaneous actions. If you mean "because", say "because". Assure implies a sort of mental comfort. As in "I assured my husband that I would eventually bring home beer." Ensure means "to make sure." Insure relates to monetary insurance. Avoid using backwards unless you are stating that something has "backwards compatibility." Use "can" to describe actions or conditions that are possible. Use "may" only to describe situations where permission is being given. If either "can," "could," or "may" apply, use "can" because it's less tentative. When referring to a compact disk, use CD, such as "Insert the CD into the CD-ROM drive." When referring to the change directory command, use cd. Do not use "cdrom," "CD-Rom," "CDROM," "cd-rom" or any other variation. When referring to the drive, use CD-ROM drive, such as "Insert the CD into the CD-ROM drive." The plural is "CD-ROMs." Do not use "command-line" or "commandline." Use Daylight Saving Time. Do not use "daylight savings time." Daylight Saving Time (DST) is often misspelled "Daylight Savings", with an "s" at the end. Other common variations are "Summer Time" and "Daylight-Saving Time". When used as a noun, a failover is one word. When used as a verb, fail over is two words since there can be different tenses such as failed over. Fewer is used with plural nouns. Think things you could count. Time, money, distance, and weight are often listed as exceptions to the traditional "can you count it" rule, often thought of as singular amounts (the work will take less than 5 hours, for example). A gigabyte is 2 to the 30th power (1,073,741,824) bytes. One gigabyte is equal to 1,024 megabytes. Gigabyte is often abbreviated as G or GB. "It's" is a contraction for "it is;" use "it is" instead of "it's." Use "its" as a possessive pronoun (for example, "the store is known for its low prices"). Less is used with singular nouns. For example "View less details" wouldn't be correct but "View less detail" works. Use fewer when you have plural nouns (things you can count). This means "be careful to remember, attend to, or find out something." For example, "…make sure that the rhedk group is listed in the output." Try to use verify or ensure instead. MBps is short for megabytes per second, a measure of data transfer speed. Mass storage devices are generally measured in MBps. Use to indicate a reference (within a manual or website) or a cross-reference (to another manual or documentation source). This is often used to mean "because", but "since" has connotations of time, so be careful. If you mean "because", say "because". "Then" refers to a time in the past or the next step in a sequence. "Than" is used for comparisons. When referring to the reader, use "you" instead of "user." For example, "The user must…" is incorrect. Use "You must…" instead. If referring to more than one user, calling the collection "users" is acceptable, such as "Other users may wish to access your database." When using as a reference ("View the documentation available online."), do not use View. Use "Refer to" instead. Do not use "webserver". For example, "The Apache HTTP Server is the default Web server…" Do not use "web site" or "Web site." For example, "The Ansible website contains …" Use the pronoun "who" as a subject. Use the pronoun "whom" as a direct object, an indirect object, or the object of a preposition. For example: Who owns this? To whom does this belong? Do not use future tense unless it is absolutely necessary. For instance, do not use the sentence, "The next section will describe the process in more detail." Instead, use the sentence, "The next section describes the process in more detail." Use "need" instead of "desire" and "wish." Use "want" when the reader's actions are optional (that is, they may not "need" something but may still "want" something). Do not use "Hammer"; always use "AMD64 and Intel® EM64T" when referring to this architecture.
https://docs.ansible.com/ansible/devel/dev_guide/style_guide/spelling_word_choice.html
2019-07-15T21:14:05
CC-MAIN-2019-30
1563195524111.50
[]
docs.ansible.com
notification to reach you before trying again. Sending test SMS notifications does not use any of your SMS Credits; however, it is limited by the number of SMS notifications you can send within a certain amount of time. The phone number can be in the following format: +CountryCode CellNumber Examples: +1 333 123 1234 +1(333)1231234 +1.333.123.1234 etc. You can assign SMS notification contact lists to both your Uptime Monitors and Blacklist Monitors. A contact list can contain a maximum of 5 phone numbers which can receive SMS notifications. Each phone number will use 1 SMS credit per notification, so if you have a contact list with 3 phone numbers and you receive 1 notification, it will use 3 SMS credits. To fully understand how our SMS Credits system works, please read the following article:
https://docs.hetrixtools.com/sms-notifications/
2019-07-15T20:17:39
CC-MAIN-2019-30
1563195524111.50
[]
docs.hetrixtools.com
Retry a cluster When stack provisioning or cluster creation fails, the "retry" option allows you to resume the process from the last failed step. Only failed stack or cluster creations can be retried. A retry can be initiated any number of times on a failed creation process. To retry provisioning a failed stack or cluster, follow these steps. Steps - Browse to the cluster details. - Click Actions and select Retry. Only failed stack or cluster creations can be retried, so the option is only available in these two cases. - Click Yes to confirm. The operation continues from the last failed step.
https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.1/manage-clusters/content/cb_retry-a-cluster.html
2019-07-15T21:10:23
CC-MAIN-2019-30
1563195524111.50
[]
docs.hortonworks.com
PATHITEM Returns the item at the specified position from a string resulting from evaluation of a PATH function. Positions are counted from left to right. Syntax PATHITEM(<path>, <position>[, <type>]) Parameters path A text string in the form of the results of a PATH function. position An integer expression with the position of the item to be returned. type (Optional) An enumeration that defines the data type of the result: TEXT (0) returns the item as text, INTEGER (1) returns the item as an integer. Return value The item at the specified position, as text or as an integer. Example The following formula returns the third-tier manager of the current employee; it takes the employee and manager IDs as the input to a PATH function that returns a string with the hierarchy of parents to the current employee. From that string, PATHITEM returns the third entry as an integer. =PATHITEM(PATH(Employee[EmployeeKey], Employee[ParentEmployeeKey]), 3, 1)
https://docs.microsoft.com/en-us/dax/pathitem-function-dax
2019-07-15T21:30:59
CC-MAIN-2019-30
1563195524111.50
[]
docs.microsoft.com
Editing Database Scripts and Objects with the Transact-SQL Editor You can edit, validate, and execute database queries, scripts and objects by using the Transact-SQL (T-SQL) editor in Visual Studio Team System Database Edition. In This Section Overview of Transact-SQL Editor Provides an overview of how to create, analyze, and execute scripts and queries in the T-SQL editor. Transact-SQL Editing Essentials Contains topics that describe the most important editing tasks that you can perform by using the T-SQL editor. Managing Database Connections within the Transact-SQL Editor Contains topics that help you connect to a database server or specify a particular database in the T-SQL editor. Script Analysis and Execution in the Transact-SQL Editor Contains topics that help you analyze and run your scripts and queries in the T-SQL editor. Walkthrough: Create and Execute a Simple Transact-SQL Script Explains how to create and execute a simple Transact-SQL script. As part of this walkthrough, you connect and disconnect from the server, validate your T-SQL scripts, and examine the results of the query. Related Sections Getting Started with Visual Studio Team Edition for Database Professionals Provides overviews, introductory walkthroughs, glossary definitions, and other basic information to help you start to learn about Database Edition. Working with Database Scripts Contains topics that describe how you create and maintain scripts for deploying database schemas and managing databases. Renaming Database Objects Contains links to information about how to rename database objects. Contains links to an overview, important considerations, tasks, and troubleshooting information. Database Unit Testing Describes how you can use database unit testing to verify whether database objects, such as stored procedures and triggers, behave as you expect. When you perform unit tests in combination with using Data Generator, you can test for predictable results. Comparing Database Schemas Describes how you can compare database schemas and update a target to match a source.
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/aa833162(v=vs.90)
2019-07-15T20:58:24
CC-MAIN-2019-30
1563195524111.50
[]
docs.microsoft.com
Downloading Branded Apps Slyce branded applications are distributed via an enterprise distribution certificate for review. Follow the steps below to download your example application via our download portal. Step 1: Follow the link your Slyce contact provided and submit your email. Step 2: You will receive an email from "Crashlytics" inviting you to the app. Tap the "Let Me In" button on your mobile device. Step 3 (Optional): Save the Crashlytics app to your home page to download future versions. Tap the share button and tap "Add to Home Screen". Step 4: In your settings, you will need to trust the "Slyce" and "Crashlytics" profiles to enable the download. Crashlytics is the app distribution tool we utilize to distribute apps. Troubleshooting: I hit install, but the app continues to show it is loading on my home screen. Initiate the download again by going to the Crashlytics app or tapping the email invite again. When downloading applications, you should be on Wi-Fi to ensure a stable connection.
https://docs.slyce.it/hc/en-us/articles/360030455152-Downloading-Branded-Apps
2019-07-15T21:02:14
CC-MAIN-2019-30
1563195524111.50
[array(['/hc/article_attachments/360032901032/mceclip1.png', 'mceclip1.png'], dtype=object) ]
docs.slyce.it
Changes This article describes the release history of the RadPanelBar control. To see the fixes and features included in our latest official release, please refer to our Release History. Q1 2013 What's Fixed Fixed: Cannot tab between items in content Fixed: ContentControls are looking blurry when placed in the PanelBarItem Fixed: Setting SelectedItem property does not reflect into updating the UI Q2 2012 What's Fixed - Fixed: Missing MouseOver and Selected states of second level items in Metro theme Q1 2012 What's Fixed - Fixed: The VerticalAlignment and HorizontalAlignment in some themes are not stretched You can examine the Q1 2012 release history on our site.
https://docs.telerik.com/devtools/wpf/controls/radpanelbar/changes-and-backwards-compatibility/changes
2019-07-15T19:54:31
CC-MAIN-2019-30
1563195524111.50
[]
docs.telerik.com
Feature: #51905 - Add dependencies between classes in the Rich Text Editor See Issue #51905 Description It is now possible to configure a class as requiring other classes. The syntax of this new property is RTE.classes.[ *classname* ] { .requires = list of class names; list of classes that are required by the class; if this property, in combination with others, produces a circular relationship, it is ignored; when a class is added on an element, the classes it requires are also added, possibly recursively; when a class is removed from an element, any non-selectable class that is not required by any of the classes remaining on the element is also removed. }
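A concrete configuration might look like this; the class names "important" and "highlighted" are invented for illustration:

RTE.classes.important {
  requires = highlighted
}

With this in place, applying "important" to an element also adds "highlighted"; removing "important" later also removes "highlighted" again, provided it is non-selectable and not required by another class remaining on the element.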
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.0/Feature-51905-AddDependenciesBetweenClassesInRte.html
2019-07-15T20:56:36
CC-MAIN-2019-30
1563195524111.50
[]
docs.typo3.org
Overview Workspace ONE UEM helps you install Content Gateway on Windows and Linux servers. Workspace ONE UEM supports Content Gateway installation on relay and endpoint servers. Log in to My Workspace ONE portal to download the Content Gateway Installer and configure it based on your requirements. You can also verify your installation and configuration using the verification options available in the UEM console.
https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/9.6/vmware-airwatch-guides-96/GUID-AW96-CG_Install_Overview.html
2019-07-15T20:39:03
CC-MAIN-2019-30
1563195524111.50
[]
docs.vmware.com
Apigee Edge provides many different types of resources, and each of them serves a different purpose. There are certain resources that can be configured (i.e., created, updated, and/or deleted) only through the Edge UI, management APIs, or tools that use management APIs, and by users with the prerequisite roles and permissions. For example, only org admins belonging to a specific organization can configure these resources. That means these resources cannot be configured by end users through developer portals, nor by any other means. These resources include: - API proxies - Shared flows - API products - Caches - KVMs - Keystores and truststores - Virtual hosts - Target servers - Resource files While these resources do have restricted access, if any modifications are made to them, even by authorized users, then the historic data simply gets overwritten with the new data. This is because these resources are stored in Apigee Edge only in their current state. The main exceptions to this rule are API proxies and shared flows. API Proxies and Shared Flows under Revision Control API proxies and shared flows are managed -- in other words, created, updated and deployed -- through revisions. Revisions are sequentially numbered, which enables you to add new changes and save them as a new revision, or revert a change by deploying a previous revision of the API proxy/shared flow. At any point in time, there can be only one revision of an API proxy/shared flow deployed in an environment, unless the revisions have a different base path. Although API proxies and shared flows are managed through revisions, if any modifications are made to an existing revision, there is no way to roll back, since the old changes are simply overwritten. Audits and History Apigee Edge provides the Audits and API, Product, and organization history features that can be helpful in troubleshooting scenarios. These features enable you to view information like who performed specific operations (create, read, update, delete, deploy, and undeploy) and when the operations were performed on the Edge resources. However, if any update or delete operations are performed on any of the Edge resources, the audits cannot provide you with the older data. Antipattern Managing the Edge resources (listed above) directly through the Edge UI or management APIs, without using a source control system. There's a misconception that Apigee Edge will be able to restore resources to their previous state following modifications or deletes. However, Edge Cloud does not provide restoration of resources to their previous state. Therefore, it is the user's responsibility to ensure that all the data related to Edge resources is managed through source control management, so that old data can be restored quickly in case of accidental deletion or situations where any change needs to be rolled back. This is particularly important for production environments where this data is required for runtime traffic. Let's explain this with the help of a few examples and the kind of impact that can be caused if the data is not managed through a source control system and is modified or deleted knowingly or unknowingly: Example 1: Deletion or modification of API proxy When an API proxy is deleted, or a change is deployed on an existing revision, the previous code won't be recoverable.
If the API proxy contains Java, JavaScript, Node.js, or Python code that is not managed in a source control management (SCM) system outside Apigee, a lot of development work and effort could be lost. Example 2: Determination of API proxies using specific virtual hosts A certificate on a virtual host is expiring and that virtual host needs updating. Identifying which API proxies use that virtual host for testing purposes may be difficult if there are many API proxies. If the API proxies are managed in an SCM system outside Apigee, then it would be easy to search the repository. Example 3: Deletion of keystore/truststore If a keystore/truststore that is used by a virtual host or target server configuration is deleted, it will not be possible to restore it unless the configuration details of the keystore/truststore, including certificates and/or private keys, are stored in source control. Impact - If any of the Edge resources are deleted, then it's not possible to recover the resource and its contents from Apigee Edge. - API requests may fail with unexpected errors, leading to an outage until the resource is restored to its previous state. - It is difficult to search for inter-dependencies between API proxies and other resources in Apigee Edge. Best Practice - Use any standard SCM coupled with a continuous integration and continuous deployment (CICD) pipeline for managing API proxies and shared flows. - Use any standard SCM for managing the other Edge resources, including API products, caches, KVMs, target servers, virtual hosts, and keystores. - If there are any existing Edge resources, then use management APIs to get the configuration details for them as a JSON/XML payload and store them in source control management. - Manage any new updates to these resources in source control management. - If there's a need to create new Edge resources or update existing Edge resources, then use the appropriate JSON/XML payload stored in source control management and update the configuration in Edge using management APIs. * Encrypted KVMs cannot be exported in plain text from the API. It is the user's responsibility to keep a record of what values are put into encrypted KVMs.
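As a sketch of the last two practices, the Edge management API can export resource configurations for storage in source control; the organization, environment, and resource names below are placeholders:

# Export a KVM definition as JSON
curl -u email:password \
  "https://api.enterprise.apigee.com/v1/organizations/myorg/environments/test/keyvaluemaps/mykvm" \
  -o mykvm.json

# Export a complete API proxy revision as a deployable bundle
curl -u email:password \
  "https://api.enterprise.apigee.com/v1/organizations/myorg/apis/myproxy/revisions/3?format=bundle" \
  -o myproxy_rev3.zip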
https://docs.apigee.com/api-platform/antipatterns/no-source-control
2019-07-15T20:14:48
CC-MAIN-2019-30
1563195524111.50
[]
docs.apigee.com
Dashboard List The Dashboard List Block allows you to display a list of your dashboards. The list includes links to the dashboards, their names, and whether each dashboard is visible to the public. Configuration The block takes one optional parameter: a filter to reduce the list of dashboards by name. The filter accepts an asterisk * to search for any occurrence of the given characters in the dashboard name. For example: - Filtering with "th" will display a dashboard called "Thermostat" but hide one called "Monolith" - Filtering with "*th" will show both the "Thermostat" and "Monolith" dashboards To display all of your dashboards, do not provide a filter.
https://docs.losant.com/dashboards/dashboard-list/
2019-07-15T21:12:36
CC-MAIN-2019-30
1563195524111.50
[array(['/images/dashboards/dashboards-example.png', 'Dashboard List Dashboard List'], dtype=object) array(['/images/dashboards/dashboards-filter.png', 'Dashboard List Config Dashboard List Config'], dtype=object)]
docs.losant.com
Getting Started with the Slyce Demo App - Navigate to in a web browser on your iOS or Android mobile device. - Click the green "Download" icon to download the app. - ANDROID ONLY: Once the app is downloaded, open the downloaded file and follow the system prompts to install the app. - iOS ONLY: Once downloaded, you will also need to perform the following steps for the app to work: 3. Next, launch the "Slyce" app. To view or change settings, tap the "gear" icon at the top right of the "Welcome" screen. If you're not seeing the "gear" icon, tap the back arrow ("<") in the top left to go back to the "Welcome" screen. By default, you should be in "Universal" camera mode. This is the most common use case for Slyce Visual Search. If you'd like to see "Lens Picker" camera mode, select it. In order to change lenses, you'll select them in the bottom left corner of the camera screen. The last option is a toggle for "Batch Capture". This UI/UX choice is useful when the primary use case has users doing multiple searches at once. Tap "Dismiss" when you're done.
https://docs.slyce.it/hc/en-us/articles/360018075632-Slyce-Preview-App-Install-and-Setup-Information-for-iOS-and-Android
2019-07-15T20:11:03
CC-MAIN-2019-30
1563195524111.50
[array(['/hc/article_attachments/360023122392/IMG_4524.png', 'IMG_4524.png'], dtype=object) ]
docs.slyce.it
This page shows you how to configure an IPsec VPN between two NSX Edge Gateways. On your Edge Gateway, go to the Manage tab, then the VPN tab and the section IPsec VPN. If IPsec VPN Service Status is disabled, click Enable to enable it. (You need at least one peer to be able to publish.) You can also enable logging and configure the log level (by default, the value is INFO). Click Publish Changes to apply what you have just done. Now you need to configure the IPsec VPN on each site to have a working VPN. (Here each site is a NSX Edge Gateway) Create a VPN configuration on a NSX Edge Gateway Click on the "Add" ( ) icon. Enter the Name of your IPsec VPN peer. Enter a Local Id; it will be the Peer Id on the remote site. In this example, we chose the public IP of the NSX Edge Gateway. Enter the same value as Local Id for Local Endpoint. Enter the local subnets you want to share with the remote site. (CIDR format) Enter the Peer Id (remember, it's the Local Id of the remote site). Enter the same value as Peer Id for Peer Endpoint. Enter the local subnets of the remote site. (CIDR format) Select your required encryption algorithm. Select an authentication method. You can use Certificate authentication if you enabled it in Global configuration and if you added a certificate on the NSX Edge Gateway. Type the Pre-Shared Key. (It must be the same on the local and peer sites) Select a Diffie-Hellman Group. Click OK and click Publish Changes to put your parameters into production. Let's configure the remote site as follows, using your own requirements and following the instructions above: Click OK and click Publish Changes to put your parameters into production. Your configuration is done. You can display your tunnel state by clicking on Show IPsec Statistics. In this screenshot, you can see our tunnel status as UP and running. During the NSX Edge Gateway deployment, if you enabled the auto rule generation, your firewall rules are automatically configured. If not, you need to configure the firewall to allow the IPsec VPN. This screenshot shows you the auto-generated rule on the peer site.
https://docs.ovh.com/fr/private-cloud/ipsec-vpn/
2018-03-17T10:12:44
CC-MAIN-2018-13
1521257644877.27
[array(['https://docs.ovh.com/fr/private-cloud/ipsec-vpn/images/L3VPN_tab.png', None], dtype=object) array(['https://docs.ovh.com/fr/private-cloud/ipsec-vpn/images/L3VPN_config_vpn.png', None], dtype=object) array(['https://docs.ovh.com/fr/private-cloud/ipsec-vpn/images/L3VPN_SITE2.png', None], dtype=object) array(['https://docs.ovh.com/fr/private-cloud/ipsec-vpn/images/L3VPN_ipsec_done.png', None], dtype=object) array(['https://docs.ovh.com/fr/private-cloud/ipsec-vpn/images/L3VPN_firewall_rules.png', None], dtype=object) ]
docs.ovh.com
Do you have a website or blog powered by WordPress? Discover how to speed up page loading time using Redis! In this tutorial, we will set up a NoSQL Redis database and use it to cache WordPress objects. Not only will visitors have a better user experience, but WordPress administrators will also benefit from reduced page loading time. This guide can only be used with the beta SaaS Database Lab on Runabove. Any action carried out in this tutorial is at the user's own risk. OVH shall not be held liable for any technical failure, loss of data, etc. Remember to back up your files prior to making any changes. This tutorial requires: - a compatible Web hosting with the PHP-redis module installed or the possibility of installing it (currently this is not compatible with OVH shared webhosting plans): VPS, dedicated server, Public Cloud... with OVH or another provider. - a WordPress administrator account to install a plugin. - a WordPress version 4.x or above - and of course, a Redis database, which you can activate on our lab! WordPress + Redis = ?? Redis is an open source tool that makes it possible to maintain NoSQL databases in RAM and to cache objects. The aim is to increase speed. It is increasingly being used and has been adopted by Twitter, Github, Flickr, Pinterest, as well as many others. We are not going to go into more detail about what Redis is; more information can be found in the official and comprehensive documentation. As for WordPress, it may be resource intensive: user sessions, database queries, and so on. Its developers make it possible to cache these "objects", to store them in a non-persistent way, in order to go faster. As you can well imagine, optimization will be more or less significant depending on your use of "objects". A website with static content, like a blog, will be much less optimizable than an e-commerce site, news portal, etc. There are many plugins that make it possible to manage this cache, but the most popular one is W3 Total Cache (W3TC). However, it offers a local object cache (on your VPS, or server), which is not always suitable. Maintaining a Redis or Memcached database can be tedious and memory consuming. In this tutorial, you will learn how to configure the Redis Lab provided by OVH, and link it to your WordPress installation. The aim is to improve the WordPress page loading times for both visitors and administrators. Let's get started! The first step is to get a Redis database and check that it is operational. Go to Runabove OVH labs and then sign up for the Redis lab. Once it's done, create a Redis database via the OVH Sunrise Control Panel, change the password, and leave the page open as we are going to use it. Testing the connection to Redis using a terminal Open the terminal of your choice and check if you have the redis client installed: which redis-cli If not, install it. For Debian based distributions, it will be: apt-get install redis-cli Then, connect to your Redis database via the command line: redis-cli -h my-instance-url -p my-port This is what it looks like for me: redis-cli -h 950c9520-ed3c-492c-8e0a-c1xxxxxxxxxx.pdb.ovh.net -p 21244 Then, authenticate yourself using the password set in your OVH Sunrise Control Panel. Redis works with a unique password, but no user is required. auth MyPassWord And lastly, let's monitor what happens on your Redis database: monitor What happens in real time on your database will appear here. At this stage, the only thing we know is that the Redis database is running!
Let's check whether we can connect to it using PHP. Testing the connection to Redis from a WordPress site We are going to test the connection to Redis on the instance hosting your WordPress site using a very quick script. Through the means of your choice (FTP, SSH, ...), create a phpinfo.php file at the root of your WordPress site and add in it:

<?php
phpinfo();
?>

Then go to (can be changed depending on your configuration) and look for: This module enables PHP to communicate with the Redis database. If you don't find any paragraph on Redis, it means that the appropriate component is missing. Install it (Debian/Ubuntu-compatible command): sudo apt-get install php-redis You should see the Redis box (if needed, update PHP and restart your instance). Let's now test the connection to Redis using PHP, from your hosting. Through the means of your choice (FTP, SSH, ...), create a redis.php file at the root of your WordPress site and add in it:

<?php
// Connecting to the Redis server on OVH
$redis = new Redis();
$redis->connect('xxxxxxxxx-xxxxxx-xxxxx-xxxxxxxxx.pdb.ovh.net', 12345);
$redis->auth('MyPassword');
echo "Connection to server ongoing";
// Check whether the server is running or not
echo "Server is running: ".$redis->ping();
?>

Do not forget to fill in the access URL, the port and the password for your database, as they appear in your OVH Sunrise Control Panel. Then go to. If the connection works, you should see "Server is running: +PONG" and a "PING" in the terminal you left open on your Redis database. Well done! Everything is working on the database side. We are now moving on to the configuration of WordPress. Configuring WordPress with Redis The database is working; you now just need to configure WordPress to use it. If you already have a caching plugin, remember to disable the Object Cache (as with W3TC, for example). Changing WP-CONFIG.PHP Open the wp-config.php file at the root of your WordPress using a text editor. We are going to add these 3 lines just after the key salts.

define('WP_REDIS_HOST', '950c9520-ed3c-492c-xxxxx-xxxxxxxxxx.pdb.ovh.net');
define('WP_REDIS_PORT', '12345');
define('WP_REDIS_PASSWORD', 'MyPassword');

Optional: let's add a salt key. When only one application uses Redis, this is not necessary, but if you have several WordPress sites, it will be required to determine what-pushes-what.

define('WP_CACHE_KEY_SALT', 'myvps_' );

Installing the Redis Object Cache plugin Then, to make it easier, we are going to install the Redis Object Cache plugin. For the curious ones, it is a fork of Eric Mann and Erick Hitter's code: Github Redis Object Cache. If you hate plugins, do not hesitate to do a manual installation (pay attention to the names of the parameters, which are noticeably different in WP-CONFIG.PHP). This plugin is going to add an object-cache.php file (or change the existing one) in the wp-content/ folder of your WordPress site. If the plugin is unable to do so, add it manually (see the plugin page for more information). Configuring the plugin When everything is installed, go to the Redis settings page in your WordPress Admin Area. You should see something similar to this: Notice the "Connected" status as well as your accurate Host, Port, Database and Password. The configuration is all clean! Checking the increase in performance If you haven't closed the monitoring for your Redis database, the database activity should display.
Open another terminal or stop this monitoring, and enter the following commands on your Redis database: keys * This will allow you to analyze all key-value pairs stored in Redis. In my case: ... 48) "myvpswp_:comment:get_comments:9503889e74633f729bf0ed7217c233a4:0.53134300 1486125318" 49) "myvpswp_:comment:1" 50) "myvpswp_:term_meta:1" 51) "myvps_wp_:userlogins:bastien" 52) "myvpswp_:transient:feed_b9388c83948825c1edaef0d856b7b109" 53) "myvpswp_:posts:1" ... and to analyze a specific key, for example: get myvpswp_:posts:1 As for performance, I opened the development tools and then "Network". As I started with a fully clean WordPress site with no content, the increase is not that significant but it is still noticeable! The website displays in an average of 250 ms instead of 500 ms (hosted on an OVH SSD VPS located in Strasbourg). There you are! Please do not hesitate to give us your feedback on your tests! DBaaS Help - Documentation: Guides - Community hub: - Mailing List: [email protected]
https://docs.ovh.com/gb/en/clouddb/speed-up-wordpress-with-redis/
2018-03-17T10:12:07
CC-MAIN-2018-13
1521257644877.27
[array(['https://docs.ovh.com/gb/en/clouddb/speed-up-wordpress-with-redis/images/sunrise.png', 'Redis Sunrise Control Panel'], dtype=object) array(['https://docs.ovh.com/gb/en/clouddb/speed-up-wordpress-with-redis/images/phpredis.png', 'PHPinfo and Redis'], dtype=object) array(['https://docs.ovh.com/gb/en/clouddb/speed-up-wordpress-with-redis/images/redisobjectcache.png', 'Information on Redis Object Cache'], dtype=object) array(['https://docs.ovh.com/gb/en/clouddb/speed-up-wordpress-with-redis/images/redisobjectcacheconnected.png', 'Information on Redis Object Cache'], dtype=object) array(['https://docs.ovh.com/gb/en/clouddb/speed-up-wordpress-with-redis/images/chrome.png', 'Chrome Network tool'], dtype=object) ]
docs.ovh.com
The preview area of the Library view allows you to preview the content of a symbol or template and to swap between drawings and symbol cells in the Timeline view. This window is also used as the Drawing Substitution window. To preview a template or symbol's content: When working on a movie or series, you will most likely end up with a lot of templates and symbols in your library. You have access to a Search tool to help you find templates and symbols in your folders. To use the Library Search tool: The Library list lets you navigate through the different libraries and subfolders. You can also open, close and create new libraries from here. The Library folders have these default libraries: The symbols and templates contained in the selected Library list can be displayed on the right side of the Library view as thumbnails, in a list or as details. To access the templates and symbols list display options: Thumbnails List Details Related Topics
https://docs.toonboom.com/help/harmony/Content/HAR/Stage/010_Library/006_H1_Library_View.html
2018-03-17T10:53:00
CC-MAIN-2018-13
1521257644877.27
[]
docs.toonboom.com
PDC Promotion Updated: November 25, 2009 Applies To: Windows Server 2008 When a computer is promoted to become a domain controller, the promotion process updates the status of the computer to indicate that it holds the primary domain controller (PDC) emulator operations master role (also known as flexible single master operations or FSMO) for the domain. The PDC emulator operations master acts as a Windows NT primary domain controller. It processes password changes from clients and replicates updates to the backup domain controllers (BDCs). At any time, there can be only one domain controller acting as the PDC emulator master in each domain in the forest. By default, the PDC emulator is responsible for synchronizing the time on all domain controllers throughout the domain. The PDC emulator receives preferential replication of password changes that are performed by other domain controllers in the domain. If a password was changed recently, that change takes time to replicate to every domain controller in the domain. If a logon authentication fails at another domain controller as a result of a bad password, that domain controller will forward the authentication request to the PDC emulator before rejecting the logon attempt. Events Related Management Information DB Upgrade/DC Promotion/DC Demotion
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc756592(v=ws.10)
2018-03-17T10:44:08
CC-MAIN-2018-13
1521257644877.27
[]
docs.microsoft.com
DHCP: The server should be configured to register DNS records on behalf of DHCPv4 clients. DNS registrations of IPv4 client computers from the DHCP server have been disabled. Impact The DHCP server will not register DHCPv4 client names in DNS, resulting in the inability to connect to these client computers using hostnames unless the client computers are themselves registering DNS records. Resolution Configure client computers to register with DNS, or use the DHCP MMC to configure dynamic DNS updates on the DHCP server for DHCPv4 clients. Membership in Administrators, or equivalent, is the minimum required to complete this procedure. To configure DHCP clients to register with DNS At the DHCP client computer, click Start, click Run, in Search programs and files type ncpa.cpl, and then press ENTER. Right-click the applicable network connection, click Properties, click Internet Protocol Version 4 (TCP/IPv4) and then click Properties. Click Advanced, click DNS, check Register this connection's addresses in DNS and then click OK. Membership in the Administrators or DHCP Administrators group is the minimum required to complete this procedure. To enable dynamic DNS updates At the DHCP Server, click Start, point to Administrative Tools and then click DHCP. In the console tree, expand the applicable DHCP server, expand IPv4, right-click the applicable scope and then click Properties. Click DNS, check Enable DNS dynamic updates according to the settings below: and then click OK. Additional references For updated detailed IT pro information about DHCP, see the Windows Server 2008 R2 documentation on the Microsoft TechNet Web site.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee941150(v=ws.10)
2018-03-17T10:50:55
CC-MAIN-2018-13
1521257644877.27
[]
docs.microsoft.com
2d Bevel What is 2d Bevel? 2d Bevel is a modified version of bwidth that is specialized for 2d shapes. 2d Bevel in action In the following example I used the knife, followed by bevel-to-face deletion, to put some spacing in the model. Afterwards I used the 2d bevel and then tThick. 90% of the time this tool is followed by tThick. 2d Bevel use cases Bevelling 2d shapes - It's important to note that useless geometry gets dissolved; as with cleanMesh(E), it is not recommended in places where the geometry is specific. After the initial setup it can also be adjusted via bwidth. Enabling 2d Bevel This is a pro-level option, so if it is not visible in operations, enable pro mode in the preferences.
http://hardops-manual.readthedocs.io/en/latest/2dbevel/
2018-03-17T10:48:23
CC-MAIN-2018-13
1521257644877.27
[array(['../img/banner.gif', 'header'], dtype=object) array(['../img/2dbevel/bv1.gif', 'bevel'], dtype=object) array(['../img/2dbevel/bv2.gif', 'bevel'], dtype=object) array(['../img/2dbevel/bv4.gif', 'bevel'], dtype=object) array(['../img/2dbevel/bv3.png', 'bevel'], dtype=object) array(['../img/2dbevel/bv5.png', 'bevel'], dtype=object)]
hardops-manual.readthedocs.io
Logs off a user from a session and deletes the session from the server. Note In Windows Server 2008 R2, Terminal Services was renamed Remote Desktop Services. To find out what's new in this version, see What’s New in Remote Desktop Services on the Windows Server TechCenter. Syntax logoff [<SessionName> | <SessionID>] [/server:<ServerName>] [/v] Parameters Remarks Examples Additional references Remote Desktop Services (Terminal Services) Command Reference
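Illustrative usage, derived from the syntax line above (the session ID and server name are placeholders):
logoff 2 /v
logoff 2 /server:CONTOSO-TS01 /v
The first command logs off session 2 on the current server with verbose output; the second targets a remote server.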
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731280(v=ws.10)
2018-03-17T11:44:56
CC-MAIN-2018-13
1521257644877.27
[]
docs.microsoft.com
Business rules installed with on-call scheduling
On-call scheduling adds the following business rules (business rule, table, description):
- OnCallEscalation [sys_script]: Default escalation handler for on-call rotation escalations.
- Remove rotation records [sys_user_grmember]: Removes rotation records for a user group member.
- Change active [cmn_rota_roster]: Activates or deactivates the roster.
- Update Rotation Schedules (Member) [cmn_rota_member]: Recomputes the rotation schedules if a member order value was changed.
- Initial Roster Members [cmn_rota_roster]: Creates a new group member when a roster is created.
- Show records for user [v_on_call]: Shows schedules for a specific user.
- Rota Updated [cmn_rota]: Recomputes the rotation schedules for the roster members after the rota has been updated.
- Update Rotation Schedules (Roster) [cmn_rota_roster]: Recomputes the rotation schedules when the m2m or the roster definition changes.
- Delete roster member schedule [cmn_rota_member]: Recalculates the rotation when a member is removed.
- Delete group member on-call schedule [sys_user_grmember]: Deletes all spans associated with the group member being deleted.
- Delete Rota Schedule [cmn_rota]: Deletes the schedule when the rota is deleted.
- Validate Rota [cmn_rota]: Verifies that the schedule entry being updated is valid.
- Edit Schedule [v_rotation]: Redirects to the schedule page, passing the group as a parameter.
- Show records for user [v_rotation]: Displays a notification.
- Rota Schedule Item validate [cmn_schedule_span]: Validates that the schedule entry is valid.
- Roster Properties [v_rotation]: Redirects to a list of rosters filtered by group.
- Refresh Report [v_rotation]: Refreshes the UI when the UI action is executed.
https://docs.servicenow.com/bundle/helsinki-it-service-management/page/administer/on-call-scheduling/reference/r_BRInstlldWOnCallSched.html
2018-03-17T10:19:11
CC-MAIN-2018-13
1521257644877.27
[]
docs.servicenow.com
About the Transform Tool T-HFND-008-003 For more details about the Transform tool options, see Transform Tool Properties. - The Flip Horizontal command flips the layer along the Camera view X-axis. - The Flip Scale X command uses the original X-axis of the layer and flips the element along it.
https://docs.toonboom.com/help/harmony-14/premium/staging/about-transform-tool.html
2018-03-17T10:56:16
CC-MAIN-2018-13
1521257644877.27
[array(['../Resources/Images/HAR/Stage/SceneSetup/HAR11/HAR11_AdvancedToolvsTransformTool.png', None], dtype=object) array(['../Resources/Images/EDU/HAR/Student/Steps/an_flipscalexy_demo_03.png', None], dtype=object) ]
docs.toonboom.com
Troubleshooting¶ WORK-IN-PROGRESS This page will list many potential problems and gotchas. Running bin/console ezplatform:install --env prod on a system with swap enabled should yield success. When a system runs out of RAM, you may see `Killed` when trying to clear the cache (e.g., php bin/console --env=prod cache:clear from your project's root directory). Upload Size Limit¶ To make use of the Back Office, you need to define the maximum upload size.
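The upload limit ultimately depends on the PHP configuration; as a minimal sketch of the standard php.ini directives involved (the values are illustrative only, not eZ Platform recommendations):
; php.ini (illustrative values)
upload_max_filesize = 64M
post_max_size = 64M
Keep in mind that a front-end proxy (for example the nginx client_max_body_size directive) can impose its own, lower limit.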
https://ez-systems-developer-documentation.readthedocs-hosted.com/en/2.0/getting_started/troubleshooting/
2018-03-17T10:19:09
CC-MAIN-2018-13
1521257644877.27
[]
ez-systems-developer-documentation.readthedocs-hosted.com
Known errors and knowledge articles Another source of information about a problem is documentation about known errors and the knowledge base. Information about already known issues can be found in two places: the Known Errors module in the Problem Management application, or in the Knowledge application. The Known Errors module filters the problem table to present all of the problems whose cause has been identified but cannot be fixed. The knowledge base may have information that was gathered from incidents, and may also have useful workarounds for problems.
https://docs.servicenow.com/bundle/helsinki-it-service-management/page/product/problem-management/concept/c_CheckKnErrAndTheKB.html
2018-03-17T10:13:32
CC-MAIN-2018-13
1521257644877.27
[]
docs.servicenow.com
Document Type Article Recommended Citation Roger Williams University School of Law, "Newsroom: Veteran ProJo Columnist to Join RWU 9/12/2016" (2016). Life of the Law School (1993- ). 569. Included in Higher Education Commons, Legal Education Commons, Mass Communication Commons, Organizational Communication Commons, Public Relations and Advertising Commons
https://docs.rwu.edu/law_archives_life/569/
2017-12-11T05:49:19
CC-MAIN-2017-51
1512948512208.1
[]
docs.rwu.edu
Octave Charting Capability Demonstration Model File Tree Software Used Related Models Following are some related models available for cloning/copying by anyone: - 3D plot of the charge density of diamond using ELK and OpenDX - Electronic band structure of AlAs using ELK - Fermi surface plot of aluminium (Al) - Octave Charting Capability Demonstration Click on the category links at the bottom of this page to navigate to a full list of simulation models in similar subject area or similar computational methodology.
https://docs2.kogence.com/docs/Octave_Charting_Capability_Demonstration
2017-12-11T06:05:11
CC-MAIN-2017-51
1512948512208.1
[]
docs2.kogence.com
Units and unit systems¶ Unit system for physical quantities; includes definitions of constants. - class sympy.physics.units.unitsystem. UnitSystem(base, units=(), name='', descr='')[source]¶ UnitSystem represents a coherent set of units. A unit system is basically a dimension system with notions of scales. Many of the methods are defined in the same way. It is much better if all base units have a symbol. extend(base, units=(), name='', description='')[source]¶ Extend the current system into a new one. Take the base and normal units of the current system and merge them with the base and normal units given in argument. If not provided, name and description are overridden by empty strings.
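A minimal sketch of how the class is used, assuming the constructor and extend() signatures documented above and that the standard quantities in sympy.physics.units can serve as base units (details vary between sympy versions):

from sympy.physics.units import meter, kilogram, second, ampere
from sympy.physics.units.unitsystem import UnitSystem

# A coherent mechanical unit system with three base units.
mks = UnitSystem(base=(meter, kilogram, second), name='MKS', descr='metre/kilogram/second')

# Extend it with an electrical base unit into a new, larger system;
# this mirrors how sympy itself derives MKSA from MKS.
mksa = mks.extend(base=(ampere,), name='MKSA')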
http://docs.sympy.org/latest/modules/physics/units/unitsystem.html
2017-12-11T05:28:04
CC-MAIN-2017-51
1512948512208.1
[]
docs.sympy.org
Restoration Note Posted by docbyron on Aug 19th, 2017 I've restored most of the following that was lost from about August 1st through the 10th: -Submissions (that weren't already re-submitted) -User registrations (that hadn't already re-registered) -Private messages between users who registered before August 1st -All story edits -All user watches where both users registered before August 1st Things I'm not going to mess with for now: -Story faves, votes, comments, views, and tags -Profile comments -Notifications -Messages and watches for users who registered after August 1st The reasoning is I've noticed a lot of people have already re-done a lot of the faves/votes/comments/etc. that were lost and I'd like to avoid creating a lot of confusing duplicates. It's also a pain to try to restore the data for users and stories that were originally "lost" because the ids aren't the same. Still, if you have anything you'd like restored, please feel free to reach out to me privately and I'd be happy to help out.
http://www.docs-lab.com/
2017-12-11T05:35:43
CC-MAIN-2017-51
1512948512208.1
[]
www.docs-lab.com
ListAttributes Lists the attributes for Amazon ECS resources within a specified target type and cluster. When you specify a target type and cluster, ListAttributes returns a list of attribute objects, one for each attribute on each resource.
Request Syntax { "attributeName": "string", "attributeValue": "string", "cluster": "string", "maxResults": number, "nextToken": "string", "targetType": "string" }
Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format.
- attributeName The name of the attribute with which to filter the results. Type: String Required: No
- attributeValue The value of the attribute with which to filter results. You must also specify an attribute name to use this parameter. Type: String Required: No
- cluster The short name or full Amazon Resource Name (ARN) of the cluster whose attributes to list. If you do not specify a cluster, the default cluster is assumed. Type: String Required: No
- maxResults The maximum number of cluster results returned by ListAttributes in paginated output. If this parameter is not used, then ListAttributes returns up to 100 results and a nextToken value if applicable. Type: Integer Required: No
- nextToken The nextToken value returned from a previous paginated ListAttributes request. Type: String Required: No
- targetType The type of the target with which to list attributes. Type: String Valid Values: container-instance Required: Yes
Response Syntax { "attributes": [ { "name": "string", "targetId": "string", "targetType": "string", "value": "string" } ], "nextToken": "string" }
Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service.
- attributes A list of attribute objects that meet the criteria of the request. Type: Array of Attribute objects
- nextToken The nextToken value to include in a future ListAttributes request. When the results of a ListAttributes request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return. Type: String
Errors For information about the errors that are common to all actions, see Common Errors.
Example The following example lists the attributes for container instances that have the stack=production attribute in the default cluster.
Sample Request POST / HTTP/1.1 Host: madison.us-west-2.amazonaws.com Accept-Encoding: identity Content-Length: 122 X-Amz-Target: AmazonEC2ContainerServiceV20141113.ListAttributes X-Amz-Date: 20161222T181559Z User-Agent: aws-cli/1.11.30 Python/2.7.12 Darwin/16.3.0 botocore/1.4.87 Content-Type: application/x-amz-json-1.1 Authorization: AUTHPARAMS { "cluster": "default", "attributeName": "stack", "attributeValue": "production", "targetType": "container-instance" }
Sample Response HTTP/1.1 200 OK Server: Server Date: Thu, 22 Dec 2016 18:16:00 GMT Content-Type: application/x-amz-json-1.1 Content-Length: 158 Connection: keep-alive x-amzn-RequestId: b0eb3407-c872:
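The same sample query can be expressed with the AWS CLI rather than a raw HTTP request (an equivalent rendering for convenience; the cluster name and attribute values mirror the sample request above):
aws ecs list-attributes --cluster default --target-type container-instance --attribute-name stack --attribute-value production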
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ListAttributes.html
2018-03-17T14:39:42
CC-MAIN-2018-13
1521257645177.12
[]
docs.aws.amazon.com
Using vSphere HA You can set up VM-level high availability using the following steps: - Turn on vSphere HA on the vSphere cluster that the RCA-V is deployed on. - Turn on Enable Host Monitoring and VM Monitoring in the vSphere HA configuration. This will enable monitoring at two different levels: - If the host goes down, the RCA-V VM on that host will automatically restart on another host that is healthy. - If the VM running RCA-V goes down due to an accidental power-off or the guest OS crashing, the RCA-V will restart on the same physical server. Using Standby RCA-V Have another RCA-V pre-configured and ready to go in case the primary RCA-V fails. The procedure for creating a secondary RCA-V is: a. Deploy another instance of RCA-V (called RCA-V-DR). b. Log in to the admin UI for RCA-V-DR and configure the vSphere connectivity section. c. Copy the following files from the primary RCA-V into RCA-V-DR (see the sketch after this list): /etc/default/wstuncli and /etc/vscale.conf d. Power off RCA-V-DR. If the primary RCA-V fails, RCA-V-DR is ready to be started and take on requests from the RightScale platform.
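A sketch of step (c) using scp (the host names are placeholders and this assumes SSH access to both appliances; any remote-copy method works):
scp root@rca-v-primary:/etc/default/wstuncli /tmp/wstuncli
scp root@rca-v-primary:/etc/vscale.conf /tmp/vscale.conf
scp /tmp/wstuncli root@rca-v-dr:/etc/default/wstuncli
scp /tmp/vscale.conf root@rca-v-dr:/etc/vscale.conf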
http://docs.rightscale.com/rcav/v1.3/rcav_high_availability.html
2018-03-17T14:20:45
CC-MAIN-2018-13
1521257645177.12
[]
docs.rightscale.com
GetSocial Android SDK Changelog¶ v7.12.1 - 15 Jul, 2022¶ - Overall improvements. v7.12.0 - 12 Jul, 2022¶ - New: Support for blocked users v7.11.0 - 17 Jun, 2022¶ - New: support for verified users - Overall improvements. v7.10.0 - 31 Mar, 2022¶ - New: Global search (beta) for activities, labels, hashtags, users, groups and topics - Overall improvements. v7.9.1 - 10 Feb, 2022¶ - Overall improvements. v7.9.0 - 4 Feb, 2022¶ - New: bookmark activities and get all the bookmarked activities from the user. - New: get reacted or voted activities from the user. - New: filter activities by mentions of a user or the app. - Fixed: error when trying to open the Native Share option on Android 12. - Fixed: Push Notifications listeners not working correctly on Android 12. v7.8.1 - 25 Jan, 2022¶ - Fixed: Push Notifications were not opening the app on Android 12. v7.8.0 - 11 Jan, 2022¶ - New: follow labels and tags to see related content in the user’s timeline. - New: improved find for labels and hashtags. - Fixed: isBanned() is now correctly updated if the ban expiration happens while the user is using the app. v7.7.1 - 27 Dec, 2021¶ - Fixed: added missing PendingIntent.FLAG_IMMUTABLE flag, which caused a crash on Android 12. v7.7.0 - 15 Dec, 2021¶ - New: find activities by content, labels and properties - New: get suggested users based on user connections and trending users v7.6.8 - 4 Nov, 2021¶ - New: find topics and groups by labels and properties. v7.6.7 - 3 Nov, 2021¶ - Added missing export properties to AndroidManifest.xml. v7.6.6 - 1 Nov, 2021¶ - Overall improvements. v7.6.5 - 14 Oct, 2021¶ - Overall improvements. v7.6.4 - 6 Oct, 2021¶ - Fixed: issue when trying to join a group the user is already a member of. v7.6.3 - 28 Sep, 2021¶ - Fixed: issue when calling the initWithIdentity method while the SDK is already initialized. v7.6.2 - 23 Sep, 2021¶ - Fixed: issue with notification click listener invocation. v7.6.1 - 17 Sep, 2021¶ v7.6.0 - 15 Sep, 2021¶ - New: added possibility to filter for trending activities, topics and groups. v7.5.5 - 8 Sep, 2021¶ - Fixed issue with the Communities.areGroupMembers method. - Fixed issue with the ActivityDetailsView builder, which caused an empty feed view. v7.5.4 - 22 Jul, 2021¶ - Fixed issue with disabled vibration for Push Notifications on some Huawei devices. v7.5.3 - 15 Jul, 2021¶ v7.5.2 - 7 Jul, 2021¶ - Fixed issue with poll status in AnnouncementsQuery. v7.5.1 - 6 Jul, 2021¶ - Fixed issue with poll status in ActivitiesQuery. v7.5.0 - 5 Jul, 2021¶ v7.4.13 - 22 Jun, 2021¶ - Overall improvements. v7.4.12 - 26 May, 2021¶ Fixed: - Permission issues with ActivityDetailsViewBuilder. v7.4.11 - 29 Apr, 2021¶ Fixed: - Background thread issue when sending an invite fails. v7.4.10 - 26 Apr, 2021¶ Fixed: - Issue on feeds UI. v7.4.9 - 21 Apr, 2021¶ New: - Support adding multiple reactions to an activity. v7.4.8 - 12 Apr, 2021¶ Fixed: - Added method to query Announcements inside a Group. v7.4.7 - 31 Mar, 2021¶ Fixed: - Overall improvements. v7.4.6 - 3 Mar, 2021¶ Fixed: - Issue with missing sender information in the last chat message object. v7.4.5 - 24 Feb, 2021¶ New: - New method to refresh current user properties. v7.4.4 - 10 Feb, 2021¶ New: - New methods to increment/decrement public and private user properties. v7.4.3 - 29 Jan, 2021¶ Fixed: - Overall improvements. v7.4.2 - 28 Jan, 2021¶ Fixed: - Overall improvements. v7.4.1 - 19 Jan, 2021¶ Fixed: - Added missing id to the ChatMessage class. v7.4.0 - 13 Jan, 2021¶ New: v7.3.8 - 29 Dec, 2020¶ Fixed: - Issue when a phone number was used as an identity access token.
v7.3.7 - 10 Dec, 2020¶ Fixed: - Issue with the Firebase Performance plugin. v7.3.5 - 9 Dec, 2020¶ Fixed: - Issue with invoking the referral data listener on a background thread. v7.3.4 - 9 Dec, 2020¶ Fixed: - Issue with the Firebase Performance plugin. v7.3.3 - 2 Dec, 2020¶ Fixed: - Issue with topics permissions. v7.3.2 - 30 Nov, 2020¶ Fixed: - Overall improvements. v7.3.1 - 19 Nov, 2020¶ Fixed: - Issue with Link Parameters when creating invite content. v7.3.0 - 16 Nov, 2020¶ New: v7.2.8 - 3 Nov, 2020¶ Fixed: - Issue when fetching a single user by UserId. v7.2.7 - 5 Oct, 2020¶ New: - Added new method to provide custom error messages on UI. - New error codes for rate limiting errors when posting activities. v7.2.6 - 24 Sep, 2020¶ Fixed: - Text color issue in multiline notifications when the phone uses Dark mode. v7.2.5 - 21 Sep, 2020¶ Fixed: - Overall improvements. v7.2.4 - 15 Sep, 2020¶ Fixed: - Issue when sending custom invite content on Unity. v7.2.3 - 14 Sep, 2020¶ Fixed: - Overall improvements. v7.2.2 - 19 Aug, 2020¶ Fixed: - Issue where onInitializeListener was not invoked if the SDK was already initialized. v7.2.1 - 13 Aug, 2020¶ Fixed: - Minor issue with the UserId object. v7.2.0 - 10 Aug, 2020¶ New: - Added new method to initialize the SDK with an existing identity. v7.1.1 - 24 Jul, 2020¶ Fixed: - Overall improvements. v7.1.0 - 13 Jul, 2020¶ New: - Overall API improvements. Changed: - UserId.userWithId is now UserId.create. - UserId.userWithIdentityId is now UserId.createWithProvider. - UserIdList.usersWithIds is now UserIdList.create. - UserIdList.usersWithIdentityIds is now UserIdList.createWithProvider. v7.0.1 Beta 2 - 18 Jun, 2020¶ New: - Added new methods to filter activities by tag or by user. v7.0.1 Beta 1 - 21 Apr, 2020¶ Fixed: - Rare issues with the UI framework. Upgrading: - Update the gradle plugin version to 1.0.4. - Identity.createCustomIdentity() is now Identity.custom(). - Identity.createFacebookIdentity() is now Identity.facebook(withAccessToken:). v7.0.0 - 21 Apr, 2020¶ GetSocial SDK version 7 is a major update that brings a lot of improvements, new features and breaking changes. Follow the guide below to learn about what changed and how to upgrade to SDK v7. Upgrading¶ All pushNotifications.android.* values should not contain @drawable/, @color/, @string/ or any other prefix; each should be just the name of the file without the extension. For example, if you had im.getsocial.sdk.LargeNotificationIcon in AndroidManifest.xml with value @drawable/large_notification_icon, it should become pushNotifications.android.largeIcon with value large_notification_icon. Methods¶ All methods that had a CompletionCallback, Callback or other callback mechanism now have two different parameters for callbacks: CompletionCallback and Callback are called if the operation succeeds, and FailureCallback is called in case of error. All methods that supported operations by GetSocial User ID now support both GetSocial ID and Identity ID. This is encapsulated in the UserId and UserIdList classes for single and multiple users respectively. Read more about this. All methods that support pagination are now unified and use the same approach with the classes PagingQuery and PagingResult. Read more about this. Initialization¶ whenInitialized is changed to addOnInitializeListener and can be called multiple times. User Management¶ Current User¶ All methods related to the current user were in the GetSocial.User class. Now you can get an object of CurrentUser using GetSocial.getCurrentUser(). This method returns null if the SDK is not initialized.
When you update user properties like avatar or display name, refresh the object with GetSocial.getCurrentUser(). Notifications¶ All notification-related methods are moved to the Notifications class. NotificationListener is now split into two: OnNotificationClickedListener and OnNotificationReceivedListener. OnNotificationReceivedListener is called when the application is in the foreground and a GetSocial Push Notification is received. Note that it is now called even if notifications in the foreground are enabled. Enable Click Listener In order to have OnNotificationClickedListener invoked, you have to set pushNotifications.customListener to true in getsocial.json: // getsocial.json { ... "pushNotifications": { ... "customListener": true } }
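As a quick illustration of the v7.1.0 renames listed above (a sketch only: the method names come from this changelog, but the argument shapes are assumptions, not the SDK's documented signatures):
// Java
UserId id = UserId.create("some-getsocial-id");
UserId external = UserId.createWithProvider("facebook", "fb-user-123"); // argument order assumed
UserIdList ids = UserIdList.create(Arrays.asList("id-1", "id-2"));      // list form assumed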
https://docs.getsocial.im/libraries/android/changelog/
2022-09-25T01:46:38
CC-MAIN-2022-40
1664030334332.96
[]
docs.getsocial.im
Sets the current texture coordinate (x,y) for all texture units. In OpenGL this matches glMultiTexCoord for all texture units or glTexCoord when no multi-texturing is available. On other graphics APIs the same functionality is emulated. This function can only be called between GL.Begin and GL.End functions. using UnityEngine; public class Example : MonoBehaviour { // Draws a Quad in the middle of the screen and // Adds the material's Texture to it. Material mat; void OnPostRender() { if (!mat) { Debug.LogError("Please Assign a material on the inspector"); return; } GL.PushMatrix(); mat.SetPass(1); GL.LoadOrtho(); GL.Begin(GL.QUADS); GL.TexCoord2(0, 0); GL.Vertex3(0.25f, 0.25f, 0); GL.TexCoord2(0, 1); GL.Vertex3(0.25f, 0.75f, 0); GL.TexCoord2(1, 1); GL.Vertex3(0.75f, 0.75f, 0); GL.TexCoord2(1, 0); GL.Vertex3(0.75f, 0.25f, 0); GL.End(); GL.PopMatrix(); } }
https://docs.unity3d.com/2018.3/Documentation/ScriptReference/GL.TexCoord2.html
2022-09-25T02:19:06
CC-MAIN-2022-40
1664030334332.96
[]
docs.unity3d.com
This quickstart shows how to use ASP.NET Identity with IdentityServer; the approach it takes is to create a new project for the IdentityServer host that uses ASP.NET Identity (the differences are mainly around login and logout). All the other projects in this solution (for the clients and the API) will remain the same. Note This quickstart assumes you are familiar with how ASP.NET Identity works. If you are not, it is recommended that you first learn about it. New Project for ASP.NET Identity¶ The first step is to add a new project for ASP.NET Identity to your solution. We provide a template that contains the minimal UI assets needed to use ASP.NET Identity with IdentityServer. You will eventually delete the old project for IdentityServer, but there are some items that you will need to migrate over. Start by creating a new IdentityServer project that will use the ASP.NET Identity integration components for IdentityServer. Startup.cs¶ In ConfigureServices notice the necessary AddDbContext<ApplicationDbContext> and AddIdentity<ApplicationUser, IdentityRole> calls are done to configure ASP.NET Identity. The resources and clients carried over from the prior quickstarts are defined as follows: public static IEnumerable<IdentityResource> GetIdentityResources() { return new List<IdentityResource> { new IdentityResources.OpenId(), new IdentityResources.Profile(), }; } public static IEnumerable<ApiResource> GetApis() { return new List<ApiResource> { new ApiResource("api1", "My API") }; } public static IEnumerable<Client> GetClients() { return new List<Client> { // client credentials client new Client { ClientId = "client", AllowedGrantTypes = GrantTypes.ClientCredentials, ClientSecrets = { new Secret("secret".Sha256()) }, AllowedScopes = { "api1" } }, // resource owner password grant client new Client { ClientId = "ro.client", AllowedGrantTypes = GrantTypes.ResourceOwnerPassword, ClientSecrets = { new Secret("secret".Sha256()) }, AllowedScopes = { "api1" } }, // OpenID Connect hybrid flow client (MVC) new Client { ClientId = "mvc", ClientName = "MVC Client", AllowedGrantTypes = GrantTypes.Hybrid, ClientSecrets = { new Secret("secret".Sha256()) }, RedirectUris = { "" }, PostLogoutRedirectUris = { "" }, AllowedScopes = { IdentityServerConstants.StandardScopes.OpenId, IdentityServerConstants.StandardScopes.Profile, "api1" }, AllowOfflineAccess = true }, // JavaScript Client new Client { ClientId = "js", ClientName = "JavaScript Client", ... } }; } The quickstart UI uses ASP.NET Identity to validate credentials and manage the authentication session. Much of the rest of the code is the same as in the prior quickstarts and templates. Logging in with the MVC client¶ At this point, you should be able to run all of the existing clients and samples and log in with users from the minimal ASP.NET Identity template. Features such as registration, email confirmation and password reset are not included: given the variety of requirements and different approaches to using ASP.NET Identity, our template deliberately does not provide those features. You are expected to know how ASP.NET Identity works sufficiently well to add those features to your project. Alternatively, you can create a new project based on the Visual Studio ASP.NET Identity template and add the IdentityServer features you have learned about in these quickstarts to that project.
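For reference, the two ConfigureServices calls called out above typically look like this in a stock ASP.NET Identity setup (a sketch based on the standard template, not necessarily this quickstart's exact code; the connection string name is a placeholder):
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();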
https://identityserver4.readthedocs.io/en/docs-preview/quickstarts/6_aspnet_identity.html
2022-09-25T02:20:37
CC-MAIN-2022-40
1664030334332.96
[array(['../_images/aspid_mvc_client.png', '../_images/aspid_mvc_client.png'], dtype=object) array(['../_images/aspid_login.png', '../_images/aspid_login.png'], dtype=object) array(['../_images/aspid_claims.png', '../_images/aspid_claims.png'], dtype=object) array(['../_images/aspid_api_claims.png', '../_images/aspid_api_claims.png'], dtype=object)]
identityserver4.readthedocs.io
Gets a list of the Git repositories in your account. See also: AWS API Documentation. Output: CodeRepositorySummaryList
Synopsis: list-code-repositories [--creation-time-after <value>] [--creation-time-before <value>] [--last-modified-time-after <value>] [--last-modified-time-before <value>] [--name-contains <value>] [--sort-by <value>]
Options:
--creation-time-after (timestamp) A filter that returns only Git repositories that were created after the specified time.
--creation-time-before (timestamp) A filter that returns only Git repositories that were created before the specified time.
--last-modified-time-after (timestamp) A filter that returns only Git repositories that were last modified after the specified time.
--last-modified-time-before (timestamp) A filter that returns only Git repositories that were last modified before the specified time.
--name-contains (string) A string in the Git repositories' name. This filter returns only repositories whose name contains the specified string.
--sort-by (string) The field to sort results by. The default is Name. Possible values: Name, CreationTime, LastModified.
Output:
CodeRepositorySummaryList -> (list) Gets a list of summaries of the Git repositories. Each summary specifies the following values for the repository: - Name - Amazon Resource Name (ARN) - Creation time - Last modified time - Configuration information, including the URL location of the repository and the ARN of the Amazon Web Services Secrets Manager secret that contains the credentials used to access the repository. (structure) Specifies summary information about a Git repository. CodeRepositoryName -> (string) The name of the Git repository. CodeRepositoryArn -> (string) The Amazon Resource Name (ARN) of the Git repository. CreationTime -> (timestamp) The date and time that the Git repository was created. LastModifiedTime -> (timestamp) The date and time that the Git repository was last modified. GitConfig -> (structure) Configuration details for the Git repository, including the URL where it is located and the ARN of the Amazon Web Services Secrets Manager secret that contains the credentials used to access the repository. RepositoryUrl -> (string) The URL where the Git repository is located. Branch -> (string) The default branch for the Git repository. SecretArn -> (string) The Amazon Resource Name (ARN) of the Amazon Web Services Secrets Manager secret that contains the credentials used to access the git repository. The secret must have a staging label of AWSCURRENT and must be in the following format: {"username": *UserName* , "password": *Password* } NextToken -> (string) If the result of a ListCodeRepositoriesOutput request was truncated, the response includes a NextToken. To get the next set of Git repositories, use the token in the next request.
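A usage sketch combining the filters documented above (the name fragment and timestamp are placeholders):
aws sagemaker list-code-repositories --name-contains my-repo --creation-time-after 2022-01-01T00:00:00Z --sort-by CreationTime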
https://docs.aws.amazon.com/cli/latest/reference/sagemaker/list-code-repositories.html
2022-09-25T02:38:51
CC-MAIN-2022-40
1664030334332.96
[]
docs.aws.amazon.com
Budget Upload Tab Estimated reading time: 10 minutes A Regional Controller can pull on the Region, making Budget amount changes across multiple districts and uploading them all at once: pull in Budget values and save them! Do I have security rights to Save from the Budget Upload tab? To find out if you can save from the Budget Upload Tool, please check out the Tools Controlled by the Control Center page. Then try uploading a change to Budget using the Budget Upload tool (after this date), and you get the following popup and in-row messages. Notice the changes have been saved to Target Center 2.0, but they are not yet synced to Interject. This change will be synced to Interject once the Corp Cutoff date is set to a Date/Time after our change or BOD. We can use the Budget Change Query Tool to confirm that our save has been registered in Target Center 2.0. The new UnsyncedChanges tab is designed to pull in all amounts not yet synced to Interject, which neatly matches our inquiry. The screenshot below shows that our Budget save succeeded and is in the database.
https://docs.gointerject.com/bApps/InterjectTraining/Budget/BudgetUpload.html
2022-09-25T01:16:54
CC-MAIN-2022-40
1664030334332.96
[array(['/images/WCNTraining/Budget/BudgetUpload_FullView.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_MultipleDistrictsPull.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_MultipleDistrictsSave.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_BlankRowsDefault.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_InsertNewRowsMiddle.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_InsertNewRowsFromEmpty.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_QuickTools.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_SaveFormula.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_SmallSaveRange.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_BigSaveRange.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_CCAfterCorpCutoff.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_AfterCorpCutoffMessage.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_AfterCorpCutoffRowMessage.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_UnsyncedChangesBCQuery.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_LockLevelError.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_DPAError.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_InvalidSource.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_InvalidYear.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_IncompleteGLString.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_InvalidAccount.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_InvalidDistrict.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_InvalidSystem.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_InvalidSubSystem.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_DuplicateAccount.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_InvalidAmount.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_DistrictNotinRightsRow.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_RegCorpMultipleDistrictError.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_NotinDPAforDistrict.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_CannotUpdateAutocalcs.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_AfterCorpCutoffMessage.png', None], dtype=object) array(['/images/WCNTraining/Budget/BudgetUpload_AfterCorpCutoffRowMessageSingle.png', None], dtype=object) ]
docs.gointerject.com
Bot Lifecycle Management (BLM) - an overview As a Control Room user with the export or import bots module permission, you can move your bots (new or updated) from one environment to another using the Bot Lifecycle Management module in the Enterprise Control Room. For example, you can move bots that are verified as production-ready from staging to production. The process can be performed in two stages: - Export bots from one environment of a source Control Room - Import bots to another environment of a destination Control Room You can choose to export and import using two methods: - the Control Room user interface - the Bot Lifecycle Management API to export and import bots
https://docs.automationanywhere.com/ko-KR/bundle/enterprise-v11.3/page/enterprise/topics/control-room/bots/my-bots/blm-overview.html
2022-09-25T01:26:48
CC-MAIN-2022-40
1664030334332.96
[]
docs.automationanywhere.com
Getting started with Puhti This is a quick start guide for Puhti users. It is assumed that you have previously used CSC cluster resources like Taito/Sisu. If not, you can start by looking at the overview of CSC supercomputers. Go to my.csc.fi to apply for access to Puhti, or to view your projects and their project numbers if you already have access. On Puhti, you can also use the command csc-projects. Connecting to Puhti Connect using a normal ssh client: $ ssh [email protected] There is also a beta web interface where you can log in with your CSC user name. In this interface you can manage files, launch interactive applications and list jobs, quotas and project statuses. You can use it also for graphical applications. Module system: CSC uses the Lmod module system. Modules are set up in a hierarchical fashion, meaning you need to load a compiler before MPI and other libraries appear. See more information about modules. Compilers The system comes with two compiler families installed, the Intel and GCC compilers. We have installed both the 18 and 19 versions of the Intel compiler, and for GCC, versions 9.1, 8.3 and 7.4 are available. The PGI compiler 19.7 is available for building GPU applications. See more information about compilers. High performance libraries Puhti has several high performance libraries installed, see more information about libraries. MPI Currently the system has two MPI implementations installed: - hpcx-mpi - intel-mpi We recommend testing hpcx-mpi first; it comes from the network vendor and is based on OpenMPI. You will need to have the MPI module loaded when submitting your jobs. More information about building and running MPI applications. Applications More information about specific applications can be found here Default python Python is available through the python-env module. This will replace the system python call with python 3.7. The anaconda environment has a lot of regularly used packages installed by default. Running jobs Puhti uses the slurm batch job system. A description of the different slurm partitions can be found here. Note that the GPU partitions are available from the normal login nodes. Instructions on how to submit jobs can be found here and example batch job scripts are found here (see also the minimal script after this section). Very important change!! You have to specify your billing project in your batch script with the --account=<project> flag. Failing to do so will cause your job to be held with the reason “AssocMaxJobsLimit”. Running srun directly also requires the flag. More information about billing here and common queuing system error messages in the FAQ. Network - Login nodes can access the Internet - Compute nodes can access the Internet Storage You can check your current disk usage with csc-workspaces; more detailed information about storage can be found here. Linux basics Tutorial for CSC If you are new to the Linux command line or to using supercomputers, please consult this tutorial section!
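Tying the pieces above together, a minimal Puhti batch script might look like this (the project number and partition are placeholders; check the partition list linked above for real values):
#!/bin/bash
#SBATCH --account=project_1234567   # placeholder billing project; required on Puhti
#SBATCH --partition=test            # placeholder partition
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
srun hostname
Submit it with sbatch job.sh and follow it in the queue with squeue -u $USER.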
https://docs.csc.fi/support/tutorials/puhti_quick/
2022-09-25T01:25:32
CC-MAIN-2022-40
1664030334332.96
[]
docs.csc.fi
How to stake ATOM to get stkATOMs¶ pSTAKE will launch with initial support for the Cosmos chain’s native token, ATOM, and will extend support for multiple PoS networks in the near future. You can unlock the liquidity of staked assets on the supported PoS networks by depositing your native assets. You first need to connect your Ethereum wallet. A designated wallet address is provided to send your assets to. - From Home, click Staking; the Stake page appears by default. - Enter the number of ATOM and click Deposit & Stake. A window appears with the attributes to copy when you send your ATOM using any of these wallet methods: gaia CLI, Keplr, or Ledger. - Click the Copy icon to copy the attributes and complete the transfer of the ATOM. - Click Confirm. Note: When sending ATOM through the gaia CLI method, copy the gaia CLI command, replace the address placeholder with your Cosmos address, and run the command. After the staking transaction is processed, your stkATOM are credited to your wallet. You can now start earning staking rewards on the stkATOM credited.
https://docs.pstake.finance/How_to_stake_tokens/
2022-09-25T02:58:20
CC-MAIN-2022-40
1664030334332.96
[array(['https://user-images.githubusercontent.com/73919215/157028555-e669a3ce-bc3a-4862-8115-ec562f80f2d0.png', None], dtype=object) array(['https://user-images.githubusercontent.com/34552383/125457194-410784bf-d155-40da-8056-6d05102cf4dd.png', 'Wrap options'], dtype=object) ]
docs.pstake.finance
BNB Staking Background¶ BNB Chain comprises: BNB Beacon Chain (BC) (previously Binance Chain) - BNB Chain Governance (Staking, Voting) BNB Smart Chain (BSC) (previously Binance Smart Chain) - EVM compatible, consensus layers, and with hubs to multi-chains BSC allows smart contracts and hosts the DeFi activities in the Binance ecosystem. The dedicated staking module for BSC is on BC. BSC allows a total of 42 validators, of which the top 21 validators are in the active set and earn transaction fees. Token holders, including the validators, can bond their tokens for staking. Token holders can delegate their tokens to any validator or validator candidate (in the expectation that it can become an actual validator). Redelegation is allowed after a period of 7 days. Validators share a part of their rewards with their delegators. Validators can suffer from “slashing” as a punishment for bad behaviour, such as double signing and/or downtime. A validator slashing does not affect the stake of the delegator, as validators put up self-stake that is slashed. There is an unbonding period of 7 days for validators and delegators so that the system can make sure the tokens remain bonded when any bad behaviour is caught. pSTAKE BNB Liquid Staking¶ pSTAKE’s BNB liquid staking product allows holders of BNB to stake their assets using the BNB staking interface. Users are issued stkBNB, which follows an exchange rate model (inspired by Compound’s cToken model). stkBNB value keeps increasing against BNB as it accrues staking rewards in the background. The BNB deposited by the user onto the pSTAKE application goes to pSTAKE’s StakePool contract. Every day at 23:00 hrs UTC, the BC bot runs a staking transaction that aggregates all the deposits made to the StakePool contract and delegates them to the pSTAKE validator set. The users start earning staking rewards when they deposit BNB to the StakePool contract, which is reflected in the increase in exchange rate (c-value) for the stkBNB token. Users can unstake stkBNB on the pSTAKE application. When a user performs an unstake transaction, the stkBNB deposited by the user is burnt. A claim for an equivalent amount of BNB based on the ongoing exchange rate (c-value) is created in the name of the user. The user can claim the unstaked BNB from the pSTAKE application after 15 days. The 15-day unstaking period is necessary to always be able to fulfil user claims, as the bot can undelegate from any BNB Chain validator only once in 7 days. Users will be able to exit their liquid staked BNB position directly by swapping stkBNB with BNB on DEXs, and need not wait for the 15-day unstaking period. The users stop earning rewards after performing the unstake transaction.
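As a simplified illustration of the exchange-rate (c-value) model described above (the numbers are made up, and the real contract logic also handles fees and continuous reward accrual):

# Simplified stkBNB exchange-rate (c-value) model; illustrative only.
pool_bnb = 1050.0     # BNB held by the StakePool: deposits plus accrued rewards
stk_supply = 1000.0   # stkBNB tokens outstanding

c_value = pool_bnb / stk_supply   # 1 stkBNB is redeemable for c_value BNB -> 1.05
minted = 10.0 / c_value           # stkBNB minted for a 10 BNB deposit -> ~9.52
claim = 5.0 * c_value             # BNB claim created by unstaking 5 stkBNB -> 5.25
print(c_value, minted, claim)

As rewards accrue, pool_bnb grows while stk_supply stays fixed, so c_value (and with it the BNB value of each stkBNB) keeps increasing.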
https://docs.pstake.finance/stkBNB_Staking_Overview/
2022-09-25T02:58:58
CC-MAIN-2022-40
1664030334332.96
[]
docs.pstake.finance
Sensitive personal data¶ The following is a list of Ethical, Legal and Social Implications (ELSI) that should be considered when working with human data. The content on this page is based on a checklist that has been developed in the Tryggve project. It is intended to be used as a tool to document these considerations, and is available as: - An MS Word file that can be downloaded from the Tryggve project pages. - In the SciLifeLab Data Stewardship Wizard (SciLifeLab DSW) - Log in to the SciLifeLab DSW using your university credentials - Select Questionnaires in the left sidebar, and click the Create button - Choose Tryggve checklist… from the Knowledge Model drop-down menu Note that the checklist was created with cross-border collaborative projects in mind, but it should be useful for other research projects as well. Before the collection of personal data has begun, you should always consult with the Data Protection Officer of your organisation. Ethical reviews and informed consent (more info)¶ - Has the project (or parts of the project) undergone ethical review? - Have informed consents been collected from the research subjects? - Are there limitations of use defined in these? - Is the intended research purpose within the scope of the limitations of use that is defined in the ethics approval(s) and/or the informed consent(s)? GDPR (more info)¶ - What is the purpose of processing of the personal data? - Who is the Controller(s) of the personal data? - What is the legal basis for processing of the personal data? - Which exemptions from the prohibition on processing special categories of data (such as health and genetic data) under Art. 9 GDPR are used? - Have data processing agreements been established between the data controller(s) and any data processors? - Has a Data Protection Impact Assessment (DPIA) been performed for the personal data? - What happens with the data after project completion? - What technical and procedural safeguards will be established for processing the data? Other considerations (more info)¶ - Are there other relevant national legislation considerations that have to be taken into account? - e.g. regarding public access to information, biobank acts, etc. - Are there other Terms & conditions for data access (in particular if presenting obstacles for cross-border processing of health data)? - e.g. register data access policies - Are there other legal agreements between project parties that should be considered? - e.g. conditions regarding data reuse and intellectual property Clarifications and comments¶ Ethical reviews and informed consents¶ The purpose of these questions is to spell out what uses the subjects have consented to, and/or for what uses ethical approvals have been given. Then, given the stated research purpose of this project, the question is whether the consents and ethical approvals for the datasets are compatible with it. GDPR¶ State the purpose of processing the personal data¶ The GDPR stipulates that to process personal data the controller must do so with stated purposes, and not further process the data in a manner that is incompatible with those purposes (Article 5 - Principles relating to processing of personal data).
Who is the data controller of the personal data processed in the project?¶ Article 4 (7): “‘controller’ means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; […].” The Controller is typically the university employer of the PI, and the PI should act as a representative of her university employer and is responsible for ensuring that personal data is handled correctly in her projects. If the project involves more than one legal entity, and joint controllership is considered, make sure that all parties understand their obligations, and it is probably good to define the terms for this in an agreement between the parties. What is the legal basis for processing the personal data?¶ Article 6 (1) lists under what conditions the processing is considered lawful. Of these, Consent or Public interest are relevant when it comes to research. You should determine what legal basis (or bases) you have for processing the personal data in your project. Traditionally, consent has been the basis for processing personal data for research, but under the GDPR there cannot be an imbalance between the processor and the data subject for it to be considered to be freely given. In some countries the use of consent as the legal basis for processing by universities for research purposes is therefore not recommended. In those cases, public interest should probably be your legal basis. Note that if your legal basis for processing is consent, a number of requirements exists for the consent to be considered valid under the GDPR. Consents given before the GDPR might not live up to this. Also note that even if public interest is the legal basis, other laws and research ethics standards might still require you to have consent from the subjects for performing the research. Please consult with the Data Protection Officer of your organisation on which legal basis to apply to your data. Which exemptions from the prohibition on processing special categories of data (such as health and genetic data) under Art. 9 GDPR are used?¶ Processing of certain categories of personal data is not allowed unless there are exemptions in law to allow this. Among these categories (“sensitive data”) are “‘[…] data revealing racial or ethnic origin, […] genetic data, […] data concerning health’”. Most types of personal data collected in biomedical research will fall under these categories. Article 9 (2) lists a number of exemptions that apply, of which consent and scientific research are most likely to be relevant for research. Please consult with the Data Protection Officer of your organisation. Have data processing agreements been established between the data controller(s) and any data processors?¶ Article 4 (8): “‘processor’ means a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller.” Examples of this include using a secure computing environment provided by another organisation to do your analysis or to store the data, along with several other scenarios. In the case that you do, there needs to be a legal agreement established between the controller(s) and processor(s) as defined in Article 28 (3). Article 28 also lists the required contents of such an agreement. Your organisation and/or the processor organisation will probably have agreement templates that you can use.
Have Data Protection Impact Assessments (DPIA) been performed for the personal data?¶ Where a type of processing is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data, a so-called Data Protection Impact Assessment (DPIA) - Article 35. To clarify when this is necessary, the Swedish Data Protection Authority (DPA) “Datainspektionen” has issued guidance on when an impact assessment is required. Large-scale processing of sensitive data such as genetic or other health related data is listed as requiring DPIAs. The French DPA has made a PIA tool (endorsed by several other DPAs) available that can help in performing these impact assessments. Please also consult the Data Protection Officer of your organisation. What technical and procedural safeguards have been established for processing the data?¶ To ensure that the personal data that you process in the project is protected at an appropriate level, you should apply technical and procedural safeguards to ensure that the rights of the data subjects are not violated. Examples of such measures include, but are not limited to, pseudonymisation and encryption of data, the use of computing and storage environments with heightened security, and clear and documented procedures for project members to follow. What happens with the data after project completion?¶ The GDPR states that the processing (including storing) of personal data should stop when the intended purpose of the processing is done. There are, however, exemptions to this, e.g. when the processing is done for research purposes. Also, from a research ethics point of view, research data should be kept to make it possible for others to validate published research findings and reuse data for new discoveries. This is also governed by what the data subjects have been informed about regarding how you will treat the data after project completion. The recommendation is to deposit the sensitive data in appropriate controlled-access repositories if such are available, but this requires that the data subjects are informed and have agreed to this. Other considerations¶ There might also exist other national legal or procedural considerations for cross-border research collaborations. Other laws might affect how and if data can or cannot be made available outside the country of origin. The operating procedures of government authorities or other organisations might create obstacles for sharing data across borders. To make sure that it is clear how original and derived data, as well as results, can be used by the parties after the project completion, consider establishing legal agreements that define this. This can include e.g. reuse of data for other projects or intellectual property rights derived from the research project.
https://scilifelab-data-guidelines.readthedocs.io/en/latest/docs/general/sensitive_data.html
2022-09-25T01:47:27
CC-MAIN-2022-40
1664030334332.96
[]
scilifelab-data-guidelines.readthedocs.io
6.24 (5) Ballots. The board shall prescribe a special ballot for use under this section whenever necessary. Official ballots under ss. 5.60 (8) and 5.64 (3) prescribed for use in the presidential preference primary may also be used. The ballot shall be designed to comply with the requirements of prescribed under ss. 5.60 (8), 5.62 and 5.64 (1) insofar as applicable. All ballots shall be limited to national offices only. 182, s. s. 74 Section 74 . 6.24 (6) of the statutes is amended to read: 6.24 (6) Instructions and handling. The municipal clerk shall mail. The Except as authorized under s. 6.87 (3) (d), the municipal clerk shall mail the material postage prepaid to any place in the world. The overseas elector shall provide return postage. 182, s. s. 75m Section 75m . 6.24 (7) of the statutes is amended to read: 6.24 (7) Voting procedure. Except as authorized under s. 6.25, the ballot shall be marked or punched and returned, deposited and recorded in the same manner as other absentee ballots. In addition, the certificate-affidavit certificate shall have a statement of the elector's birth date. Failure to return the unused ballots in a primary election does not invalidate the ballot on which the elector casts his or her votes. 182, s. s. 76 Section 76 . 6.275 (1) (c) of the statutes is amended to read: 6.275 (1) (c) Where registration applies, the total number of electors of the municipality residing in that county who registered after the close of registration and prior to the day of the primary or election under s. ss. 6.28 (1) and 6.29. 182, s. s. 77 Section 77 . 6.28 (1) of the statutes is amended to read: 6.28 (1) Registration locations; deadline. Except as authorized in ss. 6.29 and 6.55 (2),. An application for registration in person or by mail may be accepted for placement on the registration list after the specified deadline, if the municipal clerk determines that the registration list can be revised to incorporate the registration in time for. 182, s. s. 78 Section 78 . 6.29 (1) of the statutes is amended to read: 6.29 (1) No names may be added to a registration list for any election after the close of registration, except as authorized under this section or s. 6.28 (1) or 6.55 (2) or (3) . Any person whose name is not on the registration list but who is otherwise a qualified elector is entitled to vote at the election upon compliance with this section. 182, s. s. 79 Section 79 . 6.29 (2) (b) of the statutes is amended to read: 6.29 (2) (b) Upon the filing of the registration form required by this section, the municipal clerk shall issue a certificate addressed to the inspectors of the proper ward directing that the elector be permitted to cast his or her vote , unless the clerk determines that the registration list will be revised to incorporate the registration in time for the election . The certificate shall be numbered serially, prepared in duplicate and one copy preserved in the office of the municipal clerk. 182, s. s. 80 Section 80 . 6.29 (2) (c) of the statutes is amended to read: 6.29 (2) (c) The elector, at At the time he or she appears at the correct polling place, the elector shall deliver the any certificate issued under par. (b) to the inspectors. If the elector applies for and obtains an absentee ballot, the any certificate shall be annexed to and mailed with the absentee ballot to the office of the municipal clerk. 182, s. s. 81 Section 81 .
6.30 (1) of the statutes is amended to read: 6.30 (1) In person. Registration applications shall be made in person, except under subs. (2) to sub. (4). 182, s. s. 82 Section 82 . 6.30 (2) and (3) of the statutes are repealed. 182, s. s. 83 Section 83 . 6.30 (4) of the statutes is amended to read: 6.30 (4) By mail. Any eligible elector who is located not more than 50 miles from his or her legal voting residence may register by mail on a form prescribed by the board and provided by each municipality. The form shall be designed to obtain the information required in ss. 6.33 (1) and 6.40 (1) (a) and (b). The form shall contain a certification by the elector that all statements are true and correct. The form shall be prepostpaid for return when mailed at any point within the United States , and shall be signed by a special registration deputy or shall be signed and substantiated by one other elector residing in the same municipality in which the registering elector resides, corroborating all material statements therein . The form shall be available in the municipal clerk's office and may be distributed by any elector of the municipality. The clerk shall mail a registration form to any elector upon written or oral request. 182, s. s. 84 Section 84 . 6.33 (2) (b) of the statutes is amended to read: 6.33 (2) (b) The registration form shall be signed by the registering elector and any corroborating elector under s. 6.29 (2) (a) , 6.30 (2) to (4) or 6.55 (2) before the clerk, issuing officer or registration deputy. The form shall contain a certification by the registering elector that all statements are true and correct. 182, s. s. 85 Section 85 . 6.45 (1) of the statutes is amended to read: 6.45 (1) After the deadline for revision of the registration list, the municipal clerk shall make copies of the list for election use. any person who is observing the proceedings under s. 7.41 when such use does not interfere with the conduct of the election. 182, s. s. 86 Section 86 . 6.79 (intro.) of the statutes is amended to read: 6.79 Recording electors. (intro.) Two election officials at each election ward shall be in charge of and shall maintain 2 separate lists of all persons voting. The municipal clerk may elect to maintain the information on the poll list manually or electronically. If the list is maintained electronically, the officials shall enter the information into an electronic data recording system that enables retrieval of a printed copy of the poll list at the polling place. The system employed is subject to the approval of the board. 182, s. s. 87 Section 87 . 6.79 (1) and (2) of the statutes are amended to read: 6.79 (1) Municipalities without registration. Where there is no registration, before being permitted to vote, each person shall state his or her full name and address. The officials shall record enter each name and address on a poll list in the same order as the votes are cast. If the residence of the elector does not have a number, the election officials shall, in the appropriate space, write enter enter. The officials shall maintain a separate list of those persons voting under ss. 6.15 and 6.24. (2) Municipalities with registration. entered and shall be given a slip bearing such number. 182, s. s. 88 Section 88 . 6.79 (5) of the statutes is amended to read: 6.79 (5) Poll list forms format . 
Poll lists shall be kept on forms designed, or in an electronic format prescribed, by the board to be substantially similar to the standard registration list forms used in municipalities where registration is required and shall require, for each person offering to vote, the entry of the person's full name and address. 182, s. 89. Section 89. 6.80 (2) (e) and (f) of the statutes are amended to read: 6.80 (2) (e) Upon voting his or her ballot, the elector shall publicly and in person deposit it into the ballot box or deliver it to an inspector, who shall deposit the ballot into the ballot box. (f) In the presidential preference primary and other partisan primary elections at polling places where ballots are distributed to electors, unless the ballots are prepared under s. 5.655 or are utilized with an electronic voting system in which all candidates appear on the same ballot, after the elector prepares his or her ballot the elector shall detach the remaining ballots, fold the ballots to be discarded, and fold the completed ballot unless the ballot is intended for counting with automatic tabulating equipment. The elector shall then either personally deposit the ballots to be discarded into the separate ballot box marked "blank ballot box" and deposit the completed ballot into the ballot box indicated by the inspectors, or give the ballots to an inspector who shall deposit the ballots directly into the appropriate ballot boxes. The inspectors shall keep the blank ballot box locked until the canvass is completed and shall dispose of the blank ballots as prescribed by the municipal clerk. 182, s. 90m. Section 90m. 6.85 of the statutes is amended to read: 6.85 Absent elector; definition. An absent elector is any otherwise qualified elector who is or expects to be absent from the municipality in which the absent elector is a qualified elector on election day, or who because of age, sickness, handicap, physical disability, jury duty, service as an election official or religious reasons is unable or unwilling to appear at the polling place in his or her ward. No person under the age of 70 qualifies as an absent elector solely because of age. Any otherwise qualified elector who changes residence within this state by moving to a different ward or municipality later than 10 days prior to an election may vote an absentee ballot in the ward or municipality where he or she was qualified to vote before moving. An elector qualifying under this section may vote by absentee ballot under ss. 6.86 to 6.89. 182, s. 91. Section 91. 6.86 (1) (b) of the statutes is amended to read: 6.86 (1) (b) Except as provided in this section, if application is made in writing, the application, signed by the elector, shall be received no later than 5 p.m. on the Friday immediately preceding the election. If application is made in person, the application shall be made no later than 5 p.m. on the day preceding the election. If the elector is making written application and the application indicates that the reason for requesting an absentee ballot is that the elector is a sequestered juror, the application shall be received no later than 5 p.m. on election day. If the application is received after 5 p.m.
on the Friday immediately preceding the election, the municipal clerk or the clerk's agent shall immediately take the ballot to the court in which the elector is serving as a juror and deposit it with the judge. The judge shall recess court, as soon as convenient, and give the elector the ballot. The judge shall then witness the voting procedure as provided in s. 6.87 and shall deliver the ballot to the clerk or agent of the clerk who shall deliver it to the polling place as required in s. 6.88. If application is made under sub. (2), the application may be received no later than 5 p.m. on the Friday immediately preceding the election. 182, s. 92. Section 92. 6.86 (3) (a) of the statutes is amended to read: 6.86 (3) (a) Any elector who is registered, or otherwise qualified where registration is not required, and who qualifies under ss. 6.20 and 6.85 as an absent elector because the elector is hospitalized, may apply for and obtain an official ballot by agent. The agent may apply for and obtain a ballot for the hospitalized absent elector by presenting a form prescribed by the board and containing the required information supplied by the hospitalized elector and signed by that elector and any other elector residing in the same municipality as the hospitalized elector, corroborating the information contained therein. The corroborating elector shall state on the form his or her full name and address. 182, s. 93. Section 93. 6.865 (intro.) and (1) of the statutes are consolidated, renumbered 6.865 and amended to read: 6.865 Federal postcard request form. A federal postcard registration and absentee ballot request form may be used to apply for an absentee ballot under s. 6.86 (1) if the form is completed in such manner that the municipal clerk or board of election commissioners with whom it is filed is able to determine that the applicant is an elector of this state and of the ward or election district where the elector seeks to vote. 182, s. 94. Section 94. 6.865 (2) of the statutes is repealed. 182, s. 95p. Section 95p. 6.87 (2) of the statutes is amended to read: 6.87 (2) Except as authorized under sub. (3) (d), the municipal clerk shall place the ballot in an unsealed envelope furnished by the clerk. The envelope shall have the name, official title and post-office address of the clerk upon its face. The other side of the envelope shall have a printed certificate in substantially the following form: [STATE OF .... County of ....] or [(name of foreign country and city or other jurisdictional unit)] I, ...., certify that I … am unable or unwilling to appear at the polling place in the (ward) (election district) on election day because I expect to be absent from the municipality or because of age, sickness, handicap, physical disability, religious reasons, jury duty, service as an election official, or because I have changed my residence within the state from one ward or election district to another within 10 days before the election. I certify that I exhibited the enclosed ballot unmarked to the witness, that I then in .... The witness shall execute the following: I, the undersigned witness, subject to the penalties of s. 12.60 (1) (b), Wis. Stats., for false statements, certify that the above statements are true and the voting procedure was executed as there stated. I am not a candidate for any office on the enclosed ballot (except in the case of an incumbent municipal clerk). I did not solicit or advise the elector to vote for or against any candidate or measure. ....(Name) ....(Address) 182, s. 96. Section 96. 6.87 (3) (a) of the statutes is amended to read: 6.87 (3) (a) Except as authorized under par. (d) and as otherwise provided in s. 6.875, the municipal clerk shall mail the absentee ballot postage prepaid for return to the elector's residence unless otherwise directed, or shall deliver it to the elector personally at the clerk's office. 182, s. 97. Section 97. 6.87 (3) (d) of the statutes is created to read: 6.87 (3) (d) Unless a municipality uses an electronic voting system that requires an elector to punch a ballot in order to record the elector's votes, …. 182, s. 98p. Section 98p. 6.87 (4) of the statutes is amended to read: 6.87 (4) Except as otherwise provided in s. 6.875, the elector voting absentee shall make and subscribe to the certification before one witness. The absent elector, in the presence of the witness, shall mark or punch the ballot in a manner that will not disclose how the elector's vote is cast. The elector shall then, still in the presence of the witness, fold the ballots if they are paper ballots so each is separate and so that the elector conceals the markings or punches thereon and deposit them in the proper envelope. … 182, s. 99m. Section 99m. 6.87 (7) of the statutes is amended to read: 6.87 (7) No individual who is a candidate at the election in which absentee ballots are cast may serve as a witness. Any candidate who serves as a witness shall be penalized by the discounting of a number of votes for his or her candidacy equal to the number of certificate envelopes bearing his or her signature. 182, s. 100m. Section 100m. 6.87 (8) of the statutes is amended to read: 6.87 (8) The provisions of this section which prohibit candidates from serving as a witness for absentee electors shall not apply to the municipal clerk in the performance of the clerk's official duties. 182, s. 101m. Section 101m.
6.87 (9) of the statutes is amended to read: 6.87 (9) If a municipal clerk receives an absentee ballot with an improperly completed certificate or with no certificate, the clerk may return the ballot to the elector, inside the sealed envelope when an envelope is received, together with a new envelope if necessary, whenever time permits the elector to correct the defect and return the ballot within the period prescribed in sub. (6). 182, s. 102. Section 102. 6.875 (2) (b) of the statutes is amended to read: 6.875 (2) (b) The municipal clerk or board of election commissioners of any municipality where a community-based residential facility is located may adopt the procedures under this section for absentee voting in any community-based residential facility located in the municipality if the municipal clerk or board of election commissioners finds that a significant number of the occupants of the community-based residential facility lack adequate transportation to the appropriate polling place, a significant number of the occupants of the community-based residential facility may need assistance in voting, there are a significant number of the occupants of the community-based residential facility aged 60 or over, or there are a significant number of indefinitely confined electors who are occupants of the community-based residential facility. The municipal clerk or board of election commissioners shall promptly notify the individual submitting nominations for special voting deputies under s. 7.30 (4) of any action taken under this paragraph. 182, s. 103. Section 103. 6.875 (2) (c) of the statutes is amended to read: 6.875 (2) (c) The municipal clerk or board of election commissioners of any municipality where a retirement home is located may adopt the procedures under this section for absentee voting in any retirement home located in the municipality if the municipal clerk or board of election commissioners finds that a significant number of the occupants of the retirement home lack adequate transportation to the appropriate polling place, a significant number of the occupants of the retirement home may need assistance in voting, there are a significant number of the occupants of the retirement home aged 60 or over, or there are a significant number of indefinitely confined electors who are occupants of the retirement home. The municipal clerk or board of election commissioners shall promptly notify the individual submitting nominations for special voting deputies under s. 7.30 (4) of any action taken under this paragraph. 182, s. 104. Section 104. … 182, s. 105. Section 105. … send the ballot to the elector no later than 5 p.m. on the Friday preceding the election. 182, s. 106m. Section 106m. … ", aged, sick, handicapped or disabled elector or the ballot of an election official and must be opened at the polls during polling hours on election day". If the ballot was received …. 182, s. 107. Section 107. 6.88 (2) of the statutes is amended to read: 6.88 (2) When an absentee ballot is received by the municipal clerk prior to the delivery of the official ballots to the election officials of the ward in which the elector resides, the municipal clerk shall seal the ballot envelope in the carrier envelope as provided under sub. (1), enclose the envelope in a package and deliver the package to the election inspectors of the proper ward or election district. When the official ballots for the ward or election district have been delivered to the election officials before the receipt of an absentee ballot, the clerk shall immediately enclose the envelope containing the absentee ballot in a carrier envelope as provided under sub. (1) and deliver it in person to the proper election officials. 182, s. 108m. Section 108m. 6.88 (3) (a) of the statutes is amended to read: 6.88 (3) (a) Any time between the opening and closing of the polls on election day, the inspectors shall open the carrier envelope only, and announce the absent elector's name. When the inspectors find that the certification has been properly executed, the applicant is a qualified elector of the ward or election district, and the applicant has not voted in the election, they shall enter an indication on the poll or registration list next to the applicant's name indicating that an absentee ballot is cast by the elector. They shall then open the envelope containing the ballot in a manner so as not to deface or destroy the certification thereon. The inspectors shall take out the ballot without unfolding it or permitting it to be unfolded or examined. Unless the ballot is cast under s. 6.95, the inspectors shall verify that the ballot has been endorsed by the issuing clerk. The inspectors shall deposit the ballot into the proper ballot box and enter the absent elector's name or voting number after his or her name on the poll or registration list the same as if the elector had been present and voted in person. 182, s. 109p. Section 109p. 6.88 (3) (b) of the statutes is amended to read: 6.88 (3) (b) When the inspectors find that a certification is insufficient, that the applicant is not a qualified elector in the ward or election district, that the ballot envelope is open or has been opened and resealed, that the ballot envelope contains more than one ballot of any one kind, or that the certificate of an elector who received an absentee ballot by facsimile transmission or electronic mail is missing, or if proof is submitted to the inspectors that an elector voting an absentee ballot has since died, the inspectors shall not count the ballot. The inspectors shall endorse every ballot not counted on the back, "rejected (giving the reason)". The inspectors shall reinsert each rejected ballot into the certificate envelope in which it was delivered and enclose the certificate envelopes and ballots, and securely seal the ballots and envelopes in an envelope marked for rejected absentee ballots. The inspectors shall endorse the envelope, "rejected ballots" with a statement of the ward or election district and date of the election, signed by the chief inspector and one of the inspectors representing each of the 2 major political parties and returned to the municipal clerk in the same manner as official ballots voted at the election.
http://docs.legis.wi.gov/1999/related/acts/182/88
This tutorial outlines the basic installation process for deploying MongoDB on Red Hat Enterprise Linux, CentOS Linux, Fedora Linux and related systems. This procedure uses .rpm packages as the basis of the installation. 10gen publishes packages of the MongoDB releases as .rpm packages for easy installation and management for users of CentOS, Fedora and Red Hat Enterprise Linux systems. While some of these distributions include their own MongoDB packages, the 10gen packages are generally more up to date. This tutorial includes: an overview of the available packages, instructions for configuring the package manager, the process to install packages from the 10gen repository, and preliminary MongoDB configuration and operation. See also: additional installation tutorials. The 10gen repository provides the mongo-10gen package (the mongo shell and tools) and the mongo-10gen-server package (the mongod and mongos daemons). Create a /etc/yum.repos.d/10gen.repo file to hold information about your repository. If you are running a 64-bit system (recommended), place the following configuration in the /etc/yum.repos.d/10gen.repo file: [10gen] name=10gen Repository baseurl= gpgcheck=0 enabled=1 If you are running a 32-bit system, which isn't recommended for production deployments, place the same configuration in /etc/yum.repos.d/10gen.repo with the 32-bit baseurl. Installing the packages creates a mongod user and the /var/lib/mongo and /var/log/mongo directories. Warning: With the introduction of systemd in Fedora 15, the control scripts included in the packages available in the 10gen repository are not compatible with Fedora systems. A correction is forthcoming; see SERVER-7285 for more information, and in the meantime use your own control scripts or install using the procedure outlined in Install MongoDB on Linux. Start the mongod process by issuing the following command (as root, or with sudo): service mongod start. Ensure that mongod starts following a system reboot by issuing the following command (as root, or with sudo): chkconfig mongod on. Stop the mongod process by issuing the following command (as root, or with sudo): service mongod stop. You can restart the mongod process by issuing the following command (as root, or with sudo): service mongod restart. Follow the state of this process by watching the output in the /var/log/mongo/mongod.log file for errors or important messages from the server. You must configure SELinux to allow MongoDB to start on Fedora systems. Administrators have two options: … Among the tools included in the mongo-10gen package is the mongo shell, which you can use to insert a test document and then retrieve that document: > db.test.save( { a: 1 } ) > db.test.find() See also "mongo" and "mongo Shell Methods".
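The same insert-and-query smoke test can be run from a driver instead of the mongo shell. Here is a minimal sketch using the Python driver, pymongo (this assumes pymongo is installed and mongod is listening on the default localhost:27017; the database and collection names are arbitrary):

# Minimal pymongo smoke test against a freshly installed mongod.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)   # connect to the local mongod
db = client.test                           # the default "test" database
db.test.insert_one({"a": 1})               # same effect as db.test.save({a: 1}) in the shell
print(db.test.find_one({"a": 1}))          # fetch the document back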
http://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/
This module provides a standard interface to …. The module defines the following functions:
http://docs.python.org/release/1.5/lib/node32.html
These functions create new file objects. These methods do not make it possible to retrieve the return code from the child processes. The only way to control the input and output streams and also retrieve the return codes is to use the Popen3 and Popen4 classes from the popen2 module; these are only available on Unix.
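The Popen3 class can be used roughly as follows (a Python 2 sketch; popen2 was removed in Python 3, where the subprocess module replaces it):

# Python 2 sketch: run a child process and collect both its output and
# its return code, which os.popen() alone cannot provide.
import popen2

proc = popen2.Popen3("ls -l", capturestderr=True)   # spawn the child process
output = proc.fromchild.read()                      # read the child's standard output
errors = proc.childerr.read()                       # captured standard error
status = proc.wait()                                # exit status, encoded as for os.waitpid()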
http://docs.python.org/release/2.2.1/lib/os-newstreams.html
Call waiting, call forwarding, and call blocking - Turn on or turn off call waiting - About call forwarding - Forward or stop forwarding calls - Add, change, or delete a call forwarding number - About call blocking - Block or stop blocking calls - Change the call blocking password
http://docs.blackberry.com/en/smartphone_users/deliverables/33213/Call_waiting_forwarding_and_blocking_1027637_11.jsp
Organizational structure
http://docs.joomla.org/Organizational_structure
Register August 2001 No. 548 Chapter LES 2 RECRUITMENT QUALIFICATIONS LES 2.01 Minimum qualifications for recruitment. LES 2.02 Pre-employment drug testing. LES 2.01 Minimum qualifications for recruitment. LES 2.01(1) Before an individual may commence employment on a probationary, temporary, part-time or full-time basis as a law enforcement, tribal law enforcement, jail or secure detention officer, that individual must have met recruit qualifications established by the board. The minimum qualifications for recruitment shall be: LES 2.01(1)(a) The applicant shall possess a valid Wisconsin driver's license or such other valid operator's permit recognized by the Wisconsin department of transportation as authorizing operation of a motor vehicle in Wisconsin prior to completion of the preparatory training course. The results of a check of the issuing agency's motor vehicle files shall constitute evidence of driver's status. LES 2.01(1)(b) The applicant shall have attained a minimum age of 18 years. A birth or naturalization certificate shall serve as evidence of applicant's date of birth. LES 2.01(1)(c) The applicant shall not have been convicted of any federal felony or of any offense which if committed in Wisconsin could be punished as a felony unless the applicant has been granted an absolute and unconditional pardon. LES 2.01(1)(d) …. LES 2.01(1)(e) An applicant for employment as a law enforcement or tribal law enforcement officer shall possess either a 2 year associate degree from a Wisconsin technical college system district or its accredited equivalent from another state or a minimum of 60 fully accredited college level credits. An applicant who has not met this standard at the time of employment shall meet this standard as a requirement of recertification by the board at the end of his or her fifth year of employment as a law enforcement or tribal law enforcement officer. At the request of an applicant and upon documentation of experiences that have enhanced his or her writing, problem solving and other communication skills, the board may waive a maximum of 30 college level credits. This educational standard shall apply to applicants first employed as law enforcement or tribal law enforcement officers on or after February 1, 1993. LES 2.01(1)(f) The applicant shall be of good character as determined from a written report containing the results of the following: LES 2.01(1)(f)1. The fingerprinting of the applicant and a search of local, state and national fingerprint records. LES 2.01(1)(f)2. A background investigation conducted by or on behalf of an employer. The employer shall certify in a document subscribed and sworn to by the affiant that a reasonably appropriate background investigation has been conducted, what persons or agency conducted the investigation and where written results of the investigation are maintained on file. LES 2.01(1)(f)3. Such other investigation as may be deemed necessary to provide a basis of judgment on the applicant's loyalty to the United States or to detect conditions which adversely affect performance of one's duty as a law enforcement, tribal law enforcement, jail or secure detention officer.
LES 2.01(1)(g) The applicant shall be free from any physical, emotional or mental condition which might adversely affect performance of duties as a law enforcement, tribal law enforcement, jail or secure detention officer. LES 2.01(1)(g)1. The applicant shall complete a personal medical history, a copy of which is to be submitted to the examining physician. LES 2.01(1)(g)2. The examination shall be by a Wisconsin licensed physician who shall provide a written report on the results of the examination. LES 2.01(1)(h) The applicant shall submit to and complete with satisfactory results, an oral interview to be conducted by the employing authority or its representative or representatives. "Satisfactory results" shall be determined from the contents of a written rating by the interviewer expressing an opinion concerning the applicant's appearance, personality, and ability to communicate as observed during the interview. LES 2.01(2) The employing authority shall supply the training and standards bureau with copies of the documentation and reports concerning the above listed qualifications. Personal history, rating and report forms currently used by the employing authority are acceptable for this purpose. If such forms are not available, the bureau will supply forms for this purpose upon request. LES 2.01(3) If the applicant is employed on a probationary or temporary basis, the bureau shall be immediately informed. The bureau shall maintain a permanent file on each applicant.
http://docs.legis.wisconsin.gov/code/admin_code/les/2/01/4/_1?up=1
Revision history of "JDocumentXML::render/1.6" There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 05:31, 3 May 2013 Wilsonge (Talk | contribs) deleted page JDocumentXML::render/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JDocumentXML::render== ===Description=== Render the document. {{Description:JDocumentXML::render}} <span class="editsection" style="fon..." (and the only contributor was "Doxiki2"))
http://docs.joomla.org/index.php?title=JDocumentXML::render/1.6&action=history
glDrawElements — render primitives from array data. glDrawElements specifies multiple geometric primitives with very few subroutine calls. Instead of calling a GL function to pass each individual vertex, normal, texture coordinate, edge flag, or color, you can prespecify separate arrays of vertices, normals, and colors and use them to construct a sequence of primitives with a single call to glDrawElements. … GL_INVALID_OPERATION is generated if glDrawElements is executed between the execution of glBegin and the corresponding glEnd. See also glArrayElement, glColorPointer, glDrawArrays.
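As an illustration of the pattern described above, a minimal C sketch (legacy fixed-function OpenGL; context creation and error handling are omitted):

/* Draw one quad as two triangles with a single glDrawElements call,
   using a prespecified vertex array instead of per-vertex glVertex calls. */
#include <GL/gl.h>

static const GLfloat vertices[] = {
    0.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,
    1.0f, 1.0f, 0.0f,   0.0f, 1.0f, 0.0f
};
static const GLuint indices[] = { 0, 1, 2,  0, 2, 3 };

void draw_quad(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);              /* 3 floats per vertex */
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
}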
http://docs.knobbits.org/opengl/sdk2/xhtml/glDrawElements.xml
Can I sign out of BlackBerry ID and sign in with a different BlackBerry ID? You can only sign in with one BlackBerry ID on your BlackBerry device. If you want to sign out and then sign in with a different BlackBerry ID, you must delete all of the data from your device.
http://docs.blackberry.com/en/smartphone_users/deliverables/50635/amc1343243578009.jsp
Java developers benefit from using Groovy, but so can those who don't already know Java: - Those who want to learn Java the easy way can learn Groovy first. They'll be productive sooner, and can go on to learn more about Java at their own pace. - Those who don't want to learn Java, but do want to access the power of the Java virtual machine and standard libraries when programming, can use Groovy instead. Topics: Text Processing: Characters in Groovy - access the full power of Unicode. Strings and StringBuffers in Groovy - for handling strings of characters (NEW ON 30 APRIL 2007). Pattern Matching in Groovy - COMING SOON. Input and Output: Files in Groovy - access the file system easily (NEW ON 2 MAY 2007). Streams, Readers, and Writers - access data as a flow of information (NEW ON 6 MAY 2007). Streams around Specific Resources - COMING SOON. Networking in Groovy - COMING SOON. Control Structures: Blocks, Closures, and Functions - compose your program from a wide variety of building blocks. Expandos, Classes, and Categories - encapsulate your program's complexity. Other topics coming: program control, enums, Queues/Deques, arrays, exceptions, permissions, annotations, multi-threading, tree processing, builders, XML, static typing, interfaces, inheritance, method overriding, multi-methods, casting, method-to-syntax mappings (switch, for, as, operator overloading), packages, evaluate, class loading, internationalization. Miscellaneous: Using Interceptors with the ProxyMetaClass - intercept calls to methods. Java Reflection in Groovy - examine and manipulate objects that aren't known at compile time. Tutorial Aims: - Correctness: All code examples have been tested using Groovy 1.0 inside a script. - Completeness: The tutorials are detailed demonstrations of the classes, having plenty of code examples. Only after creating all the detailed examples needed to introduce Groovy's syntax and core classes to Java newbies will we then expand the explanations, re-sequence the information, add a "Getting Started" section, etc. Until then, it's correct and detailed, though a little raw.
http://docs.codehaus.org/pages/viewpage.action?pageId=77968
Export to a GIF or PNG file. You can export your content to .gif and .png formats, which are useful file formats for incorporating into a mobile media file or for viewing your content on third-party handhelds or Internet browsers. Perform one of the following actions: …
http://docs.blackberry.com/it-it/developers/deliverables/21108/Export_to_a_GIF_or_PNG_file_623176_11.jsp
Dr. Joffre Olaya performs the full range of pediatric neurosurgical procedures, including epilepsy surgery. He joined CHOC Children's earlier this year after completing pediatric neurosurgery and epilepsy neurosurgery fellowships at Children's Hospital Los Angeles and Seattle Children's Hospital, respectively. Dr. Joffre Olaya had his eye on CHOC Children's long before coming here. During his pediatric neurosurgery training, he became increasingly interested in epilepsy surgery. And he was impressed by what he heard about the CHOC Comprehensive Epilepsy Program, the only children's hospital program in California to receive the prestigious Level 4 distinction from the National Association of Epilepsy Centers. But first, Dr. Olaya decided to complete an additional fellowship in epilepsy surgery through the University of Washington at Seattle Children's Hospital. In March 2014, he joined the CHOC medical staff and performs the full range of pediatric neurosurgical procedures, including the treatment of brain/spine tumors, cerebrovascular lesions, Chiari malformations, neural tube defects, craniosynostosis and hydrocephalus, as well as intractable epilepsy. "I am particularly interested in the use of endoscopic techniques for treating hydrocephalus and craniosynostosis, as well as laser ablation for tumors and focal epilepsy," said Dr. Olaya. Dr. Olaya's interest in treating epilepsy was sparked by a landmark article he read during his residency training. "A randomized controlled trial showed that for patients who failed two medications, surgical treatment of epilepsy resulted in much greater seizure freedom compared to medication alone," Dr. Olaya said. "I realized that for a select group of epilepsy patients, I could make a huge impact in their lives by offering them surgery." Dr. Olaya's clinical interests encompass other neurological disorders, as well. His research on deep-brain stimulation for treating childhood dystonic cerebral palsy was published in the Journal of Neurosurgery: Pediatrics in October 2014. A related study on this topic was recently presented at the Pediatric Neurological Surgery Annual Meeting in Toronto and published in the November 2013 issue of Neurosurgery Focus. This past April, Dr. Olaya presented his research into the use of resting-state functional connectivity MRI to assess memory lateralization in pediatric epilepsy surgery patients at the American Association of Neurological Surgeons Annual Meeting. He is currently working on a CHOC institutional review board-approved study evaluating the use of helmet therapy for treating plagiocephaly. Dr. Olaya is a graduate of the University of California at Davis School of Medicine. He completed his neurological surgery residency at Loma Linda University Medical Center. As a resident, he also performed an elective rotation in pediatric neurosurgery at Children's Hospital Los Angeles. He is an assistant clinical professor in the Department of Neurosurgery at UC Irvine School of Medicine. Fluent in Spanish, Dr. Olaya has privileges at CHOC and CHOC Children's at Mission Hospital. He is in practice with Dr. Michael Muhonen and Dr. William Loudon at CHOC Children's Specialists in Orange. For more information or to arrange a referral, please contact Dr. Olaya at 714-835-2724.
http://docs.chocchildrens.org/page/2/
Deactivating & Reactivating Users. How do I deactivate clinician accounts? 1. Select the Users tab. 2. Locate the account(s) that you would like to deactivate from the All category of accounts. 3. Click on the box to the right of those account(s) that you would like to deactivate and then click Deactivate Users under Actions. 4. You will then see a screen that identifies which accounts will be deactivated. Click Deactivate Users to confirm your request. How do I reactivate clinician accounts? If you need to reactivate a user, follow the same procedure for the deactivated account(s), but choose the corresponding activate action under Actions.
https://docs.app.acpdecisions.org/article/703-deactivating-reactivating-users
What is a blockchain? A blockchain is a type of database or ledger that is duplicated and distributed to all participants within the blockchain network. It is made up of a set of interconnected nodes that store data or items of value in blocks. These blocks are verified by transactions and linked to each other in chronological order in a chain. Details of these transactions are permanently inscribed in the block and cannot be altered. Blockchain technology, otherwise known as distributed ledger technology (DLT), provides a decentralized and accessible data structure for various records. Such records might include financial payment and transaction details, as well as other types of information, from commerce to internet of things (IoT) records. As a blockchain stores data in a decentralized manner, it is independent of centralized controlling entities or middlemen. This provides enhanced transparency of data storage and its management. An important feature of blockchain is that it stores records immutably, which means that they cannot be changed, forged, or deleted, as this will break the chain of records. Blockchain can be compared to a book of permanent records, where every page acts as an information holder. Let's take a closer look at existing data storage solutions to understand the difference between these systems: Centralized systems — all data entries and activities are usually managed using one central server. This increases the risk of a single point of failure, and also means that the controlling entity (a bank or government institution, for example) acts as the decision-maker. Decentralized systems — generally rely on multiple server nodes, each of which serves a subset of the total end clients. Distributed systems — all data and records of transactions are encrypted and stored not in one server, but in a system of interconnected, independent nodes and terminals. This ensures independence from centralized entities, transparency, and security. Finally, blockchains not only provide an immutable and secure database but also act as a functional environment to transact funds, create digital currencies, and process complex deals using digital agreements (smart contracts).
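The hash-linking that makes records immutable can be sketched in a few lines of Python (a toy illustration only; real blockchains layer consensus, signatures, and proof mechanisms on top of this idea):

# Toy hash-linked chain: each block commits to the previous block's hash,
# so editing any earlier block invalidates every later link.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, data in enumerate(["tx: A->B 5", "tx: B->C 2"], start=1):
    chain.append({"index": i, "data": data, "prev": block_hash(chain[-1])})

def is_valid(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

print(is_valid(chain))              # True
chain[1]["data"] = "tx: A->B 500"   # tamper with history
print(is_valid(chain))              # False: the chain of hashes is broken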
https://docs.cardano.org/en/latest/explainers/cardano-explainers/what-is-a-blockchain.html
Database High Availability Configuration. This section contains additional information you can use when configuring databases for high availability. Database-Specific Mechanisms - MariaDB: Configuring MariaDB for high availability requires configuring MariaDB for replication. For more information, see the MariaDB documentation. - MySQL: Configuring MySQL for high availability requires configuring MySQL for replication (see the sketch after this section). Replication configuration depends on which version of MySQL you are using. For version 5.1, the MySQL documentation provides an introduction. MySQL GTID-based replication is not supported. - PostgreSQL: PostgreSQL has extensive documentation on high availability, especially for versions 9.0 and higher. For information about options available for version 9.1, see the PostgreSQL documentation. - Oracle: Oracle supports a wide variety of free and paid upgrades to their database technology that support increased availability guarantees, such as their Maximum Availability Architecture (MAA) recommendations. For more information, see the Oracle documentation. Disk-Based Mechanisms. DRBD is an open-source Linux-based disk replication mechanism that works at the individual write level to replicate writes on multiple machines. Although not directly supported by major database vendors (at the time of writing), it provides a way to inexpensively configure redundant distributed disk for disk-consistent databases (such as MySQL, PostgreSQL, and Oracle). For more information, see the DRBD documentation.
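As a rough sketch of what classic MySQL 5.x replication setup involves (host name, replication user, password, and binary-log coordinates below are placeholders; consult the documentation for your MySQL version):

# master my.cnf -- enable the binary log and give the server a unique id
[mysqld]
server-id = 1
log-bin   = mysql-bin

# replica my.cnf -- only a distinct server id is strictly required
[mysqld]
server-id = 2

-- on the replica, point it at the master and start replicating
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='********',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;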
https://docs.cloudera.com/documentation/enterprise/5-6-x/topics/admin_cm_ha_dbms.html
System Requirements. To use the Positive WordPress theme you need to have a WordPress 4.0 (or higher version) site with PHP 5.6 or higher running on your hosting server. If you've already installed WordPress on your server and your site is up, that's great. For help regarding WordPress installation, please see this WordPress Codex link. Some more resources from WordPress Codex: …
https://docs.droitthemes.com/docs/positive-wp/system-requirements/
ProductDyno Hints and Tips - Q&A Q1: I want to be able to give access to a free product within my Collection but I don't want it to display unless someone has specifically registered for it. I may run a different funnel every few months to drive people to the Collection and so I don't want to use the Bonus feature as all free products from any funnel would appear. What's the best way to grant access to a product in a Collection without either buying it or simply registering for the collection and seeing a bonus product? A1: The best way to do this is to use a third-party payment gateway (e.g. Paddle, PayKickstart, ThriveCart etc.) and offer the 100% discount or free option. This way, when someone buys that product, he will be able to see the product inside the collection. In your Collection > Products, you can use the option of "Hide from non-buyer" to hide the product from customers who have not bought it. Q2: I have several products in one collection. The front end is free and there is an upsell and downsell. How do I let customers access the free front end? A2: In a collection, you will need to choose the "free" product as a "Bonus" to give it away as free. Having a free "Product" will not work in the collection. Q3: I currently only have one collection. The members are in a PAID ADVANCED course that comes through a payment gateway. I want to funnel people to that collection using a series of introductory courses. I want to make sure my ADVANCED people do NOT see the Free and Intro content so that when they log in they are not confused. I DO want my Advanced people to SEE SOME of the ADVANCED content that I will post for sale, since I sell other things that I want them to see when they log in. A3: The best way to do this is to sell the Introductory courses through your payment gateway; for the free ones, offer the 100% discount or free option. In your Collection > Products, for the Introductory/Free courses use the option of "Hide from non-buyer" to hide the product from customers who have not bought it. Thus your Advanced members will not see the Introductory courses. For the products you want all the members to see, you leave the "Hide from non-buyer" option unchecked. To restrict which products your advanced members see depending on their course, you can add a new "CONTENT SECTION" to your advanced products (name it something like "Next Steps", or whatever). In that section add the other things you want to sell to these members with a link to the sales/promo page. Q4: I am able to use the GET and POST methods to activate and check a license, but I am unclear on several issues. Should I run activation each time? How do I prevent people from using the same key on multiple computers? I need some help to clear up how and when to use the activation. A4: Check out this example: ProductDyno License API Example. You don't need to run activation every time, but check periodically (e.g. via a CRON job) whether the license is valid. When activating a license, the vendor needs to send a unique "GUID" with the license; this differentiates one computer from another. Later, when a user tries to use an already activated key on a different computer, ProductDyno will automatically return the error that this key is already attached to a GUID. (A hypothetical sketch of this flow appears at the end of this article.) Q5: Can one PayPal account have just one IPN for a product? If I have several products in ProductDyno, how can all of them use PayPal IPN? A5: ProductDyno automatically adds an IPN. Even if you don't define an IPN inside PayPal, all would work without any issue. First, integrate PayPal at Global Integration. Get the webhook URL from ProductDyno and enter it in PayPal. Make sure IPN is enabled. Q6: When products are in a collection, are customers for each individual product added to a list if the autoresponder integration is at "product level"? A6: If you add an autoresponder to the product, the customer is added to that autoresponder. If you include the product (with integrated autoresponder) in a collection that has an integrated autoresponder, then the customer is added to both. Q7: Is there a way to set a time limit on access for a free product? A7: This would be done via the payment processor in the form of a free trial for the product, automatically leading into the payment for it or a subscription. If the customer cancels the payment before the free trial period ends, access would be terminated. Q8: How do I integrate with my cPanel? A8: The cPanel integration in our system means you can integrate with FTP for video/file hosting and the custom SMTP settings for sending emails with your SMTP settings. It does not mean that you can connect hosting service details (cPanel username and password) in ProductDyno. Q9: I have used Amazon Simple Storage Service (S3) and/or Amazon CloudFront, which will begin migration in March 2021. Will this affect my ProductDyno account? A9: This upcoming migration from Amazon will not affect ProductDyno's workflow and communication with Amazon S3 buckets. The content from S3 will be served normally via ProductDyno without any issue. Q10: Does ProductDyno accept SCORM files? A10: ProductDyno doesn't support SCORM files. The workaround is to zip the files inside a folder and upload the zip file to ProductDyno. That way, members will be able to download and unzip on their end.
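To make the GUID idea in Q4 concrete, here is a purely hypothetical Python sketch; the endpoint URL and JSON field names are invented placeholders, not ProductDyno's documented API:

# Hypothetical license-activation call. The URL and field names are
# placeholders only; refer to the ProductDyno License API Example for the real ones.
import uuid
import requests

machine_guid = str(uuid.uuid4())  # generated once per computer and stored locally

resp = requests.post(
    "https://example.com/api/license/activate",            # placeholder URL
    json={"license_key": "XXXX-XXXX-XXXX", "guid": machine_guid},
)
data = resp.json()
if not data.get("valid"):
    # e.g. the key is already attached to a different GUID (another computer)
    raise SystemExit("license rejected: " + str(data.get("error")))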
https://docs.promotelabs.com/article/1104-productdyno-hints-and-tips
© 2008-2019 The original authors. Preface. … 3.1. Dependency Management with Spring Boot. Spring Boot selects a recent version of Spring Data modules for you. If you still want to upgrade to a newer version, configure the property spring-data-releasetrain.version to the train version and iteration you would like to use: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:jpa="http://www.springframework.org/schema/data/jpa"> <jpa:repositories base-package="…" /> </beans> Using the repositories element looks up Spring Data repositories as described in "Creating Repository Instances". Beyond that, it activates persistence exception translation for all beans annotated with @Repository, to let exceptions being thrown by the JPA persistence providers be converted into Spring's DataAccessException hierarchy. Custom Namespace Attributes. Beyond the default attributes of the repositories element, the JPA namespace offers additional attributes to let you gain more detailed control over the setup of the repositories: 5.1.2. Annotation-based Configuration. The Spring Data JPA repositories support can be activated not only through an XML namespace but also by using an annotation through JavaConfig, as shown in the following example: … Version-Property and Id-Property inspection (default): By default, Spring Data JPA inspects first if there is a Version-property of non-primitive type. If there is, the entity is considered new if the value is null. Without such a Version-property, Spring Data JPA inspects the identifier property: if it is null, the entity is assumed to be new. Option 1 is not an option for entities that use manually assigned identifiers, as with those the identifier will always be non-null. A common pattern in that scenario is to use a common base class with a transient flag defaulting to true to indicate a new instance and using JPA lifecycle callbacks to flip that flag on persistence operations: @MappedSuperclass public abstract class AbstractEntity<ID> implements Persistable<ID> { @Transient private boolean isNew = true; (1) @Override public boolean isNew() { return isNew; (2) } @PrePersist (3) @PostLoad void markNotNew() { this.isNew = false; } // More code… } 5.3. Query Methods. This section describes the various ways to create a query with Spring Data JPA. 5.3.1. Query Lookup Strategies. The JPA module supports defining a query manually as a String or having it derived from the method name. Declared queries can be defined via JPA named queries (see Using JPA Named Queries for more information) or by annotating your query method with @Query (see Using @Query for details). 5.3.2. Query Creation. Generally, the query creation mechanism for JPA works as described in "Query methods". The following example shows what a JPA query method translates into (a reconstructed sketch appears at the end of this reference): … 5.3.3. Using JPA Named Queries. To use a declared named query, specify the UserRepository as follows: …. Spring Data then resolves the method call to the named query instead of trying to create a query from the method name. 5.3.4. Using @Query. Using named queries to declare queries for entities is a valid approach and works fine for a small number of queries.
As the queries themselves are tied to the Java method that executes them, they can be bound directly by using the @Query annotation rather than being annotated to the domain class. … A native query is declared by setting the nativeQuery flag, as in the following example: public interface UserRepository extends JpaRepository<User, Long> { @Query(value = "SELECT * FROM USERS WHERE EMAIL_ADDRESS = ?1", nativeQuery = true) User findByEmailAddress(String emailAddress); } … @Entity public class User { @Id @GeneratedValue Long id; String lastname; } public interface UserRepository extends JpaRepository<User, Long> { @Query("select u from #{#entityName} u where u.lastname = ?1") List<User> findByLastname(String lastname); } To avoid stating the actual entity name in the query string of a @Query annotation, you can use the #{#entityName} variable. SpEL expressions may also be used to manipulate method arguments. In these SpEL expressions the entity name is not available, but the arguments are. They can be accessed by name or index, as demonstrated in the following example: @Query("select u from User u where u.firstname = ?1 and u.firstname = ?#{[0]} and u.emailAddress = ?#{principal.emailAddress}") List<User> findByFirstnameAndCurrentUserWithCustomQuery(String firstname); For like-conditions one often wants to append % to the beginning or the end of a String valued parameter. This can be done by appending or prefixing a bind parameter marker or a SpEL expression with %. Again, the following example demonstrates this: @Query("select u from User u where u.lastname like %:#{[0]}% and u.lastname like %:lastname%") List<User> findByLastnameWithSpelExpression(@Param("lastname") String lastname); When using like-conditions with values that are coming from a not secure source, the values should be sanitized so they can't contain any wildcards and thereby allow attackers to select more data than they should be able to. For this purpose the escape(String) method is made available in the SpEL context. It prefixes all instances of _ and % in the first argument with the single character from the second argument. In combination with the escape clause of the like expression available in JPQL and standard SQL, this allows easy cleaning of bind parameters: @Query("select u from User u where u.firstname like %?#{escape([0])}% escape ?#{escapeCharacter()}") List<User> findContainingEscaped(String namePart); Given this method declaration in a repository interface, findContainingEscaped("Peter_") will find Peter_Parker but not Peter Parker. The escape character used can be configured by setting the escapeCharacter of the @EnableJpaRepositories annotation. Note that the method escape(String) available in the SpEL context will only escape the SQL and JPQL standard wildcards _ and %. If the underlying database or the JPA implementation supports additional wildcards, these will not get escaped. interface UserRepository extends Repository<User, Long> { void deleteByRoleId(long roleId); @Modifying @Query("delete from User u where u…") … } … @Repository public interface GroupRepository extends CrudRepository<GroupInfo, String> { @EntityGraph(attributePaths = { "members" }) GroupInfo getByGroupName(String name); } 5.3.11. … 5.4. Stored Procedures. The JPA 2.1 specification introduced support for calling stored procedures by using the JPA criteria query API. We introduced the @Procedure annotation for declaring stored procedure metadata on a repository method. The examples to follow use the following plus1inout stored procedure:
@Entity @NamedStoredProcedureQuery(name = "User.plus1", procedureName = "plus1inout", parameters = { @StoredProcedureParameter(mode = ParameterMode.IN, name = "arg", type = Integer.class), @StoredProcedureParameter(mode = ParameterMode.OUT, name = "res", type = Integer.class) }) public class User {} Note that @NamedStoredProcedureQuery has two different names for the stored procedure. name is the name JPA uses. procedureName is the name the stored procedure has in the database. You can reference stored procedures from a repository method in multiple ways. The stored procedure to be called can either be defined directly by using the value or procedureName attribute of the @Procedure annotation. This refers directly to the stored procedure in the database and ignores any configuration via @NamedStoredProcedureQuery. Alternatively, you may specify the @NamedStoredProcedureQuery.name attribute as the @Procedure.name attribute. If neither value, procedureName nor name is configured, the name of the repository method is used as the name attribute. The following example shows how to reference an explicitly mapped procedure: @Procedure("plus1inout") Integer explicitlyNamedPlus1inout(Integer arg); The following example is equivalent to the previous one but uses the procedureName alias: @Procedure(procedureName = "plus1inout") Integer callPlus1InOut(Integer arg); The following is again equivalent to the previous two but uses the method name instead of an explicit annotation attribute: @Procedure Integer plus1inout(@Param("arg") Integer arg); The following example shows how to reference a stored procedure by referencing the @NamedStoredProcedureQuery.name attribute: @Procedure(name = "User.plus1IO") Integer entityAnnotatedCustomNamedProcedurePlus1IO(@Param("arg") Integer arg); If the stored procedure being called has a single out parameter, that parameter may be returned as the return value of the method. If there are multiple out parameters specified in a @NamedStoredProcedureQuery annotation, those can be returned as a Map with the key being the parameter name given in the @NamedStoredProcedureQuery annotation. 5.6.4. Executing an Example. In Spring Data JPA, you can use Query by Example with repositories, as shown in the following example: public interface PersonRepository extends JpaRepository<Person, String> { … } public class PersonService { @Autowired PersonRepository personRepository; public List<Person> findPeople(Person probe) { return personRepository.findAll(Example.of(probe)); } } Note that the call to save is not strictly necessary from a JPA point of view, but should still be there in order to stay consistent to the repository abstraction offered by Spring Data. 5.7.1. Transactional query methods. To let your query methods be transactional, use @Transactional at the repository interface you define, as shown in the following example: @Transactional(readOnly = true) public interface UserRepository extends JpaRepository<User, Long> { List<User> findByLastname(String lastname); @Modifying @Transactional @Query("delete from User u where u.active = false") void deleteInactiveUsers(); } Typically, you …. 5.8. Locking. To specify the lock mode to be used, you can use the @Lock annotation on query methods, as shown in the following example: interface UserRepository extends Repository<User, Long> { // Redeclaration of a CRUD method @Lock(LockModeType.READ) List<User> findAll(); } 5.9. Auditing. … With orm.xml suitably modified and spring-aspects.jar on the classpath, activating auditing functionality is a matter of adding the Spring Data JPA auditing namespace element to your configuration, as follows: … JpaContext … Frequently Asked Questions. Common: I'd like to get more detailed logging information on what methods are called inside JpaRepository, for example. How can I gain them? You can make use of CustomizableTraceInterceptor provided by Spring, as shown in the following example: <bean id="customizableTraceInterceptor" class="org.springframework.aop.interceptor.CustomizableTraceInterceptor"> <property name="enterMessage" value="Entering $[methodName]($[arguments])"/> <property name="exitMessage" value="Leaving $[methodName](): $[returnValue]"/> </bean> <aop:config> <aop:advisor … /> </aop:config> Infrastructure: Currently I have implemented a repository layer based on HibernateDaoSupport. I create a SessionFactory by using Spring's AnnotationSessionFactoryBean. How do I get Spring Data repositories working in this environment? You have to replace AnnotationSessionFactoryBean with the HibernateJpaSessionFactoryBean, as follows: Example 120. Looking up a SessionFactory from a Hibernate EntityManagerFactory: <bean id="sessionFactory" class="org.springframework.orm.jpa.vendor.HibernateJpaSessionFactoryBean"> <property name="entityManagerFactory" ref="entityManagerFactory"/> </bean> Auditing: I want to use Spring Data JPA auditing capabilities but have my database already configured to set modification and creation date on entities. How can I prevent Spring Data from setting the date programmatically? Set the set-dates attribute of the auditing namespace element to false. Appendix F: Glossary - AOP: Aspect oriented programming. - Commons DBCP: Commons DataBase Connection Pools, a library from the Apache foundation that offers pooling implementations of the DataSource interface. - CRUD: Create, Read, Update, Delete, the basic persistence operations. - DAO: Data Access Object, a pattern to separate persisting logic from the object to be persisted. - Dependency Injection: Pattern to hand a component's dependency to the component from outside, freeing the component to look up the dependent itself. - EclipseLink: Object relational mapper implementing JPA. - Hibernate: Object relational mapper implementing JPA. - JPA: Java Persistence API. - Spring: Java application framework.
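To tie the query-derivation discussion in section 5.3.2 to concrete code, here is a minimal sketch of a derived query method and the JPQL it is translated into (the entity and property names are illustrative, not taken from this reference's lost example):

import java.util.List;
import org.springframework.data.repository.Repository;

// Spring Data JPA parses the method name and derives the query
// "select u from User u where u.emailAddress = ?1 and u.lastname = ?2".
public interface UserRepository extends Repository<User, Long> {
    List<User> findByEmailAddressAndLastname(String emailAddress, String lastname);
}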
Step into the future with DIGITAL TECHNOLOGIES

Join the most in-demand industry and secure your future. Studying Digital Technologies - or Computing, as you might know it - offers you the chance to develop some of the most in-demand, essential skills that employers are looking for today. The opportunities in the industry are endless, with constant innovation and inspiration coming from every corner.

It's no shock to the system that we live in an entirely digital world. From our phones holding our most personal information to robots that can serve your dinner, technology has advanced at a remarkable pace in the last decade. Take the first step and join the changemakers.

What are the career opportunities?

The great thing about having digital skills is that you can work in pretty much any industry. All companies rely on their IT systems, and most use some form of specialist software. Security, websites, apps... the opportunities are endless! Jobs you could get include:

- Web Developer
- Cyber Security Specialist
- IT Support Technician
- App Developer
- Information Scientist
- Network Engineer
- Technical Architect

Our Digital Technologies Courses

Whichever course you choose - programming, cybersecurity or web development - your Advanced (Level 3) qualification will give you UCAS points to apply for further study. On the next page you will see what you could progress to after completing one of these courses!

Employment

You could search for an entry-level position with a company, where you will take the next step on your career journey.

Digital skills shortage in the UK

There has never been a better time to study Digital Technologies. The UK has been identified as a global leader in the field; however, there is still much work to do to make sure it stays that way.

Meet the tutor

Dan Purdy

Areas of Expertise: Computer Networking & Infrastructure, Web Technologies

Where did your interest in Computing come from?

My first proper experience with Computing was in 1995, when I had access to my first home Windows PC (running Windows 95). Starting as a basic user, I soon found myself interested in being able to configure and maintain computer systems. I spent a lot of my teenage years studying IT, but my interests lay in computer hardware and maintenance. After leaving school, I moved on to study a BTEC Level 3 in IT, where I gained further knowledge and skills in a range of areas of interest, including website development, computer networks, and software development.

Why did you become a tutor?

I was studying for my Bachelor's degree through Anglia Ruskin University. Half-way through the second year of the programme, I was approached by the programme manager of the FE college I had previously attended to undertake some hourly paid teaching. At the age of 19, I was teaching FE students a range of subjects - from Spreadsheet Modelling to Digital Forensics. This opportunity allowed me to pass on my knowledge and skills to learners who also wished to enter the field of Computing. It was my motivation to remain an FE lecturer and to become a qualified teacher following my studies.
Introduction to HDFS High Availability

This section assumes that the reader has a general understanding of components in an HDFS cluster. For details, see the Apache HDFS Architecture Guide.

Background

In a standard configuration, the NameNode is a single point of failure (SPOF) in an HDFS cluster. Each cluster has a single NameNode, and if that host or process becomes unavailable, the cluster as a whole is unavailable until the NameNode is either restarted or brought up on a new host. The Secondary NameNode does not provide failover capability.

- In the case of an unplanned event such as a host crash, the cluster is unavailable until an operator restarts the NameNode.
- Planned maintenance events such as software or hardware upgrades on the NameNode machine result in periods of cluster downtime.

HDFS HA addresses the above problems by providing the option of running two NameNodes in the same cluster, in an active/passive configuration. These are referred to as the active NameNode and the standby NameNode. Unlike the Secondary NameNode, the standby NameNode is a hot standby, allowing a fast automatic failover to a new NameNode in the case that a host crashes, or a graceful administrator-initiated failover for the purpose of planned maintenance. You cannot have more than two NameNodes.

Implementation

Cloudera Manager 5 and CDH 5 support Quorum-based Storage to implement HA.

Quorum-based Storage

Quorum-based Storage refers to the HA implementation that uses a Quorum Journal Manager (QJM). For the standby NameNode to keep its state synchronized with the active NameNode in this implementation, both nodes communicate with a group of separate daemons called JournalNodes. When any namespace modification is performed by the active NameNode, it durably logs a record of the modification to a majority of the JournalNodes. The standby NameNode is capable of reading the edits from the JournalNodes, and is constantly watching them for changes to the edit log. As the standby node sees the edits, it applies them to its own namespace. In the event of a failover, the standby ensures that it has read all of the edits from the JournalNodes before promoting itself to the active state. This ensures that the namespace state is fully synchronized before a failover occurs.

To provide a fast failover, it is also necessary that the standby NameNode has up-to-date information regarding the location of blocks in the cluster. To achieve this, DataNodes are configured with the location of both NameNodes, and they send block location information and heartbeats to both.

Automatic Failover

Automatic failover relies on two additional components in an HDFS deployment: a ZooKeeper quorum, and the ZKFailoverController process (abbreviated as ZKFC). In Cloudera Manager, the ZKFC process maps to the HDFS Failover Controller role.

Apache ZooKeeper is a highly available service for maintaining small amounts of coordination data, notifying clients of changes in that data, and monitoring clients for failures. The implementation of HDFS automatic failover relies on ZooKeeper for failure detection (each NameNode host maintains a persistent session in ZooKeeper, which expires if the host crashes) and for electing the active NameNode.

The ZKFC is a ZooKeeper client that also monitors and manages the state of the NameNode. Each of the hosts that run a NameNode also runs a ZKFC. The ZKFC is responsible for:

- Health monitoring - the ZKFC contacts its local NameNode on a periodic basis with a health-check command. So long as the NameNode responds promptly with a healthy status, the ZKFC considers the NameNode healthy.
If the NameNode has crashed, frozen, or otherwise entered an unhealthy state, the health monitor marks it as unhealthy.
- ZooKeeper session management - while the local NameNode is healthy, the ZKFC holds a session open in ZooKeeper; if the local NameNode is active, it also holds a special lock znode. If the session expires, the lock znode is automatically deleted.
- ZooKeeper-based election - if the local NameNode is healthy, and the ZKFC sees that no other NameNode currently holds the lock znode, it tries to acquire the lock itself. If it succeeds, it has won the election and runs a failover to make its local NameNode active. The sketch after this list illustrates the underlying election pattern.
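The ZooKeeper-based election described above reduces to candidates racing to create the same ephemeral znode. The following toy sketch illustrates that pattern with the plain Apache ZooKeeper client API. It is not Cloudera's ZKFC implementation: the class, the /toy-ha/ActiveLock path, and the connect string are hypothetical, and a real controller would also watch the lock for deletion and handle fencing.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Toy illustration of the ephemeral-lock election the ZKFC relies on.
// Assumes a running ZooKeeper ensemble and an existing /toy-ha parent znode.
public class ToyFailoverController implements Watcher {

    private static final String LOCK_PATH = "/toy-ha/ActiveLock";
    private final ZooKeeper zk;

    public ToyFailoverController(String connectString) throws Exception {
        this.zk = new ZooKeeper(connectString, 15000, this);
    }

    /** Try to win the election by creating the ephemeral lock znode. */
    public boolean tryBecomeActive(byte[] myNodeId) throws Exception {
        try {
            // EPHEMERAL: the znode vanishes when our session dies, which
            // is what lets a standby take over after the active crashes.
            zk.create(LOCK_PATH, myNodeId, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.EPHEMERAL);
            return true;  // we hold the lock: transition to active
        } catch (KeeperException.NodeExistsException alreadyHeld) {
            return false; // another node is active: remain standby
        }
    }

    @Override
    public void process(WatchedEvent event) {
        // A real controller would watch LOCK_PATH for deletion and re-run
        // the election; omitted here for brevity.
    }
}

Because the znode is ephemeral, ZooKeeper deletes it when the holder's session expires - exactly the property the ZKFC uses to detect a failed active NameNode and trigger failover.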
Permissions upgrade on 18 July 2019

The new Merchant Center permissions model is more powerful and offers more options for customizing user permissions. To learn more, you can read the User Permissions documentation in advance of the release.

Existing team permissions for all projects and organizations will be mapped to the new model as part of the decommissioning of Admin Center permissions. The Admin Center will continue to work with its existing permission model; changes to the Merchant Center permissions have no effect in the Admin Center. New organizations, teams, and projects will, by default, not have any old Admin Center permissions and will not be visible in the Admin Center.

You should not notice any difference in behavior during or after the upgrade on 18 July 2019. If you have any questions or concerns, please contact Support. To remove visibility of existing organizations or projects in the Admin Center, please open a support ticket.
If you are new to New Relic distributed tracing, we recommend you read the following before you enable it.

Impact to APM features

Our distributed tracing improves on APM's previous cross application tracing feature. Here are some key benefits:

- See more cross-service activity details and more complete end-to-end traces.
- Filter and query traces, as well as make custom charts.
- See the complete trace even when calls cross account boundaries (for accounts with the same master account or in the same customer partnership).
- See Introduction to distributed tracing for other features.

Enabling distributed tracing may affect some APM features you currently use. These changes affect only applications monitored by agents that have distributed tracing enabled; they don't apply at the account level. We may provide backward compatibility with some or all of the affected features in future releases. For now, you should understand the following changes before enabling distributed tracing.

Plan your rollout

If you're enabling distributed tracing for a large, distributed system, here are some tips:

- If you are a current APM user, see Impact to APM features.
- Determine the requests that are the most important for your business, or the most likely to require analysis and troubleshooting, and enable distributed tracing for those services.
- Enable tracing for services at roughly the same time so you can more easily gauge how complete your end-to-end traces are. When you look at traces in the distributed tracing UI, you'll see spans in the trace for external calls to other services. Then, you can enable distributed tracing for any of those services you want. If a service is fairly standalone and not often used in context with other services, you may not want to enable distributed tracing for it.
- If you are using APM for a large, monolithic service, there may be many sub-process spans per trace, and APM limits may result in fewer traces than expected. You can solve this by using APM agent instrumentation to disable the reporting of unimportant data.
- Distributed tracing works by propagating header information from service to service in a request path. Some services may communicate through a proxy or other intermediary service that does not automatically propagate the header. In that case, you will need to configure that proxy so that it allows the newrelic header value to be propagated from source to destination.

Enable distributed tracing

If you are aware of the impact to APM features and have thought about your rollout, you are ready to set up distributed tracing. See Overview: Enable distributed tracing.
This (short) quarter is all about shedding complexity: in our offer, in our codebase, in our UI.

Offer: COVID-19 has given us an opportunity to position ourselves in a new vertical that has similar requirements to the Open Source one: Crisis Responders. Similar in the sense that they need a turn-key solution that includes a platform to fundraise and disburse funds, paired with a fiscal host that can act as the custodian of the funds. We've struggled to replicate this outside of the FOSS ecosystem, but now we are seeing it again. To make sure we support as many groups as possible in this vertical, we've waived our fees until June, we are hiring someone to manage the Open Collective Foundation (the 501(c)(3) that does the fiscal hosting in the US), and we have relaunched it with a focus on COVID-19. Of the new collectives created in March, 175 are COVID-19-related initiatives.

The growth driven by COVID-19 relief groups is an opportunity for us to focus our efforts on growing a new segment of collectives with a first-party host (a host we own, as opposed to a third-party host, a legal entity that uses the Open Collective platform). This is important for us because we are providing a complete solution ourselves: tax-deductible fiscal sponsorship as a service, paired with a transparent open-finances platform.

Project Issue: #2896
Project owner: @piamancini
Design owner: @Memo-Es
Status: Wireframes: shipped; UI design: in progress.
Goal: Design in components that can be implemented in phases.

Project owner: @piamancini
Technical owner: @Betree
Implementation Specs Issue: needed
Component deliverables: Expense List, Pay, Filtering for Host Dashboard

The goal is to lower the barriers for donors to give.
User-related issues: #3067 and #2228
Project owner: @piamancini
Design owner: @Memo-Es
Project Issue: #3164

We're currently using PayPal Adaptive Payments. PayPal introduced a Payouts API that provides the following benefits over Adaptive Payments:
- The payout limit is $20,000 USD.
- For payouts made through the API, it's just $0.25 USD per U.S. transaction.
- For international payments, the fee is 2% of the payment amount to each recipient (up to a maximum of $20).
Project owner: @kewitz
Project Issue: #3131
Related issues: #2274 and #2258

Improving and simplifying the Collective page
Project owner: @Betree
Project Issue: #3136

Simplifying how donors manage their subscriptions
Project owner: @sbinlondon
Project Issue: #3137
Figma designs: here

Project owner: @znarf
Project Issue: needed
Related issues

Project owner: @znarf
Project Issue: needed

Project Issue: #3105
Since we are waiving our fees for COVID collectives, we are setting up a path for donors to give platform fees on top of their donation.
Project owner: @piamancini
Technical owner: @sbinlondon
Status: MVP merged

Stretch goal: Update old static pages to the new design (Design owner: @Memo-Es, Project owner: @piamancini) #3176

Small improvements (@znarf):
- Killing host collectives
- Collective-to-collective across hosts (spec)
- Pledges issue (Ben)

(Biz Dev) Update OCF Board / Compliance (@piamancini)
(Biz Dev) Update Ford / Sloane proposal (@piamancini)
(Biz Dev) Sign OSC agreement with John Hopkins (Alyssa)
(Biz Dev) Push GitHub Sponsors (@alanna)
(Biz Dev) Grow the COVID-19 groups for OCF (Kayla)
Next steps

Now that you have installed Splunk Enterprise, learn what happens next. You can also review the topic on considerations for deciding how to monitor Windows data in the Getting Data In manual.
Packaging for Bazel

Deprecated. These rules have been extracted from the Bazel sources and are now available at bazelbuild/rules_pkg (docs). Issues and PRs against the built-in versions of these rules will no longer be addressed. This page will exist for reference until the code is removed from Bazel. For more information, follow issue 8857.

rules_pkg

Overview

pkg_tar() is available for building a .tar file without depending on anything besides Bazel. Since this feature is deprecated and will eventually be removed from Bazel, you should migrate to @rules_pkg.

Basic Example

This example is a simplification of building Bazel and creating a distribution tarball.

load("@bazel_tools//tools/build_defs/pkg:pkg.bzl", "pkg_tar")

pkg_tar(
    name = "bazel-all",
    extension = "tar.gz",
    deps = [
        ":bazel-bin",
        ":bazel-tools",
    ],
)

Here, bazel-all creates a gzip-compressed tarball that merges two other pkg_tar targets, :bazel-bin and :bazel-tools, defined earlier in the original example.

pkg_tar

pkg_tar(name, extension, strip_prefix, package_dir, srcs, mode, modes, deps, symlinks)

Creates a tar file from a list of inputs.
You can configure HTTPS for Address Manager with a new custom certificate. The process involves three parts:
- Generating a Certificate Signing Request (CSR)
- Submitting the CSR to the Certificate Authority to obtain a key, CA certificate, and CA certificate bundle
- Uploading the CA files to the Address Manager