content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Chapter 10 — Improving Web Services: Web Services Performance Send feedback to [email protected] patterns & practices Library Summary: This chapter focuses on design guidelines and techniques, such as state management, asynchronous invocation, serialization, threading, to help you develop efficient Web services. This chapter also presents a formula for reducing thread contention and HTTP connections to increase the throughput for your Web services. Contents Objectives Overview How to Use This Chapter Architecture Prescriptive Guidance for Web Services, Enterprise Services, and .NET Remoting Performance and Scalability Issues Design Considerations Implementation Considerations Connections Threading One-Way (Fire-and-Forget) Communication Asynchronous Web Methods Asynchronous Invocation Timeouts WebMethods Serialization Caching State Management Bulk Data Transfer Attachments COM Interop Measuring and Analyzing Web Services Performance Web Service Enhancements Summary Additional Resources Objectives - Identify top Web services performance issues. - Design scalable Web services that meet your performance objectives. - Improve serialization performance. - Configure the HTTP runtime for optimum performance. - Improve threading efficiency. - Evaluate and choose the most appropriate caching mechanism. - Decide when to maintain state. - Evaluate and choose the most appropriate bulk data transfer mechanism. Overview Services are the ideal communication medium for distributed applications. You should build all of your services using Web services and then, if necessary, use Enterprise Services or Microsoft® .NET remoting within the boundaries of your service implementation. For example, you might need to use Enterprise Services for distributed transaction support or object pooling. Web services are ideal for cross-platform communication in heterogeneous environments because of their use of open standards such as XML and Simple Object Access Protocol (SOAP). However, even in a closed environment where both client and server systems use the .NET Framework, ease of deployment and maintenance make Web services a very attractive approach. This chapter begins by examining the architecture of ASP.NET Web services, and then explains the anatomy of a Web services request from both the client and server-side perspectives. You need a solid understanding of both client and server to help you identify and address typical Web services performance issues. An understanding of Web services architecture will also help you when you configure the HTTP runtime to optimize Web services performance. The chapter then presents a set of important Web services design considerations, followed by a series of sections that address the top Web services performance issues. How to Use This Chapter To get the most out of this chapter: - Jump to topics or read from beginning to end. The main headings in this chapter help you to locate the topics that interest you. Alternatively, you can read the chapter from beginning to end to gain a thorough appreciation of performance and scalability design issues. - Use the checklist. Use "Checklist: Web Services Performance" in the "Checklists" section of this guide to quickly view and evaluate the guidelines presented in this chapter. - Use the "Architecture" section of this chapter to learn how Web services work. By understanding Web services architecture, you can make better design and implementation choices. - Use the "Design Considerations" section of this chapter. This section helps you to understand the higher-level decisions that affect implementation choices for Web services code. - Read Chapter 6, "Improving ASP.NET Performance" Many of the performance optimizations described in Chapter 6, "Improving ASP.NET Performance" — such as tuning the thread pool and designing and implementing efficient caching — also apply to ASP.NET Web services development. - Read Chapter 13, "Code Review: .NET Application Performance" See the "Web Services" section of Chapter 13 for specific guidance. - Measure your application performance. Read the "Web Services" and ".NET Framework Technologies" sections of Chapter 15, "Measuring .NET Application Performance" to learn about key metrics that you can use to measure application performance. It is important that you are able to measure application performance so that you can target performance issues accurately. - Test your application performance. Read Chapter 16, "Testing .NET Application Performance" to learn how to apply performance testing to your application. It is important that you apply a coherent testing process and that you are able to analyze the results. - Tune your application performance. Read the "Web Services" section of Chapter 17, "Tuning .NET Application Performance" to learn how to resolve performance issues identified through the use of tuning metrics. Architecture The server-side infrastructure is based on ASP.NET and uses XML serialization. When the Web server processes an HTTP request for a Web service, Internet Information Services (IIS) maps the requested extension (.asmx) to the ASP.NET Internet server application programming interface (ISAPI) extension (Aspnet_isapi.dll). The ASP.NET ISAPI extension then forwards the request to the ASP.NET worker process, where it enters the request processing pipeline, which is controlled by the HttpRuntime object. See Figure 10.1 for an illustration of the Web services architecture and request flow. Figure 10.1: ASP.NET Web services architecture and request flow The request is initially passed to the HttpApplication object, followed by the series of registered HttpModule objects. HttpModule objects are registered in the system-wide Machine.config file or in the <httpModules> section of an application-specific Web.config file. HttpModule objects handle authentication, authorization, caching, and other services. After passing through the HTTP modules in the pipeline, HttpRuntime verifies that the .asmx extension is registered with the WebServiceHandlerFactory handler. This creates an HTTP handler, an instance of a type that derives from WebServiceHandler, which is responsible for processing the Web services request. The HTTP handler uses reflection to translate SOAP messages into method invocations. WebServiceHandler is located in the System.Web.Services.Protocols namespace. Client-Side Proxy Classes On the client side, proxy classes provide access to Web services. Proxy classes use XML serialization to serialize the request into a SOAP message, which is then transported using functionality provided by the System.Net namespace. You can use the Wsdl.exe tool to automatically generate the proxy class from the Web Services Description Language (WSDL) contract file. Depending on the bindings specified in the WSDL, the request issued by the proxy may use the HTTP GET, HTTP POST, or HTTP SOAP protocols. The proxy class is derived from one of the following base classes: - System.Web.Services.Protocols.HttpGetClientProtocol - System.Web.Services.Protocols.HttpPostClientProtocol - System.Web.Services.Protocols.SoapHttpClientProtocol These all derive from System.Web.Services.Protocols.HttpWebClientProtocol, which in turn derives from the System.Web.Services.Protocols.WebClientProtocol base class in the inheritance chain. WebClientProtocol is the base class for all automatically generated client proxies for ASP.NET Web services, and, as a result, your proxy class inherits many of its methods and properties. guidelines on how to make .NET Enterprise Services components execute as quickly as C++ COM components, see the MSDN® article, ".NET Enterprise Services Performance," at. - For more information on Enterprise Services, see Chapter 8, "Improving Enterprise Services Performance" - For more information on remoting, see Chapter 11, "Improving Remoting Performance" Performance and Scalability Issues The main issues that can adversely affect the performance and scalability of your Web services are summarized in the following list. Subsequent sections in this chapter provide strategies and technical information to prevent or resolve each of these issues. requirements, XmlDocument types and choose types specific to your application, such as an Employee or Person class. Serialization. Serializing large amounts of data and passing it to and from Web services can cause performance-related issues, including network congestion and excessive memory and processor overhead. Other issues that affect the amount of data passed across the wire include. Design Considerations To help ensure that you create efficient Web services, there are a number of issues that you must consider and a number of decisions that you must make at design time. The following are major considerations: - Design chunky interfaces to reduce round trips. - Prefer message-based programming over RPC style. - Use literal message encoding for parameter formatting. - Prefer primitive types for Web services parameters. - Avoid maintaining server state between calls. - Consider input validation for costly Web methods. - Consider your approach to caching. - Consider approaches for bulk data transfer and attachments. - Avoid calling local Web services.) More Information For more information about these approaches, see the "Bulk Data Transfer" and "Attachments" sections later in this chapter.. More Information - For IIS 6.0 – specific deployment mitigation, refer to "ASP.NET Tuning" in Chapter 17, "Tuning .NET Application Performance" - For more information about how to structure your application properly, refer to "Application Architecture for .NET: Designing Applications and Services" on MSDN at. Implementation Considerations When you move from application design to development, consider the implementation details of your Web services. Important Web services performance measures include response times, speed of throughput, and resource management: - You can reduce request times and reduce server load by caching frequently used data and SOAP responses. - You can improve throughput by making effective use of threads and connections, by optimizing Web method serialization, and by designing more efficient service interfaces. Tune thread pooling to reduce contention and increase CPU utilization. To improve connection performance, configure the maximum limit of concurrent outbound calls to a level appropriate for the CPU performance. - You can improve resource management by ensuring that shared resources, such as connections, are opened as late as possible and closed as soon as possible, and also by not maintaining server state between calls. By following best practice implementation guidelines, you can increase the performance of Web services. The following sections highlight performance considerations for Web services features and scenarios. Connections When you call Web services, transmission control protocol (TCP) connections are pooled by default. If a connection is available from the pool, that connection is used. If no connection is available, a new connection is created, up to a configurable limit. There is always a default unnamed connection pool. However, you can use connection groups to isolate specific connection pools used by a given set of HTTP requests. To use a separate pool, specify a ConnectionGroupName when you make requests. If you don't specify a connection group, the default connection pool is used. To use connections efficiently, you need to set an appropriate number of connections, determine whether connections will be reused, and factor in security implications. The following recommendations improve connection performance: - Configure the maxconnection attribute. - Prioritize and allocate connections across discrete Web services. - Use a single identity for outbound calls. - Consider UnsafeAuthenticatedConnectionSharing with Windows Integrated Authentication. - Use PreAuthenticate with Basic authentication. Configure The maxconnection Attribute. ). For more information, see the "Threading" section later in this chapter. Evaluating the Change Changing the attribute may involve multiple iterations for tuning and involves various trade-offs. More Information -; } Threading Web Services use ASP.NET thread pooling to process requests. To ensure that your Web Services use the thread pool most effectively, consider the following guidelines: - Tune the thread pool using the Formula for Reducing Contention. - Consider minIoThreads and minWorkerThreads for intermittent burst load. Tune the Thread Pool by Using the Formula for Reducing Contention The Formula for Reducing Contention can give you a good starting point for tuning the ASP.NET thread pool. Consider using the Microsoft product group recommended settings (shown in Table 10.2) if you have available CPU, your application performs I/O bound operations (such as calling a Web method or accessing the file system), and you have queued requests as indicated by the ASP.NET Applications/Requests in Application Queue performance counter. Table 10.2: Recommended Threading Settings for Reducing Contention To address this issue, you need to configure the following items in Machine.config. The changes described in the following list should be applied across the settings and not in isolation. For a detailed description of each of these settings, see "Thread Pool Attributes" in Chapter 17, "Tuning .NET Application Performance." - Set maxconnection to 12 * # of CPUs. This setting controls the maximum number of outgoing HTTP connections allowed by the client, which in this case is ASP.NET. The recommendation is to set this to 12 times the number of CPUs. - Set maxIoThreads to 100. This setting controls the maximum number of I/0 threads in the common language runtime (CLR) thread pool. This number is then automatically multiplied by the number of available CPUs. The recommendation is to set this to 100. - Set maxWorkerThreads to 100. This setting controls the maximum number of worker threads in the CLR thread pool. This number is then automatically multiplied by the number of available CPUs. The recommendation is to set this to 100. - Set minFreeThreads to 88 * # of CPUs. The worker process uses this setting to queue up all the incoming requests if the number of available threads in the thread pool falls below the value for this setting. This setting effectively limits the number of concurrently executing requests to maxWorkerThreads – minFreeThreads. The recommendation is to set this to 88 times the number of CPUs. This limits the number of concurrent requests to 12 (assuming maxWorkerThreads is 100). - Set minLocalRequestFreeThreads to 76 * # of CPUs. This worker process uses this setting to queue up requests from localhost (where a Web application calls a Web service on the same server) if the number of available threads in the thread pool falls below this number. This setting is similar to minFreeThreads, but it only applies to requests that use localhost. The recommendation is to set this to 76 times the number of CPUs. **Note **The above recommendations are starting points rather than strict rules. You should perform appropriate testing to determine the correct settings for your environment. If the formula has worked, you should see improved throughput and less idle CPU time: - CPU utilization should go up. - Throughput should increase (ASP.NET Applications\Requests/Sec should go up), - Requests in the application queue (ASP.NET Applications\Requests in Application Queue) should go down. If this does not improve your performance, you may have a CPU-bound situation. If this is the case, by adding more threads you increase thread context switching. For more information, see "ASP.NET Tuning" in Chapter 17, "Tuning .NET Application Performance." More Information For more information, see Microsoft Knowledge Base article 821268, "PRB: Contention, Poor Performance, and Deadlocks When You Make Web Service Requests from ASP.NET Applications," at;en-us;821268. Consider minIoThreads and minWorkerThreads for Intermittent Burst Load If you have burst load scenarios that are intermittent and short (0 to 10 minutes), then the thread pool may not have enough time to reach the optimal level of threads. The use of minIoThreads and minWorkerThreads allows you to configure a minimum number of worker and I/O threads for load conditions. At the time of this writing, you need a supported fix to configure the settings. For more information, see the following Microsoft Knowledge Base articles: - threading and Web services, see: - "ASP.NET Tuning" in Chapter 17, "Tuning .NET Application Performance" - Microsoft Knowledge Base article 821268, "PRB: Contention, Poor Performance, and Deadlocks When You Make Web Service Requests from ASP.NET Applications," at;en-us;821268. One-Way (Fire-and-Forget) Communication Consider using the OneWay attribute if you do not require a response. Using the OneWay property of SoapDocumentMethod and SoapRpcMethod in the System.Web.Services.Protocols namespace frees the client immediately instead of forcing it to wait for a response. For a method to support fire-and-forget invocation, you must decorate it with the OneWay attribute, as shown in the following code snippet. [SoapDocumentMethod(OneWay=true)] [WebMethod(Description="Returns control immediately")] public void SomeMethod() { } This is useful if the client needs to send a message, but does not expect anything as return values or output parameters. Methods marked as OneWay cannot have output parameters or return values. Asynchronous Web Methods You can call a Web service asynchronously regardless of whether or not the Web service has been implemented synchronously or asynchronously. Similarly, you can implement a synchronous or asynchronous Web service, but allow either style of caller. Client-side and server-side asynchronous processing is generally performed to free up the current worker thread to perform additional work in parallel. The asynchronous implementation of a Web method frees up the worker thread to handle other parallel tasks that can be performed by the Web method. This ensures optimal utilization of the thread pool, resulting in throughput gains. For normal synchronous operations, the Web services asmx handler uses reflection on the assembly to find out which methods have the WebMethod attribute associated with them. The handler simply calls the appropriate method based on the value of the SOAP-Action HTTP header. However, the Web services asmx handler treats asynchronous Web methods differently. It looks for methods that adhere to the following rules: - Methods adhere to the asynchronous design pattern: - There are BeginXXX and EndXXX methods for the actual XXX method that you need to expose. - The BeginXXX method returns an IAsyncResult interface, takes whatever arguments the Web method needs, and also takes two additional parameters of type AsyncCallback and System.Object, respectively. - The EndXXX method takes an IAsyncResult as a parameter and returns the return type of your Web method. - Both methods are decorated with the WebMethod attribute. The Web services asmx handler then exposes the method, as shown in the following code snippet. [WebMethod] IAsyncResult BeginMyProc( ) [WebMethod] EndMyProc( ) //the WSDL will show the method as MyProc( ) The Web services asmx handler processes incoming requests for asynchronous methods as follows: - Call the BeginXXX method. - Pass the reference to an internal callback function as a parameter to the BeginXXX method, along with the other in parameters. This frees up the worker thread processing the request, allowing it to handle other incoming requests. The asmx handler holds on to the HttpContext of the request until processing of the request is complete and a response has been sent to the client. - Once the callback is called, call the EndXXX function to complete the processing of the method call and return the response as a SOAP response. - Release the HttpContext for the request. Consider the following guidelines for asynchronous Web methods: - Use asynchronous Web methods for I/O operations. - Do not use asynchronous Web methods when you depend on worker threads. Use Asynchronous Web Methods for I/O Operations Consider using asynchronous Web methods if you perform I/O-bound operations such as: - Accessing streams - File I/O operations - Calling another Web service The .NET Framework provides the necessary infrastructure to handle these operations asynchronously, and you can return an IAsyncResult interface from these types of operations. The .NET Framework exposes asynchronous methods for I/O-bound operations using the asynchronous design pattern. The libraries that use this pattern have BeginXXX and EndXXX methods. The following code snippet shows the implementation of an asynchronous Web method calling another Web service. // The client W/S public class AsynchWSToWS { WebServ asyncWs = null; public AsynchWSToWS(){ asyncWs = new WebServ(); } [System.Web.Services.WebMethod] public IAsyncResult BeginSlowProcedure(int milliseconds,AsyncCallback cb, object s){ // make call to other web service and return the IAsyncResult return asyncWs.BeginLengthyCall(milliseconds,cb,s); } [System.Web.Services.WebMethod ] public string EndSlowProcedure(IAsyncResult call) { return asyncWs.EndLengthyCall(call); } } // The server W/S public class WebServ { [WebMethod] public string LengthyCall(int milliseconds){ Thread.Sleep(milliseconds); return "Hello World"; } } Asynchronous implementation helps when you want to free up the worker thread instead of waiting on the results to return from a potentially long-running task. For this reason, you should avoid asynchronous implementation whenever your work is CPU bound because you do not have idle CPU to service more threads. In this case, an asynchronous implementation results in increased utilization and thread switching on an already busy processor. This is likely to hurt performance and overall throughput of the processor. **Note **You should not use asynchronous Web methods when accessing a database. ADO.NET does not provide asynchronous implementation for handling database calls. Wrapping the operation in a delegate is not an option either because you still block a worker thread. You should only consider using an asynchronous Web method if you are wrapping an asynchronous operation that hands back an IAsyncResult reference. Do Not Use Asynchronous Web Methods When You Depend on Worker Threads You should not implement Web methods when your asynchronous implementation depends upon callbacks or delegates because they use worker threads internally. Although the delegate frees the worker thread processing the request, it uses another worker thread from the process thread pool to execute the method. This is a thread that can be used for processing other incoming requests to the Web service. The result is that you consume a worker thread for the delegate-based operation and you increase context switching. Alternatively, you can use synchronous Web methods and decrease the minFreeThreads setting so that the worker threads can take requests and execute them directly. In this scenario, you could block the original worker thread by implementing the Web method to run synchronously. An example of the delegate-based implementation is shown in the following code snippet. // delegate public delegate string LengthyProcedureAsyncStub(int milliseconds); //actual method which is exposed as a web service [WebMethod] public string LengthyCall(int milliseconds) { System.Threading.Thread.Sleep(milliseconds); return "Hello World"; } [WebMethod] public IAsyncResult BeginLengthyCall(int milliseconds,AsyncCallback cb, object s) { LengthyProcedureAsyncStub stub = new LengthyProcedureAsyncStub(LengthyCall); //using delegate for asynchronous implementation return stub.BeginInvoke(milliseconds, cb, null); } [System.Web.Services.WebMethod] public string EndLengthyCall(IAsyncResult call) { return ms.asyncStub.EndInvoke(call); } Asynchronous Invocation Web services clients can call a Web service either synchronously or asynchronously, independently of the way the Web service is implemented. For server applications, using asynchronous calls to a remote Web service is a good approach if the Web service client can either free the worker thread to handle other incoming requests or perform additional parallel work before blocking for the results. Generally, Windows Forms client applications call Web services asynchronously to avoid blocking the user interface. **Note **The HTTP protocol allows at most two simultaneous outbound calls from one client to one Web service. The WSDL-generated proxy contains support for both types of invocation. The proxy supports the asynchronous call by exposing BeginXXX and EndXXX methods. The following guidelines help you decide whether or not calling a Web service asynchronously is appropriate: - Consider calling Web services asynchronously when you have additional parallel work. - Use asynchronous invocation to call multiple unrelated Web services. - Call Web services asynchronously for UI responsiveness. Consider Calling Web Services Asynchronously When You Have Additional Parallel Work Asynchronous invocation is the most useful when the client has additional work that it can perform while the Web method executes. Asynchronous calls to Web services result in performance and throughput gains because you free the executing worker thread to do parallel work before it is blocked by the Web services call and waits for the results. This lets you concurrently process any work that is not dependent on the results of the Web services call. The following code snippet shows the approach. private void Page_Load(object sender, System.EventArgs e) { serv = new localhost.WebService1(); IAsyncResult result = serv.BeginLengthyProcedure(5000,null,null); // perform some additional processing here before blocking // wait for the asynchronous operation to complete result.AsyncWaitHandle.WaitOne(); string retStr = serv.EndLengthyProcedure(result); } Use Asynchronous Invocation to Call Multiple Unrelated Web Services Consider asynchronous invocation if you need to call multiple Web services that do not depend on each other's results. Asynchronous invocation lets you call the services concurrently. This tends to reduce response time and improve throughput. The following code snippet shows the approach. private void Page_Load(object sender, System.EventArgs e){ serv1 = new WebService1(); serv2 = new WebService2(); IAsyncResult result1 = serv1.BeginLengthyProcedure(1000,null,null); IAsyncResult result2 = serv2.BeginSlowProcedure(1000,null,null); //wait for the asynchronous operation to complete WaitHandle[] waitHandles = new WaitHandle[2]; waitHandles[0] = result1.AsyncWaitHandle; waitHandles[1] = result2.AsyncWaitHandle; WaitHandle.WaitAll(waitHandles); //depending upon the scenario you can //choose between WaitAny and WaitAll string retStr1 = serv1.EndLengthyProcedure(result1); string retStr2 = serv2.EndSlowProcedure(result2); } Call Web Services Asynchronously for UI Responsiveness By calling a Web service asynchronously from a Windows Forms application, you free the main user interface thread. You can also consider displaying a progress bar while the call progresses. This helps improve perceived performance. However, you need to perform some additional work to resynchronize the results with the user interface thread because the Web service call is handled by a separate thread. You need to call the Invoke method for the control on which you need to display the results. More Information For more information, see the MSDN article, "At Your Service: Performance Considerations for Making Web Service Calls from ASPX Pages," at. Timeouts It is very common for an ASP.NET application to call a Web service. If your application's Web page times out before the call to the Web service times out, this causes an unmanaged resource leak and a ThreadAbortException. This is because I/O completion threads and sockets are used to service the calls. As a result of the exception, the socket connection to the Web service is not closed and cannot be reused by other outbound requests to the Web service. The I/O thread continues to process the Web service response. To avoid these issues, set timeouts appropriately as follows: - Set your proxy timeout appropriately. - Set your ASP.NET timeout greater than your Web service timeout. - Abort connections for ASP.NET pages that timeout before a Web services call completes. - Consider the responseDeadlockInterval attribute. Set Your Proxy Timeout Appropriately When you call a Web service synchronously, set the Timeout property of the Web service proxy. The default value is 100 seconds. You can programmatically set the value before making the call, as shown in the following code snippet. MyWebServ obj = new MyWebServ(); obj.Timeout = 15000; // in milliseconds For ASP.NET applications, the Timeout property value should always be less than the executionTimeout attribute of the httpRuntime element in Machine.config. The default value of executionTimeout is 90 seconds. This property determines the time ASP.NET continues to process the request before it returns a timed out error. The value of executionTimeout should be the proxy Timeout, plus processing time for the page, plus buffer time for queues. - Consider reducing the Proxy Timeout value from its default of 100 seconds if you do not expect clients to wait for such a long time. You should do this even under high load conditions when the outbound requests to the Web service could be queued on the Web server. As a second step, reduce the executionTimeout also. - You might need to increase the value if you expect the synchronous call to take more time than the default value before completing the operation. If you send or receive large files, you may need to increase the attribute value. As a second step, increase the executionTimeout attribute to an appropriate value. Set Your ASP.NET Timeout Greater Than Your Web Service Timeout The Web service timeout needs to be handled differently, depending upon whether you call the Web service synchronously or asynchronously. In either case, you should ensure that the timeouts are set to a value less than the executionTimeout attribute of the httpRuntime element in Machine.config. The following approaches describe the options for setting the timeouts appropriately: Synchronous calls to a Web service. Set the proxy Timeout to an appropriate value, as shown in the following code snippet. MyWebServ obj = new MyWebServ(); obj.Timeout = 15000; // in milliseconds You can also set the value in the proxy class generated by the WSDL for the Web service. You can set it in the class constructor, as shown in the following code snippet. public MyWebServ() { this.Url = ""; this.Timeout = 10000; //10 seconds } Or you can set it at the method level for a long-running call. public string LengthyProc(int sleepTime) { this.Timeout = 10000; //10 seconds object[] results = this.Invoke("LengthyProc", new object[] {sleepTime}); return ((string)(results[0])); } Asynchronous calls to a Web service. In this case, you should decide on the number of seconds you can wait for the Web service call to return the results. When using a WaitHandle, you can pass the number of milliseconds the executing thread is blocked on the WaitHandle before it aborts the request to the Web service. This is shown in the following code snippet. MyWebServ obj = new MyWebServ(); IAsyncResult ar = obj.BeginFunCall(5,5,null,null); // wait for not more than 2 seconds ar.AsyncWaitHandle.WaitOne(2000,false); if (!ar.IsCompleted) //if the request is not completed { WebClientAsyncResult wcar = (WebClientAsyncResult)ar; wcar.Abort();//abort the call to web service } else { //continue processing the results from web service } Abort Connections for ASP.NET Pages That Timeout Before a Web Services Call Completes After you make the configuration changes described in the previous section, if your Web pages time out while Web services calls are in progress, you need to ensure that you abort the Web services calls. This ensures that the underlying connections for the Web services calls are destroyed. To abort a Web services call, you need a reference to the WebRequest object used to make the Web services call. You can obtain this by overriding the GetWebRequest method in your proxy class and assigning it to a private member of the class before returning the WebRequest. This approach is shown in the following code snippet. private WebRequest _request; protected override WebRequest GetWebRequest(Uri uri){ _request = base.GetWebRequest(uri); return _request; } Then, in the method that invokes the Web service, you should implement a finally block that aborts the request if a ThreadAbortException is thrown. [System.Web.Services.Protocols.SoapDocumentMethodAttribute( )] public string GoToSleep(int sleepTime) { bool timeout = true; try { object[] results = this.Invoke("GoToSleep", new object[] {sleepTime}); timeout = false; return ((string)(results[0])); } finally { if(timeout && _request!=null) _request.Abort(); } } **Note **Modifying generated proxy code is not recommended because the changes are lost as soon as the proxy is regenerated. Instead, derive from the proxy class and implement new functionality in the subclass whenever possible. Consider the responseDeadlockInterval Attribute When you make Web services calls from an ASP.NET application, if you are increasing the value of both the proxy timeout and the executionTimeout to greater than 180 seconds, consider changing the responseDeadlockInterval attribute for the processModel element in the Machine.config file. The default value of this attribute is 180 seconds. If there is no response for an executing request for 180 seconds, the ASP.NET worker process will recycle. You must reconsider your design if it warrants changing the attributes to a higher value. WebMethods You add the WebMethod attribute to those public methods in your Web services .asmx file that you want to be exposed to remote clients. Consider the following Web method guidelines: Prefer primitive parameter types. When you define your Web method, try to use primitive types for the parameters. Using primitive types means that you benefit from reduced serialization, in addition to automatic validation by the .NET Framework. Consider buffering. By default, the BufferResponse configuration setting is set to true, to ensure that the response is completely buffered before returning to the client. This default setting is good for small amounts of data. For large amounts of data, consider disabling buffering, as shown in the following code snippet. [WebMethod(BufferResponse=false)] public string GetTextFile() { // return large amount of data } To determine whether or not to enable or disable buffering for your application, measure performance with and without buffering. Consider caching responses. For applications that deal with relatively static data, consider caching the responses to avoid accessing the database for every client request. You can use the CacheDuration attribute to specify the number of seconds the response should be cached in server memory, as shown in the following code snippet. [WebMethod(CacheDuration=60)] public string GetSomeDetails() { // return large amount of data } Note that because caching consumes server memory, it might not be appropriate if your Web method returns large amounts of data or data that frequently changes Enable session state only for Web methods that need it. Session state is disabled by default. If your Web service needs to maintain state, then you can set the EnableSession attribute to true for a specific Web method, as shown in the following code snippet. [WebMethod(EnableSession=true)] public string GetSomeDetails() { // return large amount of data } Note that clients must also maintain an HTTP cookie to identify the state between successive calls to the Web method. For more information, see "WebMethodAttribute.EnableSession Property" on MSDN at. Serialization The amount of serialization that is required for your Web method requests and responses is a significant factor for overall Web services performance. Serialization overhead affects network congestion, memory consumption, and processor utilization. To help keep the serialization overhead to a minimum: - Reduce serialization with XmlIgnore. - Reduce round trips. - Consider XML compression. Reduce Serialization with XmlIgnore To limit which fields of an object are serialized when you pass the object to or from a Web method and to reduce the amount of data sent over the wire, use the XmlIgnore attribute as shown in the following code snippet. The XmlSerializer class ignores any field annotated with this attribute. Note **Unlike the formatters derived from the **IFormatter interface, XmlSerializer serializes only public members. // This is the class that will be serialized. public class MyClass { // The str1 value will be serialized. public string str1; /* This field will be ignored when serialized-- unless it's overridden. */ [XmlIgnoreAttribute] public string str2; } Reduce Round Trips Reducing round trips to a Web service reduces the number of times that messages need to cross serialization boundaries. This helps reduce the overall serialization cost incurred. Design options that help to reduce round trips include the following: - Use message-based interaction with a message-based programming model, rather than an RPC style that requires multiple object interactions to complete a single logical unit of work. - In some cases, split a large payload into multiple calls to the Web service. Consider making the calls in parallel using asynchronous invocation instead of in series. This does not technically reduce the total number of round trips, but in essence the client waits for only a single round trip. Consider XML Compression Compressing the XML payload sent over the wire helps reduce the network traffic significantly. You can implement XML compression by using one of the following. More Information For more information about serialization, see: - "XmlSerializer Architecture" in the November 2001 edition of MSDN magazine, at. - Microsoft Knowledge Base article 314150, "INFO: Roadmap for XML Serialization in the .NET Framework," at;en-us;314150. - Microsoft Knowledge Base article 313651, "INFO: Roadmap for XML in the .NET Framework," at;en-us;313651. - Microsoft Knowledge Base article 317463, "HOW TO: Validate XML Fragments Against an XML Schema in Visual Basic .NET," at;en-us;317463.: - Consider output caching for less volatile data. - Consider providing cache-related information to clients. - Consider perimeter caching. Consider Output Caching for Less Volatile Data If portions of your output are static or nearly static, use ASP.NET output caching. To use ASP.NET output caching with Web services, configure the CacheDuration property of the WebMethod attribute. The following code snippet shows the cache duration set to 30 seconds. [WebMethod(CacheDuration=30)] public string SomeMethod() { ... . } For more information, see Microsoft Knowledge Base article 318299, "HOW TO: Perform Output Caching with Web Services in Visual C# .NET," at;en-us;318299. Consider Providing Cache-Related Information to Clients Web services clients can implement custom caching solutions to cache the response from Web services. If you intend that clients of your Web services should cache responses, consider providing cache expiration–related information to the clients so that they send new requests to the Web service only after their cached data has expired. You can add an additional field in the Web service response that specifies the cache expiration time. Consider Perimeter Caching If the output from your Web services changes infrequently, use hardware or software to cache the response at the perimeter network. For example, consider ISA firewall-based caching. Perimeter caching means that a response is returned before the request even reaches the Web server, which reduces the number of requests that need to be serviced. For more information about ISA caching, see the white paper, "Scale-Out Caching with ISA," at. State Management Web services state can be specific to a user or to an application. Web services use ASP.NET session state to manage per-user data and application state to manage application-wide data. You access session state from a Web service in the same way you do from an ASP.NET application — by using the Session object or System.Web.HttpContext.Current. You access application state using the Application object, and the System.Web.HttpApplicationState class provides the functionality. Maintaining session state has an impact on concurrency. If you keep data in session state, Web services calls made by one client are serialized by the ASP.NET runtime. Two concurrent requests from the same client are queued up on the same thread in the Web service — the second request waits until the first request is processed. If you do not use session data in a Web method, you should disable sessions for that method. Maintaining state also affects scalability. First, keeping per-client state in-process or in a state service consumes memory and limits the number of clients your Web service can serve. Second, maintaining in-process state limits your options because in-process state is not shared by servers in a Web farm. If your Web service needs to maintain state between client requests, you need to choose a design strategy that offers optimum performance and at the same time does not adversely affect the ability of your Web service to scale. The following guidelines help you to ensure efficient state management: - Use session state only where it is needed. - Avoid server affinity. Use Session State Only Where It Is Needed To maintain state between requests, you can use session state in your Web services by setting the EnableSession property of the WebMethod attribute to true, as shown in the following code snippet. By default, session state is disabled. [WebMethod(EnableSession=true)] YourWebMethod() { ... } Since you can enable session state at the Web method level, apply this attribute only to those Web methods that need it. **Note **Enabling session state pins each session to one thread (to protect session data). Concurrent calls from the same session are serialized once they reach the server, so they have to wait for each other, regardless of the number of CPUs. Avoid Server Affinity If you do use session state, in-process session state offers the best performance, but it prevents you from scaling out your solution and operating your Web services in a Web farm. If you need to scale out your Web services, use a remote session state store that can be accessed by all Web servers in the farm. Bulk Data Transfer You have the following basic options for passing large amounts of data including binary data to and from Web methods: - Using a byte array Web method parameter. - Returning a URL from the Web service. - Using streaming.(); } More Information. Attachments You have various options when handling attachments with Web services. When choosing your option, consider the following: - WS-Attachments. WSE versions 1.0 and 2.0 provide support for WS-Attachments, which uses Direct Internet Message Encapsulation (DIME) as an encoding format. Although DIME is a supported part of WSE, Microsoft is not investing in this approach long term. DIME is limited because the attachments are outside the SOAP envelope. - Base 64 encoding. Use Base 64 encoding. At this time, you should use Base 64 encoding rather than WS-Attachments when you have advanced Web service requirements, such as security. Base 64 encoding results in a larger message payload (up to two times that of WS-Attachments). You can implement a WSE filter to compress the message with tools such as GZIP before sending it over the network for large amounts of binary data. If you cannot afford the message size that Base 64 introduces and you can rely on the transport for security (for example, you rely on SSL or IPSec), then consider the WSE WS-Attachments implementation. Securing the message is preferable to securing the transport so that messages can be routed securely, whereas transport only addresses point-to-point communication. - SOAP MessageTransmissionOptimizationMechanism(MTOM). MTOM, which is a derivative work of SOAP messages with attachments (SwA), is likely to be the future interop technology. MTOM is being standardized by the World Wide Web Consortium (W3C) and is much more composition-friendly than SwA. SOAP Messages with Attachments (SwA) SwA (also known as WS-I Attachments Profile 1.0) is not supported. This is because you cannot model a MIME message as an XML Infoset, which introduces a non-SOAP processing model and makes it difficult to compose SwA with the rest of the WS-* protocols, including WS-Security. The W3C MTOM work was specifically chartered to fix this problem with SwA, and Microsoft is planning to support MTOM in WSE 3.0. COM Interop Calling single-threaded apartment (STA) objects from Web services is neither tested nor supported. The ASPCOMPAT attribute that you would normally use in ASP.NET pages when calling Apartment threaded objects is not supported in Web services. More Information For more information, see Microsoft Knowledge Base article 303375, "INFO: XML Web Services and Apartment Objects," at;en-us;303375. Measuring and Analyzing Web Services Performance The quickest way to measure the performance of a Web services call is to use the Microsoft Win32® QueryPerformanceCounter API, which can be used with QueryPerformanceFrequency to determine the precise number of seconds that the call consumed. Note **You can also use the **ASP.NET\Request Execution Time performance counter on the server hosting the Web service. More Information - For more information, see "How To: Time Managed Code Using QueryPerformanceCounter and QueryPerformanceFrequency" in the "How To" section of this guide. - For more information about measuring Web services performance, see "Web Services" in Chapter 15, "Measuring .NET Application Performance" Web Service Enhancements Web Service Enhancements (WSE) is an implementation provided to support emerging Web services standards. This section briefly explains WSE, its role in Web services, and sources of additional information. WSE 2.0 provides a set of classes implemented in the Microsoft.Web.Services.dll to support the following Web services standards: - WS-Security - WS-SecureConversation - WS-Trust - WS-Policy - WS-Addressing - WS-Referrals - WS-Attachments Figure 10.2 shows how WSE extends the .NET Framework to provide this functionality. Figure 10.2: WSE runtime The WSE runtime consists of a pipeline of filters that intercepts inbound SOAP requests and outgoing SOAP response messages. WSE provides a programming model to manage the SOAP headers and messages using the SoapContext class. This gives you the ability to implement various specifications that it supports. More Information For more information about WSE, see the MSDN article, "Web Services Enhancements (WSE)," at. Summary Web services are the recommended communication mechanism for distributed .NET applications. It is likely that large portions of your application are depending on them or will depend on them. For this reason, it is essential that you spend time optimizing Web services performance and that you design and implement your Web services with knowledge of the important factors that affect their performance and scalability. This chapter has presented the primary Web services performance and scalability issues that you must address. It has also provided a series of implementation techniques that enable you to tackle these issues and build highly efficient Web services solutions. Additional Resources For more information, see the following resources: - For a printable checklist, see "Checklist: Web Services Performance" in the "Checklists" section of this guide. - Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability" - Chapter 13, "Code Review: .NET Application Performance" See the "Web Services" and "ASP.NET" sections. - Chapter 15, "Measuring .NET Application Performance" See the "Web Services" and "ASP.NET" sections. - Chapter 16, "Testing .NET Application Performance" - Chapter 17, "Tuning .NET Application Performance" See the "Web Services Tuning" and "ASP.NET Tuning" sections. - For key recommendations to help you create high-performance .NET Enterprise Services components, see ".NET Enterprise Services Performance" by Richard Turner, on MSDN at. - For more information on using Microsoft WSE, see Microsoft Knowledge Base article 821377, "Support WebCast: Introduction to Microsoft Web Services Enhancements," at;en-us;821377.
https://docs.microsoft.com/en-us/previous-versions/msp-n-p/ff647786(v=pandp.10)
2019-05-19T09:23:48
CC-MAIN-2019-22
1558232254731.5
[array(['images/ff647786.ch10-web-services-architecture%28en-us%2cpandp.10%29.gif', 'Ff647786.ch10-web-services-architecture(en-us,PandP.10).gif Ff647786.ch10-web-services-architecture(en-us,PandP.10).gif'], dtype=object) array(['images/ff647786.ch10-wse%28en-us%2cpandp.10%29.gif', 'Ff647786.ch10-wse(en-us,PandP.10).gif Ff647786.ch10-wse(en-us,PandP.10).gif'], dtype=object) ]
docs.microsoft.com
Contents Now Platform Administration Previous Topic Next Topic Virtual Private Network (VPN) Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Virtual Private Network (VPN) Use a virtual private network (VPN to integrate your instance with external data sources over the Internet. When configuring an integration that uses an encrypted protocol, such as Lightweight Directory Access Protocol (LDAP) or HTTPS, it is good practice to use the Internet as a transport mechanism. However, there may be security or network architecture requirements that dictate the use of a site-to-site Internet Protocol Security (IPSEC) Virtual Private Network (VPN) connection between the datacenters and your business networks. The VPN supports the necessary encrypted communication between the instance and your network. This video describes how to locate the IP addresses for each of your company's instances. My IP Information V. Redundant tunnels There are two ways to build redundancy for your tunnels: Using the same encryption domain behind both of your peers. This is the preferred method. Using a different encryption domain behind each peer. With the first method, you need to provide the same NAT address behind each of your peers to create a connection path using that address to your server. The path to your server could be the same physical machine or a mirror which provides identical services. With this method, your instance would use the same IP address to connect to your servers regardless of whether your primary or secondary tunnel is active. If you have more than one server, follow this same scheme for your additional servers. This method provides the most transparency to your users and is recommended. The second method requires configuration in your instance to provide the redundancy. When the tunnel is used for LDAP, for example, you could provide redundant LDAP servers in your instance. Note that this method requires the connection to the first configured LDAP server to timeout before the instance attempts to connect to the secondary server. Because of this additional time delay, this solution should only be implemented if the first option is unattainable. Also note that not all services can be configured for redundantly in your instance. If you are using a VPN tunnel for something other than LDAP and redundancy is required, check that your configuration can support multiple addresses, or see the first option above. Alternatives to using a VPN These alternatives provide a simpler way to connect your instance to the resources in the ServiceNow data centers and provide better encryption. Additionally, you can avoid any issues that VPN downtime might cause, such as making your instance unavailable to users if there is an issue with the VPN tunnel. Single sign-on and MID server Consider using a combination of Single Sign-On (SSO) for authentication and the MID Server for user data synchronization, rather than using a VPN to connect your LDAP server to your instance. For integrations other than LDAP, consider using certificate-based encryption. You can use the LDAP listener on a MID server to synchronize your user table in near real time. The advantage of this approach is that there are no firewall holes, routes, VPN tunnels, or other special network settings to configure and maintain. The SSO/MID-Server solution is the most flexible, secure, and cost-effective method to achieve the complete LDAP integration. LDAP over SSL Another alternative to using a VPN tunnel is to configure LDAP Over SSL (LDAPS) directly over the Internet. You can configure a read-only domain controller and lock the instance down in your DMZ using only the instance's source addresses and the destination ports of your choice. Since the ports for LDAP are configurable in your instance, you can perform a port address translation (PAT) if desired. With LDAPS, you control the certificate that is uploaded over an encrypted channel to the instance, (see Upload a certificate to an instance). The packets cannot be encrypted or decrypted without the certificate.The advantage of this approach is that it provides a stronger encryption and decryption mechanism. A VPN can only encrypt and decrypt the traffic between the two peers sitting on the Internet with a coordinated pre-shared key, similar to a password. LDAPS provides a longer encrypted path, end-to-end, at the application layer and with a certificate that is far more complicated than a pre-shared key that the IPSec tunnel uses. VPN setup From the time that a VPN request is submitted, it typically takes one week or less to complete the VPN build. To support the redundancy requirements of your instance and your organization, a minimum of two and a maximum of four VPNs are provisioned (from the active site to your active site or the active site to your DR site, and so on). It is good practice for the encryption domain to be as specific as possible. Ideally, the encryption domain would include only the specific hosts that are required for the integrations. A large encryption domain can create opportunities for routing discrepancies (VPN versus Internet). To create the VPN, the instance does the following: Provides the VPN peer and host addresses from each data center. Builds the necessary VPN connectivity from two data centers into your network. To support redundancy and disaster recovery (DR) requirements, the VPNs can be provisioned from two data centers into two networks. The instance does not support building multiple VPN tunnels into a customer network for the purpose of connecting to multiple geographic regions or subsidiaries. You should perform any inter-site routing, traffic distribution, or traffic shaping within your own internal network, rather than having multiple VPN tunnels. Request a VPN serviceFor all VPN requests, including provisioning, modifications, or general questions, use the Service Catalog VPN Request form. Create an address for VPN communicationTo prevent conflict or overlap with internal ServiceNow networks or with another customer's internal IP address schemes, the instance requires that all tunneled traffic in the encryption domain use non-RFC-1918 addresses on both sides of the tunnel. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/madrid-platform-administration/page/administer/encryption/concept/c_SetUpAVPN4SNowBusNet.html
2019-05-19T08:55:47
CC-MAIN-2019-22
1558232254731.5
[]
docs.servicenow.com
interface gre vpn interface gre—Configure a GRE tunnel interface interface in the transport VPN (on vEdge routers only). GRE interfaces are logical interfaces, and you configure them just like any other physical interface. GRE interfaces come up as soon as they are configured, and they stay up as long as the physical tunnel interface is up. vManage Feature Template For vEdge routers only: Configuration ► Templates ► VPN Interface GRE Command Hierarchy vpn 0 interface grenumber access-list acl-name block-non-source-ip clear-dont-fragment description text ip address prefix/length keepalive seconds retries mtu bytes policer policer-name rewrite-rule rule-name tcp-mss-adjust bytes tunnel-destination ip-address (tunnel-source ip-address | tunnel-source-interface interface-name) Options - Interface Name - grenumber Name of the GRE interface.number can be a value from 1 through 255. Operational Commands show interface show tunnel statistics gre Example Configure a GRE tunnel interface in VPN 0: vEdge# show running-config vpn 0 vpn 0 interface gre1 ip address 172.16.111.11/24 keepalive 60 10 tunnel-source 172.16.255.11 tunnel-destination 10.1.2.27 no shutdown ! ! Release Information Command introduced in Release 14.1. Support for GRE interfaces added in Release 15.4.1. Additional Information See the "Configure GRE Interfaces and Advertise Services to Them" section in Configuring Interfaces article for your software release.
https://sdwan-docs.cisco.com/Product_Documentation/Command_Reference/Configuration_Commands/interface_gre
2019-05-19T08:43:20
CC-MAIN-2019-22
1558232254731.5
[]
sdwan-docs.cisco.com
MS-DOS and Windows Wildcard Characters Note Indexing Service is no longer supported as of Windows XP and is unavailable for use as of Windows 8. Instead, use Windows Search for client side search and Microsoft Search Server Express for server side search.. In its short form, Dialect 2 uses the equal sign (=) to indicate that wildcard characters are used. Essentially, "=" turns on the MS-DOS/Windows wildcard character mode. If no equal sign is used, a CONTAINS operator is assumed. Note In the Indexing Service query language, the syntax #contents = text is invalid.t
https://docs.microsoft.com/en-us/previous-versions/windows/desktop/indexsrv/ms-dos-and-windows-wildcard-characters
2019-05-19T08:28:39
CC-MAIN-2019-22
1558232254731.5
[]
docs.microsoft.com
High DPI Desktop Application Development on Windows This content is targeted at developers who are looking to update desktop applications to handle display scale factor (dots per inch, or DPI) changes dynamically, allowing their applications to be crisp on any display they're rendered on. To start, if you're creating a new Windows app from scratch, it is highly recommended that you create a Universal Windows Platform (UWP) application. UWP applications automatically—and dynamically—scale for each display that they're running on. Desktop applications using older Windows programming technologies (raw Win32 programming, Windows Forms, Windows Presentation Framework (WPF), etc.) are unable to automatically handle DPI scaling without additional developer work. Without such work, applications will appear blurry or incorrectly-sized in many common usage scenarios. This document provides context and information about what is involved in updating a desktop application to render correctly.); in 2017, displays with nearly 300 DPI or higher are readily available. Most legacy desktop UI frameworks have built-in assumptions that the display DPI will not change during the lifetime of the process. This assumption no longer holds true, with display DPIs commonly changing several times throughout an application process's lifetime. redraw themselves for the new DPI automatically. By default, and without additional developer work, desktop applications do not. Desktop applications that don't do this extra work to respond to DPI changes may appear blurry or incorrectly-sized to the user. DPI Awareness Mode Desktop applications must tell Windows if they support DPI scaling. By default, the system considers desktop applications DPI unaware and bitmap-stretches their windows. By setting one of the following available DPI awareness modes, applications can explicitly tell Windows how they wish to handle DPI scaling: DPI Unaware. System DPI Awareness Desktop applications that are system DPI aware typically receive the DPI of the primary connected monitor as of the time of user sign-in. During initialization, they lay out their UI appropriately (sizing controls, choosing font sizes, loading assets, etc.) using that System DPI value. As such, System DPI-aware applications are not DPI scaled (bitmap stretched) by Windows on displays rendering at that single DPI. When the application is moved to a display with a different scale factor, or if the display scale factor otherwise changes, Windows will bitmap scale the application's windows, making them appear blurry. Effectively, System DPI-aware desktop applications only render crisply at a single display scale factor, becoming blurry whenever the DPI changes. Per-Monitor and Per-Monitor (V2) DPI Awareness It is recommended that desktop applications be updated to use per-monitor DPI awareness mode, allowing them to immediately render correctly whenever the DPI changes. When an application reports to Windows that it wants to run in this mode, Windows will not bitmap stretch the application when the DPI changes, instead sending WM_DPICHANGED to the application window. It is then the complete responsibility of the application to handle resizing itself for the new DPI. Most UI frameworks used by desktop applications (Windows common controls (comctl32), Windows Forms, Windows Presentation Framework, etc.) do not support automatic DPI scaling, requiring developers to resize and reposition the contents of their windows themselves. bitmap scaled by Windows - Automatic non-client area (window caption, scroll bars, etc.) DPI scaling). Note Per-Monitor V1 (PMv1) awareness is very limited. It is recommended that applications use PMv2., supported on Windows 10 1703 or above.) On Windows 10 1607 or above, PMv1 applications may also call EnableNonClientDpiScaling during WM_NCCREATE to request that Windows correctly scale the window's non-client area. Per Monitor DPI Scaling Support by UI Framework / Technology The table below shows the level of per-monitor DPI awareness support offered by various Windows UI frameworks as of Windows 10 1703: performed not only during application initialization, but also whenever a DPI change notification (WM_DPICHANGED System DPI. It can be useful to grep through your code to look for some of these APIs and replace them with DPI-aware variants. Some of the common APIs that have DPI-aware variants are: It is also a good idea to search for hard-coded sizes in your codebase that assume a constant DPI, replacing them with code that correctly accounts for DPI scaling. Below is an example that incorporates all of these suggestions: Example: The example below shows a simplified Win32 case of creating a child HWND. The call to CreateWindow assumes that the application is running at 96 DPI, and neither the button's size nor position will be correct at higher DPIs: case WM_CREATE: { // Add a button HWND hWndChild = CreateWindow(L"BUTTON", L"Click Me", WS_CHILD|WS_VISIBLE|BS_PUSHBUTTON, 50, 50, 100, 50, hWnd, (HMENU)NULL, NULL, NULL); } The updated code below shows: - The window-creation code DPI scaling the position and size of the child HWND for the DPI of its parent window - Responding to DPI change by repositioning and resizing the child HWND - Hard-coded sizes Per-Monitor DPI aware and replace them with Per-Monitor DPI-aware allow bitmap stretching of these top-level windows by the system. Mixed-Mode DPI Scaling (Sub-Process DPI Scaling) When updating an application to support per-monitor DPI awareness, it can sometimes become impractical or impossible to update every window in the application in one go. into07), the DPI awareness mode of a process was a process-wide property. Beginning in the Windows 10 Anniversary Update, this property can now be set per top-level window. (Child windows must continue to match the scaling size of their parent.) to validate your application properly responds to DPI changes in a mixed-DPI environment. Some specifics to test include: - Moving application windows back and forth between displays of different DPI values - Starting your application on displays of different DPI values - Changing the scale factor for your monitor while the application is running - Changing the display that you use as the primary display, signing out of Windows, then re-testing your application after signing back in. This use to resize your window. It is critical that your application use this rectangle to resize itself, as this. If you have application-specific requirements that prevent you from using the suggested rectangle that Windows provides in the WM_DPICHANGED message, see WM_GETDPISCALEDSIZE. This message can be used to give Windows a desired size you'd like used once the DPI change has occurred, while still avoiding the issues described above. Lack of documentation about virtualization When an HWND or process is running as either DPI unaware or system DPI aware, it can be bitmap stretched by Windows. When this happens, Windows scales and converts DPI-sensitive information from some APIs to the coordinate space of the calling thread. For example, if a DPI-unaware thread queries the screen size while running on a high-DPI display, Windows will virtualize the answer given to the application as if the screen were in 96 DPI units. Alternatively, when a System DPI-aware thread is interacting with a display at a different DPI than was in use when the current user's session was started, Windows will DPI-scale some API calls into the coordinate space that the HWND would be using if it were running at its original DPI scale factor. When you update your desktop application to DPI scale properly, it can difficult to know which API calls can return virtualized values based on the thread context; this information is not currently sufficiently documented by Microsoft. Be aware that if you call any system API from a DPI-unaware or system-DPI-aware thread context, the return value might be virtualized. As such, make sure your thread is running in the DPI context you expect when interacting with the screen or individual windows. When temporarily changing a thread's DPI context using SetThreadDpiAwarenessContext, be sure to restore the old context when you're done to avoid causing incorrect behavior elsewhere in your application. Many Windows APIs do not have an DPI context Many legacy Windows APIs do not include a DPI or HWND context as part of their interface. As a result, developers often have to do additional work to handle the scaling of any DPI-sensitive information, such as sizes, points, or icons. As an example, developers using LoadIcon must either bitmap stretch loaded icons or use alternate APIs to load correctly-sized icons for the appropriate DPI, such as LoadImage.. On all versions of Windows, as. The table below shows what happens if you attempt to violate this rule:
https://docs.microsoft.com/en-us/windows/desktop/hidpi/high-dpi-desktop-application-development-on-windows
2019-05-19T08:38:27
CC-MAIN-2019-22
1558232254731.5
[array(['images/hub-page-illustrations.png', 'differences in dpi scaling between awareness modes'], dtype=object)]
docs.microsoft.com
Chunks Last edited by Mike Reid on Jul 23, 2014. Chunks are bits of static text which you can reuse across your site, similar in function to include files or "blocks" in other content management systems. Common examples of Chunks might be your contact information or a copyright notice. Although Chunks cannot contain any logic directly, they can however contain calls to Snippets, which are executable bits of PHP code which produce dynamic output. Create Before you can use a Chunk, you must first create and name one by pasting text into the MODx manager (Elements --> Chunks --> New Chunk): Usage To use the Chunk, you reference it by name in your templates or in your page content. [[$chunkName]] That reference is then replaced with the contents of the Chunk. You can also pass properties to a Chunk. Say you had a chunk named 'intro' with the contents: Hello, [[+name]]. You have [[+messageCount]] messages. You could fill those values with: [[$intro? &name=`George` &messageCount=`12`]] Which would output: Hello, George. You have 12 messages. You could even take it one step further, by adding a Template Variable that allows the user to specify their name per Resource: [[!$intro? &name=`[[*usersName]]` &messageCount=`[[*messageCount]]`]] or in the Chunk itself: Hello, [[*usersName]]. You have [[*messageCount]] messages. Processing Chunk via the API Chunks are also frequently used to format the output of Snippets. A Chunk can be processed from a Snippet using the process() function; for example, given the following Chunk named 'rowTpl': <tr class="[[+rowCls]]" id="row[[+id]]"> <td>[[+pagetitle]]</td> <td>[[+introtext]]</td> </tr> the following Snippet code retrieves it and processes it with an array of properties for all published Resources, and returns formatted results as a table, setting the class to "alt" if for even rows: $resources = $modx->getCollection('modResource',array('published' => true)); $i = 0; $output = ''; foreach ($resources as $resource) { $properties = $resource->toArray(); $properties['rowCls'] = $i % 2 ? '' : 'alt'; $output .= $modx->getChunk('rowTpl',$properties); $i++; } return '<table><tbody>'.$output.'</tbody></table>'; Modifying a Chunk Via the API Chunks can also be manipulated by the MODx API: <?php /* create a new chunk, give it some content and save it to the database */ $chunk = $modx->newObject('modChunk'); $chunk->set('name','NewChunkName'); $chunk->setContent('<p>This is my new chunk!</p>'); $chunk->save(); /* get an existing chunk, modify the content and save changes to the database */ $chunk = $modx->getObject('modChunk', array('name' => 'MyExistingChunk')); if ($chunk) { $chunk->setContent('<p>This is my existing chunks new content!</p>'); $chunk->save(); } /* get an existing chunk and delete it from the database */ $chunk = $modx->getObject('modChunk', array('name' => 'MyObsoleteChunk')); if ($chunk) $chunk->remove(); ?> See Also Suggest an edit to this page on GitHub (Requires GitHub account. Opens a new window/tab) or become an editor of the MODX Documentation.
https://docs.modx.com/revolution/2.x/making-sites-with-modx/structuring-your-site/chunks
2019-05-19T08:27:40
CC-MAIN-2019-22
1558232254731.5
[array(['download/attachments/bf9f8ccf5036b4f4bf8b248f7748d0c3/chunk_example.jpg', None], dtype=object) ]
docs.modx.com
SNMP¶ The Simple Network Management Protocol (SNMP) daemon enables remote monitoring of some pfSense system parameters. Depending on the options chosen, monitoring may be performed for network traffic, network flows, pf queues, and general system information such as CPU, memory, and disk usage. The SNMP implementation used by pfSense book, but there are plenty of print and online resources for SNMP, and some of the MIB trees are covered in RFCs. For example, the Host Resources MIB is defined by RFC 2790. SNMP Daemon¶ These options dictate if, and how, the SNMP daemon will run. To turn the SNMP daemon on, check Enable. Once Enable has been checked, the other options may then be changed. SNMP Traps¶ To instruct the SNMP daemon to send SNMP traps, check Enable. Once Enable has been checked, the other options may then be changed. Modules¶ Loadable modules allow the SNMP daemon to understand and respond to queries for more system information. Each loaded module will consume additional resources. As such, ensure that only required modules are loaded..
https://docs.netgate.com/pfsense/en/latest/book/services/snmp.html
2019-05-19T08:33:03
CC-MAIN-2019-22
1558232254731.5
[]
docs.netgate.com
I can't upload a build! Posted in General by Anish Aggarwal Thu Oct 26 2017 18:49:58 GMT+0000 (UTC)·Viewed 513 times When I try to upload a build, the progress bar goes through fine and when I try to refresh my dashboard nothing has been updated. I have tried multiple times and it is not working. Was working just a couple days ago though.. is something broken?
http://docs.apphub.io/v1.0/discuss/59f22e5681c79b0030b4073d
2019-05-19T09:03:10
CC-MAIN-2019-22
1558232254731.5
[]
docs.apphub.io
CephFS health messages¶ Cluster health checks¶ The Ceph monitor daemons will generate health messages in response to certain states of the filesystem map structure (and the enclosed MDS maps). Message: mds rank(s) ranks have failed Description: One or more MDS ranks are not currently assigned to an MDS daemon; the cluster will not recover until a suitable replacement daemon starts. Message: mds rank(s) ranks are damaged Description: One or more MDS ranks has encountered severe damage to its stored metadata, and cannot start again until it is repaired. Message: mds cluster is degraded Description: One or more MDS ranks are not currently up and running, clients may pause metadata IO until this situation is resolved. This includes ranks being failed or damaged, and additionally includes ranks which are running on an MDS but have not yet made it to the active state (e.g. ranks currently in replay state). Message: mds names are laggy Description: The named MDS daemons have failed to send beacon messages to the monitor for at least mds_beacon_grace (default 15s), while they are supposed to send beacon messages every mds_beacon_interval (default 4s). The daemons may have crashed. The Ceph monitor will automatically replace laggy daemons with standbys if any are available. Message: insufficient standby daemons available Description: One or more file systems are configured to have a certain number of standby daemons available (including daemons in standby-replay) but the cluster does not have enough standby daemons. The standby daemons not in replay count towards any file system (i.e. they may overlap). This warning can configured by setting ceph fs set <fs> standby_count_wanted <count>. Use zero for count to disable. Daemon-reported health checks¶ MDS daemons can identify a variety of unwanted conditions, and indicate these to the operator in the output of ceph status. This conditions have human readable messages, and additionally a unique code starting MDS_HEALTH which appears in JSON output. Message: “Behind on trimming…” Code: MDS_HEALTH_TRIM Description: CephFS maintains a metadata journal that is divided into log segments. The length of journal (in number of segments) is controlled by the setting mds_log_max_segments, and when the number of segments exceeds that setting the MDS starts writing back metadata so that it can remove (trim) the oldest segments. If this writeback is happening too slowly, or a software bug is preventing trimming, then this health message may appear. The threshold for this message to appear is for the number of segments to be double mds_log_max_segments. Message: “Client name failing to respond to capability release” Code: MDS_HEALTH_CLIENT_LATE_RELEASE, MDS_HEALTH_CLIENT_LATE_RELEASE_MANY Description: CephFS clients are issued capabilities by the MDS, which are like locks. Sometimes, for example when another client needs access, the MDS will request clients release their capabilities. If the client is unresponsive or buggy, it might fail to do so promptly or fail to do so at all. This message appears if a client has taken longer than session_timeout (default 60s) to comply. Message: “Client name failing to respond to cache pressure” Code: MDS_HEALTH_CLIENT_RECALL, MDS_HEALTH_CLIENT_RECALL_MANY Description: Clients maintain a metadata cache. Items (such as inodes) in the client cache are also pinned in the MDS cache, so when the MDS needs to shrink its cache (to stay within mds_cache_size or mds_cache_memory_limit), it sends messages to clients to shrink their caches too. If the client is unresponsive or buggy, this can prevent the MDS from properly staying within its cache limits and it may eventually run out of memory and crash. This message appears if a client has failed to release more than mds_recall_warning_threshold capabilities (decaying with a half-life of mds_recall_max_decay_rate) within the last mds_recall_warning_decay_rate second. Message: “Client name failing to advance its oldest client/flush tid” Code: MDS_HEALTH_CLIENT_OLDEST_TID, MDS_HEALTH_CLIENT_OLDEST_TID_MANY Description: The CephFS client-MDS protocol uses a field called the oldest tid to inform the MDS of which client requests are fully complete and may therefore be forgotten about by the MDS. If a buggy client is failing to advance this field, then the MDS may be prevented from properly cleaning up resources used by client requests. This message appears if a client appears to have more than max_completed_requests (default 100000) requests that are complete on the MDS side but haven’t yet been accounted for in the client’s oldest tid value. Message: “Metadata damage detected” Code: MDS_HEALTH_DAMAGE, Description: Corrupt or missing metadata was encountered when reading from the metadata pool. This message indicates that the damage was sufficiently isolated for the MDS to continue operating, although client accesses to the damaged subtree will return IO errors. Use the damage ls admin socket command to get more detail on the damage. This message appears as soon as any damage is encountered. Message: “MDS in read-only mode” Code: MDS_HEALTH_READ_ONLY, Description: The MDS has gone into readonly mode and will return EROFS error codes to client operations that attempt to modify any metadata. The MDS will go into readonly mode if it encounters a write error while writing to the metadata pool, or if forced to by an administrator using the force_readonly admin socket command. Message: N slow requests are blocked” Code: MDS_HEALTH_SLOW_REQUEST, Description: One or more client requests have not been completed promptly, indicating that the MDS is either running very slowly, or that the RADOS cluster is not acknowledging journal writes promptly, or that there is a bug. Use the ops admin socket command to list outstanding metadata operations. This message appears if any client requests have taken longer than mds_op_complaint_time (default 30s). Message: “Too many inodes in cache” Code: MDS_HEALTH_CACHE_OVERSIZED Description: The MDS is not succeeding in trimming its cache to comply with the limit set by the administrator. If the MDS cache becomes too large, the daemon may exhaust available memory and crash. By default, this message appears if the actual cache size (in inodes or memory) is at least 50% greater than mds_cache_size (default 100000) or mds_cache_memory_limit (default 1GB). Modify mds_health_cache_threshold to set the warning ratio.
http://docs.ceph.com/docs/master/cephfs/health-messages/
2019-05-19T09:40:11
CC-MAIN-2019-22
1558232254731.5
[]
docs.ceph.com
Verify the recovery plan for the operations management layer. About this task Performing a test recovery of the operations management recovery plan ensures that the virtual machines are being replicated correctly and the power on order is accurate with the correct timeout values and dependencies. Site Recovery Manager runs the analytic cluster nodes on an isolated test network using a temporary snapshot of replicated data while performing test recovery. Procedure - Log in to the Management vCenter Server by using the vSphere Web Client. - Open a Web browser and go to. - Log in using the following credentials. - From the Home menu, select Site Recovery. - Under Inventory Trees, click the Recovery Plans and click the SDDC Operations Management RP recovery plan. - On the SDDC Operations Management RP page, click the Monitor tab and click Recovery Steps. - Click the Test Recovery Plan icon to run a test recovery. The Test wizard appears. - On the Confirmation options page, leave the Replicate recent changes to recovery site check box selected and click Next. - On the Ready to complete page, click Finish to start the test recovery. Test failover.Note: Log in to lax01m01vc01.lax01.rainpole.local vCenter Server and follow the procedure if the protected vRealize Operations Manager virtual machines are located in Region B. Results What to do next If you encounter issues while performing this procedure, use the following troubleshooting tips
https://docs.vmware.com/en/VMware-Validated-Design/4.2/com.vmware.vvd.sddc-verify.doc/GUID-A0216791-C5A5-4C8B-AED3-E566A256DC1B.html
2019-05-19T09:17:53
CC-MAIN-2019-22
1558232254731.5
[]
docs.vmware.com
Creating a Bridge¶ In pfSense, bridges are added and removed at Interfaces > (assign) on the Bridges tab. Using bridges, any number of ports may be bound together easily. Each bridge created in the GUI will also create a new bridge interface in the operating system, named bridgeX where X starts at 0 and increases by one for each new bridge. These interfaces may be assigned and used like most other interfaces, which is discussed later in this chapter. To create a bridge: - Navigate to Interfaces > (assign) on the Bridges tab. - Click Add to create a new bridge. - Select at least one entry from Member Interfaces. Select as many as needed using Ctrl-click. - Add a Description if desired. - Click Show Advanced Options to review the remaining configuration parameters as needed. For most cases they are unnecessary. - Click Save to complete the bridge. Note A bridge may consist of a single member interface, which can help with migrating to a configuration with an assigned bridge, or for making a simple span/mirror port. Advanced Bridge Options¶ There are numerous advanced options for a bridge and its members. Some of these settings are quite involved, so they are discussed individually in this section. (Rapid) Spanning Tree Options¶ Spanning Tree is a protocol that helps switches and devices determine if there is a loop and cut it off as needed to prevent the loop from harming the network. There are quite a few options that control how spanning tree behaves which allow for certain assumptions to be made about specific ports or to ensure that certain bridges get priority in the case of a loop or redundant links. More information about STP may be found in the FreeBSD ifconfig(8) man page, and on Wikipedia. Protocol¶ The Protocol setting controls whether the bridge will use IEEE 802.1D Spanning Tree Protocol (STP) or IEEE 802.1w Rapid Spanning Tree Protocol (RSTP). RSTP is a newer protocol, and as the name suggests it operates much faster than STP, but is backward compatible. The newer IEEE 802.1D-2004 standard is based on RSTP and makes STP obsolete. Select STP only when older switch gear is in use that does not behave well with RSTP. STP Interfaces¶ The STP Interfaces list reflects the bridge members upon which STP is enabled. Ctrl-click to select bridge members for use with STP. Valid Time¶ Set the Valid Time for a Spanning Tree Protocol configuration. The default is 20 seconds. The minimum is 6 seconds and the maximum is 40 seconds. Forward Time¶ The Forward Time option sets the time that must pass before an interface begins forwarding packets when Spanning Tree is enabled. The default is 15 seconds. The minimum is 4 seconds and the maximum is 30 seconds. Note A longer delay will be noticed by directly connected clients as they will not be able to pass traffic, even to obtain an IP address via DHCP, until their interface enters forwarding mode. Hello Time¶ The Hello Time option sets the time between broadcasting of Spanning Tree Protocol configuration messages. The Hello Time may only be changed when operating in legacy STP mode. The default is 2 seconds. The minimum is 1 second and the maximum is 2 seconds. Bridge Priority¶ The Bridge Priority for Spanning Tree controls whether or not this bridge would be selected first for blocking should a loop be detected. The default is 32768. The minimum is 0 and the maximum is 61440. Values must be a multiple of 4096. Lower priorities are given precedence, and values lower than 32768 indicate eligibility for becoming a root bridge. Hold Count¶ The transmit Hold Count for Spanning Tree is the number of packets transmitted before being rate limited. The default is 6. The minimum is 1 and the maximum is 10. Port Priorities¶ The Priority fields set the Spanning Tree priority for each bridge member interface. Lower priorities are given preference when deciding which ports to block and which remain forwarding. Default priority is 128, and must be between 0 and 240. Path Costs¶ The Path Cost fields sets the Spanning Tree path cost for each bridge member. The default is calculated from the link speed. To change a previously selected path cost back to automatic, set the cost to 0. The minimum is 1 and the maximum is 200000000. Lower cost paths are preferred when making a decision about which ports to block and which remain forwarding. Cache Settings¶ Cache Size sets the maximum size of the bridge address cache, similar to the MAC or CAM table on a switch. The default is 100 entries. If there will be a large number of devices communicating across the bridge, set this higher. Cache entry expire time controls the timeout of address cache entries in seconds. If set to 0, then address cache entries will not be expired. The default is 240 seconds (Four minutes). Span Port¶ Selecting an interface as the Span port on the bridge will transmit a copy of every frame received by the bridge to the selected interface. This is most useful for snooping a bridged network passively on another host connected to the span ports of the bridge with something such as Snort, tcpdump, etc. The selected span port may not be a member port on the bridge. Edge Ports / Automatic Edge Ports¶ If an interface is set as an Edge port, it is always assumed to be connected to an end device, and never to a switch; It assumes that the port can never create a layer 2 loop. Only set this on a port when it will never be connected to another switch. By default ports automatically detect edge status, and they can be selected under Auto Edge ports to disable this automatic edge detection behavior. PTP Ports / Automatic PTP Ports¶ If an interface is set as a PTP port, it is always assumed to be connected to a switch, and not to an end user device; It assumes that the port can potentially create a layer 2 loop. It should only be enabled on ports that are connected to other RSTP-enabled switches. By default ports automatically detect PTP status, and they can be selected under Auto PTP ports to disable this automatic PTP detection behavior. Sticky Ports¶ An interface selected in Sticky Ports will have its dynamically learned addresses cached as though they were static once they enter the cache. Sticky entries are never removed from the address cache, even if they appear on a different interface. This could be used a security measure to ensure that devices cannot move between ports arbitrarily. Private Ports¶ An interface marked as a Private Port will not communicate with any other port marked as a Private Port. This can be used to isolate end users or sections of a network from each other if they are connected to separate bridge ports marked in this way. It works similar to “Private VLANs” or client isolation on a wireless access point.
http://docs.netgate.com/pfsense/en/latest/book/bridging/creating-a-bridge.html
2019-05-19T09:22:03
CC-MAIN-2019-22
1558232254731.5
[]
docs.netgate.com
true if the client will share the desktop with other currently-connected clients. false if the client is asking for exclusive access to the desktop.Namespace: RemoteViewing.Vnc.Server Assembly: RemoteViewing (in RemoteViewing.dll) Version: 0.9.1.0 (0.9.1.0) Syntax public bool ShareDesktop { get; private set; } Public Property ShareDesktop As Boolean Get Private Set public: property bool ShareDesktop { bool get (); private: void set (bool value); } member ShareDesktop : bool with get, private set See Also
http://docs.zer7.com/remoteviewing/html/b453061b-c777-e42e-7daf-1a19013ed24e.htm
2019-05-19T08:25:53
CC-MAIN-2019-22
1558232254731.5
[]
docs.zer7.com
Problems can occur when the cluster time is inaccurate. Although Data ONTAP enables you to manually set the time zone, date, and time on the cluster, you should configure the Network Time Protocol (NTP) servers to synchronize the cluster time. NTP is always enabled. However, configuration is still required for the cluster to synchronize with an external time source. Data ONTAP enables you to manage the cluster's NTP configuration in the following ways: By default, Data ONTAP automatically selects the NTP version that is supported for a given external NTP server. If the NTP version you specify is not supported for the NTP server, time exchange cannot take place. A node that joins a cluster automatically adopts the NTP configuration of the cluster. In addition to using NTP, Data ONTAP also enables you to manually manage the cluster time. This capability is helpful when you need to correct erroneous time (for example, a node's time has become significantly incorrect after a reboot). In that case, you can specify an approximate time for the cluster until NTP can synchronize with an external time server. The time you manually set takes effect across all nodes in the cluster. You can manually manage the cluster time in the following ways:
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-900/GUID-1E923D05-447D-4323-8D87-12B82F49B6F1.html
2019-05-19T09:27:20
CC-MAIN-2019-22
1558232254731.5
[]
docs.netapp.com
Setting up the support page completes the cluster setup, and involves setting up the AutoSupport messages and event notifications, and for single-node clusters, configuring system backup. You must have set up the cluster and network. If you have enabled the AutoSupport button, all the nodes in that cluster are enabled to send AutoSupport messages. If you have disabled the AutoSupport button, then all the nodes in that cluster are disabled to send AutoSupport messages. View the storage recommendations and create SVMs to continue with the cluster setup.
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-950/GUID-9BC12134-CF71-4237-B87F-A02D5E270695.html
2019-05-19T09:05:25
CC-MAIN-2019-22
1558232254731.5
[]
docs.netapp.com
API Manager 7.5.3 User Guide Configure API Manager settings in Policy Studio Policy Studio enables you to configure a range of settings that apply to API Manager and the underlying API Gateway. This topic describes how to create a Policy Studio project with API Manager configuration, and how to configure each of the API Manager settings. Create a Policy Studio project with API Manager configuration To create a Policy Studio project with API Manager configuration, perform the following steps: Ensure that your API Gateway installation has already been configured for API Manager using the setup-apimanager script. For more details, see Configure API Manager settings in Policy Studio. Create a project from one of the following:API Gateway instanceAPI Gateway configuration directory.fed, .pol, or .env file For more details on creating projects, see the Get Started section in the API Gateway Policy Developer Guide. Configure API Manager server settings In the Policy Studio tree, select Environment Configuration > Server Settings > API Manager to configure the settings described in this topic. Alerts The Alerts settings enable you to configure runtime alerts, which call specified policies to handle the alert event. For example, the policy might send an email to an interested party, or forward the alert to an external notification system. Sample policies are provided as a starting point for custom development. You can enable or disable alerts in the API Manager web interface. You can change the policy that is executed when an alert is generated on this screen. For more details, see API management alerts. API Listeners The API Listeners settings enable you to configure API Gateway listeners to service API Manager-registered APIs. Defaults to Portal Listener. Note This screen only displays listeners that do not have a relative path resolver on the / relative path. For more details on API Gateway listeners, relative paths, and resolvers, see the API Gateway Policy Developer Guide. API Promotion The API Promotion settings enable you to configure an optional policy that is invoked when APIs registered in API Manager are promoted between environments (for example, from a test or sandbox environment to a live production environment). To select a promotion policy, click the browse button on the right, and select a policy that you have already created. By default, no API promotion policy is selected. For more details, see Promote managed APIs between environments. API Connectors The API Connectors settings enable you to configure client authentication profiles to use with specific API connectors and plugins. For example, this includes connecting to Cloud APIs such as Salesforce.com and Google. A preconfigured plugin for Salesforce.com APIs is provided by default. For more details, see Cloud application connectors. Identity Provider The Identity Provider settings enable you to integrate API Manager with a wide range of external user repositories. For example, this includes third-party identity providers such as Apache Directory, OpenLDAP, Microsoft Active Directory, and so on. To enable integration, select Use external identity provider, and configure the following set of custom policies: Account authentication policy:Click the browse button, and select the required authentication policy that is invoked whenever a user tries to log in to API Manager. This setting is mandatory. Account information policy:Click the browse button, and select the required information policy that is invoked on first login to seed the user profile in API Manager. This setting is mandatory. For more details, see Configure external LDAP identity providers. Account creation success (optional):Click the browse button, and select an optional policy that is invoked when a new user has been registered with API Manager. Account creation failure (optional):Click the browse button, and select an optional policy that is invoked when an attempt to register a new account with API Manager has failed. API Manager provides sample external identity provider configuration. For more details, see Configure external LDAP identity providers. Note The Identity Provider settings are used only to configure integration of API Manager with external user repositories. All other API Manager data is stored using a Key Property Store (KPS) in an Apache Cassandra cluster. For more details, see the API Gateway Key Property Store User Guide. Monitoring The Monitoring settings allow you to configure monitoring metrics in API Manager: Enable monitoring:Select whether to enable monitoring metrics displayed on the Monitoring tab in API Manager. Monitoring is enabled by default. Use the following database:Click the browse button to configure the connection to the database that stores the monitoring metrics. For more details, see Configure database connections in the API Gateway Policy Developer Guide. For more details on monitoring, see Monitor APIs and applications in API Manager. OAuth Outbound Credentials The OAuth Outbound Credentials setting enables you to configure optional client credentials for use with OAuth outbound authentication. These enable clients to request an OAuth access token using only their client credentials with the authorization specified in the header. By default, no credentials are configured. For more details, see the following: Configure custom API Manager routing policies provides a detailed example of using these credentials with a custom OAuth routing policy API Gateway OAuth User Guide provides more details on OAuth API Gateway Policy Developer Guide explains how to create policies OAuth Token Information Policies The OAuth Token Information Policies setting enables you to configure optional policies used by external OAuth security devices in API Manager. These include custom policies used to obtain and extract token information from external OAuth providers. By default, no policies are configured. For more details, see the following: Virtualize REST APIs in API Manager explains how to configure security devices API Gateway OAuth User Guide provides more details on OAuth API Gateway Policy Developer Guide explains how to create policies OAuth Token Stores The OAuth Token Stores settings enable you to configure OAuth token stores for the OAuth security devices used by API Manager-registered APIs. Click Add to configure an OAuth access token store. To add a store, right-click Access Token Stores, and select Add Access Token Store. Defaults to OAuth Access Token Cache. For more details on OAuth, see the API Gateway OAuth User Guide. Quota Settings The Quota Settings enable you to configure how quota information is stored. Quotas enable you to manage the maximum message traffic rate that can be sent by applications to APIs. For more details on configure quotas in API Manager, see Administer APIs in API Manager. You can configure the following settings in Policy Studio: Send warning if API usage reaches:Enter the % of System Quota and % of Application Quota that must be reached before warnings are sent to the API administrator. Both API usage values default to 80 per cent. For more details, see Manage quotas. Where to store quota data:Select In external storage or In memory only. This setting defaults to In external storage, and to keep the quota in memory only if the time window is below 30 seconds. In this case, if the API administrator configures a quota in API Manager with a time window below 30 seconds, the data is stored in memory instead of in external storage. Alternatively, to never use external storage, select In memory only to store data in memory in all cases. If you select In external storage, you must specify an external storage mechanism:Automatic (adapt to KPS storage configuration): The data is stored externally as configured in the Key Property Store (KPS). This is the default option. For more details, see the API Gateway Key Property Store User Guide.Use database: To store your data in a relational database, select this option, and specify the database connection that you want to use in Environment Configuration > External Connections > Database Connections. For more details, see the API Gateway Policy Developer Guide.Use Cassandra: To store your data in an Apache Cassandra database, select this option. For more details, see Install Apache Cassandra in the API Gateway Installation Guide. Cassandra consistency levels:When Use Cassandra is selected, you can configure Read and Write consistency levels for the Cassandra database. These settings control how up-to-date and synchronized a row of data is on all of its replicas. For high availability, you must ensure that the Cassandra read and write consistency levels are both set to QUORUM. For more details on consistency levels, see the following Note Quota data is not shared for those quotas created in API Manager with a time window less than the value configured in Policy Studio, irrespective of the storage selected. This could impact on throttling in an HA environment, where multiple API Gateways are servicing requests and contributing to total message counts. Inbound Security Policies The Inbound Security Policies settings enable you to configure the custom security policies that can be applied to APIs registered in API Manager. These policies enable you to perform custom policy-based authentication on front-end APIs. API Manager provides a number of built-in authentication policies to secure APIs (for example, API keys and OAuth 2.0), which you can select when creating front-end APIs. You can extend the built-in authentication policies with custom authentication policies that have been developed in Policy Studio. For example, a custom policy could use CA SiteMinder to authenticate client application requests to APIs. In addition, custom authentication policies can specify a message that is displayed in the API Catalog informing application developers of the authentication mechanism to use when accessing the API. To configure your custom inbound security policies, click Add, and select the appropriate policies in the dialog. The configured polices are added to the list. Note Inbound security policies must set the authentication.subject.id message attribute to match the client ID set in the external credentials of the application. For details on how to create polices, see the API Gateway Policy Developer Guide. For details on applying inbound security policies to front-end APIs, see Virtualize REST APIs in API Manager Request Policies The Request Policies settings enable you to configure optional request processing policies for virtualized APIs in API Manager. For example, you could use the configured policies to check request messages for authentication or authorization. To configure request policies, click Add, and select policies in the dialog. By default, no request policies are configured. Note Request Policies, Response Policies, and Routing Policies apply to APIs registered using the API Manager, and do not apply to policies registered using Policy Studio. These policies enable policy developers to implement enterprise-specific request policies in Policy Studio that can be applied to multiple APIs in API Manager. For details on how to create polices, see the API Gateway Policy Developer Guide. Response Policies The Response Policies settings enable you to configure optional response processing policies for virtualized APIs in API Manager. For example, you could use the configured policies to validate or transform outbound response messages. To configure response policies, click Add, and select policies in the dialog. By default, no response policies are configured. For details on how to create polices, see the API Gateway Policy Developer Guide. Routing Policies The Routing Policies settings enable you to configure custom routing policies for virtualized APIs in API Manager. For example, you could use the configured policies to route to a back-end JMS service. To configure routing policies, click Add, and select policies in the dialog. By default, no routing policies are configured, and the default URL-based routing policy is used. For more details, see Customize the default API Manager routing policy for all APIs. For detailed examples of using custom routing policies based on API key and OAuth, see Configure custom API Manager routing policies. For more details on how to create API Gateway polices in Policy Studio, see the API Gateway Policy Developer Guide. SMTP Server Under SMTP Server settings, to send emails (for example, for user registration or client application approval), you must configure an SMTP server for API Manager in the Policy Studio. The default setting is Portal SMTP server on localhost. Note You must ensure that API Manager is configured with the SMTP server used by your organization to generate emails for user registration or client application approval. For example, to configure your SMTP server, perform the following steps: Click the browse button on the on the right of the SMTP Server field. Right-click Portal SMTP, and select Edit. Complete the SMTP settings in the dialog. The following example settings use the Gmail SMTP server: Name: Name for your SMTP server (for example, Acme Portal SMTP Server). SMTP Server Hostname: Hostname of your SMTP server (for example, smtp.gmail.com). Port: SMTP server port number (for example, 465). User Name: Your email user name (for example, [email protected]). Password: Your email password. For more details on SMTP configuration, see the API Gateway Policy Developer Guide. Note When finished updating your API Manager configuration, remember to click Apply Changes at the bottom of the window, and then Deploy in the toolbar. Customize the default API Manager routing policy for all APIs You can customize the default URL-based routing used by API Manager by modifying the default Connect To URL filter in Policy Studio. To edit this default policy, select Policies > Generated Policies > REST APIs > Templates > Default URL-based Routing, and double-click the Connect to URL filter in the policy canvas on the right. For example, under Settings > Failure > Call connection policy on failure, you could configure a custom policy with a Reflect message filter that modifies the default 500 response code to 503 when the API Manager runtime cannot connect to a back-end service. Updating this default routing policy modifies how API Manager manages connection failures globally for all APIs, without needing to modify each API. Note After updating this default routing policy, you do not need to restart the underlying API Gateway, redeploying the updated configuration is sufficient. For more details on how to create API Gateway polices in Policy Studio, see the API Gateway Policy Developer Guide. Configure API Manager in network protected by an HTTP proxy If you are using API Manager in a network protected by an HTTP proxy that requires authentication, you must perform some additional configuration steps. Configure a proxy server For API Manager to connect to the back-end API through a proxy, the routing policy used must be configured with a proxy server. For example, perform the following steps: In the Policy Studio tree, select Policies > Generated Policies > REST APIs > Templates > Default URL-based Routing. Double-click the Connect to URL filter to edit it, and select the Settings tab. Select Proxy > Send via proxy. In the Proxy Server field, browse to the configured proxy server. If a proxy server has not already been configured, right-click Proxy Servers, and select Add a> Related Links
https://docs.axway.com/bundle/APIManager_753_APIMgmtGuide_allOS_en_HTML5/page/Content/APIManagementGuideTopics/api_mgmt_config_ps.htm
2019-05-19T09:10:32
CC-MAIN-2019-22
1558232254731.5
[]
docs.axway.com
USB Interface Configuration: NA| Nodes: ghanta jsw xhawk ifc mhx The USB interface could be used to connect node to PC. Possible use cases: - Connect a portable ground modem to Ground Control. - Integrate Payload Computer with Autopilot system. - Perform UAV ground maintenance and speed-up firmware updates on complex networks. - Execute custom user scripts onboard and communicate with onboard Computer Vision processor.
https://docs.uavos.com/fw/conf/usb.html
2019-05-19T09:15:56
CC-MAIN-2019-22
1558232254731.5
[]
docs.uavos.com
Migration¶ Migrate access rules to the Security Groups - When CROC Cloud update, all the rules will moved to conform rules in the appropriate Security Groups. - Membership of the appropriate subnets for existing resources is preserved, the number of Security Groups will be equal to the number of subnets. Example migration for subnets and rules Were: - The subnet 10.0.0.0/24 with the allow rules: - icmp 0.0.0.0/0 - tcp/80 from 10.0.0.0/8 - The subnet 10.0.1.0/24 with the allow rule: - tcp/22 from 10.0.0.0/24 - tcp/80 from 0.0.0.0/0 - tcp/433 from subnet-XXXXXXXX (subnet ID 10.0.0.0/24) Added: - 2 new Security Groups: Example migration for switches Were: - Security group sg-XXXXXXXX - type: interconnect - security group name: my virtual switch Now - Switch sw-XXXXXXXX - switch name: my virtual switch
http://docs.website.cloud.croc.ru/en/changelog/12.0-CROC1/migration.html
2019-05-19T09:39:53
CC-MAIN-2019-22
1558232254731.5
[]
docs.website.cloud.croc.ru
The Diagnostics > pfInfo page displays statistics and counters for the firewall packet filter which serve as metrics to judge how it is behaving and processing data. The information shown on the page contains items such as: Bytes transferred in and out of the firewall. Packets transferred in or out and passed or blocked counters for each direction. Statistics about the state table and source tracking table (Firewall States). Statistics an counts for various types of special, unusual or badly formatted packets. Counters that pertain to packets that have reached or exceeded limits configured on firewall rules, such as max states per IP address. State table max size, source node table size, frag table size, number of allowed tables, and maximum number of table entries. The current configured timeout values for various connection states for TCP, UDP, and other protocols. Per-interface packet counters.
https://docs.netgate.com/pfsense/en/latest/book/monitoring/pfinfo.html
2019-05-19T09:15:13
CC-MAIN-2019-22
1558232254731.5
[]
docs.netgate.com
Reconstruction of the Si (100) surface - a geometry optimization study with QuantumATK¶ Introduction¶ Although LDA and GGA may not be able to produce a proper band gap in Si (and most other semiconductors), these “simple” DFT functionals are very capable when it comes to predicting geometrical properties. In this short tutorial we will demonstrate how to use QuantumATK to study the so-called asymmetric dimer reconstruction of a Si (100) surface. The focus will be on the physics and how to set up the model correctly, rather than how to operate QuantumATK. It will be assumed that you have experience building structures (for instance, cleaving surfaces) and setting up calculations and inspecting the results, and the steps will therefore only be indicated, not explained in any great detail. Building the geometry¶ Start QuantumATK and open the Builder. Add Silicon (alpha) from the Database (). Open. Cleave the structure along 100 by entering the corresponding Miller indices. Since the dimer reconstruction can only occur in a 2x1 supercell, you need to make a larger surface cell than the smallest one. Therefore, on the second page set \(\mathbf{v_2} = 2\mathbf{u_2}\). On the final page of the Surface tool, choose a Slab configuration with thickness 4.5 layers (the default size of the vacuum is fine). Center the structure in all directions (). Select the two Si atoms with the smallest Z-coordinate and add a small random perturbation to their positions by clicking the Rattletool a few times. This is because the completely symmetric state is a meta-stable configuration, and it’s best to not start the optimization in such a state. Select the two Si atoms with the highest Z-coordinate and passivate them to saturate all Si bonds in the structure. The resulting structure is shown in the picture below. Setting up the calculation¶ Send the structure to the ScriptGenerator. - Insert a New Calculator block, then an OptimizeGeometry block. - For the calculator you can keep all parameters at their default except the k-point sampling. Set it to 9x9 (of course 1 point is enough in the C direction, since it’s a slab) - accuracy is important since the difference in energy between the symmetric and asymmetric dimer states is very small [1]. - For the optimization, constrain the two bottom Si atoms and the 4 hydrogens. It is always important to limit the degrees of freedom in a relaxation if possible, to project out pure translations and rotations. - Also, make sure to tick Save trajectoryand set a name for the trajectory file. - Set the name of the output file, and the calculation is ready to be run. Save the script! Send the script to the Job Manager, or transfer it to a cluster and run it there. This calculation parallelizes very well, and takes between one and several hours depending on how many MPI nodes it is run on. Results¶ When the calculation is done, select the HDF5 file in QuantumATK and inspect the second “Bulk configuration”, with Id “gID001” - this is the optimized geometry. Drag-and-Drop it on the Builder - it is immediately visible that indeed an asymmetric dimer state has been obtained. To display the bond lengths just hoover the mouse over the respective bond - the bond distance will be displayed in the tool tip. To perform measurements of the bond lengths and angles, open, and select various atoms in the surface - both for the original and optimized geometry - and compare. It is also interesting to analyze the relaxation trajectory. Select the LabFloor object in which you have the saved the trajectory, and open it in the Viewer to watch a movie of the optimization. If we plot the total energy and forces for each relaxation step, we see that the total energy is more or less constantly decreasing, as expected from the fact that we use the LBFGS algorithm, which indeed is designed to minimize the total energy rather than just following the forces. The following simple script reads the trajectory data and plots the total energy and maximum force for each step: traj = nlread('si_100_traj.hdf5', Trajectory)[0] energies = [] max_forces = [] for i in range(len(traj)): energies.append(traj.imageEnergy(i).inUnitsOf(eV)) forces = traj.imageForces(i).inUnitsOf(eV/Angstrom) force_magnitude = (forces**2).sum(axis=1)**0.5 max_forces.append(numpy.max(forces)) # Plot the energy and max. force curve using pylab. import pylab pylab.figure() pylab.subplot(2, 1, 1) pylab.plot(range(len(traj)), energies) pylab.xlabel('Optimization step') pylab.ylabel('Energy (eV)') pylab.grid(True) pylab.subplot(2, 1, 2) pylab.plot(range(len(traj)), max_forces) pylab.xlabel('Optimization step') pylab.ylabel('Maximum Forces (eV/Ang)') pylab.grid(True) pylab.show() The result may look like the figure below, although the exact curve depends on the exact initial displacement of the top silicon atoms. From this plot, combined with the trajectory movie, you can learn many things. First of all, note that unless you use a criterion for the forces which is below 0.2 eV/Å, you would be fooled into thinking the symmetric dimer is the minimum, since this state is passed on the way to the asymmetric one! This is also possible to see in the trajectory movie. In fact, although it has not been shown here, it is well known that unless care is taken to have an accurate method (i.e. enough k-points) and even enough vacuum around the slab, the symmetric dimer will be predicted to be the most energetically favorable. Specifically, the symmetric dimer local minimum is reached around step 13 (see the trajectory picture above), but as the algorithm tries to reduce the total energy further, the forces rise again and the asymmetry starts to appear. After step 15 an asymmetry has been established for real, and the asymmetric dimer is finally formed at around step 20. You can use the following script to visualize the vertical relaxation of the atomic layers: traj = nlread('si_100_traj.hdf5', Trajectory)[0] number_of_steps = len(traj) z_coordinates = [] for i in range(number_of_steps): coordinates = traj.image(i).cartesianCoordinates().inUnitsOf(Angstrom) # Append all z-coordinates, apart from the fixed bottom layer. z_coordinates.append(coordinates[:16, 2].tolist()) # Convert to numpy array. z_coordinates = numpy.array(z_coordinates) # Plot the z-coordinates using pylab. import pylab pylab.figure() # Loop over all unconstrained atoms. for i in range(16): pylab.plot(range(number_of_steps), z_coordinates[:, i], '--*') pylab.xlabel('Optimization step') pylab.ylabel('Z-coordinates (Ang)') pylab.grid(True) pylab.show() It is important to note that the asymmetry appears as a result of a long-range interaction, which also is why short-range methods like tight-binding or classical potentials are only able to relax the configuration into the symmetric dimer. Summary¶ This study has shown how QuantumATK can succesfully predict even a rather complicated geometric reconstruction by using geometry optimization in DFT. The Si 100 surface first forms a symmetric dimer which in turns induces an asymmetry in the electron landscape, under which we again optimize the structure to find the ground state asymmetric structure. The asymmetry is a long-range effect which originates a few layers below the surface where the Si lattice is compressed downwards under the dimer. References
https://docs.quantumwise.com/tutorials/reconstruction_si100/reconstruction_si100.html
2019-05-19T09:24:03
CC-MAIN-2019-22
1558232254731.5
[array(['../../_images/si100.png', '../../_images/si100.png'], dtype=object) array(['../../_images/optimize_geometry.png', '../../_images/optimize_geometry.png'], dtype=object) array(['../../_images/z-matrix.png', '../../_images/z-matrix.png'], dtype=object) array(['../../_images/optimize_si_100.gif', '../../_images/optimize_si_100.gif'], dtype=object) array(['../../_images/optimize_si100_energy_force.png', '../../_images/optimize_si100_energy_force.png'], dtype=object) array(['../../_images/optimize_si100_coordinates.png', '../../_images/optimize_si100_coordinates.png'], dtype=object)]
docs.quantumwise.com
Microsoft's Windows Store for Business enables you to acquire, manage, and distribute applications in bulk. If you use AirWatch to manage your Windows 10+ devices, you can integrate the two systems. After integration, acquire applications from the Windows Store for Business and distribute the applications and manage their updated versions with AirWatch. This topic explains how to deploy acquired apps using AirWatch. For information on Windows Store for Business processes, refer to. Disclaimer Third-party URLs are subject to changes beyond the control of VMware AirWatch. If you find a URL in VMware AirWatch documentation that is out of date, submit a Documentation Feedback support ticket using the Support Wizard on support.air-watch.com. Required Components to Integrate AirWatch and the Windows Store for Business See Requirements for Windows Store for Business Integration for information on the components that integrate AirWatch and the Windows Store for Business. Before you can use Azure AD to enroll your Windows devices, you must configure AirWatch to use Azure AD as an Identity Service. See Configure Azure AD Identity Services for SaaS Deployments for information. AirWatch supports both online and offline licensing models. For a comparison of the two models, see Online and Offline Models of the Windows Store for Business . Import and Deploy With the AirWatch Console See Import Windows Store for Business Apps for the steps to import Windows Store for Business applications to AirWatch. Follow the import by deploying applications as outlined in Deploy Windows Store for Business Apps. Manage Windows Store for Business Applications with Details View Use the Details View of public, Windows Store for Business applications to sync licenses, assign the application to groups, and to edit details about the application. See Details View Setting Descriptions for information on settings. See Sync and Reclaim Licenses for Windows Store for Business Apps for information on license management in the AirWatch Console.
https://docs.vmware.com/en/VMware-AirWatch/9.1/vmware-airwatch-guides-91/GUID-AW91-Win_BSP_Public_Apps.html
2019-05-19T09:14:43
CC-MAIN-2019-22
1558232254731.5
[]
docs.vmware.com
SeExpr node¶ This documentation is for version 2.0 of SeExpr. Description¶ Use the SeExpr expression language (by Walt Disney Animation Studios) to process images. What is SeExpr?¶ SeExpr is a very simple mathematical expression language used in graphics software (RenderMan, Maya, Mudbox, Yeti). See the SeExpr Home Page and SeExpr Language Documentation for more information. SeExpr is licensed under the Apache License, Version 2.0, and is Copyright Disney Enterprises, Inc. SeExpr vs. SeExprSimple¶ The SeExpr plugin comes in two versions: - SeExpr has a single vector expression for the color channels, and a scalar expression for the alpha channel. The source color is accessed through the Csvector, and alpha through the Asscalar, as specified in the original SeExpr language. - SeExprSimple has one scalar expression per channel, and the source channels may also be accessed through scalars ( r, g, b, a). SeExpr extensions¶ A few pre-defined variables and functions were added to the language for filtering and blending several input images. The following pre-defined variables can be used in the script: x: X coordinate (in pixel units) of the pixel to render. y: Y coordinate (in pixel units) of the pixel to render. u: X coordinate (normalized in the [0,1] range) of the output pixel to render. v: Y coordinate (normalized in the [0,1] range) of the output pixel to render. sx, sy: Scale at which the image is being rendered. Depending on the zoom level of the viewer, the image might be rendered at a lower scale than usual. This parameter is useful when producing spatial effects that need to be invariant to the pixel scale, especially when using X and Y coordinates. (0.5,0.5) means that the image is being rendered at half of its original size. par: The pixel aspect ratio. cx, cy: Shortcuts for (x + 0.5)/par/sxand (y + 0.5)/sy, i.e. the canonical coordinates of the current pixel. frame: Current frame being rendered Cs, As: Color (RGB vector) and alpha (scalar) of the image from input 1. CsN, AsN: Color (RGB vector) and alpha (scalar) of the image from input N, e.g. Cs2and As2for input 2. output_width, output_height: Dimensions of the output image being rendered. input_width, input_height: Dimensions of image from input 1, in pixels. input_widthN, input_heightN: Dimensions of image from input N, e.g. input_width2and input_height2for input 2. The following additional functions are available: color cpixel(int i, int f, float x, float y, int interp = 0): interpolates the color from input i at the pixel position (x,y) in the image, at frame f. float apixel(int i, int f, float x, float y, int interp = 0): interpolates the alpha from input i at the pixel position (x,y) in the image, at frame f. The pixel position of the center of the bottom-left pixel is (0., 0.). The first input has index i=1. interp controls the interpolation filter, and can take one of the following values: - 0: impulse - (nearest neighbor / box) Use original values - 1: bilinear - (tent / triangle) Bilinear interpolation between original values - 2: cubic - (cubic spline) Some smoothing - 3: Keys - (Catmull-Rom / Hermite spline) Some smoothing, plus minor sharpening (*) - 4: Simon - Some smoothing, plus medium sharpening (*) - 5: Rifman - Some smoothing, plus significant sharpening (*) - 6: Mitchell - Some smoothing, plus blurring to hide pixelation (*+) - 7: Parzen - (cubic B-spline) Greatest smoothing of all filters (+) - 8: notch - Flat smoothing (which tends to hide moire’ patterns) (+) Some filters may produce values outside of the initial range (*) or modify the values even at integer positions (+). Sample scripts¶ Add green channel to red, keep green, and apply a 50% gain on blue¶ SeExprSimple: r+g g 0.5*b SeExpr: [Cs[0]+Cs[1], Cs[1], 0.5*Cs[2]] “Multiply” merge operator on inputs 1 and 2¶ SeExprSimple: r*r2 g*g2 b*b2 a+a2-a*a2 SeExpr: Cs * Cs2 As + As2 - As * As2 “Over” merge operator on inputs 1 and 2¶ SeExprSimple: r+r2*(1-a) g+g2*(1-a) b+b2*(1-a) a+a2-a*a2 SeExpr: Cs + Cs2 * (1 - As) As + As2 - As * As2 Custom parameters¶ To use custom variables that are pre-defined in the plug-in (scalars, positions and colors) you must reference them using their script-name in the expression. For example, the parameter x1 can be referenced using x1 in the script: Cs + x1 Multi-instruction expressions¶ If an expression spans multiple instructions (usually written one per line), each instruction must end with a semicolumn (‘;’). The last instruction of the expression is considered as the final value of the pixel (a RGB vector or an Alpha scalar, depending on the script), and must not be terminated by a semicolumn. More documentation is available on the SeExpr website. Accessing pixel values from other frames¶ The input frame range used to render a given output frame is computed automatically if the following conditions hold: The frameparameter to cpixel/apixel must not depend on the color or alpha of a pixel, nor on the result of another call to cpixel/apixel A call to cpixel/apixel must not depend on the color or alpha of a pixel, as in the following: if (As > 0.1) { src = cpixel(1,frame,x,y); } else { src = [0,0,0]; } If one of these conditions does not hold, all frames from the specified input frame range are asked for.
http://natron.readthedocs.io/en/master/plugins/fr.inria.openfx.SeExpr.html
2017-03-23T02:19:02
CC-MAIN-2017-13
1490218186608.9
[]
natron.readthedocs.io
Pipewelder¶ Pipewelder is a framework that provides a command-line tool and Python API to manage AWS Data Pipeline jobs from flat files. Simple uses it as a cron-like job scheduler. - Source - - Documentation - - PyPI - Overview¶ Pipewelder aims to ease the task of scheduling jobs by defining very simple pipelines which are little more than an execution schedule, offloading most of the execution logic to files in S3. Pipewelder uses Data Pipeline’s concept of data staging to pull input files from S3 at the beginning of execution and to upload output files back to S3 at the end of execution. If you follow Pipewelder’s directory structure, all of your pipeline logic can live in version-controlled flat files. The included command-line interface gives you simple commands to validate your pipeline definitions, upload task definitions to S3, and activate your pipelines. Installation¶ Pipewelder is available from PyPI via pip and is compatible with Python 2.6, 2.7, 3.3, and 3.4: pip install pipewelder The easiest way to get started is to clone the project from GitHub, copy the example project from Pipewelder’s tests, and then modify to suit: git clone cp -r pipewelder/tests/test_data my-pipewelder-project If you’re setting up Pipewelder and need help, feel free to email the author. Development¶ To do development on Pipewelder, clone the repository and run make to install dependencies and run tests. Directory Structure¶ To use Pipewelder, you provide a template pipeline definition along with one or more directories that correspond to particular pipeline instances. The directory structure looks like this (see test_data for a working example): pipeline_definition.json pipewelder.json <- optional configuration file my_first_pipeline/ run values.json tasks/ task1.sh task2.sh my_second_pipeline/ ... The values.json file in each pipeline directory specifies parameter values that are used modify the template definition including the S3 paths for inputs, outputs, and logs. Some of these values are used directly by Pipewelder as well. A `ShellCommandActivity <>`__ in the template definition simply looks for an executable file named run and executes it. run is the entry point for whatever work you want your pipeline to do. Often, your run executable will be a wrapper script to execute a variety of similar tasks. When that’s the case, use the tasks subdirectory to hold these definitions. These tasks could be text files, shell scripts, SQL code, or whatever else your run file expects. Pipewelder gives tasks folder special treatment in that the CLI will make sure to remove existing task definitions when uploading files. Using the Command-Line Interface¶ The Pipewelder CLI should always be invoked from the top-level directory of your definitions (the directory where pipeline_definition.json lives). If your directory structure matches Pipewelder’s expectations, it should work without further configuration. As you make changes to your template definition or values.json files, it can be useful to check whether AWS considers your definitions valid: $ pipewelder validate Once you’ve defined your pipelines, you’ll need to upload the files to S3: $ pipewelder upload Finally, activate your pipelines: $ pipewelder activate Any time you change the values.json or pipeline_definition.json, you’ll need to run the activate subcommand again. Because active pipelines can’t be modified, the activate command will delete the existing pipeline and create a new one in its place. The run history for the previous pipeline will be discarded. Acknowledgments¶ Pipewelder’s package structure is based on python-project-template.
http://pipewelder.readthedocs.io/en/latest/README.html
2017-03-23T02:07:36
CC-MAIN-2017-13
1490218186608.9
[array(['_images/welder.jpg', 'A worker welding a pipe'], dtype=object)]
pipewelder.readthedocs.io
Capabilities in Technical Preview 1612 for System Center Configuration Manager Applies to: System Center Configuration Manager (Technical Preview) This article introduces the features that are available in the Technical Preview for System Center Configuration Manager, version 1612.. The Data Warehouse Service point Beginning with the Technical Preview version 1612, the Data Warehouse Service point enables you to store and report on long-term historical data for your Configuration Manager deployment. This is accomplished by automated synchronizations from the Configuration Manager site database to a data warehouse database. This information is then accessible from your Reporting services point. By default, when you install the new site system role, Configuration Manager creates the data warehouse database for you on a SQL Server instance that you specify. The data warehouse supports up to 2 TB of data, with timestamps for change tracking. By default, the data that is synchronized from the site database includes the data groups for Global Data, Site Data, Global_Proxy, Cloud Data, and Database Views. You can also modify what is synchronized to include additional tables, or exclude specific tables from the default replication sets. The default data that is synchronized includes information about: - Infrastructure health - Security - Compliance - Malware - Software deployments - Inventory details (however, inventory history is not synchronized) In addition to installing and configuring the data warehouse database, several new reports are installed so you can easily search for and report on this data. Data Warehouse Dataflow Prerequisites for the Data Warehouse Service point and database - Your hierarchy must have a Reporting services point site system role installed. - The computer where you install the site system role requires .NET Framework 4.5.2 or later. - The computer account of the computer where you install the site system role must have local admin permissions to the computer that will host the data warehouse database. - The administrative account you use to install the site system role must be a DBO on the instance of SQL Server that will host the data warehouse database. - The database is supported: - With SQL Server 2012 or later, Enterprise or Datacenter edition. - On a default or named instance - On a SQL Server Cluster. Although this configuration should work, it has not been tested and support is best effort. - When co-located with either the site database or Reporting services point database. However, we recommend it be installed on a separate server. - The account that is used as the Reporting Services Point Account must have the db_datareader permission to the data warehouse database. - The database is not supported on a SQL Server AlwaysOn availability group. Install the Data Warehouse You install the Data Warehouse site system role on a central administration site or primary site by using the Add Site System Roles Wizard or the Create Site System Server Wizard. See Install site system roles for more information. A hierarchy supports multiple instances of this role, but only one instance is supported at each site. When you install the role, Configuration Manager creates the data warehouse database for you on the instance of SQL Server that you specify. If you specify the name of an existing database (as you would do if you move the data warehouse database to a new SQL Server), Configuration Manager doesn’t create a new database but instead uses the one you specify. Configurations used during installation Use the following information to complete installation of the site system role: System Role Selection page: Before the Wizard displays an option to select and install the Data Warehouse Service point, you must have installed a Reporting services point. General page: The following general information is required: - Configuration Manager database settings: - Server Name - Specify the FQDN of the server that hosts the site database. If you do not use a default instance of SQL Server, you must specify the instance after the FQDN in the following format: <Sqlserver_FQDN><Instance_name> - Database name - Specify the name of the site database. - Verify - Click Verify to make sure that the connection to the site database is successful. - Data Warehouse database settings: - Server name - Specify the FQDN of the server that hosts the Data Warehouse Service point and database. If you do not use a default instance of SQL Server, you must specify the instance after the FQDN in the following format: <Sqlserver_FQDN><Instance_name> - Database name - Specify the FQDN for the data warehouse database. Configuration Manager will create the database with this name. If you specify a database name that already exists on the instance of SQL server, Configuration Manager will use that database. - Verify - Click Verify to make sure that the connection to the site database is successful. Synchronization settings page: - Data settings: - Replication groups to synchronize – Select the data groups you want to synchronize. For information about the different types of data groups, see Database replication and Distributed views in Data transfers between sites. - Tables included to synchronize – Specify the name of each additional table you want to synchronize. Separate multiple tables by using a comma. These tables will be synchronized from the site database in addition to the replication groups you select. - Tables excluded to synchronize - Specify the name of individual tables from replication groups you synchronize. Tables you specify will be excluded from. Separate multiple tables by using a comma. - Synchronization settings: - Synchronization interval (minutes) - Specify a value in minutes. After the interval is reached, a new synchronization starts. This supports a range from 60 to 1440 minutes (24 hours). - Schedule - Specify the days that you want synchronization to run. Reporting point access: After the data warehouse role is installed, ensure the account that is used as the Reporting Services Point Account has the db_datareader permission to the data warehouse database. Troubleshoot installation and data synchronization. Reporting After you install a Data Warehouse site system role, the following reports are available on your Reporting services point with a Category of Data Warehouse: Move the Data Warehouse database Use the following steps to move the data warehouse database to a new SQL Server: Review the current database configuration and record the configuration details, including: - The data groups you synchronize - Tables you include or exclude from synchronization You will reconfigure these data groups and tables after you restore the database to a new server and reinstall the site system role. Use SQL Server Management Studio to backup the data warehouse database, and then again to restore that database to a SQL server on the new computer that will host the data warehouse. After you restore the database to the new server, ensure that the database access permissions are the same on the new data warehouse database as they were on the original data warehouse database. Use the Configuration Manager console to remove the Data Warehouse Service point site system role from the current server. Install a new Data Warehouse Service point and specify the name of the new SQL Server and instance that hosts the Data Warehouse database you restored. After the site system role installs, the move is complete. You can review the following Configuration Manager logs to confirm the site system role has successfully reinstalled: - DWSSMSI.log and DWSSSetup.log - Use these logs to investigate errors when installing the Data warehouse service point. - Microsoft.ConfigMgrDataWarehouse.log – Use this log to investigate data synchronization between the site database to the data warehouse database. Content Library Cleanup Tool Beginning with Technical Preview version 1612, you can use a new command line tool (ContentLibraryCleanup.exe) to remove content that is no-longer associated with any package or application from a distribution point (orphaned content). This tool is called the content library cleanup tool. This tool only affects the content on the distribution point you specify when you run the tool and cannot remove content from the content library on the site server. After you install Technical Preview 1612, you can find ContentLibraryCleanup.exe in the %CM_Installation_Path%\cd.latest\SMSSETUP\TOOLS\ContentLibraryCleanup\ folder on the Technical Preview site server. The tool released with this Technical Preview is intended to replace older versions of similar tools released for past Configuration Manager products. Although this tool version will cease to function after March 1st, 2017, new versions will release with future Technical Previews until such time as this tool is released as part of the Current Branch, or a production ready out-of-band release. Requirements - The tool can be run directly on the computer that hosts the distribution point, or remotely from another server. The tool can only be run against a single distribution point at a time. - The user account that runs the tool must directly have role-based administration permissions that are equal to a Full Administrator on the Configuration Manager hierarchy. The tool does not function when user account is granted permissions as a member of a Windows security group that has the Full Administrator permissions. Modes of operation The tool can be run in two modes: What-If mode: When you do not specify the /delete switch, the tool runs in What-If mode and identifies the content that would be deleted from the distribution point but does not actually delete any data. - When the tool runs in this mode, information about the content that would be deleted is automatically written to the tools log file. The user is not prompted to confirm each potential deletion. - By default, the log file is written to the users temp folder on the computer where you run the tool, however you can use the /log switch to redirect the log file to another location. We recommend you run the tool in this mode and review the resulting log file before you run the tool with the /delete switch. Delete mode: When you run the tool with the /delete switch, the tool runs in delete mode. - When the tool runs in this mode, orphaned content that is found on the specified distribution point can be deleted from the distribution point’s content library. - Before deleting each file, the user is prompted to confirm that the file should be deleted. You can select, Y for yes, N for no, or Yes to all to skip further prompts and delete all orphaned content. We recommend you run the tool in What-If mode and review the resulting log file before you run the tool with the /delete switch. When the content library cleanup tool runs in either mode, it automatically creates a log with a name that includes the mode the tool runs in, distribution point name, date, and time of operation. The log file automatically opens when the tool finishes. By default, this log is written to the users temp folder on the computer where you run the tool., However, you can use a command line switch to redirect the log file to another location, including a network share. Run the tool To run the tool, open an administrative command prompt to a folder that contains ContentLibraryCleanup.exe. Next, enter a command line that includes the required command line switches, and optional switches you want to use. Command line switches The following command line switches can be used in any order. Improvements for in-console search Based on User Voice feedback, we have added the following improvements to in-console search: Object Path: Many objects now support a new column named Object Path. When you search and include this column in your display results, you can view the path to each object. For example, if you run a search for apps in the Applications node and are also searching sub-nodes, the Object Path column in the results pane will show you the path to each object returned. Preservation of search text: When you enter text in the search text box, and then switch between searching a sub-node and the current node, the text you typed will now persist and remain available for a new search without having to retype it. Preservation of your decision to search sub-nodes: The option you select for either searching the current node or all sub-nodes now persists when you change the node you are working in. This new behavior means you do not need to constantly reset the decision as you move around the console. By default, when you open the console the option is to only search the current node. Prevent installation of an application if a specified program is running. You can now configure a list of executable files (with the extension .exe) in deployment type properties which, if running, will block installation of an application. After installation is attempted, users will see a dialog box asking them to close the processes that are blocking installation. Try it out To configure a list of executable files - On the properties page of any deployment type, choose the Installer Handling tab. - Click Add, to add one of more executable files to the list (for example Edge.exe) - Click OK to close the deployment type properties dialog box. Now, when you deploy this application to a user or a device, and one of executables you added is running, the end user will see a Software Center dialog box telling them that the installation failed because an application is running. New Windows Hello for Business notification for end users A new Windows 10 notification informs end users that they must take additional actions to complete Windows Hello for Business setup (for example, setting up a PIN). Windows Store for Business support in Configuration Manager You can now deploy online licensed apps with a deployment purpose of Available from the Windows Store for Business to PCs running the Configuration Manager client. For more details, see Manage apps from the Windows Store for Business with System Center Configuration Manager. Support for this feature is currently only available to PCs running the Windows 10 RS2 preview build. Return to previous page when a task sequence fails You can now return to a previous page when you run a task sequence and there is a failure. Prior to this release, you had to restart the task sequence when there was a failure. For example, you can use the Previous button in the following scenarios: - When a computer starts in Windows PE, the task sequence bootstrap dialog might display before the task sequence is available. When you click Next in this scenario, the final page of the task sequence displays with a message that there are no task sequences available. Now, you can click Previous to search again for available task sequences. You can repeat this process until the task sequence is available. - When you run a task sequence, but dependent content packages are not yet available on distribution points, the task sequence fails. You can now distribute the missing content (if it wasn’t distributed yet) or wait for the content to be available on distribution points, and then click Previous to have the task sequence search again for the content. Express installation files support for Windows 10 updates We have added express installation files support in Configuration Manager for Windows 10 updates.. To enable the download of express installation files for Windows 10 updates on the server To start synchronizing the metadata for Windows 10 express installation files, you must enable it in the Software Update Point Properties. - In the Configuration Manager console, navigate to Administration > Site Configuration > Sites. - Select the central administration site or the stand-alone primary site. - On the Home tab, in the Settings group, click Configure Site Components, and then click Software Update Point. On the Update Files tab, select Download full files for all approved updates and express installation files for Windows 10. To enable support for clients to download and install express installation files To enable express installation files support on clients, you must enable express installation files on clients in the Software Updates section of client settings. This creates a new HTTP listener that listens for requests to download express installation files on the port that you specify. Once you deploy client settings to enable this functionality on the client, it will attempt to download the delta between the current month's Windows 10 Cumulative Update and the previous month's update (clients must run a version of Windows 10 that supports express installation files). - Enable support for express installation files in the Software Update Point Component properties (previous procedure). - In the Configuration Manager console, navigate to Administration > Client Settings. - Select the appropriate client settings, then on the Home tab, click Properties. - Select the Software Updates page, configure Yes for the Enable installation of Express Updates on clients setting and configure the port used by the HTTP listener on the client for the Port used to download content for Express Updates setting. OData endpoint data access Configuration Manager now provides a RESTful OData endpoint for accessing Configuration Manager data. The endpoint is compatible with Odata version 4, which enables tools such as Excel and Power BI to easily access Configuration Manager data through a single endpoint. Technical Preview 1612 only supports read access to objects in Configuration Manager. Data that is currently available in the Configuration Manager WMI Provider is now also accessible with the new OData RESTful endpoint. The entity sets exposed by the OData endpoint enable you to enumerate over the same data you can query with the WMI provider. Try it out Before you can use the OData endpoint, you must enable it for the site. - Go to Administration > Site Configuration > Sites. - Select the primary site and click Properties. - On the General tab of the primary site properties sheet, click Enable REST endpoint for all providers on this site, and then click OK. In your favorite OData query viewer, try queries similar to the following examples to return various objects in Configuration Manager: Note The example queries shown in the table use localhost as the host name in the URL and can be used on the computer running the SMS Provider. If you're running your queries from a different computer, replace localhost with the FQDN of the server with the SMS Provider installed. Azure Active Directory onboarding Azure Active Directory (AD) onboarding creates a connection between Configuration Manager and Azure Active Directory to be used by other cloud services. This can currently be used for creating the connection needed for the Cloud Management Gateway. Perform this task with an Azure admin, as you will need Azure admin credentials. To create the connection: - In the Administration workspace, choose Cloud Services > Azure Active Directory > Add Azure Active Directory. - Choose Sign In to create the connection with Azure AD. Configuration Manager client requirements There are several requirements for enabling the creation of user policy in the Cloud Management Gateway. - The Azure AD onboarding process must be complete, and the client has to be initially connected to the corporate network to get the connection information. - Clients must be both domain-joined (registered in Active Directory) and cloud-domain-joined (registered in Azure AD). - You must run Active Directory User Discovery. - You must modify the Configuration Manager client to allow user policy requests over the Internet, and deploy the change to the client. Because this change to the client takes place on the client device, it can be deployed through the Cloud Management Gateway although you haven't completed the configuration changes needed for user policy. - Your management point must be configured to use HTTPS to secure the token on the network, and must have .Net 4.5 installed. After you make these configuration changes, you can create a user policy and move clients to the Internet to test the policy. User policy requests through the Cloud Management Gateway will be authenticated with Azure AD token-based authentication. Change to configuring multi-factor authentication for device enrollment Now that you can set up multi-factor authentication (MFA) for device enrollment in the Azure portal, the MFA option has been removed in the Configuration Manager console. You can find more information on setting up MFA for enrollment in this Microsoft Intune topic.
https://docs.microsoft.com/en-us/sccm/core/get-started/capabilities-in-technical-preview-1612
2017-03-23T02:10:58
CC-MAIN-2017-13
1490218186608.9
[array(['media/datawarehouse.png', 'Datawarehouse_flow'], dtype=object)]
docs.microsoft.com
Delete a Job Category This topic describes how to delete a Microsoft SQL Server Agent job category in SQL Server 2016 Before You Begin Limitations and Restrictions When you delete a user-defined job category, SQL Server Agent prompts you to reassign the jobs that are assigned to it to another job category. You can only delete user-defined job categories. Security For detailed information, see Implement SQL Server Agent Security. Using SQL Server Management Studio To delete a job category In Object Explorer, click the plus sign to expand the server where you want to delete a job category. Click the plus sign to expand SQL Server Agent. Right-click the Jobs folder and select Manage Job Categories. In the Manage Job Categoriesserver_name dialog box, select the job category to delete. Click Delete. In the Job Categories dialog box, click Yes. Close the Manage Job Categoriesserver_name dialog box. Using Transact-SQL To delete a job category In Object Explorer, connect to an instance of Database Engine . On the Standard bar, click New Query. Copy and paste the following example into the query window and click Execute. -- deletes the job category named AdminJobs. USE msdb ; GO EXEC dbo.sp_delete_category @name = N'AdminJobs', @class = N'JOB' ; GO For more information, see sp_delete_category (Transact-SQL). Using SQL Server Management Objects To delete a job category Call the JobCategory class by using a programming language that you choose, such as Visual Basic, Visual C#, or PowerShell.
https://docs.microsoft.com/en-us/sql/ssms/agent/delete-a-job-category
2017-03-23T03:08:21
CC-MAIN-2017-13
1490218186608.9
[]
docs.microsoft.com
Adding a Tap Gesture Gesture Recognizer - PDF for offline use - - Sample Code: - - Related APIs: - Let us know how you feel about this 0/250 last updated: 2016-01 The tap gesture is used for tap detection and is implemented with the TapGestureRecognizer class. Overview To make a user interface element clickable with the tap gesture, create a TapGestureRecognizer instance, handle the Tapped event and add the new gesture recognizer to the GestureRecognizers collection on the user interface element. The following code example shows a TapGestureRecognizer attached to an Image element: var tapGestureRecognizer = new TapGestureRecognizer(); tapGestureRecognizer.Tapped += (s, e) => { // handle the tap }; image.GestureRecognizers.Add(tapGestureRecognizer); By default the image will respond to single taps. Set the NumberOfTapsRequired property to wait for a double-tap (or more taps if required). tapGestureRecognizer.NumberOfTapsRequired = 2; // double-tap When NumberOfTapsRequired is set above one, the event handler will only be executed if the taps occur within a set period of time (this period is not configurable). If the second (or subsequent) taps do not occur within that period they are effectively ignored and the 'tap count' restarts. Using Xaml A gesture recognizer can be added to a control in Xaml using attached properties. The syntax to add a TapGestureRecognizer to an image is shown below (in this case defining a double tap event): <Image Source="tapped.jpg"> <Image.GestureRecognizers> <TapGestureRecognizer Tapped="OnTapGestureRecognizerTapped" NumberOfTapsRequired="2" /> </Image.GestureRecognizers> </Image> The code for the event handler (in the sample) increments a counter and changes the image from color to black & white. void OnTapGestureRecognizerTapped(object sender, EventArgs args) {} tapCount++; var imageSender = (Image)sender; // watch the monkey go from color to black&white! if (tapCount % 2 == 0) { imageSender.Source = "tapped.jpg"; } else { imageSender.Source = "tapped_bw.jpg"; } } Using ICommand Applications that use the Mvvm pattern typically use ICommand rather than wiring up event handlers directly. The TapGestureRecognizer can easily support ICommand either by setting the binding in code: var tapGestureRecognizer = new TapGestureRecognizer(); tapGestureRecognizer.SetBinding (TapGestureRecognizer.CommandProperty, "TapCommand"); image.GestureRecognizers.Add(tapGestureRecognizer); or using Xaml: <Image Source="tapped.jpg"> <Image.GestureRecognizers> <TapGestureRecognizer Command="{Binding TapCommand}" CommandParameter="Image1" /> </Image.GestureRecognizers> </Image> The complete code for this view model can be found in the sample. The relevant Command implementation details are shown below: public class TapViewModel : INotifyPropertyChanged { int taps = 0; ICommand tapCommand; public TapViewModel () { // configure the TapCommand with a method tapCommand = new Command (OnTapped); } public ICommand TapCommand { get { return tapCommand; } } void OnTapped (object s) { taps++; Debug.WriteLine ("parameter: " + s); } //region INotifyPropertyChanged code omitted } Summary The tap gesture is used for tap detection and is implemented with the TapGestureRecognizer class. The number of taps can be specified to recognize double-tap (or triple-tap, or more taps).
https://docs.mono-android.net/guides/xamarin-forms/application-fundamentals/gestures/tap/
2017-03-23T02:14:52
CC-MAIN-2017-13
1490218186608.9
[]
docs.mono-android.net
aws_sns_topic resource Use the aws_sns_topic InSpec audit resource to test properties of a single AWS Simple Notification Service Topic. SNS topics are channels for related events. AWS resources place events in the SNS topic, while other AWS resources subscribe to receive notifications when new events occur. Syntax describe aws_sns_topic('arn:aws:sns:*::my-topic-name') do it { should exist } end # You may also use has syntax to pass the ARN describe aws_sns_topic(arn: 'arn:aws:sns:*::my-topic-name') do it { should exist } end Parameters arn (required) This resource accepts a single parameter, the ARN of the SNS Topic. This can be passed either as a string or as a arn: 'value' key-value entry in a hash. See also the AWS documentation on SNS. Properties Examples Make sure something is subscribed to the topic describe aws_sns_topic('arn:aws:sns:*::my-topic-name') do its('confirmed_subscription_count') { should_not be_zero}. describe aws_sns_topic('arn:aws:sns:*::good-news') do it { should exist } end describe aws_sns_topic('arn:aws:sns:*::bad-news') do it { should_not exist } end AWS Permissions Your Principal will need the sns:GetTopicAttributes action with Effect set to Allow. You can find detailed documentation at Actions, Resources, and Condition Keys for Amazon SNS. Was this page helpful?
https://docs.chef.io/inspec/resources/aws_sns_topic/
2021-06-13T01:41:47
CC-MAIN-2021-25
1623487598213.5
[]
docs.chef.io
Storing Files on Google Cloud Storage (GCS) If you use containers for deployment (including Docker and Heroku), you should not store files within the container’s filesystem. This integration allows you to delegate storing such files to Google Cloud Storage (GCS) service. #Environment variables #Serving media files from a GCS bucket "Media files" are the files uploaded through the dashboard. They include product images, category images, and non-image files. If you want to use BGC to store and serve media files, you need to configure at least the bucket name (see table above). #Serving static files from a GCS bucket "Static files" are assets required for Saleor to operate. They include assets used in default email templates. If you also intend to use GCS for your static files, you need to configure at least the bucket name (see table above). #Cross-Origin Resource Sharing You need to configure your GCS bucket to allow cross-origin requests for some files to be properly served (SVG files, Javascript files, etc.). To do this, set the following instructions in your GCS bucket’s permissions tab under the CORS section.
https://docs.saleor.io/docs/developer/running-saleor/gcs/
2021-06-13T03:13:19
CC-MAIN-2021-25
1623487598213.5
[]
docs.saleor.io
The NFTX token supply is 650,000 and 60% (i.e. 390k) is being distributed based on the community raise. The raise will accept both ETH and D1 NFT fund tokens to help the NFTX Dao bootstrap AMM pools. Half of the 390k tokens are for ETH contributions and the other half are for D1 NFT fund tokens. All NFTX sent to supporters in return for their contributions are vested (i.e. locked) until January 5th 2021. The raise will begin on Dec 22, 2020 at 11am PST. There will be a per-transaction NFTX cap that starts at 0 NFTX and increases to 50,000 NFTX over the span of one hour. This is simply to avoid the possibility of whales eating up the entire supply before smaller contributors have had a chance to participate. You can view the "XBounties" smart contract at this link. The 195,000 NFTX tokens allocated for ETH contributions are tranched over three valuations. It is not expected that the second or third tranche will get reached anytime soon, but by pre-programming future rounds early contributors know exactly what to expect. Tranche 1: 65k NFTX @ 130 NFTX/ETH Tranche 2: 65k NFTX @ 65 NFTX/ETH Tranche 3: 65k NFTX @ 43.3 NFTX/ETH The 195,000 NFTX tokens allocated for NFT contributions are split across 16 different fund tokens which wrap 6 different NFT contracts. Here are the NFT contracts preceded by the total amount being allocated to each: 93,925 NFTX for CryptoPunks 23,400 NFTX for AxieInfinity 23,400 NFTX for Avastars 23,400 NFTX for Autoglyphs 19,175 NFTX for CryptoKitties 11,700 NFTX for JOYWORLD Below is a list of D1 fund bounties. Each fund token has a certain reward rate which contributors are eligible for. For example, the reward rate for PUNK-BASIC is 390 NFTX, which means that if Alice mints and deposits 2 PUNK-BASIC then she will receive 2 x 390 NFTX (for a total of 780 NFTX tokens). However, each fund also has a reward cap, which is the maximum amount of NFTX tokens to be distributed. So, in our previous example, Alice's transaction would only get accepted if her transaction did not cause that category to go above its cap. The "expected" column is simply the reward cap divided by the reward rate. It is possible for our Dao to update this community raise smart contract and change the rate or cap for any particular fund. We will likely choose to do this in cases where the reward rate is too low to suck in any more assets, however we also encourage the community to be patient in this regard, since a rising NFTX token price translates into a higher reward rate, so with enough time it is possible that not many changes will have to be made. As a general standard, we will try to limit changes to at most once per week, and give followers at least a 24 hours notice before changes are enacted. For those that are unsure about which fund to mint and deposit, consider comparing the prices for NFTs on OpenSea with their respective reward rates above. Also note that some NFTs have their own marketplaces which tend to be more liquid than OpenSea. CryptoPunks have the the LarvaLabs website, Axies have the AxieInfinity marketplace, and likewise CryptoKitties have their own marketplace as well. It may also be worth checking current supply of fund tokens on the NFTX site. It's safe to assume that anyone who has minted a fund token this early is likely planning to use it for the community raise, so if you see that one fund already has a supply which exceeds the "expected" quantity above, then that may not be a good choice since you will have to compete to not be last when the raise opens. Some funds also require waiting 24-48 hours after minting to receive your fund tokens. This is referred to as a "mint request" and is necessary for funds which either have a very large number of potentially eligible tokens or that target an attribute which can change (e.g. the "fast" attribute on CryptoKitties). The reason for the delay is that the Dao must verify the mint requests manually, and voting takes 24 hours to complete. After the request has been verified then the requestee will receive their fund tokens (e.g. KITTY-GEN-0). If requestees change their mind or get impatient they can revoke their request and retrieve their NFTs at any time, assuming the request has not already been approved by the Dao. Lastly, it's important for contributors to remain cognizant of gas costs, which can add up quickly both when purchasing NFTs and also when using those NFTs to mint tokens on NFTX. Gas usage for the NFTX contract is not particularly efficent, and this is something that will be improved in the future. However, in the mean time, we simply recommend that contributors wait for when gas prices are lower. If you have any questions please visit our discord:
https://docs.nftx.org/archive/community-raise
2021-06-13T02:57:12
CC-MAIN-2021-25
1623487598213.5
[]
docs.nftx.org
How particular page of your site. - On the front end of your site, navigate to the page that you want to customize - Click on the Nimble Icon in the top admin menu, this will load the live customizer - For a specific header for the current page, click on the settings icon and expand the Current page options. To design a site wide header / footer, click on Site wide options. For the moment, the header and the footer are those of your active WordPress theme. Click on the Page header and footer tab, and select the Nimble specific header Your header and footer have now been replaced by empty dropzones in which you can start inserting Nimble Builder content. You can also leave them empty if you only need to display your main content on this page. You can use pre-build header and footer sections in the content picker.
https://docs.presscustomizr.com/article/358-building-your-header-and-footer-with-the-nimble-builder
2021-06-13T02:28:49
CC-MAIN-2021-25
1623487598213.5
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5c6d75e4042863543ccd3aa2/file-dkNBznzeNb.jpg', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5caf5df30428631d263c04d0/file-AqvJUoyUB6.jpg', 'pre-build header and footer'], dtype=object) ]
docs.presscustomizr.com
Here is a simple guide to add setting in Waasify - Go to WP Dashboard - Navigate to Waasify > Add Setting Menu - Select Setting Type (Text Editor, Color, Image Upload) - Enter Setting Label – This Label is displayed above the field. - Enter Meta Name – The meta field name is used by WordPress internally. This is not shown publicly. Use only lowercase and underscores. No spaces or special characters allowed. - Enter Tooltip – Write a short helpful tip. - Enter Field Placeholder – Field Content Placeholder. - Enter Field Default – This will be your default content - Enter the element where you want to apply these settings - Select Settings Category from right sidebar (Theme Settings, General Settings) - Select the Menu Icon
https://docs.waashero.com/docs/waasify-page-builder/how-to-add-setting-through-waasify/
2021-06-13T02:08:03
CC-MAIN-2021-25
1623487598213.5
[]
docs.waashero.com
Pass your actual test with our Palo Alto Networks PSE-Strata training material at first attempt Last Updated: Jun 09, 2021 No. of Questions: 63 Questions & Answers with Testing Engine Latest Version: V12.75 Download Limit: Unlimited We provide the most up to date and accurate PSE-Strata questions and answers which are the best for clearing the actual test. Instantly download of the Palo Alto Networks Palo Alto Networks System Engineer Professional – Strata Exam exam practice torrent is available for all of you. 100% pass is our guarantee of PSE-Str PSE-Strata actual test that can prove a great deal about your professional ability, we are here to introduce our Palo Alto Networks Systems Engineer PSE-Strata practice torrent to you. With our heartfelt sincerity, we want to help you get acquainted with our PSE-Strata exam vce. The introduction is mentioned as follows. Our PSE-Strata latest vce team with information and questions based on real knowledge the exam required for candidates. All these useful materials ascribe to the hardworking of our professional experts. They not only are professional experts dedicated to this PSE-Strata training material painstakingly but pooling ideals from various channels like examiners, former candidates and buyers. To make the PSE-Strata actual questions more perfect, they wrote our PSE-Strata prep training with perfect arrangement and scientific compilation of messages, so you do not need to plunge into other numerous materials to find the perfect one anymore. They will offer you the best help with our PSE-Strata questions & answers. We offer three versions of PSE-Strata practice pdf for you and help you give scope to your initiative according to your taste and preference. Tens of thousands of candidates have fostered learning abilities by using our PSE-Strata updated torrent. Let us get to know the three versions of we have developed three versions of PSE-Strata training vce for your reference. The PDF version has a large number of actual questions, and allows you to take notes when met with difficulties to notice the misunderstanding in the process of reviewing. The APP version of Palo Alto Networks Systems Engineer PSE-Str PSE-Strata free pdf maybe too large to afford by themselves, which is superfluous worry in reality. Our PSE-Strata exam training is of high quality and accuracy accompanied with desirable prices which is exactly affordable to everyone. And we offer some discounts at intervals, is not that amazing? As online products, our PSE-Strata : Palo Alto Networks System Engineer Professional – Strata Exam useful training can be obtained immediately after you placing your order. It is convenient to get. Although you cannot touch them, but we offer free demos before you really choose our three versions of PSE-Strata practice materials. Transcending over distance limitations, you do not need to wait for delivery or tiresome to buy in physical store but can begin your journey as soon as possible. We promise that once you have experience of our PSE-Strata practice materials once, you will be thankful all lifetime long for the benefits it may bring in the future.so our Palo Alto Networks PSE-Strata practice guide are not harmful to the detriment of your personal interests but full of benefits for you. Erin Ivy Lorraine Natalie Ruby Verna Exam4Docs is the world's largest certification preparation company with 99.6% Pass Rate History from 69850+ Satisfied Customers in 148 Countries. Over 69850+ Satisfied Customers
https://www.exam4docs.com/palo-alto-networks-system-engineer-professional-strata-exam-accurate-pdf-12165.html
2021-06-13T01:27:33
CC-MAIN-2021-25
1623487598213.5
[]
www.exam4docs.com
Create or update a dedicated host . Delete a dedicated host. Retrieves information about a dedicated host. Lists all of the dedicated hosts in the specified dedicated host group. Use the nextLink property in the response to get the next page of dedicated hosts. Update an dedicated host . Feedback will be sent to Microsoft: By pressing the submit button, your feedback will be used to improve Microsoft products and services. Privacy policy. Thank you.
https://docs.microsoft.com/en-us/rest/api/compute/dedicated-hosts
2021-06-13T03:51:26
CC-MAIN-2021-25
1623487598213.5
[]
docs.microsoft.com
Exporting Products #Introduction This guide describes how to export products from the Saleor GraphQL API. Exporting products can be useful for creating backups of your data or for easy and quick bulk editing. You can export all products, filtered products, or products with specific IDs. Products data can be exported to CSV or XLSX file, but CSV file is recommended because it is much less time-consuming to generate. You can also define which fields will be exported. If any product variants fields are specified then products and variants data are exported. Link to download file is sent by email to the requestor. If any error occurs then email with information about problems is sent. note When the export is made by the App email is not sent. You can get the file by query the ExportFile with ID returned from exportProducts mutation. #Export file structure Each row in the exported file represents a single product variant, but it also contains general product fields. For example, if a product has three variants, there will be three lines in total. Each line will contain common product fields such as name or description, and fields specific to given variant: The exact shape of the file depends on the fields selected for export (see the exportInfo input fields). If no variant fields are provided, each row will contain only product fields: #Workflow The charts below explains workflow of the exporting products: #Schedule export products #Handling background worker result #Exporting products note Products can be exported only by logged users with MANAGE_PRODUCTS permission. To export products, use exportProducts mutation. This mutation takes the following input: scope: determine which products should be exported. You can choose between exporting all products ( ALL), filtered products ( FILTER) or selected IDs ( IDS). You can find more details in the next sections. filter: defines filtering option. This field is optional but must be specified if your choose FILTERscope option. ids: a list of products IDs to export. This field is optional but must be specified if IDSin scopeis chosen. fileType: defines exported file type. You can choose between CSVand XLSXfile. exportInfo: determine exported fields. It takes the following input: attributes: list of attribute IDs to export (optional). warehouses: list of warehouse IDs to export (optional). fields: list of product and variants fields to export (optional, IDfield is exported by default). As a result, this mutation returns ExportFile object which is a Job instance. It corresponds to running export background worker, keeps task status, and created file. ExportFile object contains the following fields: id: a unique export file ID. Could be use to check export status. status: status of running job. user: instance of Userwho requested exporting products. Set to nullif export requested by App. app: instance of Appwhich requested exporting products. Set to nullif export requested by User. createdAt: the date and time when the export was started. updatedAt: the date and time when the job was last time updated. url: URL to the exported file. Set to nullwhen the file doesn't exist yet. events: a list of events associated with the export. In addition the following field is available on the mutation results: exportErrors: a list of errors that occurred during mutation execution. #Exporting all products The following example shows how to export all products with all available fields to CSV file. (For exporting to XLSX just replace CSV with XLSX in fileType field.) note Fields order defines order of headers in exported file. Exporting any of price fields adds a currency field by default. In response we get workers information: Once the task is finished, the url field will contain the URL address of the exported file. If export was triggered by User the link to the file will be sent by email to the requestor. To check if URL is ready you can just fetch ExportFile by ID with use of exportFile query: Example response with URL address to the file. #Exporting filtered products To export only filtered products you need to define FILTER scope and filter field. Lets see an example for exporting only published products from two specific categories: #Exporting products with specified IDs To export only products with provided ids you need to define IDS scope and ids field. Lets see an example: #Define warehouses and attributes to export To export data about specified warehouses and attributes you need to provide list of warehouses or attributes IDs. If you specify warehouses, then for all variants with stocks in given warehouse, data about stock quantity will be exported. It will be visible in column: warehouse-slug (warehouse quantity). If you specify attributes, then data about given attribute value for all products and variants will be exported (empty if not exists). Attributes value will be visible in column: attribute-slug (product attribute) for product attributes and attribute-slug (product attribute) for variant attributes. Below you can find example of exporting warehouses and attributes data.
https://docs.saleor.io/docs/developer/export-products/
2021-06-13T02:14:40
CC-MAIN-2021-25
1623487598213.5
[]
docs.saleor.io
General¶ You can assign content elements to grids. This is a powerful tool for individual designs. Available grids are - columns (1 to 6) - accordion - tab Workflow¶ You can nest gridelements. You can control the responsive behaviour with the properties columns arrangement. You can create every content element in the grid. A gridelement too.
https://docs.typo3.org/p/netzmacher/start/master/en-us/Users/BestPractice/Layouts/Element/General/Index.html
2021-06-13T02:40:29
CC-MAIN-2021-25
1623487598213.5
[]
docs.typo3.org
High availability groups use virtual IP addresses (VIPs) to provide active-backup access to Gateway Node or Admin Node services. An HA group consists of one or more network interfaces on Admin Nodes and Gateway Nodes.. When creating an HA group, you specify one interface to be the preferred Master. The preferred Master is the active interface unless a failure occurs that causes the VIP addresses to be reassigned to a Backup interface. When the failure is resolved, the VIP addresses are automatically moved back to the preferred Master. If the HA group includes interfaces from more than two nodes, the active interface might move to any other node's interface during failover.
https://docs.netapp.com/sgws-115/topic/com.netapp.doc.sg-admin/GUID-5C2D8F65-7F8A-4418-BFC1-C506547325B4.html?lang=en
2021-06-13T01:47:28
CC-MAIN-2021-25
1623487598213.5
[]
docs.netapp.com
Code reviews Team Development administrators can require that pushes undergo code review before accepting pushes. When code review is enabled, pushing a change to the parent instance triggers the code review workflow. By default, users with the teamdev_code_reviewer role receive notifications to review changes and can approve or reject changes. The Team Development Code Reviewers has the teamdev_code_reviewer role. For each change, reviewers can see the following information. Which remote instance the pushed change comes from. Who pushed the change to the parent. What the change is called. When the change was created. Which versions the change includes. Reviewers must approve or reject a push from the Team Development application. While changes are being reviewed on the parent instance, a child instance cannot do the following activities involving the parent instance: Push changes to the parent instance. Pull changes from the parent instance. Reconcile changes with the parent instance. Change the parent instance to another instance. Delete the remote instance record for the parent instance. Related tasksTeam Development processRelated conceptsTeam Development overviewTeam Development setupCode review notificationsCode review workflowExclusion policiesInstance hierarchiesPulls and pushesTeam Development rolesVersionsVersions and local changes
https://docs.servicenow.com/bundle/quebec-application-development/page/build/team-development/concept/c_CodeReview.html
2021-06-13T03:20:52
CC-MAIN-2021-25
1623487598213.5
[]
docs.servicenow.com
Pass your actual test with our SAP C_MDG_1909 training material at first attempt Last Updated: Jun 11, 2021 No. of Questions: 145 Questions & Answers with Testing Engine Latest Version: V12.35 Download Limit: Unlimited We provide the most up to date and accurate C_MDG_1909 questions and answers which are the best for clearing the actual test. Instantly download of the SAP SAP Certified Application Associate - SAP Master Data Governance exam practice torrent is available for all of you. 100% pass is our guarantee of C_MD_MDG_1909 actual test that can prove a great deal about your professional ability, we are here to introduce our SAP Certified Application Associate C_MDG_1909 practice torrent to you. With our heartfelt sincerity, we want to help you get acquainted with our C_MDG_1909 exam vce. The introduction is mentioned as follows. Our C_MDG_1909 latest vce team with information and questions based on real knowledge the exam required for candidates. All these useful materials ascribe to the hardworking of our professional experts. They not only are professional experts dedicated to this C_MDG_1909 training material painstakingly but pooling ideals from various channels like examiners, former candidates and buyers. To make the C_MDG_1909 actual questions more perfect, they wrote our C_MDG_1909 prep training with perfect arrangement and scientific compilation of messages, so you do not need to plunge into other numerous materials to find the perfect one anymore. They will offer you the best help with our C_MDG_1909 questions & answers. We offer three versions of C_MDG_1909 practice pdf for you and help you give scope to your initiative according to your taste and preference. Tens of thousands of candidates have fostered learning abilities by using our C_MDG_1909 updated torrent. Let us get to know the three versions of we have developed three versions of C_MDG_1909 training vce for your reference. The PDF version has a large number of actual questions, and allows you to take notes when met with difficulties to notice the misunderstanding in the process of reviewing. The APP version of SAP Certified Application Associate C_MD_MDG_1909 free pdf maybe too large to afford by themselves, which is superfluous worry in reality. Our C_MDG_1909 exam training is of high quality and accuracy accompanied with desirable prices which is exactly affordable to everyone. And we offer some discounts at intervals, is not that amazing? As online products, our C_MDG_1909 : SAP Certified Application Associate - SAP Master Data Governance useful training can be obtained immediately after you placing your order. It is convenient to get. Although you cannot touch them, but we offer free demos before you really choose our three versions of C_MDG_1909 practice materials. Transcending over distance limitations, you do not need to wait for delivery or tiresome to buy in physical store but can begin your journey as soon as possible. We promise that once you have experience of our C_MDG_1909 practice materials once, you will be thankful all lifetime long for the benefits it may bring in the future.so our SAP C_MDG_1909 practice guide are not harmful to the detriment of your personal interests but full of benefits for you. Veronica Andre Bernard Christopher Edward Haley Exam4Docs is the world's largest certification preparation company with 99.6% Pass Rate History from 69850+ Satisfied Customers in 148 Countries. Over 69850+ Satisfied Customers
https://www.exam4docs.com/sap-certified-application-associate-sap-master-data-governance-accurate-pdf-12585.html
2021-06-13T03:18:28
CC-MAIN-2021-25
1623487598213.5
[]
www.exam4docs.com
Video tutorial Some BSC nodes are slow to retrieve media from the blockchain. To fix this head to Under recommended copy the second link : Alternatively you can use this RPC: Now head to your metamask settings Click on Networks now change your RPC url to this one. Refresh your page and everything should load fine. If you still run into issues then contact admin in telegram for support.
https://docs.degenerate.money/tutorials/im-getting-a-blank-screen
2021-06-13T02:51:10
CC-MAIN-2021-25
1623487598213.5
[]
docs.degenerate.money
The nodes currently running in the PoA mainnet can be seen at the Dock Network telemetry. Our fork of Polkadot-js apps showing block explorer and other tools is here. The validators producing blocks in a round-robin fashion in the explorer. To query the balance of an account, use the system module and not the balances module. The decimal places and token symbol are not part of the testnet chain spec and that makes the transaction fees event ( TxnFeesGiven) show a fees of 0. This will be fixed in the mainnet but for the time being SDK can be used to query the events with correct values.
https://docs.dock.io/validators/poa
2021-06-13T03:10:11
CC-MAIN-2021-25
1623487598213.5
[]
docs.dock.io
Pass your actual test with our Cisco 300-615 training material at first attempt Last Updated: Jun 10, 2021 No. of Questions: 99 Questions & Answers with Testing Engine Latest Version: V14.35 Download Limit: Unlimited We provide the most up to date and accurate 300-615 questions and answers which are the best for clearing the actual test. Instantly download of the Cisco 300-615 exam practice torrent is available for all of you. 100% pass is our guarantee of 300-615 Troubleshooting Cisco Data Center Infrastructure accurate questions with the best reputation in the market instead can help you ward off all unnecessary and useless materials and spend all limited time on practicing most helpful questions as much as possible. To get to know more about their features of CCNP Data Center Troubleshooting Cisco Data Center Infrastructure practice torrent, follow us as passages mentioned below. To candidates saddled with burden to exam, our Troubleshooting Cisco Data Center Infrastructure pdf vce is serving as requisite preparation for you. Our 300-615 CCNP Data Center latest torrent like others. With the effective Troubleshooting Cisco Data Center Infrastructure practice pdf like us you can strike a balance between life and study, and you can reap immediate harvest by using our Troubleshooting Cisco Data Center Infrastructure updated vce. With passing rate up to 98-100 percent, our Cisco study guide has help our customers realized their dreams as much as possible. If you master the certificate of the Troubleshooting Cisco Data Center Infrastructure Troubleshooting Cisco Data Center Infrastructure prep training, you can get full refund without any reasons or switch other versions freely. We think of writing the most perfect Troubleshooting Cisco Data Center Infrastructure 300-615 practice questions, who are staunch defender to your interests. What is more, we have optimized the staff and employees to choose the outstanding one to offer help. It is a win-win situation for you and our company to pass the Troubleshooting Cisco Data Center Infrastructure practice exam successful. So we never stop the pace of offering the best services and 300-615 free questions. That is exactly the aims of our company in these years. Over 69850+ Satisfied Customers Ian Leo Myron Reginald Tom Adelaide Exam4Docs is the world's largest certification preparation company with 99.6% Pass Rate History from 69850+ Satisfied Customers in 148 Countries.
https://www.exam4docs.com/troubleshooting-cisco-data-center-infrastructure-accurate-questions-11189.html
2021-06-13T01:28:36
CC-MAIN-2021-25
1623487598213.5
[]
www.exam4docs.com
Go to the Start Menu and search for Proxy, then after click on Change proxy settings. Go to the Manual proxy setup section and enable Use a proxy server. Insert your proxy IP into the address section and then the proxy port into the port section. If you need a username and password for the proxy, Windows will automatically ask for it in a popup. If you are using an IP authenticated proxy, you should be done. If you are using a user:pass proxy please follow the instructions below. Go to the Start Menu and search for "Manage Windows Credentials" The click on "Add a Windows credential" Then you should be able to add the IP of the proxy (without the port) and the credentials. After this, click OK and you're done. Example of the proxy structure. 123.123.123.123:1234:user:pass IP: 123.123.123.123 Port: 1234 Username: user Password: pass If needed, do a reboot/restart of the server.
https://docs-servers.zesty.group/basics/how-to-setup-a-proxy-on-a-server
2021-06-13T02:48:08
CC-MAIN-2021-25
1623487598213.5
[]
docs-servers.zesty.group
Airdrop Currently and in the future, we regularly launch airdrop campaigns or other contests. Stay tuned on Twitter and Telegram and our news sites to receive the latest news! Presale Comos Token: No Pre-sale, fair launch. When developing new projects, we can presale a certain amount for users to hold.
https://docs.comos.finance/community-social/campaigns
2021-06-13T03:32:25
CC-MAIN-2021-25
1623487598213.5
[]
docs.comos.finance
More cleanup of ros stuff. preparing 0.1.3. [alexv] Improved setup to do releases. removed ros files from master branch. [alexv] Improve self test. [AlexV] Reviewing tox and tests. [AlexV] Refining tox test command, importing more from __future__. [AlexV] Making check for string work with python3. [AlexV] Adding .idea folder to gitignore. [AlexV] Removing ROS and not using site-package, this is a pure python package. [alexv] Revert "improving travis files to test catkin_pip build with rosdistros." [alexv] This reverts commit 3c3bdd1d65f28f24bf3891ff1567e084b0dfb6bf. Improving travis files to test catkin_pip build with rosdistros. [alexv]
https://docs.ros.org/en/kinetic/changelogs/pyros_config/changelog.html
2021-06-13T02:14:54
CC-MAIN-2021-25
1623487598213.5
[]
docs.ros.org
Help Center Local Navigation Overview The decor sample application demonstrates how to create custom fields by specifying their border, padding, color, and background attributes. You can use the sample application to create fields that have the following attributes: Next topic: Featured classes Was this information helpful? Send us your comments.
http://docs.blackberry.com/zh-cn/developers/deliverables/9076/Decor_sample_app_overview_672047_11.jsp
2014-09-16T01:07:43
CC-MAIN-2014-41
1410657110730.89
[]
docs.blackberry.com
Message-ID: <1020827768.3770.1410831124156.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_3769_888850900.1410831124155" ------=_Part_3769_888850900.1410831124155 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Plugin to support decoding JP2 and (limited) JPX files by means of eithe= r JJ2000, Jasper or Kakadu.=20 =20 IP Check: review.apt added, all headers are in place Quality Assurance: More than 60% test coverage reported = by cobertura. Stability: No planned API changes = Supported: Documents available. Module maintainer does wat= ches user list, answers email. The aim of this module is to allow access JPEG2K data using the Kakadu driver via the imageio-ext-kakadujp2 plugin we have prepared o= n imageio-ext (Which uses JNI) when the Kakadu SDK can be found in the path= ; otherwise the standard JAI ImageIO JPEG2K plugin will be used. Notice tha= o= n some of the key objects introduced by Simone on its imagemosaic plugin su= ch as RasterManager, RasterLayerRequest, RasterLayerResponse which are prep= osed to handle any control/logics involving resolutions/envelope/overviews/= crop/requests/... management. The idea is to proceed with improvements and tests on the previously int= roduced concepts/objects in order to extract a base architecture which may = be shared by plugins to access coverages. Example of future tasks dur= ing the development of the plugin include: Kakadu is a powerful implementation of the JPEG2000 stand= ard. It allows to build a set of JNI DLLs which may be used via Java = through a Jar package containing bindings to the native libs. Our ImageIO-E= xt kakadu plugin is based on this. (The imageio-ext site already cont= ains information about the Kakadu capabilities as well as instructions on h= ow.
http://docs.codehaus.org/exportword?pageId=123667549
2014-09-16T01:32:04
CC-MAIN-2014-41
1410657110730.89
[]
docs.codehaus.org
changes.mady.by.user Jan Bartel Saved on May 01, 2008 ... Setter Description The number of thread dedicated to accepting incoming connections. Number of connection requests that can be queued up before the operating system starts to send rejections. Sets the priority of the acceptor threads relative to the other threads. The port to redirect to if there is a security constraint of CONFIDENTIAL. https by default Set the size of the buffer to be used for request and response headers. An idle connection will at most have one buffer of this size allocated. Default is 4K. The particular interface to listen on. If not set or 0.0.0.0, jetty will listen on port on all interfaces. The port to redirect to if there is a security constraint of INTEGRAL. Set the number of connections, which if exceeded places this connector in a low resources state. This is not an exact measure as the connection count is averaged over the select sets. When in a low resources state, different idle timeouts can apply on connections (see lowResourcesMaxIdleTime). Set the period in ms that a connection is allowed to be idle when this there are more than lowResourcesConnections connections. This allows the server to rapidly close idle connections in order to gracefully handle high load situations. Set the maximum Idle time for a connection, which roughly translates to the Socket.setSoTimeout(int) call, although with NIO implementations other mechanisms may be used to implement the timeout. The max idle time is applied: when waiting for a new request to be received on a connection; when reading the headers and content of a request; when writing the headers and content of a response. Jetty interprets this value as the maximum time between some progress being made on the connection. So if a single byte is read or written, then the timeout (if implemented by jetty) is reset. However, in many instances, the reading/writing is delegated to the JVM, and the semantic is more strictly enforced as the maximum time a single read/write operation can take. Note, that as Jetty supports writes of memory mapped file buffers, then a write may take many 10s of seconds for large content written to a slow device. The name of the connector. Can be used to make a WebAppContext respond only to requests on the named connector via the [WebAppContext.setConnectorNames(String[])] method The port to listen on. See also host Set the size of the content buffer for receiving requests. These buffers are only used for active connections that have requests with bodies that will not fit within the header buffer (see headerBufferSize). Default is 8K. Set the size of the content buffer for sending responses. These buffers are only used for active connections that are sending responses with bodies that will not fit within the header buffer. Default is 32K. If true, request IP addresses will be resolved to host names True if the the server socket will be opened in SO_REUSEADDR mode Sets SO_LINGER on the connection socket. Disabled by default. If true, enables statistics collection on connections see Statistics For nio connectors, determines whether direct byte buffers will be used or not. The default is true. Sets the thread pool instance. By default this is the thread pool set on the org.mortbay.jetty.Server, and is a org.mortbay.thread.QueuedThreadPool instance. Powered by a free Atlassian Confluence Open Source Project License granted to Codehaus. Evaluate Confluence today.
http://docs.codehaus.org/pages/diffpages.action?pageId=89554984&originalId=80380047
2014-09-16T00:57:34
CC-MAIN-2014-41
1410657110730.89
[]
docs.codehaus.org
23.Sets the orientation of the panel, switching it from behaving like a panel:horizontal-dragable<%> and panel:vertical-dragable<%>...
http://docs.racket-lang.org/framework/Panel.html
2014-09-16T00:52:58
CC-MAIN-2014-41
1410657110730.89
[]
docs.racket-lang.org
public abstract class AbstractCachingViewResolver extends WebApplicationObjectSupport implements ViewResolver ViewResolverimplementations. Caches Viewobjects) logger AbstractCachingViewResolver() public void setCache(boolean cache) Default is "true": caching is enabled. Disable this only for debugging and development. Warning: Disabling caching can severely impact performance.() public View resolveViewName(java.lang.String viewName, java.util.Locale locale) throws java.lang) java.lang.Exception- if the view cannot be resolved (typically in case of problems creating an actual View object) protected java.lang.Object getCacheKey(java.lang.String viewName, java.util.Locale locale) Default is a String consisting of view name and locale suffix. Can be overridden in subclasses. Needs to respect the locale in general, as a different locale can lead to a different view resource. public void removeFromCache(java.lang.String viewName, java.util(java.lang.String viewName, java.util.Locale locale) throws java.lang) java.lang.Exception- if the view couldn't be resolved loadView(java.lang.String, java.util.Locale) protected abstract View loadView(java.lang.String viewName, java.util.Locale locale) throws java.lang) java.lang.Exception- if the view couldn't be resolved resolveViewName(java.lang.String, java.util.Locale)
http://docs.spring.io/spring-framework/docs/3.2.0.M2/api/org/springframework/web/servlet/view/AbstractCachingViewResolver.html
2014-09-16T01:22:14
CC-MAIN-2014-41
1410657110730.89
[]
docs.spring.io
Once the request is complete you can see the response in the Response panel of the tab. If you dont see the Response panel the the API request might have failed and you should see an error message like this. If a response is returned by the server then you can see under the Body tab in the Response panel. You can also see the Status Code & Status Text for the response along with the Time taken for the request to complete at the top right corner of the Response panel. Under Body tab there are 3 different tabs to see your response. Pretty - Formats and beautifies the response to make it more user readable. Raw - Shows the response as it is received from the server Preview - If the returned response has html content or is a image/audio/video the you can preview it under this tab Test Builder: Provides an feature rich UI to build Tests based on your response You can see the headers returned by the server under Headers tab. You can add test cases to your API before running it under the Scripts tab before making a request. Once a response is received your test cases will be executed and results will be shown in the Test Cases tab in the Response panel. By default it will show all test results. To see only Passed or Failed results you can use the filter beside the results. Learn how to add Test Cases and perform end to end testing of your APIs. Ex: apic.test("Check that Status code is 201 (Created)", function(){expect($response.status).to.be.eql(201);});apic.test("Status Text is Created", function(){expect($response.statusText).to.be.eql("Created");});apic.test("Time taken is less than or equals to 2 sec", function(){expect($response.timeTaken).to.be.lte(2000);})apic.test("Response raw body contains string 'your_string'", function(){expect($response.body).to.include("your_string");});apic.test("The value of response header Content-Type is application/json", function(){expect($response.headers.getValue("content-Type")).to.be.eql("application/json");}); The same code above can also be written as: Apic allows you to debug your test scripts by logging your variables. If you want to debug some values in your script the you can do that by adding logs. All your added logs will be shown in the Logs tab Ex //To see the value of status codelog("Status code is: " + $response.status);//to see the value of a specific headerlog("Value for header Content-Type: " + $response.headers.getValue("content-Type"));// to see the raw responselog($response.body);/*if your response is a JSON data then you can access individual fields in your response{"errCode":400,"msg": "Missing todo name"}*/// for the above response you can access the msg & errCode property by usinglog($response.data.msg);log($response.data.errCode); APIC allows you to save your API response along with your API request. To save your current API response, click on the Save response button in Response panel. To view the saved response for a request click on Load saved request. One the saved response is loaded you can start adding tests by clicking on the Build API Tests which open the Test Builder.
https://docs.apic.app/tester/decoding-the-response
2021-04-10T18:46:10
CC-MAIN-2021-17
1618038057476.6
[]
docs.apic.app
Enrollment Settings Enrollment is the process of adding computers and mobile devices to Jamf School. The Enrollment settings in Jamf School allow you to specify settings that apply to all enrolled devices, such as the following: Configure which locations the enrollment settings apply to Automatically enable Activation Lock Enable Apple Configurator and on-device enrollment authentication Configure location settings Rename devices Configuring Enrollment Settings Requirements To configure the Locations and Location Options settings, you must set up locations in Jamf School. For more information, see Creating Locations in Jamf School. To configure authentication during enrollment, you must configure the Authentication settings. For more information, see the following sections in this guide: Procedure In Jamf School, navigate to Organization > Settings in the sidebar. Click Enrollment. Choose one of the following: To allow each location in your environment to configure independent Enrollment settings, choose "Do not force these settings for locations" from the pop-up menu at the top of the pane. To ensure all locations in your environment use these Enrollment settings, choose "Force these settings for locations" from the pop-up menu at the top of the pane. To configure which regions devices can enroll from, choose an option from the Enrollment Restrictions pop-up menu. To enable Activation Lock on devices enrolled using on-device enrollment or User Enrollment, select the Automatically enable Activation Lock on non-DEP devices checkbox. To assign the currently logged in user to the placeholder device, select the Keep current assigned owner if no owner is configured in a placeholder checkbox. To automatically move enrolled devices to the trash after the MDM profile is removed, choose an option from the Automatic Trash pop-up menu. To allow users to enter any password for the Apple School Manager password, select the Disable password check for users that have been imported from ASM checkbox. Note: It is recommended that you select this setting if you are using local authentication. To allow users to authenticate during Apple Configurator enrollment, select the Apple Configurator Authentication checkbox. Note: Users authenticate using the credentials configured in the Authentication settings in Jamf School. To view these settings, navigate to Organization > Settings > Authentication. To allow users to authenticate during on-device enrollment, select the On Device Enrollment Authentication checkbox. Note: Users authenticate using the credentials configured in the Authentication settings in Jamf School. To view these settings, navigate to Organization > Settings > Authentication. Configure the locations setting by choosing a setting in the Locations settings. To automatically rename devices without a placeholder name or a name specified in an Automated Device Enrollment profile, select the Enable Renaming Devices checkbox and enter the new name in the Rename to... field. Click Save. Related Information For related information, see the following sections in this guide: Payload Variables Find out what variables you can use when renaming devices. Enrollment Methods Learn more about the different methods you can use to enroll devices in Jamf School.
https://docs.jamf.com/jamf-school/deploy-guide-docs/Enrollment_Settings.html
2021-04-10T19:50:55
CC-MAIN-2021-17
1618038057476.6
[]
docs.jamf.com
Index Exchange Features "Send All Bids" Ad Server KeysThese are the bidder-specific keys that would be targeted within GAM in a Send-All-Bids scenario. GAM truncates keys to 20 characters. Overview Module Name: Index Exchange Adapter Module Type: Bidder Adapter Maintainer: [email protected] Description Publishers may access Index Exchange’s (IX) network of demand sources through our Prebid.js and Prebid Server adapters. Both of these modules are GDPR and CCPA compliant. IX Prebid.js Adapter Our Prebid.js adapter is compatible with both the older ad unit format where the sizes and mediaType properties are placed at the top-level of the ad unit, and the newer format where this information is encapsulated within the mediaTypes object. We recommend that you use the newer format when possible as it will be better able to accommodate new feature additions. If a mix of properties from both formats is present within an ad unit, the newer format’s properties will take precedence. Here are examples of both formats. Older Format var adUnits = [{ // ... sizes: [ [300, 250], [300, 600] ] // ... }]; Newer Format var adUnits = [{ // ... mediaTypes: { banner: { sizes: [ [300, 250], [300, 600] ] }, video: { context: 'instream', playerSize: [ [1280, 720] ] } }, // ... }]; Supported Media Types (Prebid.js) Supported Media Types (Prebid Server) Bid Parameters Each of the IX-specific parameters provided under the adUnits[].bids[].params object are detailed here. Banner Video Setup Guide Follow these steps to configure and add the IX module to your Prebid.js integration. The examples in this guide assume the following starting configuration (you may remove banner or video, if either does not apply). In regards to video, context can either be 'instream' or 'outstream'. Note that outstream requires additional configuration on the adUnit. var adUnits = [{ code: 'banner-div-a', mediaTypes: { banner: { sizes: [ [300, 250], [300, 600] ] } }, bids: [] }, { code: 'video-div-a', mediaTypes: { video: { context: 'instream', playerSize: [ [1280, 720] ] } }, bids: [] }]; 1. Add IX to the appropriate ad units For each size in an ad unit that IX will be bidding on, add one of the following bid objects under adUnits[].bids: { bidder: 'ix', params: { siteId: '123456', size: [300, 250] } } Set params.siteId and params.size in each bid object to the values provided by your IX representative. Examples Banner: var adUnits = [{ code: 'banner-div-a', mediaTypes: { banner: { sizes: [ [300, 250], [300, 600] ] } }, bids: [{ bidder: 'ix', params: { siteId: '123456', size: [300, 250] } }, { bidder: 'ix', params: { siteId: '123456', size: [300, 600] } }] }]; Video (Instream): var adUnits = [{ code: 'video-request-a', mediaTypes: { video: { context: 'instream', playerSize: [ [1280, 720] ] } }, bids: [{ bidder: 'ix', params: { siteId: '123456', size: [1280, 720], video: { mimes: [ 'video/mp4', 'video/webm' ], minduration: 0, maxduration: 60, protocols: [6] } } }] }]; Please note that you can re-use the existing siteId within the same flex position. Video (Outstream): Note that currently, outstream video rendering must be configured by the publisher. In the adUnit, a renderer object must be defined, which includes a url pointing to the video rendering script, and a render function for creating the video player. See for more information. var adUnits = [{ code: 'video-div-a', mediaTypes: { video: { context: 'outstream', playerSize: [[640, 360]] } }, renderer: { url: '', render: function (bid) { ... } }, bids: [{ bidder: 'ix', params: { siteId: '123456', size: [640, 360], video: { mimes: [ 'video/mp4', 'video/webm' ], minduration: 0, maxduration: 60, protocols: [6] } } }] }]; Video Caching Note that the IX adapter expects a client-side Prebid Cache to be enabled for video bidding. pbjs.setConfig({ usePrebidCache: true, cache: { url: '' } }); User Sync Add the following code to enable user sync. IX strongly recommends enabling user syncing through iFrames. This functionality improves DSP user match rates and increases the IX bid rate and bid price. Be sure to call pbjs.setConfig() only once. pbjs.setConfig({ userSync: { iframeEnabled: true, filterSettings: { iframe: { bidders: ['ix'], filter: 'include' } } } }); The detectMissingSizes feature By default, the IX bidding adapter bids on all banner sizes available in the ad unit when configured to at least one banner size. If you want the IX bidding adapter to only bid on the banner size it’s configured to, switch off this feature using detectMissingSizes. pbjs.setConfig({ ix: { detectMissingSizes: false } }); OR pbjs.setBidderConfig({ bidders: ["ix"], config: { ix: { detectMissingSizes: false } } }); 2. Include ixBidAdapter in your build process When running the build command, include ixBidAdapter as a module, as well as dfpAdServerVideo if you require video support. gulp build --modules=ixBidAdapter,dfpAdServerVideo,fooBidAdapter,bazBidAdapter If a JSON file is being used to specify the bidder modules, add "ixBidAdapter" to the top-level array in that file. [ "ixBidAdapter", "dfpAdServerVideo", "fooBidAdapter", "bazBidAdapter" ] And then build. gulp build --modules=bidderModules.json Setting First Party Data (FPD) FPD allows you to specify key-value pairs that are passed as part of the query string to IX for use in Private Marketplace Deals which rely on query string targeting for activation. For example, if a user is viewing a news-related page, you can pass on that information by sending category=news. Then in the IX Private Marketplace setup screens, you can create Deals which activate only on pages that contain category=news. Please reach out to your IX representative if you have any questions or need help setting this up. To include FPD in a bid request, it must be set before pbjs.requestBids is called. To set it, call pbjs.setConfig and provide it with a map of FPD keys to values as such: pbjs.setConfig({ ix: { firstPartyData: { '<key name>': '<key value>', '<key name>': '<key value>', // ... } } }); The values can be updated at any time by calling pbjs.setConfig again. The changes will be reflected in any proceeding bid requests. Setting a Server Side Timeout Setting a server-side timeout allows you to control the max length of time taken to connect to the server. The default value when unspecified is 50ms. This is distinctly different from the global bidder timeout that can be set in Prebid.js in the browser. To add a server-side timeout, it must be set before pbjs.requestBids is called. To set it, call pbjs.setConfig and provide it with a timeout value as such: pbjs.setConfig({ ix: { timeout: 50 } }); The timeout value must be a positive whole number in milliseconds. IX Prebid Server Adapter Publishers who would like to retrieve IX demand via a Prebid Server instance can do so by adding IX to the list of bidders for a Prebid Server bid request, with a valid site ID. For example: "imp": [ { "id": "test2", "banner": { "format": [ { "w": 300, "h": 600 } ] }, "ext": { "ix": { "siteId": "12345" } } } ] Important Prebid Server Note Any party operating their own hosted Prebid Server instances must reach out to IX ([email protected]) to receive approval and customized setup instructions. Please do not send Prebid Server requests without first contacting us – you will not receive bid responses. Additional Information Bid Request Limit If a single bid request to IX contains more than 20 impression requests (i.e. more than 20 objects in bidRequest.imp), only the first 20 will be accepted, the rest will be ignored. To avoid this situation, ensure that when pbjs.requestBid is invoked, that the number of bid objects (i.e. adUnits[].bids) with adUnits[].bids[].bidder set to 'ix' across all ad units that bids are being requested for does not exceed 20. Time-To-Live (TTL) Banner bids from IX have a TTL of 300 seconds while video bids have a TTL of 1 hour, after which time they become invalid. If an invalid bid wins, and its associated ad is rendered, it will not count towards total impressions on IX’s side. FAQs Why do I have to input size in adUnits[].bids[].params for IX when the size is already in the ad unit? There are two important reasons why we require it: An IX site ID maps to a single size, whereas an ad unit can have multiple sizes. To ensure that the right site ID is mapped to the correct size in the ad unit we require the size to be explicitly stated. An ad unit may have sizes that IX does not support. By explicitly stating the size, you can choose not to have IX bid on certain sizes that are invalid. How can I view the bid request sent to IX by Prebid.js? In your browser of choice, create a new tab and open the developer tools. In developer tools, select the network tab. Then, navigate to a page where IX is set up to bid. Now, in the network tab, search for requests to casalemedia.com/cygnus. These are the bid requests.
https://docs.prebid.org/dev-docs/bidders/ix
2021-04-10T19:56:55
CC-MAIN-2021-17
1618038057476.6
[]
docs.prebid.org
Constructor # This is the main entry point to communicate with Kuzzle. Every other object inherits properties from the Kuzzle object. Kuzzle(host, [options], [callback]) # Options # Notes: - the offlineModeoption only accepts the manualand autovalues Properties # Notes: - if connectis set to manual, the connectmethod will have to be called manually - the kuzzle instance will automatically queue all requests, and play them automatically once a first connection is established, regardless of the connector offline mode option values. - multiple methods allow passing specific volatiledata. These volatiledata will be merged with the global Kuzzle volatileobject when sending the request, with the request specific volatiletaking priority over the global ones. - the queueFilterproperty is a function taking a JSON object host, port, autoReconnect, reconnectionDelayand sslConnectionproperties will only take effect on next connectcall Callback response # If the connection succeeds, resolves to the Kuzzle object itself. If the connect option is set to manual, the callback will be called after the connect method is resolved. Usage # <?php use \Kuzzle\Kuzzle; $kuzzle = new Kuzzle('localhost', [ 'defaultIndex' => 'some index', 'port' => 7512 ]);
https://docs-v2.kuzzle.io/sdk/php/3/core-classes/kuzzle/constructor/
2021-04-10T19:27:56
CC-MAIN-2021-17
1618038057476.6
[]
docs-v2.kuzzle.io
The sum count of unique addresses holding at least X native units as of the end of that day. Only native units are considered (e.g., an address with less than X ETH but with more than X in ERC-20 tokens would not be considered). These metrics provide a count of addresses with balance by equal or higher than a native unit threshold The state of the ledger is the one at the last available block for that day. Only the native units balance is considered, L2 tokens (ERC-20, etc..) are not taken into account. The computation uses greater than or equal comparison: owning exactly 1 native unit qualifies an address for AdrBalNtv1Cnt For XRP, escrowed amounts are not taken into account. This metric is not available for assets that have full privacy, like Monero, Grin. For assets that have opt-in privacy features, like ZCash, it only takes the non-private activities into account. Released in the 4.0 release of NDP This is a potent set of metrics which can elucidate the dispersion of ownership of the address space in a cryptocurrency. The trend can demonstrate whether or not a cryptocurrency is in a concentrative or distributive phase. It should be noted that supply is arbitrary, and for large-cap assets varies between tens of millions to hundreds of billions; so unit dispersion is often not directly comparable between chains. Put otherwise: it is cheaper to accumulate addresses with 100 XRP than 100 BTC since those are so different in fiat terms. This metric can also be gamed to a degree by adding dust to many thousands of addresses.
https://docs.coinmetrics.io/info/metrics/AdrBalNtv1KCnt
2021-04-10T18:42:59
CC-MAIN-2021-17
1618038057476.6
[]
docs.coinmetrics.io
Domain separation and Financial Services Payment Operations Domain separation is supported for Financial Services Payment Payment Operations All Financial Services Operations applications are built on top of Customer Service Management (CSM) and use many CSM tables. The key reference tables are the customer tables such as Consumer, Account, and Contact, and these tables are domain-separated. Tables All new tables added in Payment Operations are domain-separated: sn_bom_payment_inquiry sn_bom_payment_inquiry_task sn_bom_payment_service sn_bom_payment_claim sn_bom_payment_claim_task sn_bom_checking_account sn_bom_saving_account Use cases Payment Inquiry Customers have the ability create a payment inquiry via the portal for the following use cases: Beneficiary Claim Non-Receipt (BCNR): The customer has sent a payment, but the intended recipient claims to have never received the money. Payment in Error (PiE) – The customer makes a mistake when sending a payment and is trying to retrieve the money. Branch workers and call center agents can create these inquiries on behalf of the customer. Payment Operations staff receive inquiries from their customers as well as from external banks. Internal inquiries come from the bank’s own customers. The recipient customer could be internal or external to the bank. The distinction between internal or external recipients is important because it determines which route Payment Operations takes to resolve the inquiry. External inquiries come from third-party banks, which means that the payment recipient is always internal.Note: There can never be a case where the inquiry is external and the recipient is external. Some inquiries may result in the creation of a claim. Payment Claim Inquiry agents can create a claim on behalf of a customer when the bank determines that the claim is valid and the customer is entitled to a refund. Payment Operations staff receive the claims either internally from an inquiry or from an external bank. When they receive the claim, they start determining where to get the refund. Internal claims come from customers of the bank either from an inquiry or directly from bank staff (Branch or Call Center). Agents can resolve the claim if they know where to get the refund. The refund could be either external (payment to a third-party bank customer) or internal (payment to the bank's customer). If the refund is internal, a Debit Approval must be created (see Debit Approval below). External claims come from third-party banks. The refund is always internal for external claims. Agents may need to create a Debit Approval for internal refunds (see Debit Approval below). Debit Approval Claim agents create Debit Approvals for customers to approve a refund from a claim. The customer can either accept the debit or dispute or reject it.
https://docs.servicenow.com/bundle/paris-financial-services-operations/page/product/fso-payment-operations/concept/domain-separation-financial-services-payment-operations.html
2021-04-10T19:18:11
CC-MAIN-2021-17
1618038057476.6
[]
docs.servicenow.com
Now Platform administration enterprise rather than individual departments or functions. Use the core capabilities to eliminate data silos by sharing information within a single data model. Extend the data model with a flexible table schema and reusable components. View and download the full infocard for a highlight of Now Platform administration features. Deliver a common set of core capabilities Provide users with high-performance business services that make work simpler, faster, and more productive. Eliminate silos with a single data model Prevent silos by sharing data across the enterprise from your instance. Extend the data model with a flexible table schema. Provide reliable and secure cloud services Provide reliable service from multi-instance cloud services. Secure data and communications with encryption. Deliver a common set of core capabilities The Now Platform supports your apps, business requirements, and workflows. With the Now Platform, you can configure global settings for the entire platform or specific applications. Eliminate silos with a single data model With the Now Platform, you can eliminate silos by sharing data between applications and departments. You can also configure and extend your data model with a flexible table schema. Provide reliable and secure cloud services The Now Platform meets your performance, reliability, privacy, and compliance requirements. With the cloud services of the Now Platform, you can ensure that your data is always secure, with each new release offering new security properties. Get started Learn about the Now Platform architecture. See this video: Learn about the Platform Architecture Learn about tables, records, and fields. See this video: Explains the function of tables, which is a fundamental component of the Now Platform. This video introduces table concepts and vocabulary needed to perform administrative, fulfillment, and managerial tasks in the system. Learn about upgrades and conversions. See Upgrades and conversions Applications and features Core configuration Currency administration Data management Domain separation Dynamic Translation Events Field administration Form administration Form configuration Integrate with third-party applications and data sources List administration Metrics Platform performance Platform security Search administration State Management System localization Table administration Time configuration Upgrades and conversions User administration
https://docs.servicenow.com/bundle/quebec-platform-administration/page/get-started/servicenow-overview/reference/r_AdministerServiceNow.html
2021-04-10T19:39:57
CC-MAIN-2021-17
1618038057476.6
[]
docs.servicenow.com
You can access ThoughtSpot via SSH at the command prompt and from a Web browser. Administrative access Each ThoughtSpot appliance comes pre-built with three default users. Contact your ThoughtSpot support team to get the passwords. Both the admin and thoughtspot user can SSH into the appliance. Once on the appliance, either user can do any of the following: The thoughtspot user is restricted to tscli commands that do not require sudo or root privileges.. Log in to the ThoughtSpot application include:Tip: While Internet Explorer is supported, using it is not recommended. Depending on your environment, you can experience performance or UI issues when using IE. To log in to ThoughtSpot from a browser: - Open the browser and type in the Web address for ThoughtSpot: http://<hostname_or_IP> - Enter your username and password and click Enter Now.
https://docs.thoughtspot.com/5.1/data-integrate/introduction/logins.html
2021-04-10T18:44:47
CC-MAIN-2021-17
1618038057476.6
[]
docs.thoughtspot.com
By default, STIG compliance is disabled on Address Manager and DNS/DHCP Server appliances and virtual machines. Once STIG compliance is enabled, however, you may want to disable STIG compliance when working on a STIG-compliant configuration before deploying the server, or if STIG-compliant features are enabled unintentionally. To disable STIG compliance: - Log in to the Address Manager Administration Console as the administrator. For more information on default login credential for Address Manager, refer to BlueCat default login credentials (you must be authenticated to view this topic). - From Main Session mode, type configure system and press ENTER. - Type set stig-compliance disable and press ENTER. Proteus:configure:system> set stig-compliance disable - At the prompt, type Y/y and press ENTER to confirm your selection. The Address Manager server restarts to implement the changes.Note: With STIG compliance and auditing disabled, you now have root access.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Disabling-STIG-compliance/9.0.0
2021-04-10T18:23:28
CC-MAIN-2021-17
1618038057476.6
[]
docs.bluecatnetworks.com
BMC Atrium Core Console client-side logging BMC Atrium Core logs the BMC Atrium Core Console client-side processing in the flashlog.txt file, which helps you to debug user interface errors. To log the client-side processing, install and configure the 7, 0, 14, 112 or newer version of Adobe Flash Player Debugger. The BMC Atrium Core Console requires a minimum of Adobe Flash Player version 9 to run. Note The Adobe Flash Player log combines messages from all its instances running on your computer. For example, if you were running two instances of Adobe Flash Player with CNN.com and BMC Atrium Core each on the same computer, you might find intermixed log messages from both these instances. To determine your Adobe Flash Player Debugger version Use the following methods to determine the version of the Adobe Flash Player Debugger that you have installed. If the version you determined using each method differs, the debug logging might encounter issues. In such a case, uninstall all versions of the Adobe Flash Player Debugger and install again. Determining Adobe Flash Player Debugger version To configure your Adobe Debug Flash Player Debugger When you install BMC Atrium Core, the following files are copied to the webapps directory of your installation for the client-side logging: - Readme.txt — Contains information about downloading, installing, and configuring the Adobe Flash debugger. mm.cfg — Contains the configuration information to enable logging on the client side. Copy this file to the location specified in the readme.txt file for your operating system. The location of the flashlog.txt log file depends on your operating system, as listed in the following table. Log file location by operating systems Each entry in the log provides the following details: - Timestamp — The date and time of the log entry. - Log Type — The type of log entry such as warning, error, or information. - Message — The message for the log entry.
https://docs.bmc.com/docs/ac91/bmc-atrium-core-console-client-side-logging-609847392.html
2021-04-10T19:54:01
CC-MAIN-2021-17
1618038057476.6
[]
docs.bmc.com
Navigating the interface TrueSight Server Automation contains global menus, toolbars, perspectives, views, objects, and content editors, referred to collectively as the console. You can use them to perform the tasks required to provision and manage your data center efficiently. See the following topics for more information about the console: Was this page helpful? Yes No Submitting... Thank you
https://docs.bmc.com/docs/tssa89/navigating-the-interface-808905210.html
2021-04-10T19:31:14
CC-MAIN-2021-17
1618038057476.6
[]
docs.bmc.com
rrricanes is a R library that extracts information from available archives on past and current tropical cyclones. Currently, archives date back to Data can be obtained for cyclones in the north Atlantic (considered the Atlantic Basin) and north-eastern Pacific (the East Pacific Basin from 140°W and eastward. Central Pacific data (140°W to 180°W) is included if issued by the National Hurricane Center (generally they’re issued by the Central Pacific Hurricane Center). This library parses the text advisories of all tropical cyclones since I wrote this package with the goal of consolidating messy text data into well-organized formats that can easily be saved to CSV, SQL and other data formats. You may explore some features of the package through the shinycanes beta web application (built with R Shiny). Generally speaking, there are five products available for tropical cyclones issued at 03:00, 09:00, 15:00 and 21:00 UTC; Storm Discussion - These are technical discussions centered on the current structure of the cyclone, satellite presentation, computer forecast model tendencies and more. Forecast/Adivsory - This data-rich product lists the current location of the cyclone, its wind structure, forecast and forecast wind structure. Public Advisory - These are general text statements issued for the public-at-large. Information in these products is a summary of the Forecast/Advisory product along with any watches and warnings issued, changed, or cancelled. Public Advisory products are the only regularly-scheduled product that may be issued intermittently (every three hours and, occasionally, every two hours) when watches and warnings are in effect. Wind Speed Probabilities - These products list the probability of a minimum sustained wind speed expected in a given forecast window. This product replaces the Strike Probabilities product beginning in 2006 (see below). Updates - Tropical Cyclone Updates may be issued at any time if a storm is an immediate threat to land or if the cyclone undergoes a significant change of strength or structure. The information in this product is general. Discontinued Products These products are included in the package though they have been discontinued at some point: Strike Probabilities - List the probability of a tropical cyclone passing within 65 nautical miles of a location within a forecast window. Replaced in 2006 by the Wind Speed Probabilities product. Position Estimates - Typically issued as a storm is threatening land but generally rare (see Hurricane Ike 2008, Key AL092008). It is generally just an update of the current location of the cyclone. After the 2011 hurricane season, this product was discontinued; Updates are now issued in their place. Please view the vignette ‘Getting Started’: Online documentation is also available. rrricanes requires an active internet connection as data is extracted from online sources. Linux users must also have the libgdal-dev, libproj-dev and libxml2-dev packages installed. To add rrricanesdata, a package of post-scraped datasets, install.packages("rrricanesdata", repos = "", type = "source") To use high resolution tracking maps you will need to install the rnaturalearthhires package. install.packages("rnaturalearthhires", repos = "", type = "source") rrricanes is currently only available in GitHub. It can be installed using the devtools package: devtools::install_github("ropensci/rrricanes", build_vignettes = TRUE)
https://docs.ropensci.org/rrricanes/
2021-04-10T18:31:48
CC-MAIN-2021-17
1618038057476.6
[]
docs.ropensci.org
pg_resgroup A newer version of this documentation is available. Use the version menu above to view the most up-to-date release of the Greenplum 6.x documentation. pg_resgroup Note: The pg_resgroup system catalog table is valid only when resource group-based resource management is active. The pg_resgroup system catalog table contains information about Greenplum Database resource groups, which are used for managing concurrent statements, CPU, and memory resources. This table, defined in the pg_global tablespace, is globally shared across all databases in the system.
https://gpdb.docs.pivotal.io/6-12/ref_guide/system_catalogs/pg_resgroup.html
2021-04-10T20:05:08
CC-MAIN-2021-17
1618038057476.6
[]
gpdb.docs.pivotal.io
diagnostic_updater contains tools for easily updating diagnostics. it is commonly used in device drivers to keep track of the status of output topics, device status, etc. diagnostic_updater contains assorted C++ classes to assist in diagnostic publication. These libraries are commonly used by device drivers as part of the diagnostics toolchain. The main parts of diagnostic_updater are: Example uses of these classes can be found in example.cpp.
http://docs.ros.org/en/groovy/api/diagnostic_updater/html/index.html
2021-04-10T19:37:44
CC-MAIN-2021-17
1618038057476.6
[]
docs.ros.org
I am trying to use an Azure AD App Registration with a WPF application to upload and download files using Sharepoint Online. I used to authenticate. I have set up the login, and this works without problem. I get the tokens back. (Microsoft.Identity.Client 4.6) I added the sharepoint graph API delegated "Sites.FullControl.All" I tried using the nuget packages SharePointPnPCoreOnline and Microsoft.SharePointOnline.CSOM with the access token from the Azure AD login. The App registration is created in the same tenant as the sharepoint. No matter what I try, I cannot get this to work. (401 returned) I want to CRUD files in a sharepoint List. Have you any ideas, how I could solve this, examples? Or is there any docs for this? Regards Damien
https://docs.microsoft.com/en-us/answers/questions/1101/wpf-azure-ad-app-registration-login-api-request-wi.html
2021-04-10T20:37:10
CC-MAIN-2021-17
1618038057476.6
[]
docs.microsoft.com
narainabhishek Technology Enthusiast - Windows Devices App Dev, Cross-platform, Xamarin, HTML5, Client Technologies Microsoft IoT Camp–Setup This blog is to ensure that you have the appropriate environment setup before your come for the... Author: Abhishek_Narain Date: 03/13/2016 The 1st ever Internet of Things (IoT) DevCamp by Microsoft in India Last month we conducted the 1st ever Internet of Things Camp in India. It was indeed a great... Author: Abhishek_Narain Date: 07/06/2015
https://docs.microsoft.com/en-us/archive/blogs/narainabhishek/
2021-04-10T20:09:05
CC-MAIN-2021-17
1618038057476.6
[]
docs.microsoft.com
Controllino_mini ID for board option in “platformio.ini” (Project Configuration File): [env:controllino_mini] platform = atmelavr board = controllino_mini You can override default Controllino Mini settings per build environment using board_*** option, where *** is a JSON object path from board manifest controllino_mini.json. For example, board_build.mcu, board_build.f_cpu, etc. [env:controllino_mini] platform = atmelavr board = controllino_mini ; Mini has on-board debug probe and IS READY for debugging. You don’t need to use/buy external debug probe.
https://docs.platformio.org/en/stable/boards/atmelavr/controllino_mini.html
2021-04-10T18:47:51
CC-MAIN-2021-17
1618038057476.6
[]
docs.platformio.org
As your business grows, you may need to create a complex but accurate points system for your potential customers, as well as the existing ones. In the CRM context, you need to know which leads have the highest potential of being converted(hot leads) and which leads still need to be qualified(cold leads). How do you do that? Without a proper system, building an accurate points system would prove to be a challenge. With Flexie CRM, you can build a workflow to help you with the points system. To illustrate how you can do that, let’s consider a concrete business scenario. Let’s say you’re an e-commerce store. You have a website, you have a wide presence on social media, but you’re still struggling. People who purchase from you may come from your website, from your ads campaigns on social media, etc. Even if you know the source where they came from, how do you segment them? How can you track them throughout the whole sales journey? How do you automate everyday tasks? What if you want to know which customers bring the highest value to you? Flexie CRM gives answers to all these questions. To understand how you can adjust leads points based on dynamic conditions, let’s first build a workflow. Go to Workflows and on the drop-down menu you’ll see, click Manage Workflows. Next, go to the upper-right corner of the screen and click New. Select Lead as the the workflow entity type and then click Select. Give the workflow a name and a description, choose a category (optional) and then click Launch Workflow builder. Next, choose the workflow source. Let’s say you want to build adjust points of a specific group of leads. As a workflow source, choose Entity lists. Choose the group(s) of leads you want this workflow to run on, and then click Add. Let’s say you want the system to execute actions upon the lead opening a marketing email. Go to Watch For events and click Open Marketing Email. Give it a name and then click Add. You want to add 7 points to leads any time they open your marketing emails, but you want to add conditions. For example, the system will add 7 points to leads upon them opening a marketing email, only if they came from your website. Head over to Decisions and click Conditions. You will be greeted with the following form: Give it a name and then click the Add rule button. Next, go to Leads Fields and choose Source. Next, go to the right and choose equals. Next, select Website. The condition reads like this: source equals Website. Go ahead and click Add. Connect the Conditions button with the Open Marketing Email event. Now, you want the system to execute actions. Specifically, you want the CRM to add 7 points to each lead that opens your marketing email, only if they come from your e-commerce website. Go to Actions and click the Adjust lead points button. Give it a name, choose When Triggered and set 7 as the number of points to be added. Next, click the Add button. Connect the Adjust leads button with the Conditions button. What if the leads come from another source? Say, a referral. You want to give more weight to your e-commerce website, and less to other sources. For example, if leads come from referrals, you want the system to add 3 points to them. Once again, go to Actions and click Adjust lead points. Give it a name, choose when Triggered and set 3 points. Next, click Add. Once again, connect it with the Conditions button. So far, the workflow will do this: when the selected lists of leads open your marketing email, they will be added 7 points if they come from your website, and 3 points if they come from other sources like a referrals, cold calls, meetings, etc. But what if you want to execute other actions? For example, any time the leads open a marketing email, and their city equals Seattle, you want the system to add 10 points to each of them. You want to capture and nurture more leads from Seattle. Also, you want to send a follow-up email to them. Once again, go to Conditions. Give it a name, click Add rule and choose City, equal and Seattle respectively. Next, click Add. You want to add points and send a follow-up email. First, go to Actions and click Adjust lead points. Give it a name, choose When Triggered and set 10 points. Next, click the Add button. Connect it with the Condition we’ve just set, including only leads that reside in the city of Seattle. You also want to send an automatic follow-up email to these leads, but you want to wait 1 day. Go to Actions and click Send email. Give it a name, choose Wait and here you should set the number 1, and choose day(s) on the right. Choose the email you want to send, the email field and then click Add. In this second scenario, leads which open your marketing email, and reside in Seattle, will be added 10 points and will receive a follow-up email(one day after they open your marketing email). As you can see, adjusting lead points in Flexie CRM is simple and intuitive. You can build a more complex workflow, according to your needs and business philosophy. The points system helps you determine which leads have the highest priority and which have lower priority, so you can focus on highest value leads first. To stay updated with the latest features, news and how-to articles and videos, please join our group on Facebook, Flexie CRM Academy.
https://docs.flexie.io/docs/setting-up-workflows/adjust-lead-points-based-on-dynamic-conditions-2/
2021-01-16T04:55:57
CC-MAIN-2021-04
1610703500028.5
[array(['https://flexie.io/wp-content/uploads/2017/08/Capture-34.png', 'Manage workflows'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/New.png', 'New'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Lead.png', 'Select workflow entity type'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-35.png', 'New workflow form'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Entity-list.png', 'Entity lists'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-36.png', 'Entity list form'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/10/Capture-15.png', 'Opens marketing email'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/10/Capture-16.png', 'Opens marketing email'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Conditions.png', 'Conditions'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-37.png', 'Conditions form'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-30.png', 'Rules'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-38.png', 'Workflow'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-5.png', 'Adjust lead points'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-31.png', 'Add points'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-39.png', 'Workflow'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/10/Capture-17.png', 'Adjust lead points'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-40.png', 'Add points'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-41.png', 'Workflow'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Conditions.png', 'Conditions'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-42.png', 'Rules'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-5.png', 'Adjust lead points'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-32.png', 'Add points'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-44.png', 'Workflow'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-13.png', 'Send email'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-33.png', 'Send follow-up email'], dtype=object) ]
docs.flexie.io
TableView.Language property (Outlook) Returns or sets a String value that represents the language setting for the view. Read/write. Syntax expression.Language expression A variable that represents a TableView object. Remarks The Language property uses a String to represent an ISO language tag. For example, the string "EN-US" represents the ISO code for "United States - English." If a valid language code is specified, the object will only be available in the View menu for the specified language type. If no value is specified, the object item is available for all language types. The default value for this property is an empty string. Example The following Microsoft Visual Basic for Applications (VBA) example sets the language type of all View objects of type olTableView to U.S. English. Sub SetLanguage() 'Sets the language of all table views to U.S. English. Dim objViews As Outlook.Views Dim objView As Outlook.View Set objViews = _ Application.GetNamespace("MAPI").GetDefaultFolder(olFolderInbox).Views 'Iterate through each view in the collection. For Each objView In objViews Debug.Print objView.Name 'If view is of type olTableVIew then set language. If objView.ViewType = olTableView And objView.Standard = False Then objView.Language = "EN-US" End If Next objView End Sub See also Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
https://docs.microsoft.com/en-us/office/vba/api/outlook.tableview.language
2021-01-16T07:13:59
CC-MAIN-2021-04
1610703500028.5
[]
docs.microsoft.com
You can use the DeleteKeyServerKmip method to delete an existing Key Management Interoperability Protocol (KMIP) key server. You can delete a key server unless it is the last one assigned to its provider, and that provider is providing keys which are currently in use. This method has the following input parameters: This method has the no return values. The delete operation is considered successful if there are no errors. Requests for this method are similar to the following example: { "method": "DeleteKeyServerKmip", "params": { "keyServerID": 15 }, "id": 1 } This method returns a response similar to the following example: { "id": 1, "result": {} } 11.7
https://docs.netapp.com/sfe-118/topic/com.netapp.doc.sfe-api/GUID-40E8D088-D93A-4066-9ED0-9D257F2241FB.html?lang=en
2021-01-16T05:32:24
CC-MAIN-2021-04
1610703500028.5
[]
docs.netapp.com
Linkedin only displays the first 100 pages for any given search. This means if you have a search for larger searches we'll need to break them into smaller searches by slightly changing the search criteria. The free version of Linkedin has a maximum of 1000 people per search. Linkedin Sales Navigator has a maximum of 2500 people per search. Original search - 2,500+ results Keywords: Planting trees Location: United States Interests: Joining a Nonprofit Board Not splitting the search into smaller links would result in no emails after the first 1000. Rather than search by the entire US at once, you can search for individual cities or states until you get under 1000 leads in the search. Once Parvenu has extracted the leads, you can remove your existing locations from the search to add new ones until you have all the people in the original search. Original search - 6,700+ results Profile Language: English Interests: Volunteering Another way to split a list so that each list contains less than 1000 leads is by adding title filters. You can repeat this process until you have all the leads in your original list.
https://docs.parvenunext.com/best-practices/splitting-large-searches
2021-01-16T06:02:13
CC-MAIN-2021-04
1610703500028.5
[]
docs.parvenunext.com
How to navigate in the StriveCast platform In the top right, you find the control section. If its settings are changed, the whole page will update its graphs. For example, to inspect a specific time interval of the collected data, just use the date range picker and select a date range. By clicking on the minus and plus buttons next to the date range selector, you can narrow or widen the selected interval. Also, you can choose a subinterval inside any given graph by marking a section with the courser. You can restore default settings at any point by clicking the reset icon.
https://docs.strivecast.com/knowledge/navigation
2021-01-16T04:50:24
CC-MAIN-2021-04
1610703500028.5
[array(['https://docs.strivecast.com/hs-fs/hubfs/Documentation%20Screenshots/navigation%20-%20menu.png?width=688&name=navigation%20-%20menu.png', 'navigation - menu'], dtype=object) ]
docs.strivecast.com
If you wish to customize your bundle page, there are two ways you can do so: through Custom CSS, or through Template Creator. Advanced knowledge of Liquid is essential You need to have a strong understanding of Liquid to create your own bundle template implementation - Template Creator was built to allow you to create your own Bundle Page template from scratch. Due to the complexity and personalised nature of this process, we are currently unable to offer support for this functionality.
https://docs.bundlebuilder.app/advanced-guides/adding-customizations
2021-01-16T05:49:10
CC-MAIN-2021-04
1610703500028.5
[]
docs.bundlebuilder.app
# Store data on Filecoin Storing data on Filecoin lets users harness the power of a distributed network and an open market served by thousands of different storage providers or miners. Users can use the following software solutions to store data on the Filecoin network: - Slate allows uploading and storing content on the Filecoin network directly from your browser. It supports one time deals, and it is perfect to create image galleries. - Lotus imports data and performs deals on the chain using its daemon and CLI. Lotus users get full control of the deals, the chosen miners, and the wallets used to pay. Make sure you are familiar with Lotus and have it installed and running. - Starling provides simplified decentralized storage for preservation, using Lotus but simplifying the usage. # Additional resources There are additional storage solutions that you should not miss. While they have a focus on developers, some of them have simple CLI interfaces that simplify their usage: - Powergate (opens new window) a multitiered storage solution that stores data with IPFS and Filecoin. It can be used self-hosted or hosted. - Textile buckets provide S3-like storage using IPFS with Filecoin-backed archival. - Pinata (opens new window) is an IPFS pinning service that will soon include Filecoin in its portfolio.
https://docs.filecoin.io/store/
2021-01-16T05:11:03
CC-MAIN-2021-04
1610703500028.5
[]
docs.filecoin.io
How-to articles, tricks, and solutions about RULE How and When to Use !important Rule in CSS Learn how and why to use the !important rule in your CSS styles. See what cases are recommended and where is the right place to use it. How and When to Write Comments in Javascript There are 2 types of JavaScript comments: single-line and multi-line. See when to use them with examples, and also use comments to prevent execution when testing alternative code. How to Add and Use CSS Comments A CSS comment is used to give explanatory information to the code or to prevent the browser from trying to interpret particular parts of the style sheet. How to Add Image in the Title Bar Make your website look more attractive by using favicons in the title bar. See how favicons are created and find more information about them. How to Add Non-Standard Fonts to a Website In this tutorial, you can find some methods to make the design of your website attractive using unique fonts. See how you can use the @font-face rule property. How to Insert Video in HTML See how to use <video> and <iframe> tags instead of the <embed>, <frame> and <object> tags. Learn how to set video autoplay. Practice with examples. How to Maintain the Aspect Ratio with CSS Very often developers want to create a div element, that can change its width/height as the window width changes. That can be done by maintaining the aspect ratio of the element. How to Override !important It is possible to override !important, although it isn’t recommended to use !important at all. But you can override the !important rule with another !important. How to Override CSS Styles How CSS overriding works, what is the cascading order and inheritance, what are the priorities and some tricks to override them. How to Resize Background Images with CSS3 Learn about the ways of resizing and creating responsive background images. Use the CSS background-size property for that purpose. See examples.
https://www.w3docs.com/snippets-tags/rule
2021-01-16T06:41:43
CC-MAIN-2021-04
1610703500028.5
[]
www.w3docs.com
Destination FAQs ON THIS PAGE How do I enable or disable the deduplication of records in my Destination tables? Use the Append Rows on Update option within a Destination table to indicate whether the ingested Events must be directly appended as new rows, or should these be checked for duplicates. You can specify this setting for each table. Note: This feature is available only for Amazon Redshift, Google BigQuery, and Snowflake data warehouse Destinations. For RDBMS Destinations such as Aurora MySQL, MySQL, Postgres, and SQL Server, deduplication is always done. In the Destination Detailed View page: - Click the icon next to the table name in the Destination Tables List. Update the Append rows on update option, as required: - Click OK, GOT IT in the confirmation dialog to apply this setting. Note: If you disable this feature after having previously enabled it, uniqueness is ensured only for future records in case of Google BigQuery and Snowflake. Therefore, both old and new versions of the same record may exist. In case of Amazon Redshift, however, uniqueness can be achieved for the entire data upon disabling the feature.
https://docs.hevodata.com/faqs/destination-faqs/
2021-01-16T06:30:57
CC-MAIN-2021-04
1610703500028.5
[]
docs.hevodata.com
General query settings¶ SetSelect¶ Prototype: function SetSelect ( $clause ) Sets the select clause, listing specific attributes to fetch, and Sorting modes to compute and fetch. Clause syntax mimics SQL. ‘AS’ keyword. SQL also lets you do that but does not require to. Manticore enforces aliases so that the computation results can always be returned under a “normal” name in the result set, used in other clauses, etc. Everything else is basically identical to SQL. Star (‘*’) is supported. Functions are supported. Arbitrary amount of expressions is supported. Computed expressions can be used for sorting, filtering, and grouping, just as the regular attributes. When using GROUP BY agregate functions (AVG(), MIN(), MAX(), SUM()) are supported. Expression sorting (Sorting modes) and geodistance functions (SetGeoAnchor) are now internally implemented using this computed expressions mechanism, using magic names ‘@expr’ and ‘" ); SetLimits¶ Manticore side. $cutoff setting is intended for advanced performance control. It tells searchd to forcibly stop search query once $cutoff matches had been found and processed. SetMaxQueryTime¶. SetOverride¶ DEPRECATED Prototype: function SetOverride ( $attrname, $attrtype, $values ) Sets temporary (per-query) per-document attribute value overrides. Only supports scalar attributes. $values must be a hash that maps document IDs to overridden attribute values..
https://docs.manticoresearch.com/3.2.0/html/api_reference/general_query_settings.html
2021-01-16T06:33:54
CC-MAIN-2021-04
1610703500028.5
[]
docs.manticoresearch.com
plugins Plugins by the community # - taiko-accessibility A plugin to test the site accessibility with Taiko - taiko-android A plugin to run web tests on android devices and emulator using Taiko. - taiko-diagnostics A plugin for taiko which provides some diagnostics features like measuring speedindex, performance metrics of webpage. - taiko-screencast A plugin to record a gif video of a taiko script run. - taiko-storage A taiko plugin to interact with browser storages. - taiko-screeny A taiko plugin to capture screenshot on every action. - taiko-video A taiko plugin to save screencast as compressed mp4 videos. If you've written your own plugin send a pull request to list it here. Using plugins with runners # To load plugins and use them with a runner like Gauge, you need to - Install the plugin into the project for example npm install taiko-diagnostics - Set the TAIKO_PLUGINenvironment variable before running the tests. For example in bash or zsh you can do this as follows TAIKO_PLUGIN=diagnostics gauge run specs
https://docs.taiko.dev/plugins/
2021-01-16T05:32:54
CC-MAIN-2021-04
1610703500028.5
[]
docs.taiko.dev
Cluster is not reachable DataPlane containers are not able to resolve a provided hostname or use the IP address to connect to the machine. DNS resolution is not setup. There are firewall or other networking restrictions that are preventing access. Sample Message: Failed: This is not a valid Ambari URL. - Verify that the specified hostname or IP address is valid and reachable from the DP host machine. - If the hostname or IP address is reachable, try adding the hostname resolution to the DataPlane container using the ./dpdeploy.sh utils add-host <ip> <host> command. - Verify if network connectivity settings, such as firewalls, are configured correctly.
https://docs.cloudera.com/HDPDocuments/DP/DP-1.3.1/installation/content/dp_cluster_is_not_reachable.html
2021-01-16T06:42:57
CC-MAIN-2021-04
1610703500028.5
[]
docs.cloudera.com
Versions Compared Old Version 13 changes.mady.by.user Umut Uyurkulak Saved on New Version Current changes.mady.by.user Umut Uyurkulak Saved on Key - This line was added. - This line was removed. - Formatting was changed. Overview Field is one of the most important particle effect features. It represents different aspects of how each individual particle should look like and the type of internal data each particle should have. This can be used to create a quite sophisticated effect logic. With this feature, users can define the color of the particle material, adjust the opacity level and control the size of the particles. Color The color feature specifies the tone that will be assigned to the particles. The color acts as a filter on top of the diffuse texture and particle lighting. Black color will make particles look completely black, even when the emissive values are specified, while white color will effectively remove the filter. Since color acts as a filter and not the albedo itself, using white color is still physically correct, since the settings from the feature Appearance: Lighting will still apply. However, black or very dark colors are not recommended since perfect filters do not appear in nature. Please refer to Appearance for more information. PixelSize This feature directly manipulates both size and opacity fields. It prevents a particle from getting too large or too small. When a particle size starts to get smaller than a certain number of pixels on the screen, its size is maintained while its opacity gets reduced. This is a very useful feature when effects have multiple small particles and start to flicker as they move around as it will trade sharpness for flickering. Size Size specifies how large a particle should look like. If a particle is to be interpreted as a spherical object, size will correspond to its radius and not its diameter. This field can have any value as long it is positive. Zero or negative particle sizes prevents the particle to be rendered by CRYENGINE. GPU Support All Fields features are supported in the GPU pipeline, although there is only limited support for modifiers in properties. Currently, only the Color, Size and Opacity can be modified using the Curve modifier with Self Time as the source.
https://docs.cryengine.com/pages/diffpagesbyversion.action?pageId=36868265&selectedPageVersions=13&selectedPageVersions=14
2021-01-16T06:06:49
CC-MAIN-2021-04
1610703500028.5
[]
docs.cryengine.com
Versions Compared Old Version 28 changes.mady.by.user Sean Braganza Saved on New Version Current changes.mady.by.user Willem Andreas Haan Saved on Key - This line was added. - This line was removed. - Formatting was changed. - A character with a T-Pose and other animations. Setting Up a Root Bone Choose and import your first animation into Maya using the File → Import option from its main menu. If not already docked, click the icon on the left side of Maya's interface to open the Outliner, which lists the different components of your scene. You need to add a root bone to your animation. To do so: - Switch to the Rigging menu set in Maya, and select the Rigging shelf tab as shown in the image below. Rigging - Click the button, and then double click in the viewport to create a joint. The joint should be listed in the Outliner, while its attributes are displayed in the Channel Box. - Next, place the joint at (0,0,0) so that it's right underneath your character. Navigate to the Channel Box and set the joint's Translate X/Y/Z values to 0. - Additionally, to correctly position the joint in CRYENGINE's coordinate system, set Rotate X = 90, Rotate Y = 0, and Rotate Z = 180. - Finally, rename this joint to "root" by double-clicking on its listing in the Outliner. Pairing the Root Bone with the Hip Bone Begin by creating a point constraint between the hip bone and the root bone. Click on the hip bone, and then on the root bone in the Outliner while holding CTRL. - In the main menu (with the Rigging menu set enabled), select the Constrain → Point → option to bring up the Point Constraint Options dialog, which contains additional properties we can modify. - Keep the Maintain Offset and Set layer to override options ticked; since we're only going to constrain the Z axis, only enable the Constraint axes → Z option. Click Apply and then close the dialog. Point Constraint Options dialog You've now attached the root bone to the moving animation. Baking the Animation to the Root Bone To bake the animation, or rather, the translation information from the hip to the root bone: - Switch to the Animation menu set. - Select the root bone, and then select the Key → Bake Animation option. If you play your animation now, you should see the root bone move along with it. Additionally, if you expand the root bone's listing in the Outliner, you should be able to see listed the point constraint that was created with the hip bone. Go ahead and delete it since it's not needed anymore. Making the Hip a Child of the Root - Since CRYENGINE requires a root bone to be at the top of the joint hierarchy however, the hip needs to be made a child of the root as follows: - Select the hip bone in the Outliner. - In the Channel Box, select the Translate X/Y/Z options, right-click, and then click the Delete Selected option from the context menu. - Click the icon beside the root bone, and drag it to to the root bone. This deletes all translation movements from the hip, so that the root bone is the one driving the animation. Exporting the Animation - Select the File → Export All option from the main menu in Maya. - In the Export All window, set where you'd like the file to be exported to, and select the FBX export option from the Files of type dialog to export the animation in FBX format. Export All dialog - Next, in the Options... panel, navigate to the Animation tab and enable the Animation option. Also enable Bake Animation under the Bake Animation panel. Bake Animation 4. If you'd also like to export constraints and/or skeleton definitions, if any, tick their corresponding boxes under the Constraints tab. Finally, click on Export in the Export All dialog to complete the process. Constraints Importing Animations via a Python Script You could also, as an option, use a Python script to speed up the process of exporting characters and animations together: - Begin by importing a T-posed character into Maya. - Create a root bone, as explained previously, and set Rotate X = 90, Rotate Y = 0, and Rotate Z = 180. Make sure to name it "root". - Next, import an animation of choice; while doing so however, make sure to check the Animation Range → Combine to include Source option under the Playback Options tab of the Import dialog. This combines both the T-pose and jump animation in the exported output. Combine to include source - Download the following Python script. - Click the button at the bottom-right corner of your Maya interface to open the Script Editor, and drag/drop the script into the Python tab. Drag/drop the script 6. Select the root and hip bones in the Outliner, before clicking the button in the Script Editor to play the script. 7. You can now export the character and animation using the File → Export All option from the main menu in Maya. Importing into CRYENGINE To test the exported animations in CRYENGINE: - Create/open an empty level, and then create a new folder in the Asset Browser; drag and drop your FBX files into this folder. - Once all your imported files have been generated, drag the .cdf (Character Definition File) into the Viewport. - With the character selected in the Viewport, assign an animation to it using the DefaultAnimation field in the Properties panel. If the animation was correctly reoriented in Maya, you should now be able to see it playing in the Y (forward) direction of CRYENGINE's world coordinate system. Video Tutorial This video tutorial explains how to reorient animations in Maya, export them, and test them in CRYENGINE.
https://docs.cryengine.com/pages/diffpagesbyversion.action?pageId=60523790&selectedPageVersions=28&selectedPageVersions=29
2021-01-16T06:14:15
CC-MAIN-2021-04
1610703500028.5
[array(['/download/attachments/60523790/Options....png?version=2&modificationDate=1588169694000&api=v2', None], dtype=object) array(['/download/attachments/60523790/Constraints.png?version=1&modificationDate=1588169758000&api=v2', None], dtype=object) ]
docs.cryengine.com
SNAP¶ Intended audience: administrators, users, developers The SNAP is set of device servers and an GUI application (Bensikin) providing so called SNAPshot functionality. A snapshot is, as said in the name, a “picture” of a list of equipment’s “settings” (more precisely of their Tango attributes values) taken at a precise instant. Snapshot are stored in a database (MySQL or Oracle). These can be retrieved and restored to the equipment (devices). This kind of functionality is often called a recipe management. It allows to create set of configuration settings (recipes) used for particular purposes (like selected mode of accelerator operation). - SNAPshot (Archiving) installation and configuration - Bensikin User Manual
https://tango-controls.readthedocs.io/en/latest/tools-and-extensions/archiving/SNAP.html
2021-01-16T06:26:37
CC-MAIN-2021-04
1610703500028.5
[]
tango-controls.readthedocs.io
# View node logs You can view the logs for your dedicated nodes on the Growth, Business, and Enterprise subscription plans. To view the node logs: - Click your project. - Click your network. - Click the node name. - Click Logs. For Hyperledger Fabric, you additionally filter the logs by: - Peer — the logs of your Hyperledger Fabric peer. - Chaincode server — the logs of the service running your chaincode. For Quorum, you additionally filter the logs by: - Geth — the logs of your instance of GoQuorum. - Transaction manager — the logs of your instance of the Tessera transaction manager. See also Quorum documentation: What is Tessera. The timestamps in the log entries are displayed in your local time. The logs are retained for 7 days before deletion.
https://docs.chainstack.com/platform/view-node-logs
2021-01-16T06:30:57
CC-MAIN-2021-04
1610703500028.5
[]
docs.chainstack.com
Connecting Redash to Hevo Refer to this section to connect Redash to your managed BigQuery data warehouse. Prerequisites - An active Redash account. - Redash connection settings downloaded from Hevo. Steps Log in to your Redash account. In the Settings page, Data Sources tab, click + New Data Source. In the Create a New Data Source dialog, search for and select BigQuery. In the Create a New Data Source page, specify the following: * The values provided in the image are indicative. Name: A suitable name for your BigQuery data warehouse. Project ID: The project ID mentioned in the Redash connection settings downloaded from Hevo JSON Key File: Click to upload the JSON file having the login credentials, which you downloaded in Hevo. Processing Location: The dataset-location provided by Hevo. For example, us-east1. Optionally, modify the other field values. Click Create. After Redash successfully validates your entries, click Save. Use the Create menu to create SQL queries and dashboards and start visualizing your BigQuery data.
https://docs.hevodata.com/destinations/data-warehouses/managed-google-bigquery-dw/connecting-bi-tool-to-mdw/redash/
2021-01-16T05:02:47
CC-MAIN-2021-04
1610703500028.5
[]
docs.hevodata.com
1 Introduction A reference selector is used to display and, optionally, allow the end-user to select the value of a one-to-one or one-to-many association by selecting the associated object. A reference selector must be placed in a data widget. The object(s) retrieved by this data widget must be at the many end of a one-to-many association, or at either end of a one-to-one association. For example, if you have an employee they will work for one company. A company can have many employees. The entities Employee and Company have a one-to-many association, Employee_Company, which you can select by choosing a Company from the Employee through the reference selector. In the reference selector, the name of the attribute of the associated objects which will be displayed is shown inside the reference selector, between square brackets, and colored blue. For example, the following reference allows the end-user to see, and set, the association Employee_Company by selecting the CompanyName for the current Employee. If you only want to display information, you can also use a text box. This has the added advantage that you can choose an attribute from an object which is linked via several association steps. 2 Properties An example of reference selector properties is represented in the image below: Reference selector properties consist of the following sections: - Common - Data source - Design Properties - Editability - Events - Formatting - General - Label - Selectable Objects - Validation - Visibility of an associated entity is shown in the reference selector. The path must follow one association of type reference starting in the entity of the data view. Editability Section Editability determines whether an end-user can change the value in an input widget. For more information on properties of this section, see the Editability Section section of Properties Common in the Page Editor. 2.6 Formatting Section The formatting section applies only to the way that numeric attributes are displayed. These are attributes of the following data types: - Decimal - Integer - Long 2.7 General Section 2.7.1 Select Using The reference selector allows the end-user to select objects by using either a drop-down menu or a pop-up page. If you choose to to use a page, the drop-down functionality will be replaced with a button to the right of the widget that will open a page selection pop-up window. - The advantage of selecting using a Drop-down is that it is very efficient – the end-user can make a selection with fewer keystrokes, as all the information is on the same page - The advantage of selecting using a Page is that the end-user can search the objects, and more information about each object can be displayed – if there are a lot of objects to select from (for example, more than 20), it is recommended that selecting is done using a page There is a small difference in functionality between a Drop-down reference selector and a Page reference selector. When changing a reference selector item that also has a linked list included in a second drop-down menu or page, the Page reference selector is NOT cleared as it is with a Drop-down reference selector. 2.7.1.1 Drop-Down The drop-down reference selector is similar to a drop-down for an enumeration, except that it allows users to choose from a list of objects which can be associated with the current object, rather than a list of values from an enumeration. The reference selector displays an attribute from the objects which can be linked to the current entity via an association. The chosen attribute should be unique for each object which can be associated, otherwise the end-user will have difficulty choosing the correct one. For example, you should display a company name (which will hopefully be unique) rather than the company region (which will probably not be unique to a company). 2.7.1.2 Page Select using a page, links a button to the right of the widget with a pop-up page which is used to make the selection. You must choose the page to be displayed using the Select Page property. 2.7.2 Empty Option Caption This is only displayed if Select using is set to Drop-down. This property specifies the caption for the empty option in the drop-down reference selector shown to the end-user. This is a translatable text. Filling out the caption for an empty option improves the user experience of your application. It also helps end-users using screen-reader to operate the application easily. 2.7.3 Select Page This is only displayed if Select using is set to Page. Consequently, select page is not supported on native mobile pages. The select page property determines which page is opened when the select page button is used. This page can be used to select an associated object from the list of all possible objects. This page should contain a data grid, template grid or list view connected to the same entity as the input reference set selector. It is recommended that you generate a new page to show by right-clicking the widget and selecting Generate select page…. You can then edit the resulting page, if required. See the Show a Page section of On Click Event & Events Section. Note that select pages must have a pop-up layout. Page title Page title is only available in the Properties dialog box, not in the Properties pane. You can override the title of the page you open to, for example, indicate where you are opening it from. This is activated by checking the Override page title check box. 2.7.4 Go-To Page The go-to page gives end users quick access to a more detailed overview of the currently selected object. This property determines which page is shown to the user. The page should contain a data view with the data source Type set to Context and the same Entity (path) as the one that is selected by the reference selector. It is recommended that you generate a new go-to page by right-clicking the widget and selecting Generate go-to page…. You can then edit the resulting page, if required. Page title Page title is only available in the Properties dialog box, not in the Properties pane. You can override the title of the page you open to, for example, indicate where you are opening it from. This is activated by checking the Override page title check box. 2.8 Label Section A label describes the purpose of a widget to the end-user. For more information on properties of this section, see the Label Section section in Properties Common in the Page Editor. 2.9 Selectable Objects Section The properties in the Selectable objects section determine the objects from which the end user can make a selection. The Source property sets which of the three ways to define the selectable objects is used: - Database (default) - XPath - Microflow 2.9.1 Database Database is the default source for the selectable objects. By default, all database objects of the correct entity type will be selectable. Constraints You can limit the objects presented to the end-user by adding constraints. You will be guided through making constraints in the Edit Constraints dialog box: See the constraints section of Database Source for more information. Sort Order The sort order specifies the order in which the items in the reference selector are shown. You can sort on multiple attributes in both directions (ascending and descending). If (default) sort order is specified, the reference selector sorts on the displayed attribute. 2.9.2 XPath If the source is XPath, the list of objects is also taken from the database, but the objects which are displayed are chosen by an XPath Constraint. XPath Constraint The XPath constraint limits the list of objects that can be selected. For example, the XPath constraint [InStock = true()] on a reference selector for products will ensure that only products that are in stock are selectable. See XPath Constraints for more information on XPath constraints. Constrained By A reference selector can be constrained by one or more paths. This is typically used to make one reference selector dependent on another. The best way to explain this is through an example. Imagine you have an ordering system where the products are sorted into categories – for example, food products and drink products. On a page where you can edit an order line, a product selector can be constrained by a category selector. After selecting a category (food, for example), the product selector is constrained by this category and shows only products in the category. Example Domain model In the domain model the order line has many-to-one associations to both category and product. These associations can be be edited using reference selectors. A third association, from product to category, describes the relation between those two entities – that is, that every product has an associated category. Such a triangle-shaped part of the domain model is what makes using constrained by possible. On the form, you have two reference selectors: one for Category and one for Product. Without a constraint, the reference set selector will offer all the products: However, because of the structure of the domain model, you can add a constraint which means that only the products of the previously selected category will be chosen. This is set by the Constrained by property. Now the end-user will only see products in the selected category: Sort Order The sort order specifies the order in which the items in the reference selector are shown. You can sort on multiple attributes in both directions (ascending and descending). If (default) sort order is specified, the reference selector sorts on the displayed attribute. 2.9.3 Microflow A microflow can only be used if the selection is made using a drop-down. If the source microflow is selected, a microflow is called, and returns the list of objects that the reference selector will show. Microflow Microflow specifies the microflow which is run to return the list of objects. Microflow Settings In microflow settings you can specify what parameters are passed to the microflow, depending on the parameters specified in the microflow itself. 2.10 Validation Section Here, you can specify predefined or custom validation which must be applied to the widget value before the data can be used. You can also customize the message which the end-user will get if the data does not pass the validation. For more information on input widget validation, see the Validation section of Properties Common in the Page Editor. 2.11 Visibility Section Visibility determines whether a widget is displayed to the end-user as part of the page. For more information on properties of this section, see the Visibility Section section in Properties Common in the Page Editor.
https://docs.mendix.com/refguide8/reference-selector
2021-01-16T06:46:20
CC-MAIN-2021-04
1610703500028.5
[array(['attachments/reference-selector/reference-selector-domain-model.png', None], dtype=object) array(['attachments/reference-selector/reference-selector.png', None], dtype=object) array(['attachments/reference-selector/reference-selector-properties.png', None], dtype=object) array(['attachments/reference-selector/generate-select-page.png', 'Generate a select page by right-clicking the widget'], dtype=object) array(['attachments/reference-selector/database-constraints.png', 'Edit constraints dialog box'], dtype=object) array(['attachments/reference-selector/orderline-domain-model.png', None], dtype=object) array(['attachments/reference-selector/orderline-reference-selectors.png', None], dtype=object) array(['attachments/reference-selector/orderline-no-constraint.png', 'List of all products, food and drink'], dtype=object) array(['attachments/reference-selector/orderline-constrained-by.png', None], dtype=object) array(['attachments/reference-selector/orderline-with-constraint.png', 'List of just products in the drink category'], dtype=object) ]
docs.mendix.com
Trail running takes the activity to the wilderness of nature Just who are the poor? Your television shows us the Third World, with victims of famine, drought and war. Who are our Canadian poor? They are not merely stereotypical Welfare recipients. We are among the wealthiest countries in the world and yet over a quarter of a million Canadians are homeless, and a larger number live in substandard housing.. doctor mask Then the batter slugs the ball out, out, out, and up n95 face mask,!». doctor mask doctor mask For every fraction of a degree that temperatures increase surgical mask, these problems will worsen. This is not fearmongering; this is science. For decades, researchers and activists have struggled to get world leaders to take the climate threat seriously. Your implant dentist will assess your jawbone very carefully using a cone beam CT scan which allows them to clearly visualize the amount of healthy bone, its mass, and density. What If Bone is Missing? If you don t have quite enough bone, then don t worry because it is very common for bone grafting to be carried out before implant surgery. There are several different types of bone grafts used for specific situations, but they all perform the same task which is to build up bone in areas where it is deficient. doctor mask wholesale n95 mask Some of these runners though, prefer to do their running in the midst of nature. Trail running takes the activity to the wilderness of nature. It is really important to keep a note and comply with the safety measures as explained in the content. The driver identified himself as Russel Bufalino and his front seat passenger as Vito Genovese. Croswell checked their driving licenses. The men sitting in the back of the Chrysler Imperial were Joseph Ida doctor mask, Gerardo Catena and Dominic Oliveto. wholesale n95 mask wholesale n95 mask I post lung tx and I asked my team about it for the GI and sinus aspects and they told me that it is too soon for them to try. They are open to the idea, but they need to see other results first. As I am post lung tx as well, my bigger fears now are refection as my sinuses are monitored by an ENT through the same hospital and thankfully I have been able to maintain the weight I gained after my transplant as well.. wholesale n95 mask n95 face mask The products boasts a pleasant, fresh scent and is easy to work into one shower routine. Especially given the fact that it only needs to stay on hair for three to five minutes rather than a lengthier time like some masks. Our tester shampooed her freshly highlighted strands with a purple, anti brassiness elixir surgical mask, then followed with this mask. n95 face mask surgical mask Police were requested to obtain a statement by Hazelton RCMP Police were advised that there was an intoxicated female walking down Lakelse Ave who was almost hit by a passing vehicle, the female was located and arrested for public intoxication Police assisted in an ongoing domestic disagreement Police were requested when a male passed out from intoxication in the bathroom of a business A man was observed by police to be staggering in the streets coronavirus mask, when police stopped to speak with him he was heavily intoxicated and the male was arrested for such A group of males were fighting and the instigator was arrested for assault Police observed a man staggering drunk down town Terrace, the man was severely intoxicated and was subsequently arrested An abandoned 911, police were later able to speak with the phone’s owner Two males were being harassed by a third male, police attended but no one was locatedElsewhere in Terrace: Several noisy party complaints were reported throughout bth the early and late hours A woman went to the liquor store to and did not return. Upon follow up with the complainant the missing woman had returned later in the day A male was passed out in the hallway of an apartment building, the male was found to be intoxicated and was subsequently arrested A fight broke out in front of the Scotia Bank, there were no injuries and the fight appeared to be consensual in nature An argument broke out at a residence and police were called to intervene. The subject of complaint had left the scene before police attended and could not be located An intoxicated male was observed walking into the street by police, the male was spoken to and arrested for public intoxication A woman was reported to be high on drugs and screaming by the Aquatic Center, police made patrols for the woman, but could not locate her Police observed a male passed out in front of a business, the male was arrested for public intoxication Chances Casino called police to advise that a male had been refused service due to a high level of intoxication doctor mask, the male drove away n95 face mask, police later found the male at another drinking establishment A business called to report that there was a male passed out in front of their business, police arrested the man for public intoxication An erratic driver was observed by a person coronavirus mask, police were requested to speak to the driver Police attended a business where a male was in the process of passing out from alcohol, police spoke with the male who was intoxicated and arrested him for such Police responded to a hit and run surgical mask.
https://docs.zavgar.online/ru/trail-running-takes-the-activity-to-the-wilderness-of-nature/
2021-01-16T04:59:36
CC-MAIN-2021-04
1610703500028.5
[]
docs.zavgar.online
For deploying HTTP servers, we provide templates for Ruby on Rails, Django, Node (Express), and Sinatra, but any framework can be used so long as the package can be imported or the port can be opened. To deploy an HTTP server on Repl.it, simply write the code that imports the required libraries or frameworks and begin listening on a port. As your server starts up, as soon as the repl begins listening on a port, a new pane should appear in your editor with the URL to your web app, along with a preview of the app.. Below is an example of a simple HTTP server running Flask in python3, displaying HTML from templates/index.html. Feel free to fork and play with the code as you'd like. Our package manager will handle dependency files automatically in your repls. See our documentation on packages for more information on how to install and manage dependencies. Private keys to external services or APIs can be kept in an .env file. See our documentation on secret keys. If you are using Django and you need access to specific bash commands to configure the server, please see this Django template. Note that a repl's public link will persist, even after the repl has been deleted. You can clear a repl of its server code before deleting it in order to prevent it from loading. If you require your web app to be taken down, please contact us at [email protected].
https://docs.repl.it/repls/http-servers
2021-01-16T05:24:27
CC-MAIN-2021-04
1610703500028.5
[]
docs.repl.it
Web Page Section Through this kind of sections the story editor can embed third party contents in the story (like other web pages available on the web). The Web Page Section is similar to the Paragraph Section and the Media Section: adding this section a modal window opens to specify the URL of the web page that is going to be added. Below an example of a Web Page Section that embed a Wikipedia site page: It is possible to add or remove multiple Web Page contents in the same way of text and media contents as it is explained in the Paragraph Sections.
https://mapstore.readthedocs.io/en/latest/user-guide/web-section/
2021-01-16T06:01:16
CC-MAIN-2021-04
1610703500028.5
[array(['../img/web-section/web-window.jpg', None], dtype=object) array(['../img/web-section/web-page.jpg', None], dtype=object) array(['../img/web-section/add-section.jpg', None], dtype=object)]
mapstore.readthedocs.io
Appium Setup To get started, you can first download our Appium examples from Github. These examples are available in C#, Python, Java and Ruby, and we’ll be using these as a basis for some of these tutorials. Depending on which programming language you will be using, select the appropriate client library as listed below. There are different examples available for regular app testing, game testing and some web-related testing using Appium. To make most out of the existing samples you should have Git installed. If you are new to Git, there is a very good guide on how to install Git on popular operating system here. Mac OS X Download the latest Git command line tool from and install it using the normal Mac installation procedure. Linux Use the following command to get Git installed on your Linux machine: $ sudo apt-get install git Windows The easiest and the most straightforward the latest Python (2.7.x or newer) version is installed. Go to command line and use the following: > python --version If Python 2.7 or newer Testdroid Configuration for Java file as an example/template. In case you don’t have an IDE with Maven included and would like to launch the example from command line, you will need to make sure that Maven is properly installed. Here’s a link to the Maven installation instructions. C# Windows Launch the AppiumTest.sln file Testdroid Desired Capabilities. the application under test to Bitbar Testing.
http://docs.testdroid.com/appium/setup/
2017-01-16T15:09:18
CC-MAIN-2017-04
1484560279189.36
[]
docs.testdroid.com
Creates a display group for organizing/layering display objects. Initially, there are no children in a group. The local origin is at the parent's origin; the anchor point is initialized to this local origin. See the Group Programming guide for more information. Using groups with the physics engine has some limitations — see the Physics Notes/Limitations guide. display.newGroup() -- Create a rectangle and a group, then insert the rectangle into the group local rect = display.newRect( 0, 0, 100, 100 ) rect:setFillColor( 0.5 ) local group = display.newGroup() group:insert( rect )
https://docs.coronalabs.com/api/library/display/newGroup.html
2017-01-16T14:55:00
CC-MAIN-2017-04
1484560279189.36
[]
docs.coronalabs.com
Flowhub Documentation and Support Flowhub is web-based IDE for flow-based programming. It is built on NoFlo.js for both client and server. It can connect to any language or environment that can talk the FBP Network Protocol. Support - For support and discussion about NoFlo and flow-based programming in general, see the links at noflojs.org/support/ - For Flowhub billing issues, email [email protected]
http://docs.flowhub.io/
2017-01-16T14:53:29
CC-MAIN-2017-04
1484560279189.36
[]
docs.flowhub.io
public interface MessageProcessor<T> This defines the lowest-level strategy of processing a Message and returning some Object (or null). Implementations will be focused on generic concerns, such as invoking a method, running a script, or evaluating an expression. Higher level MessageHandler implementations can delegate to these processors for such functionality, but it is the responsibility of each handler type to add the semantics such as routing, splitting, transforming, etc. In some cases the return value might be a Message itself, but it does not need to be. It is the responsibility of the caller to determine how to treat the return value. That may require creating a Message or even creating multiple Messages from that value. This strategy and its various implementations are considered part of the internal "support" API, intended for use by Spring Integration's various message-handling components. As such, it is subject to change. T processMessage(Message<?> message)
http://docs.spring.io/spring-integration/docs/2.1.4.RELEASE/api/org/springframework/integration/handler/MessageProcessor.html
2017-01-16T15:17:04
CC-MAIN-2017-04
1484560279189.36
[]
docs.spring.io
Dashboard Dashboard links back the landing page where summary of projects and test runs will be shown. Test Success Summary The Success.
http://docs.testdroid.com/user-manuals/testdroid-cloud/dashboard/
2017-01-16T15:09:28
CC-MAIN-2017-04
1484560279189.36
[array(['http://docs.testdroid.com/assets/user-manuals/dashboard_summary_success.png', None], dtype=object) array(['http://docs.testdroid.com/assets/user-manuals/dashboard_overall_success.png', None], dtype=object) ]
docs.testdroid.com
Phplist Documentation » Message functions » Process the Message Queue Clicking on 'process queue' will send all queued messages, provided they are not under embargo, i.e., scheduled to be sent later on. Once you have clicked on process queue, the sending process cannot be canceled. If you do not process the message queue, messages will remain in the queue. The idea of a queue is that you can prepare messages for sending, and then have them actually sent by a scheduler or cron job. When processing of the queue has started you should leave the browser window open till the result is displayed Depending on the load of messages to be processed, this can take from a few minutes up to an hour or more. Note: phplist does not send any message to people who have not confirmed their subscription. A confirmation report will be sent to the email address(es) entered on the configuration page for "Who gets the reports". In addition, you can also be alerted by email when the message sending starts and ends, by entering one or more email addresses on the "Misc" tab of the "Send a message" page. Message sending speed and batch processingTo help avoid server overloads you can configure PHPlist to slow down the rate at which it feeds messages to the mail server, by using 'mail queue throttle' setting in config.php. If you are on a shared hosting service, it is likely you will face limits in the number of emails you may send per hour or per day. By using the 'mail queue batch size' and 'batch period' settings in config.php, you will be able to keep the number of sent messages within these limits. For more info, see Setting the send speed. Alternatives to processing the queue manually Instead of processing the queue manually through your web browser, you can use a cron job or a commandline script. Or both. If you have more than 1000 users, it is recommended to use commandline queue processing. There are several reasons you might prefer processing the message queue with a cron job and/or commandline script, for instance: It will reduce the problem of timeouts, and -if your server is running PHP-cgi- you'll avoid having to leave the browser window open for hours. Command line scriptAn alternative to using your web browser for queue processing is a commandline script, which you can execute on a scheduled time by using a cron job. A sample commandline script is included in the PHPlist distribution. To be able to use a command line script, the command line version of PHP (PHP-cli) must be installed on your server. Please read the three interfaces of PHP for a brief discussion of differences between PHP-cli and PHP-cgi. A second (obvious) requirement is that you must have access to the command line itself (shell access). For more info, see Using a commandline script. Cron jobA cron job is a scheduler for unix/linux operating systems, that will execute commands on a predefined time. While you can use a cron job to execute commands embedded in a commandline script, you can also place the commands directly in the crontab file. The latter method is useful when your server is running PHP-cgi instead of PHP-cli. For more info, see Setting up a cron job. Note: Keep in mind that whatever queue processing method you use, you still need to put the messages to be sent in the queue. Tips & Tricks from the forum New subscriber gets last list email automatically - This mod will automatically requeue the last list message when a subscriber confirms his subscribtion. Related pages CategoryDocumentation
http://docs.phplist.com/ProcessQueueInfo.html
2017-01-16T14:57:49
CC-MAIN-2017-04
1484560279189.36
[]
docs.phplist.com
public interface MonitoringStrategy Defines the contract for objects that monitor a given folder for new messages. Allows for multiple implementation strategies, including polling, or event-driven techniques such as IMAP's IDLE command. Message[] monitor(Folder folder) throws MessagingException, InterruptedException folder- the folder in which to look for new messages MessagingException- in case of JavaMail errors InterruptedException- if a thread is interrupted int getFolderOpenMode() Folder.READ_ONLYor Folder.READ_WRITE.
http://docs.spring.io/spring-ws/site/apidocs/org/springframework/ws/transport/mail/monitor/MonitoringStrategy.html
2017-01-16T15:11:09
CC-MAIN-2017-04
1484560279189.36
[]
docs.spring.io
Version 4.2.4¶ Version 4.2.4 of mod_wsgi can be obtained from: Known Issues¶ 1. The makefiles for building mod_wsgi on Windows are currently broken and need updating. As most new changes relate to mod_wsgi daemon mode, which is not supported under Windows, you should keep using the last available binary for version 3.X on Windows instead. Bugs Fixed¶ 1. Fixed one off error in applying limit to the number of supplementary groups allowed for a daemon process group. The result could be that if more groups than the operating system allowed were specified to the option supplementary-groups, then memory corruption or a process crash could occur. 2. Improved error handling in setting up the current working directory and group access rights for a process when creating a daemon process group. The change means that if any error occurs that the daemon process group will be restarted rather than allow it to keep running with an incorrect working directory or group access rights.
http://modwsgi.readthedocs.io/en/latest/release-notes/version-4.2.4.html
2017-01-16T14:54:17
CC-MAIN-2017-04
1484560279189.36
[]
modwsgi.readthedocs.io
Adding Android Push Notifications to Previous Versions of Opsview You can add Android push notifications to Opsview Core or Opsview Enterprise 4.0-4.4 manually by following the instructions below. For users of Opsview 4.5 and above, the Android push notification method is shipped as standard: - Save the script to the Opsview server as /usr/local/nagios/libexec/notifications/notify_by_gcm_push_custom, owner nagios:nagios, 755 permissions - In the script, edit the following_gcm_push_custom'. Leave the Contact Variables field blank - Assign the new notification method to a contact or contacts - Apply Changes Note: if Opsview is upgraded, a notification method will be created which is the official Push Notifications to Android method
https://docs.opsview.com/doku.php?id=opsview-mobile-android:custom_setup
2019-02-16T02:00:30
CC-MAIN-2019-09
1550247479729.27
[]
docs.opsview.com
Contents IT Business Management Previous Topic Next Topic Business rules installed with Application Portfolio Management Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Business rules installed with Application Portfolio Management Application Portfolio Management adds the following business rules. Table 1. Business rules for APM Name Table Description Populate Short Description Goal [goal] Populate Short Description of the goal based on the attributes provided. Set Frequency to daily with regard to PA Application Indicator [apm_metric] In case of Performance analytic data sources, set the Frequency to daily. Only one Enterprise rollout is allowed Business Entity [apm_rollout_entity] Only one enterprise rollout is allowed for a business application. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/istanbul-it-business-management/page/product/application-portfolio-management/reference/business-rules-installed-with-apm.html
2019-02-16T01:57:56
CC-MAIN-2019-09
1550247479729.27
[]
docs.servicenow.com
Monitoring and collecting data. More information on: How it works CouchDB CoScale plugin lets you inspect how CouchDB performs. The plugin collects metrics like requests per second, request sizes, http requests methods, CouchDB database reads/writes and a multitude of other useful metrics. This plugins uses the CouchDB API, which is exposed by default and no additional configuration is required. The minimal supported version of CouchDB is 1.5 Installation The plugin needs to be installed together with a CoScale agent, instructions on how to install the CoScale agent can be found here. If you want to monitor CouchDB inside Docker containers using CoScale, check out the instructions here. Configuration Active checks This plugin can be configured to query a view on your CouchDb. This active monitoring allows us to calculate the uptime of the service and the response time of the query. A database, username, password, design id and view id should be provided.
http://docs.coscale.com/agent/plugins/couchdb/
2018-06-18T03:58:03
CC-MAIN-2018-26
1529267860041.64
[]
docs.coscale.com
do-while Statement (C++) Executes a statement repeatedly until the specified termination condition (the expression) evaluates to zero. Syntax do statement while ( expression ) ; Remarks-whilestatement terminates and control passes to the next statement in the program. If expression is true (nonzero), the process is repeated, beginning with step 1. Example The following sample demonstrates the do-while statement: // do_while_statement.cpp #include <stdio.h> int main() { int i = 0; do { printf_s("\n%d",i++); } while (i < 3); } See Also Iteration Statements Keywords while Statement (C++) for Statement (C++) Range-based for Statement (C++)
https://docs.microsoft.com/en-us/cpp/cpp/do-while-statement-cpp
2018-06-18T04:09:31
CC-MAIN-2018-26
1529267860041.64
[]
docs.microsoft.com
{"_id":"5a0b0d9b04d0d600269f1385",:04:55.015Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":1,"body":"Before a Relying Party application can use Privakey’s authentication system for user login, it must register with Privakey and obtain Open ID Connect credentials, set one or more Callback URIs (see Develop Interfaces and Controls for more information), and (optionally) customize the branding information that its users see on the Privakey-presented user-authentication screen. \n\nThe process is straight forward and can be accessed from the Privakey User Portal. \n\n1. Download and Register the Privakey Application for iOS, Android or Windows\n[block:html]\n{\n \"html\": \"<div style=\\\"text-align: center;\\\">\\n\\n<a title=\\\"Get Privakey in the Apple App Store\\\" href=\\\"\\\" target=\\\"_blank\\\"><img class=\\\"wp-image-3799 alignnone\\\" src=\\\"\\\" alt=\\\"Get Privakey in the Apple App Store\\\" width=\\\"200\\\" height=\\\"59\\\" /></a>\\n\\n<a title=\\\"Get Privakey on Google Play\\\" href=\\\"\\\" target=\\\"_blank\\\"><img style=\\\"border: #B7B7B7 solid 1px; border-radius: 7px; width: 198px; height: 57px;\\\" src=\\\"\\\" alt=\\\"Get Privakey on Google Play\\\" /></a>\\n\\n<a title=\\\"Get Privakey in the Windows Store\\\" href=\\\"\\\" target=\\\"_blank\\\"><img style=\\\"height: 57px; width: 187px; background: white; border-radius: 7px; border: 1px solid #B7B7B7;\\\" src=\\\"\\\" alt=\\\"Get Privakey in the Windows Store\\\" /></a>\\n\\n</div>\\n\"\n}\n[/block]\n2. Log In to Privakey,\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"Picture1.png\",\n 424,\n 308,\n \"#60514e\"\n ],\n \"caption\": \"\"\n }\n ]\n}\n[/block]\n\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"Picture2.png\",\n 407,\n 309,\n \"#f2f2f2\"\n ]\n }\n ]\n}\n[/block]\n3. Select ‘Learn More’ from the section “Become a Relying Party”\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"Picture3.png\",\n 424,\n 367,\n \"#f36255\"\n ],\n \"caption\": \"Privakey Account Management\"\n }\n ]\n}\n[/block]\n\n[block:callout]\n{\n \"type\": \"warning\",\n \"title\": \"Generate a Recovery Key!\",\n \"body\": \"If you haven't generated a recovery key we strongly advice you do so. \\n\\nPrivakey's strong authentication requires you have a device configured with Privakey. A recovery key is a last resort in case you mislaid, replaced or damaged your Privakey devices. \\n\\nWe also recomend you configure Privakey on more than one account. To learn more visit\"\n}\n[/block]\n\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"Picture4.png\",\n 424,\n 369,\n \"#f66c45\"\n ],\n \"caption\": \"Becoming a Relying Party\"\n }\n ]\n}\n[/block]\n4. Fill out additional information and review and accept our terms of service to establish a relying party account.\n\n5. At this point the Relying Party Administration Screen is available. On this screen, one adds and manages Relying Parties and Callback URIs (URIs to which a user returns, with references to their Token, after a successful Authentication on Privakey - see Develop Interface and Controls for more information). \n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"Picture5.png\",\n 386,\n 332,\n \"#f46255\"\n ]\n }\n ]\n}\n[/block]\n6. Click, ‘Add a New Relying Party’ to configure your Relying Party. This will bring up the following page, on which one enters information about the Service:\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"Picture6.png\",\n 608,\n 547,\n \"#ece9e8\"\n ]\n }\n ]\n}\n[/block]\nThe sections required to configure a Relying Party include: \n\n**Friendly Name:** This is how the service will present on the end-user’s pending authentication page and Privakey Apps. \n\n**Logo:** Optionally, one can upload a logo that will appear on the pending authentication page presented to users during authentications.\n\n**Call Back URI:** URI’s to which a user returns, with references to their Token, after a successful Authentication on Privakey (See Develop Interface and Controls for more information). A Service may have more than one callback URI.\n\n* methods; for example, requiring a PIN to log in to the account but not requiring a PIN to process a transaction once logged in.\n\n**Implicit Flow:** Privakey supports two OpenID Connect Protocols: Code Flow and Implicit Flow. More information about these different protocols can be found on OpenID.org. Privakey recommends Code Flow, as it is a more secure protocol. This configuration, once enabled, allows only Implicit Flow and not Code Flow.\n\nA Relying Party can be edited and augmented after configuration.","excerpt":"","slug":"register-to-become-a-privakey-relying-party","type":"basic","title":"Register to become a Privakey Relying Party"}
http://docs.privakey.com/docs/register-to-become-a-privakey-relying-party
2018-06-18T03:46:45
CC-MAIN-2018-26
1529267860041.64
[]
docs.privakey.com
When you set up your host to boot from SAN, you enable the boot adapter in the host BIOS. You then configure the boot adapter to initiate a primitive connection to the target boot LUN. Prerequisites Determine the WWPN for the storage adapter. Procedure Configure the storage adapter to boot from SAN. Because configuring boot adapters is vendor specific, refer to your vendor documentation.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.storage.doc/GUID-FCEF5E62-695F-4B40-A6B1-CE559B352A28.html
2018-06-18T03:57:50
CC-MAIN-2018-26
1529267860041.64
[]
docs.vmware.com