Dataset columns: content (string, 0 to 557k characters), url (string, 16 to 1.78k characters), timestamp (timestamp[ms]), dump (string, 9 to 15 characters), segment (string, 13 to 17 characters), image_urls (string, 2 to 55.5k characters), netloc (string, 7 to 77 characters).
Microsoft DirectPlay Warning: Microsoft DirectPlay has been deprecated. The DirectPlay application programming interface (API) provides developers with the tools to develop multiplayer applications such as games or chat clients. For simplicity, this documentation will refer to all such applications as "games." A multiplayer application has two basic characteristics: - Two or more individual users, each with a game client on their computer. - Network links that enable the users' computers to communicate with each other, perhaps through a centralized server. DirectPlay provides a layer that largely isolates your application from the underlying network. For most purposes, your application can just use the DirectPlay API and let DirectPlay handle the details of network communication. DirectPlay provides many features that simplify the implementation of a multiplayer application, including: - Creating and managing both peer-to-peer and client/server sessions - Managing users and groups within a session - Managing messaging between the members of a session over different network links and varying network conditions - Enabling applications to interact with lobbies - Enabling users to communicate with each other by voice This documentation provides a high-level overview of the capabilities of DirectPlay. Subsequent sections will take you into the details of how to use DirectPlay in your multiplayer game. For more information, see the Microsoft.DirectX.DirectPlay managed code reference documentation.
https://docs.microsoft.com/en-us/previous-versions/ms920591(v=msdn.10)?redirectedfrom=MSDN
2019-09-15T10:02:06
CC-MAIN-2019-39
1568514571027.62
[]
docs.microsoft.com
Make sure the Unity Package Manager can access the following domain names using HTTPS, and add those domain names to your firewall’s whitelist. When using a proxy server, configure the HTTP_PROXY and HTTPS_PROXY environment variables for the Unity Package Manager to use when performing requests against the Unity package registry. You can set these variables globally (either system or user variables) according to your operating system. Alternatively, you can set them only for the Unity Hub when it launches.

In some corporations and institutions, users are behind a firewall and can only access the internet through a proxy. Some proxies unpack the HTTPS content and repack it with their own self-signed certificate. Unity Package Manager’s underlying HTTPS layer rejects these self-signed certificates because it does not recognize them, and treats the connection as a possible man-in-the-middle attack. This means that you can’t use the Package Manager in Unity if your proxy uses a self-signed certificate.

This section provides instructions for creating a command file you can run from a Windows command prompt or a macOS or Linux terminal. Alternatively, you can copy and paste the commands directly into the prompt or terminal window.

NOTE: Before you run the command file, shut down the Hub completely. If the Hub is already running, the script switches focus to the Hub without relaunching it, so it does not apply the changed proxy settings.

Windows: These instructions create a command file on Windows. The file launches the Hub with the environment variables set. You can either double-click the file or invoke it from the command prompt. Unity passes these environment variables on to any Unity Editor process launched from the Hub.

1. Open a text editor such as Notepad.
2. Enter the following text, replacing proxy-url with the correct proxy server URL and adjusting the Hub install path if needed:

@echo off
set HTTP_PROXY=proxy-url
set HTTPS_PROXY=proxy-url
start "" "C:\Program Files\Unity Hub\Unity Hub.exe"

NOTE: If there are spaces in the path, you must use double quotes around the path to the program.

3. Save the file to a location where you can easily find it (such as the Desktop), and make sure the file has the .cmd extension (for example, launchUnityHub.cmd).

macOS: These instructions create the launchUnityHub.command file on macOS. The file launches the Hub with the environment variables set. You can either double-click the file or invoke it from a Bash terminal. Unity passes these environment variables on to any Unity Editor process launched from the Hub.

NOTE: Double-clicking the command file opens a Terminal window or tab and leaves it open, even after the script finishes. You can change this behavior in the preferences for Terminal.app.

1. Open a Terminal window.
2. Enter the following script, replacing proxy-url with the correct proxy server URL and adjusting the Hub install path if needed:

echo '#!/bin/bash
export HTTP_PROXY=proxy-url
export HTTPS_PROXY=proxy-url
nohup "/Applications/Unity Hub.app/Contents/MacOS/Unity Hub" &>/dev/null &' > launchUnityHub.command
chmod +x launchUnityHub.command

NOTE: If there are spaces in the path, you must use double quotes around the path to the program.

3. Move the launchUnityHub.command file to a convenient location (for example, the Desktop), if you prefer.
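For completeness, here is a minimal Python sketch (not from the Unity documentation) that does the same thing as the command files above: it sets the proxy variables only in the child environment and launches the Hub, so any Editor started from it inherits them. The proxy URL and install paths are placeholders you must adjust.

```python
import os
import platform
import subprocess

PROXY_URL = "http://proxy.example.com:8080"  # placeholder: replace with your proxy URL

# Copy the current environment and add the proxy settings for the child process only.
env = dict(os.environ, HTTP_PROXY=PROXY_URL, HTTPS_PROXY=PROXY_URL)

# Assumed default install locations; adjust if the Hub is installed elsewhere.
hub_path = (
    r"C:\Program Files\Unity Hub\Unity Hub.exe"
    if platform.system() == "Windows"
    else "/Applications/Unity Hub.app/Contents/MacOS/Unity Hub"
)

# Launch the Hub detached; it (and any Editor it starts) inherits HTTP_PROXY/HTTPS_PROXY.
subprocess.Popen([hub_path], env=env)
```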
https://docs.unity3d.com/2019.1/Documentation/Manual/upm-network.html
2019-09-15T09:57:02
CC-MAIN-2019-39
1568514571027.62
[]
docs.unity3d.com
Before you can sign in to the HPE Consumption Analytics Portal, you need an account. Please contact your HPE Account Support Manager (ASM) to request access. When your account is created, the portal sends a verification email to the address you specified. You must respond to the verification email and complete your registration within 30 days. During the activation process, you verify your username and update your password for your HPE Consumption Analytics Portal account. Once your account is activated, you can use your HPE Consumption Analytics Portal account. If you forget your password, click Forget your password? to reset your password. An email will be sent to your email address with a reset link. To sign in to your account with SSO, refer to the relevant article. HPE Consumption Analytics Portal supports the following browsers: When you sign in to HPE Consumption Analytics Portal, you will see your default view, which displays a snapshot of the usage of and spending on your IT environment. It provides you with a single view across platforms. The dashboard contains the following: The HPE Consumption Analytics Portal features in-app messages to notify you of outages, new features, and other information. See Managing messages from HPE for more details and instructions for working with them. Now you have an active HPE Consumption Analytics Portal account. For HPE GreenLake Flex Capacity accounts, please review The HPE GreenLake Flex Capacity Customer View. For public cloud accounts, the next step is to create a data collection. To learn more about creating collections and prerequisites, see the following articles:
https://docs.consumption.support.hpe.com/CCS/020Getting_Started/010Setting_up_your_account
2019-09-15T09:58:47
CC-MAIN-2019-39
1568514571027.62
[]
docs.consumption.support.hpe.com
As you configure HPE Consumption Analytics Portal and collect data, HPE Consumption Analytics Portal uses health checks to identify errors and potential problems that need your attention. As you work, note the number listed next to Health Checks in the navigation pane. This number indicates how many health checks HPE Consumption Analytics Portal has generated. The following video will help you understand what health checks are, and how you can use them to keep Cloud Cruiser running smoothly: The severity of each health check can help you prioritize which problems need to be addressed before others: HPE Consumption Analytics Portal provides health checks for the following potential problems: If you do not intend to correct a situation that generated a health check, and you no longer want to see that health check in your display, click hide in the Status field.
https://docs.consumption.support.hpe.com/CCS/050Configuring_the_HPE_Consumption_Analytics_Portal/Health_checks
2019-09-15T10:40:07
CC-MAIN-2019-39
1568514571027.62
[]
docs.consumption.support.hpe.com
Accessing data using CQL Resources for running CQL commands, including steps to launch the cqlsh utility. Common ways to access CQL are: - CQL shell (cqlsh): a Python-based command-line client installed on DataStax Enterprise nodes. - DataStax drivers for developing applications. - DataStax Studio 2.0: an interactive tool for exploring and visualizing large datasets using DSE Graph. It provides an intuitive interface for developers and analysts to collaborate and test theories by mixing code, documentation, query results, and visualizations into self-documenting notebooks. Starting cqlsh Launch the cqlsh utility with the default settings. To connect to a security-enabled cluster, see Using cqlsh with Kerberos or user authentication. Tip: For a complete list of cqlsh options, see cqlsh (startup options). Procedure - Navigate to the DataStax Enterprise installation directory. - Start cqlsh on Mac OS X, for example by running the cqlsh script from the installation_directory.
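The DataStax drivers listed above can run the same CQL from application code. Below is a minimal sketch using the DataStax Python driver (cassandra-driver), assuming an unauthenticated node reachable at 127.0.0.1; a security-enabled cluster would also need an auth provider.

```python
from cassandra.cluster import Cluster

# Placeholder contact point; replace with the address of a DSE/Cassandra node.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Run a simple CQL statement, the same kind of query you would type into cqlsh.
for row in session.execute("SELECT release_version FROM system.local"):
    print(row.release_version)

cluster.shutdown()
```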
https://docs.datastax.com/en/dse/5.1/cql/cql/cql_using/startCqlshTOC.html
2019-09-15T10:11:41
CC-MAIN-2019-39
1568514571027.62
[]
docs.datastax.com
Backing-up Files With a Custom Policy A backup policy can be configured to back up files. By default a NullBackupPolicy is configured, which does nothing. It can be replaced by a DeleteBackupPolicy to keep a backup of files for a specified period. The BackupPolicy interface allows custom implementations to be plugged in. Null Backup Policy A null backup policy acts as a placeholder for a ‘do-nothing’ behavior. It is used when an exception occurs while trying to instantiate a customized backup policy, or if no policy is desired. Delete Backup Policy A backup policy that deletes any file which is older than the specified period, but keeps at least the specified number of backup files. By default, a file is kept for a 30-day period. After 30 days, the file is deleted, unless there are fewer than 10 backup files available. In other words, a history of 10 files is maintained, even if there was nothing logged for more than 30 days. These properties can be configured either by modifying the logging configuration file:
com.gigaspaces.logger.RollingFileHandler.backup-policy = com.gigaspaces.logger.DeleteBackupPolicy
com.gigaspaces.logger.DeleteBackupPolicy.period = 30
com.gigaspaces.logger.DeleteBackupPolicy.backup = 10
or by use of a system property override:
-Dcom.gigaspaces.logger.DeleteBackupPolicy.[property-name]=[property-value]
For example:
-Dcom.gigaspaces.logger.DeleteBackupPolicy.period=30
Customized Backup Policy The com.gigaspaces.logger.BackupPolicy is an interface for a pluggable backup policy. For example, you may wish to write an implementation that zips files once a certain threshold is reached. The interface has a single method, which is used to track newly created log files. A file is created either upon rollover or at initialization time. An implementation can keep track of files and decide whether to trigger the backup policy.
public void track(File file);
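To make the DeleteBackupPolicy retention rule concrete, here is a small Python sketch. It is not the GigaSpaces API, just the same logic: delete log files older than the period while always keeping at least the configured number of the newest files. The directory is a placeholder, and the constants mirror the defaults above.

```python
import os
import time
from pathlib import Path

PERIOD_DAYS = 30   # delete files older than this...
MIN_BACKUPS = 10   # ...but always keep at least this many of the newest files

def apply_delete_backup_policy(log_dir: str) -> None:
    """Delete old log files, keeping at least MIN_BACKUPS of the newest ones."""
    files = sorted(Path(log_dir).glob("*.log"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    cutoff = time.time() - PERIOD_DAYS * 24 * 3600
    # The newest MIN_BACKUPS files are always kept, even if they are old.
    for candidate in files[MIN_BACKUPS:]:
        if candidate.stat().st_mtime < cutoff:
            os.remove(candidate)

apply_delete_backup_policy("/var/log/myapp")  # placeholder directory
```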
https://docs.gigaspaces.com/xap/12.0/admin/logging-backing-custom-policy.html
2019-09-15T09:51:40
CC-MAIN-2019-39
1568514571027.62
[]
docs.gigaspaces.com
The asset management industry is under constant pressure to keep fees as low as possible. To do this, while remaining profitable, you have to find ways to keep your business costs down. One cost-cutting strategy that has been proven effective time and time again is outsourcing. Technology vendors are rapidly emerging to solve business problems by building cheaper, quicker, more sophisticated solutions than businesses can build in house. Outsourcing to a tech vendor delivers a huge cost saving, whilst often delivering to a higher standard than previously available. One area that certainly lends itself to outsourcing is fund marketing document production. Asset management firms realize that vendors now have the technological and process maturity to be able to provide highly scalable and cost-effective solutions. Outsourced services and products can go a long way to improving quality, business day cycles and presentation flexibility. That said, outsourcing may not be right for every asset manager, but here are five signs it may be time to re-evaluate your current processes and consider outsourcing your fund marketing document production. 1. You need to reduce production costs The first benefit in outsourcing is the cost saving. Producing fund marketing documents internally is expensive because dedicated resources are required. In other words, manual production of documents like fund factsheets can take days, or even weeks. 3. You want to make changes more easily Changes to internally driven systems at asset managers can often require significant IT involvement. Their involvement includes either incorporating new data feeds, building new templates or updating the look and feel of documents. 4. You want to improve quality If you’re manually producing your fund marketing documents you are exposing your company to the risk of human error. Mistakes happen easily if thousands of documents need to be created and amended manually, and these errors can have serious legal consequences. Conversely, by outsourcing your fund document production to a vendor that uses an automated process you’re removing the risk of human error and improving the quality of the finished product. 5. Your document approval process could do with an overhaul If you’re still managing your fund marketing document approvals over email, your process could do with an overhaul. Many fund marketers spend hours of their valuable time trying to get all the relevant parties to approve the required documents. This is completely unnecessary. By outsourcing your fund document production to a reputable vendor you’ll benefit from their automated workflow management processes, which will ease the process of document approvals and sign-off. This leaves you, the fund marketer, with more time to do actual marketing work. Ready to outsource? Outsourcing fund marketing document production is something that can really provide benefits to an asset management company. If you’ve noticed any of these signs mentioned above, it may be time to re-consider your document production process. The cost benefits alone justify the proposition. The solution becomes really compelling when you pair the savings alongside the production speed and change responsiveness. To find out more about what outsourcing your document production would entail, visit
https://docs.kurtosys.com/5-signs-its-time-to-consider-outsourcing-your-fund-marketing-document-production/
2019-09-15T09:43:47
CC-MAIN-2019-39
1568514571027.62
[]
docs.kurtosys.com
Routed Event Information The corresponding tunneling event is PreviewStylusDown. Override OnStylusDown to implement class handling for this event in derived classes.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.uielement.stylusdown?redirectedfrom=MSDN&view=netframework-4.8
2019-09-15T10:19:43
CC-MAIN-2019-39
1568514571027.62
[]
docs.microsoft.com
Web dashboards overview¶ Because Netdata is a health monitoring and performance troubleshooting system, we put a lot of emphasis on real-time, meaningful, and context-aware charts. We bundle Netdata with a dashboard and hundreds of charts, designed by both our team and the community, but you can also customize them yourself. There are two primary ways to view Netdata’s dashboards: The standard web dashboard that comes pre-configured with every Netdata installation. You can see it at, or localhost. You can customize the contents and colors of the standard dashboard using JavaScript. The dashboard.jsJavaScript library, which helps you customize the standard dashboards using JavaScript, or create entirely new custom dashboards or Atlassian Confluence dashboards. You can also view all the data Netdata collects through the REST API v1. No matter where you use Netdata’s charts, you’ll want to know how to use them. You’ll also want to understand how Netdata defines charts, dimensions, families, and contexts. Using charts¶ Netdata’s charts are far from static. They are interactive, real-time, and work with your mouse, touchpad, or touchscreen! Hover over any chart to temporarily pause it and see the exact values presented as different dimensions. Click or tap stop the chart from automatically updating with new metrics, thereby locking it to a single timeframe. ![ Animated GIF of hovering over a chart to see values]() You can change how charts show their metrics by zooming in or out, moving forward or backward in time, or selecting a specific timeframe for more in-depth analysis. Whenever you use a chart in this way, Netdata synchronizes all the other charts to match it. Chart synchronization even works between separate Netdata agents if you connect them using the node menu! You can change how charts show their metrics in a few different ways, each of which have a few methods: Here’s how chart synchronization looks while zooming and panning: ![ Animated GIF of the standard Netdata dashboard being manipulated and synchronizing charts]() You can also perform all these actions using the small rewind/play/fast-forward/zoom-in/zoom-out buttons that appear in the bottom-right corner of each chart. Charts, contexts, families¶ Before customizing the standard web dashboard, creating a custom dashboard, configuring an alarm, or writing a collector, it’s crucial to understand how Netdata organizes metrics into charts, dimensions, families, and contexts. Charts¶ A chart is an individual, interactive, always-updating graphic displaying one or more collected/calculated metrics. Charts are generated by collectors. Here’s the system CPU chart, the first chart displayed on the standard dashboard: ![ Screenshot of the system CPU chart in the Netdata dashboard]() Netdata displays a chart’s name in parentheses above the chart. For example, if you navigate to the system CPU chart, you’ll see the label: Total CPU utilization (system.cpu). In this case, the chart’s name is system.cpu. Netdata derives the name from the chart’s context. Dimensions¶ A dimension is a value that gets shown on a chart. The value can be raw data or calculated values, such as percentages, aggregates, and more. Charts are capable of showing more than one dimension. Netdata shows these dimensions on the right side of the chart, beneath the date and time. Again, the system.cpu chart will serve as a good example. 
![ Screenshot of the dimensions shown in the system CPU chart in the Netdata dashboard]() Here, the system.cpu chart is showing many dimensions, such as user, system, softirq, irq, and more. Note that other applications sometimes use the word series instead of dimension. Families¶ A family is one instance of a monitored hardware or software resource that needs to be monitored and displayed separately from similar instances. For example, if your system has multiple disk drives at sda and sdb, Netdata will put each interface into their own family. Same goes for software resources, like multiple MySQL instances. We call these instances “families” because the charts associated with a single disk instance, for example, are often related to each other. Relatives, family… get it? When relevant, Netdata prefers to organize charts by family. When you visit the Disks section, you will see your disk drives organized into families, and each family will have one or more charts: disk, disk_ops, disk_backlog, disk_util, disk_await, disk_avgsz, disk_svctm, disk_mops, and disk_iotime. In the screenshot below, the disk family sdb shows a few gauges, followed by a few of the associated charts: ![ Screenshot of a disk drive family and associated charts in the Netdata dashboard]() Netdata also creates separate submenu entries for each family in the right navigation page so you can easily navigate to the instance you’re interested in. Here, Netdata has made several submenus under the Disk menu. ![ Screenshot of the disks menu and submenus]() Contexts¶ A context is a way of grouping charts by the types of metrics collected and dimensions displayed. Different charts with the same context will show the same dimensions, but for different instances (families) of hardware/software resources. For example, the Disks section will often use many contexts ( disk.io, disk.ops, disk.backlog, disk.util, and so on). Netdata then creates an individual chart for each context, and groups them by family. Netdata names charts according to their context according to the following structure: [context].[family]. A chart with the disk.util context, in the sdb family, gets the name disk_util.sdb. Netdata shows that name in the top-left corner of a chart. Given the four example contexts, and two families of sdb and sdd, Netdata will create the following charts and their names: And here’s what two of those charts in the disk.io context look like under sdb and sdd families: As you can see in the screenshot, you can view the context of a chart if you hover over the date above the list of dimensions. A tooltip will appear that shows you two pieces of information: the collector that produces the chart, and the chart’s context. Netdata also uses contexts for alarm templates. You can create an alarm for the net.packets context to receive alerts for any chart with that context, no matter which family it’s attached to. Positive and negative values on charts¶¶ Netdata charts automatically zoom vertically, to visualize the variation of each metric within the visible timeframe. A zero-based stacked chart, automatically switches to an auto-scaled area chart when a single dimension is selected. dashboard.js¶ Netdata uses the dashboards.js file to define, configure, create, and update all the charts and other visualizations that appear on any Netdata dashboard. You need to put dashboard.js on any HTML page that’s going to render Netdata charts. The custom dashboards documentation contains examples of such custom HTML pages. 
Generating dashboard.js¶ We build the dashboard.js file by concatenating all the source files located in the web/gui/src/dashboard.js/ directory. That’s done using the provided build script:
cd web/gui
make
If you make any changes to the src directory when developing Netdata, you should regenerate the dashboard.js file before you commit to the Netdata repository.
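The REST API v1 mentioned above exposes the same charts and dimensions programmatically. As a rough sketch, assuming a Netdata agent listening on its default port 19999 on localhost, the following Python snippet pulls the last ten seconds of the system.cpu chart discussed in this section:

```python
import urllib.request

# Default Netdata agent address and port; adjust if your agent listens elsewhere.
URL = ("http://localhost:19999/api/v1/data"
       "?chart=system.cpu&after=-10&format=csv")

with urllib.request.urlopen(URL) as response:
    csv_text = response.read().decode("utf-8")

# The first row lists the chart's dimensions (user, system, softirq, irq, ...);
# each following row holds one point of values from the requested window.
print(csv_text)
```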
https://docs.netdata.cloud/web/
2019-09-15T09:55:35
CC-MAIN-2019-39
1568514571027.62
[array(['https://user-images.githubusercontent.com/1153921/62728232-177e4c80-b9d0-11e9-9e29-2a6c59d4d873.png', 'context_01'], dtype=object) array(['https://user-images.githubusercontent.com/1153921/62728234-1b11d380-b9d0-11e9-8904-07befd8ac592.png', 'context_02'], dtype=object) array(['https://user-images.githubusercontent.com/2662304/48309090-7c5c6180-e57a-11e8-8e03-3a7538c14223.gif', 'positive-and-negative-values'], dtype=object) array(['https://user-images.githubusercontent.com/2662304/48309139-3d2f1000-e57c-11e8-9a44-b91758134b00.gif', 'non-zero-based'], dtype=object) ]
docs.netdata.cloud
Source types for the Splunk Add-on for Microsoft Office 365 The Splunk Add-on for Microsoft Office 365 provides the index-time and search-time knowledge for audit, service status, and service message events in the following formats. This documentation applies to the following versions of Splunk® Supported Add-ons: released
https://docs.splunk.com/Documentation/AddOns/latest/MSO365/Sourcetypes
2019-09-15T10:11:58
CC-MAIN-2019-39
1568514571027.62
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Contact_DeleteByEmailAddress Deletes a Contact record based on the customer's email address. Warning: This API endpoint will eventually be deprecated (existing implementations will still continue to work). We strongly advise using the new CRM v3 REST API endpoints. Here is an example of how to delete a customer record using the new endpoints. Request - Method: SOAP - Server: https://[app key here]-[site_ID here]-apps.worldsecuresystems.com. Take a look at the Authorize your API calls document for more info on how this URL is formed. - Path: /catalystwebservice/catalystcrmwebservice.asmx - The username and password fields: SOAP API calls can be authorized either by using the actual username and password of an Admin user or, if you are making the calls from an app, by leaving the username field empty and using the authorization token as the password. Parameters siteId - ID of the site (integer) username - email address of user account, leave empty if using site token (string) password - password of user account, or site authentication token for specified site (string) To use a site token instead of username/password, send an empty username field and the site token as the password. See example below. Response A Contact_DeleteByEmailAddressResponse object with the following properties: Contact_DeleteByEmailAddressResult - (boolean) Examples Accepts and returns XML as Content-Type. The following is a sample SOAP 1.2 request and response. The placeholders shown need to be replaced with actual values; please note the data in the request and response is only for explanatory purposes.
Request:
POST /catalystwebservice/catalystcrmwebservice.asmx HTTP/1.1
Host: worldsecuresystems.com
Content-Type: application/soap+xml; charset=utf-8
Content-Length: length
<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:
<soap12:Body>
<Contact_DeleteByEmailAddress xmlns="">
<username>[email protected]</username>
<password>Y0urP@ssw0rdH3re</password>
<siteId>12345</siteId>
<email>string</email>
</Contact_DeleteByEmailAddress>
</soap12:Body>
</soap12:Envelope>
Response:
HTTP/1.1 200 OK
Content-Type: application/soap+xml; charset=utf-8
Content-Length: length
<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:
<soap12:Body>
<Contact_DeleteByEmailAddressResponse xmlns="">
<Contact_DeleteByEmailAddressResult>true</Contact_DeleteByEmailAddressResult>
</Contact_DeleteByEmailAddressResponse>
</soap12:Body>
</soap12:Envelope>
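For illustration only, the same SOAP 1.2 request can be sent from Python with the requests library. This is a sketch, not official sample code: the host, site ID, token, and email address are placeholders, and the Contact_DeleteByEmailAddress namespace (elided in the excerpt above) must be filled in from the full Business Catalyst documentation.

```python
import requests

SITE_ID = "12345"            # placeholder site ID
SITE_TOKEN = "your-token"    # placeholder site authentication token
EMAIL = "customer@example.com"

# Host pattern from the docs above; replace the app key and site ID placeholders.
url = "https://appkey-12345-apps.worldsecuresystems.com/catalystwebservice/catalystcrmwebservice.asmx"

# Abbreviated SOAP 1.2 envelope; the body element's xmlns is elided in the excerpt
# above, so fill it in before using this against the real service.
envelope = f"""<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
  <soap12:Body>
    <Contact_DeleteByEmailAddress xmlns="">
      <username></username>
      <password>{SITE_TOKEN}</password>
      <siteId>{SITE_ID}</siteId>
      <email>{EMAIL}</email>
    </Contact_DeleteByEmailAddress>
  </soap12:Body>
</soap12:Envelope>"""

response = requests.post(
    url,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
print(response.status_code)
print(response.text)  # contains Contact_DeleteByEmailAddressResult on success
```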
http://docs.businesscatalyst.com/reference/soap-apis-legacy/crm/contact_deletebyemailaddress.html
2019-09-15T10:13:05
CC-MAIN-2019-39
1568514571027.62
[]
docs.businesscatalyst.com
Application development¶ Getting involved with Plasma Mobile application environment is a perfect opportunity to familiarize with a set of important technologies: Qt, the cross-platform application framework for creating applications that run on various software and hardware platforms with little or no change in the underlying codebase QML, the UI specification and programming language that allows designers and developers to create applications with fluid transitions and effects, which are quite popular in mobile devices. QML is a declarative language offering a highly readable, declarative, JSON-like syntax with support for imperative JavaScript expressions. Qt Quick, the standard library of types and functionality for QML. It includes, among many others, visual types, interactive types, animations, models and views. A QML application developer can get access this functionality with a single import statement. CMake, the cross-platform set of tools designed to build, test and package software, using a compiler-independent method. Kirigami, a set of QtQuick components, facilitating the easy creation of applications that look and feel great on mobile as well as on desktop devices, following the KDE Human Interface Guidelines. Documentation resources¶¶ Create the application template¶ We will use the KDE flatpak SDK to develop and package the app, so all that is required is a working flatpak and flatpak-builder installation. To install flatpak on your workstation, follow the official instructions provided here. First, clone the app template: git clone This repository can be used as a template to develop Plasma Mobile applications. It already includes templates for the qml ui, a c++ part, app metadata and flatpak packaging. Build the application locally¶ # Install the SDK flatpak install flathub org.kde.Sdk//5.12 # Only needs to be done once # Build flatpak-builder flatpak-build-desktop --force-clean --ccache *.json # Start export QT_QUICK_CONTROLS_MOBILE=true QT_QUICK_CONTROLS_STYLE=Plasma # Required for making the application look like started on a phone flatpak-builder --run flatpak-build-desktop *.json hellokirigami If you can see this image: you have successfully created your first Plasma Mobile application! Build the application for the phone¶ Make sure your system supports qemu user emulation. If not, you can find help for example here. flatpak install flathub org.kde.Sdk/arm/5.12 # Only needs to be done once flatpak-builder flatpak-build-phone --repo=arm-phone --arch=arm --force-clean --ccache *.json flatpak build-bundle arm-phone hellokirigami.flatpak org.kde.hellokirigami --arch=arm Now your app is exported into app.flatpak. You can copy the file to the phone using scp: scp app.flatpak [email protected]:/home/phablet/app.flatpak ssh [email protected] flatpak install app.flatpak Your new application should now appear on the homescreen. Customize the application template¶ Edit the files to fit your naming and needs. In each command, replace “io.you.newapp” and “newapp” with the id and name you want to use./io.you.newapp/g'); done Upload application to repository¶). Create a Kirigami application¶ In this tutorial we will use some of the technologies already presented in the application development section. Before starting, you should follow the instructions in that page since the hellokirigami prototype will be used as a skeleton for our development. 
Rename the prototype¶ At first, we will change the name used in the plasma-mobile-app-template from hellokirigami to kirigami-tutorial:/org.kde.kirigami-tutorial/g'); done Objective¶ Our goal is to create a simple prototype of an address book. We need to display a grid of cards that will show the contacts of our phone. Each card should display the name of the contact, her/his mobile phone and the email address. Kirigami Gallery¶ Now that the requirements of our project have been defined we need to find out the technologies that will help us to create the prototype. In this task Kirigami Gallery will be our friend. Kirigami Gallery is an application which uses the features of Kirigami, provides links to the source code, tips on how to use the components as well as links to the corresponding HIG pages. Tip Before continuing please install Kirigami Gallery. It should already be in the repository of your GNU Linux distribution. Find a card grid¶ Navigating through the Kirigami Gallery application, we will stumble upon the “Grid view of cards” gallery component. This is a good candidate that serves our purpose; to display a grid of contact cards. After selecting the “Grid view of cards” gallery component, we will click to the bottom action and we will get some useful information about the Card and Abstract Card types. In this information dialog we will also find a link to the source code of the Cards Grid View. Let’s navigate to this page. Implement the card grid¶ We will reuse the most of the code found in the Cards Grid View Gallery source code page. In particular, we will remove the extra parts of the OverlaySheet (which is the implementation of the Kirigami Gallery that helped us reach the kirigami-gallery source code repository). So, we are going to substitute the Page component of main.qml of the skeleton app with the below Scrollable Page: Kirigami.ScrollablePage { title: "Address book (prototype)" Kirigami.CardsGridView { id: view model: ListModel { id: mainModel } delegate: card } } What we have done so far is to create a ScrollablePage and put into it a CardsGridView, since we want to display a grid of Cards generated from a model. The data of each contact is provided by a ListModel while the card delegate is responsible for the presentation of the data. For more info about models and views in Qt Quick, see here. Now let’s populate the model that will feed our grid view with data. In Kirigami.ScrollablePage definition, just after: delegate: card } add the below: Component.onCompleted: { mainModel.append({"firstname": "Pablo", "lastname": "Doe", "cellphone": "6300000002", "email" : "[email protected]", "photo": "qrc:/konqi.jpg"}); mainModel.append({"firstname": "Paul", "lastname": "Adams", "cellphone": "6300000003", "email" : "[email protected]", "photo": "qrc:/katie.jpg"}); mainModel.append({"firstname": "John", "lastname": "Doe", "cellphone": "6300000001", "email" : "[email protected]", "photo": "qrc:/konqi.jpg"}); mainModel.append({"firstname": "Ken", "lastname": "Brown", "cellphone": "6300000004", "email" : "[email protected]", "photo": "qrc:/konqi.jpg"}); mainModel.append({"firstname": "Al", "lastname": "Anderson", "cellphone": "6300000005", "email" : "[email protected]", "photo": "qrc:/katie.jpg"}); mainModel.append({"firstname": "Kate", "lastname": "Adams", "cellphone": "6300000005", "email" : "[email protected]", "photo": "qrc:/konqi.jpg"}); } The model part of our implementation is ready. Let’s proceed to defining a delegate that will be responsible for displaying the data. 
So, we add the below code to the main.qml page, just after the Component.onCompleted definition: } } } } Following the relative information in the api page we populate a “banner” (although without an image yet), that will act as a header that will display the name of the contact as well as a contact icon. The main content of the card has been populated with the cell phone number and the email of the contact, structured as a column of labels. The application should look like this: Tip You can find the full source code of the tutorial at invent.kde.org. As a last step we will add some dummy functionality to each card. In particular, a “call” action will be added. Nevertheless, instead of a real call, a passive notification will be displayed. So, let’s change the card Component to the below: } } actions: [ Kirigami.Action { text: "Call" icon.name: "call-start" onTriggered: { showPassiveNotification("Calling " + model.firstname + " " + model.lastname + " ...") } } ] } } So, we added an action that, as soon as it is triggered (by pressing the action button), a passive notification is displayed. Finally, our application should look like this:
https://docs.plasma-mobile.org/AppDevelopment.html
2019-09-15T09:54:27
CC-MAIN-2019-39
1568514571027.62
[]
docs.plasma-mobile.org
kmeans Description Partitions the events into k clusters, with each cluster defined by its mean value. Each event belongs to the cluster with the nearest mean value. Performs k-means clustering on the list of fields that you specify. If no fields are specified, performs the clustering on all numeric fields. Events in the same cluster are moved next to each other. You have the option to display the cluster number for each event. Syntax kmeans [kmeans-options...] [field-list] Required arguments None. Optional arguments - field-list - Syntax: <field> ... - Description: Specify a space separated list of the exact fields to use for the join. - Default: If no fields are specified, uses all numerical fields that are common to both result sets. Skips events with non-numerical fields. - kmeans-options - Syntax: <reps> | <iters> | <t> | <k> | <cnumfield> | <distype> | <showcentroid> - Description: Options for the kmeans command. kmeans options - reps - Syntax: reps=<int> - Description: Specify the number of times to repeat kmeans using random starting clusters. - Default: 10 - iters - Syntax: maxiters=<int> - Description: Specify the maximum number of iterations allowed before failing to converge. - Default: 10000 - t - Syntax: t=<num> - Description: Specify the algorithm convergence tolerance. - Default: - k - Default: k=2 - cnumfield - Syntax: cfield=<field> - Description: Names the field to annotate the results with the cluster number for each event. - Default: CLUSTERNUM - distype - Syntax: dt= ( l1 | l1norm | cityblock | cb ) | ( l2 | l2norm | sq | sqeuclidean ) | ( cos | cosine ) - Description: Specify the distance metric to use. The l1, l1norm, and cb distance metrics are synonyms for cityblock. The l2, l2norm, and sq distance metrics are synonyms for sqeuclidean or sqEuclidean. The cos distance metric is a synonym for cosine. - Default: sqeuclidean - showcentroid - Syntax: showcentroid= true | false - Description: Specify whether to expose the centroid centers in the search results (showcentroid=true) or not. - Default: true Usage Limits The number of clusters to collect the values into -- k -- is not permitted to exceed maxkvalue. The maxkvalue is specified in the limits.conf file, in the [kmeans] stanza. The maxkvalue default is 1000. When a range is given for the k option, the total distance between the beginning and ending cluster counts is not permitted to exceed maxkrange. The maxkrange is specified in the limits.conf file, in the [kmeans] stanza.
https://docs.splunk.com/Documentation/Splunk/7.3.1/SearchReference/Kmeans
2019-09-15T10:28:10
CC-MAIN-2019-39
1568514571027.62
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Private package registries Are you sharing code between your different projects? Depfu supports all package registries you might use for your own private shared libraries. - For Ruby we support all external sources. That could be self-hosted gem servers, like Geminabox and Gemstash, or SaaS registries like Gemfury and packagecloud. Also public 3rd party sources like Rails Assets and private sources like Sidekiq Pro/Enterprise are supported. - For JavaScript we support private packages on npmjs.org and dedicated external registries, be it the above-mentioned SaaS registries or self-hosted options like your own NPM registry and tools like Verdaccio. Let’s have a look at how that works in detail: Detection Depfu auto-detects any registries you are using that are not the official rubygems.org or npmjs.org. The way this works depends on the language: In Ruby the additional sources are part of your Gemfile, usually like this:
source ''
gem 'rake'
# all my private gems:
source '' do
  gem 'myprivate_gem', '~> 3.7'
end
Here we would detect that you’re using another gem source and check if we can access it or if it needs authentication. In JavaScript detecting private registries is actually a bit more tricky, since this information is not part of the package.json or lockfile. We rely on a checked-in .npmrc to do the detection. In this file you can tell npm how to authenticate and which package scopes should live in what registry:
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
@flowbyte:registry=
From this example we would know that the scoped @flowbyte packages live in the “external-registry” and there might be additional private packages on npmjs.org, which Depfu will find later. Authentication In most cases these external registries need some kind of authentication, since you want to use them for sharing private code as opposed to code that could be open-source. That means that we rely on you to also give us access to these external sources. This is the same as giving your CI system access in order to install the packages and run your tests. Most external sources have a way of only giving read access, which is all we need for Depfu. After we detect an external source, we ask you to tell us how to authenticate. That could be username:password or, usually, some kind of token. You can enter this information in the settings for your organization: In the case of Bundler we actually need the auth information to continue sending you updates at all, even for public gems. Bundler constructs a full dependency tree to run any update and your private gems are part of that as well. Apart from monkey-patching Bundler heavily, which we obviously want to avoid, there is no way for us to ignore your private gems while sending you updates for public gems. Polling and Updates After we have access to an external source, we start polling it regularly and will send you updates via pull requests the exact same way as for your public dependencies. We intentionally have some delay in the system, so don’t be surprised if the pull requests don’t come in right after you released a new version of your private library.
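As an illustration of the JavaScript detection step (this is not Depfu’s actual code), a checked-in .npmrc can be scanned for scope-to-registry mappings with a few lines of Python:

```python
import re
from pathlib import Path

def scoped_registries(npmrc_path: str = ".npmrc") -> dict:
    """Return a mapping of package scope -> registry URL found in an .npmrc file."""
    registries = {}
    for line in Path(npmrc_path).read_text().splitlines():
        # Lines look like: @flowbyte:registry=https://external-registry.example.com/
        match = re.match(r"^(@[\w.-]+):registry=(\S+)", line.strip())
        if match:
            registries[match.group(1)] = match.group(2)
    return registries

print(scoped_registries())  # e.g. {'@flowbyte': 'https://external-registry.example.com/'}
```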
https://docs.depfu.com/article/31-private-package-registries
2019-09-15T10:47:46
CC-MAIN-2019-39
1568514571027.62
[array(['https://depfu.com/images/posts/private_registries.png', None], dtype=object) array(['https://depfu.com/images/posts/pull_request_privategem.png', None], dtype=object) ]
docs.depfu.com
Create a new expense line Typically, expense lines are automatically generated based on assets or users, but you can create a new expense line manually if needed. Before you begin Role required: asset or contract_manager Procedure Navigate to Contract Management > Contract > All. Select a contract. In the Expense Lines related list, click New. Complete the form. Click Submit. Related tasks: Generating expense lines based on assets or users; View contract expense lines
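If you need to create many expense lines rather than a single one through the form, the same record can usually be inserted through the ServiceNow Table API. The sketch below is illustrative only: the instance URL and credentials are placeholders, the expense line table is assumed to be fm_expense_line, and the field names should be verified against your instance.

```python
import requests

INSTANCE = "https://your-instance.service-now.com"   # placeholder instance URL
AUTH = ("api.user", "api.password")                  # placeholder credentials

# Assumed table name for expense lines; confirm it on your instance.
url = f"{INSTANCE}/api/now/table/fm_expense_line"

payload = {
    "short_description": "Manually created expense line",  # assumed field names
    "amount": "150.00",
    "contract": "CNTR0010001",   # reference to the contract record, as in the form
}

response = requests.post(url, auth=AUTH, json=payload,
                         headers={"Accept": "application/json"})
response.raise_for_status()
print(response.json()["result"]["sys_id"])
```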
https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/contract-management/task/t_CreatingANewExpenseLine.html
2019-09-15T10:31:20
CC-MAIN-2019-39
1568514571027.62
[]
docs.servicenow.com
Use these settings as a starting point if you are setting up your instance for the first time or if you have recently enabled UI16. Before you begin To prepare for completing basic configuration, gather the following information. Obtain the company banner image to use in the header. The image can be high resolution, but when it displays it is scaled based on the aspect ratio. It scales to a maximum of 20px high. Get the brand color hex or RGB numbers of your company, typically from your marketing department. Use them to decide how to configure the UI background colors. Role required: admin. Overview of the basic procedures for setting up your ServiceNow instance. About this task Each color selection option provides a color picker to select a color. The text box beside the color picker lets you enter the value of the color in any of the following CSS formats: Name: predefined color names, such as red, green, blue. RGB decimal: RGB(102, 153, 204). RGB hex: #223344. Refer to HTML Color Names (W3Schools) for information about HTML color names. Procedure Navigate to System Properties > Basic Configuration UI16. Complete the configuration by changing any of the following settings. Table 1. Basic system configuration properties (Label / Property / Description):
Page header caption / glide.product.description / Change the text that appears next to your logo.
Browser tab title / glide.product.name / Change the text that appears on the browser tab.
System timezone for all users unless overridden in the user's record / glide.sys.default.tz / Select the timezone in the choice list. Click Configure available time zones to select the time zones that your users can select from in user preferences.
Banner image for UI16 / glide.product.image.light / Click + next to the image and upload your logo.
Date format, Time format / glide.sys.date_format, glide.sys.time_format / Select the date and time formats from the choice lists.
Header background color / css.$navpage-header-bg / Select or enter the color.
Banner text color / css.$navpage-header-color / Select or enter the color.
Header divider stripe color / css.$navpage-header-divider-color / Select or enter the color.
Navigation header/footer and navigation background expanded items / css.$navpage-nav-bg / Select or enter the color.
Navigation selected tab background color / css.$navpage-nav-selected-bg / Select or enter the color.
Navigation expanded items highlight background / css.$nav-highlight-main / Select or enter the color.
Navigation separator color / css.$nav-hr-color / Select or enter the color.
Background for navigator and sidebars / css.$navpage-nav-bg-sub / Select or enter the color.
Module text color for UI16 / css.$navpage-nav-unselected-color / Select or enter the color.
Currently selected Navigation tab icon color for UI16 / css.$navpage-nav-selected-color / Select or enter the color. Also controls the color of the module that is currently in focus.
Related reference: Istanbul CSS class support
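These basic configuration settings are stored as system properties, so they can also be inspected or changed through the Table API on the sys_properties table. The following Python sketch is illustrative only; the instance URL and credentials are placeholders.

```python
import requests

INSTANCE = "https://your-instance.service-now.com"   # placeholder instance URL
AUTH = ("admin", "admin.password")                   # placeholder credentials

# Look up the "Page header caption" property described in the table above.
resp = requests.get(
    f"{INSTANCE}/api/now/table/sys_properties",
    auth=AUTH,
    params={"sysparm_query": "name=glide.product.description",
            "sysparm_fields": "sys_id,name,value"},
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
prop = resp.json()["result"][0]
print(prop["name"], "=", prop["value"])

# Update the caption text (equivalent to editing it on the Basic Configuration page).
requests.patch(
    f"{INSTANCE}/api/now/table/sys_properties/{prop['sys_id']}",
    auth=AUTH,
    json={"value": "My Company Portal"},
    headers={"Accept": "application/json"},
).raise_for_status()
```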
https://docs.servicenow.com/bundle/istanbul-platform-user-interface/page/administer/navigation-and-ui/task/t_ConfigureLogoColorsSysDfltsUI16.html
2019-09-15T10:34:54
CC-MAIN-2019-39
1568514571027.62
[]
docs.servicenow.com
Slack Notifications You can set up Wallarm to send notifications to your Slack channel. Notifications can be set up for the following events: - System-related: - new user created; - integration settings changes. - Vulnerability detected. - Network perimeter changed. Set Up Notifications - Open the Settings → Integrations tab. - Click Add integration. - Click Slack. - Go to the WebHooks link. - Select the Slack channel that will receive notifications. Click Add Incoming WebHooks integration. - Copy the link and put it in Wallarm into the WebHook link field. - Enter the integration name and select the events that trigger the notifications. - Click Create. Disabling Notifications - Select an integration on the Integrations tab. - Click Disable. - Click Save. Removing Integration - Select an integration on the Integrations tab. - Click Remove. - Click Sure?.
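The WebHook link you paste into Wallarm is a standard Slack incoming webhook, which accepts JSON messages over HTTPS. If notifications do not arrive, you can test the webhook itself (independently of Wallarm) with a short Python check; the webhook URL below is a placeholder.

```python
import requests

# Placeholder: use the WebHook link you copied from Slack's Incoming WebHooks page.
WEBHOOK_URL = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"

resp = requests.post(WEBHOOK_URL, json={"text": "Test message: webhook is reachable."})
print(resp.status_code, resp.text)  # Slack replies with 200 and "ok" when the hook works
```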
https://docs.wallarm.com/en/user-guides/cloud-ui/settings/integrations/slack.html
2019-09-15T09:39:47
CC-MAIN-2019-39
1568514571027.62
[]
docs.wallarm.com
. If you set a TLS version as default, it will be used by both the application and the New Relic agent. You cannot use a different TLS version for each. To enable a specific TLS version protocol: - Step 1. Enable TLS protocols in Windows registry. Older versions of Windows Server (2008/2012) may not have TLS 1.1/1.2 support enabled by default. Follow these steps carefully. Serious problems may occur if you modify the registry incorrectly. Recommendation: Before you modify the registry, make a backup. Here's an example of how to update Windows registry to TLS 1.2. This requires TLS to be enabled for the Client role, because your server is connecting as a client to New Relic. Copy and paste the following into a file: - Save the file with a .regextension. - Run the script. - Step 2. Turn on .NET default protocols .NET Frameworks 4.5 or lower use protocols SSL v3 and TLS 1.0 by default. After you enabled TLS 1.1 or 1.2 via the registry, you still need to change the default protocols used by .NET. Choose one of the following options: - Enable strong crypto property in Windows registry Follow these steps carefully. Serious problems may occur if you modify the registry incorrectly. Recommendation: Before you modify the registry, make a backup. Adding the SchUseStrongCryptovalue to the .NET Framework registry keys will allow all .NET apps to use TLS 1.1 or 1.2. Both regkeys will need to be modified to ensure that both 32bit and 64bit .NET applications are able to use TLS 1.1/1.2. Copy and paste the following into a file: Windows Registry Editor Version 5.00 - Save the file with a .regextension. - Run the script. - Include protocol in your app code You can change .NET's default security protocols by modifying your application's source code. The following command enables TLS 1.2, 1.1, and 1.0 as default protocols for your application. It's a global setting and should be set early in your application's start-up. You can modify it to enable the specific protocols you want. System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls;.
https://docs.newrelic.com/docs/agents/net-agent/troubleshooting/no-data-appears-after-disabling-tls-10
2020-07-02T17:18:41
CC-MAIN-2020-29
1593655879532.0
[]
docs.newrelic.com
cpf.checkTransition( $docid as String, $transition as element(*, p.transition)? )
const cpf = require('/MarkLogic/cpf/cpf');
declareUpdate();
cpf.checkTransition('/myDocs/example.xml', cpf.transition);
http://docs.marklogic.com/cpf.checkTransition
2020-07-02T14:34:25
CC-MAIN-2020-29
1593655879532.0
[]
docs.marklogic.com
The estimated time to complete a repair cycle is based on the grace period (gc_grace_seconds) on your tables. The default for gc_grace_seconds is 10 days (864000 seconds). OpsCenter provides an estimate by checking gc_grace_seconds across all tables and calculating 90% of the lowest value. The default estimate for the time to completion based on the typical grace seconds default is 9 days. For more information about configuring grace seconds, see gc_grace_seconds in the CQL documentation.
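The arithmetic behind the estimate is simple; a short Python check with the default value reproduces the 9-day figure:

```python
GC_GRACE_SECONDS = 864000                    # default gc_grace_seconds: 10 days
estimate_seconds = 0.9 * GC_GRACE_SECONDS    # OpsCenter uses 90% of the lowest value
print(estimate_seconds / 86400)              # 9.0 days
```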
https://docs.datastax.com/en/opscenter/6.5/opsc/online_help/services/repairServiceEstimate.html
2020-07-02T17:02:37
CC-MAIN-2020-29
1593655879532.0
[]
docs.datastax.com
TreeList.SortedColumnCount Property Gets the number of columns involved in sorting. Namespace: DevExpress.XtraTreeList Assembly: DevExpress.XtraTreeList.v20.1.dll Declaration
[Browsable(false)]
public int SortedColumnCount { get; }
<Browsable(False)>
Public ReadOnly Property SortedColumnCount As Integer
Property Value Remarks The Tree List control gives you the ability to sort its data by the values of multiple columns. This is applied by setting the TreeListColumn.SortOrder property of individual columns. You can use the SortedColumnCount property together with the TreeList.GetSortColumn method to traverse through the columns involved in sorting. Examples The following sample code uses the TreeList.GetSortColumn method and the TreeList.SortedColumnCount property.
https://docs.devexpress.com/WindowsForms/DevExpress.XtraTreeList.TreeList.SortedColumnCount
2020-07-02T17:01:21
CC-MAIN-2020-29
1593655879532.0
[]
docs.devexpress.com
. Cloud Release Date: 12/1/2018 This is the initial GA release of Live Forms v8.0.0. It will be deployed to the frevvo Cloud on 12/1/2018. This is a Cloud Only release. Flow Step Properties[TIP-22341] Refresh Searchable Fields: The Data Source pane in the designer will now use the label annotation instead of the name in XSD elements and attributes when importing schemas into forms/flows. [TIP-222 Explorer 11 browser. [TIP-23110] .
https://docs.frevvo.com/d/pages/viewpage.action?pageId=21535657
2020-07-02T16:00:50
CC-MAIN-2020-29
1593655879532.0
[array(['/d/images/icons/linkext7.gif', None], dtype=object)]
docs.frevvo.com
1 Introduction Each application has a log and log messages to monitor the health of the running of the application. Log levels are used to distinguish the log messages and to highlight the highest priority ones so that they can receive the immediate intervention they require. This how-to will teach you how to do the following: - Configure the log levels for the various occurrence of logging within your app 2 Logging Basics 2.1 Log Messages Log messages are notes that appear in the log of your Mendix application that present contextualized and detailed information such as the following: - Date/time the log was created - Level - Node - Detailed message - Stack trace 2.1.1 Log Node The log node name defines the source of the log message. For example, in a log message from the e-mail module, the log name would appear as Email Module. 2.1.2 Message Most messages in the log are auto-generated by the system (for example, Mendix Runtime successfully started, the application is now available). However, for logging that has been created via a microflow, log messages can be customized by the developer. Customized log messages are created by defining a template. The template is the structure of the log’s message, and can be composed of parameters and free text. In the image above, the template for the message is Email not sent to customer {1}. With this example template, when the error occurs, the customer’s full name is inserted into the parameter placeholder {1} (for example, the log message would be Email not sent to customer John Smith). Accordingly, the log message is customized to the data that is specific to the situation. 2.1.4 Stack Trace The stack trace is a list of method calls from the point when the application was started to the point where the exception occurred. In the Modeler, log messages that include a stack trace are marked with a paperclip icon. Double-clicking this icon shows the stack trace. 2.2 Level The log level defines the severity of the log message. In the Modeler, this is represented by different colors and an icon. These are the log levels used by Mendix: 3 Setting the Log Levels In this section of the how-to, you will learn how to configure the log levels of the logging messages produced by the system. The different levels highlighted in 3.2 Level can be applied to custom logging and to the predefined logging produced by the Mendix Modeler. This section will define how to configure both the log levels in custom logging and the predefined logging created by the Modeler. 3.1 Advanced Features of the Console To access the advanced features of the console, follow these steps: - Open the Console. - Click Advanced to open the drop-down menu of advanced options. - Click Set log levels. 3.2 Configuring the Log Levels To select the level on which a log node will log messages, follow these steps: - Set the relevant Log Node. - Open the Info drop-down menu in the Log Level column. - From the drop-down menu, select the correct level. 3.3 Configuring Custom Log Levels To set the level of custom log messages that you have created via a microflow, follow these steps: - Open the microflow in which you intend to change the log messag level. - Double-click the log message activity. - In the Log level drop-down menu, select the correct level. 
4 - How to Find the Root Cause of Runtime Errors - How to Clear Warning Messages in Mendix - How to Test Web Services Using SoapUI - How to Monitor Mendix Using JMX - How to Log Levels - How to Debug Microflows - How to Debug Microflows Remotely - How to Debug Java Actions - How to Debug Java Actions Remotely - How to Handle Common Mendix SSO Errors
https://docs.mendix.com/howto7/monitoring-troubleshooting/log-levels
2020-07-02T16:41:43
CC-MAIN-2020-29
1593655879532.0
[array(['attachments/18448575/18580031.png', None], dtype=object) array(['attachments/18448575/18580030.png', None], dtype=object) array(['attachments/18448575/18580029.png', None], dtype=object) array(['attachments/18448575/18580028.png', None], dtype=object)]
docs.mendix.com
Choosing a sourceAnchor for Multi-Forest Sync with AAD Connect - Part 1, Introduction Update 1, Introduction Part 3, An Aside on EmployeeID Part 4, Using msDS-SourceAnchor Part 5, Using mS-DS-ConsistencyGuid Part 6, Moving off objectGuid Before We Begin Today I'll be starting a small series of blog posts that discuss the issue of choosing a sourceAnchor for Multi-Forest sync with AAD Connect into Azure Active Directory. This is a topic that has been discussed by other bloggers but after looking at it myself, I don't think the whole story is out there. I've heard folks touch on the reason it's important - for cases where users may be migrated between Active Directory Forests - but the steps to successfully migrate those users also seem hard to come by. My intent with this series is to explain the problem, explain the options you have for a solution, to step through those options and finally, to describe the steps you need to successfully migrate users from one Active Directory Forest to another without damaging their Azure Active Directory cloud identity. In this post I'll assume you've seen the AAD Connect setup wizard before. If you haven't, read this - What is the sourceAnchor? Referring to the document mentioned above - " The attribute sourceAnchor is an attribute that is immutable during the lifetime of a user object. It is the primary key linking the on-premises user with the user in Azure AD" Where's the Problem? If you step through the AAD Connect setup wizard in either Express mode or in Custom mode while leaving the sourceAnchor configured with the default, you'll be using the objectGuid - On the surface, objectGuid seems reasonable because it's unique for each object in the Forest. In a Multi-Forest scenario, there's a very slim chance that two objects from different Forests may have the same objectGuid but the number of possible GUIDs is so high that it's really not a concern. The issue comes when user migrations occur. Consider this - Here we have users from Forest1 and Forest2 synchronised to Azure Active Directory. Their sourceAnchor on-premises and in Azure Active Directory match. Let's migrate a user - Here, the Forest2 user, originally with a sourceAnchor value of srcAnc03 arrives in Forest1. The migration process actually creates a new user object (which comes with a new objectGuid) and then copies other attribute values across. The result is a user object that looks the same but with a different immutable value. AAD Connect cannot sync the migrated user object with the one that already exists in Azure Active Directory due to the sourceAnchor difference. The best (or worst?) case scenario is that a new object will be created in Azure Active Directory assuming the migrated user account doesn't conflict on any important attributes such as the UPN (unlikely). The Solution It's probably pretty clear that we need to choose an attribute other than objectGuid that will persist across inter-Forest migrations. A generally accepted practice is to populate the mS-DS-ConsistencyGuid attribute which is NULL by default. With the Windows Server 2016 schema, a new attribute called msDS-SourceAnchor has been added specifically for this purpose. The implementation is not as simple as choosing either of them as you step through the AAD Connect setup wizard. There is extra work to do and because the attribute syntax of each attribute is slightly different, the steps to using them are slightly different also.
In the posts that follow, I'll explain each choice and also the changes you'll need to make if you've already configured AAD Connect with objectGuid vs. a new deployment where you decide on either mS-DS-ConsistencyGuid or msDS-SourceAnchor from the get-go. Conclusion objectGuid is a poor choice for the AAD Connect sourceAnchor, especially in multi-Forest environments where user migrations are likely to occur.
https://docs.microsoft.com/en-us/archive/blogs/markrenoden/choosing-a-sourceanchor-for-multi-forest-sync-with-aad-connect-part-1-introduction
2020-07-02T16:22:40
CC-MAIN-2020-29
1593655879532.0
[]
docs.microsoft.com
#include <opencv2/ximgproc/edgepreserving_filter.hpp> Smoothes an image using the Edge-Preserving filter. The function smoothes Gaussian noise as well as salt & pepper noise. For more details about this implementation, please see [ReiWoe18] Reich, S. and Wörgötter, F. and Dellen, B. (2018). A Real-Time Edge-Preserving Denoising Filter. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, 85-94, 4. DOI: 10.5220/0006509000850094. #include <opencv2/ximgproc.hpp> Performs thresholding on input images using Niblack's technique or some of the popular variations it inspired.
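A minimal sketch of calling these two functions from Python, assuming the opencv-contrib-python package (which ships the cv2.ximgproc module); the file names and parameter values below are illustrative only:

import cv2

# Load an image and a grayscale copy; the paths are placeholders.
img = cv2.imread("input.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge-preserving smoothing: second argument is the window diameter d,
# third is the similarity threshold described above.
smoothed = cv2.ximgproc.edgePreservingFilter(img, 9, 20.0)

# Niblack-style local thresholding: maxValue, threshold type, block size, k.
binary = cv2.ximgproc.niBlackThreshold(gray, 255, cv2.THRESH_BINARY, 25, -0.2)

cv2.imwrite("smoothed.png", smoothed)
cv2.imwrite("binary.png", binary)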
https://docs.opencv.org/trunk/df/d2d/group__ximgproc.html
2020-07-02T16:57:59
CC-MAIN-2020-29
1593655879532.0
[]
docs.opencv.org
- 2014 update (4.6) will be deployed on Thursday, 13th of November. Calendars Subscriptions & News Alerts Calendar Subscription makes it easy to sync calendars with just about any device (iOS, Android, Windows Phone) or software (Outlook, Google Calendar). After significant testing, configuration and development, calendar subscription is now compatible with most devices and software clients. Calendar pages let you subscribe using iCal or access an RSS feed. Announcements, special announcement and blog pages let you subscribe to email alerts or access an RSS feed. New Editing Buttons There's a new design for editing buttons on SharePoint 2013 websites. When you hover over content areas editing buttons will appear wherever you need them. The new design makes it much easier for content editors to see the website without the clutter of editing buttons. We piloted the new design with a number of customers and had a great response from site editors, new and seasoned. Consider Notifying Website Editors Editing buttons now appear once you hover over content areas wherever they are needed. While not an issue for new users, existing users may take a second to find and transition to the new style. Student Portfolios - Add, Lock & Showcase Student Portfolios gets some top requested features: - Use a smartphone or tablet to take a photo, or add photos and other files in an instant. Desktop users can also create simple posts with this new streamlined process. - Teachers can easily showcase student work. Just click the star to curate and highlight student achievements. - Teachers can lock items to prevent students from modifying content. Items created by a teacher are locked by default. Simply select the padlock to lock or unlock an item. - You can now categorize posts. Filtering by category coming soon. This release includes a number of critical SharePoint & iOS 8 bug fixes as well as some general enhancements to the mobile and tablet experience. My Classes & My Groups Experience My Classes and My Groups site lists are now faster and more consistent. Created, followed or deleted sites will now appear almost instantly. Whether you add a group or an individual user to your site, My Classes and My Groups now behave the same. Auto-follow only applies to individual users. Consider Notifying Class & Group Admins My Classes and My Groups now list all sites where you have been added as part of an Active Directory group. As a result Class and Group Admin users may now see many more sites listed on the Classes & Groups page; they are owners of all class or group sites. End-users may also notice some additional sites now showing up on the Classes & Groups page. Since the inclusion of Site Index in the 4.4 update, July 2014, My Classes and My Groups only listed sites where you were added as an individual user. Now users see all sites where you are a student, contributor or owner (as previously). Staff Directory It's now incredibly easy to add a Staff Directory on your website or portal. Just edit the page and then Add Web Part. Staff Directories are one of the most requested features on websites and portals; they're a great way to provide valuable information with almost no effort. It's now incredibly easy to hide users from the Staff Directory. Simply edit the page and check a box, no need to modify Active Directory.
New & Easier Content Templates We've added new list templates and made it easier to create lists to store your information. Custom List View templates let you easily display and manage content in a touch and mobile friendly way. Content templates can be used to display lists of documents grouped by category, create an electronic forms repository, or just a simple list of links. We'll be continuing to add and enhance the available templates in future updates. A few current plans include Bell times and Jobs templates for websites and Away today for staff rooms. Home Drive Completely reworked, Home Drive brings an improved experience and design, as well as improved mobile and tablet support. Home Drive now provides configurable views and thumbnails, file previews for photos and videos, multi-select and much more. Enhancements & Fixes - iOS users can now upload photos and video files in the SharePoint rich text editor. - Uploading photos and videos from iOS does not overwrite the previous file. - Fix for SharePoint Store apps including the OneNote Class Notebook Creator. See Introducing OneNote class notebooks for more. - Mobile navigation on websites now displays navigation up to three levels deep (SharePoint 2013). - If no list exists the Featured Stories app now displays a link (when editing the page) to easily create a list to store images. - Transportation special announcements are now yellow with a yellow bus icon (SharePoint 2013). - Improvements and fixes for the Twitter App on portals (SharePoint 2013). - SharePoint\system is no longer displayed in Site Manager (class & group sites). - Teacher public sites now correctly display assignments based on the display and expiry dates. - Settings cog no longer opens behind the ribbon. - Search is no longer used to aggregate for the My Classes and My Groups feature. - Portal footers now display fax numbers from site information. - Edit content now uses the full width of the dialog to display content. - The Home Drive page no longer displays left navigation (SharePoint 2013). - Website calendar and announcements title font sizes have been reduced to allow longer titles. - Students added in Student Portfolios will now auto-follow the class. - Fixed Windows Update bug affecting SharePoint 2010 timer jobs, My classes and My groups features. - Auto-follow only activates when a user is first added to a class or group (SharePoint 2010). - Active Directory Groups can now be used as site owners (Class & Group sites). Requires a manual Site Manager update to apply to existing sites. - OneDrive for Business banner and terminology removed from the OneDrive site. - Links within the Edit Content Dialog are no longer forced to open in a new tab. - Yellow has been changed to Royal Yellow for readability purposes. - Featured Stories and Featured Links URL field no longer truncates long URLs. - The Site Manager Access Tab now sorts users alphabetically (based on display name). - Users that do not have permission to create a Blog or Personal site will now receive a message. - Accessing the OneDrive site will no longer display an un-styled page during page load.
- The My Groups list will now correctly highlight the default tab upon page load. - People search results have been improved. - Fix for iCal support on Google Calendar (Google breaking change). - iCal subscription no longer fails in regions without daylight saving time. Other Notes & Resources - iOS 8 currently contains a bug preventing live camera photos from being uploaded when using Mobile Safari. The App may crash, Chrome for iOS does not experience this issue. We expect Apple to address this issue in a future iOS update. - Browser memory requirements for Home Drive means older tablets (such as 2nd gen. iPads) and smartphones are not supported. We're also continuing to create new user documentation and technical resources: - OneDrive Sync - User Adoption and Engagement Guide - Automatic Authentication...Banishing login prompts - POODLE Security Notice - Antivirus Performance Optimisation - ShellSchock Security Bug, iOS 8 bugs, and Sunsetting SHA-1 Certificates More Information If you have any questions regarding the update process please see our original notice.
https://docs.scholantis.com/display/RRN/4.6+Release+Notes
2020-07-02T16:21:50
CC-MAIN-2020-29
1593655879532.0
[]
docs.scholantis.com
1. Click the Time icon. 2. Choose the expense date, enter a description for the time claim and select the appropriate category. 3. Enter the time worked and select whether this is in hours, days or weeks. 4. If your Account Administrator has set a fixed rate for you then you can select this from the Rate dropdown. Alternatively, if fixed rates have not been set for you then you can input the rate yourself. 5. The time claim will either be reimbursable or billable. Tick the appropriate box to reflect this. 6. If you have any receipts that you need to attach to the claim, click the receipt icon. 7. If you need to add extra information to the claim then you can click Add Additional Information. 8. If you submit this time claim on a regular basis then you may wish to add this claim to your favourites. Click Create to add your time claim to your account. The time claim will be sent to your Draft expense section.
https://docs.expensein.com/en/articles/2212968-create-a-time-expense
2020-07-02T14:55:18
CC-MAIN-2020-29
1593655879532.0
[array(['https://downloads.intercomcdn.com/i/o/71067964/0243bde25932d65d0c8c03c5/Screen+Shot+2018-08-08+at+08.59.39.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/71078674/c3cd7c454ac715962e6e12f0/Time+claim.png', None], dtype=object) ]
docs.expensein.com
Distributed PyTorch¶ The RaySGD PyTorchTrainer simplifies distributed model training for PyTorch. The PyTorchTrainer is a wrapper around torch.distributed.launch with a Python API to easily incorporate distributed training into a larger Python application, as opposed to needing to wrap your training code in bash scripts. Under the hood, PytorchTrainer will create replicas of your model (controlled by num_replicas), each of which is managed by a Ray actor. For end to end examples leveraging RaySGD PyTorchTrainer, jump to PyTorchTrainer Examples. Setting up training¶ The PyTorchTrainer can be constructed with functions that wrap components of the training script. Specifically, it requires constructors for the Model, Data, Optimizer, Loss, and lr_scheduler to create replicated copies across different devices and machines. from ray.util.sgd import PyTorchTrainer trainer = PyTorchTrainer( model_creator, data_creator, optimizer_creator, loss_creator=nn.MSELoss, scheduler_creator=scheduler_creator, config={"lr": 0.001}) The below section covers the expected signatures of creator functions. Jump to Putting things together. Model Creator¶ This is the signature needed for PyTorchTrainer(model_creator=...). import torch.nn as nn def model_creator(config): """Constructor function for the model(s) to be optimized. You will also need to provide a custom training function to specify the optimization procedure for multiple models. Args: config (dict): Configuration dictionary passed into ``PyTorchTrainer``. Returns: One or more torch.nn.Module objects. """ return nn.Linear(1, 1) Optimizer Creator¶ This is the signature needed for PyTorchTrainer(optimizer_creator=...). import torch def optimizer_creator(model, config): """Constructor of one or more Torch optimizers. Args: models: The return values from ``model_creator``. This can be one or more torch nn modules. config (dict): Configuration dictionary passed into ``PyTorchTrainer``. Returns: One or more Torch optimizer objects. """ return torch.optim.SGD(model.parameters(), lr=config.get("lr", 1e-4)) Data Creator¶ This is the signature needed for PyTorchTrainer(data_creator=...). from ray.util.sgd.pytorch.examples.train_example import LinearDataset def data_creator(config): """Constructs torch.utils.data.Dataset objects. Note that even though two Dataset objects can be returned, only one dataset will be used for training. Args: config: Configuration dictionary passed into ``PyTorchTrainer`` Returns: One or Two Dataset objects. If only one Dataset object is provided, ``trainer.validate()`` will throw a ValueError. """ return LinearDataset(2, 5), LinearDataset(2, 5, size=400) Loss Creator¶ This is the signature needed for PyTorchTrainer(loss_creator=...). import torch def loss_creator(config): """Constructs the Torch Loss object. Note that optionally, you can pass in a Torch Loss constructor directly into the PyTorchTrainer (i.e., ``PyTorchTrainer(loss_creator=nn.BCELoss, ...)``). Args: config: Configuration dictionary passed into ``PyTorchTrainer`` Returns: Torch Loss object. """ return torch.nn.BCELoss() Scheduler Creator¶ Optionally, you can provide a creator function for the learning rate scheduler. 
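This is the signature needed for PyTorchTrainer(scheduler_creator=...); the sketch below is illustrative only (the concrete StepLR choice and its arguments are not taken from this page):

import torch

def scheduler_creator(optimizer, config):
    """Constructor of the learning rate scheduler.

    Args:
        optimizer: The return value from ``optimizer_creator``.
        config (dict): Configuration dictionary passed into ``PyTorchTrainer``.

    Returns:
        A torch.optim.lr_scheduler object wrapping the optimizer.
    """
    # Illustrative choice; any torch.optim.lr_scheduler object works here.
    return torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)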
Putting things together¶ trainer = PyTorchTrainer( model_creator, data_creator, optimizer_creator, loss_creator=nn.MSELoss, scheduler_creator=scheduler_creator, config={"lr": 0.001}) You can also set the number of workers and whether the workers will use GPUs: trainer = PyTorchTrainer( model_creator, data_creator, optimizer_creator, loss_creator=nn.MSELoss, scheduler_creator=scheduler_creator, config={"lr": 0.001}, num_replicas=100, use_gpu=True) See the documentation on the PyTorchTrainer here: PyTorchTrainer. We’ll look at the training APIs next. Training APIs¶ Now that the trainer is constructed, you’ll naturally want to train the model. trainer.train() This takes one pass over the training data. To run the model on the validation data passed in by the data_creator, you can simply call: trainer.validate() You can customize the exact function that is called by using a customized training function (see Custom Training and Validation Functions). Shutting down training¶ After training, you may want to reappropriate the Ray cluster. To release Ray resources obtained by the Trainer: trainer.shutdown() Note Be sure to call trainer.save() or trainer.get_model() before shutting down. Initialization Functions¶ You may want to run some initializers on each worker when they are started. This may be something like setting an environment variable or downloading some data. You can do this via the initialization_hook parameter: def initialization_hook(runner): print("NCCL DEBUG SET") # Need this for avoiding a connection restart issue os.environ["NCCL_SOCKET_IFNAME"] = "^docker0,lo" os.environ["NCCL_LL_THRESHOLD"] = "0" os.environ["NCCL_DEBUG"] = "INFO" trainer = PyTorchTrainer( model_creator, data_creator, optimizer_creator, loss_creator=nn.MSELoss, initialization_hook=initialization_hook, config={"lr": 0.001}, num_replicas=num_replicas) Save and Load¶ You can save and restore the training run with trainer.save(checkpoint_path) and trainer.restore(checkpoint_path): trainer_2 = PyTorchTrainer( model_creator, data_creator, optimizer_creator, loss_creator=nn.MSELoss, num_replicas=num_replicas) trainer_2.restore(checkpoint_path) Exporting a model for inference¶ The trained model can be retrieved with trainer.get_model(). Mixed Precision (FP16) Training¶ You can enable mixed precision training with the use_fp16 flag: trainer = PyTorchTrainer( model_creator, data_creator, optimizer_creator, loss_creator=nn.MSELoss, num_replicas=4, use_fp16=True) Additional arguments for Apex can be passed in through apex_args in the PyTorchTrainer constructor. Valid arguments can be found on the Apex documentation: trainer = PyTorchTrainer( model_creator, data_creator, optimizer_creator, loss_creator=nn.MSELoss, num_replicas=4, use_fp16=True, apex_args={ opt_level="O3", num_losses=2, verbosity=0 } ) Note that if using a custom training function, you will need to manage loss scaling manually. Distributed Multi-node Training¶ You can scale out your training onto multiple nodes without making any modifications to your training code. To train across a cluster, simply make sure that the Ray cluster is started. You can start a Ray cluster via the Ray cluster launcher or manually. ray up CLUSTER.yaml ray submit train.py --args="--address='auto'" Then, within train.py you can scale up the number of workers seamlessly across multiple nodes: trainer = PyTorchTrainer( model_creator, data_creator, optimizer_creator, loss_creator=nn.MSELoss, num_replicas=100) Fault Tolerance¶ It is currently not possible to recover from a Trainer node failure. Users can set checkpoint="auto" to always checkpoint the current model before executing a pass over the training dataset. trainer.train(max_retries=N, checkpoint="auto") Advanced: Hyperparameter Tuning¶ PyTorchTrainer naturally integrates with Tune via the PyTorchTrainable interface.
The same arguments to PyTorchTrainer should be passed into the tune.run(config=...) as shown below. import numpy as np import torch import torch.nn as nn import ray from ray import tune from ray.util.sgd.pytorch.pytorch_trainer import PyTorchTrainable class LinearDataset(torch.utils.data.Dataset): """y = a * x + b""" def __init__(self, a, b, size=1000): x = np.random.random(size).astype(np.float32) * 10 x = np.arange(0, 10, 10 / size, dtype=np.float32) self.x = torch.from_numpy(x) self.y = torch.from_numpy(a * x + b) def __getitem__(self, index): return self.x[index, None], self.y[index, None] def __len__(self): return len(self.x) def model_creator(config): return nn.Linear(1, 1) def optimizer_creator(model, config): """Returns optimizer.""" return torch.optim.SGD(model.parameters(), lr=config.get("lr", 1e-4)) def data_creator(config): """Returns training dataloader, validation dataloader.""" return LinearDataset(2, 5), LinearDataset(2, 5, size=400) def tune_example(num_replicas=1, use_gpu=False): config = { "model_creator": tune.function(model_creator), "data_creator": tune.function(data_creator), "optimizer_creator": tune.function(optimizer_creator), "loss_creator": tune.function(nn.MSELoss), "num_replicas": num_replicas, "use_gpu": use_gpu, "batch_size": 512, "backend": "gloo" } analysis = tune.run( PyTorchTrainable, num_samples=12, config=config, stop={"training_iteration": 2}, verbose=1) return analysis.get_best_config(metric="validation_loss", mode="min") if __name__ == "__main__": import argparse parser = argparse.ArgumentParser() parser.add_argument( "--address", type=str, help="the address to use for Ray") parser.add_argument( "--num-replicas", "-n", type=int, default=1, help="Sets number of replicas for training.") parser.add_argument( "--use-gpu", action="store_true", default=False, help="Enables GPU training") parser.add_argument( "--tune", action="store_true", default=False, help="Tune training") args, _ = parser.parse_known_args() ray.init(address=args.address) tune_example(num_replicas=args.num_replicas, use_gpu=args.use_gpu) Simultaneous Multi-model Training¶ In certain scenarios such as training GANs, you may want to use multiple models in the training loop. You can do this in the PyTorchTrainer by allowing the model_creator, optimizer_creator, and scheduler_creator to return multiple values. If multiple models, optimizers, or schedulers are returned, you will need to provide a custom training function (and custom validation function if you plan to call validate). You can see the DCGAN script for an end-to-end example. def model_creator(config): netD = Discriminator() netD.apply(weights_init) netG = Generator() netG.apply(weights_init) return netD, netG def custom_train(models, dataloader, criterion, optimizers, config): result = {} for i, (model, optimizer) in enumerate(zip(models, optimizers)): result["model_{}".format(i)] = train(model, dataloader, criterion, optimizer, config) return result trainer = PyTorchTrainer( model_creator, data_creator, optimizer_creator, loss_creator=nn.BCELoss, train_function=custom_train) Custom Training and Validation Functions¶ PyTorchTrainer allows you to run a custom training and validation step in parallel on each worker, providing a flexibility similar to using PyTorch natively. This is done via the train_function and validation_function parameters. Note that this is needed if the model creator returns multiple models, optimizers, or schedulers. 
def train(config, model, train_iterator, criterion, optimizer, scheduler=None): """Runs one standard training pass over the train_iterator. Raises: ValueError if multiple models/optimizers/schedulers are provided. You are expected to have a custom training function if you wish to use multiple models/optimizers/schedulers. Args: config: (dict): A user configuration provided into the Trainer constructor. model: The model(s) as created by the model_creator. train_iterator: An iterator created from the DataLoader which wraps the provided Dataset. criterion: The loss object created by the loss_creator. optimizer: The torch.optim.Optimizer(s) object as created by the optimizer_creator. scheduler (optional): The torch.optim.lr_scheduler(s) object as created by the scheduler_creator. Returns: A dict of metrics from training. """ netD, netG = models optimD, optimG = optimizers real_label = 1 fake_label = 0 device = torch.device("cuda" if torch.cuda.is_available() else "cpu") for i, data in enumerate(dataloader, 0): netD.zero_grad() real_cpu = data[0].to(device) b_size = real_cpu.size(0) label = torch.full((b_size, ), real_label, device=device) output = netD(real_cpu).view(-1) errD_real = criterion(output, label) errD_real.backward() noise = torch.randn(b_size, latent_vector_size, 1, 1, device=device) fake = netG(noise) label.fill_(fake_label) output = netD(fake.detach()).view(-1) errD_fake = criterion(output, label) errD_fake.backward() errD = errD_real + errD_fake optimD.step() netG.zero_grad() label.fill_(real_label) output = netD(fake).view(-1) errG = criterion(output, label) errG.backward() optimG.step() is_score, is_std = inception_score(fake) return { "loss_g": errG.item(), "loss_d": errD.item(), "inception": is_score } def custom_validate(config, model, val_iterator, criterion, scheduler=None): """Runs one standard validation pass over the val_iterator. Args: config: (dict): A user configuration provided into the Trainer constructor. model: The model(s) as created by the model_creator. train_iterator: An iterator created from the DataLoader which wraps the provided Dataset. criterion: The loss object created by the loss_creator. scheduler (optional): The torch.optim.lr_scheduler object(s) as created by the scheduler_creator. Returns: A dict of metrics from the evaluation. """ ... return {"validation_accuracy": 0.5} trainer = PyTorchTrainer( model_creator, data_creator, optimizer_creator, nn.BCELoss, train_function=train, validation_function=custom_validate, ... ) Feature Requests¶ Have features that you’d really like to see in RaySGD? Feel free to open an issue. PyTorchTrainer Examples¶ Here are some examples of using RaySGD for training PyTorch models. If you’d like to contribute an example, feel free to create a pull request here. - PyTorch training example: - Simple example of using Ray’s PyTorchTrainer. - CIFAR10 example: - Training a ResNet18 model on CIFAR10. It uses a custom training function, a custom validation function, and custom initialization code for each worker. - DCGAN example: - Training a Deep Convolutional GAN on MNIST. It constructs two models and two optimizers and uses a custom training and validation function.
https://docs.ray.io/en/releases-0.8.2/raysgd/raysgd_pytorch.html
2020-07-02T15:20:06
CC-MAIN-2020-29
1593655879532.0
[]
docs.ray.io
This guide will help you set up Dynamics AX Accounts Receivable quickly and efficiently. After reading this guide, you will be able to: - Create a payment schedule - Create terms of payment - Create a payment day - Set up a cash discount - Create a payment fee - Create a method of payment - Set up customer groups - Set up posting profiles - Create a new customer
http://www.erp-docs.com/1494/dynamics-ax-accounts-receivable-setup-guide/
2018-11-12T22:46:23
CC-MAIN-2018-47
1542039741151.56
[]
www.erp-docs.com
SQLSetStmtAttr Function Conformance Version Introduced: ODBC 3.0 Standards Compliance: ISO 92 Summary SQLSetStmtAttr sets attributes related to a statement. Note For more information about what the Driver Manager maps this function to when an ODBC 3.x application is working with an ODBC 2.x driver, see Mapping Replacement Functions for Backward Compatibility of Applications. Syntax SQLRETURN SQLSetStmtAttr( SQLHSTMT StatementHandle, SQLINTEGER Attribute, SQLPOINTER ValuePtr, SQLINTEGER StringLength); Arguments StatementHandle [Input] Statement handle. Attribute [Input] Option to set, listed in "Comments." ValuePtr [Input] Value to be associated with Attribute. Depending on the value of Attribute, ValuePtr will be one of the following: An ODBC descriptor handle. A SQLUINTEGER value. A SQLULEN value. A pointer to one of the following: A null-terminated character string. A binary buffer. A value or array of type SQLLEN, SQLULEN, or SQLUSMALLINT. A driver-defined value. If the Attribute argument is a driver-specific value, ValuePtr may be a signed integer. StringLength [Input] If Attribute is an ODBC-defined attribute and ValuePtr points to a character string or a binary buffer, this argument should be the length of *ValuePtr. Returns SQL_SUCCESS, SQL_SUCCESS_WITH_INFO, SQL_ERROR, or SQL_INVALID_HANDLE. Diagnostics When SQLSetStmtAttr returns SQL_ERROR or SQL_SUCCESS_WITH_INFO, an associated SQLSTATE value can be obtained by calling SQLGetDiagRec with a HandleType of SQL_HANDLE_STMT and a Handle of StatementHandle. The following table lists the SQLSTATE values commonly returned by SQLSetStmtAttr and explains each one in the context of this function; the notation "(DM)" precedes the descriptions of SQLSTATEs returned by the Driver Manager. The return code associated with each SQLSTATE value is SQL_ERROR, unless noted otherwise. Statement attributes for a statement remain in effect until they are changed by another call to SQLSetStmtAttr or until the statement is dropped by calling SQLFreeHandle. Calling SQLFreeStmt with the SQL_CLOSE, SQL_UNBIND, or SQL_RESET_PARAMS option does not reset statement attributes. Some statement attributes support substitution of a similar value if the data source does not support the value specified in ValuePtr. In such cases, the driver returns SQL_SUCCESS_WITH_INFO and SQLSTATE 01S02 (Option value changed). For example, if Attribute is SQL_ATTR_CONCURRENCY and ValuePtr is SQL_CONCUR_ROWVER, and if the data source does not support this, the driver substitutes SQL_CONCUR_VALUES and returns SQL_SUCCESS_WITH_INFO. To determine the substituted value, an application calls SQLGetStmtAttr. The format of information set with ValuePtr depends on the specified Attribute. SQLSetStmtAttr accepts attribute information in one of two different formats: a character string or an integer value. The format of each is noted in the attribute's description. This format applies to the information returned for each attribute in SQLGetStmtAttr. Character strings pointed to by the ValuePtr argument of SQLSetStmtAttr have a length of StringLength. Note ODBC 3.x drivers need only support this functionality if they should work with ODBC 2.x applications that set ODBC 2.x statement options at the connection level. For more information, see "Setting Statement Options on the Connection Level" under SQLSetConnectOption Mapping in Appendix G: Driver Guidelines for Backward Compatibility. Statement Attributes That Set Descriptor Fields Many statement attributes correspond to a header field of a descriptor. Setting these attributes actually results in the setting of the descriptor fields.
Setting fields by a call to SQLSetStmtAttr rather than to SQLSetDescField has the advantage that a descriptor handle does not have to be obtained for the function call. Caution Calling SQLSetStmtAttr for one statement can affect other statements. This occurs when the APD or ARD associated with the statement is explicitly allocated and is also associated with other statements. Because SQLSetStmtAttr modifies the APD or ARD, the modifications apply to all statements with which this descriptor is associated. If this is not the required behavior, the application should dissociate this descriptor from the other statements (by calling SQLSetStmtAttr to set the SQL_ATTR_APP_ROW_DESC or SQL_ATTR_APP_PARAM_DESC field to a different descriptor handle) before calling SQLSetStmtAttr again. When a descriptor field is set as a result of the corresponding statement attribute being set, the field is set only for the applicable descriptors that are currently associated with the statement identified by the StatementHandle argument, and the attribute setting does not affect any descriptors that may be associated with that statement in the future. When a descriptor field that is also a statement attribute is set by a call to SQLSetDescField, the corresponding statement attribute is set. If an explicitly allocated descriptor is dissociated from a statement, a statement attribute that corresponds to a header field will revert to the value of the field in the implicitly allocated descriptor. When a statement is allocated (see SQLAllocHandle), four descriptor handles are automatically allocated and associated with the statement. Explicitly allocated descriptor handles can be associated with the statement by calling SQLAllocHandle with an fHandleType of SQL_HANDLE_DESC to allocate a descriptor handle and then calling SQLSetStmtAttr to associate the descriptor handle with the statement. The statement attributes in the following table correspond to descriptor header fields. Statement Attributes The currently defined attributes and the version of ODBC in which they were introduced are shown in the following table; it is expected that more attributes will be defined by drivers to take advantage of different data sources. A range of attributes is reserved by ODBC; driver developers must reserve values for their own driver-specific use from Open Group. For more information, see Driver-Specific Data Types, Descriptor Types, Information Types, Diagnostic Types, and Attributes. [1] These functions can be called asynchronously only if the descriptor is an implementation descriptor, not an application descriptor. See Column-Wise Binding and Row-Wise Binding. Related Functions See Also ODBC API Reference ODBC Header Files
https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlsetstmtattr-function?view=sql-server-2017
2018-11-12T22:05:54
CC-MAIN-2018-47
1542039741151.56
[]
docs.microsoft.com
The HOCON Configuration File Editor is a validating HOCON editor that is aware of the structure that defines the HOCON syntax of StreamBase configuration files. The HOCON Configuration File Editor has the following features: Typechecks your configuration file as you compose or edit it. Provides syntax color coding. Provides autocompletion of keywords when you type part of the name and press Ctrl+Space. Provides content assistance, which is a context-aware proposal of keywords and values valid for the current location in the file when you place the cursor after an open brace and press Ctrl+Space. See HOCON Configuration File Editor for instructions on using the Editor.
http://docs.streambase.com/latest/topic/com.streambase.sb.ide.help/data/html/studioref/hocon-editor.html
2018-11-12T22:37:28
CC-MAIN-2018-47
1542039741151.56
[]
docs.streambase.com
Compose This service is a "catch all" service that allows power users to specify custom services that are not currently one of Lando's "supported" services. Technically speaking, this service is just a way for a user to define a service directly using the Docker Compose V3 file format. THIS MEANS THAT IT IS UP TO THE USER TO DEFINE A SERVICE CORRECTLY. This service is useful if you are: - Thinking about contributing your own custom Lando service and just want to prototype something - Using Docker Compose config from other projects - Need a service not currently provided by Lando itself Supported versions - latest Example
# The name of my app
name: compose
# Use the lando proxy to map to the custom service
proxy:
  appserver:
    - compose.lndo.site
# Configure my services
services:
  # Create a service called "custom"
  appserver:
    # Use docker compose to create a custom service.
    type: compose
    # Specify the docker compose v3 services options here
    services:
      # Specify what container to run to provide the service.
      image: drupal:8 # Required.
      # You will need to investigate the images Dockerfile to find the "entrypoint" and "command"
      # and then define the command as `ENTRYPOINT COMMAND`
      #
      # You can also try a completely custom command but YMMV
      command: docker-php-entrypoint apache2-foreground
  # Spin up a DB to go with this
  database:
    type: mysql
You will need to rebuild your app with lando rebuild to apply the changes to this file. You can check out the full code for this example over here.
https://docs.devwithlando.io/services/compose.html
2018-08-14T13:22:23
CC-MAIN-2018-34
1534221209040.29
[]
docs.devwithlando.io
Configure indexed field extraction There are three types of fields that Splunk can extract at index time: - Default fields - Custom fields - File header fields Splunk always extracts a set of default fields for each event. You can configure it to also extract custom and, for some data, file header fields. For more information on indexed field extraction, see the chapter "Configure indexed field extraction"!
http://docs.splunk.com/Documentation/Splunk/4.3.1/Data/Overviewofdefaultfieldextraction
2018-08-14T13:57:45
CC-MAIN-2018-34
1534221209040.29
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Creates an instance of CognitoSyncManager using cognito credentials and a configuration object CognitoSyncManager cognitoSyncManager = new CognitoSyncManager(credentials,new AmazonCognitoSyncConfig { RegionEndpoint = RegionEndpoint.USEAST1}) Namespace: Amazon.CognitoSync.SyncManager Assembly: AWSSDK.CognitoSync
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CognitoSync/MCognitoSyncManagerctorCognitoAWSCredentialsCognitoSyncConfig.html
2018-08-14T14:38:31
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. This operation disables automatic renewal of domain registration for the specified domain. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DisableDomainAutoRenewAsync. Namespace: Amazon.Route53Domains Assembly: AWSSDK.Route53Domains.dll Version: 3.x.y.z Container for the necessary parameters to execute the DisableDomainAutoRenew service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
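For comparison only, roughly the same call through the AWS SDK for Python (boto3), which this page does not cover; the domain name is a placeholder:

import boto3

# Disable automatic renewal for a registered domain (illustrative domain name).
client = boto3.client("route53domains", region_name="us-east-1")
client.disable_domain_auto_renew(DomainName="example.com")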
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Route53Domains/MIRoute53DomainsDisableDomainAutoRenewDisableDomainAutoRenewRequest.html
2018-08-14T14:52:03
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
Select UCF content to import Select documents from the UCF download that you want to import into the GRC tables. About this task Your import selections go through an approval process before the system moves the documents into GRC tables. Procedure Navigate to GRC > Administration > Import UCF Content. The UCF Authority Documents screen appears, showing all the downloaded documents as cards in the left column. To view the details of a document, click anywhere in the card. The selected card is outlined in blue. A document counter at the top of the left column indicates the number of document cards displayed and also functions as a reset button for the filter and search box. The citations and controls associated with the selected document card appear in the detail pane on the right. The current version of the UCF document appears in the Released Version field and is expressed as Qx YY - Final, where Q is the current quarter, and YY is the current year. Type a string in the search box to filter the cards by values in the documents' headers. You can search on these UCF fields from the Details pane: GRC import status Category Type Originator Impact Zones For example, a string search for us federal trade displays a document that contains US Federal Trade Commission in the Originator field. To clear the search field, click the counter at the top of the left column. Click the arrow in the search field to display the authority document filter. In the filter that appears, click a group heading to expand the section. Each group is a field from the document header. The numbers in parentheses show the count of UCF documents in each group. To filter the list by document status, select an option from the GRC Update Status section. This list displays these document states: Up to date: Documents you have imported that are currently up to date in your system. Not imported: Available documents that you have not imported yet. Update available: Documents you have imported for which updates are available. To filter the list by documents in similar categories, click a value in one or more of the groups provided. Click one or more field values to filter the list and display the matching document cards in the left column. The system applies the following operators to multiple filters: Filters within the same group or between groups have an OR relationship. Filters in the authority document filter have an AND relationship with a string in the search box. Click Reset to clear the selections in the authority document filter, or click the counter above the left column. Select the check boxes in the cards for the documents you want to import into GRC. A counter on the Update GRC button shows the number of cards currently selected. Click Update GRC. The system displays an import dialog box that lists the requested documents and advises you if approvals are required for this request. The dialog box indicates if a selected document contains super controls. A super control is any control shared by two or more authority documents. When you import a document with super controls, GRC updates those controls for all authority documents that use them. Click Submit to initiate the approval process. When the request is submitted, the dialog box lists the approval status of each document you have selected. If a document was previously requested but has not yet been approved, GRC marks it Awaiting approval. Click Close.
https://docs.servicenow.com/bundle/geneva-governance-risk-compliance/page/product/it_governance_risk_and_compliance/task/t_SelectUCFContentToImport.html
2018-08-14T13:56:25
CC-MAIN-2018-34
1534221209040.29
[]
docs.servicenow.com
Activate best practice change risk calculator You can activate the Best Practice - Change Risk Calculator plugin (com.snc.bestpractice.change_risk) if you have the admin role. This plugin includes demo data. What to do next: You can define risk and impact conditions for your change records. Installed with change risk calculator: Several types of components are installed with the Best Practice - Change Risk Calculator. Related Tasks: Activate change management core, Activate the state model, Activate conflict detection, Activate change risk assessment, Activate standard change catalog, Activate best practice - bulk CI changes. Related Reference: Change properties
https://docs.servicenow.com/bundle/geneva-it-service-management/page/product/change_management/task/activate-change-risk-calculator.html
2018-08-14T13:56:29
CC-MAIN-2018-34
1534221209040.29
[]
docs.servicenow.com
perl5242delta - what is new for perl v5.24.2 - NAME - DESCRIPTION - Security - Modules and Pragmata - Selected Bug Fixes - Acknowledgements - Reporting Bugs - SEE ALSO NAME perl5242delta - what is new for perl v5.24.2 DESCRIPTION This document describes differences between the 5.24.1 release and the 5.24.2 release. Security The handling of (the removal of) '.' in @INC in base has been improved. This resolves some problematic behaviour in the approach taken in Perl 5.24.1, which is probably best described in the following two threads on the Perl 5 Porters mailing list. Modules and Pragmata Updated Modules and Pragmata base has been upgraded from version 2.23 to 2.23_01. Module::CoreList has been upgraded from version 5.20170114_24 to 5.20170715_24. Selected Bug Fixes Fixed a crash with s///l where it thought it was dealing with UTF-8 when it wasn't. [perl #129038] Acknowledgements Perl 5.24.2 represents approximately 6 months of development since Perl 5.24.1 and contains approximately 2,500 lines of changes across 53 files from 18 authors.
http://docs.activestate.com/activeperl/5.26/perl/lib/Pod/perl5242delta.html
2018-08-14T14:22:01
CC-MAIN-2018-34
1534221209040.29
[]
docs.activestate.com
MVR Processed Date Months Ago¶ This Condition located on the Volunteer category tab in Search Builder will let you find people whose Motor Vehicle Record has been processed a specified number of months ago. Use Case This Condition is used to find those needing a recheck. It’s not enough just to look for those whose MVR was more than 23 months ago. You must also add a Condition to find those with a current MVR check on their record. Otherwise, the results will be everyone in the database without a MVR check within the past 23 months, including those without an MVR at all. So, combine this search with the MVRStatusCode Condition, selecting those with a status of Approved, and whose MVR was processed Greater Than Equal 23 months ago to find those with a current MVR approval, but are due a recheck. See also
http://docs.touchpointsoftware.com/SearchBuilder/QB-MVRProcessedDateMonthsAgo.html
2018-08-14T14:14:14
CC-MAIN-2018-34
1534221209040.29
[]
docs.touchpointsoftware.com
Recent Contribution in Bundle Type¶ Use this Condition located on the Contributions category tab in Search Builder to find everyone who has a contribution within a specified number of days (enter the number) and that contribution was included in the selected Bundle Type (select from the drop down). Use Case - Online Giving One use case is to find out how many people give online each month. Select the Bundle Type of Online and use 30 for the number of days to look back. Use Case - Different Campus Another use case would be if you have Bundle Types that represent specific campuses - such as East Campus Offering. You can find those who gave their offering at that campus, even though the Campus on their people record might be a different campus. So, add to this Condition the Campus Condition and select Not Equal East Campus. This will find everyone who made a contribution at that campus, but does not normally attend that campus, or, at least, that is not the campus on their record. Note When you use the Totals by Fund or other contribution reports and specify a Campus, the report will look at the Campus on the individual’s record. So, this Condition is helpful to see if people are attending and giving at a Campus other than their main Campus. Of course, in order to use this Condition to identify giving at a Campus, you would need to have Bundle Types for each Campus. Bundle Types are located in the Lookups table and new types can be created by your System Admin. See also
http://docs.touchpointsoftware.com/SearchBuilder/QB-RecentBundleType.html
2018-08-14T14:13:59
CC-MAIN-2018-34
1534221209040.29
[]
docs.touchpointsoftware.com
Enter add-on properties When submitting an add-on, the options on the Properties page help determine the behavior of your add-on when offered to customers. Product type Your product type is selected when you first create the add-on. The product type you selected is displayed here, but you can't change it. Tip If you haven't published the add-on, you can delete the submission and start again if you want to choose a different product type. The fields you see on this page will vary, depending on the product type of your add-on. Product lifetime If you selected Durable for your product type, Product lifetime is shown here. The default Product lifetime for a durable add-on is Forever, which means the add-on never expires. If you prefer, you can set the Product lifetime so that the add-on expires after a set duration (with options from 1-365 days). Quantity If you selected Store-managed consumable for your product type, Quantity is shown here. You'll need to enter a number between 1 and 1000000. This quantity will be granted to the customer when they acquire your add-on, and the Store will track the balance as the app reports the customer’s consumption of the add-on. Subscription period If you selected Subscription for your product type, Subscription period is shown here. Choose an option to specify how frequently a customer will be charged for the subscription. The default option is Monthly, but you can also select 3 months, 6 months, Annually, or 24 months. Important After your add-on is published, you can't change your Subscription period selection. Free trial If you selected Subscription for your product type, Free trial is also shown here. The default option is No free trial. If you prefer, you can let customers use the add-on for free for a set period of time (either 1 week or 1 month). Important After your add-on is published, you can't change your Free trial selection. Content type Regardless of your add-on's product type, you'll need to indicate the type of content you're offering. For most add-ons, the content type should be Electronic software download. If another option from the list describes your add-on better (for example, if you are offering a music download or an e-book), select that option instead. These are the possible options for an add-on's content type: - Electronic software download - Electronic books - Electronic magazine single issue - Electronic newspaper single issue - Music download - Music streaming - Online data storage/services - Software as a service - Video download - Video streaming Additional properties These fields are optional for all types of add-ons. Keywords You have the option to provide up to ten keywords of up to 30 characters each for each add-on you submit. Your app can then query for add-ons that match these words. This feature lets you build screens in your app that can load add-ons without you having to directly specify the product ID in your app's code. You can then change the add-on's keywords anytime, without having to make code changes in your app or submit the app again. To query this field, use the StoreProduct.Keywords property in the Windows.Services.Store namespace. (Or, if you're using the Windows.ApplicationModel.Store namespace, use the ProductListing.Keywords property.) Note Keywords are not available for use in packages targeting Windows 8 and Windows 8.1. Custom developer data You can enter up to 3000 characters into the Custom developer data field (formerly called Tag) to provide extra context for your in-app product. 
Most often, this is in the form of an XML string, but you can enter anything you'd like in this field. Your app can then query this field to read its content (although the app can't edit the data and pass the changes back.) For example, let’s say you have a game, and you’re selling an add-on which allows the customer to access additional levels. Using the Custom developer data field, the app can query to see which levels are available when a customer owns this add-on. You could adjust the value at any time (in this case, the levels which are included), without having to make code changes in your app or submit the app again, by updating the info in the add-on's Custom developer data field and then publishing an updated submission for the add-on. To query this field, use the StoreSku.CustomDeveloperData property in the Windows.Services.Store namespace. (Or, if you're using the Windows.ApplicationModel.Store namespace, use the ProductListing.Tag property.) Note The Custom developer data field is not available for use in packages targeting Windows 8 and Windows 8.1.
https://docs.microsoft.com/en-us/windows/uwp/publish/enter-add-on-properties
2018-08-14T13:22:54
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
Data Template Data Template Data Template Data Template Class Definition Some information relates to pre-released product which may be substantially modified before it’s commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Prerelease APIs are identified by a Prerelease label. public : class DataTemplate : FrameworkTemplate, IDataTemplate struct winrt::Windows::UI::Xaml::DataTemplate : FrameworkTemplate, IDataTemplate public class DataTemplate : FrameworkTemplate, IDataTemplate Public Class DataTemplate Inherits FrameworkTemplate Implements IDataTemplate <DataTemplate ...> templateContent </DataTemplate> - Inheritance - DataTemplateDataTemplateDataTemplateDataTemplate - Attributes - in depth. Remarks A DataTemplate object is used as the value for these properties: - ItemsControl.ItemTemplate (which is inherited by various items controls such as ListView, GridView, ListBox ) - ContentControl.ContentTemplate (which is inherited by various content controls such as Button, Frame, SettingsFlyout ) - HeaderTemplate and FooterTemplate properties of various items control classes - ItemsPresenter.HeaderTemplate and ItemsPresenter.FooterTemplate - HeaderTemplate and FooterTemplate properties of text controls such as RichEditBox, TextBox - HeaderTemplate property of controls such as ComboBox, DatePicker, Hub, HubSection, Pivot, Slider, TimePicker, ToggleSwitch; some of these also have FooterTemplate You typically use a DataTemplate to specify the visual representation of your data. DataTemplate objects are particularly useful when you are binding an ItemsControl such as a ListBox to an entire collection. Without specific instructions, a ListBox displays the string representation of the objects in a collection. Use a DataTemplate to define the appearance of each of your data objects. The content of your DataTemplate becomes the visual structure of your data objects. You typically show property values that come from each of the Customer objects.. A data template for ContentTemplate can also use data binding. But in this case the data context is the same as the element where the template's applied. Usually this is one data object, and there's no concept of items. You can place a DataTemplate as the direct child of an ItemTemplate property element in XAML. This is know as an inline template and you'd do this if you had no need to use that same data template for other areas of your UI. You can also define a DataTemplate as a resource and then reference the resource as the value of the ItemTemplate property. Once it's a resource, you can use the same template for multiple UI elements that need a data template. If you factor the data template into Application.Resources, you can even share the same template for different pages of your UI. The XAML usage for contents of a data template is not exposed as a settable code property. It is special behavior built into the XAML processing for a DataTemplate. For advanced data binding scenarios, you might want to have properties of the data determine which template should produce their UI representations. For this scenario, you can use a DataTemplateSelector and set properties such as ItemTemplateSelector to assign it to a data view. A DataTemplateSelector is a logic class you write yourself, which has a method that returns exactly one DataTemplate to the binding engine based on your own logic interacting with your data. For more info, see Data binding in depth.
https://docs.microsoft.com/ja-jp/uwp/api/windows.ui.xaml.datatemplate
2018-08-14T13:19:18
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
Has Recent New Attend¶ This Condition located on the Recent Attendance category tab in Search Builder allows you to find those who attended as a new guest within a specified number of days. You also can specify a number of days to look back with no attendance prior to that. You can narrow the search further by selecting other options: Program/Division/Organization, Org Type. Note If nothing is specified for Number of days for no attendance, the default will be 365 days. If you select True, and enter 180 in the field for the number of days with no attendance and 7 days in the Days field, the results will be those who have not attended during the previous 180 days, but did attend in the past 7 days. This Condition is used in the Reports > Vital Stats to find any New Attends for the week. Use Case for Previously Active Attenders You can use this Condition and combine it with Attendance Count History, using the same Program, with a date range of 2 years prior, and a count of greater than 10. This will find those who attended, for example, a Life Group more than 10 times in 2014, but have not attended the past 365 days (in 2015), until the past week, when they attended. See also
http://docs.touchpointsoftware.com/SearchBuilder/QB-HasRecentNewAttend.html
2018-08-14T14:20:19
CC-MAIN-2018-34
1534221209040.29
[]
docs.touchpointsoftware.com
Deciding When to Program and When to Script Just as the distinction between programming and scripting languages has blurred in the last few years, so have the guidelines for when you should program and when you should script. The simplest rule remains, though: Use whatever techniques make you productive. In the end, no one really cares if you call it a program or a script. Even so, these guidelines may help: *If you have to perform a lot of operations on a lot of RPMs, a program will likely perform much faster than a script that calls the rpm command over and over. *If the task is relatively simple, scripting generally works best. *If you are more experienced with a particular language, use it. *If you need to perform complex operations, perhaps involving transactions, a program is probably the right way to go. *In many cases, programming languages work better for creating graphical user interfaces, although Python and Perl offer graphical user interface toolkits, such as Perl/Tk or PyQt. There isn’t one right way to do it. Pick what works best for you. Cross Reference This chapter covers shell scripting. Chapter 15, Programming RPM with C covers C programming. Chapter 16, Programming RPM with Python covers Python scripting and programming, and Chapter 17, Programming RPM with Perl covers Perl scripting.
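As a rough illustration of the first guideline, the rpm Python bindings (the subject of Chapter 16) let a program query the RPM database in-process instead of forking the rpm command once per package; this sketch assumes the rpm Python module (rpm-python or python3-rpm) is installed:

import rpm

# Open the RPM database once and iterate over every installed package header.
ts = rpm.TransactionSet()
for header in ts.dbMatch():
    print(header['name'], header['version'], header['release'])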
https://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/RPM_Guide/ch14s03.html
2018-08-14T13:29:15
CC-MAIN-2018-34
1534221209040.29
[]
docs.fedoraproject.org
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Deletes the access key pair associated with the specified IAM user. For .NET Core, PCL and Unity this operation is only available in asynchronous form. Please refer to DeleteAccessKeyAsync. Namespace: Amazon.IdentityManagement Assembly: AWSSDK.IdentityManagement Container for the necessary parameters to execute the DeleteAccessKey service method. The following command deletes one access key (access key ID and secret access key) assigned to the IAM user named Bob. var response = client.DeleteAccessKey(new DeleteAccessKeyRequest { AccessKeyId = "AKIDPMS9RO4H3FEXAMPLE", UserName = "Bob" });
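For comparison only, roughly the same call through the AWS SDK for Python (boto3), which this page does not cover; the user name and key ID mirror the placeholders in the C# example above:

import boto3

# Delete one access key belonging to the IAM user "Bob" (illustrative values).
iam = boto3.client("iam")
iam.delete_access_key(UserName="Bob", AccessKeyId="AKIDPMS9RO4H3FEXAMPLE")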
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/IAM/MIAMServiceDeleteAccessKeyDeleteAccessKeyRequest.html
2018-08-14T14:37:33
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Lists the ARNs of the assessment targets within this AWS account. For more information about assessment targets, see Amazon Inspector Assessment Targets. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to ListAssessmentTargetsAsync. Namespace: Amazon.Inspector Assembly: AWSSDK.Inspector.dll Version: 3.x.y.z Container for the necessary parameters to execute the ListAssessmentTargets service method. Lists the ARNs of the assessment targets within this AWS account. var response = client.ListAssessmentTargets(new ListAssessmentTargetsRequest { MaxResults = 123 }); List<string> assessmentTargetArns = response.AssessmentTargetArns;
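For comparison only, a rough boto3 equivalent of the .NET call above (not covered by this page); the parameter and response key names assume boto3's Inspector client conventions:

import boto3

# List the ARNs of the assessment targets in this account.
inspector = boto3.client("inspector")
response = inspector.list_assessment_targets(maxResults=123)
for arn in response["assessmentTargetArns"]:
    print(arn)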
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Inspector/MIInspectorListAssessmentTargetsListAssessmentTargetsRequest.html
2018-08-14T14:53:18
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
Data Model versioning The Data Model exposed by an OData Service, such as the Power BI data model, defines a contract between the OData service and its clients. Services are allowed to extend their model only to the degree that it does not break existing clients. Breaking changes, such as removing properties or changing the type of existing properties, require that a new service version is provided at a different service root URL for the new model. The following Data Model additions are considered safe and do not require services to version their entry point. - Adding a property that is nullable or has a default value; if it has the same name as an existing dynamic property, it must have the same type (or base type) as the existing dynamic property - Adding a navigation property that is nullable or collection-valued; if it has the same name as an existing dynamic navigation property, it must have the same type (or base type) as the existing dynamic navigation property - Adding a new entity type to the model - Adding a new complex type to the model - Adding a new entity set - Adding a new singleton - Adding an action, a function, an action import, or function import - Adding an action parameter that is nullable - Adding a type definition or enumeration - Adding any annotation to a model element that does not need to be understood by the client to interact with the service correctly Clients SHOULD be prepared for services to make such incremental changes to their model. In particular, clients should be prepared to receive properties and derived types not previously defined by the service. Services SHOULD NOT change their data model depending on the authenticated user. If the data model is user or user group dependent, all changes MUST be safe changes as defined in this section when comparing the full model to the model visible to users with limited authorizations. For more about OData Data Model standards, see OData Version 4.0 Part 1: Protocol Plus Errata 02. See also Overview of Power BI REST API
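In client code, being "prepared for incremental changes" mostly means tolerating properties you do not recognize rather than failing on them. A minimal Python sketch of that idea follows; the property names are hypothetical and not part of any documented Power BI model.

```python
import json

KNOWN_FIELDS = {"id", "name", "webUrl"}   # hypothetical fields this client understands

def read_entity(payload: str) -> dict:
    """Parse a service response, keeping known fields and ignoring safe additions."""
    doc = json.loads(payload)
    # Unknown properties -- e.g. a nullable property the service added later --
    # are silently dropped instead of causing an error.
    return {key: value for key, value in doc.items() if key in KNOWN_FIELDS}

entity = read_entity('{"id": "1", "name": "Sales", "webUrl": "https://example.test", "newProp": null}')
print(entity)   # {'id': '1', 'name': 'Sales', 'webUrl': 'https://example.test'}
```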
https://docs.microsoft.com/en-us/power-bi/developer/api-data-model-versioning
2018-08-14T14:06:06
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
Ernest Renan, "Qu'est-ce qu'une nation ?" : commentaire In the early 1960s the scenery was set for a revolutionary movement, opened by the Civil Rights Movement, that would shatter the U.S. in the 1960s. The media coverage of the Civil Rights Movement was part of a large-scale cultural and social revolution, which called into question the traditional picture of American society. One may wonder to what extent the Civil Rights Movement opened the path for a three-level redefinition of American identities: first, within the Black movement itself, through its relation to the Government; secondly, from the public side itself toward the movement; and finally, from the nation as a whole. In order to highlight the relation between the media and the development of the Civil Rights Movement, let us focus our analysis on a couple of photographs taken during the summer of 1963. [...] The camera was seen as an indiscreet or even harmful intrusion into the homogeneity of the movement. On the other hand, the main leaders of the movement, such as King, saw in TV and newspapers a significant weapon to attract people's attention and to awaken political awareness. Images could be used to tell a story and conveyed the moral dimension of a parable. King was an outstanding orator. His physical and moral charisma conferred a special power on the Civil Rights Movement. [...] [...] This picture of the present and supreme event was only the subjective construction of a ruling order. Paradoxically, pictures were the most available tools to assert the national consensus and introduce a standardised culture and opinions. Photojournalism could construct events and thereby influence the interpretation of past and future. BIBLIOGRAPHY Becker Howard S., Visual Sociology, Documentary Photography, and Photojournalism. Davidson/Lytle, After the Fact, 4th edition. Foner Eric, A Story of American Freedom. Gitlin Todd, The Sixties: Years of Hope, Days of Rage. [...] [...] Nevertheless, the process of photography rests on a process of selection that undoubtedly implies a subjective angle. Moreover, the power and legitimacy of the picture lies in its intrinsic suggestiveness, and so allusiveness. The photographer collects some pieces of the real to construct a new symbolic picture. The picture is entirely reliant on the angle defined by the photographer, and thus on his 'cultural eye'. So, despite the photojournalist's genuine and sincere concern for objectivity, any picture is an exception to the rule. [...] [...] The violence and the intensity of the scene seem miraculously captured through the precise and effective construction of the picture. The black and white uniforms of the policemen, as well as their number, strongly contrast with the idle and weak posture of the isolated victim. Compared to the violence of the first photograph, the second picture seems particularly serene and static. It is an overview of a crowd of demonstrators. The picture seems almost blurred and shows a huge and mixed crowd. [...] [...] So, they naturally appealed to the federal Government, embodied by the popular President Kennedy. Through this March on Washington, the demonstrators recognised the legitimacy of the State and judged it able to grant them citizenship. Thus, it must be underlined that the Movement struggled for the recognition of pre-existing rights. The main part of the battle happened on the legislative field, through the right to the ballot.
Black people rose up against the pressures exerted by segregationist organisations, such as the Ku Klux Klan, during the registration campaigns. [...]
https://docs.school/matieres-artistiques-et-mediatiques/autres-medias/dissertation/influence-media-in-early-development-civil-rights-movement-through-study-10111.html
2018-08-14T14:22:53
CC-MAIN-2018-34
1534221209040.29
[]
docs.school
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Updates capacity settings for a fleet. Use this action to specify the number of EC2 instances (hosts) that you want this fleet to contain. Before calling this action, you may want to call DescribeEC2InstanceLimits to get the maximum capacity based on the fleet's EC2 instance type. Specify minimum and maximum number of instances. Amazon GameLift will not change fleet capacity to values that fall outside of this range. This is particularly important when using auto-scaling (see PutScalingPolicy) to allow capacity to adjust based on player demand while imposing limits on automatic adjustments. To update fleet capacity, specify the fleet ID and the number of instances you want the fleet to host. If successful, Amazon GameLift starts or terminates instances so that the fleet's active instance count matches the desired instance count. You can view a fleet's current capacity information by calling DescribeFleetCapacity. If the desired instance count is higher than the instance type's limit, the "Limit Exceeded" exception occurs. For .NET Core and PCL this operation is only available in asynchronous form. Please refer to UpdateFleetCapacityAsync. Namespace: Amazon.GameLift Assembly: AWSSDK.GameLift.dll Version: 3.x.y.z Container for the necessary parameters to execute the UpdateFleetCapacity service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
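This page documents the .NET SDK; purely for illustration, the same workflow can be sketched with the AWS SDK for Python (boto3). The fleet ID and instance type below are hypothetical placeholders, and this is not the .NET API shown above.

```python
import boto3

gamelift = boto3.client("gamelift")

# Check the account's limit for the fleet's EC2 instance type first,
# as the description above recommends (DescribeEC2InstanceLimits).
limits = gamelift.describe_ec2_instance_limits(EC2InstanceType="c4.large")
print(limits["EC2InstanceLimits"])

# Then set the desired instance count inside an explicit min/max range;
# GameLift will not move capacity outside these bounds.
gamelift.update_fleet_capacity(
    FleetId="fleet-1234abcd-11aa-22bb-33cc-44dd55ee66ff",  # hypothetical fleet ID
    DesiredInstances=10,
    MinSize=1,
    MaxSize=20,
)
```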
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/GameLift/MGameLiftUpdateFleetCapacityUpdateFleetCapacityRequest.html
2018-08-14T14:35:06
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
Package-oriented focus

Like its predecessors, RPM is intended to operate on a package level. Rather than operating on a single-file basis (as when you manually install software using Unix command-line tools like mv and cp) or on an entire system basis (as with many PC operating systems, which provide the ability to upgrade entire releases but not to upgrade individual components), RPM provides software that can manage hundreds or thousands of packages. Each package is a discrete bundle of related files and associated documentation and configuration information; typically, each package is a separate application. By focusing on the package as the managed unit, RPM makes installation and deletion of applications extremely straightforward.
https://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/RPM_Guide/ch01s02s02.html
2018-08-14T13:30:05
CC-MAIN-2018-34
1534221209040.29
[]
docs.fedoraproject.org
Version End of Life: 26 October 2018

Tungsten Replicator support for Hadoop has been updated to support the use of the beeline as well as the hive command. Issues: CT-153, CT-155. For more information, see The load-reduce-check Tool.

The replicator and the load-reduce-check (in [Tungsten Replicator 5.0 Manual]) command that is part of the continuent-tools-hadoop repository share a common JavaScript support file. The current file provides three functions: load — which loads an external JavaScript file; readJSONFile — which loads an external JSON file into a variable; JSON — provides a JSON class, including the ability to dump a JavaScript variable into a JSON string. Issues: CT-99

The thl (in [Tungsten Replicator 2.1 Manual]) command has been improved to support -from (in [Tungsten Clustering for MySQL 5.1 Manual]) and -to (in [Tungsten Clustering for MySQL 5.1 Manual]) options for selecting the range. These act as synonyms for the existing -low (in [Tungsten Replicator 2.1 Manual]) and -high (in [Tungsten Replicator 2.1 Manual]) options.

A number of filters now add metadata indicating whether incoming transactions are suitable to be applied to a target that requires them. Currently the metadata is only added to the transactions and no enforcement is made. The following filters add this information: PrimaryKeyFilter (in [Tungsten Replicator 2.1 Manual]), ColumnNameFilter (in [Tungsten Replicator 2.1 Manual]), EnumToStringFilter (in [Tungsten Replicator 2.1 Manual]), SetToStringFilter (in [Tungsten Replicator 2.1 Manual]). The format of the metadata is tungsten_filter_NAME=true. Issues: CT-157

Installation and Deployment

The tungsten_provision_slave (in [Continuent Tungsten 4.0 Manual]) could fail if the innodb_log_home_dir and innodb_data_home_dir were set to a value different to the datadir option, and the --direct (in [Continuent Tungsten 4.0 Manual]) option was used. Issues: CT-83, CT-141

Heterogeneous Replication

The Hadoop loader would previously load CSV files directly into the /users/tungsten directory, which did not match the layout used by the Hadoop tools, which use only the schema and table name. Issues: CT-135
http://docs.continuent.com/release-notes/release-notes-tr-5-1-0.html
2018-08-14T13:44:07
CC-MAIN-2018-34
1534221209040.29
[]
docs.continuent.com
Migrating Physical Servers Migrating a Physical ServerMigrating a Physical Server Migrating a physical server to the cloud is done by booting a Velostrata Connector ISO image into RAM from a virtual or physical DVDROM/CDROM device. The Velostrata connector maps the local storage and creates a Stub VMware VM as a management object for Velostrata cloud migration operations. From that point forward, the migration is done in a similar way to the migration of other VMs except here it takes place in write isolation mode, meaning data changes made in the cloud are not synced back to on-prem. Notes: - The Stub VM created in the process is intended for Velostrata management operations only, and not set up for local execution on vSphere. It is set up with no network interface, and a minimal CPU/RAM setting. - Test clone is not supported for physical servers. System requirements: - Disk types supported include SAS, SATA, SSD, virtual disks presented by hardware controller, and SAN volumes mounted on physical HBAs. - PATA/IDE disks are not supported. - Minimum of 4GB RAM is recommended - For machines with less than 4GB RAM, press any key during the boot splash screen (one with the keyboard icon) and choose the Velostrata Connector (low memory) option from the menu. This uses an on-demand copy from the CD image. - Physical DVDROM/CDROM or virtual CDROM to boot the Velostrata Connector ISO from. InstructionsInstructions - Check OS compatibility in the Velostrata release notes. - For Linux OS install the Velostrata-Prep RPM. - Download the Velostrata Connector ISO: - Boot from the Velostrata ISO. - This can be done by using ILO (HP Enterprise servers) or iDRAC (Dell servers). The steps below use HP ILO: - Launch the remote console. - Select Virtual Drives > Image File CD/DVD-ROM, and select the Velostrata Connector ISO that you downloaded. - Select Power Switch > Reset. - Once the server is up, ensure that it boots from the ISO. - Log in to the ISO using the following credentials: ubuntu\Welcome01 - Run ./VelosConnector.sh script to view the menu. - In vCenter, check for an iSCSI Software Adapter. If you do not have one, navigate to Configure > Storage Adapters and add an iSCSI Software Adapter. Note: we recommend a 1:1 relationship between a VM and an iSCSI Target. - Select 1) Register a Stub VM for Velostrata operations and follow the instructions. Note that registering the OS properly is required in order to create the correct AMI in the cloud and enable proper migration. The VM appears in vCenter. Deleting a Stub VMDeleting a Stub VM You can delete the VM and the iSCSI target configuration on the ESX side after your migration is complete. Note: If you create more than one VM using the same iSCSI Target, when you delete any of those VMs, it will automatically delete the iSCSI Target as well. Therefore, you must manually delete any additional VMs that were tied to that iSCSI Target. Showing the iSCSI Target SettingsShowing the iSCSI Target Settings You can view the iqn, portals and luns that will be migrated to the cloud. - Select 3) Show iSCSI Target Settings. Ensure that you can see all the disks of the server. Managing the IP ConfigurationManaging the IP Configuration You can view the IP of the server that will be used for the iSCSI target configuration. The ISO gets the IP from DHCP by default, but you can configure a static IP. It is necessary to have a valid IP before the stub can be registered. - Select 4) Show IP Configuration. - Select 5) Setup Static IP to configure a static IP.
http://docs.velostrata.com/m/75847/l/715617-migrating-physical-servers
2018-08-14T13:42:57
CC-MAIN-2018-34
1534221209040.29
[array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/436/593/original/ed48a07d-ce22-40ff-b3ee-bc0d5c87c138.png?1490876441', None], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/436/575/original/93519c3b-397b-4867-90c1-fbdf696c6e6f.png?1490876422', None], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/436/586/original/c38f3800-cc16-435f-b364-3a1ae23591d2.png?1490876433', None], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/436/649/original/593b0c0f-0dff-4f83-9055-4ef95b851d27.png?1490877796', None], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/436/647/original/d1bc3a91-9b1b-44f3-9698-86aed69f054b.png?1490877792', None], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/001/522/769/original/Registering_VM_stub_-_1.png?1525076972', None], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/436/655/original/30674126-c7df-4f8d-8473-47a5037df1d1.png?1490877805', None], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/436/667/original/56d49ccb-2bf5-43d0-b7ad-f798417bf9c3.png?1490878548', None], dtype=object) array(['https://s3.amazonaws.com/screensteps_live/image_assets/assets/000/436/669/original/328b15e7-6e73-4140-af18-72e046acd96f.png?1490879043', None], dtype=object) ]
docs.velostrata.com
17.1. Rulebook Settings 17.1.3. [Protocol SPX] This section is only relevant, and required when using the SPX protocol. Table 17.3. 17.1.4. [Protocol DECnet] This section is only relevant, and required when using the DECnet protocol. Table 17.4. 17.1.7. [Security] Table 17.7. The StartupBy, ShutdownBy and ShutdownFrom parameters use full Regular Expressions. Separate multiple expressions with a comma. Table 17.8. Regular Expression Examples 17.1.8. [generic_agentname] Each Database Agent will have a section that is typically called "generic_" followed by a database label. For example: [generic_virt], [generic_odbc], [generic_db2]. Table 17.9. For the CommandLine parameter, you can specify a selection of the following options: - +noautocommit This means that all connections routed (by the mapping rules) through this agent section will have autocommit behaviour turned off. This is useful if your client-side application relies on manual commits of its transactions; you can define a mapping rule to match that application and add +noautocommit while other applications use a different agent section. - +maxrows This defines the maximum number of rows to fetch from any query. - +initsql Specifies a file with a set of SQL statements to execute immediately each connection is established. For example, this might be useful to set transaction isolation levels, if your application assumes them to be set a specific way already. - +jetfix This enables various workarounds for operation with the Microsoft Jet Engine, e.g. through Access or MS Query. Particularly, the mapping of datatypes may be changed for greater accuracy using these applications. - +norowsetlimit This disables any rowset-size limit; it is useful in cursor operations on large tables. 17.1.9. [Domain Aliases] This section is used to change a domain name specified in the connect string of a DNS with an internal alias. This alias is used in the first colon delimited field of a mapping rule. This example will map two different Progess domains to one agent. An alternative is shown for mapping three different Oracle types to the one Oracle agent. [Domain Aliases] ^Progress 90A$|^Progress 90B$ = pro90b Oracle 9i = ora90 ^Oracle 9.0$ = ora90 ^Oracle 9.0.x$ = ora90 17.1.10. [Database Aliases] This section will replace a database name specified in the connect string of a DNS with an internal alias. This alias is used in the second colon delimited field of a mapping rule. This example looks for a substring 'demo' and will replace with an alias of 'demo'. Thus anydemo, demo, demo123 are all matched, and converted to demo. [Database Aliases] demo = demo 17.1.11. [User Aliases] This section will replace a user name specified in the connect string of a DNS with an internal alias. This alias is used in the third colon delimited field of a mapping rule. The example below shows how certain users or an empty user are handled. In this case they are rejected. [User Aliases] scott|system = insecure ^$ = blank [Mapping Rules] *:*:blank:*:*:*:rw = reject You should specify a username *:*:insecure:*:*:*:rw = reject The user is not allowed 17.1.12. [Opsys Aliases] This section will replace an operating system indentifier with an internal alias. This alias is used in the fourth colon delimited field of a mapping rule. This example will map anything containing the substring 'java' to an alias of 'java'. Two variations of windows are given an alias of 'msdos'. Everything else will be matched to .* so it is mapped to the alias 'other'. 
[Opsys Aliases] java = java win32|msdos = msdos .* = other 17.1.13. [Machine Aliases] This section will replace a machine name with an internal alias. This alias is used in the fifth colon delimited field of a mapping rule. This example will map two different machine names to one of 'adminpc'. Also anything containing the word 'sales' such as mysales, sales, sales2 is then mapped to 'sales' alias. [Machine Aliases] fredspc|johnspc = adminpc sales = sales 17.1.14. [Application Aliases] This section will replace the application name with an internal alias. This alias is used in the sixth colon delimited field of a mapping rule. This example would match MSACCESS (a program requiring the Jet option), and map it to an alias of jet. The second alias mapping would match various Office applications and convert them to a single alias. [Application Aliases] MSACCESS = jet MSQRY.*|EXCEL|WORD = msoffice 17.1.15. [Mapping Rules] This section is used to determine which agent shall handle the incoming request. The mapping rules are checked once all the alias mappings have been performed. Each mapping rule is tried from top to bottom until a match with the current parameters has been found. There is no regular expression or glob handling in the mapping rules. The 7 colon delimited mapping parameters must each match up exactly. There is a special mapping rule of '*' that denotes a dont care parameter. Do not confuse this special '*' with the regular expression '*', or glob '*'.It is not possible to use the '*' with any other text such as 'demo*'. On the right side of the '=' is either an accept, or reject statement. The accept statment has the word 'accept' followed by the section name that identifies the agent. A reject statement has the word 'reject' followed by a text string that is the error message reported to the client. This is an example mapping section. [Mapping Rules] ;*:*:blank:*:*:*:rw = reject You should specify a username and password *:*:Admin:msdos:*:jet:* = reject Admin user account is not registered sql2000:*:*:*:*:*:* = accept generic_sql2000 ora81:*:*:*:*:jet:* = reject The Oracle 8 Database Agent is not configured for jet *:*:*:java:*:*:* = accept jodbc_client Here is a snippet of the debug output showing how a request is shown to be matched. request: domain=Oracle 8.1 database=db serveropts= connectopts= user=scott opsys=win32 readonly=0 application=ODBCAD32 processid=520 solve mapping: ora8sv:db:scott:win32:MASTERSRVR:ODBCAD32:rw using mapping: ora8sv:*:*:*:*:*:*
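The alias sections above are ordinary regular expressions, so their behaviour can be checked outside the rulebook. Here is a small Python illustration of how the [Domain Aliases] example resolves an incoming domain string; it is only a sketch of the matching idea, not the actual rulebook engine, and the patterns are copied from the example above.

```python
import re

# Patterns and aliases taken from the [Domain Aliases] example above.
DOMAIN_ALIASES = [
    (r"^Progress 90A$|^Progress 90B$", "pro90b"),
    (r"Oracle 9i", "ora90"),
    (r"^Oracle 9.0$", "ora90"),
    (r"^Oracle 9.0.x$", "ora90"),
]

def resolve_domain(domain: str) -> str:
    for pattern, alias in DOMAIN_ALIASES:
        if re.search(pattern, domain):
            return alias
    return domain          # no alias matched; the original value is used

print(resolve_domain("Progress 90B"))   # -> pro90b
print(resolve_domain("Oracle 9.0"))     # -> ora90
```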
http://docs.openlinksw.com/uda/mt/mt_rulebook/
2018-11-13T03:15:09
CC-MAIN-2018-47
1542039741192.34
[]
docs.openlinksw.com
SciPy Roadmap¶ Most of this roadmap is intended to provide a high-level view on what is most needed per SciPy submodule in terms of new functionality, bug fixes, etc. Part of those are must-haves for the 1.0 version of Scipy. Furthermore it contains ideas for major new features - those are marked as such, and are not needed for SciPy to become 1.0.¶ This roadmap will be evolving together with SciPy. Updates can be submitted as pull requests. For large or disruptive changes you may want to discuss those first on the scipy-dev mailing list. API changes¶ In general, we want to take advantage of the major version change to fix some. However, there should be clear value in making a breaking change. The 1.0 version label is not a license to just break things - see it as a normal release with a somewhat more aggressive/extensive set of cleanups. It should be made more clear what is public and what is private in SciPy. Everything private should be underscored as much as possible. Now this is done consistently when we add new code, but for 1.0 it should also be done for existing code..¶ The documentation is in decent. Other¶ Scipy 1.0 will likely contain more backwards-incompatible changes than a minor release. Therefore we will have a longer-lived maintenance branch of the last 0.X release.. - New feature idea: more of the currently wrapped libraries should export Cython-importable versions that can be used without linking. Regarding build environments: - NumPy and SciPy should both build from source on Windows with a MinGW-w64 toolchain and be compatible with Python installations compiled with either the same MinGW or with MSVC. - Bento development has stopped, so will remain having an experimental, use-at-your-own-risk status. Only the people that use it will be responsible for keeping the Bento build updated. A more complete continuous integration setup is needed; at the moment we often find out right before a release that there are issues on some less-often used platform or Python version. At least needed are Windows (MSVC and MingwPy), Linux and OS X builds, coverage of the lowest and highest Python and NumPy versions that are supported. Modules¶ fftpack¶ Needed: - solve issues with single precision: large errors, disabled for difficult sizes - fix caching bug - Bluestein algorithm (or chirp Z-transform) - deprecate fftpack.convolve as public function (was not meant to be public) There’s a large overlap with numpy.fft. This duplication has to change (both are too widely used to deprecate one); in the documentation we should make clear that scipy.fftpack is preferred over numpy.fft. If there are differences in signature or functionality, the best version should be picked case by case (example: numpy’s rfft is preferred, see gh-2487). integrate¶ Needed for ODE solvers: - Documentation is pretty bad, needs fixing - A promising new ODE solver interface is in progress: gh-6326. This needs to be finished and merged. After that, older API can possibly be deprecated. The numerical integration functions are in good shape. Support for integrating complex-valued functions and integrating multiple intervals (see gh-3325) could be added, but is not required for SciPy 1.0. interpolate¶ Needed: - Both fitpack and fitpack2 interfaces will be kept. - splmake is deprecated; is different spline representation, we need exactly one - interp1d/interp2d are somewhat ugly but widely used, so we keep them. Ideas for new features: - Spline fitting routines with better user control. 
- Integration and differentiation and arithmetic routines for splines - Needed: - Remove functions that are duplicate with numpy.linalg -. The functions in it can be moved to other modules: - pilutil, images : ndimage - comb, factorial, logsumexp, pade: special - doccer: move to scipy._lib - info, who: these are NumPy functions - derivative, central_diff_weight: remove, replace with more extensive functionality for numerical differentiation - likely in a new module scipy.diff(see below) ndimage¶ Rename the module to regression or fitting, include optimize.curve_fit. This module will then provide a home for other fitting functionality - what exactly needs to be worked out in more detail, a discussion can be found at. optimize¶ Overall this module is in reasonably good shape, however it is missing a few more good global optimizers as well as large-scale optimizers. These should be added. Other things that are needed: - deprecate the fmin_*functions in the documentation, minimizeis preferred. - clearly define what’s out of scope for this module.. Make lsim, impulse and step “just work” for any input system. Improve performance of ltisys (less. Continous wavelets only at the moment - decide whether to completely rewrite or remove them. Discrete wavelet transforms are out of scope (PyWavelets does a good job for those). sparse¶ The sparse matrix formats are getting feature-complete but are slow ... reimplement parts in Cython? - Small matrices are slower than PySparse, needs fixing There are a lot of formats. These should be kept, but improvements/optimizations should go into CSR/CSC, which are the preferred formats. LIL may be the exception, it’s inherently inefficient. It could be dropped if DOK is extended to support all the operations LIL currently provides. Alternatives are being worked on, see and. Ideas for new features: - Sparse arrays now act like np.matrix. We want sparse arrays. stats.distributions is in good shape.. New modules under discussion¶ diff¶ Currently Scipy doesn’t provide much support for numerical differentiation. A new scipy.diff module for that is discussed in. There’s also a fairly detailed GSoC proposal to build on, see here. There is also approx_derivative in optimize, which is still private but could form a solid basis for this module. transforms¶ This module was discussed previously, mainly to provide a home for discrete wavelet transform functionality. Other transforms could fit as well, for example there’s a PR for a Hankel transform . Note: this is on the back burner, because the plans to integrate PyWavelets DWT code has been put on hold.
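The roadmap mentions folding optimize.curve_fit into a broader fitting/regression module. For orientation only, this is the current public curve_fit usage in a minimal form; it is ordinary SciPy API today, not anything newly proposed by the roadmap.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.RandomState(0)
xdata = np.linspace(0, 4, 50)
ydata = model(xdata, 2.5, 1.3) + 0.05 * rng.normal(size=xdata.size)

popt, pcov = curve_fit(model, xdata, ydata)   # popt is approximately [2.5, 1.3]
print(popt)
```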
https://docs.scipy.org/doc/scipy-0.19.1/reference/roadmap.html
2018-11-13T02:32:51
CC-MAIN-2018-47
1542039741192.34
[]
docs.scipy.org
Authentication Example This example demonstrates the basics of an implementation of the SecurityManager.authenticate method. The remainder of the example may be found in the Pivotal GemFire source code in the geode-core/src/main/java/org/apache/geode/examples/security directory. Of course, the security implementation of every installation is unique, so this example cannot be used in a production environment.
http://gemfire.docs.pivotal.io/95/geode/managing/security/authentication_examples.html
2018-11-13T03:26:38
CC-MAIN-2018-47
1542039741192.34
[]
gemfire.docs.pivotal.io
Follow Path Modifier This modifier causes the particle stream to follow a path which is controlled by a sequence of path objects. Path Data Tag Please also see the manual page for the Path Data Tag, which adds additional functionality to this modifier.. Path Objects These objects set out the path the particle stream will follow. The stream will move to each object in the list in the order they are found there. Each path object must have the following requirements: - it must be a closed spline - it must be coplanar - its axis must be centred within it - the profile must point along the Z-axis The modifier can also use a Mograph Cloner as the path object. Simply make a spline the child of a Cloner then drop the Cloner into the 'Path Objects' list. You can then use effectors to alter the position, scale, etc. of the clones and even animate them! Open splines will not work at all; non-coplanar splines may work but the result is unpredictable. (Coplanar means that all the points of the spline lie on the same plane, but that can be any arbitrary plane.) If your path objects are spline primitives which lie on the XY plane (the default setting) you will meet these requirements but for custom splines you must ensure that this is the case. Splines which can be used include Circle, Rectangle, Star, Flower, etc., but not Helix, Arc, Formula and so on. Some (such as Arc) will work if you make it editable and close the spline. Helix can never work because it is never coplanar. Get AOT From For each path object, when the particles turn to head towards it the sharpness of the turn is governed by the 'Acuteness of Turn' setting (AOT). In many cases you can use the same AOT setting for all the path objects, but it may be that for some objects you want a less or more acute turn. In that case, you can assign a Path Data tag to the object and set the AOT in the tag. This way, all the path objects can have different AOT settings if required. The drop-down has two options: Modifier The acuteness of turn will be obtained from the modifier 'Acuteness of Turn' setting for all path objects, even if they have a Path Data tag. Tag If the object has a Path Data tag, the acuteness of turn will be obtained from the tag. If an object has no tag, the AOT setting in the modifier will be used. Acuteness of Turn This value controls how sharply the particle will turn to head for the next object. If it is too low, the particle may not reach the object in time, in which case it will circle back to try again. This setting is used if 'Get AOT From' is set to 'Modifier' or if it is set to 'Tag' and a path object does not have a Path Data tag. Use Minimum Distance When the modifier directs a particle to the first spline in the list of path objects, it has to calculate a position to head for. To do this, it simply selects a random position within the space enclosed by the spline. This often results in particles moving from one side of the path object to the other, crossing other particles, which is especially noticeable if the particles are producing trails. Checking this switch will reduce the crossover by forcing the modifier to find the shortest distance from the particle to a point in the spline area. The accuracy with which it does so can be improved by increasing the 'Max. Retries' value. The switch is off by default to maintain compatibility with existing scenes. It works best if the particle stream is at right angles to the plane of the spline object. 
If the stream is parallel to the spline plane you may see all the particles heading for the same point in the spline area. If this happens, you can turn off this switch and accept the possibility of trails crossing each other. Max. Retries This is the number of times the modifier will try to find a target position in the path spline for a particle to aim for. Some very complex splines may take many attempts to find a target position; you can set the maximum number of attempts in this parameter. Large numbers may slow down the playback. If no target position is found after the maximum retry count is reached, the particle will no longer follow the path and will simply continue travelling along its current direction. You can kill these stray particles with the 'Cull Stray Particles' switch. If the 'Use Minimum Distance' switch is checked, the modifier will try to find the nearest point in the spline area to the current particle position. It will do that for however many tries are given in this parameter. Cull Stray Particles If checked, particles which have not been assigned a target position inside a path object will be removed (see 'Max. Retries' above). Speed Mode This setting determines the particle speed between path objects. It has the following options: No Change The particle maintains its current speed. Set Speed The particle speed is set from the value in the 'Speed' setting (with added variation, if any). Get Speed From Tag If the object has a Path Data tag, the speed will be obtained from the tag. If an object has no tag, the speed will be unchanged. Increment Speed From Tag If the object has a Path Data tag, the speed value will be obtained from the tag and this will then be added to the current speed. If an object has no tag, the speed value will stay at its current value. Speed and Variation Only available in 'Set Speed' mode to set the particle speed. Variation in the speed can be added using the 'Variation' setting. Adjust Speed By Distance Only available in 'Set Speed' mode. When the particle stream turns to point to the next path object, they will all move at the same speed but particles on the outside of the curve will take longer to get there than those on the inside. Checking this switch will reduce that discrepancy by adjusting the speed to take account of the distance to be travelled. If you choose 'Get Speed From Tag' the same setting is also available in the Path Data tag. Loop at End of Path If this switch is checked, when the particles have passed through the last path object, they will return to the first object and move along the path again. If it is unchecked they will simply follow the direction they had on exiting the path. Actions quicktab Actions Actions dragged into these lists will be executed when a particle enters the first path object or reaches the end of the path (i.e. passes the final path object). Add Action Clicking either button will add an action to the scene and drop it into the relevant Actions list. Actions on Start of Path/Actions on End of Path The lists of actions to be carried out at the start and end of travel along the path.
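The 'Use Minimum Distance' and 'Max. Retries' settings describe a simple sample-and-keep-nearest search for a target point inside the path spline. The Python sketch below mirrors that described behaviour in a language-agnostic way; it is an illustration of the idea, not the plugin's actual code, and spline_contains and bounds are stand-ins for whatever point-in-spline test and bounding box you have available.

```python
import math
import random

def pick_target(particle_xy, spline_contains, bounds, max_retries=50):
    """Sample candidate points inside the closed spline and keep the one
    closest to the particle; return None if no candidate was found."""
    min_x, min_y, max_x, max_y = bounds
    best, best_dist = None, math.inf
    for _ in range(max_retries):
        candidate = (random.uniform(min_x, max_x), random.uniform(min_y, max_y))
        if not spline_contains(candidate):        # must lie inside the spline area
            continue
        dist = math.dist(particle_xy, candidate)
        if dist < best_dist:
            best, best_dist = candidate, dist
    return best   # None corresponds to a 'stray' particle that may be culled
```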
http://docs.x-particles.net/html/pathmod.php
2018-11-13T02:13:51
CC-MAIN-2018-47
1542039741192.34
[array(['../images/modifier_v4_fpath1.jpg', None], dtype=object) array(['../images/modifier_v4_fpath2.jpg', None], dtype=object)]
docs.x-particles.net
File Tab

Drawings Tab (Harmony Server only)

The Drawings tab lists the modified drawings.

Palettes Tab

The Palettes tab lists the modified colour palettes.

Palette Lists Tab

The Palette Lists tab lists all the modified colour palette lists.
https://docs.toonboom.com/help/harmony-16/advanced/reference/dialog-box/advanced-save-dialog-box.html
2018-11-13T02:55:57
CC-MAIN-2018-47
1542039741192.34
[array(['../../Resources/Images/HAR/Stage/Network/Steps/hmy_004_advancedsave_001.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Stage/Network/Steps/hmy_004_advancedsave_007.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Network/Steps/hmy_004_advancedsave_002.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Network/Steps/hmy_004_advancedsave_003.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Network/Steps/hmy_004_advancedsave_004.png', None], dtype=object) ]
docs.toonboom.com
Command scan

scan

Search for addresses that are located in a memory mapping (haystack) belonging to another (needle). scan requires two arguments: the first is the memory section that will be searched, and the second is what will be searched for. The arguments are grepped against the process's memory mappings (just like vmmap) to determine the memory ranges to search.
https://gef.readthedocs.io/en/master/commands/scan/
2018-11-13T03:27:02
CC-MAIN-2018-47
1542039741192.34
[array(['https://i.imgur.com/Ua0VXRY.png', 'scan-example'], dtype=object)]
gef.readthedocs.io
Cloning from Pre-Generated Bundles¶

hg.mozilla.org supports offloading clone requests to pre-generated bundle files stored in a CDN and Amazon S3. This results in drastically reduced server load (which helps prevent outages due to accidental, excessive load) and frequently results in faster clone times.

How It Works¶

When a Mercurial client clones a repository, it looks to see if the server is advertising a list of available, pre-generated bundle files. If it is, it looks at the list, finds the most appropriate entry, downloads and applies that bundle, then does the equivalent of an hg pull against the original Mercurial server to fetch new data since the time the bundle file was produced. The end result is a faster clone with drastically reduced load on the Mercurial server.

Enabling¶

If you are running Mercurial 3.7 or newer, support for cloning from pre-generated bundles is built in to Mercurial itself and enabled by default. If you are running Mercurial 3.6, support is built in but requires enabling a config option:

[experimental]
clonebundles = true

If you are running a Mercurial version older than 3.6, upgrade to leverage the clone bundles feature. Mercurial 4.1 is required to support zstd bundles, which are smaller and faster than bundles supported by earlier versions.

Configuring¶

hg.mozilla.org will advertise multiple bundles/URLs for each repository. Each listing varies by: - Bundle type - Server location

By default, Mercurial uses the first entry in the server-advertised bundles list that the client supports. The clone bundles feature allows the client to define preferences of which bundles to fetch. The way this works is the client defines some key-value pairs in its config and bundles having these attributes will be upweighted.

Bundle Attributes on hg.mozilla.org¶

On hg.mozilla.org, the following attributes are defined in the manifest:

- BUNDLESPEC - This defines the type of bundle. We currently generate bundles with the following specifications: zstd-v2, gzip-v1, gzip-v2, none-packed1.
- REQUIRESNI - Indicates whether the URL requires SNI (a TLS extension). This is set to true for URLs where multiple certificates are installed on the same IP and SNI is required. It is undefined if SNI is not required.
- ec2region - The EC2 region the bundle file should be served from. We support us-west-1, us-west-2, us-east-1, eu-central-1. You should prefer the region that is closest to you.
- cdn - Indicates whether the URL is on a CDN. Value is true to indicate the URL is a CDN. All other values or undefined values are to be interpreted as not a CDN.
Example Manifests¶ Here is an example clone bundles manifest: BUNDLESPEC=zstd-v2 REQUIRESNI=true cdn=true BUNDLESPEC=zstd-v2 ec2region=us-west-2 BUNDLESPEC=zstd-v2 ec2region=us-west-1 BUNDLESPEC=zstd-v2 ec2region=us-east-1 BUNDLESPEC=zstd-v2 ec2region=eu-central-1 BUNDLESPEC=gzip-v2 REQUIRESNI=true cdn=true BUNDLESPEC=gzip-v2 ec2region=us-west-2 BUNDLESPEC=gzip-v2 ec2region=us-west-1 BUNDLESPEC=gzip-v2 ec2region=us-east-1 BUNDLESPEC=gzip-v2 ec2region=eu-central-1 BUNDLESPEC=none-packed1;requirements%3Dgeneraldelta%2Crevlogv1 REQUIRESNI=true cdn=true BUNDLESPEC=none-packed1;requirements%3Dgeneraldelta%2Crevlogv1 ec2region=us-west-2 BUNDLESPEC=none-packed1;requirements%3Dgeneraldelta%2Crevlogv1 ec2region=us-west-1 BUNDLESPEC=none-packed1;requirements%3Dgeneraldelta%2Crevlogv1 ec2region=us-east-1 BUNDLESPEC=none-packed1;requirements%3Dgeneraldelta%2Crevlogv1 ec2region=eu-central-1 As you can see, listed bundle URLs vary by bundle type (compression and format) and location. For each repository we generate bundles for, we generate: - A zstd bundle (either default compression or maximum compression depending on repo utilization) - A gzip bundle (the default compression format) - A streaming bundle file (larger but faster) For each of these bundles, we upload them to the following locations: - CloudFront CDN - S3 in us-west-2 region - S3 in us-west-1 region - S3 in us-east-1 region - S3 in eu-central-1 region Which Bundles to Prefer¶ The zstd bundle hosted on CloudFront is the first entry and is thus preferred by clients by default. zstd bundles are the smallest bundles and for most people they are the ideal bundle to use. Note Mercurial 4.1 is required to use zstd bundles. If an older Mercurial client is used, larger, non-zstd bundles will be used. If you have a super fast internet connection, you can prefer the packed/streaming bundles. This will transfer 30-40% more data on average, but will require almost no CPU to apply. If you can fetch from S3 or CloudFront at 1 Gbps speeds, you should be able to clone Firefox in under 60s.: # HG 3.7+ [ui] clonebundleprefers = VERSION=packed1 # HG 3.6 [experimental] clonebundleprefers = VERSION=packed1 Manifest Advertisement to AWS Clients¶ If a client in Amazon Web Services (e.g. EC2) is requesting a bundle manifest and that client is in an AWS region where bundles are hosted in S3, the advertised manifest will only show S3 URLs for the same AWS region. In addition, stream clone bundles are the highest priority bundle. This behavior ensures that AWS transfer are intra-region (which means they are fast and don’t result in a billable AWS event) and that hg clone completes as fast as possible (stream clone bundles are faster than gzip bundles). Important If you have machinery in an AWS region where we don’t host bundles, please let us know. There’s a good chance that establishing bundles in your region is cheaper than paying the cross-region transfer costs (intra-region transfer is free). Which Repositories Have Bundles Available¶ Bundles are automatically generated for repositories that are high volume (in terms of repository size and clone frequency) or have a need for bundles. The list of repositories with bundles enabled can be found at. A JSON document describing the bundles is available at. If you think bundles should be made available for a particular repository, let a server operator know by filing a Developer Services :: hg.mozilla.org bug or by asking in #vcs on irc.mozilla.org.
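The manifest format above is simple enough to inspect by hand. As a rough illustration of how a client preference such as BUNDLESPEC=zstd-v2 selects an entry, the sketch below assumes each manifest line is a URL followed by attribute pairs, which is how the advertised entries are described; it is not Mercurial's implementation, and the example URLs are placeholders.

```python
def pick_bundle(manifest_text, key="BUNDLESPEC", value="zstd-v2"):
    """Return the first URL whose attributes match the preference,
    falling back to the first advertised entry."""
    first_url = None
    for line in manifest_text.splitlines():
        if not line.strip():
            continue
        url, *attrs = line.split()
        first_url = first_url or url
        pairs = dict(attr.split("=", 1) for attr in attrs if "=" in attr)
        if pairs.get(key) == value:
            return url
    return first_url

manifest = (
    "https://cdn.example/bundle.zst BUNDLESPEC=zstd-v2 cdn=true\n"
    "https://s3.example/bundle.gz BUNDLESPEC=gzip-v2 ec2region=us-west-2\n"
)
print(pick_bundle(manifest))   # -> https://cdn.example/bundle.zst
```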
https://mozilla-version-control-tools.readthedocs.io/en/latest/hgmo/bundleclone.html
2018-11-13T03:41:31
CC-MAIN-2018-47
1542039741192.34
[]
mozilla-version-control-tools.readthedocs.io
A.. Method bindings can be static (the default), virtual, or dynamic. Virtual and dynamic methods can be overridden, and they can be abstract. These designations come into play when a variable of one class type holds a value of a descendant class type. They determine which implementation is activated when a method is called.; To make a method virtual or dynamic, include the virtual or dynamic directive in its declaration. Virtual and dynamic methods, unlike static methods, can be overridden in descendant classes. When an overridden method is called, the actual (runtime) descendant runtime.. In Delphi for Win32, virtual and dynamic methods are semantically equivalent. However, they differ in the implementation of method-call dispatching at runtime: virtual methods optimize for speed, while dynamic methods optimize for code size. In general, virtual methods are the most efficient way to implement polymorphic behavior. Dynamic methods are useful when a base class declares many overridable methods which are inherited by many descendant classes in an application, but only occasionally overridden.; T2 = class(T1) procedure Act; // Act is redeclared, but not overridden end; var SomeObject: T1; begin SomeObject := T2.Create; SomeObject.Act; // calls T1.Act end; The reintroduce directive suppresses compiler warnings about hiding previously declared virtual methods. For example, procedure DoSomething; reintroduce; // the ancestor class also has a DoSomething method Use reintroduce when you want to hide an inherited virtual method with a new one. An abstract method is a virtual or dynamic method that has no implementation in the class where it is declared. Its implementation is deferred to a descendant class. Abstract methods must be declared with the directive abstract after virtual or dynamic. For example, procedure DoSomething; virtual; abstract; You can call an abstract method only in a class or instance of a class in which the method has been overridden. Most methods are called instance methods, because they operate on an individual instance of an object. A class method is a method (other than a constructor) that operates on classes instead of objects. There are two types of class methods: ordinary class methods and class static could be a descendant of the class in which it is defined). If the method is called in the class C, then Self is of the type class of C. Thus you cannot use the Self to access instance fields, instance properties, and normal (object) methods, but you can use it to call constructors and other class methods, or to access class properties and class fields. A class method can be called through a class reference or an object reference. When it is called through an object reference, the class of the object becomes the value of Self. Like class methods,. Methods are made class static by appending the word static to their declaration, for example; Like a class method, you can call a class static method through the class type (i.e. without having an object reference), for example TMyClass.X := 17; TMyClass.StatProc('Hello'); A method can be redeclared using the overload directive. In this case, if the redeclared method has a different parameter signature from its ancestor, it overloads the inherited method without hiding it. Calling the method in a descendant class activates whichever implementation matches the parameters in the call. 
If you overload a virtual method, use the reintroduce directive when you redeclare it in descendant descendant as virtual is equivalent to a static constructor. When combined with class-reference types, however, virtual constructors allow polymorphic construction of objects -- that is, construction of objects whose types aren't known at compile time. (See Class references.) of checking for nil values before destroying an object.. The implementation of a message method can call the inherited message method, as in this.
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devcommon/methods_xml.html
2018-02-17T21:02:30
CC-MAIN-2018-09
1518891807825.38
[]
docs.embarcadero.com
Configuration in Mass Configuration is available in the Add-ons section in Jira Administration. Deletion jobs The Deletion jobs page shows deletion job(s) along with their status. It is also the starting page used to create new jobs. Only one job can be running at a time. When a job is currently in progress, its status is displayed in the same table. The page includes the following columns: - Name - Status - one of: - In progress - Done (completed successfully) - Stopped (user terminated the job while it was running) - Failed (job terminated due to an unexpected error) - Created date and time - Last update - when an issue was last deleted - Deleted count - Skipped count - an issue may be skipped if the add-on receives an error, for example because it is not authorized to delete it or because the issue no longer exists - Remaining count An in-progress job may be stopped using the Stop action. Completed or stopped jobs can be executed again with the Run again action. Before a job is re-executed, it will be possible to rename it, adjust the query and review the currently matching issues. Creating jobs Click the Create new job or use the Run again action in Deletion jobs screen to create a new job. The configuration page is shown below. Each job is given a Job name. This name will be used as a label on job status page. The Query determines what Jira issues will be deleted by the job.. Once job name and query are in place, click Preview matching issues to review Jira issues matching the query. The add-on will display a quick preview with a small sample. It also provides a View all issues button, opening Jira search with the same query in a new tab. Use it to review more issues and have access to column selection. If you adjust the query in Jira issue search, remember to copy it back to job configuration! Next, click the Delete issues button. Before the job starts, it needs confirmation by typing in the number of issues currently matching the query. Once the job is saved, you will be navigated back to the job status page.
http://docs.expium.com/mass-delete-for-jira/configuration-in-mass-delete/
2018-02-17T21:17:13
CC-MAIN-2018-09
1518891807825.38
[]
docs.expium.com
Dooming Transactions¶. An example of such a use case can be found in zope/app/form/browser/editview.py. Here a form validation failure must doom the transaction as committing the transaction may have side-effects. However, the form code must continue to calculate a form containing the error messages to return. For Zope in general, code running within a request should always doom transactions rather than aborting them. It is the responsibilty of the publication to either abort() or commit() the transaction. Application code can use savepoints and doom() safely. To see how it works we first need to create a stub data manager: >>> from transaction.interfaces import IDataManager >>> from zope.interface import implementer >>> @implementer(IDataManager) ... class DataManager: ... def __init__(self): ... self.attr_counter = {} ... def __getattr__(self, name): ... def f(transaction): ... self.attr_counter[name] = self.attr_counter.get(name, 0) + 1 ... return f ... def total(self): ... count = 0 ... for access_count in self.attr_counter.values(): ... count += access_count ... return count ... def sortKey(self): ... return 1 Start a new transaction: >>> import transaction >>> txn = transaction.begin() >>> dm = DataManager() >>> txn.join(dm) We can ask a transaction if it is doomed to avoid expensive operations. An example of a use case is an object-relational mapper where a pre-commit hook sends all outstanding SQL to a relational database for objects changed during the transaction. This expensive operation is not necessary if the transaction has been doomed. A non-doomed transaction should return False: >>> txn.isDoomed() False We can doom a transaction by calling .doom() on it: >>> txn.doom() >>> txn.isDoomed() True We can doom it again if we like: >>> txn.doom() The data manager is unchanged at this point: >>> dm.total() 0 Attempting to commit a doomed transaction any number of times raises a DoomedTransaction: >>> txn.commit() Traceback (most recent call last): DoomedTransaction: transaction doomed, cannot commit >>> txn.commit() Traceback (most recent call last): DoomedTransaction: transaction doomed, cannot commit But still leaves the data manager unchanged: >>> dm.total() 0 But the doomed transaction can be aborted: >>> txn.abort() Which aborts the data manager: >>> dm.total() 1 >>> dm.attr_counter['abort'] 1 Dooming the current transaction can also be done directly from the transaction module. We can also begin a new transaction directly after dooming the old one: >>> txn = transaction.begin() >>> transaction.isDoomed() False >>> transaction.doom() >>> transaction.isDoomed() True >>> txn = transaction.begin() After committing a transaction we get an assertion error if we try to doom the transaction. This could be made more specific, but trying to doom a transaction after it’s been committed is probably a programming error: >>> txn = transaction.begin() >>> txn.commit() >>> txn.doom() Traceback (most recent call last): ... ValueError: non-doomable A doomed transaction should act the same as an active transaction, so we should be able to join it: >>> txn = transaction.begin() >>> txn.doom() >>> dm2 = DataManager() >>> txn.join(dm2) Clean up: >>> txn = transaction.begin() >>> txn.abort()
http://transaction.readthedocs.io/en/latest/doom.html
2018-02-17T21:02:06
CC-MAIN-2018-09
1518891807825.38
[]
transaction.readthedocs.io
Overview ClassicLink Mirror is an AWS-provided, open-source solution for replicating (mirroring) EC2-Classic security groups to a new environment in Amazon Virtual Private Cloud (Amazon VPC). This solution is especially useful when performing complicated migrations between the two platforms because it mirrors network security settings in EC2 Classic to the corresponding (target) VPC network environment. Background: Migrating from EC2-Classic to Amazon VPC Two key challenges arise when planning for migration of an application from one network to another. One is maintaining connectivity, as it is common for cloud applications to consist of multiple services that require interconnectivity within the network, i.e. over private IP addresses. The other is maintaining proper access between applications while the migration is in progress. One way to complete a migration is to replicate the old network structure in the new network, and then move the entire deployment from one network to the other. However, this requires application downtime and so, for availability reasons, many customers prefer to carry out migration in a more incremental manner. In January 2015, AWS released a feature called ClassicLink which allows customers to associate (link) EC2-Classic instances with Amazon VPC security groups in the same AWS Region, enabling private communication between the two platforms. This communication facilitates incremental migrations to Amazon VPC, allowing customers to migrate individual components while maintaining communication between older EC2-Classic instances and new EC2 instances running in a virtual private cloud (VPC). In some cases, the migration is completed rapidly and this association is straightforward. However, over the course of a longer-term migration, the set of EC2-Classic instances might change due to manual capacity adjustments or Auto Scaling rules. Furthermore, EC2-Classic security group rules might be added or removed, and it will be necessary to mirror those changes to the corresponding VPC security groups as well. The ClassicLink Mirror solution automates these tasks. It monitors appropriately tagged EC2-Classic security groups, and whenever there is change in their rules or instance memberships, it will replicate those changes in the associated VPC to help keep the networks consistent (mirrored) during migration. The mirroring actions are unidirectional: the user need only update the EC2-Classic security groups and ClassicLink Mirror will overwrite/update the Amazon VPC side accordingly. See the Architecture Overview for detailed information. Cost You are responsible for the cost of the AWS services used while running this solution. There is no additional cost for deploying the automated solution. As of the date of publication, the cost for running this solution is negligible—for most customers the estimated cost will be less than a penny a month. AWS Lambda pricing is based on invocation count and duration. Therefore, the cost of running ClassicLink Mirror automation depends primarily on the frequency with which relevant Amazon EC2 APIs are called from your account (see the appendix for a complete list). For smaller deployments, each invocation of the Lambda function can be expected to complete in under five (5) seconds. Monitor your monthly AWS Lambda bill for a detailed breakdown of service costs incurred while running this solution. Prices are subject to change. For full details, see the pricing webpage for each AWS service you will be using in this solution.
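The mirroring idea described above — watch tagged EC2-Classic security groups and reapply their rules to the corresponding VPC security groups — can be sketched with the AWS SDK for Python. This is only a conceptual sketch, not the solution's actual Lambda code; the group IDs are placeholders, and a real implementation must also translate group-to-group references and handle rule removal, as ClassicLink Mirror does.

```python
import boto3

ec2 = boto3.client("ec2")

def mirror_ingress_rules(classic_sg_id, vpc_sg_id):
    """Copy ingress permissions from an EC2-Classic security group to its VPC counterpart."""
    classic = ec2.describe_security_groups(GroupIds=[classic_sg_id])["SecurityGroups"][0]
    permissions = classic["IpPermissions"]
    if permissions:
        # NOTE: permissions that reference other Classic groups (UserIdGroupPairs)
        # would need to be rewritten to point at the mirrored VPC groups.
        ec2.authorize_security_group_ingress(GroupId=vpc_sg_id, IpPermissions=permissions)

mirror_ingress_rules("sg-classic1234", "sg-vpc5678")   # placeholder IDs
```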
https://docs.aws.amazon.com/solutions/latest/classiclink-mirror/overview.html
2018-02-17T21:54:17
CC-MAIN-2018-09
1518891807825.38
[]
docs.aws.amazon.com
This article provides guidance on how Systen Admistrators. NOTE:. Trace File Metrics ( perf_monitor_rdbms.csv) Summary File and Details File Metrics ( data_store_summary.csv / data_store_details.csv) Note: Metrics labeled "[Details Only]" apply only to the details file. This is because the details file provides the same metrics as the summary file, but the metrics are broken down for each data store, entity, and query rule. These log files provide performance measurements on the building blocks of Appian expressions: functions and rules. Rules can contain both functions and/or other rules. In such cases, the measured time of the main rule will include that of any nested rules and/or functions. Note: If a function or rule does not evaluate successfully it will not be measured. Under certain additional circumstances both rules and Appian-provided functions may go unmeasured. Custom Function Plug-ins that evaluate successfully are always measured. Trace File Metrics ( expressions_trace.csv) Note: Because most SAIL interfaces evaluate many rules and expression functions to serve a single request, enabling the expressions trace log may adversely affect system performance. Summary File Metrics ( expressions_summary.csv) Details File Metrics ( expressions_details.csv) These log files provide measurements on offline mobile performance. Trace File Metrics ( offline_trace.csv) Summary File Metrics ( offline_summary.csv)) Note: Metrics labeled "[Details Only]" apply only to the details file. This is because the details file provides the same metrics as the summary file, but the metrics are broken down for each report. These log files provide performance measurements on SAIL interfaces, including record and report dashboards and SAIL start and task forms. Trace File Metrics ( sail_trace.csv) Summary File Metrics ( sail_summary.csv) Details File Metrics ( sail_details) Note: Metrics labeled "[Details Only]" apply only to the details file. This is because the details file provides the same metrics as the summary file, but the metrics are broken down for each call and each operating system. This file contains information about the status of each engine server. It is written every five minutes to engine_summary.csv, but unlike other summary logs, one line is written for each configured gateway.: Trace File Metrics ( web_apis_trace.csv) Summary File and Details File Metrics ( web_api_summary.csv / web_api_details.csv) The search server replication performance log records the replication of data to the search server for indexing. The log is written to each time a replication occurs, which can be up to once per minute. Summary File Metrics ( search_server_replication_summary Content Metrics log ( content.csv) records metrics on data stored in the Content Engine Server. Metrics include the following: The Data Type Metrics log file ( types.csv) provides information on the system and custom data types created in the system. News Metrics log ( news.csv) records metrics on the number of feeds, posts, comments, and events produced and used on the News tab of the Tempo interface. Records Metrics log file ( records.csv) provides information about record types. 
Metrics include the following: The Search Server Metrics log file ( search_server.csv) provides information on the search server component of the Appian architecture.: Note: SAIL start forms, SAIL task forms, Tempo reports, record views, SAIL related actions, and Web APIs are written to a comma separated value (CSV) file ( design_errors.csv) in the <APPIAN_HOME>/logs directory. This log has the following columns: When the application server encounters an unexpected error, a message describing the error is logged to application_server: [default-threads - 29] ERROR com.appiancorp.ra.workpoller.WorkPoller - Could not obtain 3 thread(s) after 10000 attempts The error means an unattended node, possibly one running a Custom Smart Service Plug-In, is taking an abnormally long time to complete. By default, logs for the search server are located in the <APPIAN_HOME>/logs/search-server/ directory. This can be controlled by editing <APPIAN_HOME>/search-server/conf/log4j.properties. Gateway startup status is logged in text files within the <APPIAN_HOME>/logs/ directory. Each gateway startup log filename is constructed as follows: gw-engine-name.log Example: gw-process-execution1.log The startup log gives information about which file is being booted, the number of transactions to replay, and the final status of when it's up and running. For example, the startup log for an execution engine with no transactions to replay looks like this: Loading pe1.kdb at 2014-11-18 07:15:33.779. Loaded pe1.kdb in 1548ms. Booting pe from '/usr/local/appian/server/process/exec/': No transactions for replay. Booted. Primary is ready. When there are transactions to replay, the log will contain the number of transactions to replay, the elapsed time, and the estimated time remaining. The estimated time remaining becomes more accurate as the transaction replay gets closer to 100%. It is also more accurate when there are a greater number of transactions to replay. When starting .kdb files with hundreds of thousands of transactions to replay, the first line after "Executing transactions" may not print for several minutes. Example: Loading pe10.kdb at 2014-11-18 07:15:33.779. Loaded pe10.kdb in 11845ms. Booting pe from '/usr/local/appian/server/process/exec/': Loaded 6067 transactions for replay. 
Executing transactions 242 (3%) replayed in 05.895s (~02m 21.899s remaining) 484 (7%) replayed in 06.110s (~01m 10.480s remaining) 726 (11%) replayed in 06.318s (~46.485s remaining) 968 (15%) replayed in 06.537s (~34.435s remaining) 1210 (19%) replayed in 06.741s (~27.060s remaining) 1452 (23%) replayed in 07.633s (~24.262s remaining) 1694 (27%) replayed in 07.777s (~20.077s remaining) 1936 (31%) replayed in 10.317s (~22.014s remaining) 2178 (35%) replayed in 13.557s (~24.207s remaining) 2420 (39%) replayed in 16.576s (~24.981s remaining) 2662 (43%) replayed in 17.829s (~22.805s remaining) 2904 (47%) replayed in 18.030s (~19.638s remaining) 3146 (51%) replayed in 24.123s (~22.398s remaining) 3388 (55%) replayed in 24.542s (~19.406s remaining) 3630 (59%) replayed in 26.176s (~17.573s remaining) 3872 (63%) replayed in 33.137s (~18.785s remaining) 4114 (67%) replayed in 33.906s (~16.069s remaining) 4356 (71%) replayed in 34.083s (~13.387s remaining) 4598 (75%) replayed in 34.167s (~10.915s remaining) 4840 (79%) replayed in 34.256s (~08.684s remaining) 5082 (83%) replayed in 34.343s (~06.565s remaining) 5324 (87%) replayed in 34.441s (~04.806s remaining) 5566 (91%) replayed in 35.890s (~03.230s remaining) 5808 (95%) replayed in 36.256s (~01.616s remaining) 6050 (99%) replayed in 37.256s (~00.104s remaining) 6067 (100%) replayed in 37.302s Booted. Primary is ready. Gateway communication events and errors are logged in separate text files within the <APPIAN_HOME>/logs/ directory. Each gateway event and error log filename is constructed as follows: gw_EngineAcronym_DATE_TIME.log Examples: gw_NO1_2011-07-22_1917.log gw_PO1_2011-07-22_1917.log Each gateway event and error log is written in the following syntax: DATE TIMESTAMP [Engine Acronym] LOGGING LEVEL .a.gw "Timing in milliseconds" "Action" Examples: 2011-07-22 19:17:36 [PO1] INFO .a.gw "State transition from [DISCONNECTED] to [DISCONNECTED]" 2011-07-22 19:17:37 [PX021] INFO .a.gw "State transition from [DISCONNECTED] to [DISCONNECTED]" Engine acronyms. To create logging for a specific engine gateway: log.propertiesconfiguration file, in the same directory. log_XX_YY.propertieswhere XX is either db (for the engine database) or gw (for the gateway), and YY is the Server ID. The log configuration file you create should list the following settings. #configure the root level configure=DEBUG, A3 #configure node configure.a.gw=DEBUG, A3 #declare appender A1 A1=.a.l.appenders.text.stdout A1.pattern=%yyyy-%mo-%dd %hh:%mm:%ss [%app] %lvl %node "%msg" #declare appender A2 #BINARY FILE APPENDER #Use synch=0. With synch=1 log statement does not complete until updated file written to disk - much too slow. A2=.a.l.appenders.data.binfile A2.file=$(AE_SVRLOG)/$(AE_LOGNM).l A2.synch=0 #declare appender A3 #TEXT FILE APPENDER A3=.a.l.appenders.text.textfile A3.pattern=%yyyy-%mo-%dd %hh:%mm:%ss [%app] %lvl %node "%msg" A3.file=$(AE_SVRLOG)/$(AE_LOGNM) A3.append=1 Only the following properties are configurable: See below: Lowering Log Messages, Changing Log Appenders, and Changing Log Syntax The log properties files that you can create and the corresponding log files generated for each of the Appian gateways include the following: Database communication events and errors are also logged in text files within to the <APPIAN_HOME>/logs/ directory. They operate similar to the Engine Gateway Log files. 
See above: Engine Gateways Each database event and error log filename is constructed in the following syntax: db_EngineAcronym_DATE_TIME.log Examples: db_CO1_2011-07-22_2020.log db_PD1_2011-07-22_1917.log Each database event and error log is written in the following syntax: DATE TIMESTAMP [Engine Acronym] {Engine File} LOGGING LEVEL Message Type "User Context" "Timing in milliseconds" "Action" Examples: 2011-07-22 20:20:18 [PA00011] {pa2.kdb 13} (Default) WARN .a.pf.te "Administrator" "251.0251" ".a.p.TOPICS.send_message" 2011-07-22 20:20:17 [PX001] {pe2.kdb 1} (Default) WARN .a.p.PROCESS.i "Incremental Update: Exec Engine is not ready to start Incremental update.": <APPIAN_HOME>/logs/login-audit-web-api.csv Data captured includes the following: A login is determined to have Succeeded when the following valid credentials are passed: The user agent information reported by the web browser often includes industry acronyms and jargon with unclear meaning. For example: Special Consideration for the Cloud To view the login-audit file: 2012-04-13 15:12:25,889 [http-0.0.0.0-8080-13] INFO com.appiancorp.content.ContentServiceJavaImpl_Delete - Successful deletion of objects: ids=521; types=Document; names=["server"]; deleted by user=[john.smith.s] Example Tempo Feed Entry Deletion 2011-12-19 13:31:25,509 [http-0.0.0.0-8080-2] INFO com.appiancorp.tempo.rdbms.RdbmsFeedSourceImpl_Delete - Administrator deleted entry [id=b-2] by [user=Administrator]: [body=hello world]: Note:), information is logged in a comma-separated (CSV) file, blocked_files.csv, in <AE_HOME>/logs/audit/. Data captured includes the following: Log files can accumulate rapidly and must be actively managed.: log4j.appender.<APPENDER_NAME>.MaxFileSize=: 2 12 22 32 42 52 The number of files that are created is limited by the following property in the appian_log4j.properties file: log4j.appender.<APPENDER_NAME>.MaxBackupIndex=: <APPIAN_HOME>/server/_scripts To remove aging log files, use the logs argument, passing the path to a backup folder and the number of log files to keep. For example: ./cleanup.sh logs -target /appian_backup_log_files/ -keep 3: com.appiancorp.security.util.StringSecurityUtils - The HTML tag contained an attribute that we could not process. The request attribute has been filtered out, but the tag is still in place. The value of the attribute was ... These messages are logged when the following logger in appian_log4j.properties is set to the WARN level. By default, it is set to ERROR. log4j.logger.com.appiancorp.security.util.StringSecurityUtils: log4j.logger.com.appiancorp.security.csrf=WARN: log4j.logger.com.appiancorp.process.runtime.activities.QueryRdbmsActivity=DEBUG`. 
NOTE: jvm 1 | 2007/01/09 19:12:37 | Jan 9, 2007 7:12:37 PM com.metaparadigm.jsonrpc.JSONRPCBridge analyzeClass
jvm 1 | 2007/01/09 19:12:37 | INFO: analyzing com.appiancorp.process.execution.presentation.ProcessExecutionAccess
jvm 1 | 2007/01/09 19:12:37 | Jan 9, 2007 7:12:37 PM com.metaparadigm.jsonrpc.JSONRPCBridge analyzeClass
jvm 1 | 2007/01/09 19:12:37 | INFO: analyzing com.appiancorp.asi.components.common.ClientComponentAccess
jvm 1 | 2007/01/09 19:12:37 | Jan 9, 2007 7:12:37 PM com.metaparadigm.jsonrpc.JSONRPCBridge analyzeClass
jvm 1 | 2007/01/09 19:12:37 | Jan 9, 2007 7:12:37 PM com.metaparadigm.jsonrpc.JSONRPCBridge analyzeClass
jvm 1 | 2007/01/09 19:12:37 | INFO: analyzing com.appiancorp.suiteapi.content.Content.design.presentation.ProcessDesign.analytics2.display.AnalyticsAccess.PaletteScheParameterSche.forms.FormConfig
jvm 1 | 2007/01/09 19:12:46 | Jan 9, 2007 7:12:46 PM com.metaparadigm.jsonrpc.BeanSerializer analyzeBean
You can remove this logging from JSON by adding the following to the logging.properties file located under the <JAVA_HOME>/jre/lib directory: com.metaparadigm.jsonrpc.level=WARNING You can also choose to set all java.util.logging logging to this level by setting the following in the same file: .level=WARNING java.util.logging.ConsoleHandler.level = WARNING A full list of configuration options for logging.properties appears below: logging.properties
############################################################
#Handler specific properties.
#Describes specific configuration info for Handlers.
############################################################
# com.metaparadigm.jsonrpc.level=WARNING
The directory where log files are written is controlled by the following property, in the custom.properties file. conf.suite.AE_LOGS=<install_dir>/logs The log file name and path are determined by the log4j.appender.<APPENDER_NAME>.File= property in <APPIAN_HOME>/ear/suite: log4j.appender.<APPENDER_NAME>.layout.ConversionPattern=: log4j.rootLogger=<LOGGING_LEVEL>, <APPENDER_NAME_1>, <APPENDER_NAME_2> In the following example, the root logger statement writes ERROR level messages (and above) using two appenders (console for <APPENDER_NAME_2>). log4j.rootLogger=ERROR, CONSOLE, WORK_POLLER This sample appender (named the WORK_POLLER appender) writes messages to a text file named work-poller.log.
###### WORK_POLLER appender
log4j.appender.WORK_POLLER.layout=org.apache.log4j.PatternLayout
log4j.appender.WORK_POLLER.layout.ConversionPattern=%d{ABSOLUTE} %-5p [%c{1}] %m%n
log4j.appender.WORK_POLLER=org.apache.log4j.RollingFileAppender
log4j.appender.WORK_POLLER.File=${AE_LOGS}/work-poller.log
log4j.appender.WORK_POLLER.MaxFileSize=10MB
log4j.appender.WORK_POLLER.MaxBackupIndex=1000
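The CSV-based performance and audit logs described earlier in this article (expressions_summary.csv, for example) are plain comma-separated files, so they can be post-processed outside Appian. The sketch below totals evaluation counts per rule or function using Python's standard csv module; the column names ("Name", "Total Count") and the file location are assumptions, so adjust them to match the header row of your own log file.
import csv
from collections import defaultdict

totals = defaultdict(int)
with open("expressions_summary.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Column names are assumed; verify them against your file's header.
        totals[row["Name"]] += int(row["Total Count"])

# Print the ten most frequently evaluated rules/functions.
for name, count in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(count, name)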
https://docs.appian.com/suite/help/17.1/Logging.html
2019-03-18T18:03:39
CC-MAIN-2019-13
1552912201521.60
[]
docs.appian.com
Scrapy shell¶ The or CSS expressions and see how they work and what data they extract from the web pages you’re trying to scrape. It allows you to interactively test your expressions while you’re writing your spider, without having to run the spider to test every change. Once you get familiarized with the Scrapy shell, you’ll see that it’s an invaluable tool for developing and debugging your spiders. Configuring the shell¶. Scrapy also has support for bpython, and will try to use it where IPython is unavailable. Through scrapy’s settings you can configure it to use any one of ipython, bpython or the standard python shell, regardless of which are installed. This is done by setting the SCRAPY_PYTHON_SHELL environment variable; or by defining it in your scrapy.cfg: [settings] shell = bpython Launch the shell¶ To launch the Scrapy shell you can use the shell command like this: scrapy shell <url> Where the <url> is the URL you want to scrape. shell also works for local files. This can be handy if you want to play around with a local copy of a web page. shell understands the following syntaxes for local files: # UNIX-style scrapy shell ./path/to/file.html scrapy shell ../other/path/to/file.html scrapy shell /absolute/path/to/file.html # File URI scrapy shell Note When using relative file paths, be explicit and prepend them with ./ (or ../ when relevant). scrapy shell index.html will not work as one might expect (and this is by design, not a bug). Because shell favors HTTP URLs over File URIs, and index.html being syntactically similar to example.com, shell will treat index.html as a domain name and trigger a DNS lookup error: $ scrapy shell index.html [ ... scrapy shell starts ... ] [ ... traceback ... ] twisted.internet.error.DNSLookupError: DNS lookup failed: address 'index.html' not found: [Errno -5] No address associated with hostname. shell will not test beforehand if a file called index.html exists in the current directory. Again, be explicit. Using the shell¶ The Scrapy shell is just a regular Python console (or IPython console if you have it available) which provides some additional shortcut functions for convenience. Available Shortcuts¶ - shelp()- print a help with the list of available objects and shortcuts - fetch(url[, redirect=True])- fetch a new response from the given URL and update all related objects accordingly. You can optionaly ask for HTTP 3xx redirections to not be followed by passing redirect=False - fetch(request)- fetch a new response from the given request. Available Scrapy objects¶ The Scrapy shell automatically creates some convenient objects from the downloaded page, like the Response object and the Selector objects (for both HTML and XML content). Those objects are: - crawler- the current Crawlerobject. - spider- the Spider which is known to handle the URL, or a Spiderobject if there is no spider found for the current URL - request- a Requestobject of the last fetched page. You can modify this request using replace()or fetch a new request (without leaving the shell) using the fetchshortcut. - response- a Responseobject containing the last fetched page - settings- the current Scrapy settings Example of shell session¶ Here’s an example of a typical shell session where we start by scraping the page, and then proceed to scrape the page. 
Finally, we modify the (Reddit) request method to POST and re-fetch it getting an Scrapy objects: [s]']} >>> Invoking the shell from spiders to inspect responses¶: import scrapy class MySpider(scrapy.Spider): name = "myspider" start_urls = [ "", "", "", ] def parse(self, response): # We want to inspect one specific response. if ".org" in response.url: from scrapy.shell import inspect_response inspect_response(response, self) # Rest of parsing code. When you run the spider, you will get something similar to this: 2014-01-23 17:48:31-0400 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None) 2014-01-23 17:48:31-0400 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None) [s] Available Scrapy objects: [s] crawler <scrapy.crawler.Crawler object at 0x1e16b50> ... >>> response.url '' Then, you can check if the extraction code is working: >>> response.xpath('//h1[@class="fn"]') [] Nope, it doesn’t. So you can open the response in your web browser and see if it’s the response you were expecting: >>> view(response) True Finally you hit Ctrl-D (or Ctrl-Z in Windows) to exit the shell and resume the crawling: >>> ^D 2014-01-23 17:50:03-0400 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None) ... Note that you can’t use the fetch shortcut here since the Scrapy engine is blocked by the shell. However, after you leave the shell, the spider will continue crawling where it stopped, as shown above.
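As a further illustration of the fetch(request) shortcut, a request object can be built with a different method or extra headers before fetching it. The URL and form body below are placeholders, not pages used elsewhere in this document, and the response status you see will depend on the site you target.
>>> from scrapy import Request
>>> req = Request("http://example.com/search",
...               method="POST",
...               body="q=shell",
...               headers={"Content-Type": "application/x-www-form-urlencoded"})
>>> fetch(req)
>>> response.status   # actual value depends on the target site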
http://docs.scrapy.org/en/master/topics/shell.html
2019-03-18T17:55:51
CC-MAIN-2019-13
1552912201521.60
[]
docs.scrapy.org
Microsoft Dynamics GP Intercompany Processing You can use Intercompany Processing to set up, enter, and maintain relationships between companies so revenues or expenses incurred in one company (the originating company) can be tracked as “due to” or “due from” amounts in other companies (destination companies). This manual is designed to give you an understanding of how to use the features of Intercompany Processing, and how it integrates with the Microsoft Dynamics GP system. To make best use of Intercompany Processing, you should be familiar with systemwide features described in the System User’s Guide, the System Setup Guide, and the System Administrator’s Guide. Some features described in the documentation are optional and can be purchased through your Microsoft Dynamics GP partner. To view information about the release of Microsoft Dynamics GP that you’re using and which modules or features you are registered to use, choose Help >> About Microsoft Dynamics GP. The manual is divided into the following parts: Part 1, Setup walks you through setting up company relationships for Intercompany Processing. Part 2, Transactions provides a step-by-step guide for recording transactions in one company that will create transactions in the General Ledger of another company. It also describes the process of posting intercompany transactions so they become permanent records. Part 3, Inquiries and Reports describes procedures that help you analyze intercompany financial information. Part 1: Setup Use this part of the documentation to familiarize yourself with Intercompany Processing terms and set up intercompany relationships. The following information is discussed: - Chapter 1, “Intercompany Processing setup” describes what to do before you set up Intercompany Processing and provides steps to set up an intercompany relationship. Chapter 1: Intercompany Processing setup Before you can begin entering intercompany transactions, you must set up the relationships between companies. You can also use the setup procedures whenever you add Intercompany Processing relationships. This information is divided into the following sections: Intercompany Processing terms Before you set up Intercompany Processing Setting up intercompany relationships Intercompany Processing described in Chapter 3, “Intercompany transactions.”. Setting up intercompany relationships Use the Intercompany Setup window to define relationships between companies that can have intercompany transaction interaction. Setting up an intercompany relationship enables you to record transactions in General Ledger or Payables Management for the originating company that will create transactions in the General Ledger for the destination company. Note If you’re not using Multicurrency Management, originating and destination companies must have the same functional currency. is the company. Note For each intercompany relationship, you can specify only one due to/due from account for each company.company Setup List for all the intercompany relationships for this originating company, enter or select the same Originating Company ID and choose File >> Print. You can also print the Intercompany Setup List in the General System Reports window by choosing Reports >> System >> General >> select Intercompany Setup from the Reports drop down list. All intercompany relationships established for the selected range of companies will print on the Intercompany Setup List when you print the report from this window. 
Part 2: Transactions Use this part of the documentation to record transactions in the General Ledger or Payables Management module of one company that will create transactions in the General Ledger of another company. The following information is discussed: Chapter 2, “Multicurrency transactions” explains how multicurrency functionality affects Intercompany Processing. Chapter 3, “Intercompany transactions” describes how to enter and void intercompany transactions. Chapter 4, “Posting” contains information about posting intercompany transactions. Chapter 2: Multicurrency transactions If you’re using Multicurrency Management with Intercompany Processing, you can choose the currency to enter on transactions. This information is divided into the following sections: Viewing multiple currencies Exchange rate and document date Multicurrency account distributions Viewing multiple currencies You can choose whether to view multicurrency transactions in the originating or the functional currency. Choose View >> Currency >> Functional or Originating while entering an intercompany transaction. The option will be saved on a per user, per window basis. You also can use the Currency list button in the windows that support changing the currency view. The View menu and Currency list button are available in the following types of windows: Transaction Entry windows Journal Entry Inquiry windows The first time you open these windows after registering Multicurrency Management, all the transactions will be displayed in the originating currency. If you change the currency view, the option you last used will be the default view the next time you open that window. Note You also can enter a multicurrency transaction in the Payables Transaction Entry window, but the View menu and Currency list button are not available. Exchange rate and document date If the currency ID for a transaction is not in the functional currency, a rate type and associated exchange rate table are assigned to the transaction. The rate type is based on the rate type you’ve assigned to the selected vendor. If one isn’t assigned to the vendor, the default rate type specified in the Multicurrency Setup window is used. You also can choose the currency expansion button to open the Exchange Rate Entry window to view or modify the default exchange rate. The document date assigned to a transaction determines which exchange rate is used, based on the currency ID and associated rate type that’s entered for the transaction. Each time you change the document date on a multicurrency transaction, the system searches for a valid exchange rate. If a valid rate doesn’t exist, you can enter an exchange rate using the Exchange Rate Entry window. If you’ve entered a General Ledger posting date that’s different from the document date, the exchange rate expiration date must be after the posting date. Multicurrency account distributions For multicurrency transactions, distribution amounts are displayed in both the functional and originating currencies. However, you can change only the originating amounts. When you’re entering a multicurrency transaction, the originating debit and credit amounts must balance. If the functional equivalents don’t balance, the difference is posted automatically to a Rounding Difference account and a distribution type of Round identifies the distribution amount in the Purchasing Distribution Entry window. 
For example, assume you’ve entered a transaction in the euro currency, with a sale amount of 28,755.42 EUR, a trade discount of 586.84 EUR, a discount available of 1544.33 EUR and the exchange rate is 1.0922. The distributions would be calculated as follows: Chapter 3: Intercompany transactions Intercompany Processing enables you to record transactions in the General Ledger or Payables Management module for one company that will create transactions in the General Ledger of another company. This. To enter analysis information for the codes, you must view and edit the intercompany transaction in the General Ledger of the destination company. For more information about multidimensional analysis codes, refer to the Multidimensional Analysis documentation. Multicurrency intercompany transactions must be saved to a batch. Standard intercompany transactions are entered in single-use batches or recurring batches. Intercompany multicurrency transactions must be entered in single-use batches.company debits and intercompany credits are not included in the batch total shown on the Batch Entry window and on edit lists and posting journals. The number of journal entries, however, is updated for intercompany transactions. Reversing intercompany transactions can be used in situations where cash will be paid or received, or an expense will be realized in the following period. Examples of such accruals include salaries that haven’t been paid or revenues that haven’t been billed. To enter a General Ledger intercompany transaction: Open the Transaction Entry window. (Transactions >> Financial >> General) Enter or select a journal entry number. Mark the Intercompany option. Tip You can unmark this option only if you haven’t entered distributions to companies other than the originating company; that is, the one in which you are entering the transactions. Enter or select a batch. All intercompany transactions must be saved in a batch. Select a transaction type. Enter audit trail code information, such as the Transaction Date or Reversing Date, the Source Document and Reference information, and the Currency ID. Enter company IDs, accounts, transaction amounts, distribution reference, and corresponding company ID. You’ll be able to enter a corresponding company ID only if the Enter Corresponding Company ID option is marked in the Intercompany Setup window. Note Intercompany transactions will not be included in the batch total. Total debit and credit amounts will appear only when viewing the originating currency. Verify the transaction by printing a General Transaction Edit List. Choose Save to save the batch. Use any batch-level posting method to post intercompany batches. See Chapter 4, “Posting,” for more information. Entering Payables Management intercompany transactions Use the Payables Transaction Entry window and the Payables Transaction Entry Distribution window to enter and distribute intercompany transactions. The destination company distributions on an intercompany transaction must be of types PURCH, FNCHG, FREIGHT, MISC, or UNIT. Other distribution types are not supported. For more information about entering various types of Payables Management transactions, see the Payables Management documentation. To enter Payables Management intercompany transactions: Open the Payables Transaction Entry window. (Transactions >> Purchasing >> Transaction Entry) Enter a voucher number and mark the Intercompany option. 
You can unmark the Intercompany option only if you haven’t entered distributions to companies other than the originating company; that is, the one in which you are entering the transaction. Select the document type and enter a description. Enter or select a batch. All intercompany transactions must be saved in a batch. Enter transaction information including document date, vendor ID, document number, and purchase amounts. Enter or select a currency ID. Choose the Distributions button to open the Payables Transaction Entry Distribution window. By default, the scrolling window displays the distributions that were created automatically based on the posting accounts assigned to the vendor you chose in the Payables Transaction Entry window or on posting accounts assigned in the Posting Accounts Setup window. Modify company IDs, accounts, transaction amounts, distribution reference, and corresponding company IDs of the existing distributions, if necessary. Intercompany transactions must be distributed to Type PURCH, FNCHG, FREIGHT, MISC, or UNIT. You’ll be able to modify a corresponding company ID only if the Enter Corresponding Company ID option is marked in the Intercompany Setup window. Note All intercompany distributions must be entered in the originating currency; that is, the currency specified in the Payables Transaction Entry window. In addition, if Multicurrency Management is not registered, all originating and destination companies must have the same functional currency. Continue entering distribution accounts until your transaction is fully distributed, and choose OK. Note If you’ve entered several distributions to one particular distribution type, you can choose Redisplay to sort the accounts in the scrolling window by distribution type. Print the Payables Transaction Edit List to verify the transactions. Save the batch. Use any batch-level posting method to post intercompany batches. See Chapter 4, “Posting,” for more information. make manual adjustments in destination companies to account for voided intercompany transactions. See the Payables Management documentation for more detailed information about voiding vouchers and payments. Chapter 4: Posting Posting transfers intercompany transactions to permanent records. Until they’re posted, transactions can be changed or deleted. In General Ledger, posting also updates account balances in the chart of accounts for the originating company. Posting reports will be printed when you post transactions, either individually or in batches. For more information about posting reports for Intercompany Processing, refer to Chapter 6, “Reports.” For more information about setting up posting, see the System Setup Guide (Help >> Contents >> select Setting up the System). This. The destination company batch must then be posted in General Ledger to become a permanent part of the records for that company. For example, if you enter a standard General Ledger intercompany transaction to record a debit to telephone expense, that information will remain temporary and won’t be reflected in the balances of the accounts in either the originating or destination company until it is posted. After you’ve posted the transaction in both the originating and destination companies, the information will appear as a credit change to the balance of the Cash account and a debit change to the balance of the Telephone Expense account. The transaction also will become part of the permanent records for an open year. 
Posting can be performed as a background task while you continue with other tasks; however, posting can’t be performed if year-end closing is in process. In addition, all background tasks must be complete before you exit Dynamics GP. For more information about background processing, see your System Administrator’s Guide (Help >> Contents >> select System Administration). Intercompany transaction amounts In General Ledger, Batch Total Actual amounts for intercompany debits and intercompany credits are not included in the batch total shown on the Batch Entry window and on edit lists and posting journals. The number of journal entries, however, is updated for intercompany transactions. In destination companies, the transactions created will not be marked as intercompany and will be included in the batch totals. However, these transactions are assigned an intercompany audit trail code. (The unique intercompany audit trail code gives you the ability to print reports for all intercompany-generated. For example, if the next audit trail code number is 00000002 in Company A and you post an intercompany transaction from company A to company B, the batch ID in company B would be ICTRX00000002. The ICTRX audit trail code is assigned to transactions posted to open years. The ICTHS audit trail code is assigned to transactions posted to historical years. The ICREV audit trail code is assigned to reversing transactions posted to open years. Distributions to unit accounts You might make distributions to unit accounts on intercompany transactions, but they will not generate a breakout of due to/due from accounts during the posting process. They are simply passed into the destination companies. For example, if you enter a distribution of 10 units to account 1000-1000 for company B as an intercompany transaction in company A, this distribution would go to company B as 10 units to account 1000-1000 for company B. (No due to or due from accounts would be specified.) Transactions with errors Transactions with errors will remain in the originating company batch if any of the following conditions exist: A destination company doesn’t exist An intercompany relationship does not exist with any of the destination companies specified A due to/due from account is not specified in the Intercompany Setup window for any destination company Multicurrency Management is not registered and the functional currencies of any destination company is different from that of the originating company The exchange table does not exist, cannot be accessed, or is not active for the specified Rate ID/Currency ID in any originating or destination company A valid exchange rate could not be found for any company. Part 3: Inquiries and Reports This part of the documentation explains how to use inquiries and reports to analyze intercompany activity. The inquiry windows and reports in Intercompany Processing allow you to access information quickly and to display the information either on the screen or on a printed report. The following information is discussed: Chapter 5, “Inquiries,” explains how to use the Intercompany Processing inquiry windows to view transaction information. Chapter 6, “Reports,” describes how to use reports to analyze intercompany activity. Chapter 5: Inquiries Inquiry features help you analyze intercompany financial information. Analyzing data contained in your accounting system will let you make reasoned choices about managing your company resources. 
This information is divided into the following sections: About reporting currency Viewing transactions for a posted General Ledger journal entry Viewing intercompany information for a transaction in a destination company Viewing exchange rate information for an intercompany multicurrency voucher Viewing exchange rate information for all destination companies on a voucher About reporting currency A reporting currency is used to convert functional or originating currency amounts to another currency on inquiries and reports. For example, if the German mark might transactions for a posted General Ledger journal entry Use the Journal Entry Inquiry window to view transaction detail for General Ledger posted journal entries in an open fiscal year. Multiple journal entries with the same number might exist if a recurring batch is posted or a reversing transaction is posted. If you enter a journal entry for which multiple entries exist, the journal entry with the oldest posting date in the open year will be displayed. If you use the Journal Entry lookup button, all unique journal entries will be displayed, and you can select the journal entry you’d like to view. The Intercompany button is enabled only if the currently displayed journal entry originated from an intercompany-generated transaction. Note The Journal Entry Inquiry window displays posted journal entries in any open year in General Ledger, so you don’t need to keep transaction history to be able to view journal entries in this window. To view transactions for a posted General Ledger journal entry: Open the Journal Entry Inquiry window. (Inquiry >> Financial >> Journal Entry Inquiry) Enter or select the journal entry number to view. You can view any posted journal entry number that has not been moved to history with distributions for the company you’re logged into. Audit trail code, transaction date, source document, batch ID, reference, currency ID, account, debit, credit, distribution reference, and difference information are displayed. Viewing intercompany information for a transaction in a destination company Use the Intercompany Audit Trail Code Inquiry window in destination companies to view the originating company, originating audit trail code, and journal entry for posted intercompany transactions. Note You can view this information only for journal entries that originated in another company. To view intercompany information for a transaction in a destination company: Open the Journal Entry Inquiry window. (Inquiry >> Financial >> Journal Entry Inquiry) Enter or select the journal entry number to view. If the journal entry is an intercompany-generated transaction, the Intercompany button will be enabled. Choose the Intercompany button to open the Intercompany Audit Trail Code Inquiry window, where you can view information from the originating company. You can view the originating company ID, originating company name, originating audit trail code, and originating journal entry number. Viewing exchange rate information for an intercompany multicurrency voucher Use the Exchange Rate Entry Zoom window to view the exchange rate information for the selected intercompany voucher in Payables Management. To view exchange rate information for an intercompany multicurrency Co. ID expansion button to open the Exchange Rate Entry Zoom window to view exchange rate information. 
Viewing exchange rate information for all destination companies on a voucher Use the Intercompany Destination Exchange Rate Inquiry window to view the exchange rate information for all destination companies on an intercompany voucher in Payables Management. To view exchange rate information for all destination companies on a Rates button to open the Intercompany Destination Exchange Rate Inquiry window. Chapter 6: Reports You can use Intercompany Processing reports to analyze records of your intercompany transactions in the General Ledger and Payables Management modules. This information is divided into the following sections: Intercompany Processing report summary Creating a report option Intercompany Processing report summary You can print several types of reports using Intercompany Processing. Some reports automatically are printed when you complete certain procedures; for example, posting journals can automatically be printed Creating a report option. The following table lists the report types available in Intercompany Processing and the reports that fall into those categories. (Reports printed using General Ledger or Payables Management are printed using many of the same windows. See the General Ledger or Payables Management documentation for information about reports printed in those modules.) * Indicates reports that can be printed with multicurrency information displayed. † Indicates reports that can be assigned to named printers. See “Printers” in the System Administrator’s Guide (Help >> Contents >> select System Administration) for more information. Creating a report option Report options include specifications for sorting options and range restrictions for a particular report. In order to print several Intercompany Processing reports, you must first create a report option. Each report can have several different options so that you can easily print the information you need. For example, you can create report options for the Intercompany Distribution Breakdown Register that show either detailed or summary information. Note A single report option can’t be used by multiple reports. If you need identical options for several reports, you must create them separately. Use the Financial, Purchasing, or System report options windows to create sorting, restriction, and printing options for the reports that have been included with Intercompany Processing. To create a report option: Open a Financial, Purchasing, or System reports window. There are separate windows for each report type. (Reports >> System >> General) (for the Intercompany Setup List) (Reports >> Financial >> Bank Posting Journals) (Reports >> Purchasing >> Posting Journals). Note You can enter only one restriction for each restriction type. For instance, you can insert one batch ID restriction (LCM621A to LCM628A) and one audit trail code restriction.. To print the report option from the report options window, choose Print before saving it. If you don’t want to print the option now, choose Save and close the window. The report window will be redisplayed. Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-us/dynamics-gp/financials/intercompanyprocessing
2019-03-18T18:14:31
CC-MAIN-2019-13
1552912201521.60
[]
docs.microsoft.com
Scheduler tasks¶ The "workspaces" extension provides two Scheduler tasks. - Workspaces auto-publication - This task checks if any workspace has a scheduled publishing date. If yes and if that date is passed, then all changes that have reached the "Ready to publish" stage are published to the Live workspace. - Workspaces cleanup preview links - When preview links are generated, they are stored in the database (in table "sys_preview"). This task will delete any link which has expired.
https://docs.typo3.org/typo3cms/extensions/workspaces/Administration/Scheduler/Index.html
2019-03-18T18:50:44
CC-MAIN-2019-13
1552912201521.60
[]
docs.typo3.org
Package httptypes

Overview

Package httptypes defines how etcd's HTTP API entities are serialized to and deserialized from JSON.

type HTTPError struct {
	Message string `json:"message"`
	// Code is the HTTP status code
	Code int `json:"-"`
}

func NewHTTPError(code int, m string) *HTTPError
func (e HTTPError) Error() string
func (e HTTPError) WriteTo(w http.ResponseWriter) error

type Member struct {
	ID         string   `json:"id"`
	Name       string   `json:"name"`
	PeerURLs   []string `json:"peerURLs"`
	ClientURLs []string `json:"clientURLs"`
}

type MemberCollection []Member

func (c *MemberCollection) MarshalJSON() ([]byte, error)

type MemberCreateRequest struct {
	PeerURLs types.URLs
}

func (m *MemberCreateRequest) UnmarshalJSON(data []byte) error

type MemberUpdateRequest struct {
	MemberCreateRequest
}
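These types describe the JSON shape served by etcd's v2 members API. As an unofficial client-side illustration, the Python sketch below fetches GET /v2/members from a local endpoint (the address is a placeholder) and reads the fields that correspond to the Member struct's JSON tags.
import json
from urllib.request import urlopen

# Placeholder endpoint; point this at a reachable etcd v2 client URL.
with urlopen("http://127.0.0.1:2379/v2/members") as resp:
    data = json.load(resp)

for member in data.get("members", []):
    print(member.get("id"), member.get("name"), member.get("clientURLs"))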
http://docs.activestate.com/activego/1.8/pkg/github.com/coreos/etcd/etcdserver/api/v2http/httptypes/
2019-03-18T18:24:32
CC-MAIN-2019-13
1552912201521.60
[]
docs.activestate.com
Logging¶ Note the severity of a given log message. Here are the standard ones, listed in decreasing order: logging.CRITICAL- for critical errors (highest severity) logging.ERROR- for regular errors logging.WARNING- for warning messages logging.INFO- for informational messages logging.DEBUG- for debugging messages (lowest severity) How to log messages needed, the last example could be rewritten as: import logging logging.log(logging.WARNING, "This is a warning") On top of that, you can create different “loggers” to encapsulate messages. (For example, a common practice is to create different loggers for every module). These loggers can be configured independently, and they allow hierarchical constructions. The previous examples use the root logger behind the scenes, which is a top level logger where all messages are propagated to (unless otherwise specified). Using logging helpers is merely a shortcut for getting the root logger explicitly, so this is also an equivalent of") Logging from Spiders¶ Scrapy provides a logger within each Spider instance, which) Logging configuration. Logging settings Log levels. LOG_FORMAT and LOG_DATEFORMAT specify formatting strings used as layouts for all messages. Those strings can contain any placeholders listed in logging’s logrecord attributes docs and datetime’s strftime and strptime directives respectively. If LOG_SHORT_NAMES is set, then the logs will not display the scrapy component that prints the log. It is unset by default, hence logs contain the scrapy component responsible for that log output. Command-line options¶ There are command-line arguments, available for all commands, that you can use to override some of the Scrapy settings regarding logging. - - --nolog - Sets LOG_ENABLEDto False See also - Module logging.handlers - Further documentation on available handlers Advanced customization¶ Because Scrapy uses stdlib logging module, you can customize logging using all features of stdlib logging. For example, let’s say you’re scraping a website which returns many HTTP 404 and 500 responses, and you want to hide all messages like this: 2016-12-16 22:00:06 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <500>: HTTP status code is not handled or not allowed The first thing to note is a logger name - it is in brackets: [scrapy.spidermiddlewares.httperror]. If you get just [scrapy] then LOG_SHORT_NAMES is likely set to True; set it to False and re-run the crawl. Next, we can see that the message has INFO level. To hide it we should set logging level for scrapy.spidermiddlewares.httperror higher than INFO; next level after INFO is WARNING. It could be done e.g. in the spider’s __init__ method: import logging import scrapy class MySpider(scrapy.Spider): # ... def __init__(self, *args, **kwargs): logger = logging.getLogger('scrapy.spidermiddlewares.httperror') logger.setLevel(logging.WARNING) super().__init__(*args, **kwargs) If you run this spider again then INFO messages from scrapy.spidermiddlewares.httperror logger will be gone. scrapy.utils.log module Logging settings). Run Scrapy from a script for more details about using Scrapy this way.
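For a concrete starting point, the settings below show one way the logging-related options discussed above might be combined in a project's settings.py; the values are illustrative examples, not recommended defaults.
# settings.py (example values only)
LOG_ENABLED = True
LOG_LEVEL = "INFO"                 # hide DEBUG chatter
LOG_FILE = "crawl.log"             # write to a file instead of stderr
LOG_FORMAT = "%(asctime)s [%(name)s] %(levelname)s: %(message)s"
LOG_DATEFORMAT = "%Y-%m-%d %H:%M:%S"
LOG_SHORT_NAMES = False            # keep full component names in messages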
http://docs.scrapy.org/en/master/topics/logging.html
2019-03-18T17:22:37
CC-MAIN-2019-13
1552912201521.60
[]
docs.scrapy.org
Each recipe provides a common user-interface design pattern using SAIL. Some recipes are applicable to record views, reports, and forms, while most are more relevant to forms, like recipes about form validation and collecting user input for submission. As such, the setup and expressions are catered towards forms, but the same concepts apply to record views and reports. We'll show you how to transfer what you learn to record views and reports.

Most of the recipes on this page define their own local variables using the load() function. This means that you can interact with your dynamic form as soon as you paste the expression into the Interface Designer. When you want to adapt a recipe to your own use case, and save the data captured on the SAIL Form into process, you can do the following:

If you are testing multiple recipes, you can continue using the same Interface Designer tab. Make sure you click Test after pasting in a new expression to ensure that any local variables used get updated correctly.

To use these recipes, you must have a basic understanding of SAIL concepts, specifically how to enable user interaction and the difference between load() and with(). See also: Enable User Interaction in SAIL, and SAIL Tutorial

The Interface Designer can be used to see most of the recipes in action. When you use the SAIL interface in process, you need to know how to configure a SAIL task form, how to save the user's input into a node input (aka ACP) from a rule, and subsequently save it into a process variable.

The recipes can be worked on in no particular order. However, make sure to read the first two sections to get yourself set up. The recipes are catered for use in process forms, but the same concepts generally apply when used on a record view or report. To use the recipe on a record view or report, you can do the following:
- a!dashboardLayout() instead of a!formLayout().
- label parameter and the buttons parameter configurations.
- buttons parameter configuration.
https://docs.appian.com/suite/help/17.1/SAIL_Recipes.html
2019-03-18T18:13:49
CC-MAIN-2019-13
1552912201521.60
[]
docs.appian.com
Integration guide for Android
Two-step process to start selling your own game merchandise
Version: 3 Date: 15/03/2019 Process:
About
For games developed on Unity (Integration guide for Unity) we offer a multi-platform plugin that supports both iOS and Android. However, if you have developed your game on the Android platform, please use the following instructions.
Step 1. Create “Merch store” view
Vertical and horizontal: The merch store view can be created in both vertical and horizontal arrangements. See examples:
Step 2. Link products
2.1. Link each product section with a specific product ID.
2.2. For each button, link the products to the price buttons by attaching the Product ID to the following URL:<productid> (e.g. )
2.3. Enable the user to open the link in a WebView, or send the user to the default browser when they click the buy button: Intent browserIntent = new Intent(Intent.ACTION_VIEW, Uri.parse("")); startActivity(browserIntent); The user will be able to check out using a credit card or PayPal and return to the game using the back button of the Android device.
2.4. For testing purposes, use the sample product IDs ( ).
You're done! You have successfully integrated Monetizr. You can now test it by making a test order.
https://docs.themonetizr.com/android/index.html
2019-03-18T17:45:36
CC-MAIN-2019-13
1552912201521.60
[]
docs.themonetizr.com
Chapter 42: Scanning Once your traditional animation sequences are completed and cleaned up, you're ready to scan and import them in Harmony. The scanning process is the point where the traditional production becomes digital. It's the moment where you use Harmony to control the project. This chapter is divided as follows:
https://docs.toonboom.com/help/harmony-12/premium-network/Content/_CORE/_Workflow/010_Scanning/000_CT_Scan.html
2019-03-18T18:16:55
CC-MAIN-2019-13
1552912201521.60
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePremium.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageAdvanced.png', 'Toon Boom Harmony 12 Stage Advanced Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageEssentials.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Import/anp_scannerimage.png', None], dtype=object) ]
docs.toonboom.com
This package is currently for internal use only. Its API may change without warning in the future. timestamp_tools provides a TriggerMatcher, which matches a stream of data structures with approximate timestamps to a stream of exact timestamps. This package's API is not yet released. It may change significantly from its current form.
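Since the API is not documented here, the snippet below is only a conceptual sketch of the matching idea (pairing approximately-stamped data with the nearest exact timestamp within a tolerance). It is not the TriggerMatcher interface itself; the function name and tolerance value are invented for illustration.
def match_nearest(exact_stamps, approx_items, tolerance=0.05):
    """Pair each (approx_stamp, data) item with the closest exact stamp
    within `tolerance` seconds. Conceptual sketch only, not the real API."""
    matches = []
    for approx_stamp, data in approx_items:
        best = min(exact_stamps, key=lambda t: abs(t - approx_stamp), default=None)
        if best is not None and abs(best - approx_stamp) <= tolerance:
            matches.append((best, data))
    return matches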
http://docs.ros.org/diamondback/api/timestamp_tools/html/index.html
2019-03-18T18:42:40
CC-MAIN-2019-13
1552912201521.60
[]
docs.ros.org
Overview You can use the Log & Data Management screen to: - View collected logs in the Search section - View the status of the logging subagent in the Sources section By default, Armor collects and retains the following log types for 30 days: To enhance the default Log and Data Management services, you can: - Upgrade the log retention rate for these default log types from 30 days to 13 months. - To learn more, see Review log retention plans. - Collect host-based logs. - Convert your virtual machine into a log collector to collect additional log types. View collected logs The Armor Management Portal (AMP) only displays logs from the previous 30 days. To search for logs, you must enter exact matches. - In the Armor Management Portal (AMP), in the left-side navigation, click Security. - Click Log & Data Management. - Click Search. View logging subagent status You can use these instructions to review the logging status of your virtual machines. Specifically, you can verify if your virtual machine is sending logs to Armor. - In the Armor Management Portal (AMP), in the left-side navigation, click Security. - Click Log & Data Management. - Click Sources. Upgrade log retention plan You can contact Armor Support for additional log management services: - Upgrade log retention plan - By default, logs are retained for 30 days; however, you can upgrade log retention to be 13 months. - Request logs beyond 30 days - Cancel a log retention plan upgrade To learn how to create support ticket, see Create a support ticket. Export log service status You can export the logs that are displayed in the Armor Management Portal (AMP) to analyze offline or to provide to an auditor. This file export will only contain logs from the previous 30 days. - In the Armor Management Portal (AMP), in the left-side navigation, click Security. - Click Log & Data Management. - Click Log Sources. - (Optional) Use the filter function to customize the data displayed. - Under the table, click CSV. - You have the option to export all data (All) or only the data that appears on the current screen (Current Set). Troubleshoot Log Source section of the Log Management screen (Armor Complete). Retention Plan section If you cannot add or update your plan, consider that you do not have permission to update your plans. You must have the following permissions enabled: - Read Log Management Plan Selection - Write Log Management Plan Selection - Read LogManagement - Write LogManagement
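The exported CSV can be inspected offline; as a sketch, the Python snippet below counts rows from the last seven days. The file name, the timestamp column name, and the timestamp format are assumptions, so check the header of the file you actually export from AMP before relying on them.
import csv
from datetime import datetime, timedelta

cutoff = datetime.utcnow() - timedelta(days=7)
recent = 0
with open("log_sources_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        # "Timestamp" and its format are assumed; adjust to your export.
        ts = datetime.strptime(row["Timestamp"], "%Y-%m-%d %H:%M:%S")
        if ts >= cutoff:
            recent += 1
print(recent, "rows from the last 7 days")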
https://docs.armor.com/pages/viewpage.action?pageId=21529972
2019-03-18T18:25:32
CC-MAIN-2019-13
1552912201521.60
[]
docs.armor.com
auth.user.login-token-refresh-after-percentage Specifies how much of the user's login session duration should expire before the user's next request causes it to be refreshed. Key: auth.user.login-token-refresh-after-percentage Type: Double Can be set in: global.cfg Description Sets how much of the user's login token lifetime, as a percentage, should pass before the token is refreshed on the next request. For example, if this value is set to 20 (i.e. 20%) and the token lifetime is set to 10 hours (i.e. 36000 seconds), then the user's token would be unchanged for the first two hours of usage and would then be refreshed (i.e. the user would be given a brand new 10 hour token) on the next request made 2 hours after their initial login. Note that this value is only read when Funnelback's web server is started. After modifying the value, the web server must be restarted for the change to take effect. Default Value Refresh the user's session after 20% of the lifetime has been used. auth.user.login-token-refresh-after-percentage=20 Examples Refresh the user's session after 5.2% of the lifetime has been used. In practice this means sessions will be refreshed more frequently (the Funnelback server must perform more work) but the user's session is less likely to expire unexpectedly. auth.user.login-token-refresh-after-percentage=5.2
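The refresh point is plain arithmetic on the token lifetime. The following Python snippet is only an illustration of that calculation (it is not part of Funnelback); the 36000-second lifetime is taken from the example above.

```python
def refresh_after_seconds(token_lifetime_seconds, refresh_after_percentage):
    """Seconds of the token lifetime that must elapse before the next
    request triggers a refresh (illustrates the setting's arithmetic)."""
    return token_lifetime_seconds * (refresh_after_percentage / 100.0)

# Example from the documentation: 10 hour lifetime, default 20%.
print(refresh_after_seconds(36000, 20))    # 7200.0 seconds = 2 hours
# A lower percentage refreshes more often, e.g. 5.2%:
print(refresh_after_seconds(36000, 5.2))   # 1872.0 seconds, about 31 minutes
```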
https://docs.funnelback.com/administer/reference-documents/server-options/auth.user.login-token-refresh-after-percentage.html
2019-03-18T17:43:39
CC-MAIN-2019-13
1552912201521.60
[]
docs.funnelback.com
Contents Now Platform Capabilities Previous Topic Next Topic Email retention Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Email retention You can archive and eventually destroy email messages that you no longer need or if your Email table is excessively large. Email retention is available starting with the Helsinki release. Email archive and destruction plugins The email archiving and destruction feature uses the Data Archiving and Email Retention plugins. The Data Archiving plugin must be active to archive and destroy email records. The Email Retention plugin provides a set of rules that specify when the system archives and destroys email records.Note: The Email Retention plugin also prevents the system from deleting watermarks, which are required for inbound email actions to continue to function. The Email Retention plugin and associated archive and destroy rules are active by default on new instances. On upgraded instances, you must manually activate both the plugin and the archive and destroy rules. ServiceNow recommends that you review and approve these rules before activating them. If your instance already has a process to manage email records, you do not need to activate the Email Retention plugin. If you want to replace your current process with Email Retention, be sure to deactivate the current process before activating the archive and destroy rules. Archiving and destroying email records Archiving means moving records from the Email [sys_email] table to the Archive Email [ar_sys_email] table when they exceed the archive rule time limit. Destroying means deleting records in the Archive Email table when they exceed the destroy rule time limit. Note: When a destroy rule deletes email records, associated watermarks are not deleted. They are preserved to ensure that your inbound email actions continue to function. Default archive and destroy rules. Email Retention also provides this email destroy rule: Email Archive - Over a year old: destroys email records that have been archived for more than 365 days prior to the current date. With these default settings, your email messages are kept on the instance for a total of two years: one year in the Email table, and one year in the Email archive table. At the end of this period, the system deletes the expired email records from the Email archive table. Note: By default these rules are active on new instances and inactive on upgrades. The system runs archive and destroy rules when you activate them. Compatibility with other record management implementations If you are already using another method to manage email records, such as table cleaners, you do not have to use the Email Retention feature. To prevent unexpected record deletion, ServiceNow recommends that you avoid using multiple email management processes on the same instance at the same time. Note: For assistance replacing your existing record management implementation with Email Retention, contact your professional services or sales representative. Effects of archiving and deleting email records Inbound email actions copy the body of an email to the work notes of the related record. If the inbound email record is later deleted, the work notes still contain a text copy of the email. When the system sends an email message about a record, the activity formatter displays a Sent Email section with a link to the email message. 
If the system archives the email message, the activity formatter removes the Sent Email section. When the system deletes the email message, it is no longer visible in the activity formatter or the work notes. Note: Set the archive time length long enough so your users can access sent emails through the activity formatter. Archiving email records changes the methods available to the system to identify inbound email as a reply. After archiving an email record, the system can no longer use the In-Reply-To field to match an incoming email to an email record. However, the system can still match incoming email to an existing record from a record number or watermark. Activate the Email Retention plugin: The Email Retention plugin provides archive and destruction rules for email messages. It is active by default for new instances, but must be activated for upgrades. Archive email manually: You can archive email messages manually on demand instead of waiting for the instance to archive them based on a scheduled job.
https://docs.servicenow.com/bundle/london-servicenow-platform/page/administer/notification/concept/email-retention.html
2019-03-18T18:13:02
CC-MAIN-2019-13
1552912201521.60
[]
docs.servicenow.com
Rigging the Head You will start by attaching the facial features to the head. The advantage of parenting the head and the facial features to the same peg is that you can easily animate each facial feature, and they will all follow the head motion. You can also use additional pegs to control smaller groups of pieces such as the facial features or all the left eye pieces. You can parent as many pegs as you want to obtain maximum control over your rig.
https://docs.toonboom.com/help/harmony-12/premium-network/Content/_CORE/_Workflow/020_Character_Building/039_H2_Rigging_the_Head_.html
2019-03-18T17:36:08
CC-MAIN-2019-13
1552912201521.60
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePremium.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageAdvanced.png', 'Toon Boom Harmony 12 Stage Advanced Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageEssentials.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Breakdown/an_parenthead2.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Breakdown/HAR11/HAR11_Rigging_the_head_001.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Breakdown/HAR11/HAR11_Rigging_the_head_002.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Breakdown/HAR11/HAR11_Rigging_the_head_003.png', None], dtype=object) ]
docs.toonboom.com
Configuration¶ PicoTorrent stores its configuration in a JSON file and, depending on whether PicoTorrent was installed or not, the file will reside at different locations. - If installed, the file can be found at %APPDATA%/PicoTorrent/PicoTorrent.json. - If not installed, the file will be placed next to PicoTorrent.exe. File filters¶ File filters are used in the Add torrent dialog and allow a user to store custom include/exclude filters. Each filter contains a name and a pattern. The name is visible in the include/exclude context menus, and the pattern is a regular expression which is matched on each file name.
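As a rough illustration of the name/pattern idea, the snippet below writes a small configuration file and tests one pattern with Python's re module. The exact JSON keys PicoTorrent uses for file filters are not shown on this page, so the "file_filters", "name" and "pattern" keys below are assumptions for illustration only; inspect a generated PicoTorrent.json for the real structure.

```python
# Hypothetical sketch: the "file_filters"/"name"/"pattern" keys are assumed,
# not taken from PicoTorrent's documented schema.
import json
import re

config = {
    "file_filters": [
        {"name": "Skip samples", "pattern": r"(?i)sample"},
        {"name": "NFO files",    "pattern": r"\.nfo$"},
    ]
}

with open("PicoTorrent.json", "w") as fh:
    json.dump(config, fh, indent=2)

# Each pattern is a regular expression matched against a file name.
pattern = re.compile(config["file_filters"][1]["pattern"])
print(bool(pattern.search("movie.release.nfo")))   # True
print(bool(pattern.search("movie.mkv")))           # False
```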
http://docs.picotorrent.org/en/stable/configuration.html
2019-03-18T18:44:24
CC-MAIN-2019-13
1552912201521.60
[array(['_images/file_filters.png', '_images/file_filters.png'], dtype=object) ]
docs.picotorrent.org
LATEST VERSION: 8.2.13 - RELEASE NOTES Cache Eviction Example Running the Example In this example, the data region is configured to keep the entry count at 10 or below by destroying LRU entries. A cache listener installed on the region reports the changes to the region entries. DataEviction Example Source Files Program and cache configuration source files for the example, including the listener declared in the DataEviction.xml file: Related Javadocs - com.gemstone.gemfire.cache.EvictionAttributes
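GemFire performs this eviction itself through the region's eviction attributes (see the EvictionAttributes Javadocs above); the snippet below is merely a conceptual sketch, written in Python rather than Java, of the behaviour the example demonstrates: an entry count capped at 10, least-recently-used entries destroyed to make room, and a listener-style callback reporting each change. It is not the GemFire API.

```python
from collections import OrderedDict

class LruCountEvictingRegion:
    """Toy model of a region that keeps at most `capacity` entries by
    destroying least-recently-used entries (not the GemFire API)."""

    def __init__(self, capacity=10, listener=print):
        self.capacity = capacity
        self.listener = listener          # reports changes, like a cache listener
        self.entries = OrderedDict()

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)     # touching a key makes it most recent
        self.entries[key] = value
        self.listener(f"created/updated {key}")
        if len(self.entries) > self.capacity:
            evicted_key, _ = self.entries.popitem(last=False)
            self.listener(f"destroyed (LRU eviction) {evicted_key}")

region = LruCountEvictingRegion(capacity=10)
for i in range(12):                       # the 11th and 12th puts trigger evictions
    region.put(f"key{i}", i)
```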
http://gemfire82.docs.pivotal.io/docs-gemfire/getting_started/quickstart_examples/cache_eviction.html
2019-03-18T17:44:17
CC-MAIN-2019-13
1552912201521.60
[]
gemfire82.docs.pivotal.io
The Gradle build cache node software is a freely available HTTP remote cache server for Gradle’s build caching functionality. This manual covers obtaining, installing and operating one or more build cache nodes. Build cache nodes can optionally be connected with Gradle Enterprise for centralized management and monitoring, and to enable replicating cache entries between multiple nodes. The build cache node is distributed as a Docker image via Docker Hub, and as a executable JAR. Both distributions offer the same functionality. Requirements Data directory disk usage The build cache node requires a single directory, referred to as the “data directory”, to store its cache entries and other files. The default size of the cache is 10 GB. The build cache node will use a few 10s of MB’s more than the cache size to store log files, config files and other operational files. CPU & memory By default, the build cache node uses up to about 1.5 GB of memory. The build cache node does not require significant CPU resources. Performance is generally constrained by network access to the build cache node. Installation Docker Installation With Docker installed, starting a build cache node is as simple as the following command: docker run -d -v /opt/build-cache-node:/data -p 80:5071 gradle/build-cache-node:latest This will download the latest version of the build cache node container image, create a new container from it, then start it. The cache node will use /opt/build-cache-node on the host to store its files and serve on port 80. More information about changing these settings can be found in the following sections. Airgapped installation In order to install the Docker build cache node docker pull gradle/build-cache-node:latest Export the image to a file docker save gradle/build-cache-node:latest -o build-cache-node.tar Copy this file across to the airgapped host, and then import the image into docker docker load -i build-cache-node.tar The node can then be started and configured on the airgapped host in the same manner as a non-airgapped install: docker run -d -v /opt/build-cache-node:/data -p 80:5071 gradle/build-cache-node:latest Versioning The label gradle/build-cache-node:latest identifies the build cache node image with the tag latest. The latest tag always refers to the most recently released version of the build cache node. In order to use a specific version of the image, substitute latest with the version number. docker run -d -v /opt/build-cache-node:/data -p 80:5071 gradle/build-cache-node:7.0 It is generally desirable to always use the latest version of the build cache node. An exception to this is if you are attaching the node to a Gradle Enterprise installation that is not up to date as each major version of the build cache node has a minimum Gradle Enterprise version requirement. These minimum versions can be found in the Gradle Enterprise compatibility document. Binding the data directory The build cache node container uses /data inside the container as the application data directory. When starting the container, the -v switch should be used to specify which directory on the host to mount to this directory in the container. The command below demonstrates using the /opt/build-cache-node directory on the host as the data directory for the build cache node: docker run -d -v /opt/build-cache-node:/data -p 80:5071 gradle/build-cache-node:latest Each build cache node must have its own data directory. Sharing a data directory between cache nodes is not supported. 
When choosing where to store the cache data (i.e. which host directory to mount to /data within the container), be sure to choose a disk volume that has adequate storage capacity for your desired cache size. Port mapping The cache node container exposes a single port 5071. This port must be mapped to a port on the host in order to expose it. It is recommended to map this port to one of the standard HTTP ports on the host in order to avoid the need to specify the port number when accessing the cache node. This can be done with -p 80:5071 for HTTP and -p 443:5071 for HTTPS. To expose the application via different ports, simply use a different number for the preceding number in each -p argument. The following command exposes traffic via port 8443 on the host: docker run -d -v /opt/build-cache-node:/data -p 8443:5071 gradle/build-cache-node:latest Auto start The build cache node container can be automatically restarted on system boot by leveraging Docker’s restart policies. Starting a build cache node container with --restart always will ensure that it is always running unless explicitly stopped. docker run -d -v /opt/build-cache-node:/data -p 80:5071 --restart always gradle/build-cache-node:latest JAR Older versions of the JAR can be found in the appendix below. Running Once you have downloaded the JAR file, it can be run simply with java command: java -jar build-cache-node-7.0.jar This will start the node with a data directory inside the $TEMP location and listening on port 5071. Specifying a data directory It is strongly recommended to specify an explicit data directory location with the --data-dir option: java -jar build-cache-node-7.0.jar --data-dir /opt/build-cache-node Each build cache node must have its own data directory. Sharing a data directory between cache nodes is not supported. Specifying the port The default listening port is 5071. To use a different port, use the --port option: java -jar build-cache-node-7.0.jar --data-dir /opt/build-cache-node --port 443 In the absence of a --port option, the PORT environment variable is also respected. Optimized HTTPS The build cache node can server HTTPS traffic more efficiently if OpenSSL is installed on the host. If HTTPS is configured but a usable OpenSSL installation cannot be found, the application will emit a warning to the console of: WARNING: Unable to use optimized HTTPS support as OpenSSL was not found. Auto start The build cache node JAR provides no built-in mechanism for auto starting on system restart. This must be implemented with your operating system’s process manager or similar. The following demonstrates how to use systemd, a popular process manager for Linux systems, to achieve this. 1. Create a file, build-cache-node.sh as root and make it executable, with the following contents: #!/bin/bash # Launches Gradle Remote Build Cache node with correct arguments # Should be run as superuser java -jar /path/to/build-cache-node-7.0.jar --data-dir /opt/build-cache-node --port 80 2. Create a file, /lib/systemd/system/gradle-remote-build-cache.service as root and make it executable, with the following contents: [Unit] Description=Gradle Remote Build Cache After=network.target StartLimitIntervalSec=0 [Service] Restart=always RestartSec=1 ExecStart=/path/to/build-cache-node.sh [Install] WantedBy=multi-user.target 3. Run the following to start the build cache node for the first time: systemctl start gradle-remote-build-cache 4. 
Run the following to have systemd start the build cache node on system boot: systemctl enable gradle-remote-build-cache Configuration The cache settings and Gradle Enterprise connection settings can be configured via the build cache node’s web interface. Since version 4.2, configuration can also be specified by a config file. Configuring the node via the web interface is generally more convenient, while configuring via the configuration file is generally more appropriate for automated installations. Editing the file The config file is stored at «data-dir»/conf/config.yaml, and is in YAML format. It is read at application startup, and is written to when the config is updated via the web interface. The schema for the config file is versioned. Build cache node version 7.0 uses schema version 2. This schema is published in JSON schema format, and is available here. The config file should always start with the schema version: version: 2 The following is an example of a complete config file that may used as a starting point: version: 2 registration: serverAddress: "https://«ge-hostname»" nodeAddress: "https://«node-hostname»" key: "«key»" secret: "«secret»" cache: targetSize: 50000 maxArtifactSize: 5 credentials: anonymousLevel: "READ" users: - username: "ci-user" password: hash: "«hash-string»" salt: "«salt-string»" algorithm: "sha256" level: "READWRITE" note: "Continuous Integration User" - username: "developer" password: hash: "«hash-string»" salt: "«salt-string»" algorithm: "sha256" level: "READWRITE" uiAccess: username: "myChosenUsername" password: hash: "«hash-string»" salt: "«salt-string»" algorithm: "sha256" The various sections will be explained below. Generating password hashes You can generate password hashes by running the node application with --hash-password. This causes the application to prompt for a password to hash, then emit a block of YAML suitable for use in the config file as a password value, then exit. java -jar /path/to/build-cache-node-7.0.jar --hash-password Enter password: Confirm password: hash: "bbafbc25ee41d8fae40f62756a20e8f7b6a940d5bb2719903dd50e40d14f630e" salt: "8cfa538d2f95e138f13b4aff5a9a825f4df70ce811fcb5b300fb17bee0f316a3" algorithm: "sha256" Gradle Enterprise registration Build cache nodes can be registered and connected with a Gradle Enterprise installation to enable extra features, such as centralized monitoring and cache entry replication. The registration can be configured via the web interface or via the config file. You must first register the node with Gradle Enterprise in order to obtain a key and secret for the node. Details on this process can be found in the Gradle Enterprise Admin Manual. The following is an example config file snippet for configuring the Gradle Enterprise registration: version: 2 registration: serverAddress: "https://«ge-hostname»" nodeAddress: "https://«node-hostname»" key: "«key»" secret: "«secret»" Cache settings The cache settings determine aspects of the build cache. They can be configured via the web interface or config file. Target cache size If adding a new cache entry increases the combined size of all cache entries over the target cache size, the least recently used cache entries will be evicted to make space. The value is configured in MB, and defaults to 10 GB. The following is an example config file snippet for changing the target size to 50 GB: version: 2 cache: targetSize: 50000 Maximum artifact size Attempts to store artifacts larger than this limit will be rejected. 
The default setting of 100 MB is suitable for most builds and should only need to be increased if your builds produce large outputs that you wish to cache. The following is an example config file snippet for changing the maximum artifact size to 200 MB: version: 2 cache: maxArtifactSize: 200 Cache access control Access to the build cache can be restricted. You can specify one or more credentials and whether they have Read or Read and Write access. If one or more users have been added, it is possible to specify the access level for anonymous users. Specify None if you don’t want to allow anonymous access. If no credentials are specified, access to the cache is unrestricted. Refer to the Gradle user guide or the Gradle Enterprise Maven Extension User Manual for details on how to specify credentials to use when accessing a cache. At the top level of the cache section in the config file, you can optionally define a credentials section specifying fine-grained access to the cache. If credentials are not specified, there are no access controls. The credentials section can contain a list of users with three mandatory fields: username, password and level. The level field may have a value of READ or READWRITE as a string. Users can also have an optional note which is a string to describe the user and aid in administration. The following is an example config file snippet for configuring two users with read and write access: version: 2 cache: credentials: users: - username: "ci-user" password: hash: "«hash-string»" salt: "«salt-string»" algorithm: "sha256" level: "READWRITE" note: "Continuous Integration User" - username: "developer" password: hash: "«hash-string»" salt: "«salt-string»" algorithm: "sha256" level: "READWRITE" When any users are defined, anonymous access defaults to being denied. This can be changed by specifying a READ or READWRITE value for the anonymousLevel field. The following is an example config file snippet for configuring read only anonymous access: version: 2 cache: credentials: anonymousLevel: "READ" users: - username: "ci-user" password: hash: "«hash-string»" salt: "«salt-string»" algorithm: "sha256" level: "READWRITE" note: "Continuous Integration User" UI access control Access to the build cache node’s admin web interface can be restricted via the config file. The following is an example config file snippet for restricting access to the admin web interface: version: 2 uiAccess: username: "myChosenUsername" password: hash: "«hash-string»" salt: "«salt-string»" algorithm: "sha256" Accessing the build cache node’s configuration page will now require entering this username and password. This does not affect access to the actual cache by Gradle. To generate these password hashes, see the “Generating password hashes” section above. Using HTTPS By default, the build cache node serves over HTTP. Using HTTPS requires extra configuration. Using your own certificate To use your own certificate, place an X.509 certificate (including intermediates) at «data-dir»/conf/ssl.crt and your PKCS#1 or PKCS#8 private key at «data-dir»/conf/ssl.key. Both files must be in PEM format. Using a generated certificate You can have the server generate its own self-signed certificate to use. To do this, specify the --generate-self-signed-cert argument to the node application. 
java -jar build-cache-node-7.0.jar --data-dir /opt/build-cache-node --port 443 --generate-self-signed-cert Or when using the Docker image: docker run -d -v /opt/build-cache-node:/data -p 80:5071 gradle/build-cache-node:latest --generate-self-signed-cert This certificate will not be trusted by clients, and will require extra configuration by clients. Please consult the Gradle user manual or Gradle Enterprise Maven Extension User Manual for how to configure Gradle or Maven builds to accept an untrusted SSL connection. If you are connecting the node with other cache nodes (i.e. for replication), you will need to configure those nodes to allow untrusted SSL connections, for which how to do so is described in the next section. Allowing untrusted SSL connections By default, the cache node will not allow connections to Gradle Enterprise or other cache nodes if they serve over HTTPS and present untrusted certificates (e.g. self-signed). In order to allow such connections, the cache node must be started with the --allow-untrusted-ssl argument. java -jar build-cache-node-7.0.jar --data-dir /opt/build-cache-node --port 443 --allow-untrusted-ssl Or when using the Docker image: docker run -d -v /opt/build-cache-node:/data -p 80:5071 gradle/build-cache-node:latest --allow-untrusted-ssl Gradle Enterprise If your cache node is using an untrusted certificate and you are connecting it with Gradle Enterprise, you will need to configure it to allow untrusted SSL communication with cache nodes. This can be done by enabling in the Gradle Enterprise admin console settings page, by checking the Allow untrusted SSL communication with cache nodes checkbox in the networking settings section. Gradle usage In order to use a build cache, the address of the build cache needs to be configured in your Gradle builds. buildCache { remote(HttpBuildCache) { url = '' } } buildCache { remote(HttpBuildCache::class) { url = uri("") } } For information about using the build cache from Gradle, please consult the Gradle user manual. Apache Maven™ usage In order to use a build cache, the address of the build cache needs to be configured in the configuration for your Maven builds, unless you are connecting to the built-in cache node of Gradle Enterprise. <gradleEnterprise> <server> <url></url> </server> <buildCache> <remote> <server> <url></url> </server> </remote> </buildCache> </gradleEnterprise> For more information about using the build cache from Apache Maven™, please consult the Gradle Enterprise Maven Extension User Manual. Appendix A: JAR downloads - build-cache-node-7.0.jar - build-cache-node-6.0.jar - build-cache-node-5.2.jar - build-cache-node-5.1.jar - build-cache-node-5.0.jar - build-cache-node-4.3.jar - build-cache-node-4.2.jar
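The password entries in the config file are stored as a hash/salt/algorithm triple with algorithm "sha256". The exact scheme the node applies (salt placement, encoding, iteration count) is not documented here, and the --hash-password command shown above is the supported way to produce these values; the Python snippet below only illustrates the general idea of generating and verifying a salted SHA-256 hash pair and should not be expected to reproduce the node's real output.

```python
# Illustration of a salted SHA-256 hash/salt pair, similar in shape to the
# config file's password entries. This is an assumption about the general
# technique only -- use `--hash-password` to generate real values.
import hashlib
import secrets

def make_password_entry(password: str) -> dict:
    salt = secrets.token_hex(32)                        # random per-password salt
    digest = hashlib.sha256((salt + password).encode("utf-8")).hexdigest()
    return {"hash": digest, "salt": salt, "algorithm": "sha256"}

def verify(password: str, entry: dict) -> bool:
    candidate = hashlib.sha256((entry["salt"] + password).encode("utf-8")).hexdigest()
    return secrets.compare_digest(candidate, entry["hash"])

entry = make_password_entry("s3cret")
print(entry["algorithm"], len(entry["hash"]))   # sha256 64
print(verify("s3cret", entry))                  # True
print(verify("wrong", entry))                   # False
```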
https://docs.gradle.com/build-cache-node/
2019-03-18T17:39:37
CC-MAIN-2019-13
1552912201521.60
[]
docs.gradle.com
Chapter 3: About Toon Boom Harmony Toon Boom Harmony is the most advanced professional animation software on the market. Bringing together an impressive 2D drawing toolset with the ability to work in a real 3D space, and import 3D models, Harmony Network combines the animation toolset of Harmony with an impressive database for collaborative workflow. Share assets, batch vectorize and render, and increase production efficiency. Top Features Modules The Stage module is the core of Harmony. It comprises all the major drawing, animation and compositing features. It is used to work in the scene: design, character breakdown, cut-out animation, traditional animation, ink and paint, exposure sheet, timeline, effects, compositing, camera moves, colour styling, and so on. Access your Database via the Cloud When you're running a studio, you will most likely have a database set up. This enables all the artists working on your production to share the same scenes and assets. What the Toon Boom Cloud enables you to do is to host this database on the Internet. When you do so, you can have freelancers log in from anywhere with an internet connection. Then they can download a scene from the database, work on it, and upload it again. No more need to spend time copying files to an FTP. No need to have an admin exporting and importing files from the database. Do it all directly through the Cloud.
https://docs.toonboom.com/help/harmony-11/draw-network/Content/_CORE/_Workflow/002_About_Harmony/_000_CT_About_Harmony.html
2019-03-18T17:57:20
CC-MAIN-2019-13
1552912201521.60
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stage.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/draw.png', 'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/sketch.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageXsheet.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/ControlCenter.png', 'Control Center Module Icon Control Center Module Icon'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/Stage.png', 'Harmony Stage Module Harmony Stage Module'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/HarmonyDraw.png', 'Harmony Draw Module Harmony Draw Module'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/HarmonySketch.png', 'Harmony Sketch Module Harmony Sketch Module'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/Scan.png', 'Harmony Scan Module Harmony Scan Module'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/StageXsheet.png', 'Harmony Xsheet Module Harmony Xsheet Module'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/HarmonyPlay.png', 'Harmony Play Module Harmony Play Module'], dtype=object) array(['../../../Resources/Images/HAR/Harmony_Cloud/cloud-computing.png', 'Harmony Cloud Harmony Cloud'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/HarmonyDraw.png', 'Harmony Draw Module Harmony Draw Module'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/HarmonySketch.png', 'Harmony Sketch Module Harmony Sketch Module'], dtype=object) 
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/Scan.png', 'Harmony Scan Module Harmony Scan Module'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/StageXsheet.png', 'Harmony Xsheet Module Harmony Xsheet Module'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/StagePaint.png', 'Harmony Paint Module Harmony Paint Module'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Getting started with MATLAB and Simulink From UABgrid Documentation (Difference between revisions) Revision as of 21:17, 6 January 2011 more resources - Recorded Webinars - Learn more about MathWorks products and how they help solve complex technical issues through online recorded webinars. To view a free webinar, select a topic and then click on the link and complete the request form.[1] - Interactive Tutorials for Students and Faculty - MATLAB [2] - Simulink [3] - Example Code, News, Blogs, Teaching Materials - Matlab Central - Classroom resources
https://docs.uabgrid.uab.edu/w/index.php?title=Getting_started_with_MATLAB_and_Simulink&diff=prev&oldid=2373
2019-03-18T17:30:54
CC-MAIN-2019-13
1552912201521.60
[array(['/w/images/thumb/1/1d/Getstart.png/100px-Getstart.png', 'x'], dtype=object) ]
docs.uabgrid.uab.edu
Operating on mailing lists¶ The withlist command is a pretty powerful way to operate on mailing lists from the command line. This command allows you to interact with a list at a Python prompt, or process one or more mailing lists through custom made Python functions. XXX Test the interactive operation of withlist Getting detailed help¶ Because with Multiple lists¶ You can run a command over more than one list by using a regular expression in the listname argument. To indicate a regular expression is used, the string must start with a caret. >>> mlist_2 = create_list('[email protected]') >>> mlist_3 = create_list('[email protected]') >>> IPython¶ You can use IPython as the interactive shell by changing certain configuration variables in the [shell] section of your mailman.cfg file. Set use_ipython to “yes” to switch to IPython, which must be installed on your system. Other configuration variables in the [shell] section can be used to configure other aspects of the interactive shell. You can change both the prompt and the banner.
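The caret convention described under "Multiple lists" simply marks the listname argument as a regular expression to be matched against the posting addresses of existing lists. The snippet below is a plain-Python illustration of that selection logic, not Mailman code; the helper name and the hard-coded list addresses are made up for the example.

```python
# Conceptual illustration of the caret convention -- not Mailman internals.
import re

existing_lists = ["[email protected]", "[email protected]", "[email protected]"]

def select_lists(listname_arg):
    """Return the lists a withlist-style command would operate on."""
    if listname_arg.startswith("^"):                  # leading caret => regex
        pattern = re.compile(listname_arg)
        return [addr for addr in existing_lists if pattern.match(addr)]
    return [addr for addr in existing_lists if addr == listname_arg]

print(select_lists("^ant.*"))          # ['[email protected]', '[email protected]']
print(select_lists("[email protected]"))  # ['[email protected]']
```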
https://mailman.readthedocs.io/en/release-3.0/src/mailman/commands/docs/withlist.html
2019-03-18T18:12:22
CC-MAIN-2019-13
1552912201521.60
[]
mailman.readthedocs.io
The Appian process modeler pallette contains all of the nodes and Appian smart services that can be used to define a process workflow. These activities are broken into two main categories: standard nodes and smart services. Standard nodes consists of standard BPMN activities, events, and gateways. Smart services are flow activities that integrate specialized business services, like sending e-mails or writing data to a database. Smart services consists of Appian Smart Services and Integration Services. Refer to the following tables for more information about specific nodes or smart services: Standard nodes consist of Activities, Events, and Gateways. Activities are used within process workflow to capture or process business data. Events allow designers to start, stop, or continue the progress of workflows. Gateway are used for workflow control. Smart services provide specialized business services. The two categories of smart services are Appian Smart Services and Integration Services. Smart services are, by default, unattended, meaning the activity will execute once activated. However, certain smart services can be configured as attended. Many of the attended smart services also have an associated smart service function available, which can be used in an Appian expression to invoke that smart service independent of a process model.
https://docs.appian.com/suite/help/17.1/Smart_Services.html
2019-03-18T18:03:20
CC-MAIN-2019-13
1552912201521.60
[]
docs.appian.com
Represents a list of records from the dashboard data source. Returns an array of data members available in a data source. Returns a callstack containing the error caused by an unsuccessful request for underlying data. Gets the number of rows in the underlying data set. Returns the value of the specified cell within the underlying data set. Returns whether or not a request for underlying data was successful.
https://docs.devexpress.com/Dashboard/DevExpress.DashboardWeb.Scripts.ASPxClientDashboardItemUnderlyingData._methods
2019-03-18T17:42:40
CC-MAIN-2019-13
1552912201521.60
[]
docs.devexpress.com
Develop apps for the Universal Windows Platform (UWP) Note This article applies to Visual Studio 2015. If you're looking for Visual Studio 2017 documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2017. Download it here. With the Universal Windows Platform and our one Windows core, you can run the same app on any Windows 10 device from phones to desktops. Create these Universal Windows apps with Visual Studio 2015 and the Universal Windows App Development tools. Run your app on a Windows 10 phone, a Windows 10 desktop, or an Xbox. It’s the same app package! With the introduction of the Windows 10 single, unified core, one app package can run across all platforms. Several platforms have Extension SDKs that you can add to your app to take advantage of platform specific behaviors. For example, an extension SDK for mobile handles the back button being pressed on a Windows phone. If you reference an Extension SDK in your project, then just add runtime checks to test if that SDK is available on that platform. That’s how you can have the same app package for each platform! What is the Windows Core? For the first time, Windows has been refactored to have a common core across all Windows 10 platforms. There is one common source, one common Windows kernel, one file I/O stack, and one app model. For the UI, there is just one XAML UI framework and one HTML UI framework. So you can concentrate on creating a great app, because we’ve made it easy to have your app run on different Windows 10 devices. What exactly is the Universal Windows Platform? It’s simply a collection of contracts and versions. These allow you to target where your app can run. You no longer target an operating system. Now you target your app to one or more device families. Learn more details from this platform guide. Requirements The Universal Windows App Development tools come with emulators that you can use to see how your app looks on different devices. If you want to use these emulators, you need to install this software on a physical machine. The physical machine must run Windows 8.1 (x64) Professional edition or higher, and have a processor that supports Client Hyper-V and Second Level Address Translation (SLAT). The emulators cannot be used when Visual Studio is installed on a virtual machine. Here is the list of software that you need: - Visual Studio 2015. Make sure that the Universal Windows App Development Tools are selected from the optional features list. Without these tools, you won't be able to create your universal apps. After installing this software, you need to enable your Windows 10 device for development. (You no longer need a developer license for each Windows 10 device.) Windows 8.1 and Windows 7 support If you choose to develop Universal Windows apps with Visual Studio 2015 on a platform other than Windows 10, these are the restrictions: Windows 8.1: You can’t run the app locally (only on a remote Windows 10 device). You can use the emulators in Visual Studio, but not the simulator. Windows 7: You can’t run the app locally (only on a remote Windows 10 device). You can’t use the emulators or the simulator in Visual Studio either. You can only use the XAML designer if your development platform is Windows 10. Universal Windows apps Choose your preferred development language from C#, Visual Basic, C++ or JavaScript to create a Universal Windows app for Windows 10 devices. Or, watch this getting started video. 
If you have existing Windows Store 8.1 apps, Windows Phone 8.1 apps, or Universal Windows apps created with Visual Studio 2015 RC, port these existing apps to use the latest Universal Windows Platform. After you create your Universal Windows app, you must package your app to install it on a Windows 10 device or submit it to the Windows Store.
https://docs.microsoft.com/en-us/visualstudio/cross-platform/develop-apps-for-the-universal-windows-platform-uwp?view=vs-2015
2019-03-18T18:25:25
CC-MAIN-2019-13
1552912201521.60
[array(['media/uwp-coreextensions.png?view=vs-2015', 'UWP_CoreExtensions Universal Windows Platform'], dtype=object)]
docs.microsoft.com
GroupMemoryBarrierWithGroupSync function Blocks execution of all threads in a group until all group shared accesses have been completed and all threads in the group have reached this call. Syntax void GroupMemoryBarrierWithGroupSync(void); Parameters This function has no parameters. Return value This function does not return a value. Remarks Minimum Shader Model This function is supported in the following shader models. This function is supported in the following types of shaders: See also
https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/groupmemorybarrierwithgroupsync
2019-03-18T17:50:47
CC-MAIN-2019-13
1552912201521.60
[]
docs.microsoft.com
As you operate applications with Axon Server, you may need to fine-tune the configuration to have Axon Server running optimally and to its full potential. This generally relates to event segmentation and flow control of messages, as well as general compute recommendations around disk storage and O/S characteristics. The summary below mentions these performance considerations.
https://docs.axoniq.io/reference-guide/axon-server/performance
2020-09-18T16:47:39
CC-MAIN-2020-40
1600400188049.8
[]
docs.axoniq.io
Entity Access vs. Page Access Per entity you can specify who can read or write what members (attributes and associations) under what circumstances. Using XPath constraints you can express powerful security behavior.
https://docs.mendix.com/refguide/security
2020-09-18T17:30:41
CC-MAIN-2020-40
1600400188049.8
[]
docs.mendix.com
The Database Selector opens, displaying the Environments available from the Harmony database. - Select the Environment, Job, Scene and Element where the drawing you want to open is located.
https://docs.toonboom.com/help/harmony-17/paint/project-creation/open-drawing-paint.html
2020-09-18T18:16:31
CC-MAIN-2020-40
1600400188049.8
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Paint/Launching/HP_02_OpenDrawings-001.png', None], dtype=object) array(['../Resources/Images/HAR/Paint/Launching/HP_02_OpenDrawings-002.png', None], dtype=object) array(['../Resources/Images/HAR/Paint/Launching/HP_02_OpenDrawings-003.png', None], dtype=object) array(['../Resources/Images/HAR/Paint/Launching/HP_02_OpenDrawings-004.png', None], dtype=object) array(['../Resources/Images/HAR/Paint/Launching/HP_02_OpenDrawings-006.png', None], dtype=object) array(['../Resources/Images/HAR/Paint/Launching/hp_paintcontrols-drawings.png', None], dtype=object) ]
docs.toonboom.com
Cannot add more than one item to cart This is usually caused by your php sessions not being set up correctly. In 99.99% of cases this is due to the session save path (php.ini) not being set correctly or at all, and/or the directory not existing or not being writable. Typically this would read session.save_path = "/tmp", session.save_path = "/var/lib/php5" or session.save_path = "/var/lib/php5/session", or similar, in your php.ini. Refer to the php manual for more details regarding session setup. You should speak to your host / server administrator to get this fixed and to make sure the appropriate path is set with the correct permissions. WPPizza will not work without php session support.
https://docs.wp-pizza.com/cannot-add-more-than-one-item-to-cart/
2020-09-18T17:52:58
CC-MAIN-2020-40
1600400188049.8
[]
docs.wp-pizza.com
The value to set the weblet to. If the weblet visualizes a field, this will identify the field whose value is to be shown. Default value No default value applies – for most uses of this weblet you must specify a field whose value is to be represented by the checkbox and/or that is used to receive the state of the checkbox. Valid values Single-quoted text or the name of a field, system variable or multilingual text variable.
https://docs.lansa.com/14/en/lansa087/content/lansa/wamengb2_0815.htm
2019-05-19T15:18:27
CC-MAIN-2019-22
1558232254889.43
[]
docs.lansa.com
tcod.noise¶ The Noise.sample_mgrid and Noise.sample_ogrid methods are multi-threaded operations when the Python runtime supports OpenMP. Even when single threaded these methods will perform much better than multiple calls to Noise.get_point. Example: import numpy as np import tcod import tcod.noise noise = tcod.noise.Noise( dimensions=2, algorithm=tcod.NOISE_SIMPLEX, implementation=tcod.noise.TURBULENCE, hurst=0.5, lacunarity=2.0, octaves=4, seed=None, ) # Create a 5x5 open multi-dimensional mesh-grid. ogrid = [np.arange(5, dtype=np.float32), np.arange(5, dtype=np.float32)] print(ogrid) # Scale the grid. ogrid[0] *= 0.25 ogrid[1] *= 0.25 # Return the sampled noise from this grid of points. samples = noise.sample_ogrid(ogrid) print(samples) - class tcod.noise.Noise(dimensions: int, algorithm: int = 2, implementation: int = 0, hurst: float = 0.5, lacunarity: float = 2.0, octaves: float = 4, seed: Optional[tcod.random.Random] = None)[source]¶ The hurst exponent describes the raggedness of the resultant noise, with a higher value leading to a smoother noise. Not used with tcod.noise.SIMPLE. lacunarity is a multiplier that determines how fast the noise frequency increases for each successive octave. Not used with tcod.noise.SIMPLE. get_point(x: float = 0, y: float = 0, z: float = 0, w: float = 0) → float[source]¶ Return the noise value at the (x, y, z, w) point. sample_mgrid(mgrid: numpy.array) → numpy.array[source]¶ Sample a mesh-grid array and return the result. The sample_ogrid method performs better as there is a lot of overhead when working with large mesh-grids.
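To complement the sample_ogrid example above, here is a short usage sketch that relies only on the methods documented on this page: get_point for individual samples and sample_mgrid for a dense grid. The grid size and the 0.25 scaling factor are arbitrary choices for illustration.

```python
import numpy as np
import tcod.noise

noise = tcod.noise.Noise(dimensions=2)

# Single samples: convenient for a handful of points, but slower in bulk
# than the vectorized sample_* methods.
print(noise.get_point(0.5, 0.5))
print(noise.get_point(1.0, 2.0))

# A dense 4x4 mesh-grid sampled in one call via sample_mgrid.
mgrid = np.mgrid[0:4, 0:4].astype(np.float32) * 0.25   # shape (2, 4, 4)
samples = noise.sample_mgrid(mgrid)
print(samples.shape)   # (4, 4)
```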
https://python-tcod.readthedocs.io/en/latest/tcod/noise.html
2019-05-19T15:00:57
CC-MAIN-2019-22
1558232254889.43
[]
python-tcod.readthedocs.io
1. Django-SHOP. This means that the merchant is in charge of the project and that django-SHOP acts as one of the third party dependencies making up the whole project. We name this the merchant implementation. The merchant implementation contains everything which makes up its fully customizable project, such as: - The main configuration file, settings.py. - The URL-routing entry point, usually urls.py. - Optionally, but highly recommended: Django models to describe the products sold by the merchant. - If required, extended models for the Cart and Order. - An administration interface to manage entities from all those models. - Special Cart modifiers to calculate discounts or additional costs. - Order workflows to handle all the steps of how an order is processed. - Apphooks for integrating Django-Views into django-CMS. - Custom filters to restrict the rendered set of products according to their properties. - Form definitions, if they differ from the built-in defaults. - HTML snippets and their cascading style sheets, if they differ from the built-in defaults. This approach allows a merchant to implement every desired extra feature, without having to modify any code in the django-SHOP framework itself. This, however, requires adding some custom code to the merchant implementation itself. Since we don’t want to do this from scratch, we can use a prepared cookiecutter template to bootstrap our first project. Please follow their instructions for setting up a running demo. This cookiecutter template is shipped with 3 distinct product models, which are named commodity, smartcard and polymorphic. Depending on their need for internationalization, they are subdivided into a variant for a single language and one with support for translated product properties. Which one of them to use depends on the merchant's requirements. When answering the questions asked by the cookiecutter wizard, consider the following: - use commodity, if you want to fill a free-form page with components from the CMS. It does not require any adaptation of the product model. It is useful for shops with a handful of different products. The Commodity Product Model and The Internationalized Commodity Product Model - use smartcard, if you have many products, which all share the same properties. It is useful for shops with one distinct product type. Here the product model usually must be renamed, and further adapted, by adding and removing fields. The Smart Card Product Model and An Internationalized Smart Card Model - use polymorphic, if you have many product types, with different properties for each type. Here we have to define a smallest common denominator for all products, and further create a product model for each distinct product type. The Polymorphic Product Model and The Internationalized Polymorphic Product Model 1.2. Installation¶ Before installing the files from the project, ensure that your operating system contains these applications: Install some additional Python applications, globally or for the current user: Then change into a directory, usually used for your projects, and invoke: You will be asked a few questions. If unsure, just use the defaults. This creates a directory named my-shop, or whatever you have chosen. This generated directory is the base for adopting this project into your merchant implementation. For simplicity, in this tutorial, it is referred to as my-shop. Change into this directory and install the missing dependencies: This demo shop must initialize its database and be filled with content for demonstration purposes.
Each of these steps can be performed individually, but for simplicity we use a Django management command which wraps all these commands into a single one: Finally we start the project, using Django’s built-in development server: Point a browser onto and check if everything is working. To access the backend at , log in using username admin with password secret. Note: The first time django-SHOP renders a page, images must be thumbnailed and cropped. This is an expensive operation which runs only once. Therefore, please be patient when loading a page for the first time. 1.3. Overview¶ What you see here is a content management system consisting of many pages. By accessing the Django administration backend at Home › django CMS › Pages, one gets an overview of the page-tree structure. One thing which immediately stands out is that all pages required to build the shop are actually pages served by django-CMS. This means that the complete sitemap (URL structure) of a shop can be reconfigured easily to the merchant's needs. 1.4. Adding pages to the CMS¶ If we want to add pages to the CMS which have not been installed with the demo, click on New Page to create a new Page. As its Title choose whatever seems appropriate. Then change into the Advanced Settings at the bottom of the page. In this editor window, locate the field Template and choose the default. Change into Structure mode and locate the placeholder named Main Content, add a Container-plugin, followed by a Row-, followed by one or more Column-plugins. Choose the appropriate width for each column, so that for any given breakpoint, the width units sum up to 12. Below that column, add whatever is appropriate for that page. This is how in django-CMS we add components to our page placeholders. The default template provided with the demo contains other placeholders. One shall be used to render the breadcrumb. By default, if no Breadcrumb-plugin has been selected, it shows the path to the current page. By clicking on the ancestors, one can navigate backwards in the page-tree hierarchy. 1.5. Next Chapter¶ In the next chapter of this tutorial, we will see how to organize the Catalog Views
https://django-shop.readthedocs.io/en/latest/tutorial/intro.html
2019-05-19T15:12:40
CC-MAIN-2019-22
1558232254889.43
[array(['../_images/django-cms-toolbar.png', 'django-cms-toolbar'], dtype=object) ]
django-shop.readthedocs.io
Environment

Inherits: Resource < Reference < Object
Category: Core

Brief Description

Resource for environment nodes (like WorldEnvironment) that define multiple rendering options.

Enumerations

enum BGMode:
- BG_KEEP = 5 — Keep on screen every pixel drawn in the background.
- BG_CLEAR_COLOR = 0 — Clear the background using the project's clear color.
- BG_COLOR = 1 — Clear the background using a custom clear color.
- BG_SKY = 2 — Display a user-defined sky in the background.
- BG_COLOR_SKY = 3 — Clear the background using a custom clear color and allows defining a sky for shading and reflection.
- BG_CANVAS = 4 — Display a CanvasLayer in the background.
- BG_MAX = 6 — Helper constant keeping track of the enum's size, has no direct usage in API calls.

enum GlowBlendMode:
- GLOW_BLEND_MODE_ADDITIVE = 0 — Additive glow blending mode. Mostly used for particles, glows (bloom), lens flare, bright sources.
- GLOW_BLEND_MODE_SCREEN = 1 — Screen glow blending mode. Increases brightness, used frequently with bloom.
- GLOW_BLEND_MODE_SOFTLIGHT = 2 — Softlight glow blending mode. Modifies contrast, exposes shadows and highlights, vivid bloom.
- GLOW_BLEND_MODE_REPLACE = 3 — Replace glow blending mode. Replaces all pixels' color by the glow value.

enum ToneMapper:
- TONE_MAPPER_LINEAR = 0 — Linear tonemapper operator. Reads the linear data and performs an exposure adjustment.

enum DOFBlurQuality:
- DOF_BLUR_QUALITY_MEDIUM = 1 — Medium depth-of-field blur quality.
- DOF_BLUR_QUALITY_HIGH = 2 — High depth-of-field blur quality.

enum SSAOBlur:
- SSAO_BLUR_DISABLED = 0
- SSAO_BLUR_1x1 = 1
- SSAO_BLUR_2x2 = 2
- SSAO_BLUR_3x3 = 3

enum SSAOQuality:
- SSAO_QUALITY_LOW = 0
- SSAO_QUALITY_MEDIUM = 1
- SSAO_QUALITY_HIGH = 2

Description

Resource for environment nodes (like WorldEnvironment) that define multiple environment operations (such as background Sky or Color, ambient light, fog, depth-of-field…). These parameters affect the final render of the scene. The order of these operations is:
- DOF Blur
- Motion Blur
- Bloom
- Tonemap (auto exposure)
- Adjustments

Property Descriptions

- Global brightness value of the rendered scene (default value is 1).
- Applies the provided Texture resource to affect the global color aspect of the rendered scene.
- Global contrast value of the rendered scene (default value is 1).
- Enables the adjustment_* options provided by this resource. If false, adjustment modifications have no effect on the rendered scene.
- Global color saturation value of the rendered scene (default value is 1).
- Color of the ambient light.
- Energy of the ambient light.
- Enables the tonemapping auto exposure mode of the scene renderer. If activated, the renderer automatically determines the exposure setting to adapt to the scene's illumination and the observed light.
- Maximum luminance value for the auto exposure.
- Minimum luminance value for the auto exposure.
- Scale of the auto exposure effect. Affects the intensity of auto exposure.
- Speed of the auto exposure effect. Affects the time needed for the camera to perform auto exposure.
- Maximum layer ID (if using the Layer background mode).
- Color displayed for clear areas of the scene (if using the Custom Color or Color+Sky background modes).
- Power of light emitted by the background.
- Defines the background mode.
- Sky resource defined as background.
- Sky resource's custom field of view.
- Sky resource's rotation expressed as a Basis.
- Sky resource's rotation expressed as Euler angles in radians.
- Sky resource's rotation expressed as Euler angles in degrees.
- Amount of far blur.
- Distance from the camera where the far blur effect affects the rendering.
- Enables the far blur effect.
- DOFBlurQuality dof_blur_far_quality: Quality of the far blur.
- Transition between the no-blur area and the far blur.
- Amount of near blur.
- Distance from the camera where the near blur effect affects the rendering.
- Enables the near blur effect.
- DOFBlurQuality dof_blur_near_quality: Quality of the near blur.
- Transition between the near blur and the no-blur area.
- Fog's depth starting distance from the camera.
- Value defining the fog depth intensity.
- Enables the fog depth.
- Enables the fog. Needs fog_height_enabled and/or fog_depth_enabled to actually display fog.
- Value defining the fog height intensity.
- Enables the fog height.
- Maximum height of fog.
- Minimum height of fog.
- Amount of sun that affects the fog rendering.
- Amount of light that the fog transmits.
- Enables the fog's light transmission. If enabled, lets reflected light be transmitted by the fog.
- GlowBlendMode glow_blend_mode: Glow blending mode.
- Bloom value (global glow).
- Enables glow rendering.
- Bleed scale of the HDR glow.
- Bleed threshold of the HDR glow.
- Glow intensity.
- First level of glow (most local).
- Second level of glow.
- Third level of glow.
- Fourth level of glow.
- Fifth level of glow.
- Sixth level of glow.
- Seventh level of glow (most global).
- Glow strength.
- SSAOQuality ssao_quality
- Default exposure for tonemap.
- ToneMapper tonemap_mode: Tonemapping mode.
- White reference value for tonemap.
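To make the relationship between these properties and the enums above more concrete, here is a minimal GDScript sketch (not part of the original class reference) that builds an Environment in code and hands it to a WorldEnvironment node. The property names (background_mode, glow_enabled, fog_depth_begin, and so on), the ProceduralSky resource, and the $WorldEnvironment node path are assumptions based on the Godot 3.x API; check the class reference for your engine version before relying on them.

# A minimal sketch, assuming the Godot 3.x Environment API and a scene
# that contains a WorldEnvironment node as a direct child.
extends Node

func _ready():
    var env = Environment.new()

    # Background: draw a procedural sky instead of the clear color.
    env.background_mode = Environment.BG_SKY
    env.background_sky = ProceduralSky.new()

    # Tonemapping and glow (bloom).
    env.tonemap_mode = Environment.TONE_MAPPER_LINEAR
    env.glow_enabled = true
    env.glow_blend_mode = Environment.GLOW_BLEND_MODE_SCREEN

    # Depth fog starting 10 units from the camera.
    env.fog_enabled = true
    env.fog_depth_enabled = true
    env.fog_depth_begin = 10.0

    # The node path here is hypothetical; point it at your own WorldEnvironment.
    $WorldEnvironment.environment = env

Setting these properties at runtime like this is equivalent to configuring the same fields on a saved Environment resource in the editor inspector; the code form simply makes the mapping to the enum values explicit.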
https://docs.godotengine.org/en/latest/classes/class_environment.html
2019-05-19T14:17:42
CC-MAIN-2019-22
1558232254889.43
[]
docs.godotengine.org
This section describes the developer-specific DC/OS components, explaining what is necessary to package and provide your own service on DC/OS. The Mesosphere Distributed Cloud Operating System (DC/OS) provides the best user experience possible for orchestrating and managing a datacenter. If you are an Apache Mesos developer, you are already familiar with developing a framework. DC/OS extends Apache Mesos by including a web interface for health checks and monitoring, a command-line interface (CLI), a service packaging description, and a repository that catalogs those packages.

Package Repositories

The DC/OS Universe contains all of the services that are installable on DC/OS. For more information on DC/OS Universe, see the GitHub Universe repository. We generally recommend using the DC/OS CLI rather than the DC/OS web interface throughout the process of creating a package for the Universe. All packaged services are required to meet a certain standard as defined by Mesosphere. For details on submitting a DC/OS service, see Getting Started with Universe.

DC/OS service structure

Each DC/OS service in the Universe repo is composed of JSON configuration files. These files are used to create the packages that are installed on DC/OS. For more information, see Getting Started with Universe.
https://docs.mesosphere.com/1.13/developing-services/
2019-05-19T14:22:05
CC-MAIN-2019-22
1558232254889.43
[]
docs.mesosphere.com
Free Dragging

If you click and drag the first helper, the current object or the currently selected set of objects gets dragged. You can then place the object anywhere on the currently visible portion of the site.

There is a more precise way to drag an object: when ending the drag, drop the mouse right on top of the architect. In that case, the object (or selected objects) gets precisely shifted to the location of the architect. By precise, I mean that TAD determines where the first helper used to be located, and then displaces the object(s) in such a manner that the first helper ends up exactly on the architect.

Once an object is dragged in this fashion, you would find that the architect now places itself right on the first helper (i.e. the architect magically shifts from its earlier location to that of the first helper). If you do not want such a behaviour, use the software's Properties dialog and change a checkbox. To locate the architect precisely, use the Headsup commandline.

Press F1 inside the application to read context-sensitive help directly in the application itself.
http://docs.teamtad.com/doku.php/isdraggingobj
2019-05-19T15:28:04
CC-MAIN-2019-22
1558232254889.43
[]
docs.teamtad.com
Statistics

Overview

AnyChart's engine calculates a great number of values, which can be obtained with the help of the getStat() method. The list of available values can be found in anychart.enums.Statistics. Which value you can get depends on the chart type and on the object you call the method on (see this article).

Basics

To obtain statistical data from a chart, call the getStat() method with a field name as a parameter. Available field names can be found in anychart.enums.Statistics – you can use either their name or their string representation:

var pointsCount = chart.getStat(anychart.enums.Statistics.DATA_PLOT_POINT_COUNT);
var bubbleMaxSize = chart.getStat("dataPlotBubbleMaxSize");

In the following sample, the getStat() method is used to obtain the maximum bubble size and the number of points in the chart. The information is displayed in the title of the chart.

You can call getStat() on instances of three types of classes: charts, series, and points. Below we consider the basics of how the method works with each type.

Chart

You should call the getStat() method of a chart object if you need overall statistics on all the series of a multi-series chart, or if the chart type does not allow more than one series. The sample below demonstrates how the method allows you to obtain the average Y-value of all the points in all the series of a chart (see the title):

totalAverage = chart.getStat("dataPlotYAverage");

In the next sample, the sum of all values in a pie chart is displayed (charts of this type always have only one series):

numberOfTrees = chart.getStat("sum");

Series

Sometimes it is necessary to call the getStat() method of an instance of the series class: firstly, you may be interested in only one of the data sets; secondly, the kind of statistics you can obtain depends on the type of a series. The following sample is based on one of the samples from the previous section. Here, the average is obtained for each series separately, and two numbers are displayed in the title.

Please note that there are two ways to get a link to a series object instance: a link can be returned either by series constructor methods or by the getSeries() and getSeriesAt() methods of a chart:

maleAverage = maleMartians.getStat("seriesYAverage");
femaleAverage = chart.getSeriesAt(1).getStat("seriesYAverage");

Point

As a rule, to call getStat() on a point, one needs to use so-called event listeners and text formatters. However, in some cases you can use the getPoint() method to get a link to a Point object and invoke the getStat() method on it. We will demonstrate both ways.

In the sample below, the title of the chart shows the values of the latest points in both series. In addition, when a user selects a pair of points, a subtitle with information on these points appears.

The getPoint() method is used to get links to the latest points in the two series, and then the getStat() method is called on them to get their values and create the title:

// get links to the latest points in both series
latestPointMaleMartians = maleMartians.getPoint(numberOfPoints - 1);
latestPointFemaleMartians = femaleMartians.getPoint(numberOfPoints - 1);

// get the values of the latest points from both series and use them in the title
mainTitleText = "The Height of Martians Today: Males — " + latestPointMaleMartians.getStat("value") +
    ", Females — " + latestPointFemaleMartians.getStat("value")

An event listener is used to listen to the pointsSelect event and get links to the selected points.
The getStat() method is called on them to get their category name and values and to create the subtitle:

// listen to the pointsSelect event
chart.listen("pointsSelect", function(e){
    // get categoryName of the selected points
    selectedPointYear = e.point.getStat("categoryName");
    // begin creating a subtitle with the information on the selected points
    subtitleText = "<span style='font-size:12'>" + selectedPointYear + ": ";
    // loop through the array of the selected points
    for (var i = 0; i < e.points.length; i++) {
        // get the name of the series a selected point belongs to and the value of the point
        subtitleText += e.points[i].getSeries().name() + " — " + e.points[i].getStat("value") + ", ";
    }
    // remove the extra comma at the end of the subtitle and close the <span> tag
    subtitleText = subtitleText.slice(0, subtitleText.length - 2) + "</span>";
    // update the title with the subtitle
    chart.title(mainTitleText + "<br>" + subtitleText);
});
https://docs.anychart.com/v7/Common_Settings/Statistics
2019-05-19T14:46:27
CC-MAIN-2019-22
1558232254889.43
[]
docs.anychart.com
Installing the software

Install Change Control or Application Control in the McAfee® ePolicy Orchestrator® (McAfee® ePO™) environment.

Prerequisites

Before installing Change Control or Application Control, make sure that your environment conforms to these requirements.

Supported McAfee ePO versions

This release of McAfee® Application Control and McAfee® Change Control is compatible with these McAfee® ePolicy Orchestrator® (McAfee® ePO™) versions.

Install the Solidcore extension

The Solidcore extension integrates with the McAfee ePO console and provides Change Control and Application Control features. The Solidcore extension installs on versions 5.1 and 5.3 of the McAfee ePO server.

Specify licenses

Licenses determine the product features available to you.

Install the Solidcore client

The Solidcore client provides change monitoring, change prevention, and whitelisting features on the endpoints where it is installed. You can install and deploy the Solidcore client on Windows and Linux platforms. For all supported platforms, the Solidcore client works well on both physical and virtual machines (VMs).
https://docs.mcafee.com/bundle/application-control-8.0.0-installation-guide-epolicy-orchestrator/page/GUID-B075AC84-3D08-43EB-8958-FC1E68FC4E2C.html
2019-05-19T15:19:08
CC-MAIN-2019-22
1558232254889.43
[]
docs.mcafee.com
Filtering OpenVPN Traffic

There is a separate OpenVPN tab used for defining rules that cover traffic entering the firewall across the VPN. This tab appears when any OpenVPN instance is defined. An OpenVPN instance may be assigned as an interface to create per-VPN rules, but the rules on the OpenVPN tab still apply and are considered before the rules on the per-interface tabs.
https://docs.netgate.com/pfsense/en/latest/vpn/openvpn/openvpn-traffic-filtering.html
2019-05-19T14:31:29
CC-MAIN-2019-22
1558232254889.43
[]
docs.netgate.com