Navigation

The navigation component is a UI tool that helps you easily implement common navigation patterns.

Basic usage

The Mobiscroll Navigation control has the following React components: <mobiscroll.BottomNav />, <mobiscroll.HamburgerNav />, <mobiscroll.TabNav /> and <mobiscroll.NavItem /> as the child component of the former. While the navigation components can be used with the React Router, it is not a requirement. If you are not using the React Router in your app, you can skip the following section and go to the simple examples.

Using with react router

The navigation components can be used with the react-router packages. The <NavItem /> component supports the same props as the React Router package's <Link /> component. Version 5 and above of the React Router package is only supported from Mobiscroll version 4.8.2 and upwards. To work with the react-router package, Mobiscroll needs to be configured! The following section describes how to configure the navigation component.

Configuration

The setup of the Mobiscroll Navigation component with React Router can be done by calling the setupReactRouter function and providing the Route and withRouter parameters from react-router-dom. Here's an example:

```jsx
import { Route, withRouter } from 'react-router-dom';
import mobiscroll from 'path-to-mobiscroll';

// setup the React Router with Mobiscroll
mobiscroll.setupReactRouter(Route, withRouter);
```

After the configuration the navigation component will be aware of the parent Router, and will change its selected state based on the current location and the to prop passed to its items.

A simple example for rendering different components, using the URL hash:

```jsx
import { Route, withRouter, HashRouter as Router, Switch } from 'react-router-dom';
import mobiscroll from 'path-to-mobiscroll';

mobiscroll.setupReactRouter(Route, withRouter);

const News = () => <div>News</div>;
const Search = () => <div>Search</div>;
const Profile = () => <div>My Profile</div>;

const App = (props) => {
    return <Router>
        <mobiscroll.BottomNav>
            <mobiscroll.NavItem to="/news">News</mobiscroll.NavItem>
            <mobiscroll.NavItem to="/search">Search</mobiscroll.NavItem>
            <mobiscroll.NavItem to="/profile">Profile</mobiscroll.NavItem>
        </mobiscroll.BottomNav>
        <Switch>
            <Route path="/news" component={News} />
            <Route path="/search" component={Search} />
            <Route path="/profile" component={Profile} />
        </Switch>
    </Router>;
}
```

Using the NavItems

The <mobiscroll.NavItem/> works the same way as the NavLink component from the React Router package does. It applies the active styling to itself based on the location match automatically. It also supports most of the props that the NavLink component does, to provide a fully customizable navigation. Here's a complete list of the supported props.

The most important prop of the NavItem is the to prop. When the NavItem is pressed, it navigates to the path provided by it. When the location matches the NavItem's to prop, it is rendered as active. The exact prop can also be useful when the location contains multiple segments. For example, when to="/video" is used, it will also match "/video/search" and "/feed/video/funny". Sometimes this behavior is not wanted, so providing the exact prop to the NavItem will make it match only when the paths match exactly.

The example below, when rendered, will redirect from the "/" path to the "/books" path, so the Books component will be rendered just after. The exact prop ensures that the redirect route won't be triggered by the other paths that contain the "/" character.
```jsx
import { Route, withRouter, HashRouter as Router, Switch, Redirect } from 'react-router-dom'; // import the Redirect route as well
import mobiscroll from 'path-to-mobiscroll';

mobiscroll.setupReactRouter(Route, withRouter);

const App = (props) => {
    return <Router>
        <mobiscroll.TabNav>
            <mobiscroll.NavItem to="/books">Books</mobiscroll.NavItem>
            <mobiscroll.NavItem to="/music">Music</mobiscroll.NavItem>
        </mobiscroll.TabNav>
        <Switch>
            <Route path="/books" component={Books} />
            <Route path="/music" component={Music} />
            <Redirect path="/" exact to="/books" />
        </Switch>
    </Router>;
}
```

Simple examples

```jsx
<mobiscroll.HamburgerNav>
    <mobiscroll.NavItem>Wi-Fi</mobiscroll.NavItem>
    <mobiscroll.NavItem disabled={true}>Location</mobiscroll.NavItem>
    <mobiscroll.NavItem>Sound</mobiscroll.NavItem>
    <mobiscroll.NavItem selected={true}>Rotation</mobiscroll.NavItem>
    <mobiscroll.NavItem>Bluetooth</mobiscroll.NavItem>
    <mobiscroll.NavItem>Settings</mobiscroll.NavItem>
    <mobiscroll.NavItem>Reading</mobiscroll.NavItem>
    <mobiscroll.NavItem>Data</mobiscroll.NavItem>
</mobiscroll.HamburgerNav>
```

```jsx
/* in your component */
constructor(props) {
    super(props);
    this.state = { selected: 'home' };
}

select = (item) => {
    this.setState({ selected: item.id });
}

items = [
    { id: 'home', text: 'Home', disabled: false, icon: 'home', badge: null },
    { id: 'feed', text: 'Feed', disabled: true, icon: 'pencil', badge: '2' },
    { id: 'settings', text: 'Settings', disabled: false, icon: 'user4', badge: null }
];

render() {
    return <mobiscroll.BottomNav>
        {this.items.map((item) => {
            return <mobiscroll.NavItem
                key={item.id}
                id={item.id}
                selected={item.id == this.state.selected}
                disabled={item.disabled}
                icon={item.icon}
                badge={item.badge}
                onClick={this.select.bind(null, item)}
            >{item.text}</mobiscroll.NavItem>
        })}
    </mobiscroll.BottomNav>;
}
```

<mobiscroll.NavItem /> props

NavItem props that are inherited from the NavLink component from the React Router package.

<mobiscroll.BottomNav />, <mobiscroll.HamburgerNav /> and <mobiscroll.TabNav /> props

For many more examples - simple and complex use cases - check out the navigation demos for React.
https://docs.mobiscroll.com/react/navigation
Important This doc is for managing users on the New Relic One user model. For managing users on our original user model, see Original users. To manage their users, New Relic organizations can configure one or more authentication domains, which control how users are added to a New Relic account, how they’re authenticated, and more. What is an authentication domain? An "authentication domain" is a grouping of New Relic users governed by the same user management settings, like how they're provisioned (added and updated), how they're authenticated (logged in), session settings, and how user upgrades are managed. When someone creates a New Relic account, the default authentication settings are: - Users are manually added to New Relic - Users manually log in using their email and password Those default settings would be under one "authentication domain." Another authentication domain might be set up like this: - Users are added and managed from an identity provider using SCIM provisioning - Users are logged in using SAML single sign-on (SSO) from an identity provider When you add users to New Relic, they’re added to a specific authentication domain. Typically organizations have either one or two authentication domains: one for the manual, default methods and one for the methods associated with an identity provider. Learn more in this short video (4:26 minutes): Requirements Authentication domains are for managing users on the New Relic One user model. If your users are on our original user model, see Original accounts. Requirements to manage authentication domains: - Your organization must be either Pro or Enterprise edition to have editable authentication domains. - To view or edit authentication domains, a user must: - Have a user type of core user or full platform user. - Be in a group with the Authentication domain manager role. - SCIM provisioning, also known as automated user management, requires Pro or Enterprise edition. Learn more about requirements. - SAML SSO requires Pro or Enterprise edition. Our SAML SSO support includes: - Active Directory Federation Services (ADFS) - Auth0 - Azure AD (Microsoft Azure Active Directory) - Okta - OneLogin - Ping Identity - Salesforce - Generic support for SSO systems that use SAML 2.0 Create and configure an authentication domain If you meet the requirements, you can add and manage authentication domains. To view and configure authentication domains: from the account dropdown, go to Administration > Organization and access > Authentication domains. If you have existing domains, they'll be on the left. Note that most organizations will have, at most, two or three domains: one with the manual, default settings and one or two for the identity provider-associated settings. To create a new domain from the authentication domain UI page, click Create new. For more about the configuration options, keep reading. Source of users: manual provisioning versus SCIM provisioning Tip For an introduction to our SAML SSO and SCIM offerings, please read Get started with SSO and SCIM. From the authentication domain UI, you can set one of two options for how users are added to New Relic: - Manual: This means that your users are added manually to New Relic from the User management UI. - SCIM: Our automated user management feature allows you to use SCIM provisioning to import and manage users from your identity provider. Notes on these settings: - You can't toggle Source of users. 
This means if you want to change this for an authentication domain that's already been set up, you'll need to create a new one. - When you first enable SCIM, the bearer token is generated and only shown once. If you need to view a bearer token later, the only way to do this is to generate a new one, which will invalidate the old one and any integrations using the old token. For how to set up SCIM, see Automated user management. Authentication: username/password versus SAML SSO The authentication method is the way in which New Relic users log in to New Relic. All users in an authentication domain have a single authentication method. There are two authentication options: - Username/password: Your users log in via email and password. - SAML SSO: Your users log in via SAML single sign-on (SSO) via your identity provider. To learn how to set that up, keep reading. Set up SAML SSO authentication Before enabling SAML SSO using the instructions below, here are some things to understand and consider: - Consider reading an introduction to getting started with SSO and SCIM. - Consider reviewing the SAML SSO requirements. - Consider watching a video on how to set up SAML SSO. - Note that your SSO-enabled users won't receive email verification notifications from New Relic because the login and password information is handled by your identity provider. - Consult your identity provider service's docs because they may have New Relic-specific instructions. If you're setting up SCIM provisioning: If you only want to enable SAML SSO and not SCIM, and if you use Azure, Okta, or OneLogin, follow these instructions for configuring the relevant app: - If you're implementing SAML using a different identity provider not mentioned above, you'll need to attempt to integrate using the SAML instructions below. Note that your identity provider must use the SAML 2.0 protocol, and must require signed SAML assertions. Next, you'll go to our authentication domain UI. From the account dropdown, click Organization and access, and then click Authentication domains. If you don't already have one, create a new domain to be used for your SAML-authenticating users. Under Authentication, click Configure. Under Method of authenticating users, select SAML SSO. If you're using the Okta, OneLogin, or Azure AD app, you can skip this step. Under Provided by New Relic, we have some New Relic-specific information. You'll need to place these in the relevant fields in your identity provider service. If you're not sure where they go, consult your identity provider docs. Under Provided by you, input the Source of SAML metadata. This URL is supplied by your identity provider and may be called something else. It should conform to SAML V2.0 metadata standards. If your identity provider doesn't support dynamic configuration, you can do this by using Upload a certificate. This should be a PEM encoded x509 certificate. Under Provided by you, set the SSO target URL supplied by your identity provider. You can find this by going to the Source of SAML metadata and finding the POST binding URL. It looks like: If your identity provider has a redirect URL for logout, enter it in the Logout redirect URL; otherwise, leave it blank. If you’re using an identity provider app, you’ll need to input the authentication domain ID in the app. That ID is found at the top of New Relic’s authentication domain UI page. Optional: In New Relic’s authentication domain UI, you can adjust other settings, like browser session length and user upgrade method. 
You can adjust these settings at any time. If you're enabling SAML only, you need to create groups and assign access grants in New Relic. (If you enabled SCIM, you've already completed this step.) Access grants are what give your users access to New Relic accounts. Without access grants, your users are provisioned in New Relic but have no account access. To learn how to do this: - Okta only: Return to Okta's New Relic app and, from the Add New Relic by organization page, uncheck the two Application visibility "Do not display..." checkboxes and click on Done. To verify it's been set up correctly, see if your users can log in to New Relic via your identity provider and ensure they have access to their accounts. Session duration and timeout In the authentication domain UI, under Management, you can control some other settings for the users in that domain, including: - Length of time users can remain logged in. - Amount of idle time before a user's session expires. - User upgrade requests Manage user type and upgrade requests In the authentication domain UI, under Management, you can control how your users' user type is managed. This includes how the user type can be edited and how upgrade requests are handled. There are two main settings: - Manage user type in New Relic: This is the default option. It allows you to manage your users' user type from New Relic. - Manage user type with SCIM: Enabling this means that you can no longer manage user type from New Relic. You'd only be able to change and manage it from your identity provider. More on these two options: For more about user type, see User type. Note that if you're on our original user model, upgrades work differently.
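As a rough illustration of where the SCIM bearer token mentioned earlier fits in, provisioning calls against a SCIM 2.0 endpoint generally look like the sketch below. The base URL is an assumption to verify against New Relic's automated user management documentation, and the token value is a placeholder.

```bash
# Sketch only: the endpoint URL is an assumption (check New Relic's SCIM docs),
# and YOUR_BEARER_TOKEN stands for the token generated when SCIM was enabled.
curl -X GET "https://scim-provisioning.service.newrelic.com/scim/v2/Users" \
  -H "Accept: application/scim+json" \
  -H "Authorization: Bearer YOUR_BEARER_TOKEN"
```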
https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more/?q=
Vulnerability management rules

There are separate vulnerability policies for containers, hosts, and serverless functions. Host and serverless rules offer a subset of the capabilities of container rules, the big difference being that container rules support blocking.

To create a vulnerability rule:
- Open Console.
- Go to Defend > Vulnerabilities > {Images | Hosts | Functions}.
- Click Add rule.
- Enter a rule name and configure the rule. Configuration options are discussed in the following sections.
- Click Save.
- View the impact of your rule. Go to Monitor > Vulnerabilities to view the scan reports.

Scope

The scope field lets you target a rule to specific resources in your environment. The scope of a rule is defined by referencing one or more collections. By default, the scope is set to the All collection, which applies the rule globally. For more information about creating and managing collections, see here.

Grace period

Grace periods temporarily override the blocking action of a rule when new vulnerabilities are found. Grace periods give you time to address a vulnerability without compromising the availability of your app. You can configure a uniform grace period for all severities or provide different settings for each severity.

In order to surface the issue as early as possible in the development lifecycle, you can specify a grace period in the CI pipeline. For example, this control would let you fail image builds that have critical vulnerabilities that were fixed over 30 days ago.

Configure grace period

The following procedure describes how to configure grace periods for blocking actions:
- In Console, go to Defend > Vulnerabilities > Images > Deployed.
- Select an existing rule or create a new rule with the Add rule button.
- Enter a rule name, notes, and scope.
- Under Severity based actions:
  - Select the desired Alert threshold.
  - Select the desired Block threshold. The block threshold must be equal to or greater than the alert threshold. You must define a block threshold in order to configure a grace period.
- Configure the Block grace period:
  - Select whether you would like to define the same grace period for All severities or a grace period By severity.
  - Specify the number of days. Note that in case of a By severity grace period you will be able to specify the number of days only for the severities that can be blocked. Values that are not set will be set to 0.

Use the same procedure to configure grace periods to fail builds in your CI/CD pipeline. To configure CI/CD pipeline vulnerability scanning rules, go to Defend > Vulnerabilities > Images > CI.

Blocking based on vulnerability severity

This example shows you how to create and test a rule that blocks the deployment of images with critical or high severity vulnerabilities.
- In Console, go to Defend > Vulnerabilities > Images.
- Click Add rule.
- Enter a rule name, such as my-rule.
- In the Severity based actions table, set both the Alert threshold and Block threshold to High.
- Target the rule to a very specific image. In the Images filter, delete the wildcard, and enter nginx*.
- Click Save.
- Validate your policy by pulling down the nginx image and running it.
  - SSH to a host protected by Defender.
  - Pull the nginx:1.14 image.
    $ docker pull nginx:1.14
  - Run the nginx image.
    $ docker run -it nginx:1.14 /bin/sh
    docker: Error response from daemon: oci runtime error: [Prisma Cloud] Image operation blocked by policy: my-rule, has 7 vulnerabilities, [high:7].
  - Review the scan report for nginx:1.14.
Go to Monitor > Vulnerabilities > Images, and click on the entry for nginx:1.14. You'll see a number of high severity vulnerabilities. By default, Prisma Cloud optimizes resource usage by only scanning images with running containers. Therefore, you won't see a scan report for nginx until it's run. Review the audit (alert) for the block action: go to Monitor > Events, then click on Docker.

Blocking specific CVEs

This example shows you how to create and test a rule that blocks images with a specific CVE.
- In Console, go to Defend > Vulnerabilities > Images.
- Click Add rule.
- Enter a rule name, such as my-rule2.
- Click Advanced settings.
- In Exceptions, click Add Exception.
- In CVE, enter CVE-2018-8014. You can find specific CVE IDs in the image scan reports. Go to Monitor > Vulnerabilities > Images, select an image, then click Show details in each row.
- In Effect, select Block.
- Click Add.
- Click Save.
- Try running an image with the CVE that you've explicitly denied.
  $ docker run -it imiell/bad-dockerfile:latest /bin/sh
  docker: Error response from daemon: oci runtime error: [Prisma Cloud] Image operation blocked by policy: my-rule2, has specific CVE CVE-2018-8014

Ignoring specific CVEs

Follow the same procedure as above, but set the action to Ignore instead of Block. This will allow any CVE ID that you've defined in the rule, and lets you run images containing those CVEs in your environment.
https://docs.paloaltonetworks.com/prisma/prisma-cloud/22-01/prisma-cloud-compute-edition-admin/vulnerability_management/vuln_management_rules
get_all_prediction_df

get_all_prediction_df(model, *, triples_factory, k=None, batch_size=1, return_tensors=False, add_novelties=True, remove_known=False, testing=None, mode=None)

Compute scores for all triples, optionally returning only the k highest scoring.

Note: This operation is computationally very expensive for reasonably-sized knowledge graphs.

Warning: Setting k=None may lead to huge memory requirements.

Parameters:
- model (Model) – A PyKEEN model
- triples_factory (CoreTriplesFactory) – Training triples factory
- k (Optional[int]) – The number of triples to return. Set to None to keep all.
- batch_size (int) – The batch size to use for calculating scores
- return_tensors (bool) – If true, only return tensors. If false (default), return as a pandas DataFrame
- add_novelties (bool) – Should the dataframe include a column denoting if the ranked relations correspond to novel triples?
- remove_known (bool) – Should non-novel triples (those appearing in the training set) be shown with the results? On one hand, this allows you to better assess the goodness of the predictions - you want to see that the non-novel triples generally have higher scores. On the other hand, if you're doing hypothesis generation, they may pose as a distraction. If this is set to True, then non-novel triples will be removed and the column denoting novelty will be excluded, since all remaining triples will be novel. Defaults to false.
- testing (Optional[LongTensor]) – The mapped_triples from the testing triples factory (TriplesFactory.mapped_triples)
- mode (Optional[Literal['training', 'validation', 'testing']]) – The pass mode, which is None in the transductive setting and one of "training", "validation", or "testing" in the inductive setting.

Return type: Union[ScorePack, DataFrame]

Returns: shape: (k, 3). A dataframe with columns based on the settings, or a tensor. Contains either the k highest scoring triples, or all possible triples if k is None.

Example usage:

```python
from pykeen.pipeline import pipeline
from pykeen.models.predict import get_all_prediction_df

# Train a model (quickly)
result = pipeline(model='RotatE', dataset='Nations', epochs=5)
model = result.model

# Get scores for *all* triples
df = get_all_prediction_df(model, triples_factory=result.training)

# Get scores for top 15 triples
top_df = get_all_prediction_df(model, k=15, triples_factory=result.training)
```
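Building on the example above, the returned DataFrame can be post-processed with ordinary pandas operations. A minimal sketch follows, assuming add_novelties=True and assuming column names such as "score" and "in_training" (check df.columns for the exact names produced by your PyKEEN version):

```python
# Sketch: filter and rank the prediction DataFrame with pandas.
# Column names ("score", "in_training", "*_label") are assumptions --
# inspect df.columns for the names your PyKEEN version actually produces.
novel = df[~df["in_training"]]            # keep only triples not seen in training
top_novel = novel.nlargest(15, "score")   # 15 highest-scoring novel triples
print(top_novel[["head_label", "relation_label", "tail_label", "score"]])
```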
https://pykeen.readthedocs.io/en/stable/api/pykeen.models.predict.get_all_prediction_df.html
Service Fabric

The Autofac.ServiceFabric package enables integration of Autofac with Service Fabric services.

Quick Start

In your Main program method, build up your container and register services using the Autofac extensions. This will attach service registrations from the container to the ServiceRuntime. Dispose of the container at app shutdown.

```csharp
using System;
using System.Diagnostics;
using System.Reflection;
using System.Threading;
using Autofac;
using Autofac.Integration.ServiceFabric;

namespace DemoService
{
  public static class Program
  {
    private static void Main()
    {
      try
      {
        // The ServiceManifest.xml file defines one or more service type names.
        // Registering a service maps a service type name to a .NET type.
        // When Service Fabric creates an instance of this service type,
        // an instance of the class is created in this host process.

        // Start with the trusty old container builder.
        var builder = new ContainerBuilder();

        // Register any regular dependencies.
        builder.RegisterModule(new LoggerModule(ServiceEventSource.Current.Message));

        // Register the Autofac magic for Service Fabric support.
        builder.RegisterServiceFabricSupport();

        // Register a stateless service...
        builder.RegisterStatelessService<DemoStatelessService>("DemoStatelessServiceType");

        // ...and/or register a stateful service.
        // builder.RegisterStatefulService<DemoStatefulService>("DemoStatefulServiceType");

        using (builder.Build())
        {
          ServiceEventSource.Current.ServiceTypeRegistered(
            Process.GetCurrentProcess().Id,
            typeof(DemoStatelessService).Name);

          // Prevents this host process from terminating so services keep running.
          Thread.Sleep(Timeout.Infinite);
        }
      }
      catch (Exception e)
      {
        ServiceEventSource.Current.ServiceHostInitializationFailed(e.ToString());
        throw;
      }
    }
  }
}
```

Per-Request Scopes

It is possible to achieve a "per request" style scoping mechanism by making use of the implicit relationships supported by Autofac. For example, if you have a stateless service, its lifetime is effectively a singleton. You would want to use the Func<T> or Func<Owned<T>> relationships (for non-disposable vs. disposable components, respectively) to inject an auto-generated factory into your service. Your service could then resolve dependencies as needed.

For example, say you have a user service that is stateless and it needs to read from some backing store that shouldn't be a singleton. Assuming the backing store is IDisposable you'd want to use Func<Owned<T>> and inject it like this:

```csharp
public class UserService : IUserService
{
  private readonly Func<Owned<IUserStore>> _userStoreFactory;

  public UserService(Func<Owned<IUserStore>> userStoreFactory)
  {
    _userStoreFactory = userStoreFactory;
  }

  public async Task<string> GetNameAsync(int id)
  {
    using (var userStore = _userStoreFactory())
    {
      return await userStore.Value.GetNameAsync(id);
    }
  }
}
```

While there's no "built in" semantics around per-request handling specifically, you can do a lot with the implicit relationships so it's worth becoming familiar with them.

Example

There is an example project showing Service Fabric integration in the Autofac examples repository.
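For the per-request example above, the container registrations might look like the following sketch. The SqlUserStore type is hypothetical and stands in for whatever IUserStore implementation you actually have:

```csharp
// Sketch: registrations backing the Func<Owned<IUserStore>> example above.
// SqlUserStore is a hypothetical IUserStore implementation.
var builder = new ContainerBuilder();

// A fresh, disposable store per resolution; Owned<T> lets the consuming
// service decide when it is disposed.
builder.RegisterType<SqlUserStore>().As<IUserStore>().InstancePerDependency();

// The stateless service itself effectively lives as a singleton.
builder.RegisterType<UserService>().As<IUserService>().SingleInstance();
```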
https://autofac.readthedocs.io/en/latest/integration/servicefabric.html
LoRa Gateway solution

Place an order: Ai-Thinker official Alibaba shop

Overview

The Ai-Thinker LoRa Gateway (RG-01) is designed and developed by Ai-Thinker Technology. This gateway is used for ultra-long-distance spread-spectrum communication. Its three built-in SX1278 RF chips offer high sensitivity of -148 dBm, power output of +20 dBm, long transmission distance, and high reliability; the gateway can send and receive data on three RF channels at the same time. Each channel can be set with different operating frequency and rate parameters without interfering with the others. During high-load work, idle channels are automatically selected to send and receive data. At low cost, it realizes three-channel communication with LoRa nodes and higher communication efficiency. It supports air wakeup, provides three working modes with different power consumption, and supports multiple networking options such as 4G/WiFi/Ethernet port.

Gateway features
- LoRa private concentrator protocol: flexible, simple, and customizable
- Node modules in the area are automatically added to the gateway to form a star network; three working modes (Class A / Class B / Class C) can be selected when joining the gateway
- LoRa gateway three-channel communication: three working frequencies can be configured, and three channels of data can be sent and received at the same time
- Supports WAN port and WiFi; optional 4G module
- Supports the MQTT protocol to connect to a cloud server, with an open MQTT protocol interface
- Long-distance transmission: the transmission distance can reach 3000 meters in the open outdoors
- Uses the MediaTek MT7688 processor, main frequency 580 MHz, 128 Mb Flash, 512 Mb RAM
- Supports frequency-hopping communication, air wakeup, and CAD channel detection
- Supports standard POE power supply or power adapter power supply

Resource summary

RG-01 specification: RG-01_product specification / RG-01 Product Specification
RG-01 control software (PC side): lora_gateway.zip
RG-01 User Manual: RG-01 User Manual_20200718
https://docs.ai-thinker.com/en/loragateway?do=edit
GroupDocs.Comparison for .NET 22.4 Release Notes

This page contains release notes for GroupDocs.Comparison for .NET 22.4.

Major Features

Below is the list of the most notable changes in the release of GroupDocs.Comparison for .NET 22.4:
- Implemented the ability to compare SVG (Scalable Vector Graphics) documents
- Fixed an issue with group figures losing their name after comparison in Cells
- Fixed an issue with alternative text not counting as a StyleChange in Slides

Full List of Issues Covering all Changes in this Release

Public API and Backward Incompatible Changes

Starting from this version, GroupDocs.Comparison has the ability to compare SVG documents.
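As a rough illustration of the new SVG capability, a comparison might look like the sketch below. The file names are hypothetical, and the Comparer workflow shown is assumed from the general GroupDocs.Comparison for .NET API rather than documented on this page:

```csharp
using GroupDocs.Comparison;

// Sketch only: "source.svg", "target.svg" and "result.svg" are placeholder paths,
// and the Comparer workflow is an assumption based on the library's usual API.
using (Comparer comparer = new Comparer("source.svg"))
{
    comparer.Add("target.svg");      // the document to compare against
    comparer.Compare("result.svg");  // write the comparison result
}
```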
https://docs.groupdocs.com/comparison/net/groupdocs-comparison-for-net-22-4-release-notes/
Countdown Latches You can see a list of all the countdown latches in your cluster by clicking on the Countdown Latches menu item in the left menu. A countdown latch has three metrics: Round: Number of the current round. This number is incremented when the countdown latch reaches 0 and is initialized with a new count. Count: Initial countdown. Remaining: Remaining number of expected countdowns. You can sort the table by clicking on the column headers.
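For context on where these metrics come from, the sketch below shows a CP countdown latch used from Hazelcast's Java API; the latch name and counts are arbitrary examples, and the screen described above would then show the corresponding Round, Count, and Remaining values:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.ICountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "orders" is an arbitrary example name; it would appear in the
        // Countdown Latches table in Management Center.
        ICountDownLatch latch = hz.getCPSubsystem().getCountDownLatch("orders");

        latch.trySetCount(3);              // Count = 3, Remaining = 3 for this round
        latch.countDown();                 // Remaining drops to 2
        latch.await(10, TimeUnit.SECONDS); // wait for the remaining countdowns
    }
}
```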
https://docs.hazelcast.com/management-center/latest/cp-subsystem/countdown-latch
Table of Contents
- Steps
- Examples

# Steps

Mongock provides different runners, from the standalone runner (vanilla version) to Springboot and other frameworks. This section shows how to use Mongock with Springboot. Carrying on with our client-service example in "what is Mongock?", let's start working with Mongock!

# 1- Add Mongock bom to your Pom file

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.mongock</groupId>
      <artifactId>mongock-bom</artifactId>
      <version>LAST_RELEASE_VERSION_5</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

# 2- Add the maven dependency for the runner

Runner options

```xml
<dependency>
  <groupId>io.mongock</groupId>
  <artifactId>mongock-springboot</artifactId>
</dependency>
```

# 3- Add the maven dependency for the driver

Driver options

```xml
<dependency>
  <groupId>io.mongock</groupId>
  <artifactId>mongodb-springdata-v3-driver</artifactId>
</dependency>
```

Mongock is not intrusive; it leaves the driver library's version up to the developer. These libraries are injected with scope provided.

# 4- Create your migration script/class

Note that by default, a ChangeUnit is wrapped in a transaction (natively by using the database support, or manually when transactions are not supported). For more information visit the migration and transaction section.

```java
package io.mongock.examples.migration;

import io.mongock.api.annotations.Execution;
import io.mongock.api.annotations.ChangeUnit;
import io.mongock.api.annotations.RollbackExecution;

@ChangeUnit(id = "client-initializer", order = "1", author = "mongock")
public class ClientInitializerChange {

  private final MongoTemplate mongoTemplate;
  private final ThirPartyService thirdPartyService;

  public ClientInitializerChange(MongoTemplate mongoTemplate,
                                 ThirPartyService thirdPartyService) {
    this.mongoTemplate = mongoTemplate;
    this.thirdPartyService = thirdPartyService;
  }

  /** This is the method with the migration code **/
  @Execution
  public void changeSet() {
    thirdPartyService.getData()
        .stream()
        .forEach(client -> mongoTemplate.save(client, CLIENTS_COLLECTION_NAME));
  }

  /**
   * This method is mandatory even when transactions are enabled.
   * It is used in the undo operation and any other scenario where transactions are not an option.
   * However, note that when transactions are available and Mongock needs to roll back, this method is ignored.
   **/
  @RollbackExecution
  public void rollback() {
    mongoTemplate.deleteMany(new Document());
  }
}
```

# 5- Build the driver (only required for builder approach)

Although all the drivers follow the same build pattern, they may slightly differ from each other. Please visit the specific driver's page for more details.

# 6- Driver extra configuration

This step is NOT MANDATORY; however, for certain features the driver may require some extra help. For example, in order to enable transactions with Spring Data, the transaction manager needs to be injected in the application context. Please visit the specific driver's page for more details.

# 7- Build the runner

When using the builder approach, the driver needs to be injected into the runner by using the method setDriver. There are two approaches when it comes to building the Mongock runner: the builder and the autoconfiguration approach. Visit the runner builder for more information. For this example, we use the autoconfiguration approach with Springboot.

# Properties

```yaml
mongock:
  migration-scan-package:
    - io.mongock.examples.migration
```

# Indicate spring to use Mongock

This approach relies on the underlying framework to provide a smooth experience.
In this case, we take advantage of the Springboot annotations to tell Spring how to run Mongock. However, this approach requires the Spring ApplicationContext, MongoTemplate and MongoTransactionManager to be injected in the Spring context.

```java
@EnableMongock
@SpringBootApplication
public class App {
  public static void main(String[] args) {
    new SpringApplicationBuilder()
        .sources(App.class)
        .run(args);
  }
}
```

# 8- Execute the runner

When using the Springboot runner, you don't need to worry about the execution. Mongock takes care of it 😉

Congratulations! Our basic Mongock setup is done. We just need to run our application and we should see something like this in our log:

```
2021-09-17 17:27:42.157  INFO 12878 --- [main] i.m.r.c.e.o.c.MigrationExecutorBase : APPLIED - ChangeEntry{"id"="client-initializer", "author"="mongock", "class"="ClientInitializer", "method"="changeSet"}
```

# Examples

For code examples, visit the resource page
https://docs.mongock.io/v5/get-started/index.html
Welcome to pyads's documentation!

This is a Python wrapper for TwinCAT's ADS library. It aims to provide a pythonic way to communicate with TwinCAT devices by using the Python programming language. pyads uses the C API provided by TcAdsDll.dll on Windows and adslib.so on Linux. The Linux library is included in this package. The documentation for the ADS API is available on infosys.beckhoff.com.

- Installation
- Quickstart
- Documentation
- pyads package
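To give a feel for what "pythonic" means here, below is a minimal connection sketch. The AMS net ID and PLC variable name are placeholders for your own setup, and the exact calls should be checked against the Quickstart for your pyads version:

```python
import pyads

# Placeholders: replace the AMS net ID and variable name with your own values.
plc = pyads.Connection("192.168.0.10.1.1", pyads.PORT_TC3PLC1)
plc.open()
try:
    # Read an INT variable declared in the PLC's global variable list.
    value = plc.read_by_name("GVL.counter", pyads.PLCTYPE_INT)
    print(value)
finally:
    plc.close()
```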
https://pyads.readthedocs.io/en/stable/
lifecycle-policy-preview-complete

Wait until a lifecycle policy preview request is complete and results can be accessed. It will poll every 5 seconds until a successful state has been reached. This will exit with a return code of 255 after 20 failed checks.

See also: AWS API Documentation

See 'aws help' for descriptions of global parameters.

lifecycle-policy-preview-complete
[--registry-id <value>]
--repository-name <value>
[--image-ids <value>]
[--filter <value>]
[--cli-input-json <value>]
[--starting-token <value>]
[--page-size <value>]
[--max-items <value>]
[--generate-cli-skeleton <value>]

--registry-id (string)
The Amazon Web Services account ID associated with the registry that contains the repository. If you do not specify a registry, the default registry is assumed.

--repository-name (string)
The name of the repository.

--image-ids (list)
The list of imageIDs to be included.
(structure) An object with identifying information for an image in an Amazon ECR repository.
  imageDigest -> (string) The sha256 digest of the image manifest.
  imageTag -> (string) The tag used for the image.

Shorthand Syntax:
imageDigest=string,imageTag=string ...

JSON Syntax:
[ { "imageDigest": "string", "imageTag": "string" } ... ]

--filter (structure)
An optional parameter that filters results based on image tag status and all tags, if tagged.
  tagStatus -> (string) The tag status of the image.
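A typical invocation, based on the synopsis above (the repository name is a placeholder):

```bash
# Poll until the lifecycle policy preview for "my-repo" is complete
# (exits with code 255 after 20 failed checks, per the behavior described above).
aws ecr wait lifecycle-policy-preview-complete --repository-name my-repo
```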
https://docs.aws.amazon.com/cli/latest/reference/ecr/wait/lifecycle-policy-preview-complete.html
This tip shows how to set custom standardization for a JChem table. Setting a standardizer allows you to define chemical business rules for a JChem structure table. This lets you define things like:
- How charges and nitro groups are handled
- How salts are handled
- Tautomers
- And much more. See here for details.

Using a custom standardizer configuration requires a Standardizer license.

Click on the 'Create standardizer' button (only present if you do not already have a custom standardizer defined). Click on the 'Apply' button. The new standardizer configuration will be applied to the structure table. This may take some time, depending on the size of the table, the actions you specified, and the chemical terms columns (if any) that you have added to the table. A progress dialog is shown as the new standardization rules are applied.

IJC schema editor (user guide)
Editing entities (user guide)
https://docs.chemaxon.com/display/lts-europium/change-standardizer-configuration-for-jchem-table.md
The REST Server

Introduction

The DeltaJSON REST server is included as a JAR file in the DeltaJSON distribution download. It is named using the pattern deltajson-rest-x.y.z.jar, where x, y and z are respectively the major, minor and patch version numbers of the DeltaJSON release. Supporting resource files are also included in the download. The REST server JAR file and supporting files should be copied to a directory on the host machine.

License File

The REST server license file named deltajson-rest.lic should be copied to the installation directory. This license file controls the HTTP port that the server listens on and also the maximum number of threads that can be exploited by the REST server.

License Server

If using a license server, or multiple license servers for redundancy, these can be specified using the license-servers command-line option. For example:

java -jar deltajson-rest-x.y.z.jar license-servers="10.1.10.1, 10.1.10.2, 10.1.10.3"

The server is started with:

java -jar deltajson-rest-x.y.z.jar

Apr 12, 2019 8:51:02 AM org.glassfish.grizzly. start
INFO: Started listener bound to [0.0.0.0:8080]
Apr 12, 2019 8:51:02 AM org.glassfish.grizzly. start
INFO: [HttpServer] Started.
DeltaJSON REST service started, navigate to:
Press Control-C to stop it...

When using JDK 9.0 or 10.0 an additional add-modules argument is required when starting the server:

java --add-modules java.xml.bind -jar deltajson-rest-x.y.z.jar

To stop the service use Control-C as indicated, or an appropriate operating system command/tool such as 'kill'.

Logging

Logging for the REST server is controlled through slf4j, which allows a variety of methods such as java.util.logging and logback. For example, with java.util.logging, a logging.properties file can be placed in the installation directory. Alternatively the Java property java.util.logging.config.file can be used when starting the service, i.e.:

java -Djava.util.logging.config.file=/path/to/logging.properties -jar deltajson-rest-x.y.z.jar

For example:

handlers = java.util.logging.FileHandler, java.util.logging.ConsoleHandler
.level = OFF
com.deltaxml.json.rest.level = FINER
java.util.logging.FileHandler.level = FINER
java.util.logging.FileHandler.pattern = deltajson.log
java.util.logging.ConsoleHandler.level = FINE

Using the FINER level will enable logging of every successful request in addition to errors, etc. Use a less granular level such as FINE if you do not want these to be logged.

For logback, you will need to add your configuration file onto the classpath. You may need to start the REST service differently, for example:

java -cp deltajson-rest-x.y.z.jar:/path/to/dir-containing-logback-xml com.deltaxml.json.rest.Main

File IO

The File IO type can be disabled, for example for security reasons, by using the Java property disableFileIO:

java -DdisableFileIO=true -jar deltajson-rest-x.y.z.jar
https://docs.deltaxml.com/deltajson/2.1/The-REST-Server.2953805832.html
EtherType (ethertype) Description Returns the EtherType of the Ethernet frame of a packet. How does it work in the search window? Select Create column in the search window toolbar, then select the EtherType operation. You need to specify one argument: The data type of the values in the new column is integer. How does it work in LINQ? Use the operator as... and add the operation syntax to create the new column. This is the syntax for the EtherType operation: ethertype(packet)
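As a rough sketch of the LINQ form described above, a query might look like the following; the table name and column alias are hypothetical, and the only part taken from this page is the ethertype(packet) operation syntax:

```
from my.network.packets
select ethertype(packet) as frame_ethertype
```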
https://docs.devo.com/confluence/ndt/v7.1.0/searching-data/building-a-query/operations-reference/packet-group/ethertype-ethertype
Utility functions for handling java.lang.CharSequence instances.

Checks if start == 0 and count == the length of the CharSequence. It does this check only for the String, StringBuilder and StringBuffer classes, which have a fast way to check length. Calculating length on GStringImpl requires building the result, which is costly. This helper method avoids calling length on classes other than String, StringBuilder and StringBuffer when checking whether the input CharSequence instance is already the same as the requested sub-sequence.
- str - CharSequence input
- start - start index
- count - length of the sub-sequence

Provides an optimized way to copy CharSequence content to a target array. Uses the getChars method available on the String, StringBuilder and StringBuffer classes. Characters are copied from the source sequence csq.
- csq - the source CharSequence instance.
- srcBegin - start copying at this offset.
- srcEnd - stop copying at this offset.
- dst - the array to copy the data into.
- dstBegin - offset into dst.

Writes a CharSequence instance in the most optimal way to the target writer.
- target - writer
- csq - source CharSequence instance
- start - start/offset index
- end - end index + 1
https://docs.grails.org/4.0.10/api/org/grails/charsequences/CharSequences.html
5.1.1 Release Notes

These release notes list any new features, enhancements, and fixes that were made between version 5.0 and 5.1.1 of Hazelcast Management Center (MC).

New Features
- SQL Browser has a new design which shows queryable objects with a schema and index explorer. [MC-1231] [MC-1292] [MC-1235]
- SQL Browser now handles streaming SQL queries and shows the last 1,000 entries as they are received. [MC-1293]
- New CP Subsystem metrics screens added to MC. [MC-533]
- Dynamic Configuration allows for additions to be made to the cluster config via MC. [MC-1268]
- System properties and environment variables can now be used interchangeably. [MC-1107]

Enhancements
- Security: Console access can be enabled or disabled in the members' config. This defaults to disabled (breaking change). See Managing Console Support in the Platform documentation. [MC-1121]
- Security: Data access via the SQL Browser and Map Browser can be disabled in the members' configuration. [MC-1039]
- Standardization of naming of start scripts: hz-mc should be used, and hz-mc conf for any MC offline configuration. [MC-1087]
- Introduced the hz-mc conf cluster list command to list all configured cluster names. [MC-1091]
- SQL Browser provides instructions if hazelcast-sql has not been included in the classpath. [MC-871]
- SQL Browser has improved messages when mappings cannot be automatically generated. [MC-1160]
- SQL Browser: when creating a mapping, the mapping name now defaults to the map's name. [MC-1288]
- SQL Browser: the returned results for each field are capped at 1024 characters to prevent large fields from slowing down MC. [MC-1203]
- MC now quickly identifies whether the cluster has SQL/Streaming enabled or not. [MC-992]
- Config healthcheck can be customised to ignore specific problems via the Ignore button. [MC-541]
- Updated Config healthcheck to allow different config for some security parts. [MC-925]
- Added a button to clear all map entries. [MC-1176]
- Queues: age-related queue metrics can now be reset. [MC-1109]
- Improved tables with auto widths based on the content. [MC-1073]
- Improved support for large clusters with lots of metrics. [MC-1242]
- Persistence of metrics can be disabled via the hazelcast.mc.metrics.persistence.enabled property. [MC-1225]
- Duration column has been added to the Completed Jobs table. [MC-151]
- Near Cache stats now include invalidations and invalidationRequest metrics. [MC-340]
- MC's own connection(s) to the cluster are now excluded from the client count. [MC-1162]
- MC's operation timeout with the cluster can now be configured so that long-running operations (like script executions) do not time out. [MC-1285]
- Shutdown member and Shutdown cluster now behave identically for single-node clusters. [MC-514]
- Changed the button from Connect to Add on the Cluster Connections screen. [MC-808]
- Pendo.io is used to collect usage analytics and performance of Management Center (Pendo can be disabled by disabling MC phone homes). [MC-1299]
- Java 17 is now supported. [MC-1120]

Fixes
- Helpful error if running on an architecture that is not supported by RocksDB. [MC-1208]
- MC will fail fast if it does not have write access to its data directory. [MC-206]
- Map configs without an actual IMap are not shown in the SQL Browser dropdown. [MC-1122]
- Config healthcheck: fixed a false alarm that could be generated for the default map. [MC-700]
- Clients detail page: display a human-readable text for the corresponding client language. [MC-804]
- Empty lib directory has been added to support user code deployment. [MC-1107]
- The left menu section that was previously opened is now remembered. [MC-1111]
- Button added to Security Provider configs to clear the current value in the field. [MC-201]
- Fixed a TTL setting in metrics storage. [MC-150]
- Fixed an incorrect avgEventLatency on the WAN Replication screen. [MC-1151]
- WAN Sync menu dropdowns have been expanded to match the length of the Map names. [MC-1139]
- Fixed the tooltips shown in the WAN Replication screens so they do not overlap. [MC-1153]
- Fix for displaying a large number of maps in the WAN Sync screen. [MC-1185]
- Fix for the maps counter in the left menu to say the total maps count includes system maps. [MC-1207]
- Fix for the Map Browser TTL value to show time remaining rather than an expiry date. [MC-1366]
- Maximize chart icon is no longer shown after clicking on it. [MC-1141]

Breaking Changes

Console access is now disabled by default in the members' configuration. If you use the console in Management Center, console access needs to be enabled. See Managing Console Support in the Platform documentation.

Notes

Management Center 5.1 does not support Jet 4.5. It only supports IMDG 4.2, Hazelcast Platform 5.0 and later. Management Center 5.1 supports SQL Browsing on Hazelcast Platform 5.0 and later. Platform 5.1 is highly recommended for SQL/Streaming support. You should avoid updating the Client Filtering configuration during rolling upgrades if multiple Management Center instances are connected to the same cluster, to keep the configuration consistent.
https://docs.hazelcast.com/management-center/latest/release-notes/5-1-1
and how quickly they can be used to transform imported datasets into clean and actionable data for use across the enterprise.
https://docs.trifacta.com/pages/diffpages.action?originalId=184211694&pageId=184212931
In this section the NSX supplied OWASP CRS policy can be configured. It covers the OWASP Top Ten attack protection. If the CRS version is updated, all new CRS rules will be in Detection mode. With this, you can update the CRS ruleset without any risk in production. However, these new rules must be moved into Enforcement mode (or inherited policy mode) manually. All updated rules will continue to remain in the same mode and the existing exclusions will be applied to the rules. To update CRS Rules do the following: - Under the Signatures tab, scroll down to the CRS Rules section. - Click on the required CRS Version to select it. - The change log is displayed as shown below. Click on OK to confirm and update the CRS version. The final step in WAF processing is a signature check. Core Rule Sets (CRS) can be configured under the Signatures tab. You can configure to execute custom rules before CRS or after CRS as well. For more information refer to the below section.
https://docs.vmware.com/en/VMware-NSX-Advanced-Load-Balancer/20.1.4/WAF_Guide/GUID-7FF57A08-0926-4FBA-990C-3473223BCDC4.html
You can import and synchronize existing cloud accounts from vRealize Automation 8.x to vRealize Operations Cloud. Click Import Accounts from VRA > Import Accounts to list all the cloud accounts associated with vCenter Server, Amazon AWS, and Microsoft Azure that are not managed by vRealize Operations Cloud. You can select and import these accounts into vRealize Operations Cloud directly with existing credentials as defined in vRealize Automation, or add or edit the credentials before the import process. The Import Accounts from VRA option is hidden from the user until the integration with vRealize Automation 8.x is enabled from the integration page under or Repository tabs.

Prerequisites
- Verify that vRealize Automation 8.x is enabled from vRealize Operations Cloud.
- Verify that you know the vCenter Server credentials that have sufficient privileges to connect and collect data.
- Verify that the user has privileges of Organizational Owner and Cloud Assembly administrator set in vRealize Automation.

Procedure
- From the left menu, go to the tab, click on the horizontal ellipses, and then select Import Accounts from VRA.
- From the Import Accounts page, select the cloud account you want to import.
- To override an existing credential from vRealize Automation:
  - Select the existing credential from the Credential drop-down menu and click Save.
  - To add a new credential, click the plus icon next to the Credential drop-down menu, enter the credential details, and click Save.
- Select the collector/group from the drop-down menu.
- Click Validate to verify that the connection is successful.
- Click Import.

Results

The imported cloud account is listed in the page. After the data collection for the cloud account is complete, the configuration status changes from Warning to OK.
https://docs.vmware.com/en/vRealize-Operations/Cloud/com.vmware.vcom.config.doc/GUID-905FBD95-B0A5-485A-9CEC-B063359723E5.html
( r.hashMap("name", "John") .with("subscription_date", r.now()) ).run(conn);
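The fragment above is the document being written; in context it would typically sit inside an insert, along the lines of the sketch below (the table name is a placeholder):

```java
// Sketch: insert a document whose "subscription_date" is evaluated server-side
// by r.now(); the "users" table name is hypothetical.
r.table("users").insert(
    r.hashMap("name", "John")
     .with("subscription_date", r.now())
).run(conn);
```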
https://docs.w3cub.com/rethinkdb~java/api/java/now/
What is WSO2 Managed Cloud? WSO2 Managed Cloud is a service that allows you to get a team of WSO2 product specialists to run your Cloud for you. These are the typical tasks carried out by the WSO2 Managed Cloud team: How can I get started? If you are a WSO2 production support customer, contact your WSO2 account manager, or leave a request at to discuss your Cloud requirements. How is WSO2 involved? Although the level of involvement can vary from customer to customer, here are the typical things done by the WSO2 Managed Cloud team: - Set up remote access to the customer's Amazon EC2 instance. - Set up a domain name system (DNS). - Set up a connection to the customer's data center. - Set up the environments (e.g., Development, Test, Pre-Production, and Production). - Carry out system monitoring and alerting. - Implement backup and disaster recovery. - Commit the artifacts such as scripts and diagrams for versioning and history. - Support and maintain the system. This includes activities like sending weekly updates, handling issues, and managing deployment artifacts such as patches. Any other tasks such as continuous integration and deployment that are not listed here are handled separately. Where does WSO2 set up the Cloud? We use Amazon Virtual Private Cloud (VPC). It lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. VPCs provide network-level control and isolation for the AWS deployment. You can create a multi-tier network structure on AWS that allows you to keep your data and configurations in the private space and expose them through the DMZ. We always recommend to use VPCs for production deployments on AWS, considering data isolation and security. What Amazon services does WSO2 use? WSO2 uses the following unless the customer provides alternatives: - Amazon Virtual Private Cloud (VPC) - Amazon Elastic Compute Cloud (EC2) - Amazon Relational Database Service (RDS) - Amazon Simple Storage Service (S3) - Amazon Simple Email Service (SES) - Amazon Route53 services - Amazon Glacier What are the AWS account requirements? Following is the list of AWS specifics that WSO2 needs: - The customer's Amazon Web Services (AWS) account ID. - A multi-factor-authentication-enabled Identity and Access Management (IAM) user with access to the customer's AWS Management Consoles. - IAM users with privileges to invoke APIs. - IAM users with admin privileges to Amazon VPC, EC2, RDS and optionally, to Amazon S3, SES, Route53, and Glacier. I already have a deployment of WSO2 products. Can I get WSO2 to take it over and maintain it? No. WSO2 will set up a new environment for you by following the WSO2 Managed Cloud standards and best practices and maintain the new environment according to the Managed Cloud Service Level Agreement (SLA). What does WSO2 use for deployment synchronization? For artifact synchronization, we currently recommend SVN. Git support is still in the early stages of implementation. We plan to make it available in a future Carbon release.
https://docs.wso2.com/pages/viewpage.action?pageId=57761991
VoiceXML or Java applications and business data can be resident on a Web server or application server and accessed using TCP/IP. With state tables and custom servers, you can access business data either locally on the pSeries system or remotely using TCP/IP or SNA. You can also access host 3270 applications via TCP/IP or SNA. Having decided how your system is to access business data, refer to Software prerequisites to determine what LPPs you need to install before installing Blueworx Voice Response. The network can include: If your network is managed by NetView® or another network management application that uses simple network management protocol (SNMP), one or more Blueworx Voice Response systems can be managed from a central point (which can be a separate machine or a pSeries computer on which Blueworx Voice Response is running). SNMP is used to send Blueworx Voice Response alarms and other status information to the network management application. Using a service such as NetView, the network operator can keep Blueworx Voice Response running day-to-day. If Blueworx Voice Response stops unexpectedly, the NetView operator can help diagnose the problem, using the information in the alarm messages. (For more information about alarms, see the Blueworx Voice Response for AIX: Configuring the System information and the Blueworx Voice Response for AIX: Problem Determination information.) The network operator can also obtain up-to-date information about trunks, channels, CPU usage, and so on, and can reset their status, giving the operator remote control over the Blueworx Voice Response system.
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.install.doc/datacommunications7.html
Notification Templates

Notification templates use django's built-in templating system. This gives us more than enough power to craft the kind of message we want, without making things too onerous. Here, we go over the various fields on a template object, and what they do.

Fields

- Backend - This sets the backend that the template will use.
- Recipients - Here, you can select one or multiple kinds of recipients. If none are selected, the template won't be used.
- Subject - You can use template variables here but be careful not to make it too long. [1]
- Content - Here's where the body of your message goes. You can use template variables here and there is a toolbar at the top of the text box that has a few tools for convenience. Of particular note is the check mark button (✓) that shows a preview. Use this to check your template.
- Attachments - You can add zero or more attachments here. What's available here depends on what's set up in the code. [2]
- From address - Normally optional since backends have a way to specify a site-wide default from address if they need one at all. You can use template variables here. [3]
- Bulk - When this is on, only one message will be sent per template to all recipients and the recipients parameter will not be available. When it is turned off, a separate message will be made for each recipient, and you can use the recipients parameter.
- Enabled - Turning this off will cause the template to be ignored.

Example Content

The way a template looks shouldn't be too foreign to you if you're already used to django; but just in case you're wondering, here's an example.

```html
<p>Hi {{ recipient.name }},</p>

<p>{{ object.poster.name }} has posted a comment on your blog titled {{ object.article.title }}.</p>
```

Note that the exact variables available will depend on which model the notification is attached to. This example assumes bulk is turned off.

Variables

Several variables are provided to you in the template context.

- object - This refers to whatever model the notification is attached to. It is visible as the content-type field of the notification when you're editing it in the admin. Most of the time, you're probably going to be wanting to use this.
- actor - This is only available if actor_type is specified for the notification. It refers to whoever or whatever is causing the action associated with the notification.
- target - This is only available if target_model is specified for the notification. It refers to whoever or whatever the action associated with the notification is affecting.
- recipient - The type of this depends on which channel is selected as the recipient of a notification, and what kind of objects that channel returns. In practice, it will probably be some sort of user/user-profile object. When site contacts are the recipient, the value is a SiteContact object.

Most of the time, it's recommended to just try and use a field on the object variable instead of target or actor. Sometimes, though, this is just not possible, and you want to be able to differentiate between the two at runtime, so that's why they exist.

Miscellaneous Notes

Escaping

Django's template engine has been primarily designed for outputting HTML. The only place in which this really matters is when it comes to escaping content. Plain text and HTML content work fine; however, with other formats like Markdown we need to wrap all the template variables with a custom variable escaping object that escapes everything on the fly. This has a few consequences.

- Most variables will be wrapped in this class.
While the class mostly mimics the behavior of the underlying object, any template filter using isinstancewill fail. - In order to maintain compatibility with template filters, we don’t try to escape any of the basic numeric or date/time objects. For the most part this is okay, but it is theoretically possible to end up with a weird result. - The result of template filters is typically not escaped correctly.
https://django-vox.readthedocs.io/en/latest/templates.html
2019-08-17T10:51:34
CC-MAIN-2019-35
1566027312128.3
[]
django-vox.readthedocs.io
{"_id":"5863f7420355f31900380464",":"2016-12-28T17:32:50.858Z","changelog":[],"body":"# June 9\n**Frame Server 4.5.5**\n+ Resolved an issue where too many requests to change keyboard layout could cause a session to become unresponsive.\n+ Improved session start time.\n+ Resolved an issue where custom variables (as opposed to standard Frame variables) were not set correctly in the new Mount and Unmount custom scripts.\n+ New Feature: Added support for resolutions higher than 2560x1600 (Pro instances only).\n\n**Frame UI**\n+ Resolved an issue where logging in through the Launchpad could cause a \"too many redirects\" warning.\n\n\n# June 8\n\n**Frame UI**\n\n+ New Feature: Super admins can now \"clone\" a Sandbox from one account to another. Please see the [documentation here]().\n+ New Feature: Super admins can now enable \"Allow Universal Sign-in\" to allow users with multiple accounts to login to any of those accounts, one at a time, from a single sign-on page. This feature is disabled by default.\n+ Resolved an issue where the wait animation did not clear after bulk inviting team members or hard deleting team members.\n+ Resolved an issue where end-user subscription plans sometimes did not show instance types or allow selecting valid instance types for a particular subscription plan.\n+ Resolved an issue where Platform Admin Professional accounts were unable to access a newly added Launchpad.\n\n**Frame Terminal**\n+ Resolved an issue where a terminal would briefly reappear after disconnecting from a session.\n+ Resolved an issue where \"Power Off\" would sometimes appear to \"Disconnect\" instead.\n+ New Feature: The Frame App API now includes a way to test for web socket support before starting a session.\n+ Resolved an issue where the terminal did not perform as well when launched from the Launchpad as it performed when launched in other ways.\n+ Resolved an issue where setting Dynamic resolution to 150% on a Pro 16GB would sometimes seem to \"hang.\"\n+ Resolved a layout issue where Launchpad icons would overlap each other after returning from a resized terminal.\n+ Improved the readability of keyboard layout information, so that it looks more like Windows presents keyboard layouts.\n+ Resolved an issue where dynamic resolution didn't work as well for very low resolutions (e.g. 400x350).\n+ Resolved an issue where the session duration warning was still showing after the session duration was already over and the session had closed.","slug":"jun-4-jun-10-2016","title":"Jun 4-Jun 10, 2016"}
https://docs.fra.me/blog/jun-4-jun-10-2016
2019-08-17T11:52:20
CC-MAIN-2019-35
1566027312128.3
[]
docs.fra.me
Setup When you set Logz.io to fetch Elastic Load Balancing logs, Logz.io will periodically read logs from the configured S3 bucket. Elastic Load Balancing logs are useful for application usage intelligence and monitoring. You’ll need: s3:ListBucket and s3:GetObject permissions for the required S3 bucket (one bucket per region) Configuration Send your logs to an S3 bucket Logz.io fetches your Elastic Load Balancing logs from an S3 bucket. For help with setting this up, see these docs from AWS: - For Application Load Balancer, see Access Logs for Your Application Load Balancer. - For Network Load Balancer, see Monitor Your Network Load Balancers. - For Classic Load Balancer, see Enable Access Logs for Your Classic Load Balancer. Add the S3 bucket information To use the S3 fetcher, fill out the S3 bucket information on the Elastic Load Balancing.
https://docs.logz.io/shipping/log-sources/elastic-load-balancing.html
2019-08-17T10:47:37
CC-MAIN-2019-35
1566027312128.3
[]
docs.logz.io
The MyParcel.com API consists mostly of RESTful endpoints that work with resources as described in the json-api standard. However, there are situations where it makes more sense to step away from that standard. In these cases, RPC endpoints might be a more fitting solution. These endpoints allow you to call an action on the API for a specific purpose that would not make sense in a resource-oriented structure. While RESTful endpoints consist solely of nouns, RPC endpoints take the following form:
verb-noun
For example, to retrieve suggestions for an address, you would make a POST request to /suggest-address. These endpoints use either GET or POST. The content type of the request and response will be application/json, not the json-api specific content type the resource endpoints use. When there is an error during an RPC request, it will still return a json-api error with the content type application/vnd.api+json.
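To make this concrete, here is a minimal sketch of such a call using Java's built-in HTTP client. The host name and the payload field names are assumptions for illustration only; the live API will also expect authentication (such as an Authorization header), which is omitted here.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SuggestAddressExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical request body; the actual field names are defined by the API.
        String body = "{\"country_code\":\"NL\",\"postal_code\":\"2131BC\",\"street_number\":679}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.myparcel.com/suggest-address")) // assumed host
                .header("Content-Type", "application/json") // plain JSON, not application/vnd.api+json
                .header("Accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // address suggestions, or a json-api error document on failure
    }
}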
https://docs.myparcel.com/api/rpc-endpoints/
2019-08-17T11:15:52
CC-MAIN-2019-35
1566027312128.3
[]
docs.myparcel.com
public interface TransactionManagementConfigurer
Interface to be implemented by @Configuration classes annotated with @EnableTransactionManagement that wish to, or need to, explicitly specify the default PlatformTransactionManager bean to be used for annotation-driven transaction management, as opposed to the default approach of a by-type lookup. One reason this might be necessary is if there are two PlatformTransactionManager beans. This is even generally preferred, since it doesn't lead to early initialization of the PlatformTransactionManager bean..
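For illustration, a minimal sketch of a configuration class that implements this interface follows. The DataSource wiring and bean names are assumptions, and in recent framework versions the overridden method may be declared to return TransactionManager rather than PlatformTransactionManager.

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.annotation.TransactionManagementConfigurer;

@Configuration
@EnableTransactionManagement
public class TxConfig implements TransactionManagementConfigurer {

    // Assumes a DataSource bean is defined elsewhere (for example, a connection pool).
    private final DataSource dataSource;

    public TxConfig(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Bean
    public PlatformTransactionManager txManager() {
        return new DataSourceTransactionManager(dataSource);
    }

    // Called by the framework to resolve the transaction manager used for
    // annotation-driven (@Transactional) methods, instead of a by-type lookup.
    @Override
    public PlatformTransactionManager annotationDrivenTransactionManager() {
        return txManager();
    }
}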
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/annotation/TransactionManagementConfigurer.html
2019-08-17T11:19:00
CC-MAIN-2019-35
1566027312128.3
[]
docs.spring.io
exponentialMovingAverage() function The exponentialMovingAverage() function calculates the exponential moving average of values in the _value column grouped into n number of points, giving more weight to recent data. Function type: Aggregate
exponentialMovingAverage(n: 5)
Exponential moving average rules
- The first value of an exponential moving average over n values is the algebraic mean of n values.
- Subsequent values are calculated as y(t) = x(t) * k + y(t-1) * (1 - k), where:
  - y(t) is the exponential moving average at time t.
  - x(t) is the value at time t.
  - k = 2 / (1 + n).
- The average over a period populated by only null values is null.
- Exponential moving averages skip null values.
Parameters n The number of points to average. Data type: Integer
Examples
Calculate a five point exponential moving average
from(bucket: "example-bucket")
  |> range(start: -12h)
  |> exponentialMovingAverage(n: 5)
Table transformation with a two point exponential moving average
Query:
// ...
  |> exponentialMovingAverage(n: 2)
https://v2.docs.influxdata.com/v2.0/reference/flux/functions/built-in/transformations/aggregates/exponentialmovingaverage/
2019-08-17T10:47:32
CC-MAIN-2019-35
1566027312128.3
[]
v2.docs.influxdata.com
You must specify the country or region in which the system is to be used by typing its international dialing code. For example, for the U.S. or Canada, type 1; for France, type 33. There is no default value for country or region. Once you have made your choice, the telephony configuration wizard excludes choices of values for other items that are inappropriate for the country or region that you have selected. Parameters that seriously affect the operation of the system are set according to the country or region selected; this might affect compliance with telecommunications authority regulations, and must only be done by authorized personnel familiar with these requirements. The choice of country or region dictates whether the system is E1 or T1. In an E1 system, each trunk has 30 channels; in a T1 system, each trunk has 24 channels.
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.config.doc/countryregion.html
2019-08-17T10:47:27
CC-MAIN-2019-35
1566027312128.3
[]
docs.blueworx.com
[CE] Manage content and appearance
1. Manage ads categories and locations
Category
Ad Category allows you to sort posts according to categories. Create as many categories as you need, and edit or remove them easily whenever you have to.
To add a category, hit the Add New [+] button, enter the category name in the box, and press Enter on your keyboard. You can add sub-categories by hitting the [+] button on the right of the main category.
To remove a category, hit the [X] button. Please note that a parent category cannot be deleted unless all of its sub-categories have been deleted. You also need to move the contents of the category to be removed to another category in the list.
Locations
Ad Locations will help your visitors filter their desired ads according to city, state, or country.
To add a location, hit the Add New [+] button, enter the location in the box, and press Enter on your keyboard. You can add a sub-location by hitting the Add [+] button on the right of the main location.
To remove a location, hit the [X] button. Please note that a parent location cannot be deleted unless all of its sub-locations have been deleted. You must also move the contents of the location to be deleted to another location in the list.
2. Customize the theme in the front-end
ClassifiedEngine (CE) supports quick customization in the front-end to help you easily manage your site's appearance, including layout, font, and color. To customize your site, click the Active Customization Mode icon located at the middle left of the page. It will display three parts to modify:
1. Color Schemes. This allows you to change your site's color scheme. There are eight colors available for your choice.
2. Page Options. This enables you to change your site's layout, background patterns, hyperlink color, and background colors of the header, page, and footer.
- Layout Style. Choose whether a one-column, two-column with left sidebar, or two-column with right sidebar layout is best for your site.
- Background Patterns. ClassifiedEngine provides eight simple background patterns to give your site a sleek look.
- Colors. You can change the colors of your site's header, page, and footer backgrounds to achieve a color-coordinated or distinct style.
3. Content Options. This is where you can change the font style and font size of your site's heading and content.
Remember to click Save to apply the changes to your site.
https://docs.enginethemes.com/article/133-2-7-manage-content-and-appearance
2019-08-17T10:52:14
CC-MAIN-2019-35
1566027312128.3
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/570cc02ec697917553cf83c8/file-1aapJUxcii.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/570cc06990336008d09db594/file-nxeLvBe3iD.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/570cc04690336008d09db593/file-0n3DldLwZT.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/570cc0b3c697917553cf83cf/file-1SkvJMdCrv.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/570cc413c697917553cf83ea/file-ifvNmhmw6f.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/570cc43390336008d09db5b6/file-VSuKe2J1Ph.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/570cc44a90336008d09db5b8/file-W4PBZEH0En.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/570cc4adc697917553cf83f3/file-s4mrcn0koX.png', None], dtype=object) ]
docs.enginethemes.com
Software Copyrights¶ All original source code in this repository is Copyright (C) 2015-2018 Espressif Systems. This source code is licensed under the ESPRESSIF MIT License as described in the file LICENSE. Additional third party copyrighted code is included under the following licenses: - esp-stagefright is Copyright (c) 2005-2008, The Android Open Source Project, and is licensed under the Apache License Version 2.0. Please refer to the COPYRIGHT in ESP-IDF Programming Guide Where source code headers specify Copyright & License information, this information takes precedence over the summaries made here.
https://docs.espressif.com/projects/esp-adf/en/latest/COPYRIGHT.html
2019-08-17T10:38:14
CC-MAIN-2019-35
1566027312128.3
[]
docs.espressif.com
{"_id":"59dfb0801a63a80024086bc-12T18:12:16.304Z","changelog":[],"body":"## October 6\n\n**Frame UI**\n+ Resolved an issue where advanced launch parameters were not being copied when using the Copy Settings feature as a super admin\n\n+ Resolved a minor display issue when publishing apps.\n\n+ Resolved an issue where inactivity timeout could be set to a negative value.\n\n+ Resolved an issue which could result in an unintended installer window being displayed.\n\n**Frame Gateway**\n\n+ Resolved an issue with changing a VNet with Azure accounts.\n\n## October 2\n\n**Frame Terminal**\n\n+ Resolved an issue where FrameApp API was not displaying the optional header.\n\n**Frame UI**\n\n+ Resolved an issue with editing applications in Manage Windows Apps view\n\n## Sept 30\n**Frame UI**\n\n+ Resolved an issue where admins' pre-authorized invitations were not being generated for their users.","slug":"sept-30-oct-6-2017","title":"Sept 30 - Oct 6, 2017"}
https://docs.fra.me/blog/sept-30-oct-6-2017
2019-08-17T11:50:34
CC-MAIN-2019-35
1566027312128.3
[]
docs.fra.me
One event can not be accessed from multiple accounts, but users can share their events with others. Here's how: - Step 1: Click on "more settings" in your event's settings - Step 2: Turn on the “share my event” button and save the changes. - Step 3: Share your event's code with your colleagues (i.e.: TUTORIAL) To import your event, your colleagues need only log into their account, click on Import Event and fill in the code you gave them. Note that the new event on your colleague's account is separate from the original. Any changes made to either event will not be synchronised on the other. Was this article helpful? Let us know below so we may better help you and other users in the future! The Wooclap Team
https://docs.wooclap.com/en/articles/1301019-can-you-collaborate-on-or-share-an-event-with-other-users
2019-08-17T11:23:25
CC-MAIN-2019-35
1566027312128.3
[array(['https://downloads.intercomcdn.com/i/o/110325255/78d97942eb5756398971dda8/More+settings.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/110325970/cb1f02c026cfbfaf374b1042/Share+event.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/110326155/a7b07d4bba86de1259fe19b8/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/110326227/95f0d67ac46c2d059770af5e/image.png', None], dtype=object) ]
docs.wooclap.com
Flux built-in selector functions Flux’s built-in.
https://v2.docs.influxdata.com/v2.0/reference/flux/functions/built-in/transformations/selectors/
2019-08-17T11:26:16
CC-MAIN-2019-35
1566027312128.3
[]
v2.docs.influxdata.com
Builder FAQ How do scene limits work? What are triangles? How about materials? What’s going on? Genesis City is a really, really big place. In order to make sure everyone has a smooth experience, there’s a limit to how much stuff each scene can hold. In the bottom left corner of the Builder, if you click on the set of squares, you’ll find a little list explaining what each of these limits is, and how far along you are toward reaching each one. Let’s take a look at each of these: - Geometries: these define different simple shapes, like a box or a wheel. - Bodies: a body is just a copy of a geometry. For example, a bike might have three bodies: the frame and two wheels. By copying similar geometries, we can save resources. - Triangles: each surface of a body is shaped like a triangle. More complex models have more triangles than simpler models. - Materials: materials make your scenes more realistic by describing how a model or shape should look. They change the way light is reflected (or emitted) from different models, and can include one or more textures. - Textures: these are the images used in materials. Textures are images of different patterns and colors - like wood, stone, or grass. - Entities: an entity can include one or more bodies, like the bike in the example above. Entities include everything you need for an asset: the geometries, bodies, materials, and textures. I can’t submit my scene to the contest. There are two possible reasons why you can’t submit your scene: - You’re getting a network error bug. This is likely a problem with the scene size. We are aware of the bug and are working on a fix. You should be able to submit other scene sizes (especially even by even scenes). - The button is disabled because some of your models are falling off your parcel! First, try refreshing or re-opening your project. If that doesn’t work, you might have a model hanging over the edges of your scene. Even if it’s barely out-of-bounds, it’ll still flash blue and red. Watch out! Some of these trouble models might be hiding in other objects you’ve put on the edge of your parcel (like trees!). Can I upload custom assets? Right now, the Builder doesn’t let you import custom models from places like Sketchfab. One reason for this is to make the playing field more level for the Creator Contest. Decentraland hopes to add support for custom assets in the future, after the Creator Contest. Can I deploy my scene to my land? Not yet; however, this is a planned feature. The Builder is made to create scenes for Decentraland, so our top priority is making sure you can deploy your scenes to your LAND. Can I move items underground? No, objects cannot be moved below the ground, except in some cases via rotation. You can rotate an object, and have part of the model extend below the ground. How do I add images to my scene? You can’t import, upload, or paste images into the Builder right now. Can I share my scenes with other Builder users? Not yet, but we know you want to! We’re working on ways to support this in the future. Stay tuned! (In the meantime, try entering preview mode to capture some cool screenshots. Press ‘F’ to fly and get that bird’s-eye view.) How do I save projects? Projects save automatically to your local storage. Don’t use the Builder in Incognito/Private Browsing Mode, and don’t clear your cache on exit (this is almost never done unless you are doing it intentionally). Can I export my scenes? 
The current Builder can’t export your scenes, but we’re planning an export tool in a later release. Can I import scenes from the SDK? Not yet, the Builder handles scenes differently than the SDK, so it doesn’t make sense to import scenes. We’re working on bridging this gap. Can I group objects? We hear you loud and clear, and want to see this tool soon ourselves! Placing lots of similar objects (like trees) or using structures to make buildings would be way easier with a grouping tool, so we’re working on a solution as we type. Can I snap/attach items to other items? No, but you can press and hold Shift for more precise placement when moving objects. How does Preview mode work? Can I fly? Use the W, A, S, and D keys to move around in Preview mode and press F to toggle Fly Mode. If you can’t move, you may be stuck in an object. Changing where you spawn (enter the scene) is a feature we have planned for the future. Will there be more floors, walls, and doors? After the contest ends, we will be releasing way more asset packs, and you’ll even be able to vote on upcoming packs on Agora, Decentraland’s community voting platform! Can I pick the color or texture of items? Right now, all of the models come with one texture, but we agree that it’d be awesome to have more control over each model’s appearance. The devs are planning ways to change your models’ colors for an upcoming version of the Builder.
https://docs.decentraland.org/decentraland/builder-faq/
2019-08-17T11:30:21
CC-MAIN-2019-35
1566027312128.3
[]
docs.decentraland.org
Allowing users to modify backup interval for laptops Allowing users to modify backup interval As an administrator, you can allow users to modify the interval between two automatic backups. This gives users the opportunity to increase or decrease the frequency of the backups per their preference. Note: This functionality is managed through profiles and not at an individual user level. Once the users have set their own backup interval, you will not be able to force a backup interval on them by changing their profile settings. Even if you move such users to a different profile, backups will continue to be triggered based on the interval set by the users. Procedure To allow users to modify backup interval - On the menu bar, click Manage > Profiles. - Click the profile that you want to modify. - Under the Laptop Backup Schedule tab, in the Laptop Backup Schedule area, click Edit. The Edit Profile window appears. - Select Allow users to change schedule. - Click Save.
https://docs.druva.com/003_inSync_Enterprise/5.3.1/040_Data_Backup_and_Restore/050_Configuring_backup_schedule/040_Allowing_users_to_modify_backup_interval_for_laptops
2019-08-17T11:21:06
CC-MAIN-2019-35
1566027312128.3
[]
docs.druva.com
public interface State
java.lang.String getName()
FlowExecutionStatus handle(FlowExecutor executor) throws java.lang.Exception
Parameters: executor - the context passed in by the caller
Throws: java.lang.Exception - if anything goes wrong
boolean isEndState()
Inquire as to whether this State is an end state. Implementations should return false if processing can continue, even if that would require a restart.
Returns: true if this State is the end of processing
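As an illustration, here is a minimal sketch of an implementing class: a no-op state that simply reports completion. A real implementation would use the executor (the context passed in by the caller) to run a step or make a decision before returning a status.

import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.FlowExecutor;
import org.springframework.batch.core.job.flow.State;

// A trivial pass-through state: it does no work of its own and never ends the flow.
public class NoOpState implements State {

    private final String name;

    public NoOpState(String name) {
        this.name = name;
    }

    @Override
    public String getName() {
        return name;
    }

    @Override
    public FlowExecutionStatus handle(FlowExecutor executor) throws Exception {
        // A real state would use the executor here before deciding on a status.
        return FlowExecutionStatus.COMPLETED;
    }

    @Override
    public boolean isEndState() {
        return false; // processing can continue after this state
    }
}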
https://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/core/job/flow/State.html
2019-08-17T10:47:08
CC-MAIN-2019-35
1566027312128.3
[]
docs.spring.io
public class AppCacheManifestTransformer extends ResourceTransformerSupport
A ResourceTransformer implementation that helps with handling resources within HTML5 AppCache manifests for HTML5 offline applications. This transformer rewrites links in the manifest to match the public URL paths exposed through the configured ResourceResolver strategies, and appends a content-based hash comment to the manifest. This hash is computed using the content of the appcache manifest and the content of the linked resources; so changing a resource linked in the manifest, or the manifest itself, should invalidate the browser cache.
Methods inherited: getResourceUrlProvider, resolveUrlPath, setResourceUrlProvider; clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Constructors: public AppCacheManifestTransformer() public AppCacheManifestTransformer(String fileExtension)
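For context, here is a sketch of how this transformer is typically registered on a resource chain in a Spring MVC configuration. The handler pattern, resource location, and version strategy are assumptions chosen for illustration.

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;
import org.springframework.web.servlet.resource.AppCacheManifestTransformer;
import org.springframework.web.servlet.resource.VersionResourceResolver;

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/static/**")
                .addResourceLocations("classpath:/static/")
                .resourceChain(true)
                // Content-based versioning gives the transformer hashes to embed in manifest links.
                .addResolver(new VersionResourceResolver().addContentVersionStrategy("/**"))
                .addTransformer(new AppCacheManifestTransformer());
    }
}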
https://docs.spring.io/spring/docs/4.3.23.RELEASE/javadoc-api/org/springframework/web/servlet/resource/AppCacheManifestTransformer.html
2019-08-17T10:50:45
CC-MAIN-2019-35
1566027312128.3
[]
docs.spring.io
To use the Google Spreadsheet connector, add the <googlespreadsheet.init> element in your proxy configuration before using any other Google Spreadsheet operations. The <googlespreadsheet.init> element is used to authenticate the user using OAuth2 authentication and allows the user to access the Google account which contains the spreadsheets. For more information on authorizing requests in Google Spreadsheets, see. ... To get the OAuth access token, directly call the init method (this method calls the getAccessTokenFromRefreshToken method itself) or add the <googlespreadsheet.getAccessTokenFromRefreshToken> element before the <googlespreadsheet.init> element in your configuration. ...
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=57738752&selectedPageVersions=14&selectedPageVersions=13
2019-08-17T10:34:05
CC-MAIN-2019-35
1566027312128.3
[]
docs.wso2.com
How do I get more languages? For the system to operate in another language, you must do the following: Always add the language (described in Defining additional languages ). If you need window text in the language, translate window text into the new language (described in Using Blueworx Voice Response to translate window text and Using another editor to translate display text ). If people want the translated window text to display in the windows, add administrator profiles that specify the language as the preferred language (described in Giving people access to Blueworx Voice Response ). If the language is supplied, import it. If the language is not supplied, record new voice segments and create new system prompts as required. See the Blueworx Voice Response for AIX : Designing and Managing State Table Applications information for information about importing languages and for an introduction to voice segments and prompts. If the system prompts are not available in the new language but voice application developers need them, translate the prompts (described in the Blueworx Voice Response for AIX : Designing and Managing State Table Applications information ). Parent topic: About additional languages
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.config.doc/howdoigetmorelanguages6.html
2019-08-17T11:06:52
CC-MAIN-2019-35
1566027312128.3
[]
docs.blueworx.com
Helper class for building or manipulating URI references. Not safe for concurrent use.
An absolute hierarchical URI reference follows the pattern: <scheme>://<authority><absolute path>?<query>#<fragment>
Relative URI references (which are always hierarchical) follow one of two patterns: <relative or absolute path>?<query>#<fragment> or //<authority><absolute path>?<query>#<fragment>
An opaque URI follows this pattern: <scheme>:<opaque part>#<fragment>
Use buildUpon() to obtain a builder representing an existing URI.
Constructs a new Builder.
Appends the given segment to the path.
Encodes the given segment and appends it to the path.
Encodes the key and value and then appends the parameter to the query string.
Encodes and sets the authority.
Constructs a Uri with the current attributes.
Clears the previously set query.
Sets the previously encoded authority.
Sets the previously encoded fragment.
Sets the previously encoded opaque scheme-specific-part.
Sets the previously encoded path. If the path is not null and doesn't start with a '/', and if you specify a scheme and/or authority, the builder will prepend the given path with a '/'.
Sets the previously encoded query.
Encodes and sets the fragment.
Encodes and sets the given opaque scheme-specific-part.
Sets the path. Leaves '/' characters intact but encodes others as necessary. If the path is not null and doesn't start with a '/', and if you specify a scheme and/or authority, the builder will prepend the given path with a '/'.
Encodes and sets the query.
Sets the scheme.
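As a usage sketch in Java, the following builds a hierarchical URI and then derives a modified copy with buildUpon(); the host, path, and parameter values are placeholders chosen for illustration.

import android.net.Uri;

final class UriBuilderExample {
    static Uri buildSearchUri() {
        // Builds an absolute hierarchical URI: https://example.com/search?q=android#results
        return new Uri.Builder()
                .scheme("https")
                .authority("example.com")
                .appendPath("search")
                .appendQueryParameter("q", "android")
                .fragment("results")
                .build();
    }

    static Uri nextPage(Uri original) {
        // buildUpon() returns a Builder pre-populated with the existing URI's parts,
        // so only the additional query parameter needs to be appended.
        return original.buildUpon()
                .appendQueryParameter("page", "2")
                .build();
    }
}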
http://docs.sumile.cn/android/reference/android/net/Uri.Builder.html
2019-08-17T10:41:41
CC-MAIN-2019-35
1566027312128.3
[]
docs.sumile.cn
Table 5.17, “Layout support for incrementally modified graphs” lists the layout algorithms that provide support for incremental graph layout (see also Table 5.18). Partial layout is a related concept that also allows you to lay out distinct parts of a diagram. With this concept, it is possible to use completely different layout styles for parts of a diagram and add the results to the original, unaltered remainder of the layout. Layout algorithms that provide support for incremental layout, however, will often yield a more sound and truly integrated overall layout of a diagram.
http://docs.yworks.com/yfiles/doc/developers-guide/incremental_layout.html
2018-01-16T11:31:33
CC-MAIN-2018-05
1516084886416.17
[]
docs.yworks.com
Ambari and Hadoop have many advanced security options. This guide provides information on configuring Ambari and Hadoop for strong authentication with Kerberos, as well as other security options. Configuring Ambari and Hadoop for Kerberos Configuring Ambari for LDAP or Active Directory Authentication Configuring Ambari for Non-Root Optional: Encrypt Database and LDAP Passwords Optional: Set Up SSL for Ambari Optional: Set Up Two-Way SSL Between Ambari Server and Ambari Agents Optional: Configure Ciphers and Protocols for Ambari Server
https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-security/content/ch_amb_sec_guide.html
2018-04-19T15:39:39
CC-MAIN-2018-17
1524125936981.24
[]
docs.hortonworks.com
If the metadata facilities provided above are not sufficient, then a developer can extend the MetadataRepository class provided in the org.teiid.api jar to plug their own metadata facilities into the Teiid engine. For example, a user can write a metadata facility that is based on reading data from a database or a JCR repository. See Setting up the build environment to start development. For example: Then build a JAR archive with the above implementation class and create a file named org.teiid.metadata.MetadataRepository in the META-INF/services directory with contents: Once the JAR file has been built, it needs to be deployed in JBoss AS as a module under the <jboss-as>/modules directory. Follow the steps below to create a module.
- Create a directory <jboss-as>/modules/com/something/main
- Under this directory, create a "module.xml" file that looks like
- Copy the JAR file into this same directory. Make sure you add any additional dependencies required by your implementation class under dependencies.
- Restart the server
The XML fragment below shows how to configure the VDB with the custom metadata repository created. Now when this VDB gets deployed, it will call the CustomMetadataRepository instance for the metadata of the model. Using this, you can define metadata for a single model or for the whole VDB programmatically. Be careful about holding state and synchronization in your repository instance. Development Considerations
- MetadataRepository instances are created on a per-VDB basis and may be called concurrently for the load of multiple models.
- See the MetadataFactory and the org.teiid.metadata package javadocs for metadata construction methods and objects. For example, if you use your own DDL, call the MetadataFactory.parse(Reader) method. If you need access to files in a VDB zip deployment, use the MetadataFactory.getVDBResources method.
- Use the MetadataFactory.addPermission and MetadataFactory.addColumnPermission methods to grant permissions on the given metadata objects to the named roles. The roles should be declared in your vdb.xml, which is also where they are typically tied to container roles.
https://docs.jboss.org/author/display/TEIID/Custom+Metadata+Repository
2018-04-19T15:45:22
CC-MAIN-2018-17
1524125936981.24
[]
docs.jboss.org
Lens extensibility for Windows Phone 8 [ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ] In Windows Phone 8, you can create a camera app called a lens. A lens opens from the built-in camera app and launches right into a viewfinder experience to help the user capture the moment. All lenses must register for a lens extension to appear in the lens picker. It’s the responsibility of your app to ensure that it opens to a viewfinder experience when it is launched from the lens picker. You also need to create new icons to use specifically for the lens picker. This topic describes how to incorporate lens extensibility into your app. For info about designing a lens app, see Lens design guidelines for Windows Phone. Step 1: Prepare icons for the lens picker The lens picker requires icons that are a different resolution than the app icon. Your app must provide three icons in the Assets folder, one for each of the three phone resolutions. For more info about these icons, see Lens design guidelines for Windows Phone. Step 2: Register for a lens extension To integrate with the lens experience, register for the Camera_Capture_App extension. This extension declares to the operating system that your app can display a viewfinder when it is launched from the lens picker. It also is used by the Windows Phone Store to identify lenses and display them in the lens picker. Extensions are specified in the WMAppManifest.xml file. Just after the Tokens element, inside the Extensions element, the lens extension is specified with the following Extension element. <Extension ExtensionName="Camera_Capture_App" ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5631}" TaskID="_default" /> The Windows Phone Manifest Designer does not support extensions; use the XML (Text) Editor to edit them. For more info, see How to modify the app manifest file for Windows Phone 8. Step 3: Handle a launch from the lens picker It’s the responsibility of your app to ensure that it opens to a viewfinder experience when launched from the lens picker. When a user taps your app in the lens picker, a deep link URI is used to take the user to your app. You can either let the URI launch your default page (MainPage.xaml, for example) or use a URI mapper to launch a different page. This step describes both cases. Launch your app to the default page If you have only one page in your app and that page displays a viewfinder, no URI mapping is required. Your app launches to the page that is specified in the DefaultTask element of the app manifest file. Note that when you create a new Windows Phone app, MainPage.xaml is specified as the launch page by default. Launch your app to a different page If your default launch page doesn’t provide a viewfinder, use URI mapping to take the user to a page in your app that does have a viewfinder. To map a launch from the lens picker to a specific page in your app, we recommend that you create your own URI mapper class based on the UriMapperBase class (in the System.Windows.Navigation namespace). In the URI mapper class, override the MapUri(Uri) method to map incoming URIs to pages in your app. For example, the following code looks for a URI that contains the string ViewfinderLaunch. If the URI mapper finds the string, it takes the user to a page that displays a viewfinder named viewfinderExperience.xaml. If it doesn’t find that string, it returns the incoming URI in its original state. 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Navigation;

namespace LensExample
{
    class LensExampleUriMapper : UriMapperBase
    {
        public override Uri MapUri(Uri uri)
        {
            // If the URI contains "ViewfinderLaunch", the app was opened from the
            // lens picker, so navigate straight to the viewfinder page.
            if (uri.ToString().Contains("ViewfinderLaunch"))
            {
                return new Uri("/viewfinderExperience.xaml", UriKind.Relative);
            }

            // Otherwise, return the incoming URI in its original state.
            return uri;
        }
    }
}

After you create a URI mapper class for your app, assign it to the frame of the app in the App.xaml.cs file. The following example shows how you can do this.

// Assign the lens example URI-mapper class to the application frame.
RootFrame.UriMapper = new LensExampleUriMapper();

This code assigns the LensExampleUriMapper class to the UriMapper property of the app frame. Don't modify any of the existing code in the InitializePhoneApplication method; add only the UriMapper assignment, as shown in the following example.

private void InitializePhoneApplication()
{
    // ... existing frame-initialization code ...

    // Assign the lens example URI-mapper class to the application frame.
    RootFrame.UriMapper = new LensExampleUriMapper();

    // Handle navigation failures
    RootFrame.NavigationFailed += RootFrame_NavigationFailed;

    // Ensure we don't initialize again
    phoneApplicationInitialized = true;
}

When the app is launched from the lens picker, it assigns the URI mapper during initialization. Before launching any pages, the app calls the MapUri method of the URI mapper to determine which page to launch. The URI that the URI mapper returns is the page that the app launches.
See Also
Other Resources
Photo extensibility for Windows Phone 8
Capturing photos for Windows Phone 8
Additional requirements for specific app types for Windows Phone
Lens design guidelines for Windows Phone
How to create a base camera app for Windows Phone 8
Advanced photo capture for Windows Phone 8
https://docs.microsoft.com/en-us/previous-versions/windows/apps/jj662936(v=vs.105)
2018-04-19T16:39:39
CC-MAIN-2018-17
1524125936981.24
[]
docs.microsoft.com
Welcome to Splunk Enterprise 6.3 If you are new to Splunk Enterprise, read the Splunk Enterprise Overview. For system requirements information, see the Installation Manual. Before proceeding, review the Known Issues for this release. Splunk Enterprise 6.3 was first released to customers on September 22, 2015. Planning to upgrade from an earlier version? If you plan to upgrade from an earlier version of Splunk Enterprise to version 6.3, read "How to upgrade Splunk Enterprise" in the Installation Manual for important information you need to know before you upgrade. What's New in 6.3 Platform - Search Parallelization. Optimized CPU utilization for faster search execution. See "Manage report acceleration", "Accelerate data models", and "Configure batch mode search" in the Knowledge Manager Manual. - Index Parallelization. Optimized CPU utilization for faster data ingestion. - Intelligent Job Scheduling. Intelligent job scheduling provides improved system utilization and predictable performance. See "Configure the priority of scheduled reports" in the Reporting Manual. - Data Integrity Control. Data integrity control ensures that indexed data has not been modified. See "Manage data integrity" in the Securing Splunk Enterprise manual. - Single Sign-On Using SAML. Support for SAML 2.0 for single sign-on using PingFederate as the Identity Provider. See "About single sign-on using SAML" in the Securing Splunk Enterprise manual. - Search Head Clustering Improvements. Performance optimization, scalability, and management improvements. Support for Windows OS. - Indexer Clustering Improvements. Ability to turn off search affinity. See "Implement search affinity in a multisite indexer cluster" in the Managing Indexers and Clusters of Indexers manual. - HTTP Event Collector. Indexing of high-volume JSON-based application and IOT data sent directly via a secure, scalable HTTP endpoint. No Forwarder required. See "Use the HTTP Event Collector" in the Getting Data In manual. - Custom Alert Actions. Customizable alert actions and packaged integrations with popular third-party applications or messaging systems.. Management and Administration - HTTP Event Collector Configuration. Create and manage configurations for the HTTP Event Collector. See "Use the HTTP Event Collector" in the Getting Data In manual. - Source Type Manager. Create and manage source type configurations independent of getting data in, and search within the source type picker. See "Manage source types" in the Getting Data In manual. - Powershell Input. Native support for ingesting data retrieved by Powershell scripts. See the Splunk Add-on for Microsoft PowerShell manual. - App Browsing Interface. Automates and simplifies app and add-on discovery within Splunk Web. - Indexer Auto-Discovery. Forwarders now dynamically retrieve indexer lists from cluster master to enable elastic deployments. See "Use indexer discovery to connect forwarders to peer nodes" in the Managing Indexers and Clusters of Indexers manual. - Distributed Management Console. New topology views, status, and alerting for Splunk platform deployments including: indexers, search heads, forwarders, and storage utilization. See "About the distributed management console" in the Distributed Management Console Manual. - Field Extractor Enhancements. Simplified field extraction via delimiter and header selection. Displays field extractions within the event preview. See "Build field extractions with the field extractor" in the Knowledge Manager Manual. 
- Search Process Memory Usage Threshold. New configuration parameters to specify the maximum physical memory usage that a single search process can consume. See the search_process_memory_usage_thresholdand search_process_memory_usage_percentage_thresholdstanzas in "limits.conf" in the Admin Manual. Usability - Single Value Display. Support for at-a-glance, single-value indicators with historical context and change indicators. See the "Single value visualizations" section of "Visualization Reference" in the Dashboards and Visualizations manual. - Geospatial Visualization. Support for choropleth maps to visualize how a metric varies across a customizable geographic area. See "Mapping data" in the Dashboards and Visualizations manual. - Dashboard Enhancements. More powerful dashboards with extended search and token management. See "Token usage in dashboards" in the Dashboards and Visualizations manual. - Search History. View and interact with ad-hoc search command history. See "View and interact with your Search History" in the Search Manual. - Anomaly Detection. New SPL command that offers histogram based approach for detecting anomalies. Also includes the capabilities of existing anomalousvalue and outlier SPL commands. See "anomalydetection" in the Search Reference manual. - Search Helper Improvements. Re-architected to improve responsiveness. Developer - Java logger Support for HTTP Event Collector. Adds support for log4j, logback and java.util.logging to allow logging from Java apps over HTTP. - .NET Logger support for HTTP Event Logger. Adds support for the .NET Trace Listener API and SLAB (Semantic Logging Application Block) to allow logging from apps over HTTP. - Custom Alert Actions. Allows developers to build, package, and integrate custom alert actions as native to Splunk software.. Documentation The Splunk Enterprise 6.3 release includes one new manual and several enhancements to key areas of existing content. - The Distributed Management Console Manual provides dedicated information on the distributed management console that was introduced in Splunk Enterprise 6.2. - The Distributed Deployment Manual has been substantially expanded to provide enhanced guidance on implementing, maintaining, and expanding a distributed deployment. In particular, it now features a set of end-to-end implementation frameworks for common deployment scenarios. - The Getting Data In manual has been reorganized to provide faster access to the information you need to get your data into Splunk Enterprise. The manual includes information on updated features, and content within the book has been reorganized to make procedures easier to understand and follow. - The Forwarding Data manual has been updated to make the installation instructions for the universal forwarder more accessible, and to better group and clarify universal forwarder concepts and activities in deployments of the Splunk platform. New REST APIs This release includes the following updates to the REST API. 
- data/inputs/http - data/inputs/http/{name} - data/inputs/http/{name}/disable - data/inputs/http/{name}/enable - licenser/usage - services/admin/SAML-groups - services/admin/SAML-idp-metadata - services/admin/SAML-sp-metadata - services/collector/event - services/collector/mint - services/data/ui/alerts - servicesNS/{user}/{app}/data/ui/alerts - services/server/introspection/search/dispatch/Bundle_Directory_Reaper - services/server/introspection/search/dispatch/Dispatch_Directory_Reaper - services/server/introspection/search/dispatch/Search_StartUp_Time - services/server/introspection/search/distributed - services/server/introspection/search/saved - services/search/scheduler - services/search/scheduler/status The REST API Reference Manual describes the endpoints. This documentation applies to the following versions of Splunk: 6.3.0, 6.3.1, 6.3.2, 6.3.3 View the Article History for its revisions. Feedback submitted, thanks!
http://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/MeetSplunk
2016-02-06T09:01:17
CC-MAIN-2016-07
1454701146241.46
[]
docs.splunk.com
Revision history of "JDocumentRendererRSS/1.6" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 12:51, 3 May 2013 Wilsonge (Talk | contribs) deleted page JDocumentRendererRSS/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JDocumentRendererRSS== ===Description=== {{Description:JDocumentRendererRSS}} <span class="editsection" style="font-size:76%;"> <nowiki>[..." (and the only contributor was "Doxiki2"))
https://docs.joomla.org/index.php?title=JDocumentRendererRSS/1.6&action=history
2016-02-06T09:41:59
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
JForm::getFormControl Description Method to get the form control. Description:JForm::getFormControl public function getFormControl () - Returns - Defined on line 457 of libraries/joomla/form/form.php - Referenced by See also JForm::getFormControl source code on BitBucket Class JForm Subpackage Form - Other versions of JForm::getFormControl SeeAlso:JForm::getFormControl User contributed notes <CodeExamplesForm />
https://docs.joomla.org/API17:JForm::getFormControl
2016-02-06T09:38:09
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
Rendering menu Render Setup Render Setup dialog Turn on Net Render (Render Output group) Render Rendering menu Render To Texture Render To Texture dialog Turn on Net Render (Render Settings group) Render Rendering menu Video Post Set up a sequence with an Image Output Event Turn on Net Render (Output group) Render Use the Network Job Assignment dialog to name rendering jobs, specify the computers that will participate in the rendering, and submit jobs to the rendering servers. You can submit as many jobs as you like in a single session. Open each file you want to render and submit it following the standard procedure. Each job is placed behind the last one submitted. If you submit a job in which the frame output name is the same as another job in the queue, a warning dialog asks you if you want to overwrite the output frames from the other job. You can divide the work of rendering a single image among any number of rendering servers. This is particularly useful when rendering a single, extremely high-resolution image intended for print. To use this feature, turn on the Split Scan Lines option. To use the Network Job Assignment dialog: The Network Job Assignment dialog is accessible when you turn on the Net Render toggle. The Net Render toggle can be accessed from three different dialogs used for rendering. The Network Job Assignment dialog appears. By default, this is the file name of the current scene. Click the plus (+) button next to the Job Name field to increment the job name. Unlike the plus button in the file dialogs, this button does not automatically launch the job. You see a listing of all servers available for network rendering. Each server is marked with a colored icon to denote its current status: Failed. Try rebooting the server or see Troubleshooting for more information on failed servers. Absent. Verify that the Server is currently running and that it has not been "Disallowed" in the Week Schedule. See “Scheduling the Availability of a Render Node Using the Backburner Monitor” in the Autodesk Backburner User’s Reference. If a rendering Server is running on a workstation that also has an interactive session of 3ds Max, you can still select that machine for rendering. A second copy of 3ds Max is launched to execute the network render. You can view statistics of a particular Server by right-clicking its name and choosing Properties. Enter Subnet Mask/Enter Manager Name or IP Address group When Automatic Search is turned off, enter the name of the Network Manager machine or its IP address. When Automatic Search is on, enter a subnet mask for automatic search. For information on using subnet masks, see Configuring TCP/IP. Connects to the network Manager. 3ds Max preserves the connection as a global setting so that you need to change it only when you want to specify an alternative Manager. If connected to the network manager, click Disconnect to disconnect from the current manager so you can choose a different manager. Updates the Server and Job lists. By default, all servers are used for the job. When the Options group Use All Servers check box is turned off, you can choose one or more servers to render the job. If rendering to a multiple-frame file format, such as an AVI or MOV file, you can choose only one server. Specifies a priority ranking for the job. The lower this setting, the higher the job priority. Default=50. For example, consider a job with priority 1 (Job B) that is submitted to a network manager that's already rendering a job with priority 2 (Job A). 
Because Job B has a higher priority, Job A will be suspended and Job B rendered. When Job B is finished, 3ds Max will resume rendering Job A. If two or more jobs have the same priority, they're executed in order of submission. Sends the job to the head of the queue, preempting the existing jobs. If a server is currently rendering and a critical job is sent to the queue, the server will stop rendering its current job and begin rendering the new, critical job. When finished with the critical job, the server returns to the next job it has been assigned in the queue. Opens the Job Dependencies dialog, which you can use to specify existing jobs that must finish before the current job can start. Lets 3ds Max send rendering-related messages via email. When this is on, its Define button becomes available. For information, see the Notifications dialog topic. Lets you subdivide the rendering of each frame among the rendering servers. This is useful when rendering a single, extremely high-resolution image intended for printing. For information, see the Strips Setup dialog topic. When Split Scan Lines is on, its Define button becomes available. When off, the server attempts to copy the scene file from the manager to the server. If the manager is running on Windows 2000 Professional, only 10 servers will copy the file from the manager; any machines over the limit 10 will use TCP/IP to retrieve the file. When turned on, the servers get the file via TCP/IP only. Default=off. Archives the scene, with all of its maps, any inserted Xrefs and their maps, into a proprietary-format compressed file. The compressed file is sent to each Server, where it is uncompressed into a temporary directory named serverjob in the \network subdirectory of 3ds Max and rendered. Default=off. Use this feature if you have access only to Servers that exist over the Internet or if you have a slow network setup. It is not meant for heavy production use. However, if you don't use it, you must first ensure that all network servers have access to all map and Xref paths referred to in the scene. You choose between using all available servers, all servers in a group, or selected servers. See “Configuring Server Groups” in the Autodesk Backburner User’s Guide for an explanation of how to set up server groups. In a 3ds Max setup it can be useful to set up servers in groups. For example, during busy times you can assign high priority jobs to a group of high performance servers. Allows you to specify an alternate path file in the MXP format that rendering servers can use to find bitmaps that are not found on the primary map paths. When on, you can manually enter the path and file name in the field below the check box, or click the ellipsis button and browse to the MXP file. The Server list, located on the upper-right side of the Network Job Assignment dialog, displays all network rendering servers registered with the network manager after you connect to the manager. There are two types of tabs in the Server list: If more groups are available than can fit in the space above the list, arrow buttons for scrolling the group list horizontally appear above the list's top-right corner. Click these arrow buttons to scroll the list left or right to view additional group tabs. By default, each Server is marked with a colored status icon: Server list right-click menu By default, servers are listed by name only. To see more information about a server, right-click its name in the list. 
A menu appears with these options: This toggle, when on, displays all details about each server to the right of its name. When off, restores the last saved set of partial server details unless the last saved set was All Server Details, in which case it restores the default set: name only. See the following item for the list of available details. Opens the Set Server Property Tabs dialog, which lets you specify which details are shown in the Server list. The dialog provides check boxes for turning on and off the display of these details: Several factors can affect a machine's performance. CPU power isn't necessarily a concern when large file transfers are involved. For example, if a certain job uses several map files from a centralized server, the performance of the network throughput plays a much larger part than CPU performance, as most machines will spend the majority of the time reading maps. On the other hand, if the machine has all maps locally it will have a huge advantage (local access versus network access) regardless of which CPU it is using. The performance index provides you with information regarding your servers' rendering performance to help analyze your network rendering setup and better distribute the workload. The job list, located on the lower-right side of the Network Job Assignment dialog, displays all jobs submitted to the network manager. Also shown are each job's priority, status, and output file path. To change job settings and manage jobs, use the Backburner Monitor. See “Modifying Job Settings” and “Monitoring and Managing Jobs” in the Autodesk Backburner User’s Guide. Opens the Advanced Settings dialog, where you can make settings for Per-Job Timeouts, TCP port number, Pre-Render MAXScripts and Job Handling. Click Submit to exit this dialog and send the current job to the Network Manager, which places it in the queue for rendering. When you submit a rendering job, if the output file name to be used by the job is the same as that used by an existing job, you're asked if you want to overwrite the existing file(s). Also, if the name of the submitted job replicates one already in the rendering queue, an alert notifies you; click OK, change the job name, and submit it again. This dialog lets you specify jobs that shouldn't begin rendering until other jobs finish. Use the two lists and the Add and Remove buttons to build a list of jobs that must finish rendering before the current job can start. This dialog lets a network rendering job send notifications via email. Such notifications can be useful when you launch a lengthy render, such as an animation, and don't care to spend all your time near the network manager system. The Strips Setup dialog lets you specify how to split up the rendering of a single, large image among several different servers on the network. 3ds Max automatically subdivides the rendering based on settings you provide, and then fits the pieces together into the final image. The Advanced Settings dialog lets you set job timeouts on a per-job basis, assign the TCP port number, specify pre-render scripts and affect job handling and archive settings.
http://docs.autodesk.com/3DSMAX/13/ENU/Autodesk%203ds%20Max%202011%20Help/files/WSf742dab0410631334fe17e2d112a1ceaf4d-7f64.htm
2016-02-06T09:01:32
CC-MAIN-2016-07
1454701146241.46
[]
docs.autodesk.com
How to apply a .sql file to a database From Joomla! Documentation Revision as of 12:00, 25 January 2008 by CirTap (Talk | contribs) (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) Login to your MySQL database via phpMyAdmin [1]. Click the link to the Export section. Choose the SQL tab. Enter the code lines into the text box and press Go. You will be provided with the results of the command on this screen. Retrieved from ‘’
https://docs.joomla.org/index.php?title=How_to_apply_a_.sql_file_to_a_database&oldid=2389
2016-02-06T10:17:57
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
Information for "JSplit/doc" Basic information Display titleTemplate:JSplit/doc Default sort keyJSplit/doc Page length (in bytes)1,089 Page ID280:56, 15 March 2013 Latest editorTom Hutchison (Talk | contribs) Date of latest edit17:21, 27 May 2013 Total number of edits2 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (13)Templates used on this page: Template:AmboxNew (view source) (protected)Template:CatInclude (view source) (protected)Template:CurrentLTSVer (view source) (protected)Template:Documentation subpage (view source) Template:JSplit (view source) Template:Max (view source) Template:Max/2 (view source) Template:Pagetype (view source) Template:Plural (view source) Template:RVer (view source) (protected)Template:Time ago (view source) Template:Time ago/core (view source) Template:Tl (view source) Page transcluded on (1)Template used on this page: Template:JSplit (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Template:JSplit/doc&action=info
2016-02-06T10:07:13
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
Difference between revisions of "JEditor::getInstance" From Joomla! Documentation Revision as of::getInstance Description Returns the global Editor object, only creating it if it doesn't already exist. Description:JEditor::getInstance [Edit Descripton] SeeAlso:JEditor::getInstance [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API17:JEditor::getInstance&diff=next&oldid=56690
2016-02-06T10:34:02
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
Difference between revisions of "Extensions Language Manager Overrides Edit" From Joomla! Documentation Revision as of 17:34, 21 March 2013 Contents How to Access Navigate to the the Language Overrides Manager. To add a new Override, click on the New icon in the toolbar. To edit an existing Override, click on the Overrides Constant or check the Overrides checkbox and press the Edit icon in the toolbar..
https://docs.joomla.org/index.php?title=Help25:Extensions_Language_Manager_Overrides_Edit&diff=prev&oldid=83503
2016-02-06T09:31:21
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
Information for "JCacheStorageMemcache/get" Basic information Display titleAPI15:JCacheStorageMemcache/get Default sort keyJCacheStorageMemcache/get Page length (in bytes)1,348 Page ID:13, 22 March 2010 Latest editorJoomlaWikiBot (Talk | contribs) Date of latest edit07:50, 12 May 2013 Total number of edits2 Total number of distinct authors2 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’
https://docs.joomla.org/index.php?title=API15:JCacheStorageMemcache/get&action=info
2016-02-06T10:41:15
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
Difference between revisions of "Internet Relay Chat (IRC)" From Joomla! Documentation Redirect page Revision as of 19:10, 27 June 2013 (view source)HobbesPDX (Talk | contribs) (updated a bit)← Older edit Latest revision as of 18:55, 30 September 2013 (view source) Wilsonge (Talk | contribs) (Add redirect to resources portal) (2 intermediate revisions by 2 users not shown)Line 1: Line 1: −You may find many helpful users on Freenode, in the room #joomla. +#REDIRECT [[Portal:Resources]] 18:55, 30 September 2013 Portal:Resources Retrieved from ‘’
https://docs.joomla.org/index.php?title=Internet_Relay_Chat_(IRC)&diff=104096&oldid=101125
2016-02-06T10:22:42
CC-MAIN-2016-07
1454701146241.46
[]
docs.joomla.org
My device is ringing or vibrating more times than expected For calls, the number of times that your BlackBerry® device vibrates is not determined by the number of vibrations that you set in your sound profile, and there is no setting for the number of rings if you do not subscribe to voice mail. Your device vibrates or rings until the caller or the wireless network ends the connection.
http://docs.blackberry.com/en/smartphone_users/deliverables/15714/My_device_ringing_vibrating_more_than_expected_50_817344_11.jsp
2013-12-05T08:21:50
CC-MAIN-2013-48
1386163042403
[]
docs.blackberry.com
This article or section is in the process of an expansion or major restructuring. You are welcome to assist in its construction by editing it as well. If this article or section has not been edited in several days, please remove this template. This article was last edited by Bembelimen (talk| contribs) 4 years ago. (Purge) NEEDS DESCRIPTION Click on the picture to see the descriptions (if available)
http://docs.joomla.org/index.php?title=Customising_the_Beez_template&diff=12224&oldid=12128
2013-12-05T08:40:14
CC-MAIN-2013-48
1386163042403
[]
docs.joomla.org
Availability: Windows. New in version 1.5.2. The winsound module provides access to the basic sound-playing machinery provided by Windows platforms. It includes functions and several constants. PlaySound(sound, flags): plays a sound. The sound parameter may be None; its interpretation depends on the value of flags, which can be a bit-wise ORed combination of the constants described below. If the system indicates an error, RuntimeError is raised. MessageBeep(type=MB_OK): plays a sound as specified in the registry. The type argument specifies which sound to play; possible values are -1, MB_ICONASTERISK, MB_ICONEXCLAMATION, MB_ICONHAND, MB_ICONQUESTION, and MB_OK, all described below. The value -1 produces a ``simple beep''; this is the final fallback if a sound cannot be played otherwise. New in version 2.3. MB_ICONASTERISK: play the SystemDefault sound. MB_ICONEXCLAMATION: play the SystemExclamation sound. MB_ICONHAND: play the SystemHand sound. MB_ICONQUESTION: play the SystemQuestion sound. MB_OK: play the SystemDefault sound. See About this document... for information on suggesting changes.
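A minimal Python sketch of the calls described above (the .wav path is a made-up placeholder; Beep, MessageBeep, and PlaySound with the SND_* flags are the standard winsound functions and only work on Windows):

import winsound

# Beep the PC speaker at 1000 Hz for 500 milliseconds.
winsound.Beep(1000, 500)

# Play the registry-configured "question" sound.
winsound.MessageBeep(winsound.MB_ICONQUESTION)

# Play a named system sound alias; SND_ASYNC returns immediately instead of
# blocking until playback finishes.
winsound.PlaySound("SystemExclamation", winsound.SND_ALIAS | winsound.SND_ASYNC)

# Play a .wav file from disk (hypothetical path, for illustration only).
winsound.PlaySound(r"C:\sounds\chime.wav", winsound.SND_FILENAME)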
http://docs.python.org/release/2.5.4/lib/module-winsound.html
2013-12-05T08:33:19
CC-MAIN-2013-48
1386163042403
[]
docs.python.org
By default, touch events are consumed by the browser UI. The browser uses these events in order to provide improvements in usability. For example, the user must use their finger to scroll through the content, and can double-tap to zoom in to a content block. To prevent these events from being consumed by the UI, you can use the touch-event-mode meta tag. With this meta tag, you can disable these UI features for a web page, so that the browser passes the entire array of events and gestures to the web page unprocessed. With access to raw touch events, you can, for example, track both the direction and distance of a swipe event, and respond differently based on the swipe direction or distance. The touch-event-mode meta tag is added to the <head> section of a page as part of a <meta> element. For more information see touch-event-mode. Object: TouchList The TouchList object is an array that contains an ordered collection of individual points of contact (represented by TouchPoint objects) for a touch event. Object: TouchPoint
http://docs.blackberry.com/nl-nl/developers/deliverables/27297/Touch_Objects_1593432_11.jsp
2013-12-05T08:35:22
CC-MAIN-2013-48
1386163042403
[]
docs.blackberry.com
This chapter of the documentation illustrates real world use cases, with UltraESB samples. This page provides an index of the samples with the relevant real world use case. A sample categorization based on the user level is available as a reference guide which helps you to find the samples that you should/can follow based on your level of knowledge on UltraESB and ESB in general. The samples shipped in the UltraESB are not limited to the above list, however this page contains all the documented samples. Page: Restful Proxy Services Page: Proxying SOAP Messages Page: Proxying JAX-WS (Fast-Infoset) Messages Page: Proxying Text Responses Page: Schema Validation and Error Handling Page: Hessian Binary Message Proxying Page: Reverse Proxy or Web Proxy Page: HTTP Basic and Digest Authentication Page: HTTP Basic, Digest, NTLM and AWS S3 Authentication Page: WS-Security Gateway Page: Transactional ESB use cases made simple with the UltraESB Page: Transactions spanning multiple resources - an Example with JMS, JDBC and File systems with JTA Page: Restful Mock Services Page: Database look-ups and XQuery Transformations Page: JSON Data Services Page: Using JTA Transactions with SOAP, REST and other Proxy Services Page: Proxying and Load Balancing requests to Tomcat Page: Invoking a Web Service via Email Page: Advanced Cloning and Aggregation with JSON Streaming
http://docs.adroitlogic.org/display/esb/Sample+Use+Cases
2013-12-05T08:19:57
CC-MAIN-2013-48
1386163042403
[]
docs.adroitlogic.org
Configuration in CherryPy is implemented via dictionaries. Keys are strings which name the mapped value; values may be of any type. In CherryPy 3, you use configuration (files or dicts) to set attributes directly on the engine, server, request, response, and log objects. So the best way to know the full range of what’s available in the config file is to simply import those objects and see what help(obj) tells you. The server.socket_host option determines on which network interface CherryPy will listen. The server.socket_port option declares the TCP port on which to listen. Configuration data may be supplied as a Python dictionary, as a filename, or as an open file object. If you are only deploying a single application, you can make a single config file that contains both global and app entries. Just stick the global entries into a config section named [global], and pass the same file to both config.update and tree.mount. If you’re calling cherrypy.quickstart(app root, script name, config), it will pass the config to both places for you. But as soon as you decide to add another application to the same site, you need to separate the two config files/dicts: # global config cherrypy.config.update({'environment': 'production', 'log.error_file': 'site.log', # ... }) # Mount each app and pass it its own config cherrypy.tree.mount(root1, "/", appconf1) cherrypy.tree.mount(root2, "/forum", appconf2) cherrypy.tree.mount(root3, "/blog", appconf3) if hasattr(cherrypy.engine, 'block'): # 3.1 syntax cherrypy.engine.start() cherrypy.engine.block() else: # 3.0 syntax cherrypy.server.quickstart() cherrypy.engine.start() Config entries are always a key/value pair, like server.socket_port = 8080. The key is always a name, and the value is always a Python object. That is, if the value you are setting is an int (or other number), it needs to look like a Python int; for example, 8080. If the value is a string, it needs to be quoted, just like a Python string. Arbitrary objects can also be created, just like in Python code (assuming they can be found/imported). Here’s an extended example, showing you some of the different types: [global] log.error_file: "/home/fumanchu/myapp.log" environment = 'production' server.max_request_body_size: 1200 [/myapp] tools.trailing_slash.on = False request.dispatch: cherrypy.dispatch.MethodDispatcher() Config files have a severe limitation: values are always keyed by URL. For example: [/path/to/page] methods_with_bodies = ("POST", "PUT", "PROPPATCH") It’s obvious that the extra method is the norm for that path; in fact, the code could be considered broken without it. In CherryPy, you can attach that bit of config directly on the page handler: def page(self): return "Hello, world!" page.exposed = True page._cp_config = {'request.methods_with_bodies': ("POST", "PUT", "PROPPATCH")} def page(self): return "Hullo, Werld!" page.exposed = True Note This behavior is only guaranteed for the default dispatcher. Other dispatchers may have different restrictions on where you can attach _cp_config attributes. Because config entries usually just set attributes on objects, they’re almost all of the form: object.attribute. A few are of the form: object.subobject.attribute. They look like normal Python attribute chains, because they work like them. We call the first name in the chain the “config namespace”. When you provide a config entry, it is bound as early as possible to the actual object referenced by the namespace; for example, the entry response.stream actually sets the stream attribute of cherrypy.response!
In this way, you can easily determine the default value by firing up a python interpreter and typing: >>> import cherrypy >>> cherrypy.response.stream False Each config namespace has its own handler; for example, the “request” namespace has a handler which takes your config entry and sets that value on the appropriate “request” attribute. There are a few namespaces, however, which don’t work like normal attributes behind the scenes; they still use dotted keys and are considered to “have a namespace”. Entries from each namespace may be allowed in the global, application root ("/") or per-path config, or a combination. For example, the autoreload plugin is a class at engine.autoreload; you can set its “frequency” attribute via the config entry engine.autoreload.frequency = 60. In addition, you can turn such plugins on and off by setting engine.autoreload.on = True or False. - engine.SIGHUP/SIGTERM: These entries can be used to set the list of listeners for the given channel. Mostly, this is used to turn off the signal handling one gets automatically via cherrypy.quickstart(). hooks: Declares additional request-processing functions. Use this to append your own Hook functions to the request. For example, to add my_hook_func to the before_handler hookpoint: [/] hooks.before_handler = myapp.my_hook_func log: Configures logging. These can only be declared in the global config (for global logging) or [/] config (for each application). See LogManager for the list of configurable attributes. Typically, the “access_file”, “error_file”, and “screen” attributes are the most commonly configured. server: Controls the default HTTP server via cherrypy.server (see that class for a complete list of configurable attributes). These can only be declared in the global config. tools: Enables and configures additional request-processing packages. See the Tools overview for more information. checker: Controls the “checker”, which looks for common errors in app state (including config) when the engine starts. You can turn off individual checks by setting them to False in config. See cherrypy._cpchecker.Checker for a complete list. Global config only. The only key that does not exist in a namespace is the “environment” entry. This special entry imports other config entries from a template stored in cherrypy._cpconfig.environments[environment]. It only applies to the global config, and only when you use cherrypy.config.update. If you find the set of existing environments (production, staging, etc) too limiting or just plain wrong, feel free to extend them or add new environments: cherrypy._cpconfig.environments['staging']['log.screen'] = False cherrypy._cpconfig.environments['Greek'] = { 'tools.encode.encoding': 'ISO-8859-7', 'tools.decode.encoding': 'ISO-8859-7', }
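To make the hooks.before_handler entry above concrete, here is a small, self-contained sketch (not from the original page; my_hook_func and the Root class are invented names, and the wiring follows the config entry shown above). Hook callbacks configured through the hooks namespace are called without arguments, so they read the thread-local cherrypy.request object directly:

import cherrypy

def my_hook_func():
    # Runs before the page handler for every request under '/'.
    cherrypy.log("before_handler fired for %s" % cherrypy.request.path_info)

class Root(object):
    def index(self):
        return "Hello, world!"
    index.exposed = True

conf = {'/': {'hooks.before_handler': my_hook_func}}
cherrypy.quickstart(Root(), '/', conf)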
http://docs.cherrypy.org/3.2.0/concepts/config.html
2013-12-05T08:19:36
CC-MAIN-2013-48
1386163042403
[]
docs.cherrypy.org
Apply to, "Isn measure model. You did it! You just created a complex measure by add Year and QuarterOfYear as Slicers, we’d get something like this:'ll learn more about context later. DAX includes many functions that return a table rather than a value. The table isn
https://docs.microsoft.com/en-us/power-bi/desktop-quickstart-learn-dax-basics
2020-02-17T01:29:16
CC-MAIN-2020-10
1581875141460.64
[array(['media/desktop-quickstart-learn-dax-basics/qsdax_1_syntax.png', 'DAX formula syntax'], dtype=object) array(['media/desktop-quickstart-learn-dax-basics/qsdax_3_chart.png', 'Previous Quarter Sales and SalesAmount chart'], dtype=object) array(['media/desktop-quickstart-learn-dax-basics/qsdax_4_context.png', 'Store Sales measure'], dtype=object) ]
docs.microsoft.com
MouseWheel Microsoft Silverlight will reach end of support after October 2021. Learn more. Occurs when the mouse wheel is moved while the mouse pointer is over a UI element. <object MouseWheel="eventhandlerFunction" .../> [token = ]object.AddEventListener("MouseWheel", eventhandlerFunction) Arguments AddEventListener Parameters Event Handler Parameters Managed Equivalent Remarks For more information, see the MouseWheel event. Applies To PasswordBox (Silverlight 2) StackPanel (Silverlight 2) Version Information Silverlight 3 See Also
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/dd833079%28v%3Dvs.95%29
2020-02-17T01:08:22
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
panda3d.core.DatagramGeneratorNet¶ - class DatagramGeneratorNet¶ Bases: DatagramGenerator, ConnectionReader, QueuedReturn_Datagram This class provides datagrams one-at-a-time as read directly from the net, via a TCP connection. If a datagram is not available, getDatagram()will block until one is. Inheritance diagram __init__(manager: ConnectionManager, num_threads: int) → None¶ Creates a new DatagramGeneratorNet with the indicated number of threads to handle requests. Normally num_threads should be either 0 or 1 to guarantee that datagrams are generated in the same order in which they were received. getDatagram(data: Datagram) → bool¶ Reads the next datagram from the stream. Blocks until a datagram is available. Returns true on success, false on stream closed or error. isEof() → bool¶ Returns true if the stream has been closed normally. This test may only be made after a call to getDatagram()has failed.
https://docs.panda3d.org/1.10/cpp/reference/panda3d.core.DatagramGeneratorNet
2020-02-17T01:09:31
CC-MAIN-2020-10
1581875141460.64
[]
docs.panda3d.org
9.1.7. Why does SIMP use rsync?¶ SIMP uses rsync to manage both large files and large numbers of small files. This is to reduce the number of resources in the catalog and take advantage of rsync’s syncing engine to reduce network load and Puppet run times. The common SIMP use cases for rsync include: - clamav - tftpboot - named - dhcpd 9.1.7.1. Large Files¶ Both the system kickstart images, and the clamav virus definitions are fairly large (100MB+). This isn’t itself an issue. However, as the file changes over time, Puppet would have to transfer the entire file every time it changes. To assess the accuracy of a file defined in the catalog, Puppet checksums the file and compares it to the checksum of the expected content. This process could take a long time, depending on the size of the file. If the sums don’t match, Puppet replaces and transfers the entire file. Rsync is smarter than that, and only replaces the parts of the file that need replacing. In this case, rsync saves bandwidth, Puppet run time, and a few CPU cycles. 9.1.7.2. Large Numbers of Files¶ named and dhcpd are the opposite situation. In both of these cases, they may manage large numbers of files. Typically, like above, Puppet would have to checksum every file and see if it needed changing, with each file setting up a new connection to the Puppet server transferring each file individually. A small number of file resources wouldn’t be the end of the world when managing something with Puppet, but rsync limits every one of these files to one transaction and one resource. If you have a highly complex site, without rsync, this could grow your catalog to the point where Puppet would have a difficult time processing the entries in a timely manner. Syncing directories in this fashion also allows for configuration to be managed outside of the Puppet space. 9.1.7.3. Where are the rsync files?¶ SIMP packages the rsync materials in the simp-rsync-skeleton RPM, which installs a file tree /usr/share/simp/environment-skeleton/rsync. This directory is automatically installed in the SIMP Secondary Environment for the production SIMP Omni-Environment created by simp config ( /var/simp/environments/production) or the corresponding directory for a new environment created by simp environment new. The rsync directories in the SIMP Secondary Environment are shared by the simp::server::rsync_shares class, which is included on the SIMP server if the simp_options::rsync catalyst is enabled.
https://simp.readthedocs.io/en/latest/help/FAQ/Rsync.html
2020-02-17T00:16:50
CC-MAIN-2020-10
1581875141460.64
[]
simp.readthedocs.io
Incidents are events created when a predefined rule has been met, such as an API reaching an unwanted error threshold. The incidents overview will provide a running list of incidents with their start time, end time (if an incident has ended), which API it was for, and the rule that created it. You can create a new rule directly from the incidents screen by selecting the "Add Rule" button.
https://docs.bearer.sh/dashboard/incidents
2020-02-17T00:53:13
CC-MAIN-2020-10
1581875141460.64
[]
docs.bearer.sh
All content with label build+cache+distribution+grid+gridfs+guide+infinispan+jgroups+listener+s3+scala+userguide. Related Labels: podcast, expiration, publish, datagrid, interceptor, server, rehash, transactionmanager, dist, release, partitioning, query, deadlock, intro, archetype, pojo_cache, lock_striping, nexus, schema, state_transfer, amazon, hash_function, buddy_replication, loader, colocation, pojo, write_through, cloud, mvcc, notification, tutorial, presentation, murmurhash2, xml, read_committed, jbosscache3x, jira, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, websocket, async, transaction, interactive, xaresource, hinting, searchable, demo, installation, client, migration, non-blocking, rebalance, jpa, filesystem, tx, user_guide, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, murmurhash, standalone, repeatable_read, snapshot, hotrod, webdav, docs, batching, consistent_hash, store, whitepaper, jta, faq, as5, spring, 2lcache, jsr-107, lucene, locking, rest more » ( - build, - cache, - distribution, - grid, - gridfs, - guide, - infinispan, - jgroups, - listener, - s3, - scala, - userguide )
https://docs.jboss.org/author/label/build+cache+distribution+grid+gridfs+guide+infinispan+jgroups+listener+s3+scala+userguide
2020-02-17T01:26:55
CC-MAIN-2020-10
1581875141460.64
[]
docs.jboss.org
All content with label coherence+demo+development+dist+docs+grid+hibernate+hot_rod+infinispan+jboss_cache+listener+lock_striping+mvcc+presentation+release+store+user_guide+write_through. Related Labels: podcast, expiration, publish, datagrid, interceptor, server, replication, transactionmanager, partitioning, query, deadlock, intro, archetype, pojo_cache, jbossas, nexus, guide, schema, cache, s3, amazon, test, api, xsd, ehcache, maven, documentation, roadmap, wcm, youtube, userguide, write_behind, installation, cache_server, scala, client, non-blocking, migration, filesystem, jpa, tx, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, snapshot, webdav, hotrod, repeatable_read, batching, consistent_hash, whitepaper, jta, faq, 2lcache, as5, jsr-107, lucene, jgroups, locking more » ( - coherence, - demo, - development, - dist, - docs, - grid, - hibernate, - hot_rod, - infinispan, - jboss_cache, - listener, - lock_striping, - mvcc, - presentation, - release, - store, - user_guide, - write_through )
https://docs.jboss.org/author/label/coherence+demo+development+dist+docs+grid+hibernate+hot_rod+infinispan+jboss_cache+listener+lock_striping+mvcc+presentation+release+store+user_guide+write_through
2020-02-17T00:53:15
CC-MAIN-2020-10
1581875141460.64
[]
docs.jboss.org
Murano is Application catalog that supports types of applications. This document intends to make composing application packages Scripts. Besides the Scripts section the following sections MuranoPL classes control application deployment workflow execution. Full information about MuranoPL classes see here: MuranoPL: Murano Programming Language Example telnet.yaml Namespaces: =: io.murano.apps.linux std: io.murano res: io.murano.resources Name: Telnet Extends: std:Application Properties: name: Contract: $.string().notNull() instance: Contract: $.class(res:Instance).notNull() Workflow: deploy: Body: - $this.find(std:Environment).reporter.report($this, 'Creating VM for Telnet instace.') - $.instance.deploy() - $this.find(std:Environment).reporter.report($this, 'Instance is created. Setup Telnet service.') - $resources: new('io.murano.system.Resources') # Deploy Telnet - $template: $resources.yaml('DeployTelnet.template') - $.instance.agent.call($template, $resources) - $this.find(std:Environment).reporter.report($this, 'Telnet service setup is done.') Note, that see at the corresponding section: Dynamic UI Definition specification. Full example with Telnet application form definition Telnet Definition. Find or create a simple image (in a .png format) associated with your application. Is should be small and have a square shape. You can specify any name of your image. In our example, let’s name it telnet.png. General application metadata should be described in the application manifest file. It should be in a yaml format and should have the following sections Example manifest.yaml Require: io.murano.apps.TelnetHelper: 0.0.1 This step is optional. If you plan on providing images required by your application, you can include images.lst file with image specifications Example images.lst Images: - Name: 'my_image.qcow2' Hash: '64d7c1cd2b6f60c92c14662941cb7913' Meta: title: 'tef' type: 'linux' DiskFormat: qcow2 ContainerFormat: bare - Name: 'my_other_image.qcow2' Hash: '64d7c1cd2b6f60c92c14662941cb7913' Meta: title: 'tef' type: 'linux' DiskFormat: qcow2 ContainerFormat: bare Url: '' If Url is omitted - the images would be searched for in the Murano Repository. An application archive should have the following structure MuranoPL class definitions should be put inside this folder This folder should contain Execution scripts All script files, needed for an application deployment should be placed here Place dynamic ui yaml definitions here or skip to use the default name ui.yaml Image file should be placed in the root folder. It can have any name, just specify it in the manifest file or skip to use default logo.png name Application manifest file. It’s an application entry point. The file name is fixed. List of required images. Optional file. Congratulations! Your application is ready to be uploaded to an Application Catalog.
https://murano.readthedocs.io/en/stable-kilo/articles/app_pkg.html
2020-02-17T01:24:51
CC-MAIN-2020-10
1581875141460.64
[]
murano.readthedocs.io
Using plugins¶ Aldryn News & Blog comes with a set of useful plugins. They are mostly self-explanatory. Where to use plugins¶ Though you can add any of these plugins to any placeholder or static placeholder, some really make only sense in particular contexts (and some will simply do nothing at all in a context where they don’t make sense). For example, the Related articles plugin only makes sense when attached to an article. Dropped into a django CMS page for example, it will do nothing. On the other hand, it would be possible but probably not very desirable to have a list of Recent articles appear in the template of an article. List of plugins¶ Most of the plugins produce output specific to a particular apphook configuration. In these the Application configuration is a required field. In alphabetical order: Archive¶ Archive creates a list of dates representing published articles. Selecting a date takes you a sub-page in the archive, with a paginated list of articles for that date. Article search¶ Article search provides a search field. The search mechanism will search through article Titles and Lead-in fields, but not other content. Categories¶ Categories creates a list of categories articles have been placed in. Selecting a category takes you a sub-page in the archive, with a paginated list of articles for that category. Featured articles¶ Featured articles creates a list of articles that have been marked as Featured. Their display can be styled with CSS to achieve the effect you require - see Customising news output for an example.
https://aldryn-newsblog.readthedocs.io/en/latest/how-to/plugins.html
2020-02-17T00:16:11
CC-MAIN-2020-10
1581875141460.64
[array(['../_images/news-archive.png', 'archive plugin output'], dtype=object) ]
aldryn-newsblog.readthedocs.io
Future Decoded 10–11 Nov 2015 Excel London–FREE Event This year we're doing it bigger and bolder! In 2014, we gave you a tantalising glimpse into the future - this year we're doing it again.. Watch this space for keynote and session announcements, leading up to the big event. Tuesday 10th Nov - The Business Day If you're a large or small business or a valuable Microsoft partner, the Future Decoded Business day is perfect for you. We have scheduled a variety of activities to enable you to gain as much value as possible from your time at Future Decoded, including... A series of customer success stories where you can hear directly from your peers about their transformation journey, sharing their insights and experiences. A tailored selection of roundtables and briefings to explore key trends, identify opportunities, address common challenges and spark new ideas. Presentations from our Partners and Microsoft Product Groups, with experts available in our Expo to help you decode 'the art of the possible' and apply within your organisation. Wherever you are on your journey - digital transformation, enhancing customer experience, gaining data insights or becoming a truly modern business - there will be presentations and breakout sessions to empower you to reach your destination successfully, at the Future Decoded Business day. Tuesday 10th November 2015 08:30 - 09:45 09:45 - 12:00 Keynotes 11:00 - 19:30 Expo Open 12:15 - 16:15 Breakout Sessions 16:45 - 17:45 Closing Keynote 17:45 - 19:00 Networking Wednesday 11th Nov - The Technical Day If you're a developer, an IT professional or any other kind of propeller-head then the Future Decoded Technical Day is the place for you. Where else can you hang out with 4,000 like-minded folks and get... Keynotes from top industry leaders presenting their vision on topics across Cloud, Web and the Future of Computing. Deep technical tracks with world class speakers across programming languages, web, data, internet of things, cross platform apps and also Microsoft technologies like Windows 10 and Visual Studio. Short, snappy demo sessions from leading UK Microsoft Researchers and Most Valued Professionals. Whether you build or manage bits that run on a device, in a browser, on a server, in a database or anywhere else, we've got something for you at the Future Decoded Technical Day. Wednesday 11th November 2015 08:30 - 09:45 09:45 - 12:30 Keynotes 11:00 - 19:30 Expo Open 13:00 - 16:30 Breakout Sessions 16:30 - 17:30 Closing Keynote 17:30 - 19:00 Networking Agenda and sessions will be confirmed see
https://docs.microsoft.com/en-us/archive/blogs/uk_faculty_connection/future-decoded-1011-nov-2015-excel-londonfree-event
2020-02-17T02:35:46
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Scripts¶ The scripts API can be used to read view and modify the scripts which NSClient++ can run. Runtimes¶ As scripts can be provided by multiple plugins (LUAScripts, PythonScripts and CheckExternalScripts) there is a runtime selector which will send the information to the proper runtime. Currently only external scripts are supported. Security¶ As a security mechanism only scripts residing in the configured script root folder is showed. To configure the script root you can add the following to you configuration. [/settings/external scripts] script root=${scripts} List Runtimes¶ The API lists all available runtimes. Request¶ GET /api/v1/scripts Response¶ [ { "module":"CheckExternalScripts", "name":"ext", "title":"CheckExternalScripts" } ] Example¶ Fetch a list of all runtimes with curl curl -s -k -u admin |python -m json.tool [ { "ext_url": "", "module": "CheckExternalScripts", "name": "ext", "title": "CheckExternalScripts" } ] List Scripts¶ The API lists all available commands/scripts for a given runtime. Parameters¶ Request¶ GET /api/v1/scripts/ext Response¶ [ 'check_ok' ] Example 1: Listing active script¶ Fetch all active (currently enabled) scripts from CheckExternalScripts. curl -s -k -u admin |python -m json.tool [ "check_ok" ] Example 2: Listing all scripts¶ Request¶ curl -s -k -u admin |python -m json.tool [ "scripts\\check_60s.bat", "scripts\\check_battery.vbs", "scripts\\check_files.vbs", "scripts\\check_long.bat", "scripts\\check_no_rdp.bat", "scripts\\check_ok.bat", "scripts\\check_ping.bat", "scripts\\check_printer.vbs", "scripts\\check_test.bat", "scripts\\check_test.ps1", "scripts\\check_test.vbs", "scripts\\check_updates.vbs", "scripts\\lua\\check_cpu_ex.lua", "scripts\\lua\\default_check_mk.lua", "scripts\\lua\\noperf.lua", "scripts\\lua\\test.lua", "scripts\\lua\\test_ext_script.lua", "scripts\\lua\\test_nrpe.lua", "scripts\\powershell.ps1" ] Fetch Script¶ Fetch the script definition (ext) and/or the actual script. Request¶ GET /api/v1/scripts/ext/check_ok Response¶ scripts\check_ok.bat "Everything will be fine" Example 1: Show command definitions¶ Show the commands definitions i.e. the configured command which will be executed when the check is executed. curl -s -k -u admin scripts\check_ok.bat "The world is always fine..." Example 2: Listing the actual script¶ Please note that since script definitions are really commands there is no automated way to go from a script definition and its script. But given the above definition we can discern that the script is called scripts\check_ok.bat. We can use either / or \ as path separator here. curl -s -k -u admin @echo OK: %1 @exit 0 Add Script¶ Upload the new script definitions. Please note that it is not possible to upload scripts to the same granularity as you can with the configuration. For that you have to use the configuration API instead. This API is designed for convenience. So for instance you cannot set arguments for scripts via this API. Request¶ PUT /api/v1/scripts/ext/scripts\check_new.bat The posted payload¶ The payload we post is the actual script such as: @echo OK: %1 @exit 0 Response¶ Added check_new as scripts\check_new.bat Example¶ Given a file called check_new.bat which contains the following: @echo OK: %1 @exit 0 We can use the following curl call to upload that as check_new. 
curl -s -k -u admin -X PUT --data-binary @check_new.bat Added check_new as scripts\check_new.bat configuration¶ The configuration added to execute this script is: [/settings/external scripts/scripts] ; SCRIPT - For more configuration options add a dedicated section (if you add a new section you can customize the user and various other advanced features) check_new = scripts\check_new.bat Delete Script¶ Delete both script definitions and actual script files from disk. Request¶ DELETE /api/v1/scripts/ext/scripts\check_new.bat Response¶ Script file was removed Example 1: Delete the script definition¶ If we have created a script for check_new (see adding script above) we can remove it via the API as well. Please note this will ONLY remove the script definition not the actual script file (to remove the script see below). curl -s -k -u admin -X DELETE Script definition has been removed don't forget to delete any artifact for: scripts\check_new Example 2: Deleting the script file¶ To delete the script file we use the same trick as when we showed it above i.e. we specify the script file instead of the command name. curl -s -k -u admin -X DELETE Script file was removed
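The curl commands above translate directly to any HTTP client. The following Python sketch (not from the original page) mirrors the list, upload, and delete examples using the requests library; the base URL, port, and password are assumptions to adjust for your own installation, and certificate verification is disabled only because the examples above use curl -k:

import requests

BASE = "https://localhost:8443/api/v1"   # assumed default NSClient++ web port
AUTH = ("admin", "password")             # assumed credentials

# List runtimes (GET /api/v1/scripts)
print(requests.get(BASE + "/scripts", auth=AUTH, verify=False).json())

# List active external scripts (GET /api/v1/scripts/ext)
print(requests.get(BASE + "/scripts/ext", auth=AUTH, verify=False).json())

# Upload a new script definition (PUT /api/v1/scripts/ext/scripts/check_new.bat)
with open("check_new.bat", "rb") as fh:
    r = requests.put(BASE + "/scripts/ext/scripts/check_new.bat", data=fh, auth=AUTH, verify=False)
print(r.text)

# Delete the script definition again (DELETE /api/v1/scripts/ext/scripts/check_new.bat)
print(requests.delete(BASE + "/scripts/ext/scripts/check_new.bat", auth=AUTH, verify=False).text)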
https://docs.nsclient.org/api/rest/scripts/
2020-02-17T00:10:50
CC-MAIN-2020-10
1581875141460.64
[]
docs.nsclient.org
Shadowmask lighting mode The High Definition Render Pipelines (HDRP) supports the Shadowmask Lighting Mode which makes the lightmapper precompute shadows for static GameObjects, but still process real-time lighting for non-static GameObjects. HDRP also supports the Baked Indirect mixed Lighting Mode. For more information on mixed lighting and shadowmasks, see Mixed lighting modes and Shadowmask. Using shadowmasks To use shadowmasks in HDRP, you must set up your Project to support them. To do this: Enable Distance Shadowmask for every Quality Level in your Unity Project: - Open the Project Settings window (menu: Edit > Project Setting) and select the Quality tab. - Select a Level (for example, Medium) then, in the Shadows section, set the Shadowmask Mode to Distance Shadowmask. - Do this for every Quality Level. Enable the Shadowmask property in your Unity Project’s HDRP Asset: - In the Project window, select an HDRP Asset to view in the Inspector. - Go to the Lighting section and then to Shadow. - Enable Shadowmask. Set up your Scene to use shadowmask and baked global illumination: - Open the Lighting window (menu: Window > Rendering > Lighting Settings). - In the Mixed Lighting section, enable Baked Global Illumination and set the Lighting Mode to Shadowmask. Make your Cameras use shadowmasks when they render the Scene. To set this as the default behaviour for Cameras: - Open the Project Settings window (menu: Edit > Project Settings) and select the HDRP Default Settings tab. - In the Frame Settings section, set Default Frame Settings For to Camera. - In the Lighting section, enable Shadowmask. Optionally, you can make your Reflection Probes use shadowmask for baked or real-time reflections. To do this, follow the same instructions as in step 3, but set Default Frame Settings For to Baked Or Custom Reflection or Realtime Reflection. Now, on a Light, when you select the Mixed mode. The lightmapper precomputes Shadowmasks for static GameObject that the Light affects. Shadowmask mode To allow for flexible lighting setups, HDRP lets you choose the behaviour of the shadowmasks for each individual Light. To change the behavior of the shadowmask, use the Light’s Shadowmask Mode property. To do this, set the Light’s Mode to Mixed then go to Shadows > Shadow Map and set the Shadowmask Mode to your desired behavior. For information on the behavior of each Shadowmask Mode, see the following table. Details Distance Shadowmask is more GPU intensive, but looks more realistic because real-time lighting that is closer to the Light is more accurate than shadowmask Textures with a low resolution meant to represent areas further away. Shadowmask is more memory intensive because the Camera uses shadowmask Textures for static GameObjects close to the Camera, which requires a larger resolution shadowmask Texture.
https://docs.unity3d.com/Packages/[email protected]/manual/Lighting-Mode-Shadowmask.html
2020-02-17T02:01:20
CC-MAIN-2020-10
1581875141460.64
[]
docs.unity3d.com
GetOpenIDConnectProvider Returns information about the specified OpenID Connect (OIDC) provider resource object in IAM. Request Parameters For information about the parameters that are common to all actions, see Common Parameters. - OpenIDConnectProviderArn The Amazon Resource Name (ARN) of the OIDC provider resource object in IAM to get information for. You can get a list of OIDC provider resource ARNs by using the ListOpenIDConnectProviders operation. For more information about ARNs, see Amazon Resource Names (ARNs) and AWS Service Namespaces in the AWS General Reference. Type: String Length Constraints: Minimum length of 20. Maximum length of 2048. Required: Yes Response Elements The following elements are returned by the service. - ClientIDList.member.N A list of client IDs (also known as audiences) that are associated with the specified IAM OIDC provider resource object. For more information, see CreateOpenIDConnectProvider. Type: Array of strings Length Constraints: Minimum length of 1. Maximum length of 255. - CreateDate The date and time when the IAM OIDC provider resource object was created in the AWS account. Type: Timestamp - ThumbprintList.member.N A list of certificate thumbprints that are associated with the specified IAM OIDC provider resource object. For more information, see CreateOpenIDConnectProvider. Type: Array of strings Length Constraints: Fixed length of 40. - Url The URL that the IAM OIDC provider resource object is associated with. For more information, see CreateOpenIDConnectProvider. Type: String Length Constraints: Minimum length of 1. Maximum length of 255. &OpenIDConnectProviderArn=arn:aws:iam::123456789012:oidc-provider/example.com &Version=2010-05-08 &AUTHPARAMS Sample Response <GetOpenIDConnectProviderResponse xmlns=""> <GetOpenIDConnectProviderResult> <ThumbprintList> <member>c3768084dfb3d2b68b7897bf5f565da8eEXAMPLE</member> </ThumbprintList> <CreateDate>2014-10-09T03:32:51.398Z</CreateDate> <ClientIDList> <member>my-application-ID</member> </ClientIDList> <Url>server.example.com</Url> </GetOpenIDConnectProviderResult> <ResponseMetadata> <RequestId>2c91531b-4f65-11e4-aefa-bfd6aEXAMPLE</RequestId> </ResponseMetadata> </GetOpenIDConnectProviderResponse> See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
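For a programmatic call equivalent to the request above, here is a hedged boto3 sketch: boto3's IAM client exposes get_open_id_connect_provider, the ARN is the example value from this page, and credentials are taken from the normal AWS SDK configuration chain.

import boto3

iam = boto3.client("iam")
resp = iam.get_open_id_connect_provider(
    OpenIDConnectProviderArn="arn:aws:iam::123456789012:oidc-provider/example.com"
)

# The response carries the same elements documented above.
print(resp["Url"])             # e.g. server.example.com
print(resp["ClientIDList"])    # client IDs / audiences
print(resp["ThumbprintList"])  # certificate thumbprints
print(resp["CreateDate"])      # datetime the provider was created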
https://docs.aws.amazon.com/IAM/latest/APIReference/API_GetOpenIDConnectProvider.html
2018-06-18T01:52:14
CC-MAIN-2018-26
1529267859923.59
[]
docs.aws.amazon.com
Prerequisites To use MySQL as the Ambari database, you must set up the mysql connector, create a user and grant user permissions before running ambari-setup. Using Ambari with MySQL/MariaDB Steps To start the setup process, run the following command on the Ambari server host. You may also append setup options to the command. ambari-server setup -j $JAVA_HOME Respond to the setup prompts: The following table describes options frequently used for Ambari Server setup. If you have not temporarily disabled SELinux, you may get a warning. Accept the default y, and continue. By default, Ambari Server runs under root. Accept the default n at the prompt, and continue. If you have not temporarily disabled iptables, you may get a warning. Enter y to continue. If you choose Custom JDK, you must manually install the JDK on all hosts and specify the Java Home path. Review the GPL license agreement when prompted. To explicitly enable Ambari to download and install LZO data compression libraries, you must answer y. If you enter n, Ambari will not automatically install LZO on any new host in the cluster. In this case, you must ensure LZO is installed and configured appropriately. Without LZO being installed and configured, data compressed with LZO will not be readable. If you do not want Ambari to automatically download and install LZO, you must confirm your choice to proceed. Enter y at Enter advanced database configuration. In Advanced database configuration, enter Option [3] MySQL/MariaDB, then enter the credentials you defined for user name, password and database name. At Proceed with configuring remote database connection properties [y/n], choose y. Setup completes. Next Steps More Information Using Ambari with MySQL/MariaDB Configuring Ambari for Non-Root How to Set Up an Internet Proxy Server for Ambari Configuring LZO Compression
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.2/bk_installing-hdf-ppc/content/set_up_the_ambari_server.html
2018-06-18T01:51:18
CC-MAIN-2018-26
1529267859923.59
[]
docs.hortonworks.com
Adds two colors together. Each component is added separately. // blue + red = magenta var result : Color = Color.blue + Color.red; using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public Color result = Color.blue + Color.red; }
https://docs.unity3d.com/ScriptReference/Color-operator_add.html
2018-06-18T02:05:19
CC-MAIN-2018-26
1529267859923.59
[]
docs.unity3d.com
MEDIAID Function (MediaSet) Gets the unique identifier that is assigned to a MediaSet of a record. The MediaSet is a collection of media objects that are used on the record that can be displayed in the client. Syntax Guid := Record.MediaSetField.MEDIAID Parameters Record Type: Record Specifies the record that includes the MediaSet. MediaSetField Type: MediaSet Specifies the field that includes the media. This field must have the MediaSet data type. Property Value/Return Value Type: GUID The GUID of the MediaSet on the record. Remarks When you import media on a table record by using either the IMPORTFILE Function (MediaSet) or IMPORTSTREAM Function (MediaSet), the media is assigned to a MediaSet GUID in the system table 2000000183 Tenant Media Set of the application database. You can use the MEDIAID function to retrieve the MediaSet GUID. Note that the imported media object is also assigned a GUID. To get the media object's GUID, you can use the MEDIAID Function (Media). Example This example gets the GUID of the MediaSet that is used on item No. 1000 in the Item table. The field in the Item table that is used for the MediaSet data type is Picture. This code requires you to create the following variables. This code requires you to create the following text constant. item.GET('1000'); mediasetId := item.Picture.MEDIAID; MESSAGE(Text000, mediasetId); See Also Working With Media on Records IMPORTFILE Function (MediaSet) IMPORTSTREAM Function (MediaSet) MediaSet Data Type
https://docs.microsoft.com/en-us/dynamics-nav/mediaid-function--mediaset-
2018-06-18T01:36:44
CC-MAIN-2018-26
1529267859923.59
[]
docs.microsoft.com
About this task If you provide a VIB, an existing VIB that is installed to your VMware Host Client environment is updated to the new VIB. If a link to a metadata.zip file is provided, the entire ESXi system is updated to the version described by the metadata.zip file. Caution: If the host is managed by vSphere Update Manager, updating the host in this way might cause Update Manager to report the host as non-compliant. Procedure - Click Manage in the VMware Host Client and click Packages. - Click Install update and enter the URL of the VIB or a metadata.zip file. - Click Update. - Click Refresh to make sure that the update is successful.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.html.hostclient.doc/GUID-B8B2FB8C-F401-445F-8226-D3143E2D2F37.html
2018-06-18T02:08:06
CC-MAIN-2018-26
1529267859923.59
[]
docs.vmware.com
General Options - 1 Introduction - 2 Summary - 3 Detailed Options - 3.1 -r <RegNum> — Pass registration number to APCrypt - 3.2 -l <logfile> — Log to a log file - 3.3 -p — Show progress - 3.4 -v — Print version information - 3.5 -h or -help — Show usage - 3.6 -o <outFile.pdf> — Save to a new file or directory - 3.7 -w — Linearize the file upon save - 4 There are four options available in APCrypt 4.x and later. Introduction The following general options deal with reporting, logging, and file maintenance. Sample command The following command displays usage information for APCrypt: $apcrypt -h Summary The following table provides a summary of the general command-line options. Detailed Options The following sections provide details on using the general command-line options. -r <RegNum> — Pass registration number to APCrypt This option can be used to supply your registration number to APCrypt from a script or another application: $apcryptapp -r XXXX-XXXX-XXXX-XXXX-XXXX-XXXX [other options] This option is typically not necessary and is available for use in cases where the Appligent License File can not be located by the application because of runtime environment restrictions. the utility works. Note: Please see the following section for more clarification on using log files and writing progress messages to the screen. A note on using -p and -l <logfile> together As outlined in the above sections, the utility operation whether there are errors or not. -v — Print version information Display the version of the utility you are running. This is important when corresponding with Appligent support; in order to best understand your problem, we must know what version of the software you have. APCrypt will not do anything else if you use this option. -h or -help — Show usage Display all available options for the utility. APCrypt will not do anything else if you use either of these options. -o <outFile.pdf> — Save to a new file or directory Save the modified file as a new file. We recommend using this option so you do not overwrite your existing files. If you are processing more than one input file at a time, specify a directory to save the resulting files. Note: Do not forget to specify the output file or directory, or the command will fail. When you encrypt/secure several files at one time and use the -o option, make sure to specify the name of an existing directory. If you specify a filename, all but the first of your original files will be overwritten. . There are four options available in APCrypt 4.x and later. ).
https://docs.appligent.com/apcrypt/general-options/
2018-06-18T02:06:44
CC-MAIN-2018-26
1529267859923.59
[]
docs.appligent.com
1. Welcome To The DXR Community¶ Though DXR got its start at Mozilla, it’s seen contributions from a variety of companies and individuals around the world. We welcome contributions of code, bug reports, and helpful feedback. 1.1. Bug Reports¶ Did something explode? Not act as you expected? Let us know. 1.2. Submitting Patches¶ To contribute code, just file a pull request. Include tests to double your chances of getting it merged and qualify for a free Bundt cake. We love tests. Bundt cake isn’t bad, either. 1.3. IRC¶ We hang out in the #static channel of irc.mozilla.org. Poke your head in and say hello. If you have questions, please address them to the public channel; don’t /msg someone in particular. That way, more people have a chance at answering your question, and more people can benefit from hearing the answers. We realize that no one likes looking naive, but please be brave and set an example to embolden the less-brave naive people. We’re a friendly bunch and will never deride anyone for being a beginner. 1.4. Open Bugs¶ Looking for something to hack on? Here are… Before starting work on a bug, hop into the IRC channel and confirm it’s still relevant. We try to garden our bugs, but DXR often moves faster than we can weed.
https://dxr.readthedocs.io/en/latest/community.html
2018-06-18T01:29:40
CC-MAIN-2018-26
1529267859923.59
[]
dxr.readthedocs.io
- Release Notes > - Backup Agent Changelog Backup Agent Changelog¶ On this page - Backup Agent 6.8.1.996 - Backup Agent 6.8.0.993 - Backup Agent 6.7.0.985 - Backup Agent 6.6.1.965 - Backup Agent 6.6.0.959 - Backup Agent 6.5.0.756 - Backup Agent 6.4.0.734 - Backup Agent 6.3.0.728 - Backup Agent 6.2.0.714 - Backup Agent 6.1.1.693 - Backup Agent 6.1.0.688 - Backup Agent 6.0.0.680 - Backup Agent 6.0.0.676 - Backup Agent 5.9.0.662 - Backup Agent 5.8.0.655 - Backup Agent 5.7.0.637 - Backup Agent 5.6.0.61 - Backup Agent 5.5.0.512 - Backup Agent 5.4.0.493 - Backup Agent 5.3.0.484 - Backup Agent 5.2.0.473 - Backup Agent 5.1.0.467 - Backup Agent 5.0.3.465 - Backup Agent 5.0.1.453 - Backup Agent 4.6.0.425 - Backup Agent 4.5.0.412 - Backup Agent 4.4.0.396 - Backup Agent 4.3.0.384 - Backup Agent 4.2.0.373 - Backup Agent 4.1.0.347 - Backup Agent 4.0.0.343 - Backup Agent 3.9.0.336 - Backup Agent 3.8.1.320 - Backup Agent 3.8.0.315 - Backup Agent 3.7.0.300 - Backup Agent 3.6.0.292 - Backup Agent 3.5.0.286-1 - Backup Agent 3.4.0.273 - Backup Agent 3.3.0.261 - Backup Agent 3.2.0.262 - Backup Agent 3.1.0.250 - Backup Agent 3.0.0.246 - Backup Agent 2.9.1.235-1 - Backup Agent 2.9.0.223 - Backup Agent 2.8.0.204 - Backup Agent 2.7.1.206 - Backup Agent 2.7.0.193 - Backup Agent 2.6.0.176 - Backup Agent 2.5.0 - Backup Agent 2.4.0.156 - Backup Agent 2.3.0.149 - Backup Agent 2.2.2.125 - Backup Agent 2.2.1.122 - Backup Agent 2.1.0.106-1 - Backup Agent 2.0.0.90-1 - Backup Agent 1.6.1.87-1 - Backup Agent 1.6.0.55-1 - Backup Agent 1.4.6.43-1 - Backup Agent 1.4.4.34-1 - Backup Agent 1.4.3.28-1 - Backup Agent 1.4.2.23-1 - Backup Agent 1.4.0.17 - Backup Agent v20131216.1 - Backup Agent v20131118.0 - Backup Agent v20130923.0 - Backup Agent v20130826.0 - Backup Agent v20130812.1 Backup Agent 6.8.0.993¶ Released 2018-05-31 - Make responseHeaderTimeout configurable. - Support for upcoming MongoDB 4.0 release. Backup Agent 6.7.0.985¶ Released 2018-05-09 - Support for persistent HTTPS connections (default to off). Backup Agent 6.5.0.756¶ Released 2018-03-06 - Fix: Backup Agent should produce an error message and not crash when erroneous authentication credentials are provided for a source cluster. Backup Agent 6.4.0.734¶ Released 2018-02-13 - During a PIT restore, suppress errors when dropping non-existent namespaces. - During a PIT restore, always apply oplogs with upsert=true. Backup Agent 6.3.0.728¶ Released 2018-01-23 - Fix: Send compound index keys as ordered BSON. - Fix: Send less detailed data in the initial summary payload at the start of an initial sync. Collect more detailed data for each collection individually. Backup Agent 6.2.0.714¶ Released 2018-01-08 - Fix: Relax validation when krb5ConfigLocationparameter is specified. This no longer implies that krb5Principaland krb5Keytabare required. - Fix: Use correct format for point in time restore oplog seed when no oplog are available. Backup Agent 6.1.1.693¶ Released 2017-11-19 Fix: Upgrades of the Backup Agent performed by the Automation Agent were missing a parameter on Windows. Backup Agent 5.8.0.655¶ Released 2017-08-25 - Allow oplogs for a point in time restore to be applied client-side. Backup Agent 5.6.0.61¶ Released 2017-07-11 - During initial sync, add verification that shard name matches the expected shard name. Backup Agent 5.5.0.512¶ Released 2017-06-15 - Use HTTP basic auth to authenticate HTTPS requests between the Backup Agent and cloud.mongodb.com. - Performance enhancement: Use bson.Rawfor initial sync. 
Backup Agent 5.4.0.493¶ Released 2017-04-19 - Reduce memory used during initial sync. - Ensure messages printed to STDOUT and STDERR are also included in the Backup Agent log file. Backup Agent 5.3.0.484¶ Released 2017-03-29 - Optimization for collection of data in the initial sync phase. (Recompiled with the MGO-128 fix.) Backup Agent 5.2.0.473¶ Released 2017-01-23 - Support for macOS Sierra. - Compiled with Go 1.7.4. - Fix: Can send logs to Cloud Manager for Backup Agents running on Windows. Backup Agent 5.1.0.467¶ Released 2016-12-13 - Handle capped collections that are capped using a floating point size. Backup Agent 5.0.3.465¶ Released 2016-11-21 - Support for MongoDB 3.4 Views. - Support for MongoDB 3.4 featureCompatibilityVersion. Backup Agent 5.0.1.453¶ Released 2016-11-07 - Allow managed Backup Agents to be run as a service on Windows. Backup Agent 4.6.0.425¶ Released 2016-09-14 - Update of underlying Go driver. - Partial support for upcoming major release of MongoDB 3.4.0. - Partial support for Kerberos on Windows. Backup Agent 4.2.0.373¶ Released 2016-04-20 - Added support for log rotation. - Added a sticky header to log files. Backup Agent 4.1.0.347¶ Released 2016-02-18 - Use systemD management on RHEL7 and Ubuntu 16.04. - Set ulimits in the packaged builds. Backup Agent 4.0.0.343¶ Released 2016-01-07 Backup Agent 3.9.0.336¶ Released 2015-11-02 - Support for streaming initial syncs. - Support for MongoDB 3.2 clusters with config server replica sets. Backup Agent 3.8.0.315¶ Released 2015-09-16 - Built with Go 1.5.0. - Fix: Ignore collections deleted during an initial sync. Backup Agent 3.7.0.300¶ Released 2015-08-10 - Added fix to not trim spaces from collection names. - Upgraded to new version of snappy compression library. Backup Agent 3.6.0.292¶ Released 2015-07-15 - Added minor optimization to explicitly set the Content-Type on HTTP requests. Backup Agent 3.5.0.286-1¶ Released 2015-06-24 - Updated documentation and setting URLs to cloud.mongodb.com. - Added support for backing up selected namespaces. This functionality is not yet exposed in the Cloud Manager user interface. Backup Agent 3.4.0.273¶ Released 2015-04-22 - Added an explicit timeout for SSL connections to mongod instances. - Added an optimization for syncs of collections with lots of small documents. - The Kerberos credentials cache now uses a fixed name. Backup Agent 3.2.0.262¶ Released 2015-02-23 Ability to monitor and back up deployments without managing them through Automation. Specifically, you can import an existing deployment into Monitoring and then use Cloud Manager to back up the deployment. Backup Agent 2.9.1.235-1¶ Released 2014-12-17 Agent now encodes all collection meta-data. Avoids edge-case issues with unexpected characters in collection settings. Backup Agent 2.9.0.223¶ Released 2014-12-04 Can now explicitly pass collections options for the WiredTiger storage engine from the backed up mongod to Cloud Manager. Backup Agent 2.8.0.204¶ Released 2014-11-12 The Backup Agent will now identify itself to the Cloud Manager servers using the fully qualified domain name (FQDN) of the server on which it is running. Backup Agent 2.7.0.193¶ Released 2014-10-29 - When tailing the oplog, the agent no longer pre-fetches the next batch of oplog entries before exhausting the current batch. - Adds support for non-default Kerberos service names. - Adds support for RHEL7.
Backup Agent 2.6.0.176¶ Released 2014-09-30 Minor logging change, clarifying when stopping the balancer if there is no balancer settings document. Backup Agent 2.5.0¶ Released 2014-09-10 Added support for authentication using MongoDB 2.4 style client certificates. Backup Agent 2.4.0.156¶ Released 2014-08-19 The Backup Agent will now capture a checkpoint even if it is unable to stop the balancer. These checkpoints are not guaranteed to be consistent, because of in-progress chunk migrations. The user interface identifies these checkpoints. Backup Agent 2.3.0.149¶ Released 2014-07-29 - Upgraded agent to use Go 1.3. - Added support for version and -version. - Added support for connecting to hosts using LDAP authentication. - Agent now provides additional logging information when the Backup Agent manipulates the balancer. - Agent now supports configuring HTTP with the config file. Backup Agent 2.2.2.125¶ Released 2014-07-09 Fixes issue with agent on Windows using the MONGODB-CR authentication mechanism. Backup Agent 2.2.1.122¶ Released 2014-07-08 - Fixes issues with connecting to replica set members that use auth with an updated Go client library. - Agent is now able to send a stack trace of its current state to Cloud Manager. - Fixes regression in the Agent’s rollback handling. Backup Agent 2.1.0.106-1¶ Released 2014-06-17 Support for a new API that allows Cloud Manager to ingest oplog entries before the entire payload has reached the Cloud Manager servers. Backup Agent 2.0.0.90-1¶ Released 2014-05-28 - Agent supports deployment architectures with multiple active (i.e. primary) Backup Agents. - Improved stability around oplog tokens for environments with unstable networks. Backup Agent 1.6.1.87-1¶ Released 2014-05-19 - Critical update for users running the MongoDB 2.6 series that use authorization. - The Backup Agent now includes system.version and system.role collections from the admin database in the initial sync. Backup Agent 1.6.0.55-1¶ Released 2014-05-09 The agent now sends oplog slices to Cloud Manager in batches to increase throughput and stability. Backup Agent 1.4.6.43-1¶ - Major stability update. - Prevent a file descriptor leak. - Correct handling of timeouts for connections hung in the TLS/SSL handshaking phase. Backup Agent 1.4.3.28-1¶ - Allow upgrading the agent using the Windows MSI installer. - Improved logging. - Fix an open files leak on bad HTTP responses. Backup Agent 1.4.2.23-1¶ - Added support for Windows MSI installer. - For sharded clusters, less aggressive polling to determine if balancer has been stopped. - Fail fast on connections to mongods that are not responding. Backup Agent 1.4.0.17¶ Added support for sharded cluster checkpoints that add additional points-in-time, in between scheduled snapshots, that Cloud Manager can use to create restores. Configure checkpoints using the Edit Snapshot Schedule link and interface. This version marks a change in the numbering scheme of Backup Agents to support improved packaging options for the Backup Agent. Backup Agent v20131216.1¶ - Added support for connecting to MongoDB instances running SSL. See the Configure Backup Agent for SSL documentation for more information. - The agent will try to use additional mongos instances to take a cluster snapshot if the first mongos is unavailable. Backup Agent v20131118.0¶ - Significantly reduced the amount of time needed by the agent to detect situations that require a resync. - Allow automatic resync operations for config servers in sharded clusters.
The agent can now resync automatically from these servers. Backup Agent v20130923.0¶ When the agent sends the initial meta-data about the data to back up (e.g., the list of databases, collections, and indexes) to the Cloud Manager API, the agent will not include any databases or collections listed in the “excluded namespace” configuration. Backup Agent v20130826.0¶ Adds support for managing excluded namespaces: Backup Agent no longer sends data for excluded collections or databases.
https://docs.cloudmanager.mongodb.com/release-notes/backup-agent/
2018-06-18T01:58:51
CC-MAIN-2018-26
1529267859923.59
[]
docs.cloudmanager.mongodb.com
You create a new host profile by using the designated reference host's configuration. A host profile can be created from the Host Profiles main view or from the host's context menu. Create a Host Profile from Host Profiles View: You can create a host profile from the Host Profiles main view using the configuration of an existing host. Create a Host Profile from Host: You can create a new host profile from the host's context menu in the Hosts and Clusters inventory view. Parent topic: Using Host Profiles in the vSphere Client
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.hostclient.doc/GUID-6B27C9C1-54ED-45FC-8316-C6BAA66C9842.html
2018-06-18T02:07:32
CC-MAIN-2018-26
1529267859923.59
[]
docs.vmware.com
Set up guide for GDPR The GDPR (General Data Protection Regulation) is a new set of privacy regulations enacted by the European Union. A big part of the GDPR’s requirements is giving users the ability to consent before their data is collected online. Read more about the GDPR here. The Zonos Checkout is GDPR-ready and features a few different consent options which you can select from, depending on your business needs and legal advice. We have given you options for selecting which boxes to use, allowing alignment with your privacy plans. We recommend you consult with your legal representation over which option to choose. Definitions Active consent choices - Consent to process the order and consent for marketing purposes. Legitimate interest - There is no active consent box. Your customer knows that by providing the information to your site, both they and you have a legitimate interest in the order being processed. Where do I choose my store’s consent level? You can modify your store’s consent level by logging into Zonos and navigating to your checkout settings. The GDPR settings are under the “General” section. Marketing consent only (Default option) This option will present only a checkbox allowing users to consent to marketing. When using this option, collecting the data required for order processing is considered legitimate interest. Hint: do not use this if not engaged in a marketing effort. Marketing + processing consent This option will show two separate checkboxes - one for marketing and another for order processing. Processing consent only This option will present only a checkbox allowing users to consent to order processing. Hint: do not use this if not engaged in a marketing effort. Combined processing and marketing consent This option will present a single checkbox with language allowing users to consent to both marketing and order processing at the same time. No consent box Use this option only after seeking legal counsel around legitimate interest. Many of the top retailers are choosing this option. If you want to use your privacy policy, you can contact our support team to insert it. You can also enter your privacy page link (available soon).
https://docs.zonos.com/docs/zonos-checkout-gdpr
2018-06-18T01:54:43
CC-MAIN-2018-26
1529267859923.59
[array(['/assets/images/gdpr/Zonos_GDPR_app.png', 'Zonos app GDPR'], dtype=object) array(['/assets/images/gdpr/Zonos_GDPR_marketing_only.png', 'Marketing consent only'], dtype=object) array(['/assets/images/gdpr/Zonos_GDPR_marketing_and_processing.png', 'Marketing + processing consent'], dtype=object) array(['/assets/images/gdpr/Zonos_GDPR_processing_only.png', 'Processing consent only'], dtype=object) array(['/assets/images/gdpr/Zonos_GDPR_combined.png', 'Combined consent'], dtype=object) ]
docs.zonos.com
DKAN Datastore¶ DKAN Datastore bundles a number of modules and configuration to allow users to upload CSV files, parse them and save them into the native database as flat tables, allowing users to query them through a public API. Drupal Architecture The DKAN Datastore’s importer is a wrapper around the Feeds module. The custom Feeds Flatstore Processor and Feeds Field Fetcher plugins were created to make the file uploaded to the resource form a feed item. The Data module is used to manage datastore tables’ schema. The Datastore API uses the Services module to provide an endpoint, although nearly all the underlying functionality is overridden and provided directly by the DKAN Datastore API module. Getting Started¶ When you create a dataset with resources, you have data in DKAN which you can display and store in several ways. However, DKAN is still reading this data directly from the file or API you added as a resource. To get the fullest functionality possible out of your datasets, you should add your CSV resources to the datastore. If you are exploring a resource that is not yet in the datastore, you will see a message advising you of this. Click the “Manage Datastore” button at the top of the screen. On the “Manage Datastore” page, confirm that the delimiter and file encoding options are correct, then use the “Import” button at the bottom of the page to import the data from your file or API into DKAN’s local datastore. Your data is now ready to use via the API! Click the “Data API” button at the top of the resource screen for specific instructions. TAB delimiter support¶ DKAN supports TAB delimiters for csv files and other file extensions that commonly use TABs as delimiters. The autodetect format function is available for these file types (the format detected will be TSV) and the recline previews will work. The TAB delimiter support has been introduced to the datastore import functionality, so if your resource contains a csv file separated by TABs and you visit the “Manage Datastore” tab, you’ll have an option in the ‘Delimiter’ dropdown to select TAB. Once you select that option and press the ‘Import’ button, your resource will be imported and should be shown as expected in the resource preview. Processing Options¶ By default Resource files are added to the DKAN Datastore manually. This can be changed to: - Import upon form submission - Import in the background - Import periodically Changing Default Datastore Import Behavior¶ Default behavior for linked and uploaded files is controlled through the Feeds module. To access the Feeds administrative interface, enable the Feeds Admin UI module (which is included but not enabled by default in DKAN). Once turned on you can access the Feeds UI at /admin/structure/feeds. You should see two Feeds Importers by default: Import on submission¶ To import a Resource file upon saving the resource, click Import on submission in the settings section for each importer: This is not recommended for large imports as a batch screen will be triggered that will not stop until the entire file is imported. Process in background¶ This setting means that once an import has started, it will be processed in 50 row increments in the background. Processing will occur during cron. The queue of imports is managed by the Job Schedule module. Each cron run will process a maximum of 200 jobs in a maximum of 30 seconds. Note that an import won’t be started by saving the Resource form.
This will only be triggered by clicking “Import” on the “Manage Datastore” page or if triggered programmatically. This setting can be used in addition to the “Import on submission” option to start imports that will be imported in the background. Geocoder¶ DKAN’s native Datastore can use the Drupal Geocoder module to add latitude/longitude coordinates to resources that have plain-text address information. This means that datasets containing plain-text addresses can be viewed on a map using the Data Preview or other map-based data visualizations. It is not included by default with DKAN but can be downloaded here. Instructions¶ - Install and enable the Geocoder module. - Click the Manage Datastore tab on any resource with address information. - Check the “Geolocate” box. - Select the Geolocation Service you will be using. - In the Geolocate Addresses field enter the field or fields from the file that make up the address to geolocate. - Click the Import button. Geolocation Services¶ Several geolocation services are offered. Note that Nominatim is driven by Open Street Map data, which is the most open of the options offered. Geolocation Limits¶ The number of rows that can be geolocated is determined by the service you select. Google, for example, allows you to geolocate up to 2500 times per day before paying. Managing datastores with Drush¶ To create a datastore from a local file: drush dsc (path-to-local-file) To update a datastore from a local file: drush dsu (datastore-id) (path-to-local-file) To delete a datastore file (imported items will be deleted as well): drush dsfd (datastore-id) To get the URI of the datastore file: drush dsfuri (datastore-id) Using the Fast Import Option¶ DKAN Datastore’s “fast import” allows for importing huge CSV files into the datastore at a fraction of the time it would take using the regular import. When a CSV is imported using the regular import, this is what happens under the hood: - PHP interpreter reads the file line-by-line from the disk - Each time a line is parsed it sends a query to the database - The database receives the query and parses it - The database creates a query execution plan - The database executes the plan (i.e., inserts a new row) Note Steps 3, 4 and 5 are executed for each row in the CSV. The Datastore Fast Import was designed to remove as many steps as possible from the previous list. It performs the following steps: - PHP interpreter sends a LOAD DATA query to the database - The database receives the query and parses it - The database reads and imports the whole file into a table Only one query is executed, so the amount of time required to import a big dataset is drastically reduced. On a multi-megabyte file, this could mean the difference between an import time of hours and one of minutes. Requirements¶ - A MySQL / MariaDB database - MySQL database should support PDO::MYSQL_ATTR_LOCAL_INFILE and PDO::MYSQL_ATTR_USE_BUFFERED_QUERY flags. - Cronjob or similar to execute periodic imports. - Drush Set up the following command to run periodically using a cronjob or similar: drush queue-run dkan_datastore_fast_import_queue Configuration¶ To configure how Fast Import behaves go to admin/dkan/datastore. There are 3 basic configurations that control the Fast Import functionality: Either of the two “Use fast import” options will also reveal the following additional settings: Usage¶ To import a resource using Fast Import: - Create a resource using a CSV file (node/add/resource) or edit an existing one.
- Click on Manage Datastore - Make sure the status says No imported items (You can use the Drop Datastore link if needed). - Check Use Fast Import checkbox - Press import - If you get an error like SQLSTATE[28000]: invalid authorization specification: 1045 access denied for user 'drupal'@'%' (using password: yes)you will need to grant FILE permissions to your MYSQL user. To do so use this command: GRANT FILE ON *.* TO 'user-name' Note If you are using the docker-based development environment described in the DKAN Starter documentation, you will need to execute the following commands (take note that admin123 is the password of the admin user in that mysql environment): ahoy docker exec db bash mysql -u root -padmin123 GRANT FILE ON *.* TO 'drupal'; When the option “Use Fast Import” is checked, some other options become visible that affect how MySQL will parse your file: - Quote delimiters: the character that encloses the fields in your CSV file. - Lines terminated by: the character that works as line terminator in your CSV file. - Fields escaped by: the character used to escape other characters in your CSV file. Also, you can choose if the empty cells will be read as NULL or zeros by checking the box for “Read empty cells as NULL”. Datastore API¶ Once processed, Datastore information is available via the Datastore API. For more information, see the Datastore API page.
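As a rough illustration of what a query against that API can look like: many DKAN 7.x sites expose a CKAN-style search endpoint for datastore resources, but the exact path, parameters, and response format depend on your DKAN version, so treat the call below as an assumption to verify against the Datastore API page. The hostname and resource UUID are placeholders.

# hypothetical example: adjust the host, endpoint, and resource_id for your install
curl "http://dkan.example.com/api/action/datastore/search?resource_id=REPLACE-WITH-RESOURCE-UUID&limit=5"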
http://docs.getdkan.com/en/latest/components/datastore.html
2018-06-18T01:37:33
CC-MAIN-2018-26
1529267859923.59
[array(['../_images/datastore-message.png', '../_images/datastore-message.png'], dtype=object) array(['../_images/datastore-resource.png', '../_images/datastore-resource.png'], dtype=object) array(['../_images/datastore-feeds-importers.png', '../_images/datastore-feeds-importers.png'], dtype=object) array(['../_images/datastore-import-submission.png', '../_images/datastore-import-submission.png'], dtype=object) array(['../_images/datastore-geolocate.png', '../_images/datastore-geolocate.png'], dtype=object)]
docs.getdkan.com
UI Toolkit addresses an application’s need to process terminal input through an input window. An input window is a standard window that can contain text, input fields, and buttons. Input windows are defined with the .INPUT script command or at runtime with the IB_INPUT subroutine (see Building input windows at runtime). Each input field is associated with a position and length within the window, as well as a set of qualifiers that defines how the field should be processed. Qualifiers Input field qualifiers enable you to define input field characteristics, which include the following: Organization of fields You define and determine the order of the input fields and field qualifiers in an input window script using the .FIELD command. (See Script for more information on input window script commands.) Because the maintenance of your field qualifiers takes place in a script file (outside of your application), you can modify the characteristics of your input window or input fields without ever touching your actual code. The following rules apply to field organization: Processing and editing field input The input utilities support menu entries for moving between and editing input fields (I_xxx menu entries), for editing text (E_xxx menu entries), and for moving around selection windows (S_xxx). These menu entries are often referred to as reserved menu entries. Applications on UNIX and OpenVMS require that you place the corresponding reserved menu entry on a placed menu column to make the function available to the user. Refer to Appendix B: Reserved Menu Entries for a list of the specific menu entries available. On Windows, these reserved menu entries are ignored, as functions are already a part of the Windows environment. When UI Toolkit processes input for an empty field, it clears the field to blanks and then displays the default value for the field, if one exists. The user can do one of the following: As soon as the user types any character in the field, the field is cleared and the typed character is the first character of the input. If the field doesn’t have a default, input proceeds normally. Field editing is automatically performed during all input processing. Therefore, if the user moves to a field that already contains data, the user can edit it. For example, on UNIX or OpenVMS, if the entry “E_RIGHT” is present on the menu and the user selects it, Toolkit will move the cursor to the right one character and place the input position at that character. The user can go into edit mode by selecting any one of the “E_” reserved menu entries listed in the I_INPUT Discussion. Let’s assume a user of an application on UNIX or OpenVMS originally typed a telephone number as “3334545,” and the number was reformatted and displayed in the field as “333‑4545.” If the user moved back to the field and selected “E_RIGHT,” “333‑4545” would be cleared from the field and “3334545” would be redisplayed, left‑justified. The user could then edit this data. If the user presses the backspace or rubout key at a field that contains data, the field is cleared and the input is redone. If the user presses any other character, the field is cleared and the typed character is used as the first data of the input. When the user types data in an input field and presses enter, that data goes into the data area associated with the input field in your program. 
If the user terminates the input field by selecting a menu entry (for example, a search function) instead of pressing enter, the field reverts back to its original contents. You must use the I_FORCE subroutine to force the data from the internal input field buffer into the associated data area in your program if this is the action desired. (See IB_END for information on overriding this default behavior.) By default, Toolkit sets no time limit for input; it waits until I/O processing is completed before returning to the calling subroutine. You can, however, change the default, and you can override the default for a field. Then, if the user does not complete input within the time you specify, input times out. For more information, see the .FIELD qualifier WAIT. To facilitate redirection of input, UI Toolkit enables you to configure the number of successive end‑of‑file characters to allow on input. The field g_eof_max (in tkctl.def), which is set at 100 by default, specifies this maximum. After g_eof_max successive EOF characters (ctrl+d on UNIX, ctrl+z on OpenVMS) have been encountered, a message is displayed on the screen and the program stops. You can set this field to zero to disable this behavior. Text fields Your input windows can contain dimensioned alpha fields that offer full editing capabilities similar to text windows. (Note that only single‑dimension alpha arrays are supported. See Text Routines for more information about text windows.) This feature is ideal for any field in which the user must enter a significant amount of text. For example, if a window in your application has a field for comments, the user would probably appreciate being able to edit the text that she types into that field. The primary difference between edit processing in a text field and edit processing in a text window is that pressing enter in a text field terminates the field’s input (rather than inserting a new line at the cursor position). We made text input fields work this way so that they would terminate in the same way as all other input fields. But this shouldn’t be a problem; words automatically wrap to a new line during data entry. Also, you can override this action by modifying the value of g_txt_rtrn (defined in the tools.def file). On Windows, arrayed text fields are displayed as a multi‑line edit control. Because the sizing of this field includes extra leading for each line, the edit control will contain extra space into which no text can be entered. You can minimize the amount of extra space by setting the environment variable MINIMIZE_LEADING to 1 and using fixed fonts. When using proportional fonts, if the text field uses a smaller font than the input window, the input window’s background color will extend beyond the bottom of the edit control. On Unix and OpenVMS, when the user begins performing input to a text field, any specified text editing menu column is placed on the menu bar, and the user can access these editing commands as if the field were a text window. A menu column entry that inserts a new line is available, if you choose to put it in your text editing menu column. You may decide to make only a subset of the editing commands available on the menu for text field entry, since some of the entries are already available for normal field editing, and the user may not need such functions as paragraph movement or top and bottom buffer movement for small amounts of text. See Appendix B: Reserved Menu Entries for a complete list of reserved menu entries. 
Understanding display, input, and view lengths Input fields have the following length settings: Toolkit automatically calculates the display length and the input length by looking at field settings like size, type, and the format string. Toolkit then uses the greater of the display length or input length to determine the width of the field, which is referred to as the view length. Typically, Toolkit’s default calculations are sufficient, but you can override them if necessary. View length, display length, and input length can be specified in Repository, in a window script (as .FIELD qualifiers), or at runtime (with I_FLDMOD or IB_FIELD). For more information on these settings, including information on their default values, see DSP_LENGTH, NODSP_LENGTH , INPUT_LENGTH, NOINPUT_LENGTH , and VIEW_LENGTH, NOVIEW_LENGTH . Building input windows at runtime In addition to generating input window definitions from script files, you can generate them at runtime. The subroutines that begin with “IB_” (for “input build”) support runtime generation of input windows. These subroutines provide an alternative to defining your input windows in script files. We recommend that you only use these subroutines when you cannot determine the contents of your input window until runtime. Loading windows from window libraries is faster and more modular than generating windows at runtime. There is almost a one‑to‑one correspondence between the IB_xxx subroutines and the script commands that build an input window (along with I_LDINP). For example: These subroutines are also called in the same order as the input window script commands (IB_INPUT, IB_FIELD, …, IB_END). Each subroutine must be passed a build_id variable, which is initially returned by IB_INPUT. This variable should be an aligned i4. Multiple fields, sets, and structures are supported, but the fields used in a set or structure must be defined before you define a set or structure. Also, the structure used in a set must be defined before the set. IB_RPS_STRUCTURE must be called prior to the IB_FIELD calls for fields to be drawn from a repository structure. IB_STRUCTURE can only be used for local fields (fields not drawn from the Repository). Before either IB_RPS_STRUCTURE or IB_LOCAL is called, the default state is local. IB_INPUT must be called once at the beginning and IB_END once at the end to complete the input window. After IB_END returns successfully, the build_id may be discarded. At this point, the input window is ready to use. Note that build warnings for IB_xxx subroutines display only if g_dtkbounds is set to 1 or 2. And if IB_FIELD creates a text field that is too large for a window, a warning will display only if g_dtkbounds is set to 2 or greater. See g_dtkbounds.
http://docs.synergyde.com/tk/tkChap8Usinginputwindows.htm
2018-06-18T02:12:53
CC-MAIN-2018-26
1529267859923.59
[]
docs.synergyde.com
Elasticsearch¶ We strongly recommend to use a dedicated Elasticsearch cluster for your Graylog setup. If you are using a shared Elasticsearch setup, a problem with indices unrelated to Graylog might turn the cluster status to YELLOW or RED and impact the availability and performance of your Graylog setup. Important Graylog currently does not work with Elasticsearch clusters using the License or Shield plugin. Elasticsearch versions¶ Graylog hosts an embedded Elasticsearch node which is joining the Elasticsearch cluster as a client node. The following table provides an overview over the Elasticsearch version in Graylog: Caution Graylog 2.x does not work with Elasticsearch 5.x! Configuration¶ Graylog¶ The most important settings to make a successful connection are the Elasticsearch cluster name, one or more addresses of Elasticsearch master nodes, and the local network bind address. Graylog needs to know the address of at least one other Elasticsearch master node given in the elasticsearch_discovery_zen_ping_unicast_hosts setting. Vice versa, the Elasticsearch nodes need to be able to access the embedded Elasticsearch node in Graylog via the interface given in the elasticsearch_network_host setting. Cluster Name¶ You need to tell Graylog which Elasticsearch cluster to join. The Elasticsearch default cluster name is elasticsearch and configured for every Elasticsearch node in the elasticsearch.yml configuration file with the cluster.name name. Configure the same cluster name in every Graylog configuration file (e. g. graylog.conf) with the elasticsearch_cluster_name setting (default: graylog). We recommend to call the cluster graylog-production or graylog, but not elasticsearch to prevent accidental cluster name collisions. The Elasticsearch configuration file is typically located at /etc/elasticsearch/elasticsearch.yml. Network setup¶ Graylog is using unicast discovery to find all the Elasticsearch nodes in the cluster. In order for this to work, Graylog has to know some master nodes of the Elasticsearch cluster which can be provided in the elasticsearch_discovery_zen_ping_unicast_hosts configuration setting. For example, add the following lines to your Graylog configuration file for an Elasticsearch cluster which includes the 2 Elasticsearch master nodes es-node-1.example.org and es-node-2.example.org: # List of Elasticsearch master nodes to connect to elasticsearch_discovery_zen_ping_unicast_hosts = es-node-1.example.org:9300,es-node-2.example.org:9300 Additionally, Graylog has to use a network interface for the embedded Elasticsearch node which the other Elasticsearch nodes in the cluster can connect to: # Public IP address or host name of the Graylog node, accessible for the other Elasticsearch nodes elasticsearch_network_host = 198.51.100.23 Also make sure to configure Zen unicast discovery in the Elasticsearch configuration file by adding the discovery.zen.ping.multicast.enabled and discovery.zen.ping.unicast.hosts settings with the list of Elasticsearch nodes to elasticsearch.yml: discovery.zen.ping.multicast.enabled: false discovery.zen.ping.unicast.hosts: ["es-node-1.example.org:9300" , "es-node-2.example.org:9300"] The Elasticsearch default communication port is 9300/tcp (not to be confused with the HTTP interface running on port 9200/tcp by default). The communication port can be changed in the Elasticsearch configuration file ( elasticsearch.yml) with the configuration setting transport.tcp.port. 
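For example, if the default transport port is already taken on a host, it can be overridden in elasticsearch.yml. The port number below is only an illustration, and the elasticsearch_discovery_zen_ping_unicast_hosts entries shown above would then need to reference the same port:

# elasticsearch.yml: move the transport port off the default 9300
transport.tcp.port: 9350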
Last but not least, make sure that Elasticsearch is binding to a network interface that Graylog can connect to (see network.host and Commonly Used Network Settings). Configuration of Elasticsearch nodes¶ Disable dynamic scripting¶ Elasticsearch prior to version 1.2 had an insecure default configuration which could lead to a remote code execution. Make sure to add the following settings to the elasticsearch.yml file to disable the dynamic scripting feature and prevent possible remote code executions: script.inline: false script.indexed: false script.file: false Details about dynamic scripting can be found in the reference documentation of Elasticsearch. Control access to Elasticsearch ports¶ Since Elasticsearch has no authentication mechanism at time of this writing, make sure to restrict access to the Elasticsearch ports (default: 9200/tcp and 9300/tcp). Otherwise the data is readable by anyone who has access to the machine over network. Open file limits¶ Because Elasticsearch has to keep a lot of files open simultaneously it requires a higher open file limit that the usual operating system defaults allow. Set it to at least 64000 open file descriptors. Graylog will show a notification in the web interface when there is a node in the Elasticsearch cluster which has a too low open file limit. Read about how to raise the open file limit in the corresponding Elasticsearch documentation page. Heap size¶ It is strongly recommended to raise the standard size of heap memory allocated to Elasticsearch. Just set the ES_HEAP_SIZE environment variable to for example 24g to allocate 24GB. We recommend to use around 50% of the available system memory for Elasticsearch (when running on a dedicated host) to leave enough space for the system caches that Elasticsearch uses a lot. But please take care that you don’t cross 32 GB! Merge throttling¶ Elasticsearch is throttling the merging of Lucene segments to allow extremely fast searches. This throttling however has default values that are very conservative and can lead to slow ingestion rates when used with Graylog. You would see the message journal growing without a real indication of CPU or memory stress on the Elasticsearch nodes. It usually goes along with Elasticsearch INFO log messages like this: now throttling indexing When running on fast IO like SSDs or a SAN we recommend to increase the value of the indices.store.throttle.max_bytes_per_sec in your elasticsearch.yml to 150MB: indices.store.throttle.max_bytes_per_sec: 150mb Play around with this setting until you reach the best performance. Avoiding split-brain and shard shuffling¶ Split-brain events¶ Elasticsearch sacrifices consistency in order to ensure availability, and partition tolerance. The reasoning behind that is that short periods of misbehaviour are less problematic than short periods of unavailability. In other words, when Elasticsearch nodes in a cluster are unable to replicate changes to data, they will keep serving applications such as Graylog. When the nodes are able to replicate their data, they will attempt to converge the replicas and to achieve eventual consistency. Elasticsearch tackles the previous by electing master nodes, which are in charge of database operations such as creating new indices, moving shards around the cluster nodes, and so forth. Master nodes coordinate their actions actively with others, ensuring that the data can be converged by non-masters. The cluster nodes that are not master nodes are not allowed to make changes that would break the cluster. 
The previous mechanism can in some circumstances fail, causing a split-brain event. When an Elasticsearch cluster is split into two sides, both thinking they are the master, data consistency is lost as the masters work independently on the data. As a result the nodes will respond differently to same queries. This is considered a catastrophic event, because the data from two masters can not be rejoined automatically, and it takes quite a bit of manual work to remedy the situation. Avoiding split-brain events¶ Elasticsearch nodes take a simple majority vote over who is master. If the majority agrees that they are the master, then most likely the disconnected minority has also come to conclusion that they can not be the master, and everything is just fine. This mechanism requires at least 3 nodes to work reliably however, because one or two nodes can not form a majority. The minimum amount of master nodes required to elect a master must be configured manually in elasticsearch.yml: # At least NODES/2+1 on clusters with NODES > 2, where NODES is the number of master nodes in the cluster discovery.zen.minimum_master_nodes: 2 The configuration values should typically for example: Some of the master nodes may be dedicated master nodes, meaning they are configured just to handle lightweight operational (cluster management) responsibilities. They will not handle or store any of the cluster’s data. The function of such nodes is similar to so called witness servers on other database products, and setting them up on dedicated witness sites will greatly reduce the chance of Elasticsearch cluster instability. A dedicated master node has the following configuration in elasticsearch.yml: node.data: false node.master: true Custom index mappings¶ Sometimes it’s useful to not rely on Elasticsearch’s dynamic mapping but to define a stricter schema for messages. Note If the index mapping is conflicting with the actual message to be sent to Elasticsearch, indexing that message will fail. Graylog itself is using a default mapping which includes settings for the timestamp, full_message, and source fields of indexed messages: $ curl -X GET '' { "graylog-internal" : { "order" : -2147483648, "template" : "graylog_*", "settings" : { }, "mappings" : { "message" : { "_ttl" : { "enabled" : true }, "_source" : { "enabled" : true }, "dynamic_templates" : [ { "internal_fields" : { "mapping" : { "index" : "not_analyzed", "type" : "string" }, "match" : "gl2_*" } }, { "store_generic" : { "mapping" : { "index" : "not_analyzed" }, "match" : "*" } } ], "properties" : { "full_message" : { "analyzer" : "standard", "index" : "analyzed", "type" : "string" }, "streams" : { "index" : "not_analyzed", "type" : "string" }, "source" : { "analyzer" : "analyzer_keyword", "index" : "analyzed", "type" : "string" }, "message" : { "analyzer" : "standard", "index" : "analyzed", "type" : "string" }, "timestamp" : { "format" : "yyyy-MM-dd HH:mm:ss.SSS", "type" : "date" } } } }, "aliases" : { } } } In order to extend the default mapping of Elasticsearch and Graylog, you can create one or more custom index mappings and add them as index templates to Elasticsearch. 
Let’s say we have a schema for our data like the following: This would translate to the following additional index mapping in Elasticsearch: "mappings" : { "message" : { "properties" : { "http_method" : { "type" : "string", "index" : "not_analyzed" }, "http_response_code" : { "type" : "long" }, "ingest_time" : { "type" : "date", "format": "strict_date_time" }, "took_ms" : { "type" : "long" } } } } The format of the ingest_time field is described in the Elasticsearch documentation about the format mapping parameter. Also make sure to check the Elasticsearch documentation about Field datatypes. In order to apply the additional index mapping when Graylog creates a new index in Elasticsearch, it has to be added to an index template. The Graylog default template ( graylog-internal) has the lowest priority and will be merged with the custom index template by Elasticsearch. Warning If the default index mapping and the custom index mapping cannot be merged (e. g. because of conflicting field datatypes), Elasticsearch will throw an exception and won’t create the index. So be extremeley cautious and conservative about the custom index mappings! Creating a new index template¶ Save the following index template for the custom index mapping into a file named graylog-custom-mapping.json: { "template": "graylog_*", "mappings" : { "message" : { "properties" : { "http_method" : { "type" : "string", "index" : "not_analyzed" }, "http_response_code" : { "type" : "long" }, "ingest_time" : { "type" : "date", "format": "strict_date_time" }, "took_ms" : { "type" : "long" } } } } } Finally, load the index mapping into Elasticsearch with the following command: $ curl -X PUT -d @'graylog-custom-mapping.json' '' { "acknowledged" : true } Every Elasticsearch index created from that time on, will have an index mapping consisting of the original graylog-internal index template and the new graylog-custom-mapping template: $ curl -X GET '' { "graylog_2" : { "mappings" : { "message" : { "_ttl" : { "enabled" : true }, "dynamic_templates" : [ { "internal_fields" : { "mapping" : { "index" : "not_analyzed", "type" : "string" }, "match" : "gl2_*" } }, { "store_generic" : { "mapping" : { "index" : "not_analyzed" }, "match" : "*" } } ], "properties" : { "full_message" : { "type" : "string", "analyzer" : "standard" }, "http_method" : { "type" : "string", "index" : "not_analyzed" }, "http_response_code" : { "type" : "long" }, "ingest_time" : { "type" : "date", "format" : "strict_date_time" }, "message" : { "type" : "string", "analyzer" : "standard" }, "source" : { "type" : "string", "analyzer" : "analyzer_keyword" }, "streams" : { "type" : "string", "index" : "not_analyzed" }, "timestamp" : { "type" : "date", "format" : "yyyy-MM-dd HH:mm:ss.SSS" }, "took_ms" : { "type" : "long" } } } } } } Deleting custom index templates¶ If you want to remove an existing index template from Elasticsearch, simply issue a DELETE request to Elasticsearch: $ curl -X DELETE '' { "acknowledged" : true } After you’ve removed the index template, new indices will only have the original index mapping: $ curl -X GET '' { "graylog_3" : { "mappings" : { "message" : { "_ttl" : { "enabled" : true }, "dynamic_templates" : [ { "internal_fields" : { "mapping" : { "index" : "not_analyzed", "type" : "string" }, "match" : "gl2_*" } }, { "store_generic" : { "mapping" : { "index" : "not_analyzed" }, "match" : "*" } } ], "properties" : { "full_message" : { "type" : "string", "analyzer" : "standard" }, "message" : { "type" : "string", "analyzer" : "standard" }, "source" : 
{ "type" : "string", "analyzer" : "analyzer_keyword" }, "streams" : { "type" : "string", "index" : "not_analyzed" }, "timestamp" : { "type" : "date", "format" : "yyyy-MM-dd HH:mm:ss.SSS" } } } } } } Cluster Status explained¶ Elasticsearch provides a classification for the cluster health. The cluster status applies to different levels: - Shard level - see status descriptions below - Index level - inherits the status of the worst shard status - Cluster level - inherits the status of the worst index status That means that the Elasticsearch cluster status can turn red if a single index or shard has problems even though the rest of the indices/shards are okay. Note Graylog checks the status of the current write index while indexing messages. If that one is GREEN or YELLOW, Graylog will continue to write messages into Elasticsearch regardless of the overall cluster status. Explanation of the different status levels: RED¶ The RED status indicates that some or all of the primary shards are not available. In this state, no searches can be performed until all primary shards have been restored. YELLOW¶ The YELLOW status means that all of the primary shards are available but some or all shard replicas are not. With only one Elasticsearch node, the cluster state cannot become green because shard replicas cannot be assigned. In most cases, this can be solved by adding another Elasticsearch node to the cluster or by reducing the replication factor of the indices (which means less resiliency against node outages, though).
http://docs.graylog.org/en/2.1/pages/configuration/elasticsearch.html
2018-06-18T02:09:21
CC-MAIN-2018-26
1529267859923.59
[]
docs.graylog.org
Introduction AppendPDF Pro can stamp barcodes using a Type 1 barcode font, or directly using the Barcode Type. For more information on stamping with Type 1 fonts see Font (optional). Specify stamping a barcode by setting the Type parameter to Barcode: Type (Barcode) AppendPDF Pro only supports Code 128 barcodes; additional codes will be added in future releases. Barcode parameters Code Specifies the barcode type. Use instead of Font and FontFile. AppendPDF Pro currently supports Code 128. Use: Code (128) to specify a Code 128 barcode. AppendPDF Pro uses Code 128 character set B. A simple Code 128 barcode (i.e., one character set) consists of a start character, the coded data, a calculated symbol check character, and an end character. Example The figure below shows a stamp item that stamps a plain barcode at the bottom of the page.
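Since that figure is not reproduced here, the following is a minimal, hypothetical sketch of the barcode-related lines of a stamp item. Only the Type and Code parameters shown above come from this page; the other parameters a complete stamp item needs (position, the text to encode, and so on) are documented elsewhere and are omitted.

Type (Barcode)
Code (128)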
https://docs.appligent.com/appendpdf-pro/stamp-files/barcodes/
2018-06-18T02:04:09
CC-MAIN-2018-26
1529267859923.59
[array(['/files/2013/03/barcodeexample.jpg', 'Barcode stamp'], dtype=object)]
docs.appligent.com
If you’d like to automate the download of these files, you should be able to do so using any HTTP programming toolkit. Your client must accept cookies and follow any redirects in order to function.
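As a rough sketch, a command-line client such as curl can meet both requirements by keeping a cookie jar and following redirects. The URL below is a placeholder rather than a real Evergreen endpoint; substitute the address of the file you want to retrieve.

# keep cookies across requests (-c/-b) and follow any redirects (-L)
curl -L -c cookies.txt -b cookies.txt -o report.csv "https://evergreen.example.org/path/to/exported/file"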
http://docs.evergreen-ils.org/2.8/_automating_the_download.html
2017-04-23T13:54:45
CC-MAIN-2017-17
1492917118707.23
[]
docs.evergreen-ils.org
Integration with Kendo UI for the Web As of the Kendo UI Q1 2014 release, the Kendo UI hybrid for mobile devices can be used alongside the Kendo UI widgets for the web in a regular web page, without an active mobile application instance. Basic Usage This approach is suitable if you use Kendo UI hybrid UI with third-party Single-Page Application (SPA) frameworks like Angular or Backbone, or if you develop a mobile version of a web site which does not need a native mobile app look. The Kendo UI Web CSS files contain the necessary rules, so that a unified look can be achieved. Important In addition to kendo.common.css and the skin stylesheet, the hybrid mobile widgets need one additional reference—kendo.[skin].mobile.css or kendo.[skin].mobile.min.css, where [skin] is your current Kendo UI web skin name. The stylesheets are available in the Web/Complete bundles. For instance, if the Silver Kendo UI web skin should be used for styling Kendo UI web and hybrid widgets, the stylesheet references shown in the example below are needed. Example <link href="styles/kendo.common.min.css" rel="stylesheet" type="text/css" /> <link href="styles/kendo.silver.min.css" rel="stylesheet" type="text/css" /> <link href="styles/kendo.silver.mobile.min.css" rel="stylesheet" type="text/css" /> Additionally, these web mobile skins can be used with a normal hybrid mobile Kendo UI Application. Note that they cannot be used with the Kendo UI mobile platform styling, so the Kendo UI mobile platform CSS—even the common styling—should not be loaded (everything needed is already included). Getting Started Instantiate the Hybrid Mobile Switch The example below demonstrates how to instantiate a hybrid mobile Switch widget. Example <input type="checkbox" id="my-switch" /> <script> $("#my-switch").kendoMobileSwitch(); </script> Known Limitations - As a mobile application instance is missing, its features—declarative widget initialization, view transitions, and browser history binding among others—do not work. - Unlike the application mode, this mode primarily targets mobile web sites. Thus the mobile OS skins—Android/iOS—are not supported. - Certain ListView features—pull to refresh, endless scrolling, press to load more, fixed headers—rely on the mobile Scroller. The ListView widget should be instantiated in a mobile Scroller widget element. - The mobile Drawer widget should have its container configuration option set. The Drawer is not going to close automatically when navigation is performed. See also Other articles on the integration of Kendo UI hybrid components are available in the documentation.
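Regarding the Drawer limitation noted above, a minimal sketch might look like the following. The element IDs are placeholders and the value passed to container depends on your page structure, so treat this as an assumption to adapt rather than a drop-in snippet.

<div id="drawer">Drawer content</div>
<div id="content">Page content</div>
<script>
  // Outside a mobile application instance, tell the Drawer which element it belongs to.
  $("#drawer").kendoMobileDrawer({ container: "#content" });
</script>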
http://docs.telerik.com/kendo-ui/controls/hybrid/support/regular-usage
2017-04-23T13:50:32
CC-MAIN-2017-17
1492917118707.23
[]
docs.telerik.com
If you have a four channel SD soundcard you are able to configure the rear channels for cue channel output. Note: The Cue channel feature is available in the CD Scratch 1200 Deluxe product. For details on purchasing a CD Scratch 1200 Deluxe license click here. Click the icon on the toolbar. CD track. Tips:. Note: The cue channel feature is available in the CD Scratch 1200 Deluxe product. To purchase a CD Scratch 1200 Deluxe license click here. Cue channel output options How to optimize Air & Cue cross talk How to remove "ON-AIR" signal from the cue channel Adjusting cue channel levels How to set up your soundcard (main channel) Output Configuration dialog box
http://docs.otslabs.com/CDScratch/help/soundcard_configuration/cue/how_to_configure_a_four_channel_soundcard_for_cue.htm
2017-04-23T13:55:32
CC-MAIN-2017-17
1492917118707.23
[]
docs.otslabs.com
The site validation master plan describes the assessment of the validation for the site's facilities, utilities, computer systems, and manufacturing processes. Design Transfer SOP Medical Device Design Inputs Complaint Handling - Customer Complaints Clinical Evaluation for Medical Devices in Development Design Control SOP Risk Management Process FMEA Failure Mode and Effect Analysis Medical Device Design Outputs Compliance Checklist CFR 820 Medical Device Design Changes SOP Development Project Initialization and Design Review Management Review for Medical Devices Vendor, Supplier and Contractor Audit Medical Device Design Verification SOP Combination Products SOP Supplier Approval, Qualification and Certification SOP Standard Operating Procedure Template - SOP Template Change Control SOP Design History File (DHF) SOP Medical Device Design Validation SOP
http://www.qm-docs.com/glossary/entry/Site+Validation+Master+Plan
2017-04-23T13:50:56
CC-MAIN-2017-17
1492917118707.23
[]
www.qm-docs.com
animation-play-state animation-play-state This article is Ready to Use. W3C Editor's Draft Summary Defines whether an animation is running or paused. Overview table Syntax animation-play-state: paused animation-play-state: running Values - running - Plays the animation. If restarting a paused animation, the animation resumes from the current (paused) state. - paused - Pauses the animation. A paused animation continues to display the current state of the animation. Compatibility Desktop Mobile Examples The CSS uses the animation property and the @keyframes property as well as the animation-play-state property and more. The example show how to create a counter like function. By using the ":checked" selector for radio buttons we toggle the animation states for the counter CSS /* position the handles */ #stopwatch_handles { margin-top: 0px; } /* Style the labels itself, at the bottom we hide the radio buttons itself */ #stopwatch_handles label { cursor: pointer; padding: 5px 5px; font-family: Verdana, sans-serif; font-size: 12px; } input[name="handles"] {display: none;} /*Actual handles this triggers the stopwatch to start and stop based on the state of the radio buttons */ #stopbtn:checked~.stopwatch .numbers { animation-play-state: paused } #startbtn:checked~.stopwatch .numbers { animation-play-state: running } /* we set the animation in 10 steps of 1 second, and set the play state to paused by default */ .moveten { animation: moveten 1s steps(10, end) infinite; animation-play-state: paused; } /* here we do the same except for six */ .movesix { animation: movesix 1s steps(6, end) infinite; animation-play-state: paused; } /* here we actualy set the duration of the seconds so that they sync up when needed */ .second { animation-duration: 10s; } .tensecond { animation-duration: 60s; } /* and here are the keyframes so that the numbers animate vertically The height is 30 and the there are 10 digits so to move up we use -300px (30x10) */ @keyframes moveten { 0% {top: 0;} 100% {top: -300px;} } /* The same goes for this one but instead of ten we have 6 so we get 30x6 = 180px */ @keyframes movesix { 0% {top: 0;} 100% {top: -180px;} } View live exampleA mobile-like interface featuring a keyframe-animated pulsing icon. When the application enters an interruption mode, the icon is paused and the page presents another panel to indicate that the animation is inactive. CSS div.selected { animation: pulse 0.5s infinite alternate running; } body.interrupt div.selected { animation-play-state: paused; } @keyframes pulse { from { transform : scale(1) translateX(0); opacity : 1; } to { transform : scale(0.75) translateX(0); opacity : 0.25; } } Usage Can also be a comma-separated list of play states, e.g., running, paused, running, where each play state is applied to the corresponding ordinal position value of the animation-name property. Related specifications See also Other articles - Making things move with CSS3 animations - @keyframes - animation - animation-delay - animation-direction - animation-duration - animation-fill-mode - animation-iteration-count - animation-name - animation-timing-function Attribution This article contains content originally from external sources. Portions of this content come from the Microsoft Developer Network: Windows Internet Explorer API reference Article
https://docs.webplatform.org/wiki/css/properties/animation-play-state
2015-02-27T06:00:25
CC-MAIN-2015-11
1424936460576.24
[]
docs.webplatform.org
Logging on to the Mac after joining a domain When using Auto Zone, all Active Directory users in the domain become valid users on a joined computer. To verify that Centrify is working properly, you can simply log into the Mac computer by using an Active Directory account. On the Mac login screen, select Other and enter an Active Directory user name and password:
https://docs.centrify.com/Content/mac-admin/DomainLogOnAfterJoining.htm
2022-01-16T19:10:05
CC-MAIN-2022-05
1642320300010.26
[]
docs.centrify.com
New Relic's VMware vSphere integration helps you understand the health and performance of your vSphere environment. You can: - Query data to get insights on the performance on your hypervisors, virtual machines, and more. - Go from high level views down to the most granular data. vSphere data visualized in a New Relic dashboard includes operating systems, status, average CPU and memory consumption, and more. Our integration uses the vSphere API to collect metrics and events generated by all vSphere's components, and forwards the data to our platform via the infrastructure agent. Why it matters With our vSphere integration you can: Instrument and monitor multiple vSphere instances using the same account. Collect data on snapshots, VMs, hosts, resource pools, clusters, and datastores, including tags. Monitor the health of your hypervisors and VMs using our charts and dashboards. Use the data retrieved to monitor key performance and key capacity scaling indicators. Set alerts based on any metrics collected from vCenter. Create workloads to group resources and focus on key data. You can create workloads using data collected via the vSphere integration. Compatibility and requirements Our integration is compatible with VMware vSphere 6.5 or higher. Before installing the integration, make sure that you meet the following requirements: - Infrastructure agent installed on a host - vCenter service account having at least read-only global permissions with the propagate to childrenoption checked Important Large environments: In environments with more than 800 virtual machines, the integration cannot report all data and may fail. We offer a workaround that will preserve all metrics and events, but it will disable entity registration. To apply the workaround, add the following environment variable to the configuration file: EVENTS: trueMETRICS: true Install and activate To install the vSphere integration, choose your setup: Configure the integration An integration's YAML-format configuration is where you can place required login credentials and configure how data is collected. Which options you change depend on your setup and preference. To configure the vSphere integration, you must define the URL of the vSphere API endpoint, and your vSphere username and password. For configuration examples, see the sample configuration files. Some vSphere integration features are optional and can be enabled via configuration settings. In addition, with secrets management, you can configure on-host integrations with New Relic's infrastructure monitoring agent to use sensitive data (such as passwords) without having to write them as plain text into the integration's configuration file. Important If you connect the integration directly to the ESXi host, vCenter data is not available (for example, events, tags, or datacenter metadata). Example configuration Here are examples of the vSphere integration configuration, including performance metrics: vsphere-config.yml.sample(Linux) vsphere-win-config.yml.sample(Windows) vsphere-performance.metrics(Performance metrics) For more information, see our documentation about the general structure of on-host integration configurations. Important The configuration option inventory_source is not compatible with this integration. Update your integration On-host integrations do not automatically update. For best results, regularly update the integration package and the infrastructure agent. View and use data Data from this service is reported to an integration dashboard. 
You can query this data for troubleshooting purposes or to create charts and dashboards. vSphere data is attached to these event types: VSphereHostSample VSphereClusterSample VSphereVmSample VSphereDatastoreSample VSphereDatacenterSample VSphereResourcePoolSample VSphereSnapshotVmSample Performance data is enabled and configured separately (see Enable and configure performance metrics). For more on how to view and use your data, see Understand integration data. Metric data The vSphere integration provides metric data attached to the following New Relic events: VSphereHostSample VSphereVmSample VSphereDatastoreSample VSphereDatacenterSample VSphereResourcePoolSample VSphereClusterSample VSphereSnapshotVmSample
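Once data is flowing, a quick way to confirm it in the query builder is a simple NRQL query against one of the event types listed above. The query below uses only standard NRQL and the documented VSphereHostSample event; specific attribute names should be checked against the data your account actually receives.

FROM VSphereHostSample SELECT * SINCE 30 minutes ago LIMIT 10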
https://docs.newrelic.com/docs/infrastructure/host-integrations/host-integrations-list/vmware-vsphere-monitoring-integration
2022-01-16T19:34:53
CC-MAIN-2022-05
1642320300010.26
[]
docs.newrelic.com
9.2. Lesson: Plugin-uri QGIS Utile¶ Acum, că puteți instala, activa și dezactiva plugin-uri, să vedem cum vă poate ajuta în practică acest lucru, privind la câteva exemple de plugin-uri utile. Scopul acestei lecții: De a vă familiariza cu interfața plugin-urilor și de a face cunoștință cu unele plugin-uri utile. 9.2.1. Follow Along: The QuickMapServices Plugin¶ The QuickMapServices plugin is a simple and easy to use plugin that adds base maps to your QGIS project. It has many different options and settings, let’s start to explore some of its features. Start a new map and add the roads layer from the training_dataGeopackage. Install the QuickMapServices plugin. Open the plugin’s search tab by clicking on. This option of the plugin allows you to filter the available base maps by the current extent of the map canvas. Click on the Filter by extent and you should see one service available. Click on the Add button next to the map to load it. The base map will be loaded and you will have a satellite background for the map. The QuickMapServices plugin makes a lot of base maps available. Close the Search QMS panel we opened before Click again on. The first menu lists different map providers with available maps: But there is more. If the default maps are not enough for you, you can add other map providers. Click on More services tab.and go to the Read carefully the message of this tab and if you agree click on the Get Contributed pack button. If you now open themenu you will see that more providers are available. Choose the one that best fits your needs! 9.2.2. Follow Along: The QuickOSM Plugin¶ With an incredible simple interface, the QuickOSM plugin allows you to download OpenStreetMap data. Start a new empty project and add the roads layer from the training_dataGeoPackage. Install the QuickOSM plugin. The plugin adds two new buttons in the QGIS Toolbar and is accessible in themenu. Open the QuickOSM dialog. The plugin has many different tabs: we will use the Quick Query one. You can download specific features by selecting a generic Key or be more specific and choose a specific Key and Value pair. Sfat if you are not familiar with the Key and Value system, click on the Help with key/value button. It will open a web page with a complete description of this concept of OpenStreetMap. Look for railway in the Key menu and let the Value be empty: so we are downloading all the railway features without specifying any values. Select Layer Extent in the next drop-down menu and choose roads. Click on the Run query button. After some seconds the plugin will download all the features tagged in OpenStreetMap as railway and load them directly into the map. Nothing more! All the layers are loaded in the legend and are shown in the map canvas. 9.2.3. Follow Along: The QuickOSM Query engine¶ The quickest way to download data from QuickOSM plugin is using the Quick query tab and set some small parameters. But if you need some more specific data? If you are an OpenStreetMap query master you can use QuickOSM plugin also with your personal queries. QuickOSM has an incredible data parser that, together with the amazing query engine of Overpass, lets you download data with your specific needs. For example: we want to download the mountain peaks that belongs into a specific mountain area known as Dolomites. You cannot achieve this task with the Quick query tab, you have to be more specific and write your own query. Let’s try to do this. Start a new project. Open the QuickOSM plugin and click on the Query tab. 
Copy and paste the following code into the query canvas: <!-- This shows all mountains (peaks) in the Dolomites. You may want to use the "zoom onto data" button. => --> <osm-script <!-- search the area of the Dolomites --> <query type="area"> <has-kv <has-kv <has-kv </query> <print mode="body" order="quadtile"/> <!-- get all peaks in the area --> <query type="node"> <area-query/> <has-kv </query> <print mode="body" order="quadtile"/> <!-- additionally, show the outline of the area --> <query type="relation"> <has-kv <has-kv <has-kv </query> <print mode="body" order="quadtile"/> <recurse type="down"/> <print mode="skeleton" order="quadtile"/> </osm-script> Notă This query is written in a xmllike language. If you are more used to the Overpass QLyou can write the query in this language. And click on Run Query: The mountain peaks layer will be downloaded and shown in QGIS: You can write complex queries using the Overpass Query language. Take a look at some example and try to explore the query language. 9.2.4. Follow Along: The DataPlotly Plugin¶ The DataPlotly plugin allows you to create D3 plots of vector attributes data thanks to the plotly library. Start a new project Load the sample_points layer from the exercise_data/pluginsfolder Install the plugin following the guidelines described in Follow Along: Instalarea Noilor Plugin-uri searching Data Plotly Open the plugin by clicking on the new icon in the toolbar or in themenu In the following example we are creating a simple Scatter Plot of two fields of the sample_points layer. In the DataPlotly Panel: Choose sample_points in the Layer filter, cl for the X Field and mg for the Y Field: If you want you can change the colors, the marker type, the transparency and many other settings: try to change some parameters to create the plot below. Once you have set all the parameters, click on the Create Plot button to create the plot. The plot is interactive: this means you can use all the upper buttons to resize, move, or zoom in/out the plot canvas. Moreover, each element of the plot is interactive: by clicking or selecting one or more point on the plot, the corresponding point(s) will be selected in the plot canvas. You can save the plot as a png static image or as an html file by clicking on the or on the button in the lower right corner of the plot. There is more. Sometimes it can be useful to have two (or more) plots showing different plot types with different variables on the same page. Let’s do this! Go back to the main plot settings tab by clicking on the button in the upper left corner of the plugin panel Change the Plot Type to Box Plot Choose group as Grouping Field and ph as Y Field In the lower part of the panel, change the Type of Plot from SinglePlot to SubPlots and let the default option Plot in Rows selected. Once done click on the Create Plot button to draw the plot Now both scatter plot and box plot are shown in the same plot page. You still have the chance to click on each plot item and select the corresponding features in the map canvas. 9.2.5. In Conclusion¶ Sunt disponibile multe plugin-uri utile pentru QGIS. Folosind instrumentele încorporate, pentru instalarea și gestionarea acestor plugin-uri, puteți găsi noi plugin-uri și să efectuați o utilizare optimă a acestora.
https://docs.qgis.org/3.16/ro/docs/training_manual/qgis_plugins/plugin_examples.html
Row level security (RLS)

Using row level security, you can restrict data that appears in search results and pinboards by group. Row-level security rules apply to users through the groups they are a member of. The rules restrict the visible data when users:

- view a table
- view a Worksheet derived from the table
- view answers from restricted data - either that they've created or that were shared with them
- interact with pinboards from restricted data - either that they've created or that were shared with them

Search suggestions also fall under row-level security. If a user would not have access to the row data, then values from the row do not appear in Search suggestions.

If you are using passthrough security for a Snowflake or Google BigQuery connection, search suggestions may not fall under row-level security. Note that passthrough security for Google BigQuery is in Beta and off by default in 7.0. When using passthrough security, ThoughtSpot builds the search index as the user who created the connection. This user may have less restrictive row-level security, or may be able to see all data. Other users may therefore see search suggestions for columns or values they should not see; they cannot run queries on these columns or values, however. If you are using passthrough security, ThoughtSpot recommends you turn off indexing for sensitive columns.

Why use RLS?

RLS allows you to set up flexible rules that are self-maintaining. An RLS configuration can handle thousands of groups. There are several reasons you might want to use row level security:

- Hide sensitive data from groups who should not see it. In a report with customer details, hide potential customers (those who have not yet completed their purchase) from everyone except the sales group.
- Filter tables to reduce their size, so that only the relevant data is visible. Reduce the number of rows that appear in a very large table of baseball players, so that players who are no longer active are not shown except to historians.
- Enable creation of a single pinboard or visualization which can display different data depending on the group who is accessing it. Create one sales pinboard that shows only the sales in the region of the person who views it. This effectively creates a personalized pinboard, depending on the viewer's region. (A sketch of such a rule appears at the end of this page.)

Related information

- To continue learning about RLS, see How rule-based RLS works.
- Search suggestions rely on compiled indices to present suggestions to users from your data. See Manage suggestion indexing to learn how to configure suggestions.
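To make the region-based pinboard example above concrete, an RLS rule in ThoughtSpot is written as a boolean expression on the table. The sketch below is illustrative only: it assumes the sales table has a Region column whose values match group names, and it relies on the special ts_groups variable, which stands for the names of the groups the signed-in user belongs to; the sales_admins group is hypothetical.

    Region = ts_groups or ts_groups = 'sales_admins'

With a rule like this, a user in the west group sees only rows where Region is 'west', while members of the hypothetical sales_admins group see every row.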
https://docs.thoughtspot.com/software/7.0/security-rls.html
Lasers with different elements: fire, light, magic and others. Works with collisions!

My social networks: YouTube | Second YouTube channel | Twitter | Artstation | Instagram | Discord (Support)

Features:
- 10 high-quality lasers with hit and flash effects made in Niagara
- Type of Emitters: CPU | Ribbon | Beam | Mesh Emitters
- Number of Niagara Effects: 30
- LODs: No
- Number of Blueprints: 12
- Number of Textures: 39
- Number of Materials and Material Instances: 34
- Number of Material Functions: 1
- Number of models: 2
- Supported Development Platforms: PC | Mobiles | Consoles | VR | WEB
https://docs.unrealengine.com/marketplace/en-US/product/3d-lasers
AnimationNodeTransition

Inherits: AnimationNode < Resource < Reference < Object

A generic animation transition node for AnimationTree.

Description

Simple state machine for cases which don't require a more advanced AnimationNodeStateMachine. Animations can be connected to the inputs and transition times can be specified.

Property Descriptions

- input_count: The number of available input ports for this node.
- xfade_time: Cross-fading time (in seconds) between each animation connected to the inputs.
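A minimal GDScript sketch (Godot 3.x) of how this node is typically driven from code. The scene layout is an assumption for illustration: an AnimationTree whose tree_root is an AnimationNodeBlendTree containing an AnimationNodeTransition named "Transition".

    # Sketch only: node names and paths below are assumptions.
    onready var anim_tree = $AnimationTree

    func _ready():
        var transition = anim_tree.tree_root.get_node("Transition")
        transition.input_count = 3    # three animations can be connected to this node
        transition.xfade_time = 0.25  # cross-fade between inputs over 0.25 seconds

    func switch_to(input_index):
        # Selecting a different input makes the node cross-fade to it over xfade_time.
        anim_tree["parameters/Transition/current"] = input_index

Switching the current input (for example switch_to(1)) blends from the previously active animation to the newly selected one over the configured cross-fade time.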
https://docs.godotengine.org/ko/latest/classes/class_animationnodetransition.html
Metrics Collector

Introduction

When running applications in Kubernetes, observability is key. K8ssandra includes Prometheus and Grafana for storage and visualization of metrics associated with the Cassandra cluster. Metrics Collector for Apache Cassandra (MCAC) is the key to providing useful metrics for K8ssandra users.

MCAC is deployed to your Kubernetes environment by K8ssandra. If you haven't already installed K8ssandra, see the install topics.

MCAC aggregates OS and Cassandra metrics along with diagnostic events to facilitate problem resolution and remediation. K8ssandra provides preconfigured Grafana dashboards to visualize the collected metrics.

About Metric Collector

- Built on collectd, a popular, well-supported, open source metric collection agent. With over 90 plugins, you can tailor the solution to collect the metrics most important to you and ship them wherever you need. Cassandra sends metrics and other structured events to collectd over a local Unix socket.
- Fast and efficient. MCAC can track over 100k unique metric series per node, that is, metrics for hundreds of Cassandra tables.
- Comes with extensive dashboards out of the box. The Cassandra dashboards let you aggregate latency accurately across all nodes, a datacenter, or a rack, down to an individual table.

Design principles:

- Little or no performance impact to Cassandra
- Simple to deploy via the K8ssandra install, and self managed
- Collect all OS and Cassandra metrics by default
- Keep historical metrics on node for analysis
- Provide useful integration with Prometheus and Grafana

Supported versions of Apache Cassandra: 2.2+ (2.2.X, 3.0.X, 3.11.X, 4.0)

Sample metrics in Grafana

Cassandra node-level metrics are reported in the Prometheus format, covering everything from operations per second and latency to compaction throughput and heap usage. The preconfigured dashboards include overview, OS-level, and cluster-level views.

Architecture details

K8ssandra uses the kube-prometheus-stack, a Helm chart from the Prometheus Community project, to deploy Prometheus and Grafana and connect them to Cassandra. Let's walk through this architecture from left to right. We'll provide links to the Kubernetes documentation so you can dig into those concepts more if you'd like to.

- The Cassandra nodes in a K8ssandra-managed cluster are organized in one or more datacenters, each of which is composed of one or more racks. Each rack represents a failure domain, with replicas being placed across multiple racks (if present). In Kubernetes, racks are represented as StatefulSets. (We'll focus here on the details of the Cassandra node related to monitoring.)
- Each Cassandra node is deployed as its own pod. The pod runs the Cassandra daemon in a Java VM. Each Apache Cassandra pod is configured with the DataStax Metrics Collector for Apache Cassandra, which is implemented as a Java agent running in that same VM. The Metrics Collector is configured to expose metrics on the standard Prometheus port (9103).
- One or more Prometheus instances are deployed in another StatefulSet, with the default configuration starting with a single instance. Using a StatefulSet allows each Prometheus node to connect to a Persistent Volume (PV) for longer-term storage. The default K8ssandra chart configuration does not use PVs. By default, metric data collected in the cluster is retained within Prometheus for 24 hours.
- An instance of the Prometheus Operator is deployed using a Replica Set.
- The kube-prometheus-stack also defines several useful Kubernetes custom resources (CRDs) that the Prometheus Operator uses to manage Prometheus. One of these is the ServiceMonitor. K8ssandra uses ServiceMonitor resources, specifying label selectors to indicate the Cassandra pods to connect to in each datacenter, and how to relabel each metric as it is stored in Prometheus. K8ssandra provides a ServiceMonitor for Stargate when it is enabled. Users may also configure ServiceMonitors to pull metrics from the various operators, but pre-configured instances are not provided at this time.
- The AlertManager is an additional resource provided by kube-prometheus-stack that can be configured to specify thresholds for specific metrics that will trigger alerts. Users may enable and configure AlertManager through the values.yaml file. See the kube-prometheus-stack example for more information.
- An instance of Grafana is deployed in a Replica Set. The GrafanaDataSource is yet another resource defined by kube-prometheus-stack, which is used to describe how to connect to the Prometheus service. Kubernetes ConfigMaps are used to populate GrafanaDashboard resources. These dashboards can be combined or customized.
- Ingress or port forwarding can be used to expose access to the Prometheus and Grafana services external to the Kubernetes cluster.

FAQs

Additional FAQs are available in the MCAC repo.

How can I filter out metrics I don't care about?

Read the metric-collector.yaml section in the MCAC GitHub repo (a sketch of such a filtering rule is included after the Next steps below). A helper script in the same repo can parse the collector's logs, which can then be analyzed or piped into jq. Alternatively, we offer free support for issues, and these logs can help our support engineers diagnose your problem.

Next steps

- For details about viewing the metrics in Grafana dashboards provided by K8ssandra, see Monitor Cassandra.
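As a rough illustration of the metric filtering mentioned in the FAQ above, MCAC reads filtering rules from its metric-collector.yaml configuration. The snippet below is a sketch based on the rule format described in the MCAC repository; the exact keys and metric-name patterns should be verified against the MCAC version bundled with your K8ssandra release.

    # Illustrative only: verify against your MCAC version's metric-collector.yaml.
    filtering_rules:
      # Drop all per-table metrics...
      - policy: deny
        pattern: org.apache.cassandra.metrics.table
        scope: global
      # ...except the live SSTable count, which stays visible.
      - policy: allow
        pattern: org.apache.cassandra.metrics.table.live_ss_table_count
        scope: global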
https://docs.k8ssandra.io/components/metrics-collector/