content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Build a new planner¶ Watcher Decision Engine has an external planner plugin interface which gives anyone the ability to integrate an external planner in order to extend the initial set of planners Watcher provides. This section gives some guidelines on how to implement and integrate custom planners with Watcher. Creating a new plugin¶ First of all you have to extend the base BasePlanner class which defines an abstract method that you will have to implement. The schedule() is the method being called by the Decision Engine to schedule a given solution ( BaseSolution) into an action plan by ordering/sequencing an unordered set of actions contained in the proposed solution (for more details, see definition of a solution). Here is an example showing how you can write a planner plugin called DummyPlanner: # Filepath = third-party/third_party/dummy.py # Import path = third_party.dummy from oslo_utils import uuidutils from watcher.decision_engine.planner import base class DummyPlanner(base.BasePlanner): def _create_action_plan(self, context, audit_id): action_plan_dict = { 'uuid': uuidutils.generate_uuid(), 'audit_id': audit_id, 'first_action_id': None, 'state': objects.action_plan.State.RECOMMENDED } new_action_plan = objects.ActionPlan(context, **action_plan_dict) new_action_plan.create(context) new_action_plan.save() return new_action_plan def schedule(self, context, audit_id, solution): # Empty action plan action_plan = self._create_action_plan(context, audit_id) # todo: You need to create the workflow of actions here # and attach it to the action plan return action_plan This implementation is the most basic one. So if you want to have more advanced examples, have a look at the implementation of planners already provided by Watcher like DefaultPlanner. A list with all available planner plugins can be found here. Define configuration parameters¶ At this point, you have a fully functional planner. However, in more complex implementation, you may want to define some configuration options so one can tune the planner to its needs. To do so, you can implement the get_config_opts() class method as followed: from oslo_config import cfg class DummyPlanner(base.BasePlanner): # [...] def schedule(self, context, audit_uuid, solution): assert self.config.test_opt == 0 # [...] @classmethod def get_config_opts(cls): return super( DummyPlanner,_planners.dummy] # Option used for testing. test_opt = test_value Then, the configuration options you define within this method will then be injected in each instantiated object via the config parameter of the __init__() method. Abstract Plugin Class¶ Here below is the abstract BasePlanner class that every single planner should implement: - class watcher.decision_engine.planner.base. BasePlanner(config)[source] - classmethod get_config_opts()[source] Defines the configuration options to be associated to this loadable - Returns A list of configuration options relative to this Loadable - Return type list of oslo_config.cfg.Optinstances - abstract schedule(context, audit_uuid, solution)[source] The planner receives a solution to schedule - Parameters solution ( BaseSolutionsubclass instance) – A solution provided by a strategy for scheduling audit_uuid (str) – the audit uuid - Returns Action plan with an ordered sequence of actions such that all security, dependency, and performance requirements are met. - Return type watcher.objects.ActionPlaninstance Register a new entry point¶ In order for the Watcher Decision Engine to load your new planner, the latter must be registered as a new entry point under the watcher_planners entry point namespace of your setup.py file. If you are using pbr, this entry point should be placed in your setup.cfg file. The name you give to your entry point has to be unique. Here below is how you would proceed to register DummyPlanner using pbr: [entry_points] watcher_planners = dummy = third_party.dummy:DummyPlanner Using planner plugins¶ The Watcher Decision Engine service will automatically discover any installed plugins when it is started. This means that if Watcher is already running when you install your plugin, you will have to restart the related Watcher services. If a Python package containing a custom plugin is installed within the same environment as Watcher, Watcher will automatically make that plugin available for use. At this point, Watcher will use your new planner if you referenced it in the planner option under the [watcher_planner] section of your watcher.conf configuration file when you started it. For example, if you want to use the dummy planner you just installed, you would have to select it as followed: [watcher_planner] planner = dummy As you may have noticed, only a single planner implementation can be activated at a time, so make sure it is generic enough to support all your strategies and actions.
https://docs.openstack.org/watcher/latest/contributor/plugin/planner-plugin.html
2020-08-03T15:39:55
CC-MAIN-2020-34
1596439735812.88
[]
docs.openstack.org
This. For baseline information and compatibility with Servlet container and Java EE version ranges, see the Spring Framework Wiki. 1.1. DispatcherServlet Spring MVC, as by using Java configuration or in web.xml. In turn, the DispatcherServlet uses Spring configuration to discover the delegate components it needs for request mapping, view resolution, exception handling, and more. The following example of the Java configuration registers and initializes the DispatcherServlet, which/*"); } } The following example of web.xml configuration registers and initializes. can be overridden (that is, re-declared) in the Servlet-specific child WebApplicationContext, which typically contains beans local to the given Servlet. The following image shows this relationship: The following example configures/*" }; } } The following example shows.1.2. Special Bean Types.4. Servlet Config In a Servlet 3.0+ environment, you have the option of configuring the Servlet container programmatically as an alternative or in combination with a web.xml file. The following example registers overriding methods to specify the servlet mapping and the location of the DispatcherServlet configuration. This is recommended for applications that use Java-based Spring configuration, as the following example shows: you use XML-based Spring configuration, you should extend directly from AbstractDispatcherServletInitializer, as the following example shows: be automatically mapped to the DispatcherServlet, as the following example shows:.1.5. Processing.. Alternatively, for annotated controllers, the response can be rendered (within the HandlerAdapter) instead of returning a view. If a model is returned, the view is rendered. If no model is returned (maybe. The following table lists the supported parameters: 1.1.6. Interception All HandlerMapping implementations support continues.: .1.8. View Resolution: Handling you use ,. By using the properties of this locale resolver, you can specify the name of the cookie as well as the maximum age. The following example defines following table describes the properties CookieLocaleResolver:: Spring also provides a ThemeChangeInterceptor that lets theme changes on every request with a simple request parameter. 1:, you can add a bean of type StandardServletMultipartResolver with a name of multipartResolver. 1.1.12. Logging. 1.3.1. Declaration You can define controller beans by") public class WebConfig { // ... } The following example shows the XML configuration equivalent of the preceding example: < to indicate a controller whose every method inherits the type-level @ResponseBody annotation and, therefore, writes directly to the response body versus view resolution and rendering with an HTML template. AOP Proxies In some cases, you many need to decorate a controller (such as InitializingBean, *Aware, and others), you may need to explicitly configure class-based proxying. For example, with <tx:annotation-driven/>, you can change to <tx:annotation-driven. 1.3.2. Request Mapping by using the following glob patterns and wildcards: ?matches one character *matches zero or more characters within a path segment **match zero or more path segments You can also declare URI variables and access their values with @PathVariable, as the following example shows: @GetMapping("/owners/{ownerId}/pets/{petId}") public Pet findPet(@PathVariable Long ownerId, @PathVariable Long petId) { // ... } You can declare URI variables at the class and method levels, as the following example shows: , "/spring-web-3.0.5 .jar", the following method extracts the name, version, and file extension: @GetMapping("/{name:[a-z-]+}-{version:\\d\\.\\d\\.\\d}{ext:\\.[a-z]+}") public void handle(@PathVariable String version, @PathVariable String ext) { // ... } URI path patterns can also have embedded ${…} placeholders that are resolved on startup by using PropertyPlaceHolderConfigurer against local, system, environment, and other property sources. You can use this, for example, to parameterize a base URL based on some external configuration. Pattern Comparison When multiple patterns match a URL, they must be compared to find the best match. This is done by using AntPathMatcher.getPatternComparator(String path), which looks for patterns overlain with the use of URI variables, path parameters, and URI encoding. Reasoning about URL-based authorization and security (see next section for more details) also become more difficult. To completely disable the use of file extensions, you must set both of the following:. through URL path extensions. Disabling suffix pattern matching and using. However, it can. See CVE-2015-5211 for additional recommendations related to RFD. Consumable Media Types You can narrow the request mapping based on the Content-Type of the request, as the following example shows: @PostMapping(path = "/pets", consumes = "application/json") (1) public void;charset=UTF-8") (1) @ResponseBody public Pet getPet(@PathVariable String petId) { // ... } extends the class-level declaration. Parameters, headers) { // ... } You can also use the same with request header conditions, as the following example shows: @GetMapping(path = "/pets", headers = "myHeader=myValue") (1) public void findPet(@PathVariable String petId) { // ... } HTTP HEAD, OPTIONS You can programmatically register handler methods, which you can use for dynamic registrations or for advanced cases, such as different instances of the same handler under different URLs. The following example registers a handler method: ) } } 1.3.3. Handler Methods @RequestMapping handler methods have a flexible signature and can choose from a range of supported controller method arguments and return values. Method Arguments The next table describes the supported controller method return values. Reactive types are supported for all return values. Type Conversion configuration, you need to set a UrlPathHelper with removeSemicolonContent=false through Path Matching. In the MVC XML namespace, you can set <mvc:annotation-driven. @RequestParam.. Consider the following) //... } If the target method parameter type is not String, type conversion is automatically applied. See Type Conversion. When.) //... } if the target method parameter type is not String, type conversion is applied automatically. See Type Conversion. @ModelAttribute, BindException is raised. However,"; } // ... } In some cases, you may want access to a model attribute without data binding. For such cases, you can inject the Model into the controller and access it directly or, alternatively, set @ModelAttribute(binding=false), as the following example shows: ) { (1) // ... }"; } // ... } Note that using ), as the following example shows: @Controller @SessionAttributes("pet") (1) public class EditPetForm { // ... @PostMapping("/pets/{id}") public String handle(Pet pet, BindingResult errors, SessionStatus status) { if (errors.hasErrors) { // ... } status.setComplete(); (2) // ... } } } @SessionAttribute) // ... }, you can use the @RequestAttribute annotations to access pre-existing request attributes created earlier (for example, by a Servlet Filter or HandlerInterceptor): @GetMapping("/") public String handle(@RequestAttribute Client client) { (1) // ... }. However,,. Multipart After a MultipartResolver has been enabled, the content of POST requests with multipart/form-data is parsed and accessible as regular request parameters. The following example access. You can also use multipart content as part of data binding to a command object. For example, the form field and file from the preceding example could be fields on a form object, as the following example shows:. The following example shows a file) { // ... } You can use @RequestPart in combination with javax.validation.Valid or use("/") public String handle(@Valid @RequestPart("meta-data") MetaData metadata, BindingResult result) { // ... } @RequestBody HttpEntity is more or less identical to using @RequestBody but is based on a container object that exposes request headers and body. The following listing shows an example: @PostMapping("/accounts") public void handle(HttpEntity<Account> entity) { // ... } @ResponseBody Spring MVC provides built-in support for Jackson’s Serialization Views, which allow that rely on view resolution, you can add the serialization view class to the model, as the following example shows: "; } } 1.3.4. Model); } customize the model attribute name, as the following example shows: @GetMapping("/accounts/{id}") @ModelAttribute("myAccount") public Account handle() { // ... return account; } 1.3.5. DataBinder @Controller or @ControllerAdvice classes can have @InitBinder methods that initialize instances of WebDataBinder, and those, in turn, can: Bind request parameters (that is, form or query data) MVC config listing shows an example: .3.6. Exceptions @Controller and @ControllerAdvice classes can have @ExceptionHandler methods to handle exceptions from controller methods, as the following example shows: @Controller public class SimpleController { // ... @ExceptionHandler public ResponseEntity<String> handle(IOException ex) { // ... } } The exception may the preceding example shows., Typically @ExceptionHandler, @InitBinder, and @ModelAttribute methods apply within the @Controller class (or class hierarchy) in which they are declared. If you want such methods to apply more globally (across controllers), you can declare them in a class marked with @ControllerAdvice or @RestControllerAdvice. @ControllerAdvice is marked with @Component, which means such classes can be registered as Spring beans through component scanning. @RestControllerAdvice is also a meta-annotation marked with both @ControllerAdvice and @ResponseBody, which essentially means @ExceptionHandler methods are rendered to the response body through message conversion (versus view resolution or template rendering). On startup, the infrastructure classes for @RequestMapping and @ExceptionHandler methods detect Spring beans of type .4. URI Links This section describes various options available in the Spring Framework to work with URI’s. 1"); 1_COMPONENTS: Uses UriComponents#encode(), corresponding to the second option in the earlier list, to encode URI component value after URI variables are expanded. NONE: No encoding is applied..4.4. Relative Servlet Requests You can use ServletUriComponentsBuilder to create URIs relative to the current request, as the following example shows:,, as the following example shows earlierand Callablereturn works as follows: The controller returns a Callable. Spring MVC calls request.startAsync()and submits the Callableto a TaskExecutorfor processing in a separate thread. Meanwhile, the DispatcherServletand all filters you use ,, lets applications exit the Filter-Servlet chain but leave the response open for further processing. The Spring MVC asynchronous: (); You can also use ResponseBodyEmitter as the body in a ResponseEntity, letting you customize the status and headers of the response. When an emitter throws an IOException (for example,. Spring MVC supports use of reactive client libraries in a controller (also read Reactive Libraries in the WebFlux section)., The Servlet API does not provide any notification when a remote client goes away. Therefore, while streaming to the response, whether through SseEmitter or <<mvc-ann-async-reactive-types,reactive types>, it is important to send data periodically, since the write fails.5.7. Configuration The asynchronous request processing feature must be enabled at the Servlet container level. The MVC configuration also exposes several options for asynchronous requests. Servlet Container Filter and Servlet declarations have an asyncSupported flag that needs to be set to true to, Spring MVC lets you handle CORS (Cross-Origin Resource Sharing). This section describes how to do so. 1.6.2. Processing..6.3. @CrossOrigin The @CrossOrigin annotation enables cross-origin requests on annotated controller methods, as the following example shows: To enable CORS in the MVC Java config, you can use the CorsRegistry callback, as the following example shows: Configuration To enable CORS in the XML namespace, you can use the <mvc:cors> element, as the following example shows: <mvc:cors> <mvc:mapping <mvc:mapping </mvc:cors> 1.6.5. CORS Filter You can apply CORS support through the built-in CorsFilter. To configure the filter, pass a CorsConfigurationSource to its constructor, as the following example shows:.7. Web Security The Spring Security project provides support for protecting web applications from malicious exploits. See the Spring Security reference documentation, including: 1.8. HTTP Caching Controllers can add explicit support for HTTP caching. We recommended doing so, since the lastModified or ETag value for a resource needs to be calculated before it can be compared against conditional request headers. A controller can add an ETag header); } The preceding example sends 409 (PRECONDITION_FAILED), to prevent concurrent modification. 1.8.3. Static Resources The use of view technologies in Spring MVC is pluggable, whether you decide to use Thymeleaf, Groovy Markup Templates, JSPs, or other technologies, is primarily a matter of a configuration change. This chapter covers view technologies integrated with Spring MVC.. If you want to replace JSPs, Thymeleaf offers one of the most extensive set of features.9.2. FreeMarker Apache FreeMarker is a template engine for generating any kind of text output from HTML to email and others. The Spring Framework has a built-in integration for using Spring MVC with FreeMarker templates. View Configuration The following example shows how; } } The following example shows how> Alternatively, you can also declare the FreeMarkerConfigurer bean for full control over all properties, as the following example shows: in the preceding example. Given the preceding configuration, if your controller returns a view name of welcome, the resolver looks for the /WEB-INF do so: is 'command', unless you changed it in your FormController properties) followed by a period and the name of the field on the command object to which you wish to bind. You can also use nested fields,. showsof:), you can create the map of codes with suitable keys, as the following example. Configuration The following example shows how; } } The following example shows how. The Spring Framework has a built-in integration for using Spring MVC with any templating library that can run on top of the JSR-223 Java scripting engine. We have tested the following templating libraries You can declare a ScriptTemplateConfigurer bean to specify the script engine to use, the script files to load, what function to call to render templates, and so on. The following example uses following example shows the same arrangement: defines only the window object needed by Handlebars to run properly, as); } 1.9.5. JSP and byusing only one resolver, as the following example shows: <!-- we show in the following examples, the form tags make JSPs easier to develop, read, and maintain.' <br/>: date, range, and others. Note that entering type='text' is not required, since text is the default type. 1.9.6. Tiles You can integrate Tiles - just as any other view technology - in web applications that use Spring. This section describes, in a broad way, how to do so.. UrlBasedViewResolver The UrlBasedViewResolver instantiates the given viewClass for each view it has to resolve. The following bean defines a UrlBasedViewResolver: that contains view names and view classes that the resolver can use. The following example shows a bean definition for a ResourceBundleViewResolver and the corresponding view names and view classes (taken from the Pet Clinic sample): ... the following example shows: can. It is based on Apache POI, with specialized subclasses ( AbstractXlsxView and AbstractXlsxStreamingView) that supersede.9.9. Jackson Spring offers support for the Jackson JSON library.: ; } } Controller We also need a Controller that encapsulates our word-generation logic.: <> The preceding transform is rendered as the following HTML: <html> <head> <META http- <title>Hello!</title> </head> <body> <h1>My First Words</h1> <ul> <li>Hello</li> <li>Spring</li> <li>Framework</li> </ul> </body> </html> 1.10. MVC Config: <.10.4. Validation> Note that you can also register Validator implementations locally, as the following example shows: @Controller public class MyController { @InitBinder protected void initBinder(WebDataBinder binder) { binder.addValidators(new FooValidator()); } } 1.10.5. Interceptors In Java configuration, you can register interceptors to apply to incoming requests, as the following example shows: /*"); } } The following example shows how to achieve the same configuration in XML: .10.6. Content Types preceding, Which adds support for accessing parameter names (a feature added in Java 8). This builder customizes Jackson’s default properties as follows: DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIESis disabled. MapperFeature.DEFAULT_VIEW_INCLUSIONis disabled. It also automatically registers the following well-known modules if they are detected on the classpath: jackson-datatype-jdk7: Support for Java 7 types, such as java.nio.file.Path. jackson-datatype-joda: Support for Joda-Time types. jackson-datatype-jsr310: Support for Java 8 Date and Time API types. jackson-datatype-jdk8: Support for other Java 8 types, such as Optional. The MVC configuration simplifies the registration of view resolvers. The following Java configuration example configures content negotiation view resolution by(); } } The following example shows how to achieve the same configuration. The following example works configuration, you can add the respective Configurer bean, as the following example shows: .10.10. Static Resources: ("/**")); } } The following example shows how to achieve the same configuration in XML: content-based versions are always computed reliably, based on the unencoded file. WebJars is also supported through WebJarsResourceResolver and is automatically registered when org.webjars:webjars-locator is present on the classpath. The resolver can re-write URLs to include the version of the jar and can also match to incoming URLs without versions — for example, /jquery/jquery.min.js to /jquery/1.2.0/jquery.min.js. 1.10.11. Default Servlet Spring MVC You can customize options related to path matching and treatment of the URL. For details on the individual options, see the PathMatchConfigurer javadoc. The following example shows how to customize path matching in Java configuration: ()) .addPathPrefix("/api", HandlerTypePredicate.forAnnotation(RestController.class)); } @Bean public UrlPathHelper urlPathHelper() { //... } @Bean public PathMatcher antPathMatcher() { //... } } The following example shows how to achieve the same configuration in XML: .10.13. Advanced Java Config , see the HTTP/2 wiki page. The Servlet API does expose one construct related to HTTP/2. You can use the javax.servlet.http.PushBuilder and exposes a simple, template-method API over underlying HTTP client libraries..with This part of the reference documentation covers support for Servlet stack, WebSocket messaging that includes raw WebSocket interactions, WebSocket emulation through SockJS, and publish-subscribe messaging through STOMP as a sub-protocol over WebSocket.... 4.2. WebSocket API The Spring Framework provides a WebSocket API that you can. The following example uses Text configuration and XML namespace support for mapping the preceding WebSocket handler to a specific URL,:handlers> <bean id="myHandler" class="org.springframework.samples.MyHandler"/> </beans> The pereceding example. 4.2.3. Deployment The Spring WebSocket API is easy to integrate into a Spring MVC application where the DispatcherServlet serves both HTTP WebSocket handshake (a Servlet 3 feature) at startup. with server-specific RequestUpgradeStrategy implementations, as the following example shows: . The following example shows how to do so: <web-app <absolute-ordering> <name>spring_web</name> </absolute-ordering> </web-app> 4.2.4. Server Configuration,(); } } The following example shows the XML configuration equivalent of the preceding example: to be idle. The solution to this problem is WebSocket emulation — that, following example you can use to send messages through the broker to other connected clients or to send messages to the server to request that some work be performed. When you use . The following list briefly describes: . You can declare exceptions in the annotation itself or through a method argument if you want to get access to the exception instance. The following example declares an exception through a method) continues to try to reconnect, every 5 seconds, until it succeeds. Any Spring bean can implement ApplicationListener<BrokerAvailabilityEvent> to receive notifications when the “system” connection to the broker is lost and re-established. For example, a Stock Quote service that broadcasts. The following example shows how to do that: () {: "); } } The following example shows the XML configuration equivalent of the preceding example: : . You can do so by setting the broadcast attribute to false, as the following example shows: ; } }. 4.4.17. Interception Events provide notifications for the lifecycle of a STOMP connection but not for every client message. Applications can also register a ChannelInterceptor to intercept any message and in any part of the processing chain. The following example shows how, or .
https://docs.spring.io/spring/docs/5.1.3.RELEASE/spring-framework-reference/web.html
2020-08-03T14:51:38
CC-MAIN-2020-34
1596439735812.88
[array(['images/mvc-context-hierarchy.png', 'mvc context hierarchy'], dtype=object) array(['images/message-flow-simple-broker.png', 'message flow simple broker'], dtype=object) array(['images/message-flow-broker-relay.png', 'message flow broker relay'], dtype=object)]
docs.spring.io
Arduino Nano ID for board option in “platformio.ini” (Project Configuration File): [env:nanoatmega328] platform = atmelavr board = nanoatmega328 You can override default Arduino Nano ATmega328 settings per build environment using board_*** option, where *** is a JSON object path from board manifest nanoatmega328.json. For example, board_build.mcu, board_build.f_cpu, etc. [env:nanoatmega328] platform = atmelavr board = nanoatmega328 ; change microcontroller board_build.mcu = atmega328p ; change MCU frequency board_build.f_cpu = 16000000L Debugging¶ PIO Unified Debugger currently does not support Arduino Nano ATmega328 board.
http://docs.platformio.org/en/latest/boards/atmelavr/nanoatmega328.html
2018-12-10T06:10:39
CC-MAIN-2018-51
1544376823318.33
[]
docs.platformio.org
Adafruit Trink5 ID for board option in “platformio.ini” (Project Configuration File): [env:trinket5] platform = atmelavr board = trinket5 You can override default Adafruit Trinket 5V/16MHz settings per build environment using board_*** option, where *** is a JSON object path from board manifest trinket5.json. For example, board_build.mcu, board_build.f_cpu, etc. [env:trinket5] platform = atmelavr board = trinket5 ; change microcontroller board_build.mcu = attiny85 ; change MCU frequency board_build.f_cpu = 16000000L Debugging¶ PIO Unified Debugger currently does not support Adafruit Trinket 5V/16MHz board.
http://docs.platformio.org/en/latest/boards/atmelavr/trinket5.html
2018-12-10T06:11:15
CC-MAIN-2018-51
1544376823318.33
[]
docs.platformio.org
and criminalization. for theand inserting for the Chief Administrative Officer and the. the oversightand inserting the policy direction and oversight. odd-numberedafter each; applicable periodand insert Congress; in the case of the first such report in each Congress,; and a regular session of Congress, or after December 15and insert the last regular session of a Congress, or after December 15 of an even-numbered year. supplemental, minority, or additionaleach place it appears and insert (in each instance) supplemental, minority, additional, or dissenting. and; ; and; and Each committee shall adopt written rules to govern its implementation of this clause. Such rules shall contain provisions to the following effectand insert Written rules adopted by each committee pursuant to clause 2(a)(1)(D) shall contain provisions to the following effect. used, or made available for use, as partisan political campaign material to promote or oppose the candidacy of any person for elective public officeand insert used for any partisan political campaign purpose or be made available for such use. 20and insert 22and strike 12and insert 13. major legislationmeans any bill or joint resolution— budgetary effectsmeans changes in revenues, outlays, and deficits. (1), by striking (A)and inserting (1), and by striking (B)and inserting (2); and 20and insert 45and strike 10and insert 25. accompanying document—and all that follows and inserting “accompanying document— new officer or employeeand inserting new Member, Delegate, Resident Commissioner, officer, or employee. Joint Committee on Internal Revenue Taxationeach place it appears and insert (in each instance) Joint Committee on Taxation; and Joint Committee on Internal Revenue Taxationand insert Joint Committee on Taxation. 31b-5and insert 5128. pursuant to clause 1and insert by August 1 of each year. this concurrent resolution(or, in the case of section 408 of such concurrent resolution, this resolution) shall be considered for all purposes in the House to be references to the allocations, aggregates, or other appropriate levels contained in the statement of the chair of the Committee on the Budget of the House of Representatives printed in the Congressional Record of April 29, 2014, as adjusted in the One Hundred Thirteenth Congress.. Fast and Furiousand related matters. Fast and Furiousand related matters who failed to comply with such subpoena, or any successor to such individual. Direct Spending, which shall include a category for Means-Tested Direct Spendingand a category for Nonmeans-Tested Direct Spendingand sets forth— directed rule makingmeans a specific rule making within the meaning of section 551 of title 5, United States Code, specifically directed to be completed by a provision in the measure, but does not include a grant of discretionary rule making authority. Committee) shall promulgate regulations as follows: eligible Congressional Member Organizationmeans, with respect to the One Hundred Fourteenth Congress, an organization meeting each of the following requirements:
https://docs.house.gov/billsthisweek/20150105/BILLS-114hres5pih.xml
2018-12-10T07:38:14
CC-MAIN-2018-51
1544376823318.33
[]
docs.house.gov
Contents Now Platform Administration Previous Topic Next Topic Monitor the event queue for login activities ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Monitor the event queue for login activities Every single sign-on integration creates events for login activities. You can use these events to monitor for login failures and determine if there are any security concerns to address. Table 1. Monitoring the event queue for login failures Event Name Description Record Parameter 1 Parameter 2
https://docs.servicenow.com/bundle/kingston-platform-administration/page/integrate/single-sign-on/reference/r_EventQueueLoginActivities.html
2018-12-10T06:54:13
CC-MAIN-2018-51
1544376823318.33
[]
docs.servicenow.com
Once the software is installed, the following steps provide a basic Panther application server for a JetNet/Oracle Tuxedo application. On a UNIX application server, create an application directory containing: On a Windows application server, create an application folder containing: The middleware configuration file determines the machines and application servers needed for the application. To create a middleware configuration file: Once the configuration file is created and JetMan displays the application: Applications which are running remote reports must have a file access server on the same machine as the standard server in order to access and distribute the report files. To start the application server in JetMan, highlight the application and choose Edit Activate. An alternative is to use the command line utility rbboot. The Status window shows the messages for each server process. To stop the application server in JetMan, choose Edit Deactivate or, if clients are connected, Edit Forcibly Deactivate. An alternative is to use the command line utility rbshutdown. Workstation (or remote) clients set SMRBPORT and SMRBHOST in order to access the remote application server. For Windows, these settings are stored in prol5w32.ini or prol5w64.ini. Native (or local) clients set SMRBCONFIG in order to access the middleware configuration file on the same host machine.
http://docs.prolifics.com/panther/html/jet_html/jncheck.htm
2018-12-10T07:26:38
CC-MAIN-2018-51
1544376823318.33
[]
docs.prolifics.com
Hardware accleration is available on devices that have a dedicted CPU for handling graphics. This will come into play when generally for rendering application interfaces as well as when using certain CSS3 attributes typically for animation or transitions. Support android:hardwareAccelerated attribute at AndroidManifest.xml for main Rhodes activity. To set android:hardwareAccelerated=‘true’ at Androidmanifest.xml for RhodesActivity enable the hardware_acceleration capability. This is done by adding the following lines to build.yml: android: capabilities: - hardware_acceleration Windows Mobile or Windows CE devices do not have the capability of running applications with dedicated CPU processing for graphics. This is due to both platform and hardware constraints. Running graphic intensive operations will still run without failure, but performance may not be acceptable. RhoMobile applications running on iOS run inside of the stock Webkit (Safari) for that particular version of iOS. You should consult Apple’s website for support of hardware accleration for particular OS versions. There may also be other techniques for using specific CSS attributes to force hardware acceleration like -webkit-transform: translateZ(0); and -webkit-transform: translate3d(0, 0, 0);
http://docs.tau-technologies.com/en/6.0/guide/hardware_accleration
2018-12-10T07:33:57
CC-MAIN-2018-51
1544376823318.33
[]
docs.tau-technologies.com
Security & Privacy¶ Control Panel Location: This section of the Control Panel allows you to define the basic security-related settings for your website. These are security settings that apply throughout the website/system. Settings¶ - CP session type - Website Session type - Share analytics with the ExpressionEngine Development Team? - Domain - Path - Prefix - Send cookies over HTTP only? - Send cookies securely? - Require user consent to set cookies? - Allow members to change username? - Minimum username length - Allow multiple logins? - Require user agent and IP for login? - Enable password lock out? - Password lock out interval - Require secure passwords? - Minimum password length - Allow dictionary words in passwords? - Dictionary file - Deny duplicate data? - Require user agent and IP for posting? - Apply XSS filtering? - Enable Rank Denial to submitted links? - Force redirect confirmation on submitted links? CP session type¶ This determines how sessions are handled for the Control Panel. You may use cookies, session IDs, or a combination. The available options are: - Cookies and session ID: Both cookies and URL session ID parameters are used to track the admin user. This is the most secure setting since it relies on two individual cookies and a URL session ID. - Cookies only: Only cookies are used to track the admin user. When this setting is used a “remember me” checkbox will appear next to the Control Panel login page, enabling users to stay permanently logged in. - Session ID only: Only URL session IDs are used to track the admin user. This option should only be used if your desktop computer prevents you from accepting cookies in the event you are behind a firewall or due to some other technical issue. Website Session type¶ This determines how sessions are handled for the front-end of the site., and generally the best option since it prevents URLs from showing session IDs. - Session ID only: Only URL session IDs are used to track the user throughout their visit. Domain¶ Optionally specify a domain the cookie is available to. By default, the exact hostname of the requested page is set as the cookie domain. For example, if the page at is loaded and the cookie domain is left blank in ExpressionEngine’s configuration, the browser will use as the cookie domain. The browser will only make these cookies available when the page’s hostname is exactly. If the cookie domain is explicitly specified, however, the browser will make the cookie available whenever the requested page’s hostname contains the cookie domain. For example, setting the cookie domain to .example.com will ensure the cookie is shared whenever the requested page’s hostname includes example.com,, admin.example.com, blog.example.com, and so on. If you’re running multiple subdomains on a single ExpressionEngine installation and want member sessions to be valid across all subdomains, you should explicitly set the cookie domain. Note There’s an important difference between example.com and .example.com. When the cookie domain begins with a dot, browsers match any hostname that includes the cookie domain. Without the dot prefix, browsers are looking for an exact hostname match in the URL, which means cookies will not be available to subdomains. A cookie set by PHP with an explicitly specified cookie domain will always include the dot prefix, whether or not one is included in this ExpressionEngine setting. For clarity’s sake, the examples here include a leading dot when the cookie domain is being explicitly set. Note Browsers will not save cookies if the specified cookie domain isn’t included in the request’s hostname. In other words, a site can only set cookies for .example.com if its hostname actually includes example.com. Path¶ Optionally specify a cookie path. When a cookie path is set, the browser will only share cookies with ExpressionEngine when the beginning of the URL path matches the cookie path. For example, if the cookie path is set to /blog/, a cookie for the domain example.com will only be sent by the browser if the URL begins with. This can be useful if you have ExpressionEngine installed in a sub-directory and want to ensure that only that particular installation has access to the cookies it sets. Prefix¶ Specify a prefix for the cookie name set by ExpressionEngine. This protects against collisions from separate ExpressionEngine installations on the same cookie domain. Allow members to change username?¶ As the name suggests, this setting determines whether or not members are allowed to change their own usernames after registration. (Members will always be able to change their own screen names.) Minimum username length¶ You may specify the minimum length required for a member username during new member registration. Specify the minimum number of characters required. Allow multiple logins?¶ Set whether an account can have multiple active sessions at one time. Note This feature is incompatible with the “Cookies Only” session type. Require user agent and IP for login?¶ If this preference is set to “Yes”, then users will not be able to log in unless their browser (or other access device) correctly supplies their IP address and User Agent (browser) information. Having this set to “Yes” can help prevent hackers from logging in using direct socket connections or from trying to access the system with a masked IP address. Enable password lock out?¶. Password lock out interval¶?¶. Minimum password length¶ You may specify the minimum length required for a member password during new member registration. Specify the minimum number of characters required. It is common practice to require passwords at least eight (8) characters long. Allow dictionary words in passwords?¶ Set whether words commonly found in the dictionary can be used as passwords. Disabling will make “dictionary attacks” by hackers much more difficult. Note In order to be able to use this setting you must have a dictionary file installed. Dictionary file¶ This is the filename of the dictionary file used for the previous preference. Download the dictionary file, unzip, and upload the text file ( dictionary.txt) to system/user/config/. Enter only the filename of the file ( dictionary.txt) in this field. Deny duplicate data?¶ This option prevents data submitted by users (such as comments) from being processed if it is an exact duplicate of data that already exists. This setting is designed to deter automated spam attacks as well as multiple accidental submissions. Require user agent and IP for posting?¶ Similar to the previous setting, when turned on, this setting requires IP address and user agent information to be supplied when submitting comments. Apply XSS filtering?¶ Checks all file uploads for code injection attempts before finalizing the upload. Superadmins are exempt from image XSS filtering. Enable Rank Denial to submitted links?¶ When enabled, all outgoing links are sent to a redirect page. This prevents spammers from gaining page rank. Force redirect confirmation on submitted links?¶ When Enable Rank Denial is turned on, this setting will appear to enable forcing the showing of a confirmation screen when a submitted linked is clicked. This can prevent issues where a link looks like it leads to one place, but actually leads to another, and allows the user to confirm the URL is correct before they continue.
https://docs.expressionengine.com/latest/cp/settings/security-privacy.html
2018-12-10T07:36:29
CC-MAIN-2018-51
1544376823318.33
[]
docs.expressionengine.com
Int The Int scalar type represents non-fractional signed whole numeric values. Int can represent values between -(2^31) and 2^31 - 1. GraphQL schema definition - scalar Int Required by - RelayInput: - HotelXHotelListInput: - HotelXRoomQueryInput: - HotelXDestinationListInput: - HotelXDestinationSearcherInput: - HotelSettingsInput: Settings that you can edit for this avail. Values are loaded by default in our Back Office. - PaxInput: Pax object that contains the pax age. - BusinessRulesInput: List of business rules to use as filter on the options. - SettingsBaseInput: Contains the time out and business rules of a supplier or an access. - StatsRequest: Contains internal information. - StatAccess: - Pax: Specifies the age pax. The range of what is considered an adult, infant or baby is particular to each supplier. - Occupancy: Information about occupancy. - Room: Contains the room information of the option returned. - Supplement: Supplement that it can be or its already added to the option returned. Contains all the information about the supplement. - Bed: Contains information about a bed. - CancelPenalty: Contains information for cancellation penalities.. - BookingRoom: - BookRoomInput: Input BookRoom contains list of pax and the room's reference. - ExpireDateInput: The card expiration date - BookPaxInput: Input BookPax contains basic information abaout pax suach as name, surname and age.
https://docs.travelgatex.com/hotelx/reference/scalars/int/
2018-12-10T06:24:25
CC-MAIN-2018-51
1544376823318.33
[]
docs.travelgatex.com
SP-initiated SSO—Redirect to the IdP's SSO service. If the user is not already logged on to the IdP site or if re-authentication is required, the IdP asks for credentials (e.g., ID and password) and the user logs on. Additional information about the user may be retrieved from the user data store for inclusion in the SAML response. (These attributes are predetermined as part of the federation agreement between the IdP and the SP—see About attributes.) The IdP's SSO service returns an HTML form to the browser with a SAML response containing the authentication assertion and any additional attributes. The browser automatically posts the HTML form back to the SP. NoteSAML specifications require that POST responses be digitally signed. (Not shown) If the signature and the assertion (or the JSON Web Token) are valid, the SP establishes a session for the user and redirects the browser to the target resource. Parent TopicSingle sign-on Tags Capability > User Provisioning; Capability > Single Sign On; Hosting Environment > On-Premises; Product > PingFederate > PingFederate 8.4; Product > PingFederate; Task Type > Deployment
https://docs.pingidentity.com/bundle/pf_sm_supportedStandards_pf84/page/task/spInitiatedSsoRedirectPost.html
2018-12-10T07:21:21
CC-MAIN-2018-51
1544376823318.33
[]
docs.pingidentity.com
Add LinkedIn login and registration to WordPress Create a LinkedIn App Login with your LinkedIn account at the following page then click the "Create application" button. Fill all the required fields of the form and as "website url" enter the url of your website. Once an app has been created, you'll be readirected to the settings of your app where you'll find Client ID and Client Secret keys. Make sure that r_basicprofile and r_emailaddress are enabled. Under the Authorized Redirect URLs setting enter your website url followed by /?wpumsl=linkedin so if your website url is you'll need to type and then press the "Add" button. Add credentials in WordPress Make a copy of the Client ID and Client Secret keys. Now login into your WordPress dashboard and navigate to "Users -> Settings -> General -> Social Login" and locate the settings LinkedIn Client ID and LinkedIn Client Secret and type the keys from your LinkedIn APP. Now you're ready to accept logins through LinkedIn.
https://docs.wpusermanager.com/article/417-add-linkedin-login-to-wordpress
2018-12-10T06:02:23
CC-MAIN-2018-51
1544376823318.33
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55911f12e4b01a224b42f596/images/5a97009404286374f7086a1a/file-Sxra57oq5i.png', None], dtype=object) ]
docs.wpusermanager.com
changes.mady.by.user John Burk Saved on Jul 26, 2017 Saved on Aug 07, 2018 .... In this example, all worker licenses for this supervisor will use metered licenses. qbping jburk-15-mbPro:LaunchAgents jburk$ qbping supervisor - active - tag: 10.0.1.101 1478:104F:9F43:E368:70FD:FD 67C 7.40-4 bld-custom 1c rel-7.0-0006 osx - - host - 0/10 licenses0 unlimited licenses (metered=0/300) - mode=0 (0). The license count displayed here means: "0 of 10 licenses in use". counts displayed can be interpreted in the following manner: 0/0 unlimited licenses: 0 licenses in use, 0 worker licenses installed 0/0 unlimited licenses: metered=0/300.0 licenses, you haven't yet installed the worker keys, or the license file has become invalid (perhaps it got saved out with an extension other that .lic, or saved in RTF or Wordpad format). Check the license file with a plaintext editor You can run a supervisor without any worker licenses if you have setup Metered Licensing. See Getting Started with Metered Licensing
http://docs.pipelinefx.com/pages/diffpagesbyversion.action?pageId=4238280&selectedPageVersions=26&selectedPageVersions=25
2018-12-10T07:38:04
CC-MAIN-2018-51
1544376823318.33
[]
docs.pipelinefx.com
Stores all repository data on disk, in a configured location using H2's MVStore API. Users coming from ModeShape 3 and 4 which had their data stored anywhere except in a relational DB should opt for this store. Configuration JSON The persistent store can be easily configured like so: and will store data in the target/persistent_repository/modeshape.repository file relative to the current running directory of the JVM. You can also use an absolute path with or without the help of environment variables. For example: will store the data in a file called modeshape.repository inside the value of the folder ${java.io.tmpdir}/modeshape JBoss AS If you're using the JBoss AS kit, you configure the file system persistence like so: which will store the data in the default location of ${jboss.server.data.dir}/modeshape/<repositoryName>/modeshape.repository or you can explicitly define the place where to store the data: Attributes The list of attributes supported by this store is:
https://docs.jboss.org/author/display/MODE50/File+System+persistence
2018-12-10T06:53:11
CC-MAIN-2018-51
1544376823318.33
[]
docs.jboss.org
Writes a line of text to an open fileint sm_fio_puts(char *string, int file_stream); string Character string to be output. -swrites the contents of stringto the specified open file and appends a newline \ncharacter. Be sure to call sm_fio_close on file_streamafter you finish writing the data; the actual write operation is not complete until the handle to this file stream is released. proc put string in current occurrence */ str = comments[occurNo] /* put string into next line of file stream */ err = sm_fio_puts(str, fileStream) } /* close file stream when done */ call sm_fio_close(fileStream) return }
http://docs.prolifics.com/panther/html/prg_html/libfu128.htm
2018-12-10T06:43:24
CC-MAIN-2018-51
1544376823318.33
[]
docs.prolifics.com
The Module System exposes the core Splunk knowledge base for the purpose of building custom app customized for your application domain. A framework is inherently complex and requires suitable documentation and examples to be able to use it effectively. Here, you'll find documentation that covers the key concepts and reference material needed to begin creating apps. You'll find this documentation to be applicable to different phases of the development lifecycle and different levels of expertise. For example, Getting Started is most useful for initial familiarization with the framework and development process, while the reference API might be consulted frequently to refresh your memory about programming details. Because examples and learn-by-doing provide the most effective techniques for learning complex topics, an example accompanies most discussion. In particular, the Cookbook includes a complete set of examples, in the menu above, that you can actually run in the context of this app. Become familiar with terminology. If you are new to Module System development, read Getting Started. Consult the Cookbook and its associated examples to learn how to implement common use cases. You'll find the implementation details in the Reference useful after you've learned the basics of how to develop in the Module System environment. The documentation set provides the various system views a developer needs to understand and use the Module!
http://docs.splunk.com/Documentation/Splunk/6.5.1/Module/Welcome
2018-12-10T06:45:02
CC-MAIN-2018-51
1544376823318.33
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Search the Application Analytics topics: AppDynamics Application Analytics powers Business iQ, a powerful part of the App iQ Platform. Integration of all of AppDynamics’ App i obtain Transaction Analytics, Log Analytics, Browser Analytics, Mobile Analytics, and Browser Synthetic? Watch the Video For full-screen viewing, click Application Analytics Tour.
https://docs.appdynamics.com/display/PRO45/Application+Analytics?showChildren=false
2018-12-10T07:16:00
CC-MAIN-2018-51
1544376823318.33
[]
docs.appdynamics.com
TOC & Recently Viewed Recently Viewed Topics Showvulns Patch Release Notes - 3/23/2018 Apply this patch to SecurityCenter installations running version 5.6.2. The fixes in this patch resolve several issues related to a showvulns bug. Note: You do not need to apply this patch to SecurityCenter installations running earlier versions. Contents - showvulns - showvulns-archive - showvulns-individual Fixed an issue where Exploit Available always displayed "False" on the Vulnerability Detail View page. Resolved errors that occur when evaluating queries using the Vulnerability Details tool. Steps to apply Note: If you are applying the patch to a Tenable Appliance, you must enable SSH access as described in the Tenable Appliance User Guide. If you are running a Tenable Appliance version earlier than 4.4.0, contact Support for assistance. - Download the appropriate patch to SecurityCenter. The files are named SC-201803.1-5.6.2-rh6-64.tgz and SC-201803.1-5.6.2-rh7-64.tgz and can be placed anywhere, though /tmp is a good choice. - Untar the patch file: tar zxf SC-201803.1-5.6.2-rh6-64.tgz - Change the directory to the extracted directory: cd SC-201803.1-5.6.2-rh6-64 - Run the install: sh ./install.sh File Names & MD5 Checksums
https://docs.tenable.com/releasenotes/securitycenter/securitycenter81.htm
2018-12-10T07:53:41
CC-MAIN-2018-51
1544376823318.33
[]
docs.tenable.com
Configuring Network Interfaces In the Viptela overlay network design, interfaces are associated with VPNs. The interfaces that participate in a VPN are configured and enabled in that VPN. Each interface can be present only in a single VPN. At a high level, for an interface to be operational, you must configure an IP address for the interface and mark it as operational (no shutdown). In practice, you always configure additional parameters for each interface. You can configure up to 512 interfaces on a Viptela device. This number includes physical interfaces, loopback interfaces, and subinterfaces. This article describes how to configure the general properties of WAN transport and service-side network interfaces. For information about how to configure specific interface types and properties—including cellular interfaces, DHCP, PPPoE, VRRP, and WLAN interfaces—see the links in the Additional Information section at the end of this article. Configure Interfaces in the WAN Transport VPN (VPN 0) VPN 0 is the WAN transport VPN. This VPN handles all control plane traffic, which is carried over OMP sessions, in the overlay network. For a Viptela device to participate in the overlay network, at least one interface must be configured in VPN 0, and at least one interface must connect to a WAN transport network, such as the Internet or an MPLS or a metro Ethernet network. This WAN transport interface is referred to as a tunnel interface. At a minimum, for this interface, you must configure an IP address, enable the interface, and set it to be a tunnel interface. To configure a tunnel interface on a vSmart controller or a vManage NMS, you create an interface in VPN 0, assign an IP address or configure the interface to receive an IP address from DHCP, and mark it as a tunnel interface. The IP address can be either an IPv4 or IPv6 address. To enable dual stack, configure both address types. You can optionally associate a color with the tunnel. Note: You can configure IPv6 addresses only on transport interfaces; that is, only in VPN 0. vSmart/vManage(config)# vpn 0 vSmart/vManage(config-vpn-0)# interface interface-name vSmart/vManage(config-interface)# [ip address prefix/length | ip dhcp-client [dhcp-distance number] vSmart/vManage(config-interface)# [ipv6 address prefix/length | ipv6 dhcp-client [dhcp-distance number] [dhcp-rapid-commit] vSmart/vManage(config-interface)# no shutdown vSmart/vManage(config-interface)# tunnel-interface vSmart/vManage(config-tunnel-interface)# color color vSmart/vManage(config-tunnel-interface)# [no] allow-service service Tunnel interfaces on vEdge routers must have an IP address, a color, and an encapsulation type. The IP address can be either an IPv4 or IPv6 address. To enable dual stack, configure both address types. vEdge(config)# vpn 0 vEdge(config-vpn-0)# interface interface-name vEdge(config-interface)# [ip address prefix/length | ip dhcp-client [dhcp-distance number] vEdge(config-interface)# [ipv6 address prefix/length | ipv6 dhcp-client [dhcp-distance number] [dhcp-rapid-commit] vEdge(config-interface)# no shutdown vEdge(config-interface)# tunnel-interface vEdge(config-tunnel-interface)# color color [restrict] vEdge(config-tunnel-interface)# encapsulation (gre | ipsec) vEdge(config-tunnel-interface)# [no] allow-service service On vSmart controllers and vManage NMSs, interface-name can be either ethnumber or loopbacknumber. Because vSmart controllers and vManage NMSs participate only in the overlay network's control plane, the only VPN that you can configure on these devices is VPN 0, and hence all interfaces are present only in this VPN. On vEdge routers, interface-name can be either geslot/port or loopbacknumber. To enable the interface, include the no shutdown command. For the tunnel interface, you can configure a static IPv4 or IPv6 address, or you can configure the interface to receive its address from a DHCP server. To enable dual stack, configure both an IPv4 and an IPv6 address on the tunnel interface. Color is a Viptela software construct that identifies the transport tunnel. It can be 3g, biz-internet, blue, bronze, custom1, custom2, custom3, default, gold, green, lte, metro-ethernet, mpls, private1 through private6, public-internet, red, and silver. The colors metro-ethernet, mpls, and private1 through private6 are referred to as private colors, because they use private addresses to connect to the remote side vEdge router in a private network. You can use these colors in a public network provided that there is no NAT device between the local and remote vEdge routers. To limit the remote TLOCs that the local TLOC can establish BFD sessions with, mark the TLOC with the restrict option. When a TLOC is marked as restricted, a TLOC on the local router establishes tunnel connections with a remote TLOC only if the remote TLOC has the same color. On a vSmart controller or vManage NMS, you can configure one tunnel interface. On a vEdge router, you can configure up to seven tunnel interfaces. This means that each vEdge router can have up to seven TLOCs. On vEdge routers, you must configure the tunnel encapsulation. The encapsulation can be either IPsec or GRE. For IPsec encapsulation, the default MTU is 1442 bytes, and for GRE it is 1468 bytes, These values are a function of overhead required for BFD path MTU discovery, which is enabled by default on all TLOCs. (For more information, see Configuring Control Plane and Data Plane High Availability Parameters.) You can configure both IPsec and GRE encapsulation by including two encapsulation commands under the same tunnel-interface command. On the remote vEdge router, you must configure the same tunnel encapsulation type or types so that the two routers can exchange data traffic. Data transmitted out an IPsec tunnel can be received only by an IPsec tunnel, and data sent on a GRE tunnel can be received only by a GRE tunnel. The Viptela software automatically selects the correct tunnel on the destination vEdge router. A tunnel interface allows only DTLS, TLS, and, for vEdge routers, IPsec traffic to pass through the tunnel. To allow additional traffic to pass without having to create explicit policies or access lists, enable them by including one allow-service command for each service. You can also explicitly disallow services by including the no allow-service command. Note that services affect only physical interfaces. You can allow or disallow these services on a tunnel interface:. On a vEdge router that is behind a NAT, you can also have tunnel interface to discover its public IP address and port number from the vBond controller: vEdge(config-tunnel-interface)# vbond-as-stun-server With this configuration, the vEdge router uses the vBond orchestrator as a STUN server, so the router can determine its public IP address and public port number. (With this configuration, the router cannot learn the type of NAT that it is behind.) No overlay network control traffic is sent and no keys are exchanged over tunnel interface configured to the the vBond orchestrator as a STUN server. However, BFD does come up on the tunnel, and data traffic can be sent on it. Because no control traffic is sent over a tunnel interface that is configured to use the vBond orchestrator as a STUN server, you must configure at least one other tunnel interface on the vEdge router so that it can exchange control traffic with the vSmart controller and the vManage NMS. Configuring Localized Data Policy for IPv4 or Configuring Localized Data Policy for IPv6. For each transport tunnel on a vEdge router and for each encapsulation type on a single transport tunnel, the Viptela software creates a TLOC, which consists of the routers system IP address, the color, and the encapsulation. The OMP session running on the tunnel sends the TLOC, as a TLOC route, to the vSmart controller, which uses it to determine the overlay network topology and to determine the best paths for data traffic across the overlay network. To display information about interfaces in the WAN transport VPN that are configured with IPv4 addresses, use the show interface command. For example:.5.21/24 Up Up null transport 1500 00:0c:29:6c:30:c1 10 full 0 0:04:03:41 260025 260145 0 ge0/2 - Down Up null service 1500 00:0c:29:6c:30:cb 10 full 0 0:04:03:41 3506 1 0 ge0/3 - Down Up null service 1500 00:0c:29:6c:30:d5 10 full 0 0:04:03:41 260 1 0 ge0/4 - Down Up null service 1500 00:0c:29:6c:30:df 10 full 0 0:04:03:41 260 1 0 ge0/5 - Down Up null service 1500 00:0c:29:6c:30:e9 10 full 0 0:04:03:41 260 1 0 ge0/6 10.0.7.21/24 Up Up null service 1500 00:0c:29:6c:30:f3 10 full 0 0:04:03:41 265 2 0 ge0/7 10.0.100.21/24 Up Up null service 1500 00:0c:29:6c:30:fd 10 full 0 0:04:03:41 278 2 0 system 172.16.255.21/32 Up Up null loopback 1500 00:00:00:00:00:00 10 full 0 0:04:03:37 0 0 To display information for interfaces configured with IPv6 addresses, use the show ipv6 interface command. For example: vEdge# show ipv6 interface vpn 0 IF IF TCP AF ADMIN OPER ENCAP SPEED MSS RX TX VPN INTERFACE TYPE IPV6 ADDRESS STATUS STATUS TYPE PORT TYPE MTU HWADDR MBPS DUPLEX ADJUST UPTIME PACKETS PACKETS LINK LOCAL ADDRESS ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0 ge0/1 ipv6 2001::a00:1a0b/120 Up Up null service 1500 00:0c:29:ab:b7:62 1000 full 1420 0:01:30:00 2 6 fe80::20c:29ff:feab:b762/64 0 ge0/2 ipv6 2001::a00:50b/120 Up Up null service 1500 00:0c:29:ab:b7:6c 1000 full 1420 0:01:30:00 21 5 fe80::20c:29ff:feab:b76c/64 0 ge0/3 ipv6 fd00:1234::/16 Up Up null service 1500 00:0c:29:ab:b7:76 1000 full 1420 0:01:08:33 0 8 fe80::20c:29ff:feab:b776/64 0 ge0/4 ipv6 - Up Up null service 1500 00:0c:29:ab:b7:80 1000 full 1420 0:01:30:00 18 5 fe80::20c:29ff:feab:b780/64 0 ge0/5 ipv6 - Down Up null service 1500 00:0c:29:ab:b7:8a 1000 full 1420 0:01:44:19 1 1 fe80::20c:29ff:feab:b78a/64 0 ge0/6 ipv6 - Down Up null service 1500 00:0c:29:ab:b7:94 1000 full 1420 0:01:44:19 0 1 fe80::20c:29ff:feab:b794/64 0 ge0/7 ipv6 - Up Up null service 1500 00:0c:29:ab:b7:9e 1000 full 1420 0:01:43:02 55 5 fe80::20c:29ff:feab:b79e/64 0 system ipv6 - Up Up null loopback 1500 00:00:00:00:00:00 10 full 1420 0:01:29:31 0 0 - 0 loopback1 ipv6 2001::a00:6501/128 Up Up null transport 1500 00:00:00:00:00:00 10 full 1420 0:03:49:09 0 0 - 0 loopback2 ipv6 2001::a00:6502/128 Up Up null transport 1500 00:00:00:00:00:00 10 full 1420 0:03:49:05 0 0 - 0 loopback3 ipv6 2001::a00:6503/128 Up Up null transport 1500 00:00:00:00:00:00 10 full 1420 0:03:49:01 0 0 - 0 loopback4 ipv6 2001::a00:6504/128 Up Up null transport 1500 00:00:00:00:00:00 10 full 1420 0:03:48:54 0 0 - In the command output, a port type of "transport" indicates that the interface is configured as a tunnel interface, and a port type of "service" indicates that the interface is not configured as a tunnel interface and can be used for data plane traffic. The port type for the system IP address interface is "loopback". Associate a Carrier Name with a Tunnel Interface To associate a carrier name or private network identifier with a tunnel interface, use the carrier command. carrier-name can be default and carrier1 through carrier8: Viptela(config)# vpn 0 Viptela(config-vpn-0)# interface interface-name Viptela(config-interface)# tunnel-interface Viptela(config-tunnel-interface)# carrier carrier-name Limit Keepalive Traffic on a Tunnel Interface By default, Viptela devices send a Hello packet once per second to determine whether the tunnel interface between two devices is still operational and to keep the tunnel alive. The combination of a hello interval and a hello tolerance determines how long to wait before declaring a DTLS or TLS tunnel to be down. The default hello interval is 1 second, and the default tolerance is 12 seconds. With these default values, if no Hello packet is received within 11 seconds, the tunnel is declared down at 12 seconds. If the hello interval or the hello tolerance, or both, are different at the two ends of a DTLS or TLS tunnel, the tunnel chooses the interval and tolerance as follows: - For a tunnel connection between two controller devices, the tunnel uses the lower hello interval and the higher tolerance interval for the connection between the two devices. (Controller devices are vBond controllers, vManage NMSs, and vSmart controllers.) This choice is made in case one of the controllers has a slower WAN connection. The hello interval and tolerance times are chosen separately for each pair of controller devices. -. To minimize the amount of keepalive traffic on a tunnel). The hello tolerance interval must be at most one-half the OMP hold time. The default OMP hold time is 60 seconds, and you configure it with the omp timers holdtime command. Limit the HTTPS Connections to a vManage Server By default, a vManage application server accepts a maximum of 50 HTTPS connections from users in the overlay network. To modify the maximum number of HTTPS connections that can be established: vManage(config)# vpn 0 interface interface-name tunnel-interface control-connections number The number can be from 1 through 512. Configure Multiple Tunnel Interfaces on a vEdge Router On a vEdge router, you can configure up to eight tunnel interfaces in the transport interface (VPN 0). This means that each vEdge router can have up to eight TLOCs. When a vEdge router has multiple TLOCs, each TLOC is preferred equally and traffic to each TLOC is weighted equally, resulting in ECMP routing. ECMP routing is performed regardless of the encapsulation used on the transport tunnel, so if, for example, a router has one IPsec and one GRE tunnel, with ECMP traffic is forwarded equally between the two tunnels. You can change the traffic distribution by modifying the preference or the weight, or both, associated with a TLOC. (Note that you can also affect or change the traffic distribution by applying a policy on the interface that affects traffic flow.) vEdge(config)# vpn 0 vEdge(config-vpn-0)# interface interface-name vEdge(config-tunnel-interface) encapsulation (gre | ipsec) vEdge(config-encapsulation)# preference number vEdge(config-encapsulation)# weight number The preference command controls the preference for directing traffic to a tunnel. The preference can be a value from 0 through 4294967295 (232 – 1), and the default value is 0. A higher value is preferred over a lower value. When a vEdge router has two or more tunnels, if all the TLOCs have the same preference and no policy is applied that affects traffic flow, all the TLOCs are advertised into OMP. When the router transmits or receives traffic, it distributes traffic flows evenly among the tunnels, using ECMP. When a vEdge router has two or more tunnels, if the TLOCs all have different preferences and no policy is applied that affects traffic flow, only the TLOC with the highest preference is advertised into OMP. When the router transmits or receives traffic, it sends the traffic only to the TLOC with the highest preference. When there are three or more tunnels and two of them have the same preference, traffic flows are distributed evenly between these two tunnels. A remote vEdge router trying to reach one of these prefixes selects which TLOC to use from the set of TLOCs that have been advertised. So, for example, if a remote router selects a GRE TLOC on the local router, the remote router must have its own GRE TLOC to be able to reach the prefix. If the remote router has no GRE TLOC, it is unable to reach the prefix. If the remote router has a single GRE TLOC, it selects that tunnel even if there is an IPsec TLOC with a higher preference. If the remote router has multiple GRE TLOCs, it selects from among them, choosing the one with the highest preference or using ECMP among GRE TLOCs with equal preference, regardless of whether there is an IPsec TLOC with a higher preference. The weight command controls how traffic is balanced across multiple TLOCs that have equal preferences values. The weight can be a value from 1 through 255, and the default is 1. When the weight value is higher, the router sends more traffic to the TLOC. You typically set the weight based on the bandwidth of the TLOC. When a router has two or more TLOCs, all with the highest equal preference value, traffic distribution is weighted according to the configured weight value. For example, if TLOC A has weight 10, and TLOC B has weight 1, and both TLOCs have the same preference value, then roughly 10 flows are sent out TLOC A for every 1 flow sent out TLOC B. Configure Control Plane High Availability A highly available Viptela network contains two or more vSmart controllers in each domain. A Viptela domain can have up to eight vSmart controllers, and each vEdge router, by default, connects to two of them. You change this value on a per-tunnel basis: vEdge(config-tunnel-interface)# max-controllers number When the number of vSmart controllers in a domain is greater than the maximum number of controllers that a domain's vEdge routers are allowed to connect to, the Viptela software load-balances the connections among the available vSmart controllers. Configure Other WAN Interface Properties You can modify the distribution of data traffic across transport tunnels by applying a data policy in which the action sets TLOC attributes (IP address, color, and encapsulation) to apply to matching data packets. For more information, see Configuring Centralized Data Policy. Configure the System Interface For each Viptela device, you configure a system interface with the system system-ip command. The system interface's IP address is a persistent address that identifies the Viptela device. It is similar to a router ID on a regular router, which is the address used to identify the router from which packets originated. Viptela(config)# system system-ip ipv4-address Specify the system IP address. The system interface is placed in VPN 0, as a loopback interface named system. Note that this is not the same as a loopback address that you configure for an interface. To display information about the system interface, use the show interface command. For example: vEdge# show running-config system system-ip system system-ip 172.16.255.11 !:32:16 1606 8 0 ge0/2 10.0.5.11/24 Up Up null transport 1500 00:0c:29:ab:b7:6c 1000 full 1420 0:10:32:16 307113 303457 0 ge0/3 - Down Up null service 1500 00:0c:29:ab:b7:76 1000 full 1420 0:10:47:49 1608 0 0 ge0/4 10.0.7.11/24 Up Up null service 1500 00:0c:29:ab:b7:80 1000 full 1420 0:10:32:16 1612 8 0 ge0/5 - Down Up null service 1500 00:0c:29:ab:b7:8a 1000 full 1420 0:10:47:49 1621 0 0 ge0/6 - Down Up null service 1500 00:0c:29:ab:b7:94 1000 full 1420 0:10:47:49 1600 0 0 ge0/7 10.0.100.11/24 Up Up null service 1500 00:0c:29:ab:b7:9e 1000 full 1420 0:10:47:31 3128 1165 0 system 172.16.255.11/32 Up Up null loopback 1500 00:00:00:00:00:00 10 full 1420 0:10:31:58 0 0 The system IP address is used as one of the attributes of the OMP TLOC. Each TLOC is uniquely identified by a 3-tuple comprising the system IP address, a color, and an encapsulation. To display TLOC information, use the show omp tlocs command. For device management purposes, it is recommended as a best practice that you also configure the same system IP address on a loopback interface that is located in a service-side VPN that is an appropriate VPN for management purposes. You use a loopback interface because it is always reachable when the router is operational and when the overlay network is up. If you were to configure the system IP address on a physical interface, both the router and the interface would have to be up for the router to be reachable. You use a service-side VPN because it is reachable from the data center. Service-side VPNs are VPNs other than VPN 0 (the WAN transport VPN) and VPN 512 (the management VPN), and they are used to route data traffic. Here is an example of configuring the system IP address on a loopback interface in VPN 1: vEdge# config Entering configuration mode terminal vEdge(config)# vpn 1 vEdge(config-vpn-1)# interface loopback0 ip address 172.16.255.11/32 vEdge(config-vpn-1)# no shutdown vEdge(config-interface-loopback0)# commit and-quit Commit complete. vEdge# show interface:27:33 1597 8 0 ge0/2 10.0.5.11/24 Up Up null transport 1500 00:0c:29:ab:b7:6c 1000 full 1420 0:10:27:33 304819 301173 0 ge0/3 - Down Up null service 1500 00:0c:29:ab:b7:76 1000 full 1420 0:10:43:07 1599 0 0 ge0/4 10.0.7.11/24 Up Up null service 1500 00:0c:29:ab:b7:80 1000 full 1420 0:10:27:33 1603 8 0 ge0/5 - Down Up null service 1500 00:0c:29:ab:b7:8a 1000 full 1420 0:10:43:07 1612 0 0 ge0/6 - Down Up null service 1500 00:0c:29:ab:b7:94 1000 full 1420 0:10:43:07 1591 0 0 ge0/7 10.0.100.11/24 Up Up null service 1500 00:0c:29:ab:b7:9e 1000 full 1420 0:10:42:48 3118 1164 0 system 172.16.255.11/32 Up Up null loopback 1500 00:00:00:00:00:00 10 full 1420 0:10:27:15 0 0 1 ge0/0 10.2.2.11/24 Up Up null service 1500 00:0c:29:ab:b7:58 1000 full 1420 0:10:27:30 5734 4204 1 loopback0 172.16.255.11/32 Up Up null service 1500 00:00:00:00:00:00 10 full 1420 0:00:00:28 0 0 512 eth0 10.0.1.11/24 Up Up null service 1500 00:50:56:00:01:0b 1000 full 0 0:10:43:03 20801 14368 Extend the WAN Transport VPN When two vEdge routers are collocated at a site that has only one WAN circuit, you can configure the vEdge router that is not connected to the circuit to be able to establish WAN transport tunnels through the other router's TLOCs. In this way, you extend the WAN transport VPN so that both routers can establish tunnel interfaces, and hence can establish independent TLOCs, in the overlay network. The following figure illustrates a site with two vEdge routers. vEdge-2 has two WAN circuits, one to the Internet and a second to a private MPLS network, and so has two TLOCs. By itself, vEdge1 has no TLOCs. You can configure vEdge-2 to extend its WAN transport VPN to vEdge1 so that vEdge-1 can participate independently in the overlay network. When you extend the WAN transport VPN, no BFD sessions are established between the two collocated vEdge routers. To extend the WAN transport VPN, you configure the interface between the two routers: - For the router that is not connected to the circuit, you configure a standard tunnel interface in VPN 0. - For the router that is physically connected to the WAN or private transport, you associate the physical interface that connects to the circuit, configuring this in VPN 0 but not in a tunnel interface. To configure the non-connected router (vEdge-1 in the figure above), create a tunnel interface in VPN 0 on the physical interface to the connected router. vEdge-1(config-vpn-0)# interface geslot/port vEdge-1(config-interface)# ip address prefix/length vEdge-1(config-interface)# no shutdown vEdge-1(config-interface)# mtu number vEdge-1(config-interface)# tunnel-interface vEdge-1(config-tunnel-interface)# color color For the router connected to the WAN or private transport (vEdge-2 in the figure above), configure the interface that connects to the non-connected router, again in VPN 0: vEdge-2(config-vpn-0)# interface geslot/port vEdge-2(config-interface)# ip address prefix/length vEdge-2(config-interface)# tloc-extension geslot/port vEdge-2(config-interface)# no shutdown vEdge-2(config-interface)# mtu number The physical interface in the interface command is the one that connects to the other router. The tloc-extension command creates the binding between the non-connected router and the WAN or private network. In this command, you specify the physical interface that connects to the WAN or private network circuit. If the circuit connects to a public network: - Configure a NAT on the public-network-facing interface on the vEdge router. The NAT configuration is required because the two vEdge routers are sharing the same transport tunnel. - Configure a static route on the non-connected router to the TLOC-extended interface on the router connected to the public network. If the circuit connects to a private network, such as an MPLS network: - Enable routing on the non-connected router so that the interface on the non-connected router is advertised into the private network. - Depending on the routing protocol you are using, enable either OSPF or BGP service on the non-connected router interface so that routing between the non-connected and the connected routers comes up. To do this, use the allow-service command. You cannot extend a TLOC configured on a loopback interface, that is, when you use a loopback interface to connect to the public or private network. You can extend a TLOC only on a physical interface. If one of the routers is connected to two WAN transports (such as the Internet and an MPLS network), create subinterfaces between the two routers, creating the tunnel on the subinterface. The subinterfaces on the two routers must be in the same subnet. Because you are using a subinterface, the interface's MTU must be at least 4 bytes less than the physical MTU. By default, routers at one site form BFD tunnels only with routers at remote sites. If you want the routers at the same site to form BFD tunnels between them, enable the formation of these tunnels: vEdge(config)# system allow-same-site-tunnels Here is a sample configuration that corresponds to the figure shown above. Because the router vEdge-2 connects to two transports, we create subinterfaces between the vEdge-1 and vEdge-2 routers. One subinterface binds to the Internet circuit, and the second one binds to the MPLS connection. vEdge-1# show running-config vpn 0 interface ge0/2.101 ip address 101.1.19.15/24 mtu 1496 tunnel-interface color lte ... ! no shutdown ! interface ge0/2.102 ip address 102.1.19.15/24 mtu 1496 tunnel-interface color mpls ... ! no shutdown ! ip route 0.0.0.0/0 101.1.19.16 vEdge-2# show running-config vpn 0 interface ge0/0 ip address 172.16.255.2 tunnel-interface color lte ... ! no shutdown ! interface ge0/3 ip address 172.16.255.16 tunnel-interface color mpls ... ! no shutdown ! interface ge0/2.101 ip address 101.1.19.16/24 mtu 1496 tloc-extension ge0/0 no shutdown ! interface ge0/2.102 ip address 102.1.19.16/24 mtu 1496 tloc-extension ge0/3 no shutdown ! For this example configuration, vEdge-1 establishes two control connections to each vSmart controller in the overlay network—one connection for the LTE tunnel and the second for the MPLS tunnel. These control connections are separate and independent from those established on vEdge-2. The following output shows the control connections on vEdge-1 in a network with two vSmart controllers: vEdge-1# show control connections PEER PEER CONTROLLER PEER PEER PEER SITE DOMAIN PEER PRIVATE PEER PUBLIC GROUP TYPE PROTOCOL SYSTEM IP ID ID PRIVATE IP PORT PUBLIC IP PORT LOCAL COLOR STATE UPTIME NAME -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- vsmart dtls 172.16.255.19 100 1 10.0.5.19 12346 10.0.5.19 12346 lte up 0:00:18:43 default vsmart dtls 172.16.255.19 100 1 10.0.5.19 12346 10.0.5.19 12346 mpls up 0:00:18:32 default vsmart dtls 172.16.255.20 200 1 10.0.12.20 12346 10.0.12.20 12346 lte up 0:00:18:38 default vsmart dtls 172.16.255.20 200 1 10.0.12.20 12346 10.0.12.20 12346 mpls up 0:00:18:27 default You can verify that the two vEdge routers have established no BFD sessions between them. On vEdge-1, we see no BFD sessions to vEdge-2 (system IP address 172.16.255.16): vEdge-1# show bfd sessions SOURCE TLOC REMOTE TLOC DST PUBLIC DST PUBLIC DETECT TX TRANSI- SYSTEM IP SITE ID STATE COLOR COLOR SOURCE IP IP PORT ENCAP MULTIPLIER INTERVAL(msec) UPTIME TIONS --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 172.16.255.11 100 up lte lte 101.1.19.15 10.0.101.1 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.11 100 up lte 3g 101.1.19.15 10.0.101.2 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.11 100 up lte gold 101.1.19.15 10.0.101.3 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.11 100 up lte red 101.1.19.15 10.0.101.4 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.11 100 up mpls lte 102.1.19.15 10.0.101.1 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.11 100 up mpls 3g 102.1.19.15 10.0.101.2 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.11 100 up mpls gold 102.1.19.15 10.0.101.3 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.11 100 up mpls red 102.1.19.15 10.0.101.4 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.14 400 up lte lte 101.1.19.15 10.1.14.14 12360 ipsec 20 1000 0:00:20:26 0 172.16.255.14 400 up mpls lte 102.1.19.15 10.1.14.14 12360 ipsec 20 1000 0:00:20:26 0 172.16.255.21 100 up lte lte 101.1.19.15 10.0.111.1 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.21 100 up lte 3g 101.1.19.15 10.0.111.2 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.21 100 up mpls lte 102.1.19.15 10.0.111.1 12346 ipsec 20 1000 0:00:20:26 0 172.16.255.21 100 up mpls 3g 102.1.19.15 10.0.111.2 12346 ipsec 20 1000 0:00:20:26 0 Configure Interfaces in the Management VPN (VPN 512) On all Viptela devices, VPN 512 is used, by default, for out-of-band management, and its configuration is part of the factory-default configuration. The interface type for management interfaces is mgmt, and the initial address for the interface is 192.168.1.1. Viptela# show running-config vpn 512 vpn 512 interface mgmt0 ip dhcp-client no shutdown ! ! To display information about the configured management interfaces, use the show interface command. For example: vEdge# show interface vpn 512 IF IF TCP ADMIN OPER ENCAP PORT SPEED MSS RX TX VPN INTERFACE IP ADDRESS STATUS STATUS TYPE TYPE MTU HWADDR MBPS DUPLEX ADJUST UPTIME PACKETS PACKETS -------------------------------------------------------------------------------------------------------------------------------------------- 512 mgmt0 192.168.1.1/24 Up Up null service 1500 00:50:56:00:01:1f 1000 full 0 0:04:08:01 1131 608 Note that VPN 512 is not a routable VPN. If you need a routable management VPN, create a VPN with a number other than 512. Configure Interfaces for Carrying Data Traffic On vEdge routers, in VPNs other than 0 and 512, you configure the interfaces that carry data traffic between vEdge routers and sites across the overlay network. At a minimum, for these interfaces, you must configure an IP address, and you must enable it: Viptela(config)# vpn number Viptela(config-vpn-number)# interface geslot/port Viptela(config-interface)# ip address prefix/length Viptela(config-interface)# no shutdown To display information about the configured data traffic interfaces, use the show interface command. For example: vEdge# show interface vpn 1 IF IF TCP ADMIN OPER ENCAP PORT SPEED MSS RX TX VPN INTERFACE IP ADDRESS STATUS STATUS TYPE TYPE MTU HWADDR MBPS DUPLEX ADJUST UPTIME PACKETS PACKETS --------------------------------------------------------------------------------------------------------------------------------------------- 1 ge0/1 10.192.1.1/28 Up Up null service 1500 00:0c:bd:05:f0:84 100 full 0 1:05:44:07 399 331 1 loopback1 1.1.1.1/32 Up Up null service 1500 00:00:00:00:00:00 10 full 0 1:05:44:07 0 0 For some protocols, you specify an interface as part of the protocol's configuration. In these cases, the interface used by the protocol must be the same as one of the interfaces configured in the VPN. As example is OSPF, where you place interfaces in OSPF areas. In this example, the interface ge0/0 is configured in VPN 1, and this interface is configured to be in the OSPF backbone area: vEdge# show running-config vpn 1 vpn 1 router ospf router-id 172.16.255.21 timers spf 200 1000 10000 redistribute static redistribute omp area 0 interface ge0/0 exit exit ! ! interface ge0/0 ip address 10.2.3.21/24 no shutdown ! ! Configure Subinterfaces and VLANs You can configure IEEE 802.1Q VLANs on vEdge routers. In such VLANs, physical interfaces are divided into subinterfaces. When you configure a subinterface, the interface name has the format geslot/port.vlan-number. The VLAN number, vlan-number, can be in the range 1 through 4094. As with all interfaces, the subinterface must be activated, by configuring it with the no shutdown command. To accommodate the 32-bit field added to packets by the 802.1Q protocol, you must also configure the MTU for VLAN subinterfaces to be at least 4 bytes smaller than the MTU of the physical interface. You do this using the mtu command. The default MTU on a physical interface is 1500 bytes by default, so the subinterface's MTU here can be no larger than 1496 bytes. For subinterfaces to work, you must configure the physical interface in VPN 0 and activate it with a no shutdown command. If the physical interface goes down for any reason, all its subinterfaces also go down. If you shut down the subinterface with a shutdown command, the operational status remains up as long as the physical interface is up. You can place the VLANs associated with a single physical interface into multiple VPNs. Each individual subinterface can be present only in a single VPN. Here is an example of a minimal VLAN configuration. The VLANs are configured on subinterfaces ge0/6.2 and ge0/6.3 in VPN 1, and they are associated with the physical interface ge0/6 in VPN 0. vEdge# show running-config vpn 1 vpn 1 interface ge0/6.2 mtu 1496 no shutdown ! interface ge0/6.3 mtu 1496 no shutdown ! ! vEdge# show running-config vpn 0 vpn 0 interface ge0/0 ip dhcp-client tunnel-interface encapsulation ipsec no allow-service all no allow-service bgp allow-service dhcp allow-service dns allow-service icmp no allow-serivce ospf no allow-service sshd no allow-service ntp no allow-service stun ! no shutdown ! interface ge0/6 ip address 57.0.1.15/24 no shutdown ! ! The output of the show interface command shows the physical interface and the subinterfaces. The Encap Type column shows that the subinterfaces are VLAN interfaces, and the MTU column shows that the physical interface has an MTU size of 1500 bytes, while the MTU of the subinterfaces is 1496 bytes, so 4 bytes less.:04:32:28 289584 289589 0 ge0/1 10.1.17.15/24 Up Up null service 1500 00:0c:29:7d:1e:08 10 full 0 0:04:32:28 290 2 0 ge0/2 - Down Up null service 1500 00:0c:29:7d:1e:12 10 full 0 0:04:32:28 290 1 0 ge0/3 10.0.20.15/24 Up Up null service 1500 00:0c:29:7d:1e:1c 10 full 0 0:04:32:28 290 2 0 ge0/6 57.0.1.15/24 Up Up null service 1500 00:0c:29:7d:1e:3a 10 full 0 0:04:32:28 290 2 0 ge0/7 10.0.100.15/24 Up Up null service 1500 00:0c:29:7d:1e:44 10 full 0 0:04:32:28 300 2 0 system 172.16.255.15/32 Up Up null loopback 1500 00:00:00:00:00:00 10 full 0 0:04:32:27 0 0 1 ge0/4 10.20.24.15/24 Up Up null service 1500 00:0c:29:7d:1e:26 10 full 0 0:04:32:18 2015 1731 1 ge0/5 56.0.1.15/24 Up Up null service 1500 00:0c:29:7d:1e:30 10 full 0 0:04:32:18 290 3 1 ge0/6.2 10.2.2.3/24 Up Up vlan service 1490 00:0c:29:7d:1e:3a 10 full 0 0:04:32:18 0 16335 1 ge0/6.3 10.2.3.5/24 Up Up vlan service 1496 00:0c:29:7d:1e:3a 10 full 0 0:04:32:18 0 16335 512 eth0 10.0.1.15/24 Up Up null service 1500 00:50:56:00:01:0f 1000 full 0 0:04:32:21 3224 1950 Configure Loopback Interfaces You can configure loopback interfaces in any VPN. Use the interface name format loopbackstring, where string can be any alphanumeric value and can include underscores (_) and hyphens (–). The total interface name, including the string "loopback", can be a maximum of 16 characters long. (Note that because of the flexibility of interface naming in the CLI, the interfaces lo0 and loopback0 are parsed as different strings and as such are not interchangeable. For the CLI to recognize as interface as a loopback interface, its name must start with the full string loopback.) One special use of loopback interfaces is to configure data traffic exchange across private WANs, such as MPLS or metro Ethernet networks. To allow a vEdge router that is behind a private network to communicate directly over the private WAN with other vEdge routers, you direct data traffic to a loopback interface that is configured as a tunnel interface rather than to an actual physical WAN interface. For more information, see Configure Data Traffic Exchange across Private WANs in the Configuring Segmentation (VPNs) article. Configure GRE Interfaces and Advertise Services To Them When a service, such as a firewall, is available on a device that supports only GRE tunnels, you can configure a GRE tunnel on the vEdge router to connect to the remote device. You then advertise that the service is available via a GRE tunnel, and you direct the appropriate traffic to the tunnel either by creating centralized data policy or by configuring GRE-specific static routes. GRE interfaces are logical interfaces, and you configure them just like any other physical interface. Because a GRE interface is a logical interface, you must bind it to a physical interface, as described below. To configure a GRE tunnel interface to a remote device that is reachable through a transport network, configure the tunnel in VPN 0: vEdge(config)# vpn 0 interface grenumber vEdge(config-interface-gre)# (tunnel-source ip-address | tunnel-source-interface interface-name) vEdge(config-interface-gre)# tunnel-destination ip-address vEdge(config-interface-gre)# no shutdown The GRE interface has a name in the format grenumber, where number can be from 1 through 255. To configure the source of the GRE tunnel on the local device, you can specify either the IP address of the physical interface (in the tunnel-source command or the name of the physical interface (in the tunnel-source-interface command). Ensure that the physical interface is configured in the same VPN in which the GRE interface is located. To configure the destination of the GRE tunnel, specify the IP address of the remote device in the tunnel-destination command. The combination of a source address and a destination address defines a single GRE tunnel. Only one GRE tunnel can exist that uses a specific source address (or interface name) and destination address pair. You can optionally configure an IP address for the GRE tunnel itself: vEdge(config-interface-gre)# ip address ip-address Because GRE tunnels are stateless, the only way for the local router to determine whether the remote end of the tunnel is up, is to periodically send keepalive messages over the tunnel. The keepalive packets are looped back to the sender, and receipt of these packets by the local router indicates that the remote GRE device is up. By default, the router sends keepalive packets every 10 seconds, and if it receives no response, retries 3 times before declaring the remote device to be down. You can modify these default values with the keepalive command: vEdge(config-interface-gre)# keepalive seconds retries The keepalive interval can be from 0 through 65535 seconds, and the number of retries can be from 0 through 255. If the vEdge router sits behind a NAT and you have configured GRE encapsulation, you must disable keepalives, with a keepalive 0 0 command. (Note that you cannot disable keepalives by issuing a no keepalive command. This command returns the keepalive to its default settings of sending a keepalive packet every 10 seconds and retrying 3 times before declaring the remote device down.) For GRE interfaces, you can configure only the following additional interface properties: vEdge(config-interface-gre)# clear-dont-fragment vEdge(config-interface-gre)# description text vEdge(config-interface-gre)# mtu bytes vEdge(config-interface-gre)# tcp-mss-adjust GRE interfaces do not support cFlowd traffic monitoring. You can configure one or two GRE interfaces per service. When you configure two, the first interface is the primary GRE tunnel, and the second is the backup tunnel. All packets are sent only to the primary tunnel. If that tunnel fails, all packets are then sent to the secondary tunnel. If the primary tunnel comes back up, all traffic is moved back to the primary GRE tunnel. You direct data traffic from the service VPN to the GRE tunnel in one of two ways: either with a GRE-specific static route or with a centralized data policy. To create a GRE-specific static route in the service VPN (a VPN other than VPN 0 or VPN 512), use the ip gre-route command: vEdge(config-vpn)# ip gre-route prefix vpn 0 interface grenumber [grenumber2] This GRE-specific static route directs traffic from the specified prefix to the primary GRE interface, and optionally to the secondary GRE interface, in VPN 0. The OMP administrative distance of a GRE-specific static route is 5, and that for a regular static route (configured with the ip route command) is 0. For more information, see Unicast Overlay Routing Overview. To direct the data traffic to the GRE tunnel using a centralized data policy is a two-part process: you advertise the service in the service VPN, and then you create a centralized data policy on the vSmart controller to forward matching traffic to that service. To advertise the service, include the service command in the service VPN (a VPN other than VPN 0 or VPN 512): vEdge(config-vpn)# service service-name interface grenumber [grenumber2] The service name can be FW, IDP, or IDS, or a custom service name netsvc1 through netsvc4. The interface is the GRE interface in VPN 0 that is used to reach the service. If you have configured a primary and a backup GRE tunnel, list the two GRE interfaces (grenumber1 grenumber2) in the service command. Once you have configured a service as reachable a the GRE interface, you cannot delete the GRE interface from the configuration. To delete the GRE interface, you must first delete the service. You can, however, reconfigure the service itself, by modifying the service command. Then, create a data policy on the vSmart controller that applies to the service VPN. In the action portion of the data policy, you must explicitly configure the policy to service the packets destined for the GRE tunnel. To do this, include the local option in the set service command: vSmart(config-policy-data-policy-vpn-list-vpn-sequence)# action accept vSmart(config-action-accept)# set service service-name local If the GRE tunnel used to reach the service is down, packet routing falls back to using standard routing. To drop packets when a GRE tunnel to the service is unreachable, add the restrict option: vSmart(config-policy-data-policy-vpn-list-vpn-sequence)# action accept vSmart(config-action-accept)# set service service-name local restrict To track GRE tunnels and their traffic, use the following commands: - show interface—List data traffic transmitted and received on GRE tunnels. - show tunnel gre-keepalives—List GRE keepalive traffic transmitted and received on GRE tunnels. - show tunnel statistics—List both data and keepalive traffic transmitted and received on GRE tunnels. The following figure illustrates an example of configuring a GRE tunnel in VPN 0, to allow traffic to be redirected to a service that is not located at the same site as the vEdge router. In this example, local traffic is directed to the GRE tunnel using a centralized data policy, which is configured on the vSmart controller. The configuration looks like this: vEdge# show running-config vpn 0 vpn 0 interface gre1 ip address 172.16.111.11/24 keepalive 60 10 tunnel-source 172.16.255.11 tunnel-destination 10.1.2.27 no shutdown ! ! vEdge# show running-config vpn 1 service vpn 1 service FW interface gre1 vSmart# show running-config policy policy lists prefix-list for-firewall ip-prefix 58.0.1.0/24 site-list my-site site-id 100 vpn-list for-vpn-1 vpn 1 data-policy to-gre-tunnel vpn-list for-vpn-1 sequence 10 match source-data-prefix-list for-firewall action accept set service FW local apply-policy site-list my-site data-policy to-gre-tunnel from-service Here is an example of the same configuring using a GRE-specific static route to direct data traffic from VPN 1 into the GRE tunnels: vEdge# show running-config vpn 0 interface gre1 ip address 172.16.111.11/24 keepalive 60 10 tunnel-source 172.16.255.11 tunnel-destination 10.1.2.27 no shutdown ! ! vpn 1 ip gre-route 58.0.1.0/24 vpn 0 interface gre1 The show interface command displays the GRE interface in VPN 0: vEdge# show interface vpn 0 IF IF TCP ADMIN OPER ENCAP SPEED MSS RX TX VPN INTERFACE IP ADDRESS STATUS STATUS TYPE PORT TYPE MTU HWADDR MBPS DUPLEX ADJUST UPTIME PACKETS PACKETS -------------------------------------------------------------------------------------------------------------------------------------------------- 0 gre1 172.16.111.11/24 Up Down null service 1500 0a:00:05:0b:00:00 - - 1420 - 0 0 0 ge0/1 10.0.26.11/24 Up Up null service 1500 00:0c:29:ab:b7:62 10 full 1420 0:03:35:14 89 5 0 ge0/2 10.0.5.11/24 Up Up null transport 1500 00:0c:29:ab:b7:6c 10 full 1420 0:03:35:14 9353 18563 0 ge0/3 - Down Up null service 1500 00:0c:29:ab:b7:76 10 full 1420 0:03:57:52 99 0 0 ge0/4 10.0.7.11/24 Up Up null service 1500 00:0c:29:ab:b7:80 10 full 1420 0:03:35:14 89 5 0 ge0/5 - Down Up null service 1500 00:0c:29:ab:b7:8a 10 full 1420 0:03:57:52 97 0 0 ge0/6 - Down Up null service 1500 00:0c:29:ab:b7:94 10 full 1420 0:03:57:52 85 0 0 ge0/7 10.0.100.11/24 Up Up null service 1500 00:0c:29:ab:b7:9e 10 full 1420 0:03:56:30 3146 2402 0 system 172.16.255.11/32 Up Up null loopback 1500 00:00:00:00:00:00 10 full 1420 0:03:34:15 0 0 You can also view the GRE tunnel information: vEdge# show tunnel gre-keepalives REMOTE REMOTE IF ADMIN OPER KA TX RX TX RX TX RX VPN NAME SOURCE IP DEST IP STATE STATE ENABLED PACKETS PACKETS PACKETS PACKETS ERRORS ERRORS TRANSITIONS --------------------------------------------------------------------------------------------------------------------------- 0 gre1 10.0.5.11 10.1.2.27 up down true 0 0 442 0 0 0 0 vEdge# show tunnel statistics tunnel statistics gre 10.0.5.11 10.1.2.27 0 0 tunnel-mtu 1460 tx_pkts 451 tx_octets 54120 rx_pkts 0 rx_octets 0 tcp-mss-adjust 1380 Set the Interface Speed When a vEdge router comes up, the Viptela software autodetects the SFPs present in the router and sets the interface speed accordingly. The software then negotiates the interface speed with the device at the remote end of the connection to establish the actual speed of the interface. To display the hardware present in the router, use the show hardware inventory command: vEdge# show hardware inventory HW DEV HW TYPE INDEX VERSION PART NUMBER SERIAL NUMBER DESCRIPTION ----------------------------------------------------------------------------------------------------------------------- Chassis 0 3.1 vEdge-1000 11OD145130001 vEdge-1000 CPU 0 None None None Quad-Core Octeon-II DRAM 0 None None None 2048 MB DDR3 Flash 0 None None None nor Flash - 16.00 MB eMMC 0 None None None eMMC - 7.31 GB PIM 0 None ge-fixed-8 None 8x 1GE Fixed Module Transceiver 0 A FCLF-8521-3 PQD3FHL Port 0/0, Type 0x8 (Copper), Vendor FINISAR CORP. Transceiver 1 PB 1GBT-SFP05 0000000687 Port 0/1, Type 0x8 (Copper), Vendor BEL-FUSE FanTray 0 None None None Fixed Fan Tray - 2 Fans To display the actual speed of each interface, use the show interface command. Here, interface ge0/0, which connects to the WAN cloud, is running at 1000 Mbps (1Gbps; it is the 1GE PIM highlighted in the output above), and interface ge0/1, which connects to a device at the local site, has negotiated a speed of 100 Mbps. vEdge# show interface IF IF TCP ADMIN OPER ENCAP SPEED MSS RX TX VPN INTERFACE IP ADDRESS STATUS STATUS TYPE PORT TYPE MTU HWADDR MBPS DUPLEX ADJUST UPTIME PACKETS PACKETS ------------------------------------------------------------------------------------------------------------------------------------------------ 0 ge0/0 192.168.1.4/24 Up Up null transport 1500 00:0c:bd:05:f0:83 1000 full 1300 0:06:10:59 2176305 2168760 0 ge0/2 - Down Down null service 1500 00:0c:bd:05:f0:81 - - 0 - 0 0 0 ge0/3 - Down Down null service 1500 00:0c:bd:05:f0:82 - - 0 - 0 0 0 ge0/4 - Down Down null service 1500 00:0c:bd:05:f0:87 - - 0 - 0 0 0 ge0/5 - Down Down null service 1500 00:0c:bd:05:f0:88 - - 0 - 0 0 0 ge0/6 - Down Down null service 1500 00:0c:bd:05:f0:85 - - 0 - 0 0 0 ge0/7 - Down Down null service 1500 00:0c:bd:05:f0:86 - - 0 - 0 0 0 system 1.1.1.1/32 Up Up null loopback 1500 00:00:00:00:00:00 10 full 0 0:06:11:15 0 0 1 ge0/1 10.192.1.1/28 Up Up null service 1500 00:0c:bd:05:f0:84 100 full 0 0:06:10:59 87 67 1 loopback1 1.1.1.1/32 Up Up null service 1500 00:00:00:00:00:00 10 full 0 0:06:10:59 0 0 2 loopback0 10.192.1.2/32 Up Up null service 1500 00:00:00:00:00:00 10 full 0 0:06:10:59 0 0 512 mgmt0 - Up Down null mgmt 1500 00:0c:bd:05:f0:80 - - 0 - 0 0 For non-physical interfaces, such as those for the system IP address and loopback interfaces, the interface speed is set by default to 10 Mbps. To override the speed negotiated by the two devices on the interface, disable autonegotiation and configure the desired speed: vEdge(config-vpn)# interface interface-name no autonegotiate vEdge(config-vpn)# interface interface-name speed (10 | 100) For vSmart controllers and vManage NMS systems, the initial interface speeds are 1000 Mbps, and the operating speed is negotiated with the device at the remote end of the interface. Set the Interface MTU By default, all interfaces have an MTU of 1500 bytes. You can modify this on an interface: Viptela(config-vpn)# interface interface-name mtu bytes The MTU can range from 576 through 2000 bytes. To display an interface's MTU, use the show interface command. For vBond, vManage, and vSmart devices, you can configure interfaces to use ICMP to perform path MTU (PMTU) discovery. When PMTU discovery is enabled, the device to automatically negotiates the largest MTU size that the interface supports in an attempt to minimize or eliminate packet fragmentation: Viptela(config-vpn)# interface interface-name pmtu On vEdge routers, the Viptela BFD software automatically performs PMTU discovery on each transport connection (that is, for each TLOC, or color). BFD PMTU discovery is enabled by default, and it is recommended that you use it and not disable it. To explicitly configure BFD to perform PMTU discovery, use the bfd color pmtu-discovery configuration command. However, you can choose to instead use ICMP to perform PMTU discovery: vEdge(config-vpn)# interface interface-name pmtu BFD is a data plane protocol and so does not run on vBond, vManage, and vSmart devices. Configure Static ARP Table Entries By default, vEdge routers respond to ARP requests only if the destination address of the request is on the local network. You can configure static ARP entries on a Gigabit Ethernet interface in any VPN to allow the router to respond to ARP requests even if the destination address of the request is not local to the incoming interface. The ARP entry maps an IP address to a MAC address. Viptela(config-vpn)# interface interface-name arp ip ip-address mac mac-address Monitoring Bandwidth on a Transport Circuit You can monitor the bandwidth usage on a transport circuit, to determine how the bandwidth usage is trending. If the bandwidth usage starts approaching a maximum value, you can configure the software to send a notification. Notifications are sent as Netconf notifications, which are sent to the vManage NMS, SNMP traps, and syslog messages. You might want to enable this feature for bandwidth monitoring, such as when you are doing capacity planning for a circuit or when you are gathering trending information about bandwidth utilization. You might also enable this feature to receive alerts regarding bandwidth usage, such as if you need to determine when a transport interface is becoming so saturated with traffic that a customer's traffic is impacted, or when customers have a pay-per-use plan, as might be the case with LTE transport. To monitor interface bandwidth, you configure the maximum bandwidth for traffic received and transmitted on a transport circuit. The maximum bandwidth is typically the bandwidth that has been negotiated with the circuit provider. When bandwidth usage exceeds 85 percent of the configured value for either received or transmitted traffic, a notification, in the form of an SNMP trap, is generated. Specifically, interface traffic is sampled every 10 seconds. If the received or transmitted bandwidth exceeds 85 percent of the configured value in 85 percent of the sampled intervals in a continuous 5-minute period, an SNMP trap is generated. After the first trap is generated, sampling continues at the same frequency, but notifications are rate-limited to once per hour. A second trap is sent (and subsequent traps are sent) if the bandwidth exceeds 85 percent of the value in 85 percent of the 10-second sampling intervals over the next 1-hour period. If, after 1 hour, another trap is not sent, the notification interval reverts to 5 minutes. You can monitor transport circuit bandwidth on vEdge routers and on vManage NMSs. To generate notifications when the bandwidth of traffic received on a physical interface exceeds 85 percent of a specific bandwidth, configure the downstream bandwidth: vEdge/vManage(config)# vpn vpn-id interface interface-name bandwidth-downstream kbps To generate notifications when the bandwidth of traffic transmitted on a physical interface exceeds 85 percent of a specific bandwidth, configure the upstream bandwidth: vEdge/vManage(config)# vpn vpn-id interface interface-name bandwidth-upstream kbps In both configuration commands, the bandwidth can be from 1 through 2147483647 (232 / 2) – 1 kbps. To display the configured bandwidths, look at the bandwidth-downstream and bandwidth-upstream fields in the output of the show interface detail command. The rx-kbps and tx-kbps fields in this command shows the current bandwidth usage on the interface.
https://sdwan-docs.cisco.com/Product_Documentation/Software_Features/Release_16.3/02System_and_Interfaces/06Configuring_Network_Interfaces
2018-12-10T06:14:53
CC-MAIN-2018-51
1544376823318.33
[array(['https://sdwan-docs.cisco.com/@api/deki/files/2272/s00173.png?revision=4&size=bestfit&width=762&height=166', 's00173.png'], dtype=object) array(['https://sdwan-docs.cisco.com/@api/deki/files/2438/s00183.png?revision=2&size=bestfit&width=771&height=153', 's00183.png'], dtype=object) ]
sdwan-docs.cisco.com
Please see Kublr-in-a-Box for new installation guide. Kublr-in-a-Box is a simple and convenient demo version of Kublr, running in Vagrant, that allows you set-up Kubernetes clusters. You can also use Kublr-in-a-Box to install a non-production licensed version of the full Kublr Platform. By using and downloading Kublr and Kublr-in-a-Box, you agree to the following Terms of Use and Privacy Policy The virtual machine should start up and ask for an External IP: ==> default: Running provisioner: shell... Virtual machine needs know you external IP for the machine in order to do proper installation of baremetal clusters. External IP: Please provide the IP address of your local machine in your local network, so your on-premise nodes are able to connect to this IP address while deploying a cluster. Wait while the Kublr-in-a-Box virtual machine starts up. It should end with the following output: ... default: Kubectl config file is available in current directory. default: default: The following links are available: default: Kublr HTTPS: admin / kublrbox default: Kublr HTTP: admin / kublrbox default: Keycloak HTTPS: admin / kublrbox default: Keycloak HTTP: admin / kublrbox default: K8s: Open your favorite browser and navigate to Use following credentials to access Kublr-in-a-Box UI: admin kublrbox Now you are good to go with Kublr-in-a-Box. If you no longer need Kublr-in-a-Box, you can clean up resources: Open terminal with the current directory pointing to the downloaded Vagrantfile file and launch the following commands: vagrant destroy -f || rm -rf Vagrantfile ==> default: Forcing shutdown of VM... ==> default: Destroying VM and associated drives... To remove installed Kublr Vagrant boxes: Print the list of all installed Kublr boxes: vagrant box list | grep kublr kublr-321 (virtualbox, 0) kublr-322 (virtualbox, 0) Run the removal command for each box you want to remove: vagrant box remove kublr-321 Removing box 'kublr-321' (v0) with provider 'virtualbox'... vagrant box remove kublr-322 Removing box 'kublr-322' (v0) with provider 'virtualbox'... If you receive a “The installation failed” error message while installing the VirtualBox, follow these instructions: VirtualBox Installation Error If your virtual machine crashes with the message like this: Stderr: VBoxManage.exe: error: The virtual machine 'kublr' has terminated unexpectedly during startup with exit code 1 (0x1). More details may be available in 'C:\Users\xxx\VirtualBox VMs\kublr\Logs\VBoxHardening.log' VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component MachineWrap, interface IMachine Double check that you are using Virtualbox 5.2 or later. VirtualBox 5.1 is not supported. If problem still persists on VirtualBox 5.2, please contact a Kublr representative. If Vagrant is already installed, you may experience following errors: On Windows with the latest version of Vagrant (2.1.2) you may see the error as: C:/HashiCorp/Vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/bin/vagrant:47:in `[]=': Invalid argument - ruby_setenv(VAGRANT_NO_PLUGINS)(E from C:/HashiCorp/Vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/bin/vagrant:47:in `block in <main>' from C:/HashiCorp/Vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/bin/vagrant:36:in `each_index' from C:/HashiCorp/Vagrant/embedded/gems/2.1.2/gems/vagrant-2.1.2/bin/vagrant:36:in `<main>' Open Vagrantfile and find the line: required_plugins = (plagin-name) There are list of plugins that you need to install To fix the issue, run the command not in directory with Vagrantfile (up to the level above): vagrant plugin install plagin-name Some non-standard browser extensions or plugins (not installed by default) aren’t compatible with the Kublr Control Plane. In this case, please use the browser in incognito mode (plugins are automatically disabled) or temporary disable the plugins. More details can be found at:. Please upgrade Microsoft PowerShell to version 5.0. Please contact your IT department to resolve the issue. For questions or troubleshooting please contact contact us We welcome your feedback. After all, Kublr was created to simplify your Kubernetes experience. Questions? Suggestions? Need help? Contact us.
https://docs.kublr.com/installationguide/bootstrap/
2018-12-10T07:08:48
CC-MAIN-2018-51
1544376823318.33
[]
docs.kublr.com
Swapping Drawings in the Timeline View_0<< -.
https://docs.toonboom.com/help/harmony-14/premium/cut-out-animation/swap-draw-timeline-view.html
2018-12-10T07:31:18
CC-MAIN-2018-51
1544376823318.33
[array(['../Resources/Images/HAR/Stage/Cut-out/HAR12/swap2.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/HAR12/HAR12_drwg_sub1.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Cut-out/HAR12/HAR12_drwg_sub2.png', None], dtype=object) ]
docs.toonboom.com
Install the vEdge 5000 Router Once you have prepared your site for router installation, follow the instructions below to unpack the vEdge 5000 router and install it on four posts in a 19-inch rack. Unpack the vEdge 5000 Router A vEdge 5000 router is shipped in a cardboard carton and secured firmly in place with foam packing material. The carton contains an accessory box with Quick Start instructions. It is recommended that you do not unpack the router until you are ready to install it. To unpack the router: - Move the cardboard carton close to the installation site, making sure you have adequate space to remove all the contents of the box. - Open the top flaps of the carton. The router chassis and the accessories are packed together in the same box with partitions in the packing foam to accommodate the accessories. - Gradually remove the packing foam holding the router and the accessories in place. See Figure 1. - Take out the router and each accessory. - Verify the router components against the packing list included in the box. Figure 1: Unpacking the vEdge 5000 5000 Router The cardboard carton in which the router is packed includes a packing list. Check the parts you receive with your router against the items on the packing list. The packing list specifies the name, part number, and quantity of each item in the carton and the accessory box. Table 1 lists the parts shipped with the vEdge 5000 router and their quantities. Mount the vEdge 5000 Router in a Rack You can mount the vEdge 5000 router on four posts in a 19-inch rack. In addition to the accessory box, you need the following tools to mount a vEdge 5000 router in a 19-inch rack: - Number 2 Phillips (+) screwdriver - Tape measure The accessory box ships with two slide rails. Each slide rail has two parts: an inner rail that attaches to the router chassis and an outer rail that attaches to the 19-inch rack. Warning: To prevent bodily injury when mounting or servicing the vEdge 5000 router in a rack, you must take special precautions to ensure that the system remains stable. The following guidelines are provided to ensure your safety: - If this is the only router in the rack, mount it at the bottom of the rack. - If you are mounting the router in a partially filled rack, start to load the rack from the bottom, placing the heaviest component at the bottom of the rack. To mount the vEdge 5000 router on all four posts in a 19-inch rack: - Place the router chassis on the floor or on a sturdy table near the rack. - Verify the internal dimensions of the rack with a tape measure. The chassis is 440 mm wide and must fit within the mounting posts. - Detach both inner rails from the two slide rails: - Take one of the slide rails, and slide the inner bracket all the way up to the end of the slide rail until you hear a click. - Push the slide rail lock outwards in the direction indicated by the arrow in Figure 2. Then pull the inner bracket out of the slide rail. - Repeat Steps 3a and 3b to release the inner rail from the second slide rail. Figure 2: Detaching the Inner Rail from the Slide Rail - Attach the two inner rails to either side of the router chassis using the inner rail screws in the accessory box. Use five screws to secure each inner rail. Figure 3: Attaching the Inner Rails to the vEdge 5000 Router Chassis - Attach the two mounting ears to either side of the router chassis using the mounting ear screws in the accessory box. Use three screws to attach each mounting ear. Figure 4: Attaching the Mounting Ears to the vEdge 5000 Router Chassis - Install the two outer rails into the 19-inch rack: - Clip the outer rail to the front of the rack, aligning the three holes on the outer rail with the threaded holes in the front post of the rack. You will hear a click once the outer rail is firmly attached. - Clip the outer rail to the rear of the rack, aligning the three holes on the outer rail with the threaded holes in the rear post of the rack. You will hear a click once the rail is firmly attached. Repeat Steps 6a and 6b for the other slide rail. Figure 5: Clipping the Outer Rail to the Front and Rear of the Rack - Grasp both sides of the router chassis, making sure that the front of the chassis is facing you. - Stand at the front of the rack, and lift the router to align it with the slide rails. - Gently insert the chassis into the outer rails on both sides of the rack, as shown in Figure X. Figure 6: Inserting the Chassis Into the Outer Rails - Slide the chassis as far back as possible until you hear a click. While sliding the chassis in, push the release tabs on both the inner rails in the direction of the arrow shown in Figure 11. If you do not push the release tabs, you will be able to slide the router in only halfway. Figure 7: Pushing the Release Tabs on the Inner Rails - Secure the mounting ears to the front of the rack using the rack-mount screws in the accessory box. Use one screw in the center. Then, tighten the screws. Figure 8: Securing the Mounting Ears to the Front of the Rack To slide the router chassis out of the slide rails, gently pull it outwards. Then, press the blue slide rail locks on both sides and slide the chassis out. Additional Information Prepare for Router Installation Connect the vEdge 5000 Router
https://sdwan-docs.cisco.com/Product_Documentation/vEdge_Routers/vEdge_5000_Router/03Planning_and_Installation/02Install_the_vEdge_5000_Router
2018-12-10T07:08:14
CC-MAIN-2018-51
1544376823318.33
[array(['https://sdwan-docs.cisco.com/@api/deki/files/4349/H00213.png?revision=2', 'h00213.png'], dtype=object) array(['https://sdwan-docs.cisco.com/@api/deki/files/4343/H00210.png?revision=4', 'h00210.png'], dtype=object) array(['https://sdwan-docs.cisco.com/@api/deki/files/4358/H00211.png?revision=3', 'h00211.png'], dtype=object) array(['https://sdwan-docs.cisco.com/@api/deki/files/4375/H00231.png?revision=2', 'H00231.png'], dtype=object) array(['https://sdwan-docs.cisco.com/@api/deki/files/4345/H00214.png?revision=6', 'h00214.png'], dtype=object) array(['https://sdwan-docs.cisco.com/@api/deki/files/4351/H00217.png?revision=4', 'h00217.png'], dtype=object) array(['https://sdwan-docs.cisco.com/@api/deki/files/4348/H00218.png?revision=4', 'h00218.png'], dtype=object) array(['https://sdwan-docs.cisco.com/@api/deki/files/4363/H00229.png?revision=4', 'h00229.png'], dtype=object) ]
sdwan-docs.cisco.com
>> are specified throughout a the Splunk documentation - Providing feedback.
http://docs.splunk.com/Documentation/Splunk/7.0.1/SearchTutorial/WelcometotheSearchTutorial
2018-12-10T07:06:20
CC-MAIN-2018-51
1544376823318.33
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
In this step you will add code to start tracing when the hidden filter XXX_XHOMESALE_SCRUD_Filter_1 is used. Add this statement to the uInitialize method routine to start the trace when the filter is initialized: Set #AvFrameworkManager avTrace(TRUE) Compile the filter. Execute the Framework as VLF-ONE application. Deselect the tracing option: Select your application, then the Home business object. Notice that the tracing starts as you do this.
https://docs.lansa.com/14/en/lansa048/content/lansa/lansa048_6450.htm
2018-12-10T06:21:14
CC-MAIN-2018-51
1544376823318.33
[]
docs.lansa.com
For production environments, several factors influence installation. Consider the following questions as you read through the documentation: Do you install on-premises or in public/private clouds? The Installation Methods section provides more information about the cloud providers options available.. OKD can be installed on-premises or hosted on public or private clouds. Ansible playbooks can help you with automating the provisioning and installation processes. For information, see Advanced Installation. Determine how many nodes and pods you require for your OKD cluster. Cluster scalability correlates to the number of pods in a cluster environment. That number influences the other numbers in your setup. See Cluster Limits for the latest limits for objects in OKD..
https://docs.okd.io/3.9/install_config/install/planning.html
2018-12-10T06:31:34
CC-MAIN-2018-51
1544376823318.33
[]
docs.okd.io
Add Rules to Fiddler Customize Rules To add custom columns to the Fiddler UI, modify requests or responses, test application performance, and a variety of other custom tasks, add rules to Fiddler's JScript.NET CustomRules.js file in FiddlerScript. Click Rules > Customize Rules.... Enter FiddlerScript code inside the appropriate function. Save the file. Fiddler will automatically reload the rules. Use Additional .NET Assemblies To use additional .NET assemblies in your script: Click Tools > Fiddler Options. Click the Extensions tab. Edit the References list. Either: Register the assembly in the GAC; or Copy the assembly to the folder that contains Fiddler.exe. To use the new assembly's functions without fully-qualifying them, update the #import clause at the top of the script. Change the JScript Editor Launched from the Rules Menu Click Tools > Fiddler Options. Edit the Editor string. Restore Default Rules Delete the CustomRules.js file in ~/Documents/Fiddler2/Scripts. Restart Fiddler. Note: Fiddler's default rules are stored in ~/Program Files/Fiddler2/Scripts/SampleRules.js.
http://docs.telerik.com/fiddler/extend-fiddler/addrules
2016-10-20T21:19:46
CC-MAIN-2016-44
1476988717954.1
[]
docs.telerik.com
Documentation Cycle is designed to take the weight of managing infrastructure and application deployment off your shoulders. This documentation is your guide to the platform. What is Cycle? Cycle is multi-cloud container orchestration, abstracting away all the distractions of infrastructure, networking, OS patching, and server maintenance and filtering it under a beautiful user centric interface. Cycle helps with managing private, public, hybrid, or a combination (multi) cloud environments as hubs. What are Containers? Containers are portable, self-contained applications that carry with them everything they need to run. This has created a paradigm shift in devops, creating ecosystems where these containers can be passed back and forth between developers, automated testing infrastructure, production servers, and everywhere else without worrying about dependency management or if the containers will even run. There are a number of other benefits to containers, and Cycle takes advantage of them to provide performant, secure application deployment in a repeatable, dependable way. Public Roadmap Follow along with Cycle development and get your voice heard! Head over to our public roadmap hosted on Trello and get a peek on what's coming next, or vote on what you think should be implemented. Contact & Support How can we help? If you have any questions at all, reach out to us through any of the methods below. Devops doesn't sleep, and neither do we. Slack For general discussions and connecting with the Cycle community, join our Slack workspace! Our team, and other Cycle users just like you are online right now ready to talk.
https://docs.cycle.io/
2019-08-17T14:58:26
CC-MAIN-2019-35
1566027313428.28
[]
docs.cycle.io
Enables you to prevent a batch edit confirmation message from being displayed. BatchEditConfirmShowing: ASPxClientEvent<ASPxClientVerticalGridBatchEditConfirmShowingEventHandler<ASPxClientVerticalGrid>> The BatchEditConfirmShowing event handler receives an argument of the ASPxClientVerticalGridBatchEditConfirmShowingEventArgs type. The following properties provide information specific to this event. In the batch edit mode, if the GridBatchEditSettings.ShowConfirmOnLosingChanges property is true (the default behavior), a confirmation dialog is displayed when a grid page contains modified values and an end-user tries to send a request, for instance to sort grid data. The BatchEditConfirmShowing event occurs before displaying this confirmation dialog and allows you to cancel its display, if required. For instance, you may want to prevent the batch edit confirmation dialog from being displayed, if you place the ASPxVerticalGrid into an ASPxCallbackPanel together with other controls (such as ASPxTabControl, ASPxButton, etc.), and some of these controls (but not the ASPxVerticalGrid) initiates the panel update by calling the ASPxCallbackPanel's ASPxClientCallbackPanel.PerformCallback client method. In this case, you can set a client flag in the initiator control's client event handler, and then analyze this flag within a handler of the BatchEditConfirmShowing event. In the event handler, you can set the ASPxClientCancelEventArgs.cancel property to true to prevent display of the grid confirmation dialog.
https://docs.devexpress.com/AspNet/js-ASPxClientVerticalGrid.BatchEditConfirmShowing
2019-08-17T14:44:17
CC-MAIN-2019-35
1566027313428.28
[]
docs.devexpress.com
Child Care Benefit and Child Care Rebate payments to two providers Total dollar figure, aggregated by financial year for differing periods, for all Child Care benefit and Child Care Rebate payments made by the Department of Education and Training to RHT Investments (QLD) Pty Ltd and Camelia Avenue Childcare Centre.
https://docs.education.gov.au/documents/child-care-benefit-and-child-care-rebate-payments-two-providers
2019-08-17T14:55:52
CC-MAIN-2019-35
1566027313428.28
[]
docs.education.gov.au
Graphic Rule This is an example of the graphic representation of piping plan volumes in a drawing. Label Rule All drawing key plan volumes are labeled with their 3D model name. By default, the label is placed in the upper left corner of the volume in the drawing. For an example, see the above graphic.
https://docs.hexagonppm.com/reader/hK0ODlG2hJkCr_AZXM8KrA/_iUDDe0jyzVMPjD3mbxN6A
2019-08-17T14:35:46
CC-MAIN-2019-35
1566027313428.28
[]
docs.hexagonppm.com
If an error such as 500 Internal Server Error, 502 Bad Gateway, or 504 Gateway Timeout occurs after your web server connects to WAF. Use the following methods to locate the cause and remove the error: Interception by a firewall, security protection software installed on the backend server, or the rate limiting policy Add the WAF IP address ranges to the whitelist of the firewall (hardware or software), security protection software, and rate limiting module. Origin server configuration error Locate the target domain name record in the domain name list and click the domain name. On the displayed page, click in the Server Information area to check whether the protocol, IP address, and port number used by the origin server are correct. For details about editing domain information, see Editing Domain Information. As shown in Figure 1, you can access the IP address of the origin server to check whether the backend service port is enabled. Outdated HTTPS version Backend server performance issue
https://docs.otc.t-systems.com/en-us/usermanual/waf/waf_01_0066.html
2019-08-17T15:44:18
CC-MAIN-2019-35
1566027313428.28
[]
docs.otc.t-systems.com
changes.mady.by.user Nirdesha Munasinghe Saved on Dec 23, 2014 ... In the Manage screen, you can specify an authentication type for the methods of the resource that you created earlier.. Application & Application User Publish the API to the API Store. Powered by a free Atlassian Confluence Community License granted to WSO2, Inc.. Evaluate Confluence today.
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=34612738&selectedPageVersions=23&selectedPageVersions=24
2019-08-17T15:32:00
CC-MAIN-2019-35
1566027313428.28
[]
docs.wso2.com
Class humhub\modules\activity\components\MailSummary MailSummary is send to the user with a list of new activities Public Properties Hide inherited properties Public Methods Protected Methods Constants Property Details The interval of this summary The mail summary layout file for html mails The mail summary layout file for plaintext mails public string $layoutPlaintext = '@activity/views/mails/plaintext/mailSummary' The maximum number of activities in the e-mail summary public integer $maxActivityCount = 50 The user public humhub\modules\user\models\User $user = null Method Details Returns the list of activities for the e-mail summary Returns the last summary date Returns the mode (exclude, include) of given content containers See also humhub\modules\activity\models\MailSummaryForm. Returns a list of content containers which should be included or excluded. Returns the subject of the MailSummary Returns a list of suppressed activity classes Sends the summary mail to the user Stores the date of the last summary mail
http://docs.humhub.org/humhub-modules-activity-components-mailsummary.html
2019-08-17T15:16:45
CC-MAIN-2019-35
1566027313428.28
[]
docs.humhub.org
File:ExportTab-1.png From iPi Docs Size of this preview: 800 × 585 pixels. Other resolution: 973 × 711 pixels. Original file (973 × 711 pixels, file size: 641 KB, MIME type: image/png) File history Click on a date/time to view the file as it appeared at that time. - You cannot overwrite this file. File usage The following 4 pages link to this file:
http://docs.ipisoft.com/index.php?title=File:ExportTab-1.png&oldid=763
2019-08-17T14:35:42
CC-MAIN-2019-35
1566027313428.28
[array(['/images/thumb/b/b8/ExportTab-1.png/800px-ExportTab-1.png', 'File:ExportTab-1.png'], dtype=object) ]
docs.ipisoft.com
Object Positioning in Silverlight This overview demonstrates how to control the position of objects (such as shapes, text, and media) in your Microsoft Silverlight-based application. The overview contains the following sections: - Plug-in Position and Dimensions - The Canvas Object - Positioning Paths, Geometries, and Other Shapes - Transforms - Z-Order. Because of this, there are two frames of reference when positioning Silverlight objects. - Within the plug-in: Position objects relative to the root Canvas within the plug-in. Most of this overview describes this type of positioning. - Within the HTML: The entire plug-in and all the objects positioned within it are subject to where you place the plug-in in HTML. Position of the plug-in within the HTML page depends on where you put the HTML container element (usually a DIV) within the HTML. In the example below, the HTML embeds the plug-in within an HTML TABLE. The following illustration shows the previous HTML, given a Silverlight-based application that contains only a red Canvas that is 50 pixels wide and 50 pixels high. A plug-in instance inside an HTML page In the previous example, the plug-in displayed as 50 pixels high and 50 pixels wide. To specify dimensions like this, you need to keep in mind three different files where dimensions are typically set. HTML file: The HTML file has to allow the plug-in to be displayed. For example, HTML objects can lay on top of the plug-in or crowd it out of display. CreateSilverlight.js file: The function in the CreateSilverlight.js helper file can specify the size of the plug-in. For example, the following code shows how to specify dimensions of 50 pixels by 50 pixels within the Silverlight.createObject method defined inside the CreateSilverlight.js file. Root Canvas: You can set width and height properties on the root Canvas of your application. The following code shows the 50-by-50 red Canvas. If you are embedding a plug-in with specific dimensions you may have to make sure all three locations match. Instead of embedding the plug-in within a larger HTML page, you might want to have the plug-in take up the entire Web page. In the following HTML, the width and height are set to 100% for the HTML element, BODY element, and the DIV that contains the plug-in. In addition to setting the height and width to 100% in the HTML page, set the height and width to 100% in the CreateSilverlight.js file, as follows. The Canvas Object All Silverlight objects must be contained in a Canvas object and positioned relative to their containing Canvas. You control the positioning of objects inside the Canvas by specifying x and y coordinates. These coordinates are in pixels, which are resolution-independent. The x and y coordinates are often specified by using the Canvas.Left and Canvas.Top attached properties. Canvas.Left specifies the object's distance from the left side of the containing Canvas (x coordinate), and Canvas.Top specifies the object's distance from the top of the containing Canvas (y coordinate). The following example shows how to position a rectangle 30 pixels from the left and 30 pixels from the top of a Canvas. The following illustration shows how this code renders inside the Canvas. Positioning a rectangle inside the Canvas Note Silverlight does not support high dpi in version 1. Therefore, objects rendered on the screen and the coordinate system do not scale in response to monitor resolution. In addition, coordinates that are used by mouse events are not affected by monitor resolution. You can nest Canvas objects and position them by using the Canvas.Left and Canvas.Top properties. When you nest objects, the coordinates used by each object are relative to its immediate containing Canvas. In the following example, the root (white) Canvas contains a nested (blue) Canvas that has Canvas.Left and Canvas.Top properties of 30. The nested blue Canvas contains a red rectangle that also has Canvas.Left and Canvas.Top values of 30. The following illustration shows how this code renders. Nested objects Note In Silverlight-based applications that are embedded in HTML pages, the HTML element that hosts the Silverlight plug-in often has a specific width and height. Because of this, it is possible for objects to be positioned out of view. For example, if your host HTML element is only 300 pixels wide and you position an object 400 pixels to the right in your Silverlight-based application, your users will not be able to see the object. Positioning Paths, Geometries, and Other Shapes Not all objects are positioned by using the Canvas.Left and Canvas.Top properties. However, they all use x and y coordinates that are relative to the containing Canvas. Examples of these objects include the Path object, various geometry objects, and shapes such as Line. The following example shows the positioning syntax for some of these objects. For more information about paths, geometries, and other shapes, see Shapes and Drawing in Silverlight Overview, Geometries Overview, and Path Markup Syntax. The following illustration shows how this code renders. Positioning Line, Polyline, and Path objects Transforms Another way to change the position of an object is to apply a transform to it. You can use transforms to move the object, rotate the object, skew the object's shape, change the size (scale) of the object, or a combination of these actions. The following example shows a RotateTransform that rotates a Rectangle element 45 degrees about the point (0,0). The following illustration shows how this code is rendered. A Rectangle rotated 45 degrees about the point (0,0) For more information about transforms and how to use them, see Transforms Overview. Z-Order So far, the discussion has focused on positioning objects in two-dimensional space. You can also position objects on top of one another. The z-order of an object determines whether an object is in front of or behind another overlapping object. By default, the z-order of objects within a Canvas is determined by the sequence in which they are declared. Objects that are declared later appear in front of objects that are declared first. The following example creates three Ellipse objects. You can see that the Ellipse declared last (the lime-colored ellipse) is in the foreground, in front of the other two Ellipse objects. The following illustration shows how this code renders. Overlapping Ellipse objects You can change this behavior by setting the Canvas.ZIndex attached property on objects within the Canvas. Higher values are closer to the foreground; lower values are farther from the foreground. The following example is similar to the preceding one, except that the z-order of the Ellipse objects is reversed. The Ellipse that is declared first (the silver ellipse) is now in front. The following illustration shows how this code renders. Reversing the z-order of the Ellipse objects See Also Shapes and Drawing Overview Geometries Overview Path Markup Syntax Transforms Overview Silverlight Object Models and Scripting to the Silverlight Plug-inbr />Overviews and How-to Topics
https://docs.microsoft.com/en-us/previous-versions/bb693296%28v%3Dmsdn.10%29
2019-08-17T15:08:38
CC-MAIN-2019-35
1566027313428.28
[array(['images/bb693296.positioning_overview_html_table%28en-us%2cmsdn.10%29.gif', 'A plug-in instance inside an HTML page'], dtype=object) array(['images/bb693296.positioning_ovw1%28en-us%2cmsdn.10%29.gif', 'Positioning a rectangle inside the Canvas'], dtype=object) array(['images/bb693296.positioning_ovw2%28en-us%2cmsdn.10%29.gif', 'Nested objects'], dtype=object) array(['images/bb693296.positioning_ovw3%28en-us%2cmsdn.10%29.gif', 'Positioning Line, Polyline, and Path objects'], dtype=object) array(['images/bb412391.local_772404290_graphicsmm_fe_rotated_about_upperleft_corner%28en-us%2cmsdn.10%29.png', 'A Rectangle rotated 45 degrees about (0,0)'], dtype=object) array(['images/bb693296.positioning_ovw4%28en-us%2cmsdn.10%29.gif', 'Overlapping Ellipse objects'], dtype=object) array(['images/bb693296.positioning_ovw5%28en-us%2cmsdn.10%29.gif', 'Reversing the z-order of the Ellipse objects'], dtype=object)]
docs.microsoft.com
The rotation as Euler angles in degrees..); } } Do not set one of the eulerAngles axis separately (eg. eulerAngles.x = 10; ) since this will lead to drift and undesired rotations. When setting them to a new value set them all at once as shown above. Unity will convert the angles to and from the rotation stored in Transform.rotation.
https://docs.unity3d.com/kr/560/ScriptReference/Transform-eulerAngles.html
2019-08-17T15:41:32
CC-MAIN-2019-35
1566027313428.28
[]
docs.unity3d.com
Log Streaming is available to customers enrolled in Premier Support using a supported Appian version as per the Product Support Policy. Appian customers must purchase Premier Support to use the functionality described below. The functionality described below is not included in the base Appian platform. Appian Cloud instances can be configured to stream supported logs, in real time, to a syslog receiver owned by customers. Once logs are stored in a central repository, customers can index, access, search, and correlate events using their existing Log Management and Security Information and Event Management (SIEM) tools. This service operates on a push-based model, in which Appian Cloud instances are configured to send a stream of logs to the customer’s syslog receiver. Logs are forwarded in real-time as the messages are written in the Appian Cloud instance(s). These logs can be further digested and aggregated by tools of your choice, such as Splunk, LogRhythm, and Elasticsearch-Logstash-Kibana (ELK) stack. Customers with this service enabled on their Appian Cloud instances can integrate the information contained in the logs for a consolidated view of their enterprise operations. Some benefits of this service include: Log Streaming supports the transmission of messages to either an on-premise syslog receiver or Sumo Logic Cloud Syslog Source. The figure below shows an example of the message flow between your Appian Cloud instances and an on-premise syslog receiver in the customer network. For on-premise syslog receivers, logs transmission is performed over an IPsec VPN tunnel established to the customer network. As an additional security layer, syslog messages can be encrypted using a TLS certificate installed in the syslog receiver provided by the customer. TLS encryption is enabled by default but can be disabled upon customer request. The figure below shows an example of the message flow between your Appian Cloud instance and a Sumo Logic Cloud Syslog Source. For Sumo Logic Cloud Syslog Source, logs transmission is performed over the Internet and traffic is encrypted with TLS using the trusted public CA provided by the customer’s Sumo Logic deployment. The table below contains the logs to be forwarded by each Appian Cloud instance with this feature enabled. For details about the contents and frequency of the log messages, refer to the Appian Logging documentation. Syslog messages have the following format: PRI: Specifies the priority of the syslog message (RFC5424) TIMESTAMP: Date and time of the message. The value will be expressed in the timezone configured in the customer, customers can follow these steps to enable log streaming in their Appian Cloud instance(s): On This Page
https://docs.appian.com/suite/help/18.3/Log_Streaming_for_Appian_Cloud.html
2019-08-17T15:01:50
CC-MAIN-2019-35
1566027313428.28
[]
docs.appian.com
Method Base. Method Is Security Transparent Base. Method Is Security Transparent Base. Method Is Security Transparent Base. Property Is Security Transparent Definition Gets a value that indicates whether the current method or constructor is transparent at the current trust level, and therefore cannot perform critical operations. public: virtual property bool IsSecurityTransparent { bool get(); }; public virtual bool IsSecurityTransparent { get; } member this.IsSecurityTransparent : bool Public Overridable ReadOnly Property IsSecurityTransparent As Boolean Property Value true if the method or constructor is security-transparent at the current trust level; otherwise, false. Remarks). Using these properties is much simpler than examining the security annotations of an assembly and its types and members, checking the current trust level, and attempting to duplicate the runtime's rules..
https://docs.microsoft.com/en-us/dotnet/api/system.reflection.methodbase.issecuritytransparent?view=netframework-4.7.2
2019-08-17T15:17:34
CC-MAIN-2019-35
1566027313428.28
[]
docs.microsoft.com
3. Advanced Tutorial¶ 3 advanced_tutorial.py or advanced_tutorial.ipynb (for Jupyter notebooks) files in your PYTHONPATH/Lib/site-packages/eonr/examples folder - feel free to load that into your Python IDE to follow along. 3.2. Calculate EONR for several economic scenarios¶ In this tutorial, we will run EONR.calculate_eonr() in a loop, adjusting the economic scenario prior to each run. 3.3. Load modules¶ Load pandas and EONR: [1]: import pandas as pd import eonr print('EONR version: {0}'.format(eonr.__version__)) EONR version: 0.1.4 3.4. Load the data¶ EONR uses Pandas dataframes to access and manipulate the experimental data. [2]: df_data = pd.read_csv(r'data\minnesota_2012.csv') df_data [2]: 3.5. Set column names and units¶ The table containing the experimental data must have a minimum of two columns: * Nitrogen fertilizer rate * Grain yield We’ll also set nitrogen uptake and available nitrogen columns right away for calculating the socially optimum nitrogen rate. As a reminder, we are declaring the names of these columns and units because they will be passed to EONR later. [3]: col_n_app = 'rate_n_applied_kgha' col_yld = 'yld_grain_dry_kgha' col_crop_nup = 'nup_total_kgha' col_n_avail = 'crop_n_available_kgha' unit_currency = '$' unit_fert = 'kg' unit_grain = 'kg' unit_area = 'ha') C:\Anaconda3\lib\site-packages\ipykernel_launcher.py:17: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: 3.6. Turn base_zero off¶ You might have noticed the base_zero option for the EONR class in the API. base_zero is a True/ False flag that determines if gross return to nitrogen should be expressed as an absolute values. We will see a bit later that upon executing EONR.calculate_eonr(), grain yield from the input dataset is used to create a new column for gross return to nitrogen (“grtn”) by multiplying the grain yield column by the price of grain ( price_grain variable). If base_zero is True (default), the observed yield return data are standardized so that the best-fit quadratic-plateau model passes through the y-axis at zero. This is done in two steps: 1. Fit the quadratic-plateau to the original data to determine the value of the y-intercept of the model (\(\beta_0\)) 2. Subtract \(\beta_0\) from all data in the recently created “grtn” column (temporarily stored in EONR.df_data) This behavior ( base_zero = True) is the default in EONR. However, base_zero can simply be set to False during the initialization of EONR. We will set it store it in its own variable now, then pass to EONR during initialization: [4]: base_zero = False 3.7. Initialize EONR¶ Let’s set the base directory and initialize an instance of EONR, setting cost_n_fert = 0, costs_fixed = 0, and price_grain = 1.0 (\$1.00 per kg) as the default values (we will adjust them later on in the tutorial): [5]: import os base_dir = os.path.join(os.getcwd(), 'eonr_advanced_tutorial') my_eonr = eonr.EONR(cost_n_fert=0, costs_fixed=0, price_grain=1.0, col_n_app=col_n_app, col_yld=col_yld, col_crop_nup=col_crop_nup, col_n_avail=col_n_avail, unit_currency=unit_currency, unit_grain=unit_grain, unit_fert=unit_fert, unit_area=unit_area, base_dir=base_dir, base_zero=base_zero) 3.8. Calculate the AONR¶ You may be wondering why cost_n_fert was set to 0. Well, setting our nitrogen fertilizer cost to $0 essentially allows us to calculate the optimum nitrogen rate ignoring the cost of the fertilizer input. This is known as the Agronomic Optimum Nitrogen Rate (AONR). The AONR provides insight into the maximum achievable grain yield. Notice price_grain was set to 1.0 - this effectively calculates the AONR so that the maximum return to nitrogen (MRTN), which will be expressed as $ per ha when ploting via EONR.plot_eonr(), is similar to units of kg per ha (the units we are using for grain yield). Let’s calculate the AONR and plot it (adjusting y_max so it is greater than our maximum grain yield): [6]: my_eonr.calculate_eonr(df_data, ) my_eonr.plot_eonr(x_min=-5, x_max=300, y_min=-100, y_max=18000)_0<< We see that the agronomic optimum nitrogen rate was calculated as 177 kg per ha, and the MRTN is 13.579 Mg per ha (yes, it says $13,579, but because price_grain was set to \$1, the values are equivalent and the units can be substituted. If you’ve gone through the first tutorial, you’ll notice there are a few major differences in the look of this plot: The red line representing nitrogen fertilizer cost is on the bottom of the plot This happened because cost_n_fert was set to zero, and nitrogen fertilizer cost is simply being plotted as a flat horizontal line at \(\text{y}=0\) The blue line is missing? The gross return to nitrogen (GRTN) line representing the best-fit of the quadratic-plateau model (blue line) is there, but it is actually being covered up by the green line (net return to nitrogen; NRTN). This is the case because cost_n_fert *was set to zero. The GRTN/NRTN lines do not pass through the y-intercept at \(\text{y}=0\)? Because base_zero was set to False, the observed data (blue points) were not standardized as to “force” the best-fit quadratic-plateau model from passing through at \(\text{y}=0\). 3.9. Bootstrap confidence intervals¶ We will calculate the AONR again, but this time we will compute the bootstrap confidence intervals in addition to the profile-likelihood and Wald-type confidence intervals. To tell EONR to compute the bootstrap confidence intervals, simply set bootstrap_ci to True in the EONR.calcualte_eonr() function: [7]: my_eonr.calculate_eonr(df_data, bootstrap_ci=True) my_eonr.plot_tau() my_eonr.fig_tau = my_eonr.plot_modify_size(fig=my_eonr.fig_tau.fig, plotsize_x=5, plotsize_y=4.0)_1<< 3.10. Set fixed costs¶ Fixed costs (on a per area basis) can be considered by EONR. Simply set the fixed costs (using EONR.update_econ()) before calculating the EONR: [8]: costs_fixed = 12.00 # set to $12 per hectare my_eonr.update_econ(costs_fixed=costs_fixed) 3.11. Loop through several economic conditions¶ EONR computes the optimum nitrogen rate for any economic scenario that we define. The EONR class is designed so the economic conditions can be adjusted, calculating the optimum nitrogen rate after each adjustment. We just have to set up a simple loop to update the economic scenario (using EONR.update_econ()) and calculate the EONR (using EONR.calculate_eonr()). We will also generate plots and save them to our base directory right away: [9]: price_grain = 0.157 # set from $1 to 15.7 cents per kg cost_n_fert_list = [0.44, 1.32, 2.20] for cost_n_fert in cost_n_fert_list: # first, update fertilizer cost my_eonr.update_econ(cost_n_fert=cost_n_fert, price_grain=price_grain) # second, calculate EONR my_eonr.calculate_eonr(df_data) # third, generate (and save) the plots my_eonr.plot_eonr(x_min=-5, x_max=300, y_min=-100, y_max=2600) my_eonr.plot_save(fname='eonr_mn2012_pre.png', fig=my_eonr.fig_eonr) Computing EONR for Minnesota 2012 Pre Cost of N fertilizer: $0.44 per kg Price grain: $0.16 per kg Fixed costs: $12.00 per ha Economic optimum N rate (EONR): 169.9 kg per ha [135.2, 220.9] (90.0% confidence) Maximum return to N (MRTN): $2043.53 per ha Computing EONR for Minnesota 2012 Pre Cost of N fertilizer: $1.32 per kg Price grain: $0.16 per kg Fixed costs: $12.00 per ha Economic optimum N rate (EONR): 154.8 kg per ha [125.7, 195.0] (90.0% confidence) Maximum return to N (MRTN): $1900.67 per ha Computing EONR for Minnesota 2012 Pre Cost of N fertilizer: $2.20 per kg Price grain: $0.16 per kg Fixed costs: $12.00 per ha Economic optimum N rate (EONR): 139.7 kg per ha [115.9, 170.1] (90.0% confidence) Maximum return to N (MRTN): $1771.10 per ha A similar loop can be made adjusting the social cost of nitrogen. my_eonr.base_zero is set to True to compare the graphs: [10]: price_grain = 0.157 # keep at 15.7 cents per kg cost_n_fert = 0.88 # set to be constant my_eonr.update_econ(price_grain=price_grain, cost_n_fert=cost_n_fert) my_eonr.base_zero = True # let's use base zero to compare graphs cost_n_social_list = [0.44, 1.32, 2.20, 4.40] for cost_n_social in cost_n_social_list: # first, update social cost my_eonr.update_econ(cost_n_social=cost_n_social) # second, calculate EONR my_eonr.calculate_eonr(df_data) # third, generate (and save) the plots my_eonr.plot_eonr(x_min=-5, x_max=300, y_min=-400, y_max=1400) my_eonr.plot_save(fname='eonr_mn2012_pre.png', fig=my_eonr.fig_eonr) Computing SONR for Minnesota 2012 Pre Cost of N fertilizer: $0.88 per kg Price grain: $0.16 per kg Fixed costs: $12.00 per ha Social cost of N: $0.44 per kg Socially optimum N rate (SONR): 160.1 kg per ha [129.0, 204.0] (90.0% confidence) Maximum return to N (MRTN): $749.35 per ha Computing SONR for Minnesota 2012 Pre Cost of N fertilizer: $0.88 per kg Price grain: $0.16 per kg Fixed costs: $12.00 per ha Social cost of N: $1.32 per kg Socially optimum N rate (SONR): 155.8 kg per ha [126.3, 196.7] (90.0% confidence) Maximum return to N (MRTN): $737.04 per ha Computing SONR for Minnesota 2012 Pre Cost of N fertilizer: $0.88 per kg Price grain: $0.16 per kg Fixed costs: $12.00 per ha Social cost of N: $2.20 per kg Socially optimum N rate (SONR): 151.8 kg per ha [123.8, 189.9] (90.0% confidence) Maximum return to N (MRTN): $725.79 per ha Computing SONR for Minnesota 2012 Pre Cost of N fertilizer: $0.88 per kg Price grain: $0.16 per kg Fixed costs: $12.00 per ha Social cost of N: $4.40 per kg Socially optimum N rate (SONR): 142.8 kg per ha [117.9, 175.0] (90.0% confidence) Maximum return to N (MRTN): $701.67 per ha Notice that we used the same EONR instance ( my_eonr) for all runs. This is convenient if there are many combinations of economic scenarios (or many experimental datasets) that you’d like to loop through. If you’d like the results to be saved separate (perhaps to separate results depending if a social cost is considered) that’s fine too. Simply create a new instance of EONR and customize how you’d like. 3.12. View results¶ All eight runs can be viewed in the dataframe: [11]: my_eonr.df_results [11]: 9 rows × 32 columns EONR.df_results contains the following data columns (some columns are hidden by Jupyter in the table above): - price_grain – price of grain - cost_n_fert – cost of nitrogen fertilizer - cost_n_social – other “social” costs of nitrogen - price_ratio – cost:grain price ratio - unit_price_grain – units descirbing the price of grain - unit_cost_n – units describing cost of nitrogen (both fertilizer and social costs) - location – location of dataset - year – year of dataset - time_n – nitrogen application timing of dataset - base_zero – if base_zero = True, this is the y-intercept (\(\beta_0\)) of the quadratic-plateau model before standardizing the data - eonr – optimum nitrogen rate (can be agronomic, economic, or socially optimum; in units of EONR.unit_nrate) - eonr_bias – bias in reparameterized quadratic-plateau model for computation of confidence intervals - R* – the coefficient representing the generalized cost function - costs_at_onr – total costs at the optimum nitrogen rate - ci_level – confidence interval (CI) level (for subsequent confidence bounds) - ci_wald_l – lower Wald CI - ci_wald_u – upper Wald CI - ci_pl_l – lower profile-likelihood CI - ci_pl_u – upper profile-likelihood CI - ci_boot_l – lower bootstrap CI - ci_boot_u – upper bootstrap CI - mrtn – maximum return to nitrogen (in units of EONR.unit_currency) - grtn_r2_adj – adjusted \(\text{r}^2\) value of the gross return to nitrogen (GRTN) model - grtn_rmse – root mean squared error of the GRTN - grtn_max_y – maximum y value of the GRTN (in units of EONR.unit_rtn) - grtn_crit_x – critical x-value of the GRTN (point where the “quadratic” part of the quadratic-plateau model terminates and the “plateau” commences) - grtn_y_int – y-intercept (\(\beta_0\)) of the GRTN model - scn_lin_r2 – adjusted \(\text{r}^2\) value of the linear best-fit for social cost of nitrogen - scn_lin_rmse – root mean squared error of the the linear best-fit for social cost of nitrogen - scn_exp_r2 – adjusted \(\text{r}^2\) value value of the exponential best-fit for social cost of nitrogen - scn_exp_rmse – root mean squared error of the the exponential best-fit for social cost of nitrogen 3.13. Save_336_4400”, corresponding to cost_n_social > 0, price_ratio == 33.6, and cost_n_social == 4.40 for “social”, “336”, and “4400” in the folder name, respectively): [12]: print(my_eonr.base_dir) my_eonr.df_results.to_csv(os.path.join(os.path.split(my_eonr.base_dir)[0], 'advanced_tutorial_results.csv'), index=False) my_eonr.df_ci.to_csv(os.path.join(os.path.split(my_eonr.base_dir)[0], 'advanced_tutorial_ci.csv'), index=False) F:\nigo0024\Documents\GitHub\eonr\eonr\examples\eonr_advanced_tutorial\social_336_4400
https://eonr.readthedocs.io/en/latest/advanced_tutorial.html
2019-08-17T15:57:34
CC-MAIN-2019-35
1566027313428.28
[array(['_images/advanced_tutorial_12_1.png', '_images/advanced_tutorial_12_1.png'], dtype=object) array(['_images/advanced_tutorial_15_1.png', '_images/advanced_tutorial_15_1.png'], dtype=object) array(['_images/advanced_tutorial_19_1.png', '_images/advanced_tutorial_19_1.png'], dtype=object) array(['_images/advanced_tutorial_19_2.png', '_images/advanced_tutorial_19_2.png'], dtype=object) array(['_images/advanced_tutorial_19_3.png', '_images/advanced_tutorial_19_3.png'], dtype=object) array(['_images/advanced_tutorial_21_1.png', '_images/advanced_tutorial_21_1.png'], dtype=object) array(['_images/advanced_tutorial_21_2.png', '_images/advanced_tutorial_21_2.png'], dtype=object) array(['_images/advanced_tutorial_21_3.png', '_images/advanced_tutorial_21_3.png'], dtype=object) array(['_images/advanced_tutorial_21_4.png', '_images/advanced_tutorial_21_4.png'], dtype=object)]
eonr.readthedocs.io
Deduplication reduces the amount of physical storage required for a volume (or all the volumes in an AFF aggregate) by discarding duplicate blocks and replacing them with references to a single shared block. Reads of deduplicated data typically incur no performance charge. Writes incur a negligible charge except on overloaded nodes. As data is written during normal use, WAFL uses a batch process to create a catalog of block signatures. After deduplication starts, ONTAP compares the signatures in the catalog to identify duplicate blocks. If a match exists, a byte-by-byte comparison is done to verify that the candidate blocks have not changed since the catalog was created. Only if all the bytes match is the duplicate block discarded and its disk space reclaimed.
http://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-concepts/GUID-C7FBF94A-336F-4332-A0F8-79ED1BAE17E2.html
2019-08-17T15:05:40
CC-MAIN-2019-35
1566027313428.28
[]
docs.netapp.com
java.lang.Object com.atlassian.confluence.pages.persistence.dao.FileSystemAttachmentDataDaocom.atlassian.confluence.pages.persistence.dao.FileSystemAttachmentDataDao com.atlassian.confluence.pages.persistence.dao.HierarchicalFileSystemAttachmentDataDaocom.atlassian.confluence.pages.persistence.dao.HierarchicalFileSystemAttachmentDataDao public class HierarchicalFileSystemAttachmentDataDao DAO that stores attachments on the file system in a hierarchical structure. The top of the hierarchy is based on the space the attachments belong to with the remainder of the hierarchy being based on the page id the attachment is attached to. public static final String NEW_ATTACHMENT_SUBDIR public static final String NON_SPACED_DIRECTORY_NAME public HierarchicalFileSystemAttachmentDataDao() public FileSystemAttachmentDataDao.FileSystemAttachmentNamingStrategy getNamingStrategy() getNamingStrategyin class FileSystemAttachmentDataDao protected File getDirectoryForAttachment(ContentEntityObject content, Attachment attachment) FileSystemAttachmentDataDao getDirectoryForAttachmentin class FileSystemAttachmentDataDao content- the ContentEntityObject the Attachment belongs to public File getDirectoryForAttachmentContainer(long contentId, long spaceId) Return the directory that represents where attachments for the supplied content should be stored based also upon the id of the space that contains the content (if it's spaced content) This method is only public so it is available to the upgrade task responsible for migrating to this new attachment storage structure. contentId- the id of the content the attachment belongs to spaceId- the id of the space the content belongs to or 0 if it is not spaced content protected File getConfluenceAttachmentDirectory() getConfluenceAttachmentDirectoryin class FileSystemAttachmentDataDao public void setHashGenerator(IdMultiPartHashGenerator hashGenerator)
https://docs.atlassian.com/atlassian-confluence/4.2.7/com/atlassian/confluence/pages/persistence/dao/HierarchicalFileSystemAttachmentDataDao.html
2019-08-17T15:25:05
CC-MAIN-2019-35
1566027313428.28
[]
docs.atlassian.com
This API is used to add or delete tags of a specific instance in batches. The TMS may use this API to manage service resource tags. A resource can have up to 10 tags. The API is idempotent. If there are duplicate keys in the request body when you add tags, an error is reported. If the key of the to-be-created tag is the same as that of an existing tag, the value of the existing tag will be overwritten. When tags are being deleted and some tags do not exist, the operation is considered successful by default, and the character set of the tags will not be checked upon deletion. A key and a value can respectively consist of up to 127 and 255 characters. The tag structure cannot be missing, and the key cannot be left blank or an empty string. POST /v1/{project_id}/csbs_backup_policy/{resource_id}/tags/action POST /v1/{project_id}/csbs_backup_policy/{resource_id}/tags/action { "action": "create", "tags": [ { "key": "key1", "value": "value1" }, { "key": "key", "value": "value3" } ] } or { "action": "delete", "tags": [ { "key": "key1", "value": "value1" }, { "key": "key2", "value": "value3" } ] } None For details, see Error Codes.
https://docs.otc.t-systems.com/en-us/api/csbs/en-us_topic_0098635093.html
2019-08-17T15:46:00
CC-MAIN-2019-35
1566027313428.28
[]
docs.otc.t-systems.com
Addon configuration overview¶ Django applications may require or offer configuration options. Typically this will be achieved via the settings.py file, or through environment variables that Django picks up. This is largely handled by the aldryn_config.py in each application. Divio Cloud projects offers both these methods, as well as configuration via the Control Panel: - Django settings - environment variables - addon configuration field Environment variable, setting or Addon configuration field?¶ When should you adopt each of these methods in your applications? Rules of thumb: -.
http://docs.divio.com/en/latest/background/addon-configuration-overview.html
2019-08-17T15:09:41
CC-MAIN-2019-35
1566027313428.28
[]
docs.divio.com
Description: This function prepares the Subject to be inserted into JSON document by escaping the characters in the String using Json String rules. The function correctly escapes quotes and control-chars (tab, backslash, cr, ff, etc.) Subject Type: String Arguments: No arguments Return Type: String Examples: If the "message" attribute is 'He didn't say, "Stop!"', then the Expression ${message:escapeJson()} will return 'He didn't say, \"Stop!\"'
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.0/bk_expression-language/content/escapejson.html
2018-05-20T17:29:30
CC-MAIN-2018-22
1526794863662.15
[]
docs.hortonworks.com
AfxThrowDBException Call this function to throw an exception of type CDBException from your own code. void AfxThrowDBException( RETCODE nRetCode, CDatabase* pdb, HSTMT hstmt ); Parameters nRetCode A value of type RETCODE, defining the type of error that caused the exception to be thrown. pdb A pointer to the CDatabase object that represents the data source connection with which the exception is associated. hstmt An ODBC HSTMT handle that specifies the statement handle with which the exception is associated. Remarks. Requirements Header: afxdb.h
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/7043995t(v=vs.100)
2018-05-20T18:56:13
CC-MAIN-2018-22
1526794863662.15
[]
docs.microsoft.com
add extracted folder of MiXCR distribution to your PATHvariable or add symbolic link for mixcrscript to your bin/folder (e.g. ~/bin/in Ubuntu and many other popular linux distributions) Installation on Windows¶ Currently there is no execution script or installer for Windows. Still MiXCR can easily be used by direct execution from the jar file. check that you have Java 1.7+ installed on your system by typing java -version download latest binary distributaion of MiXCR from the release page on GitHub unzip the archive use mixcr.jarfrom the archive in the following way: > java -Xmx4g -Xms3g -jar path_to_mixcr\jar\mixcr.jar ... For example: > java -Xmx4g -Xms3g -jar C:\path_to_mixcr\jar\mixcr.jar align input.fastq.gz output.vdjca To use mixcr from jar file one need to substitute mixcr command with java -Xmx4g -Xms3g -jar path_to_mixcr\jar\mixcr.jar in all examples from this manual.
http://mixcr.readthedocs.io/en/latest/install.html
2018-05-20T17:27:13
CC-MAIN-2018-22
1526794863662.15
[]
mixcr.readthedocs.io
You lose network connectivity when you delete a port group associated with the iBFT network adapter. Problem A loss of network connectivity occurs after you delete a port group.. Solution Do not set an iBFT gateway unless it is required. If the gateway is required, after installation, manually set the system's default gateway to the one that the management network uses.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-B28F7443-B456-4517-8D86-83AB14A36F20.html
2018-05-20T17:29:51
CC-MAIN-2018-22
1526794863662.15
[]
docs.vmware.com
Deploy applications with System Center. Important You can simulate the deployment of required applications, but not packages or software updates. MDM-enrolled devices don't support simulated deployments, user experience, or scheduling settings. Deploy an application In the Configuration Manager console, go to Software Library > Application Management > Applications. In the Applications list, select the application that you want to deploy. Then, on the Home tab, in the Deployment group, click Deploy. Specify general information about the deployment have not associated the selected collection with a distribution point group, this option is grayed out. Automatically distribute content for dependencies: If any of the deployment types in the application contain dependencies, then the site also sends dependent application content to distribution points. Important If you update the dependent application after deploying the primary application, the site doesn't automatically distribute any new content for the dependency. Specify content options for the deployment On the Content page, click Add to add the content associated with this deployment to distribution points or distribution point groups. If you select Use default distribution points associated to this collection on the General page, then this option is automatically populated. Only a member of the Application Administrator security role can modify it. Specify deployment settings On the Deployment Settings page of the Deploy Software wizard, specify the following information: Action: From the drop-down list, choose whether this deployment is to Install or Uninstall the application. Note If an application is deployed twice to a device, once with an action of Install and once with an action of Uninstall, the application deployment with an action of Install takes priority. You cannot change the action of a deployment after you create it. Purpose: From the drop-down list, choose one of the following options: - Available: If the application is deployed to a user, the user sees the published application in Software Center and can install it on demand. Required: The application is deployed automatically according to the schedule. If the application deployment status is not hidden, anyone using the application can track its deployment status and install the application from Software Center before the deadline. Note When the deployment action is set to Uninstall, the deployment purpose is automatically set to Required. You can't change this behavior. interact with the installation. This option is only available when the deployment has a purpose of Required. Send wake-up packets: If the deployment purpose is Required, a wake-up packet is sent to computers before the client runs the deployment. This packet wakes the computers at the installation deadline time. Before using this option, computers and networks must be configured for Wake On LAN. - Allow clients on a metered Internet connection to download content after the installation deadline, which might incur additional costs: This option is only available for deployments with a purpose of Required. Automatically close any running executables you specified on the install behavior tab of the deployment type properties dialog box: For more information, see check for running executable files before installing an application. Require administrator approval if users request this application: For versions 1710 and prior, the administrator approves any user requests for the application before the user can install it. This option is grayed out when the deployment purpose is Required, or when the application is deployed to a device collection. Note the application is deployed to a device collection. This is an optional feature. For more information, see Enable optional features from updates. If this feature is not enabled, you see the prior experience. Important The Configuration Manager client must be on version 1802 as well. You must also be using the new Software Center. Note View Approval Requests under Application Management in the Software Library workspace of the Configuration Manager console. There. Automatically upgrade any superseded version of this application: The client upgrades any superseded version of the application with the superseding application. Note Starting in version 1802, for either Available or Required install purpose, you can enable or disable this option. Specify scheduling settings for the deployment On the Scheduling page of the Deploy Software wizard, set the time when this application is deployed or available to client devices. The options on this page differ depending on whether the deployment action is set to Available or Required. In some cases, you might want to give users more time to install required application deployments or software updates beyond any deadlines you set up. This behavior is typically required when a computer has been turned off for long time and needs to install many applications. For example, if a user has returned from vacation, they might have to wait for a long time as overdue application deployments are installed. To help solve this problem, you can now define an enforcement grace period by deploying Configuration Manager client settings to a collection. To configure the grace period, take the following actions: - On the Computer Agent page of client settings, configure the new property Grace period for enforcement after deployment deadline (hours) with a value between 1 and 120 hours. - On the Scheduling page of a required application deployment, choose to Delay enforcement of this deployment according to user preferences, up to the grace period defined in client settings. The enforcement grace period applies to all deployments with this option enabled and targeted to devices to which you also deployed the client setting. After the application install deadline is reached, the client installs the application in the first non-business window, which the user configured, up to that grace period. However, the user can still open Software Center and install the application at any time they want. Once the grace period expires, enforcement reverts to normal behavior for overdue deployments. If the application you're deploying supersedes another application, you can set the installation deadline when users receive the new application. Set the Installation Deadline to upgrade users with the superseded application. Specify user experience settings for the deployment On the User Experience page of the Deploy Software wizard, specify information about how users can interact with the application installation. When you deploy applications to write-filter enabled Windows Embedded devices, you can specify to install the application on the temporary overlay and commit changes later. You can also specify to commit the changes at the installation deadline or during a maintenance window. If you commit changes at the installation deadline or during a maintenance window, you must restart the device. The changes persist on the device. Note When you deploy an application to a Windows Embedded device, make sure it's a member of a collection with a maintenance window. For more information about maintenance windows and Windows Embedded devices, see Create Windows Embedded applications. The options Software Installation and System restart (if required to complete the installation) are not used if the deployment purpose is set to Available. You can also configure the level of notification a user sees when the application is installed. Specify alert options for the deployment On the Alerts page of the Deploy Software wizard, set up how Configuration Manager and System Center Operations Manager generate alerts for this deployment. You can configure thresholds for reporting alerts and turn off reporting for the duration of the deployment. Associate the deployment with an iOS app configuration policy On the App Configuration Policies page, click New to associate this deployment with an iOS app configuration policy (if you have created one). For more information about this type of policy, see Configure iOS apps with app configuration policies. Deployment properties Find the new deployment in the Deployments node of the Monitoring workspace. You can edit the properties of this deployment or delete the deployment from the Deployments tab of the application detail pane. Delete an application deployment - In the Configuration Manager console, go to Software Library > Application Management > Applications. - In the Applications list, select the application that includes the deployment you delete. - In the Deployments tab of the <application name> list, select the application deployment to delete. Then on the Deployment tab, in the Deployment group, click Delete. When you delete an application deployment, any instances of the application that have already been installed aren't removed. To remove these applications, you must deploy the application to computers with Uninstall. If you delete an application deployment, or remove a resource from the collection you're deploying to, the application is no longer visible in Software Center. User notifications for required deployments When you receive required software from the Snooze and remind me setting, you can select from the following drop-down list of<< How to check for running executable files before installing an application In the Properties dialog box of a deployment type, on the Install Behavior tab, specify one or more executable files. If one of these executable files is running on the client, it blocks the installation of the deployment type. The user must close the running executable file before the client can install the deployment type. For deployments with a purpose of required, the client can automatically close the running executable file. - Open the Properties dialog box for any deployment type. - On the Install Behavior tab of the Properties dialog box, click Add. - In the Add or Edit Executable File dialog box, enter the name of the executable file that, if running, blocks install of the application. Optionally, you can also enter a friendly name for the application to help you identify it in the list. - Click OK, then close the Properties dialog box. - When you deploy the application, on the Deployment Settings page of the Deploy Software Wizard, select Automatically close any running executables you specified on the install behavior tab of the deployment type properties dialog box. After clients receive the deployment, the following behavior applies: If you deployed the application as Available, and an end user sees a dialog box informing them that the specified executable files are automatically closed when the application installation deadline is reached. You can schedule these dialogs in Client Settings > Computer Agent. If you don’t want the end user to see these messages, select Hide in Software Center and all notifications on the User Experience tab of the deployment’s be able to resolve the fully qualified domain name (FQDN) of the HTTPS-enabled management point
https://docs.microsoft.com/en-us/sccm/apps/deploy-use/deploy-applications
2018-05-20T18:18:41
CC-MAIN-2018-22
1526794863662.15
[array(['media/computeragentsettings.png', 'Computer Agent group in default client settings'], dtype=object) array(['media/client-toast-notification.png', 'Required software dialog notifies you of critical software maintenance'], dtype=object) ]
docs.microsoft.com
Footer Footer Widgets Go to Customize -> Styling Options -> Footer to reach footer related settings. Footer Widgets You can add footer widgets via Appearance -> Widgets page. Use the Footer related widget areas (sidebars). 4 tool icons available for the header; of a […] Header Widgets There are four header widget areas (sidebars) available. These widget areas support only the Text, Social Media, Custom Menu, Custom HTML widgets. Usage of other widgets may break the header design but it won’t be a big problem if you know what are you doing. […]
https://docs.rtthemes.com/document-category/naturalife/
2021-06-13T02:41:36
CC-MAIN-2021-25
1623487598213.5
[]
docs.rtthemes.com
app-automation_catalog Change Logs -. Bug Fixes -. Chores -..0.
https://docs.itential.io/changelog/app-automation_catalog/
2021-06-13T02:24:37
CC-MAIN-2021-25
1623487598213.5
[]
docs.itential.io
The New York release is no longer supported. As such, the product documentation and release notes are provided for informational purposes only, and will not be updated. View choice list definitions The Choice Set [sys_choice_set] table contains a record for every field that uses a choice list. Before you beginRole required: personalize_choicesNote: The personalize_choices role must be explicitly granted to the user; it cannot be an ACL. About this task The choice set record is associated with an application file, which allows update sets and team development to track and transfer all choices for a field in a single update record. Choice list values allow a maximum length of 40 characters. The range of allowable numerical values is [-999, 999]. Procedure Right-click the choice list field label and select Show Choice List. To view other choice list values, modify the filter at the top of the list.Note: When you use an ACL to grant personalize_choices on a particular field, Show Choice List is not available. It is only available if you explicitly grant the role to the user. Configure Choices continues to appear regardless of whether it is an ACL or an explicitly granted user role. Review the items in the list. Warning: Do not add new choices to the list. To add new choices to a choice list field, use the Configure Choices option. Define an option for a choice list You can personalize the options that are available You can change the default display label of the None option for a choice field. Before you beginRole required: personalize_choicesNote: The personalize_choices role must be explicitly granted to the user; it cannot be an ACL., string, or reference field. Before you beginRole required: personalize_dictionary About this task You can use this configuration to standardize data entry and limit available options for a field while still maintaining the original field type. Procedure Navigate to System Definition > Dictionary. Open the dictionary entry for the field. Note: Reference fields with a large number of records in the reference table cannot be converted to look like choice fields. A reference field with too many records reverts to looking like a reference
https://docs.servicenow.com/bundle/newyork-platform-administration/page/administer/field-administration/task/t_ViewChoiceListDefinitions.html
2021-06-13T03:11:29
CC-MAIN-2021-25
1623487598213.5
[]
docs.servicenow.com
This guide demonstrates the power of ThoughtSpot to solve real solutions we developed for our clients. The purpose of this section is to guide you through a few solutions we created for our clients, so you can leverage our experience to quickly and confidently employ ThoughtSport in meeting your own business objectives. Each topic and scenario includes a real-world data modeling problem, and how we solved it with ThoughtSpot technology.
https://docs.thoughtspot.com/6.3/reference/practice/intro.html
2021-06-13T01:36:25
CC-MAIN-2021-25
1623487598213.5
[]
docs.thoughtspot.com
Contents This document describes how N+1 redundancy is achieved for instanes using the plain disk template. Ganeti has long considered N+1 redundancy for DRBD, making sure that on the secondary nodes enough memory is reserved to host the instances, should one node fail. Recently, htools have been extended to also take N+1 redundancy for shared storage into account. For plain instances, there is no direct notion of redundancy: if the node the instance is running on dies, the instance is lost. However, if the instance can be reinstalled (e.g, because it is providing a stateless service), it does make sense to ask if the remaining nodes have enough free capacity for the instances to be recreated. This form of capacity planning is currently not addressed by current Ganeti. The basic considerations follow those of N+1 redundancy for shared storage. Also, the changes to the tools follow the same pattern. The changes to the exisiting tools are literally the same as for N+1 redundancy for shared storage with the above definition of N+1 redundancy substituted in for that of redundancy for shared storage. In particular, gnt-cluster verify will not be changed and hbal will use N+1 redundancy as a final filter step to disallow moves that lead from a redundant to a non-redundant situation.
https://docs.ganeti.org/docs/ganeti/2.17/html/design-plain-redundancy.html
2021-06-13T02:47:30
CC-MAIN-2021-25
1623487598213.5
[]
docs.ganeti.org
Get metrics in from other sources If you are gathering metrics from a source that is not natively supported, you can still add this metrics data to a metrics index. Get metrics in from files in CSV format There are two accepted formats for CSV files when you use them as inputs for metrics data. The format you use depends on how you want the Splunk software to index the information in the CSV file. Should it index so that each data point has multiple measurements, or so that each data point has only one measurement? It is more efficient to use metric data points that can contain multiple measurements. When you index metrics data this way you reduce your data storage costs and can benefit from improved search performance. Set metrics CSV source types and data inputs If your metrics data is in CSV format, use the metrics_csv pre-trained source type. It can handle both CSV metrics formats. Create a data input to add your CSV data to a metrics index. The input uses the pretrained metrics_csv source type. The data input should have: - Source type: Metrics > metrics_csv - Index: a metrics index After you set up your metrics_csv input, you should have the following inputs.conf configuration on your universal forwarder. It monitors the CSV data and sends it to the metrics indexer. #inputs.conf [monitor:///opt/metrics_data] index = metrics sourcetype = metrics_csv You should also See Monitor files and directories in the Getting Data In manual, and Create metrics indexes in the Managing Indexers and Clusters of Indexers manual. Format a CSV file for multiple-measurement metric data points When you format a CSV file for multiple-measurement metric data points, the first column header is _time, the metric timestamp. It is a required field. This is followed by one or more column headers for each metric measurement. Each measurement column header follows this syntax: metric_name:<metric_name>. The Splunk software considers additional columns that are not a timestamp or a measurement to be dimensions. Each row of the CSV table is a separate metric data point. Here is an example of a CSV file that is formatted for multiple-measurement metric data points. The first column is _time, the metric timestamp. The middle three columns are measurements. The last two columns are dimensions. "_time","metric_name:cpu.usr","metric_name:cpu.sys","metric_name:cpu.idle","dc","host" "1562020701",11.12,12.23,13.34,"east","east.splunk.com" "1562020702",21.12,22.33,23.34,"west","west.splunk.com" This CSV file example contains the same information as the example CSV file for single-measurement metric data points in the following section. However, because it uses two data points for this information instead of six, it will take up less space on disk when it is indexed. Format a CSV file for single-measurement metric data points When you format a CSV file for single-measurement metric data points, the first three columns are fields that are required for single-measurement metric data points: metric_timestamp metric_name _value. All additional columns are considered to be dimensions. During the ingestion and indexing process the metric_name and _value measurements will merged into the metric_name:<metric_name>=<numeric_value> format. Here is an example of a CSV file that is formatted for single-measurement metric data points. The first three columns of the table are the fields that are required for single-measurement metric data points. All additional columns are dimensions. This CSV file has dc and host as dimensions. "metric_timestamp","metric_name","_value","dc","host" "1562020701","cpu.usr",11.12,"east","east.splunk.com" "1562020701","cpu.sys",12.23,"east","east.splunk.com" "1562020701","cpu.idle",13.34,"east","east.splunk.com" "1562020702","cpu.usr",21.12,"west","west.splunk.com" "1562020702","cpu.sys",22.33,"west","west.splunk.com" "1562020702","cpu.idle",23.34,"west","west.splunk.com" If you compare this example to the example for multiple-measurement metric data points, you can see how the single-metric format would take up more space on disk. This table contains the same information as the multiple-measurement table. However, this table uses six data points where the multiple-measurement table only uses two. Set up and use HTTP Event Collector in Splunk Web in Getting Data In.field set to metric. For more information about HEC, see Set up and use HTTP Event Collector in Splunk Web and Format events for HTTP Event Collector in Getting Data In. For the /collector endpoint reference, see /collector in the REST API Reference Manual. Example of sending metrics using HEC The following example shows a command that sends a metric data point_1.splunk.com","fields":{"region":"us-west-1","datacenter":"dc}}' The measurements for this metric data point appear at the end of the JSON blob. They follow a multiple-metric format that uses the "metric_name:<metric_name>":<numeric_value> syntax. The multiple-metric JSON format Versions of the Splunk platform previous to 8.0.0 used a JSON format that only supported one metric measurement per JSON object. This resulted in metric data points that could only contain one measurement at a time. Version 8.0.0 of the Splunk platform supports a JSON format which allows each JSON object to contain measurements for multiple metrics. These JSON objects generate multiple-measurement metric data points. Multiple-measurement metric data points take up less space on disk and can improve search performance. Here is an example of a JSON object in the multiple-metric format. { "time": 1486683865, "event": "metric", "source": "metrics", "sourcetype": "perflog", "host": "host_1.splunk.com", "fields": { "region": "us-west-1", "datacenter": "dc!
https://docs.splunk.com/Documentation/Splunk/8.0.8/Metrics/GetMetricsInOther
2021-06-13T02:56:15
CC-MAIN-2021-25
1623487598213.5
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Splunk Cloud Quick Start This topic shows you the basic steps required to start using your Splunk Cloud deployment, and provides a simple quick start tutorial to help you get up and running quickly. To get started with your Splunk Cloud deployment, follow these high-level steps: - Get data in - Search and manage your data Log in to Splunk Cloud To log in to your Splunk Cloud deployment, you must use the dedicated Splunk Cloud URL and log in credentials provided to you in the "Welcome to Splunk Cloud" email you received when you opened your account. Get data into Splunk Cloud To get data into Splunk Cloud, the most common approach is to install the Splunk Universal Forwarder on the machines where your source data resides, and configure them to send data to Splunk Cloud. You can also upload files, or monitor files and inputs. For more information on the options available for getting data into Splunk Cloud, see Introduction to getting data in. Search and manage your data After you get your data into Splunk Cloud, you can search the data to create reports, display the results using dashboards and visualizations, and set alerts that trigger when specific conditions are met. For detailed information, see the following manuals. Quick start tutorial If you are new to Splunk Cloud and want to get started quickly, follow the steps in this brief tutorial to get some data into your Splunk Cloud deployment and start searching it. What you need - Your Splunk Cloud URL and log in credentials. See Log in to Splunk Cloud. - A standard log file to use as sample data for this exercise, such as a /var/log/messagesfile on a Unix machine, or a text file in C:\Windows\System32\LogFileson a Windows computer. Step 1. Log in to Splunk Cloud To log in to Splunk Cloud: - In your web browser, navigate to your Splunk Cloud URL. For example, - Enter the credentials provided to you when you opened your account. The Splunk Web UI appears. You can now interact with your Splunk Cloud.. (optional) Forward data To feed data continually to your Splunk Cloud deployment, you can install and configure the Splunk universal forwarder on the machine where the data resides. For information on how to install and configure forwarders, see the following platform-specific documentation: - more information,, 8.2.2105 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/SplunkCloudQuickstart
2021-06-13T03:39:35
CC-MAIN-2021-25
1623487598213.5
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Ganeti currently has no support for Out of Band management of the nodes in a cluster. It relies on the OS running on the nodes and has therefore limited possibilities when the OS is not responding. The command gnt-node powercycle can be issued to attempt a reboot of a node that crashed but there are no means to power a node off and power it back on. Supporting this is very handy in the following situations: - Emergency Power Off: During emergencies, time is critical and manual tasks just add latency which can be avoided through automation. If a server room overheats, halting the OS on the nodes is not enough. The nodes need to be powered off cleanly to prevent damage to equipment. - Repairs: In most cases, repairing a node means that the node has to be powered off. - Crashes: Software bugs may crash a node. Having an OS independent way to power-cycle a node helps to recover the node without human intervention. Ganeti will be extended with OOB capabilities through adding a new Cluster Parameter (--oob-program), a new Node Property (--oob-program), a new Node State (powered) and support in gnt-node for invoking an External Helper Command which executes the actual OOB command (gnt-node <command> nodename ...). The supported commands are: power on, power off, power cycle, power status and health. This is a convenience command to allow easy emergency power off of a whole cluster or part of it. It takes care of all steps needed to get the cluster into a sane state to turn off the nodes. With --on it does the reverse and tries to bring the rest of the cluster back to life. Note The master node is not able to shut itself cleanly down. Therefore, this command will not do all the work on single node clusters. On multi node clusters the command tries to find another master or if that is not possible prepares everything to the point where the user has to shutdown the master node itself alone this applies also to the single node cluster configuration. Note If --oob-program is set to ! then the node has no OOB capabilities. Otherwise, we will inherit the node group respectively the cluster wide value. I.e. the nodes have to opt out from OOB capabilities. - existence and execution flag of OOB program on all Master Candidates if the cluster parameter --oob-program is set or at least one node has the property --oob-program set. The OOB helper is just invoked on the master - check if node state powered matches actual power state of the machine for those nodes where --oob-program is set Ganeti supports the following two boolean states related to the nodes: And will extend this list with the following boolean state: Additionally modify the meaning of the offline state as follows: The corresponding command extensions are: Additional Output (SoR, ommited if node property --oob-program is not set): powered: [True|False] - If no nodenames are passed to power [on|off|cycle], the user will be prompted with "Do you really want to power [on|off|cycle] the following nodes: <display list of OOB capable nodes in the cluster)? (y/n)" - For power-status, nodename is optional, if omitted, we list the power-status of all OOB capable nodes in the cluster (SoW) - User should be warned and needs to confirm with yes if s/he tries to power [off|cycle] a node with running instances. Note Example output (represents SoW): gnt-node oob power-status Node Power Status node1.example.com on node2.example.com off node3.example.com on node4.example.com unknown Note Example output (represents SoR): gnt-node info node1.example.com Node name: node1.example.com primary ip: 192.168.1.1 secondary ip: 192.168.2.1 master candidate: True drained: False offline: False powered: True primary for instances: - inst1.example.com - inst2.example.com - inst3.example.com secondary for instances: - inst4.example.com - inst5.example.com - inst6.example.com - inst7.example.com Note Only nodes which are not opted out from OOB management will report the powered state. Caveats: - If no nodename(s) are provided, we will report the health of all nodes in the cluster which have --oob-program set. - Only nodes which are not opted out from OOB management will report their health. Invoking the command on a node that does not meet this condition will result in an error message “Node does not support OOB commands”. For error handling see Error Handling Error messages are passed from the helper program to Ganeti through stderr(3) (return code == 1). On stdout(3), the helper program will send data back to Ganeti (return code == 0). The format of the data is JSON. For the health output, the fields are: Note
https://docs.ganeti.org/docs/ganeti/2.17/html/design-oob.html
2021-06-13T02:28:14
CC-MAIN-2021-25
1623487598213.5
[]
docs.ganeti.org
DisableSerialConsoleAccess Disables access to the EC2 serial console of all instances for your account. By default, access to the EC2 serial console is disabled for your account. For more information, see Manage account access to the EC2 serial console in the Amazon EC2 - serialConsoleAccessEnabled If true, access to the EC2 serial console of all instances is enabled for your account. If false, access to the EC2 serial console of all instances is disabled for your account. Type: Boolean Errors For information about the errors that are common to all actions, see Common client error codes. See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DisableSerialConsoleAccess.html
2021-06-13T01:49:18
CC-MAIN-2021-25
1623487598213.5
[]
docs.aws.amazon.com
Getting started with AWS Control Tower This getting started procedure is for AWS Control Tower central cloud administrators. Use this procedure when you're ready to set up your landing zone. From start to finish, it should take about half an hour. This procedure has a prerequisite and four steps. Prerequisite: Automated pre-launch checks for your management account Before AWS Control Tower sets up the landing zone, it automatically runs a series of pre-launch checks in your account. There's no action required on your part for these checks, which ensure that your management account is ready for the changes that establish your landing zone. Here are the checks that AWS Control Tower runs before setting up a landing zone: The existing service limits for the AWS account must be sufficient for AWS Control Tower to launch. For more information, see Limitations and quotas in AWS Control Tower. The AWS account must be subscribed to the following AWS services: Amazon Simple Storage Service (Amazon S3) Amazon Elastic Compute Cloud (Amazon EC2) Amazon SNS Amazon Virtual Private Cloud (Amazon VPC) AWS CloudFormation AWS CloudTrail Amazon CloudWatch AWS Config AWS Identity and Access Management (IAM) AWS Lambda Note By default, all accounts are subscribed to these services. Considerations for AWS Single Sign-On (SSO) customers If AWS Single Sign-On (AWS SSO) is already set up, the AWS Control Tower home Region must be the same as the AWS SSO Region. AWS Control Tower does not manage the SSO directory if SSO has been set up with an external identity provider. AWS SSO can be installed only in the management account of an organization. Considerations for AWS Config and AWS CloudTrail customers The AWS account cannot have trusted access enabled in the organization management account for either AWS Config or AWS CloudTrail. If you have an existing AWS Config Recorder, delivery channel or aggregation setup, you must remove these configurations so that AWS Control Tower can configure AWS Config on your behalf during landing zone launch. If you used AWS CloudFormation to create these AWS Config resources, ensure that you also use CloudFormation to remove the resources. If you are running ephemeral workloads from accounts in AWS Control Tower, you will see an increase in costs associated with AWS Config. Contact your AWS account representative for more specific information about managing these costs. When you enroll an account into AWS Control Tower, your account is governed by the AWS CloudTrail trail for the AWS Control Tower organization. If you have an existing deployment of a CloudTrail trail, you may see duplicate charges unless you delete the existing trail for the account before you enroll it in AWS Control Tower. Requirements for your shared account email addresses If you're setting up your landing zone in a new AWS account, for information on creating your account and your IAM administrator, see Setting up. To set up your landing zone, AWS Control Tower requires two unique email addresses that aren't already associated with an AWS account. Each of these email addresses will serve as a collaborative inbox -- a shared email account -- intended for the various users in your enterprise that will do specific work related to AWS Control Tower. The email addresses are required for: Audit account – This account is for your team of users that need access to the audit information made available by AWS Control Tower. You can also use this account as the access point for third-party tools that will perform programmatic auditing of your environment to help you audit for compliance purposes. Log archive account – This account is for your team of users that need access to all the logging information for all of your enrolled accounts within registered OUs in your landing zone. These accounts are created in the Security OU when you create your landing zone. As a best practice, we recommend that when you need to perform some action in these accounts, you should use an AWS SSO user with the appropriately scoped permissions. For the sake of clarity, this User Guide always refers to the shared accounts by their default names: log archive and audit. As you read this document, remember to substitute the customized names you give to these accounts initially, if you choose to customize them. You can view your accounts with their customized names on the Account details page. We are changing our terminology regarding the default names of some AWS Control Tower organizational units (OUs) to align with the AWS multi-account strategy. You may notice some inconsistencies while we are making a transition to improve the clarity of these names. The Security OU was formerly called the Core OU. The Sandbox OU was formerly called the Custom OU. Expectations for landing zone configuration The process of setting up your AWS Control Tower landing zone has multiple steps. Certain aspects of your AWS Control Tower landing zone are configurable. Other choices are "one-way doors" that cannot be changed after setup. Key items to configure during setup You can select your top-level OU names during setup, and you also can change OU names after you've set up your landing zone. By default, the top-level OUs are named Security and Sandbox. For more information, see Guidelines to set up a well-architected environment. During setup, you can select customized names for your shared accounts, called log archive and audit by default, but you cannot change these names after setup. (This is a one-time selection.) Configuration choices that cannot be undone You cannot change your home Region after you've set up your landing zone. After you select any Region for governance by AWS Control Tower, you cannot unselect the Region to remove it from governance. If you're provisioning Account Factory accounts with VPCs, VPC CIDRs can't be changed after they are created. Configure and launch your landing zone Before you launch your AWS Control Tower landing zone, determine the most appropriate home Region. For more information, see Administrative Tips for Landing Zone Setup. Changing your home Region after you have deployed your AWS Control Tower landing zone requires the assistance of AWS Support. This practice is not recommended. AWS Control Tower has no APIs or programmatic access. To configure and launch your landing zone, perform the following series of steps. Prepare: Navigate to the AWS Control Tower console Open a web browser, and navigate to the AWS Control Tower console at . In the console, verify that you are working in your desired home Region for AWS Control Tower. Then choose Set up your landing zone. Step 1. Review pricing and select your AWS Regions. Be sure you've correctly designated the AWS Region that you select for your home Region. After you've deployed AWS Control Tower, you can't change the home Region. In this section of the setup process, you can add any additional AWS Regions that you require. You can add more Regions at a later time, if needed. After you add a Region into governance by AWS Control Tower, you cannot remove it from governance. To select additional AWS Regions to govern The panel shows you the current Region selections. Open the dropdown menu to see a list of additional Regions available for governance. Check the box next to each Region to bring into governance by AWS Control Tower. Your home Region selection is not editable. Step 2. Configure your organizational units (OUs). If you accept the default names of these OUs, there's no action you need to take for setup to continue. To change the names of the OUs, enter the new names directly in the form field. Foundational OU – AWS Control Tower relies upon a Foundational OU that is initially named the Security OU. You can change the name of this OU during initial setup and afterward, from the OU details page. This Security OU contains your two shared accounts, which by default are called the log archive account and the audit account. Additional OU – AWS Control Tower can set up one or more Additional OUs for you. We recommend that you provision at least one Additional OU in your landing zone, besides the Security OU. If this Additional OU is intended for development projects, we recommend that you name it the Sandbox OU, as given in the Guidelines to set up a well-architected environment. If you already have an existing OU in AWS Organizations, you may see the option to skip setting up an Additional OU in AWS Control Tower. Step 3. Configure your shared accounts. In this section of the setup process, the panel shows the default selections for the names of your shared AWS Control Tower accounts. These accounts are an essential part of your landing zone. Do not move or delete these shared accounts, although you can choose customized names for them during setup. You must provide unique email addresses for your log archive and audit accounts, and you can verify the email address that you previously provided for your management account. Choose the Edit button to change the editable default values. About the shared acccounts The management account – The AWS Control Tower management account is part of the Root level. The management account allows for AWS Control Tower billing. The account also has administrator permissions for your landing zone. You cannot create separate accounts for billing and for administrator permissions in AWS Control Tower. The email address shown for the management account is not editable during this phaase of setup. It is shown as a confirmation, so you can check that you're editing the correct management account, in case you have multiple accounts. The two shared accounts – You can choose customized names for these two accounts, and you must supply a unique email address for each account. Remember that the email addresses must not already have associated AWS accounts. To configure the shared accounts, fill in the requested information. At the console, select a name for the account initially called the log archive account. Many customers decide to keep the default name for this account. Provide a unique email address for this account. Select a name for the account initially called the audit account. Many customers choose to call it the Security account. Provide a unique email address for this account. Step 4. Review and set up the landing zone The next section in the setup shows you the permissions that AWS Control Tower requires for your landing zone. Choose a checkbox to expand each topic. You'll be asked to agree to these permissions, which may affect multiple accounts, and to agree to the overall Terms of Service. To finalize At the console, review the Service permissions, and when you're ready, choose I understand the permissions AWS Control Tower will use to administer AWS resources and enforce rules on my behalf. To finalize your selections and initialize launch, choose Set up landing zone. This series of steps starts the process of setting up your landing zone, which can take about thirty minutes to complete. During setup, AWS Control Tower creates your Root level, the Security OU, and the shared accounts. Other AWS resources are created, modified, or deleted. The email address you provided for the audit account will receive AWS Notification - Subscription Confirmation emails from every AWS Region supported by AWS Control Tower. To receive compliance emails in your audit account, you must choose the Confirm subscription link within each email from each AWS Region supported by AWS Control Tower. Next steps Now that your landing zone is set up, it's ready for use. To learn more about how you can use AWS Control Tower, see the following topics: For recommended administrative practices, see Best Practices. You can set up AWS SSO users and groups with specific roles and permissions. For recommendations, see Recommendations for Setting Up Groups, Roles, and Policies. To begin enrolling organizations and accounts from your AWS Organizations deployments, see Govern existing organizations and accounts. Your end users can provision their own AWS accounts in your landing zone using Account Factory. For more information, see Permissions for Configuring and Provisioning Accounts. To assure Compliance Validation for AWS Control Tower, your central cloud administrators can review log archives in the Log Archive account, and designated third-party auditors can review audit information in the Audit (shared) account, which is a member of the Security OU. To learn more about the capabilities of AWS Control Tower, see Related information. From time to time, you may need to update your landing zone to get the latest backend updates, the latest guardrails, and to keep your landing zone up-to-date. For more information, see Configuration update management in AWS Control Tower. If you encounter issues while using AWS Control Tower, see Troubleshooting.
https://docs.aws.amazon.com/controltower/latest/userguide/getting-started-with-control-tower.html
2021-06-13T03:39:49
CC-MAIN-2021-25
1623487598213.5
[]
docs.aws.amazon.com
A group rule is a criterion that you define to enable storage objects (volumes, clusters, or SVMs) to be included in a specific group. You can use condition groups or conditions for defining group rule for a group. You can create multiple condition groups, and each condition group can have one or more conditions. You can apply all the defined condition groups in a group rule for groups in order to specify which storage objects are included in the group. Conditions within a condition group are executed using logical AND. All the conditions in a condition group must be met. When you create or modify a group rule, a condition is created that applies, selects, and groups only those storage objects that satisfy all conditions in the condition group. You can use multiple conditions within a condition group when you want to narrow the scope of which storage objects to include in a group.. The list of operands in Unified Manager changes based on the selected object type. The list includes the object name, owning cluster name, owning SVM name, and annotations that you define in Unified Manager. The list of operators changes based on the selected operand for a condition. The operators supported in Unified Manager are Is and Contains. When you select the Is operator, the condition is evaluated for exact match of operand value to the value provided for the selected operand. Containsoperator, the condition is evaluated to meet one of the following criteria: The value field changes based on the operand selected. Consider a condition group for a volume with the following two conditions: vol data_svm This condition group selects all volumes that include vol in their names and that are hosted on SVMs with the name data_svm. Condition groups are executed using logical OR, and then applied to storage objects. The storage objects must satisfy one of the condition groups to be included in a group. The storage objects of all the condition groups are combined. You can use condition groups to increase the scope of storage objects to include in a group. Consider two condition groups for a volume, with each group containing the following two conditions: vol data_svm Condition group 1 selects all volumes that include vol in their names and that are hosted on SVMs with the name data_svm. vol critical Condition group 2 selects all volumes that include vol in their names and that are annotated with the data-priority annotation value as critical. When a group rule containing these two condition groups is applied on storage objects, then the following storage objects are added to a selected group: volin their names and that are hosted on the SVM with the name data_svm. volin their names and that are annotated with the data-priority annotation value critical.
https://docs.netapp.com/ocum-99/topic/com.netapp.doc.onc-um-ag/GUID-B6C48629-45A0-4E98-9035-BF283B6C1F04.html?lang=en
2021-06-13T02:01:44
CC-MAIN-2021-25
1623487598213.5
[]
docs.netapp.com
"Looks like your organization isn't supported on this version of Teams." Do you have any idea what is causing this message? "Looks like your organization isn't supported on this version of Teams." Do you have any idea what is causing this message? Hi Thomas! To my knowledge, there are certain types of organizations aren't eligible for Microsoft Teams free. If you're signed in to a Microsoft 365 work or school account, you won't be able to sign up for Teams free. Accounts at schools or other academic institutions aren't eligible for this Teams free offer. Account at U.S. government institutions aren't eligible for this Teams free offer. For more details about it, please refer to: Hi Jimmy, Thanks for our answer! Our company has never had an Office 365 account before (according to the information I have) . And we are neither a school, nor a government organization, just a simple company. I've been asked (as IT admin) to test Teams in a free version, to a possible future implementation. Is it possible that someone in the past, activated Office 365 with our domain? How can I verify/reset that? Thanks for help. Regards, Thomas Hi Tomas! Indeed, it is possible that someone activated Office 365 with your domain. In my experience, you can add or remove domains if you are a Global Administrator of a business or enterprise plan. If someone add your company domain in Office 365, you need to contact the administrator to remove it. For more details about how to add and remove domains in Office 365, please refer to: 3 people are following this question.
https://docs.microsoft.com/en-us/answers/questions/26871/i-would-like-to-test-teams-free-version-in-my-orga.html
2021-06-13T03:49:36
CC-MAIN-2021-25
1623487598213.5
[]
docs.microsoft.com
Where to find additional information Contributors Download PDF of this page To learn more about the information that is described in this document, see the following documents and websites: NVA- 1131-DEPLOY: FlexPod Express with VMware vSphere 6.7UI and NetApp AFF A220 with Direct Attached IP-Based Storage NVA Deploy AFF and FAS Systems Documentation Center ONTAP 9 Documentation Center NetApp Product Documentation
https://docs.netapp.com/us-en/flexpod/express-direct-attach-design/express-direct-attach-design_where_to_find_additional_information.html
2021-06-13T01:57:22
CC-MAIN-2021-25
1623487598213.5
[]
docs.netapp.com
Information provided above refers to the core components of the Open Systems Pharmacology Suite including PK-Sim®, MoBi®. Both, PK-Sim® and MoBi® can be installed as stand-alone software packages to reduce the disk space required. The Open Systems Pharmacology Suite includes interfaces to MS Excel® and R (version 3.5 or 3.6, 64 bit). These are separate programs that are not available within the Open Systems Pharmacology Suite. You need to have these programs installed in order to use their interfaces! Excel® is a registered trademark of Microsoft Inc., Redmond, USA; R is a product of the R Foundation for Statistical Computing, Vienna, Austria. To correctly install the software, administrator rights are necessary. If you do not have these rights, your IT-administrator should carry out the installation. The modular structure of the Open Systems Pharmacology Suite is explained in Modules, Philosophy, and Building Blocks. Both PK-Sim® and MoBi® can be installed stand-alone. However, to obtain full modeling and simulation capabilities, we recommend that both programs are installed. To install the Open Systems Pharmacology Suite core components: Download installation packages from Start the OSPSuite-Full.X.Y.Z.exe* (where X.Y.Z is a program version, e.g. 7.4.0) from the menu Start -> Run or from Windows Explorer. Follow the instructions of the installation program. In most cases, the installation should be carried out with the default settings. In most cases, you will have to restart your computer following installation. Download PK-Sim® gene expression databases and copy it to a folder accessible for all users. Configure PK-Sim® gene expression databases (for details see PK-Sim® - Options). (Re-)Qualification Framework Optional OSP Suite components which are only required for the creation of qualification reports. Installation instructions are provided here Besides the core components of the Open Systems Pharmacology Suite including PK-Sim®, MoBi®, interfaces are available for MS Excel®, Matlab® and R. For purchasing and installation options, please contact the suppliers indicated in section, “Trademark information”. Additional information on the software is available on. For support, bug reports, etc. please contact.
https://docs.open-systems-pharmacology.org/open-systems-pharmacology-suite/getting-started
2021-06-13T02:36:29
CC-MAIN-2021-25
1623487598213.5
[]
docs.open-systems-pharmacology.org
Sometimes you may find yourself needing to hold down a specific key for a long period of time. Key Lock holds down the next key you press for you. Press it again, and it will be released. Let's say you need to type in ALL CAPS for a few sentences. Hit KC_LOCK, and then Shift. Now, Shift will be considered held until you tap it again. You can think of Key Lock as Caps Lock, but supercharged. First, enable Key Lock by setting KEY_LOCK_ENABLE = yes in your rules.mk. Then pick a key in your keymap and assign it the keycode KC_LOCK. Key Lock is only able to hold standard action keys and One Shot modifier keys (for example, if you have your Shift defined as OSM(KC_LSFT)). This does not include any of the QMK special functions (except One Shot modifiers), or shifted versions of keys such as KC_LPRN. If it's in the Basic Keycodes list, it can be held. Switching layers will not cancel the Key Lock.
https://beta.docs.qmk.fm/using-qmk/software-features/feature_key_lock
2021-06-13T02:36:44
CC-MAIN-2021-25
1623487598213.5
[]
beta.docs.qmk.fm
plot_static_mapper_graph¶ gtda.mapper. plot_static_mapper_graph(pipeline, data, layout='kamada_kawai', layout_dim=2, color_variable=None, node_color_statistic=None, color_by_columns_dropdown=False, clone_pipeline=True, n_sig_figs=3, node_scale=12, plotly_params=None)[source]¶ Plot Mapper graphs without interactivity on pipeline parameters. The output graph is a rendition of the igraph.Graphobject computed by calling the fit_transformmethod of the MapperPipelineinstance pipeline on the input data. The graph’s nodes correspond to subsets of elements (rows) in data; these subsets are clusters in larger portions of data called “pullback (cover) sets”, which are computed by means of the pipeline’s “filter function” and “cover” and correspond to the differently-colored portions in this diagram. Two clusters from different pullback cover sets can overlap; if they do, an edge between the corresponding nodes in the graph may be drawn. Nodes are colored according to color_variable and node_color_statistic and are sized according to the number of elements they represent. The hovertext on each node displays, in this order: a globally unique ID for the node, which can be used to retrieve node information from the igraph.Graphobject, see Nerve; the label of the pullback (cover) set which the node’s elements form a cluster in; a label identifying the node as a cluster within that pullback set; the number of elements of data associated with the node; the value of the summary statistic which determines the node’s color. - Parameters pipeline ( MapperPipelineobject) – Mapper pipeline to act onto (None, callable, or ndarray of shape (n_nodes,) or (n_nodes, 1),. If a numpy array, it must have the same length as the number of nodes in the Mapper graph and its values are used directly as node colors (color_variable is ignored). fig – Figure representing the Mapper graph with appropriate node colouring and size. - Return type plotly.graph_objects.Figureobject Examples Setting a colorscale different from the default one: >>> import numpy as np >>> np.random.seed(1) >>> from gtda.mapper import make_mapper_pipeline, plot_static_mapper_graph >>> pipeline = make_mapper_pipeline() >>> data = np.random.random((100, 3)) >>> plotly_params = {"node_trace": {"marker_colorscale": "Blues"}} >>> fig = plot_static_mapper_graph(pipeline, data, ... plotly_params=plotly_params) Inspect the composition of a node with “Node ID” displayed as 0 in the hovertext: >>> graph = pipeline.fit_transform(data) >>> graph.vs[0]["node_elements"] array([70]) References - 1 igraph.Graph.layout documentation.
https://docs-tda.giotto.ai/0.4.0/modules/generated/mapper/visualization/gtda.mapper.plot_static_mapper_graph.html
2021-06-13T01:40:11
CC-MAIN-2021-25
1623487598213.5
[]
docs-tda.giotto.ai
You must monitor the overall storage capacity for your grid to ensure that adequate free space remains for object data and object metadata. Understanding how storage capacity changes over time can help you plan to add Storage Nodes or storage volumes before the grid's usable storage capacity is consumed. You must be signed in to the Grid Manager using a supported browser.
https://docs.netapp.com/sgws-115/topic/com.netapp.doc.sg-troubleshooting/GUID-0C8D411F-A0A0-45B6-A9DA-770A9306FAFE.html?lang=en
2021-06-13T01:52:02
CC-MAIN-2021-25
1623487598213.5
[]
docs.netapp.com
Network features by release 05/17/2021 Download PDF of this page Analyze the impact of network features available with each ONTAP 9 release. Available beginning Feature Description ONTAP 9.9.1 Cluster resiliency The following cluster resiliency and diagnostic improvements improve the customer experience: Port monitoring and avoidance: In two-node switchless cluster configurations, the system avoids ports that experience total packet loss (connectivity loss). Previously this functionality was only available in switched configurations. Automatic node failover: If a node cannot serve data across its cluster network, that node should not own any disks. Instead its HA partner should take over, if the partner is healthy. Commands to analyze connectivity issues: Use the following command to display which cluster paths are experiencing packet loss: network interface check cluster-connectivity show ONTAP 9.9.1 VIP LIF enhancements The following fields have been added to extend virtual IP (VIP) border gateway protocol (BGP) functionality: -asn or -peer-asn (4-byte value) The attribute itself is not new, but it now uses a 4-byte integer. -med -use-peer-as-next-hop The asn_integer parameter specifies the autonomous system number (ASN) or peer ASN. Starting in ONTAP 9.8, ASN for BGP supports a 2-byte non-negative integer. This is a 16-bit number (0 - 64511 available values). Starting in ONTAP 9.9.1, ASN for BGP supports a 4-byte non-negative integer (65536 - 4294967295). The default ASN is 65501. ASN 23456 is reserved for ONTAP session establishment with peers that do not announce 4-byte ASN capability. You can make advanced route selections with Multi-Exit Discriminator (MED) support for path prioritization. MED is an optional attribute in the BGP update message that tells routers to select the best route for the traffic. The MED is an unsigned 32-bit integer (0 - 4294967295); lower values are preferred. VIP BGP provides default route automation using BGP peer grouping to simplify configuration. ONTAP has a simple way to learn default routes using the BGP peers as next-hop routers when the BGP peer is on the same subnet. To use the feature, set the -use-peer-as-next-hop attribute to true. By default, this attribute is false. Configure virtual IP (VIP) LIFs ONTAP 9.8 Auto port placement ONTAP can automatically configure broadcast domains, select ports, and help configure network interfaces (LIFs), virtual LANs (VLANs), and link aggregation groups (LAGs) based on reachability and network topology detection. When you first create a cluster, ONTAP automatically discovers the networks connected to ports and configures the needed broadcast domains based on layer 2 reachability. You no longer have to configure broadcast domains manually. A new cluster will continue to be created with two IPspaces: Cluster IPspace: Containing one broadcast domain for the cluster interconnect. You should never touch this configuration. Default IPspace: Containing one or more broadcast domains for the remaining ports. Depending on your network topology, ONTAP configures additional broadcast domains as needed: Default-1, Default-2, and so on. You can rename these broadcast domains if desired, but do not modify which ports are configured in these broadcast domains. When you configure network interfaces, the home port selection is optional. If you do not manually select a home port, ONTAP will attempt to assign an appropriate home port in the same broadcast domain as other network interfaces in the same subnet. When creating a VLAN or adding the first port to a newly created LAG, ONTAP will attempt to automatically assign the VLAN or LAG to the appropriate broadcast domain based on its layer 2 reachability. By automatically configuring broadcast domains and ports, ONTAP helps to ensure that clients maintain access to their data during failover to another port or node in the cluster. Finally, ONTAP sends EMS messages when it detects that the port reachability is incorrect and provides the "network port reachability repair" command to automatically repair common misconfigurations. ONTAP 9.8 Internet Protocol security (IPsec) over wire encryption To ensure data is continuously secure and encrypted, even while in transit, ONTAP uses the IPsec protocol in transport mode. IPsec offers data encryption for all IP traffic including the NFS, iSCSI, and SMB/CIFS protocols. IPsec provides the only encryption in flight option for iSCSI traffic. Once IPsec is configured, network traffic between the client and ONTAP is protected with preventive measures to combat replay and man-in-the-middle (MITM) attacks. Configure IP security (IPsec) over wire encryption ONTAP 9.8 Virtual IP (VIP) expansion New fields have been added to the network bgp peer-group command. This expansion allows you to configure two additional Border Gateway Protocol (BGP) attributes for Virtual IP (VIP). AS path prepend: Other factors being equal, BGP prefers to select the route with shortest AS (autonomous system) Path. You can use the optional AS path prepend attribute to repeat an autonomous system number (ASN), which increases the length of the AS path attribute. The route update with the shortest AS path will be selected by the receiver. BGP community: The BGP community attribute is a 32-bit tag that can be assigned to the route updates. Each route update can have one or more BGP community tags. The neighbors receiving the prefix can examine the community value and take actions like filtering or applying specific routing policies for redistribution. ONTAP 9.8 Switch CLI simplification To simplify switch commands, the cluster and storage switch CLIs are consolidated. The consolidated switch CLIs include Ethernet switches, FC switches, and ATTO protocol bridges. Instead of using separate "system cluster-switch" and "system storage-switch" commands, you now use "system switch". For the ATTO protocol bridge, instead of using "storage bridge", use "system bridge". Switch health monitoring has similarly expanded to monitor the storage switches as well as the cluster interconnect switch. You can view health information for the cluster interconnect under "cluster_network" in the "client_device" table. You can view health information for a storage switch under "storage_network" in the "client_device" table. ONTAP 9.8 IPv6 variable length The supported IPv6 variable prefix length range has increased from 64 to 1 through 127 bits. A value of bit 128 remains reserved for virtual IP (VIP). When upgrading, non-VIP LIF lengths other than 64 bits are blocked until the last node is updated. When reverting an upgrade, the revert checks any non-VIP LIFs for any prefix other than 64 bits. If found, the check blocks the revert until you delete or modify the offending LIF. VIP LIFs are not checked. ONTAP 9.7 Automatic portmap service The portmap service maps RPC services to the ports on which they listen. The portmap service is always accessible in ONTAP 9.3 and earlier, is configurable in ONTAP 9.4 through ONTAP 9.6, and is managed automatically starting in ONTAP 9.7. In ONTAP 9.3 and earlier: The portmap service (rpcbind) is always accessible on port 111 in network configurations that rely on the built-in ONTAP firewall rather than a third-party firewall. From ONTAP 9.4 through ONTAP 9.6: You can modify firewall policies to control whether the portmap service is accessible on particular LIFs. Starting in ONTAP 9.7: The portmap firewall service is eliminated. Instead, the portmap port is opened automatically for all LIFs that support the NFS service. Portmap service configuration ONTAP 9.7 Cache search You can cache NIS netgroup.byhost entries using the vserver services name-service nis-domain netgroup-database commands. ONTAP 9.6 CUBIC CUBIC is the default TCP congestion control algorithm for ONTAP hardware. CUBIC replaced the ONTAP 9.5 and earlier default TCP congestion control algorithm, NewReno. CUBIC addresses the problems of long, fat networks (LFNs), including high round trip times (RTTs). CUBIC detects and avoids congestion. CUBIC improves performance for most environments. ONTAP 9.6 LIF service policies replace LIF roles You can assign service policies (instead of LIF roles) to LIFs that determine the kind of traffic that is supported for the LIFs. Service policies define a collection of network services supported by a LIF. ONTAP provides a set of built-in service policies that can be associated with a LIF. ONTAP supports service policies starting with ONTAP 9.5; however, service policies can only be used to configure a limited number of services. Starting with ONTAP 9.6, LIF roles are deprecated and service policies are supported for all types of services. LIFs and service policies ONTAP 9.5 NTPv3 support Network Time Protocol (NTP) version 3 includes symmetric authentication using SHA-1 keys, which increases network security. ONTAP 9.5 SSH login security alerts When you log in as a Secure Shell (SSH) admin user, you can view information about previous logins, unsuccessful attempts to log in, and changes to your role and privileges since your last successful login. ONTAP 9.5 LIF service policies You can create new service policies or use a built-in policy. You can assign a service policy to one or more LIFs; thereby allowing the LIF to carry traffic for a single service or a list of services. LIFs and service policies ONTAP 9.5 VIP LIFs and BGP support A VIP data LIF is a LIF that is not part of any subnet and is reachable from all ports that host a border gateway protocol (BGP) LIF in the same IPspace. A VIP data LIF eliminates the dependency of a host on individual network interfaces. Create a virtual IP (VIP) data LIF ONTAP 9.5 Multipath routing Multipath routing provides load balancing by utilizing all the available routes to a destination. Enable multipath routing ONTAP 9.4 Portmap service The portmap service maps remote procedure call (RPC) services to the ports on which they listen. The portmap service is always accessible in ONTAP 9.3 and earlier. Starting in ONTAP 9.4, the portmap service is configurable. You can modify firewall policies to control whether the portmap service is accessible on particular LIFs. Portmap service configuration ONTAP 9.4 SSH MFA for LDAP or NIS SSH multi-factor authentication (MFA) for LDAP or NIS uses a public key and nsswitch to authenticate remote users. ONTAP 9.3 SSH MFA SSH MFA for local administrator accounts use a public key and a password to authenticate local users. ONTAP 9.3 SAML authentication You can use Security Assertion Markup Language (SAML) authentication to configure MFA for web services such as Service Processor Infrastructure (spi), ONTAP APIs, and OnCommand System Manager. ONTAP 9.2 SSH login attempts You can configure the maximum number of unsuccessful SSH login attempts to protect against brute force attacks. ONTAP 9.2 Digital security certificates ONTAP provides enhanced support for digital certificate security with Online Certificate Status Protocol (OCSP) and pre-installed default security certificates. ONTAP 9.2 Fastpath As part of a networking stack update for improved performance and resiliency, fast path routing support was removed in ONTAP 9.2 and later releases because it made it difficult to identify problems with improper routing tables. Therefore, it is no longer possible to set the following option in the nodeshell, and existing fast path configurations are disabled when upgrading to ONTAP 9.2 and later: ip.fastpath.enable Network traffic not sent or sent out of an unexpected interface after upgrade to 9.2 due to elimination of IP Fastpath ONTAP 9.1 Security with SNMPv3 traphosts You can configure SNMPv3 traphosts with the User-based Security Model (USM) security. With this enhancement, SNMPv3 traps can be generated by using a predefined USM user’s authentication and privacy credentials. Configure traphosts to receive SNMP notifications ONTAP 9.0 IPv6 Dynamic DNS (DDNS) name service is available on IPv6 LIFs. Create a LIF ONTAP 9.0 LIFs per node The supported number of LIFs per node has increased for some systems. See the Hardware Universe for the number of LIFs supported on each platform for a specified ONTAP release. Create a LIF NetApp hardware universe ONTAP 9.0 LIF management ONTAP and System Manager automatically detect and isolate network port failures. LIFs are automatically migrated from degraded ports to healthy ports. Monitor the health of network ports ONTAP 9.0 LLDP Link Layer Discovery Protocol (LLDP) provides a vendor-neutral interface for verifying and troubleshooting cabling between an ONTAP system and a switch or router. It is an alternative to Cisco Discovery Protocol (CDP), a proprietary link layer protocol developed by Cisco Systems. Enable or Disable LLDP ONTAP 9.0 UC compliance with DSCP marking Unified Capability (UC) compliance with Differentiated Services Code Point (DSCP) marking. Differentiated Services Code Point (DSCP) marking is a mechanism for classifying and managing network traffic and is a component of Unified Capability (UC) compliance. You can enable DSCP marking on outgoing (egress) IP packet traffic for a given protocol with a default or user-provided DSCP code. If you do not provide a DSCP value when enabling DSCP marking for a given protocol, a default is used: 0x0A (10): The default value for data protocols/traffic. 0x30 (48): The default value for control protocols/traffic. DSCP marking for US compliance ONTAP 9.0 SHA-2 password hash function To enhance password security, ONTAP 9 supports the SHA-2 password hash function and uses SHA-512 by default for hashing newly created or changed passwords. Existing user accounts with unchanged passwords continue to use the MD5 hash function after the upgrade to ONTAP 9 or later, and users can continue to access their accounts. However, it is strongly recommended that you migrate MD5 accounts to SHA-512 by having users change their passwords. ONTAP 9.0 FIPS 140-2 support You can enable the Federal Information Processing Standard (FIPS) 140-2 compliance mode for cluster-wide control plane web service interfaces. By default, the FIPS 140-2 only mode is disabled. Configure network security using Federal Information Processing Standards (FIPS)
https://docs.netapp.com/us-en/ontap/networking/network_features_by_release.html
2021-06-13T02:25:21
CC-MAIN-2021-25
1623487598213.5
[]
docs.netapp.com
Legacy Coaching Legacy Coaching LoopsCoaching Loopshas been deprecated in the Orlando release and the plugin can only be activated by the Customer Service and Support team LoopsThis is an overview of domain separation and Coaching Loops. Domain separation enables you to separate data, processes, and administrative tasks into logical groupings called domains. You can then control several aspects of this separation, including which users can see and access data.
https://docs.servicenow.com/bundle/paris-service-management-for-the-enterprise/page/product/coaching-loops/concept/c_CoachingLoops.html
2021-06-13T02:59:01
CC-MAIN-2021-25
1623487598213.5
[]
docs.servicenow.com
Managing Elastic Beanstalk Environments with the EB CLI After installing the EB CLI and configuring your project directory, you are ready to create an Elastic Beanstalk environment using the EB CLI, deploy source and configuration updates, and pull logs and events.. The EB CLI returns a zero ( 0) exit code for all successful commands, and a non-zero exit code when it encounters any error. The following examples use an empty project folder named eb that was initialized with the EB CLI for use with a sample Docker application. Basic Commands eb create To create your first environment, run eb create and follow the prompts. If your project directory has source code in it, the EB CLI will bundle it up and deploy it to your environment. Otherwise, a sample application will be used. ~/eb$ eb createEnter Environment Name (default is eb-dev): eb-devEnter DNS CNAME prefix (default is eb-dev): eb-devWARNING: The current directory does not contain any source code. Elastic Beanstalk is launching the sample application instead. Environment details for: elasticBeanstalkExa-env Application name: elastic-beanstalk-example Region: us-west-2 Deployed Version: Sample Application Environment ID: e-j3pmc8tscn Platform: 64bit Amazon Linux 2015.03 v1.4.3 running Docker 1.6.2 Tier: WebServer-Standard CNAME: eb-dev.elasticbeanstalk.com Updated: 2015-06-27 01:02:24.813000+00:00 Printing Status: INFO: createEnvironment is starting. -- Events -- (safe to Ctrl+C) Use "eb abort" to cancel the command. Your environment can take several minutes to become ready. Press Ctrl-C to return to the command line while the environment is created. eb status Run eb status to see the current status of your environment. When the status is ready, the sample application is available at elasticbeanstalk.com and the environment is ready to be updated. ~/eb$ eb statusEnvironment details for: elasticBeanstalkExa-env Application name: elastic-beanstalk-example Region: us-west-2 Deployed Version: Sample Application Environment ID: e-gbzqc3jcra Platform: 64bit Amazon Linux 2015.03 v1.4.3 running Docker 1.6.2 Tier: WebServer-Standard CNAME: elasticbeanstalkexa-env.elasticbeanstalk.com Updated: 2015-06-30 01:47:45.589000+00:00 Status: Ready Health: Green eb health Use the eb health command to view health information about the instances in your environment and the state of your environment overall. Use the --refresh option to view health in an interactive view that updates every 10 seconds. ~/eb$ eb healthapi Ok 2016-09-15 18:39:04 WebServer Java 8 total ok warning degraded severe info pending unknown 3 3 0 0 0 0 0 0 instance-id status cause health Overall Ok i-0ef05ec54918bf567 Ok i-001880c1187493460 Ok i-04703409d90d7c353 Ok instance-id r/sec %2xx %3xx %4xx %5xx p99 p90 p75 p50 p10 Overall 8.6 100.0 0.0 0.0 0.0 0.083* 0.065 0.053 0.040 0.019 i-0ef05ec54918bf567 2.9 29 0 0 0 0.069* 0.066 0.057 0.050 0.023 i-001880c1187493460 2.9 29 0 0 0 0.087* 0.069 0.056 0.050 0.034 i-04703409d90d7c353 2.8 28 0 0 0 0.051* 0.027 0.024 0.021 0.015 instance-id type az running load 1 load 5 user% nice% system% idle% iowait% i-0ef05ec54918bf567 t2.micro 1c 23 mins 0.19 0.05 3.0 0.0 0.3 96.7 0.0 i-001880c1187493460 t2.micro 1a 23 mins 0.0 0.0 3.2 0.0 0.3 96.5 0.0 i-04703409d90d7c353 t2.micro 1b 1 day 0.0 0.0 3.6 0.0 0.2 96.2 0.0 instance-id status id version ago deployments i-0ef05ec54918bf567 Deployed 28 app-bc1b-160915_181041 20 mins i-001880c1187493460 Deployed 28 app-bc1b-160915_181041 20 mins i-04703409d90d7c353 Deployed 28 app-bc1b-160915_181041 27 mins eb events Use eb events to see a list of events output by Elastic Beanstalk. ~/eb$ eb events2015-06-29 23:21:09 INFO createEnvironment is starting. 2015-06-29 23:21:10 INFO Using elasticbeanstalk-us-west-2-EXAMPLE as Amazon S3 storage bucket for environment data. 2015-06-29 23:21:23 INFO Created load balancer named: awseb-e-g-AWSEBLoa-EXAMPLE 2015-06-29 23:21:42 INFO Created security group named: awseb-e-gbzqc3jcra-stack-AWSEBSecurityGroup-EXAMPLE 2015-06-29 23:21:45 INFO Created Auto Scaling launch configuration named: awseb-e-gbzqc3jcra-stack-AWSEBAutoScalingLaunchConfiguration-EXAMPLE ... eb logs Use eb logs to pull logs from an instance in your environment. By default, eb logs pull logs from the first instance launched and displays them in standard output. You can specify an instance ID with the --instance option to get logs from a specific instance. The --all option pulls logs from all instances and saves them to subdirectories under .elasticbeanstalk/logs. ~/eb$ eb logs --allRetrieving logs... Logs were saved to /home/local/ANT/mwunderl/ebcli/environments/test/.elasticbeanstalk/logs/150630_201410 Updated symlink at /home/local/ANT/mwunderl/ebcli/environments/test/.elasticbeanstalk/logs/latest eb open To open your environment's website in a browser, use eb open: ~/eb$ eb open In a windowed environment, your default browser will open in a new window. In a terminal environment, a command line browser (e.g. w3m) will be used if available. eb deploy Once the environment is up and ready, you can update it using eb deploy. This command works better with some source code to bundle up and deploy, so for this example we've created a Dockerfile in the project directory with the following content: ~/eb/Dockerfile FROM ubuntu:12.04 RUN apt-get update RUN apt-get install -y nginx zip curl RUN echo "daemon off;" >> /etc/nginx/nginx.conf RUN curl -o /usr/share/nginx/www/master.zip -L RUN cd /usr/share/nginx/www/ && unzip master.zip && mv 2048-master/* . && rm -rf 2048-master master.zip EXPOSE 80 CMD ["/usr/sbin/nginx", "-c", "/etc/nginx/nginx.conf"] This Dockerfile deploys an image of Ubuntu 12.04 and installs the game 2048. Run eb deploy to upload the application to your environment: ~/eb$ eb deployCreating application version archive "app-150630_014338". Uploading elastic-beanstalk-example/app-150630_014338.zip to S3. This may take a while. Upload Complete. INFO: Environment update is starting. -- Events -- (safe to Ctrl+C) Use "eb abort" to cancel the command. When you run eb deploy, the EB CLI bundles up the contents of your project directory and deploys it to your environment. Note If you have initialized a git repository in your project folder, the EB CLI will always deploy the latest commit, even if you have pending changes. Commit your changes prior to running eb deploy to deploy them to your environment. eb config Take a look at the configuration options available for your running environment with the eb config command: ~/eb$ eb configApplicationName: elastic-beanstalk-example DateUpdated: 2015-06-30 02:12:03+00:00 EnvironmentName: elasticBeanstalkExa-env SolutionStackName: 64bit Amazon Linux 2015.03 v1.4.3 running Docker 1.6.2' ... This command populates a list of available configuration options in a text editor. Many of the options shown have a null value, these are not set by default but can be modified to update the resources in your environment. See Configuration Options for more information about these options. eb terminate If you are done using the environment for now, use eb terminate to terminate it. ~/eb$ eb terminateThe environment "eb-dev" and all associated instances will be terminated. To confirm, type the environment name: eb-dev INFO: terminateEnvironment is starting. INFO: Deleted CloudWatch alarm named: awseb-e-jc8t3pmscn-stack-AWSEBCloudwatchAlarmHigh-1XLMU7DNCBV6Y INFO: Deleted CloudWatch alarm named: awseb-e-jc8t3pmscn-stack-AWSEBCloudwatchAlarmLow-8IVI04W2SCXS INFO: Deleted Auto Scaling group policy named: arn:aws-cn:autoscaling:us-west-2:123456789012:scalingPolicy:1753d43e-ae87-4df6-a405-11d31f4c8f97caleUpPolicy-A070H1BMUQAJ INFO: Deleted Auto Scaling group policy named: arn:aws-cn:autoscaling:us-west-2:123456789012:scalingPolicy:1fd24ea4-3d6f-4373-affc-4912012092bacaleDownPolicy-LSWFUMZ46H1V INFO: Waiting for EC2 instances to terminate. This may take a few minutes. -- Events -- (safe to Ctrl+C) For a full list of available EB CLI commands, check out the EB CLI Command Reference.
http://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/eb-cli3-getting-started.html
2019-01-16T02:19:24
CC-MAIN-2019-04
1547583656577.40
[]
docs.amazonaws.cn
All content with label as5+gridfs+infinispan+listener. Related Labels: expiration, publish, datagrid, coherence, server, replication, transactionmanager, dist, release,, out_of_memory, concurrency, jboss_cache, import, events, hash_function, configuration, batch, buddy_replication, loader, write_through, cloud, tutorial, notification, jbosscache3x, read_committed, xml, distribution, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, websocket,, hot_rod more » ( - as5, - gridfs, - infinispan, - listener ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/as5+gridfs+infinispan+listener
2019-01-16T02:53:41
CC-MAIN-2019-04
1547583656577.40
[]
docs.jboss.org
Note: Built-In Function Rules Usage Options Used in conjunction with START_RTV_SPLF_LIST and GET_SPLF_LIST_ENTRY. The START_RTV_SPLF_LIST must be used first to provide the selection criteria for the retrieval of spool files. Once the selection criteria are established, the GET_SPLF_LIST_ENTRY can be used to retrieve the details of the spool files. The END_RTV_SPLF_LIST must be used after the list of spool files have been retrieved. This will close the list and release the storage allocated to that list. Return Values Example Refer to 9.109 GET_SPLF_LIST_ENTRY for an example.
https://docs.lansa.com/14/en/lansa015/content/lansa/end_rtv_splf_list.htm
2019-01-16T01:25:21
CC-MAIN-2019-04
1547583656577.40
[]
docs.lansa.com
is still registered as a class alias, so extensions can call the class via the Extbase PHP namespace in TYPO3 v8 without any downsides.
https://docs.typo3.org/typo3cms/extensions/core/latest/Changelog/8.7/Important-78650-TypoScriptServiceClassMovedFromExtbaseToCore.html
2019-01-16T02:47:09
CC-MAIN-2019-04
1547583656577.40
[]
docs.typo3.org
Unsubscribe. This action is throttled at 100 transactions per second (TPS). Request Parameters For information about the parameters that are common to all actions, see Common Parameters. - SubscriptionArn The ARN of the subscription to be &SubscriptionArn=arn%3Aaws%3Asns%3Aus-east-2%3A123456789012%3AMy-Topic%3A80289ba6-0fd4-4079-afb4-ce8c8260f0ca &Version=2010-03-31 &AUTHPARAMS Sample Response <UnsubscribeResponse xmlns=""> <ResponseMetadata> <RequestId>18e0ac39-3776-11df-84c0-b93cc1666b84</RequestId> </ResponseMetadata> </UnsubscribeResponse> See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/sns/latest/api/API_Unsubscribe.html
2019-01-16T02:30:40
CC-MAIN-2019-04
1547583656577.40
[]
docs.aws.amazon.com
Table Definition¶ DataJoint models data as sets of entities with shared attributes, often visualized as tables with rows and columns. Each row represents a single entity and the values of all of its attributes. Each column represents a single attribute with a name and a datatype, applicable to entity in the table. Unlike rows in a spreadsheet, entities in DataJoint don’t have names or numbers: they can only be identified by the values of their attributes. Defining a table means defining the names and datatypes of the attributes as well as the constraints to be applied to those attributes. Both MATLAB and Python use the same syntax define tables. For example, the following code in defines the table User, that contains users of the database: The table definition is contained in the first block comment in the class definition file. Note that although it looks like a mere comment, the table definition is parsed by DataJoint. This solution is thought to be convenient since MATLAB does not provide convenient syntax for multiline strings. %{ # database users username : varchar(20) # unique user name --- first_name : varchar(30) last_name : varchar(30) role : enum('admin', 'contributor', 'viewer') %} classdef User < dj.Manual end This defines the class User that creates the table in the database and provides all its data manipulation functionality. Table creation on the database server¶ Users do not need to do anything special to have the table created in the database. The table is created upon the first attempt to use the class for manipulating its data (e.g. inserting or fetching entities). Changing the definition of an existing table¶ Once the table is created in the database, the definition string has no further effect. In other words, changing the definition string in the class of an existing table will not actually update the table definition. To change the table definition, one must first drop the existing table. This means that all the data will be lost, and the new definition will be applied to create the new empty table. Therefore, in the initial phases of designing a DataJoint pipeline, it is common to experiment with variations of the design before populating it with substantial amounts of data. It is possible to modify a table without dropping it. This topic is covered separately. Reverse-engineering the table definition¶ DataJoint objects provide the describe method, which displays the table definition used to define the table when it was created in the database. This definition may differ from the definition string of the class if the definition string has been edited after creation of the table.
https://docs.datajoint.io/matlab/definition/03-Table-Definition.html
2019-01-16T01:19:33
CC-MAIN-2019-04
1547583656577.40
[]
docs.datajoint.io
Hortonworks Docs » Hortonworks Data Platform 3.0.1 » Using Apache HBase to store and access data Using Apache HBase to store and access data - What's New in Apache HBase - Overview of Apache HBase - Apache HBase installation - Installing HBase through Ambari - HBase cluster capacity planning - Configuring HBase cluster for the first time - Node count and JVM configuration - Options to increase HBase Region count and size - Enable multitenancy with namespaces - Security features that are available - Managing Apache HBase clusters - Monitoring Apache HBase clusters through Grafana-based dashboard - Optimizing Apache HBase I/O - Import data into HBase with Bulk load - Using Snapshots in HBase - Backing up and restoring Apache HBase datasets - Planning a backup-and-restore Strategy for your environment - Best practices for backup-and-restore - Running the backup-and-restore utility - Medium Object (MOB) storage support in Apache HBase - Methods to enable MOB storage support - Method 1:Enable MOB Storage support using configure options in the command line - Method 2: Invoke MOB support parameters in a Java API - Test the MOB storage support configuration - MOB storage cache properties - HBase quota management - Setting up quotas - Throttle quotas - Space quotas - Quota enforcement - Quota violation policies - Impact of quota violation policy - Number-of-Tables Quotas - Number-of-Regions Quotas - Understanding Apache HBase Hive integration - HBase Best Practices
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/hbase-data-access/content/impact-quota-violation-policy.html
2019-01-16T02:40:32
CC-MAIN-2019-04
1547583656577.40
[array(['/common/themes/pre-hdp-3.1/images/loading.gif', 'loading table of contents...'], dtype=object) ]
docs.hortonworks.com
Windows N-tier application on Azure with SQL Server This reference architecture shows how to deploy VMs and a virtual network configured for an N-tier application, using SQL Server on Windows for the data tier. Deploy this solution. Download a Visio file of this architecture. Architecture The architecture has the following components: Resource group. Resource groups are used to group resources so they can be managed by lifetime, owner, or other criteria. Virtual network (VNet) and subnets. Every Azure VM is deployed into a VNet that can be segmented into subnets. Create a separate subnet for each tier. Application gateway. Azure Application Gateway is a layer 7 load balancer. In this architecture, it routes HTTP requests to the web front end. Application Gateway also provides a web application firewall (WAF) that protects the application from common exploits and vulnerabilities. NSGs. Use network security groups (NSGs) to restrict network traffic within the VNet.. See Security considerations. Virtual machines. For recommendations on configuring VMs, see Run a Windows VM on Azure and Run a Linux VM on Azure. Availability sets. Create an availability set for each tier, and provision at least two VMs in each tier, which makes the VMs eligible for a higher service level agreement (SLA). Load balancers. Use Azure Load Balancer to distribute network traffic from the web tier to the business tier, and from the business tier to SQL Server. Public IP address. A public IP address is needed for the application to receive Internet traffic.. SQL Server Always On Availability Group. Provides high availability at the data tier, by enabling replication and failover. It uses Windows Server Failover Cluster (WSFC) technology for failover. Active Directory Domain Services (AD DS) Servers. The computer objects for the failover cluster and its associated clustered roles are created in Active Directory Domain Services (AD DS). Cloud Witness. A failover cluster requires more than half of its nodes to be running, which is known as having quorum. If the cluster has just two nodes, a network partition could cause each node to think it's the master node. In that case, you need a witness to break ties and establish quorum. A witness is a resource such as a shared disk that can act as a tie breaker to establish quorum. Cloud Witness is a type of witness that uses Azure Blob Storage. To learn more about the concept of quorum, see Understanding cluster and pool quorum. For more information about Cloud Witness, see Deploy a Cloud Witness for a Failover Cluster. Azure DNS. Azure DNS is a hosting service for DNS domains. It provides name resolution using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and billing as your other Azure services. Recommendations Your requirements might differ from the architecture described here. Use these recommendations as a starting point. VNet / Subnets When you create the VNet, determine how many IP addresses your resources in each subnet require. Specify a subnet mask and a VNet address range large enough for the required IP addresses, using CIDR notation. Use an address space that falls within the standard private IP address blocks, which are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. Choose an address range that does not overlap with your on-premises network, in case you need to set up a gateway between the VNet and your on-premise network later. Once you create the VNet, you can't change the address range. Design subnets with functionality and security requirements in mind. All VMs within the same tier or role should go into the same subnet, which can be a security boundary. For more information about designing VNets and subnets, see Plan and design Azure Virtual Networks. Load balancers Don't expose the VMs directly to the Internet, but instead give each VM a private IP address. Clients connect using the public IP address associated with the Application Gateway. Define load balancer rules to direct network traffic to the VMs. For example, to enable HTTP traffic, map port 80 from the front-end configuration to port 80 on the back-end address pool. When a client sends an HTTP request to port 80, the load balancer selects a back-end IP address by using a hashing algorithm that includes the source IP address. Client requests are distributed across all the VMs in the back-end address pool. Network security groups Use NSG rules to restrict traffic between tiers. In the three-tier architecture shown above, the web tier does not communicate directly with the database tier. To enforce this, the database tier should block incoming traffic from the web tier subnet. - Deny all inbound traffic from the VNet. (Use the VIRTUAL_NETWORKtag in the rule.) - Allow inbound traffic from the business tier subnet. - Allow inbound traffic from the database tier subnet itself. This rule allows communication between the database VMs, which is needed for database replication and failover. - Allow RDP traffic (port 3389) from the jumpbox subnet. This rule lets administrators connect to the database tier from the jumpbox. Create rules 2 – 4 with higher priority than the first rule, so they override it. SQL Server Always On Availability Groups We recommend Always On Availability Groups for SQL Server high availability. Prior to Windows Server 2016, Always On Availability Groups require a domain controller, and all nodes in the availability group must be in the same AD domain. Other tiers connect to the database through an availability group listener. The listener enables a SQL client to connect without knowing the name of the physical instance of SQL Server. VMs that access the database must be joined to the domain. The client (in this case, another tier) uses DNS to resolve the listener's virtual network name into IP addresses. Configure the SQL Server Always On Availability Group as follows: Create a Windows Server Failover Clustering (WSFC) cluster, a SQL Server Always On Availability Group, and a primary replica. For more information, see Getting Started with Always On Availability Groups. Create an internal load balancer with a static private IP address. Create an availability group listener, and map the listener's DNS name to the IP address of an internal load balancer. Create a load balancer rule for the SQL Server listening port (TCP port 1433 by default). The load balancer rule must enable floating IP, also called Direct Server Return. This causes the VM to reply directly to the client, which enables a direct connection to the primary replica. Note When floating IP is enabled, the front-end port number must be the same as the back-end port number in the load balancer rule. When a SQL client tries to connect, the load balancer routes the connection request to the primary replica. If there is a failover to another replica, the load balancer automatically routes new requests to a new primary replica. For more information, see Configure an ILB listener for SQL Server Always On Availability Groups. During a failover, existing client connections are closed. After the failover completes, new connections will be routed to the new primary replica. If your application makes significantly more reads than writes, you can offload some of the read-only queries to a secondary replica. See Using a Listener to Connect to a Read-Only Secondary Replica (Read-Only Routing). Test your deployment by forcing a manual failover of the availability group. Jumpbox Don't allow RDP access from the public Internet to the VMs that run the application workload. Instead, all RDP access to these VMs must come through the jumpbox. An administrator logs into the jumpbox, and then logs into the other VM from the jumpbox. The jumpbox allows RDP traffic from the Internet, but only from known, safe IP addresses. The jumpbox has minimal performance requirements, so select a small VM size. Create a public IP address for the jumpbox. Place the jumpbox in the same VNet as the other VMs, but in a separate management subnet. To secure the jumpbox, add an NSG rule that allows RDP connections only from a safe set of public IP addresses. Configure the NSGs for the other subnets to allow RDP traffic from the management subnet. Scalability considerations For the web and business tiers, consider using virtual machine scale sets, instead of deploying separate VMs into an availability set. A scale set makes it easy to deploy and manage a set of identical VMs, and autoscale the VMs based on performance metrics. As the load on the VMs increases, additional VMs are automatically added to the load balancer. Consider scale sets if you need to quickly scale out VMs, or need to autoscale. There are two basic ways to configure VMs deployed in a scale set: Use extensions to configure the VM after it's deployed. With this approach, new VM instances may take longer to start up than a VM with no extensions. Deploy a managed disk with a custom disk image. This option may be quicker to deploy. However, it requires you to keep the image up-to-date. For more information, see Design considerations for scale sets. Tip When using any autoscale solution, test it with production-level workloads well in advance. Each Azure subscription has default limits in place, including a maximum number of VMs per region. You can increase the limit by filing a support request. For more information, see Azure subscription and service limits, quotas, and constraints. Availability considerations If you don't use virtual machine scale sets, put VMs for the same tier into an availability set. Create at least two VMs in the availability set to support the availability SLA for Azure VMs. For more information, see Manage the availability of virtual machines. Scale sets automatically use placement groups, which act as an implicit availability set. The load balancer uses health probes to monitor the availability of VM instances. If a probe can't reach an instance within a timeout period, the load balancer stops sending traffic to that VM. However, the load balancer will continue to probe, and if the VM becomes available again, the load balancer resumes sending traffic to that VM. Here are some recommendations on load balancer health probes: - Probes can test either HTTP or TCP. If your VMs run an HTTP server, create an HTTP probe. Otherwise create a TCP probe. - For an HTTP probe, specify the path to an HTTP endpoint. The probe checks for an HTTP 200 response from this path. This path can be the root path ("/"), or a health-monitoring endpoint that implements some custom logic to check the health of the application. The endpoint must allow anonymous HTTP requests. - The probe is sent from a known IP address, 168.63.129.16. Don't block traffic to or from this IP address in any firewall policies or NSG rules. - Use health probe logs to view the status of the health probes. Enable logging in the Azure portal for each load balancer. Logs are written to Azure Blob storage. The logs show how many VMs aren't getting network traffic because of failed probe responses. If you need higher availability than the Azure SLA for VMs provides, consider replication the application across two regions, using Azure Traffic Manager for failover. For more information, see Multi-region N-tier application for high availability. Security considerations Virtual networks are a traffic isolation boundary in Azure. VMs in one VNet can't communicate directly with VMs in a different VNet. VMs within the same VNet can communicate, unless you create network security groups (NSGs) to restrict traffic. For more information, see Microsoft cloud services and network security. DMZ. Consider adding a network virtual appliance (NVA) to create a DMZ between the Internet and the Azure virtual network. NVA is a generic term for a virtual appliance that can perform network-related tasks, such as firewall, packet inspection, auditing, and custom routing. For more information, see Implementing a DMZ between Azure and the Internet. Encryption. Encrypt sensitive data at rest and use Azure Key Vault to manage the database encryption keys. Key Vault can store encryption keys in hardware security modules (HSMs). For more information, see Configure Azure Key Vault Integration for SQL Server on Azure VMs. It's also recommended to store application secrets, such as database connection strings, in Key Vault. DDoS protection. The Azure platform provides basic DDoS protection by default. This basic protection is targeted at protecting the Azure infrastructure as a whole. Although basic DDoS protection is automatically enabled, we recommend using DDoS Protection Standard. Standard protection uses adaptive tuning, based on your application's network traffic patterns, to detect threats. This allows it to apply mitigations against DDoS attacks that might go unnoticed by the infrastructure-wide DDoS policies. Standard protection also provides alerting, telemetry, and analytics through Azure Monitor. For more information, see Azure DDoS Protection: Best practices and reference architectures. Deploy the solution A deployment for this reference architecture is available on GitHub. The entire deployment can take up to two hours, which includes running the scripts to configure AD DS, the Windows Server failover cluster, and the SQL Server availability group. Prerequisites Clone, fork, or download the zip file for the reference architectures GitHub repository. Install Azure CLI 2.0. Install the Azure building blocks npm package. npm install -g @mspnp/azure-building-blocks From a command prompt, bash prompt, or PowerShell prompt, sign into your Azure account as follows: az login Deployment steps Run the following command to create a resource group. az group create --location <location> --name <resource-group-name> Run the following command to create a Storage account for the Cloud Witness. az storage account create --location <location> \ --name <storage-account-name> \ --resource-group <resource-group-name> \ --sku Standard_LRS Navigate to the virtual-machines\n-tier-windowsfolder of the reference architectures GitHub repository. Open the n-tier-windows.jsonfile. Search for all instances of "witnessStorageBlobEndPoint" and replace the placeholder text with the name of the Storage account from step 2. "witnessStorageBlobEndPoint": "https://[replace-with-storageaccountname].blob.core.windows.net", Run the following command to list the account keys for the storage account. az storage account keys list \ --account-name <storage-account-name> \ --resource-group <resource-group-name> The output should look like the following. Copy the value of key1. [ { "keyName": "key1", "permissions": "Full", "value": "..." }, { "keyName": "key2", "permissions": "Full", "value": "..." } ] In the n-tier-windows.jsonfile, search for all instances of "witnessStorageAccountKey" and paste in the account key. "witnessStorageAccountKey": "[replace-with-storagekey]" In the n-tier-windows.jsonfile, search for all instances of [replace-with-password]and [replace-with-sql-password]replace them with a strong password. Save the file. Note If you change the adminstrator user name, you must also update the extensionsblocks in the JSON file. Run the following command to deploy the architecture. azbb -s <your subscription_id> -g <resource_group_name> -l <location> -p n-tier-windows.json --deploy For more information on deploying this sample reference architecture using Azure Building Blocks, visit the GitHub repository.
https://docs.microsoft.com/en-ca/azure/architecture/reference-architectures/n-tier/n-tier-sql-server
2019-01-16T01:58:49
CC-MAIN-2019-04
1547583656577.40
[array(['images/n-tier-sql-server.png', 'N-tier architecture using Microsoft Azure'], dtype=object)]
docs.microsoft.com
RQ Deprecated Client This SDK has been superseded by a new unified one. The documentation here is preserved for customers using the old client. For new projects have a look at the new client documentation: Unified Python SDK. Starting with RQ version 0.3.1, support for Sentry has been built in. Usage RQ natively supports binding with Sentry by passing your SENTRY_DSN through rqworker: $ rqworker --sentry-dsn="___DSN___" Extended Setup If you want to pass additional information, such as release, you’ll need to bind your own instance of the Sentry Client: from raven import Client from raven.transport.http import HTTPTransport from rq.contrib.sentry import register_sentry client = Client('___DSN___', transport=HTTPTransport) register_sentry(client, worker) Please see rq‘s documentation for more information:
https://docs.sentry.io/clients/python/integrations/rq/
2019-01-16T01:34:20
CC-MAIN-2019-04
1547583656577.40
[]
docs.sentry.io
Notification Digests Sentry provides a service that will collect notifications as they occur and schedule them for delivery as aggregated “digest” notifications. Configuration Although the digest system is configured with a reasonable set of default options, the SENTRY_DIGESTS_OPTIONS setting can be used to fine-tune the digest backend behavior to suit the needs of your unique installation. All backends share a common set of options defined below, while some backends may also define additional options that are specific to their individual implementations. minimum_delay The minimum_delayoption defines the default minimum amount of time (in seconds) to wait between scheduling digests for delivery after the initial scheduling. This can be overriden on a per-project basis in the Notification Settings. maximum_delay The maximum_delayoption defines the default maximum amount of time (in seconds) to wait between scheduling digests for delivery. This can be overriden on a per-project basis in the Notification Settings. increment_delay The increment_delayoption defines how long each observation of an event should delay scheduling up until the maximum_delayafter the last time a digest was processed. capacity The capacityoption defines the maximum number of items that should be contained within a timeline. Whether this is a hard or soft limit is backend dependent – see the truncation_chanceoption. truncation_chance The truncation_chanceoption defines the probability that an addoperation will trigger a truncation of the timeline to keep it’s size close to the defined capacity. A value of 1 will cause the timeline to be truncated on every addoperation (effectively making it a hard limit), while a lower probability will increase the chance of the timeline growing past it’s intended capacity, but increases the performance of addoperations by avoiding truncation, which is a potentially expensive operation, especially on large data sets. Backends Dummy Backend The dummy backend disables digest scheduling, and all notifications are sent as they occur (subject to rate limits.) This is the default digest backend for installations that were created prior to version 8. The dummy backend can be specified via the SENTRY_DIGESTS setting: SENTRY_DIGESTS = 'sentry.digests.backends.dummy.DummyBackend' Redis Backend The Redis backend uses Redis to store schedule and pending notification data. This is the default digest backend for installations that were created since version 8. The Redis backend can be specified via the SENTRY_DIGESTS setting: SENTRY_DIGESTS = 'sentry.digests.backends.redis.RedisBackend' The Redis backend accepts several options beyond the basic set, provided via SENTRY_DIGESTS_OPTIONS: cluster The clusteroption defines the Redis cluster that should be used for storage. If no cluster is specified, the defaultcluster is used. Important Changing the cluster value or the cluster configuration after data has been written to the digest backend may cause unexpected effects – namely, it creates the potential for data loss during cluster size changes. This option should be adjusted with care on running systems. ttl The ttloption defines the time-to-live (in seconds) for records, timelines, and digests. This can (and should) be a relatively high value, since timelines, digests, and records should all be deleted after they have been processed – this is mainly to ensure stale data doesn’t hang around too long in the case of a configuration error. This should be larger than the maximum scheduling delay to ensure data is not evicted too early. Example Configuration SENTRY_DIGESTS = 'sentry.digests.backends.redis.RedisBackend' SENTRY_DIGESTS_OPTIONS = { 'capacity': 100, 'cluster': 'digests', }
https://docs.sentry.io/server/digests/
2019-01-16T01:51:59
CC-MAIN-2019-04
1547583656577.40
[]
docs.sentry.io
By taking advantage of an HA pair's takeover and giveback operations, you can change hardware components and perform software upgrades in your configuration without disrupting access to the system's storage. You can perform nondisruptive operations on a system by having its partner take over the system's storage, performing maintenance, and then giving back the storage. Aggregate relocation extends the range of nondisruptive capabilities by enabling storage controller upgrade and replacement operations.
http://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-hacg/GUID-519510F4-AE00-4859-9706-217553D05FB9.html
2019-01-16T02:04:54
CC-MAIN-2019-04
1547583656577.40
[]
docs.netapp.com
window_mouse_get_x(); Returns: Real With this function you can get the x position of the mouse cursor (in pixels) within the browser if it is an HTML5 game or within the display if it is a Windows, Ubuntu (Linux) or MacOS game. NOTE: For regular mouse functions see the section on Mouse Input. wx = window_mouse_get_x(); The above code stores the current x axis window position of the mouse in the variable "wx".
http://docs.yoyogames.com/source/dadiospice/002_reference/windows%20and%20views/the%20game%20window/window_mouse_get_x.html
2019-01-16T01:38:30
CC-MAIN-2019-04
1547583656577.40
[]
docs.yoyogames.com
>>: Zones − Table of content Zone's principles ¶ Zones in Jelix represent small parts of the final response. They are intended to manage and generate the content of a part of the screen, the web page. A web page is then primarily made of zones. What is interesting in using zones is to : - be able to reuse a zone in different pages : a zone is in theory independent from the context : it calls business classes by itself and has its own template. - have a generation of content that accept parameters. - be able to generate pages faster, activating the zone's cache : only the zones whose parameters have changed (the main zone in general) are regenerated (or those whose cache hase been deleted). - make the code of controllers lighter. Using zones ¶ Creation ¶ A zone is declared with a class extending jZone. Its name must begin with "Zone". class ZoneTest extends jZone { } It must be placed in a name_of_zone.zone.php file, in the zones/ directory of the module. In our example : it is the zones/test.zone.php file. A jZone object instantiates its own template by default. Using it without template ¶ If you don't want to use a template for your zone, you have to overload the _createContent method, which must return the content of the zone in the form of string. You mustn't use an echo or print ! class testZone extends jZone { protected function _createContent(){ return "<p>This is the content of a zone</p>"; } } Using it with a template ¶ Most of the time, you will use a template. You have to indicate in the $_tlpname property the template that you will use (using a selector), and overload the _prepareTpl() method. This method is in charge of the initialization of the jTpl object automatically instantiated by jZone and placed in the _tpl property. class testZone extends jZone { protected $_tplname='template_test'; protected function _prepareTpl(){ $this->_tpl->assign('foo','bar'); } } And the template (placed in templates/template_test.tpl): <p>This is a template. And foo = {$foo}.</p> Calling it ¶ There are several ways to retrieve the content of a zone according to what we want to do. If you simply want to retrieve its content (in a controller), you do : $content = jZone::get('test'); // or 'TheModule~test'... However, you will often have to affect the content of the zone to a main template variable, when the response has a main template (which is the case of html responses, through its $body property which is a jTpl object). In the controller, we will thus be able to use the assignZone method of jTpl: $rep = $this->getResponse('html'); $rep->title = 'test page'; $rep->bodyTpl = 'testapp~main'; $rep->body->assignZone('MAIN', 'test'); testcorrespondant to the file test.zone.php MAINcorrespondant to the template variable {$MAIN} Another solution is to have a direct call to a zone in a template : <div id="menu"> {zone 'TheModule~test'}</div> Calling it with parameters ¶ It is possible pass parameters to a zone. Parameters should be in an associative array. $content = jZone::get('test', array('foo'=>'bar')); With the assignZone method of jTpl: $rep = $this->getResponse('html'); $rep->title = 'test page'; $rep->bodyTpl = 'testapp~main'; $rep->body->assignZone('MAIN', 'test', array('foo'=>'bar')); To retrieve the variable in the zone, we use the param() method : class testZone extends jZone { protected $_tplname='template_test'; protected function _prepareTpl(){ $foo = $this->param('foo'); $foo = strtoupper($foo); $this->_tpl->assign('foo', $foo); } } In this example we pass the 'foo' variable with 'bar' value as parameter of the zone. We have retrieved the 'foo' variable in the zone to process it (here: to make it upper case) and we affected 'foo' to the template of the zone. Don't forget that Jelix affects automatically the variables passed as parameters of the zone to the template of the zone if it exists. You then can avoid to write : protected function _prepareTpl(){ $this->_tpl->assign('foo', $this->param('foo'); } If you use the zone plugin template, you pass the parameters to the zone this way : <div id="menu"> {zone 'TheModule~test', array('foo'=>'bar')} </div> Using the cache ¶ It is possible to put the generated content in a cache. And you can have a cache for each parameter value of the zone. Enabling the cache ¶ By default, a zone does not cache the generated content, so you should activate it in your class, via the property _useCache: Class testZone extends jZone { Protected $_useCache = true; } If the zone is called without parameters, there will be a single cache file. If you have several parameters, then there will be a cache file for each given value in parameters. For example, if you have an 'article_id' parameter, there will have some cache file for each value of article_id. Carefull: a cache is a file in the temp directory of the application. If you have thousands of articles, it can generate as many files into your temporary directory. You should avoid to enable the cache if for example you have a limited number of files in your web hosting. Use the cache wisely. For example, a moderately popular application (the same article read only about once a day), it is not necessary to activate the cache. You be the judge ... Refreshing the cache ¶ It is necessary to regenerate the cache when the information is obsolete. This regeneration can be done automatically regularly (all n seconds), or be forced manually. You use one or the other methods as appropriate. The second method is less greedy in resource since the cache is regenerated when you do it. The disadvantage is that you should explicitly clear the cache in your business code. The first method avoids this job, but consumes more resources, and the content of the zone is not up to date during a laps of time. But it could not be vital if cached informations are not important. Automatic refresh ¶ For an automatic refresh, you have just to indicate the time in $_cacheTimeout, in seconds: Class testZone extends jZone { Protected $_useCache = true; Protected $_cacheTimeout = 60; } Here, the cache will be regenerated every 60 seconds. If you put 0, there won't be automatic refresh. Forced refresh ¶ The "manual" removal of cache is done via static methods clear() and clearAll().. For example, in the business class which manage your article, during the update of the article (in a database, for example) or when you delete it, you are going to call jZone to delete the corresponding cache, so that it will be regenerated in the next display. Of course you should indicate the parameters values which identify the cache. In our example, therefore, id_article. jZone::clear ( 'mymodule~product', array ( 'id_article' => 546)); If you want to delete all the caches at the same time, you can call clearAll(): jZone::clearAll('mymodule~product'); And if you want to delete all caches of all zone: jZone::clearAll (); Prevent temporarily caching ¶ It should be noted that the methods _createContent() and _prepareTpl() (that you can override), are called only when the cache must be regenerated. It may be that for some reason or another (depending on the value of a parameter for example), you don't want sometimes to use the cache of the zone. To do it, in _createContent() or _prepareTpl(), you just have to set $_cancelCache property to true: protected function _prepareTpl { / / .... $this->_cancelCache = true; //... } Disable cache during development ¶ To disable all zone cache, and thus be able to build your zone contents and see directly the results, you can set a config parameter, in [zones] section : [zones] disableCache = on Automatics parameters ¶ The display of a zone may depend explicitly given parameters, but also on "external" parameters implicitly. One example is a zone which displays the version of an article based on the language set in the app. You can of course indicate the code language at each call of the zone, but it is not practical. It might not have to indicate the parameters, and recover them in _createContent() or _prepareTpl(), but then it is not possible for this implicit parameters to be criterions for the cache system. The solution is to override the constructor, and initialize this parameter: class articleZone extends jZone { protected $_useCache = true; public function __construct ($params = array ()) { $params['lang'] = jApp::config()->locale; parent:: __construct ($params); } }
https://docs.jelix.org/en/manual-1.4/zones
2019-01-16T01:49:56
CC-MAIN-2019-04
1547583656577.40
[array(['/design/2011/icons/ui-menu-blue.png', None], dtype=object)]
docs.jelix.org
stingray paint ball gun power tube is to far wide fit within the adjustment ring on that bolt so old adjustable a no go waiting hear new 2 paintball parts. Related Post Little Tikes Giant Toy Chest Exerpeutic Magnetic Upright Exercise Bike With Heart Pulse Sensors Patriots Earrings Used Hiking Backpack Marcy Cardio Mini Cycle Adidas Ladies Golf Shoes Mountainsmith Packs Womens Petite Golf Club Sets 16 Inch Boys Bike Blue Glass Cafe Leftie Golf Clubs Non Polarized Power Cord Suggestion Box With Lock Gold Tape Dispenser Hummingbird Depth Finder
http://top-docs.co/stingray-paint-ball-gun/stingray-paint-ball-gun-power-tube-is-to-far-wide-fit-within-the-adjustment-ring-on-that-bolt-so-old-adjustable-a-no-go-waiting-hear-new-2-paintball-parts/
2019-01-16T02:09:39
CC-MAIN-2019-04
1547583656577.40
[array(['http://top-docs.co/wp-content/uploads/2018/04/stingray-paint-ball-gun-power-tube-is-to-far-wide-fit-within-the-adjustment-ring-on-that-bolt-so-old-adjustable-a-no-go-waiting-hear-new-2-paintball-parts.jpg', 'stingray paint ball gun power tube is to far wide fit within the adjustment ring on that bolt so old adjustable a no go waiting hear new 2 paintball parts stingray paint ball gun power tube is to far wide fit within the adjustment ring on that bolt so old adjustable a no go waiting hear new 2 paintball parts'], dtype=object) ]
top-docs.co
3.1. Introduction Into Configuring¶ 3.1.1. Configuration files¶ By default, CouchDB reads configuration files from the following locations, in the following order: etc/default.ini etc/default.d/*.ini etc/local.ini etc/local.d/*.ini the default.ini and default.d directories, and /Users/youruser/Library/Application Support/CouchDB2/etc/couchdb for the local.ini and local.d directories. Settings in successive documents override the settings in earlier entries. For example, setting the httpd/bind_address parameter in local.ini would override any setting in default.ini. Warning The default.ini file may be overwritten during an upgrade or re-installation, so localised changes should be made to the local.ini file or files within the local.d directory. The configuration file chain may be changed by setting the ERL_FLAGS environment variable: export ERL_FLAGS="-couch_ini /path/to/my/default.ini /path/to/my/local.ini" or by placing the -couch_ini .. flag directly in the etc/vm.args file. Passing -couch_ini .. as a command-line argument when launching couchdb is the same as setting the ERL_FLAGS environment variable. Warning The environment variable/command-line flag overrides any -couch_ini option specified in the etc/vm.args file. And, BOTH of these options completely override CouchDB from searching in the default locations. Use these options only when necessary, and be sure to track the contents of etc/default.ini, which may change in future releases. 3.1.2. Parameter names and values¶ All parameter names are case-sensitive. Every parameter takes a value of one of five types: boolean, integer, string, tuple and proplist. Boolean values can be written as true or false. Parameters with value type of tuple or proplist are following the Erlang requirement for style and naming. 3.1.3. Setting parameters via the configuration file¶ The common way to set some parameters is to edit the local.ini file (location explained above). For example: ; This is a comment [section] param = value ; inline comments are allowed Each configuration file line may contains section definition, parameter specification, empty (space and newline characters only) or commented line. You can set up inline commentaries for sections or parameters. The section defines group of parameters that are belongs to some specific CouchDB subsystem. For instance, httpd section holds not only HTTP server parameters, but also others that directly interacts with it. The parameter specification contains two parts divided by the equal sign ( =): the parameter name on the left side and the parameter value on the right one. The leading and following whitespace for = is an optional to improve configuration readability. Note In case when you’d like to remove some parameter from the default.ini without modifying that file, you may override in local.ini, but without any value: [httpd_global_handlers] _all_dbs = This could be read as: “remove the _all_dbs parameter from the httpd_global_handlers section if it was ever set before”. The semicolon ( ;) signals the start of a comment. Everything after this character is ignored by CouchDB. After editing the configuration file, CouchDB should be restarted to apply any changes. 3.1.4. Setting parameters via the HTTP API¶ Alternatively, configuration parameters could be set via the HTTP API. This API allows to change CouchDB configuration on-the-fly without requiring a server restart: curl -X PUT<name@host>/_config/uuids/algorithm -d '"random"' In the response the old parameter’s value returns: "sequential" You should be careful with changing configuration via the HTTP API since it’s easy to make CouchDB unavailable. For instance, if you’d like to change the httpd/bind_address for a new one: curl -X PUT<name@host>/_config/httpd/bind_address -d '"10.10.0.128"' However, if you make a typo, or the specified IP address is not available from your network, CouchDB will be unavailable for you in both cases and the only way to resolve this will be by remoting into the server, correcting the errant file, and restarting CouchDB. To protect yourself against such accidents you may set the httpd/config_whitelist of permitted configuration parameters for updates via the HTTP API. Once this option is set, further changes to non-whitelisted parameters must take place via the configuration file, and in most cases, also requires a server restart before hand-edited options take effect.
http://docs.couchdb.org/en/latest/config/intro.html
2017-09-19T20:34:04
CC-MAIN-2017-39
1505818686034.31
[]
docs.couchdb.org
#include <CCrateController.h> class CCrateController { CCrateController(unsigned int b, unsigned int c); void Z(); void C(); long Lams(); void UnInhibit(); void DisableDemand(); void Inhibit(); void nableDemand(); bool isInhibited(); bool isDemanding(); bool isDemandEnabled(); void BroadcastControl(unsigned int f, unsigned int a); void BroadcastWrite(unsigned int f, unsigned int a, unsigned long d); void MulticastControl(unsigned int f, unsigned int a, unsigned long nMask); void MulticastWrite(unsigned int f, unsigned int a, unsigned long nMask, unsigned long nData); void InitializeCrate();} Instances of the CCrateController class allow you to manipulate a BiRA Model 1302 Type A1/A2 crate controller on the end of a parallel branch highway. The CAMAC Branch highway standard provides some common specifications for the behavior of Type A1 crate controller. In addition, there are some model specific features of the controller. It is certain that this class will work correctly with respect to the standard features of all A1/A2 crate controllers. Extension may or may not work depending on the way they've been implemented. Constructs a CCrateController object. b is the branch number, which also implies a VME crate number. c is the crate numbers set in the thumbwheels on the front of the CAMAC crate controller module. Performs a Z cycle on the CAMAC crate. The dataway Z line is pulse in accordance with the requirements of the CAMAC crate standard for Z cycles. Peforms a C cyle on the crate. The C line is pulsed in accordance with the requirments of the CAMAC crate standard for C cycles. Returns the 24 bit graded LAM mask from the controller. This is only meaningful if a lam grader is installed in the device to map the dataway LAM pattern to graded LAM values. At the NSCL we generally use a passive jumper block for a lam grader that maps the LAM of each slot into the corresponding bit of the LAM grader. The LAM grader plugs into a connector on the back of the controller module. Deasserts the data way I line. The I line provides an Inhibit signal to the modules. while modules are free, under the standard, to interpret the presence or absence of the I signal in any way desired by the designer, in general the I line is used to disable some functions of the module. For, example, in many digitization modules, the I line prevents gates from having any effect on the module. Disables the crate controllers ability to assert the branch demand (BD) line. The branch demand line is generally asserted by a crate controller if it has LAMs. Asserts the I (inhibit) line on the crate dataway. See the UnInhibit function description for more information about this line and its purpose/funcion. Enable the module to assert the branch demand (BD) line if LAMs are present. Returns true if the data way is inhibited. Note that this function cannot be atomic, a test function must be performed, and the state of the branch Q read. Therefore to ensure reliable results, the VME bus should be software locked when doing this function. Returns true if the controller is asserting a branch demand. Returns true if the branch demand is enabled. Performs a control (non data transfer) broadcast operation to the crate. A broadcast operation sends the function code f and subaddress a to all modules by asserting all N lines (slot selects) during the cycle. f must be in one of the inclusive ranges 8-15 or 24-31. Performs a broadcast write. Broadcast writes are like broadcast control operations (See BroadcastControl above) but data are assserted on the data way lines and the function code, f, must be in the inclusive range 16-23. This causes common write data to be transferred to all modules on the data way. Note that as the data way read lines are bussed, it makes no sense to provide a broadcast read operation...although the open collector nature of the CAMAC dataway would not cause electrical damage. Performs a multicast control operation by writing the mask of affected slots, nMask, to the station number register (a misnomer), and performing a multicast operation using function code f and a as the subaddress. The function value f must be in the range of valid non data transfer CAMAC function codes. Performs a multicast write. See MulticastControl for much of the description, however, the function code f must be a valid write operation and the data nData is written on the data way. Initializes a CAMAC crate by performing a C cycle, a Z cycle, Disabling the demand and uninhibiting the dataway.
http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.2/r31335.html
2017-09-19T20:45:43
CC-MAIN-2017-39
1505818686034.31
[]
docs.nscl.msu.edu
The Import/Export tool is designed to easily transfer and copy different objects and their properties. The Import/Export tool is available in both Wialon Hosting interfaces – manager's and user's. To open the tool, click on a corresponding button in the top panel of CMS Manager or in the ground panel of the main interface. You can import/export: Moreover, you can choose particular items to be imported/exported, for example, you can indicate not all but certain service intervals or sensors (for units), certain geofences and jobs (for resources), etc. Data can be imported and exported via files or directly from one object into another. Exporting to a file gives you possibility to store data on disc and use it when necessary. For instance, you can create templates of unit properties, which makes it considerably easier to create and configure new units. Two file formats are supported: Exporting to an object allows you to transfer data (properties or contents) straight from an object to another object of the same type or to several objects at once. For example, you can copy geofences from one resource to another. Access rights are important for import/export. Bear in mind two simple rules: See more:
http://docs.wialon.com/en/hosting/1311/cms/port/port
2017-09-19T20:46:57
CC-MAIN-2017-39
1505818686034.31
[]
docs.wialon.com
ExositeReady™ Gateway Engine ExositeReady™ Gateway Engine (GWE) was created by Exosite to service a commonly occurring design pattern in IoT applications. This page provides information about what GWE does and does not do, as well as a list of the terms used and the additional resources available. Resources - Getting Started - Product Overview - Release Packages - Custom Gateway Applications - Over the Air Updates - GWE Solution App - Device Client - Docs - GWE - Docs - Gateway Message Queuing - Docs About GWE What is a Gateway? In the context of IoT, a "gateway" can be loosely defined as any device that serves as a communication broker for other devices. Gateways, in this context, often bridge the gap between an IoT platform (Exosite) and some collection of devices that do not possess the ability of communicating on the Internet. Sometimes the "devices" generating the data you want on the Internet are not devices, per se, but data from other networks the gateway can access such as modbus and CAN. Either way, the purpose of any gateway is to move local data to an external agent on the Internet. Since using gateways is common throughout so many industrial applications, Exosite created GWE as an out-of-the-box developer and deployment tool for Internet-connected gateways. What GWE Does - It is the product that installs and modifies software over the air in a secure and scalable manner. - It is an application-hosting framework for Custom Gateway Applications. - It provides an Exosite API library in Python called device-client. - It is integrated with Supervisor to manage your Custom Gateway Applications' runtime environment. What GWE Does Not Do - It does not read any sensor data. - It does not auto-discover any connected nodes or sensors and automatically send data. - It does not know what a Custom Gateway Application does. Notational Conventions As mentioned in the HTTP Device API, this document follows some notational conventions: - Any JSON is pretty printed for clarity. The extra whitespace is not necessarily included in any commands. #, //) are occasionally included in example code, runnable commands and JSON to give hints or provide detail. These comments are not necessary (and sometimes error-prone) in actual example code, commands, requests, and responses. - A name in angle brackets (e.g., <pid>, <myvar>) is a placeholder that will be defined elsewhere. - Code blocks are distinguished by command and example headings where commands can be copy-pasted into terminals, whereas examples are samples of terminal output when running the commands.
http://docs.exosite.com/development/exositeready/gwe/
2017-09-19T20:39:44
CC-MAIN-2017-39
1505818686034.31
[]
docs.exosite.com
. For a list of all built in execution modules, click here For information on writing execution modules, see this page. Sometimes. The auth module system allows for external authentication routines to be easily added into Salt. The auth function needs to be implemented to satisfy the requirements of an auth module. Use the pam module as an example. The fileserver module system is used to create fileserver backends used by the Salt Master. These modules need to implement the functions used in the fileserver subsystem. Use the gitfs module as an example. Grain modules define extra routines to populate grains data. All defined public functions will be executed and MUST return a Python dict object. The dict keys will be added to the grains made available to the minion.. Tops modules are used to convert external data sources into top file data for the state system.
https://docs.saltstack.com/en/latest/topics/development/modular_systems.html
2017-09-19T20:28:50
CC-MAIN-2017-39
1505818686034.31
[]
docs.saltstack.com
Apache Zookeeper Maintenance Tasks Edge for Private Cloud v. 4.16.09 Four-Letter Commands Apache ZooKeeper has a number of "four-letter commands" that can be helpful in determining the current status of ZooKeeper voter and observer nodes. These commands can be invoked using "nc", "telnet" or another utility that has the ability to send commands to a specific port. Details on the four-letter commands can be found at:. Removing Old Snapshot Files Apache ZooKeeper also requires periodic maintenance to remove old snapshot files which accumulate as updates to the system are made. You typically remove old snapshot files as part of a regular maintenance task or when you notice that free disk space is below a threshold. Instructions on how to clean up old snapshots can be found at. Log File Maintenance Apache Zookeeper log files are kept in /<inst_root>/apigee/var/log/zookeeper. Normally, log file maintenance should not be required, but if you find that there are an excessive number of ZooKeeper logs or that the logs are very large you can modify ZooKeeper’s log4j properties to set the maximum file size and file count. - Edit /<install_dir>/apigee/customer/application/zookeeper.properties to set the following properties. If that file does not exist, create it: conf_log4j_log4j.appender.rollingfile.maxfilesize=10MB # max file size conf_log4j_log4j.appender.rollingfile.maxbackupindex=50 # max open files - Restart ZooKeeper by using the commands: $ /<install_dir>/apigee/apigee-service/bin/apigee-service apigee-zookeeper restart Help or comments? - If something's not working: Ask the Apigee Community or see Apigee Support. - If something's wrong with the docs: Send Docs Feedback (Incorrect? Unclear? Broken link? Typo?)
http://ja.docs.apigee.com/private-cloud/v4.16.09/apache-zookeeper-maintenance-tasks
2017-09-19T20:51:39
CC-MAIN-2017-39
1505818686034.31
[]
ja.docs.apigee.com
Welcome to the Komodo User Guide What's new in Komodo IDE) (Read the full Release Notes.) sup/port .
http://docs.activestate.com/komodo/7.1/
2014-12-18T02:20:10
CC-MAIN-2014-52
1418802765584.21
[array(['img/komodo.png', None], dtype=object)]
docs.activestate.com
Users¶ A sample of institutions using bcbio-nextgen for solving biological problems. Please submit your story if you’re using the pipeline in your own research. - Harvard School of Public Health: We use bcbio-nextgen within the bioinformatics core for variant calling on large population studies related to human health like Breast Cancer and Alzheimer’s disease. Increasing scalability of the pipeline has been essential for handling study sizes of more than 1400 whole genomes. - Massachusetts General Hospital: The Department of Molecular Biology uses the pipeline to automatically process samples coming off Illumina HiSeq instruments. Automated pipelines perform alignment and sample-specific analysis, with results directly uploaded into a local Galaxy instance. - Science for Life Laboratory: The genomics core platform in the Swedish National Genomics Infrastructure (NGI) has crunched over 16TBp (terabasepairs) and processed almost 7000+ samples from the beginning of 2013 until the end of July. UPPMAX, our cluster located in Uppsala runs the pipeline in production since 2010. - Institute of Human Genetics, UCSF: The Genomics Core Facility utilizes bcbio-nextgen in processing more than 2000 whole genome, exome, RNA-seq, ChIP-seq on various projects. This pipeline tremendously lowers the barrier of getting access to next generation sequencing technology. The community engaged here is also very helpful in providing best practices advices and up-to-date solution to ease scientific discovery. - IRCCS “Mario Negri” Institute for Pharmacological Research: The Translational Genomics Unit in the Department of Oncology uses bcbio-nextgen for targeted resequencing (using an Illumina MiSeq) to identify mutations and other variants in tumor samples to investigate their link to tumor progression, patient survival and drug sensitivity and resistance. A poster from the 2014 European Society of Human Genetics meeting provides more details on usage in ovarian cancer. A paper on the study of longitudinal ovarian cancer biopsies, which makes extensive use of bcbio-nextgen, was published in 2015 in Annals of Oncology. - The Translational Genomics Research Institute (TGen): Members of the Huentelman lab at TGen apply bcbio-nextgen to a wide variety of studies of with a major focus in the neurobiology of aging and neurodegeneration in collaboration with the The Arizona Alzheimer’s Consortium (AAC) and the McKnight Brain Research Foundation. We also use bcbio in studies of rare diseases in children through TGen’s Center for Rare Childhood Disorders (C4RCD), and other rare diseases such as Multiple System Atrophy (MSA). bcbio-nextgen has also been instrumental in projects for TGen’s Program for Canine Health & Performance (PCHP) and numerous RNA-seq projects using rodent models. Our work with bcbio started with a partnership with Dell and The Neuroblastoma and Medulloblastoma Translational Research Consortium (NMTRC), and TGen as part of a Phase I clinical trial in these rare childhood cancers. - Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT: The Gifford lab uses the bcbio-nextgen pipeline to analyze a variety of sequencing datasets for their research in genetics and regulatory genomics (including the SysCode and Stem Cell to Neuron projects). The pipeline applies collaboratively-developed best practices for analysis as well as computation, which enables the lab to run the pipeline on local clusters and Amazon EC2. - Sheffield Bioinformatics Core, The University of Sheffield: The Sheffield Bioinformatics Core is a relatively new Core facility at The University of Sheffield, and bcbio has been instrumental in setting-up a best-practice Bioinformatics analysis service. We employ bcbio to automate the analyses of RNA-seq, small RNA and ChiP-Seq datasets for researchers at The University of Sheffield and NIHR Biomedical Research Centre. In conjunction with the bcbioRNASeq Bioconductor package, we deliver publication-quality reports to our researchers based on reproducible analyses.
https://bcbio-nextgen.readthedocs.io/en/latest/contents/users.html
2021-04-10T11:26:26
CC-MAIN-2021-17
1618038056869.3
[]
bcbio-nextgen.readthedocs.io
Selenium PrerequisitesPrerequisites Centreon PluginCentreon Plugin Install this plugin on each needed poller: yum install centreon-plugin-Applications-Selenium Be sure that centreon-plugin-Applications-Selenium is installed and that the communication between your monitoring poller and the selenium server is OK on port 4444 using App-Selenium-Katalon-custom template : Click on the Save button. Service Macro ConfigurationService Macro Configuration The following macros must be configured on the deployed services : Click on the Save button.
https://docs.centreon.com/20.10/en/integrations/plugin-packs/procedures/applications-selenium.html
2021-04-10T11:13:22
CC-MAIN-2021-17
1618038056869.3
[]
docs.centreon.com
This page lists all pre-defined variables that you can use in your XML files. These variables are case sensitive. MapBuilderPath This variable contains the path to the folder the tool's executable ( LSHaRMB.exe) is located in. <Include Path="$(MapBuilderPath)\Rules\Zone.xml"/> GamePath This variable contains the path that you have The Simpsons: Hit & Run installed to. <InputPure3DFile Name="L2Terra" Path="$(GamePath)\art\L2_TERRA.p3d" /> ParentPath This variable contains the path to the XML file it's being used in. This can be useful for setting path variables in your main rules file that are used by files in a sub-directory. <SetVariable Name="LocalResourcesPath" Value="$(ParentPath)\Resources" /> System Environment Variables System environment variables can also be used like other variables in XML files. Note that instead of the Windows syntax %system_variable%, you would use the same syntax as other variables in this tool: $(system_variable).
https://docs.donutteam.com/docs/lucasmapbuilder/xml-format/predefined-variables
2021-04-10T11:21:18
CC-MAIN-2021-17
1618038056869.3
[]
docs.donutteam.com
You might benefit from automating all or part of the StorageGRID installation. “Automating the installation” before beginning the installation process.
https://docs.netapp.com/sgws-112/topic/com.netapp.doc.sg-install-rhel/GUID-0A049FF8-F34A-40C8-95DE-794C2EA79857.html?lang=en
2021-04-10T11:45:35
CC-MAIN-2021-17
1618038056869.3
[]
docs.netapp.com
Every Android device is distributed with a built-in launcher. However, built-in launchers generally do not provide support for icon packs. If your built-in launcher does not support the icon pack format used by Icon Pack Studio, then you won't be able to use any of the icon packs created or downloading using Icon Pack Studio. The problem can be solved by downloading a third party launcher from Play Store. Third party launchers provide extra features and in some cases, they can also improve your device resources usage. Icon your creation in the Icon Pack Studio community so nicknames are not uniquely linked to an account. This may change in future versions of Icon Pack Studio. This happens for the same reason as above. If you see this happening just use the button report to let us know about a problem. We will hopefully solve the problem in future versions.
https://docs.smartlauncher.net/other-products/iconpackstudiofaq
2021-04-10T11:14:36
CC-MAIN-2021-17
1618038056869.3
[]
docs.smartlauncher.net
Installing the application components The following topics are provided: Preparing to install the application components Before you begin, ensure that you have carefully reviewed all the information discussed in the Preparing section and Preparing an application for installation. Scope of application installation The following table lists the components and installers you need to install BMC Remedy IT Service Management (ITSM) applications. Related topics Was this page helpful? Yes No Submitting... Thank you
https://docs.bmc.com/docs/brid91/en/installing-the-application-components-825210630.html
2021-04-10T12:13:33
CC-MAIN-2021-17
1618038056869.3
[]
docs.bmc.com
Multi-site considerations Site properties and multi-site permissions The Site class provides methods to set a site's properties, site owner of an object, and access to an object from other sites. However, as a best practice, administrators should perform these functions in Brightspot. See Creating sites and Assigning permissions to content on multiple sites for more information. Querying with site restrictions In a multi-site environment, you may have use cases where queries need to reflect the site context. For example, the Brightspot Recent Activity widget and search panel query for objects that are owned by the site that the user selects in the Dashboard. By default, a query searches for objects across all sites unless otherwise restricted. For example, a visitor requested an article, and you want to display 10 associated articles. You can use a query field in the model, run the query in the view model, and display the results in the view. The view model must retrieve only articles owned by the site the visitor is at, or articles to which non-owning sites have been granted access. To accomplish this filtering, the view model class can use the Site#itemsPredicate method in the query that gets related articles.
https://docs.brightspot.com/4.2/en/developer-guide/sites/multi-site-considerations.html
2021-04-10T12:16:43
CC-MAIN-2021-17
1618038056869.3
[]
docs.brightspot.com
Rate•Give Feedback doctl projects [flags] The subcommands of doctl projects allow you to create, manage, and assign resources to your projects. Projects allow you to organize your DigitalOcean resources (like Droplets, Spaces, load balancers, domains, and floating IPs) into groups that fit the way you work. You can create projects that align with the applications, environments, and clients that you host on DigitalOcean.
https://docs.digitalocean.com/reference/doctl/reference/projects/
2021-04-10T12:59:20
CC-MAIN-2021-17
1618038056869.3
[]
docs.digitalocean.com
1 Introduction A navigation tree displays menu items of a navigation profile or menu document in the form of a tree. These items are determined by the Menu source and are either configured in the Navigation or a Menu. The menu structure of a navigation tree can have three levels, that means that menu items can have sub-items. For more information on menu items and their properties, see Menu. 2 Properties An example of navigation tree properties is represented in the image below: Navigation tree.
https://docs.mendix.com/refguide8/navigation-tree
2021-04-10T11:48:00
CC-MAIN-2021-17
1618038056869.3
[array(['attachments/menu-widgets/navigation-tree.png', 'Navigation Tree'], dtype=object) array(['attachments/menu-widgets/navigation-tree-properties.png', 'Navigation Tree Properties'], dtype=object) ]
docs.mendix.com
This tutorial explain about Background Video. To add a Video other than Background, please read next article. Only Youtube and Vimeo are supported if you are using our built in Video widget. To add video from third party like Wistia, Dailymotion etc. Please read respected Article. A default video will appear in the stripe. To change the video, click on it and a then click on Edit Video button..
https://docs.vintcer.com/add-a-video
2021-04-10T12:21:21
CC-MAIN-2021-17
1618038056869.3
[]
docs.vintcer.com
JPathway:::addItem Description Create and add an item to the pathway. Description:JPathway::addItem [Edit Descripton] public function addItem ( $name $link='' ) - Returns boolean True on success - Defined on line 149 of libraries/joomla/application/pathway.php - Since See also JPathway::addItem source code on BitBucket Class JPathway Subpackage Application - Other versions of JPathway::addItem SeeAlso:JPathway::addItem [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/API17:JPathway::addItem
2015-11-25T00:48:14
CC-MAIN-2015-48
1448398444138.33
[]
docs.joomla.org
DEPARTMENT OF STATE DIVISION OF PROTOCOL March 27, 1939 Memorandum for Colonel Watson: In drawing up detailed plans in connection with the visit of the King and Queen of Great Britain, we have come to the matter of visiting "The Battery" in New York City. In one of the conferences the President stated that he would get in touch with Governor Lehman and Mayor LaGuardia about the kind of ceremony to be held, but last Thursday the President indicated that he had not yet written to these officials, and he intimated that he might have them come to Washington for a short conference in regard to the matter. Possibly the President might wish to telephone the Governor and the Mayor. Also, we know nothing as yet about the ceremony to take place at Columbia University, in honor of the King and the Queen, on the afternoon of June 10th. George T. Summerlin
http://docs.fdrlibrary.marist.edu/psf/box32/t304q03.html
2015-11-25T00:13:37
CC-MAIN-2015-48
1448398444138.33
[]
docs.fdrlibrary.marist.edu