Dataset columns: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars).
Controllers¶

Controllers are a great way to keep code clean. From here onwards, only controller classes (not closures) will be treated as controllers; closure controllers will be referred to specifically as closure controllers (see Basics). Controllers help to increase the reusability of code, and they help to group code better based on functionality.

Basics¶

You must specify a controller directly from the router, or pass the router to a controller.

To specify a controller from the router:

    $app = new App();
    $app->route('name', function($r) {
        $r->get('/', 'IndexController');
        $r->get('/contact', 'IndexController@contact');
    }, 'parent', $args);

IndexController@contact means the contact method of IndexController. Here a router instance is passed to $r, and using $r->get() we set the route / to IndexController; since no method is specified, it will use the default index method. Next we set /contact to the contact method of IndexController. @ is the separator that identifies the controller from the action (action is another name commonly used for methods inside a controller).

Or we pass the router to the controller:

    $app = new App();
    $app->route('admin', 'AdminController');

Here we passed the router to AdminController. More about controller routing is available in the Controller Routing section.

Creating a controller¶

A controller must be inside the controller namespace, which is MyApp\Controllers by default. If you look at the MyApp/Controllers directory you can see a file ErrorsController. Don't delete that file; it is your responsibility to take care of it. You can edit it any way you want, but don't delete it.

Back to creating a controller: the controller should extend Briz\Concrete\BController. If you don't do so, nothing is wrong and it will still work, but you will not get access to some helper methods such as renderer() and show404().

The basic structure of a controller is as follows:

    namespace MyApp\Controllers;

    use Briz\Concrete\BController;

    class IndexController extends BController
    {
        /**
         * Index Page
         */
        public function index()
        {
            $this->renderer('index', ['param' => 'value']);
        }

        /**
         * Contact page.
         *
         * @Use app
         */
        public function contact($app)
        {
            $data = $app;
            $this->renderer('index', ['name' => $app]);
        }
    }

This is our IndexController for the routes we defined above in Basics. The index method will match the route /, and the contact method will match the route /contact if used with the routes defined above. If you want to know what the @Use in the docblock above the contact function is, read the section below.

Dependency Injection and Passing Values¶

Dependencies are the components used by Briz. In Briz, adding a dependency is very easy: dependencies are added using the config files in the config directory. There is a container for storing all the dependencies, and every value stored in this container can be accessed.

Consider two routes, /profile and /profile/{name}, for the GET method (see Route Patterns), which point to the methods index and showProfile in the controller ProfileController:

    namespace MyApp\Controllers;

    use Briz\Concrete\BController;

    /**
     * Profile controller
     *
     * @Use auth
     */
    class ProfileController extends BController
    {
        /**
         * Index Page
         *
         * @Use membership
         */
        public function index($mem)
        {
            $min = $mem->type();
            $name = $this->auth->getname();
            $this->renderer('profile', ['name' => $name, 'mem' => $min]);
        }

        /**
         * show profile.
         *
         * @Use app_name
         */
        public function showProfile($app, $name)
        {
            $data = $app;
            $this->renderer('index', ['name' => $app, 'user' => $name]);
        }
    }

We can pass dependencies using @Use in a docblock. It specifies which components should be loaded from the container. By default you will have the request and response dependencies injected; you can pass more using @Use. Here we use two imaginary components, auth and membership.

If we want a dependency to be available everywhere in the class, @Use can be placed above the class, as in the example above. In that case it will be available in the form $this->component inside a method, where component is the key for the value stored in the container. When the dependency is only needed inside a method, we can pass it using @Use above the method; in that case it will be passed to the function arguments. If there are two injections, they will be passed to the first two function parameters in order. This is done by the internal ControllerResolver, which resolves the controller for the router.

The showProfile method has one parameter from @Use and one from /profile/{name}; the parameter from the route will be stored in $name in that method. This means the @Use parameters are resolved before route parameters.

There is no limit on what can be stored in the container. Here app_name is a string storing the application name; you can edit it inside config/application.php.

Note: if you don't want to use, or don't like using, the @Use annotation to get values, you can simply use something like $mem = $this->container->get('membership') inside the method. The container holds references to all dependencies, and you can access it using the $this->container->get('key') method. More about this at Container Reference.

Available Methods¶

The following methods are available from the BController class. If you create a new Response object, there must be a return statement with that Response, or you must set $this->response to that object.

show404¶

Displays the 404 page. You can edit its default look inside ErrorsController.

    show404()

Show a 404 response message. Usage:

    public function show($name)
    {
        if ($this->notFound($name)) {
            return $this->show404();
        }
    }

redirect¶

    redirect($url, $code = 302)

Redirect to a given URL. Usage:

    public function show($name, $redirectUrl)
    {
        if ($this->isAuthorized) {
            return $this->redirect($redirectUrl);
        }
    }

renderer¶

Uses the selected renderer to process the input and generate a response.

    renderer()

The minimum number of arguments is two, but it can take more than that based on the number of responses. Basic usage:

    public function show($name, $redirectUrl)
    {
        $params = $arrayOrObject;
        $this->renderer('hello', $params);
    }
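To tie the pieces above together, here is a minimal sketch (not from the Briz documentation) of a controller action that combines a route parameter with the show404() and renderer() helpers described above. The userExists() helper and the assumed route are hypothetical placeholders; only the BController helpers come from the docs.

```php
<?php
// File: MyApp/Controllers/ProfileController.php (illustrative sketch)
// Assumed route, defined elsewhere in the router setup:
//   $r->get('/profile/{name}', 'ProfileController@show');

namespace MyApp\Controllers;

use Briz\Concrete\BController;

class ProfileController extends BController
{
    /**
     * Show a profile page, or the 404 page when the user is unknown.
     */
    public function show($name)
    {
        if (!$this->userExists($name)) {   // hypothetical helper
            return $this->show404();       // BController helper
        }
        // Render the 'profile' view with the route parameter.
        $this->renderer('profile', ['name' => $name]);
    }

    private function userExists($name)
    {
        // Placeholder lookup; a real app would check its user storage here.
        return $name !== '';
    }
}
```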
http://briz.readthedocs.io/en/latest/controller.html
2018-02-18T03:26:01
CC-MAIN-2018-09
1518891811352.60
[]
briz.readthedocs.io
Docs

Setup: Installfest. Instructions for installing Ruby and Rails on your computer. You need to complete these steps before starting a Rails workshop!

Message Board: Build a message board! This curriculum is for students who have completed the Suggestotron and the Job Board curricula. This curriculum is a challenge because it won't tell you what to type in!

Testing Rails Applications: Increase the stability of your Rails app by learning about tests: what they are, why they're used, and how to use them! This curriculum is for students who have completed the Suggestotron, the Job Board, and the Message Board curricula. There will be challenges!

Frontend Javascript

Javascript Snake Game: A beginner JavaScript-specific curriculum that walks you through building a simple game.

Javascript To Do List: An intermediate all-JavaScript curriculum that builds a simple to-do list application using AJAX and jQuery.

Javascript To Do List With React: An advanced all-JavaScript curriculum that builds a simple to-do list application using AJAX, jQuery, and React. Meant for students with some exposure to JavaScript.

Ruby

Ruby: A Ruby-specific curriculum, expanded from the "Ruby for Beginners" slide deck. Still new, with room for your contributions.
https://docs.railsbridgecapetown.org/docs/
2018-02-18T02:34:02
CC-MAIN-2018-09
1518891811352.60
[]
docs.railsbridgecapetown.org
Installation

1. Download the latest Mac OS X Installer.
2. Double-click the downloaded file to run the installer. You will initially be prompted with a license agreement to accept.
3. Drag and drop the application into the Applications folder on your Mac as prompted.

PhoneGap Desktop is now installed and ready to run.
http://docs.phonegap.com/references/desktop-app/install/mac/
2018-02-18T03:27:25
CC-MAIN-2018-09
1518891811352.60
[]
docs.phonegap.com
Installing the Visual Studio SDK Note This article applies to Visual Studio 2015. If you're looking for the latest Visual Studio documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2019. Download it here Starting in Visual Studio 2015, you do not install the Visual Studio SDK from the download center. It is included as an optional feature in Visual Studio setup. You can also install the VS SDK later on. Installing the Visual Studio SDK as Part of a Visual Studio Installation If you’d like to include the VSSDK in your Visual Studio installation, you must do a custom installation. Note In the installation executable, the Visual Studio SDK is called Visual Studio Extensibility Tools. Start the Visual Studio 2015 installation. You can install any edition of Visual Studio except Express. On the first screen, select Custom, not Default. Click Next. You should see a tree view of custom features. Open Common Tools. You should see Visual Studio Extensibility Tools . Check Visual Studio Extensibility Tools , then click Next and continue the installation. Installing the Visual Studio SDK after Installing Visual Studio If you decide to install the Visual Studio SDK after completing your Visual Studio installation, you should follow the following procedure: Go to Control Panel / Programs / Programs and Features, and look for Visual Studio 2015. You can install the Visual Studio SDK for any edition of Visual Studio 2015 except Express. Right-click Visual Studio 2015, and then click Change. You should see the installation page. Follow the same procedure as in Installing the Visual Studio SDK as Part of a Visual Studio Installation above. Click the Visual Studio Extensibility Tools link to install the Visual Studio SDK. Installing the Visual Studio SDK from a Solution If you open a solution with an extensibility project without first installing the VSSDK, you will be prompted by a highlighted information bar above the Solution Explorer. It should look something like the following: Installing the Visual Studio SDK from the Command Line You can install the VSSDK from the command line by using the /InstallSelectableItems switch with the Visual Studio installer. For details about using command-line parameters with the installer, see Installing Visual Studio 2015. Here is how to install the VSSDK silently using the Visual Studio 2015 Community installer: vs_community.exe /s /installSelectableItems VS_SDK_GROUPV1 Note that you must use the Visual Studio installer that matches your installed version of Visual Studio. For example, if you have Visual Studio Enterprise installed on your computer, you must run the Visual Studio Enterprise installer (vs_enterprise.exe).
https://docs.microsoft.com/en-us/visualstudio/extensibility/installing-the-visual-studio-sdk?view=vs-2015
2019-08-17T12:11:40
CC-MAIN-2019-35
1566027312128.3
[array(['media/solutionexplorerinstall.png?view=vs-2015', 'SolutionExplorerInstall SolutionExplorerInstall'], dtype=object)]
docs.microsoft.com
start
Specifies the oldest time to be included in the results. Relative start times are defined using negative durations. Negative durations are relative to now. Absolute start times are defined using timestamps. Data type: Duration or Timestamp

stop
Specifies the newest time to be included in the results. Defaults to now. Relative stop times are defined using negative durations. Negative durations are relative to now. Absolute stop times are defined using timestamps. Data type: Duration or Timestamp

Flux only honors RFC3339 timestamps and ignores dates and times provided in other formats.
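As an illustration of the two forms (not taken from this page; the bucket name is a placeholder), a range() call with relative durations and one with absolute RFC3339 timestamps might look like this:

```flux
// Relative: everything from 12 hours ago up to 15 minutes ago
from(bucket: "example-bucket")
  |> range(start: -12h, stop: -15m)

// Absolute: RFC3339 timestamps
from(bucket: "example-bucket")
  |> range(start: 2019-08-01T00:00:00Z, stop: 2019-08-02T00:00:00Z)
```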
https://v2.docs.influxdata.com/v2.0/reference/flux/functions/built-in/transformations/range/
2019-08-17T10:47:48
CC-MAIN-2019-35
1566027312128.3
[]
v2.docs.influxdata.com
How to get PayPal to redirect users back to your website after successful payment

With Auto Return for Website Payments, your buyers are automatically redirected to your website after payment completion. Here are the steps to turn Auto Return on:

Step 1: Log in to your business PayPal account. Note: You must use a PayPal Business account to log in, since Personal accounts do not have this option.

Step 2: Click on the Profile subtab under My Account.

Step 3: Under the Selling online section, click the Update link to the right of Website preferences. The Website Preferences page appears.

Step 4: Under Auto Return for Website Payments, select the On radio button to enable Auto Return.

Step 5: Enter the URL that will be used to redirect your buyers after successful payments in the Return URL field. Note: this URL is not critical because the return URL has already been set in the code of the theme; you can enter your site's URL into the Return URL field instead.

Step 6: Scroll to the bottom of the page and click the Save button to complete your changes.

Step 7: Finally, use this PayPal account as the PayPal email on your site (Engine Settings > Settings > Payment > Payment Gateways > PayPal).
https://docs.enginethemes.com/article/515-how-to-get-paypal-auto-redirect-user-after-successful-payment
2019-08-17T11:04:49
CC-MAIN-2019-35
1566027312128.3
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/576a5b329033601c8a8eccc8/file-IFes6UxklY.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/576a5a7a9033601c8a8eccc5/file-9H3yYFCeeZ.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56c6e208c697915005a72a5f/images/576a5756c6979172d75a6fe8/file-8Op6ijlIbx.png', None], dtype=object) ]
docs.enginethemes.com
The type field identifies your log type. Logz.io parses logs based on type. For example, if a log type is apache_access, Logz.io automatically parses these logs as Apache access logs. This table shows the built-in log types that Logz.io supports. If you don’t see your log type here, you can create custom data parsing using the data parsing wizard.
https://docs.logz.io/user-guide/log-shipping/built-in-log-types.html
2019-08-17T11:22:16
CC-MAIN-2019-35
1566027312128.3
[]
docs.logz.io
Makes the most recently extracted character available again. Then the function behaves as an UnformattedInputFunction. After constructing and checking the sentry object, if any ios_base::iostate flags are set, the function sets failbit and returns. Otherwise, calls rdbuf()->sungetc(). If rdbuf()->sungetc() returns Traits::eof(), calls setstate(badbit). In any case, sets the gcount() counter to zero.

Parameters: (none).

Example:

    #include <sstream>
    #include <iostream>

    int main()
    {
        std::istringstream s1("Hello, world.");
        char c1 = s1.get();
        if (s1.unget()) {
            char c2 = s1.get();
            std::cout << "Got: " << c1 << " got again: " << c2 << '\n';
        }
    }

Output:

    Got: H got again: H
https://docs.w3cub.com/cpp/io/basic_istream/unget/
2019-08-17T10:38:04
CC-MAIN-2019-35
1566027312128.3
[]
docs.w3cub.com
class Firewall implements EventSubscriberInterface

Firewall uses a FirewallMap to register security listeners for the given request. It allows for different security strategies within the same application (for instance, Basic authentication for the /api area and a web-based authentication for everything else).

getSubscribedEvents() returns an array of event names this subscriber wants to listen to. The array keys are event names and the value can be:

- the method name to call (priority defaults to 0)
- an array composed of the method name to call and the priority
- an array of arrays composed of the method names to call and respective priorities, or 0 if unset

For instance:
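A minimal sketch of what such a subscriber can return (illustrative only; the chosen events, priorities, and method names are placeholders and are not taken from the Firewall class itself):

```php
<?php

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;

class ExampleSubscriber implements EventSubscriberInterface
{
    public static function getSubscribedEvents()
    {
        return [
            // method name only (priority defaults to 0)
            KernelEvents::RESPONSE => 'onKernelResponse',
            // method name plus an explicit priority
            KernelEvents::REQUEST => ['onKernelRequest', 8],
            // several methods, each with its own priority
            KernelEvents::EXCEPTION => [['logException', 10], ['handleException', -10]],
        ];
    }

    public function onKernelResponse($event) {}
    public function onKernelRequest($event) {}
    public function logException($event) {}
    public function handleException($event) {}
}
```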
https://docs.w3cub.com/symfony~4.1/symfony/component/security/http/firewall/
2019-08-17T11:37:43
CC-MAIN-2019-35
1566027312128.3
[]
docs.w3cub.com
GTFS-Flex routing

Many agencies run flexible services to complement their fixed-route service. "Flexible" service does not follow a strict timetable or route. It may include any of the following features: boardings or alightings outside its scheduled timetable and route; booking and scheduling in advance; or transit parameters which depend on customer requests ("demand-responsive transit" or DRT). These services are typically used in rural areas or for mobility-impaired riders.

A GTFS extension called GTFS-Flex defines how to model some kinds of flexible transit. A subset of GTFS-Flex has been implemented in OpenTripPlanner as part of US DOT's Mobility-on-Demand Sandbox Grant. In particular, OTP now has support for these modes of GTFS-Flex:

- "flag stops", in which a passenger can flag down a vehicle along its route to board, or alight in between stops
- "deviated-route service", in which a vehicle can deviate from its route within an area or radius to do a dropoff or pickup
- "call-and-ride", which is an entirely deviated, point-to-point segment.

These modes can co-exist with fixed-route transit, and with each other. For example, some agencies have fixed-route services that start in urban areas, where passengers must board at designated stops, but end in rural areas where passengers can board and alight wherever they please. A fixed-route service may terminate in a defined area where it can drop off passengers anywhere -- or have such an area at the beginning or middle of its route. A vehicle may be able to deviate a certain radius outside its scheduled route to pick up or drop off passengers. If both a pickup and dropoff occur in between scheduled timepoints, from the passenger's perspective, the service may look like a call-and-ride trip. Other call-and-ride services may operate more like taxis, in which all rides are independently scheduled.

Configuration

In order to use flexible routing, an OTP graph must be built with a GTFS-Flex dataset and OpenStreetMap data. The GTFS data must include shapes.txt. In addition, the parameter useFlexService: true must be added to router-config.json.

A number of routing parameters can be used to control aspects of flexible service. These parameters typically change the cost of using various flexible services relative to fixed-route transit. All flex-related parameters begin with the prefix "flex" and can be found in the Javadocs for RoutingRequest.java. The following example router-config.json enables flexible routing and sets some parameters:

    {
      "useFlexService": true,
      "routingDefaults": {
        "flexCallAndRideReluctance": 3,
        "flexMaxCallAndRideSeconds": 7200,
        "flexFlagStopExtraPenalty": 180
      }
    }

Implementation

The general approach of the GTFS-Flex implementation is as follows: prior to the main graph search, special searches are run around the origin and destination to discover possible flexible options. One search is with the WALK mode, to find flag stops, and the other is in the CAR mode, to find deviated-route and call-and-ride options. These searches result in the creation of temporary, request-specific vertices and edges. Then, the graph search proceeds as normal. Temporary graph structures are disposed at the end of the request's lifecycle.

For flag stops and deviated-route service, timepoints in between scheduled locations are determined via linear interpolation. For example, say a particular trip departs stop A at 9:00am and arrives at stop B at 9:30am.
A passenger would be able to board 20% of the way in between stop A and stop B at 9:06am, since 20% of 30 minutes is 6 minutes. For deviated-route service and call-and-ride service, the most pessimistic assumptions of vehicle travel time are used -- e.g. vehicle travel time is calculated via the drt_max_travel_time formula in the GTFS-Flex (see the spec here).
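To make the interpolation rule above concrete, here is a small illustrative sketch (not OTP code; the times and the 20% fraction simply mirror the worked example in the text):

```java
// Linear interpolation of a flag-stop boarding time between two scheduled stops.
public class FlagStopInterpolation {
    public static void main(String[] args) {
        int departureA = 9 * 3600;            // 09:00:00, in seconds since midnight
        int arrivalB   = 9 * 3600 + 30 * 60;  // 09:30:00
        double fraction = 0.20;               // passenger stands 20% of the way from A to B

        int boardTime = departureA + (int) Math.round(fraction * (arrivalB - departureA));
        System.out.printf("Boarding time: %02d:%02d:%02d%n",
                boardTime / 3600, (boardTime % 3600) / 60, boardTime % 60);
        // Prints "Boarding time: 09:06:00", matching the example above.
    }
}
```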
http://docs.opentripplanner.org/en/latest/Flex/
2019-08-17T11:23:44
CC-MAIN-2019-35
1566027312128.3
[]
docs.opentripplanner.org
The following error typically occurs when an agent on a Solaris or Linux system is not returning data or cannot be added to Uptime Infrastructure Monitor:

    uptime agent daemon.error: Permission denied

This problem may also be detected by the presence of the following log message in the /var/adm/messages or /var/log/messages files:

    Oct 30 10:48:07 hostname inetd[6402]: [ID 388736 daemon.error] execv /opt/uptime-agent/bin/uptmagnt: Permission denied

This error is generally caused by inetd attempting to run the uptmagnt binary as one user when the binary is owned by another user. In the example above, we can find that the /opt/uptime-agent/bin/uptmagnt file is owned by nobody:

    # ls -l /opt/uptime-agent/bin/uptmagnt
    -rwxr--r-- 1 nobody nobody 62736 Oct 2 16:08 uptmagnt

However, the inetd.conf file shows that the uptmagnt is to be run as the uptime user:

    # grep uptmagnt /etc/inetd.conf
    # *** Installed by the up.time pkgadd command (uptmagnt)
    uptmagnt stream tcp nowait uptime /opt/uptime-agent/bin/uptmagnt

By changing the inetd.conf line to run the uptmagnt as nobody instead of uptime and then restarting inetd, the agent should start responding normally.
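For clarity, the corrected inetd.conf entry would look something like the following sketch, which simply swaps the user field from uptime to nobody in the line shown above (adjust to whichever user actually owns the binary on your system):

```
# /etc/inetd.conf -- corrected entry (sketch): run uptmagnt as the binary's owner
uptmagnt stream tcp nowait nobody /opt/uptime-agent/bin/uptmagnt
```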
http://docs.uptimesoftware.com/exportword?pageId=4555093
2019-08-17T11:21:36
CC-MAIN-2019-35
1566027312128.3
[]
docs.uptimesoftware.com
1.1.0 Release Notes

Bug Fixes

Discovering a new column in the MongoDB reader can cause a schema change error
If a new top-level column was discovered after the first batch, a Schema Change Error was thrown, but the new column was not added to the known schema, so it would fail again the next time. This is fixed so that this case will not throw a schema change error.

Execution threads can get into a bad state if a specific exception is thrown
When an execution task is ready to yield and an exception is thrown inside the retire method of FragmentExecutor, we failed to handle it properly, putting the task into a bad state that prevented it from running and kept logging the same error over and over again. The fix ensures executing tasks are always in a consistent state even when such exceptions are thrown.

Deleting a container like a source, space or a folder should delete the reflections of the datasets under the container
When a container like a source, space or a folder was deleted, the underlying datasets were removed, but any reflections on the datasets were not. This is fixed by scheduling a periodic cleanup task which removes reflections whose related datasets no longer exist. The cleanup is scheduled to run every 4 hours and on restart.

Cannot add S3 source with a space in its name
The fix addresses a failure while attempting to register an S3 source with special characters in the name.

Issue while parsing text files with an empty first line
When working with a set of files containing empty files, or files whose first N lines are empty, processing of those text files would fail. This fix ensures such cases don't cause query failure.

Unable to apply functions with Array output on top of each other
This behavior was caused by an issue with handling complex return types from functions being inferred as BOOLEAN versus ANY.

No logging showing for YARN containers
When provisioning Dremio executors via YARN, no logging related to Dremio would show up in the YARN containers.

Failed to query Parquet files with different data types for the same column using the vectorized Parquet reader
Dremio supports having the same column with different types in different files and exposes this field as a "Mixed Type" field. When working with Parquet files, however, queries would fail when converting the column into a "Mixed Type" field. This behavior is now fixed.

Simple limit queries are now optimized to read in a single thread
Dremio was reading in multiple threads for simple limit queries that involve no joins or aggregations (e.g. SELECT * FROM hive.employee LIMIT 20), causing delays in response. This behavior is now fixed to read the table in a single thread.

DateTime functions returning incorrect results
Issues fixed are in the functions date_trunc and CAST(varcharTypeCol AS INTERVAL SECOND).

date_sub and date_add should return the same type for the same input
The date_sub and date_add functions were not returning the same type of output: date_add was returning a timestamp whereas date_sub was returning a date type.

Implement projection push-down for Excel (XLSX, XLS) readers
The Excel record readers did not have the capability to project a handful of columns, but the plan rule was pushing down 'project'. This would lead to frequent schema-learning-related problems when working with Excel files.

Recreating datasets from Parquet files with the same path
Fixed usage of an invalid cached Parquet file footer when re-creating a dataset from a completely different Parquet file that happens to have the same path name as the Parquet file used to create the dataset initially.

Improved support for special characters
Fixed particular cases where certain characters in the names of sources, spaces, and datasets would prevent certain functionality from working.

Fixed status indicators for reflections and jobs
Fixed issues where, in particular situations, reflections and jobs would show an incorrect status.

Fixed issue where a new source could not be saved if the first attempt was not successful
If an attempt to create a source failed (e.g. because of incorrect credentials), the source was not created but its name became reserved by the system. Now correcting the configuration of the source will allow you to save it with the desired name.

New Features

Sample Data Source
Administrators who haven't set up any sources yet now have the option to add sample data with a single click.

Cache hasAccessPermission by source/user according to metadata policy
Under each source configuration, administrators can define for how long permission checks should be cached. This parameter was unused before 1.1.0. Permission checks, per user per table, are now cached for each user up to the defined duration. Each coordinator maintains a cache of 10,000 permission checks.
https://docs.dremio.com/release-notes/110-release-notes.html
2019-08-17T11:08:42
CC-MAIN-2019-35
1566027312128.3
[]
docs.dremio.com
[EstateEngine] Introduction to EstateEngine

What is EstateEngine?
EstateEngine is a directory-style WordPress theme for running a real estate business. The properties listed for rent or sale are mainly land, apartments, houses, and villas. As the admin of the site, you can charge users for posting their properties on your site: by creating payment plans, users can choose a posting package with a detailed description. You can also designate areas for displaying advertisements. Besides the desktop version, EstateEngine also supports tablet and mobile versions, so you never have to worry about missing users on mobile devices.

What can users do?
As a user, you can do a lot of things on the site:
- Register an account
- Log in via social network accounts
- Submit a property with detailed information and photos/videos, property descriptions, features, floor plan, and notes
- Claim a place
- Share a place via social media
- Add a place to a favorites list
- Review other places
- Contact other owners by sending a private message to the author's address right below their ad
- Do an advanced search with a variety of criteria
https://docs.enginethemes.com/article/213-introduction-to-estateengine
2019-08-17T10:29:01
CC-MAIN-2019-35
1566027312128.3
[]
docs.enginethemes.com
Documentation Add-ons and Extensions Reference¶

This documentation is created using the Sphinx application, which renders text source files in reStructuredText (.rst) format located in the docs directory. For some more details on that process, please refer to section Documenting Code.

Besides Sphinx, there are several other applications that help to provide nicely formatted and easy to navigate documentation. These applications are listed in section Setup for building documentation locally, with the installed version numbers provided in file docs/requirements.txt.

On top of that, we have created a couple of custom add-ons and extensions to help integrate documentation with the underlying ESP-IDF repository and further improve navigation as well as maintenance of documentation. The purpose of this section is to provide a quick reference to the add-ons and the extensions.

Documentation Folder Structure¶

- The ESP-IDF repository contains a dedicated documentation folder docs in the root.
- The docs folder contains localized documentation in docs/en (English) and docs/zh_CN (simplified Chinese) subfolders.
- Graphics files and fonts common to localized documentation are contained in the docs/_static subfolder.
- The remaining files in the root of docs, as well as in docs/en and docs/zh_CN, provide configuration and scripts used to automate documentation processing, including the add-ons and extensions.
- Several folders and files are generated dynamically during documentation builds and placed primarily in docs/[lang]/_build folders. These folders are temporary and not visible in the ESP-IDF repository.

Add-ons and Extensions Reference¶

- docs/conf_common.py
  This file contains configuration common to each localized documentation (e.g. English, Chinese). The contents of this file are imported into the standard Sphinx configuration file conf.py located in the respective language folders (e.g. docs/en, docs/zh_CN) during the build for each language.
- docs/check_doc_warnings.sh
  If there are any warnings reported during the documentation build, then the build fails. The warnings should be resolved before merging any documentation updates. This script checks for warnings in the respective log file in order to fail the build. See also the description of sphinx-known-warnings.txt below.
- docs/check_lang_folder_sync.sh
  To reduce potential discrepancies when maintaining concurrent language versions, the structure and filenames of the docs/en and docs/zh_CN language folders should be kept identical. The script check_lang_folder_sync.sh is run on each documentation build to verify that this condition is met.
  Note: If new content is provided in, for example, English and there is no translation yet, then the corresponding file in the zh_CN folder should contain an .. include:: directive pointing to the source file in English. This will automatically include the English version visible to Chinese readers. For example, if a file docs/zh_CN/contribute/documenting-code.rst does not have a Chinese translation, then it should contain .. include:: ../../en/contribute/documenting-code.rst instead.
- docs/docs_common.mk
  Contains the common code which is included into the language-specific Makefiles. Note that this file contains a couple of customizations compared to what is provided within a standard Sphinx installation, e.g. the gh-linkcheck command has been added.
- docs/gen-dxd.py
  A Python script that generates API reference files based on Doxygen xml output. The files have an inc extension and are located in the docs/[lang]/_build/inc directory created dynamically when documentation is built. Please refer to Documenting Code and API Documentation Template, section API Reference, for additional details on this process.
- docs/gen-toolchain-links.py
  There are a couple of places in the documentation that provide links to download the toolchain. To provide one source of this information and reduce the effort of manually updating several files, this script generates toolchain download links and toolchain unpacking code snippets based on information found in tools/toolchain_versions.mk.
- docs/gen-version-specific-includes.py
  Another Python script to automatically generate reStructuredText .inc snippets with version-based content for this ESP-IDF version.
- docs/html_redirects.py
  During the documentation's lifetime, some source files are moved between folders or renamed. This Python script adds a mechanism to redirect documentation pages whose URL has changed by generating static HTML redirect pages in the Sphinx output. The script is used together with a redirection list html_redirect_pages defined in file docs/conf_common.py.
- docs/link-roles.py
  This is an implementation of custom Sphinx Roles to help link from documentation to specific files and folders in ESP-IDF. For a description of the implemented roles, please see Linking Examples and Linking Language Versions.
- docs/local_util.py
  A collection of utility functions useful primarily when building documentation locally (see Setup for building documentation locally) to reduce the time to generate documentation on second and subsequent builds. The utility functions check which Doxygen xml input files have been changed and copy these files to destination folders, so only the changed files are used during the build process.
- docs/sphinx-known-warnings.txt
  There are a couple of spurious Sphinx warnings that cannot be resolved without updating the Sphinx source code itself. For such specific cases, the respective warnings are documented in the sphinx-known-warnings.txt file, which is checked during the documentation build in order to ignore the spurious warnings.
- tools/gen_esp_err_to_name.py
  This script traverses the ESP-IDF directory structure looking for error codes and messages in source code header files to generate an .inc file to include in documentation under Error Codes Reference.
- tools/kconfig_new/confgen.py
  Options to configure ESP-IDF's components are contained in Kconfig files located inside the directories of individual components, e.g. components/bt/Kconfig. This script traverses the component directories to collect configuration options and generate an .inc file to include in documentation under Configuration Options Reference.
https://docs.espressif.com/projects/esp-idf/en/latest/contribute/add-ons-reference.html
2019-08-17T11:04:11
CC-MAIN-2019-35
1566027312128.3
[]
docs.espressif.com
Enterprise Recon 2.0.27
Linux Agent

Install the Node Agent
- On your Web Console, go to DOWNLOADS > NODE AGENT DOWNLOADS.
- On the Node Agent Downloads page, click on the Filename for your Platform.

Select an Agent Installer
Select an Agent installer based on the Linux distribution of the host you are installing the Agent on. The following is a table of installation packages available at DOWNLOADS > NODE AGENT DOWNLOADS.

Run uname -r in the terminal of the Agent host to display the operating system kernel version. For example, running uname -r on a CentOS 6.9 (64-bit) host displays 2.6.32-696.16.1.el6.x86_64. This tells us that it is running a 64-bit Linux 2.6 kernel.
- Examples of Debian-based distributions are Debian, Ubuntu, and their derivatives.
- Examples of RPM-based distributions are CentOS, Fedora, openSUSE, Red Hat and its derivatives.

Debian-based Linux Distributions
To install the Node Agent on Debian or similar Linux distributions:

    # Install Linux Agent, where 'er2_2.0.x-linux26-x64.deb' is the location of the deb package on your computer.
    dpkg -i er2_2.0.x-linux26-x64.deb

RPM-based Linux Distributions
To install the Node Agent on RPM-based or similar Linux distributions:

    # Remove existing ER2 packages
    rpm -e er2
    # Install Linux Agent, where 'er2-2.0.x-linux26-rh-x64.rpm' is the location of the rpm package on your computer.
    rpm -ivh er2-2.0.x-linux26-rh-x64.rpm

Install GPG Key for RPM Package Verification
From ER 2.0.19, Node Agent RPM packages are signed with a Ground Labs GPG key. For instructions on how to import GPG keys, see GPG Keys (RPM Packages).

Use Custom Configuration File
To run the Node Agent using a custom configuration file:

Generate a custom configuration file:

    # Where 'custom.cfg' is the location of the custom configuration file.
    # Run the interactive configuration tool.
    er2-config -c custom.cfg -interactive

    # (Optional) Manual configuration.
    er2-config -i <hostname|ip_address> [-t] [-k <master_server_key>] [-g <target_group>]

    ## Required
    # -i : MASTER SERVER ip or host name.

    ## Optional parameters
    # -t : Tests if NODE AGENT can connect to the given host name or ip address.
    # -k <master server key> : Sets the Master Public Key.
    # -g <target group> : Sets the default TARGET GROUP for scan locations added for this AGENT.

Change the file owner and permissions for the custom configuration file:

    chown erecon:root custom.cfg
    chmod 644 custom.cfg

Restart the Node Agent, starting it with the custom configuration flag -c:

    er2-agent -c custom.cfg -start

To check which configuration file the Node Agent is using:

    ps aux | grep er2
    # Displays output similar to the following, where 'custom.cfg' is the configuration file used by the 'er2-agent' process:
    # erecon 2537 0.0 2.3 32300 5648 ? Ss 14:34 0:00 er2-agent -c custom.cfg -start

Upgrade Node Agents
See Agent Upgrade for more information.

Restart the Node Agent
For your configuration settings to take effect, you must restart the Node Agent:

    ## Run either of these options
    # Option 1
    /etc/init.d/er2-agent restart
    # Option 2
    er2-agent -stop   # stops the agent
    er2-agent -start  # starts the agent
https://docs.groundlabs.com/er2027/Content/Node-Agents/Linux-Agent.html
2019-08-17T10:57:28
CC-MAIN-2019-35
1566027312128.3
[]
docs.groundlabs.com
NXP LPCXpresso54608

lpc546xx is the ID for the board option in "platformio.ini" (Project Configuration File):

    [env:lpc546xx]
    platform = nxplpc
    board = lpc546xx

You can override default NXP LPCXpresso54608 settings per build environment using the board_*** option, where *** is a JSON object path from the board manifest lpc546xx.json. For example, board_build.mcu, board_build.f_cpu, etc.

    [env:lpc546xx]
    platform = nxplpc
    board = lpc546xx
    ; change microcontroller
    board_build.mcu = lpc54608et512
    ; change MCU frequency
    board_build.f_cpu = 180000000L

Uploading¶

NXP LPCXpresso54608 supports the following upload protocols: jlink, mbed. The default protocol is mbed. You can change the upload protocol using the upload_protocol option:

    [env:lpc546xx]
    platform = nxplpc
    board = lpc546xx
    ; e.g. jlink or mbed
    upload_protocol = jlink

Debugging¶

NXP LPCXpresso54608 has an on-board debug probe and IS READY for debugging. You don't need to use/buy an external debug probe.
http://docs.platformio.org/en/latest/boards/nxplpc/lpc546xx.html
2019-08-17T11:00:35
CC-MAIN-2019-35
1566027312128.3
[]
docs.platformio.org
Microduino Core STM32 to Flash

microduino32_flash is the ID for the board option in "platformio.ini" (Project Configuration File):

    [env:microduino32_flash]
    platform = ststm32
    board = microduino32_flash

You can override default Microduino Core STM32 to Flash settings per build environment using the board_*** option, where *** is a JSON object path from the board manifest microduino32_flash.json. For example, board_build.mcu, board_build.f_cpu, etc.

    [env:microduino32_flash]
    platform = ststm32
    board = microduino32_flash
    ; change microcontroller
    board_build.mcu = stm32f103cbt6
    ; change MCU frequency
    board_build.f_cpu = 72000000L

Uploading¶

Microduino Core STM32 to Flash supports the following upload protocols: blackmagic, dfu, jlink, stlink. The default protocol is dfu. You can change the upload protocol using the upload_protocol option:

    [env:microduino32_flash]
    platform = ststm32
    board = microduino32_flash
    ; e.g. one of: blackmagic, dfu, jlink, stlink
    upload_protocol = stlink

Debugging¶

Microduino Core STM32 to Flash does not have an on-board debug probe and IS NOT READY for debugging. You will need to use/buy one of the external probes listed below.
http://docs.platformio.org/en/latest/boards/ststm32/microduino32_flash.html
2019-08-17T11:43:40
CC-MAIN-2019-35
1566027312128.3
[]
docs.platformio.org
Password policy in Azure AD

Updated: June 8, 2015

Applies To: Azure, Office 365, Windows Intune

Note: This topic provides online help content for cloud services, such as Microsoft Intune and Office 365, which rely on Microsoft Azure Active Directory for identity and directory services.

This topic describes the various password policies and complexity requirements associated with the user accounts stored in your Azure AD tenant.

UserPrincipalName policies that apply to all user accounts
Every user account that needs to sign in to the Azure AD authentication system must have a unique user principal name (UPN) attribute value associated with that account. The following table outlines the policies that apply to both on-premises Active Directory-sourced user accounts (synced to the cloud) and to cloud-only user accounts.

Password policies that apply only to cloud user accounts
The following table describes the available password policy settings that can be applied to user accounts that are created and managed in Azure AD.

See Also
Concepts
Manage Azure AD using Windows PowerShell
https://docs.microsoft.com/en-us/previous-versions/azure/jj943764(v=azure.100)
2019-08-17T12:32:06
CC-MAIN-2019-35
1566027312128.3
[]
docs.microsoft.com
Plane

Constructs a plane.

Syntax
plane(Point, Segment)
plane(List)
plane(Real, Real, Real, Real)
plane(Point, Vector)
plane(Point, Vector, Vector)
plane(Point, Line)
plane(Point, Point, Point)
plane(Line, Line)

Description
plane(Point, Segment): Given a point and a segment, constructs a plane perpendicular to the segment that passes through the point.
plane(List): Given a list with at least three coplanar points, constructs a plane that passes through all the points.
plane(Real, Real, Real, Real): The general equation for a plane is Ax + By + Cz + D = 0. The four inputs are the four coefficients: A, B, C, and D.
plane(Point, Vector): Given a point and a vector, constructs a plane perpendicular to the given vector and passing through the point.
plane(Point, Vector, Vector): Given a point and two vectors, constructs the plane spanned by the given vectors and passing through the point.
plane(Point, Line): Given a point and a line, constructs the plane perpendicular to the line and passing through the point.
plane(Point, Point, Point): Constructs the plane passing through all three points.
plane(Line, Line): Given two lines, constructs a plane that contains both lines.

Related functions
Line, Perpendicular
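As a quick worked illustration of the coefficient form above (added here, assuming the inputs map to A, B, C, and D in that order), the call plane(1, 2, -1, 3) corresponds to

```latex
% A = 1, B = 2, C = -1, D = 3, so the constructed plane is
x + 2y - z + 3 = 0
```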
https://docs.wiris.com/en/calc/commands/geometry/plane
2019-08-17T11:04:39
CC-MAIN-2019-35
1566027312128.3
[]
docs.wiris.com
AssociateAssessmentReportEvidenceFolder

Associates an evidence folder to the specified assessment report in AWS Audit Manager.

Request Syntax

    PUT /assessments/assessmentId/associateToAssessmentReport HTTP/1.1
    Content-type: application/json

    {
       "evidenceFolderId": "string"
    }

- evidenceFolderId
  The identifier for the folder in which evidence is stored.
  Type: String
  Length Constraints: Fixed length of 36.
  Pattern: ^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$
  Required: Yes

Response Syntax

    HTTP/1.1 200

Response Elements

If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body.
https://docs.aws.amazon.com/audit-manager/latest/APIReference/API_AssociateAssessmentReportEvidenceFolder.html
2021-09-16T22:11:31
CC-MAIN-2021-39
1631780053759.24
[]
docs.aws.amazon.com
Date: Sat, 27 Jul 2013 03:41:35 GMT
From: FreeBSD Security Advisories <[email protected]>
To: FreeBSD Security Advisories <[email protected]>
Subject: FreeBSD Security Advisory FreeBSD-SA-13:08.nfsserver
Message-ID: <[email protected]>

FreeBSD-SA-13:08.nfsserver                                  Security Advisory
                                                          The FreeBSD Project

Topic:          Incorrect privilege validation in the NFS server
Category:       core
Module:         nfsserver
Announced:      2013-07-26
Credits:        Rick Macklem, Christopher Key, Tim Zingelman
Affects:        FreeBSD 8.3, FreeBSD 9.0 and FreeBSD 9.1
Corrected:      2012-12-28 14:06:49 UTC (stable/9, 9.2-BETA2)
                2013-07-26 22:40:23 UTC (releng/9.1, 9.1-RELEASE-p5)
                2013-01-06 01:11:45 UTC (stable/8, 8.3-STABLE)
                2013-07-26 22:40:29 UTC (releng/8.3, 8.3-RELEASE-p9)
CVE Name:       CVE-2013-4851

FreeBSD includes both server and client implementations of NFS.

II. Problem Description
The kernel incorrectly uses client-supplied credentials instead of the one configured in exports(5) when filling out the anonymous credential for an NFS export, when -network or -host restrictions are used at the same time.

III. Impact
The remote client may supply privileged credentials (e.g. the root user) when accessing a file under the NFS share, which will bypass the normal access checks.

IV. Workaround
Systems that do not provide the NFS service are not vulnerable. Systems that do provide the NFS service are only vulnerable when -mapall or -maproot is used in combination with network and/or host restrictions.
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=6789+0+archive/2013/freebsd-security-notifications/20130728.freebsd-security-notifications
2021-09-16T22:53:42
CC-MAIN-2021-39
1631780053759.24
[]
docs.freebsd.org
Julia 1.6 Documentation

Welcome to the documentation for Julia 1.6. Please read the release notes to see what has changed since the last release. The documentation is also available in PDF format: julia-1.6.2.pdf.

Introduction

Julia features optional typing, multiple dispatch, and good performance, achieved using type inference and just-in-time (JIT) compilation, implemented using LLVM. It is multi-paradigm, combining features of imperative, functional, and object-oriented programming. Julia provides ease and expressiveness for high-level numerical computing, in the same way as languages such as R, MATLAB, and Python; Julia Base and the standard library are written in Julia itself.

Although one sometimes speaks of dynamic languages as being "typeless", they are definitely not: every object, whether primitive or user-defined, has a type. The lack of type declarations in most dynamic languages, however, means that one cannot instruct the compiler about the types of values, and often cannot explicitly talk about types at all. In static languages, on the other hand, while one can – and usually must – annotate types for the compiler, types exist only at compile time and cannot be manipulated or expressed at run time. In Julia, types are themselves run-time objects, and can also be used to convey information to the compiler.

While the casual programmer need not explicitly use types or multiple dispatch, they are the core unifying features of Julia: functions are defined on different combinations of argument types, and applied by dispatching to the most specific matching definition. This model is a good fit for mathematical programming, where it is unnatural for the first argument to "own" an operation as in traditional object-oriented dispatch. Operators are just functions with special notation – to extend addition to new user-defined data types, you define new methods for the + function. Existing code then seamlessly applies to the new data types.

Partly because of run-time type inference (augmented by optional type annotations), and partly because of a strong focus on performance from the inception of the project, Julia's computational efficiency exceeds that of other dynamic languages, and even rivals that of statically-compiled languages. For large scale numerical problems, speed always has been, continues to be, and probably always will be crucial: the amount of data being processed has easily kept pace with Moore's Law over the past decades.

Julia aims to create an unprecedented combination of ease-of-use, power, and efficiency in a single language. In addition to the above, some advantages of Julia over comparable systems include:
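As a small illustration of the point about extending operators (an example added here, not part of the original page), defining a new method for + is all it takes to make existing generic code work with a user-defined type:

```julia
# A toy user-defined type and a new method for the + function.
struct Money
    cents::Int
end

Base.:+(a::Money, b::Money) = Money(a.cents + b.cents)

# Existing generic code now works with Money unchanged:
sum([Money(150), Money(75), Money(25)])   # Money(250)
```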
https://docs.julialang.org/en/v1/
2021-09-16T21:47:36
CC-MAIN-2021-39
1631780053759.24
[]
docs.julialang.org
Tutorial: Azure AD SSO integration with VergeSense

In this tutorial, you'll learn how to integrate VergeSense with Azure Active Directory (Azure AD). When you integrate VergeSense with Azure AD, you can:

- Control in Azure AD who has access to VergeSense.
- Enable your users to be automatically signed in to VergeSense with their Azure AD accounts.
- Manage your accounts in one central location: the Azure portal.

Prerequisites

To get started, you need the following items:

- An Azure AD subscription. If you don't have a subscription, you can get a free account.
- A VergeSense single sign-on (SSO) enabled subscription.

Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.

- VergeSense supports SP and IDP initiated SSO.

Add VergeSense from the gallery

To configure the integration of VergeSense into Azure AD, you need to add VergeSense from the gallery: type VergeSense in the search box, select VergeSense from the results panel, and then add the app. Wait a few seconds while the app is added to your tenant.

Configure and test Azure AD SSO for VergeSense

Configure and test Azure AD SSO with VergeSense using a test user called B.Simon. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in VergeSense.

To configure and test Azure AD SSO with VergeSense, perform the following steps:

- Configure VergeSense SSO - to configure the single sign-on settings on the application side.
- Create a VergeSense test user - to have a counterpart of B.Simon in VergeSense that is linked to the Azure AD representation of the user.
- Test SSO - to verify whether the configuration works.

Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.

In the Azure portal, on the VergeSense application integration page, the user does not have to perform any step as the app is already pre-integrated with Azure.

Click Set additional URLs and perform the following step if you wish to configure the application in SP initiated mode: in the Sign-on URL text box, type the URL.

The VergeSense application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. In addition to the above, the VergeSense application expects a few more attributes to be passed back in the SAML response.

To assign the test user to VergeSense:

- In the Azure portal, select Enterprise Applications, and then select All applications.
- In the applications list, select VergeSense.

Configure VergeSense SSO

To configure single sign-on on the VergeSense side, you need to send the downloaded Certificate (Base64) and the appropriate copied URLs from the Azure portal to the VergeSense support team. They set this setting to have the SAML SSO connection set properly on both sides.

Create VergeSense test user

In this section, you create a user called Britta Simon in VergeSense. Work with the VergeSense support team to add the users in the VergeSense platform. Users must be created and activated before you use single sign-on.

Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.

SP initiated:

- Click on Test this application in the Azure portal. This will redirect to the VergeSense sign-on URL where you can initiate the login flow.
- Go to the VergeSense sign-on URL directly and initiate the login flow from there.

IDP initiated:

- Click on Test this application in the Azure portal and you should be automatically signed in to the VergeSense for which you set up the SSO.

You can also use Microsoft My Apps to test the application in any mode.
When you click the VergeSense tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the VergeSense for which you set up the SSO. For more information about the My Apps, see Introduction to the My Apps. Next steps Once you configure VergeSense you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. Learn how to enforce session control with Microsoft Cloud App Security.
https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/vergesense-tutorial
2021-09-16T22:08:23
CC-MAIN-2021-39
1631780053759.24
[]
docs.microsoft.com
Text

(Available in all TurboCAD Variants)

Default UI Menu: Draw/Text/Text
Ribbon UI Menu:

These tools enable you to add strings of letters and other characters into your model. The text tools are available on the Insert menu, and can be accessed on the flyout toolbar on the Drawing Tools toolbar. You can also display the Text toolbar by right-clicking on any toolbar area and selecting Text.

Note: With this tool you can add single straight lines of text. To add multiple lines in paragraph format, see Multi Text. To create text that follows a curve, see Text Along Curve.

Editing Text

Inserting Text

1. Set the desired font and other text parameters. See Text Properties.
2. Click on the point where you want to place your text.
3. Type the text, using the Backspace key to make corrections. Press Enter to add a new text line. To finish, click on the drawing, press Shift+Enter, or select Finish from the local menu.

Note: By default, the text is centered at the insertion point. You can change this, however, via the Properties window, or by using the Align local menu option.

Local menu options:

Height: Changes the text height. Move the mouse to adjust the height rectangle, or enter a height in the Inspector Bar.
Angle: Adjusts the angle of the text line (not the text slant). Move the mouse to rotate the text rectangle, or enter the angle in the Inspector Bar.
http://docs.imsidesign.com/projects/TurboCAD-2019-User-Guide-Publication/TurboCAD-2019-User-Guide/Annotation/Text/
2021-09-16T21:32:17
CC-MAIN-2021-39
1631780053759.24
[array(['../../Storage/turbocad-2019-user-guide-publication/text-2019-02-13.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0002.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0003.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0004.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0006.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0007.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0008.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0009.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0010.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0011.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0012.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0013.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0014.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/7-1-text-img0015.png', 'img'], dtype=object) ]
docs.imsidesign.com
When your C++ programs depend on ROS messages or services, they must be defined by catkin packages like std_msgs and sensor_msgs, which are used in the examples below. Dependencies on these packages must be declared in your package.xml and CMakeLists.txt files to resolve message references.

For each C++ message dependency, package.xml should provide a <depend> tag with the ROS package name:

    <depend>std_msgs</depend>
    <depend>sensor_msgs</depend>
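The CMakeLists.txt side of the same declaration did not survive extraction here; a minimal sketch, assuming the same two message packages as above, would pass them to catkin's find_package() call:

```cmake
# CMakeLists.txt (sketch): resolve the std_msgs and sensor_msgs message headers
find_package(catkin REQUIRED COMPONENTS
  std_msgs
  sensor_msgs
)

# Make the generated message headers visible to your targets
include_directories(${catkin_INCLUDE_DIRS})
```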
http://docs.ros.org/en/jade/api/catkin/html/howto/format2/cpp_msg_dependencies.html
2021-09-16T21:20:00
CC-MAIN-2021-39
1631780053759.24
[]
docs.ros.org
Getting there From this dialog box, you can change the password for the designated principal. Note: Before you can change passwords, you must first define an administrative Key Distribution Center (KDC). The options are: Principal Displays the selected principal. To change the password for a different principal, select it before you open this dialog box. Old password Type the password you want to change. New password Type the new password you want to use. Verify new password Retype the new password, to verify that you entered it correctly. Related Topics Specify the Administrative KDC for a Realm
https://docs.attachmate.com/Reflection/2008/R1SP2/Guide/en/user-html/kerberos_change_password_cs.htm
2021-09-16T21:59:16
CC-MAIN-2021-39
1631780053759.24
[]
docs.attachmate.com
CreateAssessmentReport

Creates an assessment report for the specified assessment.

Request Syntax

    POST /assessments/assessmentId/reports HTTP/1.1
    Content-type: application/json

    {
       "description": "string",
       "name": "string"
    }

- description
  The description of the assessment report.
  Type: String
  Length Constraints: Maximum length of 1000.
  Pattern: ^[\w\W\s\S]*$
  Required: No
- name
  The name of the new assessment report.
  Type: String
  Length Constraints: Minimum length of 1. Maximum length of 300.
  Pattern: ^[a-zA-Z0-9-_\.]+$
  Required: Yes

Response Syntax

    HTTP/1.1 200
    Content-type: application/json

    {
       "assessmentReport": {
          "assessmentId": "string",
          "assessmentName": "string",
          "author": "string",
          "awsAccountId": "string",
          "creationTime": number,
          "description": "string",
          "id": "string",
          "name": "string",
          "status": "string"
       }
    }

Response Elements

If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service.

- assessmentReport
  The new assessment report returned by the CreateAssessmentReport API.
  Type: AssessmentReport object
https://docs.aws.amazon.com/audit-manager/latest/APIReference/API_CreateAssessmentReport.html
2021-09-16T23:13:48
CC-MAIN-2021-39
1631780053759.24
[]
docs.aws.amazon.com
method table for Type contains all constructor definitions. All subtypes of Type ( Type, UnionAll, Union, and DataType) currently share a method table via special arrangement. Builtins The "builtin" functions, defined in the Core module, are: === typeof sizeof <: NamedTuple argument prepended, which gives the names and values of passed keyword arguments. The kwsorter's job is to move keyword arguments into their canonical positions based on name, plus evaluate and substitute any needed default value expressions. The result is a normal positional argument list, which is then passed to yet another compiler-generated function. The easiest way to understand the process is to look at how a keyword argument method definition is lowered. The code: function circle(center, radius; color = black, fill::Bool = true, options...) # draw end actually produces three method definitions. The first is a function that accepts all arguments (including keyword arguments), pairs(NamedTuple()), circle, center, radius) end This simply dispatches to the first method, passing along default values. pairs is applied to the named tuple of rest arguments to provide key-value pair iteration. Note that if the method doesn't accept rest keyword arguments then this argument is absent. Finally there is the kwsorter definition: function (::Core.kwftype(typeof(circle)))(kws, circle, center, radius) if haskey(kws, :color) color = kws.color else color = black end # etc. # put remaining kwargs in `options` options = structdiff(kws, NamedTuple{(:color, :fill)}) # if the method doesn't accept rest keywords, throw an error # unless `options` is empty #circle#1(color, fill, pairs(options), circle, center, radius) end The function Core.kwftype(t) creates the field t.name.mt.kwsorter (if it hasn't been created yet), and returns the type of that function.)(merge((color = red,), other), circle, (0,0), 1.0) kwfunc (also in Core) fetches the kwsorter for the called function. The keyword splatting operation (written as other...) calls the named tuple merge function. This function further unpacks each element of other, expecting each one to contain two values (a symbol and a value). Naturally, a more efficient implementation is available if all splatted arguments are named tuples. behaves as if the @nospecialize annotation were applied. struct, @test was modified to expand to a try-catch block that records the test result (true, false, or exception raised) and calls the test suite handler on it.
https://docs.julialang.org/en/v1.0/devdocs/functions/
2021-09-16T20:57:16
CC-MAIN-2021-39
1631780053759.24
[]
docs.julialang.org
You can use SnapCenter to restore file system backups. File system restoration is a multiphase process that copies all the data from a specified backup to the original location of the file system. You cannot restore a single file from a backup because the restored file system overwrites any data on the original location of the file system. To restore a single file from a file system backup, you must clone the backup and access the file in the clone. Most of the fields on the Restore wizard pages are self-explanatory. The following information describes fields for which you might need guidance. Protecting Microsoft SQL Server databases with SnapCenter Protecting Microsoft Exchange Server databases with SnapCenter
https://docs.netapp.com/ocsc-40/topic/com.netapp.doc.ocsc-dpg-wfs/GUID-805DFBC3-5EE3-452B-A22F-BF907BE1A02C.html
2021-09-16T21:07:31
CC-MAIN-2021-39
1631780053759.24
[]
docs.netapp.com
An RDS Session can either be a desktop session or an application session to an RDS Farm. A farm can have only 1 desktop pool, but can have multiple application pools. Design Consideration This dashboard is designed to complement the RDS Farm Performance dashboard and has a similar design. It acts as the details dashboard, allowing you to drill down from farm to one of its host members. A large environment can have tens of thousands of sessions. To see live performance, use the Live! Horizon Session Performance dashboard. How to Use the Dashboard Review the table RDS Farms. Expect all of them to be in the green range. At the very least, none of them should be in the red. Select one of the entries in the table. The session performance distribution is shown in the bar charts. There are two bar charts, one for the datacenter performance and the other for network performance. The list of sessions in the farm is shown in the RDS Sessions in the Farms table. Review the table RDS Sessions in the Farms. As this is an RDS session, a single session should not be dominating the shared RDS Host. A host serving 10 sessions means that each session should be using 10% on average. The table shows the highest usage in the last 1 week. If that results in too many outliers in your case, change the metric to show the 99th percentile instead. This removes the highest 6–7 results. Select one of the entries in the table. The KPI of the selected entry is shown in the scoreboards. There are five scoreboards showing different aspects of performance. Its relevant property is shown in the property widget. Select one or more entries in the scoreboard. The line chart below the scoreboard plots the selected metrics. Use the metric chart widget to compare metrics to see if there is any correlation. Point to Note Other than the protocol metrics, each RDS session only has CPU utilization and memory utilization metric.
https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.2/Horizon/GUID-F8EC9D78-CB00-4392-8BD6-D26D39DE2270.html
2021-09-16T23:26:21
CC-MAIN-2021-39
1631780053759.24
[]
docs.vmware.com
On March 25th, 2015, Vantage 1.4.1 was released. This is a minor release including a few bug fixes. Upgrading is recommended. Fixed 4 tickets total. Upgrade Information Use the AppThemes Updater Plugin to automatically update the theme, or visit and click on My Account to download the updated version. Fixes - Show user description for users who have registered but don’t have listings yet - SEO footer link - Sharing site in Facebook (now use Default Open Graph Image control in Customizer) Changes - Updated MarketPlace Addons module Added - Default Open Graph Image control in Customizer
https://docs.appthemes.com/vantage/vantage-version-1-4-1/
2019-08-17T17:38:30
CC-MAIN-2019-35
1566027313436.2
[]
docs.appthemes.com
Creating Migration Reports with the Workload Qualification Framework AWS Workload Qualification Framework (AWS WQF) is a standalone app that is included with AWS SCT. You can use WQF to analyze your migration to the AWS Cloud. It assesses and rates the workload for the entire migration, including database and app modifications. WQF can recommend strategies and tools that you can use for your migration, and give you feedback that you can use to make changes. It can also identify actions that you need to take on a database to complete a migration to Amazon RDS or Amazon Aurora. You can use WQF for the following migration scenarios: Oracle to Amazon RDS for PostgreSQL or Aurora with PostgreSQL compatibility Oracle to Amazon RDS for MySQL or Aurora with MySQL compatibility Microsoft SQL Server to Amazon RDS PostgreSQL or Aurora PostgreSQL You can use WQF during the planning phase of your migration process to determine what you need to do to migrate your data and apps. SCT assesses your schema conversion; in contrast, WQF reports on the following: Workload assessment based on complexity, size, and technology used Recommendations on migration strategies Recommendations on migration tools Feedback on what exactly to do Assessment of the effort required based on the number of people on the migration project
https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP-WQF.html
2019-08-17T17:27:27
CC-MAIN-2019-35
1566027313436.2
[]
docs.aws.amazon.com
Part 4: Category tiles and the Product Detail page ⚠️ Note: This tutorial has been deprecated as of the release of Reaction 2.0. The latest tutorial can be found here. Building a Swag Shop: Category Tiles & Related Products In this part, we'll show you how we ... - Created visual category tiles for the Landing Page - Implemented the Related Products feature on the Product Detail Page (PDP) - Customized the default layout that comes with ReactionAvatar - Deployed our swag shop All code presented here can be found in our Swag Shop repository on GitHub. Adding category tiles to the Landing Page The purpose of category tiles is to provide users with another entry point for browsing tagged products. From a functional point of view, category tiles are identical to the default navbar tags that come with typical Reaction shops. Tiles present all of the available categories in a visually appealing way. For this to work, the tags must be enhanced so they can hold information about the category image, along with its name: /imports/plugins/custom/reaction-swag-shop/lib/collections/schemas/tags.js import Schemas from "@reactioncommerce/schemas"; import { Tags } from "/lib/collections"; const TagSchema = Schemas.Tag.extend({ catTileImageUrl: { type: String, defaultValue: "", optional: true }, catHeroImageUrl: { type: String, defaultValue: "", optional: true }, catHeroSloganI18nKey: { type: String, defaultValue: "", optional: true }, catHeroTitleI18nKey: { type: String, defaultValue: "", optional: true } }); Tags.attachSchema(TagSchema, { replace: true }); Later, I'll show you how to make this field editable in the admin backend. For now, let's assume that in our Mongo database, all documents in the Tags collection have specified a string value for catTileImageUrl, e.g. "cat-tile-women.jpg". To render the tiles on the landing page, change the Products component. First, let's render the tile section with the Shop all products image: In mobile, this image is placed before the actual category images. The markup is structured in a way to support Reaction's mobile-first approach: /imports/plugins/custom/reaction-swag-shop/client/components/product-variant/customer/productGrid.js renderCategories() { return ( <div className={"categories row"}> <div className={"cat-tile col-xs-12 col-sm-push-4 col-sm-4"}> <div className={"pic-essentials"}> <div className={"btn-essentials"}> <Components.Button className={"btn-blue"} label={this.shopAllLabel()} bezelStyle={"solid"} primary={false} type="button" onClick={this.heroClicked} /> </div> </div> </div> {this.renderCategoryChunks(this.props.tags)} </div> ); } Two things to mention here. First, as defined by Bootstrap CSS rules, the image consumes all available width on small devices, but only 1/3 of the available width for large screens. Additionally, the image is pushed to the right, since it should be centered when viewing on desktop. Second, we want to stack every second image vertically, so the category images are divided into chunks of two.
This is done in renderCategoryChunks: /imports/plugins/custom/reaction-swag-shop/client/components/product-variant/customer/productGrid.js renderCategoryChunks(tags) { const chunkSize = 2; const chunks = []; for (let i = 0; i < tags.length; i += chunkSize) { const temp = tags.slice(i, i + chunkSize); let className = "col-sm-4"; if (i === 0) { className += " col-sm-pull-4"; } chunks.push(<div className={className} key={i}> {temp.map((element, index) => this.renderCategory(element, index))} </div>); } return chunks; } This snippet renders containers for each tile. It also ensures that the aforementioned Shop all products container swaps its place with the middle column container for large screens ( col-sm-pull-4). The category images themselves are rendered in renderCategory: /imports/plugins/custom/reaction-swag-shop/client/components/product-variant/customer/productGrid.js renderCategory(tag) { return ( <div className={"cat-tile col-xs-12"} key={tag._id}> <a href={`/tag/${tag.slug}`}> <img alt={tag.name} src={`/plugins/reaction-swag-shop/${tag.catTileImageUrl}`} /> <span className={"category"}>{tag.name}</span> </a> </div> ); } As you can see, the actual image URL is read from the property catTileImageUrl, which we've added before to the Tag schema. It's important to mention that all public assets need to exist in the plugin's /import/plugins/custom/reaction-swag-shop/public folder, since the reaction-cli copies all files from there to its final destination in /public/plugins/reaction-swag-shop during the build process. Meteor's HTTP server will then make them available via the URL pathname /plugins/reaction-swag-shop/. This is how it looks: Adding category tiles to the admin The next logical step is to make catTileImageUrl available in the admin backend. Per design, we have a one-to-one relationship, meaning that every tag or category is connected to exactly one image. This information should be managed the same way tags are currently managed as an admin: The idea here is to connect the existing drag handle to a popover, where the catTileImageUrl property can be edited. This is how it looks when we're all done: Next, let's extend the original TagItem component from /imports/plugins/core/ui/client/components/tags/tagItem.js and override its render() method.
/imports/plugins/custom/reaction-swag-shop/client/components/core/ui/tags/tagItem.js import React from "react"; import { Components, replaceComponent, getRawComponent } from "@reactioncommerce/reaction-components"; import classnames from "classnames"; import { Button, Handle } from "/imports/plugins/core/ui/client/components/index"; import { Tags } from "/lib/collections/index"; class TagItem extends getRawComponent("TagItem") { // -------------- %< -------------------- // more stuff // -------------- %< -------------------- renderEditableTag() { const baseClassName = classnames({ "rui": true, "tag": true, "edit": true, "draggable": this.props.draggable, "full-width": this.props.fullWidth }); return this.props.connectDropTarget(<div onMouseLeave={this.handleMouseLeave}> <div className={baseClassName} data-id={this.props.tag._id} > <form onSubmit={this.handleTagFormSubmit}> <Components.Popover isOpen={this.state.popOverIsOpen} attachment="top left" targetAttachment="bottom left" constraints={[ { to: "scrollParent", pin: true }, { to: "window", attachment: "together" } ]} showDropdownButton={false} > <div ref="popoverContent" onMouseEnter={this.handleMouseEnter} onMouseLeave={this.handleMouseLeave} className={"tag-image-form"} > <Components.TextField label="Category Tile Image URL" i18nKeyLabel="catTileImageUrl" type="text" name="catTileImageUrl" value={this.state.catTileImageUrl} onBlur={this.handleBlur} onChange={this.handleImageUrlChange} /> // -------------- %< -------------------- // more stuff // -------------- %< -------------------- ); } } replaceComponent("TagItem", TagItem); export default TagItem; This will give us a nice popover that allows us to edit the catTileImageUrl field. To finish this up, let's add the changes made in the popover to MongoDB: /imports/plugins/custom/reaction-swag-shop/client/components/core/ui/tags/tagItem.js handleBlur = (event) => { let { value } = event.currentTarget; if (typeof value !== "string") { return; } value = value.trim(); Tags.update(this.tag._id, { $set: { [event.currentTarget.name]: value } }); } One interesting thing to take note of: this update is happening client-side through Minimongo. Minimongo takes care of propagating the changes via DDP to the server, where it will eventually synchronize with MongoDB. This method will only work with certain collections, as it requires special permissions to be set server-side. It's not always easy to secure big, complex applications, which may account for some of its controversy in the Meteor community. Adding related products to the Product Detail Page The Related Product feature can be found on the product pages of many shops. This feature shows similar products alongside the product that is currently viewed. This helps shoppers identify additional products to add to their carts: There are many possible ways to implement a feature like this. We decided to use the tagging concept, since it proved to be quite flexible, and came with some out-of-the-box functionality that satisfied the needs of our user story: - We need an existing product subscription that filters products for a specific tag. Each product gets its own tag in the Tags collection, the related product tag, which follows this naming pattern: <handle>-related — So for localhost:3000/product/t-shirt, the related product tag would be t-shirt-related. - The admin UI already provides ways to add arbitrary tags to each product we're interested in linking.
- One product can be related to multiple other products through the tagging concept. It's possible to tag a product with <product-1-related> and <product-2-related>, which would appear as a related product on /product/product-1 and /product/product-2. Now let's add the relatedTag field to the product schema. This field's value should populate automatically whenever the product's permalink changes. The permalink itself is built from yet another field on the product schema, the handle field: /imports/plugins/custom/reaction-swag-shop/lib/collections/schemas/swagProduct.js import { Meteor } from "meteor/meteor"; import Schemas from "@reactioncommerce/schemas"; import { Products } from "/lib/collections/index"; const ExtendedSchema = Schemas.Product.extend({ // -------------- %< -------------------- // more stuff // -------------- %< -------------------- relatedTag: { optional: true, type: String, autoValue() { const isSimpleProduct = this.siblingField("type").value === "simple"; if (isSimpleProduct && this.operator === "$set") { const productHandle = this.siblingField("handle").value; const slug = `${productHandle}-related`; Meteor.call("createTag", slug); return slug; } } } }); Products.attachSchema(ExtendedSchema, { replace: true, selector: { type: "simple" } }); Here, we're defining a new field, relatedTag on the product schema. We also want the field's value to automatically populate from the product's permalink. This is where SimpleSchema's autoValue comes into play. Whenever the field is updated ( this.operator === "$set"), first check to see if it's a simple product, and not a product variant. Then, use the permalink to set the field's value. Create a new Tag in the Tags collection via Meteor.call("createTag", slug), if it doesn't exist yet. Because the initial products are inserted into the database through data fixtures, the field relatedTag can also be found in /imports/plugins/custom/reaction-swag-shop/private/data/Products.json. Now, how do we render related products when navigating to the PDP? Again, let's go to the Reaction component API, overwrite the appropriate React components, and render the same ProductGridItems component. Here it is on the homepage, as well as the category grid page: For a quick reference, here are the necessary pieces that lead to our goal: - Overwriting the ProductDetail component to render the containers for the PDP filler image (called static image in the screenshot) and the related products section. - A higher-order component (HOC) to inject the related products data from the database into the component that will render them. It uses the relatedTag schema property defined above to query for all related products. - The component to render the related products itself: /imports/plugins/custom/reaction-swag-shop/client/components/similar-products.js How to customize ReactionLayout The PDP's layout is different from other components used in Reaction because it is configurable during runtime in a generic manner. This is possible because the React components are created dynamically from a data structure in the database, rather than from JSX that is living in static files and transpiled during build time. The layout information for the PDP page lives in the Templates collection.
Here's an extract: { "_id" : "jnyaozhKdtFXQfnjo", "name" : "productDetailSimple", "type" : "react", "title" : "Product Detail Simple Layout", "templateFor" : [ "pdp" ], "template" : [ { "type" : "block", "columns" : 12, "size" : "half", "permissions" : [ "admin" ], "audience" : [ "guest", "anonymous" ], "style" : { "padding" : "40px", "@media only screen and (max-width: 921px)" : { "minWidth" : "100%", "maxWidth" : "100%" } }, "children" : [ { "component" : "MediaGalleryContainer" }, { "component" : "ProductTags" }, { "component" : "ProductMetadata" } ] }, { "type" : "block", "columns" : 6, "size" : "half", "children" : [ { "component" : "ProductField", }, // -------------- %< -------------------- // more stuff // -------------- %< -------------------- A ReactionLayout is made of different containers, or blocks, which themselves are made of other containers or concrete components. This allows users to have control over the rendered HTML structure in a very flexible way, while still having the ability to reuse existing React components. For the Swag Shop, the ProductTags component does not need to be rendered on the PDP, which is why we removed it, as seen in /imports/plugins/custom/reaction-swag-shop/private/data/productDetailSimple.json. Registration of the modified PDP layout can be found in /imports/plugins/custom/reaction-swag-shop/server/register.js. Deploying your swag shop Now that we have a running shop, we want to show it to the world—and hopefully sell a lot of stuff. Generally, we recommend deploying via Docker Image. For an introduction into a self-hosted deployment approach, check out Deploying Reaction Using Docker. Conclusion The Reaction architecture is laid out carefully, with a great focus on extensibility. For most use cases, we don't need to dig very deep into the code, although you could if you wanted to. It's totally possible to plug into the core mechanics of Reaction, such as cart, order processing, etc. and customize these workflows as they fit you. This is perhaps a bit more work than simply working with the components API, but once you're familiar with the codebase, it's not that difficult either. And that's how to create your own shop from scratch! We've covered all the basics on how to build a custom shop plugin. We hope you find the community team's swag shop series to be a valuable learning resource for your next Reaction project. If you have any questions or suggestions for the community team, feel free to join our next community call. Or, ask away in our developer chat.
https://docs.reactioncommerce.com/docs/1.16.0/swag-shop-pdp
2019-08-17T18:48:42
CC-MAIN-2019-35
1566027313436.2
[array(['https://user-images.githubusercontent.com/1733229/35669741-65975f86-0736-11e8-9269-47a0f3aa5c60.jpg', None], dtype=object) array(['https://user-images.githubusercontent.com/1733229/35676726-32bfcb2c-074d-11e8-87c9-d41fa4971ce4.jpg', None], dtype=object) array(['https://user-images.githubusercontent.com/1733229/35683281-c1afebee-0763-11e8-91ca-e911a78dc6e7.jpg', None], dtype=object) array(['https://user-images.githubusercontent.com/1733229/35684180-635c46f2-0766-11e8-83ee-e27352b298fd.jpg', None], dtype=object) array(['https://user-images.githubusercontent.com/1733229/35684849-6764266e-0768-11e8-90f7-65429d43baad.jpg', None], dtype=object) array(['https://user-images.githubusercontent.com/1733229/35738566-835dba5c-082f-11e8-88e8-735ddce2bd10.jpg', None], dtype=object) array(['https://user-images.githubusercontent.com/1733229/35793728-34afdb44-0a53-11e8-8cf2-dd5af3e9f538.jpg', None], dtype=object) ]
docs.reactioncommerce.com
Have you ever wished that a price in ResponsiBid would not only show the price, but also add something to the end of the price to give context? For example: $54 /week In this video we show you how you can use the suffix builder to make a price calculate based off a large amount of time and then divide it down to regular pricing intervals… and then to present the price with a suffix that describes how often it would need to be paid. You will be able to see the power of calculating based off of a large amount of time and posting the price as a factor of that… or just simply adding a suffix to a price that is already being calculated by having “no additional calculations” done.
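To make the arithmetic concrete, here is a small Python sketch of the idea described above; the annual price and the number of intervals are made-up example values, not figures from ResponsiBid.

# Hypothetical example: price a job over a long period, then present it
# per interval with a suffix such as "/week".
ANNUAL_PRICE = 2808.00       # assumed price calculated over 52 weeks of service
WEEKS_PER_YEAR = 52

weekly_price = ANNUAL_PRICE / WEEKS_PER_YEAR
print(f"${weekly_price:.2f} /week")   # prints: $54.00 /week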
https://docs.symphosize.com/responsibid/?article=how-to-add-a-suffix-to-a-responsibid-price
2019-08-17T17:25:18
CC-MAIN-2019-35
1566027313436.2
[]
docs.symphosize.com
Use the Physics 2D settings to apply global settings for 2D physics. Note: To manage global settings for 3D physics, use the Physics 3D settings instead. The Physics 2D settings define limits on the accuracy of the physical simulation. Generally speaking, a more accurate simulation requires more processing overhead, so these settings offer a way to trade off accuracy against performance. See the Physics section of the manual for further information. The settings in the Job Options section allow you to use the C# Job System to configure multi-threaded physics. 2018-10-02: Page amended with editorial review. Updated for Unified Settings
https://docs.unity3d.com/ja/current/Manual/class-Physics2DManager.html
2019-08-17T17:36:48
CC-MAIN-2019-35
1566027313436.2
[]
docs.unity3d.com
Creating and Managing Tablespaces Tablespaces allow database administrators to have multiple file systems per machine and decide how to best use physical storage to store database objects. They are named locations within a filespace in which you can create objects. Tablespaces allow you to assign different storage for frequently and infrequently used database objects or to control the I/O performance on certain database objects. For example, place frequently-used tables on file systems that use high performance solid-state drives (SSD), and place other tables on standard hard drives. A tablespace requires a file system location to store its database files. In Greenplum Database, the master and each segment (primary and mirror) require a distinct storage location. The collection of file system locations for all components in a Greenplum system is a filespace. Filespaces can be used by one or more tablespaces. Creating a Filespace A filespace sets aside storage for your Greenplum system. A filespace is a symbolic storage identifier that maps onto a set of locations in your Greenplum hosts' file systems. To create a filespace, prepare the logical file systems on all of your Greenplum hosts, then use the gpfilespace utility to define the filespace. You must be a database superuser to create a filespace. To create a filespace using gpfilespace - Log in to the Greenplum Database master as the gpadmin user. $ su - gpadmin - Create a filespace configuration file: $ gpfilespace -o gpfilespace_config - At the prompt, enter a name for the filespace, the primary segment file system locations, the mirror segment file system locations, and a master file system location. Primary and mirror locations refer to directories on segment hosts; the master location refers to a directory on the master host and standby master, if configured. For example, if your configuration has 2 primary and 2 mirror segments per host: Enter a name for this filespace> fastdisk primary location 1> /gpfs1/seg1 primary location 2> /gpfs1/seg2 mirror location 1> /gpfs2/mir1 mirror location 2> /gpfs2/mir2 master location> /gpfs1/master - gpfilespace creates a configuration file. Examine the file to verify that the gpfilespace configuration is correct. - Run gpfilespace again to create the filespace based on the configuration file: $ gpfilespace -c gpfilespace_config Moving the Location of Temporary or Transaction Files You can move temporary or transaction files to a specific filespace to improve database performance when running queries, creating backups, and to store data more sequentially. The dedicated filespace for temporary and transaction files is tracked in two separate flat files called gp_temporary_files_filespace and gp_transaction_files_filespace. These are located in the pg_system directory on each primary and mirror segment, and on master and standby. You must be a superuser to move temporary or transaction files. Only the gpfilespace utility can write to this file. About Temporary and Transaction Files Unless otherwise specified, temporary and transaction files are stored together with all user data. The default location of temporary files, <filespace_directory>/<tablespace_oid>/<database_oid>/pgsql_tmp is changed when you use gpfilespace --movetempfiles for the first time.
Also note the following information about temporary or transaction files: - You can dedicate only one filespace for temporary or transaction files, although you can use the same filespace to store other types of files. - You cannot drop a filespace if it is used by temporary files. - You must create the filespace in advance. See Creating a Filespace. To move temporary files using gpfilespace - Check that the filespace exists and is different from the filespace used to store all other user data. - Issue smart shutdown to bring the Greenplum Database offline. If any connections are still in progress, the gpfilespace --movetempfiles utility will fail. - Bring Greenplum Database online with no active session and run the following command: gpfilespace --movetempfilespace filespace_name The location of the temporary files is stored in the segment configuration shared memory (PMModuleState) and used whenever temporary files are created, opened, or dropped. To move transaction files using gpfilespace - Check that the filespace exists and is different from the filespace used to store all other user data. - Issue smart shutdown to bring the Greenplum Database offline. If any connections are still in progress, the gpfilespace --movetransfiles utility will fail. - Bring Greenplum Database online with no active session and run the following command: gpfilespace --movetransfilespace filespace_name The location of the transaction files is stored in the segment configuration shared memory (PMModuleState) and used whenever transaction files are created, opened, or dropped. Creating a Tablespace After you create a filespace, use the CREATE TABLESPACE command to define a tablespace that uses that filespace. For example: =# CREATE TABLESPACE fastspace FILESPACE fastdisk; Database superusers define tablespaces and grant access to database users with the GRANT CREATE command. For example: =# GRANT CREATE ON TABLESPACE fastspace TO admin; Using a Tablespace to Store Database Objects Users with the CREATE privilege on a tablespace can create database objects in that tablespace, such as tables, indexes, and databases. The command is: CREATE TABLE tablename(options) TABLESPACE spacename For example, the following command creates a table in the tablespace space1: CREATE TABLE foo(i int) TABLESPACE space1; You can also use the default_tablespace parameter to specify the default tablespace for CREATE TABLE and CREATE INDEX commands that do not specify a tablespace: SET default_tablespace = space1; CREATE TABLE foo(i int); The tablespace associated with a database stores that database's system catalogs, temporary files created by server processes using that database, and is the default tablespace selected for tables and indexes created within the database, if no TABLESPACE is specified when the objects are created. If you do not specify a tablespace when you create a database, the database uses the same tablespace used by its template database. You can use a tablespace from any database if you have appropriate privileges. Viewing Existing Tablespaces and Filespaces Every Greenplum Database system has the following default tablespaces. - pg_global for shared system catalogs. - pg_default, the default tablespace. Used by the template1 and template0 databases. These tablespaces use the system default filespace, pg_system, the data directory location created at system initialization. To see filespace information, look in the pg_filespace and pg_filespace_entry catalog tables.
You can join these tables with pg_tablespace to see the full definition of a tablespace. For example: =# SELECT spcname as tblspc, fsname as filespc, fsedbid as seg_dbid, fselocation as datadir FROM pg_tablespace pgts, pg_filespace pgfs, pg_filespace_entry pgfse WHERE pgts.spcfsoid=pgfse.fsefsoid AND pgfse.fsefsoid=pgfs.oid ORDER BY tblspc, seg_dbid; Dropping Tablespaces and Filespaces To drop a tablespace, you must be the tablespace owner or a superuser. You cannot drop a tablespace until all objects in all databases using the tablespace are removed. Only a superuser can drop a filespace. A filespace cannot be dropped until all tablespaces using that filespace are removed. The DROP TABLESPACE command removes an empty tablespace. The DROP FILESPACE command removes an empty filespace.
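If you prefer to inspect tablespaces programmatically, the catalog join above can be run from any PostgreSQL-compatible client; here is a minimal Python sketch using psycopg2, where the connection parameters are placeholders you would replace with your own Greenplum master settings.

import psycopg2

# Placeholder connection settings; point these at your Greenplum master host.
conn = psycopg2.connect(host="mdw", port=5432, dbname="gpadmin", user="gpadmin")

query = """
    SELECT spcname AS tblspc, fsname AS filespc,
           fsedbid AS seg_dbid, fselocation AS datadir
    FROM pg_tablespace pgts, pg_filespace pgfs, pg_filespace_entry pgfse
    WHERE pgts.spcfsoid = pgfse.fsefsoid
      AND pgfse.fsefsoid = pgfs.oid
    ORDER BY tblspc, seg_dbid;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for tblspc, filespc, seg_dbid, datadir in cur.fetchall():
        # One row per segment location that belongs to each tablespace.
        print(tblspc, filespc, seg_dbid, datadir)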
https://gpdb.docs.pivotal.io/5170/admin_guide/ddl/ddl-tablespace.html
2019-08-17T17:13:17
CC-MAIN-2019-35
1566027313436.2
[]
gpdb.docs.pivotal.io
[ aws . codecommit ] Adds or updates a file in a branch in an AWS CodeCommit repository, and generates a commit for the addition in the specified branch. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. put-file --repository-name <value> --branch-name <value> --file-content <value> --file-path <value> [--file-mode <value>] [--parent-commit-id <value>] [--commit-message <value>] [--name <value>] [--email <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --repository-name (string) The name of the repository where you want to add or update the file. --branch-name (string) The name of the branch where you want to add or update the file. If this is an empty repository, this branch will be created. --file-content (blob) The content of the file, in binary object format. --file-path (string) The name of the file you want to add or update, including the relative path to the file in the repository. Note If the path does not currently exist in the repository, the path will be created as part of adding the file. --file-mode (string) The file mode permissions of the blob. Valid file mode permissions are listed below. Possible values: - EXECUTABLE - NORMAL - SYMLINK --parent-commit-id (string) The full commit ID of the head commit in the branch where you want to add or update the file. If this is an empty repository, no commit ID is required. If this is not an empty repository, a commit ID is required. The commit ID must match the ID of the head commit at the time of the operation, or an error will occur, and the file will not be added or updated. --commit-message (string) A message about why this file was added or updated. While optional, adding a message is strongly encouraged in order to provide a more useful commit history for your repository. --name (string) The name of the person adding or updating the file. While optional, adding a name is strongly encouraged in order to provide a more useful commit history for your repository. --email (string) An email address for the person adding or updating the file to a repository. The following put-file example adds a file named 'ExampleSolution.py' to a repository named 'MyDemoRepo' in a branch named 'feature-randomizationfeature' whose most recent commit has an ID of '4c925148EXAMPLE'. aws codecommit put-file \ --repository-name MyDemoRepo \ --branch-name feature-randomizationfeature \ --file-content \ --file-path /solutions/ExampleSolution.py \ --parent-commit-id 4c925148EXAMPLE \ --name "Maria Garcia" \ --email "[email protected]" \ --commit-message "I added a third randomization routine." Output: { "blobId": "2eb4af3bEXAMPLE", "commitId": "317f8570EXAMPLE", "treeId": "347a3408EXAMPLE" }
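The same operation is also available from the AWS SDKs; here is a minimal Python sketch using boto3 that mirrors the CLI example above. The repository, branch, path, commit message, and parent commit ID are reused from that example, and the local file being read is an assumption for illustration.

import boto3

codecommit = boto3.client("codecommit")

# File content must be supplied as bytes.
with open("ExampleSolution.py", "rb") as f:
    content = f.read()

response = codecommit.put_file(
    repositoryName="MyDemoRepo",
    branchName="feature-randomizationfeature",
    fileContent=content,
    filePath="/solutions/ExampleSolution.py",
    parentCommitId="4c925148EXAMPLE",   # head commit of the branch at the time of the call
    commitMessage="I added a third randomization routine.",
    name="Maria Garcia",
    email="[email protected]",
)

print(response["commitId"], response["blobId"])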
https://docs.aws.amazon.com/cli/latest/reference/codecommit/put-file.html
2019-08-17T17:34:30
CC-MAIN-2019-35
1566027313436.2
[]
docs.aws.amazon.com
Foreach¶ - Foreach works like so: local tbl = {1, 2, 3} foreach (k, v, tbl) { print(k, v) } - prints: 1 1 2 2 3 3 - The arguments are keyname, valuename and the iteration object. If you have a custom class and want to support foreach, all you need to do is add a method called NextKey that returns the indexing key for the next entry. After NextKey is called, the get index operator [] is called with the returned key. Return null if the end of the object has been reached.
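As a rough illustration of that key-based iteration protocol (written in Python rather than the scripting language documented here), the loop below keeps asking an object for the next key and then indexes into it, stopping when the object signals the end; the class and method names are invented for the example.

class KeyedBag:
    # Toy container that mimics the NextKey protocol described above:
    # next_key returns the key after the previous one, and [] fetches the value.
    def __init__(self, data):
        self._data = data          # plain dict of key -> value
        self._keys = list(data)

    def next_key(self, prev_key=None):
        # Return the first key when prev_key is None, or None when done.
        if prev_key is None:
            return self._keys[0] if self._keys else None
        i = self._keys.index(prev_key) + 1
        return self._keys[i] if i < len(self._keys) else None

    def __getitem__(self, key):    # the "get index operator []"
        return self._data[key]


bag = KeyedBag({1: 1, 2: 2, 3: 3})
key = bag.next_key()
while key is not None:             # mimics: foreach (k, v, bag) { print(k, v) }
    print(key, bag[key])
    key = bag.next_key(key)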
https://docs.revenyou.io/how_to_code/foreach.html
2019-08-17T17:14:25
CC-MAIN-2019-35
1566027313436.2
[]
docs.revenyou.io
EBS Snapshot Scheduler Notice: EBS Snapshot Scheduler has been superseded by AWS Ops Automator. In 2016, the EBS Snapshot Scheduler was launched to help AWS customers automatically create snapshots of their Amazon Elastic Block Store (Amazon EBS) volumes on a defined schedule. In 2017, AWS launched the AWS Ops Automator, a new and improved solution that enables customers to schedule EBS and Amazon Redshift snapshots, and automate other operational tasks. We encourage customers to migrate to AWS Ops Automator for future updates and new features. Legacy templates, scripts, and documentation for EBS Snapshot Scheduler are available in our GitHub repository.
https://docs.aws.amazon.com/solutions/latest/ebs-snapshot-scheduler/welcome.html
2019-08-17T17:29:11
CC-MAIN-2019-35
1566027313436.2
[]
docs.aws.amazon.com
Qtum Documentation Please take some time to read the following, this is for your own safety! - This website provides documentation on different aspects of Qtum usage. It does not make the Qtum Foundation liable for any issues that may occur for anyone following these guides. - Every guide provides security tips depending on the platform and app used, it doesn't mean you'll be 100% safe after following the guide, please do research on current and possible situations that may happen. DO NOT use a daily usage computer for Staking (home or work computer). - You're responsible for your own security. If, after following these guides, you continue to access your funds using a daily usage computer which may or may not be affected by malware, the possibilities of being hacked are quite high. - ALWAYS make backups, any wallet, or important data (private keys) must be backed up in a secure manner. It's up to the user to choose which backup method will be used. - DO NOT give your private keys to anyone. No one from the Qtum foundation will ever ask you for your private keys, only you can have access to them as they control your funds. If you give your private keys to someone else, you just gave them all your Qtum. - Transactions cannot be reverted or cancelled. On a decentralized blockchain, there's no way to revert/stop transactions, please make sure to check as many times as possible that you're sending the funds to the correct address. - Qtum is open source software, provided as-is with no warranty. - Please remember, these safety tips are just our opinions, they're not supposed to constitute a definite guide on security. - Again, you as a user are completely responsible for your own security, please take time to evaluate every step followed and make sure to be as safe as possible.
https://docs.qtum.site/en/
2019-08-17T16:58:37
CC-MAIN-2019-35
1566027313436.2
[]
docs.qtum.site
Define an LDAP server Create a new LDAP server record in the instance. Before you begin: Role required: admin. Procedure: Navigate to System LDAP > Create New Server. Fill in the form fields. Alternatively, you can add a redundant LDAP server by navigating to an existing LDAP server record and inserting a row in the LDAP Server URLs embedded list. Click Submit. Note: You can also modify an existing LDAP server record by navigating to System LDAP > LDAP Servers and making the needed changes. Make changes to the fields as necessary. If you selected a MID Server, this field is not available. If you use an LDAPS integration and the default SSL port is 636, no further configuration is necessary; SSL is automatically enabled. If the LDAPS integration uses another SSL port, define the alternate SSL connection properties. Listener: Select this check box to enable the integration to periodically poll Microsoft Active Directory servers or LDAP servers that support persistent search request control. Additionally, if you selected a MID Server, the listener functionality is available for that MID Server. See Enable an LDAP listener and set system properties. Note: If you provide an LDAP password, the integration performs a Simple Bind operation. If you do not provide an LDAP password, the LDAP server must allow anonymous login or the integration cannot bind to the LDAP server. Result: The LDAP server connection status is displayed. Related tasks: Upload the LDAP X.509 SSL certificate; Enable an LDAP listener and set system properties; Specify LDAP attributes; Test an LDAP connection; Define LDAP organizational units; Create a data source for LDAP; Auto provision LDAP users. Related concepts: LDAP integration via MID Server
https://docs.servicenow.com/bundle/madrid-platform-administration/page/integrate/ldap/task/t_DefineAnLDAPServer.html
2019-08-17T17:39:03
CC-MAIN-2019-35
1566027313436.2
[]
docs.servicenow.com
compilation options. Note: this enum and its values:
https://docs.unity3d.com/ScriptReference/Build.Player.ScriptCompilationOptions.html
2019-08-17T17:48:11
CC-MAIN-2019-35
1566027313436.2
[]
docs.unity3d.com
Using ClassiPress 4.x you can create custom fields and then apply these to a form. This allows you to gather specific ad information that relates directly to the site you have created. This is very useful if you want to collect different information on a category-by-category level. For example, if the majority of ads posted on your site will be about cars for sale, you may wish to create a specific form that asks for the make and year of the car. Note: If you do not wish to use this feature there is nothing you need to do. The default form will be used instead. If you wish to change the default form then you will need to create a new form and apply all categories to it. Any category without a form will always fall back to using the default form. Locating the Custom Fields and Form Menus - Click on the Ads menu. - Once in Ads, you will see a number of different menus. The two menus we will focus on in this documentation are the Forms menu and the Custom Fields menu. Creating Custom Fields (Step 1) You will first need to create the custom fields that you will add to your form. - Once you are in the Custom Fields menu, click the Add New button. - You will need to give your field a Name and a Description. For example, the name could be ‘Car Year‘ and the description could read ‘The Year the Car was Made‘. - Add a Field Tooltip if needed. The Tool Tip field allows you to put a message next to the field so users can get more information about what is required. - Choose the Field Type. This is the type of response you expect. For example, if you want a text answer choose Text. - Once you are complete, click the Create New Field button. Any field you create can be used for multiple forms. - Create any additional custom fields that you will use in your form. Creating a Form (Step 2) Think of a Form Layout as a container for fields. Fields alone do nothing. Once your fields are complete you can add them to a form. Creating the Form - Once you are in the Forms menu, click on the Add New button. - Give your form a Name. The name should best describe what category this form applies to. - Enter a Description for the form. This will not be visible to your users. - Check each Category that this form will apply to. Each form can be applied to multiple categories although each category can only have one form. Any category that does not have a custom form will use the default form. - Once you have completed the form, click the Create New Form button. - You have now created a form. The next step is to add the custom fields to the form. Adding Custom Fields to the Form - From the Form Layouts page click on the Edit Form Layout button for the form you wish to add Custom Fields to. - On this page you will see two columns. The left side is what your form will look like. The right side contains the available fields you can add to your form. Go ahead and check the three fields we just created and click Add Fields to Form Layout. The page will reload and you will see that they moved to the left side. - You can reorder, remove, and make certain fields mandatory. To reorder your fields, just drag and drop them. To make them mandatory check the box under the required column. Lastly you can choose which fields use advanced search. - Once you are all done make sure to click the Save Changes button. - This form will now appear when someone clicks on the category that you specified. In the image above, that form will appear only for the Cars category. What will my Form Look Like?
When a user selects an ad category that has a custom form applied to it, they will need to fill out that form as opposed to the default form. In the example below, the user has selected the Cars category. This has a custom form attached. As you can see there is an extra custom field within this form.
https://docs.appthemes.com/classipress-4-x/creating-forms-custom-fields-using-classipress-4-x/
2019-08-17T18:11:26
CC-MAIN-2019-35
1566027313436.2
[array(['https://docs.appthemes.com/files/2018/05/creating-custom-fields-classipress.jpg', 'creating-custom-fields-classipress'], dtype=object) array(['https://docs.appthemes.com/files/2018/05/edit-form-layout-classipress.jpg', 'edit-form-layout-classipress'], dtype=object) array(['https://docs.appthemes.com/files/2018/05/custom-forms-classipress.jpg', 'custom-forms-classipress'], dtype=object) ]
docs.appthemes.com
How to access related tables Watch video to learn more about accessing related tables in reports.
https://docs.servicenow.com/bundle/geneva-performance-analytics-and-reporting/page/use/reporting/concept/c_HowToAccessRelatedTables.html
2019-08-17T17:28:20
CC-MAIN-2019-35
1566027313436.2
[]
docs.servicenow.com
We have added several custom widgets to Taskerr for the maximum amount of flexibility. Taskerr also works with all the WordPress pre-packaged widgets. More info on WordPress Widgets can be found on the WordPress codex. To get to WordPress settings, go to the left sidebar menu and click Appearance then Widgets. Please visit the Demo site to see all these widgets in action. Widgetized areas include - Main Sidebar (home page and blog) - Single Service Sidebar - Blog sidebar - Page sidebar - Dashboard sidebar - Top Advert - Central Area - Central Area-Home - Bottom Advert - Footer Taskerr Custom Widgets - AppThemes Breadcrumbs - Displays the Breadcrumbs - AppThemes Facebook Like Box - This places a Facebook page Like Box in your sidebar to attract and gain Likes from visitors. - AppThemes Recent Projects - Select post type between posts,pages,services,orders - Number of posts to show - Enter posts IDs delimited by comma - Posts content template name - Display only sticky posts (if post type supports) - Display Rating (requires “StarStruck” plugin) - Display “Read More” button - Display post date - Display post thumbnail - Display post excerpt - TR Taxonomy List - Displays the list of selected taxonomy terms - TR 125×125 ads - Add advertising code - HireBee Facebook Widget - This places a Facebook page Like Box in your sidebar to attract and gain Likes from visitors.
https://docs.appthemes.com/taskerr/taskerr-widgets/
2019-08-17T17:31:25
CC-MAIN-2019-35
1566027313436.2
[]
docs.appthemes.com
This powerful widget may just make you rethink the way you design in Muse! Our Presentation Panels. Since this widget takes over the page it's placed on, you will not see headers and footer applied by a master page. Use the "Header and Footer Settings" section in the Controller widget component to add custom class headers and footers designed to match the headers and footers used elsewhere on your site. Why doesn't your live preview work right in Safari? Presentation Panels works great in Safari. Our live preview system uses an iframe, which has a strange quirk when viewed in Safari. When used in your own project, you will not have any issues. When I use Presentation Panels, I can't see the header or footer I use throughout my site. Since Presentation Panels is a full screen widget, it's designed to "take over" the page - so content such as a header or footer will be overridden by the widget. This is why we've added the header and footer options within the widget options. The best way to do it is to create state buttons that are designed to look exactly like your header and footer areas, and define the applied graphic styles in the widget panel. This is a powerful widget that takes control of page positioning, and may conflict with other fullscreen or location-specific elements. We do not recommend combining it with Pushy Panes, Fullscreen Thumbnail Gallery, Media Pro, Versa Slide, Fullscreen Video Backgrounds, SHOW Video Backgrounds and scroll effects.
https://docs.muse-themes.com/widgets/presentation-panels
2019-08-17T17:43:58
CC-MAIN-2019-35
1566027313436.2
[]
docs.muse-themes.com
pyramid.exceptions¶ - class ConfigurationError[source]¶ Raised when inappropriate input values are supplied to an API method of a Configurator. - class URLDecodeError[source]¶ This exception is raised when Pyramid cannot successfully decode a URL or a URL path segment. It behaves just like the Python builtin UnicodeDecodeError. It is a subclass of the builtin UnicodeDecodeError exception only for identity purposes, mostly so an exception view can be registered when a URL cannot be decoded.
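As a quick illustration of that last point, here is a minimal Pyramid sketch that registers an exception view for URLDecodeError; the response text and status code are arbitrary choices for the example, not prescribed by the docs above.

from pyramid.config import Configurator
from pyramid.exceptions import URLDecodeError
from pyramid.response import Response

def failed_decode(exc, request):
    # Invoked whenever Pyramid raises URLDecodeError while decoding a request URL.
    return Response("Could not decode the requested URL.", status=400)

def main():
    config = Configurator()
    # context= selects the exception type this view handles.
    config.add_view(failed_decode, context=URLDecodeError)
    return config.make_wsgi_app()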
https://docs.pylonsproject.org/projects/pyramid/en/1.3-branch/api/exceptions.html
2019-06-16T06:31:10
CC-MAIN-2019-26
1560627997801.20
[]
docs.pylonsproject.org
Help: Turn Windows Firewall on with no exceptions Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 To turn Windows Firewall on or off with no exceptions Open Windows Firewall. Click On, and then select the Don't allow exceptions check box. See Also Concepts Help: Understanding Windows Firewall exceptions Help: Administering Windows Firewall with Netsh Help: Administering Windows Firewall with Group Policy Help: Determine which profile Windows Firewall is using
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc785043%28v%3Dws.10%29
2019-06-16T07:36:44
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
.control_gpio - system.indicator.gpio - system.indicator.state - tcp.server.connected_gpio - tcp.server.data_gpio - udp.server.data_gpio GPIO Functions and Pins GPIO functions and pins differ for different devices. Moray board (GPIO13): gdi 13 none For the change to persist after reboot, you need to set the gpio.init variable and save. For example: set gpio.init 13 none save Deregistering Alternative Function GPIOs Variables that allow functions to be assigned to a GPIO are as follows: - bus.stream.cmd_gpio - ioconn.control_gpio - ioconn.status_gpio - setup.gpio.control_gpio - system.indicator.gpio - tcp.server.connected_gpio - udp.server.data. Setting GPIO Function GPIO function can be set by commands or variables. To use a GPIO as Standard I/O, deregister the GPIO if it is already assigned, then use the gpio_dir command to set its direction. For example, to set user LED1 on the Moray to an output: gdi 13 out For the change to persist after reboot, you need to set the gpio.init variable and save. For example: set gpio.init 13 out save. On the Mackerel and Moray evaluation boards, the red, yellow and green LEDs are configured by default as system indicators, showing soft AP, network and wlan status. on the Mackerel evaluation board, we use Button 2 (GPIO 11). Because the device is in stream mode, characters typed into the serial terminal are not echoed. Press Button 2, and Gecko OS displays: Command Mode Start While holding down Button 2, Gecko OS commands may be typed into the terminal, and characters are echoed. For example, you can set bus mode to command and save, to restore command mode after a reboot. Release Button LEDs The level of a LED can be controlled via a GPIO with the Standard I/O output. GPIO Standard I/O Control A LED can be turned on or off by configuring as a Standard I/O output, then setting the GPIO level. For example, to turn on Mackerel user LED1, use the gpio_dir command and the gpio_set command: Communicating Peripheral Information Over the Network The Gecko OS TCP and UDP client and server features provide frameworks for building a customised application for transmitting and receiving monitor and control data from your Gecko OS device. For examples of these applications, see: Gecko OS also provides general systems for handling information transfer. A Gecko OS device can be configured to broadcast GPIO and other data at regular intervals, via the broadcast group of variables. The broadcast.data variable lets you specify what data is broadcast. GPIOs are specified by GPIO number. See the Broadcast UDP Packet for a demonstration of this feature. Peripheral GPIO Mapping by Function See your Silabs datasheet. .
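Since the broadcast feature described above sends UDP packets at regular intervals, a small receiver on the same network can be used to watch them; here is a minimal Python sketch, where the port number is a placeholder you would replace with the port configured in your device's broadcast variables.

import socket

BROADCAST_PORT = 3000   # placeholder: use the port set in your broadcast configuration

# Listen for UDP broadcast packets from the Gecko OS device.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", BROADCAST_PORT))

while True:
    payload, (addr, port) = sock.recvfrom(4096)
    print(f"broadcast from {addr}: {payload.decode(errors='replace')}")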
https://docs.silabs.com/gecko-os/2/amw007-w00001/2.0/peripherals
2019-06-16T06:37:30
CC-MAIN-2019-26
1560627997801.20
[]
docs.silabs.com
Creating a Breaking News Banner¶ Editors perform this task. Switch to the site on which you want to set the banner. In the Brightspot Dashboard header, click the Search field. The Search Panel appears. In the Create drop-down list, select Breaking News Banner, then select New. The Content Edit form appears. If you want the banner message to link to an article, click the Linked Article selection field. Set Headline to the banner message that you want to display at the top of every site page. Optionally, set Expiration to a date when the banner will no longer appear on the site. Create a draft or publish the banner.
http://docs.brightspot.com/cms/plugins/news/creating-banner.html
2019-06-16T06:39:53
CC-MAIN-2019-26
1560627997801.20
[]
docs.brightspot.com
Detail vdisk Applies To: Windows 7, Windows Server 2008 R2 Displays the properties of the selected virtual hard disk (VHD). Syntax detail vdisk Remarks - A VHD must be selected for this operation to succeed. Use the select vdisk command to select a vdisk and shift the focus to it. Examples To see details about the selected VHD, type: detail vdisk
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/gg252637%28v%3Dws.10%29
2019-06-16T08:13:12
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
Pyramid sessions also offer two session-specific features: flash messages, and cross-site request forgery attack prevention. Using the Default Session Factory¶ In order to use sessions, you must set up a session factory during your Pyramid configuration. A very basic, insecure sample session factory, pyramid.session.SignedCookieSessionFactory, stores session state in a client-side cookie that is signed so its data cannot easily be tampered with. You can configure this session factory in your Pyramid application by using the pyramid.config.Configurator.set_session_factory() method. Warning By default the SignedCookieSessionFactory() implementation is unencrypted. You should not use it when you keep sensitive information in the session object, as the information can be easily read by both users of your application and third parties who have access to your users' network traffic. And, if you use this sessioning implementation, and you inadvertently create a cross-site scripting vulnerability in your application, because the session data is stored unencrypted in a cookie, it will also be easier for evildoers to obtain the current user's cross-site scripting token. In short, use a different session factory implementation (preferably one which keeps session data on the server) for anything but the most basic of applications where "session security doesn't matter", and you are sure your application has no cross-site scripting vulnerabilities. Using a Session Object¶ Once a session factory has been configured for your application, you can access session objects provided by the session factory via the session attribute of any request object, which behaves like a dictionary. Keys and values of session data must be pickleable. This means, typically, that they are instances of basic types of objects, such as strings, lists, dictionaries, tuples, integers, etc. If you place an object in a session data key or value that is not pickleable, an error will be raised when the session is serialized. Preventing Cross-Site Request Forgery Attacks¶ Cross-site request forgery attacks are a phenomenon whereby a user who is logged in to your website might inadvertently be tricked by another site into submitting an unwanted request to yours. Pyramid sessions provide facilities to create and check CSRF tokens. To use CSRF tokens, you must first enable a session factory as described in Using the Default Session Factory or Using Alternate Session Factories. Using the session.get_csrf_token Method¶ To get the current CSRF token from the session, use the session.get_csrf_token() method. token = request.session.get_csrf_token() The session.get_csrf_token() method accepts no arguments. It returns a CSRF token string. If session.get_csrf_token() or session.new_csrf_token() was invoked previously for this session, then the existing token will be returned. If no CSRF token previously existed for this session, then a new token will be set into the session and returned. The newly created token will be opaque and randomized. You can include the token as a hidden field in a form: <form method="post" action="/myview"> <input type="hidden" name="csrf_token" value="${request.session.get_csrf_token()}"> <input type="submit" value="Delete Everything"> </form> Or include it as a header in a jQuery AJAX request: var csrfToken = "${request.session.get_csrf_token()}"; $.ajax({ type: "POST", url: "/myview", headers: { 'X-CSRF-Token': csrfToken } }).done(function() { alert("Deleted"); }); The handler for the URL that receives the request should then require that the correct CSRF token is supplied. Checking CSRF Tokens Manually¶ In request handling code, you can check the presence and validity of a CSRF token with pyramid.session.check_csrf_token(). If the token is valid, it will return True, otherwise it will raise HTTPBadRequest. Optionally, you can specify raises=False to have the check return False instead of raising an exception.
By default, it checks for a GET or POST parameter named csrf_token or a header named X-CSRF-Token. from pyramid.session import check_csrf_token def myview(request): # Require CSRF Token check_csrf_token(request) # ... Checking CSRF Tokens with a View Predicate¶ A convenient way to require a valid CSRF token for a particular view is to include check_csrf=True as a view predicate. See pyramid.config.Configurator.add_view(). @view_config(request_method='POST', check_csrf=True, ...) def myview(request): ... Note A mismatch of a CSRF token is treated like any other predicate miss, and the predicate system, when it doesn't find a view, raises HTTPNotFound instead of HTTPBadRequest, so check_csrf=True behavior is different from calling pyramid.session.check_csrf_token(). Using the session.new_csrf_token Method¶ To explicitly create a new CSRF token, use the session.new_csrf_token() method. This differs from session.get_csrf_token() only in that it clears any existing CSRF token, creates a new CSRF token, sets the token into the session, and returns the token. token = request.session.new_csrf_token()
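As a composite illustration, here is a minimal sketch of a POST view that relies on the manual check described above; the route name, redirect target, and view body are hypothetical, and only the CSRF handling mirrors this page.

from pyramid.httpexceptions import HTTPFound
from pyramid.session import check_csrf_token
from pyramid.view import view_config

@view_config(route_name='delete_everything', request_method='POST')
def delete_everything(request):
    # Raises HTTPBadRequest unless a valid csrf_token parameter
    # or X-CSRF-Token header accompanies the request.
    check_csrf_token(request)
    # ... perform the protected action here ...
    return HTTPFound(location=request.route_url('home'))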
https://docs.pylonsproject.org/projects/pyramid/en/1.5-branch/narr/sessions.html
2019-06-16T06:59:14
CC-MAIN-2019-26
1560627997801.20
[]
docs.pylonsproject.org
Service Portal search sources A search source is a record that describes the behavior and source of searchable data. To learn more, see Define a search source. Typeahead settings Typeahead returns search results in real time as a user types in the search field. You can configure typeahead settings, or disable the feature entirely, within the search source record. Simple: Define an icon to display beside typeahead results and the target page to display typeahead selections. Advanced: Define a template for the typeahead result. See Create an advanced typeahead template. Search engine and custom settings When a simple search source is defined, Service Portal uses the search engine settings configured on your instance. To learn more, see Search administration. Define a search source: Configure a basic search source to query data from an instance table, or configure an advanced data fetch script to query
https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/build/service-portal/concept/search-service-portal.html
2019-06-16T07:14:17
CC-MAIN-2019-26
1560627997801.20
[]
docs.servicenow.com
How to use Boxes With Boxes you can add dynamic content to your Webflow project and revise it directly from the WordPress admin panel. You can use Contents for whatever you want: any media, WordPress shortcodes, etc. You'll find the Boxes extension within the Udesly plugin. You can add unlimited boxes with any type of content you wish (even some page builders). Any time you create a box, you'll find a corresponding shortcode like this:
https://docs.udesly.com/how-to-use-boxes/
2019-06-16T06:58:48
CC-MAIN-2019-26
1560627997801.20
[array(['/assets/boxes-1.png', None], dtype=object) array(['/assets/boxes-2.png', None], dtype=object)]
docs.udesly.com
Form.MinimumSizeChanged Event Definition Occurs when the value of the MinimumSize property has changed. public: event EventHandler ^ MinimumSizeChanged; public event EventHandler MinimumSizeChanged; member this.MinimumSizeChanged : EventHandler Public Custom Event MinimumSizeChanged As EventHandler Examples The following code example demonstrates the use of this member. In the example, an event handler reports on the occurrence of the MinimumSizeChanged event. private void Form1_MinimumSizeChanged(Object sender, EventArgs e) { MessageBox.Show("You are in the Form.MinimumSizeChanged event."); } Private Sub Form1_MinimumSizeChanged(sender as Object, e as EventArgs) _ Handles Form1.MinimumSizeChanged MessageBox.Show("You are in the Form.MinimumSizeChanged event.") End Sub Remarks For more information about handling events, see Handling and Raising Events.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.form.minimumsizechanged?view=netframework-4.8
2019-06-16T06:44:12
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
A temporal database stores data that relates to time periods and time instances. It provides temporal data types and stores information relating to the past, present, and future. For example, it stores the history of a stock or the movement of employees within an organization. The difference between a temporal database and a conventional database is that a temporal database maintains data with respect to time and allows time-based reasoning, whereas a conventional database captures only a current snapshot of reality. For example, a conventional database cannot directly support historical queries about past status and cannot represent inherently retroactive or proactive changes. Without built-in temporal table support from the DBMS, applications are forced to use complex and often manual methods to manage and maintain temporal information.
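As a language-agnostic illustration of the manual bookkeeping described above, the following Python sketch shows an application tracking validity periods itself; the table layout, names, and values are invented for the example and are not Teradata syntax.

from datetime import date

# Each row carries its own validity period instead of being overwritten in place.
salary_history = [
    {"employee": "Ada", "salary": 70000, "valid_from": date(2017, 1, 1), "valid_to": date(2018, 6, 30)},
    {"employee": "Ada", "salary": 80000, "valid_from": date(2018, 7, 1), "valid_to": date(9999, 12, 31)},
]

def salary_as_of(rows, employee, as_of):
    # A time-based query that a conventional snapshot table cannot answer directly.
    for row in rows:
        if row["employee"] == employee and row["valid_from"] <= as_of <= row["valid_to"]:
            return row["salary"]
    return None

print(salary_as_of(salary_history, "Ada", date(2018, 1, 1)))  # 70000

A DBMS with built-in temporal table support handles this validity bookkeeping and the associated queries for you, which is the gap the passage above describes.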
https://docs.teradata.com/reader/eR4tKhtSLaR~f1mU47J7dQ/z~PKGgaSkun38WAWAa4jAQ
2019-06-16T07:55:19
CC-MAIN-2019-26
1560627997801.20
[]
docs.teradata.com
DHCP Server Migration: Migrating the DHCP Server Role Applies To: Windows Server 2008 R2 Complete the following procedures to migrate a DHCP Server. Migrating DHCP Server to the destination server Migrating DHCP Server from the source server Destination server final migration steps Migrating DHCP Server to the destination server Membership in Domain Administrators or equivalent is the minimum required to complete these procedures. Review details about how to use the appropriate accounts and group memberships at Run a program with administrative credentials (). To migrate DHCP Server to the destination server: Stop-Service DHCPserver If you are unsure whether the service is running, you can check its state by running the following command: Get-Service DHCPServer Migrating DHCP Server from the source server Follow these steps to migrate DHCP Server from the source server. To migrate DHCP Server from the source server Open a Windows PowerShell session with elevated user rights. To do this, click Start, click All Programs, click Accessories, open the Windows PowerShell folder, right-click Windows PowerShell, and then click Run as administrator. Load Windows Server Migration Tools into your Windows PowerShell session. If you opened the current Windows PowerShell session by using the Windows Server Migration Tools shortcut on the Start menu, skip this step, and go to step 3. Only load the Windows Server Migration Tools snap-in in a Windows PowerShell session that was opened by using some other method, and into which the snap-in has not already been loaded. To load Windows Server Migration Tools, type the following, and then press Enter. Add-PSSnapin Microsoft.Windows.ServerManager.Migration From Windows PowerShell, collect data from the source server by running the Export-SmigServerSetting cmdlet as an administrator. Important If the source server is a domain controller, but the destination server is not, Domain Local groups are migrated as local groups, and domain users are migrated as local users. To record the source server's IP configuration, run: IPConfig /all > IPSettings.txt The Import-SmigServerSetting cmdlet requires you to map the source physical address to the destination physical address. Note The destination server can be assigned the same static IP address as the source server, unless other roles on the source server must continue to run on it. In that case, the static IP address of the destination server can be any unallocated static IP address in the same subnet as the source server. On the source server, run the Export-SmigServerSetting cmdlet, where <storepath> is the path that will contain the Svrmig.mig file after this step is completed. An example of the path is \\fileserver\users\username\dhcpstore. Export-SmigServerSetting -featureID DHCP -User All -Group -IPConfig -path <storepath> -Verbose Then unauthorize the source DHCP server by running: Netsh DHCP delete server <Server FQDN> <Server IPAddress> Destination server final migration steps. Important If you will be importing role and IP settings separately, you should import IP settings first to avoid any IP conflicts. You can then import the DHCP role. - If the DHCP Administrators group includes local users, then use the -Users parameter combined with the -Group parameter to import local users into the DHCP Administrators group. If it only contains domain users, then use only the -Group parameter.
Security Note If the source server is a domain member server, but the destination server is a domain controller, imported local users are elevated to domain users, and imported local groups become Domain Local groups on the destination server. On the destination server, run the Import-SmigServerSetting cmdlet: Import-SmigServerSetting -featureid DHCP -User All -Group -IPConfig <All | Global | NIC> -SourcePhysicalAddress <SourcePhysicalAddress-1>,<SourcePhysicalAddress-2> -TargetPhysicalAddress <TargetPhysicalAddress-1>,<TargetPhysicalAddress-2> -Force -path <storepath> -Verbose The -IPConfig switch should be used with the value All in case the user wants to import all source settings. For more information, see the IP Configuration Migration Guide (). Important If you import the source server IP address to the target server together with the DHCP role without disconnecting or changing the IP address of the source server, an IP address conflict will occur. Run the following command in Windows PowerShell to start the DHCP service: Start-Service DHCPServer Authorize the destination server. Command parameters are case-sensitive and must appear exactly as shown. On the destination server, run the following command where Server FQDN is the FQDN of the DHCP Server and Server IPAddress is the IP address of the server: netsh DHCP add server <Server FQDN> <Server IPAddress> Note After authorization, the Server Manager event log might show event ID 1046. This is a known issue and is expected to occur only once. The event can be safely ignored. When this migration is finished, client computers on the network are served by the new x64-based destination server running Windows Server 2008 R2. The migration is complete when the destination server is ready to serve IP addresses to the network. See Also Concepts DHCP Server Migration Guide DHCP Server Migration: Preparing to Migrate DHCP Server Migration: Verifying the Migration DHCP Server Migration: Post-Migration Tasks DHCP Server Migration: Appendix A
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd379483(v=ws.10)
2019-06-16T07:20:56
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
Designing services Service creator includes an interface for designing services. Using this interface, service category managers and editors can create and publish services, and edit service details. All services must belong to a service category. If your department or group does not have an existing service category, you must create a new service category before you can design services for that category.
https://docs.servicenow.com/bundle/kingston-application-development/page/build/service-creator/concept/c_DesigningServices.html
2019-06-16T07:16:32
CC-MAIN-2019-26
1560627997801.20
[]
docs.servicenow.com
(Dashboard >> Admin >> API Pickup Passphrases) The API Pickup Passphrases interface allows you to create and manage the pickup phrases to use when you authenticate with your Manage2 account. To add a new pickup passphrase to your Manage2 account, perform the following steps: The Active Pickup Passphrases table lists all of your Manage2 account's current passphrases.
https://hou-1.docs.confluence.prod.cpanel.net/display/MAN/API+Pickup+Passphrases
2019-06-16T06:37:31
CC-MAIN-2019-26
1560627997801.20
[]
hou-1.docs.confluence.prod.cpanel.net
The main design activities in a Concurrent Design study are performed by the design team within the frame of an Engineering Model. This section describes typical use of the different possible combinations of Model Kinds and Study Phases for engineering models. More information is given in the related topics of managing engineering model setups on the high level administrative actions, and the setup of engineering models on creating and updating the actual design data model in the engineering model. For a Scratch and Study Model, depending on the setting of the Study Phase, full functionality is available to e.g. A Template Model can only contain one Iteration, as mentioned in the description of Model Kinds. If a scratch or study engineering model with multiple iterations is turned into a template model, only the active iteration will be kept. Apart from the possibility to use iterations, all other functionality is available to e.g.: A Model Catalogue is intended to allow reuse of element definitions. A model catalogue can only contain one Iteration, and only one Option within this iteration, as mentioned in the description of Model Kinds. A Scratch Model could typically be used to set up and prepare engineering models, very likely outside the scope of an actual Concurrent Design Activity. If this type of model is used in a Concurrent Design Activity, since no suitable Study Model or Template Model is available to start from, it may well be that a Scratch Model is initially used for the preparations, with the Study Phase indeed set to Preparation Phase. It is advisable however to not use a Scratch Model too long within the scope of an actual Concurrent Design activity, but to turn this into a Study Model as soon as possible; in any case when starting the Design Sessions Phase. Please note that outside the scope of an actual Concurrent Design activity, it may make good sense to use the other Study Phase indications as well, e.g. for testing purposes, especially since different phases have differences in possible functionalities. Setting up and updating a set of template models can be used to facilitate a successful start of new Concurrent Design studies supporting different types of projects (e.g. development of hardware vs. more software oriented projects), application fields (e.g. develop a business case, develop proposals, support planning activities, create a design) or complying to other phases (e.g. in a preliminary/conceptual design phase or a subsequent detailed engineering phase). Other types of projects or applications may lead to (very) different sets of parameters that need to be exchanged between disciplines. There may also be a need to follow other or different (industry or organizational) standards or approaches. As an example, specific sets of parameter types can be created in the model RDL of two or more template models. Any engineering model that is created based on any of these templates will have the possibility to use that particular set to create parameters, as it is in the chain of RDLs, while not having access to the sets in the other template model RDLs. The use of a template model therefore provides a possibility to define and use specific sets for different types of projects, without cluttering the set of parameters for other (types of) projects that have no use for these parameter sets.
When a new engineering model is being created, the Model Kind is by default preselected as a Study Model in Preparation Phase. This would be a typical situation for new engineering models that are being created for new Concurrent Design studies, that are based on already available engineering models, most likely a Template Model, another Study Model or a suitable Scratch Model. These source engineering models will have to be available on the CDP™4 Database for the hosting organization(s). If no (suitable) source models are available yet, these will have to be created. In this case, it will be possible to create a Study Model, but a preferred approach can be to create a Scratch Model to be used to prepare and set up a suitable engineering model first. It is up to the organization to choose if it is preferred to do this within the scope of an existing Concurrent Design study, or as a separate task in managing a Concurrent Design centre. In any case, having an engineering model as a Scratch Model can be used to provide a clear indication that the engineering model is (still) 'under construction'. When the quality and level of detail of a Scratch Model is sufficient to be used, it can then be edited to become e.g. a Study Model or it can be turned into a Template Model to serve as a basis for multiple new engineering models that benefit from the available setup. It is in principle likewise possible to directly create a Catalogue or Template Model that is empty and to build this up gradually in the engineering model itself. For the reasons given here above about a clearer indication of the status, it is advisable however to always first create these models as a Scratch Model as well, before turning them into the desired Model Kind. This would give users a better idea of which engineering models may be more useable, complete and useful for their tasks. It has to be understood however that engineering models in general are never "finished" and will be continuously updated, upgraded and improved. Minor updates can usually be done within an existing Catalogue Model or Template Model. For major changes it may be advisable to turn the Template Model into a Scratch Model. This will allow making and testing the changes and adaptations, before changing back to a full Catalogue or Template. It is up to organizations and/or individual users managing various engineering models to define a more fixed or more open approach to handle management of engineering models.
http://cdp4docs.rheagroup.com/?c=C%20Managing%20the%20CDP/Engineering%20Models&p=Using_EM_MK_SPhs.md
2019-06-16T07:44:23
CC-MAIN-2019-26
1560627997801.20
[]
cdp4docs.rheagroup.com
Azure Storage samples Use the links below to view and download Azure Storage sample code and applications. Azure Code Samples library The Azure Code Samples library includes samples for Azure Storage that you can download and run locally. The Code Sample Library provides sample code in .zip format. Alternatively, you can browse and clone the GitHub repository for each sample. .NET samples To explore the .NET samples, download the .NET Storage Client Library from NuGet. The .NET storage client library is also available in the Azure SDK for .NET. Java samples To explore the Java samples, download the Java Storage Client Library. Node.js samples To explore the Node.js samples, download the Node.js Storage Client Library. - Blob uploader - Upload and download blob - Continuation token - Retry policy - Shared access signature - Snapshot - Table query C++ samples To explore the C++ samples, download the C++ Storage Client Library from NuGet. API reference and source code Next steps The following articles index each of the samples by service (blob, file, queue, table).
https://docs.microsoft.com/en-us/azure/storage/common/storage-samples
2019-06-16T07:06:57
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
Restyled personalization toolbar Before: Personalization toolbar before Platform Update 22 The following image shows how the personalization toolbar appears in Platform Update 22. After: Personalization toolbar in Platform Update 22
https://docs.microsoft.com/en-us/business-applications-release-notes/October18/dynamics365-finance-operations/restyled-personalization-toolbar
2019-06-16T06:59:31
CC-MAIN-2019-26
1560627997801.20
[array(['media/oldpersonalizationtoolbar.png', 'Personalization toolbar before Platform Update 22 Personalization toolbar before Platform Update 22'], dtype=object) array(['media/restyledpersonalizationtoolbar.png', 'Personalization toolbar in Platform Update 22 Personalization toolbar in Platform Update 22'], dtype=object) ]
docs.microsoft.com
Running Your First VM in Cloud Pre-requisites No pre-installation is required for migration of Windows virtual machines (see release notes for supported versions). If the virtual machine you select runs Linux, you will need to first install the pre-requisite Velostrata preparation RPM package and its dependencies. Visit the Deploying Velostrata Prep Package for Linux Virtual Machines page to obtain the specific package and for installation instructions. Wait for the Cloud Extension to be marked Active in the Datacenter summary page, in the vCenter Web Client. You can now perform an end-to-end trial run of the system by selecting a virtual machine and running it in the cloud. Select a virtual machine designated for test purposes. - Right-click the Virtual Machine, select Velostrata operations > Run in Cloud. A wizard appears. - In the wizard, select the Velostrata Cloud Extension you have created. Click Next. - Select the GCP VM Size for the workload. Click Next. - For Storage policy, leave the default selection (Write back). Click Next. - Select a Subnet for the workload. Click Next. - Review the Summary page and then click Finish. - Notice that the VM icon in the virtual machines inventory changes and it will be marked as managed by Velostrata. - To monitor the run-in-cloud task progress, navigate to the Virtual Machine summary page and review the Cloud Instance Information portlet. Wait for the Remote Console probe to show Ready. Refer to the Velostrata User Guide for detailed instructions on Velostrata VM Operations. Access the VM in cloud To access the VM, now an instance running in GCP, we will need to make use of the jump server or PC you have prepared earlier with a static route to the VPN gateway. - Login or connect to the jump server using RDP. - RDP to the running instance on GCP using its FQDN (if dynamic DNS updates are enabled on the DNS server) or its Private IP address as shown in the Cloud Instance Information portlet. - Once logged into the virtual machine now in GCP, you may browse around, run your preferred test applications, or for example, create a test file on the desktop which you can inspect after moving the machine back on-premises, to experience the write-back feature. - You may review performance metrics for your virtual machine when running in cloud by selecting the Monitoring page in vCenter for the virtual machine, selecting the Cloud Instance tab, and then inspecting the various graphs shown, such as IOPS, IO latency, and so on. Additional monitoring graphs are available in the Datacenter > Monitoring > Velostrata Service tab. These statistics are at the Velostrata Cloud Extension level, and aggregate metrics across all virtual machines running in cloud using this Cloud Extension. Run the VM back on-premises. - In vCenter, right-click on the virtual machine, then select Velostrata operations > Run on Premises. A dialog appears. Read through the on-screen notes and confirm the operation. - When the Run on Premises task is complete, the virtual machine no longer appears as managed by Velostrata, and remains shut down. Start the virtual machine. - When the virtual machine is back online, connect to it using RDP or its virtual console, and verify that all changes made while running in cloud are in place. The Velostrata system is now ready for your PoC activities.
We recommend that you work with a Velostrata sales engineer or a support representative to effectively plan your PoC use cases, evaluation goals, and success criteria.
http://docs.velostrata.com/m/73203/l/807802-running-your-first-vm-in-cloud
2019-06-16T07:39:08
CC-MAIN-2019-26
1560627997801.20
[array(['https://media.screensteps.com/image_assets/assets/001/161/178/original/f9b0b36b-27f5-41a9-8a73-990cce591e01.png', None], dtype=object) array(['https://media.screensteps.com/image_assets/assets/001/161/179/original/b925c20c-1831-46bc-bf3c-c52ae3c0763a.jpg', None], dtype=object) ]
docs.velostrata.com
Power supply How to power Reach M+¶ The Emlid Reach M+ module can be powered using the Micro-USB port or a JST-GH port. JST-GH ports¶ Reach can be powered by providing 5 Volts to the corresponding pins on either of the two JST-GH ports. When Reach is powered over a JST-GH port it will pass power to devices connected to the Micro-USB OTG port, such as flash drives, 3G/4G modems, USB radios, etc.
https://docs.emlid.com/reachm-plus/power-supply/
2019-06-16T07:17:33
CC-MAIN-2019-26
1560627997801.20
[array(['../img/reachm-plus/power-supply/wrong-power-supply.png', None], dtype=object) array(['../img/reachm-plus/power-supply/usb-power-supply.png', None], dtype=object) array(['../img/reachm-plus/power-supply/power-supply-options.png', None], dtype=object) array(['../img/reachm-plus/power-supply/jst-gh-power-supply.png', None], dtype=object) ]
docs.emlid.com
This page provides information on the Volumetric Grid Input. Overview This page contains information on input channels that can be loaded and re-mapped for a VRayVolumeGrid. The input file loaded for the Volumetric Grid must be generated from the source program with channels mapped in such a way that VRayVolumeGrid can read them correctly. The input file is selected when the VRayVolumeGrid is first created, or in the Input rollout on the Modify panel. Supported Channels VRayVolumeGrid is based on Phoenix FD, and at its core it supports the following channels: Liquid/Temperature, Smoke, Velocity, Speed, RGB, Fuel. Different applications use different channels and names for them. When loading f3d/vdb files, VRayVolumeGrid tries to automatically make the conversion to the supported channels. If a channel is not mapped by default, it can be mapped manually. Manually equalizing the render settings with FumeFX Certain FumeFX settings can be optimized to keep render times as low as possible: Distributed Rendering with VRayVolumeGrid A common problem with setting up distributed rendering for VRayVolumeGrid is that the rendering process might look for the cache files in a local machine directory. However, at the start of a network render, the scene file is copied to all render machines on the network to a new location, e.g. C:\Users\user\AppData\Local\backburner\Jobs\, while cache files are not automatically sent to the host machine. The cache files are not sent because they could potentially overload the host's disk space if they are very large, and also because in many cases not all of them are actually used in the rendering. This is why when the rendering begins, if the hosts are looking for the cache files in a directory that was originally on the local computer, the cache files won't be found. The solution is to move the cache sequence to a shared folder on the network, or a mapped network drive, and set its path in the Input rollout using a network-visible UNC input path (a path that starts with \\). You can also browse from the Input rollout's path options. If the path points to a drive on the local computer rather than a UNC path, a message will appear: "You are using local machine Input path with distributed rendering!".
https://docs.chaosgroup.com/pages/diffpagesbyversion.action?pageId=38572110&selectedPageVersions=1&selectedPageVersions=2
2019-06-16T06:30:40
CC-MAIN-2019-26
1560627997801.20
[]
docs.chaosgroup.com
Dependency Tracking in Azure Application Insights A dependency is an external component that is called by your app. Setup automatic dependency tracking in Console Apps To automatically track dependencies from .NET/.NET Core console apps, install the Nuget package Microsoft.ApplicationInsights.DependencyCollector, and initialize DependencyTrackingTelemetryModule as follows: DependencyTrackingTelemetryModule depModule = new DependencyTrackingTelemetryModule(); depModule.Initialize(TelemetryConfiguration.Active); This document focuses on dependencies from server components. Let's walk through an example of that. Tracing from requests to dependencies Open the Performance blade, and look at the grid of requests: The top one is taking long. Let's see if we can find out where the time is spent. Click that row to see individual request events: Click any long-running instance to inspect it further, and scroll down to the remote dependency calls related to this request: It looks like most of the time servicing this request was spent in a call to a local service. Select that row to get more information: Looks like this dependency is where the problem is. We've pinpointed the problem, so now we just need to find out why that call is taking so long. Request timeline In a different case, there is no dependency call that is particularly long. But by switching to the timeline view, we can see where the delay occurred in our internal processing: There seems to be a large gap after the first dependency call, so we should look at our code to see why that is. Profile your live site No idea where the time goes? The Application Insights profiler traces HTTP calls to your live site and shows you the functions in your code that took the longest time. Failed requests Failed requests might also be associated with failed calls to dependencies. Again, we can click through to track down the problem. Click through to an occurrence of a failed request, and look at its associated events. Next steps
https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-dependencies
2019-06-16T08:17:05
CC-MAIN-2019-26
1560627997801.20
[array(['media/asp-net-dependencies/02-reqs.png', 'List of requests with averages and counts'], dtype=object) array(['media/asp-net-dependencies/03-instances.png', 'List of request occurrences'], dtype=object) array(['media/asp-net-dependencies/04-dependencies.png', 'Find Calls to Remote Dependencies, identify unusual Duration'], dtype=object) array(['media/asp-net-dependencies/05-detail.png', 'Click through that remote dependency to identify the culprit'], dtype=object) array(['media/asp-net-dependencies/04-1.png', 'Find Calls to Remote Dependencies, identify unusual Duration'], dtype=object) array(['media/asp-net-dependencies/06-fail.png', 'Click the failed requests chart'], dtype=object) array(['media/asp-net-dependencies/07-faildetail.png', 'Click a request type, click the instance to get to a different view of the same instance, click it to get exception details.'], dtype=object) ]
docs.microsoft.com
ToolStripComboBox.EndUpdate Method Definition Resumes painting the ToolStripComboBox control after painting is suspended by the BeginUpdate() method. public: void EndUpdate(); public void EndUpdate (); member this.EndUpdate : unit -> unit Public Sub EndUpdate () Remarks The preferred way to add items to the ToolStripComboBox is to use the AddRange method through the Items property of the ToolStripComboBox. This enables you to add an array of items to the list at one time. However, if you want to add items one at a time using the Add method, you can use the BeginUpdate method to prevent the control from repainting the ToolStripComboBox each time an item is added to the list. Once you have completed the task of adding items to the list, call the EndUpdate method to enable the ToolStripComboBox to repaint. This way of adding items can prevent flicker during the drawing of the ToolStripComboBox when a large number of items are being added to the list.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.toolstripcombobox.endupdate?view=netframework-4.8
2019-06-16T08:01:05
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
concurrent_vector::grow_to_at_least Method Grows this concurrent vector until it has at least _N elements. This method is concurrency-safe. iterator grow_to_at_least( size_type _N ); Parameters - _N The new minimum size for the concurrent_vector object. Return Value An iterator that points to beginning of appended sequence, or to the element at index _N if no elements were appended. Requirements Header: concurrent_vector.h Namespace: concurrency
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2013/ee355373%28v%3Dvs.120%29
2019-06-16T06:59:52
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
SGX Infrastructure design¶ Important This design document describes a feature of Corda Enterprise. This document is intended as a design description of the infrastructure around the hosting of SGX enclaves, interaction with enclaves and storage of encrypted data. It assumes basic knowledge of SGX concepts, and some knowledge of Kubernetes for parts specific to that. High level description¶ The main idea behind the infrastructure is to provide a highly available cluster of enclave services (hosts) which can serve enclaves on demand. It provides an interface for enclave business logic that's agnostic with regard to the infrastructure, similar to serverless architectures. The enclaves will use an opaque reference to other enclaves or services in the form of enclave channels. Channels hide attestation details and provide a loose coupling between enclave/non-enclave functionality and specific enclave images/services implementing it. This loose coupling allows easier upgrade of enclaves, relaxed trust (whitelisting), dynamic deployment, and horizontal scaling as we can spin up enclaves dynamically on demand when a channel is requested. For more information see: Infrastructure components¶ Here are the major components of the infrastructure. Note that this doesn't include business logic specific infrastructure pieces (like ORAM blob storage for Corda privacy model integration). Infrastructure interactions¶ - Enclave deployment: This includes uploading of the enclave image/container to enclave storage and adding of the enclave metadata to the key-value store. - Enclave usage: This includes using the discovery service to find a specific enclave image and a host to serve it, then connecting to the host, authenticating (attestation) and proceeding with the needed functionality. - Ops: This includes management of the cluster (Kubernetes/Kubespray) and management of the metadata relating to discovery to control enclave deployment (e.g. canary, incremental, rollback). Decisions to be made¶ Further details¶ Example deployment¶ This is an example of how two Corda parties may use the above infrastructure. In this example R3 is hosting the IAS proxy and the enclave image store and the parties host the rest of the infrastructure, aside from Intel components. Note that this is flexible: the parties may decide to host their own proxies (as long as they whitelist their keys) or the enclave image store (although R3 will need to have a repository of the signed enclaves somewhere). We may also decide to go the other way and have R3 host the enclave hosts and the discovery service, shared between parties (if e.g. they don't have access to SGX capable boxes or don't want to maintain them).
https://docs.corda.net/design/sgx-infrastructure/design.html
2019-09-15T07:25:26
CC-MAIN-2019-39
1568514570830.42
[array(['../../_images/ExampleSGXdeployment.png', 'Example SGX deployment'], dtype=object) ]
docs.corda.net
Description Minipapers provide a simple way to embed a miniaturized version of a flipbook on your website. The following view options are available: - Perspective—displays a three-dimensional flipbook - Single page—displays a single page on the flipbook - Spread—displays a spread on the flipbook - Page slider—displays an animated slider showing pages and spreads of a pre-defined page range Once integrated on your website, clicking on a minipaper will redirect the user to the actual flipbook being displayed. Redirecting to an alternative URL It is possible to override the default link action and target when a user clicks on the embedded minipaper. This behavior can be configured by appending relevant query strings to the URL found in the src attribute of the <iframe> element in the embed code. For example, given the following embed snippet: <!-- iPaper snippet start --> <iframe style="display: block;" src="<MinipaperEmbedUrl>" scrolling="no" frameborder="0"></iframe> <!-- iPaper snippet end --> The URL to be modified will be the string <MinipaperEmbedUrl>. The following query strings are supported: If you want multiple query string keys to be attached to the URL, you can separate them using the & character, i.e.: <MinipaperEmbedUrl>?targetUrl=<Url>&target=<Target>. The order of the queries does not matter. Preserving aspect ratio If you want to preserve the aspect ratio of the embedded minipaper, so that it is scaled proportionately in a page with responsive design, we suggest using the padding-bottom trick, which will force an aspect ratio of your choice on the surrounding container. The element displaying the minipaper—the <iframe> element—should then be positioned absolutely relative to this container. For example, if a 2:1 ratio is desired for a container, padding-bottom: 50% should be used: <div style="position: relative; width: 100%; padding-bottom: 50%;"> <!-- iPaper snippet start --> <iframe style="display: block; width: 100%; height: 100%;" src="<MinipaperEmbedUrl>" scrolling="no" frameborder="0"></iframe> <!-- iPaper snippet end --> </div>
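As a small Python sketch of the query-string handling described above: it appends the targetUrl and target keys (the only key names shown in the example) to the embed URL with proper escaping. The embed URL, landing page, and "_blank" target are placeholder values for illustration only.

from urllib.parse import urlencode

def build_minipaper_src(embed_url, target_url=None, target=None):
    # Append the optional override query strings to the minipaper embed URL.
    params = {}
    if target_url is not None:
        params["targetUrl"] = target_url   # key name taken from the example above
    if target is not None:
        params["target"] = target          # e.g. "_blank" (assumed value)
    return embed_url + ("?" + urlencode(params) if params else "")

# Placeholder embed URL; the real value comes from the iPaper snippet.
print(build_minipaper_src("https://example.com/minipaper", "https://example.com/landing", "_blank"))

The resulting string would be placed in the src attribute of the <iframe> element shown in the embed snippet.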
https://docs.ipaper.io/integration/minipapers
2019-09-15T07:27:09
CC-MAIN-2019-39
1568514570830.42
[]
docs.ipaper.io
All content with label amazon+gridfs+import+infinispan+infinispan_user_guide+listener+migration+out_of_memory+snapshot+transaction. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, recovery, transactionmanager, dist, release, query, deadlock, archetype, lock_striping, nexus, guide, schema, cache, s3, grid, test, api, xsd, ehcache, maven, documentation, userguide, 缓存, ec2, hibernate, aws, interface, custom_interceptor, setup, clustering, eviction, concurrency, jboss_cache, index, events, configuration, hash_function, batch, buddy_replication, loader, xa, write_through, cloud, mvcc, tutorial, notification, jbosscache3x, read_committed, xml, distribution, meeting, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, permission, websocket, async, interactive, xaresource, build, searchable, demo, scala, installation, ispn, client, non-blocking, jpa, filesystem, tx, user_guide, gui_demo, eventing, client_server, testng, standalone, hotrod, webdav, repeatable_read, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, jsr-107, docbook, jgroups, lucene, locking, hot_rod more » ( - amazon, - gridfs, - import, - infinispan, - infinispan_user_guide, - listener, - migration, - out_of_memory, - snapshot, - transaction )
https://docs.jboss.org/author/label/amazon+gridfs+import+infinispan+infinispan_user_guide+listener+migration+out_of_memory+snapshot+transaction
2019-09-15T08:09:36
CC-MAIN-2019-39
1568514570830.42
[]
docs.jboss.org
All content with label cloud+eventing+gridfs+import+infinispan+installation+jsr-107+post+repeatable_read+store+wcm+xsd. Related Labels: podcast, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, query, deadlock, intro, future, archetype, jbossas, lock_striping, nexus, guide, schema, editor, listener, cache, amazon, s3, grid, test, jcache, api, ehcache, maven, documentation, youtube, page, write_behind, ec2, 缓存, s, hibernate, getting, aws, templates, interface, custom_interceptor, setup, clustering, eviction, template, concurrency, out_of_memory, examples, jboss_cache, tags, index, events, configuration, hash_function, batch, buddy_replication, loader, write_through, mvcc, tutorial, notification, presentation, xml, read_committed, jbosscache3x, distribution, composition, started, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, permission, websocket, async, transaction, interactive, xaresource, build, gatein, categories, demo, client, migration, non-blocking, jpa, filesystem, design, tx, gui_demo, content, client_server, testng, infinispan_user_guide, standalone, hotrod, webdav, snapshot, tasks, docs, batching, consistent_hash, jta, faq, 2lcache, as5, downloads, docbook, jgroups, lucene, locking, rest, uploads, hot_rod more » ( - cloud, - eventing, - gridfs, - import, - infinispan, - installation, - jsr-107, - post, - repeatable_read, - store, - wcm, - xsd )
https://docs.jboss.org/author/label/cloud+eventing+gridfs+import+infinispan+installation+jsr-107+post+repeatable_read+store+wcm+xsd
2019-09-15T08:41:25
CC-MAIN-2019-39
1568514570830.42
[]
docs.jboss.org
Configuration The steps below summarize how to configure the Token Server as an OpenID Provider. Configure JWT keys for signing on the Token Server. Enable an OpenID compliant grant type - either Authorization Code or Implicit. Enable OpenID Connect by adding the openid scope to either Default Scopes or Additional Scopes and configure OpenID specific settings as described in Enabling OpenID Connect capability. Identity Provider A Relying Party may request additional claims as specified in OpenID Connect scopes. To enable the Token Server to retrieve this information, it is required to enable User Info as described in Configure User Info Endpoint for the Identity Provider associated with the web client.
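For context, here is a generic Python sketch of the request a Relying Party would make once the openid scope and the Authorization Code grant are enabled as described above. It follows the standard OpenID Connect authorization request, not any Onegini-specific API; the endpoint path, client_id, redirect_uri, and state values are placeholders.

from urllib.parse import urlencode

# Placeholder values; the real authorization endpoint and client credentials
# come from the Token Server configuration described above.
AUTHORIZE_ENDPOINT = "https://token-server.example.com/oauth/authorize"

def build_oidc_authorization_url(client_id, redirect_uri, state):
    # Authorization Code grant request that includes the openid scope.
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",   # marks the request as OpenID Connect
        "state": state,
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params)

print(build_oidc_authorization_url("demo-client", "https://rp.example.com/callback", "xyz123"))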
https://docs.onegini.com/msp/token-server/10.1.1/topics/oidc/configuration/configuration.html
2019-09-15T07:32:38
CC-MAIN-2019-39
1568514570830.42
[array(['img/jwt-key-configuration.png', 'JWT Key configuration'], dtype=object) array(['img/jwt-key-signing-algorithm.png', 'Signing algorithm'], dtype=object) ]
docs.onegini.com
Defining bad words You can define and manage bad words in the Bad words application. Adding a bad word - Click New bad word. - On the New bad word page, enter the following details: - Bad word - a string that must not appear in the input text. - Bad word is a regular expression - if enabled, the string entered into the previous field is searched for as a regular expression. - Match whole word - if enabled, only whole word occurrences of the expression are identified. If disabled, even words with substrings matching the expression are identified. - Action - action that the system performs if the bad word is detected. - Use default settings - if enabled, global value is used as specified in Settings -> Security & Protection -> Bad words -> Bad word action. - Replace with - if the Replace action is selected, defines the substitute for the bad word. - Use default settings - if enabled, global value is used as specified in Settings -> Security & Protection -> Bad words -> Bad word replacement. - Click Save. The system now checks all user inputs for occurrences of the defined bad word or regular expression. When a user input matches the bad word definition, the system performs the specified action. The Event log also logs an event with the BADWORD event code. Selecting bad word language You can define in which cultures the system filters a certain bad word. In the Bad words application, edit a bad word and switch to its Cultures tab. Here, the system offers the following options: - The word is not allowed in all cultures - the system filters the bad word in all website cultures. - The word is not allowed only in following cultures - the system filters the bad word only in selected website cultures. Use the Add cultures and Remove buttons to modify the list of cultures. The Cultures tab is not available when adding a new bad word. Save the new bad word before editing its cultures.
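To make the "Match whole word" distinction above concrete, here is a small Python sketch using regular expressions. It is a generic illustration of the whole-word versus substring behavior, not Kentico's actual implementation, and the sample words are invented.

import re

def find_bad_word(text, bad_word, match_whole_word=True):
    # Illustrates the 'Match whole word' option; not Kentico's implementation.
    pattern = re.escape(bad_word)
    if match_whole_word:
        pattern = r"\b" + pattern + r"\b"   # only standalone occurrences
    return re.search(pattern, text, flags=re.IGNORECASE) is not None

print(find_bad_word("a classic example", "class"))                          # False: whole-word only
print(find_bad_word("a classic example", "class", match_whole_word=False))  # True: substring match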
https://docs.kentico.com/k12/community-features/bad-words/defining-bad-words
2019-09-15T08:39:09
CC-MAIN-2019-39
1568514570830.42
[]
docs.kentico.com
the Modelers and Developer Portal. The processes involving the Web Modeler described here apply to collaborative working in Mendix versions 7.23.3 and above. Mendix versions 7.23.2 and below use a different method to sync work between the Web and Desktop Modelers. See Sync the Web Modeler & Desktop Modeler for more information. 2 Concepts 2.1 Team Server The Team Server is the central repository where the revisions of your app are stored. For more information, see section App Team Roles in Company & App Roles. 2.3 Revision A revision is a version of your app stored in the Team Server repository. A new revision is created from the Desktop Modeler in two circumstances: - The app is committed to the Repository - A Desktop Modeler working copy is updated from the Web Modeler working copy 2.4 Working Copy A working copy is the version of your app which is currently being worked on in the Modelers. For the Desktop Modeler, there is one working copy for each development line of the app. This model is held locally, on each computer where development work is taking place. For the Web Modeler, there is one additional working copy, held in the cloud. Only one developer at a time can edit this. 2.5 Merge Merging is the action of taking one revision of an app and applying the differences which have been made in a different revision. See section 4.3, Merging Branches for more information. If any of the differences cannot be applied, then there is a conflict. 2.6 Conflict A conflict occurs when two versions of the app cannot be combined automatically. This happens when the same document has been changed in the Desktop Modeler working copy and a committed revision and these changes cannot be reconciled. Examples are the following: 2.7 Update Updating is the action, invoked in the Desktop Modeler, which gets the latest revision of the current development line from the Team Server repository and merges the differences into the current working copy. If the Web Modeler is enabled for this development line, the process first ensures that the Web Modeler working copy is stored as a new revision. 2.8 Commit Committing is the action, invoked in the Desktop Modeler, of sending all your changes to the repository and making a new revision. If the Web Modeler is enabled for this development line, the process first ensures that the Web Modeler working copy is stored as a new revision and merged into the working copy of the Desktop Modeler. If there are no conflicts, the changes are then sent to the repository to make a new revision. 2.9 Development Line Development of an app is done in a Development Line where a set of related changes is made. There are two types of Development Line: Main Lines and Branch Lines. See section 4, Branches, for more information on how branch lines can be used. 2.10 Web Modeler Enabled You may enable the Web Modeler for one of the development lines. This means that a developer can make changes to the app through the Web Modeler and share changes with the team. All changes will be linked to the selected branch and committed as revisions to that branch. Changes made to other development lines will not be available in the Web Modeler. The Web Modeler cannot be used to develop the app if it is not enabled for any development lines. For starter apps created via the Developer Portal, the main line of a new app will be Web Modeler enabled. 3 Version Control Processes for a Single Branch The figure below shows how two developers might work on a Web Modeler enabled development line of an app. One developer is working in the Web Modeler, and one in the Desktop Modeler. They both work on the same development line (for example, the Main Line). 3.1 Work in Web Modeler Only The developer works on the app in the Web Modeler.
They start with the app in state 1; this can be a new app or a revision of the app. Changes are made continuously to the working copy for the Web Modeler, stored in the cloud. 3.2 Work in Desktop Modeler Only Another (or the same) developer opens the app for the first time in the Desktop Modeler. A new revision (state 2) is created on the Team Server from the current state of the Web Modeler working copy. It is downloaded to the local machine as the working copy for the Desktop Modeler. The Web Modeler is locked temporarily so that the Web Modeler working copy is stable while it is copied. The developer works in the Desktop Modeler on the local working copy of the app. There is no work done in the Web Modeler in this scenario. The developer can commit this to the Team Server repository at any time to make a new revision (state 3). This revision is copied into the Web Modeler working copy and the developer using the Web Modeler will get the changes automatically. 3.3 Work in Both Modelers Two developers are working on the same development line of the same app at the same time. One is using the Desktop Modeler, the other is using the Web Modeler. Changes from both Modelers are stored in the respective working copies: on the local machine for the Desktop Modeler and in the cloud for the Web Modeler. 3.4 Update Desktop Modeler Working Copy The developer using the Desktop Modeler wants to include the changes made by the developer using the Web Modeler. They choose to update their working copy. All the changes from the Web Modeler working copy are put into a new revision on the Team Server (state 4). This revision is merged into the Desktop Modeler working copy. While the Desktop Modeler working copy is being updated, the Web Modeler is locked temporarily so that the Web Modeler working copy is stable while it is copied. This will also pick up changes from other developers using the Desktop Modeler, if they have committed changes to this branch. If there are conflicts, the developer using the Desktop Modeler will have to resolve them before they can commit the changes to the Team Server repository. 3.5 Commit Changes to Team Server Repository The developer using the Desktop Modeler wants to commit a new revision to the Team Server. This will enable the developer using the Web Modeler, or a different developer using the Desktop Modeler, to see and work with the changes the developer has made. It also means that the revision can be deployed to the cloud. The developer selects to commit, and the following things happen: - The Web Modeler is locked temporarily - The Web Modeler working copy is committed as a revision (restore point – state 5) - The revision just created (state 5) is merged with the Desktop Modeler working copy If there are no merge conflicts, the updated Desktop Modeler working copy is committed as a new revision (state 6) and the Web Modeler is updated to the new revision and unlocked. If there are conflicts, the developer using the Desktop Modeler will need to resolve these. The Web Modeler will be unlocked, without receiving any of the changes from the Desktop Modeler, while they do this. The developer using the Desktop Modeler then needs to commit again, and the process starts from the beginning (the Web Modeler is locked, ready for a new revision to be committed from the Web Modeler working copy), as described earlier in section 3, Version Control Processes for a Single Branch. Initially, developers using the Web Modeler only have access to the development line for which the Web Modeler is enabled.
They can be switched to another development line, however, by a developer using the Desktop Modeler. There are two cases for doing this. As in section 3, Version Control Processes for a Single Branch, there may be conflicts during the merge, and these will have to be resolved before you can commit the changes to your app. Note that errors can be introduced by the merge process even if no conflicts are identified during the merge. Errors are inconsistencies which are flagged in the Modelers and will prevent the app from being deployed. They could lead to a revision not being deployable, so it is important to check for errors after you have done a merge.
https://docs.mendix.com/refguide7/version-control
2019-09-15T07:31:47
CC-MAIN-2019-39
1568514570830.42
[array(['attachments/version-control/image1.png', None], dtype=object) array(['attachments/version-control/image2.png', None], dtype=object) array(['attachments/version-control/image3.png', None], dtype=object) array(['attachments/version-control/image4.png', None], dtype=object) array(['attachments/version-control/image5.png', None], dtype=object) array(['attachments/version-control/image6.png', None], dtype=object) array(['attachments/version-control/image7.png', None], dtype=object) array(['attachments/version-control/image8.png', None], dtype=object) array(['attachments/version-control/image9.png', None], dtype=object)]
docs.mendix.com
Current Series Release Notes¶ 12.0.0.0rc1-63¶ New Features¶ OS::Aodh::LBMemberHealthAlarm resource plugin is added to manage the Aodh loadbalancer_member_health alarm. Added a new config option server_keystone_endpoint_type to specify the keystone authentication endpoint (public/internal/admin) to pass into cloud-init data. If left unset the original behavior should remain unchanged. This feature allows the deployer to unambiguously specify the keystone endpoint passed to user provisioned servers, and is particularly useful where the deployment network architecture requires the heat service to interact with the internal endpoint, but user provisioned servers only have access to the external network. For more information see. 12.0.0.0rc1. 11.0.0.0rc1. Upgrade Notes¶ The ceilometer client plugin is no longer provided, due to the Ceilometer API no longer being available from Queens and the python-ceilometerclient library being unmaintained. 11.0.0.0b3¶ New Features¶. 11.0.0.0b1¶ Upgrade Notes¶ The database upgrade for the Heat Queens release drops 'watch_rule' and 'watch_data' tables from the heat database. OpenStack deployments, packagers, and deployment projects which deploy/package CloudWatch should take appropriate action to remove support. Security Issues¶ Heat no longer uses the standard Python RNG when generating values for the OS::Heat::RandomString resource, and instead relies on the system's RNG for that. Other Notes¶. 10.0.0.0b3¶ New Features¶. Upgrade Notes¶ The default policy.json file is now removed as we now generate the default policies in code. Please be aware of this when using that file in your environment. You can still generate a policy.yaml file if that's required in your environment. Deprecation Notes¶ The threshold alarm which uses the ceilometer API is deprecated in aodh since Ocata. Please use OS::Aodh::GnocchiAggregationByResourcesAlarm in place of OS::Aodh::Alarm. 10.0.0.0b2¶ New Features¶ Adds REST API support to cancel a stack create/update without rollback. The template validate API call now returns the Environment calculated by heat - this enables preview of the merged environment when using parameter_merge_strategy prior to creating the stack. Added a new schema property tags to parameters, to categorize parameters based on features. Deprecation Notes¶. 10.0.0.0b1¶ New Features¶ All developer, contributor, and user content from various guides in openstack-manuals has been moved in-tree and is published at. Known Issues¶ Heat does not work with keystone identity federation. This is a known limitation as heat uses keystone trusts for deferred authentication and trusts don't work with federated keystone. For more details check. Deprecation Notes¶ Hidden Designate resource plugins OS::Designate::Domain and OS::Designate::Record. Use OS::Designate::Zone and OS::Designate::RecordSet instead. Bug Fixes¶. 9.0.0.0rc1¶ 9.0.0.0b3¶ Prelude¶ Magnum recently changed terminology to more intuitively convey key concepts in order to align with industry standards. "Bay" is now "Cluster" and "BayModel" is now "ClusterTemplate". This release deprecates the old names in favor of the new. New Features¶ The 'contains' function was added, which checks whether the specified value is in a sequence. In addition, the new function can be used as a condition function. A new OS::Zun::Container resource is added that allows users to manage Docker containers powered by Zun. This resource will have an 'addresses' attribute that contains various networking information including the neutron port id.
This allows users to orchestrate containers with other networking resources (e.g. floating IP). New resource OS::Neutron::Trunk is added to manage Neutron Trunks. A new property, deployment_swift_data, is added to the OS::Nova::Server and OS::Heat::DeployedServer resources. The property is used to define the Swift container and object name that is used for deployment data for the server. If unset, the fallback is the previous behavior, where these values will be automatically generated. OS::Magnum::Cluster resource plugin added to support the magnum cluster feature, which is provided by the magnum cluster API. OS::Magnum::ClusterTemplate resource plugin added to support the magnum cluster template feature, which is provided by the magnum clustertemplates API. Added new section. Two new policies, soft-affinity and soft-anti-affinity, are now supported for the OS::Nova::ServerGroup resource. Resource attributes are now stored at the time a resource is created or updated, allowing for fast resolution of outputs without having to retrieve live data from the underlying physical resource. To minimise compatibility problems, the behaviour of the show attribute, the with_attr option to the resource show API, and stacks that do not yet use the convergence architecture (due to the convergence_engine being disabled at the time they were created) is unchanged - in each of these cases live data will still be returned. Support for managing the RBAC policy for the ‘qos_policy’ resource, which allows sharing a Neutron QoS policy with subsets of tenants. Deprecation Notes¶ Magnum terminology deprecations * OS::Magnum::Bay is now deprecated; use OS::Magnum::Cluster instead * OS::Magnum::BayModel is now deprecated; use OS::Magnum::ClusterTemplate instead Deprecation warnings are printed for old usages. Critical Issues¶ Since Aodh dropped support for combination alarms, OS::Aodh::CombinationAlarm is now marked as a hidden resource, directly inheriting from the None resource, which makes the resource do nothing when handling any action (other than delete). Please don’t use it. Old resources created with that resource type can still be deleted. It’s recommended to switch away from that resource type as soon as possible, since we will remove that resource soon. 9.0.0.0b2¶ New Features¶ The list_concat_unique function was added, which behaves identically to the list_concat function, concatenating several lists using python’s extend function, but makes sure there are no repeated items. The list_concat function was added, which concatenates several lists using python’s extend function. Allow setting or updating the tags for the OS::Neutron::Router resource. A new OS::Mistral::ExternalResource is added that allows users to manage resources that are not known to Heat by specifying in the template Mistral workflows to handle actions such as create, update and delete. New item key ‘allocate_network’ of ‘networks’ with allowed values ‘auto’ and ‘none’ for OS::Nova::Server, to support the ‘Give Me a Network’ nova feature. Specifying ‘auto’ would auto-allocate a network topology for the project if there is no existing network available; specifying ‘none’ means no networking will be allocated for the created server. This feature requires nova API micro version 2.37 or later and that the auto-allocated-topology API is available in the Neutron networking service. A new openstackclient plugin to use the python-openstacksdk library, and a neutron.segment custom constraint. A new OS::Neutron::Segment resource to create routed networks.
Availability of this resource depends on the availability of the neutron segment API extension. The resource OS::Neutron::Subnet now supports an optional segment property to specify a segment. The resource OS::Neutron::Net now supports an l2_adjacency attribute indicating whether L2 connectivity is available across the network or not. A ParameterGroups section is added to the nested stacks, for the output of the stack validate templates. Allow setting or updating the tags for the OS::Neutron::Net resource. Allow setting or updating the tags for the OS::Neutron::Port resource. Allow setting or updating the tags for the OS::Neutron::Subnet resource. Allow setting or updating the tags for the OS::Neutron::SubnetPool resource. Deprecation Notes¶ nova-network is no longer supported in OpenStack. Please use OS::Neutron::FloatingIPAssociation and OS::Neutron::FloatingIP in place of OS::Nova::FloatingIPAssociation and OS::Nova::FloatingIP. The AWS::EC2::EIP domain is always assumed to be ‘vpc’, since nova-network is not supported in OpenStack any longer. The ‘attachments’ attribute of OS::Cinder::Volume has been deprecated in favor of ‘attachments_list’, which has the correct type of LIST. This makes this data easier for end users to process. 9.0.0.0b1¶ New Features¶ Support for getting the webmks console URL for the OS::Nova::Server resource. This requires nova API version 2.8 or greater. The Pike version of HOT (2017-09-01) adds a make_url function to simplify combining data from different sources into a URL with correct handling for escaping and IPv6 addresses. 8.0.0.0b3¶ New Features¶ Designate v2 resource plugins OS::Designate::Zone and OS::Designate::RecordSet are newly added. A new resource plugin OS::Keystone::Domain is added to support the lifecycle of a keystone domain. A new resource OS::Neutron::Quota is added to manage neutron quotas. A new resource OS::Sahara::Job has been added, which allows creating and launching sahara jobs. A job can be launched with resource-signal. Custom constraints for all sahara resources added - sahara.cluster, sahara.cluster_template, sahara.data_source, sahara.job_binary, sahara.job_type. OS::Nova::Server now supports ephemeral_size and ephemeral_format properties for the block_device_mapping_v2 property. The ephemeral_size property is an integer that requires a flavor with an ephemeral disk size greater than 0. The ephemeral_format property is a string with allowed values ext2, ext3, ext4, xfs and, for Windows guests, ntfs; it is optional, and if it has no value, the default defined in the nova config file is used. 8.0.0.0b2¶ New Features¶ The OS::Aodh::CompositeAlarm resource plugin is added to manage the Aodh composite alarm, aiming to replace OS::Aodh::CombinationAlarm, which was deprecated in the Newton release. The resource mark unhealthy command now accepts either a logical resource name (as it did previously) or a physical resource ID to identify the resource to be marked unhealthy. New OS::Zaqar::Subscription and OS::Zaqar::MistralTrigger resource types allow users to attach to Zaqar queues (respectively) notifications in general, and notifications that trigger Mistral workflow executions in particular. 8.0.0.0b1¶ 7.0.0.0rc1¶ New Features¶. 7.0.0.0b3. Add a map_replace function that takes 2 arguments: an input map and a map containing a keys and/or values map. Key/value substitutions on the input map are performed based on the mappings passed in keys and values. 7.0.0.0b2¶ New Features¶. 7.0.0.0b1¶ New Features¶ Add template_dir to config. Normally heat has the template directory /etc/heat/templates. This change makes it more official.
In the future, this makes it possible to implement features like accessing templates directly from the global template environment. Adds a new ‘max_server_name_length’ configuration option which defaults to the prior upper bound (53) and can be lowered by users (if they need to, for example due to LDAP or other internal name limit restrictions). The OS::Glance::Image resource plug-in is updated to support tagging when an image is created or updated as part of a stack. The OS::Monasca::AlarmDefinition and OS::Monasca::Notification resource plug-ins are now supported by the heat community, as monasca became an official OpenStack project.
https://docs.openstack.org/releasenotes/heat/unreleased.html
2019-09-15T08:32:32
CC-MAIN-2019-39
1568514570830.42
[]
docs.openstack.org
chainer.initializers.GlorotNormal¶ - class chainer.initializers.GlorotNormal(scale=1.0, dtype=None)[source]¶ Initializes the array with a scaled Gaussian distribution. Each element of the array is initialized with a value drawn independently from a Gaussian distribution whose mean is 0 and whose standard deviation is \(scale \times \sqrt{\frac{2}{fan_{in} + fan_{out}}}\), where \(fan_{in}\) and \(fan_{out}\) are the number of input and output units, respectively. Reference: Glorot & Bengio, AISTATS 2010.
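As a minimal usage sketch (assuming only that Chainer is installed; the layer sizes 100 and 50 are arbitrary choices for illustration), the initializer can be passed to a link through its initialW argument, and the resulting weight scale follows the formula above.

import chainer.links as L
from chainer import initializers

# Initialize the weight matrix of a fully connected layer with GlorotNormal.
init = initializers.GlorotNormal(scale=1.0)
layer = L.Linear(100, 50, initialW=init)

# With fan_in=100 and fan_out=50, the standard deviation of W should be
# roughly scale * sqrt(2 / (100 + 50)), i.e. about 0.115.
print(layer.W.array.std())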
https://docs.chainer.org/en/latest/reference/generated/chainer.initializers.GlorotNormal.html
2019-09-15T08:27:44
CC-MAIN-2019-39
1568514570830.42
[]
docs.chainer.org
Creating a Facebook Account In August 2014 Facebook instituted a new process that alters the support for Facebook Query Language (FQL). This change has required Genesys to make some changes to the process of creating Facebook applications. As of the publication of this page, the following applies: - If you have a working Facebook application that you generated before August 2014, you can continue to use it until August 2016. Do not delete your application; otherwise you will lose compatibility with FQL. - If you do not have a working Facebook application that you generated before August 2014, do not create it. Instead, provide a Facebook User ID or the User name of your admin user to Genesys, and we will add your Facebook admin user to Genesys application as a "tester user," and thereby you will be able to obtain your own access token(s), as described in the procedure on this page. ImportantThe APIs and other features of social media sites may change with little warning. The information provided on this page was correct at the time of publication (8 January 2015). This procedure requires Google Chrome, Mozilla Firefox, or Microsoft Internet Explorer 8 or later. Procedure - If you do not have a Facebook account, create one. - Log in to Facebook as the admin user. - Click the icon with your profile picture in order to see your timeline, as shown in the Figure below: - On the resulting page, copy the URL from the browser. - If your Facebook admin User did not enter a username, the URL includes a long digital string; for example - If your Facebook admin User did enter a username, the URL will be more like - Send an e-mail to Genesys Customer Care with the following text or something similar: My Facebook User ID/User Name is contained in the following URL: Please add it as a tester user so that I can use the Genesys Facebook application for my Genesys Social Engagement Solution. - Genesys then adds your Facebook admin user as Tester User to the Genesys Facebook application. Now you can retrieve the Facebook access token: while logged into Facebook, your Facebook admin user must enter the following URL:? client_id=<application_ID_provided_by_genesys> &redirect_uri= &scope=public_profile,manage_pages,read_page_mailboxes,publish_actions &response_type=token - On the permissions page, click Allow. The resulting page displays the single word success. You are then prompted to set the visibility of posts published by Genesys Facebook driver on your behalf: "Genesys Application Name would like to post to Facebook for you. Who do you want to share these posts with?." Select from the dropdown: Public, Friends of Friends, Friends, and so on. - Make note of the long alphanumeric string following access_code= in the URL (excluding &expires=<number> at the end of the string)—this is your user access token. The access token may not stay visible in the browser’s address field for very long. If it disappears, you can retrieve it from the browser history (click Control-H). - Genesys strongly recommends using the Page Access Token as the main access token (options x-access-token and access-token). Doing so ensures that the Genesys Facebook driver communicates with Facebook on behalf of your particular Facebook Page, and that all posts, comments, and replies are published on behalf of that Page. The Page Access Token is required for Private Messaging monitors. - To check when the user access token expires, go to and enter in the access token from the previous step. 
- If your user access token is short lived, you can extend it by entering the following URL in a browser, substituting the Application Id for APPLICATION_ID, the Application Secret for APPLICATION_SECRET, and the token from step 7 for TOKEN:? grant_type=fb_exchange_token &client_id=APPLICATION_ID &client_secret=APPLICATION_SECRET &fb_exchange_token=TOKEN - This produces a screen with a single line of text consisting of access_token= followed by a long alphanumeric string. Make a note of this string. It is the value of the Social Messaging Server configuration option x-access-token. Important: &expires=<number> at the end of the string must not be included in the value of x-facebook-access-token. - You can always revoke your permissions for an application and start again from Step 6. To revoke the permissions for an application: - Open Graph explorer. - Select the application you want to deauthorize. - Enter the User access token you received previously. - Enter me/permissions. - Select DELETE and click Submit. Permissions For the permission codes to be used in the URL given in Step 6, Genesys recommends the following minimum list: public_profile,manage_pages,read_page_mailboxes,publish_actions For further details on permissions in the Facebook API v2.2, see. Next Steps: Deploy Social Messaging Server with a Facebook Channel This page was last modified on November 30, 2018, at 00:49.
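For reference, the token-extension step above can also be scripted. The following is a hedged sketch using Python's requests library; the Graph API host shown is an assumption (the page above elides the full URL), and APPLICATION_ID, APPLICATION_SECRET and TOKEN are placeholders you must substitute yourself.

import requests

APPLICATION_ID = "<application_id_provided_by_genesys>"   # placeholder
APPLICATION_SECRET = "<application_secret>"                # placeholder
TOKEN = "<short_lived_user_access_token>"                  # placeholder

# Assumed standard Graph API token endpoint; verify it against your setup.
response = requests.get(
    "https://graph.facebook.com/oauth/access_token",
    params={
        "grant_type": "fb_exchange_token",
        "client_id": APPLICATION_ID,
        "client_secret": APPLICATION_SECRET,
        "fb_exchange_token": TOKEN,
    },
)
response.raise_for_status()

# Older Graph API versions return plain text such as "access_token=...&expires=...";
# strip the &expires=<number> part, as noted above, before using the value.
body = response.text
long_lived_token = body.split("access_token=")[1].split("&")[0]
print(long_lived_token)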
https://docs.genesys.com/Documentation/ES/8.5.1/SMSolution/CreatingaFacebookApplication
2019-09-15T07:28:20
CC-MAIN-2019-39
1568514570830.42
[]
docs.genesys.com
-- access.lua
local function load_entity_key(api_key)
  -- IMPORTANT: the callback is executed inside a lock, hence we cannot terminate
  -- a request here, we MUST always return.
  local apikeys, err = kong.dao.keyauth_credentials:find_all({ key = api_key })
  if err then
    return nil, err
  end
  -- return the first matching credential, or nil if none was found
  return apikeys[1]
end

local querystring = kong.request.get_query()
local apikey = querystring.apikey

local credential, err = kong.cache:get("apikeys." .. apikey, nil, load_entity_key, apikey)
if err then
  return kong.response.exit(500, "Unexpected error: " .. err)
end

if not credential then
  -- no credentials in cache nor datastore
  return kong.response.exit(403, "Invalid authentication credentials")
end

-- set an upstream header if the credential exists and is valid
kong.service.request.set_header("X-API-Key", credential.apikey)

-- the entity schema, declaring the cache_key used for this entity
local SCHEMA = {
  primary_key = { "id" },
  table = "keyauth_credentials",
  cache_key = { "key" }, -- cache key for this entity
  fields = {
    id = { type = "id" },
    created_at = { type = "timestamp", immutable = true },
    consumer_id = { type = "id", required = true, foreign = "consumers:id" },
    key = { type = "string", required = false, unique = true }
  }
}

return { keyauth_credentials = SCHEMA }

-- looking the credential up again, this time deriving the cache key from the
-- schema's cache_key declaration via the dao
local apikey = kong.request.get_query().apikey
local cache_key = kong.dao.keyauth_credentials:cache_key(apikey)
local credential, err = kong.cache:get(cache_key, nil, load_entity_key, apikey)
if err then
  return kong.response.exit(500, "Unexpected error: " .. err)
end
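To make the pattern in the snippet above easier to follow, here is a short, language-agnostic sketch of the same read-through caching idea in Python (this is not Kong's API; the class and the fake datastore are illustrative assumptions): values are looked up in the cache first, and on a miss a loader callback fetches them from the datastore and the result, including a negative result, is memoized.

from typing import Callable, Optional

class EntityCache:
    """A toy read-through cache: get() runs the loader only on a cache miss."""

    def __init__(self):
        self._store = {}

    def get(self, key: str, loader: Callable[..., Optional[dict]], *args):
        if key not in self._store:
            # On a miss, call the loader (the datastore lookup) and memoize its
            # result -- even when it is None ("no such credential").
            self._store[key] = loader(*args)
        return self._store[key]

# Illustrative stand-in for the datastore.
FAKE_DB = {"my-key": {"apikey": "my-key", "consumer_id": "123"}}

def load_entity_key(api_key: str) -> Optional[dict]:
    # Like the Lua callback above, this must always return, never raise.
    return FAKE_DB.get(api_key)

cache = EntityCache()
credential = cache.get("apikeys.my-key", load_entity_key, "my-key")
print(credential)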
https://docs.konghq.com/0.14.x/plugin-development/entities-cache/
2019-09-15T08:18:55
CC-MAIN-2019-39
1568514570830.42
[]
docs.konghq.com
Dropper Tool Properties When you select the Dropper tool, its properties and options appear in the Tool Properties view. NOTE: To learn how to use the Dropper tool, see Picking a Colour with the Dropper Tool. Tool Options: - Sample All Layers: On bitmap layers, if strokes with transparency located on separate art layers overlap, the Dropper will pick the combination of the two colours. When disabled, the Dropper will pick the colour on the current art layer. - Do Not Pick Transparency: On bitmap layers, when enabled, the Dropper will pick the colour at 100% opacity even if the selection has some transparency.
https://docs.toonboom.com/help/harmony-15/paint/reference/tool-properties/droppper-tool-properties.html
2019-09-15T07:39:00
CC-MAIN-2019-39
1568514570830.42
[]
docs.toonboom.com
Trainer Extensions¶ In this section, you will learn about the following topics: How to create your own trainer extension. What is a trainer Extension?¶ An Extension is a callable object that takes a Trainer object as an argument. By adding an Extension to a Trainer using the extend() method, the Extension will be called according to the schedule specified by a trigger object (see the details in 1. trigger). The Trainer object contains all information used in a training loop, e.g., models, optimizers, updaters, iterators, datasets, etc. This makes it possible to change settings such as the learning rate of an optimizer. Write a simple function¶ You can make a new Extension by writing a simple function which takes a Trainer object as its argument. For example, when you want to reduce the learning rate periodically during training, an lr_drop extension can be written as follows:

def lr_drop(trainer):
    trainer.updater.get_optimizer('main').lr *= 0.1

Then you can add this function to a Trainer object via the extend() method.

trainer.extend(lr_drop, trigger=(10, 'epoch'))

It lowers the learning rate every 10 epochs by multiplying the current learning rate by 0.1. Write a function decorated with @make_extension¶ make_extension() is a decorator that adds some attributes to a given function. For example, the simple extension we created above can be written in this form:

@training.make_extension(trigger=(10, 'epoch'))
def lr_drop(trainer):
    trainer.updater.get_optimizer('main').lr *= 0.1

The difference between the above example and this one is whether it has a default trigger or not. In the latter case, lr_drop() has its default trigger, so that unless another trigger is specified via the extend() method, the trigger specified in make_extension() is used by default. The code below acts the same as the former example, i.e., it reduces the learning rate every 10 epochs.

trainer.extend(lr_drop)

There are several attributes you can add using the make_extension() decorator. 1. trigger¶ trigger is an object that takes a Trainer object as an argument and returns a boolean value. If a tuple in the form (period, unit) is given as a trigger, it will be considered as an IntervalTrigger that invokes the extension every period unit. For example, when the given tuple is (10, 'epoch'), the extension will run every 10 epochs. trigger can also be given to the extend() method that adds an extension to a Trainer object. The priority of triggers is as follows: When both extend() and a given Extension have triggers, the trigger given to extend() is used. When None is given to extend() as the trigger argument and a given Extension has a trigger, the trigger given to the Extension is used. When both trigger attributes in extend() and Extension are None, the Extension will be fired every iteration. See the details in the documentation of get_trigger() for more information. 2. default_name¶ An Extension is kept in a dictionary which is a property of a Trainer. This argument gives the name of the Extension. Users will see this name in the keys of the snapshot, which is a dictionary generated by serialization. 3. priority¶ As a Trainer object can be assigned multiple Extension objects, the execution order is defined according to the following three values: PRIORITY_WRITER: The priority for extensions that write some records to the observation dictionary. It includes cases where the extension directly adds values to the observation dictionary, or the extension uses the chainer.report() function to report values to the observation dictionary.
Extensions which write something to the reporter should go first, because other Extensions which read those values may be added. PRIORITY_EDITOR: The priority for extensions that edit the observation dictionary based on already reported values. Extensions which edit some of the reported values should go after the extensions which write values to the reporter, but before extensions which read the final values. PRIORITY_READER: The priority for extensions that only read records from the observation dictionary. This is also suitable for extensions that do not use the observation dictionary at all. Extensions which read the reported values should be fired after all the extensions which have other priorities, e.g., PRIORITY_WRITER and PRIORITY_EDITOR, because they should read the final values. See the details in the documentation of Trainer for more information. 4. finalizer¶ You can specify a function to finalize the extension. It is called once at the end of the training loop, i.e., when run() has finished. Write a class inherited from the Extension class¶ This is the way to define your own extension with the maximum degree of freedom. You can keep any values inside of the extension and serialize them. As an example, let's make an extension that drops the learning rate polynomially. It calculates the learning rate by this equation: \({\rm lr} = {\rm lr}_{init} \times \left(1 - \frac{t}{t_{\rm max}}\right)^{\rm power}\), where \(t\) is the current iteration and \(t_{\rm max}\) is the total number of iterations. The learning rate will be dropped according to the curve below with \({\rm power} = 0.5\):

class PolynomialShift(training.Extension):

    def __init__(self, attr, power, stop_trigger, batchsize=None,
                 len_dataset=None):
        self._attr = attr
        self._power = power
        self._init = None
        self._t = 0
        self._last_value = 0

        if stop_trigger[1] == 'iteration':
            self._maxiter = stop_trigger[0]
        elif stop_trigger[1] == 'epoch':
            if batchsize is None or len_dataset is None:
                raise ValueError(
                    'When the unit of \'stop_trigger\' is \'epoch\', '
                    '\'batchsize\' and \'len_dataset\' should be '
                    'specified to calculate the maximum iteration.')
            n_iter_per_epoch = len_dataset / float(batchsize)
            self._maxiter = float(stop_trigger[0] * n_iter_per_epoch)

    def initialize(self, trainer):
        optimizer = trainer.updater.get_optimizer('main')
        # ensure that _init is set
        if self._init is None:
            self._init = getattr(optimizer, self._attr)

    def __call__(self, trainer):
        self._t += 1
        optimizer = trainer.updater.get_optimizer('main')
        value = self._init * ((1 - (self._t / self._maxiter)) ** self._power)
        setattr(optimizer, self._attr, value)
        self._last_value = value

    def serialize(self, serializer):
        self._t = serializer('_t', self._t)
        self._last_value = serializer('_last_value', self._last_value)
        if isinstance(self._last_value, np.ndarray):
            self._last_value = self._last_value.item()

stop_trigger = (10000, 'iteration')
trainer.extend(PolynomialShift('lr', 0.5, stop_trigger))

This extension, PolynomialShift, takes five arguments. attr: The name of the optimizer property you want to update using this extension. power: The power of the above equation used to calculate the learning rate. stop_trigger: The trigger given to the Trainer object to specify when to stop the training loop. batchsize: The training mini-batch size. len_dataset: The length of the dataset, i.e., the number of samples in the training dataset. This extension calculates the number of iterations which will be performed during training by using stop_trigger, batchsize, and len_dataset, then stores it as a property _maxiter. This property will be used in the __call__() method to update the learning rate.
The initialize() method obtains the initial learning rate from the optimizer given to the Trainer object. The serialize() method stores or recovers the properties, _t (number of iterations) and _last_value (the latest learning rate), belonging to this extension.
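As a small follow-on sketch (assuming the trainer object and the PolynomialShift class defined above; the snapshot interval and the file name 'result/snapshot_iter_1000' are assumptions for illustration), the point of implementing serialize() is that the extension's state survives snapshotting, so a resumed run continues the same learning-rate schedule:

import chainer
from chainer.training import extensions

# Register the custom extension together with a periodic snapshot of the
# whole trainer state; PolynomialShift._t and _last_value are saved through
# the serialize() method shown above.
stop_trigger = (10000, 'iteration')
trainer.extend(PolynomialShift('lr', 0.5, stop_trigger))
trainer.extend(extensions.snapshot(), trigger=(1000, 'iteration'))

# To resume, load a previously written snapshot back into the trainer; the
# extension's counters are restored, so the schedule picks up where it stopped.
chainer.serializers.load_npz('result/snapshot_iter_1000', trainer)
trainer.run()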
https://docs.chainer.org/en/latest/guides/extensions.html
2019-09-15T08:34:45
CC-MAIN-2019-39
1568514570830.42
[array(['../_images/polynomial.png', '../_images/polynomial.png'], dtype=object) ]
docs.chainer.org
, click on the icon Pool of the top bar, or open the "Guest / Host / Pool" section in the left area of the Dashboard and click the right mouse button on "Pools". Then click "New Pool". A window will open to start the creation of the new Pool. Enter the name of the Pool and click "Next". You can select the Hosts you want to associate with the new Pool Press "Next". In this window you can select the amount of resources that the Pool will reserve from the Hosts to run its Guests: - CPU reservation block size, in number of vCPUs. - RAM reservation block size, in MB (although the slider shows RAM in GB to save space). - The amount of "reservation blocks" that this Pool will reserve from the Hosts. - Priority..
https://docs.flexvdi.com/display/V30/Creating+a+new+Pool
2019-09-15T07:34:27
CC-MAIN-2019-39
1568514570830.42
[]
docs.flexvdi.com
Configuring marketing automation Marketing automation helps you automate, optimize and analyze your campaigns that are promoted by emails. Moreover, Marketing automation allows you to nurture your website visitors and leads – represented by contacts in Kentico. To be able to work with processes and steps in the Marketing automation application, you need to have the permissions for the On-line marketing and Contact management modules. First, you need to enable on-line marketing functionality so that you can track contacts handled in the automation processes: Furthermore, you can set up the system to send an automated reminder to customers who leave a website and still have products in the shopping cart: Additionally, you can create custom action steps, which users can then incorporate into automation processes: You can also implement custom handlers to influence the processing of triggers in the automation processes: Features described on this page require the Kentico EMS license. Enabling on-line marketing You need to track contacts in order to leverage the functionality of the automation processes. To track contacts, enable on-line marketing: - Open the Settings application. - Navigate to On-line marketing. - Select the Enable on-line marketing check box. - Click Save.
https://docs.kentico.com/k11/on-line-marketing-features/configuring-and-customizing-your-on-line-marketing-features/configuring-marketing-automation
2019-09-15T08:39:34
CC-MAIN-2019-39
1568514570830.42
[]
docs.kentico.com
[OSEv3:children] masters nodes etcd Until OpenShift Container Platform 3.6, it was possible to deploy a cluster with an embedded etcd instance. This embedded etcd instance was deployed on your OpenShift Container Platform instance. Starting in OpenShift Container Platform version 3.7, this is no longer possible. This migration process performs the following steps: Stop the master service. Perform an etcd backup of embedded etcd. Deploy external etcd (on the master or new host). Perform a backup of the original etcd master certificates. Generate new etcd certificates for the master. Transfer the embedded etcd backup to the external etcd host. Start the external etcd from the transfered etcd backup. Re-configure master to use the external etcd. Start master. Additionally, the etcd API version since OpenShift Container Platform 3.6 defaults to v3. Also, since OpenShift Container Platform 3.7, v3 is the only version allowed. Therefore, older deployments with embedded etcd with the etcd API version v2 need to migrate to the external etcd first, followed by data migration, before they can be upgraded to OpenShift Container Platform 3.7. Migration to external RPM etcd or external containerized etcd is currently supported. A migration playbook is provided to automate all aspects of the process; this is the preferred method for performing the migration. You must have access to your existing inventory file with both the master and external etcd host defined in their separate groups. In order to perform the migration on Red Hat Enterprise Linux Atomic Host, you must be running Atomic Host 7.4 or later. Add etcd under the [OSEv3:children] section if it does not already exist: [OSEv3:children] masters nodes etcd Your inventory file is expected to have exactly one host in an [etcd] host group. In most scenarios, it is best to use your existing master, as there is no need for a separate host. Add an [etcd] host group to your inventory file if it does not already exist, and list the host to migrate your etcd to: [etcd] master1.example.com Pull the latest subscription data from Red Hat Subscription Manager (RHSM): # subscription-manager refresh To get the latest playbooks, manually disable the OpenShift Container Platform 3.6 channel and enable the 3.7 channel on the host you are running the migration from: # subscription-manager repos --disable="rhel-7-server-ose-3.6-rpms" \ --enable="rhel-7-server-ose-3.7-rpms" \ --enable="rhel-7-server-extras-rpms" \ --enable="rhel-7-fast-datapath-rpms" # yum clean all Run the embedded2external.yml playbook using your inventory file: # ansible-playbook [-i /path/to/inventory] \ /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-etcd/embedded2external.yml Successful completion of the playbook will show the following: INSTALLER STATUS ************************************** Initialization : Complete etcd Install : Complete To verify that the migration from embedded to external etcd was successful, run the following on the etcd host and check for an etcd process: # ps -aux | grep etcd etcd 22384 2.1 3.9 5872848 306072 ? Ssl 10:36 0:02 /usr/bin/etcd --name=master1.example.com --data-dir=/var/lib/etcd/ --listen-client-urls=
https://docs.openshift.com/container-platform/3.7/upgrading/migrating_embedded_etcd.html
2019-09-15T08:03:25
CC-MAIN-2019-39
1568514570830.42
[]
docs.openshift.com
New Panel ‘Monitoring’ added to list the violations for each policy. Added support to create and delete a datasource using the congress dashboard. Added support to enter rules in the policy language directly using the congress dashboard, if the user doesn’t need navigation support to create rules. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/releasenotes/congress-dashboard/pike.html
2019-09-15T07:40:56
CC-MAIN-2019-39
1568514570830.42
[]
docs.openstack.org
History of mathematics in China Summary of the presentation “Chinese mathematics” was defined by the Chinese in ancient times as the “art of calculation”. This art was both a practical and a spiritual one. Like in Europe, many traces of calculations and solutions of equations were found by archaeologists. Today, these archaeological discoveries enable us to assert that Chinese civilization was very advanced compared to the other civilizations in the field of mathematics. But how did Chinese mathematics evolve through the centuries, and on which concepts and discoveries were they precursors of modern mathematics? These numerical inscriptions contained both tally and code symbols which were based on a decimal system and they employed a positional value system. This proves that the Chinese were among the first civilizations to understand and efficiently use a decimal numeration system. Moreover, the ancient Chinese civilization was the first to discover many mathematical concepts, such as the number pi (π), the existence of zero, the magic squares or Pascal's triangle. All these discoveries, which nowadays constitute the fundamental bases of arithmetic, were discovered centuries later in the Occident. Then, during the 1st century A.D., the Chinese worked out the most famous of the mathematical treatises of ancient China, the "Jiuzhang Suanshu". This treatise, also called "Arithmetic in Nine Sections", is the most well-known and influential Chinese mathematical text. Outline of the presentation - Introduction. - Brief history. - The Chinese counting system. - Origins. - Schemes of notation. - Rod numeral system. - Traditional system (still used nowadays). - Complements. - Instruments to calculate. - Chinese counting boards. - The abacus. - The Chinese discoveries. - Computation of Pi. - Magic squares. - Pascal's triangle. - Chinese problems. - The broken bamboo problem. - The hundred fowl problem. - The rice problem. - Nine chapters on the mathematical art. - Land surveying. - Millet and rice. - Distribution by proportion. - Short width. - Civil engineering. - Fair distribution of goods. - Excess and deficit. - Calculation by square tables. - Right angled triangles. - Liu Hui. - Conclusion. Excerpts from the presentation [...] In Problem 32 an accurate approximation is given for π. This is discussed in detail in Liu Hui's biography. Chapter. [...] [...] These first eleven problems involve unit fractions and are all of the following type, where n = 12: Suppose a field has width 1/2 + 1/3 + ... + 1/n. What must its length be if its area. [...] [...]? [Answer: 15/74 of a day] Chapter Excess and Deficit. The 20 problems give a rule of double false position. Essentially, linear equations are solved by making two guesses at the solution, then computing the correct answer from the two errors. For example, to solve ax + b = c we try x = and instead of we get c + d. [...]
https://docs.school/matieres-scientifiques-et-technologiques/mathematiques/dissertation/histoire-mathematiques-chine-16237.html
2019-09-15T08:00:58
CC-MAIN-2019-39
1568514570830.42
[array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-TS.png', None], dtype=object) ]
docs.school
Create a resource role Define project-specific roles for team members based on their skills and competencies. Before you begin: Role required: resource_manager. Procedure: - Navigate to Resource > Resources > Resource Roles. - Click New. - On the Resource Role form, fill in the fields (see Table 1). - Click Submit. Table 1. Resource Role form: - Name: Name of the resource role. Note: The resource role name must be unique; it is not possible to create duplicate roles. - Hourly rate: Hourly rate for the resource role, used for calculating the task cost based on time worked. - Description: Detailed description of the resource role. Related tasks: Allocate with the Resource Allocations related list; Reject a resource plan from the Resource Plan form. Related reference: User resources and group resources.
https://docs.servicenow.com/bundle/istanbul-it-business-management/page/product/resource-management/task/create-resource-role.html
2019-09-15T08:44:43
CC-MAIN-2019-39
1568514570830.42
[]
docs.servicenow.com
This section explains the difference between a Key and an Id. Key as Path Parameter A key is an identifier that allows you to identify entities such as Devices or Variables. Entity Key parameters in the path are defined as <entity_key>, e.g. <device_key>. Entity Keys allow you to identify the entity instance with its corresponding id or label. The label identification should use the prefix ~. Example: //GET Request to Obtain a Specific Device //URL Definition<device_key>/ //Identification using Device Id //Identification using Device Label Id as Path Parameter An id is an identifier that allows you to identify entities such as Devices or Variables. Entity Id parameters in the path are defined as <entity_id>, e.g. <log_id>. Entity Ids allow you to identify the entity instance only by its id. Example: //GET Request to Obtain a Specific Event Log //URL Definition<event_key>/logs/<log_id>/ //Identification using Log Id<event_key>/logs/6d07d65aa7a4f169cc90/ Key as Body Parameter Certain POST requests have an entity parameter. A good example is the Create Event request. This request has organization as an organization_key parameter. Hence, when creating an event, the organization parameter can be sent in any of the following ways: //POST Request to Create an Event curl -X POST '' \ -H 'Content-Type: application/json' \ -H 'X-Auth-Token: oaXBo6ODhIjPsusNRPUGIK4d72bc73' \ -d '{ //Sending Organization using the Id "organization":"567890ff1d8472686e9abb4f" //Sending Organization using the Label "organization":"~first-organization" //Sending Organization as Object "organization":{"id":"567890ff1d8472686e9abb4f"} }' Id as Body Parameter It's simple: all entity parameters in the body are Keys, hence they can be identified with their id, but also as an object or with their label.
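For readers who prefer Python over curl, the following is a hedged sketch of the Create Event request above using the requests library. The events endpoint URL is a placeholder (the original page elides the host), the token value must be replaced with your own, and the three organization forms are alternatives, so only one should be sent at a time.

import requests

EVENTS_URL = "https://<your-ubidots-host>/api/<version>/events/"  # placeholder URL
headers = {
    "Content-Type": "application/json",
    "X-Auth-Token": "<your-token>",
}

# Identify the organization by id; alternatively use the label form
# {"organization": "~first-organization"} or the object form
# {"organization": {"id": "567890ff1d8472686e9abb4f"}}.
payload = {"organization": "567890ff1d8472686e9abb4f"}

response = requests.post(EVENTS_URL, json=payload, headers=headers)
print(response.status_code, response.text)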
https://docs.ubidots.com/reference/filters
2021-09-16T19:26:23
CC-MAIN-2021-39
1631780053717.37
[]
docs.ubidots.com
Date: Tue, 21 Jan 2003 01:43:54 +0100 From: "Simon L. Nielsen" <[email protected]> To: [email protected] Subject: Sanity check in ipfw(8) Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help --tmoQ0UElFV5VgXgH Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable Hello I recently found a problem where ipfw2 would allow the user to create firewall rules that does not make sense like (notice udp and setup) : ipfw add allow udp from any to any setup I filed a PR (bin/47120) with a "fix" since I thought this was a trivial change to check in the ipfw userland program for protocol when specifying options like setup, icmpoptions and the likes. The fix is not correct since I did not notice that it is possible to use multiple protocols with or statements. Now for the point :-)... Is it interesting to have the extra sanity check in ipfw(8) ? If it is I will try to make a patch that actually works, but if it isn't there is not much reason to make a new patch... Btw. could a committer take a quick look at bin/46785 which is a trivial change to ipfw -h. --=20 Simon L. Nielsen --tmoQ0UElFV5VgXgH Content-Type: application/pgp-signature Content-Disposition: inline -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.1 (FreeBSD) iD8DBQE+LJfJ8kocFXgPTRwRAjiRAKDFQbHvu/JsBWpaYfnnFeByUN1hKgCdFkQe 1Ocyh0OoEpye9wC5u/frlhk= =W8z8 -----END PGP SIGNATURE----- --tmoQ0UElFV5VgXgH-- To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-ipfw" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=3650+0+archive/2003/freebsd-ipfw/20030126.freebsd-ipfw
2021-09-16T18:54:58
CC-MAIN-2021-39
1631780053717.37
[]
docs.freebsd.org
Do not use batch request types in plug-ins and workflow activities Category: Usage, Reliability, Performance Impact potential: Medium Symptoms Due to their long-running nature, using ExecuteMultipleRequest or ExecuteTransactionRequest message request classes within the context of a plug-in or workflow activity expose sandbox-isolated plug-in types to the two-minute (120000ms) channel timeout exception and can degrade the user experience for synchronous registrations. Guidance Use these batch messages where code is being executed outside of the platform execution pipeline, such as integration scenarios where network latency would likely reduce the throughput and increase the duration of larger, bulk operations. More specifically, use them in the following scenarios: Use ExecuteMultipleRequest to bulk load data or external processes that are intentional about executing long-running operations (greater than two minutes). Use ExecuteMultipleRequest to minimize the round trips between custom client and Dynamics 365 servers, thereby reducing the cumulative latency incurred. Use ExecuteTransactionRequest for external clients that require the batch of operations to be committed as a single, atomic database transaction or rollback if any exception is encountered. Be aware of the potential for database blocking for the duration of the long-running transaction. Problematic patterns Below is an example usage of ExecuteMultipleRequest in the context of a plug-in. Warning This scenario should be avoided. public class ExecuteMultipleRequestInPlugin :); QueryExpression query = new QueryExpression("account") { ColumnSet = new ColumnSet("accountname", "createdon"), }; //Obtain the results of previous QueryExpression EntityCollection results = service.RetrieveMultiple(query); if (results != null && results.Entities != null && results.Entities.Count > 0) { ExecuteMultipleRequest batch = new ExecuteMultipleRequest(); foreach (Entity account in results.Entities) { account.Attributes["accountname"] += "_UPDATED"; UpdateRequest updateRequest = new UpdateRequest(); updateRequest.Target = account; batch.Requests.Add(updateRequest); } service.Execute(batch); } else return; } } This example includes usage of the type directly with the Execute method. The usage can be anywhere within the context of a plug-in or workflow activity execution. This might be in a method that is contained within the same or a separate class, as well. It isn't limited to being directly contained in the Execute method definition. Additional information ExecuteMultiple and ExecuteTransaction messages are considered batch request messages. Their purpose is to minimize round trips between client and server over high-latency connections. Plug-ins either execute directly within the application process or in close proximity when sandbox-isolated, meaning latency is rarely an issue. Plug-in code should be very focused operations that execute quickly and minimize blocking to avoid exceeding timeout thresholds and ensure a responsive system for synchronous scenarios. Simply submit each request directly instead of batching them and submitting as a single request. For example: foreach (request in requests) service.Execute(request) On the server side, the operations included in a batch request are executed sequentially and aren't done in parallel. This is the case even if the ExecuteMultipleSettings.ReturnResponses property is set to false. Developers tend to use batch requests in this manner assuming that it will allow for parallel processing. 
Batch requests won't accomplish this objective. Another common motivator is an attempt to ensure that each operation is included in a transaction. This is unnecessary because the plug-in is often already being executed within the context of a database transaction, negating the need to use the ExecuteTransaction message. See also Event Framework Run-time limitations Execute multiple requests using the Organization service Execute messages in a single database transaction
https://docs.microsoft.com/en-us/powerapps/developer/data-platform/best-practices/business-logic/avoid-batch-requests-plugin
2021-09-16T19:53:02
CC-MAIN-2021-39
1631780053717.37
[]
docs.microsoft.com
Specify.
http://docs.imsidesign.com/projects/Turbocad-2018-User-Guide-Publication/TurboCAD-2018-User-Guide-Publication/Rendering/UV-Mapping/UV-Mapping-Options/Packing-General/
2021-09-16T18:47:20
CC-MAIN-2021-39
1631780053717.37
[array(['../../Storage/turbocad-2018-user-guide-publication/packing-general-img0002.png', 'img'], dtype=object) ]
docs.imsidesign.com
On a Cloud: Improving User Experiences User Experience with Data Latency Here is a fairly simple technique that can really enable your company to focus on the most important areas of your application; by taking the user experience to heart, and understanding which components are critical, you can more easily hit the critical SLAs that your application needs to meet to make your users happy. API or No API? I am a firm believer in offloading as much server-side code as possible onto others' infrastructure. Since my adoption of the cloud, I have been leveraging internet-accessible disks to provide data sets to end users. By doing this, you enable better up-time and scalability. Connect with me on GitHub; or ask me questions on Twitter.
https://docs.microsoft.com/en-us/archive/blogs/cdndevs/on-a-cloud-improving-user-experiences
2020-02-17T02:32:59
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Publishing and Hosting Apps Developed in SnapDevelop Last Updated: Most of the content, except for a few places (except for a few settings and instructions) in this document has been updated against SnapDevelop 2019 R2. Introduction From within SnapDevelop, applications can be published using the following three options: - File System - Web Deploy - Docker Before you publish your project in SnapDevelop, you should be familiar with the pros and cons of each way of publication and select the desired way. Publishing a Project to a Local Folder with File System Prerequisites: SnapDevelop 2019 installed with the appropriate workloads: Universal C Runtime (CRT) Universal CRT is installed by default on Windows 10. On the other supported Windows platforms, you can follow the on-screen instructions to download and install the package when you install SnapDevelop. See Downloading and Installing UCRT. Microsoft .NET Framework 4.6.1 SDK. A project to publish Publishing to a Local Folder In Solution Explorer, right-click the project node and choose Publish (or use the Build > Publish menu item). On the popup window, select File System, specify the target location of publication and click Save to continue. Click Publish. Check the publication status in the Output window. Double-click the Startup.cs and change ConnectionStrings:key to ConnectionStrings:local. Double-click the appsettings.json and enter the data source, initial catalog, user ID and password. Click Run on the toolbar or press Ctrl + F5 to run the application. Publishing a Project to IIS with Web Deploy Prerequisites Before you use Web Deploy to publish an application, some requirements need to be satisfied on the server side and client side respectively. Requirements for Server Side Windows Server 2012 or 2016 Windows Server 2012 or 2016 is recommended. If your operating system is Windows 10, for example, you can use File System, rather than Web Deploy, to publish your project to your local folder.) Runtime & Hosting Bundle Download link for Runtime & Hosting Bundle: (Runtime 2.1.* | Windows | ASP.NET Core/.NET Core: Runtime & Hosting Bundle) .NET Core Installer Download link for .NET Core Installer: (SDK 2.1.* | Windows | .NET Core Installer: x64 | x86; install according to the version of your operating system). Note that .NET SDK 2.1.* needs to be installed when SnapDevelop is not installed on the server side. Requirements for Client Side Windows 7/8.1/10). Note that versions of Web Deploy must be the same on the server and client sides. Configuring Server Side Configuring IIS To configure IIS, you should: Launch Server Manager. Use the Add Roles and Features wizard from the Manage menu or the link in Server Manager. On the Server Roles step, check the boxes for Web Server (IIS) and Management Tools. On the Features step, check the boxes for .NET Framework 3.5 Features and .NET Framework 4.6 Features. Confirm the roles and features you select and then click Install to continue. View the results of the installations. Click Close if the installations are successful. Installing and Configuring Web Deploy Click the download link () to install Web Deploy. Enable the Web Management Service and Web Deployment Agent Service after Web Deploy is properly installed. To do so, you should: Launch the desktop application Services. Locate Web Management Service and Web Deployment Agent Service and, if necessary, right-click any one of them and then select Start. 
Enabling Remote Connections To enable remote connections, you should: Launch IIS Manager, select the name of your computer in the left column, and then double-click Management Service on the right side. Launch the Management Service. Click Stop first, select Enable remote connections and then click Apply. Configuring IIS Users There are two ways of user configuration, which are configuration of IIS Manager users and configuration of Windows users. Windows users are granted more permissions for publishing an application. Configuring IIS Manager Users To configure IIS Manager users, you should: Double-click IIS Manager Users so that you can go to the IIS Manager Users page. Add a user and then enter the user name and password. In IIS Manager, open the server's node in the Connections panel. Right-click the Sites folder. Select Add Website from the context menu. Provide a Site name and change the Physical path. Provide the Binding configuration and create the website by clicking OK. Expand the Sites folder. Select the newly created website and then double-click IIS Manager Permissions. In the IIS Manager Permissions dialog box, click Allow User so that a popup window appears. Choose IIS Manager and then click Select to select a desired user you added previously. Configuring Windows Users To configure Windows users, you should: Right-click the Sites folder in the Connections panel. Select Add Website from the contextual menu. Provide the Site name and change the Physical path. Provide the Binding configuration and create the website by clicking OK. Expand the Sites folder. Select the newly created website and double-click IIS Manager Permissions. On the IIS Manager Permissions page, click Allow User so that a popup window appears. Choose Windows and then click Select to continue. Specify the object type and location and enter the object name to select. Check the object name after you enter it. Select Basic Settings. Choose Connect as > Specific user > Set and then enter the user name and password. View the result of connection test by clicking Test Settings in step 7. Click OK to continue if the specified user credentials are valid. Performing the Publish Operations Take the following steps to deploy your application: Right-click the project you create and then select Publish to launch the publish wizard. Select Web Deploy and enter the corresponding information (including server IP address, site name, user name and password). Note that the user name can be either the IIS Web Deploy user name or the Windows user name. Click Validate Connection after the information is entered. Note that a green tick will appear if IIS and Web Deploy users are configured correctly. Click Save to continue. Check the configuration information and modify if necessary. Click Publish to publish your application to a local host or a remote server. Check the output message to see if the publication succeeds. Troubleshooting There are some common errors that may appear during server configuration. These errors are listed in the following paragraphs and corresponding solutions are offered. 1. "the site does not exist" error When the plug-in Web Deploy required for publication is to be installed, the Complete setup type must be installed so that SnapDevelop can successfully publish the application. Otherwise, the "the site does not exist" error will be reported since Web deployment agent service is not enabled in the other setup types of Web Deploy. 
Solution: Uninstall the current version of Web Deploy and then reinstall the Complete version. 2. "The Web server is configured to not list the contents of this directory" error Solution: Launch IIS -> select your website -> click Directory Browsing -> click Enable on the top right corner of the page. 3. Issues concerning IIS application pool permission There is sometimes the situation where data cannot be available after the publication of an application and the error code 500 is returned. This is due to the fact that the IIS application pool does not have permission to log in to the database. Solution 1: Change the application pool identifier and select a user who has permission. Solution 2: Change stdoutLogEnabled="false" to stdoutLogEnabled="true" in the web.config file that appears after the publication. 4. Error indicating that files are locked when an application is published to a remote site where an application has previously been published Solution 1: Close down the site and then launch the site after publication. Solution 2: Check the box for Remove additional files at destination when publishing. 5. Runtime error 500.30 with .NET Core 2.2d application published to IIS Solution: Change the method: Change InProcess to OutOfProcess in PropertyGroup in the application configuration file and then add ModelName. Error 502.5 may occur after the change of method. To resolve Error 502.5, you can change the .NET path of Web API as an absolute path and restart your computer. If Error 502.5 still remains, you can check the event log. If the log indicates that the startup process fails, this error can be caused by the lack of permissions. Change the permissions under the IIS application pool, right-click the website and click Advanced Settings. Choose Identity in Process Model and then check the radio button for the Custom account to enter the local administrator's user name and password. Publishing a Project with Docker Understanding Docker Basics Before you use Docker to publish your projects, you should have mastered the basics of Docker publication. Concepts Related to Docker You should first familiarize yourself with a variety of Docker-related concepts that will be used in this tutorial. Image An image is a read-only template that contains instructions for creating a Docker container. Container A container is an executable instance of an image, and it is used to wrap up an application into its own isolated package. The major difference between an image and a container is that a container contains top readable and writable layers. Registry A registry is where Docker images are stored. You can have your own private registry, or just use Docker Hub, a public registry that anyone can use. Docker is configured to automatically look for images on Docker Hub. When you execute such commends as docker pull or docker run, the required images are pulled from the configured registry. When you execute the docker push command, your image is pushed to the configured registry. Docker Engine Docker engine is a client-server application, which primarily consists of the following three components: A Command Line Interface (CLI) client. A REST API which specifies interfaces that applications can use to communicate with daemon and instruct it what to do. A server named daemon. Docker Daemon Docker daemon is a server that continuously runs on your host operating system. It listens for Docker API requests and manages such Docker objects as images and containers. 
You can connect a Docker client to a Docker daemon on the same computer, or to a Docker daemon on a remote computer. Engine API The Engine API is an HTTP API served by Docker Engine. It is used by the Docker client to communicate with the Engine. What the Docker client can do can be done with the API. Docker Host Docker host refers to a computer that runs the Docker engine. For detailed information about Docker and its related concepts, please visit Docker’s official website at. Benefits of Docker Docker enjoys a variety of advantages over the other deployment modes: Isolation; Dependencies in a container are independent of any other container that may be running and will not affect any installations on your computer. Docker Hub; Docker Hub accommodates a large number of available images that can be pulled very quickly, making the build process fast and simple. Reproducibility; The various specifications of a Docker container are stored in a Dockerfile. All images built from the same Dockerfile will work identically. Security; Isolating the various components of a large application into different containers can have security benefits. Common Docker Commands The following table lists some common Docker commands that you can run using the Command Prompt tool and explains the meanings of individual commands. Note You can view all necessary information about docker image or container if you add ‘-a’ to command ‘docker images’ or ‘docker ps’, and you can forcefully delete anything by adding ‘-f’ to command ‘docker rmi’ or ‘docker rm’. For more information about docker commands, please visit Docker’s official website at. Publishing a Project with Docker Engine When your project is ready for deployment, you can choose to publish the project in a Docker container. To publish with Docker, right-click on the project, select Publish from the context menu, and then select Docker. On the pop-up page, select Start to configure the various settings for Docker publication. Preparing for Docker Publication Docker supports publication using: Local Docker engine; Remote Docker engine. You can select either way to publish your project, depending on whether Docker is installed on your local computer. After you have made a decision on the way of publication, you must then decide where you want to publish your project. Currently Docker supports publication to: Docker Hub; Self-hosted registry. Once you have made all the decisions, you need to make preparations for publication accordingly. Preparing the Operating System Windows 10 (64-bit) with Hyper-V enabled (for local computer); CentOS Linux 7 or later (for remote computer). Preparing a Docker Engine You can choose to prepare a local Docker engine or a remote Docker engine, depending on whether you want to install Docker Desktop on your local computer. Local Docker Engine To use a local Docker engine to publish your project, you need to: Download Docker Desktop; Install Docker Desktop on your local computer. Before you can download Docker Desktop, you are required to sign up for Docker Hub first. Go to Docker’s official website () and follow the on-screen instructions to sign up for Docker Hub. Downloading Docker Desktop After you have successfully signed up for Docker Hub, you can download Docker Desktop at. Installing Docker Desktop When you have downloaded Docker Desktop successfully, you need to install Docker Desktop on your local computer. 
Remote Docker Engine
If you want to use a remote Docker engine to publish your project, you need to use a remote computer installed with CentOS Linux to:
- Install Docker;
- Register for a Docker ID (optional);
- Enable remote access to Docker;
- Generate certificates (optional).

Installing Docker
Refer to Get Docker Engine - Community for CentOS for instructions on how to install Docker.

Registering for a Docker ID (Optional)
If you want to publish your project to Docker Hub, you need to register for a Docker ID. Refer to How do you register for a Docker ID? for instructions on how to register for a Docker ID.

Enabling Remote Access to Docker
Refer to How do I enable the remote API for dockerd for instructions on how to enable remote access to Docker.

Generating Certificates (Optional)
If you want your Docker to be accessible through the network in a safe manner, you can enable TLS by specifying the tlsverify flag and pointing Docker's tlscacert flag to a trusted CA certificate. Refer to the Create a CA, server and client keys with OpenSSL section in Protect the Docker daemon socket for instructions on how to generate certificates.

Preparing a Docker Registry
After you have installed Docker, you must then decide where you want to publish your project. You can publish your project to Docker Hub, or to a self-hosted Docker registry. Publishing a project to Docker Hub may raise security concerns since Docker Hub is a public registry. If you have such concerns, you can publish your project to a private registry URL.

Self-hosted Docker Registry
To publish your project to a self-hosted Docker registry, you need to:
- Pull a Docker registry from Docker Hub;
- Start the Docker registry;
- Configure Docker daemon;
- Configure user name and password for connection to the registry.

Pulling Docker Registry
You can use the docker pull or docker load command to pull a registry from Docker Hub, or to load a registry if the registry image already exists on your computer.
Run the following command to pull a registry from Docker Hub:
docker pull registry
Run the following command to load a registry image from a local file:
docker load -i <path to the image archive>
For example:
docker load -i \\172.16.0.95\share\docker\linux\registry.rar

Starting Docker Registry
When a registry is successfully pulled or loaded, you need to run the following command to start the registry.
docker run -d -v /registry:/home/docker-registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

Configuring Docker Daemon
You need to configure the Docker daemon differently, depending on the operating system of the computer installed with Docker.

Configuring Docker Daemon (for Windows 10)
To configure the Docker daemon, you need to:
- Run Docker Desktop.
- Click the Docker icon at the bottom right corner of the desktop and then select Settings.
- Select Daemon on the Settings page.
- Configure Insecure Registries. Here, you need to configure the registry URL to which you want to publish your project. For example, you can enter registry.hub.docker.com if you want to publish your project to Docker Hub. For security considerations, it is recommended that you publish your project to a private registry URL. When you pulled a registry to a computer by running the docker pull registry command, the registry URL is the IP of that computer (e.g., 172.16.100.73:5000).
- Select Apply to save the daemon configuration.
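On CentOS (covered next), the same Insecure Registries setting is typically written to /etc/docker/daemon.json by hand. A minimal sketch, assuming the self-hosted registry from the example above (172.16.100.73:5000) and a systemd-managed Docker service:

sudo nano /etc/docker/daemon.json

{
  "insecure-registries": ["172.16.100.73:5000"]
}

sudo systemctl restart docker    # restart the daemon so the setting takes effect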
Configuring Docker Daemon (for CentOS Linux)
Refer to the Configure the Docker daemon section of Configure and troubleshoot the Docker daemon for instructions on how to configure the Docker daemon.

Configuring Docker Proxy
You need to configure the Docker proxy because an Internet connection is required when you pull images from or push images to the registry, and when you download the NuGet packages on which your project depends.
Note: When your project is built inside the Docker container, Docker needs to download the dependencies of your project, even if these dependencies already exist on your local computer.

Configuring Docker Proxy (for Windows 10)
To configure the Docker proxy, you need to:
- Run Docker Desktop.
- Click the Docker icon at the bottom right corner of the desktop and then select Settings.
- Select Proxies on the Settings page.
- Configure the Web server proxy.
- Select Apply to save the proxy configuration.

Configuring Docker Proxy (for CentOS Linux)
Refer to Configure Docker to use a proxy server for instructions on how to configure the Docker proxy.

Configuring User Name and Password for Connection to Registry
Refer to the Native basic auth section in Deploy a registry server for instructions on how to configure the user name and password for connection to your private registry.

Docker Hub
To publish your project to Docker Hub, you only need to configure the Docker daemon and the Docker proxy.

Configuring Docker Daemon
Refer to the Self-hosted Docker Registry section in this tutorial for information about Docker daemon configuration.

Configuring Docker Proxy
Refer to the Self-hosted Docker Registry section in this tutorial for instructions on how to configure the Docker proxy.

Connecting to a Docker Daemon
You can connect to a local Docker daemon or to a remote Docker daemon, depending on whether your local computer is installed with Docker Desktop.

Local Docker
If your local computer is installed with Docker Desktop, you can select the Local Docker radio button to publish your project with the local Docker engine.
Note: Make sure your local Docker Desktop is running before you continue to configure the various Docker settings.

Remote Docker
If your local computer is not installed with Docker Desktop, you can select the Remote Docker radio button to publish your project with a remote Docker engine. Likewise, you should make sure the remote Docker is running. In addition, you need to provide the following information: the engine API URL, and certificates (optional).

Engine API URL
The engine API URL refers to the IP of the remote computer installed with Docker.

Certificates Folder
Contains the certificates (ca.pem, cert.pem and key.pem) generated in the Remote Docker Engine section for authentication by the remote Docker server.

Connecting to a Docker Registry
After you have connected to a Docker daemon, you can continue to configure the Docker connection settings.

Registry URL
Enter the registry URL (the IP of the computer that hosts the registry, for example, 172.16.100.73:5000). If you want to publish your project to Docker Hub, you can select the default registry URL (registry.hub.docker.com) from the drop-down list.

User Name and Password
Enter the user name and password you have configured in the Self-hosted Docker Registry section. If you use registry.hub.docker.com as the registry URL, you just enter the Docker ID and password you use to sign in.

After you have provided the registry URL and the user name and password, you can click Next to verify whether the connection succeeds.
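The same credentials can also be sanity-checked outside the IDE with docker login; the registry address below is the example value used earlier, and the user name is a placeholder:

docker login 172.16.100.73:5000 -u <username>    # log in to the self-hosted registry
docker login                                     # log in to Docker Hub with your Docker ID
docker logout 172.16.100.73:5000                 # remove the stored credentials again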
If the connection fails, you will see a message box displaying the error message. In this case, you need to re-configure the Docker connection according to the error message. If the connection succeeds, you will be able to configure the various Docker settings.

Configuring Publish Settings
When you have successfully connected to a Docker registry, you then need to configure the publish settings, including the configuration mode, target framework, deployment mode, and target runtime.

Configuration
Specifies the configuration for building your .NET project. The Debug mode is used for debugging the .NET project step by step, while the Release mode is used for the final build of the assembly file (.dll or .exe).

Target Framework
Specifies the version of .NET that you want your project to target.

Deployment Mode
Framework-dependent deployment is the default deployment option. It means that you deploy portable code that is ready to run in any compatible environment, provided .NET Core is installed.

Target Runtime
There are five different target platforms where the application runs:
- Portable
- Win-x86
- Win-x64
- Osx-x64
- Linux-x64
They are runtime identifiers used by .NET packages to represent platform-specific assets in NuGet packages.

Configuring Docker Settings
After you have configured the publish settings, you need to configure the Docker settings, including the image name, custom tag, and container port(s) to be exposed.

Image Name
Specifies a name for your image.

Tag
Allows you to add a tag for your image to distinguish it from the various other images (if any).

Expose Ports
Allows you to specify one or more container ports that can be exposed to the outside world. Note that container ports must be exposed so that they can be used for port mapping, and the exposed container ports will appear in the Container Port dropdown list. It is recommended that you expose port 80.

Configuring Publish Options

Deleting Intermediate Images
Removes the cached images that can be used for subsequent builds. Examples of intermediate images are images named <none>, and useless images (e.g., 172.16.100.18:5000/smm).

Running the Container after Publishing
Specifies whether to run the container immediately after you deploy your project to the target registry. If you enable this option, you then need to configure port mapping. By default, the published container does not have any port accessible to the outside world. To enable the container to accept incoming connections, you need to select an exposed container port, and specify a host port to which the container port will map.
- Container port: the port you exposed in Expose Ports.
- Host port: the port used by the Docker host to access the container. Make sure you specify a port number that has not been assigned to any other service yet.
Note: If you don't enable the Run the container after publishing option, make sure you specify the port mapping as well when you later run the container manually from the command line. The basic command line syntax is as follows:
docker run -d -p <host port>:<container port> --name <container name> <image>
For example:
docker run -d -p 8080:80 --name dockertest 172.16.100.73:5000/dockertest:latest

Executing the Publish Operations
When you have properly configured the various Docker settings and click Finish, you will see the Dockerfile in your project in Solution Explorer. The Dockerfile contains all the commands used to assemble the image.
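The generated file is specific to your project, so the following is only a rough, hedged sketch of what a Dockerfile for an ASP.NET Core 2.1 Web API typically looks like; the project name MyWebApi and the Microsoft base images are assumptions, and the file SnapDevelop generates may differ:

# Build stage: restore and publish the project
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app

# Runtime stage: copy the published output into a smaller runtime image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
EXPOSE 80
ENTRYPOINT ["dotnet", "MyWebApi.dll"]

The EXPOSE instruction corresponds to the Expose Ports setting described above.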
Refer to the Dockerfile reference for a brief introduction to the various commands that may be used in the Dockerfile.

When you click Finish, the publication process launches immediately. In the Output pane, you can check whether the commands in the Dockerfile are being executed correctly. If an error occurs, you can view the error message and then correct the error accordingly. If no error occurs, it is recommended that you do not interrupt the publication process until the publication succeeds. If you have enabled the Run the container after publishing option, the Docker host will run the published container, which means that the project will launch on the launchURL configured in the project's launchSettings.json file; the deployed project in the container will run on that URL. Finally, you can check whether your project is successfully published to the target registry URL.

Publishing and Hosting an ASP.NET Core Application on Linux
In this section, we create a Web API application, publish it using the File System method, and then host the app on the CentOS 7 operating system following these steps:
- In SnapDevelop, publish the Web API application with middleware configured to support a reverse proxy
- Copy over the Web API application from the local published folder to the CentOS server
- On CentOS, host the Web API application with a reverse proxy (Apache) configured
For more information on how to set up Apache as a reverse proxy on CentOS 7, you can further read the Microsoft article: Host ASP.NET Core on Linux with Apache.

Prerequisites
- SnapDevelop 2019
- Server running CentOS 7 with a standard sudo user account
- .NET Core Runtime installed on CentOS 7 with the following instructions

Install .NET Core Runtime on CentOS 7
Before installing the .NET Runtime, you need to register the Microsoft key, register the product repository, and install required dependencies. These only need to be done once per machine. Open a terminal and run the following commands:
# Install repository configuration
curl <repo-config-url> > ./microsoft-prod.repo
sudo cp ./microsoft-prod.repo /etc/yum.repos.d/
# Install Microsoft's GPG public key
curl <gpg-key-url> > ./microsoft.asc
sudo rpm --import ./microsoft.asc
Install the latest available updates for the product, then install the .NET Runtime. In your terminal, run the following commands:
# Install the latest available updates for the product
sudo yum update
# Install the ASP.NET Core 2.1 runtime
sudo yum install aspnetcore-runtime-2.1
The previous command will install the .NET Core Runtime Bundle, which includes the .NET Core runtime and the ASP.NET Core 2.1 runtime. To install just the .NET Core runtime, use the dotnet-runtime-2.1 package.

Publishing an ASP.NET Core Application to the Server

Preparation for the Application
In SnapDevelop, select File from the menu bar and then choose New > Project. In the dialog box, select C# > .NET Core and then select ASP.NET Core Web API. Then, fill in the project information to create a .NET Core project named WebAPI1 by clicking OK. Some configurations for the reverse proxy and the Kestrel server need to be made before publishing.

Configuration for Forwarded Headers Middleware
The Forwarded Headers Middleware should run before diagnostics and error handling middleware, so that the forwarded header values are available to the rest of the pipeline. By default, the headers to forward are set to None. Proxies running on loopback addresses (127.0.0.0/8, [::1]), including the standard localhost address (127.0.0.1), are trusted by default.
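A minimal sketch, assuming the standard Startup.Configure pattern of an ASP.NET Core 2.1 project, of registering the Forwarded Headers Middleware ahead of the rest of the pipeline so that the X-Forwarded-For and X-Forwarded-Proto headers set by the reverse proxy are honored:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.HttpOverrides;

public void Configure(IApplicationBuilder app)
{
    // Process the headers added by the reverse proxy before anything else runs
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });

    app.UseDeveloperExceptionPage();   // diagnostics middleware runs after forwarded headers
    app.UseMvc();
}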
For other trusted proxies or networks within the organization handling requests between the Internet and the web server, add them to the list of KnownProxies or KnownNetworks as options in ForwardedHeadersOptions. The following example adds a trusted proxy server at IP address 10.0.0.100 to the KnownProxies of the Forwarded Headers Middleware in Startup.ConfigureServices:
services.Configure<ForwardedHeadersOptions>(options =>
{
    options.KnownProxies.Add(IPAddress.Parse("10.0.0.100"));
});
For more information, see Configure ASP.NET Core to work with proxy servers and load balancers.

Configuration for Secure (HTTPS) Local Connections
If it is important for the application to make secure connections (HTTPS) locally, configure the application to use a certificate for development using one of the following approaches:
- Replace the default certificate from configuration (Recommended)
- KestrelServerOptions.ConfigureHttpsDefaults
The URLs that the application listens on are configured in the applicationUrl property in the Properties/launchSettings.json file. The app runs locally behind the reverse proxy, so it is optional to configure the web server (Kestrel) with secure (HTTPS) local connections; you can remove the HTTPS URL (if present) from the configuration.

Publishing the Application to a Local Folder
In Solution Explorer, right-click the project and choose Publish. On the popup window, select File System. Specify the target location of the publication. Go to the Settings tab and change Target Runtime to linux-x64. Note that the application is configured for a framework-dependent deployment by default. Click Save to continue. Click Publish, then check the publication status in the Output window.

Copying the Application to the Server
Once the Web API application is published, copy the application from the published folder to the CentOS server using a tool that integrates into the organization's workflow (for example, SCP, SFTP). It's common to locate web applications under the /var directory (for example, /var/www/WebAPI1).

Hosting ASP.NET Core on CentOS with Apache

Configuring a Proxy Server
A reverse proxy is a common setup for serving dynamic web applications. The reverse proxy terminates the HTTP request and forwards it to the Web API application. In this example, Apache is configured as a reverse proxy, and Kestrel serves the Web API application. Apache forwards client requests to the Web API application running on Kestrel instead of fulfilling requests itself.

Installing Apache
Update CentOS packages to the latest stable versions:
sudo yum update -y
Install the Apache web server on CentOS with a single yum command:
sudo yum -y install httpd mod_ssl
To verify where Apache is installed, run whereis httpd from a command prompt.

Configuring Apache
Open the /etc/httpd/conf/httpd.conf file and set the ServerName directive globally:
sudo nano /etc/httpd/conf/httpd.conf
# Set ServerName in this configuration file
ServerName <your server name>
The configuration files for Apache are located in the /etc/httpd/conf.d/ directory. Any file with the .conf extension is processed in alphabetical order, in addition to the module configuration files in /etc/httpd/conf.modules.d/, which contains all the necessary configuration files to load modules.
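The reverse proxy configuration created in the next step relies on the proxy, headers, rewrite, and ssl modules that ship with httpd; as a quick, hedged sanity check, you can confirm they are loaded before writing the site configuration:

sudo httpd -M | grep -E 'proxy|headers|rewrite|ssl'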
Create a configuration file:
sudo nano /etc/httpd/conf.d/WebAPI1.conf
An example of a configuration file for the application (Kestrel is assumed to listen on http://127.0.0.1:5000, and www.example.com is used as the site's domain):
<VirtualHost *:*>
    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
</VirtualHost>
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:5000/
    ProxyPassReverse / http://127.0.0.1:5000/
    ServerName www.example.com
    ServerAlias *.example.com
    ErrorLog /var/log/httpd/WebAPI1-error.log
    CustomLog /var/log/httpd/WebAPI1-access.log common
</VirtualHost>
The VirtualHost block can appear multiple times in one or more files on a server. In the preceding configuration file, Apache accepts public traffic on port 80. The www.example.com domain is being served, and the *.example.com alias resolves to the same domain.

Monitoring the Application
Apache is now set up to forward requests made to the site's domain to the Web API application running on Kestrel at http://127.0.0.1:5000. However, Apache isn't set up to manage the Kestrel process. Use systemd and create a service file to start and monitor the underlying web application. systemd is an init system that provides many powerful features for starting, stopping, and managing processes.

Creating the Service File
Create the service definition file:
sudo nano /etc/systemd/system/kestrel-webapi1.service
An example of a service file for the application (the complete file in the Microsoft article referenced above also sets options such as Restart, User, and Environment):
[Unit]
Description=Example .NET Web API application running on CentOS 7
[Service]
WorkingDirectory=/var/www/WebAPI1
ExecStart=/usr/share/dotnet/dotnet /var/www/WebAPI1/WebAPI1.dll
The user that manages the service must exist and have proper ownership of the application's files. Use TimeoutStopSec to configure the duration of time to wait for the application to shut down after it receives the initial interrupt signal. If the application doesn't shut down in this period, SIGKILL is issued to terminate the application.
Save the file and enable the service:
sudo systemctl enable kestrel-webapi1.service
Start the service and verify that it's running:
# Start the service
sudo systemctl start kestrel-webapi1.service
# Verify the service
sudo systemctl status kestrel-webapi1.service
● kestrel-webapi1.service - Example .NET Web API application running on CentOS 7
   Loaded: loaded (/etc/systemd/system/kestrel-webapi1.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-08-06 13:46:27 CST; 5s ago
 Main PID: 46668 (dotnet)
    Tasks: 16
   CGroup: /system.slice/kestrel-webapi1.service
           └─46668 /usr/share/dotnet/dotnet /var/www/WebAPI1/WebAPI1.dll
With the reverse proxy configured and Kestrel managed through systemd, the web application is fully configured and can be accessed from a browser on the local machine at the address configured above. Inspecting the response headers, the Server header indicates that the Web API application is served by Kestrel:
HTTP/1.1 200 OK
Date: Tue, 06 Aug 2019 05:52:17 GMT
Server: Kestrel
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked

Viewing Logs
Since the web application using Kestrel is managed using systemd, events and processes are logged to a centralized journal. However, this journal includes all of the entries for services and processes managed by systemd. View the items specific to kestrel-webapi1.service with the following command:
sudo journalctl -fu kestrel-webapi1.service
For time filtering, specify time options with the command. For example, use --since today to filter for the current day or --until 1 hour ago to see the previous hour's entries. For more information, see the man page for journalctl.
sudo journalctl -fu kestrel-webapi1.service --since "2019-08-06" --until "2019-08-06 23:00"

Configure the /etc/httpd/conf.d/WebAPI1.conf file to enable URL rewriting and secure communication on port 443 (as above, Kestrel is assumed to listen on http://127.0.0.1:5000, and the certificate and key paths are placeholders):
<VirtualHost *:*>
    RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
</VirtualHost>
<VirtualHost *:80>
    RewriteEngine On
    RewriteCond %{HTTPS} !=on
    RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]
</VirtualHost>
<VirtualHost *:443>
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:5000/
    ProxyPassReverse / http://127.0.0.1:5000/
    ErrorLog /var/log/httpd/WebAPI1-error.log
    CustomLog /var/log/httpd/WebAPI1-access.log common
    SSLEngine on
    SSLCertificateFile /path/to/your_certificate.crt
    SSLCertificateKeyFile /path/to/your_private.key
</VirtualHost>
Point SSLCertificateFile and SSLCertificateKeyFile to your certificate and private key, and also specify the SSLCertificateChainFile (if present) that was supplied by the certificate authority.
Save the file and test the configuration:
sudo service httpd configtest
Restart Apache:
sudo systemctl restart httpd
After HTTPS is fully configured, the web application can be accessed from a browser on the local machine over a secure connection. Please pay attention here: the URL now starts with https://.

Attribution
The content of this section incorporates some materials from "Host ASP.NET Core on Linux with Apache" by Shayne Boyer, licensed under CC BY 2.0.
https://docs.appeon.com/appeon_online_help/snapdevelop2019r2/Publish/index.html
[ aws . waf-regional ]

Associates a LoggingConfiguration with a specified web ACL. You can access information about all traffic that AWS WAF inspects by creating an Amazon Kinesis Data Firehose and then associating it with your web ACL using this operation.
Note: Do not create the data firehose using a Kinesis stream as your source.

To associate a logging configuration for a web ACL ARN with a specified Kinesis Firehose stream ARN:
The following put-logging-configuration example displays the logging configuration for WAF with ALB/API Gateway in Region us-east-1 (the web ACL ARN and the Firehose delivery stream ARN below are placeholders):
aws waf-regional put-logging-configuration \
    --logging-configuration ResourceArn=<web-acl-arn>,LogDestinationConfigs=<firehose-delivery-stream-arn>,RedactedFields=[] \
    --region us-east-1
Output:
{
    "LoggingConfiguration": {
        ...
    }
}

LoggingConfiguration -> (structure)
The LoggingConfiguration that you submitted in the request.
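To confirm that the association took effect, the corresponding get-logging-configuration command can be run afterwards; the web ACL ARN below is again a placeholder:

aws waf-regional get-logging-configuration \
    --resource-arn <web-acl-arn> \
    --region us-east-1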
https://docs.aws.amazon.com/cli/latest/reference/waf-regional/put-logging-configuration.html
System.Windows.Forms Namespace

The System.Windows.Forms namespace contains classes for creating Windows-based applications that take full advantage of the rich user interface features available in the Microsoft Windows operating system.

Remarks
Caution: By default visual styles are enabled for the .NET Framework versions 1.1, 1.2, and 2.0.
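As a rough illustration of the namespace in use, a minimal Windows Forms entry point might look like the following; the form title and project setup are assumptions, not part of the reference page:

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Opt in to themed rendering before any controls are created
        Application.EnableVisualStyles();

        Form mainForm = new Form();
        mainForm.Text = "Hello, Windows Forms";

        // Start the message loop with the form as the main window
        Application.Run(mainForm);
    }
}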
https://docs.microsoft.com/en-gb/dotnet/api/system.windows.forms?view=netframework-1.1
Researchers Develop Bendable Battery

WASHINGTON (AP) -

Just imagine if they could make this flexible enough to make clothes out of it. Your jacket could double as a power supply for your phone, laptop, or media player. With batteries this thin, you could conceivably put them just about anywhere.
https://docs.microsoft.com/en-us/archive/blogs/gduthie/researchers-develop-bendable-battery
How to extract the OSX Lion image and burn it to DVD:
If you already installed Lion, you will have to download the package again from the App Store: open the App Store while holding the Option (Alt) key and, from the "Purchases" tab, open the "OS X Lion" link again while pressing the Option (Alt) key. Now, let's assume you have the Lion package in the right place. Navigate to /Applications/Install\ Mac\ OS\ X\ Lion.app/Contents/SharedSupport and you will find InstallESD.dmg there.
user@osx: ~$ cd /Applications/Install\ Mac\ OS\ X\ Lion.app/Contents/SharedSupport
user@osx: SharedSupport $ ls -la
total 7411920
drwxr-xr-x 4 root wheel 136 Dec 21 23:57 .
drwxr-xr-x 12 root wheel 408 Dec 21 23:57 ..
-rw-r--r-- 1 root wheel 3794414600 Dec 21 23:56 InstallESD.dmg
-rw-r--r-- 1 root wheel 484418 Oct 6 15:59 OSInstall.mpkg
Burn this image to a DVD and you will have a bootable OS X Lion installation disk!

How to extract the OSX Mavericks image and create a bootable USB stick:
1. The destination USB stick should be called Untitled and formatted as Mac OS Extended (Journaled).
2. Run the following command (replace /path/to with your own paths); the output follows:
flmbp:~ root# /path/to/Install\ OS\ X\ Mavericks.app/Contents/Resources/createinstallmedia --volume /Volumes/Untitled --applicationpath /path/to/tmp/Install\ OS\ X\ Mavericks.app --nointeraction
Erasing Disk: 0%... 10%... 20%... 30%...100%...
Copying installer files to disk...
Copy complete.
Making disk bootable...
Copying boot files...
Copy complete.
Done.
3. DONE!
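For the Lion DVD step above, if you prefer the command line to Disk Utility, a hedged one-liner (assuming a blank DVD is already inserted in the default burner):

cd /Applications/Install\ Mac\ OS\ X\ Lion.app/Contents/SharedSupport
hdiutil burn InstallESD.dmg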
http://docs.gz.ro/osx-image-to-dvd-or-usb.html
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation is also available.

Class: Aws::APIGateway::Types::StageKey

Overview
Note: When passing StageKey as input to an Aws::Client method, you can use a vanilla Hash:
{
  rest_api_id: "String",
  stage_name: "String",
}
A reference to a unique stage identified in the format `{restApiId}/{stage}`.

Instance Attribute Summary
- #rest_api_id ⇒ String
- #stage_name ⇒ String
  The stage name associated with the stage key.
https://docs.aws.amazon.com/sdkforruby/api/Aws/APIGateway/Types/StageKey.html
Celebrating 40th Anniversary “Social Issues in Computing”

A few people were invited to contribute articles in honour of Kelly’s and Allan’s seminal book, Social Issues in Computing. The University of Toronto has a blog in celebration of the 40th anniversary of the publication of the book:

“In 1973, Kelly Gotlieb and Allan Borodin’s seminal book, Social Issues in Computing, was published by Academic Press. It tackled a wide array of topics: Information Systems and Privacy; Systems, Models and Simulations; Computers and Planning; Computer System Security; Computers and Employment; Power shifts in Computing; Professionalization and Responsibility; Computers in Developing Countries; Computers in the Political Process; Antitrust actions and Computers; and Values in Technology and Computing, to name a few. The book was among the very first to deal with these topics in a coherent and consistent fashion, helping to form the then-nascent field of Computing and Society. In the ensuing decades, as computers proliferated dramatically and their importance skyrocketed, the issues raised in the book have only become more important. The year 2013, the 40th anniversary of the book, provides an opportunity to reflect on the many aspects of Computing and Society touched on by the book, as they have developed over the four decades since it was published. After soliciting input from the book’s authors and from distinguished members of the Computers and Society intellectual community, we decided that this blog, with insightful articles from a variety of sources, was a fitting and suitable way to celebrate the 40th anniversary of the book.”
https://docs.microsoft.com/en-us/archive/blogs/cdnitmanagers/celebrating-40th-anniversary-social-issues-in-computing
P free) allowing access anywhere you go. And even the hotel has Wi-Fi (again, free). I have been sitting in the bar at the hotel restaurant at night browsing the web, reading and responding to mail, and preparing this blog entry. At PDC 2001 the hotel I was staying at had wired broadband, but they were charging $30 a day. A lot has changed in just a few years.

I spent much of my time yesterday meeting with people at the Expo booth. Many of the questions we got were the standard ones about the automation model and VSIP, which means that we really need to work on that FAQ that we have been planning. I did go to a talk about MSBuild; it is some really cool technology, making building software much easier both within and outside of VS.

Earlier this week we switched back to standard time. That makes it a bit easier for taking the telescope out in the back yard since it is darker earlier and I don't need to stay up as late to see the planets. Unfortunately, this also means that not too long from now it will be dark around 4:30 PM. I hate going to work in the morning when it is dark, and driving home from work in the dark.

There has not been too much here lately about new features in the next version of Visual Studio, code named Whidbey. Since more info is being given about the next version I will try to pick up the pace of descriptions about what we are doing. This time I would like to talk about the new way that VS will find and load Add-ins.

VB5 would look for an .INI file, open and read its contents, then load the Add-ins it described. This was not the best way to create Add-ins since you had to write some special code, or even hand-modify the .INI file, when the Add-in was installed. Then in VS 6 we would look to the registry, enumerating the list of Add-in ProgIDs and doing the appropriate loading of the defined Add-ins. This type of registry-based Add-in loading was good for VC/ATL written Add-ins because it was easy to modify .rgs files to register the Add-in, less optimal for VB6 Add-ins but still workable if you used the Add-in Designer because it would automatically generate the registry keys when it was registered. Then there was the requirement that the DLL be registered. This worked for COM Add-ins, but was a real pain for the .NET developer.

To make it easier for the .NET developer, and to fit into the xcopy deployment, non-registration spirit of .NET, we now have a way of loading Add-ins from XML files. The extension of this XML file is .Addin, and it is built from values such as the following (a sketch of the full markup appears below): Microsoft Visual Studio, 8.0, Description of the Add-in, Short name of the Add-in, C:\foo.dll, ClassNamespace.ClassName, 0, 1.

The Assembly tag can contain either a full path to an assembly, a path relative from the .Addin file to the assembly, or the full name of an assembly within the GAC. You can even give a UNC or URL to an Add-in. So now when you create an Add-in, you generate an XML file that looks like that above, write a class that implements the IDTExtensibility interface, and put it in a specific directory on your hard disk. After doing this VS will find, load, and (depending on how you declare your Add-in within the .Addin file) run the Add-in. There is also now a page within the Tools | Options dialog box that will allow you to select which folders on disk to search for .Addin files. Not only that, but options are also available on this dialog box for completely enabling and disabling running both Add-ins and macros, for security reasons.
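A sketch of how the values listed above fit together in a .Addin file; the element names follow the automation extensibility schema that shipped with the released product and are an assumption here, so the exact names in the build described in this post may differ:

<?xml version="1.0" encoding="UTF-8"?>
<Extensibility xmlns="http://schemas.microsoft.com/AutomationExtensibility">
  <HostApplication>
    <Name>Microsoft Visual Studio</Name>
    <Version>8.0</Version>
  </HostApplication>
  <Addin>
    <FriendlyName>Short name of the Add-in</FriendlyName>
    <Description>Description of the Add-in</Description>
    <Assembly>C:\foo.dll</Assembly>
    <FullClassName>ClassNamespace.ClassName</FullClassName>
    <LoadBehavior>0</LoadBehavior>
    <CommandLineSafe>1</CommandLineSafe>
  </Addin>
</Extensibility>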
I will not be talking about it today (I need something for the next time I post), but you can add additional entries in the .Addin XML file to control other features of extensibility. More on those later.
https://docs.microsoft.com/en-us/archive/blogs/craigskibo/pdc-monday