Dataset columns:
- content: string (lengths 0 – 557k)
- url: string (lengths 16 – 1.78k)
- timestamp: timestamp[ms]
- dump: string (lengths 9 – 15)
- segment: string (lengths 13 – 17)
- image_urls: string (lengths 2 – 55.5k)
- netloc: string (lengths 7 – 77)
Integrate AWS platform as a data source

Integrate Amazon Web Services (AWS) with Event Management. To add the AWS platform as a data source, configuration is required in the AWS platform.

Before you begin
Role required: evt_mgmt_integration

About this task
When an AWS platform alarm arrives, Event Management:
- Extracts information from the original AWS platform alarm to populate the required event fields and inserts the event into the database.
- Captures the content in the additional_info field.

The AWS platform transform script is located in Event Management > Event Listener (Push) > Listener Transform Scripts. In the Listener Transform Scripts page, click AWS Events Transform Script.

Note: The AWS transform script that is provided in the base system handles AWS CloudWatch alarms only. To handle Simple Notification Service (SNS) alarms other than AWS CloudWatch alarms, create a new script or customise the AWS transform script.

Procedure
1. In the AWS platform console, select Simple Notification Service.
2. If an SNS topic does not exist, create a new one.
3. Under the topic, create a new subscription. Take the Topic ARN from the topic that you created. The Amazon Resource Name (ARN) is necessary for binding an Event Management alert to a CI.
4. Set Protocol to: https.
5. Set Endpoint to: https://<username>:<password>@<instance-name>.service-now.com/api/global/em/inbound_event?source=AWS. If AWS platform Multi-Factor Authentication (MFA) is enabled, signing in to the AWS platform website prompts for the user name and password as well as an authentication code from the user's AWS platform MFA device.
6. Wait until the subscription changes from Pending to Confirmed and the subscription ARN is populated.
7. Create alarms in the AWS platform to send to Event Management. Link the alarms to the Simple Notification Service topic that you created.

These event rules are provided with the base system:
- AWS host binding: Bind AWS platform alarms, on either the host or VM, to the host Hardware CI.
- AWS LB binding: Bind AWS platform alarms on the Load Balancer (LB) to the Cloud Load Balancer CI.
- AWS RDS binding: Bind AWS platform alarms on the Amazon Relational Database Service (RDS) to the Cloud Database CI.
- AWS vm binding: By default, this event rule is disabled. Bind AWS platform alarms, on either the host or VM, to the Virtual Machine Instance CI. To enable this rule, first disable the AWS platform host binding rule.

Related Tasks: Configure listener transform scripts
Related Concepts: Event field format for event collection
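Once the SNS subscription is confirmed, it can help to verify that the instance endpoint accepts events before wiring up real CloudWatch alarms. The sketch below is a minimal, hypothetical test in Python: the endpoint path and the source=AWS parameter come from the procedure above, while the credential values, instance name, and the payload field names are placeholders and assumptions rather than the exact schema that SNS or Event Management uses.

import requests

# Hypothetical values; replace with your own instance and integration user.
INSTANCE = "your-instance"   # <instance-name> from the Endpoint URL above
USER = "evt.integration"     # user with the evt_mgmt_integration role
PASSWORD = "secret"

# Endpoint format taken from the procedure above; basic auth replaces the
# user:password@ form embedded in the Endpoint URL.
url = f"https://{INSTANCE}.service-now.com/api/global/em/inbound_event?source=AWS"

# Minimal SNS-style body; a real CloudWatch alarm notification carries more
# fields, and the base-system transform script expects CloudWatch alarms.
payload = {
    "Type": "Notification",
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:example-topic",  # assumption
    "Subject": "ALARM: example CloudWatch alarm",
    "Message": '{"AlarmName": "example-alarm", "NewStateValue": "ALARM"}',
}

resp = requests.post(url, json=payload, auth=(USER, PASSWORD), timeout=30)
print(resp.status_code, resp.text)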
https://docs.servicenow.com/bundle/london-it-operations-management/page/product/event-management/task/aws-events-transform-script.html
2019-03-18T22:24:41
CC-MAIN-2019-13
1552912201707.53
[]
docs.servicenow.com
You can configure one or two remote syslog servers. NSX Edge events and logs related to firewall events that flow from NSX Edge appliances are sent to the syslog servers. Procedure - Log in to the vSphere Web Client. - Click Networking & Security and then click NSX Edges. - Double-click an NSX Edge. - Click the Manage tab, and then click the Settings tab. - In the Details panel, click Change next to Syslog servers. - Type the IP addresses of the remote syslog servers and select the protocol. - Click OK to save the configuration.
https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.3/com.vmware.nsx.logging.doc/GUID-9C25E097-E2CC-461A-9DA6-E8118D16EE62.html
2019-03-18T22:30:38
CC-MAIN-2019-13
1552912201707.53
[]
docs.vmware.com
Database Administration

If you are using a real DBMS to hold your botanic data, then you need to do something about database administration. While database administration is far beyond the scope of this document, we want to make our users aware of it.

SQLite

SQLite is not what one would consider a real DBMS: each SQLite database is just one file. Make safety copies and you will be fine. If you don't know where to look for your database files, consider that, by default, bauble puts its data in the ~/.bauble/ directory. On Windows it is somewhere in your AppData directory, most likely in AppData\Roaming\Bauble. Do keep in mind that Windows does its best to hide the AppData directory structure from normal users. The fastest way to open it is with the file explorer: type %APPDATA% and hit enter.

MySQL

Please refer to the official documentation. Backing up and restoring databases is described in breadth and depth starting at this page.

PostgreSQL

Please refer to the official documentation. A very thorough discussion of your backup options starts at chapter 24.

Ghini Configuration

Ghini uses a configuration file to store values across invocations. This file is associated with a user account, and every user has their own configuration file. To review the content of the Ghini configuration file, type :prefs in the text entry area where you normally type your searches, then hit enter. You normally do not need to tweak the configuration file, but you can do so with a normal text editor program. The Ghini configuration file is kept at the same default location as the SQLite databases.
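Since an SQLite database is a single file, a safety copy can be as simple as copying that file somewhere else. Below is a small, hypothetical Python sketch along those lines; the ~/.bauble/ location comes from the text above, but the database file extension and the backup folder are placeholders you would adapt to your own setup.

import shutil
from datetime import datetime
from pathlib import Path

# Default data directory mentioned above; on Windows look under AppData\Roaming\Bauble.
data_dir = Path.home() / ".bauble"

backup_dir = Path.home() / "bauble-backups"   # placeholder destination
backup_dir.mkdir(exist_ok=True)

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
for db_file in data_dir.glob("*.db"):         # assumes a .db file extension
    target = backup_dir / f"{db_file.stem}-{stamp}{db_file.suffix}"
    shutil.copy2(db_file, target)             # copy with timestamps preserved
    print(f"copied {db_file} -> {target}")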
https://ghini.readthedocs.io/en/ghini-1.0-dev/administration.html
2019-03-18T22:02:11
CC-MAIN-2019-13
1552912201707.53
[]
ghini.readthedocs.io
- A single platform helps you optimize data, manage information, and provides seamless integrations that cut across the different business reports and engagements you may want to do.
- It also helps save space on your app size, reduces crashes and conflicts between SDKs, and acts as a central repository of data.
- Furthermore, it is safe to believe that a single SDK also helps reduce the internet data consumed by the user, as we take great efforts in reducing our data requirements, which is then passed on to your app. This in turn also reduces processing load and power drain on your users' devices. Since mTraction provides multiple functionalities from the same data collected, we believe we are more cost efficient when compared collectively, and sometimes individually, to other players offering only a single solution.
http://docs.mtraction.com/index.php/docs/faq-frequently-asked-questions/why-should-i-choose-a-single-platform-for-all-purposes/
2019-03-18T21:40:43
CC-MAIN-2019-13
1552912201707.53
[]
docs.mtraction.com
1st World Congress on Women's Health Innovations and Inventions (WHII) Start Date: July 9, 2019 End Date: July 11, 2019 Time: 8:00 am to 7:00 pm Phone: +972 3 566 6166 Location: Hilton Hotel, Independence Park, Tel Aviv-Yafo, Israel Description The 1st World Congress on Women's Health Innovations and Inventions (WHII): Addressing Unmet Needs will be held on July 9-11, 2019 in Tel Aviv, Israel. We invite you to participate in the first Women's Health Congress bringing technology and science together to address unmet needs: Maternal Fetal Medicine (MFM)/Offspring Health, Reproductive Medicine, Gynecological Oncology, Menopause, Genetics and Epigenetics, Non-Communicable Diseases (NCD), Obesity, Environment, Lifestyle and Nutrition as Predisposition to Women's Health, "The Unacceptable and Unmentionable" in Women's Health, Health IT, Digital Health, Big Data and AI. Prices: International Participant: USD 300.0, Residents/Students/Nurses: USD 200.0 Speakers: Benny Zeevi, Moshe Hod, Michal Rosen Zvi, Jacques Abramowicz, Birgit Arabin, Ran Balicer, Jacob Bar, Eytan Barnea, Vincenzo Berghella, Neerja Bhatla, Gian Carlo Di Renzo, Hema Divakar, Ram Eitan, Vassilios Fanos, Dov Feldberg, Jason Gardosi, Andrea Gennazani, Shahar Kol, Ofer Lavie, Silke Mader, Edgar Mocanu, Shlomo Nimrodi, Raoul Orvieto, Felice Petraglia, Liona Poon, Meredith Rose Burak, Varda Shalev, Eyal Sheiner, Lior Soussan Gutman, Yariv Yogev, Uzi Beller, Orit Jacobson, Karin Mayer Rubinstein, Sharon Rashi-Elkeles, Sari Prutchi Sagiv, Larry Adelson, Anya Eldan Registration Info Contact the Organizer Organized by LoveEvvnt1 Comtecmed Event Categories: Health & Nutrition.
http://meetings4docs.com/event/1st-world-congress-on-womens-health-innovations-and-inventions-whii/
2019-03-18T21:59:58
CC-MAIN-2019-13
1552912201707.53
[]
meetings4docs.com
United European Gastroenterology (UEG) Week Barcelona 2019 Start Date: October 19, 2019 End Date: October 23, 2019 Time: 7:00 am to 2:00 pm Phone: +4672971639 Location: Fira Gran Via - North Access Hall 8, Carrer del Foc, 37, Barcelona, 08038, Spain Description UEG's mission is to continuously improve standards of care in gastroenterology and promote ever greater understanding of digestive and liver disease – among the public and medical experts alike. UEG Week is the perfect stage to present new research and thinking across a wide range of digestive disease areas, with cutting-edge post-graduate teaching sessions, some of the best GI abstracts and posters, simultaneous live streams to a global audience, and endoscopic, ultrasound and surgical hands-on training. The focus for UEG Week 2019 in Barcelona will be to advance science and link people in the global GI community. Submit your abstract until April 26, 2019. Register until May 16 and benefit from our reduced fees! For sponsoring and exhibiting opportunities, please visit the website or contact [email protected] Registration Info Admission: Delegate Fee: EUR 470.0, Fellow in Training of UEG Week: EUR 200.0, Postgraduate Teaching Programme: EUR 250.0 Organized by LoveEvvnt1 Event Categories: Gastroenterology and Hepatology.
http://meetings4docs.com/event/united-european-gastroenterology-ueg-week-barcelona-2019/
2019-03-18T21:34:45
CC-MAIN-2019-13
1552912201707.53
[]
meetings4docs.com
Migrate a Magento installation

There are two important steps when migrating a Magento Stack: create a backup of your system and database on the old server, and restore it on the new one. To make this process easier, divide it into four sub-steps.

Export the Magento installation files and database

The first step is to create a full backup of both the Magento installation files and its database. Once you have obtained the backup files, export and save them to a safe location. On the old machine where Magento is installed, perform the following:

Create a system backup using the Magento admin interface. Remember to select the "System Backup" option.

TIP: You can enable maintenance mode while creating the backup. Visitors will see a "Service Temporarily Unavailable" message in their web browsers instead of the store.

- Once the backup has been created, you will see your backup in the list of available backup files. Select the one you have created and click on the tgz link to download it.

Create a database backup. You can do this manually or with phpMyAdmin. If you use phpMyAdmin, export it as an .sql file.

Download the resulting .sql file. If you created the database backup using phpMyAdmin, the file was automatically downloaded. If you created the database backup manually, download the file from your server with SFTP.

/ctlscript.sh restart mysql

Restore the database by following these instructions.

Substitute pub/ and var/ directories

To complete the migration of your old Magento installation to the new one, you need to replace some directories and delete some others in order to save space and clear some useless data.

Change to the /opt/bitnami/apps/magento/htdocs/ directory and remove the pub/ and var/ folders:

$ cd /opt/bitnami/apps/magento/htdocs/
$ rm -rf var pub

Uncompress the system backup .tar file. To do so, execute the following command (replace the BACKUP_FILE placeholder with the name of your backup file):

$ sudo tar xzpf BACKUP_FILE

Remove the var/cache, var/session, var/report and var/log directories:

$ cd var
$ rm -rf cache session report log

Stop the database service:

$ sudo .
https://docs.bitnami.com/oracle/apps/magento/administration/migrate/
2019-03-18T22:52:47
CC-MAIN-2019-13
1552912201707.53
[]
docs.bitnami.com
Creating macro namespaces

Macro namespaces serve as containers for static macro methods and fields. Users can access the members of namespaces when writing macro expressions, for example {% Math.Pi %} or {% Math.Log(x) %}. Namespaces also appear in the macro autocomplete help. The system uses several default namespaces such as Math, String or Util, and you can create your own namespaces for custom macros.

To add a custom macro namespace:

- Create a class inheriting from MacroNamespace<Namespace type>. In web site projects, you can either add the class into the App_Code folder or as part of a custom assembly.
- Register macro fields or methods into the namespace — add Extension attributes to the class, with the types of the appropriate container classes as parameters.

using CMS.Base;
using CMS.MacroEngine;

[Extension(typeof(CustomMacroFields))]
[Extension(typeof(CustomMacroMethods))]
public class CustomMacroNamespace : MacroNamespace<CustomMacroNamespace>
{
}

See Registering custom macro methods and Adding custom macro fields to learn about creating container classes for macro fields and methods.

Registering macro namespaces

Once you have defined the macro namespace class, you need to register the namespace as a source into a macro resolver (typically the global resolver). We recommend registering your macro namespaces. The following steps describe how to register a macro namespace into the global resolver using the App_Code folder:

Create a class file. Call the SetNamedSourceData method for the global resolver with the following parameters:

- A string that sets the visible name of the namespace (used in macro syntax).
- An instance of your macro namespace class.
- (Optional) By default, the registered namespace appears in the high priority section of the autocomplete help and macro tree. To add namespaces with normal priority, add false as the third parameter.

using CMS.Base;
using CMS.MacroEngine;

// Registers "CustomNamespace" into the macro engine (call this from your class's initialization code)
MacroContext.GlobalResolver.SetNamedSourceData("CustomNamespace", CustomMacroNamespace.Instance);

The system registers your custom macro namespace when the application starts. Users can access the namespace's members when writing macro expressions.

Registering namespaces as anonymous sources

By registering a macro namespace as an anonymous source, you can allow users to access the namespace's members directly without writing the namespace as a prefix. For example, {% Field %} instead of {% Namespace.Field %}.

// Registers "CustomNamespace" as an anonymous macro source
MacroContext.GlobalResolver.AddAnonymousSourceData(CustomMacroNamespace.Instance);

You can register the same namespace as both a named and anonymous source. If you only register a namespace as an anonymous source, users cannot access the members using the prefix notation, and the namespace does not appear in the macro autocomplete help.

Note: Data items registered through anonymous macro sources do NOT appear in the macro autocomplete help. As a result, the autocomplete help only displays namespace members when using the prefix notation, even when the namespace is registered as both a named and anonymous source.
https://docs.kentico.com/k9/macro-expressions/extending-the-macro-engine/creating-macro-namespaces
2019-03-18T22:36:11
CC-MAIN-2019-13
1552912201707.53
[]
docs.kentico.com
Crash and Watchdog Diagnostics

Crash and watchdog diagnostic functions. See diagnostic.h for source code.

Macro Definition Documentation

Macro evaluating to true if the last reset was a crash, false otherwise. Definition at line 504 of file diagnostic.h.

Function Documentation

If the last reset was from an assert, return the saved assert information.
- Returns: Pointer to struct containing assert filename and line.

Returns the number of bytes used in the main stack.
- Returns: The number of bytes used in the main stack.

Print the complete crash data.

Print the complete, decoded crash details.

Print a summary of crash details.
https://docs.silabs.com/connect-stack/2.2/group-diagnostics
2019-03-18T21:58:25
CC-MAIN-2019-13
1552912201707.53
[]
docs.silabs.com
Matrix4x4 — The projection matrix adjusted for the current graphics API. Calculates the GPU projection matrix from the camera's projection matrix.
https://docs.unity3d.com/ja/current/ScriptReference/GL.GetGPUProjectionMatrix.html
2019-03-18T22:06:35
CC-MAIN-2019-13
1552912201707.53
[]
docs.unity3d.com
window_mouse_set(x, y); Returns:N/A With this function you can change or set the position of the mouse within the game window which can be useful for FPS games, for example. The function will only work while the game is in focus and using alt + tab will unlock the mouse. NOTE: For regular mouse functions see the section on Mouse Input. window_mouse_set(window_get_width() / 2, window_get_height() / 2); The above code would center the mouse in the game window.
https://docs.yoyogames.com/source/dadiospice/002_reference/windows%20and%20views/the%20game%20window/window_mouse_set.html
2019-03-18T21:46:29
CC-MAIN-2019-13
1552912201707.53
[]
docs.yoyogames.com
Introduction Thank you for choosing Telerik RadEditor for SharePoint! RadEditor for Microsoft Office SharePoint Server extends the web content authoring environment of SharePoint 2007/2010 by providing cross-browser compatibility and support for the Apple Mac OS platform. In the default configuration, the product offers an almost identical functionality level as the integrated rich-text editor and can be used in the following scenarios: Rich-text field control in SharePoint forms (in Lists, Wikis, Blogs, etc.) Content editor Web Part Rich-HTML field in Web Content Management (publishing) scenarios
http://docs.telerik.com/devtools/aspnet-ajax/sharepoint/2007/radeditor-for-moss/introduction
2017-05-23T01:19:06
CC-MAIN-2017-22
1495463607245.69
[array(['images/EditorLogoPr.gif', None], dtype=object)]
docs.telerik.com
Thank you for choosing RadMultiResolutionImage. The RadMultiResolutionImage control is an extended Image control which allows you to use a single element in code and have different images in the application, depending on the resolution of the device that runs it. In order to use RadMultiResolutionImage for Windows Phone 8, the following references are required:
http://docs.telerik.com/help/windows-phone/multiresolution-overview.html
2017-05-23T02:13:06
CC-MAIN-2017-22
1495463607245.69
[]
docs.telerik.com
If none of the above modules can be loaded by the RestClient module, then the RestClient module will fail to initialize.
https://docs.qore.org/current/modules/RestClient/html/index.html
2017-05-23T01:06:59
CC-MAIN-2017-22
1495463607245.69
[]
docs.qore.org
Spell Overview Thank you for choosing RadSpell for ASP.NET AJAX! RadSpell for ASP.NET AJAX is the successor of the well known industry standard RadSpell for ASP.NET. RadSpell for ASP.NET AJAX has been added to the suite and takes full advantage of the ASP.NET AJAX framework and the new client-side programming model. RadSpell for ASP.NET AJAX enables developers to add multilingual spellchecking capabilities to their ASP.NET applications. The component is completely customizable and can be attached to any server or client-side editable element (textbox, div, iframe). It currently supports dozens of languages and can have custom user dictionaries for every language. RadSpell for ASP.NET AJAX is a cross-browser server control, which uses no-postback algorithm and requires no installation or downloads on the client machine. To suit the specific requirements of your web-application the spellchecker interface can be easily localized, re-skinned or completely redesigned. Key Features No-postback algorithm for superior performance Options: check all caps, check words with numbers, etc. Custom dictionary for every language Support for subject-specific dictionaries (i.e. medical) Optional phonetic algorithm Ignoring inline scripts and style definitions Ignoring text fragments Ability to check multiple controls at once
http://docs.telerik.com/devtools/aspnet-ajax/controls/spell/overview
2017-05-23T01:17:20
CC-MAIN-2017-22
1495463607245.69
[]
docs.telerik.com
RadSparkline is a tailored version of the RadChart implementation that allows you to display simple, word-sized graphics that can be embedded in different places—text, tables, headlines, etc. This help article will help you get started with using the RadSparkline control.

To declare a RadSparkline in the HTML markup, add an empty span element with a data-win-control attribute with a value of Telerik.UI.RadSparkline:

<span id="sparkline1" data-win-control="Telerik.UI.RadSparkline"></span>

You can achieve the same result in JavaScript—just define the host span element on the page and then instantiate a new RadSparkline by passing a reference to the span DOM object in the control constructor:

var sparkline = new Telerik.UI.RadSparkline(document.getElementById("sparkline1"));

Creating a RadSparkline without setting any options will not result in a usable control. RadSparkline needs at least a basic set of data that it has to visualize. You can find a basic example of how you can set property values below. As with the rest of the Windows Runtime JavaScript controls, RadSparkline's properties can be set through the data-win-options attribute of the host element:

<span class="control" data-win-control="Telerik.UI.RadSparkline" data-win-options="{ data: [22, 13, 37, 42, 12], type: 'column', height: 50, width: 100 }"></span>

In JavaScript, you can pass an options object as a second argument to the RadSparkline constructor:

var sparkline = new Telerik.UI.RadSparkline(document.getElementById("mySparkline"), { data: [22, 13, 37, 42, 12], type: "column", height: 50, width: 100 });

Here is an image of the produced RadSparkline control:

As described in this MSDN article about adding controls to a Windows Store app, any control in a Windows Runtime JavaScript application requires a call to WinJS.UI.processAll() for proper initialization. The same holds true for any of the Telerik UI controls. Once the WinJS framework has initialized all the controls on the page, the RadSparkline control instance associated with a host HTML element can be retrieved using the winControl expando property of the host HTML element.

<!-- Define your RadSparkline control in the HTML -->
<span id="mySparkline" data-win-control="Telerik.UI.RadSparkline"></span>

WinJS.Utilities.ready(function () {
    WinJS.UI.processAll().then(function () {
        // wait for the processAll() method to finish, then find the
        // sparkline control from the host element's winControl property
        var sparklineElement = document.getElementById("mySparkline");
        var mySparklineControl = sparklineElement.winControl;
        console.log(mySparklineControl instanceof Telerik.UI.RadSparkline); // true
    });
});

Once RadSparkline is loaded and the control is referenced in JavaScript, it exposes an extensive set of properties, methods and events.

args.setPromise(WinJS.UI.processAll().then(function () {
    var sparkline = document.getElementById("mySparkline").winControl;
    sparkline.tooltip.font = "9pt Segoe UI";
}));

You can either declare an event handler in the options object that you pass to the control during initialization, or you can use the addEventListener method to attach a function for execution upon a certain event. Below you can see samples of both approaches:

var sparkline = new Telerik.UI.RadSparkline(document.getElementById("mySparkline"), {
    ondatabound: databoundHandlerFnName
});

// OR

var sparkline = new Telerik.UI.RadSparkline(document.getElementById("mySparkline"), {
    ondatabound: function(e) { /* ... */ }
});

sparkline.addEventListener("databound", databoundHandlerFnName);

If you attach the event handler using the on[eventname] property in HTML mark-up, you would need to mark the handler function as safe in your JavaScript code, using the WinJS.Utilities.markSupportedForProcessing function.
http://docs.telerik.com/help/windows-8-html/sparkline-getting-started.html
2017-05-23T02:12:47
CC-MAIN-2017-22
1495463607245.69
[]
docs.telerik.com
Tutorials

imgix's wide variety of parameters gives you powerful controls to manipulate, enhance, and optimize your images for delivery. These tutorials walk through specific problems and use cases and show you how to use imgix to unleash the full power of your imagery across your entire catalog.

- Focal Point Cropping for Responsive Art Direction: Art-direct your responsive images more effectively for editorial with focal point cropping.
- Next-Generation Responsive Images with Client Hints: Automate your responsive design with imgix's Client Hints support.
- Responsive Images with srcset: Learn how to use srcset to support different device pixel ratios with imgix.
- Using imgix with <picture>: Learn how to use the picture element for art direction in responsive design.
- Improved Compression with Automatic Content Negotiation: Let imgix determine the best image format for your users, automatically.
- Best Practices for Creating Vector Assets: Learn best practices for setting up Illustrator and Sketch to create vector graphics for use with imgix.
- Managing Brand Assets from PDFs: Keep your brand assets together more easily and serve them at any size, on demand.
- Designing for Retina, Deploying with imgix: Design your images for optimal deployment at any device pixel ratio (DPR).
- Multi-line Text & Overlays with the Typesetting Endpoint: Learn how to use imgix's Typesetting Endpoint and the mark and blend parameters to do advanced text compositing, including leading and letterspacing.
- Managing User-Generated Images: Use imgix to automatically standardize and enhance images uploaded by your users.
- Dynamically Blending Images: Beautify your images with image and color blends.
- Dynamic Masking: Hide parts of your image with imgix's masking parameters.
- Image Auto Enhancement On the Fly: Apply basic image enhancements to all of your images automatically, including color correction and red-eye removal.
- Automatic Point-of-Interest Cropping: Use imgix's entropy cropping to automatically crop to the most meaningful content in your images.
- Optimize Images Using Dynamic Color Quantization: Reduce the color palette of your images to optimize image file size.
- Extract Image Metadata with JSON Output Format: Retrieve image metadata in JSON and use it to apply other parameters.
https://docs.imgix.com/tutorials
2017-05-23T01:04:38
CC-MAIN-2017-22
1495463607245.69
[]
docs.imgix.com
Design Considerations: Refactoring POM Loading and Building for Maven 2.1 Accommodating New POM Elements Namespacing might be able to help here, since it could allow us to implement chain-of-command for XML parsing itself. Also, it could help us to provide better support to users for XML editing. Replacing XPP3 for Parsing XPP3 is a dead project, and the parser has some deficiencies (need details here). POM Encoding Support This is partially dependent on XPP3 replacement, but we need to support document encodings for POMs and other parsed models. This will most likely involve fixing Modello and/or providing other XML parser implementations, and making them encoding-aware. Fixed by XML Encodingsolution. Switching to Chain-of-Command for Project Loading This will allow us to implement a more flexible approach to POM loading, and support customization of this process. Accommodating ModelVersions above 4.0.0 Need to be able to load multiple versions of the POM from the same runtime. Refactoring Interpolation Avoid chicken-and-egg problems with interpolation. Provide better handling for path-alignment. Refactoring Inheritance and Profile-injection Need more information about what is needed here.
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=58360&selectedPageVersions=3&selectedPageVersions=4
2014-08-20T06:58:42
CC-MAIN-2014-35
1408500800767.23
[]
docs.codehaus.org
Metadata Types - Overview Types are the top-level constructs that form the basis of the Telerik Data Access metadata model. Telerik Data Access provides the Telerik.OpenAccess.Metadata namespace, which contains a set of CLR types that represents concepts defined in the Telerik Data Access metadata model. There are two main layers in the metadata type hierarchy: conceptual and relational. In the conceptual layer, types are the top-level constructs from which the actual classes in the project are generated. The relational layer on the other side provides description for the database schema in terms of Tables, View, Constraints and Stored Procedures. The purpose of this section is to give you a quick overview of all types exposed by the Telerik.OpenAccess.Metadata namespace:
http://docs.telerik.com/data-access/feature-reference/api/data-access-metadata/metadata-types/metadata-metadata-types-overview.html
2017-08-16T15:04:22
CC-MAIN-2017-34
1502886102307.32
[]
docs.telerik.com
CRUD Operations Chart The CRUD operations chart shows visual information about the number of CRUD operations that are performed. The Y-Axis displays information about the number of queries that are executed. The X-Axis displays information about the time when the operation is executed. Zoom In/Out In order to zoom in/out the chart control, you need to use the left/right thumb of the scroll bar. Filter Control The profiler allows you to filter the information displayed on the chart. By default, the Filter Control is hidden. To show it, you need to use the Show Filter Toolbar Command. The filter control (shown on the image below) provides the following filtering options: - Start Time - the start time of the operation. - End Time - the end time of the operation. - Context Id - the id of the OpenAccessContext. - Context Name - the name of the context class. - Parameter Values - Semicolon (;) separated values for specific parameters used in the SQL query.
http://docs.telerik.com/data-access/feature-reference/tools/profiler-and-tuning-advisor/feature-ref-tools-profiler-crud-operations-chart.html
2017-08-16T15:05:35
CC-MAIN-2017-34
1502886102307.32
[array(['/data-access/images/1feature-ref-tools-profiler-crudchart-010.png', None], dtype=object) array(['/data-access/images/1feature-ref-tools-profiler-crudchart-020.png', None], dtype=object) array(['/data-access/images/1feature-ref-tools-profiler-crudchart-040.png', None], dtype=object) array(['/data-access/images/1feature-ref-tools-profiler-crudchart-030.png', None], dtype=object) ]
docs.telerik.com
Layout Modes The RadRichTextEditor allows you to choose between several layout modes. Paged Figure 1: DocumentLayoutMode.Paged When using the paged mode, the content of the edited document is divided into pages. The size and layout of each page are defined by the DefaultPageLayoutSettings property of RadDocument and specifically - the Width and Height properties of the PageLayoutSettings object. Next, the margins of the control in a page are specified by the PageMargin property of each Section. In Paged mode, resizing a RadRichTextEditor will not affect the document layout but scroll bars will appear if the document does not fit in the view. Flow Figure 2: DocumentLayoutMode.Flow In Flow layout mode, the document content is not divided into pages. Instead the whole content is displayed as in a TextBox or RichTextBox. This layout option resembles MS Word’s Web-Layout mode. The width of the document is the same as that of the RadRichTextEditor and changing the control’s width will also resize the content of the document. FlowNoWrap The FlowNoWrap layout mode is similar to the Flow layout mode, but it doesn't allow the text in the separate paragraphs to get wrapped when the free space gets exceeded. Instead a horizontal scroll bar will appear.
http://docs.telerik.com/devtools/winforms/richtexteditor/getting-started/layout-modes
2017-08-16T15:01:56
CC-MAIN-2017-34
1502886102307.32
[array(['images/richtexteditor-layout-modes001.png', 'richtexteditor-layout-modes 001'], dtype=object) array(['images/richtexteditor-layout-modes002.png', 'richtexteditor-layout-modes 002'], dtype=object)]
docs.telerik.com
Usage

Reference - Menu - Hotkey: Ctrl-D

Reference - Menu

Displays a popover window that allows editing the custom expression and input variables of the driver without opening the full Drivers Editor. Many drivers don’t use their F-Curve component, so this reduced interface is sufficient.

Open Drivers Editor

Reference - Menu

Opens a new window with the Drivers Editor and selects the driver associated with the property.

Copy & Paste

Reference - Menu - Menu

Drivers can be copied and pasted via the context menu. When adding drivers with the same settings, this can save time modifying settings.

Copy As New Driver

Reference - Menu

This is a quick way to add drivers with a scripted expression. First click the property you want to add a driver to, then type a hash # and a scripted expression. Some examples: #frame #frame / 20.0 #sin(frame) #cos(frame)

Removing Drivers

Reference - Menu - Menu - Hotkey: Ctrl-Alt-D

Removes driver(s) associated with the property, either for the single selected property or sub-channel, or all components of a vector.
https://docs.blender.org/manual/ru/dev/animation/drivers/usage.html
2022-08-07T22:01:17
CC-MAIN-2022-33
1659882570730.59
[]
docs.blender.org
Payment API

Payments API endpoints provide all the functionality required for you to perform financial operations on your users' behalf. The API provides functionality to:

- Initiate payments
- Manage user beneficiaries

Payment Processing

The API provides two endpoints to process a transaction:

- Create Transfer
- Transfer Auto Flow

If you use the Create Transfer endpoint, you have to implement custom bank processing logic and validations on your end. You will need to use the following endpoints to process a transaction:

- Get Metadata
- Create Beneficiary
- Get Beneficiaries

Meanwhile, the Transfer Auto Flow endpoint abstracts all these requirements. There is no need to implement custom bank processing logic or validations when using this endpoint. Dapi recommends using Transfer Auto Flow to initiate a payment.

Beneficiaries

Important: Beneficiaries are not required when performing a transaction using the Auto Flow endpoint. They are only needed if the transaction is processed through the Create Transfer endpoint.

A beneficiary can be considered similar to a contact registered on the user's account to whom the user can send a transaction. Beneficiaries are required by some banks to perform a payment. In other words, if the recipient of the amount is not registered as a beneficiary for the user, the bank will not allow the transaction to go through.

The API provides two endpoints to work with beneficiaries, allowing you to:

- Create a new beneficiary on the user's behalf
- Get the list of the user's beneficiaries

Note: Some banks have a cool-down period for beneficiaries. The maximum cool-down period can be 24 hours. This means that once you register a beneficiary for the user, you will not be able to transfer an amount to the newly registered beneficiary until the cool-down period has passed. You can find more about the cool-down period and other bank data available through the API in the Metadata Documentation.

All the endpoints of the Dapi Payments API require the user's accessToken to be specified in the Authorization header.

How To Make A Payment

Below you can find the process of how to initiate a payment transaction using the Create Transfer endpoint.

Important: The process below describes transaction initiation using the Create Transfer endpoint. We suggest using the Auto Flow endpoint, which abstracts the whole process.

Payment Initiation Process
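As a loose illustration of the accessToken requirement described above, here is a hypothetical Python sketch of a request to a payment endpoint. The base URL, endpoint path, auth scheme, field names, and response handling are placeholders invented for the example; consult the actual Dapi API reference for the real Transfer Auto Flow contract. Only the Authorization-header requirement comes from the text above.

import requests

ACCESS_TOKEN = "user-access-token"          # obtained for the connected user
BASE_URL = "https://api.example-dapi.test"  # placeholder, not the real Dapi host

def initiate_auto_flow_transfer(amount, currency):
    # Sketch of a payment initiation call; path, Bearer scheme, and body fields
    # are all hypothetical names chosen for this example.
    response = requests.post(
        f"{BASE_URL}/payment/transfer/autoflow",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"amount": amount, "currency": currency},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(initiate_auto_flow_transfer(10.0, "AED"))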
https://docs.dapi.com/docs/payment-api
2022-08-07T21:48:43
CC-MAIN-2022-33
1659882570730.59
[array(['https://files.readme.io/8d3a2bd-Screen_Shot_2021-03-23_at_11.30.11_PM.png', 'Screen Shot 2021-03-23 at 11.30.11 PM.png 775'], dtype=object) array(['https://files.readme.io/8d3a2bd-Screen_Shot_2021-03-23_at_11.30.11_PM.png', 'Click to close... 775'], dtype=object) ]
docs.dapi.com
Prifina Developer Docs

Development Guide

Prifina uses Yarn as its default package manager, so it is highly recommended to continue using it to avoid problems caused by mixing multiple package managers. Prifina hooks come already integrated inside the starter repos.

There are two types of starter repos: widget-starter and app-starter. Choose the appropriate one based on what you are planning to develop.

Development Stages

The process of building apps using the starter repos is divided into multiple stages, allowing you to track and secure your work from stage to stage. If something goes the wrong way, you can always start again fresh from the stage you are in without having to start from scratch.

Application Development

Widget Development Environment

Starter repos are monorepo environments, containing everything you need to start building your apps and widgets. You will find a predefined starter-app package for you to explore, start developing in immediately, or use as a playground to test out your ideas. The idea of the starter-app is to serve as a test app and help you get through the branch step instructions available to you, so you can find out what this starter is capable of. In case you are already familiar with these kinds of environments, or this is not your first widget or app, you can skip the guide, go right to the Add your package manually section, and start from there.

Add your package manually

You can add your app/package manually inside the monorepo. Use the lerna command from the root of your starter repo.

yarn lerna create my-app # lerna create command
https://docs.prifina.com/
2022-08-07T22:42:01
CC-MAIN-2022-33
1659882570730.59
[]
docs.prifina.com
With the Shopware Newsletter Manager you have the ideal tool to stay permanently in contact with your customers. We will show you here how you can set up your first newsletter in a few steps. In addition to the HTML newsletter described here, we can provide you with our plugin "Intelligent Newsletter". With this plugin you can set up your newsletter easily by dragging and dropping banners, pictures, articles, and other items. The plugin can also create product recommendations tailored to your customers.

First, open the backend with the newsletter module, which can be found at "Marketing > Newsletter Manager". After opening the Newsletter Manager you get an overview of the created newsletters and other settings.

When you click on "Create Newsletter" (1) you generate an empty newsletter. After creating a newsletter it will be displayed in the overview (2). In the Status column you see at a glance whether your newsletter is not yet sent, what percentage of the sending has completed, or whether all mails were sent.

Here you get a compact view of how many recipients your newsletter has reached. The item "Letters Read" counts how many newsletters have been opened. (Only the first time the email is opened is counted.) In "Clicks", the total number of clicks in the newsletter that led to your shop is listed. Often the most interesting thing about a newsletter is how much revenue it has brought in. Under the heading "Turnover" you can find the sum of all sales that were generated with a store access through the newsletter.

In the right panel of the overview are the Actions (3) of the newsletter. Here you have the possibility to edit, delete or send your newsletter. Using the Search (4) you can find a specific newsletter faster. The search includes the sender address and the subject of each newsletter.

In the Administration tab you can create general settings for all of your newsletters. Here you can define senders, create recipient groups and manage your receivers.

Here you can manage, create, edit or delete the sender addresses and names for your newsletters. The defined addresses/names are displayed to the recipient as the sender of the newsletter. You can "Create new sender" (1) by entering an e-mail address (2) and a name (3). You can edit or delete your sender via the Actions (4) fields on the right side.

By using different groups of recipients, you can sort your subscribers and achieve a better and more efficient use of your newsletter. For example, you can create a group of recipients receiving a weekly newsletter or a group that only receives the newsletter once a month. When creating a newsletter you can select the customer groups and the recipient groups which you create here. You can choose the receiver group that will be assigned by default to a new customer under "Settings > Preferences > Storefront > Login / Registration".

Note on the number of recipients: These recipient groups include all recipients that are assigned. Duplicates are not considered at this point! The final number of recipients is not visible until you run the cron jobs, since these filter out the duplicates and print out the final number.

Here you can manually add or delete email addresses. In addition, you can change the group assignment of a customer with a double-click, using a pull-down menu.

In the newsletter itself you have 2 tabs: the editor, where you can add your newsletter in HTML or plain text, and the Settings tab, where all settings are stored that apply only to this newsletter. With the HTML window, you can breathe life into your newsletter.

With the settings from the Editor (1) you can also create your newsletter without any knowledge of HTML. Just insert the content into the input mask (2). If you want to see how the newsletter will look for your customers, use the Preview (3) button.

First, set up a subject (1) for your newsletter. The Sender (2) can be selected via a pull-down menu. Next, choose which customer group (3) the newsletter is directed to. With the option Select language: (4) you can select the shop that the newsletter is used for. As of Shopware 5.1.2 you can send newsletters on a schedule; the cron job checks whether the newsletter is released and whether the delivery time Send at: (5) has been reached. Here you can define the date when the newsletter is to be sent. With the Sending type: (6) you can choose whether the newsletter should be sent as plain text or HTML + plain text. If your newsletter is ready, you can use the check box Published (7). If you want to use timed sending, set the check mark at Release for sending: (8) so that the newsletter is sent when the cron job runs at the specified time. The newsletter is sent only when the defined time is reached. At the end of the settings you can select the check boxes of the newsletter recipients; here you can choose between the Customer groups (9) and your Own recipient groups (10) the newsletter should be delivered to.

Under Settings > Preferences > Storefront > Email Settings you can enable a double opt-in for the newsletter subscription; the customer will then receive an email in which they must confirm the newsletter subscription.

On the right edge of the Newsletter page you find the function "Send Newsletter" (1). After clicking this button, two confirmations appear. With the first, you confirm sending the newsletter. With the second, you can choose whether the batch script is executed manually or whether the dispatch is processed through a previously established cron job.

To test the cron job, open your browser with the URL of your shop:

If you are sending the newsletter manually, a new tab with the valid URL for your shop is opened automatically. The execution is performed directly and the result should be visible in the browser as text. You should refresh this page every 5 minutes until the message "Nothing to do ..." appears. Inside the shell there are no limitations regarding the script runtime. The cron jobs can ideally be started via shell or console command. In the management interface of your provider, check for matching settings. Our certified hosters will help you set this up. Just add the cronjob URL of your shop here:

We recommend choosing a setting that runs the file every 10-15 minutes.

You can use the "Release" feature to mark a newsletter as visible after the complete delivery. It will appear in the archive and can be viewed by your visitors. Thus, even visitors who do not receive the newsletter stay up to date. With the link, all sent newsletters are displayed. With the shop page function you can easily add the newsletter archive link.

The layout of the newsletter also uses the Shopware text modules. Thus it is very easy and comfortable to adapt the texts, such as the copyright or the links, in the backend. Open your backend and select Settings > Text Blocks. On the left side, under namespaces, choose newsletter. All text modules of the newsletter templates are now listed in the right pane.

Example: Adding a link to contact and imprint in the block NewsletterFooterNavigation.

In "Configuration > Basic Settings > Additional Settings > Newsletter" you can set how many emails should be sent per step/request. It is important to split the sending of the newsletter into smaller steps to prevent the mail server from being overloaded. Additionally, problems can occur with certain providers if too many mails are sent at the same time over SMTP.

Once you have approved your newsletter for sending, you can no longer send it manually. To do so, you first have to deactivate the approval. The setting of the dispatch time is ACL compatible, which means that if your users only have read access to this module, they cannot control the dispatch either.

You want to add your own logo to the newsletter? Create a header.tpl in your own theme at themes\Frontend\Your_THEME\newsletter\index\header.tpl and add the following content:

<div>
  <div align="left"> {* align left needed for old outlook versions *}
    <img align="left" src="{link file='frontend/_public/src/img/logos/logo--mobile.png' fullPath}" />
  </div>
  <div align="right">
    <span style="color:#999;font-size:13px;">NEWSLETTER</span>
  </div>
</div>
{* Clear floating *}
<div style="clear:both;float:none;height: 0px; line-height: 0px;"></div>
<br>

Then you can replace frontend/_public/src/img/logos/logo--mobile.png with your own shop logo.

The newsletter module is one of the most powerful marketing tools in Shopware. In the standard version it is not possible to switch off the newsletter function globally. To hide the newsletter function in the frontend and backend, you have to work through the following checklist.

Deactivating the newsletter function in the backend

Deactivating the newsletter function in the frontend

The cache must be cleared for the settings to take effect. Empty the entire cache except for the SEO and search caches.
https://docs.shopware.com/en/shopware-5-en/marketing-and-shopping-worlds/newsletter?category=shopware-5-en/marketing-and-shopping-worlds
2022-08-07T22:44:01
CC-MAIN-2022-33
1659882570730.59
[]
docs.shopware.com
smoke - sherpa.smoke(verbosity=0, require_failure=False, fits=None, xspec=False, ds9=False)[source] [edit on github] Run Sherpa’s “smoke” test. The smoke test is a simple test that ensures the Sherpa installation is functioning. It is not a complete test suite, but it fails if obvious issues are found. - Parameters verbosity (int, optional) – The level of verbosity of this test require_failure (boolean, optional) – For debugging purposes, the smoke test may be required to always fail. Defaults to False. fits (str or None, optional) – Require a fits module with this name to be present before running the smoke test. This option makes sure that when the smoke test is run the required modules are present. Note that tests requiring fits may still run if any fits backend is available, and they might still fail on their own. xspec (boolean, optional) – Require xspec module when running tests. Tests requiring xspec may still run if the xspec module is present. ds9 (boolean, optional) – Requires DS9 when running tests. - Raises SystemExit – Raised if any errors are found during the tests.
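For context, a minimal usage sketch in Python follows; the call signature is taken from the description above, while the specific fits module name passed in the second call is only an illustrative assumption (use whichever FITS backend your installation actually provides).

import sherpa

# Basic installation check; raises SystemExit if obvious problems are found.
sherpa.smoke(verbosity=1)

# Additionally require that a FITS backend and the XSPEC module are available.
# The module name below is just an example of what might be passed as `fits`.
sherpa.smoke(fits="astropy.io.fits", xspec=True)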
https://sherpa.readthedocs.io/en/latest/overview/api/sherpa.smoke.html
2022-08-07T22:54:04
CC-MAIN-2022-33
1659882570730.59
[]
sherpa.readthedocs.io
Welcome to TracPro’s documentation!¶ TracPro is a minimal, generic dashboard for polls in any RapidPro org, such as for a real-time monitoring service for education. TracPro is for simple dashboards that help to create information loops, not for advocacy or flashy visuals. TracPro is open source, anyone can build their own dashboard. Deployment requires advanced Linux systems administration experience, but once running it can be configured and used by anyone. TracPro was built for UNICEF by Nyaruka and is currently maintained by Caktus Group. User Guides Developer Guides Releases - Changelog - v1.9.0 (released 2018-03-13) - v1.8.0 (released 2017-12-28) - v1.7.0 (released 2017-12-01) - v1.6.1 (released 2017-06-05) - v1.6.0 (released 2017-05-26) - v1.5.4 (released 2017-05-08) - v1.5.3 (released 2017-04-10) - v1.5.2 (released 2017-03-17) - v1.5.1 (released 2017-02-28) - v1.5.0 (released 2017-02-27) - v1.4.3 (released 2016-04-06) - v1.4.2 (released 2016-04-04) - v1.4.1 (released 2016-03-29) - v1.4.0 (released 2016-03-28) - v1.3.3 (released 2016-03-28) - v1.3.2 (released 2016-03-23) - v1.3.1 (released 2016-03-23) - v1.3.0 (released 2016-03-22) - v1.2.1 (released 2016-03-21) - v1.2.0 (released 2016-03-14) - v1.1.1 (released 2016-03-01) - v1.1.0 (released 2016-02-24) - v1.0.4 (never released) - v1.0.3 (released 2015-11-30) - v1.0.2 (released 2015-11-25) - v1.0.1 (released 2015-11-25) - v1.0.0 (released 2015-11-19)
https://tracpro.readthedocs.io/en/latest/
2022-08-07T23:08:36
CC-MAIN-2022-33
1659882570730.59
[]
tracpro.readthedocs.io
RecordSetQuery.Type From Xojo Documentation Method RecordSetQuery.Type(FieldName as String) As Integer Supported for all project types and targets. Returns the data type of the passed column. Notes Please refer to the table of field types in the Database class notes for the numeric values corresponding to SQL data types.
http://docs.xojo.com/index.php?title=RecordSetQuery.Type&diff=prev&oldid=73088
2022-08-07T22:20:28
CC-MAIN-2022-33
1659882570730.59
[]
docs.xojo.com
rpc

Overview

The dbt rpc command runs a Remote Procedure Call dbt Server. This server can compile and run queries in the context of a dbt project. Additionally, it provides methods that can be used to list and terminate running processes. The rpc server should be run from a directory which contains a dbt project. The server will compile the project into memory, then accept requests to operate against that project's dbt context.

Running on Windows: The rpc server is not supported on Windows, due to historic reliability issues. A docker container may be a useful workaround if required.

Running the server:

$ dbt rpc
Running with dbt=0.15.0
16:34:31 | Concurrency: 8 threads (target='dev')
16:34:31 |
16:34:31 | Done.
Serving RPC server at 0.0.0.0:8580
Send requests to

Configuring the server

--host: Specify the host to listen on (default=0.0.0.0)
--port: Specify the port to listen on (default=8580)

Submitting queries to the server:

The rpc server expects requests in the following format:

{"jsonrpc": "2.0","method": "{ a valid rpc server command }","id": "{ a unique identifier for this query }","params": {"timeout": { timeout for the query in seconds, optional },}}

Built-in Methods

status

The status method will return the status of the rpc server. The method response includes a high-level status, like ready, compiling, or error, as well as the set of logs that accumulated during the initial compilation of the project. When the rpc server is in the compiling or error state, all non-builtin methods of the RPC server will be rejected.

Example request
{"jsonrpc": "2.0","method": "status","id": "2db9a2fe-9a39-41ef-828c-25e04dd6b07d"}

Example response
{"result": {"status": "ready","error": null,"logs": [..],"timestamp": "2019-10-07T16:30:09.875534Z","pid": 76715},"id": "2db9a2fe-9a39-41ef-828c-25e04dd6b07d","jsonrpc": "2.0"}

poll

The poll endpoint will return the status, logs, and results (if available) for a running or completed task. The poll method requires a request_token parameter which indicates the task to poll a response for. The request_token is returned in the response of dbt tasks like compile, run and test.

Parameters:
- request_token: The token to poll responses for
- logs: A boolean flag indicating if logs should be returned in the response (default=false)
- logs_start: The zero-indexed log line to fetch logs from (default=0)

Example request
{"jsonrpc": "2.0","method": "poll","id": "2db9a2fe-9a39-41ef-828c-25e04dd6b07d","params": {"request_token": "f86926fa-6535-4891-8d24-2cfc65d2a347","logs": true,"logs_start": 0}}

Example response
{"result": {"results": [],"generated_at": "2019-10-11T18:25:22.477203Z","elapsed_time": 0.8381369113922119,"logs": [],"tags": {"command": "run --models my_model","branch": "abc123"},"status": "success"},"id": "2db9a2fe-9a39-41ef-828c-25e04dd6b07d","jsonrpc": "2.0"}

ps

The ps method lists running and completed processes executed by the RPC server.

Parameters:
- completed: If true, also return completed tasks (default=false)

Example request:
{"jsonrpc": "2.0","method": "ps","id": "2db9a2fe-9a39-41ef-828c-25e04dd6b07d","params": {"completed": true}}

Example response:
{"result": {"rows": [{"task_id": "561d4a02-18a9-40d1-9f01-cd875c3ec56d","request_id": "3db9a2fe-9a39-41ef-828c-25e04dd6b07d","request_source": "127.0.0.1","method": "run","state": "success","start": "2019-10-07T17:09:49.865976Z","end": null,"elapsed": 1.107261,"timeout": null,"tags": {"command": "run --models my_model","branch": "feature/add-models"}}]},"id": "2db9a2fe-9a39-41ef-828c-25e04dd6b07d","jsonrpc": "2.0"}

kill

The kill method will terminate a running task. You can find a task_id for a running task either in the original response which invoked that task, or in the results of the ps method.

Example request
{"jsonrpc": "2.0","method": "kill","id": "2db9a2fe-9a39-41ef-828c-25e04dd6b07d","params": {"task_id": "{ the task id to terminate }"}}

Running dbt projects

The following methods make it possible to run dbt projects via the RPC server.

Common parameters

All RPC requests accept the following parameters in addition to the parameters listed:
- timeout: The max amount of time to wait before cancelling the request.
- task_tags: Arbitrary key/value pairs to attach to this task. These tags will be returned in the output of the poll and ps methods (optional).

Running a task with CLI syntax

Parameters:
- cli: A dbt command (eg. run --models abc+ --exclude +def) to run (required)

{"jsonrpc": "2.0","method": "cli_args","id": "<request id>","params": {"cli": "run --models abc+ --exclude +def","task_tags": {"branch": "feature/my-branch","commit": "c0ff33b01"}}}

Several of the following request types accept these additional parameters:
- threads: The number of threads to use when compiling (optional)
- models: The space-delimited set of models to compile, run, or test (optional)
- select: The space-delimited set of resources to seed or snapshot (optional)
- selector: The name of a predefined YAML selector that defines the set of resources to execute (optional)
- exclude: The space-delimited set of resources to exclude from compiling, running, testing, seeding, or snapshotting (optional)
- state: The filepath of artifacts to use when establishing state (optional)

Compile a project (docs)

{"jsonrpc": "2.0","method": "compile","id": "<request id>","params": {"threads": "<int> (optional)","models": "<str> (optional)","exclude": "<str> (optional)","selector": "<str> (optional)","state": "<str> (optional)"}}

Run models (docs)

Additional parameters:
- defer: Whether to defer references to upstream, unselected resources (optional, requires state)

{"jsonrpc": "2.0","method": "run","id": "<request id>","params": {"threads": "<int> (optional)","models": "<str> (optional)","exclude": "<str> (optional)","selector": "<str> (optional)","state": "<str> (optional)","defer": "<bool> (optional)"}}

Run tests (docs)

Additional parameters:
- data: If True, run data tests (optional, default=true)
- schema: If True, run schema tests (optional, default=true)

{"jsonrpc": "2.0","method": "test","id": "<request id>","params": {"threads": "<int> (optional)","models": "<str> (optional)","exclude": "<str> (optional)","selector": "<str> (optional)","state": "<str> (optional)","data": "<bool> (optional)","schema": "<bool> (optional)"}}

Run seeds (docs)

Parameters:
- show: If True, show a sample of the seeded data in the response (optional, default=false)

{"jsonrpc": "2.0","method": "seed","id": "<request id>","params": {"threads": "<int> (optional)","select": "<str> (optional)","exclude": "<str> (optional)","selector": "<str> (optional)","show": "<bool> (optional)","state": "<str> (optional)"}}

Run snapshots (docs)

{"jsonrpc": "2.0","method": "snapshot","id": "<request id>","params": {"threads": "<int> (optional)","select": "<str> (optional)","exclude": "<str> (optional)","selector": "<str> (optional)","state": "<str> (optional)"}}

Generate docs (docs)

Additional parameters:
- compile: If True, compile the project before generating a catalog (optional, default=false)

{"jsonrpc": "2.0","method": "docs.generate","id": "<request id>","params": {"compile": "<bool> (optional)","state": "<str> (optional)"}}

Compiling and running SQL statements

Compiling a query

This query compiles the SQL select {{ 1 + 1 }} as id (base64-encoded) against the rpc server. The response includes a field called compiled_sql with a value of 'select 2'.

Executing a query

This query executes the SQL select {{ 1 + 1 }} as id (base64-encoded) against the rpc server. The response includes a table with a value of {'column_names': ['?column?'], 'rows': [[2.0]]}.

Reloading the Server

When the dbt Server starts, it will load the dbt project into memory using the files present on disk at startup. If the files in the dbt project should change (either during development or in a deployment), the dbt Server can be updated live without cycling the server process. To reload the files present on disk, send a "hangup" signal to the running server process using the Process ID (pid) of the running process.

Finding the server PID

To find the server PID, either fetch the .result.pid value from the status method response on the server, or use ps:

# Find the server PID using `ps`:
ps aux | grep 'dbt rpc' | grep -v grep

After finding the PID for the process (eg. 12345), send a signal to the running server using the kill command:

kill -HUP 12345

When the server receives the HUP (hangup) signal, it will re-parse the files on disk and use the updated project code when handling subsequent requests.
https://60ec94561e8ebb00082d81fa--docs-getdbt-com.netlify.app/reference/commands/rpc/
2022-08-07T23:15:03
CC-MAIN-2022-33
1659882570730.59
[]
60ec94561e8ebb00082d81fa--docs-getdbt-com.netlify.app
Define “what’s inside” your system and what’s outside. You should demarcate your system from other IT systems in order to show how it fits into the existing IT environment, which external interfaces are offered or consumed and what users or roles are using the system. That way you show the scope of your system: what are the responsibilities of your system and what are the responsibilities of its neighbor systems. You find an example in the diagram below: It shows the context of a webshop which delegates the payment handling to an external provider (removing this responsibility from the scope of the webshop).
https://docs.arc42.org/tips/3-1/
2022-08-07T22:59:30
CC-MAIN-2022-33
1659882570730.59
[array(['/images/03-simple-context.png', None], dtype=object)]
docs.arc42.org
Known Issues for Deploying an RODC Applies To: Windows Server 2008 This section describes some of the known issues for deploying an RODC running Windows Server 2008. Some of the problems are avoided or mitigated by following the guidelines (described earlier) for placing RODCs. For more information, see RODC Placement Considerations for Windows Server 2003 Domains. In this section Client and Server Operating System Issues Domain Controllers Running Windows Server 2003 Perform Automatic Site Coverage for Sites with RODCs RODCs Do Not Perform Domain Controller Certificate Enrollment
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753207(v=ws.10)?redirectedfrom=MSDN
2022-08-07T22:22:43
CC-MAIN-2022-33
1659882570730.59
[]
docs.microsoft.com
TensorFlow Using TensorFlow with Go¶ For an introduction please read Understanding Tensorflow using Go. The TensorFlow API for Go is well suited to loading existing models and executing them within a Go application. It requires the TensorFlow C library to be installed. A full TensorFlow installation is not needed. It is not possible to statically link against the C library, but the issue is known and there might be a fix later this year. Vision¶ Our long-term goal is to become an open platform for machine learning research based on real-world photo collections. External Resources¶ - - TensorFlow Hub is a library for reusable machine learning modules - - TensorFlow for C - - types of neural networks explained - - Experiment with neural networks in your browser - - open-source web application for creating and sharing documents that contain live code - - Machine Learning Crash Course with TensorFlow APIs - - - Deploy Your First Deep Learning Model On Kubernetes With Python, Keras, Flask, and Docker - - Getting Inception Architectures to Work with Style Transfer - jdeng/goface - Face Detector based on MTCNN, tensorflow and golang - - Vector Representations of Words (for searching/tagging) - chtorr/go-tensorflow-realtime-object-detection - Real-time object detection with Go, Tensorflow, and OpenCV - - Accelerated Training and Inference with the Tensorflow Object Detection API - NanoNets/object-detection-sample-golang - NanoNets Object Detection API Example for Golang - - Implementing Object detection with Go using TensorFlow
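Since the Go bindings are aimed at loading models that already exist, the missing half is usually a Python-side export. The sketch below is a generic illustration (not PhotoPrism's actual pipeline; the model and paths are placeholders) of producing a SavedModel directory that the Go API, or any other language binding, can load:

import tensorflow as tf

# Any Keras/TensorFlow model will do; MobileNetV2 is just a stand-in here.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Export in SavedModel format; the resulting directory is what other
# language bindings load and execute.
tf.saved_model.save(model, "exported/mobilenet_v2")

# Inspect the exported signatures to learn the input/output names you will
# need when feeding tensors from another language.
loaded = tf.saved_model.load("exported/mobilenet_v2")
print(list(loaded.signatures.keys()))  # typically ['serving_default']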
https://docs.photoprism.app/developer-guide/technologies/tensorflow/
2022-08-07T23:03:43
CC-MAIN-2022-33
1659882570730.59
[]
docs.photoprism.app
Customer Case provides a convenient way of managing ideas and tickets on the forums. You can submit tickets in two ways:. Accounts of company representatives commenting ideas or replying to your comments are highlighted with the marker. The Official answer section shows the comment pinned to the top of the comment list by a company representative. The official answer may contain the official resolution on the idea or description of the temporary workaround.. Accounts of company representatives commenting on ideas or replying to your comments are highlighted with the marker. You can add the automatic signature which will be inserted into your replies in Customer Case. You can use text, links, and images in your automatic signature. Customer Case will automatically insert this signature into all replies you write to customers on support and feedback forums. If you want to modify the saved signature, enter a new text snippet. Then click the Canned responses icon and select Update auto signature. Customer Case will update your automatic signature with the new variant. Users of Customer Case can edit a description of their own ideas, tickets, and comments. The way of editing the description and comments is identical for feedback and support forums. Customer Case allows you to submit requests by email. This requires the configured mailbox for the forum. The vendor or company should share this email address with you, so you can send your requests to this address.
https://docs.stiltsoft.com/plugins/viewsource/viewpagesrc.action?pageId=83100138
2022-08-07T22:15:08
CC-MAIN-2022-33
1659882570730.59
[]
docs.stiltsoft.com
ndupload Command Line
FlexNet Manager Suite 2022 R1 (On-Premises)
The uploader runs on managed devices and inventory beacons. It allows you to transfer event logs, inventories and other files from managed devices to FTP or local file servers. Tip: Only HTTP and HTTPS protocols are supported by inventory beacons. File and FTP protocols are available for alternative upload arrangements. The uploader can transfer any file to a specified URL. If an FTP URL is supplied, then a username and password must also be given. The uploader deletes files from managed devices after they have been successfully uploaded. Files are not removed if the upload fails. The file path supplied to the uploader can contain wildcards, so that multiple files of a similar type can be uploaded with a single command.
Synopsis
ndupload -a -f path\and\filename.ext -o tag = value
Return codes
The uploader returns a zero on success. If you receive a non-zero return code, check the log file. Details of the log file may also be configured with command-line options, as described in the section on Preferences.
Command line examples
The following command uploads the file file.txt to the FTP location ftp://server/dir1/dir2 using the login name user1 and the password abc123:
ndupload -f file.txt -o UploadLocation=ftp://server/dir1/dir2 -o UploadUser=user1 -o UploadPassword=abc123
The following command uploads the file file.txt to the mapped drive f:/dir1/dir2:
ndupload -f file.txt -o UploadLocation=f:/dir1/dir2
This example uploads the inventory file myInventory.ndi to an inventory beacon (where ManageSoftRL is the name of a web service on the inventory beacon that receives the uploaded inventory and saves it by default to %CommonAppData%\Flexera Software\Incoming\Inventories):
ndupload -f myInventory.ndi -o UploadLocation="http://<beacon-name>/ManageSoftRL"
Options
Tip: Although the Network* preferences remain available for special circumstances, network throttling for package downloads is not normally required. For details, see the discussion under NetworkSpeed.
Possible tags for use with the -o options are:
- AddClientCertificateAndKey (UNIX-like platforms only)
- CheckCertificateRevocation
- CheckServerCertificate
- LogFile (upload component)
- LogFileOld (upload component)
- LogFileSize (upload component)
- MaxKeepAliveLifetime
- MaxKeepAliveRequests
- NetworkHighSpeed
- NetworkHighUsage
- NetworkHighUsageLowerLimit
- NetworkHighUsageUpperLimit
- NetworkLowUsage
- NetworkLowUsageLowerLimit
- NetworkLowUsageUpperLimit
- NetworkMaxRate
- NetworkMinSpeed
- NetworkSense
- NetworkSpeed
- NetworkTimeout
- PreferIPVersion
- PrioritizeRevocationChecks (UNIX-like platforms only)
- SendTCPKeepAlive (Windows platforms only)
- SourceFile
- SourceRemove
- SSLCACertificateFile (UNIX-like platforms only)
- SSLCACertificatePath (UNIX-like platforms only)
- SSLClientCertificateFile (UNIX-like platforms only)
- SSLClientPrivateKeyFile (UNIX-like platforms only)
- SSLCRLCacheLifetime (UNIX-like platforms only)
- SSLCRLPath (UNIX-like platforms only)
- SSLDirectory (UNIX-like platforms only)
- SSLOCSPCacheLifetime (UNIX-like platforms only)
- SSLOCSPPath (UNIX-like platforms only)
- TenantUID, to override the value set in the registry (multi-tenant mode only)
- UploadLocation
- UploadPassword
- UploadRule
- UploadType
- UploadUser
FlexNet Manager Suite (On-Premises) 2022 R1
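If you need to drive the uploader from a script, a thin wrapper around the command line is enough, since the return code is the only success signal. This is a hypothetical example; the executable path is a placeholder, and the options simply mirror the FTP example above:

import subprocess

# Placeholder install path; adjust to wherever the inventory agent puts ndupload.
NDUPLOAD = r"C:\Program Files (x86)\ManageSoft\Common\ndupload.exe"

def upload(file_pattern, location, user=None, password=None):
    cmd = [NDUPLOAD, "-f", file_pattern, "-o", "UploadLocation=" + location]
    if user is not None:
        cmd += ["-o", "UploadUser=" + user]
    if password is not None:
        cmd += ["-o", "UploadPassword=" + password]
    result = subprocess.run(cmd)
    # The uploader returns zero on success; anything else means check its log file.
    if result.returncode != 0:
        raise RuntimeError("ndupload failed with exit code %d" % result.returncode)

upload("file.txt", "ftp://server/dir1/dir2", user="user1", password="abc123")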
https://docs.flexera.com/FlexNetManagerSuite2022R1/EN/GatherFNInv/SysRef/FlexNetInventoryAgent/reference/FUA-CommandLine.html
2022-08-07T22:38:56
CC-MAIN-2022-33
1659882570730.59
[]
docs.flexera.com
Restore a backup
Find out how to restore a backup
Last updated May 4th, 2022
OVHcloud Databases as-a-service (DBaaS) allows you to focus on building and deploying cloud applications while OVHcloud takes care of the database infrastructure and maintenance. This guide explains how to restore a backup of a database solution in the OVHcloud Control Panel. We continuously improve our offers. You can follow our roadmap and submit ideas to add to it. In this guide we will use a PostgreSQL database engine as an example, but the procedure is exactly the same for all other engines. You can learn more about how backups work in the Automated backups guide. Restoring a backup is done by creating a new service and pushing the backup data to this new service. This full process is called forking and is fully automated. Once this process is done, you will have two independent services running: the one the backup comes from, and a new one into which the backup data has been imported. First, go to the overview page of the service you want to restore the backup from. In the tab list, click Backups. Select the backup you want to restore from. To help you choose, check the dates at which the backups were performed in the "Creation date" column. Click the ... button corresponding to the chosen backup. Then click Duplicate (Fork) to go to the configuration page of the new service. The MongoDB service has the option to restore a backup in place, meaning restoring the backup on the same service. This option will roll back ALL data to the state it was in when the backup was taken, which can cause data loss. As seen before, when restoring a backup you create a new, separate database service into which the backup data will be imported. You can configure this new service as you wish. For obvious reasons, you cannot change the engine; this option is greyed out. The same goes for the engine version, although you will be able to update the engine version once the new service is running. When restoring a backup you can select another service plan. We currently don't allow changing the region: your new service must be in the same region as the old one. You can change the node flavor of the service, but note that the selection is restricted depending on the backup size. You can't restore a 400 GB data set on a node with only 320 GB of disk space. You can update the database name. Now click Create a database service and the new service will be created. Note that depending on the backup size, it can take some time before the service is available. You now just have to wait for your service to be ready. This new service is now completely independent from the one you forked the backup from. You can safely delete the old service without impacting the new one. The newly created service does not duplicate IP restrictions or users that were created on the old service. You will have to recreate those before using your new service.
https://docs.ovh.com/gb/en/publiccloud/databases/restore-backup/
2022-08-07T22:53:03
CC-MAIN-2022-33
1659882570730.59
[]
docs.ovh.com
Introduction
Once an armature is skinned by the needed object(s), you need a way to configure the armature into positions known as poses. Basically, by transforming the bones, you deform or transform the skinned object(s). However, you will notice that you cannot do this in Edit Mode -- remember that Edit Mode is used to edit the default, base, or "rest" position of an armature. You may also notice that you cannot use Object Mode either, as here you can only transform whole objects. So, armatures have a third mode dedicated to the process of posing, known as Pose Mode. In rest position (as edited in Edit Mode), each bone has its own position/rotation/scale set to neutral values (i.e. 0.0 for position and rotation, and 1.0 for scale). Hence, when you edit a bone in Pose Mode, you create an offset in the transform properties, from its rest position. This may seem quite familiar if you have worked with relative shape keys or Delta Transformations. Even though it might be used for completely static purposes, posing is heavily connected with animation features and techniques. So if you are not familiar at all with animation in Blender, it might be a good idea to read the animation chapter first, and then come back here.
Visualization
Bone State Colors
The color of the bones is based on their state. There are six different color codes, ordered here by precedence (i.e. the bone will be of the color of the bottom-most valid state): Gray: Default. Blue wire-frame: in Pose Mode. Green: with Constraint. Yellow: with IK Solver constraint. Orange: with Targetless Solver constraint.
Note
When Bone Group colors are enabled, the state colors will be overridden.
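Because pose-mode transforms are offsets from the rest position, they can also be set from Blender's Python API, which makes the difference between Edit Mode and Pose Mode easy to see. A minimal sketch, assuming the scene contains an armature object named "Armature" with a bone named "Bone" (both placeholder names):

import bpy
from math import radians

arm = bpy.data.objects["Armature"]
bpy.context.view_layer.objects.active = arm
bpy.ops.object.mode_set(mode='POSE')

pbone = arm.pose.bones["Bone"]

# In rest position these properties hold neutral values (0.0 rotation/location,
# 1.0 scale); assigning them creates an offset from the rest pose, not a new rest pose.
pbone.rotation_mode = 'XYZ'
pbone.rotation_euler = (radians(30.0), 0.0, 0.0)

# Optionally record the pose as a keyframe, since posing feeds directly into animation.
pbone.keyframe_insert(data_path="rotation_euler", frame=1)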
https://docs.blender.org/manual/ja/dev/animation/armatures/posing/introduction.html
2022-08-07T21:13:20
CC-MAIN-2022-33
1659882570730.59
[]
docs.blender.org
Create topics from existing online support content Select the version of Power Virtual Agents you're using here: You can use content from existing webpages when creating a Power Virtual Agents bot. This is useful if you already have help or support content, such as FAQ pages or support sites. Rather than copying and pasting or manually re-creating this content, you can use AI-assisted authoring to automatically extract and insert relevant content from existing online resources into your bot. The underlying capability identifies the structure and content on a webpage or online file, isolates content blocks that pertain to a support issue or question, and then classifies them into topics with corresponding Trigger phrase and Message nodes for each topic. There are three main steps to using the feature: - Select Suggest topics on the Topics page to extract content. - Add the suggested topics to your bot. - Enable the suggested topics. You can test the topics in the test chat, but you'll need to publish your bot for customers to see the latest changes. Prerequisites Supported content Uploading files is not supported, instead you must provide a URL that meets the following requirements: - Points to a webpage or supported file type - Is accessible by anyone on the internet - Doesn't require a user to login - Uses HTTPS (starts with https://) The Suggest topics capability is built to extract topics from content with a FAQ or support structure. Webpages with a different structure might not work as expected. If you can't extract content from your webpage, try providing the content as a CSV file. Supported file types Tabular file types require a two-column format where each row represents a question and answer pair: the first column contains the question and the second column contains the answer. Important You must provide the full URL to the location of the file, including the file extension, as in the example. 1 Only the first sheet is imported. Extract content from webpages or online files First, you'll need to point to the webpages or online files from which you want to extract content. After the extraction is complete, you'll be shown the suggested topics for further review. Suggested topics aren't automatically added to your bot, but you can easily add them. Select Topics on the side pane. Go to the Suggested tab. If it's the first time you're getting suggestions, the list of suggested topics will be blank. A link to Get started or Learn more appears instead. Select Get started or Suggest topics. Enter links to each supported webpage or online file from which you want to extract content, and then select Add. If you add a link by mistake, you can remove it by selecting . Tip You can add multiple webpages and links to online files, but we recommend that you include only a few at a time to keep the list of suggestions manageable. When you're done adding links to webpages and/or online files, select Start. The process can take several minutes, depending on the complexity and number of webpages or files you added. The message "Getting your suggestions. This may take several minutes" appears at the top of the screen while the extraction is in progress. Important You can't add more URLs while the Suggest topics command is running. The tool provides explicit feedback about errors so that you can understand and address any issues. For example, you might be unable to extract content because the site you're referencing is down or it may be gated behind a user login, such as a SharePoint page. 
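Going back to the supported tabular file types: if your FAQ content lives in code or a database, a short script can emit a compliant two-column file. The sketch below is generic (the file name and questions are made up), and it omits a header row since the requirement above only mentions a question column and an answer column:

import csv

# Hypothetical FAQ content; the only stated requirement is two columns per row,
# question first and answer second.
faq = [
    ("How do I reset my password?", "Open the sign-in page and choose 'Forgot password'."),
    ("Where can I download invoices?", "Invoices are available under Billing > Documents."),
]

with open("faq.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(faq)

The file then needs to be hosted at a URL that meets the prerequisites above (publicly accessible, no login, HTTPS) before its link can be submitted to Suggest topics.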
After you've successfully extracted content, a number of suggestions will appear. These may be either single-turn or multi-turn topics. You can now review these suggestions to see which ones you want to add to your bot. Single-turn and multi-turn topic suggestions When Power Virtual Agents extracts content, it generates single-turn or multi-turn topic suggestions, based on the structure of the document. A single-turn topic has a trigger phrase that contains a single answer. Topics such as these are typically generated if your online content has simple "question-and-answer" pairs, such as an FAQ page. A multi-turn topic contains multiple bot responses, and is often associated with multiple dialog branches. It provides a guided experience for your bot's users to navigate through a problem and reach a solution. These topics are typically generated when your online content is similar to a troubleshooting page or a reference manual or guidebook. The original content's structure or hierarchy (such as headings and subheadings) will contribute to whether a multi-turn or single-turn topic is generated. How the AI creates topic suggestions The Power Virtual Agents AI engine applies a number of steps to the content when it extracts topics and generates suggestions. These steps utilize AI to identify and parse visual and semantic cues from the content. Document parsing: the Power Virtual Agents engine identifies and extracts the basic components of the document, such as text and image blocks. Layout understanding: the document is segmented into different zones that consist of the blocks of content. Structure understanding: the logical structure of the content is analyzed by determining the "role" of each zone (for example, what is actual content and what are headings). Power Virtual Agents creates a hierarchical map or "heading tree" of the content, based on the headings and their associated content. Augmentation: the Power Virtual Agents AI engine adds context to the tree by analyzing how the headings relate to each other and their content. At this point, it generates single-turn topics from identified simple "question-and-answer" pairs of headings and content. Dialog generation: multi-turn topics are generated from the augmented knowledge tree, depending on whether the topic's intent is a simple answer from a group of many, or if the topic has multiple solutions that are equally different, and are chosen based on a user's input or choices. Add suggested topics to an existing bot After the extraction process has been completed, the topic suggestions appear on the Suggested tab. Review them individually to decide which ones you want to include in your bot. You can also add suggestions without reviewing them. Select the name of the suggested topic. Review the trigger phrase and suggested Message node. (Each topic will end with a survey, so your customers can let you know whether they found it helpful.) You have the following three options for dealing with the topic: To make edits to the topic, select Add to topics and edit. The topic will open, where you can edit the trigger phrases or enter the authoring canvas to make changes to the conversation flow. The topic will also be removed from the list of suggestions. To add the suggested topic without making any changes, select Add to topics. The topic is added and saved, but you'll stay on the list of suggested topics. The topic will also be removed from the list of suggested topics. To completely remove the suggestion, select Delete suggestion. 
The topic will be deleted from the list of suggested topics. You can run the Suggest topics command again if you want to restore it. In the suggested topics list, hover over the name of the suggested topic you want to add or delete. - To add the topic to your bot, select Add to topics . You won't see a preview of the topic and the topic will be moved from the list of Suggested topics into Existing topics. You can also add or delete multiple topic suggestions at a time. If you select multiple rows, you'll see options to Add to topic or Delete. Enable topics in your bot Suggested topics are added to the Existing tab with their status set to Off. This way, topics won't be prematurely added to your bot. Select Topics on the side pane. Go to the Existing tab. For each topic you want to enable, turn on the toggle under Status. See also Feedback Submit and view feedback for
https://docs.microsoft.com/en-au/power-virtual-agents/advanced-create-topics-from-web
2022-08-07T23:33:59
CC-MAIN-2022-33
1659882570730.59
[array(['media/advanced-create-topics-from-web/suggested-web-error-bar.png', 'A red banner alert that says We ran into problems getting suggestions appears at the top of the page with a link to more details.'], dtype=object) array(['media/advanced-create-topics-from-web/suggested-web-error-detail.png', 'A pop-up window that describes the errors encountered when trying to get suggestions from a web page.'], dtype=object) array(['media/advanced-create-topics-from-web/suggested-web-topics.png', 'The Suggested tab on the Topics page lists each topic by name, trigger phrase, source, and date it was received.'], dtype=object) array(['media/advanced-create-topics-from-web/sample-multi-turn-topic.png', 'A screenshot of the preview for a multi-turn topic suggestion showing multiple branches from the original question.'], dtype=object) ]
docs.microsoft.com
Themes Themes allow you to customize the look and feel of the Standard Notes app on all platforms. You can view the source code of our official themes in order to best understand how to create your own theme. For how to install a theme, please see Publishing. #Creating a theme Themes are simple CSS files which override a few variables to style the look of the application. CSS themes will automatically work on mobile. Your CSS file should contain content similar to the below. Note that font and font sizes do not apply to mobile; only desktop/web. In order to get SN to display a dock icon for your theme (a circle in the lower right corner of the app that allows you to quickly toggle themes), add the following payload into the your ext.json file when publishing your theme: #Reloading Mobile Themes The mobile app will download a theme once and cache it indefinitely. If you're installing your own mobile theme and make changes, you can press and hold on the theme name in the list to bring up the option to re-download the theme from the server. #3.9.15 Changes Since v3.9.15, the items in the notes list use a new variable for the background color, which will partially break the look of your theme when a note is selected or is hovered upon. In order to fix this, override the --sn-stylekit-grey-5 color to one which fits your theme. You might also need to override the --sn-stylekit-grey-4-opacity-variant variable if the tags inside the note item don't look correct. #Licensing Our themes are provided open-source mainly for educational and quality purposes. You're free to install them on your own servers, but please consider subscribing to Standard Notes Extended to help sustain future development of the Standard Notes ecosystem.
https://docs.standardnotes.com/extensions/themes/
2022-08-07T21:28:05
CC-MAIN-2022-33
1659882570730.59
[]
docs.standardnotes.com
Create Dashboards First, create a dashboard (under Dashboard 2.0) and then create tiles based on your requirements. A Partner Administrator or a client administrator who has auto-monitoring feature enabled and flag enabled can create and manage dashboards. NoteA Dashboard is associated with the user. A user can view a dashboard created by another user only if it has been shared with that particular user. To create a dashboard: Navigate to Dashboards > Dashboard 2.0. The default dashboard is displayed. Click the hamburger menu icon and click + Create Dashboard. Enter the name of the dashboard and click CREATE. The dashboard is displayed. You can also import a dashboard. Upload JSON file to import. You can create collections. Create Tiles To create a tile: Click the Create Tile icon or click + from the toolbar. The Create popup appears. Click the METRIC tile. The query window appears with DATA and VISUALIZATION columns. Enter the query in the PROMQL intelligent box. Enter a legend in the LEGEND box. Click CREATE. The tile is created and added to the dashboard.
https://jpdemopod2.docs.opsramp.com/platform-features/feature-guides/dashboards/dashboard20/create-dashboard/
2022-08-07T22:53:33
CC-MAIN-2022-33
1659882570730.59
[array(['https://docsmedia.opsramp.com/screenshots/Dashboard/dashboard-page.png', None], dtype=object) ]
jpdemopod2.docs.opsramp.com
2. IPU-Machine BMC specification The IPU-Machine baseboard management controller (BMC) runs software based on the OpenBMC project. The responsibilities of the BMC software stack are to control, monitor and manage system hardware, including power, sensors, inventories and event logging. You can control and monitor the system, via the BMC, using a variety of user interfaces such as a command line interface (CLI), REST API, IPMI and Redfish. 2.1. BMC subsystem A block diagram of the IPU-Machine, showing the BMC subsystem, is shown in Fig. 2.2. The physical components in the BMC subsystem are: ASPEED AST2520 baseboard management controller System FPGA 128 MB serial boot flash 1 GB of DDR4 DRAM One USB port — Micro-USB management interface, see Fig. 13.1 1 GbE mgmt ethernet interface — Eth0 which is configured to 100 Mbps, full-duplex with auto-negotiation enabled on BMC boot 1 GbE internal ethernet interface - Eth1 which is cofigured to 100Mbps, full-duplix with auto-negotiation enabled for BMC/Gateway communication Two I2C PSU interfaces for monitoring the state of the power supplies Five fan interfaces for controlling and measuring the speed of the system cooling fans Three LEDs used to indicate status 2.1.1. System FPGA The FPGA is mainly a status and control signal concentrator. It ensures that those control signals are always in a safe state. In addition, it provides hardware monitoring and protection for any thermal or voltage abnormalities. It also controls the sequencing of supply voltages, clocks and reset signals. 2.1.2. LEDs There are three LEDs used to indicate status: Green indicates normal operation status White is used to identify a specific IPU-Machine in a system Yellow indicates various error conditions: Temperature alert detected Standby voltages detected failing PSUs detected failing Connector domain detected failing IPU-Gateway 1 domain detected failing IPU 1 and 2, or IPU 3 and 4 domains detected failing Fan fail detected (too few fans or fans running too slow) No profile configuration found in flash BMC flash not trusted Note The white locator LED can be turned on and off with either the CLI or over IPMI. Using the CLI:$ ipum-utils locator_led_on $ ipum-utils locator_led_off Using IPMI:$ ipmitool -I lanplus -C 3 -p 623 -U <bmcuser> -P <bmcpass> -H <bmcip> chassis identify <interval> where <interval>is the time in seconds (default 15 and max 255) that the white locator LED remains lit. 2.1.3. Ethernet Switch There is an ethernet switch on IPU-M2000 motherboard, that is setup by BMC and can be configured over BMC CLI or REST interface, to provide different connectivity options between IPU-M2000 components and external network. Below are description of notable ports of this ethernet switch with a picture to help visualizing it. Port 1 : Connected to upper RJ45 connector on IPU-M2000 front Port 2 : Connected to lower RJ45 connector on IPU-M2000 front Port 5 : Connected to BMC ethernet internface Port 6 : Connected to GW ethernet interface This ethernet switch can be configured in three different forwarding modes with respect to the above ports: BmcOnly: BMC can send/recieve traffic to/from port 1 and port 2. Gateway traffic is blocked. BmcGatewaySplit: BMC can only send/receive traffic to/from port 2. Gateway can only send/receive to/from only port 1. Open (default): BMC and Gateway can send/receive traffic to/from port 1 or port 2. 
You can configure the ethernet switch to one of the above modes using BMC CLI as below: $ ethswitch-fix -s BmcOnly $ ethswitch-fix -s BmcGatewaySplit $ ethswitch-fix -s Open You can configure the ethernet switch to one of the above modes using BMC over REST as below: $ curl -k -X PUT https://<bmcip>/xyz/openbmc_project/ethswitch/mode/attr/Mode -d '{"data":"xyz.openbmc_project.ethswitch.Mode.State.BmcOnly"}' -u <bmcuser>:<bmcpass> $ curl -k -X PUT https://<bmcip>/xyz/openbmc_project/ethswitch/mode/attr/Mode -d '{"data":"xyz.openbmc_project.ethswitch.Mode.State.BmcGatewaySplit"}' -u <bmcuser>:<bmcpass> $ curl -k -X PUT https://<bmcip>/xyz/openbmc_project/ethswitch/mode/attr/Mode -d '{"data":"xyz.openbmc_project.ethswitch.Mode.State.Open"}' -u <bmcuser>:<bmcpass> 2.2. BMC functions The BMC supports the following system management functions: 2.4. Supported IPMI commands This section summarises the ipmitool commands supported by the BMC. For details of the parameters and examples of use, see the appropriate user guide chapter. The BMC is tested with ipmitool version 1.8.18. Note Sensor thresholds values in sensor thresh command should match this equation: lcr <= lnc <= unc <= ucr The maximum number of system users is 30 The maximum number of users with IPMI access is 15 The maximum username length for an IPMI user is 16 bytes User IDs are from 1 to 15 Passwords must be a minimum of 8 characters in length The privilege levels are: 0x1: Callback 0x2: User 0x3: Operator 0x4: Administrator 0x5: OEM Proprietary 0xF: No Access
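The REST calls shown with curl above translate directly to Python. This sketch uses placeholder credentials and BMC address, and disables certificate verification only because the curl examples use -k:

import requests

BMC = "10.1.2.3"               # placeholder BMC address
AUTH = ("bmcuser", "bmcpass")  # placeholder credentials

MODES = {
    "BmcOnly": "xyz.openbmc_project.ethswitch.Mode.State.BmcOnly",
    "BmcGatewaySplit": "xyz.openbmc_project.ethswitch.Mode.State.BmcGatewaySplit",
    "Open": "xyz.openbmc_project.ethswitch.Mode.State.Open",
}

def set_switch_mode(mode_name):
    # PUT the new forwarding mode, exactly as in the curl examples above.
    url = "https://%s/xyz/openbmc_project/ethswitch/mode/attr/Mode" % BMC
    resp = requests.put(url, json={"data": MODES[mode_name]}, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

set_switch_mode("Open")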
https://docs.graphcore.ai/projects/bmc-user-guide/en/latest/specification.html
2022-08-07T22:14:50
CC-MAIN-2022-33
1659882570730.59
[]
docs.graphcore.ai
2. Software installation You need to install the Poplar SDK, which includes the development tools and some command line tools for managing the IPU hardware. The software can be downloaded from the Graphcore software download portal. 2.1. SDK installation Download the SDK tarball and unpack it using the following command: $ tar -xvzf poplar_sdk-[os]-[ver].tar.gz Where [os] is the host OS and [ver] is the software version number of the package. The main components under the SDK installation directory are shown in Table 2.1. Note There are two versions of each of the TensorFlow wheel files, optimised for Intel and AMD processors respectively. These are indicated by the arch component of the filename. You must install the correct wheel file for your host processor type. 2.2. Setting up the SDK environment To use the Graphcore tools and Poplar libraries, several environment variables (such as library and binary paths) need to be set up, as shown below: $ cd poplar_sdk-[os]-[ver] $ source poplar-[os]-[ver]/enable.sh $ source popart-[os]-[ver]/enable.sh Where [os] is the host OS and [ver] is the current software version number of each package. You will need to source both the Poplar and the PopART enable scripts if you are using PyTorch or PopART. Note Each of these scripts must be sourced every time the Bash shell is reset. If you attempt to run any Poplar software without having first enabled these scripts, you’ll get an error from the C++ compiler similar to the following (the exact message will depend on your code). fatal error: 'poplar/Engine.hpp' file not found You can verify that Poplar has been successfully set up by running: $ popc --version This will display the version number of the installed software. PopTorch and TensorFlow for the IPU are provided as Python wheel files that can be installed using pip as described in the following sections. 2.3. Setting up PyTorch for the IPU PopTorch is part of the Poplar SDK. It provides functions to allow PyTorch models to run on the IPU. Before running PopTorch, you must source the enable.sh scripts for Poplar and PopART as described in Section 2.2, Setting up the SDK environment. PopTorch is packaged as a Python wheel file that can be installed using pip. Note PopTorch requires pip version 18.1 or later, so it important to make sure you have the latest version before installing PopTorch. We recommend creating a virtual environment to isolate your PopTorch environment from the system Python environment. You can use the Python tool virtualenv for this. You can create a virtual environment and install PopTorch as shown below: $ virtualenv -p python3 poptorch_test $ source poptorch_test/bin/activate $ python -m pip install -U pip $ python -m pip install <sdk_path>/poptorch_x.x.x.whl To confirm that PopTorch has been installed, you can use pip list, which should include the poptorch package in the output. You can also test that the module has been installed correctly by attempting to import it in Python, for example: $ python3 -c "import poptorch; print(poptorch.__version__)" Note that you may get a warning message including the string “UserWarning: Failed to initialize NumPy” unless you have numpy installed. This warning can be safely ignored; it will not prevent you from using PopTorch. You can stop the warning being displayed by installing numpy into the virtualenv: $ python -m pip install numpy For more information, refer to PyTorch for the IPU: User Guide. 2.4. 
Setting up TensorFlow for the IPU Before running TensorFlow, you must source the enable.sh script for Poplar as described in Section 2.2, Setting up the SDK environment. To use the Graphcore port of TensorFlow, you must set up a Python virtual environment. You can create a virtual environment in a directory called workspace with: $ virtualenv -p python3.6 ~/workspace/tensorflow_env You have to activate the virtual environment before you can use it with: $ source ~/workspace/tensorflow_env/bin/activate Now all subsequent installations will be local to that virtual environment, until you deactivate it. Warning The versions of TensorFlow included in Poplar SDK 2.5 and earlier are not compatible with protobuf version 4 (see TensorFlow issue #56077). When you install a TensorFlow wheel from the Poplar SDK, you must ensure you have a compatible version of protobuf, downgrading if necessary. For TensorFlow 2: $ python -m pip install "protobuf>=3.9.2,<3.20" --force-reinstall For TensorFlow 1: $ python -m pip install "protobuf>=3.8.0,<3.20" --force-reinstall You can do this before or after installing the Graphcore TensorFlow wheel. We support TensorFlow 1 and TensorFlow 2. There are versions of these compiled for Intel and AMD processors to provide the best performance on those hosts. As a result, there are four Python wheel files that are included in the Poplar SDK that can be installed with pip. Warning You must install the correct wheel file for your host CPU. You can use the command lscpu to determine the CPU type, if you are not sure. To install Graphcore’s TensorFlow distribution, you would use a command similar to the following: $ python -m pip install tensorflow-<tf_ver>+gc<poplar_ver>+<build_info>+<arch>-<python_ver>-<platform>.whl where <tf-ver> is the TensorFlow version, <poplar_ver> is the Poplar SDK version, <build_info> is information about the specific build, <arch> is the host CPU architecture (Intel or AMD), <python_ver> is the Python version, and <platform> is the platform. For example: $ python -m pip install tensorflow-2.6.3+gc2.6.0+214768+ac0300b8f63+amd_znver1-cp36-cp36m-linux_x86_64.whl would install TensorFlow 2.6.3 for Poplar SDK 2.6.0 for an Intel CPU on 64-bit Linux. To confirm that tensorflow has been installed, you can use pip list, which should include the tensorflow package in the output, for example: (tensorflow_env)$ pip list Package Version ------------- ---------- numpy 1.19.5 pkg-resources 0.0.0 tensorflow 2.6.3 wheel 0.37.0 You can also test that the module has been installed correctly by importing it in Python, for example: $ python -c "from tensorflow.python import ipu" 2.4.1. IPU Keras In the TensorFlow 2.6 release, Keras was moved into a separate pip package. In the Poplar SDK 2.6 release, which includes the Graphcore distribution of TensorFlow 2.6, there is a Graphcore distribution of Keras which includes IPU-specific extensions. Note The Keras wheel must be installed after the TensorFlow wheel, but before the TensorFlow Addons wheel. To install the Keras wheel, use a command similar to: $ python -m pip install --force-reinstall --no-deps keras-<tf-ver>*.whl where <tf-ver> is the TensorFlow 2 version. For example: $ python -m pip install--force-reinstall --no-deps keras-2.6.0+gc2.6.0+214767+d36553ad-py2.py3-none-any.whl would install the Keras wheel for TensorFlow 2.6 for the IPU for Poplar SDK 2.6. 
You can test that the package has been installed correctly by importing it in Python, for example: $ python3 -c "import keras" If you get an “illegal instruction” or similar error, then try to install the Keras wheel again. 2.4.2. IPU TensorFlow Addons Poplar SDK 2.4 onwards includes an additional package called IPU TensorFlow Addons. This contains a collection of addons created for TensorFlow for the IPU, such as IPU-specific Keras optimizers. To install this package you need to use a command similar to the following: $ python -m pip install ipu_tensorflow_addons-<tf_ver>+gc<poplar_ver>+<build_info>-py3-none-any.whl where <tf-ver> is the TensorFlow version, <poplar_ver> is the Poplar SDK version, and <build_info> is information about the specific build. For example: $ python -m pip install ipu_tensorflow_addons-2.6.3+gc2.6.0+214767+3efc838-py3-none-any.whl would install IPU TensorFlow Addons for TensorFlow 2.6.3 for the IPU for Poplar SDK 2.6.0. Note that the IPU TensorFlow Addons wheels are not specific to a CPU architecture. 2.4.3. Next steps For the next steps with TensorFlow, refer to the appropriate user guide:
https://docs.graphcore.ai/projects/ipu-pod-da-getting-started/en/latest/sw-installation.html
2022-08-07T23:19:01
CC-MAIN-2022-33
1659882570730.59
[]
docs.graphcore.ai
Link to MSDN article with information about which .NET Framework versions are in which versions of Windows and Visual Studio I have been maintaining a blog post for a while that lists which version(s) of the .NET Framework are included with each version of Windows. Recently, I found an MSDN article titled .NET Framework Versions and Dependencies that includes this information in addition to some other useful .NET Framework versioning information. This article includes details about the following topics: - Operating system support - what versions of the .NET Framework are included with each version of Windows - Features and IDE - what version of the .NET Framework shipped with each version of Visual Studio and some key features included in each version - Targeting and running .NET Framework 4, 4.5, and 4.5.1 apps - Targeting and running apps for older versions of the .NET Framework In addition, the article includes links to several other useful articles related to .NET Framework versioning, including the following: - Version compatibility in the .NET Framework - Application compatibility in the .NET Framework - Assemblies and side-by-side execution - .NET Framework 4.5 migration guide If you have questions about .NET Framework versioning, compatibility, and/or OS integration, I encourage you to check out the information in this set of MSDN articles.
https://docs.microsoft.com/en-us/archive/blogs/astebner/link-to-msdn-article-with-information-about-which-net-framework-versions-are-in-which-versions-of-windows-and-visual-studio
2022-08-07T22:48:43
CC-MAIN-2022-33
1659882570730.59
[]
docs.microsoft.com
Generate an Authentication Token¶ You can use cURL to try the authentication process in two steps: get a token, and send the token to a service. Get an authentication token by providing your user name and either your API key or your password. Here are examples of both approaches: You can request a token by providing your user name and your password. $ curl -X POST -d '{"auth":{"passwordCredentials":{"username": "joecool", "password":"coolword"}, "tenantId":"5"}}' -H 'Content-type: application/json' Successful authentication returns a token which you can use as evidence that your identity has already been authenticated. To use the token, pass it to other services as an X-Auth-Tokenheader. Authentication also returns a service catalog, listing the endpoints you can use for Cloud services. Use the authentication token to send a GETto a service you would like to use. Authentication tokens are typically valid for 24 hours. Applications should be designed to re-authenticate after receiving a 401 (Unauthorized) response from a service endpoint. Note If you programmatically parse an authentication response, be aware that service names are stable for the life of the particular service and can be used as keys. You should also be aware that a user’s service catalog can include multiple uniquely-named services that perform similar functions.
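The same two-step flow works from Python with the requests library. The endpoint URLs below are placeholders for your own Identity and service endpoints; the request body mirrors the cURL example above:

import requests

AUTH_URL = "https://identity.example.com/v2.0/tokens"  # placeholder Identity endpoint

payload = {
    "auth": {
        "passwordCredentials": {"username": "joecool", "password": "coolword"},
        "tenantId": "5",
    }
}

resp = requests.post(AUTH_URL, json=payload)
resp.raise_for_status()
access = resp.json()["access"]

token = access["token"]["id"]        # send this as X-Auth-Token on later requests
catalog = access["serviceCatalog"]   # lists the endpoints available to you

# Step two: call a service endpoint with the token.
service = requests.get(
    "https://service.example.com/v2/queues",  # placeholder endpoint taken from the catalog
    headers={"X-Auth-Token": token},
)
if service.status_code == 401:
    # Token expired or invalid: re-authenticate and retry, as recommended above.
    pass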
https://docs.openstack.org/zaqar/latest/user/authentication_tokens.html
2022-08-07T22:52:05
CC-MAIN-2022-33
1659882570730.59
[]
docs.openstack.org
json2msgpack -i config.json -o data/config.mp
Save the config.mp file into an empty folder. Anything in the folder will be uploaded to the SPIFFS partition on the Amp. In my case, I've saved it to a folder called data.
Use mkspiffs to create the SPIFFS partition from the data folder.
mkspiffs -c data -s 61440 spiffs.bin
esptool.py --port COM7 write_flash 0x251000 spiffs.bin
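If you would rather not install json2msgpack, the same JSON-to-MessagePack conversion can be done with the Python msgpack package. This is an alternative sketch, not part of the original instructions:

import json
import msgpack  # pip install msgpack

# Equivalent of: json2msgpack -i config.json -o data/config.mp
with open("config.json") as src:
    config = json.load(src)

with open("data/config.mp", "wb") as dst:
    dst.write(msgpack.packb(config, use_bin_type=True))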
https://docs.ridewithamp.com/customize-the-amp/update-amp-profile
2022-08-07T21:15:39
CC-MAIN-2022-33
1659882570730.59
[]
docs.ridewithamp.com
Service Request is a standard request that has already been pre-approved in an organization. - Service Request is created when user needs information, advice, or access to services. - Creating a list of pre-approved service requests reduces approval workflow cycles. Configure service request settings Customize Service Request details by creating custom fields in addition to the predefined fields: - From All Clients, select the client. - Go to Setup > Service Desk > Configuration section > Settings. - Click Service Request, configure the settings, and click Update. Create a service request Service requests can be created using: - Service Desk Create a service request on Service Desk - From the options in the drop-down menu, click Service Desk. - Click New and click Service Request. - In the New Service Request page, enter the required details and click Create. Supported fields: Create a service request using an email integration - Go to Setup > Integrations > Integrations. - In the Available Integrations, Click Email Requests. - In the Install Email Requests Integrations page, provide a Name of the Integration. - Select the Request Type as Service Request, upload an Image file if required, and click Install. The new email integration is displayed on the Email Requests Integration page. - Click the copy icon in Incoming Email Address. - While composing Service Request email, enter the following in the email fields: - To: Incoming Email Address - Subject: Service Request subject You can define rules to configure email attributes such as subject, description, priority, external ticket ID and email addresses. Create a service requests from closed service requests If a closed Service Request needs to be reviewed or a similar issue needs to be reported, create a new Service Request and attach the closed Service Request to the same. Prerequisite: Activate the Enable to create a follow-up service requests etting to be able to view the Create Follow-Up option. - Select Service Desk (from the drop-down menu). - Select a service request that is in Closed status. - Click Create Follow-Up. - Enter details and click Create. The Follow-Up Service Request is created. Edit service requests Edit a single service request - From the options in the drop-down menu, click Service Desk. - Click a Service Request. - Click Edit button and edit the required fields. Bulk edit multiple service requests - From the options in the drop-down menu, click Service Desk. - Click Bulk Update button and select the number of service requests to be edited. - Select Apply Actions option. Update Actions window is displayed. - Select the required changes and click Update. Attach an incident or task request to a service request Attach a single or multiple incidents, or task requests to an existing Service Request. As service requests are closed, the incidents and task requests are automatically resolved and closed. Attach a new Service Request to an existing incident or task request - From the options in the drop-down menu, click Service Desk. - Click New and click Service Request. - In the New Service Request page, click Attach Incidents/Task Requests. - In the Incidents/Tasks page: - All - Select to view both the incidents and task requests. - Incident - Select to view the incidents. - Task Request - Select to view the task requests. - Select the required incident or task request and then click Update. Notes - To attach an incident, the user must have the Manage Incident permission. 
Otherwise, the incidents do not appear in the Incidents/Tasks page. - To attach a task request, the user must have the Manage Task Request permission. Otherwise, the task requests do not appear in the Incidents/Tasks page. - If the user has the Manage Service Desk permissions, the incidents and task requests can still be attached to the service request though the user has only View Incident and View Task Request permissions. To know about the available permissions for a user, go to Setup > Accounts > Users, select a user, and click View under User Roles. Attach an existing Service Request to an incident - From the options in the drop-down menu, click Service Request. - Click Attached Requests and click Add/Modify. - Select the required incident and click Submit as New. Use auto-close to close a service request Configure Auto-Close Policies to close Service Requests that are resolved and that are in the inactive state since a certain elapsed time. - Go to Setup > Service Desk > Auto Close Policies. - Select the client and click Auto Close Service Requests. - Enter: - Name: Name of the Auto-Close policy - Resolved Tickets Above: The inactive period of a resolved Service Request beyond which the Service Request needs to be closed - Click Submit. Auto-Close Policy is added. View service requests The service request details page provides the following attributes.
https://jpdemopod2.docs.opsramp.com/platform-features/feature-guides/tickets/service-request/
2022-08-07T22:30:08
CC-MAIN-2022-33
1659882570730.59
[array(['https://docsmedia.opsramp.com/screenshots/Tickets/Follow-up-Service-Request.png', 'Follow Up Service Request'], dtype=object) array(['https://docsmedia.opsramp.com/screenshots/Tickets/Attach-incident-to-SR-800x339.png', 'Attach Incident to SR'], dtype=object) ]
jpdemopod2.docs.opsramp.com
Limitations and considerations when accessing federated data with Amazon Redshift Some Amazon Redshift features don't support access to federated data. You can find related limitations and considerations following. The following are limitations and considerations when using federated queries with Amazon Redshift: Federated queries support read access to external data sources. You can't write or create database objects in the external data source. In some cases, you might access an Amazon RDS or Aurora database in a different AWS Region than Amazon Redshift. In these cases, you typically incur network latency and billing charges for transferring data across AWS Regions. We recommend using an Aurora global database with a local endpoint in the same AWS Region as your Amazon Redshift cluster. Aurora global databases use dedicated infrastructure for storage-based replication across any two AWS Regions with typical latency of less than 1 second. Consider the cost of accessing Amazon RDS or Aurora. For example, when using this feature to access Aurora, Aurora charges are based on IOPS. Federated queries don't enable access to Amazon Redshift from RDS or Aurora. Federated queries are only available in AWS Regions where both Amazon Redshift and Amazon RDS or Aurora are available. Federated queries currently don't support ALTER SCHEMA. To change a schema, use DROPand then CREATE EXTERNAL SCHEMA. Federated queries don't work with concurrency scaling. Federated queries currently don't support access through a PostgreSQL foreign data wrapper. Federated queries to RDS MySQL or Aurora MySQL support transaction isolation at the READ COMMITTED level. If not specified, Amazon Redshift connects to RDS MySQL or Aurora MySQL on port 3306. Confirm the MySQL port number before creating an external schema for MySQL. When fetching TIMESTAMP and DATE data types from MySQL, zero values are treated as NULL. The following are considerations for transactions when working with federated queries to PostgreSQL databases: If a query consists of federated tables, the leader node starts a READ ONLY REPEATABLE READ transaction on the remote database. This transaction remains for the duration of the Amazon Redshift transaction. The leader node creates a snapshot of the remote database by calling pg_export_snapshotand makes a read lock on the affected tables. A compute node starts a transaction and uses the snapshot created at the leader node to issue queries to the remote database. An Amazon Redshift external schema can reference a database in an external RDS PostgreSQL or Aurora PostgreSQL. When it does, these limitations apply: When creating an external schema referencing Aurora, the Aurora PostgreSQL database must be at version 9.6, or later. When creating an external schema referencing Amazon RDS, the Amazon RDS PostgreSQL database must be at version 9.6, or later. An Amazon Redshift external schema can reference a database in an external RDS MySQL or Aurora MySQL. When it does, these limitations apply: When creating an external schema referencing Aurora, the Aurora MySQL database must be at version 5.6 or later. When creating an external schema referencing Amazon RDS, the RDS MySQL database must be at version 5.6 or later.
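Since ALTER SCHEMA isn't supported for federated schemas, the drop-and-recreate workaround looks roughly like the following. This is a sketch only: the connection details, endpoint, ARNs, and schema names are placeholders, and the exact CREATE EXTERNAL SCHEMA options should be checked against the Redshift SQL reference for your setup.

import redshift_connector  # pip install redshift-connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    database="dev",
    user="awsuser",
    password="...",
)
conn.autocommit = True
cur = conn.cursor()

# Federated schemas can't be altered in place, so drop and re-create instead.
cur.execute("DROP SCHEMA IF EXISTS apg_sales")
cur.execute(
    "CREATE EXTERNAL SCHEMA apg_sales "
    "FROM POSTGRES DATABASE 'sales' SCHEMA 'public' "
    "URI 'sales-db.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432 "
    "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-federated' "
    "SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:sales-db-creds'"
)

# Federated tables are read-only from Redshift, but they query like local tables.
cur.execute("SELECT count(*) FROM apg_sales.orders")
print(cur.fetchone())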
https://docs.aws.amazon.com/redshift/latest/dg/federated-limitations.html
2022-08-07T22:41:14
CC-MAIN-2022-33
1659882570730.59
[]
docs.aws.amazon.com
Version FlexNet Manager Suite 2022 R1 (On-Premises) Registry Versionidentifies the version of an application in the manual mapper for the application usage component. After usage is detected and uploaded, this version appears in the Raw Software Usage page in the web interface for FlexNet Manager Suite. This registry key must be created manually, within a node for the chosen application that has also been inserted manually (and shown below as Application node). Tip: If this value for Version, together with the value for Application, are exact matches for the application version and name recorded in installer evidence for the application, the set-up can be simplified a little, because this mapping of the executable to the application automatically matches through the known installer evidence. (If not, this manual mapper entry can also be manually linked to the application record.) Values Registry FlexNet Manager Suite (On-Premises) 2022 R1
https://docs.flexera.com/FlexNetManagerSuite2022R1/EN/GatherFNInv/SysRef/FlexNetInventoryAgent/topics/PMD-Version.html
2022-08-07T21:38:39
CC-MAIN-2022-33
1659882570730.59
[]
docs.flexera.com
cts:column-range-query( schema as xs:string, view as xs:string, column as xs:string, value as xs:anyAtomicType*, [operator as xs:string?], [options as xs:string*], [weight as xs:double?] ) as cts:triple-range-query
Returns a cts:query that matches documents where a TDE-view column equals a value. Searches with the cts:column-range-query constructor require the triple index; if the triple index is not configured, then an exception is thrown. This function returns a cts:triple-range-query, and all functions that take a cts:triple-range-query as input can be used (e.g. cts:triple-range-query-subject).
xquery version "1.0-ml"; let $query:=cts:column-range-query("MySchema","MyView","value",xs:decimal(200), "<") return cts:uris((),(),$query)
https://docs.marklogic.com/cts:column-range-query
2022-08-07T21:34:17
CC-MAIN-2022-33
1659882570730.59
[]
docs.marklogic.com
sql.insert( str as String, start as Number, length as Number, str2 as String ) as String
Returns a string that is the first argument with length characters removed starting at start, and with the second string inserted beginning at start.
sql.insert("abcdef",2,1,"(REDACTED)") => "ab(REDACTED)def"
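To make the semantics concrete, here is a rough Python equivalent that reproduces the example above; note that matching the sample output requires treating start as a 0-based offset:

def sql_insert(s, start, length, s2):
    # Remove `length` characters beginning at `start`, then splice in `s2`.
    return s[:start] + s2 + s[start + length:]

assert sql_insert("abcdef", 2, 1, "(REDACTED)") == "ab(REDACTED)def"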
https://docs.marklogic.com/sql.insert
2022-08-07T21:42:51
CC-MAIN-2022-33
1659882570730.59
[]
docs.marklogic.com
fortios_firewall_auth_portal – Configure firewall authentication portals in Fortinet’s FortiOS and FortiGate¶ New in version 2.8. Synopsis¶ This module is able to configure a FortiGate or FortiOS by allowing the user to configure firewall feature and auth_portal category. Examples includes all options and need to be adjusted to datasources before usage. Tested with FOS v6.0.2 Requirements¶ The below requirements are needed on the host that executes this module. fortiosapi>=0.9.8 Parameters¶ Notes¶ Note Requires fortiosapi library developed by Fortinet Run as a local_action in your playbook Examples¶ - hosts: localhost vars: host: "192.168.122.40" username: "admin" password: "" vdom: "root" tasks: - name: Configure firewall authentication portals. fortios_firewall_auth_portal: host: "{{ host }}" username: "{{ username }}" password: "{{ password }}" vdom: "{{ vdom }}" firewall_auth_portal: groups: - name: "default_name_4 (source user.group.name)" identity-based-route: "<your_own_value> (source firewall.identity-based-route.name)" portal-addr: "<your_own_value>" portal-addr6: "<your_own_value>" Return Values¶ Common return values are documented here, the following are the fields unique to this module: Status¶ This module is not guaranteed to have a backwards compatible interface. [preview] This module is maintained by the Ansible Community. [community]
https://docs.ansible.com/ansible/2.8/modules/fortios_firewall_auth_portal_module.html
2022-08-07T22:04:21
CC-MAIN-2022-33
1659882570730.59
[]
docs.ansible.com
Requirements and Host Configuration¶
Overview¶
Below are some instructions and suggestions to help you get started with a Kubeadm All-in-One environment on Ubuntu 18.04. Other supported versions of Linux can also be used, with the appropriate changes to package installation.
Requirements¶
System Requirements¶
The recommended minimum system requirements for a full deployment are:
16GB of RAM 8 Cores 48GB HDD
For a deployment without cinder and horizon the system requirements are:
8GB of RAM 4 Cores 48GB HDD
This guide covers the minimum number of requirements to get started. All commands below should be run as a normal user, not as root. Appropriate versions of Docker, Kubernetes, and Helm will be installed by the playbooks used below, so there's no need to install them ahead of time.
Warning By default the Calico CNI will use 192.168.0.0/16 and Kubernetes services will use 10.96.0.0/16 as the CIDR for services. Check that these CIDRs are not in use on the development node before proceeding, or adjust as required.
Host Configuration¶
OpenStack-Helm uses the host's networking namespace for many pods, including Ceph, Neutron and Nova components. For this to function as expected, pods need to be able to resolve DNS requests correctly. Ubuntu Desktop and some other distributions make use of mdns4_minimal, which does not operate as Kubernetes expects with its default TLD of .local. To operate as expected, either change the hosts line in /etc/nsswitch.conf, or confirm that it matches: hosts: files dns
Host Proxy & DNS Configuration¶
Note If you are not deploying OSH behind a proxy, skip this step.
Set your local environment variables to use the proxy information. This involves adding or setting the following values in /etc/environment:
Note Depending on your specific proxy, https_proxy may be the same as http_proxy. Refer to your specific proxy documentation. Your changes to /etc/environment will not be applied until you source them: source /etc/environment
OSH runs updates for local apt packages, so we will need to set the proxy for apt as well by adding these lines to /etc/apt/apt.conf: Acquire::http::proxy "YOUR_PROXY_ADDRESS:PORT"; Acquire::https::proxy "YOUR_PROXY_ADDRESS:PORT"; Acquire::ftp::proxy "YOUR_PROXY_ADDRESS:PORT";
Note Depending on your specific proxy, https_proxy may be the same as http_proxy. Refer to your specific proxy documentation.
https://docs.openstack.org/openstack-helm/latest/install/developer/requirements-and-host-config.html
2022-08-07T21:15:01
CC-MAIN-2022-33
1659882570730.59
[]
docs.openstack.org
The Views menu is located in the top-left corner of the Look Dev view, next to the Settings menu. This controls all the options specific to each individual view. If a multi-view mode is enabled, both views have their own settings. This is indicated by orange or blue to match the screen they represent.
https://docs.unity3d.com/es/2018.2/Manual/LookDevViewsMenus.html
2022-08-07T22:07:22
CC-MAIN-2022-33
1659882570730.59
[]
docs.unity3d.com
Recent posts¶ 12 July - Read the Docs newsletter - July 2022. 08 June - Read the Docs newsletter - June 2022 We’re excited to welcome Benjamin Balder Bach to our team, joining as a part-time contractor for now. He’s a developer with a history of working as an Open Source maintainer and event organizer in the Django community. He has also previously contributed to Read the Docs and will be a wonderful addition to the team. We’re also excited to see people using our new build.jobs feature that we previously announced. There are a lot of interesting ways to adapt the build process with this, and we’ll continue to watch with interest how people are using it! 09 May - Announcing user-defined build jobs We are happy to announce a new feature to specify user-defined build jobs on Read the Docs. If your project requires custom commands to be run in the middle of the build process, they can now be executed with the new config key build.jobs. This opens up a complete world full of new and exciting possibilities to our users. If your project has ever required a custom command to run during the build process, you probably wished you could easily specify this. You might have used a hacky solution inside your Sphinx’s conf.pyfile, but this was not a great solution to this problem. 05 May - Read the Docs newsletter - May 2022 April has been another exciting month here at Read the Docs. We’ve gotten a few good candidates for our Product-focused Application Developer job posting, and we’re on to the second round of interviews. Expect to hear more about any new team members here in the next couple months. We’ve continued building a number of features and bug fixes in our roadmap: 07 April - Read the Docs newsletter - April 2022 March has been a productive month for Read the Docs. We have finished our Product-focused Application Developer job posting, which we’re excited about. We plan to share this on a few job boards, and are looking for someone to join the team who is excited to work on our product. We’ve continued building a number of features and bug fixes in our March roadmap: 21 March - Read the Docs 2021 Stats 2021 continued in the realm of being a tough year. That said, Read the Docs had a lot of good things happen this year. We did a majority of the work on our CZI grant, grew our team from 5 to 8, and continued to grow our EthicalAds network & Read the Docs for Business. It’s been another year of steady growth, and we hope to continue that into 2022. 10 March - Read the Docs newsletter - March 2022 It’s been pretty quiet on the company front in February, with nothing much to report. We’re actively working on our latest job description, which will be a product-focused Python development position. If you’re interested, please let us know. In February we continued to work on refactors and internal changes. Among the major user-facing changes: 07 March - War in Ukraine and what it means for Read the Docs With news surrounding the invasion of Ukraine evolving rapidly, we felt it was necessary to provide an update to our users and customers. At Read the Docs, we are outraged and saddened by the invasion of Ukraine and we condemn this act of violence as wrong and unlawful. We are monitoring the situation in Europe and how it relates to our employees, customers, and our services to the open source world. 01 March - Deprecation of the git:// protocol on GitHub Last year, GitHub announced the deprecation of the unsecured Git protocol due to security reasons. 
This change will be made permanent on March 15, 2022. At Read the Docs we found around 900 projects using a Git protocol URL (git://github.com/user/project) to clone their projects. To save time for our users, we have migrated those to use the HTTPS cloning URL instead. 08 February - Read the Docs newsletter - February 2022 Welcome to the latest edition of our monthly newsletter, where we share the most relevant updates around Read the Docs, offer a summary of new features we shipped during the previous month, and share what we’ll be focusing on in the near future. We have mostly finished migrating Read the Docs for Business users to Cloudflare for SSL. There are lots of interesting features this will enable, so stay tuned for updates there.
https://blog.readthedocs.com/index.html
2022-08-07T22:23:18
CC-MAIN-2022-33
1659882570730.59
[]
blog.readthedocs.com
Roles and Users Roles Roles are predefined in your Data Hub Service (DHS) portal and in each service you create. Each role is predefined with the appropriate privileges to perform certain tasks within its scope. - Portal roles are administrator roles that enable access to and perform management and operational tasks in the DHS portal. - Service roles enable access to and content management of a specific service. See: Users User accounts can be configured in your DHS portal, in each service you create, and in an external authentication provider. Each user account type is restricted within their scope. MarkLogic Data Hub Service does not automatically create user accounts. You must create portal users and service users and assign the appropriate roles to them. - Portal user accounts are administrator accounts used to manage the DHS portal and assigned portal roles. Although they can create, edit, and delete services, they do not have access to the contents of the databases as service users do. - Note: The first portal user to log into the portal is automatically assigned all portal roles. Additional portal users who log in are restricted until the first portal user assigns portal roles to them. - Service user accounts are restricted to the service in which they were created and assigned service roles. - Internal service users are user accounts configured using the portal and assigned service roles. - Note: Internal service users can only be assigned the service roles predefined in the DHS portal. - External service users are user accounts configured using an external authentication provider and assigned service roles. For details, see LDAP. - Note: External service users can be assigned the service roles predefined in the DHS portal or custom roles. For details about custom roles, see Custom Roles and Privileges. - Important: If using an external LDAP server, one or more users must exist in the Security Admin DN in the external LDAP server before you can map service roles to LDAP roles. See:
https://docs.marklogic.com/cloudservices/azure/security/about-security-roles-users.html
2022-08-07T21:21:23
CC-MAIN-2022-33
1659882570730.59
[]
docs.marklogic.com
The vertical origin of rotation for this Camera. See setOrigin to set both origins in a single, chainable call.
https://newdocs.phaser.io/docs/3.55.1/focus/Phaser.Cameras.Scene2D.Camera-originY
2022-08-07T21:49:43
CC-MAIN-2022-33
1659882570730.59
[]
newdocs.phaser.io
How to: Promote Actions on Pages Actions that appear in the Home tab of the ribbon are called promoted actions. Promoted actions are copies of existing actions that are defined in the other tabs, such as Actions, Navigate, and Reports. Promoted actions provide quick access to common tasks, because a page always opens on the Home tab so users do not have to look through other tabs to find them. For example, on the Customer Card page, you can promote an action to create a new sales order to help the salespeople do their work, because creating sales orders is one of their most important daily tasks. When promoting actions, you have the following options: Organize promoted actions under different categories. For example, the categories can be New, Tasks, Reports, and Discounts. The order of actions under a category is determined by its order in the Action Designer. You define the caption for each category. For more information on defining the promoted categories, see How to: Define Promoted Action Categories Captions for the Ribbon. Set up an image that displays with the action. If you do not set up an image yourself, then a default image is used. For more information about promoted action images, see How to: Set an Icon on an Action. Specify that the promoted action only appears on the Home tab, not on the tab where it is defined. If there are no promoted actions, then the Home tab is hidden. Important In the Dynamics NAV Universal App, only promoted actions are displayed. For more information, see Differences and Limitations When Developing Pages for the Microsoft Dynamics NAV Universal App. Adding a Promoted Action to a Page To add a promoted action to a page In the Development Environment, on the Tools menu, choose Object Designer. In Object Designer, select a page that already has actions, and then choose Design to open the page in Page Designer. For example, select page 22, which is the Customer List page. To open Action Designer, in the View menu, choose Page Actions. In Action Designer, select the action to promote. In the View menu, choose Properties. In the Properties window, set the Promoted property value to Yes. To display the action on the Home tab only, set the Promoted property value to Yes. Set the PromotedCategory property to the category under which you want the promoted action to appear. Unless you define your own caption, the category name that you select will be used as the category's captions on the page. For more information, see How to: Define Promoted Action Categories Captions for the Ribbon. To assign an image to the action, set the Image property to the name of the image. For a list of images, see Action Icon Library. Note To associate a larger icon with your action, set the PromotedIsBig property. Close the Properties window and exit Page Designer. Save and compile the page. To preview your promoted action, in Object Designer, select the page, and then choose Run. Promoted Action Category Location in the Ribbon The locations of promoted action categories in the ribbon are determined by their first instance in Action Designer. Starting from top of Action Designer and working down, the category of the first promoted action encountered is located on the far left of the ribbon; the next category encountered is located to the right of the first category, and so on. Customizing the Ribbon When customizing the ribbon from the Customize Ribbon window, some actions may not have the sizing button labeled Default Icon Size available. 
The Default Icon Size button has the options of displaying an action icon as default size, small, or large. The label of the button changes when small or large is selected. In some cases the action can be a promoted action with the property PromotedIsBig set to Yes in the Development Environment. If you set an action to PromotedIsBig, this overrules the choices in the Customize Ribbon window. See Also Actions Overview How to: Add Actions to a Page How to: Define Promoted Action Categories Captions for the Ribbon
https://docs.microsoft.com/en-us/dynamics-nav/how-to--promote-actions-on-pages
2018-06-17T22:09:10
CC-MAIN-2018-26
1529267859817.15
[]
docs.microsoft.com
ISO/IEC TS 18661-3:2015 defines C support for additional floating types _Floatn and _Floatnx, and GCC supports these type names; the set of types supported depends on the target architecture. These types are not supported when compiling C++. Constants with these types use suffixes fn or Fn and fnx or Fnx. These type names can be used together with _Complex to declare complex types. As an extension, GNU C and GNU C++ support additional floating types, which are not supported by all targets. __float128is available on i386, x86_64, IA-64, and hppa HP-UX, as well as on PowerPC GNU/Linux targets that enable the vector scalar (VSX) instruction set. __float128supports the 128-bit floating type. On i386, x86_64, PowerPC, and IA-64 other than HP-UX, __float128is an alias for _Float128. On hppa and IA-64 HP-UX, __float128is an alias for long double. __float80is available on the i386, x86_64, and IA-64 targets, and supports the 80-bit ( XFmode) floating type. It is an alias for the type name _Float64xon these targets. __ibm128is available on PowerPC targets, and provides access to the IBM extended double format which is the current format used for long double. When long doubletransitions to __float128on PowerPC in the future, __ibm128will remain for use in conversions between the two types. Support for these additional types includes the arithmetic operators: add, subtract, multiply, divide; unary arithmetic operators; relational operators; equality operators; and conversions to and from integer and other floating types. Use a suffix ‘w’ or ‘W’ in a literal constant of type __float80 or type __ibm128. Use a suffix ‘q’ or ‘Q’ for _float128. In order to use _Float128, __float128, and __ibm128 on PowerPC Linux systems, you must use the -mfloat128 option. It is expected in future versions of GCC that _Float128 and __float128 will be enabled automatically.. The _Float16 type is supported on AArch64 systems by default, and on ARM systems when the IEEE format for 16-bit floating-point types is selected with -mfp16-format=ieee. GCC does not currently support _Float128x on any systems. On the i386, x86_64, IA-64, and HP-UX targets, you can declare complex types using the corresponding internal complex type, XCmode for __float80 type and TCmode for __float128 type: typedef _Complex float __attribute__((mode(TC))) _Complex128; typedef _Complex float __attribute__((mode(XC))) _Complex80; On the PowerPC Linux VSX targets, you can declare complex types using the corresponding internal complex type, KCmode for __float128 type and ICmode for __ibm128 type: typedef _Complex float __attribute__((mode(KC))) _Complex_float128; typedef _Complex float __attribute__((mode(IC))) _Complex_ibm128; Next: Half-Precision, Previous: Complex, Up: C Extensions [Contents][Index] © Free Software Foundation Licensed under the GNU Free Documentation License, Version 1.3.
http://docs.w3cub.com/gcc~7/floating-types/
2017-09-19T15:22:27
CC-MAIN-2017-39
1505818685850.32
[]
docs.w3cub.com
An item is a component of a crate.: extern cratedeclarations usedeclarations externblocks, constants and statics higher-ranked (or "forall") types abstracted over other types, though higher-ranked types do exist for lifetimes.; } loading at runtime.: use a::b::{c,d,e,f}; selfkeyword, such as use a::b::{self, c, d}; use p::q::r as x;. This can also be used with the last two features: use a::b::{self as ab, c as abc}. use a: a final expression,.: # #![allow(unused_variables)] #fn main() { fn add(x: i32, y: i32) -> i32 { x + y } #} As with let bindings, function arguments are irrefutable patterns, so any pattern that is valid in a let binding is also valid as an argument. # #![allow(unused_variables)] #fn main() { fn first((value, _): (i32, i32)) -> i32 { value } #} A generic function allows one or more parameterized types to appear in its signature. Each type parameter must be explicitly declared in an angle-bracket-enclosed and comma-separated list, following the function. A special kind of function can be declared with a ! character where the output type would normally be. For example: # #![allow(unused_variables)] #fn main() { fn my_err(s: &str) -> ! { println!("{}", s); panic!(); } #} We call such functions "diverging" because they never return a value to the caller. Every control path in a diverging function must end with a panic!(), a loop expression without an associated break expression, or a call to another diverging function on every control path. The ! annotation does not denote a type.: # #![allow(unused_variables)] #fn main() { # fn my_err(s: &str) -> ! { panic!() } fn f(i: i32) -> i32 { if i == 42 { return 42; } else { my_err("Bad number!"); } } #} This will not compile without the ! annotation on my_err, since the else branch of the conditional in f does not return an i32,. # #![allow(unused_variables)] #fn main() { // Declares an extern fn, the ABI defaults to "C" extern fn new_i32() -> i32 { 0 } // Declares an extern fn with "stdcall" ABI; #} Extern functions may be called directly from Rust code as Rust uses large, contiguous stack segments like C..: # #![allow(unused_variables)] #fn main() { enum Animal { Dog, Cat, } let mut a: Animal = Animal::Dog; a = Animal::Cat; #} Enumeration constructors can have either named or unnamed fields: # #![allow(unused_variables)] #fn main() { enum Animal { Dog (String, f64), Cat { name: String, weight: f64 }, } let mut a: Animal = Animal::Dog("Cocoa".to_string(), 37.2); a = Animal::Cat { name: "Spotty".to_string(), weight: 2.7 }; #} In this example, Cat is a struct-like enum variant, whereas Dog is simply called an enum variant. Each enum value has a discriminant which is an integer associated to it. You can specify it explicitly: # #![allow(unused_variables)] #fn main() { enum Foo { Bar = 123, } #} The right hand side of the specification is interpreted as an isize value, but the compiler is allowed to use a smaller type in the actual memory layout. The repr attribute can be added in order to change the type of the right hand side and specify the memory layout. If a discriminant isn't specified, they start at zero, and add one for each variant, in order. You can cast an enum to get its discriminant: # #![allow(unused_variables)] #fn main() { # enum Foo { Bar = 123 } let x = Foo::Bar as u32; // x is now 123u32 #} This only works as long as none of the variants have data attached. If it were Bar(i32), this is disallowed.".. Constant values must not have destructors, and otherwise permit most forms of data. 
Constants may refer to the address of other constants, in which case the address will have elided lifetimes where applicable, otherwise – in most cases – defaulting to the static lifetime. (See below on static lifetime elision.) The compiler is, however, still at liberty to translate the constant many times, so the address referred to may not be stable. Constants must be explicitly typed. The type may be bool, char, a number, or a type derived from those primitive types. The derived types are references with the static lifetime, fixed-size arrays, tuples, enum variants, and structs. # #!, }; #} A static item is similar to a constant, except that it represents a precise memory location in the program. A static is never "inlined" at the usage site, and all references to it refer to the same memory location. Static items have the static lifetime, which outlives all other lifetimes in a Rust program. Static items may be placed in read-only memory if they do not contain any interior mutability. Statics may contain interior mutability through the UnsafeCell language item. All access to a static is safe, but there are a number of restrictions on statics: Syncto allow thread-safe access. Constants should in general be preferred over statics, unless large amounts of data are being stored, or single-address and mutability properties are required. {() -> u32 { return atomic_add(&mut LEVELS, 1); } #} Mutable statics have the same restrictions as normal statics, except that the type of the value is not required to ascribe to Sync. 'staticlifetime elision Both constant and static declarations of reference types have implicit 'static lifetimes unless an explicit lifetime is specified. As such, the constant declarations involving 'static above may be written without the lifetimes. Returning to our previous example: # #![allow(unused_variables)] #fn main() { const BIT1: u32 = 1 << 0; const BIT2: u32 = 1 << 1; const BITS: [u32; 2] = [BIT1, BIT2]; const STRING: &str = "bitstring"; struct BitsNStrings<'a> { mybits: [u32; 2], mystring: &'a str, } const BITS_N_STRINGS: BitsNStrings = BitsNStrings { mybits: BITS, mystring: STRING, }; #} Note that if the static or const items include function or closure references, which themselves include references, the compiler will first try the standard elision rules (see discussion in the nomicon). If it is unable to resolve the lifetimes by its usual rules, it will default to using the 'static lifetime. By way of example: // Resolved as `fn<'a>(&'a str) -> &'a str`. const RESOLVED_SINGLE: fn(&str) -> &str = .. // Resolved as `Fn<'a, 'b, 'c>(&'a Foo, &'b Bar, &'c Baz) -> usize`. const RESOLVED_MULTIPLE: Fn(&Foo, &Bar, &Baz) -> usize = .. // There is insufficient information to bound the return reference lifetime // relative to the argument lifetimes, so the signature is resolved as // `Fn(&'static Foo, &'static Bar) -> &'static Baz`. const RESOLVED_STATIC: Fn(&Foo, &Bar) -> &Baz = .. A trait describes an abstract interface that types can implement. This interface consists of associated items, which come in three varieties: Associated functions whose first parameter is named self are called methods and may be invoked using . notation (e.g., x.foo()). All traits define an implicit type parameter Self that refers to "the type that is implementing this interface". Traits may also contain additional type parameters. These type parameters (including Self) may be constrained by other traits and so forth as usual. Trait bounds on Self are considered "supertraits". 
These are required to be acyclic. Supertraits are somewhat different from other constraints in that they affect what methods are available in the vtable when the trait is used as a trait object. Traits are implemented for specific types through separate implementations. Consider the following trait: # #![allow(unused_variables)] #fn main() { # type Surface = i32; # type BoundingBox = i32; trait Shape { fn draw(&self, Surface); fn bounding_box(&self) -> BoundingBox; } #} This defines a trait with two methods. All values that have implementations of this trait in scope can have their draw and bounding_box methods called, using value.bounding_box() syntax. Traits can include default implementations of methods, as in: # #![allow(unused_variables)] #fn main() { trait Foo { fn bar(&self); fn baz(&self) { println!("We called baz."); } } #} Here the baz method has a default implementation, so types that implement Foo need only implement bar. It is also possible for implementing types to override a method that has a default implementation.); } #} It is also possible to define associated types for a trait. Consider the following example of a Container trait. Notice how the type is available for use in the method signatures: # #![allow(unused_variables)] #fn main() { trait Container { type E; fn empty() -> Self; fn insert(&mut self,, Self::E); # } impl<T> Container for Vec<T> { type E = T; fn empty() -> Vec<T> { Vec::new() } fn insert(&mut self, x: T) { self.push(x); } } #} Generic functions may use traits as bounds on their type parameters. This will have two effects: For example: # #![allow(unused_variables)] #fn main() { # type Surface = i32; # trait Shape { fn draw(&self, Surface); } fn draw_twice<T: Shape>(surface: Surface, sh: T) { sh.draw(surface); sh.draw(surface); } #} Traits also define a trait object with the same name as the trait. Values of this type are created by coercing from a pointer of some specific type to a pointer of trait type. For example, &T could be coerced to &Shape if T: Shape holds (and similarly for Box<T>). This coercion can either be implicit or explicit. Here is an example of an explicit coercion: # #![allow(unused_variables)] #fn main() { trait Shape { } impl Shape for i32 { } let mycircle = 0i32; let myshape: Box<Shape> = Box::new: # #![allow(unused_variables)] #fn main() { trait Num { fn from_i32(n: i32) -> Self; } impl Num for f64 { fn from_i32(n: i32) -> f64 { n as f64 } } let x: f64 = Num::from_i32(42); #} Traits may inherit from other traits. Consider the following example: # #![allow(unused_variables)] #fn main() { trait Shape { fn area(&self) -> f64; } trait Circle : Shape { fn radius(&self) ->: # #![allow(unused_variables)] #fn main() { struct Foo; trait Shape { fn area(&self) -> f64; } trait Circle : Shape { fn radius(&self) -> f64; } impl Shape for Foo { fn area(&self) -> f64 { 0.0 } } impl Circle for Foo { fn radius(&self) -> f64 { println!("calling area: {}", self.area()); 0.0 } } let c = Foo; c.radius(); #} In type-parameterized functions, methods of the supertrait may be called on values of subtrait-bound type parameters. Referring to the previous example of trait Circle : Shape: # #![allow(unused_variables)] #fn main() { # trait Shape { fn area(&self) -> f64; } # trait Circle : Shape { fn radius(&self) -> f64; } fn radius_times_area<T: Circle>(c: T) -> f64 { // `c` is both a Circle and a Shape c.radius() * c.area() } #} Likewise, supertrait methods may also be called on trait objects. 
# trait Shape { fn area(&self) -> f64; } # trait Circle : Shape { fn radius(&self) -> f64; } # impl Shape for i32 { fn area(&self) -> f64 { 0.0 } } # impl Circle for i32 { fn radius(&self) -> f64 { 0.0 } } # let mycircle = 0i32; let mycircle = Box::new(mycircle) as Box<Circle>; let nonsense = mycircle.radius() * mycircle.area(); An implementation is an item that implements a trait for a specific type. Implementations are defined with the keyword impl. # #! } }, trait objects), and the implementation must appear in the same crate as the self type: # #![allow(unused_variables)] #fn main() { struct Point {x: i32, y: i32} impl Point { fn log(&self) { println!("Point is at ({}, {})", self.x, self.y); } } let my_point = Point {x: 10, y:11}; my_point.log(); #}. # #![allow(unused_variables)] #fn main() { # trait Seq<T> { fn dummy(&self, _: T) { } } impl<T> Seq<T> for Vec<T> { /* ... */ } impl Seq<bool> for u32 { /* Treat the integer as a sequence of bits */ } #}.. By default external blocks assume that the library they are calling uses the standard C ABI on the specific platform. Other ABIs may be specified using an abi string, as shown here: // Interface to the Windows API extern "stdcall" { } There are three ABI strings which are cross-platform, and which all compilers are guaranteed to support: extern "Rust"-- The default ABI when you write a normal fn foo()in any Rust code. extern "C"-- This is the same as extern fn foo(); whatever the default your C compiler supports. extern "system"-- Usually the same as extern "C", except on Win32, in which case it's "stdcall", or what you should use to link to the Windows API itself There are also some platform-specific ABI strings: extern "cdecl"-- The default for x86_32 C code. extern "stdcall"-- The default for the Win32 API on x86_32. extern "win64"-- The default for C code on x86_64 Windows. extern "sysv64"-- The default for C code on non-Windows x86_64. extern "aapcs"-- The default for ARM. extern "fastcall"-- The fastcallABI -- corresponds to MSVC's __fastcalland GCC and clang's __attribute__((fastcall)) extern "vectorcall"-- The vectorcallABI -- corresponds to MSVC's __vectorcalland clang's __attribute__((vectorcall)) Finally, there are some rustc-specific ABI strings: extern "rust-intrinsic"-- The ABI of rustc intrinsics. extern "rust-call"-- The ABI of the Fn::call trait functions. extern "platform-intrinsic"-- Specific platform intrinsics -- like, for example, sqrt-- have this ABI. You should never have to deal with it.. It is valid to add the link attribute on an empty extern block. You can use this to satisfy the linking requirements of extern blocks elsewhere in your code (including upstream crates) instead of adding the attribute to each extern block. © 2010 The Rust Project Developers Licensed under the Apache License, Version 2.0 or the MIT license, at your option.
http://docs.w3cub.com/rust/reference/items/
2017-09-19T15:22:18
CC-MAIN-2017-39
1505818685850.32
[]
docs.w3cub.com
Small Talk: Love Prevails In This Emotionally Charged Documentary From Taiwan by Taiwan Docs / 2017-02-14 “Small Talk Gleams with simplicity”!! —Hou Hsiao-Hsien Winner of the audience award and short-listed for Best Documentary Film at the Golden Horse Film Festival in 2016, Small Talk screens to rave reviews. This February, it has been invited to the Berlinale International Film Festival, the first Taiwanese documentary invited to appear on the prestigious Panorama program. The film unfolds the difficult conversations between the filmmaker Huang Hui-chen and her butch lesbian mother, Ah-Nu. Ah-Nu is married off to an abusive husband against her will at a young age. She runs off with her two little girls and launches her own funeral performance service. One of the daughters is now the filmmaker, Huang. Although they survive their abusive husband/father, the affective gap between mother and daughter is unfathomable. Huang has been puzzled by her mother’s contrasting behaviors at home and with friends. Ah-Nu acts like a reluctant mother, gloomy in front of her daughters at home, but sparkles in front of her own group of friends. Huang decides to film her mother and discovers the key to her mother’s coldness and detachment towards them. Through interviews with family members, Ah-Nu’s ex-girlfriends, and episodic conversations between mother and daughter, the decade-long journey of filming paints an emotional portrait of queer kinship in which the story of an aged lesbian mother reveals the cruelty of social antagonism toward LGBT people and their feeling of being homeless in their own homes. The powerful long takes trace back to the sad history of isolation and violence, while the sometimes comic responses from the interviewees lighten up the general mood of the film. Beautifully composed by Lim Giong, long-time collaborator of Hou Hsiao-Hsien and Jia Zhangke. Lim and DJ Point give a warm and exquisite touch to the film. Huang Hui-chen (Hui-zhen) is an activist, documentary filmmaker and fulltime mother. She has been advocating for migrant workers’ rights, the aboriginal movement, and land justice. For Huang, filmmaking is a tool for social change, a way to give voice to silenced minority groups. Her previous works include Hospital Wing 8 East (2006), Uchan is Going Home (2009) , and The Priestess Walks Alone (2015) . There will be two more screenings at Berlinanle International Film Festival this February. 16.02 20:00 CineStar 7 17.02 14:30 CineStar 7 For more information about Small Talk, please visit the official website: Small Talk
http://docs.tfi.org.tw/news/46
2017-09-19T15:02:39
CC-MAIN-2017-39
1505818685850.32
[]
docs.tfi.org.tw
CyberSource is another gateway product, similar to Authorize.Net. (Visa owns CyberSource and CyberSource owns Authorize.Net.) In general, merchants that do less than 5 million a year in sales use Authorize.Net. CyberSource is a more expensive gateway and may have more advanced fraud detection features than you would need in a smaller store. API Endpoint URL: The URL that Miva Merchant uses to submit requests. This field is autopopulated. You should only change it if you are given a new URL by Miva Merchant or CyberSource. CyberSource ID: Transaction Key: These fields are credentials that are created when you set up your account with CyberSource. They are used to securely identify your store during payment transactions. Currency: Select the currency for payments that you receive. Usually this matches the currency you have set for your store (see Edit Store > Settings > Currency Formatting drop-down list.) CVV2 Message: The text that you enter in this field will appear in your on-line store during checkout when your customer enters their credit card information. Merchants usually use this field to describe the purpose of the CVV2 field. Transaction Type: Authorize Only, Capture Later: Authorization occurs when the user clicks the Submit button in the Payments page in your on-line store. To capture funds you must edit the order in Miva Merchant admin and click on the Capture button. Automatic Capture: If you select this option, Authorize and Capture occur when the user clicks the Submit button in the Payments page in your on-line store. Note that these settings also affect you when you manually create an order. For example: In the Edit Order screen, you can see that the Capture button is greyed out and the funds were automatically captured after authorization. Perform Automatic Capture on AVS Soft Decline: CyberSource has a product called Decision Manager. It examines payment transactions and makes a decision as to whether the transaction is legitimate or fraudulent. If the software determines that the transaction is fraudulent, it can mark the transaction with a "soft decline". The order is created in Miva Merchant, and the customer's card is charged, but the order has a pending status at CyberSource. This setting affects soft declines that occurred because the shipping address on the order did not match the registered address of the card. Perform Automatic Capture on CVV2 Soft Decline: Same as above, but for soft declines that occurred because the customer did not enter a CVV2 number, or the CVV2 number that they entered did not match the recorded CVV2 number for the card. Store Entire Credit Card..
https://docs.miva.com/reference-guide/cyber-source
2017-09-19T15:23:01
CC-MAIN-2017-39
1505818685850.32
[]
docs.miva.com
$ minishift start --iso-url centos oc cluster upflags in minishift start When you use Minishift, you interact with two components: a virtual machine (VM) created by Minishift the OpenShift cluster provisioned by Minishift within the VM The following sections contain information about managing the Minishift VM. For. When you start Minishift, it downloads a live ISO image that the hypervisor uses to provision the Minishift VM. The following ISO images are available: Minishift Boot2Docker (Default). This ISO image is based on Boot2Docker, which is a lightweight Linux distribution customized to run Docker containers. The image size is small and optimized for development but not for production. Minishift CentOS. This ISO image is based on CentOS, which is an enterprise-ready Linux distribution that more closely resembles a production environment. The image size is larger than the Boot2Docker ISO image. By default, Minishift uses the Minishift Boot2Docker ISO image. To choose the Minishift CentOS ISO image instead, you can do one of the following: Use the centos alias to download and use the latest CentOS ISO: $ minishift start --iso-url centos Specify explicitly the download URL of the Minishift CentOS ISO image. For example: $ minishift start --iso-url Manually download the Minishift CentSO ISO image from the releases page and enter the file URI to the image: $ minishift start --iso-url<path_to_ISO_image> The runtime behavior of Minishift can be controlled through flags, environment variables, and persistent configuration options. The following precedence order is applied to control the behavior of Minishift.. Using persistent configuration allows you to control the Minishift behavior without specifying actual command line flags, similar to the way you use environment variables. Minishift maintains a configuration file in $MINISHIFT_HOME/config/config.json. This file can be used to set commonly-used command-line flags persistently. The easiest way to change a persistent configuration option is with the minishift config set sub-command. For example: # Set default memory 4096 MB $ minishift config set memory 4096 You can also set driver-specific environment variables. Each docker-machine driver supports its own set of options and variables. A good starting point is the official docker-machine driver documentation. xhyve and KVM documentation is available under their respective GitHub repository To speed up provisioning of the OpenShift cluster and to minimize network traffic, the core OpenShift images can be cached on the host. This feature is considered experimental and needs to be explicitly enabled using the minishift config set command: $ minishift config set image-caching true Once enabled, caching occurs transparently, in a background process, the first time you use the minishift start command. Once the images are cached under $MINISHIFT_HOME/cache/images, successive Minishift VM creations will use these cached images. Each time an image exporting background process runs, a log file is generated under $MINISHIFT_HOME/logs which can be used to verify the progress of the export. You can disable the caching of the OpenShift images by setting image-caching to false or removing the setting altogether using minishift config unset: $ minishift config unset image-caching Minishift VM.. If you want to get early access to some upcoming features and experiment, you can enable some of those in Minishift. 
You do this by setting the environment variable MINISHIFT_ENABLE_EXPERIMENTAL, which makes additional flags available: $ export MINISHIFT_ENABLE_EXPERIMENTAL=y oc cluster up flags in minishift start By default, Minishift does not expose all oc cluster up flags in the Minishift CLI. You can set the MINISHIFT_ENABLE_EXPERIMENTAL environment variable to enable the following options for the minishift start command: service-catalog Enables provisioning the OpenShift service catalog. extra-clusterup-flags Enables passing flags that are not directly exposed in the Minishift CLI directly to oc cluster up.
https://docs.openshift.org/latest/minishift/using/managing-minishift.html
2017-09-19T15:30:10
CC-MAIN-2017-39
1505818685850.32
[]
docs.openshift.org
- 1. full profile certified configuration which includes the technologies required by the Full Profile specification plus others including OSGi - standalone-ha.xml - Java Enterprise Edition 6 certified full profile configuration with high availability - standalone-osgi-only.xml - OSGi only standalone server. No JEE6 capabilities - standalone-xts.xml - Standalone JEE6 full certified profile with support for transactional web services. Domain Server Configurations - domain.xml (default) - Java Enterprise Edition 6 full profile certified configuration which includes the technologies required by the Full Profile specification plus others including OSGi - domain-osgi-only.xml - OSGi only server. No JEE6 capabilities Important to note is that the domain and standalone modes determine how the servers are managed not what capabilities they provide. Starting JBoss Application Server 7 To start AS 7 using the default full profile configuration in "standalone" mode, change directory to $JBOSS_HOME/bin. To start the default full run OSGi only server in domain mode:: . Managing your JBoss Application Server 7 AS 7 offers two administrative mechanisms for managing your running instance: - web-based Administration Console - command-line interface Authentication By default JBoss AS 7999. When running locally to the JBoss AS process the CLI will silently authenticate against the server by exchanging tokens on the file system, the purpose of this exchange is to verify that the client does have access to the local file system. If the CLI is connecting to a remote AS7 JBoss Application Server 7 logging can be configured in the XML configuration files, the web console or the command line interface. You can get more detail on the Logging Configuration page. Turn on debugging for a specific category: By default the server.log is configured to include all levels in it's log output. In the above example we changed the console to also display debug messages.! Aug 29, 2014 Fatih Onur Below link is incorrect!! This guide has moved to
https://docs.jboss.org/author/pages/viewpage.action?pageId=8094314
2017-09-19T15:15:34
CC-MAIN-2017-39
1505818685850.32
[array(['/author/download/attachments/8094314/JBossAS7-JavaEE.png?version=1&modificationDate=1310399117000', None], dtype=object) array(['/author/download/attachments/8094314/AS7-Welcome.png?version=1&modificationDate=1309510546000', None], dtype=object) ]
docs.jboss.org
For iOS platform, perform the below steps:. - Enable Push Notifications If you are on XCode 8, you need to add Push Notification as a capability. Select the project in the navigator > Go to Capabilities > Enable Push Notifications. Refer to the screenshot below: Integration To install WebEngage for your Cordova App, you'll need to take three basic steps. Add global configuration to the plugin's we_config.xml file. Add platform specific configuration to we_config.xml file Initialise the plugin. Add Global Configuration Open we_config.xml file within the plugins\cordova-plugin-com-webengage directory inside your app's root directory. All global configuration goes under the config tag. - licenseCode: Obtain your license code from the header title of the Account Setup section of your WebEngage dashboard and paste it within the licenseCodetag. - debug (optional) : Debug logs from SDK's are printed if the value of this tag is true. Default value of this tag is false. Platform specific Configuration Android All android specific configuration goes under the android tag under the global config tag. packageName: Insert your complete android application package name with packageName tag. iOS In iOS there is no mandatory configuration required for the app. For advanced configuration, check the Other Configurations section. Initialise the plugin In your onDeviceReady callback call: onDeviceReady: function() { /** Additional WebEngage options and callbacks to be registered here before calling webEngage.engage() **/ webengage.engage(); }
https://docs.webengage.com/docs/cordova-integration
2017-09-19T15:16:18
CC-MAIN-2017-39
1505818685850.32
[]
docs.webengage.com
Amazon Cognito for iOS For your app to access AWS services and resources, it must facilitate getting an identity within AWS for each user. Use Amazon Cognito to create unique identities for your users. Amazon Cognito identities can be unauthenticated, or they can use a range of methods to sign in and become authenticated. For more information, see Integrating Identity Providers. For information about Amazon Cognito Region availability, see AWS Service Region Availability. Most implementations of AWS services for mobile app features require identity management through Amazon Cognito. The following steps describe how to AWS credentials to your app users. In this section: Take the following steps to create a new identity pool with Auth and Unauth roles. - Choose Manage Federated Identities. - Choose Create new identity pool. - Type an Identity pool name. - Optional: Select Enable access to unauthenticated identities. - Choose Create Pool. - Choose View Details to review or edit the role names and default access policy JSON document for the identity pool you just created. Note the names of your Auth and Unauth roles. You will use them to enact access policy for the AWS resources you use. - Choose: Allow. - Choose the language of your app code in the Platform menu. Note the identityPoolId value in the sample code provided. For more information, see Identity Pools and IAM Roles. Follow the steps in Set Up the SDK for iOS. Add the following imports to your project. - Swift - import AWSCore import AWSCognito - Objective-C - #import <AWSCore/AWSCore.h> #import <AWSCognito/AWSCognito.h> Use the following code, replacing the value of YourIdentityPoolId with the identitPoolId value you noted when you created your identity pool. - Swift - let credentialProvider = AWSCognitoCredentialsProvider(regionType: .USEast1, identityPoolId: "YourIdentityPoolId") let configuration = AWSServiceConfiguration(region: .USEast1, credentialsProvider: credentialProvider) AWSServiceManager.default().defaultServiceConfiguration = configuration - Objective-C - AWSCognitoCredentialsProvider *credentialsProvider = [[AWSCognitoCredentialsProvider alloc] initWithRegionType:AWSRegionUSEast1 identityPoolId:@"YourIdentityPoolId"]; must reassociate your roles with your identity pool to use this constructor. To do so, open the Amazon Cognito console, select your identity pool, choose Edit Identity Pool, specify your authenticated and unauthenticated roles, and save the changes After the login tokens are set in the credentials provider, you can retrieve a unique Amazon Cognito identifier for your end user and temporary credentials that let the app access your AWS resources. - Swift - let cognitoId = credentialsProvider.identityId - Objective-C - // Retrieve your Amazon Cognito ID. NSString *cognitoId = credentialsProvider.identityId; The unique identifier is available in the identityIdproperty of the credentials provider object. The credentialsProvider communicates with Amazon Cognito, retrieving a unique identifier for the user as well as temporary, limited privilege AWS credentials for the AWS Mobile SDK. The retrieved credentials are valid for one hour. To use Amazon Cognito to incorporate sign-in through an external identity provider into your app, create an Amazon Cognito identity pool. An identity in a pool gets access to the AWS resources used by your app by being assigned a role in AWS Identity and Access Management (IAM). The access level of an IAM role is defined by the policy that is attached to it. 
Typical roles for identity pools allow you to give different levels of access to authenticated (Auth) or signed-in users, and unauthenticated (Unauth) users. For more information on identity pools, see Amazon Cognito Identity: Using Federated Identities. For more information on using IAM roles with Amazon Cognito, see IAM Roles in the Amazon Cognito Developer Guide. Amazon Cognito identities can be unauthenticated or use a range of methods to sign in and become authenticated, including: - Federating with an external provider such as Google or Facebook - Federating with a SAML provider such as a Microsoft Active Directory instance - For SAML federation, the SAML federation metadata for the authenticating system - Federating with your existing custom authentication provider using developer authenticated identities - Creating your own AWS-managed identity provider using an Amazon Cognito User Pool Then, each time your mobile app interacts with Amazon Cognito, your user's identity is given a set of temporary credentials that give secure access to the AWS resources configured for your app. For more information, see External Identity Providers in the Amazon Cognito Developer Guide. Amazon Cognito Sync: Sync User Data Developer Authenticated Identities
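The iOS snippets above hide the underlying API exchange. As a rough illustration only, the Python (boto3) sketch below performs the same two calls the credentials provider makes for an unauthenticated identity; the region and identity pool ID are placeholders, and this is not part of the iOS integration steps.

import boto3

REGION = "us-east-1"  # placeholder region
IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"  # placeholder pool ID

client = boto3.client("cognito-identity", region_name=REGION)

# Step 1: obtain a unique Amazon Cognito identity ID (what identityId holds on iOS).
identity = client.get_id(IdentityPoolId=IDENTITY_POOL_ID)
identity_id = identity["IdentityId"]

# Step 2: exchange that identity for temporary, limited-privilege AWS credentials.
creds = client.get_credentials_for_identity(IdentityId=identity_id)
print("Cognito identity:", identity_id)
print("Temporary access key:", creds["Credentials"]["AccessKeyId"])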
http://docs.aws.amazon.com/mobile/sdkforios/developerguide/cognito-auth-aws-identity-for-ios.html
2017-09-19T15:40:14
CC-MAIN-2017-39
1505818685850.32
[]
docs.aws.amazon.com
Authenticode is a Microsoft code-signing technology that identifies the publisher of Authenticode-signed software. Authenticode also verifies that the software has not been tampered with since it was signed and published. Authenticode uses cryptographic techniques to verify publisher identity and code integrity. It combines digital signatures with an infrastructure of trusted entities, including certificate authorities (CAs), to assure users that a driver originates from the stated publisher. Authenticode allows users to verify the identity of the software publisher by chaining the certificate in the digital signature up to a trusted root certificate. Using Authenticode, the software publisher signs the driver or driver package, tagging it with a digital certificate that verifies the identity of the publisher and also provides the recipient of the code with the ability to verify the integrity of the code. A certificate is a set of data that identifies the software publisher.. Authenticode code signing does not alter the executable portions of a driver. Instead, it does the following: With embedded signatures, the signing process embeds a digital signature within a nonexecution portion of the driver file. For more information about this process, see Embedded Signatures in a Driver File. With digitally-signed catalog files (.cat), the signing process requires generating a file hash value from the contents of each file within a driver package. This hash value is included in a catalog file. The catalog file is then signed with an embedded signature. In this way, catalog files are a type of detached signature. Note The Hardware Certification Kit (HCK) has test categories for a variety of device types. The list of test categories can be found at Certification Test Reference. If a test category for the device type is included in this list, the software publisher should obtain a WHQL release signature for the driver package However, if the HCK does not have a test program for the device type, the software publisher can sign the driver package by using the Microsoft Authenticode technology. For more information about this process, see Signing Drivers for Public Release.
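To make the catalog-file idea concrete, the Python sketch below computes a content hash for each file in a driver package directory. It is only an illustration of content hashing, not the actual algorithm or format that Authenticode and Windows catalog files use; the package directory path is a made-up example.

import hashlib
import pathlib

def file_hash(path, algorithm="sha256", chunk_size=65536):
    """Hash a file's contents without loading the whole file into memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

package_dir = pathlib.Path("driver_package")  # example directory name
for item in sorted(package_dir.iterdir()):
    if item.is_file():
        # Any change to a file's contents produces a different hash value,
        # which is how tampering after signing can be detected.
        print(item.name, file_hash(item))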
https://docs.microsoft.com/en-us/windows-hardware/drivers/install/authenticode
2017-09-19T16:21:19
CC-MAIN-2017-39
1505818685850.32
[]
docs.microsoft.com
Demo projects¶ The source package of django-contactme comes with several demo projects to see the application in action: - bare_demo is the simplest demo possible. - bare_demo_with_ajax is the same previous example plus Ajax functionality provided by jquery.djcontactme.js, the jquery plugin that comes with the application. - crispy_forms_demo is an example of how to use django-contactme with django-crispy-forms. Demo quick setup¶ Demo projects live inside the example project in the app’s root directory. The simplest and least interfering way to run the demo projects is by creating a virtualenv for django-contactme. Then: - cd into any of the demo directories. - Run python manage.py migrate to create a minimal SQLite db for the demo. - Run python manage.py runserver and browse the demo site. In addition, crispy_forms_demo requires the crispy_forms package: $ pip install django-crispy-forms By default the demo project sends email messages to the standard output. You can customize the email settings to send actual emails. Edit the settings.py module, go to the end of the file and customize the following entries: EMAIL_HOST = "" # for gmail it would be: "smtp.gmail.com" EMAIL_PORT = "" # for gmail: "587" EMAIL_HOST_USER = "" # for gmail: [email protected] EMAIL_HOST_PASSWORD = "" EMAIL_USE_TLS = True # for gmail DEFAULT_FROM_EMAIL = "Your site name <[email protected]>" SERVER_EMAIL = DEFAULT_FROM_EMAIL # Fill in actual EMAIL settings above, and comment out the # following line to let the django demo send actual emails # EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' CONTACTME_NOTIFY_TO = "Your name <[email protected]>" A quick way to confirm that these settings actually deliver mail is sketched at the end of this section. The domain used in the links sent by email refers to example.com and thus is not associated with your django development web server. Change the domain name through the admin interface, sites application, to something like localhost:8000 so that URLs in email messages match your development server. Register a signal receiver¶ After trying the demo site you may like to add a receiver for any of the signals sent during the workflow. Read the entry on Signals to know more about django-contactme signals. The section Signals and receivers in the Tutorial shows a use case.
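As mentioned above, one quick way to confirm the customized EMAIL_* settings is to send a single test message from a Django shell (python manage.py shell) inside the demo project. This is only a sanity-check sketch; the recipient address is a placeholder.

from django.conf import settings
from django.core.mail import send_mail

send_mail(
    subject="django-contactme demo email check",
    message="If this arrives, the EMAIL_* settings are working.",
    from_email=settings.DEFAULT_FROM_EMAIL,
    recipient_list=["[email protected]"],  # placeholder recipient
    fail_silently=False,
)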
http://django-contactme.readthedocs.io/en/latest/example.html
2017-09-19T15:14:23
CC-MAIN-2017-39
1505818685850.32
[]
django-contactme.readthedocs.io
Rabix Knowledge Center The Rabix Knowledge Center provides guides and reference documentation for the tools included in the Rabix project, an open-source development project by Seven Bridges for running and creating computational workflows. The two efforts included in the Rabix project are: Rabix Executor The Rabix Executor supports the Common Workflow Language (CWL) and is currently runnable from the command line. It is suitable for local testing and development of bioinformatics apps. Rabix also supports the Global Alliance for Genomics and Health (GA4GH) Task Execution Server (TES) as a way to scale to cloud environments. Learn more about Rabix Executor Rabix Composer Learn more about Rabix Composer
http://docs.rabix.io/
2019-08-17T15:14:44
CC-MAIN-2019-35
1566027313428.28
[]
docs.rabix.io
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Edit-NPTDBParameterGroup-DBParameterGroupName <String>-Parameter <Parameter[]>-Force <SwitchParameter>. character_set_databaseparameter. You can use the Parameter Groups option of the Amazon Neptune console or the DescribeDBParameters command to verify that your DB parameter group has been created or modified. immediate | pending-rebootYou can use the immediate value with dynamic parameters only. You can use the pending-reboot value for both dynamic and static parameters, and changes are applied when you reboot the DB instance without fail
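For comparison, a rough boto3 (Python) equivalent of the PowerShell cmdlet above is sketched below. The parameter group name and the parameter being changed are illustrative placeholders; check the Neptune documentation for the parameters that actually apply to your engine version.

import boto3

neptune = boto3.client("neptune", region_name="us-east-1")  # placeholder region

response = neptune.modify_db_parameter_group(
    DBParameterGroupName="my-neptune-params",  # placeholder group name
    Parameters=[
        {
            "ParameterName": "neptune_query_timeout",  # example parameter name
            "ParameterValue": "120000",
            "ApplyMethod": "pending-reboot",  # usable for dynamic and static parameters
        }
    ],
)
print("Modified parameter group:", response["DBParameterGroupName"])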
https://docs.aws.amazon.com/powershell/latest/reference/items/Edit-NPTDBParameterGroup.html
2019-08-17T15:09:47
CC-MAIN-2019-35
1566027313428.28
[]
docs.aws.amazon.com
The ledger¶ Summary - The ledger is subjective from each peer’s perspective - Two peers are always guaranteed to see the exact same version of any on-ledger facts they share The Ledger Data¶ The preceding Venn diagram represents 5 nodes (Alice, Bob, Carl, Demi and Ed) as sets. Where the sets overlap are shared facts, such as those known by both Alice and Bob (1 and 7).
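A toy way to picture the Venn-diagram example is to treat each peer's view of the ledger as a set of fact identifiers, with shared facts being the intersections. In the Python sketch below, only the Alice/Bob overlap (facts 1 and 7) comes from the text above; the remaining numbers are invented for illustration.

# Each peer's vault as a set of on-ledger fact IDs (illustrative values).
vaults = {
    "Alice": {1, 3, 7, 10},
    "Bob": {1, 5, 7, 12},
    "Carl": {5, 14},
}

def shared_facts(peer_a, peer_b):
    """Facts both peers hold, and are guaranteed to see identically."""
    return vaults[peer_a] & vaults[peer_b]

print(sorted(shared_facts("Alice", "Bob")))  # [1, 7]
print(sorted(shared_facts("Bob", "Carl")))   # [5]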
https://docs.corda.net/head/key-concepts-ledger.html
2019-08-17T15:18:15
CC-MAIN-2019-35
1566027313428.28
[array(['_images/ledger-venn.png', '_images/ledger-venn.png'], dtype=object)]
docs.corda.net
Managing Solutions, Projects, and Files The following topics describe common tasks you can perform when working with solutions and projects. In This Section Introduction to Solutions, Projects, and Items Understand the concepts of solutions, projects, and items as well as the advantages of using Visual Studio to manage your development projects and files. Project Properties (Visual Studio) Discusses the Project Designer and how to modify project settings. Multi-Project Solutions Work on more than one project at a time within one instance of the integrated development environment (IDE). Stand-Alone Projects Understand what stand-alone projects are and when you can work with them. Temporary Projects Understand what temporary projects are and when you can work with them. Visual Studio Templates Provides an overview the project and item template architecture and implementation. Targeting a Specific .NET Framework Version or Profile Describes how to enable your projects to target a specific version of the .NET Framework. Solution, Project, and File User Interface Elements Reference for user interface elements that enable you to configure your solutions, projects, and files. Related Sections Getting Started with Visual Studio Contains information on migrating applications, and the walkthroughs and samples available to increase your familiarity with Visual Studio. Editing Text, Code, and Markup Explains how to use the Code Editor. Globalizing and Localizing Applications Explains how to incorporate encoding standards such as ANSI, Unicode, and others. Creating and Managing Visual C++ Projects Explains how to work with files in projects specific to Visual C++. File Installation Management in Deployment Explains how to use the File System Editor to manage distribution of your files. File Types Management in Deployment Explains how to control how a target computer reads and implements files based on file extensions. How to: Add a Project to Source Control Explains how to apply Source Control to manage versions of your solutions and projects.
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/wbzbtw81%28v%3Dvs.100%29
2019-08-17T15:17:13
CC-MAIN-2019-35
1566027313428.28
[]
docs.microsoft.com
Welcome to PWAThemes.com Documentation PWAThemes.com is dedicated to developers looking to start building their progressive web applications with ease. We offer dozens of premium progressive web app themes and extensions that you can purchase individually or as a bundle. Discover ReactJS, AngularJS/Ionic & VueJS Progressive Web App Themes and UI Starter Kits that you can purchase individually or as a bundle. More documentation soon to come, but in the mean time, if you have questions please don't hesitate to get in touch.
https://docs.pwathemes.com/
2019-08-17T14:54:10
CC-MAIN-2019-35
1566027313428.28
[]
docs.pwathemes.com
By using the Platform API, you can directly interact with the different types of resources associated with chatting data. The APIs are designed to use standard HTTP protocols and return JSON payloads in response to HTTP requests, and are internally implemented based on the RESTful principles. While the native SDKs handle many of the requests and responses at the client side, the Platform API adds flexibility and abilities to your service from the server side. Note: The Platform API is not designed for client side use. Use the corresponding SDKs instead. The base URL used for all the APIs is formatted as the following:-{application_id}.sendbird.com/v3 To get the ID and the allocated base URL of your application, sign in to your dashboard, select the application, open the Overview, and then check the App credentials > App ID, API request URL. A typical HTTP request to the Platform API includes the following headers: Content-Type: application/json, charset=utf8 Api-Token: =--boundary_string ----boundary_string Content-Disposition: form-data; name="key1" {value1} ----boundary_string Content-Disposition: form-data; name="key2" {value2} ----boundary_string Content-Disposition: form-data; name="file_key"; filename="{file_name}" Content-Type: {Content-Type} {contents of the file} ----boundary_string-- import os import requests api_headers = {'Api-Token': '{api_token}'} data = { 'key1': {value1}, 'key2': {value2} } filepath = os.path.join(os.path.dirname(__file__), '{file_path}', '{file_name}') upload_files = {'file_key': ('{file_name}', open(filepath, 'rb'))} res = requests.post('{request_URL}', headers=api_headers, data=data, files=upload_files) curl -X {HTTP Method} -H "Api-Token: {api_token}" -H "Content-Type: multipart/form-data; boundary=--boundary_string" -F "key1={value1}" -F "key2={value2}" -F "file_key=@filename" "{request_URL}" # python: Create User API import os import requests api_headers = {'Api-Token': ') To use the Platform API with a specific application, you must authenticate a request using your API token. You can find the token in the dashboard under Overview > App Credentials. The API token in the dashboard is your master API token. The master API token is issued when an application has been created, and it can't be revoked or changed. With the master API token, you can issue an API token, revoke other API token, or retrieve a list of issued API tokens. As stated above, an API token must be included in your HTTP Request Header for authentication. Note: This usage has been changed from our old Server API, which previously required the token to be included in the payload. "Api-Token": {API_Token} DO NOT request any Platform API from your client app. If your API token information is compromised, you risk losing all your data. Authenticate with HTTP Basic authentication when you want to delete or list all applications, or create one.Cg== Authorizationfield in your HTTP header. For example, Authorization: Basic YXBpQHNlbmRiaXJkLmNvbToxMjM0C, open the Overview, and then check the App credentials > App
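As a minimal server-side sketch, the Python snippet below sends one authenticated request to the base URL format described above. The application ID and API token are placeholders, and the /users resource is used purely as an example endpoint; substitute whichever resource you actually need.

import requests

APP_ID = "YOUR_APP_ID"        # Dashboard > Overview > App credentials > App ID
API_TOKEN = "YOUR_API_TOKEN"  # master or issued API token

base_url = f"https://api-{APP_ID}.sendbird.com/v3"
headers = {
    "Content-Type": "application/json, charset=utf8",
    "Api-Token": API_TOKEN,  # sent in the HTTP request header, not the payload
}

# Example call: list users under the application.
response = requests.get(f"{base_url}/users", headers=headers, params={"limit": 10})
response.raise_for_status()
print(response.json())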
https://docs.sendbird.com/platform
2019-08-17T15:24:40
CC-MAIN-2019-35
1566027313428.28
[]
docs.sendbird.com
3. Add new applications to the project

Note: In this section, we'll assume your project is already running (docker-compose up web) and that you are in your own shell, not at the bash prompt in a container.

3.1. Add a package to the project directory

The simplest way to add a new Django application to a project is by placing it in the project directory, so it's on the Python path. We'll use a version of the Polls application that's part of the Django tutorial. Download the application (tip: open a second terminal shell so you can leave the project running):

    git clone [email protected]:divio/django-polls.git

Note: The URL above requires that you have provided your public key to GitHub. Otherwise, use the repository's HTTPS URL. And put the inner polls application directory at the root of your project (you can do this with: mv django-polls/polls .).

3.2. Configure the project

3.2.1. Configure settings

Edit settings.py to include the polls application:

    INSTALLED_APPS.extend([
        "polls",
    ])

This settings.py already includes INSTALLED_APPS that have been configured by applications in the project - here we are simply extending it with new ones.

3.2.2. Configure URLs

Edit urls.py to add the URLconf for the polls application (a sketch of the polls URLconf itself appears at the end of this walkthrough):

    urlpatterns = [
        url(r'^polls/', include('polls.urls', namespace='polls')),
    ] + aldryn_addons.urls.patterns() + i18n_patterns(
        # add your own i18n patterns here
        *aldryn_addons.urls.i18n_patterns()  # MUST be the last entry!
    )

3.3. Migrate the database

Run:

    docker-compose run web python manage.py migrate

You will see the migrations being applied:

    Running migrations:
      Rendering model states... DONE
      Applying polls.0001_initial... OK

And when that has completed, open the project again in your browser:

    divio project open

You should see the new polls application in the admin.

3.4. Deploy the project

3.4.1. Push your changes

If it works locally it should work on the Cloud, so let's push the changes to the Test server and deploy there. First, add the changes:

    git add settings.py urls.py polls

Commit them:

    git commit -m "Added polls application"

And push to the Divio Cloud Git server:

    git push origin develop

Note: The Control Panel will display your undeployed commits, and even a diff for each one.

3.4.2. Deploy the Test server

    divio project deploy test

And check the site on the Test server:

    divio project test

Optionally, if you made some local changes to the database (perhaps you added some polls), you can push the database too, with:

    divio project push db

(You'll need to redeploy to see the results.)

3.5. Add a package via pip

Often, you want to add a reusable, pip-installable application. For this example, we'll use Django Axes, a simple package that keeps access logs (and failed login attempts) for a site.

3.5.1. Add the package

Add django-axes==2.3.2 (it's always sensible to specify a version number in requirements) to the project's requirements.in:

    # <INSTALLED_ADDONS>  # Warning: text inside the INSTALLED_ADDONS tags is auto-generated. Manual changes will be overwritten.
    [...]
    # </INSTALLED_ADDONS>
    django-axes==2.3.2

(Make sure that it's outside the automatically generated # <INSTALLED_ADDONS> section.)

3.5.2. Rebuild the project

The project now needs to be rebuilt, so that Django Axes is installed:

    docker-compose build web

3.5.3. Configure settings

In the settings.py, add axes to INSTALLED_APPS:

    INSTALLED_APPS.extend([
        "polls",
        "axes",
    ])

(Note that this application doesn't need an entry in urls.py, because it only uses the admin.)

3.5.4. Run migrations

Now the database needs to be migrated once again for the new application:

    docker-compose run web python manage.py migrate

Check that it has installed as expected (Django Axes will show its records in the admin).
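As referenced in section 3.2.2 above, include('polls.urls', namespace='polls') points at a URLconf that ships inside the cloned polls package; the walkthrough never shows that file. The sketch below is an assumption of what a minimal Django 1.x-style polls/urls.py could look like, with illustrative view names; the actual file in django-polls may differ.

    # polls/urls.py (illustrative sketch)
    from django.conf.urls import url

    from . import views

    urlpatterns = [
        # e.g. /polls/
        url(r'^$', views.IndexView.as_view(), name='index'),
        # e.g. /polls/5/
        url(r'^(?P<pk>[0-9]+)/$', views.DetailView.as_view(), name='detail'),
    ]

With the namespace in place, templates and views can reverse these routes as 'polls:index' and 'polls:detail'.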
http://docs.divio.com/en/latest/introduction/03-add-applications.html
2019-08-17T15:10:45
CC-MAIN-2019-35
1566027313428.28
[array(['../_images/polls-admin.png', 'The polls application appears in the admin'], dtype=object)]
docs.divio.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.

Syntax:

    Update-GACLAcceleratorAttribute
      -AcceleratorArn <String>
      -FlowLogsEnabled <Boolean>
      -FlowLogsS3Bucket <String>
      -FlowLogsS3Prefix <String>
      -Force <SwitchParameter>

-FlowLogsS3Bucket: required if FlowLogsEnabled is true. The bucket must exist and have a bucket policy that grants AWS Global Accelerator permission to write to the bucket.
-FlowLogsS3Prefix: required if FlowLogsEnabled is true. If you don't specify a prefix, the flow logs are stored in the root of the bucket.
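The same attribute update can be made from Python as well. The sketch below uses the boto3 Global Accelerator client rather than the PowerShell cmdlet documented here; the ARN, bucket name, and prefix are placeholder assumptions, not values from this page.

    import boto3

    # Global Accelerator is a global service; its API endpoint is served from us-west-2.
    client = boto3.client("globalaccelerator", region_name="us-west-2")

    response = client.update_accelerator_attributes(
        AcceleratorArn="arn:aws:globalaccelerator::123456789012:accelerator/example",
        FlowLogsEnabled=True,
        FlowLogsS3Bucket="my-flow-logs-bucket",  # must exist and grant Global Accelerator write access
        FlowLogsS3Prefix="accelerator-logs/",    # optional; without it, logs go to the bucket root
    )
    print(response["AcceleratorAttributes"])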
https://docs.aws.amazon.com/powershell/latest/reference/items/Update-GACLAcceleratorAttribute.html
2019-08-17T16:01:18
CC-MAIN-2019-35
1566027313428.28
[]
docs.aws.amazon.com
Alerts in the Office 365 Security & Compliance Center Use the alerts features in the Office 365 Security & Compliance Center to view and manage alerts for your Office 365 organization, including managing advanced alerts as part of Office 365 Cloud App Security alerts. How to get to the Office 365 alerts features Alerts in Office 365 are in the Security & Compliance Center. Here's how to get to the page. To go directly to the Security & Compliance Center: Go to. Sign in to Office 365 using your work or school account. In the left pane, click Alerts to see the alerts features. To go to the Security & Compliance Center using the Office 365 app launcher: Sign in to Office 365 using your work or school account. Click the app launcher in the upper left corner, and then click Security & Compliance. Can't find the app you're looking for? From the app launcher, select All apps to see an alphabetical list of the Office 365 apps available to you. From there, you can search for a specific app. In the left pane, click Alerts to see the alerts features. Alerts features The following table describes the tools that are available under Alerts in the Security & Compliance Center. Feedback
https://docs.microsoft.com/en-us/office365/securitycompliance/alerts?redirectSourcePath=%252fnl-nl%252farticle%252fWaarschuwingen-in-het-Office-365-beveiligings-en-compliancecentrum-2BB4E7C0-5F7F-4144-B647-CC6A956AAA53
2019-08-17T15:33:58
CC-MAIN-2019-35
1566027313428.28
[]
docs.microsoft.com
Stop Instance You can stop an instance once you are done with using the instance, or in order to conserve compute resources. Once you stop the instance, it is not available for use. You can start the instance at any time as and when you require it. You must be a self-service user or an administrator to perform this operation. To stop an instance, follow the steps given below. - Log in to Clarity. - Click Instances in the left panel. - Select the checkbox for the running or active instance that you want to stop. - Click Stop. - Click Confirm. The selected instance stops. The instance and the applications on the instance are no longer available for use.
https://docs.platform9.com/user-guide/instances/stop-instance/
2019-08-17T16:06:41
CC-MAIN-2019-35
1566027313428.28
[]
docs.platform9.com
You can specify a particular vRealize Orchestrator endpoint to use with a blueprint. When IaaS runs a vRealize Orchestrator workflow for any machine provisioned from this blueprint, it always uses the associated endpoint. If the endpoint is not reachable, the workflow fails.

Prerequisites: Log in to the vRealize Automation console as an infrastructure architect.

Procedure
- Create a new blueprint or edit an existing blueprint. If you are editing an existing blueprint, the vRealize Orchestrator endpoint you specify only applies to new machines provisioned from the updated blueprint. Existing machines provisioned from the blueprint continue to use the highest priority endpoint unless you manually add this property to the machine.
- Click the Properties tab.
- Click New Property.
- Type VMware.VCenterOrchestrator.EndpointName in the Name text box. The property name is case sensitive.
- Type the name of a vRealize Orchestrator endpoint in the Value text box.
- Click the Save icon.
- Click OK.
https://docs.vmware.com/en/vRealize-Automation/7.1/com.vmware.vra.extensibility.doc/GUID-00E04930-9B33-4326-9E7D-3CFA44C1253C.html
2019-08-17T14:34:45
CC-MAIN-2019-35
1566027313428.28
[]
docs.vmware.com
Configuration

Hoverfly takes a config object, which contains sensible defaults if not configured. Ports will be randomised to unused ones, which is useful on something like a CI server if you want to avoid port clashes. You can also set a fixed port:

    config().proxyPort(8080)

You can also configure Hoverfly to use a remote instance which is already running:

    config().useRemoteInstance() // localhost
    config().useRemoteInstance("1.2.3.4") // other host name or address
https://hoverfly-java.readthedocs.io/en/0.3.8/pages/corefunctionality/configuration.html
2019-08-17T14:54:11
CC-MAIN-2019-35
1566027313428.28
[]
hoverfly-java.readthedocs.io
This is an old revision of the document! Table of Contents Welcome to the SlackDocs Wiki: your primary source for Slackware Linux documentation on the web. Getting Started with Slackware The following links are useful for those getting started with Slackware Linux. - Slackware installation: a guide through the whole process of installing and configuring Slackware Linux; written for new and experienced users alike. - Configure your new Slackware System; how to proceed after the installation has completed. - The Slackware Linux Essentials Book: a valuable resource for those venturing into Slackware (or Linux in general) for the first time. The original SlackBook can be found on your Slackware CDROM or DVD, and it can also be read online: The Slackware Linux Distribution - Slackware: a brief overview of Slackware Linux; describes what to expect from a Slackware Linux system. - The Slackware Way: describes the principles and philosophy of Slackware Linux. The Community - Getting Involved: describes various ways Slackers (both new and experienced users) can contribute to the Slackware community. - Links and pointers: other sites on the web offering Slackware related information. - SlackDocs Wiki Tutorial: a short tutorial on editing and contributing to the SlackDocs wiki. More About the SlackDocs Project The Wiki News page is where you look for news from the Wiki admins. If you just want to talk about the content of any of our Wiki pages, or if you want to propose improvements to a page, you can use the “discussion” tab which shows up at the top of every page, and leave your thoughts/comments/ideas there. Alternatively, SlackDocs also has a mailing list which can be used for content discussion and brainstorming at (we keep the discussion archives). If you are willing and able to contribute to the wiki, please see this list for ideas. Perhaps you already have an idea for a new article! We understand that you may be uncertain about your writing skills or unsure about how to start contributing. If that is the case, we encourage you to subscribe to the mailing list and ask for help. The people on that list will certainly offer assistance. If you think that a mailing list is difficult to use, we wrote helpful instructions for you. We keep some statistics of the activities in our Wiki. If you want to know who are part of the team that is keeping the site structured and focused, visit our contact page. Internationalization / Localization Are you more comfortable reading articles in your native language? Click here for a list of available language categories and other internationalization information. Searching for Information in This Wiki - If looking for specific information, try using the search box to the left. - Alternatively, use Google's “site” search feature by appending “site:docs.slackware.com” to search terms. - An even easier way of searching for information in SlackDocs: the Wiki is OpenSearch 1) enabled. This is supported by all modern browsers. Here is how to add SlackDocs search to Firefox (other browsers probably handle it in a similar way): - open the wiki start page in the browser - click the little arrow on the left of your search field - choose “Add SlackDocs” Editing This Wiki You must create an account to edit this wiki (even if you only want to write something on a “discussion” page). Once done, you can play around in the Playground or your own user page to familiarize yourself with the Dokuwiki markup. 
The available syntax is listed in wiki:syntax where you will also find pointers to the plugins which have been installed, providing additional functionality.
http://docs.slackware.com/start?rev=1347793353
2019-08-17T14:48:37
CC-MAIN-2019-35
1566027313428.28
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
VanderSat User documentation¶ Introduction¶ In this User Guide the VanderSat Data Service is detailed. The guide consists of a description of the VanderSat web based viewer (), the data products, and the data delivery system (API). See for more information about the company and its mission. Viewer and API¶ Data products¶ - VanderSat Data Products - Data Flags - Product naming convention - Vegetation correction - Volumetric Soil Moisture L-Band (SM) - Volumetric Soil Moisture C-Band (SM) - Volumetric Soil Moisture X-Band (SM) - Derived Root Zone Soil Moisture (DRZSM) - Land Surface Temperature (LST) - Vegetation Optical Depth L-Band (VOD) - Vegetation Optical Depth C-Band (VOD) - Vegetation Optical Depth X-Band (VOD) - Inundation classes (INU-CLASSES) - Inundation RGB (INU-RGB) - Inundation rolling mean (INU-RM) - VanderSat Data Flags
http://docs.vandersat.com/index.html
2019-08-17T15:33:05
CC-MAIN-2019-35
1566027313428.28
[]
docs.vandersat.com
TailClick: ASPxClientEvent<ASPxClientNewsControlItemEventHandler<ASPxClientNewsControl>> The TailClick event handler receives an argument of the ASPxClientNewsControlItemEventArgs type. The following properties provide information specific to this event. Write a TailClick event handler to perform specific actions on the client side each time a news item's tail is clicked within the ASPxNewsControl. Note that this event fires immediately after the left mouse button is released. If the button is released when the mouse pointer is not over a tail, the event doesn't fire. You can use the event parameter's properties to identify the clicked item and specify whether a postback should be generated to pass the event processing to the server side.
https://docs.devexpress.com/AspNet/js-ASPxClientNewsControl.TailClick
2019-08-17T15:35:13
CC-MAIN-2019-35
1566027313428.28
[]
docs.devexpress.com
The reference distribution for this book, and my preferred distribution, is Debian GNU/Linux, the Linux for the GNU Generation. I originally started with Slackware in the early 90's but migrated through Red Hat and then quickly on to Debian in 1995. Red Hat is a good distribution and is quite popular but has limitations. Debian conforms to the open and distributed development model making it a very open distribution where even you can make a change to it if you so desired. Debian is the basis of a number of commercial distributions and it also powers quite a few web sites including Linux.com. Distributions involving Debian GNU/Linux are listed at and include: Related distributions include Amirix (), Embedded Debian (), TimeSys for real time GNU/Linux () and the VA Linux Systems, O'Reilly and SGI collaboration ().
https://docs.huihoo.com/debian/survivor-2.0.0/Debian_GNU_Linux.html
2019-08-17T14:49:16
CC-MAIN-2019-35
1566027313428.28
[]
docs.huihoo.com
The SendBird SDKs help you implement real-time chat in any type of client app with speed and efficiency. A SendBird application comprises everything that goes into a chatting service, such as users, messages, and channels. You create a SendBird application from the SendBird Dashboard, and you can implement only one SendBird application per app for your service, regardless of the platforms; all users within the same SendBird application can chat with one another across platforms. The Unity SDK is designed and tested on the Mono/.NET 2.0 platform and Unity 5.x.x or higher.
https://docs.sendbird.com/unity
2019-08-17T15:09:28
CC-MAIN-2019-35
1566027313428.28
[]
docs.sendbird.com
Project X-Ray¶ Build Status Documentation Status License Documenting the Xilinx 7-series bit-stream format. This repository contains both tools and scripts which allow you to document the bit-stream format of Xilinx 7-series FPGAs. More documentation can be found published on prjxray ReadTheDocs site - this includes; Quickstart Guide¶ Instructions were originally written for Ubuntu 16.04. Please let us know if you have information on other distributions. Step 1:¶ Install Vivado 2017.2. If you did not install to /opt/Xilinx default, then set the environment variable XRAY_VIVADO_SETTINGS to point to the settings64.sh file of the installed vivado version, ie export XRAY_VIVADO_SETTINGS=/opt/Xilinx/Vivado/2017.2/settings64.sh Do not source the settings64.sh in your shell, since this adds directories of the Vivado installation at the beginning of your PATH and LD_LIBRARY_PATH variables, which will likely interfere with or break non-Vivado applications in that shell. The Vivado wrapper utils/vivado.sh makes sure that the environment variables from XRAY_VIVADO_SETTINGS are automatically sourced in a separate shell that is then only used to run Vivado to avoid these problems. Step 3:¶ Install CMake: sudo apt-get install cmake # version 3.5.0 or later required, # for Ubuntu Trusty pkg is called cmake3 Step 5:¶ (Option 1) - Install the Python environment locally sudo apt-get install virtualenv python3-virtualenv python3-yaml make env (Option 2) - Install the Python environment globally sudo apt-get install python3-yaml sudo pip3 install -r requirements.txt This step is known to fail with a compiler error while building the pyjson5 library when using Arch Linux and Fedora. pyjson5 needs one change to build correctly: git clone cd pyjson5 sed -i 's/char \*PyUnicode/const char \*PyUnicode/' src/_imports.pyx sudo make This might give you and error about sphinx_autodoc_typehints but it should correctly build and install pyjson5. After this, run either option 1 or 2 again. Step 6:¶ Always make sure to set the environment for the device you are working on before running any other commands: source settings/artix7.sh Step 7:¶ (Option 1, recommended) - Download a current stable version (you can use the Python API with a pre-generated database) ./download-latest-db.sh (Option 2) - (Re-)create the entire database (this will take a very long time!) cd fuzzers make -j$(nproc) C++ Development¶ Tests are not built by default. Setting the PRJXRAY_BUILD_TESTING option to ON when running cmake will include them: cmake -DPRJXRAY_BUILD_TESTING=ON .. make The default C++ build configuration is for releases (optimizations enabled, no debug info). A build configuration for debugging (no optimizations, debug info) can be chosen via the CMAKE_BUILD_TYPE option: cmake -DCMAKE_BUILD_TYPE=Debug .. make The options to build tests and use a debug build configuration are independent to allow testing that optimizations do not cause bugs. The build configuration and build tests options may be combined to allow all permutations. Process¶ The documentation is done through a “black box” process were Vivado is asked to generate a large number of designs which then used to create bitstreams. The resulting bit streams are then cross correlated to discover what different bits do. Parts¶ Minitests¶ There are also “minitests” which are designs which can be viewed by a human in Vivado to better understand how to generate more useful designs. Experiments¶ Experiments are like “minitests” except are only useful for a short period of time. 
Files are committed here to allow people to see how we are trying to understand the bitstream. When an experiment is finished with, it will be moved from this directory into the latest "prjxray-experiments-archive-XXXX" repository.

Fuzzers

Fuzzers are the scripts which generate the large number of bitstreams. They are called "fuzzers" because they follow an approach similar to the idea of software testing through fuzzing.

Tools & Libs

Tools & libs are useful tools (and libraries) for converting the resulting bitstreams into various formats. Binaries in the tools directory are considered more mature and stable than those in the utils directory and could be actively used in other projects.

Utils

Utils are various tools which are still highly experimental. These tools should only be used inside this repository.

Third Party

Third party contains code not developed as part of Project X-Ray.

Database

Running all the fuzzers in order will produce a database which documents the bitstream format in the database directory. As running all these fuzzers can take significant time, Tim 'mithro' Ansell [email protected] has graciously agreed to maintain a copy of the database in the prjxray-db repository. Please direct enquiries to Tim if there are any issues with it.

Current Focus

Currently the focus has been on the Artix-7 50T part. This structure is common between all footprints of the 15T, 35T and 50T varieties. We have also started experimenting with the Kintex-7 parts. The aim is to eventually document all parts in the Xilinx 7-series FPGAs, but we cannot do this alone; we need your help!

Contributing

There are a couple of guidelines to follow when contributing to Project X-Ray, which are listed here.

Sending

All contributions should be sent as GitHub pull requests.

License

All software (code, associated documentation, support files, etc.) in the Project X-Ray repository is licensed under the very permissive ISC Licence. A copy can be found in the COPYING file. All new contributions must also be released under this license.

Code of Conduct

By contributing you agree to the code of conduct. We follow the open source best practice of using the Contributor Covenant for our Code of Conduct.

Sign your work

To improve tracking of who did what, we follow the Linux Kernel's "sign your work" system. This is also called a "DCO" or "Developer's Certificate of Origin". All commits are required to include this sign-off, and we use the Probot DCO App to check pull requests for it. You can add the signoff as part of your commit statement. For example:

    git commit --signoff -a -m "Fixed some errors."

Hint: If you've forgotten to add a signoff to one or more commits, you can use the following command to add signoffs to all commits between you and the upstream master:

    git rebase --signoff upstream/master

Contributing to the docs

In addition to the above contribution guidelines, see the guide to updating the Project X-Ray docs.
https://symbiflow.readthedocs.io/projects/prjxray/en/latest/db_dev_process/readme.html
2019-08-17T14:46:46
CC-MAIN-2019-35
1566027313428.28
[]
symbiflow.readthedocs.io
Click Tools > Options. On the Foreign Data tab, click the software that you want in the Format box. In the Import box, click Options to access the AutoCAD Import Options dialog box. On the General tab of the dialog box, enter the folder path and template in the Template File box. search for the template by clicking Browse. Click File > Open. On the Open dialog box, select the .dwg extension. Select the document to open."> If you create a reference file, you can either click Insert > Object or drag an AutoCAD document from the Windows Explorer into the current document. After you place the AutoCAD information on the drawing sheet, you can locate elements and establish relationships among the new information and elements that are already in the current document. AutoCAD polylines are imported as SmartSketch line strings. AutoCAD mtext (two or more lines of text handled as a text box) is imported into the software as two separate line strings (text boxes). When you open an AutoCAD document that has references to other documents, those referenced documents appear as well. Nested reference documents can be up to four levels deep. You can locate referenced documents in the current document. Translators option. When translating a .dwg document, the default is to translate all blocks containing attribute date into symbols with SmartLabels. All translation options for opening AutoCAD documents using Open on the File menu are delivered through the Custom or Typical setup for these options. If you cannot open an AutoCAD document, you should re-install the software with the Custom or Typical setup for these options.
https://docs.hexagonppm.com/reader/HVi~HD9gUiC2KrlrLIMIww/x~S9MOHGyTM5RyFjuAo9cw
2019-08-17T16:11:05
CC-MAIN-2019-35
1566027313428.28
[]
docs.hexagonppm.com
OAM invalid pointer to memory OAM could not get a pointer to AIX resources that it requires. This will happen if the OAM executable is not owned by root. Check the owner of OAM and OAMEXEC. If the owner is not root, try setting it to root or restoring the files from a backup. Yellow Log, System Monitor
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.probdet.doc/dtxprobdet677.html
2019-08-17T14:34:11
CC-MAIN-2019-35
1566027313428.28
[]
docs.blueworx.com
git CORAL Documentation Style Guide¶ About the Style Guide¶ The purpose of the style guide is to help keep the documentation consistent in style while at the same time providing tips for contributing to the CORAL Documentation Project (CDP) and other relevant information. Helping Out¶ The CDP is a great way to participate in a small or big way as time allows. Take something in the documentation that you would like to improve and help out, whether it is editing, adding new content, or just sharing a tip. All the information needed to get started is found below. When finished you will need to submit a pull request in GitHub for any changes. The Web Committee will review these pull requests and merge them into the documentation as they are approved. For any suggestions or questions about helping out, please email us at [email protected]. Setting Up¶ The CDP is managed in a repo on GitHub found at . The project uses the Sphinx Python Documentation Generator and ReCommonMark. The documentation files are edited in a combination of reStructuredText and Markdown, both markup languages supported by GitHub. In addition, Read the Docs is used for hosting the documentation and providing additional documentation conversion and indexing tools. Editing Markdown¶ There are a lot of great resources online about Markdown, but to get started you may want to use a Markdown editor. There are many available free, but the best one for you will depend on your operating system. One suggestion for Microsoft Windows would be MarkdownPad. Basic Instructions on Getting Started with GitHub¶ Note: For first times users you will need to have a GitHub account. To create an account, go to. You will also need to install some type of GitHub client software on your PC. For the instructions below, we are using Git for Windows. You can download Git for Windows at. Git and Github Documentation Workflow Procedures *Note: Except as noted the following instructions are for Microsoft Windows using the command line prompt. * These instructions presume you have the correct software installed. - Go to a command line editor in Windows. - The following step will not be relevant the first time through this process. Use git fetchand then git pullon your master branch to make sure it is up to date before creating your working branch in step 7 below. - Create a working folder. In a windows command line editor this would be: md Work, where “Work” can be whatever you want to call your folder. (for Mac: mkdir Work) - Move inside your “ Work” folder. Command: cd Workor whatever your path is cd c:\Work - There are different ways to do this, but probably the easiest to get started would be to run the following command to clone a copy of the master version of the Documentation repo. Command: git clone Running this command will do a few things including identifying what github account and repo you are working with and by cloning a master version of the repo to your folder. It will also initialize your folder for use with the application Git. This process creates some hidden files and folders that track changes, active branches, and more. - Change your working folder to Documentation. In our example, cd\Work\Documentation - You should now be in the master branch of the repo. You can use git statusto see what branch is currently selected. You could make changes to the master branch, but when you copy these changes back to github you would be directly merging your changes into the master branch of the repo. 
Instead what you want to do is create a branch of the master version, so that later on when you copy your changes back to github, you have to go through another step, in github lingo a “push request,” to request that your changes be merged into the master branch. This allows for you to make sure you don’t inadvertently write over another person’s changes in the master branch. Use the following command to create a new branch from the master. Command: git checkout –b <branchname>(If the branch name already exists use git checkout <branchname>). You can now start making changes to your files. Feel free to do this by commandline or gui. For us we are going to use Windows Explorer to navigate to the following folder. Taking our example earlier. C:\Work\Documentation\Source You will notice that there are two folders under Documentation. Build and Source. Most users will be working exclusively with the Source. This is where the individual files for the documentation are to found. There are two files types of importance: .md = Markdown files and .rst = Restructured Files. Restructured Files are Python files used by the Sphinx Documentation Generator. We are using only one of these files at this time. This file is being used primarily to create our Table of Contents structure. For now, you will be editing primarily the Markdown files. If using Microsoft Windows, you can use the recommended MarkPad 2 for Windows application to open the files. - Make whatever edits you need to the Markdown files and be sure to save your changes. - Go back to your command line and change your working directory to the following. For our example, C:\Work\Documentation - Now you are ready to copy your changes back to github. Remember you are working with a different branch to the repo than the master branch. - First, use the following command to update the content you are about to commit to github. Command: ** git add *.***You can use this to just add whatever files you have changed. I usually use . to gather any files changed. We are using the hidden file .gitignoreto ignore adding any files from the /buildsubfolder. - You are now ready to commit your changes to github. Type the following command while substituting for “text description” a meaningful description of what changes you made. Command: ** git commit –m “text description”** - Now you are ready to push your commit to the repo with the following. Use the following command: git push The command ** git push **will work alone when you have an established connection. - Now go to your github account and navigate to the https//github.com/coral-erm/Documentationrepo. You should see your latest commit and branch showing up in the branch dropdown list. The branch list is on the left side under the code tab. Since you were using a different branch you are now ready to submit a pull request to merge your latest commit to the master branch. - Click on your branch to open it up. Once open you can use the Compare feature on the right side to review your changes and any possible conflicts. When ready click on the Pull Request button next to the Compare button. - Write a message related to your pull request. This can be a request to merge and/or notes about the changes. - You should receive the message “This branch has no conflicts with the base branch.” If so, and you have permission to do so, go ahead and select the green “Merge pull request” button to merge your changes into the master. Other options here include adding a comment or closing the pull request. 
Note: Admin rights are required to merge the pull request. Only members of the Web Committee and Steering Committee have these rights, so pull requests from outside parties will require someone on our committee to review, approve, and merge the changes.

- After clicking on the "Merge pull request" button, click the "Confirm merge" button. You should receive the message "Pull request successfully merged and closed." Go ahead and delete your branch by pushing the "Delete branch" button. Doing this will keep your workflow cleaner. Likewise, creating a new branch when needed will keep your working files closer to the master branch. Once you have committed your changes, they will generally appear on the published documentation site in less than a minute. If changes don't appear right away, try refreshing the cache in your browser.
- You have finished the process of cloning the repo, creating a branch, making updates to the source files, committing the changes in your new branch to the repo, and finally merging those changes into the master branch.

Updating Documentation Versions

Follow the instructions from "Basic Instructions on Getting Started with GitHub" with the following differences. To preserve a version of the documentation, a branch of the repo has been created for that version. You will want to clone that branch instead of the master. For example, use the following git command to clone the branch: git clone -b v2.0.1-Documentation --single-branch Change the branch name, make your edits, and use the git add, commit, and push commands as described under the "Basic Instructions…" Use the following git push format: git push <your branchname> Once your new branch has been committed and pushed to GitHub, be sure to set up the pull request to merge your branch into the version branch originally cloned.

File Structure and File Naming Conventions

Images

- Create a subfolder under /img using the same name as the markdown file in which the images will be used. For example, the folder name for organizations.md would be /img/organizations/
- Add images to the subfolder created in step 1. Name the images with a prefix identifying the markdown file they are associated with, using uppercase letters (camelCase) to separate the prefix from a brief description of the image. Note: The underscore character causes GitHub to incorrectly process the image filenames in Markdown, which leads to problems in building files in ReadtheDocs. For example: organizationsAccountsView.png for a screenshot of the Organizations module's accounts form. WARNING! Image file names are case sensitive. This includes the file extensions. Be consistent in keeping all file extensions in lowercase. For example sampleFile.png
- Add reusable images such as icons to the /img/general folder.
http://docs.coral-erm.org/en/latest/docstyleguide.html
2019-08-17T16:00:41
CC-MAIN-2019-35
1566027313428.28
[]
docs.coral-erm.org
Description The DATE function returns the date in internal system form. This date is expressed as the number of days since December 31, 1967. It takes the general form: DATE() Note: The system and jBASE programs should manipulate date fields in internal form. They can then be converted to a preferred readable format using the OCONV function and the date conversion codes. An example of use can be as: CRT OCONV(DATE(), "D2") To display today's date in the form: dd MMM yy. Go back to JBASE BASIC.
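Because the internal form is just a day count from December 31, 1967, you can cross-check a DATE() value outside jBASE. The Python snippet below illustrates that arithmetic; it is a standalone sketch, not part of jBASE BASIC.

    from datetime import date, timedelta

    # jBASE internal dates count days since 31 December 1967.
    epoch = date(1967, 12, 31)

    internal = (date.today() - epoch).days
    print(internal)  # should match DATE() in jBASE BASIC for the same day

    # Converting an internal value back to a calendar date:
    print(epoch + timedelta(days=internal))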
https://docs.jbase.com/36868-jbase-basic/266870-date
2019-08-17T15:34:29
CC-MAIN-2019-35
1566027313428.28
[]
docs.jbase.com
Run Nessus on Linux with Systemd

In the nessus-service systemd unit, modify the ExecStart line (/opt/nessus_pr/sbin/nessus-service -q) as follows:

- Add: ExecStart=/opt/nessus_pr/sbin/nessus-service -q --no-root
- Add: User=nonprivuser

The resulting script should appear as follows:

    [Service]
    Type=simple
    PIDFile=/opt/nessus_pr/var/nessus/nessus-service.pid
    ExecStart=/opt/nessus_pr/sbin/nessus-service -q --no-root
    User=nonprivuser
https://docs.tenable.com/nessus/6_10/Content/LinuxNonPrivileged.htm
2017-03-23T04:25:57
CC-MAIN-2017-13
1490218186774.43
[]
docs.tenable.com
Writing Job Results to AudienceOne Treasure Data + AudienceOne allows you to run your digital marketing campaigns by leveraging data inside TD. Please contact us for details.
https://docs.treasuredata.com/articles/result-into-audienceone
2017-03-23T04:16:34
CC-MAIN-2017-13
1490218186774.43
[]
docs.treasuredata.com
2. Definitions¶ In this manual, the terminology layer, project, and project definition are used ubiquitously, and it is important to explain what the terminology means as well as its use. In QGIS, a project or project file is a kind of container that acts like a folder storing information on file locations of layers and how these layers are displayed in a map. It is the main QGIS datafile. A layer is the mechanism used to display geographic datasets in the QGIS software, and layers provide the data that is manipulated within the IRMT. Each layer references a specific dataset and specifies how that dataset is portrayed within the map. The standard layer format for the IRMT is the ESRI Shapefile [ESRI98] that can be imported within the QGIS software using the default add data functionality, or layers may be created on-the-fly within the IRMT using GEM’s socio-economic databases. A QGIS project can include multiple layers that can be utilized to provide the variables and maps necessary for an integrated risk assessment. For each layer, multiple project definitions can be saved. A project definition is a set of parameters that are defined within the IRMT to define the integrated risk assessment’s workflow. It allows users to create, edit, and manage the workflow needed to systematically develop integrated risk models using layers. The project definition: - distinguishes which variables within a dataset are to be combined together to obtain a composite indicator; - defines how variables are grouped together by supporting: 1) deductive models that typically contain fewer than ten indicators that are normalized and aggregated to create the index; and 2) hierarchical models that employ roughly ten to twenty indicators that are separated into groups (sub-indices) that share the same underlying dimension (such as economy and infrastructure) in a manner in which individual indicators are aggregated into sub-indices, and the subindices are aggregated to create the index; - describes the type of aggregation method including additive modelling, weighted aggregation, and geometric aggregation that can be utilized by users to combine variables; - establishes the application of weights (if desired) to individual variables or sub-indices; and - delimits the directionality of variables when the intent is to consider that some variables may add to an index outcome; whereas some variables may need to detract from it. When considering the social vulnerability of populations, a socio-economic status indicator such as the percentage of population with a college education provides an example of a characteristic that may detract from social vulnerability, thereby warranting a negative directionality within an index.
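To make the aggregation options concrete, the short Python sketch below shows how normalized indicator values might be combined under weighted (additive) and geometric aggregation, and how a negative-directionality variable can be inverted first. The indicator names, values, and weights are invented for illustration; this is not the IRMT plugin's own code.

    # Toy example: three normalized indicators (0-1) and their weights.
    values = {"education": 0.8, "income": 0.6, "infrastructure": 0.4}
    weights = {"education": 0.5, "income": 0.3, "infrastructure": 0.2}

    # Weighted (additive) aggregation: a weighted sum of the indicators.
    weighted_sum = sum(weights[k] * values[k] for k in values)

    # Geometric aggregation: a weighted geometric mean of the indicators.
    geometric = 1.0
    for k in values:
        geometric *= values[k] ** weights[k]

    # A variable that detracts from the index (e.g. percentage of population with
    # a college education in a social vulnerability index) can be inverted before
    # aggregation when it is normalized to a 0-1 range.
    college_education = 0.8
    inverted = 1 - college_education

    print(weighted_sum, geometric, inverted)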
http://docs.openquake.org/oq-irmt-qgis/v1.7.6/02_definitions.html
2017-03-23T04:12:26
CC-MAIN-2017-13
1490218186774.43
[]
docs.openquake.org
Find more information You can visit the following websites for more information about the BlackBerry Desktop Software: -: View support information, including Knowledge Base articles and forums. -: View the latest help associated with this release. -: Find the latest user guide for your BlackBerry device or click Help on your device. -: Download the latest version of the BlackBerry Desktop Software.
http://docs.blackberry.com/en/smartphone_users/deliverables/43033/1186001.jsp
2015-01-27T19:13:10
CC-MAIN-2015-06
1422115856041.43
[]
docs.blackberry.com
G. Ann Campbell - Deprecated Pluginsupdated yesterday at 02:43 PM (view change) - Artifact Size Pluginupdated yesterday at 02:42 PM (view change) Keegan Witt - Usageupdated yesterday at 12:12 PM (view change) Aurélien Pupier - Re: Eclipse Plugin At the top of the page, in the first table, the link for the text h…commented yesterday at 10:15 AM - Re: Eclipse Plugin Just to mention a little typo in Archived Development Builds: "4.2 (Kepler)" --> "4.3 (Kepler)"commented yesterday at 10:13 AM Arnaud Heritier - Crowd Pluginupdated yesterday at 06:58 AM (view change) Ingmar Kellner - Sonargraph Plugin Added info for version 3.4.1updated yesterday at 01:57 AM (view change) Keegan Witt - Examplesupdated Jan 25, 2015 (view change) Erik Brangs - Regression Tests removed outdated informationupdated Jan 25, 2015 (view change) - Configuring the RVM updated indepth look at configurationsupdated Jan 24, 2015 (view change) Russel Winder - Homeupdated Jan 23, 2015 (view change) S. Ali Tokmen - Downloads - CARGO-1.4.12.zipattached Jan 22, 2015 - Archived Downloads - CARGO-1.4.11.zipattached Jan 22, 2015 - CARGO 1.4.12 is ready!created Jan 22, 2015 - Maven2 Plugin Installation Cédric Champeau - Download Release 2.4.0updated Jan 21, 2015 (view change) Guillaume Laforge - Groovy 2.4 release notesupdated Jan 21, 2015 (view change)
http://docs.codehaus.org/dashboard.action?maxRecentlyUpdatedPageCount=30&updatesSelectedTab=all
2015-01-27T19:12:24
CC-MAIN-2015-06
1422115856041.43
[]
docs.codehaus.org
Frequently asked questions on this page include:
- What does the "<plugin name> does not exist or no valid version" error mean?
- How do I install a file in my local repository along with a generic POM?
- How do I install a file in my local repository along with my customized POM?
- How do I locate a required plugin's <versions/> element?
- How do I create a report that does not require Doxia's Sink interface?
- How do I prevent verification warnings from custom repositories?
- What does the "ERROR Cannot override read-only parameter <parameter_name>" message mean when running a "sibling" build?

How do I locate a required plug-in? One solution is to set it on the Solaris machine in $HOME/.profile. For the second solution, add the relevant configuration to the top-level POM, and in directory A/B/, add an extra parent POM.

How do I add a description to the welcome page of the generated site when I execute mvn site? Fill in the <description> element in pom.xml.

Make sure the generated .classpath actually contains sourcepath attributes. Is there a setting for testing where I can add a directory to the classpath, which will allow the tests to access the files?
http://docs.codehaus.org/pages/viewpage.action?pageId=43855
2015-01-27T19:19:43
CC-MAIN-2015-06
1422115856041.43
[]
docs.codehaus.org
- If you use the default embedded database, copy the /data directory from {$OLD_SONAR_HOME} to {$NEW_SONAR_HOME}.
- If Sonar is deployed on a JEE application server, repackage the WAR file by executing the script /war/build-war.
- Start the server, browse to the Sonar web interface and follow the setup instructions.

Upgrade the Maven plugin

You don't have to do anything to upgrade the Sonar Maven plugin except use the new version of the plugin on the command line. To find out more about the Maven parameters, check the Advanced parameters section.
http://docs.codehaus.org/pages/viewpage.action?pageId=121209911
2015-01-27T18:57:37
CC-MAIN-2015-06
1422115856041.43
[]
docs.codehaus.org
Document Type Article Abstract. Recommended Citation Tackach, James. 2007. "The Biblical Foundation of James Baldwin’s ‘Sonny’s Blues'." Renascence: Essays on Values in Literature 59 (2). Included in Literature in English, North America Commons In: Renascence, Vol. LIX, No. 2, Winter 2007, pp. 109-133.
http://docs.rwu.edu/fcas_fp/3/
2015-01-27T18:53:35
CC-MAIN-2015-06
1422115856041.43
[]
docs.rwu.edu
Using TimTam
Version 1, created by Vincent Massol on Sat Nov 26 15:18:08 CET 2005
http://docs.codehaus.org/pages/viewpage.action?pageId=39105
2015-01-27T19:04:37
CC-MAIN-2015-06
1422115856041.43
[]
docs.codehaus.org
The Python interpreter has a number of functions built into it that are always available. They are listed here in alphabetical order.. This abstract type is the superclass for str and unicode. It cannot be called or instantiated, but it can be used to test whether an object is an instance of str or unicode. isinstance(obj, basestring) is equivalent to isinstance(obj, (str, unicode)). New in version 2: Without an argument, an array of size 0 is created. New in version 2.6..(). Return a class method for function. A class method receives the class as implicit first argument, just like an instance method receives the instance. To declare a class method, use this idiom: class C, long, complex.. Create a new dictionary. The dict object is the dictionary class. See dict and Mapping Types — dict for documentation about this class. For other containers see the built-in list, set, and tuple classes, as well as the collections module.() #']. Take two (non complex) version 2.3: Using divmod() with complex numbers is deprecated.... Changed in version 2.4: formerly locals was required to be a dictionary.. Constructor function for the file type, described further in section File Objects. The constructor’s arguments are the same as those of the open() built-in function described below. When opening a file, it’s preferable to use open() instead of invoking this constructor directly. file is more suited to type testing (for example, writing isinstance(f, file)). New in version 2. See itertools.ifilter() and itertools.ifilterfalse() for iterator versions of this function, including a variation that filters for elements where the function returns false., long, complex.). in memory. Equivalent to eval(raw_input(prompt)). This function does not catch user errors. If the input is not syntactically valid, a SyntaxError will be raised. Other exceptions may be raised if there is an error during evaluation. If the readline module was loaded, then input() will use it to provide elaborate line editing and history features. Consider using the raw_input() function for general input from users. Convert a number or string x to an integer,. A base-n literal consists of the digits 0 to n-1, with a to z (or A to Z) having values 10 to 35. The default base is 10. The allowed values are 0 and 2-36. Base-2, -8, and -16 literals can be optionally prefixed with 0b/0B, 0o/0O/0, or 0x/0X, as with integer literals in code. Base 0 means to interpret the string exactly as an integer literal, so that the actual base is 2, 8, 10, or 16. The integer type is described in Numeric Types — int, float, long, complex. Return true if the object argument is an instance of the classinfo argument, or of a (direct, indirect or virtual) subclass thereof. Also return true if classinfo is a type object (new-style class) and object is an object of that type or of a (direct, indirect or virtual). Changed in version 2.2: Support for a tuple of type information was added. Return true if class is a subclass (direct, indirect or virtual) of classinfo. A class is considered a subclass of itself. classinfo may be a tuple of class objects, in which case every entry in classinfo will be checked. In any other case, a TypeError exception is raised. Changed in version 2.3: Support for a tuple of type information was added.) New in version 2.2. Return the length (the number of items) of an object. The argument may be a sequence (string, tuple or list) or a mapping (dictionary)., unicode, list, tuple, bytearray, buffer, xrange. 
For other containers see the built in dict, set, and tuple classes, and the collections module.. Convert a string or number to a long integer. If the argument is a string, it must contain a possibly signed number of arbitrary size, possibly embedded in whitespace. The base.. Open a file, returning an object of the file type described in section File Objects. If the file cannot be opened, IOError is raised. When opening a file, it’s preferable to use open() instead of invoking the file constructor directly. The first two arguments are the same as for stdio‘s fopen(): name is the file name to be opened, and mode is a string indicating how the file is to be opened. The most commonly-used values of mode are 'r' for reading, 'w' for writing (truncating the file if it already exists), and 'a' for appending (which on some Unix systems means that all writes append to the end of the file regardless of the current seek position).. The optional buffering argument specifies the file’s desired buffer size: 0 means unbuffered, 1 means line buffered, any other positive value means use a buffer of (approximately) that size (in bytes). A negative buffering means to use the system default, which is usually line buffered for tty devices and fully buffered for other files. If omitted, the system default is used. [2] Modes . In addition to the standard fopen() values mode may be 'U' or 'rU'. Python is usually built with universal newlineslines. Given a string of length one, return an integer representing the Unicode code point of the character when the argument is a unicode object, or the value of the byte when the argument is an 8-bit string. For example, ord('a').); if two multiples are equally close, rounding is done away from 0 (so. for example, round(0.5) is 1.0 and round(-0.5) is -1.. Return a new sorted list from the items in iter. Return a static method for function. A static method does not receive an implicit first argument. To declare a static method, use this idiom: class C(object): . object. With three arguments,)) New in version otherwise. For ASCII and 8-bit strings see chr(). New in version. For more information on Unicode strings see Sequence Types — str, unicode, list, tuple, bytearray,.. CPython implementation detail:+2*(step<0))//step). == list(x2) and y == list(y2) True New in version 2.0. Changed in version 2.4: Formerly, zip() required at least one argument and zip() raised a TypeError instead of returning an empty list.).). Return a tuple consisting of the two numeric arguments converted to a common type, using the same rules as used by arithmetic operations. If coercion is not possible, raise TypeError. are not immortal (like they used to be in Python 2.2 and before); you must keep a reference to the return value of intern() around to benefit from it. Footnotes
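A few of the built-ins described above, shown in use (Python 2 syntax, since that is the version this reference covers); these lines are a generic usage illustration rather than text from the reference itself.

    # isinstance() with the abstract basestring type matches both str and unicode.
    print(isinstance(u"caf\xe9", basestring))   # True
    print(isinstance("plain", basestring))      # True

    # divmod() returns the quotient and remainder in a single call.
    print(divmod(17, 5))                        # (3, 2)

    # int() accepts an explicit base for string arguments.
    print(int("ff", 16))                        # 255

    # sorted() returns a new sorted list; zip() pairs items from several sequences.
    names = ["carol", "alice", "bob"]
    print(sorted(names))                        # ['alice', 'bob', 'carol']
    print(zip(names, range(3)))                 # [('carol', 0), ('alice', 1), ('bob', 2)]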
http://docs.python.org/2/library/functions.html
2013-12-04T22:42:57
CC-MAIN-2013-48
1386163037829
[]
docs.python.org
Demos Packaged demos in Groovy: Samples for SNAPSHOT releases.
http://docs.codehaus.org/pages/viewpage.action?pageId=231736738
2013-12-04T23:01:12
CC-MAIN-2013-48
1386163037829
[]
docs.codehaus.org
Hi there, Maven2 works really great as a project build tool for us. I have written a tutorial at my company for all Maven newbies, and I hope it can benefit others who are struggling with how to get started. The tutorial is very short, and it walks through a really simple Java project, from creating it from scratch all the way to deploying and releasing it. It also has a small webapp walkthrough as well. The tutorial download is here. Enjoy, -Zemian Deng
http://docs.codehaus.org/pages/diffpages.action?pageId=35422251&originalId=228172232
2014-12-18T11:31:14
CC-MAIN-2014-52
1418802766267.61
[]
docs.codehaus.org