content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Cross-platform mobile development in Visual Studio
You can build apps for Android, iOS, and Windows devices by using Visual Studio. As you design your app, use tools in Visual Studio to easily add connected services such as Microsoft.
Build an app for Android, iOS, and Windows (.NET Framework)
With Visual Studio Tools, select the Mobile Development with .NET option in the installer.
If you already have Visual Studio installed, re-run the Visual Studio Installer and select the same Mobile Development with .NET classes and methods. This means your apps have full access to native APIs and native Android SDK emulator and run Windows apps natively. You can also use tethered Android and Windows devices directly. For iOS projects, connect to a networked Mac and start the iOS and the Xamarin.Forms documentation.. Platform (UWP) and choose the Mobile Development with Javascript feature during setup. Android Emulator, Platform (UWP) apps are still available in Visual Studio so feel free to use them if you plan to target only Windows devices. If you decide to target Android and iOS later, you can always port your code to a Cordova project.
Build an app for Android, iOS, and Windows (C++)
First, install Visual Studio and the Mobile Development with C++ workload. Then, you can build a native activity application for Android, or an app that targets Windows or iOS. You can target Android, iOS, Android Emulator. It's fast, reliable, and easy to install and configure.
You can also build an app that targets the full breadth of Windows 10 devices by using C++ and a Universal Windows Platform (UWP) app project template. Read more about this in the Target Windows 10 devices section that appears earlier in this topic.
You can share C++ code between Android, iOS, and Windows by creating a static or dynamic shared library.
You can consume that library in a Windows, iOS, 2018.1. | https://docs.microsoft.com/en-us/visualstudio/cross-platform/cross-platform-mobile-development-in-visual-studio?view=vs-2019 | 2021-06-13T03:53:15 | CC-MAIN-2021-25 | 1623487598213.5 | [array(['media/homedevices.png?view=vs-2019', 'HomeDevices Devices'],
dtype=object)
array(['media/sharecode.png?view=vs-2019',
'ShareCode Share code between Windows, iOS, and Android UIs'],
dtype=object)
array(['media/windowsdevices.png?view=vs-2019',
'Windows Devices Windows Devices'], dtype=object)
array(['media/homedevices.png?view=vs-2019',
'Windows, iOS, and Android devices Windows, iOS, and Android devices'],
dtype=object)
array(['media/multidevicehybridapps.png?view=vs-2019',
'Multi-device hybrid apps with JavaScript Multi-device hybrid apps with JavaScript'],
dtype=object)
array(['media/cross_plat_cpp_intro_image.png?view=vs-2019',
'Cross_Plat_CPP_Intro_Image Use C++ to build for Android, iOS, and Windows'],
dtype=object)
array(['media/cross-plat_cpp_native.png?view=vs-2019',
'Native activity project template Native activity project template'],
dtype=object)
array(['media/cross_plat_cpp_libraries.png?view=vs-2019',
'Static and dynamic shared libraries Static and dynamic shared libraries'],
dtype=object)
array(['media/vstu_overview.png?view=vs-2019',
'Visual Studio Tools for Unity overview VSTU development environment'],
dtype=object) ] | docs.microsoft.com |
How to create a video background with Nimble Builder WordPress plugin ?
Nimble Builder allows you to create video backgrounds for sections and columns from self-hosted videos ( like mp4 videos for example ) from your WordPress media library, Youtube and Vimeo videos.
Like in this example.
- Edit your section's settings and expand the background tab
- Activate the video background option at the bottom of the background settings tab
Video backgrounds can also be used on columns.
Various Tips
About performances
In order to reduce page load impact, Nimble Builder will lazy load your videos by default. You can disable the default lazy load option for video background in the global settings > page speed optimizations.
How to use mp4 videos ?
Self-hosted mp4 videos is a good solution for your video backgrounds. If your server offers decent performances, I'd recommend using this type of videos rather than Vimeo or Youtube ones. You can upload your own videos or find good videos in free CC0 repositories such as pexels.com or Pixabay.
All you need to do is :
- Upload your mp4 video in your WordPress media library.
- Once uploaded, copy the video URL. For that click on edit the media, and then copy the video link.
- Paste the link in the Video link field of Nimble Builder. | https://docs.presscustomizr.com/article/401-how-to-create-a-video-background-with-nimble-builder-wordpress-plugin | 2021-06-13T02:30:35 | CC-MAIN-2021-25 | 1623487598213.5 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5c6b20962c7d3a66e32e99e5/file-MlDR1LTS4z.jpg',
None], dtype=object) ] | docs.presscustomizr.com |
Create a widget dependency In Service Portal, you can link JavaScript and CSS files to widgets to create dependencies between widgets and third-party libraries, external style sheets, and angular modules. Before you beginRole required: admin or sp_admin About this task Dependencies are loaded asynchronously from the server when needed. Widgets can have as many or as few dependencies as needed. However, the more you add, the more content a widget must download to render on the page. Keep dependencies as small as possible for more efficient load times. Procedure Create a dependency package. A dependency package is a collection of JavaScript and CSS files that can be then connected to a widget. Navigate to Service Portal > Dependencies. In the dependency record, define the following fields. Field Description Name The name of your dependency. (Useful for selecting a dependency from a dropdown list.) Application Application scope for the dependency record. Include on page load Select if you want your dependency to be loaded onto the page on the initial page load of Service Portal, or leave unchecked to load the dependency only when the linked widget is loaded onto a page. Angular module name Optional. Define the value if the linked JavaScript is an Angular module. Provide the name of the Angular module being loaded, so that it can be injected into the Service Portal Angular application. Add files to the dependency package. After you save the information for your dependency package, use the related lists to add JS and CSS Include files. For each related list, include the following information: Field Description Display name Name of the script include. Source Depending on whether you add a JS Include or a CSS Include, select one of these options from the list: URL UI script (for a JS Include) or Style Sheet (for a CSS Include) For a JS Include, use the UI Script field to reference a UI Script found in System UI > UI Scripts.For the CSS Include, use the Style Sheet field to reference a record in the sp_css table. Add a dependency package to a widget. After you have created a dependency package and added files, create a relationship between the dependency and a widget. Navigate to Service Portal > Widgets and find the widget record you want to add the dependency to. From the Dependencies related list, click Edit. In the slushbucket, find the dependency you created and double-click to add it to the selected items column on the right. Save the page to return to the widget record. Include a font icon in a single widgetIf you only want one widget to have access to a font icon, include the font icon in a single widget.Include font icons as a widget dependencyYou can include font icons wherever a widget is loaded by including them as a widget dependency. | https://docs.servicenow.com/bundle/quebec-servicenow-platform/page/build/service-portal/task/widget-dependencies.html | 2021-06-13T02:09:54 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.servicenow.com |
for a transparent pixel (range 0..1, anything equal or below this value will be considered unimportant)
avoid_vertex_merging(bool): [Read-Write] Experimental: Hint to the triangulation routine that extra vertices should be preserved
detail_amount(float): [Read-Write] Amount to detail to consider when shrink-wrapping (range 0..1, 0 = low detail, 1 = high detail)
geometry_type(SpritePolygonMode): [Read-Write] The geometry type (automatic / manual)
pixels_per_subdivision_x(int32): [Read-Write] Size of a single subdivision (in pixels) in X (for Diced mode)
pixels_per_subdivision_y(int32): [Read-Write] Size of a single subdivision (in pixels) in Y (for Diced mode)
shapes(Array(SpriteGeometryShape)): [Read-Write] List of shapes
simplify_epsilon(float): [Read-Write] This is the threshold below which multiple vertices will be merged together when doing shrink-wrapping. Higher values result in fewer vertices. | https://docs.unrealengine.com/4.26/en-US/PythonAPI/class/SpriteGeometryCollection.html | 2021-06-13T02:53:06 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.unrealengine.com |
User Password Change Using Game Data Service.
Game Data Service? This tutorial assumes an understanding of the Game Data Service. We strongly recommend that you review the Game Data and the Data Explorer topics before you attempt to follow this tutorial.
Cloud Code Section
Choosing Our Sequence of Actions and Passing in scriptData
1. Navigate to the Configurator > Cloud and password reset:
- Password recovery will generate a token which will be sent via email.
-); }
Linking User Account to E-Mail Data Type to save emails and tokens in together with the player's unique ID for easy reference. Our Data Type used in this example is named playerProfile.
- You need to go to the Game Data page and add indexed fields, which can then be used to query the playerProfile Data Type:
The Request Cloud Code:
if(Spark.getData().scriptData.email == null){ Spark.setScriptError("ERROR", "Email not specified"); } else{ //Load API var API = Spark.getGameDataService(); //Email var email = Spark.getData().scriptData.email; //Create condition var query = API.S("email").eq(email); //Check for results var resultOBJ = API.queryItems("playerProfile", query); //Check for errors if(resultOBJ.error()){ Spark.setScriptError("ERROR", resultOBJ.error()); } else{ //Get result var result = resultOBJ.cursor(); //If there's a result, means email is in use if(result.hasNext()){ Spark.setScriptError("ERROR", "Email in use!") } else{ Spark.setScriptData("email", Spark.getData().scriptData.email); } } }
The Response Cloud Code:
if(!Spark.hasScriptErrors()){ //Load API var API = Spark.getGameDataService(); //Create player inventory doc = API.createItem("playerProfile", Spark.getPlayer().getPlayerId()); //Get Data var data = doc.getData(); //Save userName and password data.userName = Spark.getPlayer().getUserName(); data.displayName = Spark.getPlayer().getDisplayName(); data.email = Spark.getData().scriptData.email; //Persist document var status = doc.persistor().persist().error(); //Check if document saved if(status){ //return error if persistence interrupted Spark.setScriptError("ERROR", status) } }
Sending the Request from Test Harness or the SDK:
{ "@class": ".RegistrationRequest", "displayName": "testUser", "password": "password", "userName": "testUser", "scriptData":{"email":"[email protected]"} }
Below is an example code of how to create a system that saves emails and generates token for password recovery:
Sequence Picker
Depending on the 'action' scriptData input, we're going to decide the sequence the script is going to follow.
var status = "Started"; //Checking if there is any scriptData passed in, if not then carry on the authentication as normal if(Spark.data.scriptData != null){ as long it's not easy to guess.
function generateRecoveryToken(){ // this should be cryptographically strong, not simply date-based var key = "MySecretKey"; var data = "ResetToken_" + new Date().getTime() + "_" + Math.random(); var token = Spark.getDigester().hmacSha256Base64(key, data); return token; }
Sending Out the E-Mail
We have full integration with SendGrid services. If you plan to use it to send your E-Mails, then create an account with them and wait for their E-Mail confirming that: "Your SendGrid account has been provisioned!", which then allows you to start sending emails. It might take up to 24 hours to receive it. You'll need:
- The E-Mail to send to.
- The E-Mail sent from.
- The E-mail Subject.
- The E-Mail contents.
- To send the E-Mail.
function sendRecoveryEmail(email, name , token){ //Here we use sendGrid as an example because we have full integration with their services. var myGrid = Spark.sendGrid("Username", "Password"); /(); }
If you don't want to use sendGrid you can use SparkHTTP to use your own provider.
Password Recovery Sequence
The password recovery sequence is going to generate a token, save it on the playerProfile Data Type, and send the player an email with a token.
function startRecovery(request){ if(!request.email){ //Either the email variable was not passed in or it was spelt incorrectly status = "email variable not passed in"; return; } //Get data service var api = Spark.getGameDataService(); //Construct query var query = api.S("email").eq(request.email); //Attempt to get results var resultOBJ = api.queryItems("playerProfile", query) //Check for errors if(resultOBJ.error()){ status = "invalid"; return; }else{ //Get document var result = resultOBJ.cursor().next(); } //Check for error if(result == null){ status = "invalid"; return; } // Function to generate a unique token var token = generateRecoveryToken(); var player = Spark.loadPlayer( result.getId()); //Sends the token back with the response Spark.setScriptData("token", token) //Get data object and insert new token var data = result.getData(); data.token = token; //Persist doc and save potential error var serviceStatus = result.persistor().persist().error(); //Check for errors if(serviceStatus){ status = "invalid"; return; }else{ // Function used to send email sendRecoveryEmail(request.email, player.getDisplay; } //Get data service var api = Spark.getGameDataService(); //Construct query var query = api.S("token").eq(request.token); //Attempt to get results var resultOBJ = api.queryItems("playerProfile", query); //Check for errors if(resultOBJ.error()){ status = "invalid"; return; }else{ var result = resultOBJ.cursor().next(); } //Check if any results came back if(result == null){ status = "invalid"; return; } //Load data and player var player = Spark.loadPlayer(result.getId()); var data = result.getData(); //Set the token to null so it wont be used again data.token = null; //Persist entry var serviceStatus = result.persistor().persist().error(); //Check for error if(serviceStatus){ status = "invalid"; return; }else{ //Change password player.setPassword(request.password); //Unlock player just in case too many failed attempts were tried and player was locked player.unlock();. | https://docs.gamesparks.com/tutorials/social-authentication-and-player-profile/automating-password-change-using-game-data-service.html | 2018-09-18T15:11:02 | CC-MAIN-2018-39 | 1537267155561.35 | [array(['img/autopasswordgds/1.png', None], dtype=object)] | docs.gamesparks.com |
Kernel-Mode WDM Audio Components
The kernel-mode Microsoft Windows Driver Model (WDM) audio components are:
WDMAud System Driver
SysAudio System Driver
KMixer System Driver
Redbook System Driver
SBEmul System Driver
SWMidi System Driver
DMusic System Driver
AEC System Driver
DRMK System Driver
Splitter System Driver
Port Class Adapter Driver and PortCls System Driver
USB Audio Class System Driver (Usbaudio.sys)
AVCAudio Class System Driver
WDMAud System Driver
The kernel-mode WDMAud system driver (Wdmaud.sys) is paired with the user-mode WDMAud system driver (Wdmaud.drv). The pair of WDMAud drivers translate between user-mode Microsoft Windows multimedia system calls and kernel-streaming I/O requests. WDMAud performs I/O for the following APIs: waveIn, waveOut, midiIn, midiOut, mixer, and aux (described in the Microsoft Windows SDK documentation). The kernel-mode WDMAud driver is a kernel streaming (KS) filter and a client of the SysAudio system driver.
SysAudio System Driver
The SysAudio system driver (Sysaudio.sys) builds the filter graphs that render and capture audio content. The SysAudio driver represents audio filter graphs as virtual audio devices and registers each virtual audio device as an instance of a KSCATEGORY_AUDIO_DEVICE device interface. (Adapter drivers should not register themselves in this category, which is reserved exclusively for SysAudio.) For example, a virtual MIDI device might represent a filter graph that is created by connecting the SWMidi driver, the KMixer driver, and a port/miniport driver.. The following audio stream sources use the graphs that SysAudio builds:
DirectSound (See Microsoft Windows SDK documentation.)
Windows multimedia APIs waveIn, waveOut, midiIn, midiOut, mixer, and aux (See Windows SDK documentation.)
Redbook CD digital audio (See Redbook System Driver.)
Sound Blaster emulator (See SBEmul System Driver.)
Kernel-mode software synthesizers (See SWMidi System Driver and DMusic System Driver.)
DRMK System Driver
KMixer System Driver
The KMixer system driver (Kmixer.sys) is the KS filter that does the following:
Mixing of multiple PCM audio streams
High-quality format conversion
Mixing and sample-rate conversion (See KMixer Driver Sample Rate Conversion and Mixing Policy.)
Bit-depth conversion
Speaker configuration and channel mapping
In addition to simple 8- and 16-bit, mono and stereo data formats, the KMixer driver supports:
PCM and IEEE floating-point data
Bit depths greater than 16 bits, and multichannel formats with more than two channels
Head-related transfer function (HRTF) 3-D processing
For information about the volume ranges and the default volume levels in the various versions of Windows, see Default Audio Volume Settings.
Redbook System Driver
The Redbook system driver (Redbook.sys) is the KS filter that manages the rendering of CD digital audio. The Redbook driver is a client of the SysAudio system driver. The system routes CD digital audio through the file system to the Redbook driver and then to the SysAudio driver. The CD digital audio is rendered on the preferred wave output device (as set in the Multimedia property pages in Control Panel).
SBEmul System Driver
The SBEmul system driver (Sbemul.sys) provides Sound Blaster emulation for MS-DOS applications. The SBEmul driver is a client of the SysAudio system driver. To render and capture content, the SysAudio driver uses the preferred wave and MIDI devices (as set in the Multimedia property pages in Control Panel).
Sound Blaster emulation is supported only in Windows 98/Me.
SWMidi System Driver
The SWMidi system driver (Swmidi.sys) is the KS filter that provides software-emulated General MIDI (GM) and high-quality Roland GS wavetable synthesis. A midiOutXxx application uses SWMidi when a hardware synthesizer is unavailable. The SWMidi filter receives as input a time-stamped MIDI stream from the WDMAud system driver and outputs a PCM wave stream to the KMixer system driver. SWMidi mixes all of its voices internally to form a single two-channel output stream with a PCM wave format.
DMusic System Driver
The DMusic system driver (Dmusic.sys) is the KS filter that supports software-emulated, high-quality, downloadable sound (DLS) synthesis. The DMusic driver is a system-supplied port class miniport driver. It exposes a single DirectMusic pin, which supports a DirectMusic stream data range. The DMusic filter receives as input a time-stamped MIDI stream from the DirectMusic system component and outputs a PCM wave stream to the KMixer system driver. The DMusic driver mixes all of its voices internally to form a single two-channel output stream with a PCM wave format. A DirectMusic application must explicitly select the kernel-mode software synth, Dmusic.sys, to use it in place of DirectMusic's default, user-mode synth.
AEC System Driver
The AEC system driver (Aec.sys) supports full-duplex DirectSound applications by implementing AEC (acoustic echo cancellation) and noise-suppression algorithms in software. For more information, see DirectSound Capture Effects.
DRMK System Driver
The DRMK system driver (Drmk.sys) is the KS filter that decrypts audio streams containing DRM-protected content. For more information, see Digital Rights Management.
Splitter System Driver
The Splitter system driver (Splitter.sys) is the KS filter that creates two or more output streams from a single input capture stream. The Splitter driver transparently copies the input stream to two more output streams independently of the format of the input stream.
The Splitter driver is supported by Windows Me, and Microsoft Windows XP and later. For more information, see AVStream Splitters.
Port Class Adapter Driver and PortCls System Driver
A port class adapter driver uses the port/miniport driver architecture to support an audio device. The PortCls driver includes built-in driver support for ISA and PCI audio devices. Although the PortCls system driver (Portcls.sys) also provides the framework for vendor-supplied port class adapter drivers, Microsoft recommends that vendors use a system-supplied port class adapter driver to support ISA and PCI audio devices. The PortCls framework might also be useful for constructing drivers for audio devices on other hardware buses or for software-only devices. For more information, see Introduction to Port Class.
USB Audio Class System Driver (Usbaudio.sys)
The USBAudio class system driver (Usbaudio.sys) provides driver support for USB Audio devices that comply with the Universal Serial Bus Device Class Definition for Audio Devices. For more information about this class system driver, see USB Audio Class System Driver (Usbaudio.sys).
AVCAudio Class System Driver
The AVCAudio class system driver (Avcaudio.sys) is an AVStream minidriver that provides driver support for audio devices that reside on an IEEE 1394 bus. The AVCAudio driver and associated support for IEEE 1394 audio devices are available in Windows XP and later.
To work with the system-supplied drivers, hardware vendors should design their audio devices to comply with the appropriate sections of the following specifications:
IEC 61883-1 and IEC 61883-6 (IEC 60958)
AV/C Digital Interface Command Set General Specification Ver. 3.0
AV/C Audio Subunit Specification 1.0
Connection and Compatibility Management Specification 1.0
AV/C Media Stream Format Information and Negotiation
Updates to the AV/C Audio Subunit Specifications currently in process
These specifications are available at the 1394 Trade Association website. The AVCAudio driver supports a subset of the features that are described in these specifications.
When an audio device identifies itself as an IEEE 1394-compliant audio device during Plug and Play device enumeration, the system automatically loads the AVCAudio driver to drive the device. AVCAudio drives the device directly, without the aid of a proprietary adapter driver. This means that a device that complies with the appropriate IEEE 1394 specifications requires no proprietary adapter driver.
Microsoft recommends that hardware vendors use the AVCAudio driver for their IEEE 1394 audio devices instead of writing proprietary adapter drivers.
The following figure shows the driver hierarchy for an IEEE 1394 audio device in Windows XP. In Windows XP and later, all of the driver components shown in this figure are provided by Microsoft with the operating system.
For more information about the driver components in the figure, see the following sections: | https://docs.microsoft.com/en-us/windows-hardware/drivers/audio/kernel-mode-wdm-audio-components | 2018-09-18T15:27:34 | CC-MAIN-2018-39 | 1537267155561.35 | [array(['images/avcaudio.png',
'diagram illustrating the driver hierarchy for a 1394 audio device'],
dtype=object) ] | docs.microsoft.com |
Blue Medora Nozzle for PCF Release Notes
This documentation describes the Blue Medora Nozzle for Pivotal Cloud Foundry (PCF). The Blue Medora Nozzle for PCF is a service that collects metrics for the Loggregator Firehose and exposes them using a RESTful API.
Overview
The Blue Medora Nozzle for PCF includes the following key features:
- Automated configuration method for the Blue Medora Nozzle
- Exposed metrics, using a RESTful API, for the Loggregator Firehose
- Integration with Blue Medora’s vRealize Operations Management Pack for PCF
Product Snapshot
The following table provides version and version-support information about Blue Medora Nozzle for PCF:
* As of PCF v2.0, Elastic Runtime is renamed Pivotal Application Service (PAS). If your tile supports PCF 2.0, it will use PAS, not Elastic Runtime.
Feedback
Please provide any bugs, feature requests, or questions to the Pivotal Cloud Foundry Feedback list or send an email to Blue Medora Customer Support. | https://docs.pivotal.io/partners/blue-medora/index.html | 2018-09-18T15:08:58 | CC-MAIN-2018-39 | 1537267155561.35 | [] | docs.pivotal.io |
Group Explorer
The Group Explorer is an aid to navigating report/table groups. The Group Explorer allows you to see the structure of the groups, to select them and change the Grouping, Sorting and Filtering. The Group Explorer can be especially handy for complex reports where there might be a lot of groups and it would be difficult to select a group and distinguish group hierarchy.
The Group Explorer allows you to easily apply grouping, sorting and filtering to your report. With this dialog, you don't need to manually invoke the Grouping dialog, Sorting dialog and Filtering dialog and define the group, sort and filter properties. Instead, you can use the Grouping, Sorting and Filter fields to do this with several intuitive mouse clicks.
The Group Explorer can be accessed from the context menu View | Group Explorer when right-clicking the area next to the report design surface.
When a Table/Crosstab item is selected, the Group Explorer dialog layout changes to show you the Row and Column Groups:
When a Graph item is selected, the Group Explorer will show you the Series and Categories Groups.
When a Map item is selected, the Group Explorer will show you the Series and Geolocation Groups.
It does not show the Static groups by default,. | https://docs.telerik.com/reporting/ui-group-explorer | 2018-09-18T15:30:09 | CC-MAIN-2018-39 | 1537267155561.35 | [] | docs.telerik.com |
Streamlined Metadata API for Picklists
We bring you elegance and efficiency with a reimagined Metadata API for picklists, with no wasted elements to clutter your API calls. The new structure clearly differentiates between global picklist value sets, local custom picklists, and standard picklists, making it super-easy to track your fields and values. This feature is available in both Lightning Experience and Salesforce Classic.
If you're using API v37.0, you can still use the existing elements for defining picklists and their values. If you're using API v38.0, your brain is about to get a break because defining all types of picklists makes more sense. Here's a high-level comparison:
For type and field descriptions and sample definitions, see the Metadata API Developer Guide. | https://releasenotes.docs.salesforce.com/en-us/winter17/release-notes/rn_forcecom_picklists_new_api.htm | 2018-09-18T15:41:10 | CC-MAIN-2018-39 | 1537267155561.35 | [] | releasenotes.docs.salesforce.com |
Annotation used for turning off Groovy's auto visibility conventions. By default, Groovy automatically turns package protected fields into properties and makes package protected methods, constructors and classes public. This annotation allows this feature to be turned off and revert back to Java behavior if needed. Place it on classes, fields, constructors or methods of interest as follows:
or for greater control, at the class level with one or moreor for greater control, at the class level with one or more
@PackageScope class Bar { // package protected
@PackageScope int field // package protected; not a property
@PackageScope method(){} // package protected }
PackageScopeTargetvalues:
import static groovy.transform.PackageScopeTarget.*This transformation is not frequently needed but can be useful in certain testing scenarios or when using a third-party library or framework which relies upon package scoping.
@PackageScope([CLASS, FIELDS]) class Foo { // class will have package protected scope int field1, field2 // both package protected def method(){} // public }
@PackageScope(METHODS) class Bar { // public int field // treated as a property def method1(){} // package protected def method2(){} // package protected }
@default {PackageScopeTarget.CLASS} | http://docs.groovy-lang.org/latest/html/gapi/groovy/transform/PackageScope.html | 2016-04-28T21:50:55 | CC-MAIN-2016-18 | 1461860109830.69 | [] | docs.groovy-lang.org |
Information for "Adapting a Joomla 1.5 extension to Joomla 2.5" Basic information Display titleJ2.5 talk:Adapting a Joomla 1.5 extension to Joomla 2.5 Default sort keyAdapting a Joomla 1.5 extension to Joomla 2.5 Page length (in bytes)4,070 Page ID11422 Page content languageEnglish (en) Page content modelwikitext Indexing by robotsAllowed Number of redirects to this page3 Number of subpages of this page0 (0 redirects; 0 non-redirects) Page protection EditAllow all users MoveAllow all users Edit history Page creatorElkuku (Talk | contribs) Date of page creation13:31, 18 January 2011 Latest editorTom Hutchison (Talk | contribs) Date of latest edit15:59, 29 April 2013 Total number of edits9 Total number of distinct authors9 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=J2.5_talk:Adapting_a_Joomla_1.5_extension_to_Joomla_2.5&action=info | 2016-04-28T23:26:03 | CC-MAIN-2016-18 | 1461860109830.69 | [] | docs.joomla.org |
Screen.templates.
Contents
Description style sheet (CSS) file.
Version.
Typical Usage.
Note: The preview of an administrator template does not show you the administrator module positions.
Quick Tips
You can switch between the already installed templates for the Site and the Administrator by selecting the desired template and clicking on the Default toolbar icon on the upper right part of the screen.
Points to Watch.
Dependencies. | https://docs.joomla.org/index.php?title=Help15:Screen.templates.15&oldid=5735 | 2016-04-28T23:18:43 | CC-MAIN-2016-18 | 1461860109830.69 | [] | docs.joomla.org |
Difference between revisions of "JTableExtension:: construct"::__construct
Description
Contructor.
Description:JTableExtension:: construct [Edit Descripton]
public function __construct (&$db)
- Returns
- Defined on line 28 of libraries/joomla/database/table/extension.php
See also
JTableExtension::__construct source code on BitBucket
Class JTableExtension
Subpackage Database
- Other versions of JTableExtension::__construct
SeeAlso:JTableExtension:: construct [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=JTableExtension::_construct/11.1&diff=57842&oldid=50756 | 2016-04-28T23:10:49 | CC-MAIN-2016-18 | 1461860109830.69 | [] | docs.joomla.org |
Difference between revisions of "JCategoryNode:: construct"
From Joomla! Documentation
Revision as of 15::__construct
Description
Class constructor.
Description:JCategoryNode:: construct [Edit Descripton]
public function __construct ( $category=null &$constructor=null )
See also
JCategoryNode::__construct source code on BitBucket
Class JCategoryNode
Subpackage Application
- Other versions of JCategoryNode::__construct
SeeAlso:JCategoryNode:: construct [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JCategoryNode::_construct&diff=next&oldid=56083 | 2016-04-28T22:23:09 | CC-MAIN-2016-18 | 1461860109830.69 | [] | docs.joomla.org |
Revision history of "JRule:: toString/11.1"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 20:35, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JRule:: toString/11.1 to API17:JRule:: toString without leaving a redirect (Robot: Moved page) | https://docs.joomla.org/index.php?title=JRule::_toString/11.1&action=history | 2016-04-28T23:28:46 | CC-MAIN-2016-18 | 1461860109830.69 | [] | docs.joomla.org |
Products Intro
In Chargify, you sell Subscriptions to your Products. You must first create and configure a Product before you can sell anything to a Customer. Products are administered on a Site-by-Site basis, on the main “Products” tab. Options such as Return URL and Parameters, API handle, Credit Card requirements, and Address requirements
- Product Pricing allows you to set the recurring interval, recurring price, trial period, trial price, initial fees, and expiration intervals
Archiving Products.* | https://docs.chargify.com/products-intro | 2016-04-28T21:47:05 | CC-MAIN-2016-18 | 1461860109830.69 | [array(['/images/doculab/product-intro-1.png', None], dtype=object)
array(['/images/doculab/product-intro-2.png', None], dtype=object)] | docs.chargify.com |
Difference between revisions of "SEF" From Joomla! Documentation Redirect page Revision as of 12:46, 18 June 2008 (view source)Chris Davenport (Talk | contribs)m (Redirecting to SEO) Latest revision as of 13:26, 14 September 2013 (view source) Redirect fixer (Talk | contribs) (Fixing double redirect from Search Engine Optimisation to Portal:Search Engine Optimisation.) (One intermediate revision by one other user not shown)Line 1: Line 1: −#REDIRECT [[SEO]]+#REDIRECT [[Portal:Search Engine Optimisation]] [[Category:Landing Pages]] [[Category:Landing Pages]] Latest revision as of 13:26, 14 September 2013 Portal:Search Engine Optimisation Retrieved from ‘’ Category: Landing Pages | https://docs.joomla.org/index.php?title=SEF&diff=103553&oldid=7778 | 2016-04-28T22:57:57 | CC-MAIN-2016-18 | 1461860109830.69 | [] | docs.joomla.org |
You are not restricted to the same date ranges when running PHP on a 64-bit machine. This is because you are using 64-bit integers instead of 32-bit integers (at least if your OS is smart enough to use 64-bit integers in a 64-bit OS)
The following code will produce difference output in 32 and 64 bit environments.
var_dump(strtotime('1000-01-30'));
32-bit PHP: bool(false)
64-bit PHP: int(-30607689600)
This is true for php 5.2.* and 5.3
Also, note that the anything about the year 10000 is not supported. It appears to use only the last digit in the year field. As such, the year 10000 is interpretted as the year 2000; 10086 as 2006, 13867 as 2007, etc
strtotime
(PHP 4, PHP 5)
strtotime — Parse about any English textual datetime description into a Unix timestamp
Description
The function expects to be given a string containing a US English date format and will try to parse that format into a Unix timestamp (the number of seconds since January 1 1970 00:00:00 UTC),..);
}
?>
Notes UTC to Tue, 19 Jan 2038 03:14:07 UTC. .
See Also
-
*/
function get_x_months_to_the_future( $base_time = null, $months = 1 )
{
if (is_null($base_time))
$base_time = time();
$x_months_to_the_future = strtotime( "+" . $months . " months", $base_time );
$month_before = (int) date( "m", $base_time ) + 12 * (int) date( "Y", $base_time );
$month_after = (int) date( "m", $x_months_to_the_future ) + 12 * (int) date( "Y", $x_months_to_the_future );
if ($month_after > $months + $month_before)
$x_months_to_the_future = strtotime( date("Ym01His", $x_months_to_the_future) . " -1 day" );
return $x_months_to_the_future;
} /oyah! $time containing only numeric and space characters results in unexpected output (at least on Win2K server, not checked with linux).
<?php
echo date('d F Y', strtotime('2007')); // today's date (09 May 2008) displayed
echo date('d F Y', strtotime('01 2007')); // Warning: date(): Windows does not support dates prior to midnight (00:00:00), January 1, 1970
echo date('d F Y', strtotime('01 01 2007')); // same warning
echo date('d F Y', strtotime('01 Jan 2007')); // 01 January 2007
?>
No bug report submitted, I don't know enough about php and servers to know if this is expected behaviour or not.
31-Jan-2008 09:15
I found some different behaviors between PHP 4 and PHP 5. I have tested this on just two versions: PHP Version 5.2.3-1ubuntu6.3 and PHP Version 4.3.10-22.
Example 1:
<?php
$ts2 = strtotime("1st Thursday", $ts1)
var_dump($ts2)
// this works in PHP 4
// PHP 5 dumps bool(false)
?>
Example 2:
<?php
$ts2 = strtotime("first Thursday", $ts1)
var_dump($ts2)
// this works in PHP 4
// also works in PHP 5
?>
21-Jan-2008 01:56
As with each of the time-related functions, and as mentioned in the time() notes, strtotime() is affected by the year 2038 bug on 32-bit systems:
<?php
echo strtotime('13 Dec 1901 20:45:51'); // false
echo strtotime('13 Dec 1901 20:45:52'); // -2147483648
echo strtotime('19 Jan 2038 03:14:07'); // 2147483647
echo strtotime('19 Jan 2038 03:14:08'); // false
?>
10-Dec-2007 04:21
Be careful with spaces between the "-" and the number in the argument, for some PHP-installations...
<?php
strtotime("- 1 day") // ...with space - will ADD a day
strtotime("-1 day") // ...works perfect
?>
05-Dec-2007 06:42
Here is a list of differences between PHP 4 and PHP 5 that I have found
(specifically PHP 4.4.2 and PHP 5.2.3).
<?php
$ts_from_nothing = strtotime();
var_dump($ts_from_nothing);
// PHP 5
// bool(false)
// WARNING: Wrong parameter count...
// PHP 4
// NULL
// WARNING: Wrong parameter count...
// remember that unassigned variables evaluate to NULL
$ts_from_null = strtotime($null);
var_dump($ts_from_null)...
// PHP 5
// bool(false)
// throws a NOTICE: Undefined variable
// PHP 4
// current time
// NOTICE: Undefined variable $null...
// NOTICE: Called with empty time parameter...
$ts_from_empty = strtotime("");
var_dump($ts_from_empty);
// PHP 5
// bool(false)
// PHP 4
// current time
// NOTICE: Called with empty time parameter
$ts_from_bogus = strtotime("not a date");
var_dump($ts_from_bogus);
// PHP 5
// bool(false)
// PHP 4
// -1
?>
$date = explode("/","05/11/2007");
strftime("%Y-%m-%d",mktime(0,0,0,$date[1],$date[0],$date[2]));
?>
Much reliable but you must know the date format before. You can use javascript to mask the date field and, if you have a calendar in your page, everything is done.
Thank you.
01-Oct-2007 10:41
Here the workaround to the bug of strtotime() found in my previous comment on finding the exact date and time of "3 months ago of last second of this year", using mktime() properties on dates instead of strtotime(), and which seems to give correct results:
<?php
// check for equivalency
$basedate = strtotime("31 Dec 2007 23:59:59");
$timedate = mktime( 23, 59, 59, 1, 0, 2008 );
echo "$basedate $timedate "; // 1199141999 1199141999 : SO THEY ARE EQUIVALENT
// workaround, as mktime knows to handle properly offseted dates:
$date1 = mktime( 23, 59, 59, 1 - 3, 0, 2008 );
echo date("j M Y H:i:s", $date1); // 30 Sep 2007 23:59:59 CORRECT
?>
01-Oct-2007 08:36
Some surprisingly wrong results (php 5.2.0): date and time seem not coherent:
<?php
// Date: Default timezone Europe/Berlin (which is CET)
// date.timezone no value
$basedate = strtotime("31 Dec 2007 23:59:59");
$date1 = strtotime("-3 months", $basedate);
echo date("j M Y H:i:s", $date1); // 1 Oct 2007 23:59:59 WRONG
$basedate = strtotime("31 Dec 2007 23:59:59 CET");
$date1 = strtotime("-3 months", $basedate);
echo date("j M Y H:i:s", $date1); // 1 Oct 2007 23:59:59 WRONG
$basedate = strtotime("31 Dec 2007 23:59:59 GMT");
$date1 = strtotime("-3 months", $basedate);
echo date("j M Y H:i:s", $date1); // 1 Oct 2007 00:59:59 CORRECT
$basedate = strtotime("31 Dec 2007 22:59:59 GMT");
$date1 = strtotime("-3 months", $basedate);
echo date("j M Y H:i:s", $date1); // 1 Oct 2007 23:59:59 WRONG AGAIN
$basedate = strtotime("31 Dec 2007 00:00:00 GMT");
$date1 = strtotime("-3 months", $basedate);
echo date("j M Y H:i:s", $date1); // 1 Oct 2007 01:00:00 CORRECT
$basedate = strtotime("31 Dec 2007 00:00:00 CET");
$date1 = strtotime("-3 months", $basedate);
echo date("j M Y H:i:s", $date1); // 1 Oct 2007 00:00:00 WRONG AGAIN
$basedate = strtotime("31 Dec 2007 00:00:01");
$date1 = strtotime("-3 months", $basedate);
echo date("j M Y H:i:s", $date1); // 1 Oct 2007 00:00:01 WRONG AGAIN
?>
03-Sep-2007 02:33
Another inconsistency between versions:
<?php
print date('Y-m-d H:i:s', strtotime('today')) . "\n";
print date('Y-m-d H:i:s', strtotime('now')) . "\n";
?>
In PHP 4.4.6, "today" and "now" are identical, meaning the current timestamp.
In PHP 5.1.4, "today" means midnight today, and "now" means the current timestamp.
19-Jul-2007 01:13
A major difference in behavior between PHP4x and newer 5.x versions is the handling of "illegal" dates: With PHP4, strtotime("2007/07/55") gave a valid result that could be used for further calculations.
This does not work anymore at PHP5.xx (here: 5.2.1), instead something like strtotime("$dayoffset_relative_to_today days","2007/07/19") is to be used.
07-Jul-2007 05:47
when using strtotime("wednesday"), you will get different results whether you ask before or after wednesday, since strtotime always looks ahead to the *next* weekday.
strtotime() does not seem to support forms like "this wednesday", "wednesday this week", etc.
the following function addresses this by always returns the same specific weekday (1st argument) within the *same* week as a particular date (2nd argument).
<?php
function weekday($day="", $now="") {
$now = $now ? $now : "now";
$day = $day ? $day : "now";
$rel = date("N", strtotime($day)) - date("N");
$time = strtotime("$rel days", strtotime($now));
return date("Y-m-d", $time);
}
?>
example use:
weekday("wednesday"); // returns wednesday of this week
weekday("monday, "-1 week"); // return monday the in previous week
ps! the ? : statements are included because strtotime("") without gives 1 january 1970 rather than the current time which in my opinion would be more intuitive...
01-Jul-2007 02:54
To calculate the last Friday in the current month, use strtotime() relative to the first day of next month:
<?php
$lastfriday=strtotime("last Friday",mktime(0,0,0,date("n")+1,1));
?>
If the current month is December, this capitalises on the fact that the mktime() function correctly accepts a month value of 13 as meaning January of the next year.
01-Jun-2007 03:10
Note about strtotime() when trying to figure out the NEXT month...
strtotime('+1 months'), strtotime('next month'), and strtotime('month') all work fine in MOST circumstances...
But if you're on May 31 and try any of the above, you will find that 'next month' from May 31 is calculated as July instead of June....
13-May-2007 03:41
One more difference between php 4 and 5 (don't know when they changed this) but the string 15 EST 5/12/2007 parses fine with strtotime in php 4, but returns the 1969 date in php 5. You need to add :00 to make it 15:00 so php can tell those are hours. There's no change in php 4 when you do this.
$Date = "04-07-2007";
$Days = conv_to_xls_date($Date);
?>
$Days will contain 39179
SQL datetime columns have a much wider range of allowed values than a UNIX timestamp, and therefore this function is not safe to use to convert a SQL datetime column to something usable in PHP4. Year 9999 is the limit for MySQL, which obviously exceeds the UNIX timestamp's capacity for storage. Also, dates before 1970 will cause the function to fail (at least in PHP4, don't know about 5+), so for example my boss' birthday of 1969-08-11 returned FALSE from this function.
[red. The function actually supports it since PHP 5.1, but you will need to use the new object oriented methods to use them. F.e:
<?php
$date = new DateTime('1969-08-11');
echo $date->format('m d Y');
?>
]
09-Feb-2007 08:29
Regarding the "NET" thing, it's probably parsing it as a time-zone. If you give strtotime any timezone string (like PST, EDT, etc.) it will return the time in that time-zone.
In any case, you shouldn't use strtotime to validate dates. It can and will give incorrect results. As just one shining example:
Date: 05/01/2007
To most Americans, that's May 1st, 2007. To most Europeans, that's January 5th, 2007. A site that needs to serve people around the globe cannot use strtotime to validate or even interpret dates.
The only correct way to parse a date is to mandate a format and check for that specific format (preg_match will make your life easy) or to use separate form fields for each component (which is basically the same thing as mandating a format).
16-Jan-2007 10:47
A hint not to misunderstand the second parameter:
The parameter "[, int now]" is only used for strings which describe a time difference to another timestamp.
It is not possible to use strtotime() to calculate a time difference by passing an absolute time string, and another timestamp to compare to!
Correct:
<?php
$day_before = strtotime("+1 day", $timestamp);
# result is a timestamp relative to another
?>
Wrong:
<?php
$diff = strtotime("2007-01-15 11:40:00", time());
# result it the timestamp for the date in the string;
# because the string contains an absolute date and time,
# the second parameter is ignored!
# instead, use:
$diff = time() - strtotime("2007-01-15 11:40:00");
?>:02
The following might produce something different than you might expect:
<?php
echo date('l, F jS Y', strtotime("third wednesday", strtotime("2006-11-01"))) . "<br>";
echo date('l, F jS Y', strtotime("third sunday", strtotime("2006-01-01")));
?>
Produces:
Wednesday, November 22nd 2006
Sunday, January 22nd 2006
The problem stems from strtotime when the requested day falls on the date passed to strtotime. If you look at your calendar you will see that they should return:
Wednesday, November 15th 2006
Sunday, January 15th 2006
Because the date falls on the day requested it skips that day.
24-Jan-2006 06:18
It looks like in the latest release of PHP 5.1, when passing to strtotime this string "12/32/2005", it will now return the date "12/31/1969". (The previous versions would return "1/1/2006".)
11-Jan-2006 04:13
I'm posting these here as I believe these to be design changes, not bugs.
For those upgrading from PHP 4 to PHP 5 there are a number of things that are different about strtotime that I have NOT seen documented elsewhere, or at least not as clearly. I confirmed these with two separate fresh installations of PHP 4.4.1 and PHP 5.1.1.
1) Given that today is Tuesday: PHP4 "next tuesday" will return today. PHP5 "next tuesday" will actually return next tuesday as in "today +1week". Note that behavior has NOT changed for "last" and "this". For the string "last tuesday" both PHP4 and PHP5 would return "today -1week". For the string "this tuesday" both PHP4 and PHP5 would return "today".
2) You cannot include a space immediately following a + or - in PHP 5. In PHP4 the string "today + 1 week" works great. in PHP5 the string must be "today +1 week" to correctly parse.
3) (Partially noted in changelog.) If you pass php4 a string that is a mess ("asdf1234") it will return -1. If you in turn pass this to date() you'll get a warning like: Windows does not support dates prior to midnight. This is pretty useful for catching errors in your scripts. In PHP 5 strtotime will return FALSE which causes date() to return 12/31/69. Note that this is true of strings that might appear right such as "two weeks".
4) (Partially noted in changelog.) If you pass php4 an empty string it will error out with a "Notice: strtotime(): Called with empty time parameter". PHP5 will give no notice and return the current date stamp. (A much preferred behavior IMO.)
5) Some uppercase and mixed-case strings no longer parse correctly. In php4 "Yesterday" would parse correctly. In php5 "Yesterday" will return the infamous 1969 date. This is also true of Tomorrow and Today. [Red. This has been fixed in PHP already]
6. The keyword "previous" is supported in PHP5. (Finally!)
Good luck with your upgrades. :)
-Will
06-Oct-2005 09:34
Note strtotime() in PHP 4 does not support fractional seconds.
See especially if you happen to swap to ODBC for MS SQL Server and wonder what's happened!
28-Sep-2005 03:55
The PHP 5.1.0 change is a major backward compatibility break.
Now, that the returned value on failure has changed, the correct way to detect problems on all PHP versions is:
<?php
if (($time = strtotime($date)) == -1 || $time === false) {
die 'Invalid date';
}
?>
[red (derick): note, this is not 100% correct, as in this case 1969-12-31 23:59 will be thrown out as that timestamp is "-1"]
29-Aug-2005 01:25
One behavior to be aware of is that if you have "/" in the date then strtotime will believe the last number is the year, while if you use a "-" then the first number is the year.
12/4/03 will be evaluated to the same time as 03-12-4.
This is in the gnu documentation linked to in the article. I confirmed the behavior with strtotime and getdate.
Steve Holland
15-Aug-2005 06:49
When using multiple negative relative items, the result might be a bit unexpected:
<?php
$basedate = strtotime("15 Aug 2005 10:15:00");
$date1 = strtotime("-1 day 2 hours", $basedate);
$date2 = strtotime("-1 day -2 hours", $basedate);
echo date("j M Y H:i:s", $date1); // 14 Aug 2005 12:15:00
echo date("j M Y H:i:s", $date2); // 14 Aug 2005 08:15:00
?>
The minus sign has to be added to every relative item, otherwise they are interpreted as positive (increase in time). Other possibility is to use "1 day 2 hours ago".
15-Jul-2005 09:21
Maybe it saves others from troubles:
if you create a date (i.e. a certain day, like 30.03.2005, for a calendar for example) for which you do not consider the time, when using mktime be sure to set the time of the day to noon:
<?php
$iTimeStamp = mktime(12, 0, 0, $iMonth, $iDay, $iYear);
?>
Otherwhise
<?php
// For example
strtotime('-1 month', $iTimeStamp);
?>
will cause troubles when calculating the relative time. It often is one day or even one month off... After I set the time to noon "strtotime" calculates as expected.
Cheers
Denis
12-Jul-2005 11:13
relative dates..
<?php
echo date('d F Y', strtotime('last monday', strtotime('15 July 2005'))); // 11th
echo "<br>";
echo date('d F Y', strtotime('this monday', strtotime('15 July 2005'))); // 18th
echo "<br>";
echo date('d F Y', strtotime('next monday', strtotime('15 July 2005'))); // 25th
?>
28-Jun-2005 12:14
This is an easy way to calculate the number of months between 2 dates (including the months in which the dates are themselves).
<?php
$startDate = mktime(0,0,0, 6, 15, 2005);
$stopDate = mktime(0,0,0, 10, 8, 2006);
$nrmonths = ((idate('Y', $stopDate) * 12) + idate('m', $stopDate)) - ((idate('Y', $startDate) * 12) + idate('m', $startDate));
?>
Results in $nrmonths = 16.
09-May-2005 08:49
Here's a quick one-line function you can use to get the time difference for relative times. But default, if you put in a relative time (like "1 minute"), you get that relative to the current time. Using this function will give you just the time difference in seconds:
<?php function relative_time( $input ) { return strtotime($input) - time(); } ?>
For example "1 minute" will return 60, while "30 seconds ago" will return -30
Valid relative time formats can be found at:59
If you strtotime the epoch (Jan 1 1970 00:00:00) you will usually get a value, rather than the expected 0. So for example, if you were to try to use the epoch to calculate the difference in times (strtotime(Jan 1 1970 21:00:00)-strtotime(Jan 1 1970 20:00:00) for example) You get a value that depends strongly upon your timezone. If you are in EST for example, the epoch is actually shifted -5 to YOUR epoch is Jan 1 1970 19:00:00) In order to get the offset, simply use the following call to report the number of seconds you are away from the unix epoch. $offset=strtotime("1970-01-01 00:00:00"); Additionally, you can append GMT at the end of your strtotime calls so save yourself the trouble of converting relative to timezone.
I ran into the same problem with "last" as gabrielu at hotmail dot com (05-Apr-2005 10:45) when using strtotime() with getdate(). My only guess is that it has to do with daylight savings time as it seemed to be ok for most dates except those near the first Sunday in April and last Sunday in October.
I used strftime() with strtotime() and that gave me the result I was looking for.
05-Apr-2005 04:12
Be warned that strtotime() tries to "guess what you meant" and will successfully parse dates that would otherwise be considered invalid:
<?php
$ts = strtotime('1999-11-40');
echo date('Y-m-d', $ts);
// outputs: 1999-12-10
?>
It is my understanding (I have not verified) that the lexer for strtotime() has been rewritten for PHP5, so these semantics may only apply for PHP4 and below.
06-Apr-2004 06:39
Append the string "GMT" to all of your datetimes pulled from MySQL or other database that store date-times in the format "yyyy-mm-dd hh:ii:ss" just prior to converting them to a unix timestamp with strtotime(). This will ensure you get a valid GMT result for times during daylight savings.
EXAMPLE:
<?php
$date_time1 = strtotime("2004-04-04 02:00:00"); // returns bad value -1 due to DST
$date_time2 = strtotime("2004-04-04 02:00:00 GMT"); // works great!
?>
01-Jan-2004 11:24
I was having trouble parsing Apache log files that consisted of a time entry (denoted by %t for Apache configuration). An example Apache-date looks like: [21/Dec/2003:00:52:39 -0500]
Apache claims this to be a 'standard english format' time. strtotime() feels otherwise.
I came up with this function to assist in parsing this peculiar format.
<. | http://docs.php.net/strtotime | 2009-07-03T22:32:45 | crawl-002 | crawl-002-009 | [array(['/images/notes-add.gif', 'add a note'], dtype=object)] | docs.php.net |
List of Supported Timezones
Table of Contents
Here you'll find the complete list of timezones supported by PHP, which are meant to be used with e.g. date_default_timezone_set().
Note: The latest version of the timezone database can be installed via PECL's » timezonedb.
Note: This list is based upon Version 2009.10 of the timezonedb.
List of Supported Timezones
There are no user contributed notes for this page. | http://docs.php.net/manual/en/timezones.php | 2009-07-04T03:52:10 | crawl-002 | crawl-002-009 | [array(['/images/notes-add.gif', 'add a note'], dtype=object)] | docs.php.net |
JSP Expressions do not evaluate!
Detailed Description
JSPs that otherwise appear to work do not expand JSP expressions and instead have include the actual expression in the generated HTML.
Remedy
This is caused when using JSP2.0 against a 2.5 web.xml. Jasper 2.0 silently fails to evaluate the expression - but most other JSP functions appear to work.
... | http://docs.codehaus.org/pages/diffpages.action?pageId=73302203&originalId=73302078 | 2013-05-18T20:39:56 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.codehaus.org |
SPS 386.02(8) "Portable toilet" means a self-contained unit with a flushing device which retains sewage in a holding tank for disposal to a sewage system acceptable to the department.

SPS 386.02(9) "Recirculating system" means a holding tank with all necessary appurtenances to provide for the recirculation of flushing liquid and for the receiving, venting and shore removal of sewage.

SPS 386.02(10) "Sealed" means making a toilet incapable of discharging sewage into the waters upon which a boat is operated or moored.

SPS 386.02(11) "Sewage" means human body wastes.

SPS 386.02(12) "Toilet" means any device, facility or installation designed or constructed for use as a place for receiving sewage directly from the human body.
SPS 386.02 History: Cr. Register, September, 1980, No. 297, eff. 10-1-80; renum. from H 80.02 and am. (1) Register, May, 1983, No. 329, eff. 6-1-83; correction in (3) made under s. 13.93 (2m) (b) 7., Stats., Register, April, 2000, No. 532; correction in (1) made under s. 13.92 (4) (b) 6., Stats., Register December 2011 No. 672.
SPS 386.03 Petition for variance.

SPS 386.03(1) Procedure. The department shall consider and may grant a variance to an administrative rule upon receipt of a fee and a completed petition for variance form from the owner, provided an equivalent degree of safety is established in the petition for variance which meets the intent of the rule being petitioned. The department may impose specific conditions in a petition for variance to promote the protection of the health, safety and welfare of the employees or the public. Violation of those conditions under which the petition is granted constitutes a violation of these rules.
SPS 386.03(2)
(2)
Petition processing time.
Except for priority petitions, the department shall review and make a determination on a petition for variance within 30 business days of receipt of all calculations, documents and fees required to complete the review. The department shall process priority petitions within 10 business days.
SPS 386.03 Note
Note:
Copies of the petition for variance (form SBD-8) may be downloaded for no charge at t the Department's Web site
through links to Safety and Buildings Division forms.
SPS 386.03 History
History:
Cr.
Register, September, 1980, No. 297
, eff. 10-1-80; renum. from H 80.03,
Register, May, 1983, No. 329
, eff. 6-1-83; r. and recr.
Register, October, 1984, No. 346
, eff. 11-1-84; cr. (2),
Register, February, 1985, No. 350
, eff. 3-1-85.
SPS 386.04
SPS 386.04
Contract applicability.
Applicable provisions of this regulation shall be construed to be a part of any order or agreement, written or verbal, for the installation of a holding tank, recirculating system, provisions of a portable toilet or shore disposal facility or appurtenances thereto.
SPS 386.04 History
History:
Cr.
Register, September, 1980, No. 297
, eff. 10-1-80; renum. from H 80.04,
Register, May, 1983, No. 329
, eff. 6-1-83.
SPS 386.05
SPS 386.05
Approval required.
SPS 386.05(1)
(1)
General.
Any prefabricated tank, portable toilet or toilet proposed for installation in boats used upon the inland or outlying waters of the state shall receive the approval of the department. The manufacturer of any prefabricated tank, portable toilet or toilet shall submit, in duplicate, plans and specifications showing construction details for such facility. The owner of a custom built tank or toilet shall similarly submit such details in duplicate for approval prior to installation. The department may require the submission of other information or the unit itself, in the case of a portable toilet, to complete its review.
SPS 386.05(2)
(2)
Approved unit listing.
The department shall keep a current list of approved prefabricated tanks, portable toilets and toilets for installation on boats and shall provide a copy of such current list to the bureau of law enforcement, department of natural resources.
SPS 386.05 History
History:
Cr.
Register, September, 1980, No. 297
, eff. 10-1-80; renum. from H 80.05,
Register, May, 1983, No. 329
, eff. 6-1-83.
SPS 386.06
SPS 386.06
Holding tank, toilet and appurtenances.
SPS 386.06(1)
(1)
Material.
Each holding tank and toilet shall be constructed of a plastic which is resistant to acid, alkali and water; stainless steel with comparable resistance or other approved material. Metal combinations shall be galvanically compatible.
SPS 386.06(2)
(2)
Holding tank strength.
A holding tank, with all openings sealed, shall show no signs of deformation, cracking or leakage when subjected to a combined suction and external pressure head of 5 pounds per square inch. It shall be designed and installed so as not to become permanently distorted with a static top load of 200 pounds.
SPS 386.06(3)
(3)
Temperature resistance.
All materials used shall be capable of withstanding a temperature range of from -22º F. (winter storage) to the maximum operating temperature obtainable when operating in an ambient temperature of 140º F.
SPS 386.06(4)
(4)
Mounting.
The tank and toilet shall be rigidly and permanently secured in place in such manner that the tank, toilet and piping will not fall.
SPS 386.06(5)
(5)
Capacity.
The capacity shall be sufficient to receive the waste from the maximum number of persons that may be on board during an 8-hour period. The passenger rating shall be that indicated on the boat's capacity plate or that of a boat of similar size should the plate be illegible or missing.
SPS 386.06(5)(a)
(a)
Holding tank.
The capacity shall be determined on the basis of contribution of 41⁄2 gallons per person per 8-hour day for a toilet of the hand pump type. If standard waterflush toilets are installed, the minimum capacity shall be at 131⁄ 2 gallons per person per 8-hour day.
SPS 386.06(5)(b)
(b)
Recirculating toilet.
The capacity of the tank of a recirculating type unit shall be determined on the basis of a contribution of one-quarter gallon per person per 8-hour day.
SPS 386.06(6)
(6)
Controls.
Each holding tank shall contain a sewage level device which actuates a warning light or other visible gauge when the tank becomes three-fourths full. The light or other device shall be located so that it can be readily observed. The sewage level device shall be in operable condition at any time the boat is used. Such water level indicator shall be installed so as to be removable and be of such design and of such size as to make a watertight seal with a tank opening that is sufficiently large to accommodate the sewage level device.
SPS 386.06(7)
(7)
Maintenance.
SPS 386.06(7)(a)
(a)
A separate manhole shall be provided in the top of the tank for maintenance purposes. A plate or cap capable of making a watertight seal shall be provided on the opening which shall be of sufficient size to readily permit cleaning and maintenance.
SPS 386.06(7)(b)
(b)
Deodorant.
Any deodorant used in a holding tank, approved portable toilet or recirculating toilet shall be easily obtainable and constitute a minimum hazard when handled, stored and used according to the manufacturer's recommendations and form no dangerous concentration of gases nor react dangerously with other chemicals used for the same purpose.
SPS 386.06(8)
(8)
Openings for piping.
Openings shall be provided in each holding tank for inlet, outlet and vent piping. The openings and pipe fittings shall be so designed as to provide watertight joints between the tank and the piping. Plastic opening fittings shall be of the rigid serrated type. Inlet openings should preferably be such that they could accommodate fittings that would be connected to piping of a minimum nominal inside diameter (I.D.) of 11⁄ 2 inches. Outlet openings shall be such as to accommodate at least 11⁄ 2 inch I.D. piping. Vent pipe openings shall be able to accommodate fittings for at least a one-half inch I.D. pipe, and should preferably be located at the top of a conical frustum or cylindrical vertical extension of the tank which is at least 2 inches in diameter at the base and 2 inches or more in height.
SPS 386.06(9)
(9)
Piping and fittings.
SPS 386.06(9)(a)
(a)
Size.
The piping from a toilet to the holding tank shall be at least as large as the trap of the toilet fixture. The piping from the holding tank or toilet to the pumpout connection shall have a nominal inside diameter of at least one and one-half inches.
SPS 386.06(9)(b)
(b)
Material.
All waste and venting piping shall be made of galvanized steel, wrought iron or yoloy pipe; lead; brass; type M copper; or flexible or rigid plastic pipe. Assembly shall be made with threaded fittings in the case of ferrous or brass pipe; lead or solder type fittings in the case of lead and copper pipe; and with threaded fittings, insertible clamp type fittings or weldable fittings in the case of plastic pipe. Clamps, usable only with plastic pipe, shall be made of stainless steel. All piping materials and fittings shall be capable of withstanding a pressure of at least 75 pounds per square inch and a combined maximum suction and external pressure head equivalent to 50 feet of water.
SPS 386.06(9)(c)
(c)
Location.
No piping, other than that for venting, associated with the boat sewage system shall pass through the hull. The vent pipe shall terminate with an inverted U-bend, the opening of which shall be above the maximum water level in the toilet or holding tank. At least one vent terminal shall be constantly open to the atmosphere. The terminal of the outlet pipe shall be of the female connection type and be located above the holding tank in a manner that makes gravity discharge of the contents impractical. It shall have an airtight capping device marked "WASTE" and the cap and flange shall be embossed with the word "WASTE".
SPS 386.06(10)
(10)
Electrical system.
The electrical system associated with the boat holding tank or toilet system shall conform to accepted practice and create no hazards.
SPS 386.06(11)
(11)
Portable toilet.
Each portable toilet shall meet the material requirements and temperature resistance requirements of
subs.
(1)
and
(3)
. Exposed surfaces shall be of reasonably smooth and cleanable material. Capacity of the flush tank and holding tank shall be adequate for the intended use. Portable toilets shall be designed to prevent spillage of contents of the holding tank when the toilet is tipped or portable toilets shall be secured on board.
SPS 386.06 History
History:
Cr.
Register, September, 1980, No. 297
, eff. 10-1-80; renum. from H 80.06,
Register, May, 1983, No. 329
, eff. 6-1-83.
SPS 386.07
SPS 386.07
Overboard discharge inactivation.
No boat equipped with a means of discharging sewage directly from a toilet or holding tank into the water upon which the boat is moored or is moved shall enter inland or outlying waters of the state until such means of discharge is inactivated. An owner or operator of a boat equipped with such means of discharge shall contact a representative of the department of natural resources or a local law enforcement official with respect to inactivation before entering state waters. Overboard discharge inactivation shall include as a minimum either disconnection of the toilet piping, removal of the pumping device, securely plugging the discharge outlet, sealing of the toilet bowl with wax or other method approved by the official contacted. The inspecting official shall provide the boat owner or operator with a signed written statement as to the method of inactivation accepted. The owner or operator shall give information as to the inland or outlying waters he or she plans to navigate and as to the time of stay on such waters.
SPS 386.07 Note
Note:
Discharge of wastes from boats in any form would be contrary to s.
29.29 (3)
, Stats.
SPS 386.07 History
History:
Cr.
Register, September, 1980, No. 297
, eff. 10-1-80; renum. from H 80.07,
Register, May, 1983, No. 329
, eff. 6-1-83; correction made under s. 13.93 (2m) (b) 5., Stats.,
Register, February, 1994, No. 458
.
SPS 386.08
SPS 386.08
On-shore disposal facilities.
SPS 386.08(1)
(1)
Pump.
A self-priming pump, suitable for pumping sewage, shall be provided for the on-shore removal of sewage from boat holding tanks and toilets; the installation of which shall be in accord with the appropriate state and local regulations. Head characteristics and capacity shall be based on installation needs for the site. The pump may be either fixed in position or portably mounted.
SPS 386.08(2)
(2)
Suction hose.
The suction hose shall be of non-collapsible quality, preferably made with reinforcement. A quick-connect dripproof connector shall be fitted to the end of the hose that is attached to the boat piping outlet.
SPS 386.08(3)
(3)
Discharge hose.
Quality flexible hose, compatible with the pump characteristics, may be used. All permanent piping shall conform to the state plumbing regulations. [
chs.
SPS 382
and
384
]
SPS 386.08(4)
(4)
Sewage disposal requirements.
SPS 386.08(4)(a)
(a)
Public facilities.
When connection to a public sanitary sewer is economically feasible, the disposal piping shall be designed to discharge thereto. [
ch.
SPS 384
]
SPS 386.08(4)(b)
(b)
Private facilities.
When a public sewer is not available, a private sewage disposal system installed in compliance with applicable state plumbing regulations shall be provided unless adequate private treatment and disposal facilities are already available. [
chs.
SPS 382
and
383
]
SPS 386.08(5)
(5)
Water supply requirements.
The on-shore disposal facility shall be served by a water supply piping system to permit flushing of the facilities serviced. If a potable water supply is the source for flushing, the distribution piping shall be protected from backsiphonage and backpressure.
SPS 386.08(6)
(6)
Plan approval.
Every owner, personally or through an authorized representative, shall obtain written approval from the department prior to award of any new or modified construction of shore disposal facilities set forth in this section. Three sets of plans and specification of such new or modified shore disposal facilities to be constructed for the purpose of pumping out boat holding tanks and toilets, receiving sewage from portable toilets, and disposing of the sewage shall be submitted to the department for review as to acceptability. Plans and specifications shall cover in detail the materials to be used, the pump characteristics, the water supply system, and when applicable, the size and construction of the septic or holding tank, results of soil percolation and boring tests and layout of the soil absorption system. Location of all wells within 50 feet of the absorption system, the surface water high water level and the general topography of the area shall be shown on the plans.
SPS 386.08(7)
(7)
Disposal of portable toilet wastes.
Sewage from portable toilets shall be discharged into an approved fixture or other approved device designed to receive sewage.
SPS 386.08 History
History:
Cr.
Register, September, 1980, No. 297
, eff. 10-1-80; renum. from H 80.08,
Register, May, 1983, No. 329
, eff. 6-1-83; correction in (3), (4) (a), (b) made under s.
13.92 (4) (b) 7.
, Stats.,
Register December 2011 No. 672
.
SPS 386.09
SPS 386.09
Alternate facilities.
SPS 386.09(1)
(1)
Chemical type toilets.
Nonrecirculating chemical toilets may be used in lieu of a toilet flushed by water provided the container is not portable and the use of on-shore pumping facilities is provided for in the design of the unit. The design of the toilet and on-shore disposal adaptation shall be approved.
SPS 386.09(2)
(2)
Incinerator type toilets.
An approved incinerator type toilet may be used in lieu of a toilet flushed by water provided it is of adequate capacity to handle the passenger load. Equipment for on-shore removal and disposal of resulting ash shall be kept on board.
SPS 386.09(3)
(3)
Portable toilets.
An approved portable toilet may be used in lieu of a permanently installed toilet provided it is of adequate capacity to handle the passenger load. Sewage in the holding tank shall be properly disposed of on shore. Units shall be temporarily secured on board, if necessary, to prevent spillage of contents.
SPS 386.09 History
History:
Register, September, 1980, No. 297
, eff. 10-1-80; renum. from H 80.09,
Register, May 1983, No. 329
, eff. 6-1-83.
SPS 386.10
SPS 386.10
Operation and maintenance.
All facilities controlled by this chapter shall be maintained in good operating condition at all times. All necessary tools for repair and maintenance shall be kept on board or on dock, as the case may be, and shall be properly stored when not in use. Extra fuses for electrical equipment and extra indicator lights shall be on hand. Pump-out suction hoses should be adequately drained through the pump before disconnection and then properly stored or capped. Pumping equipment shall be shut off before the hose is disengaged from the boat outlet pipe. Any equipment on board shall not be used or operated to allow discharge of sewage to surface waters.
SPS 386.10 History
History:
Cr.
Register, September, 1980, No. 297
, eff. 10-1-80; renum. from H 80.10,
Register, May, 1983, No. 329
, eff. 6-1-83.
SPS 386.11
SPS 386.11
Prohibited facilities.
No person shall use or permit to be used as a holding facility for sewage a pail, plastic bag or any other type of portable, semiportable or disposal receptacle aboard boats not specifically permitted by the provisions of this chapter.
SPS 386.11 History
History:
Cr.
Register, September, 1980, No. 297
, eff. 10-1-80; renum. from H 80.11,
Register, May, 1983, No. 329
, eff. 6-1-83.
Next file:
Chapter SPS 387
/code/admin_code/sps/safety_and_buildings_and_environment/380_387/386
true
administrativecode
/code/admin_code/sps/safety_and_buildings_and_environment/380_387/386/06/8
administrativecode/SPS 386.06(8)
administrativecode/SPS 386.06? | http://docs.legis.wisconsin.gov/code/admin_code/sps/safety_and_buildings_and_environment/380_387/386/06/8 | 2013-05-18T20:39:26 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.legis.wisconsin.gov |
DotCloud offers support for all WSGI frameworks. Below is a quickstart guide for Pyramid apps. You can also read the DotCloud Python documentation for a complete overview.
Install DotCloud’s CLI by running:
$ pip install:
cherrypy Pyramid==1.3 # Add any other dependencies that should be installed as well
dotcloud.yml:
www: type: python db: type: postgresql
Learn more about the DotCloud buildfile.
wsgi.py::
Learn more about the DotCloud environment.json.! | http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/deployment/dotcloud.html | 2013-05-18T20:38:24 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.pylonsproject.org |
Ticket #95 (closed defect: wontfix)
verify charger current and battery temperature reading correctness
Description
Change History
comment:1 Changed 6 years ago by sean_mosko@…
- Owner changed from laforge@… to sean_chiang@…
- Milestone set to Phase 0
comment:2 Changed 6 years ago by mickey@…
I observe a couple of problems:
1.) charger current always seems to report 0 here, no matter whether mode is
fast_cccv , pre, or idle.
2.) even if I leave the charger on for very long, apm reports no more than 89%
comment:3 Changed 6 years ago by laforge@…
This should be mostly fixed in svn rev. 1518, but it would not hurt to actually
do the measurements and compare the readings given by the pmu driver with what
your lab equiment determines.
comment:4 Changed 6 years ago by alphaone@…
The voltage reading seems to be pretty consistent (I only have cheap measuring
hardware), but the output of /sys/bus/i2c/devices/0-0008/chgcur has too much
fluctuation to be correct.
Generally this doesn't seem to represent the charging current in mA.
Here are some values taken over a short period of time (10 sec)
root@fic-gta01:~$ cat /sys/bus/i2c/devices/0-0008/chgcur
533
root@fic-gta01:~$ cat /sys/bus/i2c/devices/0-0008/chgcur
560
root@fic-gta01:~$ cat /sys/bus/i2c/devices/0-0008/chgcur
346
root@fic-gta01:~$ cat /sys/bus/i2c/devices/0-0008/chgcur
746
root@fic-gta01:~$ cat /sys/bus/i2c/devices/0-0008/chgcur
853
root@fic-gta01:~$ cat /sys/bus/i2c/devices/0-0008/chgcur
1093
root@fic-gta01:~$ cat /sys/bus/i2c/devices/0-0008/chgcur
533
Kernel is uImage-2.6-moko8-r0_0_1584_0-fic-gta01.bin from buildhost
root@fic-gta01:~$ uname -a
Linux fic-gta01 2.6.20.4-moko8 #1 PREEMPT Fri Mar 30 22:10:44 UTC 2007 armv4tl
unknown
comment:7 Changed 6 years ago by laforge@…
- Cc werner@… added)
comment:8 Changed 6 years ago by elrond+bugzilla.openmoko.org@…"?
comment:9 Changed 6 years ago by oe@…
I've compared the displayed current intake w/ an actual measurement of the same.
There is no factor, the displayed values are
- all way off
- are bouncing up and down
comment:10 Changed 6 years ago by alphaone@…
Has anyone verified the readings yet?
At least the temperature readings are bogus on my device:
12
9
12
12
12
12
12
12
6
6
12
comment:11 Changed 6 years ago by alphaone@…
Kernel 2.6.22.1-moko10 on a GTA01bv4
comment:12 Changed 6 years ago by alex@…
comment:13 Changed 6 years ago by jserv@…
- Cc jserv@… added
comment:14 Changed 5 years ago by tick@…
- Owner changed from sean_chiang@… to willie_chen@…
comment:15 Changed 5 years ago by willie_chen@…
- Owner changed from willie_chen@… to michael@…
comment:16 Changed 5 years ago by anonymous
- Milestone Phase 0 deleted
Milestone Phase 0 deleted
comment:17 Changed 5 years ago by andy
- Status changed from new to closed
- Resolution set to wontfix
We are no longer working on GTA01, I do not believe anyone will address this issue.
Shanghai doesn't have much hardware support here. I'm going to assign this to
Taipei.
[Sean Chiang] If you don't have time please assign this bug to another engineer. | http://docs.openmoko.org/trac/ticket/95 | 2013-05-18T20:29:58 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.openmoko.org |
If you're having trouble logging in to your BlogPress Domains account, go to.
You will see the option to reset your password at the top left, or contact domains support via phone at the top left.
Please note that the domain support team is not the same as the BlogPress blogging and training support team. They will not be able to help you with questions about your blog. | http://docs.theblogpress.com/domain-names-and-email-addresses/blogpress-domains-login-lost-password | 2018-05-20T19:28:13 | CC-MAIN-2018-22 | 1526794863684.0 | [array(['https://d2mckvlpm046l3.cloudfront.net/ae78f131dca67ab8d96a0c0af1760df74a06c127/http%3A%2F%2Fcdn.danandjennifermedia.com%2FBlogPress%2FTraining3.0%2FBlogPress%2520Domains%2520Login.png',
None], dtype=object) ] | docs.theblogpress.com |
Using the Analysis Services DMX Query Designer (Reporting Services).
Note
You must train your model before designing your report. For more information, see Data Mining Projects (Analysis Services - Data Mining).
Design Mode).
Designing a Prediction Query.
The following example shows how to create a report dataset by using the DMX query designer.
Example: Retrieving Data from a Data Mining Model
The Reporting Services samples include a project that deploys two mining models based on the SQL Server sample database AdventureWorksDW. For more information, see Reporting Services Samples.
Install and then publish the AdventureWorks sample reports, and then deploy the Analysis Services cube. For more information, see Reporting Services Samples.
Open the AdventureWorks Sample Reports project, and then add an empty report definition (.rdl) file to the project.
Create a new dataset using the AdventureWorksAS shared data source. In the Dataset Properties dialog, click Query Designer. The MDX Analysis Services query designer opens in Design mode.
Click the Command Type DMX (
) button on the toolbar.
Click Yes to switch to the DMX Query Designer.
Click Select Model, expand Targeted Mailing, and then choose TM Decision Tree. Click OK.
Click Select Case Table, scroll to and then select vTargetMail (dbo). Click OK.
In the Grid pane, click Source and then select TM Decision Tree mining model. Bike Buyer appears in the Field column.
On the next line, click Source and then select vTargetMail Table. CustomerKey appears in the Field column.
Right-click the Query Design pane, and select Result to view the result set. A result set containing 18484 rows appears in the result view. To switch back to Design mode, right-click the Result pane and select Design.
Using Parameters.
For more information about how to manage the relationship between report parameters and query parameters, see How to: Associate a Query Parameter with a Report Parameter. For more information about parameters, see Adding Parameters to Your Report.
Example Query with Parameters
The following query retrieves report data indicating which customers are likely to purchase a bicycle, and the probability that they will do so.
SELECT t.FirstName, t.LastName, (Predict ([Bike Buyer])) as [PredictedValue], (PredictProbability([Bike Buyer])) as [Probability] From [TM Decision Tree] PREDICTION JOIN OPENQUERY([Adventure Works DW], 'SELECT [FirstName], [LastName], [CustomerKey], [MaritalStatus], [Gender], [YearlyIncome], [TotalChildren], [NumberChildrenAtHome], [HouseOwnerFlag], [NumberCarsOwned], [CommuteDistance] FROM [dbo].[DimCustom].[House Owner Flag] = t.[HouseOwnerFlag] AND [TM Decision Tree].[Number Cars Owned] = t.[NumberCarsOwned] AND [TM Decision Tree].[Commute Distance] = t.[CommuteDistance] WHERE (Predict ([Bike Buyer]))=@Buyer AND (PredictProbability([Bike Buyer]))>@Probability
Note
This example uses the DimCustomer table as an input table. This is for illustration only. In the AdventureWorks database, the DimCustomer table was used to train the model used in this example. Ordinarily, you would use an input table that was not previously used for training.
In this example, after you create the query, you must define the query parameters using the Query Parameters dialog box. To do this, click the Query Parameters (
) button on the query designer toolbar.
Add the parameters as follows. Each parameter must also have a default value.
Note
The parameters specified in the Query Parameters dialog box must be the same as the parameters in the query, without the at (@) symbol.
When you switch to Design view to create a report, new report parameters are created from the query parameters. The report parameters are presented to the user when the report is run. You can update the report parameters to provide a list of values from which the user can choose, specify a default value, or change other report parameter properties.
For more information about working with report parameters, see:
See Also | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms155812(v=sql.100) | 2018-05-20T20:23:35 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.microsoft.com |
You will receive Epaper CMS in zip file. Extract the zip file. Now upload all the files of htdocs folder on your server. Remember that the following folders must be writable on your server:
/uploads /sitemaps /assets /protected/runtime
Now create a MySql database. You can find a link to create MySql database on your server’s hosting control panel (Like Plesk or cPanel)
After creating database and assigning username and password to it, import db.sql (which comes in zip file of Epaper CMS) on database from phpmyadmin (or using similar MySql client)
Open /protected/config/db.php and enter database connection details.
Open index.php from root directory of your site and comment out (using //) the following lines:
defined('YII_DEBUG') or define('YII_DEBUG',true);
If you are using Windows server then you can delete .htaccess file or if you are using Linux server then you can delete web.config file. Both files can be found in root directory of site.
Congrats! Installation procedure has been completed. Now your site is ready.
Upgrading from version 2.3.x, 2.4.x, 2.5.x
To upgrade from version 2.3.x run the upgrade_from_2.3.x.sql file on your database.
To upgrade from version 2.4.x run the upgrade_from_2.4.x.sql file on your database.
To upgrade from version 2.5.x run the upgrade_from_2.5.x.sql file on your database.
Delete the /protected folder
Delete the /themes folder
Clear the /assets folder
Now upload the /protected and /themes folder from the ZIP file
Reenter the database details in /protected/config/db.php
Done!
If you are upgrading from any version perior to 2.6.2 then you also have to replace /framework folder
Remember default username is admin and password is pass
Please note that we have dropped the “Imag” theme on version 2.6 and introduced a news theme “Press”. Also note that if you have modified any theme of previous version then they will not be compatible with version 2.6.6 | http://docs.abhinavsoftware.com/epaper-cms/installation/ | 2018-05-20T19:13:06 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.abhinavsoftware.com |
- Using SSH keys with GitLab CI/CD. ## (change apt-get to yum if you use an RPM-based image) ## - 'which ssh-agent || ( to the services that you want to have an access to from within the build environment. If you are accessing a private GitLab repository you need to add it as a deploy key.
That's it! You can now have access to private servers or repositories in your build environment.
SSH keys when using the Shell executor
If you are using the Shell executor and not Docker, it is easier to set up an SSH key.
You can generate the SSH key from the machine that GitLab Runner is installed on, and use that key for all projects that are run on this machine.
First, you need to login to the server that runs your jobs.
Then from the terminal login as the
gitlab-runneruser:
sudo su - gitlab-runner
Generate the SSH key pair as described in the instructions to generate an SSH key. Do not add a passphrase to the SSH key, or the
before_scriptwill prompt for it.
As a final step, add the public key from the one you created earlier to the services that you want to have an access to from within the build environment. If you are accessing a private GitLab repository you need to add it as a deploy key.
Once done, try to login project
We have set up an Example SSH Project for your convenience that runs on GitLab.com using our publicly available shared runners.
Want to hack on it? Simply fork it, commit and push your changes. Within a few moments the changes will be picked by a public runner and the job will begin.. | https://docs.gitlab.com/ee/ci/ssh_keys/README.html | 2018-05-20T19:23:52 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.gitlab.com |
We only offer instructions for MailChimp using the MailChimp Widget and the Genesis eNews Extended widget.
If you look in the WordPress repository, the Aweber plugin gets terrible reviews and has not been updated in over a year - that's why we have not installed it.
You can install Aweber forms manually using a Raw HTML Snippet.
Here are instructions from Aweber on how to get the form code...-
Once you have the code, you can use a Raw HTML Snippet to add the code to your blog.
(You can also send us the form code and we can add it to your site for you - but you'll have to get the code from your Aweber account) | http://docs.theblogpress.com/plugins-and-widgets/how-to-add-aweber-subscribe-form | 2018-05-20T19:19:01 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.theblogpress.com |
Rollbar Laravel,
Disabling Laravel 5.5's auto-discovery
By default Rollbar Laravel supports auto-discovery. If you do not want to auto-discover the package add the following code to your
composer.json:
"extra": { "laravel": { "dont-discover": [ "rollbar/rollbar-laravel" ] } },
Enabling / disabling Rollbar on specific environments
First, disable auto-discovery..
execfunction call. If you do not want to allow
- enable_utf8_sanitization
- set to false, to disable running iconv on the payload, may be needed if there is invalid characters, and the payload is being destroyed
Default:
true
- environment
- Environment name, e.g.
'production'or
'development'
Default:
'production'
-:
false | https://docs.rollbar.com/docs/laravel/ | 2018-05-20T19:14:35 | CC-MAIN-2018-22 | 1526794863684.0 | [array(['https://travis-ci.org/rollbar/rollbar-php-laravel.svg?branch=master',
'Build Status'], dtype=object) ] | docs.rollbar.com |
CodeIgniter is based on the Model-View-Controller development pattern. MVC is a software approach that separates application logic from presentation. In practice, it permits your web pages to contain minimal scripting since the presentation is separate from the PHP scripting..
© 2014–2017 British Columbia Institute of Technology
Licensed under the MIT License. | http://docs.w3cub.com/codeigniter~3/overview/mvc/ | 2018-05-20T19:43:50 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.w3cub.com |
The LANSA Repository is a central storage facility for reusable field and file definitions.
These field and file definitions are available to all LANSA applications whether these are executed on the IBM i or Windows. Other PC applications may also use these definitions via LANSA's middleware, LANSA Open.
Repository descriptions, column headings and validations should be used for any file or field on a screen or in a report, rather than specifying this information in the specific program which displays the screen or produces the report.
By using the Repository to store field and file definitions centrally, they are easier to maintain and are used in a consistent way. For example, three different LANSA functions or client applications, even if built by three different developers, will be consistent because they use the same definitions for the same pieces of data. | https://docs.lansa.com/14/en/lansa009/content/lansa/insb9_002.htm | 2018-05-20T19:35:54 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.lansa.com |
Now that you have a little background on foreach loops with Miva Merchant's Template Language there is one more important concept that will give you a lot more control when working with foreach loops; it is called POS1. POS1 is a counter within a foreach loop. It always starts a 1 and contains the current iteration of the foreach loop. POS1 gets incremented each time the loops runs so it will always contain the current number of times the loop has been executed. You can also think of it as containing the current index of the array. For example, if we have eight products in our array, the first product it executes the code on, POS1 will be 1, on the next it will be 2, and so on until it reaches 8.
For example, on the category page there is a foreach loop that looks like this:
This foreach loops though all the products assigned to whatever category you happen to be on. Now, say for example you wanted to give the 4th product a border. We can use POS1 to help us test for the 4th iteration of the loop. Within the foreach loop you would add this code:
Some Important Notes About POS1
The most common use for POS1 is on the category page when you are trying to create 2, 3 or 4 column layouts for products. A lot of times you will need to apply a change in margin to the first product in a row that the rest of the products do not need.
Take this layout for example, it has a margin on the left of each product, except the first.
Here the second, third and fourth products all have a border left. However the first product does not. Every first product in every new row will not have a border left. This means the 1st, 5th, 9th, etc products all need a different class to achieve this layout. This is easily done using POS1 and the modulus (MOD) operator.
First a quick background on the MOD. It returns the remainder when you divide two numbers. So 6 MOD 3 will return 0 because 3 divides into 6 twice with a remainder of 0. However, 5 MOD 4 would return 1 because 4 divides into 5 once with a remainder of 1.
To make things a little bit more complicated, modulus will throw you a curveball when you are trying to divide a bigger number by a smaller number. I'll give you this example. If you are trying to divide something like 1 MOD 4, you will get a remainder of 1. The reason is that modulus only works with whole numbers, so 4 goes into 1 zero times, with a remainder of 1 (the original number).
We can use this to create the following If statement:
That is how you'd get it to show on the first, fifth, ninth and so on elements of the array. Remember, it works on the first element because 4 goes into 1 zero times, with a remainder of 1.
If you wanted to do some special code for the 4th, 8th, 12th and so on elements, you could use some code like this:
POS1 is a powerful feature of SMT that will allow you to create dynamic layouts quickly and easily using just CSS. | https://docs.miva.com/template-language/pos1-counter | 2018-05-20T19:05:23 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.miva.com |
3.0 to 2.x Object and Concept reference
Interana made major enhancements in 3.0 based on feedback from our customers. Included in these improvements is a new system of knowledge objects that replace and extend the 2.x named expressions as Interana's re-usable query building blocks. In 3.0, you can express any 2.x named expression logic in 3.0 knowledge objects with many new capabilities and simplifications.
Named Expression to Knowledge Object mapping
This table shows how specific named expressions in 2.x map to new knowledge objects in 3.0.
New Knowledge Objects
We have created new knowledge objects in 3.0 to make more logic re-usable between queries than in 2.x.
Other Renamed or Refactored Concepts
We have also renamed, and in some cases refactored, other components of Interana between 2.x and 3.0 beyond named expressions for additional clarity. | https://docs.interana.com/3/Getting_Started/3.0_to_2.x_Object_and_Concept_reference | 2018-05-20T19:47:31 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.interana.com |
When you are defining field definitions, you are really:
In addition to the expected field attributes, LANSA field definitions include:
All fields, including "working" fields should be defined in the Repository. After a period of time, most working fields will be defined. Defining all fields in the Repository will provide time savings for future projects as well as providing comprehensive cross-referencing capabilities. The more field definitions entered into the Repository, the higher the productivity gains.
Field definitions are stored in the Data Dictionary area of the Repository. | https://docs.lansa.com/14/en/lansa009/content/lansa/insb9_003.htm | 2018-05-20T19:18:58 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.lansa.com |
- The Kubernetes executor
- Workflow
- Connecting to the Kubernetes API
- The keywords
- Define keywords in the config toml
- Using volumes
- Using Docker in your builds and an additional container for each
service defined by the GitLab CI yaml. The names for these containers
are as follows:
- The build container is
build
- The services containers are
svc-Xwhere
Xis
[0-9]+
Note that when services and containers are running in the same Kubernetes pod, they are all sharing the same localhost address. The following restrictions are then applicable:
- The services are not accessible via their DNS name, you need to use localhost instead.
--discovery to run Kubernetes Pods in
namespace_overwrite_allowed: Regular expression to validate the contents of the namespace overwrite environment variable (documented following). When empty, it disables the namespace overwrite feature
privileged: Run containers with the privileged flag
cpu_limit: The CPU allocation given to build containers
memory_limit: The amount of memory allocated to build containers
memory_request: The amount of memory requested from build containers.
node_selector: A
tableof
key=valuepairs of
string=string. Setting this limits the creation of pods to kubernetes nodes matching all the
key=valuepairs timeout allow you to create a new isolated namespace dedicated for CI purposes, and deploy a custom
set of Pods. The
Pods spawned by the runner will take place on the overwritten namespace, for simple
and straight forward access between container during the CI stages.
variables: KUBERNETES_NAMESPACE_OVERWRITE: ci-${CI_COMMIT_REF_NAME}
Furthermore, to ensure only designated namespaces will be used during CI runs, inform the configuration
namespace_overwrite_allowed with proper regular expression. When left empty the overwrite behaviour is
disabled.
Overwriting Kubernetes Default Service Account
Additionally, Kubernetes service account can be overwritten on
.gitlab-ci.yml file, by using the variable
KUBERNETES_SERVICE_ACCOUNT_OVERWRITE.
This approach allow you to specify a service account that is attached to the namespace, usefull when dealing with complex RBAC configurations.
variables: KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: ci-service-account
usefull when overwritting the namespace and RBAC is setup in the cluster.
To ensure only designated service accounts will be used during CI runs, inform the configuration
service_account_overwrite_allowed or set the environment variable
KUBERNETES_SERVICE_ACCOUNT_OVERWRITE_ALLOWED
with proper. Also, multiple annotations can be applied. For example:
variables: KUBERNETES_POD_ANNOTATIONS_1: "Key1=Val1" KUBERNETES_POD_ANNOTATIONS_2: "Key2=Val2" KUBERNETES_POD_ANNOTATIONS_3: "Key3=Val3""
Using volumes
As described earlier, volumes can be mounted in the build container. At this time hostPath, PVC, configMap, and secret volume types are supported. Docker in your builds
There are a couple of caveats when using docker in your builds while running on a kubernetes cluster. Most of these issues are already discussed in the Using Docker Build section of the gitlab-ci documentation but it is worth it
DOCKER_HOST=tcp://localhost:2375 in your environment variables of the build container..
Error's. | http://docs.gitlab.com/runner/executors/kubernetes.html | 2018-05-20T19:27:28 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.gitlab.com |
Before explaining what goes into a file definition, it is important that you understand the concept of a LANSA file definition and how this relates to the IBM i concept of a file definition.
A 'file' is a normal IBM i database file, in which records can be retrieved, added, changed or deleted.
To create a file, LANSA uses a file definition. In IBM I terms, a file definition contains:
LANSA's file definitions include:
As well, the LANSA file definition can also contain a number of LANSA file features, such as:
For more details about LANSA file definitions refer to the section What is a File and What is a File Definition? in the LANSA for i User Guide. | https://docs.lansa.com/14/en/lansa009/content/lansa/insb9_004.htm | 2018-05-20T19:41:53 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.lansa.com |
The script is based on Named Entity Detection capacities offered by spacy.
It allows to identify and index persons, places, organizations, etc. At the moment it can handle 5 different languages. In English, one can select among 10 kinds of entities (date: DATE, products: PRODUCT, people: PERSON, geological entities: GPE, etc.). 4 kinds of entities are automatically retrieved in the 4 other languages (geographic entities (LOC), organizations (ORG), person (PERSON), geopolitical entities (GPE)).
As an illustration, see the result of a correspondance analysis mixing named entities extracted from the bible:
| https://docs.cortext.net/named-entity-recognizer/ | 2018-05-20T19:38:27 | CC-MAIN-2018-22 | 1526794863684.0 | [array(['https://docs.cortext.net/wp-content/uploads/2018/04/Capture-d’écran-2018-04-22-à-17.28.59-1024x321.png',
None], dtype=object)
array(['https://docs.cortext.net/wp-content/uploads/2016/11/Capture-d’écran-2016-11-05-à-19.33.48-1024x637.png',
None], dtype=object) ] | docs.cortext.net |
An I/O Module (also called an Object Access Module) is a program created by LANSA to handle all access to files. Each file used by LANSA will have an I/O Module whether created by LANSA or defined as an OTHER file used in existing non-LANSA applications.
Each I/O module contains the Repository features defined for its file and fields. Centralizing these features in an I/O module reduces the impact of file changes on your applications. By channeling file access through the I/O module, whether from a IBM i or client application, the validity of your data is protected because it is always subjected to the same validation and security checks.
LANSA's I/O modules on the IBM i are compiled in RPG, making IBM i data access via LANSA Open very efficient. An RPG compiler is therefore required on the IBM i on which the Repository is maintained. (The compiler used for I/O modules varies with the deployment platform).
The step that creates the I/O module is called "Making the file operational". Some changes to the Repository definitions require the I/O module to be regenerated and some do not. Following is a list of file and field changes indicating whether the file needs to be "made operational" after the change, thus regenerating the I/O module. | https://docs.lansa.com/14/en/lansa009/content/lansa/insb9_005.htm | 2018-05-20T19:22:28 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.lansa.com |
Rendering¶
To start rendering in Natron you need to use the
render(effect,firstFrame,lastFrame,frameStep)
or
render(tasks) functions
of the App class.
The parameters passed are:
- The writeNode: This should point to the node you want to start rendering with
- The firstFrame: This is the first frame to render in the sequence
- The lastFrame: This is the last frame to render in the sequence
- The frameStep: This is the number of frames the timeline should step before rendering a new frame, e.g. To render frames 1,3,5,7,9, you can use a frameStep of 2
Natron always renders from the firstFrame to the lastFrame. Generally Natron uses multiple threads to render concurrently several frames, you can control this behaviour with the parameters in the settings.
Let’s imagine there’s a node called Write1 in your project and that you want to render frames 20 to 50 included, you would call it the following way:
app.render(app.Write1,20,50)
Note
Note that when the render is launched from a GuiApp, it is not blocking, i.e: this function will return immediately even though the render is not finished.
On the other hand, if called from a background application, this call will be blocking and return once the render is finished.
If you need to have a blocking render whilst using Natron Gui, you can use the
renderBlocking() function but bear in mind that
it will freeze the user interface until the render is finished.
This function can take an optional frameStep parameter:
#This will render frames 1,4,7,10,13,16,19 app.render(app.Write1, 1,20, 3)
You can use the after render callback to call code to be run once the render is finished.
For convenience, the App class also have a
render(tasks) function taking
a sequence of tuples (Effect,int,int) ( or (Effect,int,int,int) to specify a frameStep).
Let’s imagine we were to render 2 write nodes concurrently, we could do the following call:
app.render([ (app.Write1,1,10), (app.WriteFFmpeg1,1,50,2) ])
Note
The same restrictions apply to this variant of the render function: it is blocking in background mode and not blocking in GUI mode.
When executing multiple renders with the same call, each render is called concurrently from the others. | http://natron.readthedocs.io/en/master/devel/renderingWriteNode.html | 2018-05-20T19:33:55 | CC-MAIN-2018-22 | 1526794863684.0 | [] | natron.readthedocs.io |
TimeDissolve node¶
This documentation is for version 1.0 of TimeDissolve.
Description¶
Dissolves between two inputs, starting the dissolve at the in frame and ending at the out frame.
You can specify the dissolve curve over time, if the OFX host supports it (else it is a traditional smoothstep).
See also | http://natron.readthedocs.io/en/master/plugins/net.sf.openfx.TimeDissolvePlugin.html | 2018-05-20T19:30:18 | CC-MAIN-2018-22 | 1526794863684.0 | [] | natron.readthedocs.io |
Selectors
Selectors provide a simple way to select items within your project. They're used everywhere in Kumu (views, filter, focus, finder, showcase... you get the point!) so you better cozy up to them.
You can build selectors by hand, or you can use our selector builder while you're still getting comfortable with them (look for the rocket icon once you click on search)
You can always use general field selectors
[field=value] but we've
built in a number of friendly shorthands to make selectors as easy to work
with as possible
We'll first run through the available shorthands, then we'll cover the general field selector and the advanced queries you can create with them.
Shorthand Selectors
Slugs
Before we dive in, we need to talk about slugs. A slug is nothing more than a simplified version of a value. To slug a value, simply:
- Remove all special characters
- Convert all spaces to a single dash
- Lowercase everything
That's all there is to it! Here are some examples to clarify just in case the idea is still a little hazy:
Slugs are your friend! All shorthand selectors rely on slugs so make sure your comfortable with them before moving on.
By Collection
Selecting all of a given collection is pretty simple.
element // select all elements connection // select all connections loop // select all loops
By Type
Selecting all of a specific type is pretty simple too. (Noticing a pattern yet?)
For elements, just take the type and slug it. For connections, slug the type and add "-connection".
person // select all elements with "Person" element type future-project // select all elements with "Future Project" element type personal-connection // select all connections with "Personal" connection type business-connection // select all connections with "Business" connection type
By Label
Selecting specific items by label is pretty simple too. (Promise, that's the last one!)
Just slug the label and add a "#" to the front of it:
#jeff-mohr // select element "Jeff Mohr" #thinking-in-systems // select element "Thinking in Systems" #b1 // select loop "B1"
By ID
Sometimes you want to assign a friendly id so you don't need to use the full label. Easy! Just assign your own "ID" field and now you can use that to select items directly.
The syntax is the exact same as the label selector above.
#project-1234 // select item with id "project-1234"
By Tag
To select by tag simply slug the tag and add a "." to the front of it:
.mission-critical // select anything tagged with "Mission Critical"
General field Selector
While the shorthand selectors are great for most cases, they're only useful when you just need to check for an exact value. field selectors are longer to write but they're also much more powerful.
["element type"="person"] // select all elements with "Person" element type ["description"] // select if description is present ["description"*="kumu"] // select if description contains "kumu"
When working with numbers you can also use relative selectors:
[employees<20] [employees>20] [employees<=20] [employees>=20]
Check out the selector reference for the full list of available operators.
Complex Selectors
The selectors we've covered so far are just building blocks. The real power of selectors comes from being able to chain them together to create complex queries.
Connect selectors back-to-back to AND them together (match all), or join them with a comma to OR them (match any).
organization, person, project // select all organizations, people, and projects .young.influential[sex=female] // select all young influential women]
Pseudo-selectors
Pseudo-selectors allow you to select elements, connections, and loops based on their status (for example, in or out of focus) or their relationship to other data.
Connected from and connected to
With the
:from and
:to pseudo-selectors, you can select connections based on the elements those connections are attached to. The basic syntax is
:from(selector) and
:to(selector).
To build your own, just replace
selector with any valid selector. For example:
:from(organization)selects all connections that are coming from elements with type “organization”
:to(#my-element)selects all connections pointing to an element with the label "My Element"
:from(["level of influence"="high"]), :to(["level of influence"="high"])selects all connections from or to elements that have "high" in their "level of influence" field
Connection direction
Use the
:directed,
:undirected, and
:mutual pseudo-selectors to select connections based on their direction.
:mutual["strength" > 1]selects all mutual connections whose Strength is greater than 1
:directed["connection type"="donation"]selects all directed connections whose Type is Donation
Focus root
When you click and hold on an element, you'll apply the focus effect to your map. The element you clicked will be the root of the focus, and the focus will extend a certain distance away from the root.
You can also select multiple elements or connections before you apply the focus effect. In that case, all the elements and connections you selected will be considered focus roots.
To select your focus root(s) and apply a style, use the
:focus pseudo-selector. For example:
// select all the focus roots that are elements, and apply a shadow to them element:focus { shadow-size: 1.5; shadow-color: inherit; } // select all the focus roots that are connections, and make them dashed lines instead of solid connection:focus { pattern: dashed; } // select all focus roots, and change their color to #428cba (blue) :focus { color: #428cba; }
Not
The
:not pseudo-selector is useful when you want to select items that do not match a selector. The basic syntax is
:not(selector).
To build your own, just replace
selector with any valid selector. For example:
:not(organization)selects anything on the map that doesn't have the element type "Organization"
connection:not(:focus)selects any connection that is not a focus root
element:not(["tags"*="blue"])selects any element whose Tags field does not contain the tag "blue" | https://docs.kumu.io/guides/selectors.html | 2018-06-17T22:01:55 | CC-MAIN-2018-26 | 1529267859817.15 | [array(['../images/search-selector.png', 'selector rocket'], dtype=object)] | docs.kumu.io |
Pipe Fittings Manufacturer | Flanges Manufacturer domestic or commercial environments, whereas piping is often used to describe high-performance (e.g. high pressure, high flow, high temperature, hazardous materials) conveyance of fluids in specialized applications. The term tubing is sometimes used for lighter-weight piping, especially types that are flexible enough to be supplied in coiled form.
For example pipes need to conform to the dimensional requirements of :
ASME B36.10M - Welded and Seamless Wrought Steel Pipe ASME B36.19M - Stainless Steel Pipe ASME B31.3 2008 - Process PipinG ASME B31.4 XXXX - Power Piping The B31.3 / B31.4 code has requirements for piping found in petroleum refineries; chemical, pharmaceutical, textile, paper, semiconductor, and cryogenic plants; and related processing plants and terminals. This code specifies requirements for materials and components, design, fabrication, assembly, erection, examination, inspection, and testing of piping. This Code is applicable to piping for all fluids including: (1) raw, intermediate, and finished chemicals; (2) petroleum products; (3) gas, steam, air and water; (4) fluidized solids; (5) refrigerants; and (6) cryogenic fluids.
Keyword(s):
References: | http://docs.tdiary.org/en/?nimika | 2018-06-17T22:15:41 | CC-MAIN-2018-26 | 1529267859817.15 | [] | docs.tdiary.org |
Removing Juju objects
This section looks at how to remove the various objects you will encounter as you work with Juju. These are:
- applications
- units
- machines
- relations
To remove a model or a controller see the Models and Controllers pages respectively.
For guidance on what to do when a removal does not apply cleanly consult the Troubleshooting removals page.
Removing applications
An application can be removed with:
juju remove-application <application-name>
If dynamic storage is in use, the storage will, by default, be detached and
left alive in the model. However, the
--destroy-storage option can be used to
instruct Juju to destroy the storage once detached. See
Using Juju Storage for details on dynamic storage.
Note: Removing an application which has active relations with another running application will terminate that relation. Charms are written to handle this, but be aware that the other application may no longer work as expected. To remove relations between deployed applications, see the section below.
Removing units
It is possible to remove individual units instead of the entire application (i.e. all the units):
juju remove-unit mediawiki/1
To remove multiple units:
juju remove-unit mediawiki/1 mediawiki/2 mediawiki/3 mysql/2
In the case that these are the only units running on a machine, unless that
machine was created manually with
juju add machine, the machine will also be
removed.
The
--destroy-storage option is available for this command as it is for the
remove-application command above.
Removing machines
Juju machines can be removed like this:
juju remove-machine <number>
However, it is not possible to remove a machine which is currently allocated to a unit. If attempted, this message will be emitted:
error: no machines were destroyed: machine 3 has unit "mysql/0" assigned
By default, when a Juju machine is removed, the backing system, typically a
cloud instance, is also destroyed. The
--keep-instance option overrides this;
it allows the instance to be left running.
Removing relations
A relation can be removed easily enough:
juju remove-relation mediawiki mysql
In cases where there is more than one relation between the two applications, it is necessary to specify the interface at least once:
juju remove-relation mediawiki mysql:db | https://docs.jujucharms.com/2.1/en/charms-destroy | 2018-06-17T22:02:43 | CC-MAIN-2018-26 | 1529267859817.15 | [] | docs.jujucharms.com |
Extensions¶
Automate functionality can be easily extended by various extension modules, and it is also possible to make your own Automate extensions, for details see Making your own Automate Extensions. The following extensions are included in Automate:
- Web User Interface for Automate
- WSGI Support for Automate
- Remote Procedure Call Support for Automate
- Arduino Support for Automate
- Raspberry Pi GPIO Support for Automate | http://python-automate.readthedocs.io/en/0.10.0/official_extensions/index.html | 2018-06-17T21:41:24 | CC-MAIN-2018-26 | 1529267859817.15 | [] | python-automate.readthedocs.io |
Let's say you want to open a conversation to notify your customer support or sales team about a new form submission this is how you can accomplish it.
First of all, you will need to create a bridge file to communicate with our APIs.
In this file, depending on your environment you will need to access our public API endpoint
For more info check out here the API docs.
For example, we will use PHP to show you how to create a function to open a new conversation from a lead to an account.
In the send_message function below you have three fields to pass:
Remember to get a Customerly API Key from your Project settings as well.
static function send_message($from_email, $account_id, $message) { $ch = curl_init(); $payload = array( "from" => array( "type" => "lead", "email" => $from_email ), "to" => array( "type" => "admin", "id" => $account_id, ), "content" => $message, ) curl_setopt($ch, CURLOPT_URL, ""); curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); curl_setopt($ch, CURLOPT_HEADER, FALSE); curl_setopt($ch, CURLOPT_POST, TRUE); curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload)); curl_setopt($ch, CURLOPT_HTTPHEADER, array( "Authentication: AccessToken: " . CUSTOMERLY_API_KEY )); $response = curl_exec($ch); curl_close($ch); return $response; } | https://docs.customerly.help/api/how-to-open-a-new-conversation-from-a-user-to-a-teammate-via-api | 2021-07-24T03:25:09 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.customerly.help |
GroupDocs.Viewer for .NET 19.9 Release Notes
This page contains release notes for GroupDocs.Viewer for .NET 19.9
Major Features
There are 13 features, improvements and bug-fixes in this release, most notable features are:
- Default font setting support for rendering PDF into PNG/JPG
- Added support of file formats:
- OpenXPS File (.oxps)
- OpenDocument Flat XML Spreadsheet (.fods)
- Microsoft Project 2019 (.mpp)
Full List of Issues Covering all Changes in this Release
Public API and Backward Incompatible Changes
- No public API changes in this version | https://docs.groupdocs.com/viewer/net/groupdocs-viewer-for-net-19-9-release-notes/ | 2021-07-24T05:02:57 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.groupdocs.com |
Login to the Secure Mail Flow console
- Enter the URL in the browser.
- Log in to the console using admin id postmaster@<yourdomain> where <yourdomain> is replaced by your domain name. The password for the console would have been shared with you via email when your domain was provisioned, or you can use the forgot password application to reset your password.
Note: If you have hosted multiple domains on SkyConnect or ClrStream, the login id for the Secure Mail Flow console will be the postmaster id of the first domain created.
- Click the Log On button to proceed.
On successful login, you will see the dashboard showing charts for email traffic relayed through Secure Mail Flow service.
As an admin of your organization,
You have access to the Logs of all the domains under your organization. You can analyze these logs to track any missing mail.
You can also search through quarantine mail and delete or release mail from the quarantine.
Reset the password using the Forgot Pass application
1. Go to the login page of the HES Quarantine console by entering the URL in your browser
2. Click the Forgot your password? link.
3. The application checks if you have an online account. Choose No on the pop-up message to proceed.
4. On the Reset Password Request form,
The User name is the postmaster id of your first SkyConnect domain (postmaster@[domain name])
The Admin email id is the ID provided when your organization was provisioned with Mithi.
Enter captcha letters displayed in the image
Submit the information.
5. You will receive an email from [email protected] on your registered email id. This mail will have the following information:
- HES Trend Micro login page link
- User name
- Temporary Password to login the console
Use the information to login and reset your password.
Changing the password from the console
1. Logon to the Admin Console
2. Click user name and navigate to Change Password option
3. On the Change Admin Password screen, provide the Old password, enter the New password and confirm the same.
4. Click Save. | https://docs.mithi.com/home/how-login-to-the-secure-mail-flow-console | 2021-07-24T05:37:18 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['https://skyconnect.mithi.com/wp-content/uploads/2018/05/HES_Trend_Forgot_Password1.png',
None], dtype=object)
array(['https://skyconnect.mithi.com/wp-content/uploads/2018/05/HES_Trend_Forgot_Password2.png',
None], dtype=object)
array(['https://skyconnect.mithi.com/wp-content/uploads/2018/06/trend_update_password_of_account.png',
None], dtype=object)
array(['https://skyconnect.mithi.com/wp-content/uploads/2018/05/Trend_Micro_Change_Password.png',
None], dtype=object) ] | docs.mithi.com |
Limited Availability NoticeGateway clustering is not generally available. This feature is visible in your account only if you are participating in the limited availability program. It will be generally available in future releases. Contact OpsRamp Support for additional information.
A gateway cluster is a set of virtual machines (VMs) running gateway software, which function as a single, logical machine. The gateway cluster provides:
- High availability against the failure of a node in the cluster.
- High availability against the failure of a physical server on which the nodes run.
- Flexible horizontal scaling of nodes to manage more IT assets.
How a gateway cluster works
A gateway cluster is a set of virtual machines, which run applications that discover and monitor your environment, as shown in the following figure:
Gateway nodes run on physical servers typically running a hypervisor with other VMs unrelated to the gateway. Nodes use a shared NFS storage volume to persist state shared among gateway nodes.
Each node also runs a lightweight Kubernetes - MicroK8s distribution. Kubernetes enables gateway nodes to work as a single, logical machine, which automatically schedules gateway applications between nodes.
If a node fails or the host on which the node runs fails, or both a node and host fail to restart applications on a different node, the logical node restarts the applications. The following figure shows how a gateway cluster works in the presence of faults:
Deployment options
Gateway clusters can be deployed in several configurations, depending on availability and horizontal scaling goals. The following figures illustrate three design points:
Prerequisites
To deploy a gateway cluster, make sure your environment meets these requirements: | https://docs.opsramp.com/platform-features/gateways/gateway-cluster/ | 2021-07-24T05:19:50 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['https://docsmedia.opsramp.com/diagrams/gateway-clusters-1.png',
'Gateway Cluster'], dtype=object)
array(['https://docsmedia.opsramp.com/diagrams/gateway-clusters-2.png',
'Gateway Cluster'], dtype=object)
array(['https://docsmedia.opsramp.com/diagrams/gateway-clusters-3.png',
'Gateway Cluster'], dtype=object)
array(['https://docsmedia.opsramp.com/diagrams/gateway-clusters-4.png',
'Gateway Cluster'], dtype=object) ] | docs.opsramp.com |
Backup and restore of a Kubernetes cluster involves the following components: clustes spec, secrets, etcd database content and persistent volumes content.
Kublr implements the full cluster backup procedure described in the Backup section. However, this solution might not fit customer environment.
To aid the customer in implementing the backup procedure that best suits the requirements, we provide low-level tools for application-level etcd snapshot and restore.
To create a snapshot, run the following command as a root on any of the master nodes:
/opt/kublr/bin/kublr etcd backup --file file.db
This will create an application-level snapshot of etcd database and place it
to the
file.db file. This command is intended to
be run as part of a script that will generate a timestamped name for
the file and/or upload it to the final destination.
As with any backup, it is advisable to store it somewhere outside of the node file system.
The file is a standard application-level etcd snapshot as created by
etcdctl snapshot create command or equivalent etcd API call.
The snapshot contains consensus data, so which master nodes is used for the snapshot is not important. However, to avoid a single point of failure, you might want to schedule snapshotting on several nodes.
The snapshot made in the previous step can be restored manually according to the procedure described in etcd disaster recovery document.
Kublr uses
etcdX (where X is the master node ordinal) for
etcd instance names and for peer URLs.
You can find etcd data volume location and other aspects of Kublr etcd
environment in the file
/etc/kubernetes/manifests/etcd.manifest. See section Addendum: locating etcd data volume
for more information.
Note that etcdctl requires that peer URLs must be resolvable during
the restore. Names
etcdX.kublr.local are not part of the Kubernetes
DNS (which will not be operational when etcd is down). You
must use
/etc/hosts or other means to make them resolvable.
Also note that Kublr marks etcd as the critical pod, so you cannot stop etcd instance manually (Kublr will forcefully start it again).
Even if you somehow manage to run the restore without stopping the Kubernetes, (may be during time window of etcd restart), the replacement of etcd database under active Kubernetes API server will render the Kubernetes inoperational, so the cold restart will be needed anyway.
You must stop all Kublr/Kubernetes services by issuing the following commands as root:
service Kublr stop service Kublr-kubelet stop service Docker stop # at this point you can perform the restore # wait until all other master nodes reach this point service Kublr start # no need to manually start Kublr-kubelet and Docker, # the Kublr service will start them automatically
After the successful restore, worker nodes also will need to be restarted.
Warning: etcd database restore by itself does not restore the content of persistent volumes. This must be done separately, preferrably before the attempt to start the node.
To avoid a tedious tasks of finding node ordinals and constructing the
correct etcdctl environment and command arguments for every node
we provide a
kublr etcd restore subcommand.
To restore the etcd database using this command, issue the following commands as root on every master node:
# distribute the snapshot file to every master node service Kublr stop service Kublr-kubelet stop service Docker stop /opt/kublr/bin/kublr etcd restore --file file.db # wait until all other master nodes reach this point service Kublr start
As with manual restore, all master nodes must be restored from the same snapshot file.
After the successful restore, worker nodes also will need to be restarted.
Warning: etcd database restore by itself does not restore the content of persistent volumes. This must be done separately, preferrably before the attempt to start the node.
The command
kublr etcd restore does not perform the actual restore,
it just schedules the restore to be performed on etcd pod startup.
To find the output of the actual restore operation, check the logs of
etcd container using
docker logs command. Equivalent kubectl command
will be available only if the restore was successful.
Etcd container has name starting with
k8s_etcd_k8s-etcd-.
The restore operation output most likely will be at the top of the log, before the output of the etcd process.
To abort the scheduled restore, remove the file
/mnt/master-pd/etcd/restore.db
(See section Addendum: locating etcd data volume to find the etcd
data volume location for your instance).
Warning: The etcd restore is actually a destructive operation,
so avoid dry-running
kublr etcd restore
When only one of etcd instances is failed there is no need to restore entire cluster database from the backup. In etcd 3.2 and higher, the single failed node can be restored by replicating the data from the cluster quorum.
The procedure for restoring the single node is also described in the document etcd disaster recovery. In Kublr environment, this procedure is unconvenient because it requires stopping etcd instance and reproducing etcd environment for etcdctl command.
To aid in the recovery of single etcd instance, Kublr 1.11.2 and higher implements a control mechanism to schedule commands to be performed by running etcd pod.
To schedule the restore of single node by replication from cluster quorum, create
a file named
command in the root directory of etcd data volume. This file must contain a
string
reinitialize Example:
echo reinitialize > /mnt/master-pd/etcd/command
See section Addendum: locating etcd data volume to find the etcd data volume location for your instance
The command will be performed by etcd pod several seconds after the file creation, or
on next restart if the pod is in crash loop. The
command file
will be removed after the execution. The results of the execution
can be checked in the pod/container logs. Some information will be available in
command-result
file in the same directory.
This procedure does not involve replacing the content of etcd database, so it does not require restarting Kublr and Kubernetes services.
Warning In the future, additional commands can be added to the pod control mechanism,
so avoid creating the
command file with other content.
By default, Kublr starts etcd container using host directory
/mnt/master-pd/etcd for
etcd data volume. However this path can be overriden by custom cluster spec or
by platform-specific defaults.
Actual host path to etcd data volume can be found by two metods:
/opt/kublr/bin/kublr validate. Output of this command is Yaml data stream. The host path of etcd data volume is controlled by the parameter
etcd_storage.path.
kubectl describe podcommand or from Yaml file
/etc/kubetnetes/manifests/etcd.manifeston the master node.
hostPath.pathof the volume named
data.' Volume parameters are located in sectionspec.volumes` of the manifest. | https://docs.kublr.com/articles/backup/etcd-backup/ | 2021-07-24T05:06:51 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.kublr.com |
sudo docker run --name kublr -d --restart=unless-stopped -p 9080:9080 kublr/kublr:1.20.0 release brings Kuberntes 1.19.7, RedHat Enterprise Linux 8 support, ELK 7.9.0 stack with SearchGuard plugin v45.0.0, multiple significant Azure deployment improvement including Azure Virtual Machine Scale Sets, zones and ARM resource extensions support, improved cloud deployment architecture, as well as component versions upgrades and numerous improvements and fixes in UI, UX, backend, agent, resource usage, reliability, scalability, and documentation.
Additionally, you need to download the BASH scripts from
You also need to download Helm package archives and Docker images:
(optional if the control plane 1.20.0 images are imported already)
(optional if the control plane 1.20.0.
Non OSS ELK version cannot be enabled. By default Kublr configures ELK stack without X-Pack capabilities enabled. If this capability is necessary for your deployment, postpone Kublr upgrade until Kublr 1.20.1 is available. | https://docs.kublr.com/releasenotes/1.20/release-1.20.0/ | 2021-07-24T04:06:34 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.kublr.com |
Package ID prefix reservation
Package owners can reserve and protect their identity by reserving ID prefixes. Package consumers are provided with additional information when the packages that they are consuming are not deceptive in their identifying properties.
n appears only see the below visual indicators on the nuget.org gallery and in Visual Studio 2017 version 15.4 or later:
nuget.org Gallery
Visual Studio
ID prefix reservation application process
Review the acceptance criteria for prefix ID reservation.
Determine the prefixes are notified of acceptance or rejection (with the criteria that caused rejection). We may need to ask additional identifying questions to confirm owner identity.
ID prefix reservation criteria
When reviewing any application for ID prefix reservation, the NuGet.org team will evaluate the application against the below criteria. Please note that not all criteria need to be met for a prefix to be reserved, but the application may be denied if there is not substantial evidence of the criteria being met (with an explanation given):
Does the package ID prefix properly and clearly identify the reservation owner?
Has the owner enabled 2FA for their NuGet.org account?
Is the package ID prefix something common that should not belong to any individual owner or organization?
Would not reserving the package ID prefix cause ambiguity, confusion, or other harm to the community?
When publishing packages to NuGet.org within your ID prefix reservation, the following best practices must be considered:
Are the identifying properties of the packages that match the package ID prefix clear and consistent (especially the package author)?
Do the packages have a license (using the license metadata element and NOT licenseUrl which is being deprecated)?
If the packages have an icon (using the iconUrl metadata element), are they also using the icon metadata element? It is not a requirement to remove the iconUrl but embedded icons must be used.
Consider reviewing the full package authoring best practices guide in addition to the points above.
Third party feed provider scenarios
If a third party feed provider is interested in implementing their own service to provide prefix reservations, they can do so by modifying the search service in the NuGet V3 feed providers. The change in the feed search service is to add the
verified property. The NuGet client will not support the added property in the V2 feed.
For more information, see the documentation about the API's search service.
Package ID Prefix Reservation Dispute Policy
If you believe an owner on NuGet.org was assigned a package ID prefix reservation that goes against the above listed criteria, or infringes on any trademarks or copyrights, please email [email protected] with the ID prefix in question, the owner of the ID prefix, and the reason for disputing the assigned prefix reservation. | https://docs.microsoft.com/en-us/nuget/nuget-org/id-prefix-reservation | 2021-07-24T03:54:56 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['media/nuget-gallery-reserved-prefix.png', 'nuget.org Gallery'],
dtype=object)
array(['media/visual-studio-reserved-prefix.png', 'Visual Studio'],
dtype=object) ] | docs.microsoft.com |
syntax ^
Documentation for syntax
^ assembled from the following types:
language documentation Regexes
(Regexes) regex ^ ^
The
^ anchor only matches at the start of the string:
say so 'karakul' ~~ / raku/; # OUTPUT: «True»say so 'karakul' ~~ /^ raku/; # OUTPUT: «False»say so 'rakuy' ~~ /^ raku/; # OUTPUT: «True»say so 'raku' ~~ /^ raku/; # OUTPUT: «True»
The
$ anchor only matches at the end of the string:
say so 'use raku' ~~ / raku /; # OUTPUT: «True»say so 'use raku' ~~ / raku $/; # OUTPUT: «True»say so 'rakuy' ~~ / raku $/; # OUTPUT: «False»
You can combine both anchors:
say so 'use raku' ~~ /^ raku $/; # OUTPUT: «False»say so 'raku' ~~ /^ raku $/; #»
language documentation Variables
(Variables)-isms).
language documentation Traps to avoid
From Traps to avoid
(Traps to avoid) Raku» | https://docs.raku.org/syntax/$CIRCUMFLEX_ACCENT | 2021-07-24T03:47:20 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.raku.org |
Security Policy¶
Read the Docs adheres to the following security policies and procedures with regards to development, operations, and managing infrastructure. You can also find information on how we handle specific user data in our Privacy Policy.
Our engineering team monitors several sources for security threats and responds accordingly to security threats and notifications.
We monitor 3rd party software included in our application and in our infrastructure for security notifications. Any relevant security patches are applied and released immediately.
We monitor our infrastructure providers for signs of attacks or abuse and will respond accordingly to threats.
Infrastructure¶
Read the Docs infrastructure is hosted on Amazon Web Services (AWS). We also use Cloudflare services to mitigate attacks and abuse.
Data and data center¶
All user data is stored in the USA in multi-tenant datastores in Amazon Web Services data centers. Physical access to these data centers is secured with a variety of controls to prevent unauthorized access.
Application¶
- Encryption in transit
All documentation, application dashboard, and API access is transmitted using SSL encryption. We do not support unencrypted requests, even for public project documentation hosting.
- Temporary repository storage
We do not store or cache user repository data, temporary storage is used for every project build on Read the Docs.
- Authentication
Read the Docs supports SSO with GitHub, GitLab, Bitbucket, and Google Workspaces (formerly G Suite).
- Payment security
We do not store or process any payment details. All payment information is stored with our payment provider, Stripe – a PCI-certified level 1 payment provider.
Engineering and Operational Practices¶
- Immutable infrastructure
We don’t make live changes to production code or infrastructure. All changes to our application and our infrastructure go through the same code review process before being applied and released.
- Continuous integration
We are constantly testing changes to our application code and operational changes to our infrastructure.
- Incident response
Our engineering team is on a rotating on-call schedule to respond to security or availability incidents. | https://docs.readthedocs.io/en/stable/legal/security-policy.html | 2021-07-24T04:37:00 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.readthedocs.io |
Types and models
Types and Models are the foundation of describing data within Taxi
Types and Models form the core building blocks of data and API's. These describe the contracts of data that float about in our ecosystems - published by services, exposed via processes, and sent from 3rd party suppliers.
In Taxi, as part of building out an robust composable type system, we differentiate between xxxx
Types
Types declare a simple, resuable concept.
Basic syntax:
// types can have comments (ignored by the parser), or docs, as shown below: [[ Any type of Name, used for refer to things ]] type Name inherits String [[ A first name of a person. Use this to call them for cake. ]] type FirstName inherits Name
Using inheritence to describe specificitiy
It's strongly recommended to inherit a type from another type - either one of Taxi's built-in primitives, or one of your own, to narrow the specificity of the type. This is discussed more in inheritence.
Fields on types
It is possible, (but discouraged) for types to contain fields:
// This is possible, but discouraged. Use a Model instead type Name { firstName : FirstName lastName : LastName }
Generally, it is discouraged for Types to have fields. It limits the reusability of types, as all users of the type must satisfy the same contract. In general, we recommend using Types that don't have fields, and Models that do.
We encourage widely shared types, and narrowly shared models.
Semantic typing
Semantic typing simply means defining types that describe the actual information being shared. It differs from most type systems, which focus purely on the representation of the data.
The information is the most useful part of any data, and so Taxi is intended to build type systems that describe this. Taxi encourages using lots of small, descriptive and narrowly focussed types as the building blocks of a shared enterprise taxonomy.
This is one of the key features of Taxi, and allows tooling to make intelligent choices about the information our systems are exposing.
For example, consider two pieces of information - a customers name, and their email address. Both are represented as Strings when being sent between systems - but both are very differnet pieces of information. Semantic typing aims to describe the information within a field, not just it's representation.
type Name inherits String type EmailAddress inherits String
By building out a rich vocabulary of semantic types, models and APIs can be explicit about the type of information they require and produce.
For example - consider a service that takes an EmailAddress, and returns the customers name.
Without semantic typing, this would look like this:
// Not semantically typed. Technically correct, but lacks sufficient information to be descriptive, or automate any tooling around operation getCustomerName(String):String // Semantically typed. The API is now much richer, and has hints that tooling can start leveraging operation getCustomerName(EmailAddress):Name
Through the use of inheritence, we can further refine these concepts, to add richer semantic descriptions about the information. This allows operations to add extra levels of specificity to their contracts, and for models to be descriptive about the data they are producing:
type FirstName inherits Name type PersonalEmaillAddress inherits EmailAddress type WorkEmailAddress inherits EmailAddress // This operation can work against any type of `EmailAddress`: operation getCustomerName(EmailAddress):Name // Whereas this operation only works if passed a `WorkEmailAddress`. operation getCustomerName(WorkEmailAddress):Name
Models
Models describe the contract of data either produced or consumed by a system. Models contain fields, described as types.
type PersonId inherits Int type FirstName inherits String model Person { id : PersonId // refernece to another custom type firstName : FirstName lastName : String // You don't have to use microtypes if they don't add value friends : Person[] // Lists are supported spouse : Person? // '?' indicates nullable types }
Models can use inheritence, but this is less common and generally less useful than with Types.
When models start getting shared too widely, it becomes difficult for them to change and evolve. Instead, favour single-system models, composed of terms from a widely shared Taxonomy of types.
Parameter models
Parameter models are special models } model Person { id : Id as Int firstName : FirstName as String lastName : LastName as String } parameter model CreatePersonRequest { firstName : FirstName lastName : LastName }
Types vs Models - Which to use?
Types
Use Types to describe a single specific piece of information. Try to form agreement on the definition of Types across an organisation. Generally, Types should match the shared business terms used across an organsiation to describe the business. Because types are small (only describing a single attribute), and specific, it's generally easier to form consensus around the definition of a Type.
Types are owned globally across an organisation, so there should be a curation process surrounding their definition.
Types generally don't have fields. As soon as a types have fields, users must agree on both the meaning of the information, and the representation. The latter tends to provide much greater friction, and leads to less reusable types.
Models
Use Models to describe the contract and structure of data either produced or consumed by a single system.
Models should be owned by the owners of a system, who are free to evolve and grow their models autonomously. Models have fields and structure, described using Types.
Built-in types)
Array types
An Array (specifically,
lang.taxi.Array) is a special type within Taxi, and can be represented in one of two ways - either as
Array<Type> or
Type[].
model Person { friends : Person[] } // or: model Person { friends : Array<Person> }
These two approaches are exactly the same. In the compiler,
Person[] gets translated to
Array<Person>.
It is possible to alias Arrays to another type:
type alias Friends as Person[] model Person { // This is the same as friends : Person[] friends : Friends }
However, be aware that this has a questionable effect on readability.
See also: Type aliases
Date / Time types
Taxi has four built-in date-time types (
Date,
Time,
DateTime and
Instant), which can specify additional format rules.
Currently, Taxi does not enforce or use these symbols, it simply makes them available to parsers and tooling to consume. As such, it's possible that other parsers may define their own set of formats for use. This is possible, but discouraged.
Date/Time formats
Formats can be specified using a
(@format='xxxx') syntax after the type declaration:
type DateOfBirth inherits Date(@format = 'dd/MMM/yy') type Timestamp inherits Instant(@format = 'yyyy-MM-dd HH:mm:ss.SSSZ')
The actual parsing of dates is left to parser implementations. Our reference implementation (Vyne) uses the Java SimpleDateFormat rules for parsing, assuming a lenient parser.
Sample patterns
More timezone examples
Optional and lenient matching.
Sections of a pattern can be marked optional by using
[].
The reference implementation follows lenient parsing rules, so values like
S and
SSS are equivalent. More details on Lenient are here.
For example:
Date/Time offsets
A model can also specify restrictions around the offset from UTC that a date must be represented/interpreted in:
model Transaction { timestamp : Instant( @offset = 60 ) }
This specifies that the value written to or read from the
timestamp value must have an offset of UTC +60 minutes.
This is most useful for contracts of consumers of data, specifying that the data presented must be in a certain timezone.
eg:
model Transaction { timestamp : Instant( @offset = 0 ) // Whenever data is written to this attribute, it must be in UTC. }
Note - taxi itself doesn't enforce or mutate these times, that's left to parsers and tooling.
Nullability
By default, all fields defined in a Model are mandatory.
To make a field optional, relax this by adding the
? operator:
model Person { id : PersonId // Mandatory spouse : Person? // Optional - may be null. }
Note that Taxi does not enforce nullabillity constraints - that's left to the systems which interpret the data.
Inheritence
Taxi provides inheritence across both Types and Models. It's recommended that inheritence is used heavily in Types to increase the specificity of a concept, and sparingly in Models.
type Name inherits String type PersonName inherits Name type FirstName innherits PersonName type LastName inherits PersonName type CompanyName inherits Name
As with most languages, inheritence is one-way -- ie., in the above example, all Names are Strings, but not all Strings are names.
Likewise, all FirstNames are Names, but not all Names are FirstNames.
This is incredibly useful in building out a taxonomy that allows publishers and consumers to be more or less sepcific about the types of data they are providing.
Inline inheritance
It is possible to define a type with inheritance inline within a model definition.
model Person { // This declares a new type called FirstName, which inherits PersonName firstName : FirstName inherits PersonName lastName : LastName inherits PersonName } // is exactly the same as writing: type FirstName inherits PersonName type LastName inherits PersonName model Person { firstName : FirstName lastName : LastName }
Annotations
Compiler support for annotations in Taxi has been improving recently. Originally, there was no checking of validity of annotations, other than simply syntax checking.
In the current version, there's partial support for type checking around annotations, beyond basic syntax checking. If an annotation has been defined as a type, then it's contract is enforced by the compiler (ie., attributes must match).
However, it's also possible to declare annotations that don't have an associated Annotation Type. This has been left for backward compatibility. If you are building out custom annotations, you're encouraged to define a corresponding Annotation type to go with it.
In a future release, all annotations will be required to have a corresponding annotation type.
See also: Annotation types
Annotation types
Annotation Types allow declaring new annotations. Annotations can have models, which have types associated with them.
Eg:
enum Quality { HIGH, MEDIUM, BAD } annotation DataQuality { quality : Quality }
This defines a new annotation -
@DataQuality, which has a single mandatory attribute -
quality, read from an enum.
Here's how it might be used:
@DataQuality(quality = Quality.HIGH) model MeterReading {}
When an annotation has an associated type, then it's contract is checked by the compiler.
For example, given the above sample, the following would be invalid:
@DataQuality(quality = Quality.Foo) // Invalid, Foo is not a member of Quality model MeterReading {} @DataQuality(qty = Quality.High) // Invalid, qty is not an attribute of the DataQuality annotation model MeterReading {} @DataQuality(quality = "High") // Invalid, as quality needs to be popualted with a value from the `Quality` enum. model MeterReading {}
Type aliases
Type aliases provide a way of declaring two concepts within the taxonomy are exactly the same. This is useful for mapping concepts between two independent taxonomies.
For example:
type FirstName inherits String type alias GivenName as FirstName
This states that anywhere
FirstName is used,
GivenName may also be used.
Unlike Inheritence, aliases are bi-directional. That is all
FirstNames are
GivenNames, and all
GivenNames are
FirstNames.
Inheritence vs Type Aliases - Which to use?
Type aliases were an early language feature in Taxi, and have since been replaced by inheritence, which has stricter rules and is more expressive.
Generally speaking, we encouarge the use of Inheritence where a relationship is one-way. Type aliases are really only useful when mapping between two different taxonomies, which both expose the same concept with different terminology.
Enums
Enum types are defined as follows:
enum BookClassification { @Annotation // Enums may have annotations FICTION, NON_FICTION }
Enum synonyms.
Enum values
Enums may declare values:
enum Country { NEW_ZEALAND("NZ"), AUSTRALIA("AUS"), UNITED_KINGDOM("UK") }
Parsers will match inputs against either the name or the value of the enum.
ie:
{ "countryOfBirth" : "NZ", // Matches Country.NEW_ZEALAND "countryOfResidence" : "UNITED_KINGDOM" // Matches Country.UNITED_KINGDOM }
Lenient enums
Lenient enums instruct parsers to be case insensitive when matching values.
Enums already support matching on either the Name of the enum or the value. Adding "lenient" to the start of the enum declaration means that these matches will be case insensitive.
eg - Without Lenient:
enum Country { NZ("New Zealand"), AUS("Australia") }
An ingested value of either "NZ" or "New Zealand" would match the NZ enum. However, an ingested value of "nz" would be rejected
Adding lenient makes checking against enums case insensitive
lenient enum Country { NZ("New Zealand"), AUS("Australia") }
An ingested value of "nz", "Nz", "new zealand", "New zealand" etc would match against NZ. A value of "UK" (not defined in the list) would be rejected.
Best practice reccomendation
If you must use lenient enums, restrict them to external data you're ingesting. Try not to make your internal Taxonomy lenient - as you want internal data to be as strict as possible.
You can use synonyms against a lenient external data to match against your stricter internal taxonomy. eg:
namespace my.vendor { lenient enum Country { NZ("New Zealand") synonym of acme.Country.NZ AUS("Australia") synonym of acme.Country.AUS }
Default values on Enums
Enum values can be marked as
default to instruct parsers to apply these values if nothing matches. You can specify at most one default in the list of enum values.
enum Country { NZ("New Zealand"), AUS("Australia"), default UNKNOWN("Unknown") }
In this example, an ingested value of "NZ" would match NZ, and a value of "UK" would match UNKNOWN. No value would be rejected.
Mixing Default and Lenient
You can mix lenient and default
lenient enum Country { NZ("New Zealand"), AUS("Australia"), default UNKNOWN("Unknown") }
In this scenario, because there's both a lenient keyword (making the enum case insensitive), and a default:
- A value of
Nzwould match
Country.NZ
- A value of
new zealandwould match
Country.NZ
- A value of
Ukwould match
Country.UNKNOWN | https://docs.taxilang.org/language-reference/types-and-models/ | 2021-07-24T03:43:29 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['/taxi-lang-logo.png', None], dtype=object)] | docs.taxilang.org |
Wiris Quizzes for Brightspace Now you can include Math & Science by Wiris question types in your Brightspace course by following these instructions. Admin: Installation To use Wiris Quizzes in a Brightspace campus, the first step is to install Wiris Quizzes as a remote plugin in your Brightspace course. Credentials Before installing the plugin into your campus, have already purchased your license and you want to use the Product Key of your institution, go to with your user/password for and search for the Product Key of your Wiris Quizzes LTI license. The Consumer Key is the domain of your institution (if your website is '', the Consumer key will be 'acme.net').Purchase your licenseIf you still need to purchase your license, please contact us at [email protected] Installing Wiris Quizzes through LTI You must complete two processes before the plugin is active in Brightspace: Add Wiris Quizzes as a Remote Plugin Configure the domain Add Wiris Quizzes as a Remote Plugin Log in to Brightspace with an administrator account. In the Homepage (never when you are in a course), click the Admin Tools ⚙️ icon in the top right corner. In the drop-down menu that appears select Remote Plugins. When you click New Remote Plugin a pop-up form called Create a new Remote Plugin will appear. Fill the form with the data below: * Plugin Type: QuickLink (CIM). * Name: WIRIS QUIZZES LTI (this is just a recomendation, you can choose soemthing different) * Launch Point URL:. * LTI Key: demo.wiris.com (for testing purposes only). * LTI Secret: 5EQZQ-GN4T6-EZ7BN-QHTBD-TEFQB (for testing purposes only). * After setting the Key and Secret, the OAuth Signature Method field appears. Set it to: HMAC-SHA1. * You can Leave the other fields blank. At the end of the form, click the Add Org Units button and add the courses for which the plugin should be available. Configure the domain In the Homepage (never when you are in a course) click the Admin Tools ⚙️ icon in the top right corner. In the drop down menu that appears select External Learning Tools. Click the third item in the tab menu called Manage Tool Providers. There are two different types of External Learning Tools pages: a global one, and then one for each course. You can configure either one, but keep in mind that the course one overrides the global one, and it only applies to that course. The global one can be accessed through ⚙️ Admin Tools > External Learning Tools. The course one can be accessed through: [Course] > Course Tools > All Course Tools > External Learning Tools. A tool provider with the wiris.net name in the Launch point column should appear in the list. Click it to edit. Check all the boxes under Security Settings. In Add Org Units, you'll find all the courses where the plugin is available. That is all! The Wiris Quizzes plugin is now available on your Brightspace campus in selected courses. Manage Wiris Quizzes Manager account in Moodle not necessary in normal operation. The back end authoring platform is a Moodle-based platform. This platform is provided by Wiris and located at, where 'example.com' is the domain of your Brightspace institution. You are provided with a Manager account with high privileges to administer the authoring platform, although it is not necessary for regular operation. Use the manager account Go to the platform URL:, replacing example.com with the domain of your Brightspace institution. Contact WIRIS support to get the credential. 
The first time you log in with this account, you are required to change the password to a secure and private one. Immediately change the account email. Click Manager User in the top-right corner and select Profile, Then, in the profile page, click the Edit profile link and set up the new email. Teacher: Authoring & Grading If you are reading this probably, you want to create or manage quizzes with math and science features in a course hosted in a Brightspace platform. All you need to create content using Wiris Quizzes is a Brightspace account with a teacher role in a Brightspace course. Create a quiz To review the features of Wiris Quizzes and the process for creating quizzes, please see the documentation View all Wiris Quizzes documentation To create a quiz in Brightspace: Log in to Brightspace and go to the course where you want to create the quiz. You must ensure that the plugin is available in the course. Click the Content link. It is the first item on the course's Main Menu. Enter an existing module or create a new one. Click on the button Existing Activities, then select the External tool with the name you've given to Wiris Quizzes Remote Plugin; In this example, we have used 'WIRIS QUIZZES LTI' Here you can create a New quiz or Edit an existing one. Open the Wiris Quizzes platform in a full window If you want to create your quizzes and questions in full-window and access some restricted features, click the button 'Open in a new tab'. When you end creating the content in the platform, click the 'Refresh' link to see your new or updated quizzes in the selection window. Review quiz attempts As a teacher, in order to see the grades and each student's answers, you can navigate to the quiz and click the "Attempts: <n>" link. On this page you can generate a report with all the information and also review each student's response. Student: Just play Students use the Wiris Quizzes activities in their course without accessing platforms other than their Brightspace course. Wiris Quizzes' responses and grades may be reviewed in the same manner as standard Brightspace quizzes. Table of Contents Admin: Installation Teacher: Authoring & Grading Student: Just play | https://docs.wiris.com/en/quizzes/quizzes-brightspace?do=login§ok=1edcf482ac8ea12de3c513ffb3509a56 | 2021-07-24T04:58:51 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.wiris.com |
Instillation
To install WP e-Commerce Predictive Search:
- Download the WP e-Commerce Compare Products plugin [ilink url=”” style=”download” title=”Predictive Serach Pro”]PRO Version[/ilink] [ilink url=”” style=”download” title=”WP e-Commerce Predictive Search Lite”]Lite Version[/ilink]
- Upload the wp-ecommerce-compare-products folder to your /wp-content/plugins/ directory
- Activate the ‘WP e-Commerce Compare Products’ from the Plugins menu within WordPress
Features
As soon as you start to type WP e-Commerce Predictive Search starts delivering visually appealing and changing search results for:
- Product Names
- Product SKU’s
- Product Categories
- Product Tags
- Pages
WP e-Commerce WP e-Commerce Predictive Search Via Widgets, Shortcodes or the provided function with parameters. See the info graphic below to see where you find the settings for these on your site.
Image Legend
1. Settings: WP e-Commerce Predictive Search only works when you to have the WP e-Commerce plugin installed. If you have WP e-Commerce installed and activated you will see Products and Settings > Shop have been added to your WordPress Admin dashboard sidebar.
2. Store: The WP e-Commerce Predictive Search plugin auto adds a Predictive Search Tab to the WP e-Commerce Shop dashboard. You access that by putting your mouse over Settings in the sidebar and clicking on the Shop Link in the pop-out menu.
3. Predictive Search Settings Tab: Click on the Predictive Search tab on the WP e-Commerce admin panel to open it. The function that you do on this tab are – Exclude Specific Content from Predictive Search, Customize the All Serach results page display and generate a global search Function.WPEC Predictive Search’ Widget. Widgets are listed in Alphabetical order and thus you’ll the WPEC Predictive Search widget near the bottom of your widget list.
5. Predictive Search Function: Use the function to place Predictive search anywhere in your theme that is not a widget area. Most commonly people use the function to place and or replace an existing search funtion that is in their themes header. You will find the function generator on the Predictive Search Admin tab.
6. Predictive Search Shortcode – With WP e-CommerceWPEC Predictive Search’ Widget.
4. Drag and drop the ‘WPEC Predictive Search’ Widget into any ‘Widget’ area in the right hand column of the Widgets dashboard. WP e-Commerce WP e-Commerce WP e-Commerce
WP e-Commerce Predictive search gives administrators total flexibility is customizing what information the user see’s in the All results search page – for a detailed description of what they are and how that works see ……
How it works! An Explanation.
- WP e-Commerce Settings > Predictive Search
Image Legend
1. Predictive Search Admin Tab – Open this and you will see at the top the Search page. This page is auto created when you install and activate the plugin. The WP e-Commerce Predictive Search shortcode will be on the page.
2. You can use the Drop down to change the Page. If you want to customize the Page Title that shows at the top of the search page (image shows the Predictive Search Page title in the drop down and how it shows on the search page on the front of this site) just change the name of the page (Not the Page URL). Go to Pages in your wp-admin sidebar – Find the
WP e-Commerce Predictive Search enables you to completely customize the information that shows with each result on the All Results Search Pages. To do that from your wp_admin dashboard go to Setting > Store >.
7.. / WP e-Commerce or Theme search widget with Predictive Search – you do that by replacing the search function in your header.php file with the global Predictive Search function.
Image Legend
1. Predictive Search Function – 8 in the image.
2. The Global search function. Note settings here only affect the global function – they have no impact on the Search boxes created by Widgets or shortcode.
3. Customize Search function values.. Search Type values. – Add a number value for each to activate that search type. Product Name, Product SKU, Product Categories, Product Tags, Posts, Pages. The number you set is how many of that result type will show in the Drop down. Leave a field empty and that Search type will not be activated.
5. Description Characters – Set the number of characters to show from each search types description.
6. Width – the wide of the search box – leave empty and the wide will be 100% of the available container.
7. Padding – set the padding around the search box in px. use this to position your search box.
8. Custom Style – $12 ‘On Demand’ service. Just go here and purchase the service and we will do it for you that day.
9..
10. “Save All Changes’ – after making any changes you must save them for them to be activated. $12 service. – If you do not know how to do this and do not have access to a coder that does we offer a USD $12 ‘On Demand’ service. Just go here and purchase the service and we will do it for you that day.
Using Shortcodes
In this section we will cover how to use the WP e-Commerce Predictive Search Shortcode to insert a search box and configure it into any page or post on your site. See image below and the detailed description and instructions in the Image Legend under the image.
Image Legend
1. Predictive Search Shortcode Icon – When you install and activate WP e-Commerce WP e-Commerce.
Trouble Shooting
Below is a list of common support request and the solution.
Fatal error: Call to undefined function curl_init()
That Error is telling you that you do not have cURL activated on your hosting environment. Our Pro Version plugins require cURL to connect with our API and validate your Authorization Key and auto plugin updates. cURL is on almost all servers but some hosts do not activate it. If you have Cpanel access sometimes you can activate it yourself. If not please ask your host to activate cURL – if they won’t chnage hosts. Once cURL is activated then you can activate the plugin.
This Authorization Key is not valid
The Pro Version licenses are single domain licenses – you can have the plugin installed on as many locals, sub domains, dev sites and live domains as like – but only active on one at a time – anyone it does not matter. All you have to do to move between any of them is deactivate the plugin on the site its currently activated on and use your Authorization Key to activate it on somewhere else. | https://docs.a3rev.com/user-guides/plugins-extensions/wp-e-commerce/wpec-predictive-search/ | 2021-07-24T04:10:21 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.a3rev.com |
Drilldown Chart
- Overview
- Step by Step Guide
- Tune the Chart
- Drill Up Button
- Improvements
Overview
Creating a chart with drilldown in AnyChart is very easy and can be implemented using so-called event listeners and amazingly flexible API and data model. The very minimum you need is to create a chart, feed it proper data and then tell chart what to do when the point is clicked on.
Step by Step Guide
Here is a sample we will be creating in this step by step tutorial, if you have some experience with AnyChart you can simply launch it and dig into the comments in the code:
For those who never worked with AnyChart and those who want to dig deeper, let's go through the creation of drill-down chart step by step.
Prepare Data
The first thing we need to have for a chart with drill-down is the data. There a lot of ways to load, organize and use data in AnyChart we will use one of the simpliest one for this basic sample.
The data for the drilldown chart can be organized in a tree-like structure, each row has
x and
value, and a field where the drilldown data set is stored which can have any name, in our sample it is
drillDown:
var data = [ {"x": 2015, "value": 2195081, "drillDown": [ {"x": "Q1", "value": 792026}, {"x": "Q2", "value": 610501}, {"x": "Q3", "value": 441843}, {"x": "Q4", "value": 350711} ]}, {"x": 2016, "value": 3257447, "drillDown": [ {"x": "Q1", "value": 1378786}, {"x": "Q2", "value": 571063}, {"x": "Q3", "value": 510493}, {"x": "Q4", "value": 797105} ]}, {"x": 2017, "value": 1963407, "drillDown": [ {"x": "Q1", "value": 499299}, {"x": "Q2", "value": 649963}, {"x": "Q3", "value": 571176}, {"x": "Q4", "value": 242969} ]} ];
Note:
x and
value are reserved names for AnyChart and it is the easiest way to go but you can use any names or even simple arrays using our data set mapping option, see more at Data Set Article.
Create a Chart
Now we have our data, now we simply feed this data set to a constructor that creates a chart and displays a chart on the page in some block-based element. You may be familiar with the basics, if not - please see AnyChart Quick Start.
Here is how you create a chart, set data and display it:
// create a chart var chart = anychart.column(data); // display chart in a div named 'container' chart.container('container').draw();
Implement Drilldown
When chart has the data all that is left to do for us is to tell chart what to do when a point (single chart element, a column in this case) is clicked on:
// when a 'pointClick' event happens chart.listen('pointClick', function (e) { // check if there is drillDown data available if (e.point.get('drillDown')) { // if so, assign to the only data series we have chart.getSeries(0).data(e.point.get('drillDown')); } else { // otherwise assign this series the initial // dataset and return to the initial state chart.getSeries(0).data(data); } });
That's it, you can see it for yourself:
Basically the work is done, this foundation provides us with all we need and we will now tune the chart, add a drill-up button, and show that multilevel drilldown is also possible.
Tune the Chart
The basic chart is nice but we obviously need to tune it so it looks nice in this particular case. We will add three easy settings:
- Format Axis Labels so they show 'k' or 'm' for thousands and millions;
- Tune tooltips to show '$' sign;
- Change interactivity settings so the elements can't be selected.
We can do all this using this simple code:
// configure axis labels chart.yAxis().labels().format('${%value}{scale:(1000)(1000)|(k)(m)}'); // tune tooltips format chart.tooltip().format('${%value}'); // tune interactivity selection mode chart.interactivity().selectionMode('none');
And now the chart looks and feels better:
Drill Up Button
One thing you may want is to have a button on a chart that will take an end user a level up, this button may be implemented in several ways, we will show three of them.
AnyChart Label
First, you can create an interactive label with AnyChart and add it to a chart. To do so we need to add a label, configure how it looks and behaves, and modify drilldown behavior so the button appears when needed:
// configire drilldown on point click chart.listen('pointClick', function (e) { if (e.point.get('drillDown')) { chart.getSeries(0).data(e.point.get('drillDown')); chart.label(0).enabled(true); } }); // add chart label, set placement, color and text chart.label(0, {enabled: false, position: 'rightTop', anchor: 'rightTop', padding: 5, offsetX: 5, offsetY: 5, text: "Back", background: {stroke: "1 black", enabled: true}}); // load initial data on label click chart.label(0).listen('click', function () { chart.getSeries(0).data(data); chart.label(0).enabled(false); });
That's it, with a miniscule amount of coding you have a drilldown column chart:
jQuery Button
With jQuery you need to create an element, assign proper styles and code reactions.
Here is the same sample as above, but with a button created using jQuery:
Pure HTML Button
You can go and create a button without use of anything, just pure HTML and JavaScript:
Here is the same sample as above but with a button created using pure HTML and JavaScript:
Improvements
The sample shown above is an illustration of idea and you can make tons of improvements depending on the nature of your task. We will showcase several of them below.
Multilevel Drilldown
The first modification is not a modification at all, it is a demonstration of the flexibility of concept shown in the basic sample: without changing anything in the code you can have multilevel drilldown chart. All you need to do is actually add multilevel data. Here is how the data will look like:
var data = [ {"x": "2015", "value": 2195081, "drillDown": [ {"x": "Q1", "value": 792026, "drillDown": [ {"x": "Jan", "value": 302000}, {"x": "Feb", "value": 190000}, {"x": "Mar", "value": 300026}] }, {"x": "Q2", "value": 610501, "drillDown": [ {"x": "Apr", "value": 305000}, {"x": "May", "value": 100501}, {"x": "Jun", "value": 205000}] }, {"x": "Q3", "value": 441843, "drillDown": [ {"x": "Jul", "value": 240000}, {"x": "Aug", "value": 51000}, {"x": "Sep", "value": 150843}] }, {"x": "Q4", "value": 350711, "drillDown": [ {"x": "Oct", "value": 150000}, {"x": "Nov", "value": 100700}, {"x": "Dec", "value": 100011} ]} ]} ];
And if you feed such data to the code you'll be able to drill one more level down. And there is no limit, you can add more and more levels and it will still work. | https://docs.anychart.com/v8/Drilldown/Basics | 2021-07-24T05:18:44 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.anychart.com |
List View of Activity Log
Examine the list view of the activity log interface to obtain more detailed information.
Time
The date and time of the activity, such as April 16, 6:00 PM.
User
The username of the activity initiator.
IP
The IP address of the Arcadia instance that originated the request.
Action
The type of event that generated the activity, such as View, Create, and Update.
Component Type
The component of the activity, such as Dashboard, Data Connection, Dataset, Mixed, Snapshots, Thumbnail, or Workspaces.
Component ID
The ID of the component.
Visual
The ID of the visual that generates the activity. Click on this number to navigate to the visual.
Dashboard
The ID of the dashboard that generates the activity. Click on this number to navigate to the dashboard.
Dataset
The ID of the dataset that generates the activity. Click on this number to navigate to the Dataset Detail interface for that dataset.
Connection
The name of the data connection for the activity. Click on this name to navigate to the Datasets interface for that connection.
Type
The type of connection, such as arcengine, impala, hive, and so on.
Fallback
If the connection is of arcengine type, you can have a fallback engine enabled. For example, the second row shows that Dashboard ID = 500 uses the fallback option. Typically, the value of this column is N/A.
Cache
This column indicates whether the query was served from the cache; values are true or false.
Cache Check
This column indicates whether CDP Data Visualization served the query from the cache after checking the base data to confirm that no changes occurred between the cache capture and the time the query ran. Values are true or false.
Aview Used
This column indicates if the query execution used an analytical view; values are Yes, No, or N/A.
Runtime
The duration of the activity, in seconds.
State
The completion status of the query: for example Done, Running, or an error state.
Error
This column shows if there is an error in any of the actions.
Query
You can examine the SQL statement for each query event.
By default, CDP Data Visualization hides the SQL statements. To see all statements, click the Plus icon under the column title Query. To hide all statements, click the Minus icon. To see only specific queries, click Show SQL in the specific row.
For example, this is the SQL statement for the first Dashboard (component ID=500):
SELECT
  cast((((TA_0.`duration`) - 0.007558) / 1.5571718) as bigint) as `Buckets`,
  sum(1) as `Record Count`
FROM `default`.`historical_queries` TA_0
WHERE TA_0.`start_date` = to_date(date_sub(now(), interval 1 Days))
GROUP BY 1
LIMIT 5000
class IO::Path::Parts
IO::Path parts encapsulation
does Positional does Associative does Iterable
An
IO::Path::Parts object is a container for the parts of an
IO::Path object. It is usually created with a call to the method
.parts on a
IO::Path object. It can also be created with a call to the method
.split on an object of one of the low-level path operations sub-classes of
IO::Spec.
The parts of an IO::Path are the volume, the directory name (dirname), and the basename.
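For example, on Rakudo 2020.06 or later, calling .parts directly produces one of these objects:

my $parts = '/some/dir/foo.txt'.IO.parts;
say $parts.^name;    # OUTPUT: «IO::Path::Parts»
say $parts.dirname;  # OUTPUT: «/some/dir»
say $parts.basename; # OUTPUT: «foo.txt»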
Methods
method new
method new(\volume, \dirname, \basename)
Create a new
IO::Path::Parts object with
\volume,
\dirname and
\basename as respectively the volume, directory name and basename parts.
attribute volume
Read-only. Returns the volume of the
IO::Path::Parts object.
IO::Path::Parts.new('C:', '/some/dir', 'foo.txt').volume.say;# OUTPUT: «C:»
attribute dirname
Read-only. Returns the directory name part of the
IO::Path::Parts object.
IO::Path::Parts.new('C:', '/some/dir', 'foo.txt').dirname.say;# OUTPUT: «/some/dir»
attribute basename
Read-only. Returns the basename part of the
IO::Path::Parts object.
IO::Path::Parts.new('C:', '/some/dir', 'foo.txt').basename.say;# OUTPUT: «foo.txt»
Previous implementations
Before Rakudo 2020.06 the
.parts method of
IO::Path returned a
Map and the
.split routine of the
IO::Spec sub-classes returned a
List of
Pair. The
IO::Path::Parts class maintains compatibility with these previous implementations by doing
Positional,
Associative and
Iterable.
my $parts = IO::Path::Parts.new('C:', '/some/dir', 'foo.txt');
say $parts<volume>;    # OUTPUT: «C:»
say $parts[0];         # OUTPUT: «volume => C:»
say $parts[0].^name;   # OUTPUT: «Pair»
.say for $parts[];
# OUTPUT:
# volume => C:
# dirname => /some/dir
# basename => foo.txt
Type Graph
Routines supplied by role Positional
IO::Path::Parts does role Positional, which provides the following routines:
(Positional) method of
method of()
Returns the type constraint for elements of the positional container, that is, the
T in the definition above, which, as it can be seen, defaults to Mu. It is returned as a type object.
my @array;
say @array.of.^name;   # OUTPUT: «Mu»
my Str @names;
say @names.of.raku;    # OUTPUT: «Str»
say (my int @).of;     # OUTPUT: «(int)»
Routines supplied by role Associative
IO::Path::Parts does role Associative.
Routines supplied by role Iterable
IO::Path::Parts does role Iterable.
To establish an XDCR relationship, you must configure each cluster as an XDCR participant. This involves:
Assigning each cluster a unique DR ID between 0 and 127
Specifying the cluster's role as XDCR
Identifying a node from the other cluster as the initial point of connectivity
Where normally the XDCR configuration is included in the XML configuration file, for Kubernetes you specify the configuration using YAML properties. The following example shows two equivalent XDCR configurations in the two formats.
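For example, for a cluster with DR ID 1 connecting to a companion cluster named brooklyn, the two formats look roughly like this. The service name and port are illustrative, and the YAML keys follow the Helm chart's cluster.config.deployment.dr block, so verify them against your chart version.

XML deployment file:

<dr id="1" role="xdcr">
   <connection source="brooklyn-voltdb-cluster-dr:5555"/>
</dr>

Equivalent Helm YAML properties:

cluster:
  config:
    deployment:
      dr:
        id: 1
        role: xdcr
        connection:
          enabled: true
          source: "brooklyn-voltdb-cluster-dr:5555"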
Note that in Kubernetes the cluster nodes are assigned unique IP addresses based on the initial Helm release name (that is, the name you assigned the cluster when you installed it). The VoltDB Operator also creates services that abstract the individual server addresses and provide a single entry point for specific ports on the database cluster. Two services in particular are DR and client, which will direct traffic to the corresponding port (5555 or 21212 by default) on an arbitrary node of the cluster. If the two database instances are within the same Kubernetes cluster, you can use the DR service to make the initial connection between the database systems, as shown in the preceding YAML configuration file.
If the two databases are running in the same Kubernetes cluster but in different namespaces, you will need to specify the fully qualified service name as the connection source in the configuration, which includes the namespace. So, for example, if the manhattan database is in namespace ny1 and brooklyn is in ny2, the YAML configuration files related to XDCR for the two clusters would be the following:
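As a sketch (assuming the DR service for each release is exposed as <release-name>-voltdb-cluster-dr, which you should confirm with kubectl get services), the two configurations might look like this:

Manhattan (namespace ny1), pointing at brooklyn in ny2:

cluster:
  config:
    deployment:
      dr:
        id: 1
        role: xdcr
        connection:
          enabled: true
          source: "brooklyn-voltdb-cluster-dr.ny2.svc.cluster.local:5555"

Brooklyn (namespace ny2), pointing at manhattan in ny1:

cluster:
  config:
    deployment:
      dr:
        id: 2
        role: xdcr
        connection:
          enabled: true
          source: "manhattan-voltdb-cluster-dr.ny1.svc.cluster.local:5555"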
Users and Groups¶
User Types¶
In Kanboard there are two types of users: local users, whose username and password are stored in Kanboard, and remote users, whose credentials are managed by an external system.
Examples of remote users:
- LDAP user
- Users authenticated by a reverse-proxy
- OAuth2 users
User Roles¶
Groups Management¶
In Kanboard, each user can be a member of one or many groups. A group is like a team or an organization.
Only administrators can create new groups and assign users.
Groups can be managed from User management > View All Groups. From there, you can create groups and assign users.
Each project manager can authorize the access to a set of groups from the project permissions page.
The external id is mainly used for external group providers. Kanboard provides an LDAP group provider to automatically sync groups from LDAP servers.
Add a New User¶
To add a new user, you must be an administrator.
- From the dropdown menu in the top right corner, go to the menu Users Management
- On the top, you have a link New local user or New remote user
- Fill the form and save
When you create a local user, you have to specify at least those values:
- username: This is the unique identifier of your user (login)
- password: The password of your user must have at least 6 characters
For remote users, only the username is mandatory.
Edit Users¶
When you go to the users menu, you have the list of users, to modify a user click on the edit link.
- If you are a regular user, you can change only your own profile
- You have to be an administrator to be able to edit any users
Remove Users¶
From the users menu, click on the link remove. This link is visible only if you are an administrator.
If you remove a specific user, tasks assigned to this person will be unassigned after the operation.
Two-Factor Authentication¶
Each user can enable two-factor authentication. After a successful login, the user is asked for a one-time code (6 characters) to allow access to Kanboard.
This code has to be provided by compatible software, usually installed on your smartphone or another device.
Kanboard uses the Time-based One-time Password Algorithm defined in RFC 6238.
There are many software compatible with the standard TOTP system. For example, you can use these applications:
- Google Authenticator (Android, iOS, Blackberry)
- FreeOTP (Android, iOS)
- OATH Toolkit (Command line utility on Unix/Linux)
This system can work offline and you don’t necessarily need to have a mobile phone.
Configuration¶
- Go to your user profile
- On the left, click on Two Factor Authentication and click on the button
- A secret key is generated for you
- You have to save the secret key in your TOTP software. If you use a smartphone, the easiest solution is to scan the QR code with FreeOTP or Google Authenticator.
- Each time you will open a new session, a new code will be asked
- Don’t forget to test your device before closing your session
A new secret key is generated each time you enable/disable this feature.
Note
Since Kanboard v1.2.8, people with two-factor authentication enabled must use API keys. | https://docs.kanboard.org/en/latest/user_guide/users.html | 2021-07-24T04:05:19 | CC-MAIN-2021-31 | 1627046150129.50 | [array(['../_images/groups-management.png', 'Group Management'],
dtype=object)
array(['../_images/new-user.png', 'New user'], dtype=object)
array(['../_images/2fa.png', '2FA'], dtype=object)] | docs.kanboard.org |
class Capture
Argument list suitable for passing to a Signature
A
Capture is a container for passing arguments to a code object. Captures are the flip-side of Signatures. Thus, captures are the caller-defined arguments, while signatures are the callee-defined parameters. For example when you call
print $a, $b, the
$a, $b part is a capture.
Captures contain a list-like part for positional arguments and a hash-like part for named arguments, thus behaving as Positional and Associative, although it does not actually mix in those roles. Like any other data structure, a stand-alone capture can be created, stored, and used later.
A literal
Capture can be created by prefixing a term with a backslash
\. Commonly, this term will be a List of terms, from which the forms
key => value and
:key<value> of a
Pair literal will be placed in the named part, and all other terms will be placed in the positional part (including
Pairs of the form
'key' => value).
my $c1 = \(42);                       # Capture with one positional arg
my $c2 = \(1, 2, verbose => True);    # Capture with two positional args and one named arg
my $c3 = \(1, 2, :verbose(True));     # same as before
my $c4 = \(1, 2, 'verbose' => True);  # Capture with three positional args
To reiterate, named arguments in a capture must be created using one of two ways:
Use an unquoted key naming a parameter, followed by
=>, followed by the argument. For example,
as => by => {1/$_}.
Use a colon-pair literal named after the parameter. For example,
:into(my %leap-years).
For example:
sub greet(:$name, :$age) { "$name, $age" }
my $d = \(name => 'Mugen', age => 19);    # OK
my $e = \(:name('Jin'), :age(20));        # OK
my $f = \('name' => 'Fuu', 'age' => 15);  # Not OK, keys are quoted.
For the
greet subroutine that accepts two named arguments
name and
age, the captures
$d and
$e will work fine while the capture
$f will throw a
Too many positionals passed... error. This is because
'age' => 20 isn't a named argument (as per the two ways of creating one mentioned above) but a positional argument of which
greet expects none. In the context of captures, quoted keys don't create named arguments. Any
'key' => value is just another positional parameter, thus exercise some caution when creating captures with named arguments.
Once a capture is created, you may use it by prefixing it with a vertical bar
| in a subroutine call, and it will be as if the values in the capture were passed directly to the subroutine as arguments — named arguments will be passed as named arguments and positional arguments will be passed as positional arguments. You may re-use the capture as many times as you want, even with different subroutines.
say greet |$d;  # OUTPUT: «Mugen, 19»
say greet |$e;  # OUTPUT: «Jin, 20»
my $c = \(4, 2, 3, -2);
say reverse |$c;               # OUTPUT: «(-2 3 2 4)»
say sort 5, |$c;               # OUTPUT: «(-2 2 3 4 5)»
say unique |$c, as => {.abs};  # OUTPUT: «(4 2 3)»
say unique |$c, :as({.abs});   # OUTPUT: «(4 2 3)»
my $d = \(1, 7, 3, by => {1/$_});
say min |$d;  # OUTPUT: «7», same as min 1, 7, 3, by => {1/$_}
say max |$d;  # OUTPUT: «1», same as max 1, 7, 3, by => {1/$_}
Inside a
Signature, a
Capture may be created by prefixing a sigilless parameter with a vertical bar
|. This packs the remainder of the argument list into that capture parameter.
sub f($a, |c) {
    say $a;
    say c;
    say c.^name;
    say c.list;
    say c.hash;
}
f 1, 2, 3, a => 4, :b(5);
# OUTPUT:
# 1
# \(2, 3, :a(4), :b(5))
# Capture
# (2 3)
# Map.new((a => 4, b => 5))
Note that
Captures are still
Lists in that they may contain containers, not just literal values:
my $a = 1;
my $c = \(4, 2, $a, 3);
say min |$c;  # OUTPUT: «1»
$a = -5;
say min |$c;  # OUTPUT: «-5»
Methods
method list
Defined as:
method list(Capture:)
Returns the positional part of the
Capture.
my Capture $c = \(2, 3, 5, apples => (red => 2));
say $c.list;  # OUTPUT: «(2 3 5)»
method hash
Defined as:
method hash(Capture:)
Returns the named/hash part of the
Capture.
my Capture $c = \(2, 3, 5, apples => (red => 2));
say $c.hash;  # OUTPUT: «Map.new((:apples(:red(2))))»
method elems
Defined as:
method elems(Capture: --> Int)
Returns the number of positional elements in the
Capture.
my Capture $c = \(2, 3, 5, apples => (red => 2));
say $c.elems;  # OUTPUT: «3»
method values
Defined as:
multi method values(Capture: --> Seq)
Returns a Seq containing all positional values followed by all named argument values.
my $c = \(2, 3, 5, apples => (red => 2));
say $c.values;  # OUTPUT: «(2 3 5 red => 2)»
method pairs
Defined as:
multi method pairs(Capture: --> Seq)
Returns all arguments, the positional followed by the named, as a Seq of Pairs. Positional arguments have their respective ordinal value, starting at zero, as key while the named arguments have their names as key.
my Capture $c = \(2, 3, apples => (red => 2));
say $c.pairs;  # OUTPUT: «(0 => 2 1 => 3 apples => red => 2)»
method antipairs
Defined as:
multi method antipairs(Capture: --> Seq)
Returns all arguments, the positional followed by the named, as a Seq of pairs where the keys and values have been swapped, i.e. the value becomes the key and the key becomes the value. This behavior is the opposite of the pairs method.
my $c = \(2, 3, apples => (red => 2));
say $c.antipairs;  # OUTPUT: «(2 => 0 3 => 1 (red => 2) => apples)»
method Bool
Defined as:
method Bool(Capture: --> Bool)
Returns
True if the
Capture contains at least one named or one positional argument.
say \(1, 2, 3, apples => 2).Bool;  # OUTPUT: «True»
say \().Bool;                      # OUTPUT: «False»
method Capture
Defined as:
method Capture(Capture: --> Capture)
Returns itself, i.e. the invocant.
say \(1, 2, 3, apples => 2).Capture;  # OUTPUT: «\(1, 2, 3, :apples(2))»
method Numeric
Defined as:
method Numeric(Capture: --> Int)
Returns the number of positional elements in the
Capture.
say \(1, 2, 3, apples => 2).Numeric;  # OUTPUT: «3»
Type Graph
Capture | https://docs.raku.org/type/Capture | 2021-07-24T05:02:41 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.raku.org |
The
varnishscoreboard utility displays Varnish tasks, the scheduled tasks managed by the thread pools workers in the
varnishd cache process. It can report ongoing, waiting and queued tasks with some information about them. In addition to tasks, it accounts for the number of idle workers, threads without a task to run.
In order to keep track of tasks the parameter
thread_pool_track needs to be enabled. Only a limited amount of tasks can be tracked and the amount of memory allocated to tracking is controlled by the
vst_space parameter.
Both of these parameters are very useful for debugging.
thread_pool_track keeps track of running worker threads managed by thread pools, and tasks queued in the pools.
vst_space sets the amount of space to allocate for the VST memory segment. The full documentation for these tuning parameters is found in
man varnishd and
varnishadm param.show <parameter>. Note param.show will also display the current parameter value.
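For example, a minimal sketch of enabling tracking at runtime and then inspecting the scoreboard (adjust the -n working directory if you run multiple varnishd instances):

varnishadm param.set thread_pool_track on
varnishadm param.show vst_space
varnishscoreboard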
varnishscoreboard [-h] [-n <dir>] [-t <seconds|off>] [-V]
The following options are available:
-h
Print program usage and exit.
-V
Print version information and exit.
--optstring
Print the optstring parameter to getopt(3) to help writing wrapper scripts.
Starting from Varnish release 6.0.7r1 (2020-12-21) the parameter names
scoreboard_active and
scoreboard_enable are deprecated aliases of
vst_space and
thread_pool_track respectively. | https://docs.varnish-software.com/varnish-cache-plus/features/varnishscoreboard/ | 2021-07-24T05:46:50 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.varnish-software.com |
This action lets you set persistent values to all accounts for the following page views and events. This is great for setting custom dimensions and metric values that should persist for every hit.
Note: The data persistence model may not be intuitive, please be sure to learn more about setting custom data fields taking note of the "Data Persistence" section. | https://docs.acronym.com/analytics/adobe-launch/gtag/actions/set | 2021-09-16T15:23:18 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.acronym.com |
Official Plugins
Overview
Cloudify Plugins are Python packages that do the work of communicating with external systems. Primarily:
For example:
- If you want to create a VM in Azure, you will need the Azure plugin.
- If you want to configure a server with an Ansible playbook, you will use the Ansible plugin.
- If you have existing scripts, you may simply use the built-in Script plugin.
Background
Blueprints use the Cloudify DSL to model an application. The model, or node_templates section, describes a topology which includes:
- Node Templates
- Relationships between node templates.
Workflows actualize the topology by defining the order of operations that will be performed. For example, the built-in Install Workflow calls, among other things, the
create,
configure, and
start operations defined for a particular node type.
Plugins contain the Python code that each workflow operation calls.
Usage
In your blueprint, use the Import statement to import plugins, for example:
imports:
  - plugin: cloudify-azure-plugin
You can then map node template and relationship operations to plugin code, or if your plugin’s
plugin.yaml has custom node types, these operations may already be mapped for you.
Example Blueprint with REST Call
The following example illustrates configuration step. In it, we create a user via some REST API.
In the blueprint, we define the lifecycle steps, along with the inputs for the plugin operation:
- endpoint information
- reference to a template file containing the REST request itself.
- parameters
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - plugin:cloudify-utilities-plugin

inputs:
  rest_endpoint:
    description: >
      REST API endpoint of the pfSense instance

node_templates:
  user-post:
    type: cloudify.rest.Requests
    properties:
      hosts: [{ get_input: rest_endpoint }]
      port: 443
      ssl: true
      verify: false
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          inputs:
            template_file: templates/create-user-post-template.yaml
            params:
              USER_ID: { get_attribute: [user-details, user, id] }
              USERNAME: { get_attribute: [user-details, user, username] }
              WEBSITE: { get_attribute: [user-details, user, website] }
              POST_ID: "1"
Let’s break down each section:
tosca_definitions_version: This is the version of Cloudify’s DSL. You should not need to change this value.
imports: This is a list of URLs or paths to more Cloudify DSL files. These define
node_typesand
relationships, among other things, that will be used in your blueprint.
inputs: These are parameters that you should know before running your blueprint, such as API endpoints.
node_templates: These are the nodes that you will handle throughout your deployment, such as VMs, applications, or network components.
Now, let’s talk about our
node_template:
- The name of our node template is
user-post, and it is of node type
cloudify.rest.Requests, which is defined in the
plugin:cloudify-utilities-pluginimport.
- The node type
cloudify.rest.Requestsdefines endpoint information in the
propertiessection.
- The
interfacessection defines how we map our operation. The install workflow calls a
startoperation. So that is the only operation that will be executed in this blueprint.
- We provide
inputsto the operation which are basically Python function parameters.
template_file: This is a Jinja2 template file, which contains a list of REST requests (see below).
params: These are params to the Jinja2 template, which define the user we will create.
create-user-post-template.yaml
This is the template file, which contains a list of requests. We define the
path, the
payload, and define the expected responses.
rest_calls:
  # create user post
  - path: /posts/{{POST_ID}}
    method: PUT
    headers:
      Content-type: application/json
    payload:
      title: '{{ USERNAME }}'
      body: '{{ WEBSITE }}'
      userId: '{{ USER_ID }}'
    response_format: json
    recoverable_codes: [400]
    response_expectation:
      - ['id', '{{POST_ID}}']
Only one request is defined here, but you can define a whole sequence of requests, as well as successful and failure responses.
For more information on modeling REST request sequences, see the REST Plugin.
Example Blueprint with Example Script
Another way to understand Cloudify plugins is to use Cloudify with existing scripts.
For example, let’s say that you have a BASH script like this:
scripts/hello.sh
#!/bin/bash
# myblueprint/scripts/hello.sh
ctx logger info "Hello World"
All this script does right now is log a "Hello World" message to Cloudify. However, you can put just about any valid BASH script here.
To run the script on a host, reference it from a blueprint such as the following:

tosca_definitions_version: cloudify_dsl_1_3

inputs:
  username:
    type: string
  private_key:
    type: string
  ip_address:
    type: string

node_templates:
  node:
    type: cloudify.nodes.Compute
    properties:
      ip: { get_input: ip_address }
      agent_config:
        install_method: remote
        user: { get_input: username }
        key: { get_input: private_key }
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/hello.sh
Let’s review:
- In the
properties, we define the method for installing and configuring a Cloudify Agent on the VM. We provide the IP and authentication information.
- We define the
startoperation as the execution of our
hello.shscript from above.
For more information, see the Script Plugin.
Example Blueprint with Blueprint Mapped Operations
The following example describes configuring one or more hosts with Ansible.
Just like last: - - plugin:cloudify-ansible-plugin inputs: ip: type: string username: type: string private_key: type: string node_templates: kubespray: type: cloudify.nodes.Root interfaces: cloudify.interfaces.lifecycle: configure: implementation: ansible.cloudify_ansible.tasks.run inputs: playbook_path: myplaybook.yml sources: webservers: hosts: web: ansible_host: { get_input: ip } ansible_user: { get_input: username } ansible_ssh_private_key_file: { get_input: private_key } ansible_become: True ansible_ssh_common_args: -o StrictHostKeyChecking=no
In this blueprint, we don’t define any properties. Instead, we map everything as an operation.
ansible.cloudify_ansible.tasks.run is a task defined in the Ansible Plugin. All it does is run the ansible-playbook command against the provided playbook and inventory.
ansible-playbookcommand against the provided playbook and inventory.
playbook_path: this parameter is the path to the Playbook YAML file. (This parameter was previously called
site_yaml_path.)
sources: this parameter is an Ansible Inventory structure in YAML format.
For more information, see the Ansible Plugin.
Example Blueprint with Custom Node Types
The following example describes the creation of a connected AWS EC2 Internet Gateway and VPC. The
cloudify-aws-plugin's
plugin.yaml already defines the node types
cloudify.nodes.aws.ec2.InternetGateway and
cloudify.nodes.aws.ec2.Vpc, and maps their lifecycle operations to plugin tasks.
You will need the following secrets:
aws_access_key_id: The AWS Access Key.
aws_secret_access_key: The AWS Secret Key.
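These can be created on the manager ahead of time; for example, with the CLI (a sketch; check cfy secrets create --help for the exact flags in your CLI version):

cfy secrets create aws_access_key_id --secret-string <your access key>
cfy secrets create aws_secret_access_key --secret-string <your secret key>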
You will also need to provide the following inputs:
region_name: The AWS region name.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - plugin:cloudify-aws-plugin

inputs:
  region_name:
    type: string

dsl_definitions:
  _: &client_config
    aws_access_key_id: { get_secret: aws_access_key_id }
    aws_secret_access_key: { get_secret: aws_secret_access_key }
    region_name: { get_input: region_name }

node_templates:
  internet_gateway:
    type: cloudify.nodes.aws.ec2.InternetGateway
    properties:
      client_config: *client_config
    relationships:
      - type: cloudify.relationships.connected_to
        target: vpc

  vpc:
    type: cloudify.nodes.aws.ec2.Vpc
    properties:
      resource_config:
        CidrBlock: { get_input: vpc_cidr }
      client_config: *client_config
Let’s review:
We have the following nodes:
internet_gateway: This is an internet gateway. We use a
cloudify.relationships.connected_torelationship to define dependency on
vpc.
vpc: This is a VPC.
Properties:
- We provide to both resources their configuration in the
resource_configproperty.
- Authentication is defined in the
client_configproperty.
For more information, see the AWS Plugin.
Distribution
Cloudify plugins are distributed in the Wagon format. Wagons are archives of Python Wheels. The latest official Cloudify plugins are available for download at Plugins Download.
Installation
You may upload a plugin to your Cloudify Manager via either the UI or the CLI:
- For UI usage, see managing system resources.
- For CLI usage, see cfy plugins upload.
Contributing
See our community Contribution Guide.
Further Reading
For more information on creating your own plugin, see creating your own plugin. For a plugin template, see plugin template.
For information on packaging a plugin in wagon format, see creating wagons.
For an overview on working with CM systems, see Configuration Management.
For information on working with Docker and other container systems, see Containers. | https://docs.cloudify.co/5.0.0/working_with/official_plugins/ | 2021-09-16T15:54:50 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.cloudify.co |
Create and Edit Legacy Transactional Templates
Creating a template
- Open the Legacy Templates page and then click Create Template.
- Add a unique template name in the dialogue box and then click Save.
- Open the Actions drop-down menu to create a new version.
- Click Add Version.
The editor opens. From here, you can change the subject and the body of your email template.
The easiest way to get started with a new template is to use one of your previous email templates or a free template from the internet, and then modify it to fit your needs.
Editing your HTML template.
Preview and test your template.
Managing templates.
When you delete a template you will delete all the versions of your template.
Activate your template
To activate your template:
- Navigate to the template you wish to use and select the action menu.
- Select Make Active.
A template can only have one active version at a time. If you’ve created a new version with different HTML that you want your customers to start receiving, you’ll need to make that version “Active.”
Duplicate a Template
To duplicate a template:
- Navigate to the template you wish to use and select the action menu.
- Select Add Version.
Adding unsubscribe links to a template
For more information about unsubscribes, check out our unsubscribe documentation.
Additional Resources
Need some help?
We all do sometimes; code is hard. Get help now from our support team, or lean on the wisdom of the crowd browsing the SendGrid tag on Stack Overflow. | https://docs.sendgrid.com/ui/sending-email/create-and-edit-legacy-transactional-templates?utm_source=docs&utm_medium=social&utm_campaign=guides_tags | 2021-09-16T16:07:26 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.sendgrid.com |
Everything about DMARC
This article provides an overview of Domain-based Message Authentication, Reporting and Conformance (DMARC). You will learn how DMARC works and how it applies to your Sender Identity or From address. You should already be familiar with DNS records, IP addresses, and the general flow of web traffic to get the most from this article. If you need a refresher on these topics, resources are linked throughout this page.
What is DMARC?
DMARC is a powerful way to verify the authenticity of an email’s sender and prevent malicious senders from damaging your sender reputation.
To understand DMARC, let's first understand the problem DMARC attempts to solve: email spoofing.
Email spoofing is the practice of sending email with a forged From address. Note that an email actually has two From addresses: the Header From and Envelope From. DMARC is concerned only with the spoofing of the Envelope From (also known as the
return-path) address. See our spoofing glossary entry for more information about spoofing and From addresses.
DMARC relies on two authentication protocols to prevent spoofing: Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM).
Sender Policy Framework
The strategy employed by SPF is to add a TXT record to a domain’s DNS. The TXT record specifies which IP addresses are allowed to send email for the domain.
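For illustration only, an SPF record might look like the following (the IP range and included host are placeholders, not values you should copy):

example.com.  IN  TXT  "v=spf1 ip4:192.0.2.0/24 include:mailprovider.example ~all"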
SPF mail flow
Imagine an email server receives a message and checks the Envelope From address. The receiving server looks up the SPF record published for the Envelope From domain and checks whether the sending IP address is on the domain's list of approved senders. If the IP address is not listed, SPF fails.
For more on SPF, see SPF Records Explained.
DomainKeys Identified Mail
DomainKeys Identified Mail (DKIM) uses public-key cryptography to sign a message. Like SPF, DKIM is implemented with a TXT record. Unlike SPF, the DKIM TXT record provides a public key that receiving mail servers can use to verify the authenticity of a message.
Remember, the problem with spoofing is forgery of the From address. However, by signing the From address, among other headers, and providing a public key to verify the signature, receiving servers can corroborate the authenticity of the sender.
DKIM mail flow
Let's again imagine an email server receiving a message. This time the receiving server fetches the sender domain's public DKIM key from DNS and uses it to verify the signature attached to the message headers. If the signature cannot be verified, DKIM fails.
For more information about DKIM, see DKIM Records Explained.
Domain-based Message Authentication, Reporting and Conformance
If SPF and DKIM already help validate an email's sender, what does DMARC add?
Why we need DMARC
Think of DMARC not as an independent authentication protocol but as a framework for handling SPF and DKIM failures and reporting those failures to domain owners.
SPF and DKIM handle the Domain-based Message Authentication part of DMARC.
DMARC adds the Reporting and Conformance piece on its own. Like SPF and DKIM, DMARC is implemented using a TXT DNS record. This record allows receiving email servers to fetch failure processing instructions from domain owners.
DMARC Records
"If you know how to view DNS records (e.g. using the 'dig' command), you can also check to see if [service providers] publish a DMARC TXT Resource Record. This doesn’t necessarily mean they support DMARC for the email they receive (though it’s a good indication), but it does indicate they use DMARC to protect outbound mail." — DMARC.org request format,
rf=afrf, tells receiving servers how to format reports for the domain owner. Authentication Failure Reporting Format,
afrf, is the default and is an extension of Abuse Reporting Format..
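For illustration, a DMARC policy is published as a TXT record on the _dmarc subdomain; a minimal record might look like this (the reporting address is a placeholder):

_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:[email protected]; rf=afrf; pct=100"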
DMARC mail flow
How DMARC Applies to a Sender Identity
When sending email via a service provider such as SendGrid, you will be asked to authenticate a domain or verify a Single Sender. However, what happens if you verify a Sender Identity using a
gmail.com,
yahoo.com,
aol.com, or a similar address? In other words, what happens if your Envelope From address is
[email protected]?
As you can guess, major mail providers such as Google, Microsoft, and others implement DMARC to protect their customers and prevent abuse. Let's use Yahoo and the email address
[email protected], as an example.
Yahoo has SPF, DKIM, and DMARC policies. Yahoo’s DNS records will approve domains such as yahoo.com and the IP addresses Yahoo controls. SendGrid domains and IP addresses will not be included in Yahoo's approved domains and IP addresses.
When you send a message from
[email protected] to
[email protected] using SendGrid, a Gmail server will receive the message. Gmail will then look up Yahoo's SPF and DKIM records because
yahoo.com is the domain in the return-path message header.
The Gmail receiving server will determine that the message was sent using a SendGrid IP address and was not signed by a Yahoo private key. Both SPF and DKIM will fail, causing Gmail to employ the DMARC failure policy specified by Yahoo.
Essentially, Gmail, or any other receiving email server, has no way of knowing whether you are using SendGrid to send email for legitimate purposes or spoofing Yahoo's domain.
This is why SendGrid recommends sending from an email address at a domain you control and have authenticated, rather than from a free mailbox provider address.
Popular email providers that enforce DMARC include Gmail, Yahoo, and AOL.
Additional Resources
Need some help?
We all do sometimes; code is hard. Get help now from our support team, or lean on the wisdom of the crowd browsing the SendGrid tag on Stack Overflow. | https://docs.sendgrid.com/ui/sending-email/dmarc?utm_source=docs&utm_medium=social&utm_campaign=guides_tags | 2021-09-16T15:07:09 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['https://twilio-cms-prod.s3.amazonaws.com/original_images/spf_mail_flow.jpeg',
'SPF mail flow diagram A diagram of the SPF traffic flow described in the steps above this image'],
dtype=object)
array(['https://twilio-cms-prod.s3.amazonaws.com/original_images/dkim_mail_flow.jpeg',
'DKIM mail flow diagram A diagram of the DKIM traffic flow described in the steps above this image'],
dtype=object)
array(['https://twilio-cms-prod.s3.amazonaws.com/original_images/dmarc_mail_flow.jpeg',
'DMARC mail flow diagram A diagram of the DMARC mail flow'],
dtype=object) ] | docs.sendgrid.com |
Using Girder Worker with Girder¶
The most common use case of Girder Worker is running processing tasks on data managed by a Girder server. Typically, either a user action or an automated process running on the Girder server initiates the execution of a task that runs on a Girder Worker.
The task to be run must be installed in both the Girder server environment as well as the
worker environment. If you are using a built-in plugin, you can just install
girder-worker on the Girder server environment. If you’re using a custom task
plugin,
pip install it on both the workers and the Girder server environment.
Running tasks as Girder jobs¶
Once installed, starting a job is as simple as importing the task into the python environment
and calling delay() on it. The following example assumes your task exists in a package
called
my_worker_tasks:
from my_worker_tasks import my_task

result = my_task.delay(arg1, arg2, kwarg1='hello', kwarg2='world')
Here the
result variable is a celery result object
with Girder-specific properties attached. Most importantly, it contains a
job attribute
that is the created job document associated with this invocation of the task. That job will
be owned by the user who initiated the request, and Girder worker will automatically update its
status according to the task’s execution state. Additionally, any standard output or standard
error data will be automatically added to the log of that job. You can also set fields on the job
using the delay method kwargs
girder_job_title,
girder_job_type,
girder_job_public,
and
girder_job_other_fields. For instance, to set the title and type of the created job:
job = my_task.delay(girder_job_title='This is my job', girder_job_type='my_task')
assert job['title'] == 'This is my job'
assert job['type'] == 'my_task'
The Girder Job details page can show a dictionary of metadata passed in the
meta field of the
girder_job_other_fields:
job = my_task.delay(girder_job_title='This is my job', girder_job_type='my_task', girder_job_other_fields={'meta': {'special_key': 'Special Value'}})
Downloading files from Girder for use in tasks¶
Note
This section applies to python tasks, if you are using the built-in
docker_run task,
it has its own set of transforms for dealing with input and output data, which are
detailed in the The docker_run Task documentation
The following example makes use of a Girder Worker transform for passing a Girder file into
a Girder Worker task. The
girder_worker_utils.transforms.girder_io.GirderFileId transform causes the file
with the given ID to be downloaded locally to the worker node, and its local path will then
be passed into the function in place of the transform object. For example:
from girder_worker_utils.transforms.girder_io import GirderFileId

def process_file(file):
    return my_task.delay(input_file=GirderFileId(file['_id'])).job
You're viewing Apigee Edge documentation.
View Apigee X documentation.
Apigee Edge provides you with a place to store your OpenAPI Specifications. Manage your OpenAPI Specifications, as described in the following sections.
For more information about OpenAPI Specifications, see What is an OpenAPI Specification?
Explore the specs list
To access the specs list:
- Select an organization.
- Select Develop > Specs in the side navigation menu.
The current list of specifications and folders is displayed.
As highlighted in the previous figure, from the specifications list you can:
- Create and import specifications
- Generate an API proxy from a specification using a multi-step wizard
- Organize your specifications using folders and navigate through those folders (see Organizing content using folders)
- Edit a specification or open a folder by clicking a row
- Rename or delete a specification
- Search the list of specifications and folders
Create a new OpenAPI Specification
To create a new OpenAPI Specification:
- Click Develop > Specs in the side navigation bar.
- Click + Spec.
- Click New Spec in the drop-down menu. The spec editor opens.
- Replace the sample specification with your own OpenAPI Specification details (a minimal skeleton is shown after these steps).
See Create specifications using the spec editor.
- Click Save to save the specification.
- Enter a name for the specification when prompted.
- Click Save.
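For reference, a minimal skeleton you might paste in place of the sample specification (shown in OpenAPI 3.0 syntax; the title and path are illustrative):

openapi: 3.0.0
info:
  title: Hello World API
  version: 1.0.0
paths:
  /greeting:
    get:
      summary: Returns a greeting
      responses:
        '200':
          description: A greeting message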
Import an OpenAPI Specification
To import an OpenAPI Specification:
- Click Develop > Specs in the side navigation bar.
- Click + Spec.
- Click one of the following options from the drop-down menu:
- Import URL to import the specification from a URL. Specify the name and URL when prompted.
- Import file to browse your local directory for the specification.
The specification is added to the list.
- Click the name of the specification to view and edit it in the spec editor.
See Create specifications using the spec editor.
Edit an existing OpenAPI Specification
To open an existing OpenAPI Specification:
- Click Develop > Specs in the side navigation bar.
- Click the row associated with the spec in the spec list.
- Edit the specification in the spec editor.
See Create specifications using the spec editor.
Create an API proxy from a specification in the spec list
With Apigee Edge, you can create your API proxies from the OpenAPI Specifications that you design and save in the spec editor. In just a few clicks, you'll have an API proxy in Apigee Edge with the paths, parameters, conditional flows, and target endpoints generated automatically. Then, you can add features such as OAuth security, rate limiting, and caching.
After you create an API proxy from an OpenAPI Specification, if the specification is modified, you will need to manually modify the API proxy to reflect the changes implemented. See What happens if I modify a specification?.
To create an API proxy from an OpenAPI Specification in the spec list:
- Select Develop > Specs in the side navigation menu.
- Navigate to the folder that contains the OpenAPI Specification, if required.
- Position your cursor on the OpenAPI Specification for which you want to create an API proxy to display the actions menu.
- Click the Generate API proxy icon. The Create Proxy wizard opens and the Proxy details page is pre-populated using values from the OpenAPI Specification, as shown in the following figure.
- Click Next.
- Step through the remaining pages in the Create Proxy wizard, as described in Creating an API proxy from an OpenAPI Specification. (Start from step 8.)
Rename a specification
To rename a specification:
- Select Develop > Specs in the side navigation menu.
- Navigate to the folder that contains the specification, if required.
- Position the cursor over the specification that you want to rename to display the actions menu.
- Click the rename icon.
- Edit the specification name.
- Click Rename to save the edits or Cancel to cancel.
Move a specification to a folder
You can organize your specifications into folders to facilitate management and security. To move a specification into a folder, see Organizing specifications using folders.
Delete a specification
You can delete a specification from the file store when it is no longer needed or becomes invalid.
To delete a specification:
- Select Develop > Specs in the side navigation menu.
- Position the cursor over the specification that you want to delete to display the actions menu.
- Click the delete icon.
- Click Delete to confirm the delete operation at the prompt.
- Delete related artifacts that are no longer needed. | https://docs.apigee.com/api-platform/publish/specs/manage-specs?hl=zh_cn | 2021-09-16T15:54:57 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['https://docs.apigee.com/api-platform/images/speclist.png?hl=zh_cn',
'Spec list'], dtype=object) ] | docs.apigee.com |
You must download the StorageGRID installation archive and extract the required files.
Download the .tgz or .zip archive file for your platform.
The compressed files contain the RPM files and scripts for Red Hat Enterprise Linux or CentOS.
The files you need depend on your planned grid topology and how you will deploy your StorageGRID grid. | https://docs.netapp.com/sgws-114/topic/com.netapp.doc.sg-install-rhel/GUID-DE2AEB2D-C43B-42A0-B810-9F90DFA22CB1.html | 2021-09-16T15:00:13 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.netapp.com |
Game Modes
All Game Modes are subclasses of
AGameModeBase, which contains considerable base functionality that can be overridden. Some of the common functions include InitGame, PreLogin, PostLogin, HandleStartingNewPlayer, and RestartPlayer.
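As a sketch, a project-specific Game Mode is typically declared as a subclass that overrides some of these functions (class and module names below are placeholders):

// MyGameMode.h (illustrative only)
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/GameModeBase.h"
#include "MyGameMode.generated.h"

UCLASS()
class MYPROJECT_API AMyGameMode : public AGameModeBase
{
    GENERATED_BODY()

public:
    // Called before actors are initialized; a common place to read map options.
    virtual void InitGame(const FString& MapName, const FString& Options, FString& ErrorMessage) override;

    // Called after a player successfully joins the game.
    virtual void PostLogin(APlayerController* NewPlayer) override;
};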
Game Mode Blueprints
It is possible to create Blueprints derived from Game Mode classes, and use these as the default Game Mode for your project or level.
Blueprints derived from Game Modes can set the following defaults: the Default Pawn Class, HUD Class, Player Controller Class, Spectator Class, Game State Class, and Player State Class.
Game State. | https://docs.unrealengine.com/4.27/en-US/InteractiveExperiences/Framework/GameMode/ | 2021-09-16T15:46:24 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.unrealengine.com |
District Energy Systems (DES)
Installation
The DES workflow has python and Modelica dependencies that must be installed in addition to the URBANopt CLI dependencies prior to use. Visit the DES Installation page to install the dependencies.
Usage
There are three major steps to running the DES workflow:
- generating the GeoJSON and System Parameter JSON files,
- creating of the Modelica package containing the district system, and
- running the Modelica package.
Refer to the Getting Started page to view examples of how to use the DES workflow with the URBANopt CLI.
Overview
District energy systems have been leveraged for hundreds of years to move energy (typically waste heat from industrial processes) to effectively maintain comfort in neighboring buildings; however, modeling the potential and effectiveness of these systems has been a challenge due to complexity. The URBANopt DES workflow aims to make DES analysis more approachable in hopes of encouraging adoption through better evaluation of new systems or expansions of in situ systems. The URBANopt DES workflow leverages a tool called the GeoJSON to Modelica Translator (GMT) to enable the analysis of DES systems.
The GeoJSON Modelica Translator (GMT) is a one-way trip from GeoJSON in combination with a well-defined instance of the system parameters schema to a Modelica package with multiple buildings loads, energy transfer stations, distribution networks, and central plants. The project will eventually allow multiple paths to build up different district heating and cooling system topologies; however, the initial implementation is limited to 1GDH and 4GDHC.
The project is motivated by the need to easily evaluate district energy systems. The goal is to eventually cover the various generations of heating and cooling systems as shown in the figure below. The need to move towards 5GDHC systems results in higher efficiencies and greater access to additional waste-heat sources.
The diagram below is meant to illustrate the future proposed interconnectivity and functionality of the GMT project.
As shown in the image, there are multiple building loads that can be deployed with the GMT and are described in the Building Load Models section below. These models, and the associated design parameters, are required to create a fully runnable Modelica model. The GMT leverages two file formats for generating the Modelica project: 1) the GeoJSON feature file and 2) a System Parameter JSON file.
Building Load Models
The building loads can be defined multiple ways depending on the fidelity of the required models. Each of the building load models are easily replaced using configuration settings within the System Parameters file. The models can have mixed building load models, for example the district system can have 3 time series models, an RC model, and a detail Spawn model. The 4 different building load models include:
- Time Series in Watts: This building load is the total heating, cooling, and domestic hot water loads represented in a CSV type file (MOS file). The units are Watts and should be reported at an hour interval; however, finer resolution is possible. The load is defined as the load seen by the ETS.
- Time Series as mass flow rate and delta temperature: This building load is similar to the other Time Series model but uses the load as seen by the ETS in the form of mass flow rate and delta temperature. The file format is similar to the other Time Series model but the columns are mass flow rate and delta temperature for heating and cooling separately.
- RC Model: This model leverages the TEASER framework to generate an RC model with the correct coefficients based on high level parameters that are extracted from the GeoJSON file including building area and building type.
- Spawn of EnergyPlus: This model uses EnergyPlus models to represent the thermal zone heat balance portion of the models while using Modelica for the remaining components. Spawn of EnergyPlus is still under development and currently only works on Linux-based systems.
Architecture Overview
The GMT is designed to enable "easy" swapping of building loads, district systems, and network topologies. Some of these functionalities are more developed than others; for instance, swapping building loads between Spawn and RC models (using TEASER) is fleshed out; however, swapping between a first and fifth generation heating system has yet to be fully implemented.
GeoJSON and System Parameter Files
This module manages the connection to the GeoJSON file including any calculations that are needed. Calculations can include distance calculations, number of buildings, number of connections, etc.
The GeoJSON model should include checks for ensuring the accuracy of the area calculations, non-overlapping building areas and coordinates, and various others.
Load Model Connectors
The Model Connectors are libraries that are used to connect between the data that exist in the GeoJSON with a model-based engine for calculating loads (and potentially energy consumption). Examples includes, TEASER, Data-Driven Model (DDM), CSV, Spawn, etc.
Simulation Mapper Class / Translator
The Simulation Mapper Class can operate at mulitple levels:
- The GeoJSON level – input: geojson, output: geojson+
- The Load Model Connection – input: geojson+, output: multiple files related to building load models (spawn, rom, csv)
- The Translation to Modelica – input: custom format, output: .mo (example inputs: geojson+, system design parameters). The translators are implicit to the load model connectors as each load model requires different paramters to calculate the loads.
In some cases, the Level 3 case (translation to Modelica) is a blackbox method (e.g. TEASER) which prevents a simulation mapper class from existing at that level.
Testing and Developer Resources
It is possible to test the GeoJSON to Modelica Translator (GMT) by simpling installing the Python package and running the command line interface (CLI) with results from and URBANopt SDK set of results. However, to fully leverage the functionality of this package (e.g., running simulations), then you must also install the Modelica Buildings library (MBL) and Docker. Instructions for installing and configuring the MBL and Docker are available on the DES Installation page.
To simply scaffold out a Modelica package that can be inspected in a Modelica environment (e.g., Dymola) then run the following code below up to the point of run-model. The example generates a complete 4th Generation District Heating and Cooling (4GDHC) system with time series loads that were generated from the URBANopt SDK using OpenStudio/EnergyPlus simulations.
pip install geojson-modelica-translator

# from the simulation results within a checkout of this repository
# in the ./tests/management/data/sdk_project_scraps path.

# generate the system parameter from the results of the URBANopt SDK and OpenStudio Simulations
uo_des build-sys-param sys_param.json baseline_scenario.csv example_project.json

# create the modelica package (requires installation of the MBL)
uo_des create-model sys_param.json

# test running the new Modelica package (requires installation of Docker)
uo_des run-model model_from_sdk
More example projects are available in an accompanying example repository.
Visit the developer resources page if you are interested in contributing to the GMT project.
Publications and References
Long, N., Gautier, A., Elarga, H., Allen, A., Summer, T., Klun, L., Moore, N., & Wetter, M. (2021). Modeling District Heating and Cooling Systems with Urbanopt, Geojson To Modelica Translator, and the Modelica Buildings Library. Building Simulation 2021, submitted.
Allen, A., Long, N. L., Moore, N., & Elarga, H. (2021). URBANopt District Energy Systems HVAC Measures. National Renewable Energy Laboratory.
Long, N., & Summer, T. (2020). Modelica Builder (0.1.0).
Long, N., & Summer, T. (2020). Modelica Formatter.
Long, N., Almajed, F., von Rhein, J., & Henze, G. (2021). Development of a metamodelling framework for building energy models with application to fifth-generation district heating and cooling networks. Journal of Building Performance Simulation, 14(2), 203–225.
Allen, A., Henze, G., Baker, K., Pavlak, G., Long, N., & Fu, Y. (2020). A topology optimization framework to facilitate adoption of advanced district thermal energy systems. IOP Conference Series: Earth and Environmental Science, 588, 022054.
Allen, A., Henze, G., Baker, K., & Pavlak, G. (2020). Evaluation of low-exergy heating and cooling systems and topology optimization for deep energy savings at the urban district level. Energy Conversion and Management, 222, 113106.
Allen, A., Henze, G., Baker, K., Pavlak, G., & Murphy, M. (2021). Evaluation of Topology Optimization to Achieve Energy Savings at the Urban District Level. 2021 ASHRAE Winter Conference.
'image of DES generations'], dtype=object)
array(['https://raw.githubusercontent.com/urbanopt/geojson-modelica-translator/develop/docs/images/des-connections.png',
'GMT functionality'], dtype=object) ] | docs.urbanopt.net |
Tests that comments are correctly deleted when their author is deleted.
File
- core/
modules/ comment/ tests/ comment.test, line 2348
- Tests for the Comment module.
Class
- CommentAuthorDeletionTestCase
- Tests the behavior of comments when the comment author is deleted.
Code
public function testAuthorDeletion() {
  // Create a comment as the admin user.
  $this->backdropLogin($this->admin_user);
  $comment = $this->postComment($this->node, $this->randomName());
  $this->assertTrue($this->commentExists($comment), t('Comment is displayed initially.'));
  $this->backdropLogout();

  // Delete the admin user, and check that the node which displays the
  // comment can still be viewed, but the comment itself does not appear
  // there.
  user_delete($this->admin_user->uid);
  $this->backdropGet('node/' . $this->node->nid);
  $this->assertResponse(200, t('Node page is accessible after the comment author is deleted.'));
  $this->assertFalse($this->commentExists($comment), t('Comment is not displayed after the comment author is deleted.'));
}
apply
The
cfy apply command is used to install/update a deployment using Cloudify manager without having to manually go through the process of uploading a blueprint, creating a deployment, and executing a workflow.
cfy apply command uses
cfy install or
cfy deployments update logic depending on the existence of the deployment referenced by
DEPLOYMENT_ID.
It is recommended to read about
cfy install and
cfy deployments update in order to understand the
cfy apply command.
cfy apply designed to improve blueprints development lifecycle.
For example, during blueprint development and testing, it is useful to be able to quickly deploy and install the updated blueprint, overriding the existing deployment with the new changes.
cfy apply logic:
Check for
BLUEPRINT_PATH and
DEPLOYMENT_ID.
If
BLUEPRINT_PATH is missing, use the default value and infer
DEPLOYMENT_ID(explained in the usage section).
Check if deployment
DEPLOYMENT_IDexists.
Upload blueprint
BLUEPRINT_PATH to the manager.
If deployment
DEPLOYMENT_IDexists, perform a deployment update with the uploaded blueprint. Else, create a new deployment with the name
DEPLOYMENT_ID, and execute the
installworkflow.
Usage
cfy apply [OPTIONS]
The
cfy apply command uses the
cfy install or
cfy deployments update
depending on the existence of the deployment specified by
DEPLOYMENT_ID.
If the deployment exists, the deployment will be updated with the given blueprint.
Otherwise, the blueprint will be installed, and the deployment name will be
DEPLOYMENT_ID.
In both cases, the blueprint will be uploaded to the manager.
BLUEPRINT_PATH can be a:
- local blueprint yaml file.
- blueprint archive.
- URL to a blueprint archive.
- GitHub repo (
organization/blueprint_repo[:tag/branch]).
Supported archive types are zip, tar, tar.gz, and tar.bz2
DEPLOYMENT_ID is the deployment’s id to install/update.
Default values:
If
BLUEPRINT_PATH is not provided, the default blueprint path is
‘blueprint.yaml’ in the current working directory.
If DEPLOYMENT_ID is not provided, it will be inferred from the
BLUEPRINT_PATH
in one of the following ways:
- If
BLUEPRINT_PATHis a local file path, then
DEPLOYMENT_IDwill be the name of the blueprint directory.
- If
BLUEPRINT_PATHis an archive and –blueprint-filename/-n option is not provided, then
DEPLOYMENT_IDwill be the name of the blueprint directory.
- If
BLUEPRINT_PATHis an archive and –blueprint-filename/-n option is provided, then
DEPLOYMENT_IDwill be <blueprint directory name>.<blueprint_filename>.
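For example, to name everything explicitly instead of relying on these defaults:

cfy apply --blueprint-path my-app/blueprint.yaml --deployment-id my-app --inputs inputs.yaml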
Optional flags
This command supports the common CLI flags.
-p, --blueprint-path PATH- The path to the application’s blueprint file. Can be a:
- local blueprint yaml file
- blueprint archive
- url to a blueprint archive
- github repo(
organization/blueprint_repo[:tag/branch])
-d, --deployment-id TEXT- The unique identifier for the deployment.
-n, --blueprint-filename TEXT- The name of the archive’s main blueprint file. This is only relevant if uploading an archive.
-b, --blueprint-id TEXT- The unique identifier for the blueprint.
-i, --inputs TEXT- Inputs for the deployment (Can be provided as wildcard based paths (*.yaml, /my_inputs/, etc..) to YAML files, a JSON string or as ‘key1=value1;key2=value2’). This argument can be used multiple times.
-r, --reinstall-list TEXT- Node instances ids to be reinstalled as part of deployment update. They will be reinstalled even if the flag –skip-reinstall has been supplied.
-w, --workflow-id TEXT- The workflow to execute [default: None]
--skip-install- Skip install lifecycle operations.
--skip-uninstall- Skip uninstall lifecycle operations.
--dont-skip-reinstall- Reinstall node-instances that their properties has been modified, as part of a deployment update. Node instances that were explicitly given to the reinstall list will be reinstalled too.
--ignore-failure- Supply the parameter
ignore_failurewith the value
trueto the uninstall workflow.
--install-first- In deployment update, perform install workflow and then uninstall workflow. default: uninstall and then install.
--preview- Preview the deployment update, stating what changes will be made without actually applying any changes.
--dont-update-plugins- Don’t update the plugins.
-f, --force- Force running update in case a previous update on this deployment has failed to finish successfully [This option is deprecated].
-l, --visibility TEXT- Defines who can see the resource, can be set to one of [‘private’, ‘tenant’, ‘global’] [default: tenant].
--validate- Validate the blueprint first.
--include-logs / --no-logs- Include logs in returned events [default:True].
--json-output- Output events in a consumable JSON format.
--manager TEXT- Connect to a specific manager by IP or host.
--runtime-only-evaluation- If set, all intrinsic functions will only be evaluated at runtime, and no intrinsic functions will be evaluated at parse time(such as get_input, get_property).
--auto-correct-types- If set, before creating plan for a new deployment, an attempt will be made to cast old inputs’ values to the valid types declared in blueprint.
--reevaluate-active-statuses- If set, before attempting to update, the statuses of previous active update operations will be reevaluated based on relevant executions’ statuses.
terminatedexecutions will be mapped to
successfulupdates, while
failedand any
*cancel*statuses will be mapped to
failed.
--skip-plugins-validation- Determines whether to validate if the required deployment plugins exist on the manager. If validation is skipped, plugins containing source URL will be installed from source.
-p, --parameters TEXT- Parameters for the workflow (Can be provided as wildcard based paths (*.yaml, /my_inputs/, etc..) to YAML files, a JSON string or as ‘key1=value1;key2=value2’). This argument can be used multiple times.
--allow-custom-parameters- Allow passing custom parameters (which were not defined in the workflow’s schema in the blueprint) to the execution.
--blueprint-labels TEXT- labels list of the form
: , : .
--deployment-labels TEXT- labels list of the form
: , : .
Example using default values
In a folder called
resources with
blueprint.yaml inside:
$ cfy apply No blueprint path provided, using default: /home/..../resources/blueprint.yaml Trying to find deployment resources Uploading blueprint /home/..../resources/blueprint.yaml... blueprint.yaml |######################################################| 100.0% Blueprint `resources` upload started. 2021-03-31 14:07:32.306 CFY <None> Starting 'upload_blueprint' workflow execution 2021-03-31 14:07:32.335 LOG <None> INFO: Blueprint archive uploaded. Extracting... 2021-03-31 14:07:32.368 LOG <None> INFO: Blueprint archive extracted. Parsing... 2021-03-31 14:07:33.290 LOG <None> INFO: Blueprint parsed. Updating DB with blueprint plan. 2021-03-31 14:07:33.375 CFY <None> 'upload_blueprint' workflow execution succeeded Blueprint uploaded. The blueprint's id is resources Creating new deployment from blueprint resources... Deployment created. The deployment's id is resources Executing workflow `install` on deployment `resources` [timeout=900 seconds] 2021-03-31 14:07:36.565 CFY <resources> Starting 'install' workflow execution 2021-03-31 14:07:36.768 CFY <resources> [node_b_cfrr7p] Validating node instance before creation: nothing to do 2021-03-31 14:07:36.769 CFY <resources> [node_b_cfrr7p] Precreating node instance: nothing to do 2021-03-31 14:07:36.771 CFY <resources> [node_b_cfrr7p] Creating node instance: nothing to do 2021-03-31 14:07:36.773 CFY <resources> [node_b_cfrr7p] Configuring node instance: nothing to do 2021-03-31 14:07:36.774 CFY <resources> [node_b_cfrr7p] Starting node instance 2021-03-31 14:07:37.057 CFY <resources> [node_b_cfrr7p.start] Sending task 'script_runner.tasks.run' 2021-03-31 14:07:37.638 LOG <resources> [node_b_cfrr7p.start] INFO: Downloaded install.py to /tmp/C9QFC/install.py 2021-03-31 14:07:37.638 LOG <resources> [node_b_cfrr7p.start] INFO: hi!! 2021-03-31 14:07:37.908 CFY <resources> [node_b_cfrr7p.start] Task succeeded 'script_runner.tasks.run' 2021-03-31 14:07:37.909 CFY <resources> [node_b_cfrr7p] Poststarting node instance: nothing to do 2021-03-31 14:07:37.911 CFY <resources> [node_b_cfrr7p] Node instance started 2021-03-31 14:07:38.120 CFY <resources> [node_a_vfhhzn] Validating node instance before creation: nothing to do 2021-03-31 14:07:38.123 CFY <resources> [node_a_vfhhzn] Precreating node instance: nothing to do 2021-03-31 14:07:38.124 CFY <resources> [node_a_vfhhzn] Creating node instance: nothing to do 2021-03-31 14:07:38.125 CFY <resources> [node_a_vfhhzn] Configuring node instance: nothing to do 2021-03-31 14:07:38.126 CFY <resources> [node_a_vfhhzn] Starting node instance 2021-03-31 14:07:38.432 CFY <resources> [node_a_vfhhzn.start] Sending task 'script_runner.tasks.run' 2021-03-31 14:07:39.101 LOG <resources> [node_a_vfhhzn.start] INFO: Downloaded install.py to /tmp/E6KY5/install.py 2021-03-31 14:07:39.102 LOG <resources> [node_a_vfhhzn.start] INFO: hi!! 2021-03-31 14:07:39.480 CFY <resources> [node_a_vfhhzn.start] Task succeeded 'script_runner.tasks.run' 2021-03-31 14:07:39.481 CFY <resources> [node_a_vfhhzn] Poststarting node instance: nothing to do 2021-03-31 14:07:39.484 CFY <resources> [node_a_vfhhzn] Node instance started 2021-03-31 14:07:39.661 CFY <resources> 'install' workflow execution succeeded Finished executing workflow install on deployment resources * Run 'cfy events list 57ad1536-8904-48cf-8521-70abeefa0c60' to retrieve the execution's events/logs
In the first invocation, the blueprint was uploaded and
resources deployment was created and installed.
Before the second invocation,
node_c was added to the blueprint.
$ cfy apply No blueprint path provided, using default: /home/..../resources/blueprint.yaml Trying to find deployment resources Deployment resources found, updating deployment. Uploading blueprint /home/..../resources/blueprint.yaml... blueprint.yaml |######################################################| 100.0% Blueprint `resources-31-03-2021-17-14-09` upload started. 2021-03-31 14:14:10.328 CFY <None> Starting 'upload_blueprint' workflow execution 2021-03-31 14:14:10.357 LOG <None> INFO: Blueprint archive uploaded. Extracting... 2021-03-31 14:14:10.387 LOG <None> INFO: Blueprint archive extracted. Parsing... 2021-03-31 14:14:11.292 LOG <None> INFO: Blueprint parsed. Updating DB with blueprint plan. 2021-03-31 14:14:11.378 CFY <None> 'upload_blueprint' workflow execution succeeded Blueprint uploaded. The blueprint's id is resources-31-03-2021-17-14-09 Updating deployment resources, using blueprint resources-31-03-2021-17-14-09 2021-03-31 14:14:14.223 CFY <resources> Starting 'update' workflow execution 2021-03-31 14:14:14.542 CFY <resources> [node_c_oh15uc] Validating node instance before creation: nothing to do 2021-03-31 14:14:14.544 CFY <resources> [node_c_oh15uc] Precreating node instance: nothing to do 2021-03-31 14:14:14.545 CFY <resources> [node_c_oh15uc] Creating node instance: nothing to do 2021-03-31 14:14:14.546 CFY <resources> [node_c_oh15uc] Configuring node instance: nothing to do 2021-03-31 14:14:14.549 CFY <resources> [node_c_oh15uc] Starting node instance 2021-03-31 14:14:14.830 CFY <resources> [node_c_oh15uc.start] Sending task 'script_runner.tasks.run' 2021-03-31 14:14:15.371 LOG <resources> [node_c_oh15uc.start] INFO: Downloaded install.py to /tmp/5049J/install.py 2021-03-31 14:14:15.372 LOG <resources> [node_c_oh15uc.start] INFO: hi!! 2021-03-31 14:14:15.729 CFY <resources> [node_c_oh15uc.start] Task succeeded 'script_runner.tasks.run' 2021-03-31 14:14:15.730 CFY <resources> [node_c_oh15uc] Poststarting node instance: nothing to do 2021-03-31 14:14:15.732 CFY <resources> [node_c_oh15uc] Node instance started 2021-03-31 14:14:16.548 CFY <resources> 'update' workflow execution succeeded Finished executing workflow 'update' on deployment 'resources' Successfully updated deployment resources. Deployment update id: resources-90a04562-c24e-4088-868d-72c9d46979fc. Execution id: ef0e35e5-4b22-4cae-9608-829377312510
In the second invocation, the updated blueprint is uploaded and deployment-update is executed. | https://docs.cloudify.co/latest/cli/orch_cli/apply/ | 2021-09-16T16:32:07 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.cloudify.co |
Use Microsoft Cognitive Toolkit deep learning model with Azure HDInsight Spark cluster
In this article, you do the following steps.
Run a custom script to install Microsoft Cognitive Toolkit on an Azure HDInsight Spark cluster.
Upload a Jupyter Notebook to the Apache Spark cluster to see how to apply a trained Microsoft Cognitive Toolkit deep learning model to files in an Azure Blob Storage Account using the Spark Python API (PySpark).
How does this solution flow?
This solution is divided between this article and a Jupyter Notebook that you upload as part of this article. In this article, you complete the following steps:
- Run a script action on an HDInsight Spark cluster to install Microsoft Cognitive Toolkit and Python packages.
- Upload the Jupyter Notebook that runs the solution to the HDInsight Spark cluster.
The following remaining steps are covered in the Jupyter Notebook.
- Load sample images into a Spark Resilient Distributed Dataset or RDD.
- Load modules and define presets.
- Download the dataset locally on the Spark cluster.
- Convert the dataset into an RDD.
- Score the images using a trained Cognitive Toolkit model.
- Download the trained Cognitive Toolkit model to the Spark cluster.
- Define functions to be used by worker nodes.
- Score the images on worker nodes.
- Evaluate model accuracy.
Install Microsoft Cognitive Toolkit
You can install Microsoft Cognitive Toolkit on a Spark cluster using script action. Script action uses custom scripts to install components on the cluster that aren't available by default. You can use the custom script from the Azure portal, by using HDInsight .NET SDK, or by using Azure PowerShell. You can also use the script to install the toolkit either as part of cluster creation, or after the cluster is up and running.
In this article, we use the portal to install the toolkit, after the cluster has been created. For other ways to run the custom script, see Customize HDInsight clusters using Script Action.
Using the Azure portal
For instructions on how to use the Azure portal to run script action, see Customize HDInsight clusters using Script Action. Make sure you provide the following inputs to install Microsoft Cognitive Toolkit. Use the following values for your script action:
Upload the Jupyter Notebook to Azure HDInsight Spark cluster
To use the Microsoft Cognitive Toolkit with the Azure HDInsight Spark cluster, you must load the Jupyter Notebook CNTK_model_scoring_on_Spark_walkthrough.ipynb to the Azure HDInsight Spark cluster. This notebook is available on GitHub at.
Download and unzip.
From a web browser, navigate to, where
CLUSTERNAMEis the name of your cluster.
From the Jupyter Notebook, select Upload in the top-right corner and then navigate to the download and select file
CNTK_model_scoring_on_Spark_walkthrough.ipynb.
Select Upload again.
After the notebook is uploaded, click the name of the notebook and then follow the instructions in the notebook itself on how to load the data set and perform the article.
- Application Insight telemetry data analysis using Apache Spark in HDInsight
Create and run applications
- Create a standalone application using Scala
- Run jobs remotely on an Apache Spark cluster using Apache Livy
Tools and extensions
- Use HDInsight Tools Plugin for IntelliJ IDEA to create and submit Spark Scala applications
- Use HDInsight Tools Plugin for IntelliJ IDEA to debug Apache Spark applications remotely
- | https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-microsoft-cognitive-toolkit?WT.mc_id=AZ-MVP-5003408 | 2021-09-16T17:13:01 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.microsoft.com |
Documentation
The Project
Philly Community Wireless is a working group of organizers, technologists, academics, public school teachers, and City Hall staffers in Philadelphia. During a time of social distancing and online teaching, roughly half of the city’s public school students lack a wifi connection at home. In certain neighborhoods, even more residents lack any connection to the Internet. We seek to address the city's digital divide with community-owned and -operated mesh network technologies.
Mesh networks allow a single Internet connection to be shared among a broader group of users with very little cost or infrastructure required. With the help of PhillyWisper, a pro-net neutrality, wireless internet service provider, we are working toward installing two pilot sites in Kensington and Fairhill. From there, we will plan participatory design workshops and technical training for the community that will empower them to maintain and grow this free network connection.
This Website
This docs page provides in-development technical information and guides for replicating the project's configuration and distribution of routers and antennas for building a mesh network.
If you are looking for a non-technical overview of the project, or are interested in signing up for PCW coverage, please visit our homepage. | https://docs.phillycommunitywireless.org/en/latest/ | 2021-09-16T16:21:10 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.phillycommunitywireless.org |
Unreal Engine 4 provides a lot of multiplayer functionality out of the box, and it's easy set up a basic Blueprint game that works over a network. It's easy to dive in and start playing mutliplayer. Most of the logic to make basic multiplayer work is thanks to the built-in networking support in the
Character class, and its
CharacterMovementComponent, which the Third Person template project uses.
Gameplay framework review
To add multiplayer functionality to your game, it's important to understand the roles of the major gameplay classes that are provided by the engine and how they work together - and especially, how they work in a multiplayer context:
GameInstance
GameMode
GameState
Pawn (and Character, which inherits from Pawn)
PlayerController
PlayerState
See the Gameplay Framework documentation for more information - but at least keep in mind the following tips when designing your game for multiplayer are:
The GameInstance exists for the duration of the engine's session, meaning that it is created when the engine starts up and not destroyed or replaced until the engine shuts down. A separate GameInstance exists on the server and on each client, and these instances do not communicate with each other. Because the GameInstance exists outside of the game session and is the only game structure that exists across level loads, it is a good place to store certain types of persistent data, such as lifetime player statistics (e.g. total number of games won), account information (e.g. locked/unlocked status of special items), or even a list of maps to rotate through in a competitive game like Unreal Tournament.
The GameMode object only exists on the server. It generally stores information related to the game that clients do not need to know explicitly. For example, if a game has special rules like "rocket launchers only", the clients may not need to know this rule, but when randomly spawning weapons around the map, the server needs to know to pick only from the "rocket launcher" category.
The GameState exists on the server and the clients, so the server can use replicated variables on the GameState to keep all clients up-to-date on data about the game. Information that is of interest to all players and spectators, but isn't associated with any one specific player, is ideal for GameState replication. As an example, a baseball game could replicate each team's score and the current inning via the GameState.
One PlayerController exists on each client per player on that machine. These are replicated between the server and the associated client, but are not replicated to other clients, resulting in the server having PlayerControllers for every player, but local clients having only the PlayerControllers for their local players. PlayerControllers exist while the client is connected, and are associated with Pawns, but are not destroyed and respawned like Pawns often are. They are well-suited to communicating information between clients and servers without replicating this information to other clients, such as the server telling the client to ping its minimap in response to a game event that only that player detects.
A PlayerState will exist for every player connected to the game on both the server and the clients. This class can be used for replicated properties that all clients, not just the owning client, are interested in, such as the individual player's current score in a free-for-all game. Like the PlayerController, they are associated with individual Pawns, and are not destroyed and respawned when the Pawn is.
Pawns (including Characters) will also exist on the server and on all clients, and can contain replicated variables and events. The decision of whether to use the PlayerController, the PlayerState, or the Pawn for a certain variable or event will depend on the situation, but the main thing to keep in mind is that the PlayerController and PlayerState will persist as long as the owning player stays connected to the game and the game doesn't load a new level, whereas a
Pawnmay not. For example, if a Pawn dies during gameplay, it will usually be destroyed and replaced with a new Pawn, while the PlayerController and PlayerState will continue to exist and will be associated with the new Pawn once it finishes spawning. The Pawn's health, therefore, would be stored on the Pawn itself, since that is specific to the actual instance of the Pawn and should be reset when the Pawn is replaced with a new one.
Actor replication
The core of the networking technology in UE4 is actor replication. An actor with its "Replicates" flag set to true will automatically be synchronized from the server to clients who are connected to that server. An important point to understand is that actors are only replicated from the server to the clients - it's not possible to have an actor replicate from a client to the server. Of course, clients still need to be able to send data to the server, and they do this through replicated "Run on server" events.
See this Replicating Actors in Blueprints guide for a walkthrough of a concrete example, as well as the Actor Replication documentation.
Authority
For every actor in the world, one of the connected players is considered to have authority over that actor. For every actor that exists on the server, the server has authority over that actor - including all replicated actors. As a result, when the Has Authority function is run on a client, and the target is an actor that was replicated to them, it will return false. You can also use the Switch Has Authority convenience macro as a quick way to branch for different server and client behavior in replicated actors.
Variables
In the details panel of variables on your actors, there is a Replication drop-down that lets you control how your variables are replicated, if at all.
Many of the variables in the engine's built-in classes already have replication enabled, so that many features work automatically in a multiplayer context.
See this Replicating Variables in Blueprints guide for a walkthrough of a concrete example of variable replication, as well as the Property Replication documentation.
Spawning and destroying
When a replicated actor is spawned on the server, this is communicated to clients, and they will also automatically spawn a copy of that actor. But since, in general, replication doesn't occur from clients to the server, if a replicated actor is spawned on a client, that actor will only exist on the client that spawned it. Neither the server nor any other client will receive a copy of the actor. The spawning client will, however, have authority over the actor. This can still be useful for things like cosmetic actors that don't really have an effect on gameplay, but for actors that do have an effect on gameplay and should be replicated, it's best to make sure they are spawned on the server.
The situation is similar for destroying replicated actors: if the server destroys one, all clients will destroy their respective copies as well. Clients are free to destroy actors for which they have authority - that is, actors they have spawned themselves - since these are not replicated to other players and wouldn't have any affect on them. If a client tries to destroy an actor for which they are not the authority, the destroy request will be ignored. The key point here is the same for spawning actors: if you need to destroy a replicated actor, destroy it on the server.
Event replication
In Blueprints, in addition to replicating actors and their variables, you can also run events across the clients and the server.
See this Replicating Functions in Blueprints guide for a walkthrough of a concrete example, as well as the RPCs documentation.
You may also see the term RPC (Remote Procedure Call). If so, just be aware that replicated events in Blueprints essentially compile down to RPCs inside the engine - and this is what they're usually called in C++.
Ownership
An important concept to understand when working on multiplayer, and especially with replicated events, is which connection is considered to be the owner of a particular actor or component. For our purposes, know that "Run on server" events can only be invoked from actors (or their components) which the client owns. Usually, this means you can only send "Run on server" events from the following actors, or from a component of one of the actors:
The client's PlayerController itself,
A Pawn that the client's PlayerController possesses, or
The client's PlayerState.
Likewise, for a server sending "Run on owning client" events, those events should also be invoked on one of these actors. Otherwise, the server won't know which client to send the event to, and it will only run on the server!
Events
In the details panel of your custom events, you can set how the event is replicated, if at all.
The following tables illustrate how the different replication modes affect where an event is run, based on how it is invoked.
If the event is invoked from the server, given the target in the left-hand column, it will run on...
If the event is invoked from a client, given the target in the left-hand column, it will run on...
As you can see from the table above, any events that are invoked from a client and that are not set to Run on Server are treated as if they are not replicated.
Sending a replicated event from the client to the server is the only way to communicate information from a client to the server, since general actor replication is designed to be server-to-client only.
Also, note that multicast events can only be sent from the server. Because of Unreal's client-server model, a client isn't directly connected to any of the other clients, they're only connected to the server. Therefore, a client is unable to send a multicast event directly to the other clients, and must only communicate with the server. You can emulate this behavior, however, by using two replicated events: one Run on server event, and one Multicast event. The implementation of the Run on server event can perform validation, if desired, and then call the multicast event. The implementation of the multicast event would perform the logic that you'd like to run for all connected players. As an example that doesn't perform any validation at all, see the following image:
Join-in-progress considerations
One thing to keep in mind when using replicated events to communicate game state changes is how they interact with a game that supports join-in-progress. If a player joins a game in progress, any replicated events that occurred before the join will not be executed for the new player. The takeaway here is that if you want your game to work well with join-in-progress, it's usually best to synchronize important gameplay data via replicated variables. A pattern that comes up pretty often is that a client performs some action in the world, notifies the server about the action via a "Run on server" event, and in the implementation of that event, the server updates some replicated variables based on the action. Then the other clients, who did not perform the action, still see the result of the action via the replicated variables. In addition, any clients who join-in-progress after the action has occurred will also see the correct state of the world, since they receive the most recent value of the replicated variables from the server. If the server had instead only sent an event, the join-in-progress players wouldn't know about the action that was performed!
Reliability
For any replicated event, you can choose whether it is Reliable or Unreliable.
Reliable events are guaranteed to reach their destination (assuming the ownership rules above are followed), but they introduce more bandwidth, and potentially latency, to meet this guarantee. Try to avoid sending reliable events too often, such as on every tick, since the engine's internal buffer of reliable events may overflow - when this happens, the associated player will be disconnected!
Unreliable events work as their name implies - they may not reach their destination, in case of packet loss on the network, or if the engine determines there is a lot of higher-priority traffic it needs to send, for example. As a result, unreliable events use less bandwidth than reliable events, and they can be called safely more often. | https://docs.unrealengine.com/4.27/en-US/InteractiveExperiences/Networking/Blueprints/ | 2021-09-16T15:33:59 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['./../../../../Images/InteractiveExperiences/Networking/Blueprints/replicates.jpg',
'replicates.png'], dtype=object)
array(['./../../../../Images/InteractiveExperiences/Networking/Blueprints/switch_has_authority.jpg',
'switch_has_authority.png'], dtype=object)
array(['./../../../../Images/InteractiveExperiences/Networking/Blueprints/variable_replication.jpg',
'variable_replication.png'], dtype=object)
array(['./../../../../Images/InteractiveExperiences/Networking/Blueprints/event_replication.jpg',
'event_replication.png'], dtype=object)
array(['./../../../../Images/InteractiveExperiences/Networking/Blueprints/forward_multicast.jpg',
'forward_multicast.png'], dtype=object) ] | docs.unrealengine.com |
REopt LiteTM is a technoeconomic decision making model for distributed energy resource (DER) design accessible via web tool and API. Given a set of inputs it leverages mixed-integer linear programming to select the optimal design and hourly annual dispatch of solar PV, storage, wind and diesel generator technologies - it also provides key economic metrics for the system. Integration of this model with URBANopt allows individual buildings (i.e Feature Reports) and collections of buildings (i.e. Scenario Reports) to be assessed for cost-optimal DER solutions.
The URBANopt REopt Gem extends the URBANopt Scenario Gem and is intended to be used as a post-processor on Feature and Scenario Reports. The URBANopt REopt Gem can be used directly as shown in the example REopt project or through the URBANopt CLI. The URBANopt REopt Gem comes with the following functionality:
- Reads Feature and Scenario Reports and parses their latitude, longitude, electric load profile, and available roof area to use as inputs to the REopt Lite API. These values can be overwritten with settings from the assumptions file.
- Reads Feature and Scenario Reports populates the ElectricTariff > coincident_peak_load_active_timesteps input to the REopt Lite API with the 1-indexed indices of the top 100 timesteps with the largest power demands. At finer REopt Lite modeling resolutions than 1 hour, the number of timesteps in this array is determined as 100 * time_steps_per_hour. This value can be overwritten with settings from the assumptions file.
- Makes calls to the REopt Lite API for optimal distributed energy resource (DER) technology sizing, dispatch, and financial metrics based using additional customizable input parameters stored in an input
.jsonfile
- Optionally makes calls to the REopt Lite API for resilience statistics on an optimized system (i.e. average outage duration sustained by system)
- Saves responses from the REopt Lite API to local files (by default in a
reoptfolder in the Scenario or Feature Report directory)
- Updates Feature or Scenario Report’s
distributed_generationattributes based on a REopt Lite API response
- Updates a Feature or Scenario Report’s
timeseries_csvbased on a REopt Lite API response | https://docs.urbanopt.net/workflows/reopt/reopt.html | 2021-09-16T15:21:37 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['../../doc_files/reopt-lite-logo.png', None], dtype=object)] | docs.urbanopt.net |
Downloading Files to Executions¶
You frequently have changing files like training samples/labels or pretrained models you want to use during your executions. I mean, what would you otherwise operate on.
You can either:
download the files manually yourself using the tools you want; we don’t recommend this but might be easy way to get started if you’ve already mastered them
use Valohai’s input mechanism to leverage our automatic record keeping, version control, reproducibility and caching features
This section covers Valohai’s input concept in more detail.
For a file to be usable by an execution, you first have to upload it to a data stores connected to the project either manually or by using our web user interface upload utility.
Here is how to upload files using the web user interface after the data store has been configured:
During an execution, Valohai inputs are available under
/valohai/inputs/<INPUT_NAME>/<INPUT_FILE_NAME>.
To see this in action, try running
ls -laR /valohai/inputs/ as the main command of an execution which has inputs.
When you specify the actual input or default for one, you have 3 options:
Option #1: Custom Store URL¶
You can connect private data stores to Valohai projects.
If you connect a store that contains files that Valohai doesn’t know about, like the files that you have uploaded there yourself, you can use the following syntax to refer to the files.
Azure Blob Storage:
azure://{account_name}/{container_name}/{blob_name}
Google Storage:
gs://{bucket}/{key}
Amazon S3:
s3://{bucket}/{key}
OpenStack Swift:
swift://{project}/{container}/{key}
This syntax also has supports wildcard syntax to download multiple files:
s3://my-bucket/dataset/images/*.jpgfor all .jpg (JPEG) files
s3://my-bucket/dataset/image-sets/**.jpgfor recursing subdirectories for all .jpg (JPEG) files
You can also interpolate execution parameter into input URIs:
s3://my-bucket/dataset/images/{parameter:user-id}/*.jpegwould replace
{parameter:user-id}with the value of the parameter
user-idduring an execution.
Option #2: Datum URI¶
You can use the
datum://<identifier> syntax to refer to specific files Valohai platform already knows about.
Files will have a datum identifier if the files were uploaded to Valohai either:
by another execution, or
by using the Valohai web interface uploader under “Data” tab of the project | https://docs.valohai.com/topic-guides/executions/inputs/ | 2021-09-16T16:36:40 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.valohai.com |
Deleting a Deployment
Get the latest docs.
After you have uninstalled an application, you can delete it from Cloudify Manager. After you uninstall an application, all of its static and runtime properties are still stored in the Manager’s database and the deployment-specific agents continue to consume resources on the Manager. Deleting a deployment enables you to clean the environment of those excess artifacts.
To delete a deployment from the manager with the CLI, run:
cfy deployments delete nodecellar
The delete options are:
. | https://docs.cloudify.co/5.0.0/working_with/manager/delete-deployment/ | 2021-09-16T16:15:50 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.cloudify.co |
Date: Fri, 18 May 2007 13:11:32 -0500 From: "[email protected]" <[email protected]> To: "Marc G. Fournier" <[email protected]> Cc: Schiz0 <[email protected]>, [email protected] Subject: Re: NO_* options in /etc/make.conf ... Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On 18/05/07, Marc G. Fournier <[email protected]> wrote: > - --On Friday, May 18, 2007 13:44:39 -0400 Schiz0 <[email protected]> > wrote: > > > On 5/18/07, Marc G. Fournier <[email protected]> wrote: > >> > >> Is there a document that describes what is available, and what each one does? > >> > >> As an example, I took a peak at the nanoBSD documentation, and they have one > >> that is 'NO_BIND' listed ... does that mean nslookup won't work, or you just > >> can't run a named server? > >> > > > > make make.conf > > > > Or, if you want a web interface, type in "make.conf" at > > > > > Never even thought about man ... perfect, thank you :) Typical of unix variants, this cat's skin is removed at least one other way: $ less /usr/share/examples/etc/make.conf -- --
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1764577+0+archive/2007/freebsd-questions/20070520.freebsd-questions | 2021-09-16T16:37:39 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.freebsd.org |
Get-Item
Property
Gets the properties of a specified item.
Syntax
Get-Item
Property [-Path] <String[]> [[-Name] <String[]>] [-Filter <String>] [-Include <String[]>] [-Exclude <String[]>] [-Credential <PSCredential>] [<CommonParameters>]
Get-Item
Property -LiteralPath <String[]> [[-Name] <String[]>] [-Filter <String>] [-Include <String[]>] [-Exclude <String[]>] [-Credential <PSCredential>] [<CommonParameters>]
Description.
Examples
Example 1: Get information about a specific directory
This command gets information about the
C:\Windows directory.
Get-ItemProperty C:\Windows
Example 2: Get the properties of a specific file
This command gets the properties of the
C:\Test\Weather.xls file. The result is piped to the
Format-List cmdlet to display the output as a list.
Get-ItemProperty C:\Test\Weather.xls | Format-List
Example 3: Get the value name and data of a registry entry in a registry subkey
This command gets the value name and data of the
ProgramFilesDir registry entry in the
CurrentVersion registry subkey. The Path specifies the subkey and the Name parameter
specifies the value name of the entry.
Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion -Name "ProgramFilesDir"
Note
This command requires that there is a PowerShell drive named
HKLM: that is mapped to the
HKEY_LOCAL_MACHINE hive of the registry.
A drive with that name and mapping is available in PowerShell by default. Alternatively, the path to this registry subkey can be specified by using the following alternative path that begins with the provider name followed by two colons:
Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion.
Example 4: Get the value names and data of registry entries in a registry key
This command gets the value names and data of the registry entries in the
PowerShellEngine
registry key. The results are shown in the following sample output.
Parameters the name of the property or properties to retrieve. Wildcard characters are permitted.
Specifies the path to the item or items. Wildcard characters are permitted.
Inputs
You can pipe a string that contains a path to
Get-ItemProperty.
Outputs. | https://docs.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Management/Get-ItemProperty?view=powershell-7.1&viewFallbackFrom=powershell-4.0 | 2021-09-16T15:35:02 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.microsoft.com |
506docs.com
Your Home for Reg. D Compliant PPM Templates
Rule 506(c) Common Stock With Warrants
A Feature-rich, Rule 506(c) Compliant Private Placement Memorandum (PPM) Template Ideal For Your Corporation Issuing Common Stock With Warrants. Easily create your own Rule 506(c) compliant, custom PPM and full set of compliant offering documents.
Template Features
- Min/Max Capital Raise;
- Escrow Services;
- Easily-to-edit sample content to save you time in creating your offering documents.
Description
Our Regulation D compliant Rule 506(c) PPM template is ideal for your corporation issuing Common Stock with Warrants. The template features sample content from Forms S-1 that have been declared “effective” by the SEC. The sample content is written in Microsoft® Word making the PPM’s content easy to edit for your capital raise.
Your PPM Package Includes These Important And Necessary Documents
- Private Placement Memorandum;
- Subscription Agreement;
- Certificate of Signatory;
- Investor Verification;
- Third Party Verification Letter;
- Form Of Warrant;
- Notice Of Exercise;
- Jurisdictional Legends (all 50 states);
- ERISA Disclosure;
- Anti-money Laundering Certification;
- Anti-money Laundering Definitions;
- Anti-money Laundering Certification;
- Form D;
- Capitalization Table;
- Executive Summary/Pitch Sheet;
- Form U-1;
- Form U-2;
- Form U-2A.
PPM Delivery
- The single zip file, containing all of the documents you need, is available for download after purchase. You will receive an email containing the link to download your files.
State "Blue Sky" Laws
We encourage issuers to contact the state securities regulators in the state in which they intend to offer or sell securities for further guidance on compliance with state security laws.
Although we created these documents to conform with the disclosure requirements of Regulation D of the Securities Act of 1933 and Regulation S-K, these documents present an array of often mutually exclusive options with respect to particular Regulation D provisions. We encourage you to tailor the template to accurately reflect the specific provisions of your Regulation D capital raise. | https://506docs.com/Rule-506-c-Common-Stock-With-Warrants-p282326170 | 2021-09-16T16:25:38 | CC-MAIN-2021-39 | 1631780053657.29 | [] | 506docs.com |
Troubleshooting issues when using Swagger as a Remedy REST API client
The Swagger user interface (UI) is an HTML/JS web application that can be hosted on simple web servers such as Apache, Microsoft Internet Information Services (IIS), or Apache Tomcat. The Swagger UI provides a sample request response that helps to integrate the AR System server with the REST service.
This topic describes the most common issues that might occur while using the Swagger UI.
Symptoms
- Pages are not displayed for the specified Jetty URLs.
- Pages are not displayed for the specified Swagger URLs.
- One of the following errors is displayed:
Failed to load API definition on Swagger UI
Or
Possible cross-origin (CORS) issue
- The API definition is not provided in the Swagger UI.
Scope
Swagger, being a third-party tool, does not affect other areas. If the Jetty server doesn't respond, the Swagger UI and other integrations that use the Jetty server do not work.
Resolution
Perform the following steps to troubleshoot an issue:
After you determine a specific symptom or error message, use the following table to identify the solution: | https://docs.bmc.com/docs/ars1902/troubleshooting-issues-when-using-swagger-as-a-remedy-rest-api-client-941864982.html | 2021-09-16T15:20:26 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.bmc.com |
- Available reference architectures
- Availability Components
- Deviating from the suggested reference architectures
Reference architectures.
Available reference architectures
The following reference architectures are available:
- Up to 1,000 users
-) to install and configure the various components (with one notable exception being the suggested select Cloud Native installation method described below)..
Config. | https://docs.gitlab.com/14.0/ee/administration/reference_architectures/ | 2021-09-16T16:26:44 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.gitlab.com |
We believe that the decentralized network is the future. StreamMe will push forward the vision of a decentralized future by accomplishing one mission:
Integrating FREEDOM OF SPEECH as a core principle in our platform
And three sub-missions:
Giving ownership of content back to social media users
Building a reward system where contributors are rewarded for content
Connect KOLs to businesses, support e-commerce businesses
Our ultimate goal is to open up a new chapter in how people share life moments and interact with each other for income. | https://docs.streamme.network/abstract/vision | 2021-09-16T16:33:42 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.streamme.network |
deliveryservices/{{ID}}/servers/eligible
Caution
This endpoint may not work as advertised, and its use is therefore discouraged!
GET
Retrieves properties of Edge-tier cache servers eligible for assignment to a particular Delivery Service. Eligibility is determined based on the following properties:
The name of the server’s Type must match one of the glob patterns
EDGE*,
ORG*
The server and Delivery Service must belong to the same CDN
If the Delivery Service has Required Capabilities, an Edge-tier cache server must have all of those defined capabilities
- Auth. Required
Yes
- Roles Required
None
- Response Type
Array
Request Structure
Response Structure
- cachegroup
A string which-10-30 16:01:12" }] } ]} | https://traffic-control-cdn.readthedocs.io/en/latest/api/v3/deliveryservices_id_servers_eligible.html | 2021-09-16T16:25:38 | CC-MAIN-2021-39 | 1631780053657.29 | [] | traffic-control-cdn.readthedocs.io |
We’re currently working on redesigning Insights and the metrics that we’re going to provide. Our plan is to first keep extending Prometheus endpoint with valuable metrics and offer at some point connectors to the most popular metrics tools
We’d love if you want to provide feedback on this. Please reach out to support to start a conversation. Thanks!
Prometheus endpoint
GrapheneDB provides a Prometheus endpoint if you navigate to the Insights tab. This endpoint can be used as a target for scrapping database instance metrics.
The Prometheus configuration file should look like this:
global: scrape_interval: 15s # By default, scrape targets every 15 seconds. evaluation_interval: 15s scrape_configs: - job_name: 'graphenedb-prometheus' scheme: 'https' static_configs: - targets: ['db-jnbxly9x0fbarnn5x9cz.graphenedb.com:24780']
Please, note that we have removed the
/metrics part from the given URL. Prometheus expects metrics to be available on targets on a path of /metrics.
After Prometheus is started, you should be able to check that the targets state at
Cluster Prometheus configuration
For Cluster plans, each node will expose a Prometheus endpoint. You can configure Prometheus for scrapping metrics on each node:
global: scrape_interval: 15s # By default, scrape targets every 15 seconds. evaluation_interval: 15s scrape_configs: - job_name: 'graphenedb-prometheus' scheme: 'https' static_configs: - targets: ['db-1-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780’, 'db-2-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780', 'db-3-4on3rt6tpwbnj6zknmg4.graphenedb.com:24780']
On the Prometheus targets states page, you should see:
Metrics reference
On Cluster databases, ONgDB and Neo4j Enterprise metrics will be also available. Please, check Neo4j Operation Manual to get a list of the available metrics.
Grafana dashboard examples
Grafana is a popular and powerful tool to visualize metrics. Please, visit Grafana documentation for more information.
Please, find in the following links some Grafana dashboard exported by our team as a starting point of metrics visualization:
- Single database metrics
- Cluster database nodes metrics
Halin tool
Halin is a cluster-enabled monitoring tool for Neo4j, that provides insights into live metrics, queries, configuration and more.
You just need to visit and enter the parameters shown on the Insights tab.
Please read more on Halin project here.
Updated about a year ago | https://docs.graphenedb.com/docs/insights | 2021-09-16T15:07:17 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['https://files.readme.io/cc4b2f1-prometheus_endpoint_screenshot.png',
'prometheus_endpoint_screenshot.png'], dtype=object)
array(['https://files.readme.io/cc4b2f1-prometheus_endpoint_screenshot.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/ed2972e-prometheus_targets.png',
'prometheus_targets.png'], dtype=object)
array(['https://files.readme.io/ed2972e-prometheus_targets.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/e038189-prometheus_targets_cluster.png',
'prometheus_targets_cluster.png'], dtype=object)
array(['https://files.readme.io/e038189-prometheus_targets_cluster.png',
'Click to close...'], dtype=object) ] | docs.graphenedb.com |
Bryan Sullivan's Web Blog
Thoughts on web application security
REST and XSRF, Part One
Hi everyone. In case you missed my talk at Black Hat, “REST for the Wicked”, I wanted to give you...
Author: bryansul Date: 08/15/2008
Show some respect to XSS
StickyMinds.com has just posted an article of mine on the dangers of XSS. (Although they still have...
Author: bryansul Date: 06/11/2008
SQL injection in classic ASP
In light of the recent wake of SQL injection attacks on ASP sites, I'd like to highlight some...
Author: bryansul Date: 05/30/2008
Web Application Firewalls in Practice - or - Yes, Jeremiah, Secure Software Does Matter
There's been a lot of renewed interest in web application firewalls lately. In the past, I haven't...
Author: bryansul Date: 05/19/2008
Cross-domain XHR will destroy the internet
Ok, maybe “destroy the internet” is a little harsh. But let’s take a look the impact that...
Author: bryansul Date: 04/04/2008
BlueHat shows some love to web app security
If you haven't heard yet, BlueHat v7 is dedicating the entire block of morning sessions to web app...
Author: bryansul Date: 03/24/2008 | https://docs.microsoft.com/en-us/archive/blogs/bryansul/ | 2021-09-16T16:23:45 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.microsoft.com |
Contents:
Contents:
Configure Job
When you are ready to test your recipe against the entire dataset, click Run.
The job is queued up for processing.
You can track progress in the Job Details page.
- If visual profiling was enabled for the job, click the Profile tab.
- When the job is completed, you can access results in the Output destinations tab.
- For more information,.
This page has no comments. | https://docs.trifacta.com/display/SS/Running+Job+Basics | 2021-09-16T15:53:37 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.trifacta.com |
Go to Dashboard >> Appearance >> Customize >> Front page Sections >>Testimonial
Hide/Show Testimonial– Click this setting box to hide and show testimonial section on the home page.
Hide/Show Background Animation Enable – Click this setting box to hide and show background animation in section.
Testimonial Layout – Select testimonial.
Testimonial Items Content
Testimonial Item ( Add itmes ) –
- Image – Upload a testimonial image for testimonial section.
- Title– Enter a text for testimonial title.
- Text – Enter a text for testimonial Description.
- Designation– Enter a designation.
| https://helpdocs.britetechs.com/docs/businesswp-pro/front-page-section/how-to-setup-section-testimonial/ | 2021-09-16T15:29:50 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['https://helpdocs.britetechs.com/wp-content/uploads/2021/08/businesswp-testimonial.png',
None], dtype=object) ] | helpdocs.britetechs.com |
Legacy.
Order refund. Refunds are based on orders (essentially negative orders) and
contain much of the same data.
Handle data for the current customers session.
Implements the WC_Session abstract class. | https://docs.woocommerce.com/wc-apidocs/package-WooCommerce.Classes.html | 2017-08-16T15:16:37 | CC-MAIN-2017-34 | 1502886102307.32 | [] | docs.woocommerce.com |
.
Requires permission to access the GetPolicy action.
Namespace: Amazon.IoT.Model
Assembly: AWSSDK.IoT.dll
Version: 3.x.y.z
The GetPolicyRequest type exposes the following members
.NET Core App:
Supported in: 3.1
.NET Standard:
Supported in: 2.0
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/IoT/TGetPolicyRequest.html | 2021-10-16T00:22:58 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Paginator for the GetBehaviorModelTrainingSummaries operation
Namespace: Amazon.IoT.Model
Assembly: AWSSDK.IoT.dll
Version: 3.x.y.z
The IGetBehaviorModelTrainingSummariesPaginator type exposes the following members
.NET Core App:
Supported in: 3.1
.NET Standard:
Supported in: 2.0
.NET Framework:
Supported in: 4.5 | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/IoT/TIGetBehaviorModelTrainingSummariesPaginator.html | 2021-10-16T01:09:46 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.aws.amazon.com |
Jamf Now Documentation
This guide provides an overview of Jamf Now features and instructions for performing administrative tasks using Jamf Now. To learn more, see the following sections in this guide:
Additional Resources
- Jamf Now Support
- To access Support, log in to your Jamf Now account for chat support or to submit a support ticket. You can also email [email protected].
- Resources on Jamf.com
- Search the Resources area on the Jamf website to access a range of documentation resources including product guides, E-books, white papers, videos, webinars, and more. | https://docs.jamf.com/jamf-now/documentation/Jamf_Now_Documentation.html | 2021-10-15T22:47:43 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.jamf.com |
When deploying the VM-Series Auto Scaling CFT, if the template stack is unable to provision the resources specified in the template, the process automatically rolls back and deletes the resources that were successfully created. Because an initial error can trigger a cascade of additional errors, you need to review the logs to locate the first failure event.
Document:VM-Series Deployment Guide
Troubleshoot the VM-Series Auto Scaling CFT for AWS
Last Updated:
May 1, 2020
Current Version:
7.1 (EoL) | https://docs.paloaltonetworks.com/vm-series/7-1/vm-series-deployment/set-up-the-vm-series-firewall-in-aws/troubleshoot-the-vm-series-auto-scaling-cft-for-aws.html | 2021-10-15T23:45:36 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.paloaltonetworks.com |
Edition, you also need to install and configure Parasoft Team Server.
- We strongly recommend that you configure C++test preferences (for Team Server, task assignment, reporting, etc.) and team Test Configurations as described in the Configuration before you start testing.
- For command line execution, you will need to ensure that the installation directory is on the path, or launch cpptestcli with the full path to the executable (for example, c:\parasoft\c++test\cpptestcli.exe).
- Before you can test code with C++test, it must be added to an Eclipse C/C++ project. For instructions on creating a new project, see Creating a Project.
- Before you perform the initial test, we strongly recommend that you review and modify project options. For details on how to do this, see Local Settings (Options) Files.
- For cpptestcli to email each developer a report that contains only the errors/results related to his or her work, one of the following conditions must be true:
  - You have configured C++test to compute code authorship based on source control data AND your project is under a supported source control system AND each developer's source control username + the mail domain (specified using an options file and the -localsettings option described in -localsettings %LOCALSETTINGS_FILE%) matches the developer's email address.
  - You have configured C++test to compute code authorship based on local user AND each user name + the mail domain (specified using an options file and the -localsettings option described in -localsettings %LOCALSETTINGS_FILE%) matches the developer's email address.
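In local settings terms, these conditions correspond to picking an authorship source and supplying the mail domain used to build each developer's address. A minimal sketch, assuming the standard scope.* and report.mail.* key names (verify them against the settings tables referenced in this guide):

```
# Compute authorship from source control data and e-mail each developer their own results
scope.sourcecontrol=true
scope.local=false
report.mail.enabled=true
report.mail.domain=mycompany.com
```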
Setup Overview
Parasoft C++test CLI can be invoked on the specified project resources. As part of the CLI execution, C++test can perform one or more of the following:
- Static analysis of code, including checks against a configured coding policy, analysis of possible runtime bugs, and metrics analysis.
- Execution of unit tests

C++test can also use your SCM client (if supported) to automatically retrieve file modification information from the SCM system and generate tasks for specific individuals based on results of code analysis and executed tests.
Test Configurations can be sourced from the built-in set, or created using C++test interactive mode in the GUI. It is highly recommended that you do not use the built-in configurations (other than for getting started). We suggest using the built-in configurations as starting templates for customer-specific configurations, which are then stored on disk or on Parasoft Team Server.
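On the command line, the configuration source shows up in the -config argument. The builtin://, user://, and team:// prefixes below are the conventional C++test forms for built-in, locally stored, and Team Server configurations; the configuration names themselves are placeholders:

```
-config builtin://"Recommended Rules"    (a built-in configuration, useful for getting started)
-config user://"My Configuration"        (a user-defined configuration stored on disk)
-config team://"Team Configuration"      (a team configuration stored on Parasoft Team Server)
```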
Preferences can be configured from the C++test GUI. Most of the preference settings can also be supplied with a configuration file that is provided as a parameter to a CLI call. A table of the configuration file preference settings is available in Local Settings (Options) Files. C++test preferences set from the GUI are applied by default. These can be overridden — on an individual basis—by preference values contained in the configuration file used with a given run. This enables you to have a basic set of preferences configured for all CLI runs, and then vary individual settings as necessary by providing an additional configuration file for a specific run with a given Test Configuration. This can be useful, for example, to include different information in reports for different runs, or to change options for email distribution of reports, including report names, email headings, etc.
Step 1: Configure Preferences
- Source Control: Set the options appropriate for your SCM.
- Scope and Authorship: Check the appropriate options for your environment as described in Configuring Task Assignment and Code Authorship Settings.
- Reports: The following options are enabled by default and are a good starting point:
  - Detailed report for developers (includes task breakdown with details).
  - Overview of tasks by authors (summary table).
  - Generate formatted reports in command line mode.
  - Suppressions Details (applies to static analysis only).
- E-mails: Enter settings that will be used to send emails with reports. This needs to be an existing email account on an email server accessible from the C++test test machine.
- Reports> Email Notifications:
  - If desired, enable Send Reports by Email. Regardless of this setting, reports will always be uploaded to Parasoft Team Server for later viewing (controlled by the CLI option). Email distribution will use the settings for E-mails above.
  - Manager reports contain a rollup of all test results generated by C++test. Developer reports contain only results for individual developers. Enable options and specify email addresses accordingly.
If you are not in the same directory as the Eclipse workspace that you want to test, you need to use cpptestcli with the -data option. For example, this Windows command tests the C++test Example project by applying the "My Configuration" Test Configuration, generates a report of results, and saves that report in the c:\reports\Report1 directory:
cpptestcli -data "c:\Documents and Settings\cynthia\ApplicationData\Parasoft\C++test\workspace" -resource "C++test Example" -config user://"My Configuration" -report c:\reports\Report1
If you are in the same directory as the workspace that you want to test, you can call cpptestcli without the -data option. For example, this Windows command tests the C++test Example project by applying the My Configuration Test Configuration, generates a report of results, and saves that report in the c:\reports\Report1 directory:
cpptestcli -resource "C++test Example" -config user://"My Configuration" -report c:\reports\Report1
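If you also want the report uploaded for the team (the CLI-controlled upload mentioned under Email Notifications above), a publishing flag can be added to the same command; the -publishteamserver flag name is an assumption here and should be confirmed against the general options for your C++test version:

```
cpptestcli -resource "C++test Example" -config user://"My Configuration" -publishteamserver -report c:\reports\Report1
```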
cli Options
Available cpptestcli options are listed in the following tables.
General Options
-appconsole stdout|%OUTPUT_FILE% - Redirects C++test's console output to standard output or an %OUTPUT_FILE% file.
Examples:
-appconsole stdout (console redirected to the standard output)
-appconsole console.out (console redirected to the console.out file)
-list-compilers - Prints a list of valid compiler family values.
-list-configs - Prints a list of valid Test Configuration values.
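For example, to check which Test Configuration names are valid before scripting a run (the workspace path is illustrative):

```
cpptestcli -data "c:\workspace" -list-configs
```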
-include %PATTERN%,-exclude %PATTERN% - Specifies files to be included/excluded during testing.
You must specify a file name or path after this option.
Patterns specify file names, with the wildcards *and ? accepted, and the special wildcard ** used to specify one or more path name segments. Syntax for the patterns is similar to that of Ant filesets.
Examples:
-include **/Bank.cpp (test Bank.cpp files)
-include **/ATM/Bank/*.cpp (test all .cpp files in the folder ATM/Bank)
-include c:/ATM/Bank/Bank.cpp (test only the c:/ATM/Bank/Bank.cpp file)
-exclude **/internal/** (test everything except classes that have a path with the folder "internal")
-exclude **/*Test.cpp (test everything except files that end with Test.cpp)
Additionally if a pattern is a file with a .lst extension, it is treated as a file with a list of patterns.
For example, if you use -include c:/include.lst and include.lst contains the following (each line is treated as single pattern):
**/Bank.cpp
**/ATM/Bank/*.cpp
c:/ATM/Bank/Bank.cpp
then it has the same effect as specifying:
-include **/Bank.cpp -include **/ATM/Bank/*.cpp
-include c:/ATM/Bank/Bank.cpp
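These pattern options can be combined with the other cpptestcli options shown earlier in a single invocation. The sketch below is only an illustration; the workspace path, project name, Test Configuration, and report directory are placeholders, not required values:
cpptestcli -data "c:\workspaces\cpptest" -resource "C++test Example" -config user://"My Configuration" -include **/ATM/Bank/*.cpp -exclude **/*Test.cpp -report c:\reports\Report2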
Options files can control report settings, Parasoft DTP settings, error authorship settings, Team Server settings, and more. You can create different options files for different projects, then use the
-localsettings option to indicate which file should be used for the current command line test.
Each options file must be a simple text file. There are no name or location requirements. Each setting should be entered on a single line.
If a parameter is specified in this file and there is an equivalent parameter in the GUI’s Preferences panel (available from Parasoft > Preferences), the parameter set in this file will override the related parameter specified from the GUI. If a parameter is not specified in this file, C++test will use the equivalent parameter specified in the GUI.
Any options for creating or importing projects are valid only when creating or importing the project. They are ignored during subsequent runs.
- The repository(n).vss.ssdir property should contain the VSS database path (for example, where the project $/SomeProject has the working directory C:\TEMP\VSS\SomeProject and its subproject $/SomeProject/SomeSubProject has the working directory D:\SomeSubProject).
Settings for Creating BDF-Based Projects
Settings for Importing Green Hills .gpj Projects
Settings for Importing IAR Embedded Workbench .ewp Projects
Settings for Importing Microsoft Visual Studio 6.0 .dsp Projects
For example, if the folder C:\temp\sources should be linked in an imported project and we have defined the path variable
DEVEL_ROOT_DIR with the value C:\temp, then that folder will be linked as
DEVEL_ROOT_DIR/sources and the DEVEL_ROOT_DIR path variable will be created in the workspace. If such a variable cannot be used (for example, because its value points to another folder that does not contain the C:\temp\sources folder, it is already defined and has a different value, or it has an invalid value), then the C:\temp\sources folder will be linked using the full path C:\temp\sources.
Settings for Importing Keil uVision Projects
Settings for Importing Renesas High-performance Embedded Projects
Miscellaneous Settings
Here is one sample options file named
local.properties:
#
The following variables can be used in reports, e-mail, Parasoft DTP, Team Server, and license settings. Note "..."
workspace_name
example: ${workspace_name}
Outputs an empty string.
config_name
example: ${config_name}
Outputs the name of the executed Test Configuration; applies only to Reports and Email settings.
analysis_type
example: ${analysis_type}
Outputs a comma-separated list of enabled analysis types (for example: Static, Generation, Execution); applies only to Reports and Email settings.
tool_name
example: ${tool_name}
Outputs the tool name (for example: C++test).
Example localsettings file
# if the current installation is connected to Parasoft Project Center.
concerto.reporting=true
#Specifies the host name of the Parasoft DTP
#local settings file that specified an attribute such as project:projname2.
concerto.user_defined_attributes=Type:Nightly;Project:Project1
# Determines whether the results sent to Parasoft
#cpptest.license.autoconf.timeout=40
cpptest.license.use_network=true
cpptest.license.network.host=license_server.domain.com
cpptest.license.network.port=2222
cpptest.license.network.edition=automation_edition
# SOURCE CONTROL
scontrol.rep1.type=cvs
scontrol.rep1.cvs.root=:pserver:[email protected]_server.domain.com:/home/cvs/
scontrol.rep1.cvs.pass=mypassword
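To apply a file like this in a command line run, pass it with the -localsettings option described above. The sketch below is illustrative only; the workspace path, project name, Test Configuration, and report directory are placeholders:
cpptestcli -data "c:\workspaces\cpptest" -resource "C++test Example" -config user://"My Configuration" -localsettings local.properties -report c:\reports\Report1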
Using the cli with an Eclipse-Based Builder
The -
buildscript %SCRIPT_FILE% option executes the specified Eclipse build script prior to testing.
In some cases, this may bring up the Eclipse UI. It depends on the Eclipse component capabilities and 3rd party Eclipse source control plugins. Note that if the UI is not opened and the source control is not fully configured, then the 3rd party Eclipse source control plugins may fail in headless mode. They may fail silently or throw various exceptions while trying to access a UI that is not available. To prevent this, have source control fully configured and ensure that it does not need to ask the user for any additional information (username, passwords, etc.)
The following scripting language can be used to define the script...
Syntax
Commands are entered one per line. Whitespace at the beginning or end of the line is trimmed. Any blank line is ignored. Everything following a # comment symbol in a line is ignored. Commands consist of a command name and one or more arguments.
Substrings of the form
$(key) are recursively expanded as macros in arguments.
Commands
Macros
Strings of the form
$(key) in command arguments are expanded. The values used can be from previous var commands or from System properties. System properties can be predefined by Java (e.g.
user.home) or passed into the build by running Eclipse with the
-vmargs -Dkey=value parameter. | https://docs.parasoft.com/display/CPPDESKE1042/Testing+from+the+Command+Line+Interface | 2021-10-15T22:49:58 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.parasoft.com |
Important: SQL Sentry versions 2021.8 or later are licensed with a SolarWinds license through the SolarWinds License Manager. SQL Sentry versions before 2021.8 are licensed with a SentryOne license.
Licensing
SQL Sentry is generally licensed per individually monitored target. This includes products for SQL Server, SQL Server Analysis Services, and Azure SQL Database. No additional licensing is needed for the SQL Sentry Portal, clients, or monitoring services installed in your environment. You may install as many of these as needed with your SQL Sentry license.
Important: To host the SQL Sentry database in an Availability Group, the Monitoring Service and SQL Sentry Client Connection(s) must use the Listener name. Ensure that the Monitoring Service and any SQL Sentry Client connections are using the Listener name before activating your license key.
To host the SQL Sentry database in a Failover Cluster Instance (FCI), the Monitoring Service and SQL Sentry Client Connection(s) must use the virtual cluster name. Ensure that the Monitoring Service and any SQL Sentry Client connections are using the virtual cluster name before activating your license key.
Additional Information: For more information about adding your SQL Sentry Database to an Availability Group, see the Adding the SQL Sentry Database to an Availability Group article.
Additional Information: See the product pricing and SQL Sentry licensing options section of the product page for more information on licensing models and what is included with each option such as monitoring of the Hyper-V or VMware host.
Free SQL Sentry License for Monitoring the SQL Sentry Database Instance
It's possible to obtain a free license of SQL Sentry for monitoring the SQL Server instance that contains the SQL Sentry database. The database license is free to monitor regardless of how many licenses you already have.
Note: The free license doesn't appear in the license count in the Help > About window, regardless of whether the free license is used.
Success: There are no annual software maintenance costs for the free license, and the license is perpetual.
SQL Sentry License Usage for VMware and Windows Hyper-V
It's possible to monitor VMware vCenter hosts, Windows Servers, and Hyper-V hosts with SolarWinds SQL Sentry. Each SQL Sentry license lets you watch up to three times the SQL Sentry license quantity across VMware vCenter hosts, Windows Servers, and Hyper-V hosts. For example, if you have 3 SQL Sentry licenses, you can watch 9 VMware vCenter hosts, Windows Servers, and Hyper-V hosts in any combination.
Additional Information: See the SQL Sentry VMware and Windows Hyper-V product pages for more information about the subscription licensing.
SQL Sentry License Usage Overview
View additional license information, such as the number of licenses applied throughout the environment, in the About SolarWinds SQL Sentry window.
Access the About SolarWinds SQL Sentry window by selecting Help > About.
Inventory View
To get an overview of how your licenses are applied throughout your environment, you can also select the Inventory node (Navigator > Configuration > Inventory). See the Inventory View article for instructions.
SolarWinds License Manager for SQL Sentry Online Activation
Use the SolarWinds License Manager to activate a new license key or upgrade the existing one to watch a different number of targets. The license manager can be launched from within the SQL Sentry client.
- Open Windows Services on the monitoring service machine(s) and then stop the SentryOne Monitoring Service.
- In SQL Sentry, go to Help > Manage Licenses:
- The License Manager screen appears for SolarWinds SQL Sentry.
Important: If you have an existing license, the Action will have an Upgrade option. If you have an evaluation license or just upgraded from a SentryOne-licensed version of SQL Sentry and need to apply a SolarWinds-branded license, the Action will have an Activate option.
- To apply a license, use the Upgrade or Activate option. If you have internet access, you can use the first option to enter the activation key obtained from the SolarWinds Customer Portal. There are additional options to use a specific proxy server or perform a manual activation if there is no internet access. Select Next to continue.
- Enter your information (name, email, and phone number) to register SQL Sentry. Select Next to continue.
- You should see a message that reads, "SolarWinds SQL Sentry is now licensed and activated! Your license has been imported successfully." Select Finish to return to the License Manager screen.
- Select Exit to return to the SQL Sentry client.
- Start the SentryOne Monitoring Service.
SolarWinds License Manager for SQL Sentry Offline Activation
Use the SolarWinds License Manager to activate a new license key or upgrade the existing one to watch a different number of targets. Activate your license on an offline machine by completing the following steps:
1. Open Windows Services on the monitoring service machine(s) and then stop the SentryOne Monitoring Service.
2. In SQL Sentry, go to Help > Manage Licenses to open the SolarWinds SQL Sentry License Manager.
3. To apply a license, use the Upgrade or Activate option.
4. Select the This server does not have internet access option and then select Next to continue.
6. Complete the steps on the Activate Product page:
1. Select Copy Unique Machine ID and then paste the id into a text editor. Save the .txt document, and then move the document onto a machine with internet access.
7. Select Browse, select the location of your license key file, and then select Next to continue.
8. Select Finish to activate your license.
9. Select Exit to return to the SQL Sentry client, and then start the SentryOne Monitoring Service.
Applying Multiple Licenses to the Same Installation
1. Open Windows Services on the monitoring service machine(s) and then stop the SentryOne Monitoring Service.
2. In SQL Sentry, go to Help > Manage Licenses to open the License Manager.
3. Select Upgrade for the installed license key.
4. Enter the applicable license key and then select Next.
5. Verify your user information on the product registration page, and then select Next.
6. Select Finish to return to the License Manager.
7. Repeat these steps for any additional licenses.
8. Start the SentryOne Monitoring Service.
Success: New licenses will display in the list for the License Manager.
SentryOne License Management for SQL Sentry
Deprecated: This section applies only to SentryOne-branded versions of SQL Sentry (earlier than 2021.8).
The Hardware Key
Your SentryOne SQL Sentry license has a hardware key that's tied to the location of your SQL Sentry database (denoted by the SQL Server instance name). If you decide to relocate the SQL Sentry database, this hardware key can be updated through the SentryOne Customer Portal by any authorized account holder (see the SentryOne Account Management article). Contact support if you have any issues. For more information about moving the SQL Sentry database, see the Relocating the SQL Sentry Database topic.
An exception to this process is when the SQL Sentry database is part of an Availability Group. To allow licensing to be aware of the SQL Sentry database and to continue working, the hardware key must be tied to the Availability Group ID. You can get this ID by executing the following query against the instance that will be hosting the SQL Sentry database:
SELECT group_id as AG_ID, name FROM sys.availability_groups
For more information, see the Hosting SQL Sentry Database On An Availability Group article.
APS and DW Sentry Licensing
EOL: These products (APS Sentry and DW Sentry) are no longer for sale.
The licensing for Microsoft Analytics Platform System (APS) and SQL Data Warehouse (SQL DW) differs from the traditional SQL Sentry licensing model. APS is licensed by the number of Compute nodes on the target, and SQL DW is a flat monthly subscription with an annual term.
Important: If the number of Compute nodes exceeds the number of license units, monitoring will be suspended until additional license units are applied. For example, if a SQL DW target with six compute nodes is being monitored with a six node license, and then two additional nodes are added to the SQL DW environment to accommodate increased activity, monitoring will stop until an updated license that includes additional license units is applied.
Applying a New SentryOne License
When your license is near expiration, a notification that your license is about to expire displays on the SQL Sentry client status bar. Manage your license within the SQL Sentry client by completing the following:
- Select Help on the client toolbar, and then select Manage License to open the License Entry dialog box.
- Select Edit to change your license information. Select Clear to erase the current license from the text box.
- Copy your new product license from the SentryOne customer portal, and then paste the text in the text-box, or drag-and-drop a license file into the space provided. Select Save to save the changes to your license, and then select OK to close the License Entry dialog box. This opens the License Change Detected prompt.
- Select OK to restart the application with your new license.
Success: You have now successfully updated your SentryOne SQL Sentry License!
SentryOne Licensing Errors
License Key Mismatch
Receiving this error while applying the license key indicates that the hardware key of the license doesn't match the name of the server currently housing the SQL Sentry database. This typically happens when you need to migrate your SQL Sentry Database to a new instance of SQL Server, or to a new server altogether.
Note: To get the instance name, connect to the server where the SQL Sentry database will reside and run the following query:
SELECT serverproperty('servername')
Resolve this error by logging into your Customer portal account, and then update the instance name. To update the hardware key of the license in the Customer portal complete the following steps:
- Scroll to the Licenses section at the bottom of the Customer portal.
- Select the license you'd like to modify from the Perpetual License list on the left.
- In the Update Server Name section, enter the name of the SQL Server instance housing the SQL Sentry database, and the reason for making this change.
- Select Update to update the license. In the License Key form, choose to email the new license key or copy it to the clipboard.
Product Version Mismatch
Receiving this error indicates that the version number of the license that you're applying isn't valid for the version of SQL Sentry that you're running.
Note: This error message may be encountered when you try to apply a license after a major version upgrade because the version numbers of the license keys are incremented for each major version release.
Resolve this issue by logging into your Customer portal, and then update the license version. To upgrade the license key in the Customer portal, complete the following steps:
- Go to the Available Upgrades section in the portal.
- Select the license key(s) that you'd like to upgrade, and then select Upgrade.
Note: Once the license is upgraded, the new license key is emailed to you.
Invalid Signature
Receiving this error indicates that the license key has been modified.
Note: Adding an extra character or space in the license key triggers this error message.
Resolve this issue by selecting the original license key attachment from the licensing email and reapplying it, or select the original license key from the Customer portal and reapply it.
Invalid License Schema
Receiving this error indicates that the license key has been modified. This error is specific to the license key being modified by an email-security system.
Resolve this issue by selecting the original license key attachment from the licensing email and reapplying it, or select the original license key from the Customer portal and reapply it.
Changing the credentials of the SQL Sentry Monitoring Service Account
To update the account credentials used for the monitoring service, see the Monitoring Service Logon Account article for instructions on using the Service Configuration Utility.
Hosting the SentryOne Database On An Availability Group
Hosting the SQL Sentry database on an availability group gives you a License Key Mismatch error because SQL Sentry recognizes that the SQL Sentry database is part of an Availability Group and is looking for the Availability Group ID, rather than the server currently hosting the database.
To allow licensing to be aware of SQL Sentry database movements and continue working, the Hardware Key must be tied to the Availability Group ID.
You can get this ID by executing the below query against the instance that will be hosting the SQL Sentry database:
SELECT group_id as AG_ID, name FROM sys.availability_groups
Next, update the hardware key through the Customer portal.
Important: When running the client, use the Listener name for the Server Name. This allows the Monitoring Service to connect regardless of where the database is currently hosted.
| https://docs.sentryone.com/help/license-management | 2021-10-16T00:41:36 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/6095872bec161c432b63fd2f/n/sql-sentry-about-solarwinds-license-usage-v2021-8.png',
'SentryOne About SentryOne window Version 2020.0 About SolarWinds SQL Sentry window showing 11 of 50 licenses used'],
dtype=object)
Version Control
The color of the square icon in front of each Model defines the status of that Model in terms of change:
Conflict
In case you have uncommitted changes in a model and you try to update it, you may have a Conflict. You will see this message:
After this, you will see this message on your Navigator until you resolve the conflict(s).
Click on it and the Navigator's filter will be set to Conflicted.
Now, with a right click on the model, you can go to Version Control and select one of the following options:
- Manually Resolve (View the conflicts, decide on how to resolve them, apply the resolution manually)
- Resolve Using Mine (Your changes will overwrite the update's changes)
- Resolve Using Theirs (Your changes will be lost)
Warning
You may have uncommitted changes in a model and update it without a conflict if your changes are not in the same spot as the update's changes.
[ aws . mediaconvert ]
Retrieve a JSON array of up to twenty of your queues. This will return the queues themselves, not just a list of them. To retrieve the next twenty queues, use the nextToken string returned with the array.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-queues [--list-by <value>] [--order <value>] [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] [--generate-cli-skeleton <value>]
--list-by (string) Optional. When you request a list of queues, you can choose to list them alphabetically by NAME or chronologically by CREATION_DATE. If you don't specify, the service will list them by creation date.
Possible values:
- NAME
- CREATION_DATE
--order (string) Optional. When you request lists of resources, you can specify whether they are sorted in ASCENDING or DESCENDING order. Default varies by resource.
Examples
The following list-queues example lists all of your MediaConvert queues.
aws mediaconvert list-queues \ --endpoint-url
Output:
{ "Queues": [ { "PricingPlan": "ON_DEMAND", "Type": "SYSTEM", "Status": "ACTIVE", "CreatedAt": 1503451595, "Name": "Default", "SubmittedJobsCount": 0, "ProgressingJobsCount": 0, "Arn": "arn:aws:mediaconvert:us-west-2:123456789012:queues/Default", "LastUpdated": 1534549158 }, { "PricingPlan": "ON_DEMAND", "Type": "CUSTOM", "Status": "ACTIVE", "CreatedAt": 1537460025, "Name": "Customer1", "SubmittedJobsCount": 0, "Description": "Jobs we run for our cusotmer.", "ProgressingJobsCount": 0, "Arn": "arn:aws:mediaconvert:us-west-2:123456789012:queues/Customer1", "LastUpdated": 1537460025 }, { "ProgressingJobsCount": 0, "Status": "ACTIVE", "Name": "transcode-library", "SubmittedJobsCount": 0, "LastUpdated": 1564066204, "ReservationPlan": { "Status": "ACTIVE", "ReservedSlots": 1, "PurchasedAt": 1564066203, "Commitment": "ONE_YEAR", "ExpiresAt": 1595688603, "RenewalType": "EXPIRE" }, "PricingPlan": "RESERVED", "Arn": "arn:aws:mediaconvert:us-west-2:123456789012:queues/transcode-library", "Type": "CUSTOM", "CreatedAt": 1564066204 } ] }
For more information, see Working with AWS Elemental MediaConvert Queues in the AWS Elemental MediaConvert User Guide.
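If you only need a few fields from this output, the standard AWS CLI --query option (a global JMESPath filter, not specific to MediaConvert) can trim the response. The endpoint URL below is a placeholder; use the account-specific endpoint returned by aws mediaconvert describe-endpoints:
aws mediaconvert list-queues \
    --endpoint-url https://abcd1234.mediaconvert.us-west-2.amazonaws.com \
    --query 'Queues[].{Name:Name,Status:Status,PricingPlan:PricingPlan}'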
Output
NextToken -> (string)
Use this string to request the next batch of queues.
Queues -> (list)
List of queues.
(structure)
You can use queues to manage the resources that are available to your AWS account for running multiple transcoding jobs at the same time. If you don't specify a queue, the service sends all jobs through the default queue. For more information, see.
Arn -> (string) An identifier for this resource that is unique within all of AWS.
CreatedAt -> (timestamp) The timestamp in epoch seconds for when you created the queue.
Status -> (string) Specifies whether the pricing plan for your reserved queue is ACTIVE or EXPIRED.
Status -> (string) Queues can be ACTIVE or PAUSED. If you pause a queue, the service won't begin processing jobs in that queue. Jobs that are running when you pause the queue continue to run until they finish or result in an error.
SubmittedJobsCount -> (integer) The estimated number of jobs with a SUBMITTED status.
Type -> (string) Specifies whether this on-demand queue is system or custom. System queues are built in. You can't modify or delete system queues. You can create and modify custom queues.
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.