content: string (0 – 557k chars)
url: string (16 – 1.78k chars)
timestamp: timestamp[ms]
dump: string (9 – 15 chars)
segment: string (13 – 17 chars)
image_urls: string (2 – 55.5k chars)
netloc: string (7 – 77 chars)
Wing Wing Z-84 Pixracer Build The Wing Wing Z-84 is our gold standard airframe: Small, rugged and just large enough to host a Pixracer. Key information: - Frame: Wing Wing Z-84 - Flight controller: Pixracer Parts List Z-84 Plug n' Fly (PNF/PNP) or Kit One of these: PNF (or "PNP") versions include motor, propeller and electronic speed controller. The "kit" version does not include these components, which must be purchased separately. Electronic Speed Controller (ESC) One of these (any small (>=12A) ESC will do): - Blue Series 12A ESC (Hobbyking) - Lumenier Regler 30A BLHeli_S ESC OPTO (GetFPV) Autopilot and Essential Components - Pixracer kit (including GPS and power module) - FrSky D4R-II receiver or equivalent (jumpered to PPM sum output according to its manual) - Mini telemetry set for HKPilot32 - Digital airspeed sensor for HKPilot32 / Pixfalcon - 1800 mAh 2S LiPo Battery - e.g. Team Orion 1800mAh 7.4V 50C 2S1P Recommended spare parts Wiring The wiring below is valid for Pixhawk and Pixracer. Use the main outputs, not the ones labeled with AUX. The motor controller needs to have an in-built BEC, as the autopilot is not powering the servo rail. Build Log The images below give a rough idea about the assembly process, which is simple and can be done with a hot glue gun. Airframe Configuration Select the Z-84 in the flying wing section of the QGC airframe config.
http://docs.px4.io/master/zh/frames_plane/wing_wing_z84.html
2019-12-05T17:05:51
CC-MAIN-2019-51
1575540481281.1
[array(['../../images/wing_wing_build11.jpg', 'Wing Wing Z-84 build'], dtype=object) array(['../../images/wing_wing_build01.jpg', 'wing\\_wing\\_build01'], dtype=object) array(['../../images/wing_wing_build02.jpg', 'wing\\_wing\\_build02'], dtype=object) array(['../../images/wing_wing_build03.jpg', 'wing\\_wing\\_build03'], dtype=object) array(['../../images/wing_wing_build04.jpg', 'wing\\_wing\\_build04'], dtype=object) array(['../../images/wing_wing_build09.jpg', 'wing\\_wing\\_build09'], dtype=object) array(['../../images/wing_wing_build11.jpg', 'Wing Wing Z-84 build'], dtype=object) array(['../../images/qgc_firmware_flying_wing_west_wing.png', 'QGC - select firmware for West Wing'], dtype=object) ]
docs.px4.io
Adding an image to an article From Joomla! Documentation Images are added to articles using the Image button below the content editor window in the Edit Article screen. Note: It is possible to insert images using the editor in Joomla. However, this feature provides a simple way of inserting images stored in the images directory of your Joomla installation. - Open the article for editing either by clicking the Content → Article Manager menu item to go to the Article Manager, selecting the article, and clicking Edit in the toolbar, or by clicking the article's title in the Article Manager. - Place the cursor where the image should appear, click the Image button below the editor, select the desired image, and click on the Insert button. - Image Float: Sets the image alignment. By default, the align attribute is Not Set. - Caption: Enables the caption which displays the Image Title below the image. - Caption Class: Applies the entered class to the figcaption element. - Click the Insert button to insert the image. The Insert Image screen will close and the image will be displayed in the editor. To upload a new image instead: - The selected file(s) appear at the bottom of the Insert Image screen. Click Start Upload to begin uploading files. - When the upload is complete a green notice will appear at the top of the screen. - You may now select and insert the uploaded image as before.
https://docs.joomla.org/Adding_an_image_to_an_article/en
2019-12-05T17:55:48
CC-MAIN-2019-51
1575540481281.1
[]
docs.joomla.org
Portal:Administrators/Content Management From Joomla! Documentation Article Management Help Screens
https://docs.joomla.org/Portal:Administrators/Content_Management/en
2019-12-05T17:22:14
CC-MAIN-2019-51
1575540481281.1
[array(['/images/4/4d/Compat_icon_3_x.png', 'Joomla 3.x'], dtype=object)]
docs.joomla.org
XmlWriter.WriteBinHexAsync(Byte[], Int32, Int32) Method Definition Asynchronously encodes the specified binary bytes as BinHex and writes out the resulting text. public: virtual System::Threading::Tasks::Task ^ WriteBinHexAsync(cli::array <System::Byte> ^ buffer, int index, int count); public virtual System.Threading.Tasks.Task WriteBinHexAsync (byte[] buffer, int index, int count); abstract member WriteBinHexAsync : byte[] * int * int -> System.Threading.Tasks.Task override this.WriteBinHexAsync : byte[] * int * int -> System.Threading.Tasks.Task Public Overridable Function WriteBinHexAsync (buffer As Byte(), index As Integer, count As Integer) As Task Remarks This is the asynchronous version of WriteBinHex, with the same functionality. To use this method, you must set the Async flag to true.
https://docs.microsoft.com/en-au/dotnet/api/system.xml.xmlwriter.writebinhexasync?view=netframework-4.8
2019-12-05T17:45:18
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Type.MemberType Property Definition Gets a MemberTypes value indicating that this member is a type or a nested type. public: virtual property System::Reflection::MemberTypes MemberType { System::Reflection::MemberTypes get(); }; public override System.Reflection.MemberTypes MemberType { get; } member this.MemberType : System.Reflection.MemberTypes Public Overrides ReadOnly Property MemberType As MemberTypes Property Value A MemberTypes value indicating that this member is a type or a nested type. Implements Examples The following code example shows the MemberType field as a parameter to the GetMember method: array<MemberInfo^>^ others = t->GetMember( mi->Name, mi->MemberType, (BindingFlags)(BindingFlags::Public | BindingFlags::Static | BindingFlags::NonPublic | BindingFlags::Instance) ); MemberInfo[] others = t.GetMember(mi.Name, mi.MemberType, BindingFlags.Public | BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.Instance); Dim others As MemberInfo() = t.GetMember(mi.Name, mi.MemberType, _ BindingFlags.Public Or BindingFlags.Static Or BindingFlags.NonPublic _ Or BindingFlags.Instance) Remarks This property overrides MemberInfo.MemberType. Therefore, when you examine a set of MemberInfo objects - for example, the array returned by GetMembers - the MemberType property returns MemberTypes.NestedType when a given member is a nested type; otherwise, it returns MemberTypes.TypeInfo.
https://docs.microsoft.com/en-gb/dotnet/api/system.type.membertype?view=netframework-4.7.2
2019-12-05T18:16:48
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Create a Database User This topic describes how to create a database user mapped to a login in SQL Server 2012 by using SQL Server Management Studio or Transact-SQL. In This Topic Before you begin: Background, Security To create a database user, using: SQL Server Management Studio, Transact-SQL Before You Begin Background Security Permissions Requires ALTER ANY USER permission on the database. Using SQL Server Management Studio Using Transact-SQL To create a database user In Object Explorer, connect to an instance of Database Engine. On the Standard bar, click New Query. Copy and paste the following example into the query window and click Execute. -- Creates the login AbolrousHazem with password '340$Uuxwp7Mcxo7Khy'. CREATE LOGIN AbolrousHazem WITH PASSWORD = '340$Uuxwp7Mcxo7Khy'; GO -- Creates a database user for the login created above. CREATE USER AbolrousHazem FOR LOGIN AbolrousHazem; GO For more information, see CREATE USER (Transact-SQL). See Also Concepts Principals (Database Engine)
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2012/aa337545%28v%3Dsql.110%29
2019-12-05T18:31:40
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Legislator: Theme Installation 1. Install & Activate the Theme In the WordPress dashboard, upload the theme zip file from Appearance > Themes 2. Install & Activate Recommended Plugins 3. Import the Widgets Import the 'legislator-widgets.wie' file from the 'demo-content' folder. 4. Configure the Theme Set the remaining options in the theme options panel: Appearance > Theme Options
https://docs.rescuethemes.com/article/139-legislator-theme-installation
2019-12-05T18:15:58
CC-MAIN-2019-51
1575540481281.1
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/5578735ae4b01a224b42a62d/file-lKwUPR73Uu.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/55787381e4b027e1978e6a03/file-IqSfxJpVtk.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/5578738fe4b01a224b42a630/file-I0b6BgUxY2.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/557873cbe4b01a224b42a634/file-6z07BV9chH.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/557873eae4b01a224b42a636/file-an0lHx5DuC.png', None], dtype=object) array(['https://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/557599e0e4b01a224b42989e/file-hxvL3pPF4S.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/55787475e4b027e1978e6a0c/file-670JT6yBWa.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/5578748de4b01a224b42a63a/file-Kxc7FEiDaZ.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/5578749ce4b01a224b42a63b/file-PhS4gQES6m.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55708d6ee4b01a224b428e9d/images/5579d287e4b01a224b42ac5f/file-vxbbmXcJYC.png', None], dtype=object) ]
docs.rescuethemes.com
Your plugin's setup_plugin_for_backup command must exit with a value of 0 on success, non-zero if an error occurs. In the case of a non-zero exit code, gpbackup displays the contents of stderr to the user.
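As an illustration of that exit-code contract only, here is a minimal Python sketch of a hypothetical plugin command; the argument handling and config validation are assumptions made for the example, not part of the gpbackup plugin API specification.

```python
#!/usr/bin/env python3
"""Hypothetical gpbackup plugin command, sketched only to show the contract
described above: exit 0 on success, exit non-zero on error, and write the
error details to stderr so gpbackup can display them to the user."""
import sys


def setup_plugin_for_backup(config_path):
    # Illustrative setup work: fail if the plugin config file is missing or empty.
    with open(config_path) as f:
        if not f.read().strip():
            raise ValueError("plugin config file %s is empty" % config_path)


if __name__ == "__main__":
    try:
        setup_plugin_for_backup(sys.argv[1])
    except Exception as exc:
        # gpbackup shows stderr contents when the exit code is non-zero.
        print("setup_plugin_for_backup failed: %s" % exc, file=sys.stderr)
        sys.exit(1)
    sys.exit(0)  # success
```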
http://docs.greenplum.org/6-1/admin_guide/managing/plugin_api/setup_plugin_for_backup.html
2019-12-05T16:53:13
CC-MAIN-2019-51
1575540481281.1
[]
docs.greenplum.org
Historically, the Varnish Agent has been developed in close relation with Varnish. As the API changed over time, the agent had to be updated accordingly. Today, the agent has three implementations. The very first one was written in Perl and addressed Varnish while it was still in its rather early stages. This implementation is quite old and disregarded, therefore the current implementations in use today are the two below: The 4.X implementation is the most popular one; it is open source and has the widest usage. It was developed in C and is compatible with the Varnish 4.X version. The 6.X implementation is newer, and its main driver was to bypass the dependency on a specific Varnish version. It was written in Golang and is a closed-source implementation. This version is far friendlier in terms of compatibility and works with older Varnish versions such as 5.X and 4.X. Regardless of the chosen implementation, the VAC will work with either of the two. Both implementations respect the REST interface that the VAC requires.
https://docs.varnish-software.com/varnish-administration-console/varnish-agent/about/
2019-12-05T18:35:36
CC-MAIN-2019-51
1575540481281.1
[]
docs.varnish-software.com
Menus Menu Item User Profile Edit/de
https://docs.joomla.org/Help39:Menus_Menu_Item_User_Profile_Edit/de
2019-12-05T18:16:57
CC-MAIN-2019-51
1575540481281.1
[]
docs.joomla.org
Summary of Chapter 7. XAML vs. code Note Notes on this page indicate areas where Xamarin.Forms has diverged from the material presented in the book. Xamarin.Forms supports an XML-based markup language called the Extensible Application Markup Language or XAML (pronounced "zammel"). XAML provides an alternative to C# in defining the layout of the user interface of a Xamarin.Forms application, and in defining bindings between user-interface elements and underlying data. Properties and attributes Xamarin.Forms classes and structures become XML elements in XAML, and properties of these classes and structures become XML attributes. To be instantiated in XAML, a class must generally have a public parameterless constructor. Any properties set in XAML must have public set accessors. For properties of the basic data types ( string, double, bool, and so forth), the XAML parser uses the standard TryParse methods to convert attribute settings to these types. The XAML parser can also easily handle enumeration types, and it can combine enumeration members if the enumeration type is flagged with the Flags attribute. To assist the XAML parser, more complex types (or properties of those types) can include a TypeConverterAttribute that identifies a class that derives from TypeConverter which supports conversion from string values to those types. For example, the ColorTypeConverter converts color names and strings, such as "#rrggbb", into Color values. Property-element syntax In XAML, classes and the objects created from them are expressed as XML elements. These are known as object elements. Most properties of these objects are expressed as XML attributes. These are called property attributes. Sometimes a property must be set to an object that cannot be expressed as a simple string. In such a case, XAML supports a tag called a property element that consists of the class name and property name separated by a period. An object element can then appear within a pair of property-element tags. Adding a XAML page to your project A Xamarin.Forms Portable Class Library can contain a XAML page when it is first created, or you can add a XAML page to an existing project. In the dialog to add a new item, choose the item that refers to a XAML page, or ContentPage and XAML. (Not a ContentView.) Note Visual Studio options have changed since this chapter was written. Two files are created: a XAML file with the filename extension .xaml, and a C# file with the extension .xaml.cs. The C# file is often referred to as the code-behind of the XAML file. The code-behind file is a partial class definition that derives from ContentPage. At build time, the XAML is parsed and another partial class definition is generated for the same class. This generated class includes a method named InitializeComponent that is called from the constructor of the code-behind file. During runtime, at the conclusion of the InitializeComponent call, all the elements of the XAML file have been instantiated and initialized just as if they had been created in C# code. The root element in the XAML file is ContentPage. The root tag contains at least two XML namespace declarations, one for the Xamarin.Forms elements and the other defining an x prefix for elements and attributes intrinsic to all XAML implementations. The root tag also contains an x:Class attribute that indicates the namespace and name of the class that derives from ContentPage. This matches the namespace and class name in the code-behind file. 
The combination of XAML and code is demonstrated by the CodePlusXaml sample. The XAML compiler Xamarin.Forms has a XAML compiler, but its use is optional based on the use of a XamlCompilationAttribute. If the XAML is not compiled, the XAML file is embedded in the PCL, where it is parsed at runtime. If the XAML is compiled, the build process converts the XAML into a binary form, and the runtime processing is more efficient. Platform specificity in the XAML file In XAML, the OnPlatform class can be used to select platform-dependent markup. This is a generic class and must be instantiated with an x:TypeArguments attribute that matches the target type. The OnIdiom class is similar but used much less often. The use of OnPlatform has changed since the book was published. It was originally used in conjunction with properties named iOS, Android, and WinPhone. It is now used with child On objects. Set the Platform property to a string consistent with the public const fields of the Device class. Set the Value property to a value consistent with the x:TypeArguments attribute of the OnPlatform tag. OnPlatform is demonstrated in the ScaryColorList sample, so called because it contains blocks of nearly identical XAML. The existence of this repetitious markup suggests that techniques should be available to reduce it. The content property attributes Some property elements occur quite frequently, such as the <ContentPage.Content> tag on the root element of a ContentPage, or the <StackLayout.Children> tag that encloses the children of StackLayout. Every class is allowed to identify one property with a ContentPropertyAttribute on the class. For this property, the property-element tags are not required. ContentPage defines its content property as Content, and Layout<T> (the class from which StackLayout derives) defines its content property as Children. These property element tags are not required. The content property of Label is Text. Formatted text The TextVariations sample contains several examples of setting the Text and FormattedText properties of Label. In XAML, Span objects appear as children of the FormattedString object. When a multiline string is set to the Text property, end-of-line characters are converted to space characters, but the end-of-line characters are preserved when a multiline string appears as content of the Label or Label.Text tags. Related links
https://docs.microsoft.com/en-us/xamarin/xamarin-forms/creating-mobile-apps-xamarin-forms/summaries/chapter07
2019-12-05T18:05:28
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Learn to: Almost all the operations in this section are mainly related to Numpy rather than OpenCV. A good knowledge of Numpy is required to write better optimized code with OpenCV. *( Examples will be shown in a Python terminal, since most of them are just single lines of code )* Let's load a color image first: You can access a pixel value by its row and column coordinates. For a BGR image, it returns an array of Blue, Green, Red values. For a grayscale image, just the corresponding intensity is returned. You can modify the pixel values the same way. Warning Numpy is an optimized library for fast array calculations. So simply accessing each and every pixel value and modifying it will be very slow and it is discouraged. Better pixel accessing and editing method: Image properties include the number of rows, columns, and channels; the type of image data; the number of pixels; etc. The shape of an image is accessed by img.shape. It returns a tuple of the number of rows, columns, and channels (if the image is color): The total number of pixels is accessed by img.size: The image datatype is obtained by `img.dtype`: Sometimes, you will have to play with certain regions of images. For eye detection in images, first face detection is done over the entire image. When a face is obtained, we select the face region alone and search for eyes inside it instead of searching the whole image. It improves accuracy (because eyes are always on faces :D ) and performance (because we search in a small area). ROI is again obtained using Numpy indexing. Here I am selecting the ball and copying it to another region in the image: Check the results below: Sometimes you will need to work separately on the B, G, R channels of an image. In this case, you need to split the BGR image into single channels. In other cases, you may need to join these individual channels to create a BGR image. You can do this simply by: Or Suppose you want to set all the red pixels to zero - you do not need to split the channels first. Numpy indexing is faster: Warning cv.split() is a costly operation (in terms of time). So use it only if necessary. Otherwise go for Numpy indexing. If you want to create a border around an image, something like a photo frame, you can use cv.copyMakeBorder(). But it has more applications for convolution operation, zero padding etc. This function takes the following arguments: Below is sample code demonstrating all these border types for better understanding: See the result below. (The image is displayed with matplotlib, so the RED and BLUE channels will be interchanged):
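The tutorial's inline code snippets did not survive extraction, so the following sketch pulls the operations described above into one place, in the tutorial's own Python style. The file name 'messi5.jpg' and the ROI coordinates are illustrative assumptions rather than values taken from this page.

```python
# Minimal sketch of the basic operations described above (assumed sample image).
import numpy as np
import cv2 as cv

img = cv.imread('messi5.jpg')           # load a color image (BGR order)
assert img is not None, "file could not be read"

px = img[100, 100]                      # pixel access: [Blue, Green, Red]
blue = img[100, 100, 0]                 # only the blue channel value
img[100, 100] = [255, 255, 255]         # modify a pixel the same way

print(img.shape)                        # (rows, columns, channels) for a color image
print(img.size)                         # total number of pixels
print(img.dtype)                        # image datatype, e.g. uint8

# Region of interest (ROI): copy one region onto another via Numpy slicing.
roi = img[280:340, 330:390]
img[273:333, 100:160] = roi

# Splitting and merging channels (cv.split is costly; prefer Numpy indexing).
b, g, r = cv.split(img)
img = cv.merge((b, g, r))
img[:, :, 2] = 0                        # set all red pixels to zero without splitting

# Make a border (padding) around the image, e.g. a constant-color frame.
bordered = cv.copyMakeBorder(img, 10, 10, 10, 10,
                             cv.BORDER_CONSTANT, value=[255, 0, 0])
```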
https://docs.opencv.org/master/d3/df2/tutorial_py_basic_ops.html
2019-12-05T18:30:09
CC-MAIN-2019-51
1575540481281.1
[]
docs.opencv.org
type = 'none' Introduction Examples Simple none field 'none_1' => [ 'label' => 'none_1', 'config' => [ 'type' => 'none', ], ], Properties renderType default cols fieldInformation - Datatype - array - Scope - Display - Description - Show information between an element label and the main element input area. Configuration works identically to the "fieldWizard" property; no default configuration exists in the core (yet). In contrast to "fieldWizard", HTML returned by fieldInformation is limited, see the FormEngine docs for more details. format - Datatype - string (keyword) + array - Scope - Display - Description The value of a none-type field is normally displayed as is. It is however possible to format it using this property. The following keywords are available, some having sub-properties. Sub-properties are called with the format. keyword (note the trailing dot), which in itself is an array. - date Formats the value of the field as a date. The default formatting uses PHP's date() function and d-m-Y as the format. Possible options are: - strftime - (boolean) If true, strftime() is used instead of date() for formatting. - option - (string) Format string (i.e. Y-m-d or %x, etc.). - appendAge - (boolean) If true, the age of the value is appended to the formatted output. - datetime - Formats the values using PHP's date() function and H:i d-m-Y as the format. - time - Formats the values using PHP's date() function and H:i as the format. - timesec - Formats the values using PHP's date() function and H:i:s as the format. - year - Formats the values using PHP's date() function and Y as the format. - int Formats the values as an integer using PHP's sprintf() in various bases. The default base is decimal (dec). The base is defined as an option: - base - (string) Defines the base of the value. Possible bases are "dec", "hex", "HEX", "oct" and "bin". - float Formats the values as a real number using PHP's sprintf() and the %f marker. The number of decimals is an option: - precision - (integer) Defines the number of decimals to use (maximum is 10, default is 2). - number Formats the values as a number using PHP's sprintf(). The format to use is an option: - option - (string) A format definition according to PHP's sprintf() function (see the reference). - md5 - Returns the md5 hash of the values. - filesize Formats the values as a file size using \TYPO3\CMS\Core\Utility\GeneralUtility::formatSize(). One option exists: - appendByteSize - (boolean) If true, the original value is appended to the formatted string (between brackets). - user - Calls a user-defined function to format the values. The only option is the reference to the function: - userFunc - (string) Reference to the user-defined function. The function receives the field configuration and the field's value as parameters. - Examples 'aField' => [ 'config' => [ 'type' => 'none', 'format' => 'date', 'format.' => [ 'strftime' => TRUE, 'option' => '%x', ], ], ], 'aField' => [ 'config' => [ 'type' => 'none', 'format' => 'float', 'format.' => [ 'precision' => 8, ], ], ], pass_content - Datatype - boolean - Scope - Display - Description If set, content from the field is directly output in the <input> section as the value attribute. Otherwise, the content is passed through htmlspecialchars(). Be careful when setting this flag, since it allows HTML from the field to be output on the page, thereby creating the possibility of XSS security holes.
https://docs.typo3.org/m/typo3/reference-tca/master/en-us/ColumnsConfig/Type/None.html
2019-12-05T17:41:06
CC-MAIN-2019-51
1575540481281.1
[array(['../../_images/TypeNoneStyleguide1.png', 'Simple none field (none_1)'], dtype=object)]
docs.typo3.org
# How can I control posts on discussions? > [!Alert] Please be aware that not all functionality covered in this and linked articles may be available to you. Discussions are a wonderful tool for opening dialogs with and between students. However, to be truly effective, at times posts, responses, and/or comments may need to be removed for the good of the overall discussion. Users with the Moderator role, when added to a discussion, have the ability to delete posts, responses, and comments from that discussion. All discussions should have at least one Moderator. Users with the Operations Manager role can assign the Moderator role. To add one or more moderators to a discussion: 1. Edit the **Discussion** profile. 1. On the **Moderators** tab, click **Add.** 1. The Choose Moderators dialog will open. This dialog is pre-filtered for users from the Discussion’s organization who have the specific moderator permissions. Use the filters to narrow the results and click **Search**. 1. Select the moderator(s) and click **OK**. 1. Click **Save**. A moderator can access discussions from the Course profile page(s) where the discussion is associated or from Find Discussions. To access from a **Course** profile: 1. Click **Discussion** in the command area. This will take you to the topics and their posts. To access from **Find Discussions**: 1. Click the name of the discussion in the search results. 1. Click **Posts** in the upper right corner to be taken to the topics and their posts. Once in a discussion, to delete a post, response, or comment on the board, click the X to the right of the item. Moderators that have not been assigned to a discussion may be able to access the discussion but will not be able to delete posts, responses, and comments. View of discussion by Moderator assigned to it: View of discussion by Moderator not assigned to it: ## Related Articles For more information on Discussions, please see: - [How do I create a discussion and attach it to a course?](create-discussion.md) - [How can I add a disclaimer to all my discussions?](add-disclaimer.md) - [How do my students and I participate in discussions?](participation.md) - [How can I be notified of activity on a discussion?](admin-follow.md)
https://docs.learnondemandsystems.com/tms/tms-administrators/discussions/add-moderators.md
2019-12-05T18:25:14
CC-MAIN-2019-51
1575540481281.1
[array(['/tms/images/disc-moderator.png', None], dtype=object) array(['/tms/images/disc-non-moderator.png', None], dtype=object)]
docs.learnondemandsystems.com
Improve availability of your application with Azure Advisor Azure Advisor helps you ensure and improve the continuity of your business-critical applications. You can get high availability recommendations by Advisor from the High Availability tab of the Advisor dashboard. Ensure virtual machine fault tolerance To provide redundancy to your application, we recommend that you group two or more virtual machines in an availability set. Advisor identifies virtual machines that are not part of an availability set and recommends moving them into an availability set. This configuration ensures that during either a planned or unplanned maintenance event, at least one virtual machine is available and meets the Azure virtual machine SLA. You can choose to create an availability set for the virtual machine or to add the virtual machine to an existing availability set. Note If you choose to create an availability set, you must add at least one more virtual machine into it. We recommend that you group two or more virtual machines in an availability set to ensure that at least one machine is available during an outage. Ensure availability set fault tolerance To provide redundancy to your application, we recommend that you group two or more virtual machines in an availability set. Advisor identifies availability sets that contain a single virtual machine and recommends adding one or more virtual machines to it. This configuration ensures that during either a planned or unplanned maintenance event, at least one virtual machine is available and meets the Azure virtual machine SLA. You can choose to create a virtual machine or to add an existing virtual machine to the availability set. Use Managed Disks to improve data reliability Virtual machines that are in an availability set with disks that share either storage accounts or storage scale units are not resilient to single storage scale unit failures during outages. Advisor will identify these availability sets and recommend migrating to Azure Managed Disks. This will ensure that the disks of the different virtual machines in the availability set are sufficiently isolated to avoid a single point of failure. Ensure application gateway fault tolerance This recommendation ensures the business continuity of mission-critical applications that are powered by application gateways. Advisor identifies application gateway instances that are not configured for fault tolerance, and it suggests remediation actions that you can take. Advisor identifies medium or large single-instance application gateways, and it recommends adding at least one more instance. It also identifies single- or multi-instance small application gateways and recommends migrating to medium or large SKUs. Advisor recommends these actions to ensure that your application gateway instances are configured to satisfy the current SLA requirements for these resources. Protect your virtual machine data from accidental deletion Setting up virtual machine backup ensures the availability of your business-critical data and offers protection against accidental deletion or corruption. Advisor identifies virtual machines where backup is not enabled, and it recommends enabling backup. Ensure you have access to Azure cloud experts when you need it When running a business-critical workload, it's important to have access to technical support when needed. 
Advisor identifies potential business-critical subscriptions that do not have technical support included in their support plan and recommends upgrading to an option that includes technical support. Create Azure Service Health alerts to be notified when Azure issues affect you We recommend setting up Azure Service Health alerts to be notified when Azure service issues affect you. Azure Service Health is a free service that provides personalized guidance and support when you are impacted by an Azure service issue. Advisor identifies subscriptions that do not have alerts configured and recommends creating one. Configure Traffic Manager endpoints for resiliency Traffic Manager profiles with more than one endpoint experience higher availability if any given endpoint fails. Placing endpoints in different regions further improves service reliability. Advisor identifies Traffic Manager profiles where there is only one endpoint and recommends adding at least one more endpoint in another region. If all endpoints in a Traffic Manager profile that is configured for proximity routing are in the same region, users from other regions may experience connection delays. Adding or moving an endpoint to another region will improve overall performance and provide better availability if all endpoints in one region fail. Advisor identifies Traffic Manager profiles configured for proximity routing where all the endpoints are in the same region. It recommends adding or moving an endpoint to another Azure region. Advisor identifies Traffic Manager profiles configured for geographic routing where there is no endpoint configured to have the Regional Grouping as "All (World)". It recommends changing the configuration to make an endpoint "All (World)". Use soft delete on your Azure Storage Account to save and recover data after accidental overwrite or deletion Enable soft delete on your storage account so that deleted blobs transition to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. Using soft delete allows you to recover if there are accidental deletions or overwrites. Advisor identifies Azure Storage accounts that don't have soft delete enabled and suggests you enable it. Configure your VPN gateway to active-active for connection resiliency In active-active configuration, both instances of a VPN gateway will establish S2S VPN tunnels to your on-premises VPN device. When a planned maintenance event or unplanned event happens to one gateway instance, traffic will be switched over to the other active IPsec tunnel automatically. Azure Advisor will identify VPN gateways that are not configured as active-active and suggest that you configure them for high availability. Use production VPN gateways to run your production workloads Azure Advisor will check for any VPN gateways that are a Basic SKU and recommend that you use a production SKU instead. The Basic SKU is designed for development and testing purposes. Production SKUs offer a higher number of tunnels, BGP support, active-active configuration options, custom IPsec/IKE policy, and higher stability and availability. Repair invalid log alert rules Azure Advisor will detect alert rules that have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. 
Analytics queries may become invalid over time due to changes in referenced resources, tables, or commands. Advisor will recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure. Learn more about troubleshooting alert rules Configure consistent indexing mode on your Cosmos DB collection Azure Cosmos DB containers configured with Lazy indexing mode may impact the freshness of query results. Advisor will detect containers configured this way and recommend switching to Consistent mode. Learn more about indexing policies in Cosmos DB Configure your Azure Cosmos DB containers with a partition key Azure Advisor will identify Azure Cosmos DB non-partitioned collections that are approaching their provisioned storage quota. It will recommend migrating these collections to new collections with a partition key definition so that they can automatically be scaled out by the service. Learn more about choosing a partition key Upgrade your Azure Cosmos DB .NET SDK to the latest version from NuGet Azure Advisor will identify Azure Cosmos DB accounts that are using old versions of the .NET SDK and recommend upgrading to the latest version from NuGet for the latest fixes, performance improvements, and new feature capabilities. Learn more about Cosmos DB .NET SDK Upgrade your Azure Cosmos DB Java SDK to the latest version from Maven Azure Advisor will identify Azure Cosmos DB accounts that are using old versions of the Java SDK and recommend upgrading to the latest version from Maven for the latest fixes, performance improvements, and new feature capabilities. Learn more about Cosmos DB Java SDK Upgrade your Azure Cosmos DB Spark Connector to the latest version from Maven Azure Advisor will identify Azure Cosmos DB accounts that are using old versions of the Cosmos DB Spark connector and recommend upgrading to the latest version from Maven for the latest fixes, performance improvements, and new feature capabilities. Learn more about Cosmos DB Spark connector Enable virtual machine replication Virtual machines that do not have replication enabled to another region are not resilient to regional outages. Replicating virtual machines reduces any adverse business impact during the time of an Azure region outage. Advisor will detect VMs that do not have replication enabled and recommend enabling replication so that in the event of an outage, you can quickly bring up your virtual machines in a remote Azure region. Learn more about virtual machine replication How to access High Availability recommendations in Advisor On the Advisor dashboard, click the High Availability tab. Next steps For more information about Advisor recommendations, see:
https://docs.microsoft.com/en-us/azure/advisor/advisor-high-availability-recommendations
2019-12-05T18:12:19
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
PBC Data Type Set Store An operation to update a set, either on its own (at the bucket/key level) or inside of a map. Request message SetOp { repeated bytes adds = 1; repeated bytes removes = 2; } Set members are binary values that can only be added (adds) or removed (removes) from a set. You can add and/or remove as many members of a set in a single message as you would like.
https://docs.riak.com/riak/kv/2.1.4/developing/api/protocol-buffers/dt-set-store/index.html
2019-12-05T16:50:41
CC-MAIN-2019-51
1575540481281.1
[]
docs.riak.com
GNS3 and VirtualBox Before you run your virtual network under GNS3, make sure you: - Download and install VirtualBox. - Download the VirtualBox OVA image and import all the VMs that you want to run in GNS3 into VirtualBox. - Download and install GNS3. GNS3 overwrites the interface names that are configured in VirtualBox. If you want to use the VMs in VirtualBox and GNS3, consider cloning them first. To run your virtual network under GNS3: Start GNS3. Select GNS3 > Preferences. The Preferences dialog opens. From the left pane, select VirtualBox. In the Path to VBoxManage field, enter the location where VBoxManage is installed. For example: /usr/bin/VBoxManage. From the left pane, select VirtualBox VMs, then click New. The VM list shows the VirtualBox VMs you set up earlier. From the VM list, select the VM that you want to run in GNS3, then click Finish. The VM you selected appears in the center pane. Repeat this step for every VM in the topology that you want to run in GNS3. For the example topology above, the VMs are: Cumulus VX-spine1, Cumulus VX-spine2, Cumulus VX-leaf1 and Cumulus VX-leaf2. Enable GNS3 to work with the network interfaces of the VirtualBox VMs. Configure the network settings for each VM using the GNS3 interface: Select a VM in the center pane, then click Edit. In the VirtualBox VM configuration dialog, click the Network tab. Increase the number of Adapters to 4. Select the Type to be Paravirtualized Network. Select Allow GNS3 to use any configured VirtualBox adapter. Click OK to save your settings and close the dialog. GNS3 overwrites the interface names that are configured in VirtualBox. If you want to use the VMs in VirtualBox, you might want to consider cloning them first. To connect VMs, select the cable icon from the left pane, then select the VMs to connect directly. To do this, select which network interface you want connected for each VM. Cumulus VX-spine1: e1<->e1 Cumulus VX-leaf1 e2<->e1 Cumulus VX-leaf2 Cumulus VX-spine2: e1<->e2 Cumulus VX-leaf1 e2<->e2 Cumulus VX-leaf2 Cumulus VX-leaf1: e1<->e1 Cumulus VX-spine1 e2<->e1 Cumulus VX-spine2 e3<->e0 PC1 (VPCS) Cumulus VX-leaf2: e1<->e2 Cumulus VX-spine1 e2<->e2 Cumulus VX-spine2 e3<->e0 PC2 (VPCS) e1 in GNS3 corresponds to swp1 in Cumulus VX, e2 to swp2, and so on. You can also drag and drop virtual PCs (VPCS) and connect them to the Cumulus VX switch. To open a console to a virtual PC, right click on the VPCS icon and select Console. In the console, you can configure the IP address and default gateway for the VPCS. For example: ip 10.4.1.101/25 10.4.1.1 Start all the VMs. You should be able to ping between the VMs and between the virtual PCs.
https://docs.cumulusnetworks.com/cumulus-vx/Development-Environments/GNS3-and-VirtualBox/
2019-12-05T17:40:39
CC-MAIN-2019-51
1575540481281.1
[]
docs.cumulusnetworks.com
Dsadd quota Applies To: Windows Server 2008, Windows Server 2008 R2 dsadd quota -part <PartitionDN> [-rdn <RelativeDistinguishedName>] -acct <Name> -qlimit <Value> If any value that you use contains spaces, use quotation marks around the text, for example, "CN=DC 2,OU=Domain Controllers,DC=Contoso,DC=Com". Examples To specify a quota of 1000 objects for the configuration partition for user MikeDan, type: dsadd quota -part "CN=configuration,dc=contoso,dc=com" -acct MikeDan -qlimit 1000
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754339%28v%3Dws.10%29
2019-12-05T18:49:32
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Summary of Chapter 9. Platform-specific API calls Note Notes on this page indicate areas where Xamarin.Forms has diverged from the material presented in the book. Note Portable Class Libraries have been replaced by .NET Standard libraries. All the sample code from the book has been converted to use .NET Standard libraries. A library cannot normally access classes in application projects. This restriction seems to prevent the technique shown in PlatInfoSap2 from being used in a library. However, Xamarin.Forms contains a class named DependencyService that uses .NET reflection to access public classes in the application project from the library. The library must define an interface with the members it needs to use in each platform. Then, each of the platforms contains an implementation of that interface. The class that implements the interface must be identified with a DependencyAttribute on the assembly level. The library then calls the generic Get method of DependencyService to obtain an instance of the platform class that implements the interface. Related links
https://docs.microsoft.com/en-us/xamarin/xamarin-forms/creating-mobile-apps-xamarin-forms/summaries/chapter09
2019-12-05T17:40:56
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Learn the Basics Browser Extension Overview The Sametab Browser Extension will change your browser’s new tab. It will make your browser’s new tab dynamic, so that when there’s important new company-wide or team-wide news, you will see it when opening a new tab. Rather than having your team members pulling information from a third-party product or website, Sametab (through its browser extension) allows you to seamlessly push information where your team members already are before they start any kind of activity in their browser: their new tab. How it works Workspace Admins and Workspace Owners are the ones who have writing permissions. They are allowed by the system to publish announcements on Sametab. When they publish a new announcement, Sametab automatically delivers the message to everyone’s browser new tab. Typically this operation is completed in under a minute. People interact with a new announcement by clicking on the card that pops up in their browser new tab. When they click it, they are redirected to their Sametab company workspace so they can read the announcement and (in case you required it) mark it as read. Announcements that need to be Marked as Read ✅ Some announcements will require your readers’ acknowledgment. Those announcements will sit on your team’s browser new tab until they get marked as read by the recipients. (See more about the Read event) Those announcements are removed from the new tab only when the recipient marks them as read. Announcements that don’t need to be Marked as Read 👁️ For some announcements you won’t require your readers’ acknowledgment. Those announcements will stay on your team’s browser new tab until they’re viewed (See more about the View event). When the recipients click and view the announcement, Sametab removes it from their new tab. No further interaction is required from your users. New Tab States - Empty browser’s new tab - Browser’s new tab with one or more announcements How do I deploy the Chrome Extension on my team members’ devices? This operation is typically done during the onboarding phase. If you’re just interested in the Deploy phase, check out this quick tutorial we made to let you deploy the Sametab Chrome Extension to everyone in your Company or Department using Google Apps for Business. Cross browser compatibility Q&A Can I force the installation of the Chrome Extension in my team? Sure, but you’ll need to use Google Apps for Business or an MDM automated deploy system. Read our guide to learn more about the subject. Can I use Sametab without the Chrome Extension? While we recommend using Sametab with the Chrome Extension, you can definitely use Sametab even without it. Besides announcements, what else is the browser extension useful for? Time zones, bookmarks, quotes and more. If I don’t pay for Sametab, will my browser extension be less capable? No. You’re going to have fewer functionalities on the Admin side. Your Chrome Browser Extension will keep all the major functionalities. Check out our pricing page for more. What if some people on my team aren’t using Chrome as their default browser? We plan to cover all the major browsers. Check out our Cross browser compatibility section to see when we plan the other releases. That being said, you can still adopt Sametab. Sametab will work just fine even if you’re not using the Browser Extension. Not using Sametab yet? Get your free account here. 👈
https://docs.sametab.com/docs/new-tab/learn-the-basics/
2019-12-05T18:31:45
CC-MAIN-2019-51
1575540481281.1
[array(['/docs/static/images/new-tab/empty-new-tab.png', 'write-new-sametab-announcement small-img'], dtype=object) array(['/docs/static/images/new-tab/empty-new-tab.png', 'write-new-sametab-announcement small-img'], dtype=object)]
docs.sametab.com
Click Tools > Run Reports. On the Catalog Reports tab, expand the tree view, and select Materials Handling. Select Materials Handling MHE_PackingThickness_Reports in the list of available report templates. Click Run. The Filter Properties for Asking Filter dialog box appears. Select a system containing the idlers with packing needed for the report, and click OK. The dialog box closes, and the report creation process begins. When processing completes, the software opens the report so that you can view its contents.
https://docs.hexagonppm.com/reader/uVNY0bK5Zszx4fh37wUWog/wY7FsnugFUCSCnI3Nl51Qg
2019-12-05T16:48:05
CC-MAIN-2019-51
1575540481281.1
[]
docs.hexagonppm.com
Deleting an Article From Joomla! Documentation Joomla! 3.x To delete an article: - Log in to the Joomla! Administrator Back-end. - Go to Content > Articles on the toolbar menu. - Check the box next to the article(s) that you want to delete. Then click the Trash button. If you want to make sure that the article is completely gone from the system (so no further roll back is available): - In the Article Manager, click the Search Tools button next to the search box. This will cause several selector menus to appear. - Click Select Status, and select Trashed. - Select the article(s) you want to permanently delete, and click the Empty Trash button. Note: Trashed Articles are not the same as Archived Articles.
https://docs.joomla.org/Deleting_an_Article/en
2019-12-05T17:23:05
CC-MAIN-2019-51
1575540481281.1
[]
docs.joomla.org
Microsoft 365 Business frequently asked questions General What is Microsoft 365 Business? Microsoft 365 Business is an integrated solution that brings together best-in-class productivity tools, security, and device management capabilities for small to medium-sized businesses. It includes: A set of business productivity and collaboration tools - Word, Excel, PowerPoint, Outlook, OneNote, Publisher, and Access - Exchange, OneDrive, Skype for Business, Microsoft Teams, and SharePoint. - Business apps from Office (Bookings, MileIQ1). Enterprise-grade device management and security capabilities - Helps provide protection from external threats like phishing and sophisticated malware with Office 365 Advanced Threat Protection Plan 1 and Windows Defender Exploit Guard. - Helps control and manage how sensitive information is accessed and transmitted with Office 365 data loss prevention policies and Azure Information Protection Plan 1. - Helps protect, preserve, and back up your data with Exchange Online Archiving. - App protection for Office and other mobile apps with Intune App Protection. - Device management for Windows 10 PCs, MacOS, and mobile devices with Intune device management. - Identity protection with multi-factor authentication, self-service password reset, and conditional access. - Consistent security configuration across devices: protection of company data across devices; Windows Defender, which is always on and up to date. Simplified device deployment and user setup - Single admin console to set up and manage users and devices - Auto-installation of Office apps on Windows 10 PCs. - Always up-to-date Office + Windows 10. - Streamlined deployment of PCs with Windows AutoPilot. Other entitlements - Microsoft 365 Business customers also have access to Windows Virtual Desktop and Office Shared Computer Activation capabilities. Read the Microsoft 365 Business blog to learn more. See also the Microsoft 365 Business Service Description. Who should consider adopting Microsoft 365 Business? Microsoft 365 Business is a comprehensive, cloud-based security solution that lets you defend your business against advanced cyberthreats, protect your business data, and manage your devices, all from a familiar location for administration, billing, and 24x7 support. It consists of enterprise-grade technology built for businesses with fewer than 300 employees. How can I get Microsoft 365 Business for my business? Microsoft 365 Business can be purchased directly from Microsoft or through a Microsoft Partner. How much does Microsoft 365 Business cost? Microsoft 365 Business is offered at USD$20.00 user/month based on an annual contract if purchased directly from Microsoft. When purchased through a Microsoft Partner, pricing can vary based on the services the partner provides and their pricing model for Microsoft 365 Business. Is there a cap to how many Microsoft 365 Business seats a customer can have? Microsoft 365 Business was designed for small to medium-sized businesses with low to medium IT complexity requirements. Customers can purchase up to 300 Microsoft 365 Business licenses for their organization. Customers can mix and match cloud subscriptions. As a result, depending on their IT requirements, customers may add Microsoft 365 Enterprise licenses to the same account. When customers consider an environment consisting of multiple subscription types, they can combine Microsoft 365, Enterprise Mobility + Security, and Office 365 subscriptions. Keeping devices on current software helps reduce maintenance and security costs over time and is a state that businesses should strive to attain. 
However, we recognize that some small and medium-sized customers update their software primarily when they upgrade their hardware, over an extended period. If a PC is currently running Windows 7 Pro or later, it likely meets the minimum hardware requirements for Microsoft 365 Business. Certain Windows 10 features such as Cortana, Windows Hello, and multitouch require specific hardware that is only available on newer PCs. See the Windows 10 Pro system requirements, as well as the requirements for certain premium Microsoft Defender features like Controlled Folder Access and Network Protection for web-based threats. Important You need to supply the original product key when you upgrade, otherwise the upgrade won't work. How does Microsoft 365 Business help support and protect our devices and data? App protection for Office mobile apps helps protect Office data, including email, calendar, contacts, and documents on iOS and Android mobile devices, by enforcing policies such as automatically deleting business data after a prescribed amount of time of not connecting to the service, requiring that information is stored only in OneDrive for Business, requiring a PIN/fingerprint verification to access Office apps, and preventing company data from being copied from an Office app into personal apps. Mobile application management for other mobile apps through Intune is also available for Microsoft 365 Business subscribers. Device Management for Windows 10 PCs allows businesses to choose to set and enforce capabilities such as Windows Defender protection for malware, automatic updates, and turning off screens after a prescribed amount of time. In addition, lost or stolen Windows 10 devices can be completely wiped of business applications and data through the admin center. Device Management for iOS, Android & MacOS helps businesses securely manage a diverse device ecosystem that includes iOS, Android, Windows, and MacOS devices. In addition, organizations can ensure that Windows Defender protection is running and always up to date on all their Windows 10 devices. Windows 10 Business also includes Windows Defender Exploit Guard, a new set of intrusion prevention capabilities. One of its features, Controlled folder access, stops ransomware by locking down folders and preventing unauthorized apps from accessing a user’s important files. What's the difference between Office 365 Business Premium, Microsoft 365 Business, and Microsoft 365 Enterprise? Microsoft has various productivity and security management offerings that small to medium-sized customers may consider when upgrading their desktop and device infrastructure, each bringing increasingly powerful features and functionality. Office 365 Business Premium delivers best-in-class productivity with Office 365 apps and services, but doesn't include the application protection and device management capabilities of Microsoft 365 Business. Microsoft 365 Business combines Office 365 apps and services with advanced security capabilities to help protect your business against advanced cyberthreats, safeguard your data, and manage your devices. It includes a simplified management console through which device and data policies may be administered. Many small to medium-sized businesses can be best served with Microsoft 365 Business. Microsoft 365 Enterprise is a set of licensing plans that offer increased levels of compliance and security management over Microsoft 365 Business and are designed for enterprise customers and those customers that have over 300 users. In addition, Microsoft 365 Enterprise plans provide additional functionality, including business intelligence and analytics tools. 
Can I switch my Office 365 plan to Microsoft 365 Business? Yes, customers may switch their plans from a qualifying Office 365 plan to Microsoft 365 Business. Depending on the customer’s current plan, there may be a decrease or increase in monthly charges. In what regions is Microsoft 365 Business available? Microsoft 365 Business is available to all partners and customers where Office 365 is available. See the list of Office 365 international availability for languages, countries, and regions. Is there a Microsoft 365 Business trial I may use to evaluate the offer? A Microsoft 365 Business trial is available for CSPs. A trial for direct customers will be available later. What should customers and partners know before running Microsoft 365 Business within their organization? Customers that wish to experience the complete capabilities of Microsoft 365 Business must be running Windows 7, 8.1, or 10 Pro2 on their existing desktops. Existing Windows 10 Pro PCs should be running Creators Update if they have not already done so. Can Microsoft 365 Business customers use the full capabilities of Microsoft Intune? Yes, Microsoft 365 Business subscribers are licensed to use full Intune capabilities for iOS, Android, MacOS, and other cross-platform device management. Features not available in the simplified management console in Microsoft 365 Business, like third-party app management and configuration of WiFi profiles and VPN certificates, can be managed in the full Intune console. Does Azure Active Directory Premium P1 come with Microsoft 365 Business? Microsoft 365 Business includes select Azure AD Premium P1 (AADP P1) features such as self-service password reset with AD write-back, Azure MFA, and conditional access. It does not include the entirety of AADP P1. For more information, see the Microsoft 365 Business Service Description. Does Microsoft 365 Business allow customers to manage Macs? Intune helps you securely manage iOS, Android, Windows, and MacOS devices, complementing the centralized management controls of Microsoft 365 Business. You can also use Windows AutoPilot for existing PCs that are running Windows 10 Professional Creators Update (or later) and have been factory reset. Details about Windows AutoPilot can be found in this June 2017 blog post. Compatibility Can I add Office 365 add-ons to Microsoft 365 Business? All the add-ons that can be added to Office 365 Business Premium can be added to Microsoft 365 Business. This means that you can purchase Office 365 Cloud App Security, Advanced Compliance, Threat Intelligence, MyAnalytics, Power BI Pro, and Audio Conferencing. Can I add Phone System and Calling Plans to Microsoft 365 Business? No, Phone System and Calling Plan are reserved for customers who have more advanced needs. Customers who require these capabilities should look at Microsoft 365 Enterprise offerings. Can Microsoft 365 Business customers use Windows Defender Advanced Threat Protection? No, customers that require Windows Defender Advanced Threat Protection need either Windows 10 Enterprise E5 or Microsoft 365 Enterprise E5. Microsoft 365 Business gives partners opportunities to sell more and to upgrade existing products and services. Microsoft 365 Business provides an opportunity to have an upgrade discussion with customers now using Exchange Server, Exchange Online, or Office 365 Business Essentials. Partners may also gain more revenue from increased managed services and/or per-user support fees. With the new Windows AutoPilot feature included in Microsoft 365 Business, partners who have been reluctant to sell new Windows devices because of deployment logistics and costs will find this opportunity much more attractive. 
Customers who are confident in the security of their on-premises and mobile devices are also more likely to invest in more Enterprise SKUs.

Some of my customers have devices that aren't genuine; will Microsoft 365 Business make these devices genuine? Microsoft 365 Business doesn't make an otherwise non-genuine version of Windows genuine. Microsoft 365 Business does provide an upgrade benefit allowing those customers running genuine Windows 7, 8, or 8.1 Pro to upgrade to the most recent, genuine version of Windows 10 Pro.

Does Microsoft help customers meet the GDPR requirements for personal data processing? In March 2017, Microsoft made available contractual guarantees that provide these assurances. Customers that have questions about how Microsoft can help them meet their additional GDPR obligations should learn about the advanced compliance and security capabilities available in Microsoft 365 Business (for example, Azure Information Protection, data loss prevention, Advanced Threat Protection, and so on) and in other suites (for example, Microsoft 365 Enterprise E5). To learn more, visit.

Footnotes
1 Available in US, UK, and Canada.
2 Devices running Windows 7 or 8.1 Pro are eligible for an upgrade to Windows 10 Pro within the Microsoft 365 Business preview.
https://docs.microsoft.com/en-us/microsoft-365/business/support/microsoft-365-business-faqs
2019-12-05T17:49:55
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Deploy Applications and API Proxies to Runtime Fabric

Anypoint Runtime Fabric enables you to deploy Mule applications and API Proxies to a Mule runtime engine that is managed on Runtime Fabric.

Supported Mule Versions
Anypoint Runtime Fabric supports deployment to these versions of Mule runtime: 3.8.7, 3.9.1, 3.9.2, 4.1.2+, and 4.2.0.

Prerequisites
Before you deploy a Mule application or API proxy to Runtime Fabric: Install and configure a Runtime Fabric. Ensure that you understand how resources are allocated as described in Resource Allocation on Anypoint Runtime Fabric. Publish your Mule application or API proxy to Exchange.

Deploy Using Runtime Manager
You can use Runtime Manager to manually deploy Mule applications and API proxies. See Deploy a Mule Application to Runtime Fabric.

Deploy Using Maven
Runtime Fabric supports Maven for managing and deploying a Mule application or API proxy. To deploy Mule applications and API proxies using Maven, see the topic specific to your Mule version:

Deployment Considerations

Eventual consistency
Runtime Fabric is a self-healing, eventually consistent platform. When building a CI pipeline for Runtime Fabric deployment, you must take eventual consistency into consideration. After triggering a deployment, the application status should become RUNNING. If the application status never indicates RUNNING, the replicas contain a state and reason to indicate why the application is not RUNNING.

Application Deployment
When an application is deployed, the following events occur: The expected state of your application is stored, including application bundle and number of replicas. The application replica status shows as PENDING. When adequate compute and memory resources are available, each replica is attached to a node. If not already present, a Docker image corresponding to the Mule Runtime version is downloaded. The replica status shows as STARTING. The replica finishes loading the application. The replica status shows as STARTED and is able to perform work.

Application Failure
If an application fails, for example, due to running out of memory, the following events occur: The replica status shows as TERMINATED. Runtime Fabric immediately attempts to restart the replica. The replica status shows as RECOVERING. If the replica is able to restart: The replica finishes loading the application. The replica status shows as STARTED and is able to perform work. If the replica is initially unable to restart, for example, because it relies on a network resource which is temporarily unavailable, the following events occur: The replica status shows as PENDING, with a message indicating "CrashLoopBackoff". Runtime Fabric attempts to restart the replica, using exponential backoff to avoid an excessive number of restart attempts. The replica status alternates between RECOVERING and PENDING until the issue preventing a successful restart is resolved. The replica loads the application. After a successful restart, the replica shows as STARTED and is able to perform work.
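Because Runtime Fabric is eventually consistent, a CI pipeline should not treat the deployment request itself as success; it should poll until the application reports RUNNING. The sketch below is a generic, illustrative Python helper rather than MuleSoft code: get_status is a hypothetical callable that you would back with whatever Anypoint API or CLI integration your pipeline already uses.

import time

def wait_until_running(get_status, timeout_seconds=600, poll_interval=15):
    # get_status is a hypothetical, caller-supplied function that returns the
    # current application status string (e.g. "PENDING", "STARTING", "RUNNING").
    deadline = time.time() + timeout_seconds
    last_status = None
    while time.time() < deadline:
        last_status = get_status()
        if last_status == "RUNNING":
            return last_status
        time.sleep(poll_interval)
    raise TimeoutError("Application never reached RUNNING; last status: %s" % last_status)

A pipeline step could call wait_until_running(my_status_check) right after triggering the deployment and fail the build if the timeout is hit.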
https://docs.mulesoft.com/runtime-fabric/1.4/deploy-index
2019-12-05T17:21:45
CC-MAIN-2019-51
1575540481281.1
[]
docs.mulesoft.com
Murano may work in various networking environments and is capable of detecting the current network configuration and choosing appropriate settings automatically. However, some additional actions are required to support advanced scenarios.

To create the router automatically, provide the following parameters in the configuration file:

[networking]
external_network = %EXTERNAL_NETWORK_NAME%
router_name = %MURANO_ROUTER_NAME%
create_router = true

To figure out the name of the external network, run openstack network list --external. During the first deployment, the required networks and a router with the specified name will be created and set up.

To configure neutron manually, follow the steps below.
- Create a public network.
- Create a local network.
- Create a router:
  - Navigate to. Click Create Router.
  - In the Router Name field, enter murano-default-router. If you specify a name other than murano-default-router, change the following settings in the configuration file:
    [networking]
    router_name = %SPECIFIED_NAME%
    create_router = false
  - Click Create router.
  - Click the newly created router name.
  - In the Interfaces tab, click Add Interface.
  - Specify the subnet and IP address.
  - Verify the result in.

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/murano/queens/configuration/configuration.html
2018-06-18T03:25:59
CC-MAIN-2018-26
1529267860041.64
[]
docs.openstack.org
Downloads are available for 3 download attempts. Please contact us to refresh your access if needed. After purchasing a template package you’ll need to download it. Having Problems? Contact us at [email protected] if you have problems and we will email your order to you. This is done during business hours only, M-F, 8 a.m. – 5 p.m. (Pacific).
https://506docs.com/template-download/
2018-06-18T03:28:41
CC-MAIN-2018-26
1529267860041.64
[]
506docs.com
A Query Profile is a named collection of search request parameters given in the configuration. The search request can specify a query profile whose parameters will be used as parameters of that request. This relieves the client from having to manage and send a large number of parameters, and enables the request parameters used for a use case to be changed without having to change the client. Query profiles enable bucket tests, where a part of the query stream is given some experimental treatment, as well as differentiating behavior based on (OEM) customer, user type, region, frontend type etc. This document explains how to create and use query profiles. See also the query profile reference for the full syntax.

This section describes how to create, deploy and use query profiles. A Query Profile is an XML file containing the request parameter names and their values, for example:

<query-profile id="MyProfile">
  <field name="hits">10</field>
  <field name="unique">merchantid</field>
</query-profile>

For the full syntax, see the query profile reference. Note that full property names must be used; aliases are only supported in requests and programmatic lookup. See the search API for a list of the built-in query properties.

To deploy a query profile, add it as an XML file named after the profile in the search/query-profiles directory (replace / in the name by _). If the query profile contains errors, like incorrect syntax and/or infinite reference loops, deployment will fail.

To use a query profile in a search request, send the name of the profile as the parameter queryProfile: queryProfile=MyProfile

The parameter values set in MyProfile.xml will then be used as if they were present directly in the search request. If a parameter is present both directly in the request and in a profile, the request value takes precedence by default. Individual query profile fields can be made to take priority by setting the overridable attribute to false. If the search request does not specify a query profile, the profile named default will be used. If no default profile is configured, no profile will be used. If the queryProfile parameter is set but does not resolve to an existing profile, an error message is returned.

The query profile values (whether set from a configured query profile or by the request) are available as Query properties. To look up a value from a Searcher component, use query.properties().get("myVariable"). Note that property names are case sensitive.

The following section describes the various query profile features available, in no particular order. To support structure in the set of request variables, a query profile value may, instead of being a string, be a reference to another query profile. In this case, the referenced query profile functions as a map (or struct, if types are used, see below) in the referencing query profile. The parameter names of the nested profile get preceded by the name of the reference variable and a dot. For example:

<query-profile id="MyProfile">
  <field name="hits">10</field>
  <field name="unique">merchantid</field>
  <field name="user"><ref>MyUserProfile</ref></field>
</query-profile>

Where the referenced profile might look like:

<query-profile id="MyUserProfile">
  <field name="age">20</field>
  <field name="profession">student</field>
</query-profile>

If MyProfile is referenced in a query now, it will contain the variables hits=10, unique=merchantid, user.age=20 and user.profession=student. Of course, references can be nested to provide a deeper structure, producing variables having multiple dots. The dotted variables can be overridden directly in the search request (using the dotted name) just as other parameters.
Note that the id value of a profile reference can also be set in the request, making it possible to choose not just the top level profile but also any number of specific subprofiles in the request. For example, the request can contain queryProfile=MyUserProfile&user=ref:MyOtherUserprofile to change the reference in the above example to some other subprofile. Note the ref: prefix which is required to identify this as setting user to a query profile referenced by id rather than setting it to a string.

A query profile may inherit one or more other query profiles. This is useful when there is some common set of parameters applicable to multiple use cases, and a smaller set of parameters which varies between them. To inherit another query profile, reference it as follows:

<query-profile id="MyProfile" inherits="MyBaseProfile">
  …
</query-profile>

The parameters of MyBaseProfile will be present when this profile is used exactly as if they were explicitly written in this profile. Multiple inheritance is supported by specifying multiple space-separated profile ids in the inherits attribute. Order matters in this list - if a parameter is set in more than one of the inherited profiles, the first one encountered in the depth first, left to right search order is used. Parameters specified in the child query profile will always override the same parameters in an inherited one.

Query profile values may contain substitution strings of the form %{property-name}, which are replaced by the value returned from query.properties().get("property-name") at the time of the lookup. An example:

<query-profile id="MyProfile">
  <field name="message">Hello %{world}!</field>
  <field name="world">Earth</field>
</query-profile>

The value returned by looking up message will be Hello Earth!. As the lookup happens at query time, substituted values may be looked up in variants, in inherited profiles, in values set at run time and by following query profile references. Details:

In some cases, it is convenient to allow the values returned from a query profile to vary depending on the values of some properties input in the request. For example, a query profile may contain values which depend on both the market in which the request originated (market), the kind of device (model) and the bucket in question (bucket). Such variants over a set of request parameters may be represented in a single query profile, by defining nested variants of the query profile for the relevant combinations of request values. Here is a complete example:

<query-profile id="MyProfile">
  <!-- A regular profile may define "virtual" children within itself -->

  <!-- Names of the request parameters defining the variant profiles of this.
       Order matters as described below.
       Each individual value looked up in this profile is resolved from the most specific matching virtual variant profile -->
  <dimensions>region,model,bucket</dimensions>

  <!-- Values may be set in the profile itself as usual; these become the default values given no matching virtual variant provides a value for the property -->
  <field name="a">My general a value</field>

  <!-- The "for" attribute in a child profile supplies values in order for each of the dimensions -->
  <query-profile for="us,nokia,test1">
    <field name="a">My value of the combination us-nokia-test1-a</field>
  </query-profile>

  <!-- Same as [us,*,*] - trailing "*"'s may be omitted -->
  <query-profile for="us">
    <field name="a">My value of the combination us-a</field>
    <field name="b">My value of the combination us-b</field>
  </query-profile>

  <!-- Given a request which matches both the below, the one which specifies concrete values to the left gets precedence over those specifying concrete values to the right (i.e. the first one gets precedence here) -->
  <query-profile for="us,nokia">
    <field name="a">My value of the combination us-nokia-a</field>
    <field name="b">My value of the combination us-nokia-b</field>
  </query-profile>

  <query-profile for="us,*,test1">
    <field name="a">My value of the combination us-test1-a</field>
    <field name="b">My value of the combination us-test1-b</field>
  </query-profile>
</query-profile>

It is possible to define variants across several levels in an inheritance hierarchy. The variant dimensions are inherited from parent to child, with the usual precedence rules (depth first, left to right), so a parent profile may define the dimensions and the child the values over which it should vary. Variant resolution within a profile has precedence over resolution in parents. This means e.g. that a default value for a given property in a sub-profile will be chosen over a perfect variant match in an inherited profile. Variants may specify their own inherited profiles, as in

<query-profile …
  <query-profile …
  </query-profile>
</query-profile>

Values are resolved in this profile and inherited profiles "interleaved" by the variant resolution order (which is specificity by default). E.g., by decreasing priority:
1. Highest prioritized variant value
2. Value in a profile inherited from the highest prioritized variant
3. Next highest prioritized variant value
4. Value in a profile inherited from the next highest prioritized variant
…
n. Value defined at top level in the profile
n+1. Value in a profile inherited from the query profile

The query profiles may optionally be type checked. Type checking is turned on by referencing a Query Profile Type from the query profile. The type lists the legal set of parameters of the query profile, whether additional parameters are allowed, and so on. A query profile type is referenced by

<query-profile id="MyProfile" type="MyProfileType">
  …
</query-profile>

And the type is defined as:

<query-profile-type id="MyProfileType">
  <field name="age" type="integer"/>
  <field name="profession" type="string"/>
  <field name="user" type="query-profile:MyUserProfile"/>
</query-profile-type>

This specifies that these three parameters may be present in profiles using this type, as well as the query profile type of the user parameter. It is also possible to specify that parameters are mandatory, that no additional parameters are allowed (strict), to inherit other types and so on; refer to the full syntax in the query profile reference. If the base profile type is strict, it must extend a built-in query profile type, see the reference documentation on strict.
A query profile type is deployed by adding an XML file named [query-profile-type-name].xml in the search/query-profiles/types directory in the search definition. Query profile types may be useful even if query profiles are not used to set values. As they define the names, types and structure of the parameters which can be accepted in the search request, they can also be used to define, restrict and check the content of search requests. For example, as the built-in search API parameters are also type checked if a typed query profile is used, types can be used to restrict the parameters that can be set in a request, or to mandate that some are always set. The built-in parameters are defined in a set of query profile types which are always present and which can be inherited and referenced in application-defined types. These built-in types are defined in the search API.

By adding <match path="true"> to a profile type, path name matching is used rather than the default exact matching when a profile is looked up from a name. Path matching interprets the profile name as a slash-separated path and matches references which are subpaths (more specific paths) to super-paths. The most specific match becomes the target of the reference. For example, given the query profile names

a1
a1/b1

then

a1/b1/c1/d1 resolves to a1/b1
a1/b resolves to a1
a does not resolve

This is useful to assign specific query profile ids to every client or bucket without having to create a different configuration item for each of these cases. If there is a need to provide a differentiated configuration for any such client or bucket in the future, this can be done without having the client change its request parameter, because a specific id is already used.

Query profiles (and types) may exist in multiple versions at the same time. Wherever a name of a query profile (or type) is referenced, the name may also append a version string, separated by a colon, e.g. MyProfile:1.2.3. The version number is resolved - if no version is given, the highest version known is used. If the version number is only partially specified, as in my-version:1, the highest version starting with 1 is used. Where a query profile (or type) is defined, the id may specify the version, followed by a colon:

<query-profile id="MyProfile:1.2.3">
  …
</query-profile>

Any sub-number omitted is taken to mean 0 where a version is defined, so id="MyProfile:1" is the same as id="MyProfile:1.0.0". Query profiles (and types) which specify a version in their id must use a file name which includes the same version string after the name, separated by a dash, e.g. MyProfile-1.2.3.xml. For more information on versions, see component versioning.

It can sometimes be handy to be able to dump resolved query profiles offline. A tool is provided for this purpose, which can be run using

$ vespa-query-profile-dump-tool

Please run the tool with no arguments to get documentation on usage.
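As a rough sketch of the path matching feature described above, the type below turns it on and two profiles then resolve as in the example; the type name PathMatchingType and the hits field are illustrative placeholders, and each profile and type would normally live in its own XML file:

<query-profile-type id="PathMatchingType">
  <match path="true"/>
</query-profile-type>

<query-profile id="a1" type="PathMatchingType">
  <field name="hits">10</field>
</query-profile>

<query-profile id="a1/b1" type="PathMatchingType">
  <field name="hits">20</field>
</query-profile>

With these deployed, a request for queryProfile=a1/b1/c1/d1 would resolve to the a1/b1 profile, while queryProfile=a1/b would resolve to a1.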
https://docs.vespa.ai/documentation/query-profiles.html
2018-06-18T03:59:08
CC-MAIN-2018-26
1529267860041.64
[]
docs.vespa.ai
Cancels a Thread.Abort(object) requested for the current thread. This method can only be called by code with the proper permissions. When a call is made to Abort to terminate a thread, the system throws a System.Threading.ThreadAbortException. See System.Threading.ThreadAbortException for an example that demonstrates calling the ResetAbort method.
http://docs.go-mono.com/monodoc.ashx?link=M%3ASystem.Threading.Thread.ResetAbort
2018-06-18T03:49:01
CC-MAIN-2018-26
1529267860041.64
[]
docs.go-mono.com
Sharing API \ Re-Publish a previously published message

Request: the code to send to the API
POST the following structure to the resource /sharing/messages/<message_token>.json to re-publish a previously published message. By publishing one and the same message to multiple users you obtain insights on how well the message is performing overall in terms of clickthroughs and resulting referral traffic.

Re-Publish on behalf of a user_token
Publish an existing message to multiple social network accounts of a user.

Re-Publish on behalf of an identity_token
Publish an existing message to a single social network account. Use the following structure as POST data and include an identity_token to post content for a social network identity.

{
  "request": {
    "sharing_message": {
      "publish_for_identity": {
        "identity_token": "identity_token"
      }
    }
  }
}

Result: the code returned by the API
The result is similar to the one you receive after having published a new message.
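As an illustration, the identity_token variant above could be posted from Python with the requests library. The subdomain, API keys and tokens below are placeholders, and the use of HTTP basic authentication with the site's public/private API keys is an assumption to verify against your OneAll account settings:

import requests

SITE = "https://your-subdomain.api.oneall.com"   # placeholder subdomain
AUTH = ("your-public-key", "your-private-key")   # assumption: basic auth with the site API keys
MESSAGE_TOKEN = "token-of-the-previously-published-message"

payload = {
    "request": {
        "sharing_message": {
            "publish_for_identity": {
                "identity_token": "identity-token-of-the-social-account"
            }
        }
    }
}

response = requests.post(
    "%s/sharing/messages/%s.json" % (SITE, MESSAGE_TOKEN),
    json=payload,
    auth=AUTH,
)
print(response.status_code, response.json())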
http://docs.oneall.com/api/resources/social-sharing/re-publish-message/
2018-06-18T03:43:55
CC-MAIN-2018-26
1529267860041.64
[]
docs.oneall.com
The <ul> element represents an unordered list of items, typically rendered as a bulleted list. The <ol> and <ul> elements may be nested as deeply as desired, and may alternate between ordered and unordered lists.

<ul>
  <li>first item</li>
  <li>second item</li>
  <li>third item</li>
</ul>

The HTML above renders as a bulleted list of the three items.

Related elements: <ol>, <li>, <menu> and the obsolete <dir>. The compact attribute of the <ul> element is deprecated.

© 2005–2018 Mozilla Developer Network and individual contributors. Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
http://docs.w3cub.com/html/element/ul/
2018-06-18T03:35:48
CC-MAIN-2018-26
1529267860041.64
[]
docs.w3cub.com
First, navigate to the "Integrations" page which can be found in the left navigation panel. Next, scroll down to Smile and click "Connect Integration" Then you will be redirected to sign into Smile to access your Admin App Available Metrics - Rewards Members - AOV by Tier - Total Customers - Tier 2 - Rewards Redeemed - LTV - Smile Members Still have questions? Reach out to us at [email protected] or start a chat with us!
http://docs.yaguara.co/en/articles/3494649-smile
2021-01-16T05:55:38
CC-MAIN-2021-04
1610703500028.5
[]
docs.yaguara.co
Shadow Effects in PSD File Inner Shadow Effect Using Aspose.PSD for .NET, developers can set inner shadow effects on different elements. The Inner Shadow effect is used to add a diffused shadow to the inside of an object and to adjust its color, offset and blur. Below is the code demonstration of the said functionality.
https://docs.aspose.com/psd/net/shadow-effects-in-psd-file/
2021-01-16T05:46:12
CC-MAIN-2021-04
1610703500028.5
[]
docs.aspose.com
Close Gap Tool Properties

- In the Tools toolbar, select the Close Gap tool. The tool's properties are displayed in the Tool Properties view.

NOTE: To learn how to use the Close Gap tool, see Closing Gaps.
https://docs.toonboom.com/help/harmony-17/paint/reference/tool-properties/close-gap-tool-properties.html
2021-01-16T06:39:19
CC-MAIN-2021-04
1610703500028.5
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Reference/tool-properties/Close_Gap_Tool_Properites.png', 'Stroke Tool Properties View Stroke Tool Properties View'], dtype=object) ]
docs.toonboom.com
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here. Class: Aws::OpsWorksCM::Types::DeleteBackupRequest - Defined in: - (unknown) Overview Note: When passing DeleteBackupRequest as input to an Aws::Client method, you can use a vanilla Hash: { backup_id: "BackupId", # required }.
https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/OpsWorksCM/Types/DeleteBackupRequest.html
2021-01-16T07:03:24
CC-MAIN-2021-04
1610703500028.5
[]
docs.aws.amazon.com
Purchasing Additional Sources, Models, and Events

Hevo informs you whenever you are close to consuming or have already consumed your quota of Sources, Models, or Events, through email notifications and alerts on your Hevo UI. At that time, you can: - Purchase On-Demand Events to meet the additional requirements. - Upgrade to the Business plan from a Basic or Starter plan, to avail an increased quota of Events, Sources, and Models. - If you are on a Business plan, subscribe to Add-On Sources and Models. Note: If you are subscribed to a Custom plan, choose from the options below or write to [email protected] for changes to the plan.

Add-On Sources and Models
As a Business Plan subscriber, you can top up your subscription of Sources and Models using Add-Ons. You can select either or both of the following: - Source Add-On Bundles of 10 Sources each - Model Add-On Bundles of 10 Models each Add-Ons are billed monthly. If you are already on a Monthly pricing plan, the Add-On subscription is added as a new item in the bill. If your base subscription has any trial period left, the same is applicable for the Add-On subscriptions, and billing starts only after the trial period expires. If you increase the Add-On bundles in the middle of a subscription, Hevo charges you on a pro-rata basis. Similarly, if you reduce the Add-On bundles in between a subscription period, the unused time is credited back to you in the next invoice. Note: To address any immediate need, you can revisit your existing Pipelines or unused Models and consider deleting these to free up your Sources/Models quota. Deleting Pipelines helps free up a Source only if no other Pipelines are using it. To subscribe to Add-Ons: - Access your PLAN DETAILS. - Click on Buy Add-Ons. If you have already subscribed to Add-Ons, click Add More. Select the Add-On bundle and specify the number of bundles you need. Use the + and - buttons to increase or decrease the count. You cannot reduce the number of bundles below what you are using. The total price of all the bundles is displayed. Note: Your Add-Ons subscription is cancelled immediately if the quantity is set to 0. - Click UPDATE.

On-Demand Events
Hevo warns you if you are nearing your Events quota limit or have exceeded it through alerts and email notifications. At that time, you can purchase On-Demand Event bundles to meet your processing requirements. Each bundle offers you 1M Events. On-Demand Events are charged immediately. Note: On-Demand Event bundles can be purchased for all the subscription plans. The On-Demand Events you have purchased are shown on the Plan Details page: For continued higher usage than your base plan, we recommend that you upgrade your plan or subscribe to more Events.
https://docs.hevodata.com/account-management/billing/purchasing-additional-sme/
2021-01-16T06:33:01
CC-MAIN-2021-04
1610703500028.5
[array(['https://res.cloudinary.com/hevo/image/upload/v1598859948/hevo-docs/NewPricing2726/buy-events.gif', 'Purchase On-Demand Events'], dtype=object) array(['https://res.cloudinary.com/hevo/image/upload/v1598086296/hevo-docs/NewPricing2726/on-demand-events.gif', 'On-Demand Events purchased'], dtype=object) ]
docs.hevodata.com
- First, receive the order through the Receiving work bench, or make sure this has been done. - Find or bring in the record that you want the order moved to. Make sure this record has holdings or create holdings. - Copy the MMS ID of the record you want the order moved to. - Go to the item to move. - Bring up the title that has the item. - Click on Physical, then Items. - Go to the ellipsis of the item you want and click Edit. - In the Item Editor, top right, click on the “Relink to another bibliographic record” button. - A window will open. Click in the “Select from a list” field. - Change the sub-search to MMS ID and paste in the id that you copied. - The title you want should show up. Click on it. - You’ll then see a line with the item. Click on the radio button on the left. - Click “Select” in the upper right corner. - Bring up the titles again to make sure that the order and the item was moved to the correct record.
https://docs.library.vanderbilt.edu/2020/05/26/moving-an-order/
2021-01-16T06:07:46
CC-MAIN-2021-04
1610703500028.5
[]
docs.library.vanderbilt.edu
The most popular tools¶

Intended audience: beginners, all

Tango Controls provides various tools. You may refer to Index of tools for a detailed list of available tools, applications and libraries, or look below for the most used ones.

- Astor, which is a tool for management of a Tango Controls system.
- Jive, which is used to configure components of Tango Controls and browse a static Tango Database.
- AtkPanel, which is a generic control panel.
- Pogo, which is a device class and device server generator.
- JDraw and Synoptic, which are a synoptic panel builder and viewer.
- Taurus, which is a Python library for creating GUI applications.
https://tango-controls.readthedocs.io/en/latest/tools-and-extensions/tools.html
2021-01-16T06:24:33
CC-MAIN-2021-04
1610703500028.5
[]
tango-controls.readthedocs.io
Release Notes Information¶

Releases use the following numbering system: {major}.{minor}.{incremental}

- major: Major refactoring or rewrite - make sure you read and test very carefully!
- minor: Breaking change in some circumstances, or a new feature. Read carefully and make sure you understand the impact of the change.
- incremental: A "safe" change / improvement. Should always be safe to upgrade.

[BC]: Items marked with [BC] indicate a breaking change that will require updates to your code if you are using that code in your extension.

Release 3.4.7 (not yet released 2020-12-04)¶
- When upgrading check and update customfield datatypes (eligible_for_gift_aid was varchar instead of int on some old installs).

Release 3.4.6¶
- Add GiftAid.Ensuredatastructures API.
- #20 Add GiftAid.Getcontributioneligibility API.
- Fix #21 Enlarge source to prevent data too long issues.
- Make sure report totals are calculated correctly if decimal separator is not dot for currency
- Fix #15 House name or number is now first line of address per HMRC/Charities Online guidance.

Release 3.4.5¶

Release 3.4.4¶
Requires CiviCRM 5.25 minimum or you won't see the batch name anymore in the contribution detail - alterCustomFieldDisplayValue hook does not exist.
- Replace deprecated hook definitions.
- Remove deprecated trapException on executeQuery.
- Switch "Add to Batch" to use API and supported batch params.
- If deleting a batch clean up giftaid batch info so we don't have problems adding them to a new batch.
- Add API to recalculate amounts for contributions already added to a batch (GiftAid.recalculatecontributionamounts).
- Fix ukgiftaidonline menu items sometimes not showing: ukgiftaidonline#1.
- Fix searching for batch via reports.
- Improve GiftAid Report for submission to HMRC !12
- Improve address validation for house names.
- Fix get address for report when contribution receive date is a few seconds after declaration but on same day.

Release 3.4.3¶

Release 3.4.2¶
- Fix #4 - Individual donation marked as "NO" from backend, gets included in batch.
- Remove unused lineitems display from add/remove to batch.
- Add 'View' action to contributions on add/remove batch list.

Release 3.4.1¶
- Add GiftAid.Updatedeclarations API
- Use title that user entered when adding to batch
- Disable logging when fixing declarations/contributions via API
- Fix crash with disable then enable extension

Release 3.4¶
This release adds unit tests, fixes multiple issues, improves documentation and adds a field "Given Date" to the declaration.
- Update empty address when updating declaration
- Add date declaration was given, as well as the start date. (Currently defaults to the same.)
- Fix bug when recording a not-eligible declaration and a future is-eligible declaration exists.
- Fix #3 - generate correct link for gift aid declaration tab
- Don't create 'Yes, in past 4 years' optionvalue for contributions on install....
- Fix issues with Contribution customfield metadata in installer
- Trigger postInstall hook which is required on install to set revision so upgrader steps are not run
- Followup re gitlab issue #2 - fix problem with shared field name
- Fix license link
- Update documentation
- Fix gitlab issue #2: cannot reinstall after uninstall
- Add eligibility flowchart (graphviz .dot format source)
- Don't worry about calculating gift aid fields twice
- Add unit tests
- fix issue #24 setDeclaration called with missing eligible_for_gift_aid param
- Identify eligibility by line items not contribution financial type.
Fixes issue #19 - Fix install wrongly setting table-level default batch; add first two tests - Improve code readability and avoid duplicated method calls - Simplify fetching list of entity_ids Release 3.3.11¶ - Use "Primary" address instead of "Home" address for declarations. - Remove code to handle multiple charities - it is untested and probably doesn't work anymore and adds complexity to the code. Release 3.3.10¶ - Only display eligible but no declaration message if logged in and has access to civicontribute. - Use session to track giftaid selections on form so it works with confirmation page. Release 3.3.9¶ - Fix crash on contribution thankyou page when a new contact is created. Release 3.3.8¶ - Refactor GiftAid report to fix multiple issues and show batches with multiple financial types. Release 3.3.7¶ - Allow editing address on the declaration. Release 3.3.6¶ - Rework "Remove from Batch" to improve performance and ensure that what is shown on the screen is what is added to the batch. - Rework "Add to Batch" task to improve performance and ensure that what is shown on the screen is what is added to the batch. - Update GiftAid.updateeligiblecontributions API and document. Release 3.3.5¶ - Update and refactor how we create/update declarations. - Added documentation for declarations to explain how the declarations are created/updated and what the fields mean. Release 3.3.4¶ - Fix issues with setting "Eligible for gift aid" on contributions. - Added documentation for contributions to explain how the gift aid fields on contributions work. Release 3.3.3¶ - Include first donation in the batch - Due to the timestamp on the declaration is created after the contribution hence the first donation doesn't gets included in batch. Set the timestamp as the date rather than time. - Clear batch_name if we created a new contribution in a recur series (it's copied across by default by Contribution.repeattransaction). - Check and set label for 'Eligible amount' field on contribution. - Always make sure current declaration is set if we have one - fixes issue with overwriting declaration with 'No'. - Fix #5 Donations included in batch although financial types disabled in settings. - Trigger create of new gift aid declaration from contribution form if required. Release 3.3.2¶ - Handle transitions between the 3 declaration states without losing information - create a new declaration when state is changed. - Refactor creating/updating declaration when contribution is created/updated. - Properly escape SQL parameters when updating gift aid declaration. - Extract code to check if charity column exists. Release 3.3.1¶ - Major performance improvement to "Add to Batch". Release 3.3¶ In this release we update profiles to use the declaration eligibility field instead of the contribution. This allows us to create a new declaration (as it will be the user filling in a profile via contribution page etc.) and means we don't create a declaration when time a contribution is created / imported with the "eligible" flag set to Yes. IMPORTANT: Make sure you run the extension upgrades (3104). - Fix status message on AddToBatch. - Fix crash on enable/disable extension. - Fix creating declarations every time we update a contribution. - Refactor insert/updateDeclaration. - Refactor loading of optiongroups/values - we load them in the upgrader in PHP meaning that we always ensure they are up to date with the latest extension. - Add documentation in mkdocs format (just extracted from README for now). 
- Make sure we properly handle creating/ending and creating a declaration again (via eg. contribution page). - Allow for both declaration eligibility and individual contribution eligibility to be different on same profile (add both fields). - Fix PHP notice in GiftAid report. - Match on OptionValue value when running upgrader as name is not always consistent. Release 3.2¶ - Be stricter checking eligible_for_gift_aid variable type - Fix issues with entity definition and regenerate - Fix PHP notice - Refactor addtobatch for performance, refactor upgrader for reliability - Add API to update the eligible_for_gift_aid flag on contributions Release 3.1¶ - Be stricter checking eligible_for_gift_aid variable type
https://docs.civicrm.org/ukgiftaid/en/latest/releasenotes/
2021-01-16T04:51:10
CC-MAIN-2021-04
1610703500028.5
[]
docs.civicrm.org
What was decided upon? (e.g. what has been updated or changed?) When cataloging playaways, please choose “Playaway” as the material type. Why was this decided? (e.g. explain why this decision was reached. It may help to explain the way a procedure used to be handled pre-Alma) Alma has made more material types available, so we should utilize these options during cataloging. A list of available material types has been reviewed and the Material Type options in Item Editor have been updated. Who decided this? (e.g. what unit/group) The Collegium of Catalogers When was this decided? 10/4/2018 Additional information or notes. Changing “Material Type” shouldn’t affect circulation policies, unless there is a rule set for this material type to circulate differently. Please contact LTDS when batch changing existing items’ material type.
https://docs.library.vanderbilt.edu/2018/10/12/playaways-to-have-the-material-type-of-playaway/
2021-01-16T05:44:20
CC-MAIN-2021-04
1610703500028.5
[]
docs.library.vanderbilt.edu
Documentation for the unified/ban project. For Developers Find out how to integrate your project with the unified/ban API. Collaborate with us in development, with translations, ideas and pull requests. Learn the Telegram bot Find out all the commands of the bot and how to best configure it in your Telegram group. Learn the Twitch Sync All TwitchSync Plugin settings for unified/ban. Find out how to best configure it for your needs.
https://docs.unifiedban.solutions/
2021-01-16T05:12:07
CC-MAIN-2021-04
1610703500028.5
[]
docs.unifiedban.solutions
Configure ProForma

ProForma has a simple configuration page that lets you get set up in a matter of minutes. You can choose whether or not to globally enable Issue Forms, and you can enable ProForma on any or all of your Jira projects.

Enabled by Default
This option controls whether ProForma will be enabled by default in new projects created in Jira. It will also affect any project that has not explicitly been enabled or disabled, which you can do using the bulk Enable All and Disable All buttons for projects.

Project Configuration
ProForma can be enabled and disabled for all projects in your Jira instance, or for some projects and not others. Enabling ProForma on a project will allow you to:
- View forms attached to issues/requests
- Fill out a form attached to issues/requests
- Publish a form with a request type on Service Desk projects
Disabling ProForma for a project, or removing ProForma entirely, will hide/remove all of the above functions. Any forms that are already attached to an issue will not be deleted. However, it will not be possible to view the forms attached to the issues.

Form Builder: Allow HTML
This option allows Project Administrators to insert raw HTML into a form template. This can be useful for including YouTube videos or images within your forms. Be aware that the content of any HTML block in a form is not included in any PDF. Allowing raw HTML in your form templates could pose a security threat, as it could allow for Cross Site Scripting (XSS) attacks. It is therefore up to each Jira administrator to determine whether the option should be enabled. Disabling the Allow HTML setting will mean that HTML included on any Standard forms, including forms that have already been added to issues, will not be rendered. It will not impact Legacy forms.

Forms: Clickable Links
Enabling this option means that URL responses on the form will be rendered as clickable links when the form is in view mode. Beware that enabling this feature could allow links to malicious content.

Issue Forms
ProForma can allow users to create Jira issues directly from ProForma forms, thereby ensuring that you get all of the information you need from the moment the issue is created. This is a global feature that, once enabled, will be available on all projects in your instance. However, you will also need to adjust the Form Settings to allow issues to be created from a particular form. To enable the Issue Creator, log in as a Jira Administrator and go to Jira Settings > Apps (Cloud) / Add-ons (Server) > ProForma. Go to Configuration and use the Issue Forms toggle button to enable the feature.

Analytics & Messaging
ProForma collects anonymous usage data and incorporates Intercom as a messaging platform to ensure Project and Jira administrators are kept informed of changes to ProForma. Please refer to our Trust Site page for more information. Disabling this option will prevent ProForma from sending any analytics data and also disable the onboarding and notification messages displayed within the app. ProForma Lite users cannot disable this option.
https://docs.thinktilt.com/proforma/Configure-ProForma.1445363735.html
2021-01-16T05:48:22
CC-MAIN-2021-04
1610703500028.5
[]
docs.thinktilt.com
Custom Extensions¶ Spack extensions permit you to extend Spack capabilities by deploying your own custom commands or logic in an arbitrary location on your filesystem. This might be extremely useful e.g. to develop and maintain a command whose purpose is too specific to be considered for reintegration into the mainline or to evolve a command through its early stages before starting a discussion to merge it upstream. From Spack’s point of view an extension is any path in your filesystem which respects a prescribed naming and layout for files: spack-scripting/ # The top level directory must match the format 'spack-{extension_name}' ├── pytest.ini # Optional file if the extension ships its own tests ├── scripting # Folder that may contain modules that are needed for the extension commands │ └── cmd # Folder containing extension commands │ └── filter.py # A new command that will be available ├── tests # Tests for this extension │ ├── conftest.py │ └── test_filter.py └── templates # Templates that may be needed by the extension In the example above the extension named scripting adds an additional command ( filter) and unit tests to verify its behavior. The code for this example can be obtained by cloning the corresponding git repository: $ pwd /home/user $ mkdir tmp && cd tmp $ git clone Cloning into 'spack-scripting'... remote: Counting objects: 11, done. remote: Compressing objects: 100% (7/7), done. remote: Total 11 (delta 0), reused 11 (delta 0), pack-reused 0 Receiving objects: 100% (11/11), done. As you can see by inspecting the sources, Python modules that are part of the extension can import any core Spack module. Configure Spack to Use Extensions¶ To make your current Spack instance aware of extensions you should add their root paths to config.yaml. In the case of our example this means ensuring that: config: extensions: - /home/user/tmp/spack-scripting is part of your configuration file. Once this is setup any command that the extension provides will be available from the command line: $ spack filter --help usage: spack filter [-h] [--installed | --not-installed] [--explicit | --implicit] [--output OUTPUT] ... 
filter specs based on their properties positional arguments: specs specs to be filtered optional arguments: -h, --help show this help message and exit --installed select installed specs --not-installed select specs that are not yet installed --explicit select specs that were installed explicitly --implicit select specs that are not installed or were installed implicitly --output OUTPUT where to dump the result The corresponding unit tests can be run giving the appropriate options to spack unit-test: $ spack unit-test --extension=scripting ============================================================== test session starts =============================================================== platform linux2 -- Python 2.7.15rc1, pytest-3.2.5, py-1.4.34, pluggy-0.4.0 rootdir: /home/mculpo/tmp/spack-scripting, inifile: pytest.ini collected 5 items tests/test_filter.py ...XX ============================================================ short test summary info ============================================================= XPASS tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3] XPASS tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4] =========================================================== slowest 20 test durations ============================================================ 3.74s setup tests/test_filter.py::test_filtering_specs[flags0-specs0-expected0] 0.17s call tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3] 0.16s call tests/test_filter.py::test_filtering_specs[flags2-specs2-expected2] 0.15s call tests/test_filter.py::test_filtering_specs[flags1-specs1-expected1] 0.13s call tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4] 0.08s call tests/test_filter.py::test_filtering_specs[flags0-specs0-expected0] 0.04s teardown tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4] 0.00s setup tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4] 0.00s setup tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3] 0.00s setup tests/test_filter.py::test_filtering_specs[flags1-specs1-expected1] 0.00s setup tests/test_filter.py::test_filtering_specs[flags2-specs2-expected2] 0.00s teardown tests/test_filter.py::test_filtering_specs[flags2-specs2-expected2] 0.00s teardown tests/test_filter.py::test_filtering_specs[flags1-specs1-expected1] 0.00s teardown tests/test_filter.py::test_filtering_specs[flags0-specs0-expected0] 0.00s teardown tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3] ====================================================== 3 passed, 2 xpassed in 4.51 seconds =======================================================
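For orientation, a command module such as scripting/cmd/filter.py roughly follows the convention below: module-level description/section/level strings, a setup_parser hook, and a function named after the command. This is only a sketch of that convention from the example repository's shape, the argument handling is illustrative, and the exact hook names and section values should be checked against Spack's developer documentation.

description = "filter specs based on their properties"
section = "extensions"   # assumption: the help section this command is listed under
level = "long"

def setup_parser(subparser):
    # Mirror (a subset of) the options shown by 'spack filter --help' above.
    subparser.add_argument("--installed", action="store_true", help="select installed specs")
    subparser.add_argument("specs", nargs="*", help="specs to be filtered")

def filter(parser, args):
    # Extension commands may import any core Spack module here and act on args.
    print("filtering specs:", args.specs)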
https://spack.readthedocs.io/en/latest/extensions.html
2021-01-16T05:47:16
CC-MAIN-2021-04
1610703500028.5
[]
spack.readthedocs.io
If you cannot install or log into the client, please send a support ticket with the following information:
- Operating system type
- Error log files
- Are you installing a new client or are you upgrading an existing client?
- A screenshot of the error message
To learn how to send a support ticket, see Armor Support.
https://docs.armor.com/pages/viewpage.action?pageId=28278944
2021-01-16T05:08:27
CC-MAIN-2021-04
1610703500028.5
[]
docs.armor.com
Receiving Grins Let's see how you can receive your first grins. Interactive Transactions The nature of Mimblewimble protocol means that the sender & receiver need to interact with one another, in some way or another, in order to form transactions. The first step is to generate an address: grin-wallet address It is not an on-chain address, as it's only used for wallet-to-wallet communication. grin1dhvv9mvarqwl6fderuxp3qgl6qppvhc9p4u24347ec0mvgg6342q4w6x56 Give it to the sender. To understand what comes next, you should know there are two primary ways to interact with the other party: Tor and Slatepack. A Tor connection is attempted first, but if it isn't accessible (counterparty offline, or either party doesn't have Tor service installed), then Slatepack method is automatically chosen. Tor All you need to do is type: grin-wallet listen Done! This sets up your wallet to listen for incoming connections through Tor. Just let the sender know that your wallet is ready. You can type grin-wallet info to check your wallet balance. Slatepack Slatepacks are encoded text messages used to transfer the data required to form a transaction, and are an alternative to a hands-off method such as Tor. The messages are easily copy-pasted and can be transferred in any communication channel imaginable: email, forum, social media, chat, letter, carrier pigeon etc. The text you receive from the sender should look like this: BEGINSLATEPACK. HctgNGXrJDGFY3B KrEF1meAezGjxQ6 Z93QF6Ps2m9yKCQ LfhZvpDY9ZXViM7 nDoNeMvwtYV2crr 8gDqvYDmtRfLL3n Uabao7VyWR4AuYg TXQUSWU83kEhKmr bRtdRjvpisx1LYo 9cyZGfsgsd7ZvDJ KKZPHhcPe4Eivtv cMvee3nwFFY3ZnM SoULNaHVJ38h3tZ vMXQMoMLB17L53o Xy6QQjDaG8avUBt LQq2GfGRTiUPQgn vQwFzfZPVzVKNLk 5AFmUQFZtiVdTJV xHvc1BuAqcamerv Y76KVccPY3WGupy 4zWFpkjTH65XNiH XqQnkb3EA1iVrHc tyTJ1PWb6X6oV1k ktYiWBpatyTirRy CywPyjr6c8XLr4Q 9VoCedU5BcdFdMB ACqQTwjgVXqjHoS 58ZPKFitjeH67Ts ah6twcKtMaFmTXD i7JEQ7qV6cewgxH 2jwWFxbb98mye6A Lm9movc6Wer26L2 91WQD3cbVpAZLEs APFPtyxnWjv8n3W ZXFLR2TPZwGc5Vt zwFUPoyWfKXasQy VVV6tbKWEEhqAZR e34M7uEwfurpUUi 9812VFPY1qw3K9b ynwQXuXMuWQCUnU s1JqWqFgSQKENUP tGCK19dys9twghA FaAc7ZXQHdMbUoL sVxVfdjE94F1Wpj M7QAM5VZuaauHdQ Mt2erFyxJ5vsYSZ hgS553UKoQL5YWX E7oRNdMDkJV6VkL i55kAQc1vWvW9ce 3MoXiBT4TJ1SyNS NVZKxgk8c. ENDSLATEPACK. Your next step would be to type: grin-wallet receive Then enter the message you were sent into the prompt. Next, your own wallet will output a beautiful Slatepack message as well: Copy and send it to the other party, and that's it, you've completed your role! It's now in the hands of the sender to finalize and post the transaction to the network. You can tell when it's accepted by the chain by typing grin-wallet info and seeing if there's an amount waiting for confirmation.
https://docs.grin.mw/getting-started/quickstart/receive/
2021-01-16T05:33:39
CC-MAIN-2021-04
1610703500028.5
[array(['../../../assets/images/send-msg.png', 'receiver-msg'], dtype=object) ]
docs.grin.mw
Firebase configuration steps
Updated December 2020
These steps detail configurations in 3rd-party consoles; if you spot any discrepancies in the screenshots/examples, please contact us at [email protected].
In order to configure your Firebase project so the Copilot deployment can access the data, here are the steps to follow. 💡 These steps should be done for each of your Firebase projects (normally there should be two - Development and Production environments).

Step 1 - Get to settings and extract Project ID
Go to the project wanted under the Firebase console, and then access Settings.
- Send us the Project ID in the main top section.
- You can see the Android and iOS configurations that should have been created by your developers - If you haven't got them, make sure your developers add the right apps for Android/iOS; you will need to turn each of them ON later.

Step 2 - Adding Copilot Analytics user
Go to "Users and permissions", and add this user group with Viewer roles:

Step 3 - Get to BigQuery setup
Continue to the integrations tab, and enter the BigQuery setup details.

Step 4 - Enabling BigQuery
Approve and enable BigQuery through the Wizard, if you have not yet connected BigQuery to your project. During the linking process, or at any time after, validate the following has been done: Check the "Include advertising identifiers in export" checkmark under "Google Analytics", and toggle both apps (and web apps if needed) ON. Under "Crashlytics", also toggle both apps ON. If you add more apps, you should turn the analytics & Crashlytics ON for each of them (including Streaming if needed).

Step 5 - Google cloud platform IAMs
Go to the Google cloud platform console and select your project at the top. If you have integrated BigQuery beforehand, you can press View in BigQuery to go directly to the GCP for the BigQuery integration. Then go to IAM & Admin.

Step 6 - Add Copilot Growth user
In the top of the Members and Roles, press the ADD button. Add the member [email protected]. Then set three roles by selecting a role, using "Add another role", and Save: - BigQuery DataViewer. - BigQuery Job User. - BigQuery User.

You're all done
Contact your customer success manager with any questions
https://docs.copilot.cx/docs/resources/firebase-configuration-steps/firebase-configuration-steps
2021-01-16T04:58:40
CC-MAIN-2021-04
1610703500028.5
[array(['/docs/assets/firebase-configuration/step1-fire-project-id.png', 'Firebase project ID'], dtype=object) array(['/docs/assets/firebase-configuration/step2-firebase-users.png', 'Add Firebase Copilot users'], dtype=object) array(['/docs/assets/firebase-configuration/step3-bq-setup.png', 'BigQuery setup'], dtype=object) array(['/docs/assets/firebase-configuration/step4-enable_bq.png', 'Enable BigQuery'], dtype=object) array(['/docs/assets/firebase-configuration/step5-gcloud-iams.png', 'Google Cloud IAMs'], dtype=object) array(['/docs/assets/firebase-configuration/step6-growth-user.png', 'Growth user'], dtype=object) ]
docs.copilot.cx
TextColor From Xojo Documentation Method The system text color. Usage result = TextColor Notes This value is useful when you are using Canvas controls to create custom controls. When drawing objects, use this color as the ForeColor value when drawing text. The DisabledTextColor returns the system color for drawing disabled text. This value can be changed by the user or when the system switches between light/dark modes, so you should use this method in the Paint event handler rather than storing the value. See Also DarkBevelColor, DarkTingeColor, DisabledTextColor, FillColor, FrameColor, HighlightColor, LightBevelColor, LightTingeColor functions; Color data type.
http://docs.xojo.com/index.php?title=TextColor&oldid=63412
2021-01-16T05:54:28
CC-MAIN-2021-04
1610703500028.5
[]
docs.xojo.com
The load balancing model used for the selected QoS is defined by a number of different load balancing classes. These are configured automatically when a QoS is selected, but can be explicitly changed by altering the configuration file. The supported load balancers are detailed in the table below:

The default setting is MostAdvancedSlaveLoadBalancer. To change the Connector load balancer, specify the property in the configuration; for example, to use Round Robin:

shell> ./tools/tpm update alpha \
    --property=dataSourceLoadBalancer_RO_RELAXED=com.continuent.tungsten.router.resource.loadbalancer.RoundRobinSlaveLoadBalancer
https://docs.continuent.com/tungsten-clustering-6.1/connector-routing-loadbalancers.html
2021-01-16T05:50:15
CC-MAIN-2021-04
1610703500028.5
[]
docs.continuent.com
The Shutdown Intercept Script event at. This allows you to set special restrictions on, regardless of the number of times the Tag changes. However, because this is run from the Client Scope, it)
https://docs.inductiveautomation.com/pages/viewpage.action?pageId=15336348
2021-01-16T06:00:29
CC-MAIN-2021-04
1610703500028.5
[]
docs.inductiveautomation.com
# Channels & Custom Groups Personalising your Mumble Server. # Creating, Managing and Locking Channels Creating a Channel is simple. You just right click the root channel (or the channel you want to put it under) and Add, then add a Name, Description (not required) and you can then position it if you want. # Managing Channels This is where channels can get more complicated. You can add custom groups containing specific users for each channel. This is useful if you want to give only your team access to the channel, without requiring a password. This is more advanced and we don't intend to cover it any further than the GIF below. # Locking Channels This is the easiest way to only allow certain people into channels. You can add a Password via the Edit option when right clicking on a channel. To actually be able to access the channel, you then need to go to Server > Access Tokens at the top and add that password as a token. This will then override enter permissions and let anyone with the password in. # More Information on Channels There are plenty of tutorials online that help with more complicated channel operations, such as Listening Channels, more specific Permissions and more.
https://docs.rsl.tf/guides/mumble-servers/channels-and-custom-groups.html
2021-01-16T06:47:45
CC-MAIN-2021-04
1610703500028.5
[array(['https://raw.githubusercontent.com/Respawn-League/Respawn-Docs/master/docs/src/guides/mumble-servers/createchannel.gif', 'createchannel'], dtype=object) array(['https://raw.githubusercontent.com/Respawn-League/Respawn-Docs/master/docs/src/guides/mumble-servers/managingchannel.gif', 'alt text'], dtype=object) array(['https://raw.githubusercontent.com/Respawn-League/Respawn-Docs/master/docs/src/guides/mumble-servers/pass1channel.gif', 'alt text'], dtype=object) array(['https://raw.githubusercontent.com/Respawn-League/Respawn-Docs/master/docs/src/guides/mumble-servers/pass2channel.gif', 'alt text'], dtype=object) ]
docs.rsl.tf
Display Node.
https://docs.toonboom.com/help/harmony-17/premium/reference/node/output/display-node.html
2021-01-16T04:55:27
CC-MAIN-2021-04
1610703500028.5
[array(['../../../Resources/Images/HAR/Stage/SceneSetup/HAR12/HAR12_Advanced_Display_Menu.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/SceneSetup/HAR11/HAR11_display_concepts002.png', None], dtype=object) ]
docs.toonboom.com
Limitations with Analytical Applications
This topic lists all the limitations associated with the Applications feature.
- Securing project resources: CML applications are accessible by any user with read-only (or higher) access to the project. However, CML does not actively prevent you from running an application that allows a read-only user (i.e. Viewers) to modify files belonging to the project. It is up to you to make the application truly read-only in terms of files, models, and other resources belonging to the project.
- Port availability: Cloudera Machine Learning exposes only 2 ports per project. Therefore, you can run a maximum of 2 web applications simultaneously, on these ports: CDSW_APP_PORT and CDSW_READONLY_PORT. By default, third-party browser-based editors run on CDSW_APP_PORT. Therefore, for projects that are already using browser-based editors, you are left with only one other port to run applications on: CDSW_READONLY_PORT.
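As a concrete illustration of the port limitation above, the sketch below shows one way an application can pick up the platform-provided port from the environment. It is a sketch under assumptions: Flask is assumed to be available in the project environment (any HTTP framework would do), and CDSW_READONLY_PORT / CDSW_APP_PORT are assumed to be set by the platform as described.

```python
# Minimal sketch of an application binding to one of the two exposed ports.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Keep the application genuinely read-only: serve information here,
    # and avoid endpoints that modify project files, models, or other resources.
    return "status: ok"

if __name__ == "__main__":
    # Prefer the read-only port; fall back to the app port (used by browser editors).
    port = int(os.environ.get("CDSW_READONLY_PORT") or os.environ.get("CDSW_APP_PORT", "8080"))
    app.run(host="127.0.0.1", port=port)
```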
https://docs.cloudera.com/machine-learning/1.0/applications/topics/ml-applications-limitations.html
2021-01-16T05:38:30
CC-MAIN-2021-04
1610703500028.5
[]
docs.cloudera.com
Upgrading to Riak KV 2.0
When upgrading to Riak 2.0 from an earlier version, we strongly recommend reading each section of the following guide. This guide explains which default Riak behaviors have changed and specific steps to take for a successful upgrade. For an overview of the new features and functionality included in version 2.0, check out our guide to Riak 2.0.
New Clients
To take advantage of the new features available in Riak 2.0, we recommend upgrading your application to an official Riak client that was built with those features in mind. There are official 2.0-compatible clients in the following languages: While we strongly recommend using the newest versions of these clients, older versions will still work with Riak 2.0, with the drawback that those older clients will not be able to take advantage of new features like data types or the new Riak Search.
Bucket Types
In versions of Riak prior to 2.0, the location of objects was determined by objects’ bucket and key, while all bucket-level configurations were managed by setting bucket properties. In Riak 2.0, bucket types are both an additional namespace for locating objects and a new way of configuring bucket properties in a systematic fashion. More comprehensive details on usage can be found in the documentation on using bucket types. Here, we’ll list some of the things to be aware of when upgrading.
Bucket types and object location
With the introduction of bucket types, the location of all Riak objects is determined by:
- bucket type
- bucket
- key
This means there are 3 namespaces involved in object location instead of 2. A full tutorial can be found in Using Bucket Types. If your application was written using a version of Riak prior to 2.0, you should make sure that any endpoint in Riak targeting a bucket/key pairing is changed to accommodate a bucket type/bucket/key location. If you’re using a pre-2.0-specific client and targeting a location specified only by bucket and key, Riak will use the default bucket configurations. The following URLs are equivalent in Riak 2.0:
/buckets/<bucket>/keys/<key>
/types/default/buckets/<bucket>/keys/<key>
If you use object locations that don’t specify a bucket type, you have three options:
- Accept Riak’s default bucket configurations
- Change Riak’s defaults using your configuration files
- Manage multiple sets of bucket properties by specifying those properties for all operations (not recommended)
Features that rely on bucket types
One reason we recommend using bucket types for Riak 2.0 and later is because many newer Riak features were built with bucket types as a precondition:
- Strong consistency — Using Riak’s strong consistency subsystem requires you to set the consistent parameter on a bucket type to true
- Riak Data Types — In order to use Riak Data Types, you must create bucket types specific to the Data Type you are using
Bucket types and downgrades
If you decide to use bucket types, please remember that you cannot downgrade your cluster to a version of Riak prior to 2.0 if you have both created and activated a bucket type.
New allow_mult Behavior
One of the biggest changes in version 2.0 regarding application development involves Riak’s default siblings behavior. In versions prior to 2.0, the allow_mult setting was set to false by default for all buckets. So Riak’s default behavior was to resolve object replica conflicts between nodes on its own, relieving connecting clients of the need to resolve those conflicts.
In 2.0, allow_mult is set to true for any bucket type that you create and activate. This means that the default when using bucket types is to handle conflict resolution on the client side using either traditional vector clocks or the newer dotted version vectors. If you wish to set allow_mult to false in version 2.0, you have two options:
- Set your bucket type’s allow_mult property to false.
- Don’t use bucket types.
More information on handling siblings can be found in our documentation on conflict resolution.
Enabling Security
The authentication and authorization mechanisms included with Riak 2.0 should only be turned on after careful testing in a non-production environment. Security changes the way all applications interact with Riak.
When Downgrading is No Longer an Option
If you decide to upgrade to version 2.0, you can still downgrade your cluster to an earlier version of Riak if you wish, unless you perform one of the following actions in your cluster:
- Index data to be used in conjunction with the new Riak Search.
- Create and activate one or more bucket types.
By extension, you will not be able to downgrade your cluster if you have used the following features, both of which rely on bucket types:
- Strong consistency
- Riak Data Types
If you use other new features, such as Riak Security or the new configuration files, you can still downgrade your cluster, but you will no longer be able to use those features after the downgrade.
Upgrading Your Configuration System
Riak 2.0 offers a new configuration system that both simplifies configuration syntax and uses one configuration file, riak.conf, instead of the two files, app.config and vm.args, required by the older system. Full documentation of the new system can be found in Configuration Files. If you’re upgrading to Riak 2.0 from an earlier version, you have two configuration options:
- Manually port your configuration from the older system into the new system.
- Keep your configuration files from the older system, which are still recognized in Riak 2.0.
If you choose the first option, make sure to consult the configuration files documentation, as many configuration parameters have changed names, some no longer exist, and others have been added that were not previously available. If you choose the second option, Riak will automatically determine that the older configuration system is being used. You should be aware, however, that some settings must be set in an advanced.config file. For a listing of those parameters, see our documentation on advanced configuration. If you choose to keep the existing app.config files, you must add the following additional settings in the riak_core section:
{riak_core, [
    {default_bucket_props, [
        {allow_mult, false}, %% or the same as an existing setting
        {dvv_enabled, false}
    ]},
    %% other settings
]},
This is to ensure backwards compatibility with 1.4 for these bucket properties.
Upgrading LevelDB
If you are using LevelDB and upgrading to 2.0, no special steps need to be taken, unless you wish to use your old app.config file for configuration. If so, make sure that you set the total_leveldb_mem_percent parameter in the eleveldb section of the file to 70.
{eleveldb, [
    %% ...
    {total_leveldb_mem_percent, 70},
    %% ...
]}
If you do not assign a value to total_leveldb_mem_percent, Riak will default to a value of 15, which can cause problems in some clusters.
Upgrading Search
Information on upgrading Riak Search to 2.0 can be found in our Search upgrade guide.
Migrating from Short Names
Although undocumented, versions of Riak prior to 2.0 did not prevent the use of the Erlang VM’s -sname configuration parameter. As of 2.0 this is no longer permitted. Permitted in 2.0 are nodename in riak.conf and -name in vm.args. If you are upgrading from a previous version of Riak to 2.0 and are using -sname in your vm.args, the below steps are required to migrate away from -sname.
- Upgrade to Riak 1.4.12.
- Back up the ring directory on each node, typically located in /var/lib/riak/ring.
- Stop all nodes in your cluster.
- Run riak-admin reip <old_nodename> <new_nodename> on each node in your cluster, for each node in your cluster. For example, in a 5 node cluster this will be run 25 total times, 5 times on each node. The <old_nodename> is the current shortname, and the <new_nodename> is the new fully qualified hostname.
- Change riak.conf or vm.args, depending on which configuration system you’re using, to use the new fully qualified hostname on each node.
- Start each node in your cluster.
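To connect the bucket type and allow_mult changes described above to application code, here is a short, illustrative sketch using the official Riak Python client. The host, port, and names are placeholders, and it assumes a bucket type (here called user_data) that you have already created and activated yourself, for example with riak-admin bucket-type create / activate; check the client documentation for your exact version before relying on it.

```python
# Illustrative sketch only; not taken from the upgrade guide itself.
from riak import RiakClient

client = RiakClient(host="127.0.0.1", pb_port=8087)

# In 2.0, object locations are bucket type / bucket / key.
bucket = client.bucket_type("user_data").bucket("users")

obj = bucket.new("alice", data={"email": "alice@example.com"})
obj.store()

fetched = bucket.get("alice")

# With allow_mult=true (the default for custom bucket types), the client may
# receive siblings and is responsible for resolving them.
if len(fetched.siblings) > 1:
    # Naive resolution for the sketch: keep the last sibling and write it back.
    fetched.siblings = [fetched.siblings[-1]]
    fetched.store()
```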
https://docs.riak.com/riak/kv/2.1.4/setup/upgrading/version.1.html
2021-01-16T05:43:44
CC-MAIN-2021-04
1610703500028.5
[]
docs.riak.com
Azure App Service, Virtual Machines, Service Fabric, and Cloud Services comparison
Overview
Azure offers several ways to host web sites: Azure App Service, Virtual Machines, Service Fabric, and Cloud Services. This article helps you understand the options and make the right choice for your web application. Azure App Service is the best choice for most web apps. Deployment and management are integrated into the platform, sites can scale quickly to handle high traffic loads, and the built-in load balancing and traffic manager provide high availability. You can move existing sites to Azure App Service easily with an online migration tool, use an open-source app from the Web Application Gallery, or create a new site using the framework and tools of your choice. The WebJobs feature makes it easy to add background job processing to your App Service web app. Service Fabric is a good choice if you're creating a new app or re-writing an existing app to use a microservice architecture. Apps, which run on a shared pool of machines, can start small and grow to massive scale with hundreds or thousands of machines as needed. Stateful services make it easy to consistently and reliably store app state, and Service Fabric automatically manages service partitioning, scaling, and availability for you. Service Fabric also supports WebAPI with Open Web Interface for .NET (OWIN) and ASP.NET Core. Compared to App Service, Service Fabric also provides more control over, or direct access to, the underlying infrastructure. You can remote into your servers or configure server startup tasks. Cloud Services is similar to Service Fabric in degree of control versus ease of use, but it's now a legacy service and Service Fabric is recommended for new development. If you have an existing application that would require substantial modifications to run in App Service or Service Fabric, you could choose Virtual Machines in order to simplify migrating to the cloud. However, correctly configuring, securing, and maintaining VMs requires much more time and IT expertise compared to Azure App Service and Service Fabric. If you are considering Azure Virtual Machines, make sure you take into account the ongoing maintenance effort required to patch, update, and manage your VM environment. Azure Virtual Machines is Infrastructure-as-a-Service (IaaS), while App Service and Service Fabric are Platform-as-a-Service (PaaS).
Feature Comparison
The following table compares the capabilities of App Service, Cloud Services, Virtual Machines, and Service Fabric to help you make the best choice. For current information about the SLA for each option, see Azure Service Level Agreements. App Service is a great solution for complex business applications. It lets you develop apps that scale automatically on a load balanced platform, are secured with Active Directory, and connect to your on-premises resources. It makes managing those apps easy through a world-class portal and APIs, and allows you to gain insight into how customers are using them with app insight tools. The WebJobs feature lets you run background processes and tasks as part of your web tier, while hybrid connectivity and VNET features make it easy to connect back to on-premises resources. Azure App Service provides a three 9's SLA for web apps. App Service is also a great solution for hosting corporate websites. It enables web apps to:
- Be ISO, SOC2, and PCI compliant.
- Integrate with Active Directory.
I have an IIS6 application running on Windows Server 2003.
Azure App Service
App Service is a great solution for this scenario, because you can start using it for free and then add more capabilities when you need them. With App Service, you can:
- Begin with the free tier and then scale up as needed.
- Use the Application Gallery to quickly set up popular web applications, such as WordPress.
- Add additional Azure services and features to your application as needed.
- Secure your web app with HTTPS.
I'm a web or graphic designer, and I want to design and build websites for my customers
For web developers and designers, Azure App Service integrates easily with a variety of frameworks and tools, includes deployment support for Git and FTP, and offers tight integration with tools and services such as Visual Studio and SQL Database. App Service is a good option that offers tight integration with Azure SQL Database. And you can use the WebJobs feature for running backend processes.
Choose Service Fabric
Service Fabric. App Service, the languages and frameworks needed by your application are configured for you automatically. App Service App Service, you can run it on one of the other Azure web hosting options. App Service, Service Fabric, and Virtual Machines using the Azure Virtual Network service. On App Service you can use the, App Service is a great choice for hosting REST APIs. With App Service, you can:
- Quickly create a mobile app or API app.
Note: If you want to get started with Azure App Service before signing up for an account, go to, where you can immediately create a short-lived starter app in Azure App Service for free. No credit card required, no commitments.
Next Steps
For more information about the three web hosting options, see Introducing Azure. To get started with the option(s) you choose for your application, see the following resources:
https://docs.microsoft.com/en-us/azure/app-service-web/choose-web-site-cloud-service-vm
2016-12-03T04:42:58
CC-MAIN-2016-50
1480698540839.46
[]
docs.microsoft.com
Interface that can be implemented by exceptions etc. that are error coded. The error code is a String, rather than a number, so it can be given user-readable values, such as "object.failureDescription". These codes will be resolved by a com.interface21.context.MessageSource object. This interface is necessary because both runtime and checked exceptions are useful, and they cannot share a common, framework-specific, superclass.
public static final java.lang.String UNCODED
public java.lang.String getErrorCode()
http://docs.spring.io/docs/tmp/com/interface21/core/ErrorCoded.html
2014-04-16T08:27:03
CC-MAIN-2014-15
1397609521558.37
[]
docs.spring.io
AN ORDER to create SFC 3.13 (6), relating to course descriptions for students applying for social worker training certificates. Submitted by DEPARTMENT OF REGULATION AND LICENSING
Final rule filed with LRB
Agency report to legislature part 1
Agency report to legislature part 2
LC Clearinghouse Report to agency
LC Clearinghouse Comments
Agency filing with LC Clearinghouse
https://docs.legis.wisconsin.gov/code/chr/2001/cr_01_059
2014-04-16T07:33:31
CC-MAIN-2014-15
1397609521558.37
[]
docs.legis.wisconsin.gov
Trackbacks/Pingbacks
[...] Via i-docs.org [...]
[...] Popcorn Maker, 3WDOC, Conductr, Storyplanet and Galahad. (3WDoc, Klynt and Popcorn Maker were compared by Maria Yanez and Eva Dominguez for the i-Docs Symposium back in the Spring.) One thing that [...]
http://i-docs.org/2012/03/23/authoring-tools-confronte/
2014-04-16T07:49:38
CC-MAIN-2014-15
1397609521558.37
[]
i-docs.org
Back up smartphone data or tablet data Before you begin: If you're using a BlackBerry® smartphone that includes built-in media storage, mass storage mode must be turned on. - Connect your smartphone or BlackBerry® PlayBook™ tablet to your computer. - In the BlackBerry® Desktop Software, click Device > Back up. - Do one of the following: - If your smartphone includes built-in media storage and you want to back up data that is stored there, select the Files saved on my built-in media storage check box. - Do any of the following: - To change the default name for the backup file, in the File name field, type a new name. - To encrypt your data, select the Encrypt backup file check box. Type a password. - To save your settings so that you aren't prompted to set these options again when you back up your smartphone or tablet, select the Don't ask for these settings again check box. - Click Back up.
http://docs.blackberry.com/en/smartphone_users/deliverables/29244/Back_up_device_and_tablet_data_602_1480692_11.jsp
2014-04-16T09:00:02
CC-MAIN-2014-15
1397609521558.37
[]
docs.blackberry.com
Customizing CMDB list views in CMDB On the Remedyforce Administration > Configure CMDB 2.0 > List View Customization page, you can customize the list view that is displayed in the Remedyforce CMDB tab. You can modify the out-of-box list views or create new list views for specific classes. The list views can be customized by: - Adding or removing fields that are displayed as columns - Defining the display order of the columns - Defining the width of each column The columns that are displayed in the Remedyforce CMDB list view are not based on profiles or users. By default, 25 columns are shown in the Remedyforce CMDB list view. You can update the default number of columns that are shown in the Remedyforce CMDB list view. To update the number, change the value in the CMDBListViewLimit custom setting. For more information, see Managing custom settings. The following image describes the List View Customization tab. The following out-of-the-box views are available: Customizing the list view The following video (2:17) shows how to modify the columns of the Remedyforce CMDB console. If you are using BMC Remedyforce CMDB 1.0, the configuration options that are supported only in BMC Remedyforce CMDB 2.0 appear disabled on the List View Customization page. This includes the New and Delete buttons, Views menu, search option for views, and Class Type and Class Name lists. For more information, see Customizing fields shown in the Remedyforce CMDB list view in CMDB 1.0. To customize the Remedyforce CMDB list view - Click the Remedyforce Administration tab. - On the Home page, click the Configure CMDB 2.0 tile, and from the menu select CMDB List View Customization. - In the List Views section, select the required view. If you want to create a new custom view, perform the following actions: - Click New. From the Class Type list, select one of the following: - CI – only CI and CI and Asset classes are displayed in the Class Name list. - Asset – only asset and CI and Asset classes are displayed in the Class Name list. - From the Class Name list, select the class. From the Available Fields list, select the required columns; to move the selected columns to the Selected Fields list, click the right arrow. - (Optional) To remove columns from the Selected Fields list, select the columns and click the left arrow. The columns displayed in the Remedyforce CMDB list view are in the order in which you have added them. To modify the display order of a column, select that column and click the up arrow or the down arrow in the Selected Fields list. - In the Pixels field, enter the width of each column in pixels (100 pixels = 2.646 cm). - (Optional) In the Preview section, click Refresh to preview the changes made to the Remedyforce CMDB list view. - Click Save. Considerations for customizing the Remedyforce CMDB list view - If there are any Rich Text Format (RTF) fields in the field set of a class, the Available Fields list does not display those fields. Configuration of the RTF fields as Remedyforce CMDB list view columns is not supported. - If you are already on the Remedyforce CMDB tab, the updated Remedyforce CMDB list view columns may not be displayed to you. You must refresh the Remedyforce CMDB tab to view the updated Remedyforce CMDB list view columns. - For all the out-of-the-box list views, if you remove any field from the Base Element field set, Salesforce1 Self Service - List View, the Selected Fields box still displays these fields. 
List view for classes that do not have a custom list view
If you have not defined a custom list view for a class, it inherits the custom list view of its parent class in the Remedyforce CMDB tab; if you have not defined a custom list view for the parent class, it inherits the custom list view of the class which is nearest in the class hierarchy. For example, in the following screenshot, if you have defined a custom list view for WAN, when you select WAN on the left panel of the Remedyforce CMDB tab, the right panel displays the columns based on the custom list view for WAN. However, if a custom list view is not defined for WAN, it will inherit the custom list view of the class which is nearest in the class hierarchy and has a custom list view defined, such as LNs Collection or Connectivity Collection. If none of the classes in the class hierarchy has any custom list view, WAN inherits the list view of the All Instances (Base Element) class. For a Rule Based Asset class, if you have not defined a custom list view, it inherits the list view of its parent CI class. For example, if you have not defined a custom list view for Desktop, which is an out-of-the-box Rule Based Asset Class based on the Computer System CI class, Desktop will inherit the list view of the Computer System CI class. The fields that you configure for a list view of a class are also shown in the CI/Asset details popup window. This popup window is displayed with the Configuration Item / Asset, Service, and Service Offering fields on the Remedyforce Console forms. This popup window shows the first 10 fields that you select for a class that have values. If a field does not have a value, it is not shown. The Instance Name field is always shown.
Related topic
Overview of the Remedyforce CMDB tab
https://docs.bmc.com/docs/remedyforce/201801/en/customizing-cmdb-list-views-in-cmdb-765452399.html
2020-11-23T18:36:18
CC-MAIN-2020-50
1606141164142.1
[]
docs.bmc.com
LinqServerModeSource.InconsistencyDetected Event Enables you to manually handle the inconsistency detected during an operation on a data source. Namespace: DevExpress.Data.Linq Assembly: DevExpress.Data.v20.2.dll Declaration public event LinqServerModeInconsistencyDetectedEventHandler InconsistencyDetected Public Event InconsistencyDetected As LinqServerModeInconsistencyDetectedEventHandler Event Data The InconsistencyDetected event's data class is LinqServerModeInconsistencyDetectedEventArgs. The following properties provide information specific to this event: Remarks The InconsistencyDetected event is raised when an inconsistency has been detected during an operation on a data source. Set the LinqServerModeInconsistencyDetectedEventArgs.Handled property to true, to manually handle the error. Otherwise, the data will be reloaded from the data source. See Also Feedback
https://docs.devexpress.com/CoreLibraries/DevExpress.Data.Linq.LinqServerModeSource.InconsistencyDetected
2020-11-23T20:20:51
CC-MAIN-2020-50
1606141164142.1
[]
docs.devexpress.com
Launching, resuming, and multitasking (XAML) [ This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation ] Learn how to launch, suspend, and resume your app. Also learn about file associations, AutoPlay, transferring data in the background, and running your own code in the background with background tasks. Roadmap: How does this topic relate to others? See:. See Detecting when your app is running in Kid's Corner mode. Related topics Guidelines and checklist for lock screen tiles
https://docs.microsoft.com/zh-cn/previous-versions/windows/apps/hh770837(v=win.10)
2020-11-23T19:42:02
CC-MAIN-2020-50
1606141164142.1
[]
docs.microsoft.com
Backing Up and Restoring Pivotal Platform This topic provides an overview of backing up and restoring your Pivotal Platform deployment. Before You Back Up When backing up data in your Pivotal Platform deployment, consider: If your deployment uses external databases, check that it is a supported database configuration. For more information, see External Storage Support Across Pivotal Platform Versions in Backing Up Pivotal Platform with BBR. If your Pivotal Platform deployment uses internal databases, follow the backup and restore instructions for the MySQL server in the BBR documentation. For more information, see Backup and Restore with BBR. General Data Protection Regulation Backup artifacts might contain personal data covered by GDPR. For example, a backup of a PAS could contain a user email. For more information about personal data that can be stored in Pivotal Platform, see General Data Protection Regulation. Backup and Restore with BBR BOSH Backup and Restore (BBR) is a command-line tool for backing up and restoring BOSH deployments. To perform a backup of your Pivotal Platform deployment with BBR, see Backing Up Pivotal Platform with BBR. To restore your Pivotal Platform deployment with BBR, see Restoring Pivotal Platform from Backup with BBR. To troubleshoot problems with BBR, see Troubleshooting BBR. BBR Restore Scenarios There are many different restore cases that can arise with BOSH Directors and PAS deployments. The following links provide some guidance on how to use BBR in common restore scenarios: These guides do not cover all possible restore scenarios. If you encounter another restore case, then some information in these guides might be applicable. For guidance, contact Pivotal Support.
https://docs.pivotal.io/ops-manager/2-7/install/backup-restore/
2020-11-23T19:50:16
CC-MAIN-2020-50
1606141164142.1
[]
docs.pivotal.io
{"metadata":{"image":[],"title":"","description":""},"api":{"url":"","auth":"required","results":{"codes":[]},"settings":"","params":[]},"next":{"description":"","pages":[]},"title":"The Tool Editor","type":"basic","slug":"the-tool-editor","excerpt":"","body":"[block:callout]\n{\n \"type\": \"info\",\n \"body\": \"This page contains procedures related to the legacy editor. Try out our new editor that has a more streamlined design and provides a better app editing experience. [More info](doc:about-the-tool-editor).\"\n}\n[/block]\nOnce you have uploaded an image containing your tool to the [The CGC Image Registry](doc:the-cgc-image-registry) using the [Rabix Command Line Interface](sdk-overview), you should provide a description of the tool's interface. The interface includes details of the tool's input and output ports, its command options, and the CPU and memory resources it requires. Providing this information writes a Common Workflow Language tool description, and allows your tool to be connected arbitrarily in workflows on the CGC.\n\nThere are two ways to enter a description of the tool's interface:\n * All of the features of a tool's interface can be described using the graphical tool editor on the CGC, which you can use in the browser.\n * Alternatively you can supply your own description of the tool in accordance with the Common Workflow Language. Read more on [instructions on the protocol for the Common Workflow Language](doc:advanced-features-of-the-tool-editor#section-upload-your-own-cwl-tool-description).\": \"* [Add a tool description with the tool editor](#section-add-a-tool-description-with-the-tool-editor)\\n* [Using the tool editor](#section-using-the-tool-editor)\\n * [Basic principles](#section-basic-principles)\\n * [Terminology](#section-terminology)\\n * [The tool editor layout](#section-the-tool-editor-layout)\\n * [A note on tool inputs](#section-a-note-on-tool-inputs)\\n * [A note on arguments](#section-a-note-on-arguments)\\n * [Dynamic expressions](#section-dynamic-expressions)\\n\\n* [Access the tool description as a JSON object](#section-access-the-tool-description-as-a-json-object)\"\n}\n[/block]\n##Add a tool description with the tool editor\nTo create a new tool description using the tool editor:\n1. [Navigate to your project](view-a-project).\n2. Click the **Apps** tab and select **+Add App**.\n3. Click **Create Tool**.\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"create-a-tool-ed1.jpg\",\n \"815\",\n \"491\",\n \"#c29233\",\n \"\"\n ],\n \"border\": true,\n \"caption\": \"Create a tool\"\n }\n ]\n}\n[/block]\n4. Give your tool a name, and click the green Create button.\n\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"Screenshot 2015-11-11 14.37.58.png\",\n \"920\",\n \"528\",\n \"#366d4e\",\n \"\"\n ],\n \"border\": true,\n \"caption\": \"Name your tool\"\n }\n ]\n}\n[/block]\n##Using the tool editor\n###Basic principles\nThe tool editor is used to capture the interface of a tool, so that it can be be used in conjunction with other imported tools, and with existing workflows on the CGC.\n\nThe way the tool editor works is to build up a command that looks like the command you would enter in your terminal to run the tool, using variables in place of specific filenames or parameter values. 
Its use will become clearer with examples, given in the general introduction to the editor on this page, and in the extended example given in the SDK Tutorial, [Worked example of uploading SamTools Sort](doc:install-and-run-samtools-sort).\n\n###Terminology\nThe tool editor uses the terminology of the Common Workflow Language to refer to the syntax of a command line utility. The terms map onto syntax items as follows:\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"\",\n \"0-0\": \"**Example command syntax**\",\n \"0-1\": \"`cmd`\",\n \"0-2\": \"`[-a]`\",\n \"0-3\": \"`[num]`\",\n \"0-4\": \"`[-f]`\",\n \"0-5\": \"`[binary file]`\",\n \"1-0\": \"**Tool editor terminology**\",\n \"1-1\": \"base command\",\n \"1-2\": \"prefix\",\n \"1-3\": \"value\",\n \"1-4\": \"prefix\",\n \"1-5\": \"value\"\n },\n \"cols\": 6,\n \"rows\": 2\n}\n[/block]\nThe **base command** of a command line tool is the first part of any command, before any arguments are specified. It is the utility name, together with any subcommand of the utility, in the case that the utility has multiple subcommands (for example, as a tool package will). If you need to specify a path to the utility, then the full path constitutes the **base command**.\n\nThe following are all examples of **base commands**:\n * `grep (no subcommand)`\n * `bin/my-script.sh`\n * `samtools sort`\n * `bwa index`\n\nThe **prefixes** of a command line tool roughly correspond to its options, or flags, which are the single characters, such as `-f` or `-X` that follow the **base command** and are modified by option arguments. However, the notion of a **prefix** is slightly wider than the notion of an option. Specifically, if a tool requires option arguments to be passed in such a way that the option argument is not separated by a space from the option argument, but instead by some non-empty string, such as '=' then this separator is also part of the **prefix**.\n\nThe following are all examples of **prefixes**:\n * `-f`\n * `-X`\n * `INPUT=`\n\nThe **values** of the command line tool roughly correspond to its option arguments. However, there is an important difference between the two. The **value** can be either a literal that passed in as the option argument or an expression that resolves to the option argument. For the latter case, you may enter a JavaScript expression given in terms of features of the tool execution as a **value** of the tool; the value of this expression will be treated as the option argument for the **prefix** preceding it. 
Information on how to enter an expression in place of a literal is given in the documentation on [dynamic expressions in tool descriptions](doc:dynamic-expressions-in-tool-descriptions).\n\nThe following are all examples of **values**:\n * `myfile.ext`\n * `9`\n * `$job.allocatedResources.cpu`\n\n###The tool editor layout\nThe tool editor consists of five tabs, corresponding to different parts of the tool description, some additional details used to label the tool in graphical interfaces, and a tab on which you can test the correctness of a tool description.\n\nInformation on the tabs is provided in the following five pages:\n\n * [The tool's General Information](doc:general-tool-information) \n * [The tool Input ports](doc:tool-input-ports) \n * [The tool Output ports](doc:tool-output-ports) \n * [Additional Tool Information](doc:additional-tool-information) \n * [A Test tab](doc:test-the-tool-description) \n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"te3.png\",\n \"1350\",\n \"934\",\n \"#74a4c4\",\n \"\"\n ]\n }\n ]\n}\n[/block]\n###A note on tool inputs\nIn the tool editor, you will be prompted to describe a tool's inputs. Inputs in this sense include data, such as files, as well as parameters, such as integers, or arrays.\n\n###A note on arguments\nThe way that you specify the arguments of the subcommand being described depends on the argument type:\n * Arguments related to inputs (files and parameter settings) are described on the **Inputs** tab.\n * Arguments that are not related to any specific input—such as those based on resources allocated to the job—can be entered in the **Arguments** field on the **General Information** tab.\n * The executable name of the subcommand, and any other part of the command that you want to fix for every execution of the tool is entered in the field ** Base Command** on the **General Information** tab. \n\n###Dynamic expressions\nThe icon **</>** in a field indicates that you can enter an expression in that field; i.e., rather than giving a literal value, you can specify that the value is a function of something else. Expressions are entered in JavaScript. See the page on [dynamic expressions in tool descriptions](doc:dynamic-expressions-in-tool-descriptions) for details of hardcoded variables that are available in these expressions, and examples of their use.\n\n##Access the tool description as a JSON object\n\nOnce you have described a tool in the tool editor, you can access its Common Workflow Language description as a JSON object. If you want to familiarize yourself with writing CWL by hand, inspecting these automatically generated files can be helpful.\n\nTo access the CWL description of your tool from the tool editor click the ellipsis menu in the top right hand corner and select **Export Tool.** \n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"export-a-tool-cgc-2.jpg\",\n 706,\n 300,\n \"#eef0f0\"\n ],\n \"border\": true\n }\n ]\n}\n[/block]\nAlternatively, you can use the API to download the CWL description of your tool. See the [documentation on the API for details](doc:get-details-of-an-app).","updates":["5d1c623bcc60f9001e1d5563"],"order":0,"isReference":false,"hidden":false,"sync_unique":"","link_url":"","link_external":false,"_id":"5637e95797666c0d008656ba","__v":44,"category":{"sync":{"isSync":false,"url":""},"pages":["56-02T22:53:11.556Z","project":"55faf11ba62ba1170021a9a7","githubsync":"","parentDoc":null,},"user":"554290cd6592e60d00027d17"}
https://docs.cancergenomicscloud.org/docs/the-tool-editor
2020-11-23T19:25:10
CC-MAIN-2020-50
1606141164142.1
[]
docs.cancergenomicscloud.org
How much line space do I get with Hills Portable 170 Clothesline? The Portable 170 has approx. 17m of line in total, which is enough to get about two loads of washing hung out and drying! Quite amazing for a portable clothesline. For help and advice on installing this indoor clothesline, please click the following link for Hills Portable 170 clothesline installation options. To see photos, videos and reviews please visit the Hills Portable 170 Clothesline product page here.
https://docs.lifestyleclotheslines.com.au/article/400-how-much-line-space-do-i-get-with-hills-portable-170-clothesline
2020-11-23T20:01:18
CC-MAIN-2020-50
1606141164142.1
[]
docs.lifestyleclotheslines.com.au
The Sales Channel API is deprecated and will be removed with 6.4.0.0. Consider using the Store-API instead.
The category endpoint is used to get category information, e.g. for building a navigation.
GET /sales-channel-api/v3/category
Description: Returns a list of categories assigned to the sales channel. All filter, sorting, limit and search operations are supported. You can find more information about these operations here.
GET /sales-channel-api/v3/category/{categoryId}
Description: Returns detailed information about a specific category.
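As a rough illustration of calling these endpoints from a script, here is a sketch using Python's requests library. The shop URL and access key are placeholders, and the sw-access-key header is assumed to be the sales channel access key header; check the authentication documentation for your Shopware version before relying on it.

```python
import requests

SHOP_URL = "https://shop.example.com"          # placeholder
ACCESS_KEY = "your-sales-channel-access-key"   # placeholder, from the sales channel settings

headers = {"sw-access-key": ACCESS_KEY}

# List categories assigned to the sales channel; limit/filter/sort work as on other list endpoints.
resp = requests.get(f"{SHOP_URL}/sales-channel-api/v3/category", params={"limit": 10}, headers=headers)
resp.raise_for_status()
print(resp.json())

# Detailed information about a specific category.
category_id = "replace-with-a-category-id"
detail = requests.get(f"{SHOP_URL}/sales-channel-api/v3/category/{category_id}", headers=headers)
detail.raise_for_status()
print(detail.json())
```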
https://docs.shopware.com/en/shopware-platform-dev-en/sales-channel-api/sales-channel-category-api
2020-11-23T19:35:54
CC-MAIN-2020-50
1606141164142.1
[]
docs.shopware.com
How stress transfer between volcanic and seismic zones affects volcanic and earthquake
Gudmundsson, Agust
Philipp, Sonja L.
Universitätsverlag Göttingen
Edited volume / conference contribution
Publisher's version
German
Gudmundsson, Agust; Philipp, Sonja L., 2006-03: How stress transfer between volcanic and seismic zones affects volcanic and earthquake. In: Philipp, S.; Leiss, B.; Vollbrecht, A.; Tanner, D.; Gudmundsson, A. (eds.): 11. Symposium "Tektonik, Struktur- und Kristallingeologie"; 2006, Univ.-Verl. Göttingen, p. 81 - 84., DOI.
Oceanic transform faults and ridge segments form a network where mechanical interaction is to be expected. In particular, dike emplacement in ridge segments is likely to affect earthquake activity in the adjacent transform faults through processes such as stress transfer. Similarly, strike-slip displacement across transform faults may trigger dike injections and, eventually, eruptions in the adjacent ridge segments. For obvious reasons, direct observations of the possible mechanical interaction between submarine transform zones and ridge segments at mid-ocean ridges are difficult. The subaerial seismic zones of Iceland, however, are in clear spatial connection with the adjacent volcanic zones. These zones, therefore, provide excellent opportunities to study stress transfer between volcanic and seismic zones (Gudmundsson 2000)...
https://e-docs.geo-leo.de/handle/11858/00-1735-0000-0001-343F-D?locale-attribute=de
2020-11-23T19:55:35
CC-MAIN-2020-50
1606141164142.1
[]
e-docs.geo-leo.de
SwitchYard projects are capable of referencing artifacts stored in an external repository (e.g. when using a SOA governance solution). These references can be declared in the SwitchYard application configuration using the tooling. Repository integration in SwitchYard is currently a work in progress and will evolve over time. The user may define references to specific artifacts being managed through an external SOA governance solution. Currently, artifact references consist of an element in the switchyard.xml file defining a name and URL for the artifact. The tools provide a wizard for adding artifact references to a SwitchYard project. To access the wizard, right-click any SwitchYard project and select SwitchYard -> Add Artifact Reference... The first page allows selection of the reference type: Support for adding references directly from a Guvnor repository will be available in an upcoming release. After selecting Generic Artifact Reference on the previous page, the user is presented with a page allowing them to specify a name and URL for the reference: Pressing Finish will add the reference to the project's switchyard.xml file.
https://docs.jboss.org/author/display/SWITCHYARD11/Artifact%20References.html
2020-11-23T20:58:02
CC-MAIN-2020-50
1606141164142.1
[]
docs.jboss.org
otIcmp6Handler Struct Reference
This structure implements ICMPv6 message handler.
#include <include/openthread/icmp6.h>
Public Attributes
otIcmp6ReceiveCallback mReceiveCallback
The ICMPv6 received callback.
void * mContext
A pointer to arbitrary context information.
struct otIcmp6Handler * mNext
A pointer to the next handler in the list.
The documentation for this struct was generated from the following file: include/openthread/icmp6.h
https://docs.silabs.com/openthread/1.0/structotIcmp6Handler
2020-11-23T19:54:33
CC-MAIN-2020-50
1606141164142.1
[]
docs.silabs.com
- Agents Page Agents Page Updated by Anya SuperAdmins and Admins can view and edit the information associated with each Agent on the Agents Page. Navigating the Agents Page To get to the Agents Page: - Click on the Settings icon in the bottom, left corner - Select "Team Settings" - Make sure "Agents" is selected in the left sidebar On the Agents page, you'll see a list of the names of your Agents along with their Role, Status, and when they were last seen. You can filter by any of these headings by clicking on them. Click on the heading again to switch between ascending and descending order. Use the dropdown menu at the top of the page to filter your Agents by Crew. Select a Crew and only Agents who have joined that Crew will show up in the list. Search for a particular Agent by using the search bar to the right of the Crew filter dropdown. You can search for an Agent by name or email. To learn about Adding Agents, see this article. Viewing and Editing Agent Information - Go to Settings icon in the bottom, left corner and select Team Settings - Make sure "Agents" is selected in the left sidebar - Click on the name of an Agent in the list - In the popup: - You can edit the Agent's First name, Last name, or Email address by clicking on the field and typing. - Change the Agent's Role by selecting the circle next to one of the other options - The selected options in the Crew Settings section indicates which Crews the Agent has been invited to. To add them to an additional Crew, use the dropdown to select another Crew. - The Conversations section shows the Conversations that the Agent has joined. Click on one of them to be taken to that Conversation. - At the top of the popup, you can click the three dots (...) to suspend an Agent, or reset their password. Once you suspend an Agent, you can then click the three dots again to remove them altogether from your company's TABLE account, or activate them again. - Click "Save" at the bottom of the popup to save your changes.
https://docs.table.co/article/4qs3yrir1w-agent-information
2020-11-23T19:57:44
CC-MAIN-2020-50
1606141164142.1
[array(['https://files.helpdocs.io/m53zo2qeyd/articles/4qs3yrir1w/1605032887928/agents-page.png', None], dtype=object) array(['https://files.helpdocs.io/m53zo2qeyd/articles/4qs3yrir1w/1605031415785/agents.png', None], dtype=object) ]
docs.table.co
Map design
You can use Mapbox to control nearly every aspect of the map design process, including uploading custom data, tweaking your map's color scheme, adding your font, creating data-driven visualizations, and more. The core of the map design process is the style: a document, written in JSON, that defines exactly how your map should be drawn. Because all Mapbox styles conform to the open source Mapbox Style Specification, your maps can be rendered consistently across multiple platforms, including on the web with Mapbox GL JS, the Mapbox Static Images API, on mobile with the Mapbox Maps SDKs for Android and iOS, and with any third party libraries that are designed to read Mapbox styles. This guide explains how Mapbox styles work and where you can go to learn more and get started designing your map. Style documents, or Style JSON, contain style rules that are used by a renderer to display the map you see in your browser or device. In this example, you can see references to the style's data and images and instructions on how to render them in the Style JSON on the left and the rendered live map on the right. The map is displayed in your browser after Mapbox GL draws the final map using the style JSON and the tilesets it refers to.
How map styles work
You must write style JSON according to a strict specification for the renderer to interpret it and display the map in your browser.
Mapbox Style Specification
The Mapbox Style Specification defines the information that should be included in a style document for the renderer to display the map, including the title, values for the initial camera position, sources and other resources used in the style, and the styling rules for the map's layers. The complete requirements are listed in the Mapbox Style Specification, but some of the key concepts to understand are: The map that you see in your browser or on your device is the result of applying style rules (Style JSON) to data sources (usually map tiles or GeoJSON) to render a complete map. In the language of the Mapbox Style Specification, data sources are called sources and the style rules you apply to that data are organized into layers. You cannot create a map without specifying both sources and layers.
- Sources. Sources tell the renderer what kind of data you would like to include and where to find it.
- Layers. A layer is a styled representation of the data in a source. It includes information about how the layer should appear on the map, including color, opacity, font, and more.
If you are using any icons, images, or fonts in your map, your style will need to include a sprite or glyphs property.
- Sprite. Any icons or images in your style will need to be stored in a sprite. Read more about how sprites work.
- Glyphs. Glyphs are used to display the fonts you are using in your style. A style's glyphs property provides a URL template for loading signed-distance-field glyph sets in PBF format.
Using styles
Map styles work with Mapbox GL JS, the Mapbox Maps SDKs for Android and iOS, and any third party software designed to read Mapbox styles. Style rules and tilesets are combined and rendered completely on the computer or mobile device that has requested them. Because styles are designed to work with GL-based renderers, you can alter your style programmatically after the map has been loaded.
Data-driven styles
Data-driven styles allow you to change a layer's style based on properties in the layer's source.
For example, you might create a data-driven style rule that sets the color of states in the US based on the population of each state. Required style sheet objects for data-driven styles The value of any layout property, paint property, or filter within a style sheet may be specified as an expression. With expressions, you can style data with multiple feature properties at once, apply conditional logic, and manipulate data with arithmetic and string operations for a more sophisticated relationship between your data and how it is styled. Below is an example of the value of a layout property using a match expression to display a different icon image depending on the value of the marker-number property in the data - "layout": { "icon-image": [ "match", ["string", ["get", "marker-number"]], "one", "pin-red", "two", "pin-blue", "three", "pin-green", "pin-white" ], "icon-size": 1 } The many types of expressions can be explored further within the Mapbox Style Specification and in the examples like Getting started with expressions and Make a heatmap with Mapbox GL JS. Creating map styles You can use the Mapbox Studio style editor to generate style JSON, write it directly, or use the Mapbox Styles API to read and change map styles, fonts, and images. Mapbox Studio The Mapbox Studio style editor is a visual interface for creating and editing a style according to the Mapbox Style Specification. You can learn more about how to create and edit styles in the Mapbox Studio Manual style section. Cartogram Use Cartogram, a drag-and-drop tool, to create a custom map in seconds. Upload a picture, select the colors you want to use, and create a map style that fits your brand. Your new map style will be ready to use on a website or in a mobile application. You can also open it in the Mapbox Studio style editor to continue customizing the style or add custom data. Mapbox Styles API The Mapbox Styles API lets you read and change map styles, fonts, and icons. This API is the basis for Mapbox Studio. If you use Mapbox Studio, Mapbox GL JS, or the Mapbox Mobile SDKs, you're already using the Styles API. You can learn more about the Styles API in our API documentation. Mapbox Style Specification You can create a style from scratch using a text editor and following the Mapbox Style Specification.
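To make the source/layer relationship concrete, here is a small illustrative sketch that assembles a minimal style document as a Python dictionary and prints it as JSON. The source and layer names are placeholders, and a real style would normally reference actual tile or GeoJSON data rather than an empty feature collection.

```python
import json

# Minimal style document following the concepts above: a version, one source,
# and one layer that styles that source (per the Mapbox Style Specification).
style = {
    "version": 8,
    "name": "Minimal example",
    "sources": {
        "my-data": {
            "type": "geojson",
            "data": {"type": "FeatureCollection", "features": []},  # placeholder data
        }
    },
    "layers": [
        {
            "id": "my-points",
            "type": "circle",
            "source": "my-data",
            "paint": {"circle-radius": 4, "circle-color": "#3887be"},
        }
    ],
}

print(json.dumps(style, indent=2))
```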
https://docs.mapbox.com/help/how-mapbox-works/map-design/
2020-11-23T19:35:50
CC-MAIN-2020-50
1606141164142.1
[array(['/help/img/gl-js/mapbox-gl-js-expressions.png', 'graduated circle map with circles of varying sizes'], dtype=object) array(['/help/img/screenshots/choropleth-160809.png', 'choropleth map with each state in the United States colored according to a data property'], dtype=object) array(['/help/img/screenshots/timezone-flights.png', 'colored line map showing flight paths'], dtype=object) array(['/help/img/screenshots/3D_buildings_example_dds.png', 'map showing 3D buildings where height is assigned according to a data property'], dtype=object) ]
docs.mapbox.com
Class: Aws::Firehose::Types::HiveJsonSerDe - Defined in: - gems/aws-sdk-firehose/lib/aws-sdk-firehose/types.rb Overview When making an API call, you may pass HiveJsonSerDe data as a hash: { timestamp_formats: ["NonEmptyString"], }. Constant Summary - SENSITIVE = [] Instance Attribute Summary - #timestamp_formats ⇒ Array<String> Indicates how you want Kinesis Data Firehose to parse the date and timestamps that may be present in your input data JSON. Instance Attribute Details #timestamp_formats ⇒ Array<String>.
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/Firehose/Types/HiveJsonSerDe.html
2020-11-23T19:56:27
CC-MAIN-2020-50
1606141164142.1
[]
docs.aws.amazon.com
Features Removed or Deprecated in Windows Server 2012 R2 Applies To: Windows Server 2012, Windows Server 2012 R2 The following is a list of features and functionalities in Windows Server® 2012 R2. For more details about a particular feature or functionality and its replacement, see the documentation for that feature. For your quick reference, following table briefly summarizes the status of features that have been removed or deprecated in either Windows Server® 2012 or Windows Server 2012 R2. Note This table is necessarily abbreviated; if you see a feature marked for deprecation or removal, please consult the detailed information in this topic or in Features Removed or Deprecated in Windows Server 2012. Quick reference table Features removed from Windows Server 2012 R2 The following features and functionalities have been removed from this release of Windows Server 2012 R2. Applications, code, or usage that depend on these features will not function in this release unless you employ an alternate method. Note If you are moving to Windows Server 2012 R2 from a server release prior to Windows Server 2012, you should also review Features Removed or Deprecated in Windows Server 2012. Backup and file recovery The File Backup and Restore feature has been removed. Use the File History feature instead. System Image Backup (the “Windows 7 File Recovery” feature) has been removed. Instead, use “Reset your PC.” Drivers Drivers for tape drives have been removed from the operating system. Instead, use the drivers provided by the manufacturer of your tape drive. Recovery disk creation The ability to create a recovery disk on CD or DVD has been removed. Use the Recovery Disk to USB feature. Slmgr.vbs options The /stao (sets token activation only) and /ctao (clears activation tokens only) options of Slmgr.vbs have been removed. These options have been replaced by more flexible activation command options. Subsystem for UNIX-based Applications The Subsystem for UNIX-based Applications (SUA) has been removed.. WMI root\virtualization namespace v1 (used in Hyper-V) The WMI root\virtualization\v1 namespace has been removed. Instead use the root\virtualization\v2 namespace. Features deprecated starting with Windows Server 2012 R2 deprecated. Use the protection policy to control the document lifecycle. To remove access to a particular document, set the validity time to “0” in the template, or select Require a connection to verify a user’s permission in Microsoft Office. Be aware that both of these options require a connection to a Rights Management Server in order to open the files.. Application Server The Application Server role is deprecated and will eventually no longer be available as an installable server role. Instead, install individual features and roles separately. COM and Inetinfo interfaces of the Web Server role The IIS CertObj COM interface is deprecated. Use alternate methods for managing certificates. The Inetinfo interface is deprecated. DNS The GAA_FLAG_INCLUDE_TUNNEL_BINDINGORDER flag in GetAdaptersAddresses is deprecated. There is no specific replacement. File and storage services. IIS Manager 6.0 Internet Information Services (IIS) Manager 6.0 is deprecated. It has been replaced by a newer management console. Networking Network Access Protection (NAP) is deprecated. Other options for keeping client computers up to date and secure for remote access include DirectAccess, Windows Web Application Proxy, and various non-Microsoft solutions. 
Network Information Service (NIS) and Tools (in RSAT) The Server for Network Information Service (NIS) is deprecated. This includes the associated administration tools in Remote Server Administration Tools (RSAT). Use native LDAP, Samba Client, Kerberos, or non-Microsoft options. RSAT: Identity management for Unix/NIS The Server for Network Information Service (NIS) Tools option of Remote Server Administration Tools (RSAT) is deprecated. Use native LDAP, Samba Client, Kerberos, or non-Microsoft options. Security Configuration Wizard The Security Configuration Wizard is deprecated. Instead, features are secured by default. If you need to control specific security settings, you can use either Group Policy or Microsoft Security Compliance Manager. SMB SMB 1.0 is deprecated. Once this is removed, systems running Windows XP or Windows Server 2003 (or older) operating systems will not be able to access file shares. SMB 1.0 has been replaced by SMB 2.0 and newer versions. Telnet server Telnet server is deprecated. Instead, use Remote Desktop. Windows Identity Foundation Windows Identity Foundation (WIF) 3.5 is deprecated and has been replaced by WIF 4.5. You should start migrating applications that use WIF to WIF 4.5 or Windows .NET Framework 4.5. SQL Lite SQL Lite is deprecated. Migrate to alternatives such as SQL LocalDb. WMI providers and methods
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn303411(v=ws.11)
2020-11-23T20:45:56
CC-MAIN-2020-50
1606141164142.1
[]
docs.microsoft.com
Stock to Flow Model
The Stock to Flow model is a methodology that explains why scarce commodities retain their value through time.
Stock: The already existing amount of the commodity.
Flow: The yearly production amount of the commodity.
S/F: Stock to Flow ratio.
The Stock to Flow ratio tells us how many years are needed to produce the total amount of a commodity which already exists. Assume the whole mined or produced amount of a commodity suddenly disappears; how many years will it take us to re-produce it? When the S/F ratio is high, this is a clear indicator of the scarcity of that commodity. On the other hand, if the total stock of a commodity is easy, or more importantly fast, to re-produce, that commodity isn't considered to be scarce.
Stock to Flow Ratio of Gold
Gold is considered to be one of the most scarce and valuable commodities, so it is a good example of the stock to flow ratio. The amount of gold already mined cannot be known exactly, but it is estimated to be around 185,000 tons. Yearly production of gold is around 3,000 tons. When we put these values into the equation:
Stock of Gold: 185,000 tons
Flow of Gold: 3,000 tons
S/F of Gold: About 62 years
This means that if all the gold which has already been mined disappears, we need about 62 years to re-mine the same amount of gold. This value is quite high, therefore it is an indication that gold is a scarce commodity.
Stock to Flow Ratio of Bitcoin
Bitcoin, the first cryptocurrency, with the highest network effect and some of the highest liquidity and volume, is worth examining under the stock to flow model.
Stock of Bitcoin: About 18,000,000 Bitcoins already mined so far
Flow of Bitcoin: After the halving on 11th May 2020, dropped to about 330,000 Bitcoins per year
S/F of Bitcoin: About 54 years
Bitcoin's stock to flow ratio is now very close to Gold's, and with the next halving it will become 108 years. This will make Bitcoin the most scarce commodity ever. It is also worth noting that Bitcoin's flow will never increase due to demand, which clearly differentiates it from Gold and any other commodity.
Production Cost
All commodities have a production cost which varies by region and the technologies being used. From now on, when we mention production cost, we will refer to the worldwide median production cost. When the market price of a commodity drops below the production cost, miners will stop production as they will be in a non-profitable situation. As a result, there won't be any more flow into the market at prices lower than the production cost.
Production Cost of Gold
During the sub-prime crisis, it was known that the average mining cost of Gold was around 1,100 USD / Oz. Gold dropped to this area, many miners stopped operations, and Gold prices bottomed.
Production Cost of Bitcoin
During the last bear market, Bitcoin prices bottomed in the 3,100 - 3,200 USD area. It is well known that this area was the cost of producing Bitcoin in China, where a significant amount of Bitcoin miners are operating. We can conclude that in most cases, production cost creates a clear bottom in the markets.
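As a quick check of the arithmetic above, here is a small sketch. The stock and flow figures are the approximate numbers quoted in the text, not live data, so the outputs are only rough.

```python
# Stock-to-flow: years needed to re-produce the existing stock at the current yearly flow.
def stock_to_flow(stock, yearly_flow):
    return stock / yearly_flow

gold_sf = stock_to_flow(185_000, 3_000)               # tons; roughly 62 years
btc_sf = stock_to_flow(18_000_000, 330_000)           # coins, post May-2020 halving; roughly 54 years
btc_next_sf = stock_to_flow(18_000_000, 330_000 / 2)  # rough flow after the next halving

print(f"Gold S/F:                   {gold_sf:.1f} years")
print(f"Bitcoin S/F:                {btc_sf:.1f} years")
print(f"Bitcoin after next halving: ~{btc_next_sf:.0f} years")
```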
https://docs.risingcoin.org/whitepaper/stocktoflow.html
2020-11-23T19:20:00
CC-MAIN-2020-50
1606141164142.1
[array(['_images/GOLD%20M1.png', 'GOLD M1'], dtype=object) array(['_images/BTCUSD%20M1.png', 'BTCUSD M1'], dtype=object)]
docs.risingcoin.org
If you get stuck on something, try some of the suggestions below or feel free to contact us at any time. You can always chat with us directly from the TestProject app, or post a question on the forum.
Some organizations have firewall rules in place that prevent the TestProject agent from communicating with the app in the way that it needs to. Check with your IT team to see if this may be the issue. You can also check your computer's firewall settings to make sure TestProject is not getting blocked.
If your internet connection is through a corporate proxy server, and the certificate is self-signed, you may need to add the root CA of this certificate to the list of trusted publishers on your computer.
Make sure you have a TestProject agent installed and running on your local machine. If not, you will need to start the agent before you can use it to run tests. If the agent is running, you can try restarting it.
Make sure you do not have any other mirroring software running.
If a new browser instance does not open when you try to run the test recorder for web tests, check that your browser version is up to date.
https://docs.testproject.io/tips-and-tricks/troubleshooting
2020-11-23T18:29:21
CC-MAIN-2020-50
1606141164142.1
[]
docs.testproject.io
Events and Logs
Displays the logs and events of all the executions in the current tenant, according to the user's permissions. You can configure the fields that are displayed and can choose to color-code success and failure messages. You can sort events/logs by Timestamp (default), Blueprint, Deployment, Node Id, Node Instance Id, Workflow, Operation and Type.
Sometimes error logs contain additional information about the error cause. This is indicated by an icon in the Message column. When you click on this icon you will see detailed information about the error.

Widget Settings
Refresh time interval - The time interval in which the widget's data will be refreshed, in seconds. Default: 2 seconds
List of fields to show in the table - You can choose which fields to present. By default, these are the fields presented:
- Icon
- Timestamp
- Blueprint
- Deployment
- Node Id
- Node Instance Id
- Workflow
- Operation
You can also choose to add the field "Type", which presents the log level in the case of a log, and the event type in the case of an event.
Color message based on type - When marked as "on", successful events are coloured in blue, and failures in red. Default: On
Maximum message length before truncation - Allows you to define the length of the messages presented in the table. Default: 200. Please note that even if the message is truncated in the table itself, you can see the full message upon hovering.
https://docs.cloudify.co/4.5.5/working_with/console/widgets/events/
2019-04-18T14:50:58
CC-MAIN-2019-18
1555578517682.16
[array(['../../../../images/ui/widgets/events-logs-2.png', 'events-logs'], dtype=object) array(['../../../../images/ui/widgets/events-logs-error-cause-modal.png', 'error-cause-modal'], dtype=object) ]
docs.cloudify.co
How do I run my code on a micro:bit?
If you've got a real micro:bit, you can download the code you write and upload and run it on your micro:bit.
1. Plug your micro:bit into your computer; it will show up as a USB thumb drive.
2. Go to a problem and click the button below the main editor on the right to download your code as a .hex file.
3. Drag the downloaded .hex file onto the micro:bit – it'll reboot and run the code!
Note: This example uses macOS, but the process is the same on Windows – just drag the file onto the USB drive called MICROBIT.
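For context, the .hex file is just a compiled micro:bit program. A minimal MicroPython example of the kind of code you might write in the editor and flash this way (purely illustrative, not a Grok exercise):

from microbit import *

# Scroll a greeting across the 5x5 LED display, forever
while True:
    display.scroll("Hello!")
    sleep(1000)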
https://docs.groklearning.io/article/257-how-do-i-run-my-code-on-a-micro-bit
2019-04-18T14:19:17
CC-MAIN-2019-18
1555578517682.16
[]
docs.groklearning.io
Where to Contribute Screwdriver has a modular architecture, and the various responsibilities are split up into separate repos. Check out the architecture diagram to see the overall workflow of Continuous Delivery using Screwdriver. The next few sections will help lay out where different code repositories fit. Screwdriver API The screwdriver repo is the core of screwdriver, providing the API endpoints for everything that screwdriver does. The API is based on the hapijs framework and is implemented in node as a series of plugins. Build bookends allow a user to create setup and teardown steps for builds. The API can also send notifications to users. notifications-base is the base class for defining the behavior between Screwdriver and notifications plugins, like email notifications and slack notifications. The API can also uploading code coverage reports and/or test results. coverage-bookend defines the relationship between Screwdriver and coverage bookends. coverage-base is the base class for defining the behavior between Screwdriver coverage bookend plugins, like coverage-sonar. Launcher The launcher performs step execution and housekeeping internal to build containers. This is written in Go and mounted into build containers as a binary. - sd-cmd: A Go-based CLI for sharing binaries which provides a single interface for executing a versioned command (via remote binary, docker image, or habitat package) during a Screwdriver build - sd-step: A Shared Step allows people to use the same packages and commands in all build containers, regardless of build environment - meta-cli: A Go-based CLI for reading/writing information from the metadata Executors An executor is used to manage build containers for any given job. Several implementations of executors have been created. All are designed to follow a common interface. Executor implementations are written in node: - executor-base: Common interface - executor-docker: Docker implementation - executor-j5s: Jenkins implementation - executor-k8s: Kubernetes implementation - executor-k8s-vm: Kubernetes VM implementation - executor-nomad: Nomad implementation The executor router is a generic executor plugin that routes builds to a specified executor. Models The object models provide the definition of the data that is saved in datastores (analogous to databases). This is done in two parts: - data-schema: Schema definition with Joi - models: Specific business logic around the data schema Datastores A datastore implementation is used as the interface between the API and a data storage mechanism. There are several implementations written in node around a common interface. - datastore-base: Base class defining the interface for datastore implementations - datastore-sequelize: MySQL, PostgreSQL, SQLite3, and MS SQL implementations - datastore-dynamodb: DynamoDB implementation, with dynamic-dynamodb being used to create the datastore tables for it Artifacts The Artifact Store (not to be confused with the datastores mentioned above) is used for saving log outputs, shared steps, templates, test coverage, and any artifacts that are generated during a build. The log service is a Go tool for reading logs from the Launcher and uploading them to the store. The artifact-bookend is used for uploading artifacts to the store. Source Code Management An SCM implementation is used as the interface between the API and an SCM. There are several implementations written in nodejs around a common interface. 
- scm-base: Common interface - scm-bitbucket: Bitbucket implementation - scm-github: Github implementation - scm-gitlab: Gitlab implementation Templates Templates are snippets of predefined code that people can use to replace a job definition in their screwdriver.yaml. A template contains a series of predefined steps along with a selected Docker image. - templates: A repo for all build templates - template-main: The CLI for validating and publishing job templates - template-validator: A tool used by the API to validate a job template Config Parser Node module for validating and parsing user’s screwdriver.yaml configurations. Guide & Homepage The Guide is documentation! Everything you ever hoped to know about the Screwdriver project. The Homepage is the basic one-pager that powers Screwdriver.cd. UI The Ember-based user interface of Screwdriver. Miscellaneous Tools - circuit-fuses: Wrapper to provide a node-circuitbreaker with callback interface - client: Simple Go-based CLI for accessing the Screwdriver API - gitversion: Go-based tool for updating git tags on a repo for a new version number - keymbinatorial: Generates the unique combinations of key values by taking a single value from each keys array - toolbox: Repository for handy Screwdriver-related scripts and other tools - hashr: Wrapper module for generating ids based on a hash Adding a New Screwdriver Repo We have some tools to help start out new repos for screwdriver: - generator-screwdriver: Yeoman generator that bootstraps new repos for screwdriver - eslint-config-screwdriver: Our ESLint rules for node-based code. Included in each new repo as part of the bootstrap process If you create a new repo, please come back and edit this page so that others can know where your repo fits in. Screwdriver.cd Tests and Examples The organization screwdriver-cd-test contains various example repos/screwdriver.yamls and acceptance tests for Screwdriver.cd.
https://docs.screwdriver.cd/about/contributing/where-to-contribute
2019-04-18T15:17:53
CC-MAIN-2019-18
1555578517682.16
[]
docs.screwdriver.cd
Inside EasySendy Pro, you can build an Autoresponder email sequence triggered by various activities in an email list:
- Upon email list subscription
- Upon email campaign open
- Upon email list import
In this guide, we will see how you can create an Autoresponder email sequence and trigger it after an email list import. Follow these steps to create an autoresponder email sequence of 4 emails:
1. First, create the email list on which you want to send the email campaign.
2. Go to "Create new campaign" from the EasySendy Drip (or EasySendy Pro) dashboard and select "Create New Autoresponder Campaigns"; fill in all the required details, build the email template, save the template and go to the last stage.
3. On the last screen you can select the "Activate at" time; you should leave this time at its default. Set the Autoresponder event to "AFTER-SUBSCRIBE", then add an Autoresponder time value of your choice and set the time unit to day / minute / hour. Finally, set "Include imported subscribers" to Yes and save the template. This creates the 1st autoresponder email template.
4. Similarly, create the 2nd, 3rd and 4th autoresponder email campaigns on the same email list, select the "Activate at" time to be the same as in the 1st email campaign, and set "Include imported subscribers" to Yes in each campaign design.
5. Now start importing your email subscribers into the email list on which you want to run the email campaign. You can follow this guide to import an email list.
https://docs.easysendy.com/email-campaigns/kb/build-autoresponder-email-list-import/
2019-04-18T14:40:58
CC-MAIN-2019-18
1555578517682.16
[]
docs.easysendy.com
Problems Upgrading?
First and foremost: report the problem on GitHub! It should not have occurred, and it either points to a problem with your database or a problem with the upgrade script itself. Please provide as much detail as possible about what went wrong, any error messages, and what version you were upgrading from.
Upgrading involves two things: updating the files and updating the database. If the database upgrade fails, that means you now have new files trying to read the new database structure, even though the database wasn't updated. You have two options:
- Wait to hear a response on GitHub.
- Revert your Form Tools files back to the previous version.
But please know that in the current state (new files + old database), Form Tools may not be able to function normally.
https://docs.formtools.org/upgrading/problems/
2019-04-18T15:19:02
CC-MAIN-2019-18
1555578517682.16
[]
docs.formtools.org
Auto-correct
In auto-correct mode, RuboCop will try to automatically fix offenses:

$ rubocop -a

For some offenses, it is not possible to implement automatic correction. Some automatic corrections that are possible have not been implemented yet.

Safe auto-correct

$ rubocop --safe-auto-correct

In RuboCop 0.60, we began to annotate cops as Safe or not safe. Eventually, the safety of each cop will be determined.
- Safe (true/false) - indicates whether the cop can yield false positives (by design) or not.
- SafeAutoCorrect (true/false) - indicates whether the auto-correction the cop does is safe (equivalent) by design.
If a cop is annotated as "not safe", it will be omitted from safe auto-correct runs.

Example of Unsafe Cop

array = []
array << 'Foo' <<
         'Bar' <<
         'Baz'
puts array.join('-')

Style/LineEndConcatenation will correct the above to:

array = []
array << 'Foo' \
         'Bar' \
         'Baz'
puts array.join('-')

Therefore, in this (unusual) scenario, Style/LineEndConcatenation is unsafe. (This is a contrived example. Real code would use %w for an array of string literals.)
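For reference, the %w form mentioned above expresses the same array without any concatenation for the cop to rewrite:

array = %w[Foo Bar Baz]
puts array.join('-')   # => Foo-Bar-Baz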
https://rubocop.readthedocs.io/en/stable/auto_correct/
2019-04-18T14:36:08
CC-MAIN-2019-18
1555578517682.16
[]
rubocop.readthedocs.io
Before deleting an Oracle hierarchy or instance, make sure that the hierarchy is active (green) on its primary server. You may also wish to remove the dependencies before deleting the hierarchy; otherwise, the dependencies will be deleted also.
Deleting an Oracle hierarchy accomplishes the following:
- Stops the Oracle services.
- Deletes the Oracle hierarchy and all dependencies.
Notes:
- Make sure both servers are active when a delete is initiated so that LifeKeeper can properly withdraw the databases from the backup server.
- If you want the IP address and volume to remain under LifeKeeper protection, you should delete the volume and TCP/IP dependencies prior to deletion.
To delete a resource hierarchy from all the servers in your LifeKeeper environment, complete the following steps:
- On the Edit menu, select Resource, then Delete Resource Hierarchy.
- Select the Target Server where you will be deleting your Oracle resource hierarchy, and continue through the remaining dialogs until the output indicates that the Oracle resource was deleted successfully.
- Click Done to exit.
http://docs.us.sios.com/sps/8.6.2/ja/topic/deleting-an-oracle-hierarchy
2019-04-18T15:06:54
CC-MAIN-2019-18
1555578517682.16
[]
docs.us.sios.com
6.10.0 (Apr 2019)¶ Kurento Media Server 6.10 is seeing the light with some important news! To install it: Installation Guide. Hello Ubuntu Bionic¶ Preliminary support for Ubuntu 18.04 LTS (Bionic Beaver) has landed in Kurento, and all the CI machinery is already prepared to compile and generate Debian packages into a new repository. To install KMS on this version of Ubuntu, just follow the usual installation instructions: DISTRO="bionic" # KMS for Ubuntu 18.04 (Bionic) sudo apt-get update \ && sudo apt-get install --yes kurento-media-server Mostly everything is already ported to Bionic and working properly, but the port is not 100% finished yet. You can track progress in this board: The two biggest omissions so far are: None of the extra modules have been ported yet; i.e. the example plugins that are provided for demonstration purposes, such as kms-chroma, kms-crowddetector, kms-platedetector, kms-pointerdetector. OpenCV plugins in Kurento still uses the old C API. This still worked out fine in Ubuntu 16.04, but it doesn’t any more in 18.04 for plugins that use external training resouces, such as the HAAR filters. Plugins that need to load external OpenCV training data files won’t work. For now, the only plugin affected by this limitation in KMS seems to be facedetector because it won’t be able to load the newer training data sets provided by OpenCV 3.2.0 on Ubuntu 18.04. Consequently, other plugins that depend on this one, such as the faceoverlay filter and its FaceOverlayFilter API, won’t work either. Not much time has been invested in these two plugins, given that they are just simple demonstrations and no production application should be built upon them. Possibly the only way to solve this problem would be to re-write these plugins from scratch, using OpenCV’s newer C++ API, which has support for the newer training data files provided in recent versions of OpenCV. This issue probably affects others of the extra modules mentioned earlier. Those haven’t even started to be ported. Hello Chrome 74¶ Google is moving forward the security of WebRTC world by dropping support for the old DTLS 1.0 protocol. Starting from Chrome 74, DTLS 1.2 will be required for all WebRTC connections, and all endpoints that want to keep compatibility must be updated to use this version of the protocol. Our current target operating system, Ubuntu 16.04 (Xenial), provides the library OpenSSL version 1.0.2g, which already offers support for DTLS 1.2 [1]. So, the only change that was needed to bring Kurento up to date in compatibility with Chrome was to actually make use of this newer version of the DTLS protocol. Bye Ubuntu Trusty¶ Our resources are pretty limited here and the simpler is our CI pipeline, and the less work we need to dedicate making sure KMS works as expected in all Operating Systems, the best effort we’ll be able to make improving the stability and features of the server. Our old friend Ubuntu 14.04 LTS (Trusty Tahr) is reaching its deprecation and End Of Life date in April 2019 (source:) so this seems like the best time to drop support for this version of Ubuntu. Canonical is not the only one setting an end to Trusty with their lifecycle schedules; Google is also doing so because by requiring DTLS 1.2 support for WebRTC connections, Trusty is left out of the game, given that it only provides OpenSSL 1.0.1f which doesn’t support DTLS 1.2 [1]. 
Reducing forks to a minimum¶ Moving on to Ubuntu 18.04 and dropping support of Ubuntu 14.04 are efforts which pave the road for a longer-term target of dropping custom-built tools and libraries, while standarizing the dependencies that are used in the Kurento project. These are some of the objectives that we’d like to approach: Using standard tools to create Ubuntu packages. We’re dropping the custom compile_project.py tool and instead will be pushing the use of git-buildpackage. Our newer kurento-buildpackage.sh uses git-buildpackage for the actual creation of Debian packages from Git repositories. Dropping as many forks as possible. As shown in Code repositories, Kurento uses a good number of forked libraries that are packaged and distributed as part of the usual releases, via the Kurento package repositories. Keeping all these forks alive is a tedious task, very error-prone most of the times. Some are definitely needed, such as openh264 or usrsctp because those are not distributed by Ubuntu itself so we need to do it. Some others, such as the fork of libsrtp, have already been dropped and we’re back to using the official versions provided by Ubuntu (yay!) Lastly, the big elephant in the room is all the GStreamer forks, which are stuck in an old version of GStreamer (1.8) and would probably benefit hugely from moving to newer releases. We hope that moving to Ubuntu 18.04 can ease the transition from our forks of each library to the officially provided versions. Ultimately, a big purpose we’re striving for is to have Kurento packages included among the official ones in Ubuntu, although that seems like a bit far away for now. Clearer Transcoding log messages¶ Codec transcoding is always a controversial feature, because it is needed for some cases which cannot be resolved in any other way, but it is undesired because it will consume a lot of CPU power. All debug log messages related to transcoding have been reviewed to make them as clear as possible, and the section Troubleshooting Issues has been updated accordingly. If you see that transcoding is active at some point, you may get a bit more information about why, by enabling this line: export GST_DEBUG="${GST_DEBUG:-3},Kurento*:5,agnosticbin*:5" in your daemon settings file, /etc/default/kurento-media-server. Then look for these messages in the media server log output: Upstream provided caps: (caps) Downstream wanted caps: (caps) Find TreeBin with wanted caps: (caps) Which will end up with either of these sets of messages: - If source codec is compatible with destination: TreeBin found! Use it for (audio|video) TRANSCODING INACTIVE for (audio|video) - If source codec is not compatible with destination: TreeBin not found! Transcoding required for (audio|video) TRANSCODING ACTIVE for (audio|video) These messages can help understand what codec settings are being received by Kurento (“Upstream provided caps”) and what is being expected at the other side by the stream receiver (“Downstream wanted caps”). Recording with Matroska¶ It’s now possible, thanks to a user contribution, to configure the RecorderEndpoint to use the Matroska multimedia container (MKV), using the H.264 codec for video. This has big implications for the robustness of the recording, because with the MP4 container format it was possible to lose the whole file if the recorder process crashed for any reason. MP4 stores its metadata only at the end of the file, so if the file gets truncated it means that it won’t be playable. 
Matroska improves the situation here, and a truncated file will still be readable. For more information about the issues of the MP4 container, have a look a then new knowledge section: H.264 video codec. New JSON settings parser¶ Kurento uses the JSON parser that comes with the Boost C++ library; this parser accepted comments in JSON files, so we could comment out some lines when needed. The most common example of this was to force using only VP8 or H.264 video codecs in the Kurento settings file, /etc/kurento/modules/kurento/SdpEndpoint.conf.json: "videoCodecs" : [ { "name" : "VP8/90000" }, { "name" : "H264/90000" } ] This is the default form of the mentioned file, allowing Kurento to use either VP8 or H.264, as needed. To disable VP8, this would change as follows: "videoCodecs" : [ // { // "name" : "VP8/90000" // }, { "name" : "H264/90000" } ] And it worked fine. The Boost JSON parser would ignore all lines starting with //, disregarding them as comments. However, starting from Boost version 1.59.0, the Boost JSON parser gained the great ability of not allowing comments; it was rewritten without any consideration for backwards-compatibility (yeah, it wouldn’t hurt the Boost devs if they practiced a bit of “Do NOT Break Users” philosophy from Linus Torvalds, or at least followed Semantic Versioning…) The devised workaround has been to allow inline comment characters inside the JSON attribute fields, so the former comment can now be done like this: "videoCodecs": [ { "//name": "VP8/90000" }, { "name": "H264/90000" } ] Whenever you want to comment out some line in a JSON settings file, just append the // characters to the beginning of the field name. Code Sanitizers and Valgrind¶ If you are developing Kurento, you’ll probably benefit from running AddressSanitizer, ThreadSanitizer, and other related tools that help finding memory, threading, and other kinds of bugs. kms-cmake-utils includes now the arsenm/sanitizers-cmake tool in order to integrate the CMake build system with the mentioned compiler utilities. You’ll also find some useful suppressions for these tools in the kms-omni-build dir. Similarly, if you want to run KMS under Valgrind, kms-omni-build contains some utility scripts that can prove to be very handy. Special Thanks¶ A great community is a key part of what makes any open source project special. From bug fixes, patches, and features, to those that help new users in the forum / mailing list and GitHub issues, we’d like to say: Thanks! Additionally, special thanks to these awesome community members for their contributions: - @prlanzarin (Paulo Lanzarin) for: - Add API support for MKV profile for recordings Kurento/kms-core#14, Kurento/kms-elements#13. - Fixed config-interval prop type checking in basertpendpoint and rtppaytreebin Kurento/kms-core#15 and @leetal (Alexander Widerberg) for reporting #321. - rtph26[45]depay: Don’t handle NALs inside STAP units twice (cherry-picked from upstream) Kurento/gst-plugins-good#2. - @tioperez (Luis Alfredo Perez Medina) for reporting #349 and sharing his results with RTSP and Docker. - @goroya for Kurento/kurento-media-server#10.
https://doc-kurento.readthedocs.io/en/6.10.0/project/relnotes/v6_10_0.html
2019-04-18T15:35:49
CC-MAIN-2019-18
1555578517682.16
[]
doc-kurento.readthedocs.io
Find your Office 365 tenant ID
Your Office 365 tenant ID is a globally unique identifier (GUID) that is different from your organization name or domain. You might need this identifier when you configure Group Policy objects for OneDrive.
To find your Office 365 tenant ID in the Azure AD admin center
Sign in to the Azure Active Directory admin center as a global or user management admin.
Under Manage, select Properties. The tenant ID is shown in the Directory ID box.
Note
For info about finding your tenant ID by using PowerShell instead, first read Azure Active Directory PowerShell for Graph and then use Get-AzureADTenantDetail.
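As a rough sketch of that PowerShell route (it assumes the AzureAD module is already installed, e.g. via Install-Module AzureAD), the tenant ID comes back as the ObjectId of the tenant detail object:

# Sign in, then print the tenant's GUID
Connect-AzureAD
(Get-AzureADTenantDetail).ObjectId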
https://docs.microsoft.com/en-us/onedrive/find-your-office-365-tenant-id?redirectSourcePath=%252fzh-cn%252farticle%252f%2525E6%25259F%2525A5%2525E6%252589%2525BE-office-365-%2525E7%2525A7%25259F%2525E6%252588%2525B7-id-6891b561-a52d-4ade-9f39-b492285e2c9b
2019-04-18T14:33:53
CC-MAIN-2019-18
1555578517682.16
[]
docs.microsoft.com
Kubernetes at the command line: Up and running with kubectl
Kubernetes provides a command-line interface (CLI) called kubectl to deploy, troubleshoot, and manage applications. Before you can use kubectl, you must ensure you can authenticate with the cluster's API server. Your credentials are distributed in a file called a kubeconfig, which is read by kubectl.

Download the kubeconfig
To obtain a kubeconfig customized to your cluster, navigate to the Platform9 clusters page and click the kubeconfig link. This will download a kubeconfig file for the specific cluster.

Make kubeconfig discoverable by kubectl
kubectl will by default read the file ~/.kube/config. We recommend placing your kubeconfig there. Otherwise, you can pass kubectl the option --kubeconfig (see the example at the end of this page).

Download kubectl
kubectl is an executable binary built specifically for combinations of OS and CPU architectures. Before you can download kubectl, you must identify the version of kubectl to download. The kubectl version must match the Kubernetes version of your cluster. To identify the Kubernetes version of your cluster, navigate to Infrastructure > Clusters on the Platform9 Clarity UI. Check the value in the Kubernetes version column for your cluster.
If you are using Linux, you can download kubectl, make it executable, and move it to /usr/local/bin by using the following commands:

curl -Lo kubectl<kubernetes version>/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

To install kubectl on other operating systems, refer to

Using kubectl
To see the nodes in your cluster:

kubectl get nodes

Let's make a Deployment with one nginx Pod:

kubectl run nginx-example --image=nginx --port=80

And watch the Pods as they are created (Ctrl-C to stop):

kubectl get pods --watch

Finally, delete the Deployment. The Pod will be deleted automatically.

kubectl delete deployment nginx-example

For more details, see the Kubernetes overview for kubectl.
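As referenced above, pointing kubectl at a kubeconfig stored outside ~/.kube/config looks like the following (the file path here is just a placeholder):

kubectl --kubeconfig=/path/to/downloaded-cluster.kubeconfig get nodes
kubectl version   # the client version reported here should match your cluster's Kubernetes version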
https://docs.platform9.com/support/kubernetes-at-the-command-line-up-and-running-with-kubectl/
2019-04-18T14:18:00
CC-MAIN-2019-18
1555578517682.16
[array(['/assets/pmk_1067_kubeconfig-1.png', None], dtype=object) array(['/assets/PMK_1067_K8Sversion.png', None], dtype=object)]
docs.platform9.com
Title Beyond Faraday’s Crispations: Nonlinear Patterns of Granular Flow on a Vibratory Conveyor Document Type Conference Proceeding. Recommended Citation C.A. Krϋlle, A. Götzendorfer, J. Kreft, and D. Svensek. 2009. "Beyond Faraday’s Crispations: Nonlinear Patterns of Granular Flow on a Vibratory Conveyor." Powders and Grains 2009: Proceedings of the 6th International Conference on Micromechanics of Gran. Med.: 711-716. Published in: Powders and Grains 2009: Proceedings of the 6th International Conference on Micromechanics of Gran. Med.
https://docs.rwu.edu/fcas_fp/195/
2019-04-18T14:37:06
CC-MAIN-2019-18
1555578517682.16
[]
docs.rwu.edu
Adding yamlized jobs to Jenkins with jenkins-job-builder
This doc describes how to use jenkins-job-builder (JJB) in the CI environment for creating/updating Jenkins jobs.
Note: This doc is intended for infra maintainers. If you are not an infra maintainer, and would like to add jobs to oVirt-Jenkins, please send a patch to the jenkins repo for review.

Introduction
There are two ways to create/update a job on the Jenkins server from yaml confs:
- Run the Jenkins deploy job from an existing patch
- Run jenkins-job-builder manually for an existing gerrit patch
Important note: Any manual change made using this procedure, on a job that is already configured in yaml on the Jenkins repo master branch, will be overwritten automatically once the jenkins_master_deploy-configs_merged job runs.

Running the Jenkins deploy job
The deploy job on the Jenkins server is used for updating Jenkins jobs from a gerrit patch containing a change/update in the yaml config. You can specify the gerrit patch refspec and the name (or glob) of the job(s) to be updated. A link to the job:

Running jenkins-job-builder manually
JJB can be used in order to test/update jobs from yaml files on jenkins.ovirt.org. It can be found in the following repo:
General steps for running a jjb test/update:
- Have a configuration file ready for the appropriate jenkins server.
- Check out the Jenkins repository of the relevant server and make the changes.
- cd to the parent directory of the yaml directories (see Jenkins yaml confs repository below).
- Run the jjb test command.
- If the test went ok, run the jjb update command for the specific job.

Creating a jjb config file for jenkins.ovirt.org:
The config file should include the following lines:

[jenkins]
user=<your user name on the jenkins server>
password=<api token from user settings->configure->show API token>
url=

[job_builder]
keep_descriptions=True
recursive=True
allow_empty_variables=True

Jenkins yaml confs repository:
Yaml directories are located at the following location (yaml and project sub-directories);a=tree;f=jobs/confs;hb=refs/heads/master

Testing your changes:
Run the following command from the jobs/confs dir in order to test your code:

jenkins-jobs --conf <jjbconfig file path> --allow-empty-variables -l debug test -o <temp xml output dir> yaml:projects

Deploy your changes on the Jenkins server:
If the test went ok, run the following command in order to update your code:

jenkins-jobs --conf <jjbconfig file path> -l debug update yaml:projects <job_name>
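For orientation, the yaml files under jobs/confs describe jobs in JJB's format. A minimal, purely illustrative job definition (the job name and shell step below are not an actual oVirt job) looks roughly like this:

- job:
    name: example-project_check-patch
    builders:
      - shell: 'echo "running checks"'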
https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Adding_yamlized_jobs_with_JJB/index.html
2019-04-18T14:17:30
CC-MAIN-2019-18
1555578517682.16
[]
ovirt-infra-docs.readthedocs.io
Setting up a Xamarin app with ApiOmat Within this tutorial we will take advantage of Xamarin's ability to share code between different projects and platforms. Therefore you will create a Xamarin. Forms app that uses a Portable Class Library (PCL) which contains the generated ApiOmat Xamarin SDK. Using Xamarin Studio Click New Solution and chose Multiplatform -> Apps -> Form App In the next step select the desired target platforms and make sure the option Use Portable Class Library is selected. By clicking Next and Create afterwards, your solution will be created immediately. Now, depending on what target platforms you've selected, a new solution with up to three new projects is created: XamarinApp XamarinApp.Droid XamarinApp.iOS The XamarinApp project is the PCL, where you will add the generated SDK. But before you do so, you need to configure it. You cannot add Windows Phone 8.0 as target of the PCL, because the System.Linq.Parallel namespace, which is used by the SDK, is not supported on that platform. This means you can't use PCL Profile 259. Instead we recommend to use PCL Profile 111. If your IDE doesn't show the profile number in the properties, you can open the .csproj file of the PCL project and see (and change) the profile there. Add the required dependencies that are listed below: Portable.BouncyCastle Version:1.8.1 (only needed when using the old C# SDK and not the new C# SQLite SDK) sqlite-net-pcl Regular C# / Xamarin SDK requires version 1.1.1 C#-SQLite / Xamarin-SQLite SDK (Beta) requires version 1.5.166-beta with the following dependency: System.Collections.Immutable Version:1.3.1 Xam.Plugin.Connectivity Version:2.1.2 Newtonsoft.Json Version:10.0.2 If you are targeting Windows Phone 8 or later and no previously added package depended on it, then you also need to add Microsoft.Net.Http Version:2.2.29! Unfortunately, due to a problem with this package, GZip-Compression for outgoing and incoming requests won't be available in this case. If you're not targeting Windows Phone 8 or later, please make sure that they're not selected as target frameworks in your project's build options. Newer versions of the packages probably work as well, but we ran our tests with the versions mentioned above and thus only support them. To add those packages, right click on Packages of the PCL project (XamarinApp in this example) and chose Add Packages... Within the search bar you can search for the required packages. By adding version: x.x.x after the package name you can ensure that only the specific version will be listed. For each package activate the checkbox in front of the package's name. After adding all packages, simply click the Add Package button in the right corner at the bottom and all previously selected packages will be installed in a row. You will also have to accept the license agreement of some selected packages. After adding all listed packages, your referenced packages should look like shown in the figure below: Now you are ready to add the generated SDK. Right click on your project and choose Add existing folder... Choose the Com folder of your extracted Xamarin SDK, check Include All within the appearing dialogue and confirm by clicking OK. Afterwards, chose Copy the files to directory and confirm again by clicking the OK button. Now, to make sure everything is fine, choose Project -> Build XamarinApp. 
Please notice that you must also add some of the listed packages to your platform projects as well: Portable.BouncyCastle (only needed when using the old C# SDK and not the new C# SQLite SDK) sqlite-net-pcl Xam.Plugin.Connectivity Newtonsoft.Json For the versions you have to use see the list above. On some Windows targets (e.g. Windows 8 store app or Windows Phone 8.1 app), the SQLite-net PCL also requires to reference the Microsoft Visual C++ Runtime Package for Windows (a System.TypeInitializationException gets thrown from SQlitePCL.raw if required and missing). When building your target project, the Xam.Plugin. Connectivity may complain about a duplicate "Plugin.Connectivity.pdb" on your build-path. You will have to delete the file either from the folder [MyProjectFolder]/packages/Xam.Plugin.Connectivity.[Version]/lib/portable-[targets] or from MyProjectFolder/packages/Xam.Plugin.Connectivity.[Version]/lib/[target] (whereas "MyProjectFolder" is the main folder for your solution, "Version" the installed version number of the package, "targets" the different supported targets for this library, as of writing "net45+wp80+wp81+wpa81+win8+MonoAndroid10+MonoTouch10+Xamarin.iOS10+UAP10" and "target" your specific target for the project, e.g. win8). Now your PCL project should be built without any errors and we can add the project as a reference to the other project(s). Therefore, right click on the other project's References (those that aren't the PCL project) and choose Edit References... When choosing the tab Projects you can now select your PCL project. Because the ApiOmat C# / Xamarin SDK uses an SQLite database for saving requests when no connection to the server is available and for generally caching data, a few final steps are necessary to finish the setup. According to Xamarin's documentation you have to define an interface, containing a method, that provides an existing SQLiteConnection in your PCL project. Afterwards you have to implement this interface in each of your platform specific projects, as shown below. Content of ISQLite.cs (PCL project) public interface ISQLite { SQLiteConnection GetConnection(); } Content of SQLite_iOS (iOS project) [assembly: Dependency ( typeof (SQLite_iOS))] // ... public class SQLite_iOS : ISQLite { public SQLite_iOS () {} public SQLite.SQLiteConnection GetConnection () { var sqliteFilename = "TodoSQLite.db3" ; string documentsPath = Environment.GetFolderPath (Environment.SpecialFolder.Personal); // Documents folder string libraryPath = Path.Combine (documentsPath, ".." , "Library" ); // Library folder var path = Path.Combine(libraryPath, sqliteFilename); // Create the connection var conn = new SQLite.SQLiteConnection(path); // Return the database connection return conn; }} Content of SQLite_Android (Android project) [assembly: Dependency ( typeof (SQLite_Android))] // ... public class SQLite_Android : ISQLite { public SQLite_Android () {} public SQLite.SQLiteConnection GetConnection () { var sqliteFilename = "TodoSQLite.db3" ; string documentsPath = System.Environment.GetFolderPath (System.Environment.SpecialFolder.Personal); // Documents folder var path = Path.Combine(documentsPath, sqliteFilename); // Create the connection var conn = new SQLite.SQLiteConnection(path); // Return the database connection return conn; }} After adding those files you can initialize the Datastore with the SQLiteConnection in your Xamarin.Forms project, for example in the class that inherits from the Application class. 
var conn = DependencyService.Get<ISQLite> ().GetConnection(); Datastore.InitOfflineHandler( conn ); From now on, all methods provided by the SDK, including accessing, modifying and saving your models are available within your platform specific project(s) as well as the PCL, which is also going to host your platform independent Xamarin.Forms code. Also have a look at our C# SDK documentation, where the usage of the C# / Xamarin SDK is described in detail.
http://docs.apiomat.com/32/Setting-up-a-Xamarin-app-with-ApiOmat.html
2019-04-18T14:48:44
CC-MAIN-2019-18
1555578517682.16
[array(['images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.24.10.png', 'images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.24.10.png'], dtype=object) array(['images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.24.47.png', 'images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.24.47.png'], dtype=object) array(['images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.25.40.png', 'images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.25.40.png'], dtype=object) array(['images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.26.37.png', 'images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.26.37.png'], dtype=object) array(['images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.28.16.png', 'images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.28.16.png'], dtype=object) array(['images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.34.07.png', 'images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.34.07.png'], dtype=object) array(['images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.41.32.png', 'images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.41.32.png'], dtype=object) array(['images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.45.01.png', 'images/download/attachments/27723866/Bildschirmfoto-2016-08-31-um-10.45.01.png'], dtype=object) ]
docs.apiomat.com
Use your average transaction value to spot fraudulent transactions. Fraudulent transactions are often significantly higher than the average transaction value, making it easy to identify at-risk transactions. This check identifies these transactions. Configuration options Establish various amount thresholds with individual scores for each. A single timeframe is established for the check. The risk check fires on any transaction that results in the configured threshold being met. - If multiple amounts meet the criteria for the check, the one with the highest score triggers. The amount of the assessed payment is included in the check. Contact Support Team to enable this risk rule for recurring payments.
https://docs.adyen.com/developers/risk-management/revenueprotect-engine-rules/shopperdna-rules/authorised-transaction-amount-velocity
2019-04-18T15:00:51
CC-MAIN-2019-18
1555578517682.16
[]
docs.adyen.com
Danube Cloud¶ Danube Cloud is an open source software for deploying, managing and automating cloud data centers and their processes. Danube Cloud Enterprise Edition and general information: Danube Cloud Community Edition homepage: Danube Cloud User Guide (for both Enterprise Edition and Community Edition): Danube Cloud Development and Contributing: Note If you have found a bug, please don’t hesitate and report it in the main bug tracker:.
https://docs.danubecloud.org/user-guide/esdc/index.html
2019-04-18T15:16:51
CC-MAIN-2019-18
1555578517682.16
[array(['../_images/danubecloud-logo.png', 'Danube Cloud'], dtype=object)]
docs.danubecloud.org
Arbitrary Settings
This module is for developers only. You will need PHP knowledge to make use of this script.
So what's the point? The point is to allow you to open up Form Tools to store arbitrary data that you can use outside of the script. Maybe an example would help...
One of our clients is using Form Tools to display submission data on their site. Every year, their custom PHP page needs updating again and again, for each year's form ID (and other settings). Each year I manually update the form ID on that page, the number of submissions that get displayed on the page and other minor little things. Since our client isn't technical, they are unable to do this themselves. But now, with this module, they can just edit the values within Form Tools itself, which saves us time, and them money.
Basically it lets you use Form Tools to store "global" data, configurable by the clients through a few simple fields. Then, you reference those values in your own PHP files. The result? No more bothersome updates...! The client does it themselves.
https://docs.formtools.org/modules/arbitrary_settings/
2019-04-18T14:59:23
CC-MAIN-2019-18
1555578517682.16
[]
docs.formtools.org
Global Visit Resource
Description
The global visit resource contains information about the relationship between a visitor's browser instance and their identity. Currently, the resource has one operation, Query identities by globalVisitID, which retrieves a list of identities for a given globalVisitID (the identifier for the browser).
https://docs.genesys.com/Documentation/GWE/latest/API/GlobalVisits
2019-04-18T15:05:58
CC-MAIN-2019-18
1555578517682.16
[]
docs.genesys.com
Creating Multi-master Kubernetes Cluster in Bare Metal Environment with Platform9 Clarity UI Platform9 Managed Kubernetes supports creation of highly available, multi-master Kubernetes clusters in a bare metal environment, through the Platform9 Clarity UI. A multi-master cluster must be composed of at least three master nodes. Before you can create a multi-master cluster, you must ensure that - the infrastructure network has been setup correctly. Refer to the article, HA for Bare metal multi-master Kubernetes Clusters, for further details. - the nodes that are to be a part of the cluster have been added and authorized. Follow the steps given below to create a highly available Kubernetes cluster for a bare metal environment, with Platform9 Clarity UI. Note: It is assumed that you are logged in to Platform9 Clarity UI. - Click Infrastructure>Clusters>Add Cluster. - Select the Deploy cluster via agent install check box. - Enter the name for the multi-master cluster in Name. - Select at least three nodes that would function as master nodes. - Select the Disable Workloads on Master Nodes check box, if you wish to disable deployment of workloads on master nodes. This is an optional, but recommended step. - Click Next. - Select the nodes that would function as worker nodes. - being inaccessible. - Select the Enable Application Catalog check box, if you want to deploy applications using the Kubernetes package manager, Helm, on the cluster. This is an optional step. - Click Next. - Review the cluster configuration, and then click Create Cluster. The multi-master cluster is created in the bare metal environment. You can deploy your applications on the highly available multi-master Kubernetes cluster.
https://docs.platform9.com/support/create-multimaster-bareos-cluster/
2019-04-18T15:00:51
CC-MAIN-2019-18
1555578517682.16
[array(['/assets/pmk_1299_s1.png', 'Add Master Nodes'], dtype=object) array(['/assets/pmk_1299_s3.png', 'Advanced Configuration'], dtype=object)]
docs.platform9.com
The angular velocity vector of the rigidbody measured in radians per second. In most cases you should not modify it directly, as this can result in unrealistic behaviour.

// Change the material depending on the speed of rotation
using UnityEngine;

public class ExampleClass : MonoBehaviour
{
    public Material fastWheelMaterial;
    public Material slowWheelMaterial;
    public Rigidbody rb;
    public MeshRenderer rend;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
        rend = GetComponent<MeshRenderer>();
    }

    void Update()
    {
        if (rb.angularVelocity.magnitude < 5)
            rend.sharedMaterial = slowWheelMaterial;
        else
            rend.sharedMaterial = fastWheelMaterial;
    }
}
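Since the value is expressed in radians per second, a small helper like the following (assuming it lives in a MonoBehaviour with the same rb field as above) converts it to degrees per second for display or tuning:

// Convert the rotation speed from radians/sec to degrees/sec
float GetDegreesPerSecond()
{
    return rb.angularVelocity.magnitude * Mathf.Rad2Deg;
}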
https://docs.unity3d.com/ScriptReference/Rigidbody-angularVelocity.html
2019-04-18T15:11:28
CC-MAIN-2019-18
1555578517682.16
[]
docs.unity3d.com
Aliases
Yii has many pre-defined aliases already available. For example, the alias @yii represents the installation path of the Yii framework; @web represents the base URL for the currently running Web application.
You can define an alias for a file path or URL by calling Yii::setAlias():

// an alias of a file path
Yii::setAlias('@foo', '/path/to/foo');

// an alias of a URL
Yii::setAlias('@bar', '');

Note: The file path or URL being aliased may not necessarily refer to an existing file or resource.
Given a defined alias, you may derive a new alias (without the need of calling Yii::setAlias()) by appending a slash / followed with one or more path segments. The aliases defined via Yii::setAlias() become root aliases, while aliases derived from them are derived aliases. For example, @foo is a root alias, while @foo/bar/file.php is a derived alias.
You can define an alias using another alias (either root or derived):

Yii::setAlias('@foobar', '@foo/bar');

Root aliases are usually defined during the bootstrapping stage. For example, you may call Yii::setAlias() in the entry script. For convenience, Application provides a writable property named aliases that you can configure in the application configuration:

return [
    // ...
    'aliases' => [
        '@foo' => '/path/to/foo',
        '@bar' => '',
    ],
];

You can call Yii::getAlias() to resolve a root alias into the file path or URL it represents. The same method can also resolve a derived alias into the corresponding file path or URL:

echo Yii::getAlias('@foo');               // displays: /path/to/foo
echo Yii::getAlias('@bar');               // displays:
echo Yii::getAlias('@foo/bar/file.php');  // displays: /path/to/foo/bar/file.php

The path/URL represented by a derived alias is determined by replacing the root alias part with its corresponding path/URL in the derived alias.
Note: The Yii::getAlias() method does not check whether the resulting path/URL refers to an existing file or resource.
A root alias may also contain slash / characters. The Yii::getAlias() method is intelligent enough to tell which part of an alias is a root alias and thus correctly determines the corresponding file path or URL:

Yii::setAlias('@foo', '/path/to/foo');
Yii::setAlias('@foo/bar', '/path2/bar');
Yii::getAlias('@foo/test/file.php');  // displays: /path/to/foo/test/file.php
Yii::getAlias('@foo/bar/file.php');   // displays: /path2/bar/file.php

If @foo/bar is not defined as a root alias, the last statement would display /path/to/foo/bar/file.php.
Aliases are recognized in many places in Yii without needing to call Yii::getAlias() to convert them into paths or URLs. For example, yii\caching\FileCache::$cachePath can accept both a file path and an alias representing a file path, thanks to the @ prefix which allows it to differentiate a file path from an alias.

use yii\caching\FileCache;

$cache = new FileCache([
    'cachePath' => '@runtime/cache',
]);

Please pay attention to the API documentation to see if a property or method parameter supports aliases.
Yii predefines a set of aliases to easily reference commonly used file paths and URLs:
@yii, the directory where the BaseYii.php file is located (also called the framework directory).
@app, the base path of the currently running application.
@runtime, the runtime path of the currently running application. Defaults to @app/runtime.
@vendor, the Composer vendor directory. Defaults to @app/vendor.
@bower, the root directory that contains bower packages. Defaults to @vendor/bower.
@npm, the root directory that contains npm packages. Defaults to @vendor/npm.
The @yii alias is defined when you include the Yii.php file in your entry script.
The rest of the aliases are defined in the application constructor when applying the application configuration. An alias is automatically defined for each extension that is installed via Composer. Each alias is named after the root namespace of the extension as declared in its composer.json file, and each alias represents the root directory of the package. For example, if you install the yiisoft/yii2-jui extension, you will automatically have the alias @yii/jui defined during the bootstrapping stage, equivalent to: Yii::setAlias('@yii/jui', 'VendorPath/yiisoft/yii2-jui'); © 2008–2017 by Yii Software LLC Licensed under the three clause BSD license.
https://docs.w3cub.com/yii~2.0/guide-concept-aliases/
2019-04-18T14:56:23
CC-MAIN-2019-18
1555578517682.16
[]
docs.w3cub.com
WSO2 Enterprise Store (ES) enables users to manage and provision the entire enterprise asset life cycle, for any type of asset, in one store. WSO2 ES is made up of a combination of the Back Office and the Store Front. The Back Office (Publisher) allows users to add assets and manage them throughout the entire asset life cycle from creation to retirement, while also allowing users to maintain asset versioning. The Enterprise Store Front is a central multi-tenant store that helps to increase the visibility of enterprise assets by allowing users to review them; the popularity aspect will help users gauge the quality of the available assets based on user feedback. WSO2 Enterprise Store is shipped with two asset types, namely gadgets and sites. However, WSO2 ES supports any type of enterprise asset, such as APIs, mobile apps, web apps, services, etc. This is made possible by WSO2 ES giving its users the freedom to add their own digital asset types together with their own customized life cycles. WSO2 ES goes the extra mile by allowing users to also extend the ES by carrying out additional customizations (such as customizing the asset subscription process based on how the asset needs to be accessed, adding new customized pages for the assets, etc.). Furthermore, WSO2 ES is released under the Apache Software License Version 2.0, one of the most business-friendly licenses available today.
https://docs.wso2.com/display/ES200/Introducing+WSO2+Enterprise+Store
2019-04-18T15:32:49
CC-MAIN-2019-18
1555578517682.16
[]
docs.wso2.com
5.0.0 (with service pack 1) token generated in step 6 <API URL>: Go to the API's Overview tab in the API Store and copy the production URL and append the payload to it. E.g., Here's an example: curl -k -H "Authorization :Bearer 8e64c4201d1c311c76a9c540856d1043" '' Note the result that appears on the command line. Similarly, invoke the POST method using the following cURL command: curl -k -H "Authorization :Bearer e9c8e79669041b4f73ab922f82692fa" -.
https://docs.wso2.com/pages/viewpage.action?pageId=45941943
2019-04-18T14:52:58
CC-MAIN-2019-18
1555578517682.16
[]
docs.wso2.com
Writing STDCI secrets file
STDCI uses the XDG Base Directory Specification standard in order to search for the secrets file. The standard defines where different files should be looked for. $XDG_CONFIG_HOME is the place to search for user-specific configuration files. On most systems, this variable is unset by default. For this case, the standard defines that if $XDG_CONFIG_HOME is either not set or empty, a default equal to $HOME/.config should be used.
STDCI searches for a file named ci_secrets_file.yaml under XDG_CONFIG_HOME. If XDG_CONFIG_HOME is not defined, it will look for a file with the same name under $HOME/.config.
ci_secrets_file.yaml is a YAML config of the following form:

---
- name: # Secret name
  project: # Optional. Used to filter secrets by project's name
  branch: # Optional. Used to filter secrets by project's branch name
  # Regex is supported for both project and branch
  # If not specified, the secret will be available for all projects/branches
  secret_data:
    # In this section, we write key-value pairs of secret data name and
    # its value. It is used to bind several values for one secret.
    # For example, username and password.

Example

---
- name: SERVICE_X_CREDENTIALS
  project: my_project
  branch: master
  secret_data:
    username: USERNAME_X
    password: PASSWORD_X

- name: MY_SSH_KEY
  project: oVirt-.*
  secret_data:
    key: |
      # SSH KEY GOES HERE

Note that SERVICE_X_CREDENTIALS will be available to "my_project" only, and only for the "master" branch. MY_SSH_KEY will be available for all projects whose name starts with "oVirt-".
2019-04-18T14:18:34
CC-MAIN-2019-18
1555578517682.16
[]
ovirt-infra-docs.readthedocs.io
Kubernetes Integration Summary Kubernetes is a container orchestrator whereas Cloudify is a general orchestrator. Kubernetes uses control loops to maintain resource states. Conversely, Cloudify uses event driven workflows to achieve desired states. Cloudify integrates with Kubernetes to orchestrate multi-tier application architectures that include containerized and non-containerized workloads. Integration Points There are two integration points between Cloudify and Kubernetes: infrastructure orchestration and service orchestration. Infrastructure orchestration is accomplished via the Cloudify Kubernetes Provider. Service Orchestration is accomplished via the Cloudify Kubernetes Plugin. These two features are not mutually dependent. You can use one without using the other. However, together they enable you to use the best of both Kubernetes and Cloudify. Infrastructure Orchestration: Cloudify Kubernetes Provider The Cloudify Kubernetes Provider is the first integration point, and it is related to managing underlying infrastructure. For example: - Lifecycle management of the underlying infrastructure, such as healing and scaling of Kubernetes nodes, storage management, and service exposure. - Deployment of a Kubernetes Cluster (via a blueprint). To install Kubernetes and the Cloudify Kubernetes Provider, go to Cloudify Kubernetes Provider. Service Orchestration: Cloudify Kubernetes Plugin The “Cloudify Kubernetes Plugin” is the second integration point, and it relates to Kubernetes API object orchestration. For example: - Connecting Kubernetes objects to non-Kubernetes objects, such as a remote Windows service and a Kubernetes Pod. - Creating and deleting Kubernetes API objects, such as Pods, Deployments, etc. - Updating Kubernetes API objects such as migrating Pods from one Kubernetes Node to another. To learn more, read the documentation on the Cloudify Kubernetes Plugin. Or, to deploy a demo application, go to Kubernetes Wordpress Example. If you need to access your Kubernetes Dashboard from a public API, follow these instructions. Why not put everything in a container? Some workloads can be delivered in a container, but there are often additional non-container configurations that need to be orchestrated. For example, part of an application may include a Windows service, or involve post-start or day two changes to some other custom hardware component. Also, several organizations have legacy applications that will not be migrated to containers any time soon. These “hybrid cloud” scenarios are where Cloudify comes in to the picture to bridge the gap between the power of containers and hardware, or custom component, orchestration.
https://docs.cloudify.co/4.5.0/developer/kubernetes/
2019-04-18T14:55:22
CC-MAIN-2019-18
1555578517682.16
[array(['../../images/plugins/services-orch.png', 'diagram of services orchestration'], dtype=object)]
docs.cloudify.co
You can connect multiple SMTP email relay servers, from any combination of relay service providers (Amazon SES, Mandrill, SendGrid, Sparkpost, Leadersend, Elasticemail and MailGun), inside your EasySendy Drip (or EasySendy Pro) account. For each SMTP email relay delivery server, follow the relevant guide, which will help you add the required email delivery server properly. After you have added a delivery server inside your EasySendy Drip (or EasySendy Pro) account, confirm that it is validated and activated.

Follow the guide to add the following delivery servers:
- ElasticEmail Integration Guide
- SparkPost Integration Guide
- MailGun Integration Guide
- SendGrid Integration Guide
- Mandrill Integration Guide
- Amazon SES Integration Guide

Inside each email delivery server integration setup, you will find the following settings:

Probability
The percentage of outgoing emails that you want to send through this single delivery server. For example, if two delivery servers are set to 70 and 30, roughly 70% of your outgoing emails are routed through the first server and 30% through the second.

Hourly quota
If there are limits that apply to sending with this server, you can set an hourly quota for it, and it will send no more than that quota in one hour.

Reply-To in header
https://docs.easysendy.com/email-campaigns/kb/connect-multiple-smtp-relay-servers/
2019-04-18T14:31:19
CC-MAIN-2019-18
1555578517682.16
[]
docs.easysendy.com
Available Salomon Modules on PHI Cards

Compiler
- icc: Intel C and C++ compilers

Data
- HDF5: HDF5 is a unique technology suite that makes possible the management of extremely large and complex data collections.

Devel
- devel_environment: Devel environment for Intel Xeon Phi (GCC 5.1.1, Python 2.7.12, Perl 5.14.2, CMake 2.8.7, Make 3.82, ncurses 5.9, ...)
- Tcl: Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more.

Lib
- zlib: zlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system.

Math
- GMP: Old module, description not available.
- Octave: GNU Octave is a high-level interpreted language, primarily intended for numerical computations.

Mpi
- impi: Intel MPI Library, compatible with MPICH ABI.

Toolchain
- iccifort: Intel C, C++ & Fortran compilers
- ifort: Intel Fortran compiler
- iimpi: Intel C/C++ and Fortran compilers, alongside Intel MPI.
- intel: Compiler toolchain including Intel compilers, Intel MPI and Intel Math Kernel Library (MKL).

Tools
- cURL: libcurl is a free and easy-to-use client-side URL transfer library.
- expat: Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags).
- OpenSSL: Old module, description not available.

Vis
- gettext: GNU `gettext' is an important step for the GNU Translation Project, as it is an asset on which we may build many other steps. This package offers to programmers, translators, and even users, a well integrated set of tools and documentation.
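As a quick illustration of how these modules are typically used, here is a minimal shell sketch based on the standard Environment Modules commands; whether the module command is available in your Phi session, and which modules you actually need, depends on your setup and is assumed here:

    # Show the modules available in the current environment
    module avail

    # Load the Intel toolchain (compilers, MPI and MKL) together with HDF5
    module load intel
    module load HDF5

    # Check what is currently loaded
    module list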
https://docs.it4i.cz/modules-salomon-phi/
2019-04-18T15:42:41
CC-MAIN-2019-18
1555578517682.16
[]
docs.it4i.cz
Platform9 VMware GA 1.1.0 Release Notes

What's New!

Support for Cinder Block Storage

With this release, we've enabled the OpenStack Cinder block storage service, so you can add persistent storage volumes to VM instances using VMFS datastores. The Cinder block storage service enables functionality such as:
- Creating independent volumes
- Taking volume snapshots
- Booting VMs from volumes and attaching/detaching volumes to VMs

Please refer to this support article for more info on Cinder support.

Enhancements to Glance Image Catalog

Starting with this release, users can use Glance to store VMDK-based images directly on a VMFS-backed datastore. VMDK images can be uploaded through the Glance API or client directly to this datastore. Snapshots will also be stored here, instead of taking up space on the gateway appliance. Please refer to this support article for more info on the Glance enhancements.

Bug Fixes and Enhancements

This release also features bug fixes and optimizations, including:
- Bug #3526 - When the vCenter username or password contains a single quote ('), double quote ("), or dollar sign ($), authentication fails.
- Bug #3560 - When deploying the Platform9 VMware Gateway appliance, assigning a static IP occasionally causes appliance deployment to fail.

Known Issues

Here's a list of known issues we are actively working on:
- IAAS-3716: Images remain in the datastore after appliance removal.
- IAAS-3704: Changing the VMware datastore should move images to the new datastore.
- Before upgrade, all user images in the old image library (appliance file system), including any snapshots that might have been created, need to be deleted to ensure consistency. This is required because Glance is currently configured to have only one backend (VMware datastore).
- Some guest operating systems may not recognize that a new disk has been attached and may require a reboot. You can configure the guest OS to detect changes on its SCSI hosts, but the configuration differs between operating systems (see the example command at the end of these release notes). For details, see the Cinder support article listed above. Disks are always attached through an lsiLogic SCSI adapter, whose discovery can depend on the guest OS. This has been tested with CentOS 6.6, Ubuntu 14.04 and Windows 2008 R2 Server.

October 17, 2015
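For the SCSI host rescan mentioned in the known issues, here is a minimal shell sketch for a Linux guest (for example CentOS 6.6 or Ubuntu 14.04); it is a generic Linux technique rather than a Platform9-specific procedure, and must be run as root inside the guest:

    # Rescan every SCSI host bus so the guest detects newly attached disks
    for host in /sys/class/scsi_host/host*; do
        echo "- - -" > "$host/scan"
    done

    # Confirm that the new disk is now visible
    lsblk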
https://docs.platform9.com/support/platform9-vmware-ga-1-1-0-release-notes/
2019-04-18T14:43:37
CC-MAIN-2019-18
1555578517682.16
[]
docs.platform9.com