Plugins.
Add Parameter
Allows Add Parameter to the access created for a supplier
Aggregation
Aggregate different Suppliers response based on different criteria.
Booking Persistence
This plugin allows to store and avoid duplicated bookings
Commission
Convert Gross Prices into Net Prices.
Market Group
Groups the Search result by markets that share the same product
Markup
Mark up or down the price coming from the Supplier based on different criteria.
Preference
To give preference to the options that match the preference rules.
Safety Margin
Discards those options that have a commission higher than expected.
Vcc Gen
Creates a virtual credit card at Book step.
Configure and manage developer programs, as described in this section. See also What is a developer program?.
Explore the Developer Programs page
To access the Developer Programs page, select Publish > Developer Programs in the side navigation bar.
As highlighted in the figure, the Developer Programs page enables you to:
- View all developer programs
- View the integrated portals connected to each developer program
- Configure a developer program
- Delete a developer program
- Search the list of developer programs
You create and connect to a developer program when creating an integrated portal.
Configure a developer program
To configure a developer program:
- Access the Developer Programs page.
- Click the row of the developer program that you want to configure.
Manage the following, as required:
- Registration and sign-in experience for developer accounts
- Identity providers (built-in or SAML) that are configured and enabled
- Integrated portal that is connected to the developer program
- Developer accounts that have been registered on the connected integrated portals
If you enrolled in the Beta release of the developer team and audience management features, you can perform the following additional tasks:
- Manage developer teams that have been created by portal users to share responsibility for an app with other portal users.
- Manage audiences for your portal to segment individuals in order to control access to content.
Edit the developer program details:
a. In the Program details section, click the edit icon.
b. Edit the name or description, as desired.
c. Click Save.
Delete a developer program
Before you can delete a developer program, you must delete the portal associated with it.
To delete a developer program:
- Access the Developer Programs page.
- Position your cursor over the row associated with the developer program you want to delete to display the actions menu.
- Click the delete icon.
- To confirm the delete operation:
a. Type DELETE in the text field.
b. Click Delete. | https://docs.apigee.com/api-platform/publish/portal/configure-developer-program | 2019-11-12T01:50:31 | CC-MAIN-2019-47 | 1573496664469.42 | [array(['/api-platform/images/developer-programs.png',
'Developer programs'], dtype=object) ] | docs.apigee.com |
Installation Guide for Solopreneur theme
This article will guide you on how to install and configure Solopreneur WordPress theme.
#1. Installation and Activation
- 1
- After you’ve purchased and downloaded the Solopreneur theme, login to your WordPress website to get started.
- 2
- Navigate to Appearance>Themes and click the Add New button at the top of the following screen. In the Add Themes screen, click the Upload Theme button.
- 3
- Click the Choose File button and search for your licensed copy of the Solopreneur theme on your drive.
- 4
- Once it is uploaded, click the Install Now button. WordPress will extract the theme and give you details about the installation along the way.
- 5
- Once it is installed, you’ll see an Activate link underneath the installation details.
- 6
- Click the Activate link to proceed.
- 7
- To make sure everything is in place, visit your WordPress site’s front-end. It should look similar to the Solopreneur theme’s demo.
- 8
- At this time, you'll be prompted to install the Optin Forms plugin. The Solopreneur theme requires you to install and activate this plugin, so click Begin installing plugin to continue. (If you prefer the command line, an alternative way to install the theme itself is sketched below.)
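If WP-CLI is available on your server, the upload, install, and activate steps can also be done with a single command. This is only a sketch; the zip path is a placeholder for wherever you saved your licensed copy of the theme, and the command assumes it is run from the WordPress root directory.
# install and activate the theme from a local zip file
wp theme install /path/to/solopreneur.zip --activate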
#2. Configuring Solopreneur
Solopreneur is a flexible WordPress theme that has a ton of customization options on offer. In this section, we’ll explore them in detail by walking you through the live customizer.
Navigate to Appearance>Customize from your WordPress site’s dashboard. You’ll see the default WordPress live customizer on the left side of the screen. We’ll start from the top and work our way down.
Site Identity
The Site Identity section in the customizer allows you to add a logo, site title, and a site icon.
- Add a Logo. To add a logo to your website, navigate to Appearance > Customize > Site Identity. Click the Select logo button and upload your logo. The recommended image dimensions are 200 x 56. Adding the logo replaces the site’s title.
- Site Title. The site title displays your site’s title in Solopreneur’s default typeface. (Note: If you have a logo added then the site title will not be displayed.)
- Site Icon. Site icon (also called favicon) is displayed in the tab next to your site’s name. The icon must be square with image dimensions 512 x 512.
Theme Colors
If you’d like to change the default colors of the theme to fit your business’ brand then head over to Appearance > Customize > Theme Colors to get started. The Theme Colors section allows webmasters to modify:
- Primary accent color.
- Secondary accent.
- Link color.
- Link hover color.
Here’s how it looks if you modify the default color palettes.
Theme Settings
The Theme Settings option enables webmasters to modify the default post layout. By default, there are three options available in the drop-down menu:
Standard (includes Sidebar)
Full Width (Slim)
Full Width
The Theme Settings options panel also allows users to modify the post meta settings by selecting which post information elements to display and which to leave out. All you have to do to show/hide the elements is tick the box next to the post information element. Users can choose to show/hide:
- Author and post date information.
- Replace date posted with date updated.
- Hide comment counts.
- Hide category list.
- Hide tag list.
- Hide author bio.
In addition to this, users can also configure the social sharing buttons from this menu. Solopreneur uses Jetpack’s social sharing module. To be able to make modifications, you will have to activate the module first.
Users can set the footer’s text from the Theme Settings menu, as well. There are two shortcodes and two HTML entities available that make it easier to set a professional-looking footer. You can also choose to hide FancyThemes’ footer credits if you’d like.
Theme Fonts
The Theme Fonts panel allows you to select from a wide-range of fonts for the Base font and the Heading font. In addition to this, you can also change the font size of the Heading font.
Footer Callout
Footer Callout section allows users to set a background image on their footer. Once you set this, you can also add a callout title, callout text, and two buttons.
Social Media
The Social Media panel enables users to add links to four social media networks for their business:
- Twitter.
- Instagram.
- Google Plus.
Main Menu
Users can customize their site’s main menu from the customizer. Adding items to the menu is easy. You can select from five different types of menu items to add:
- Custom Links.
- Pages.
- Categories.
Widgets
Solopreneur displays newsletter subscription widgets using the Optin Forms plugin. Users can add widgets to the sidebar and to their site's footer. The widgets include the default WordPress widgets in addition to FancyThemes About Me, FancyThemes Recent Posts, and FancyThemes Top Posts.
Static Front Page
Solopreneur supports a static front page which enables users to select what to display on their front page. By visiting Appearance>Customize>Static Front Page, you'll find that you have two options. Either you can display your latest posts or select a page to display.
June 2018
Volume 33 Number 6
[Don't Get Me Started]
Ol’ Man River
By David S. Platt | June 2018
I just paid my income taxes, so I’m feeling cranky. To cheer myself up, I’m going to kick over my all-time favorite hornets’ nest: Visual Basic 6. My three previous columns on it (msdn.com/magazine/jj133828, msdn.com/magazine/dn745870 and msdn.com/magazine/mt632280) have generated far more mail, pro and con, than anything else I’ve ever written. Once again, I’ll goad the developers who continue to love VB6, and those who love to hate it and them, into spectacular combat, for my amusement and yours. Damn, this is fun.
VB6 just got an important boost from Microsoft blogger Scott Hanselman. In his post (bit.ly/2rcPD0f), Hanselman shows how to configure a VB6 app to be hosted in the Windows 10 Store, using the Microsoft Desktop Bridge infrastructure and tools (bit.ly/2HFVzcc). That’s huge, as hosting an app in the store means that Microsoft is at least somewhat vouching for its compatibility and content. Potential purchasers perceive it as sort of a Good Computing Seal™—perhaps not as strong as Apple’s, but definitely much stronger than Google’s. You may have to modify your app somewhat to meet the store’s policies (bit.ly/2HHUXiq), such as removing “excessive or gratuitous profanity.” (Well, %*&#$ that, I say. Oops.) But this should be relatively easy.
These bridging tools instruct Windows 10 to enforce good behavior on regular, not-otherwise-compliant, Win32 apps. For example, Windows 10 (when properly instructed) will use a separate registry file to handle changes the app might make to the system registry, so it can’t clobber any other apps or resources. For another example, any changes the app might make to the file system are automatically redirected to the ApplicationData.LocalFolder, where Windows 10 standards require them to reside. You can see this strategy at bit.ly/2I3n0fG.
But wait! There’s more!.
Maybe this is why Microsoft won’t release VB6 as open source, as it has for most of its tools. It might be worried that the community would change it to the point that Microsoft couldn’t provide this “It Just More-or-Less Works” (IJM-o-LW) compatibility in the future.
I rarely use VB6 for commercial software development, as its tradeoffs are not usually the right set for my clients today. But I do have it installed on my experimental network for testing. I have a big problem (not an issue, see my old column on “Weasel Words,” msdn.com/magazine/ff955613) with people who have a big problem with other developers’ choices. Why do you care what someone else uses? Are you a Puritan as H.L. Mencken describes them: someone who lies awake at night with the haunting fear that someone, somewhere, may be happy?
VB6 programmers chose a different set of tradeoffs than you did. Yes, you get frustrated, virtuously slogging through infrastructure, while they ignore scalability and robustness and plunge merrily ahead. No, they probably don’t understand the underlying COM very well—almost nobody does these days. When they inevitably get in trouble, I’ll bail them out (for a fee, of course, see graybeardsoftware.com). That’s their call. Mind your own damn business.. I can dig it.
I’ve likened VB6 to a cockroach, a bus and a knuckleball. Today VB6 continues to cut a path to working apps, eroding its way through new obstacles, as the Mississippi River cuts new pathways through its delta to the sea, even as the silt it carries clogs the old ones. Like Ol’ Man River, VB6 just keeps rollin’ along.
Note: a beautiful clip of this song, sung by Paul Robeson in James Whale’s classic 1936 film version of “Show Boat,” is online at bit.ly/2JJFv66. It’s worth a listen.
- Services: Storage Gateway
- Release Date: Sept. 9, 2019
Storage Gateway 1.3 is now available. Storage Gateway is a cloud storage gateway that lets you connect your on-premise applications with Oracle Cloud Infrastructure. See Overview of Storage Gateway for more information.
New in this release:
- Large file support enhancement: Storage Gateway now provides partial update capabilities to reduce upload latency, improve the use of available network bandwidth, reduce minimum required storage cache size, and enable ingestion of single files that are larger than the Storage Gateway cache size.
- Cloud Sync enhancements: Storage Gateway cloud sync, an integrated data transfer and synchronization feature for backup and replication of on-premises files to and from Oracle Cloud Infrastructure Object Storage buckets, has a new "schedule" CLI option to automate the Cloud Sync job so it runs according to a specified schedule. You can now configure email notifications for completed Cloud Sync jobs using the system notification tab of the Storage Gateway management console.
Critical Fixes:
- File system getting UNMOUNTED due to temporary errors from Object Storage.
- Cloud Sync parsing issues with special characters in file names.
- Several miscellaneous fixes to improve the overall stability of Storage Gateway.
SystemBrushes.Highlight Property
Definition
Gets a SolidBrush that is the color of the background of selected items.
public: static property System::Drawing::Brush ^ Highlight { System::Drawing::Brush ^ get(); };
public static System.Drawing.Brush Highlight { get; }
member this.Highlight : System.Drawing.Brush
Public Shared ReadOnly Property Highlight As Brush
Property Value
A SolidBrush that is the color of the background of selected items.
Remarks
Selected items may include menu items as well as selected text. For example, the brush may be the color used for the background of selected items in a list box.
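A minimal Windows Forms sketch of how this brush is typically used follows. The form, rectangle coordinates, and text are illustrative only and are not part of the API documentation; note that system brushes are owned by the framework and should not be disposed.
using System.Drawing;
using System.Windows.Forms;

public class HighlightDemo : Form
{
    protected override void OnPaint(PaintEventArgs e)
    {
        // Fill a rectangle with the system highlight (selection background) color.
        e.Graphics.FillRectangle(SystemBrushes.Highlight, new Rectangle(10, 10, 140, 24));

        // Draw text in the matching highlight-text color on top of it.
        e.Graphics.DrawString("Selected item", Font, SystemBrushes.HighlightText, 14, 14);
    }
}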
Migrate from ASP.NET Core 2.0 to 2.1
See What's new in ASP.NET Core 2.1 for an overview of the new features in ASP.NET Core 2.1.
This article:
- Covers the basics of migrating an ASP.NET Core 2.0 app to 2.1.
- Provides an overview of the changes to the ASP.NET Core web application templates.
A quick way to get an overview of the changes in 2.1 is to:
- Create an ASP.NET Core 2.0 web app named WebApp1.
- Commit the WebApp1 in a source control system.
- Delete WebApp1 and create an ASP.NET Core 2.1 web app named WebApp1 in the same place.
- Review the changes in the 2.1 version.
This article provides an overview on migration to ASP.NET Core 2.1. It doesn't contain a complete list of all changes needed to migrate to version 2.1. Some projects might require more steps depending on the options selected when the project was created and modifications made to the project.
Update the project file to use 2.1 versions
Update the project file:
- Change the target framework to .NET Core 2.1 by updating the project file to
<TargetFramework>netcoreapp2.1</TargetFramework>.
- Replace the package reference for Microsoft.AspNetCore.All with a package reference for Microsoft.AspNetCore.App. You may need to add dependencies that were removed from Microsoft.AspNetCore.All. For more information, see Microsoft.AspNetCore.All metapackage for ASP.NET Core 2.0 and Microsoft.AspNetCore.App metapackage for ASP.NET Core.
- Remove the "Version" attribute on the package reference to Microsoft.AspNetCore.App. Projects that use <Project Sdk="Microsoft.NET.Sdk.Web"> don't need to set the version. The version is implied by the target framework and selected to best match the way ASP.NET Core 2.1 works. For more information, see the Rules for projects targeting the shared framework section.
- For apps that target the .NET Framework, update each package reference to 2.1.
- Remove references to <DotNetCliToolReference> elements for the following packages. These tools are bundled by default in the .NET Core CLI and don't need to be installed separately.
- Microsoft.DotNet.Watcher.Tools (
dotnet watch)
- Microsoft.EntityFrameworkCore.Tools.DotNet (
dotnet ef)
- Microsoft.Extensions.Caching.SqlConfig.Tools (
dotnet sql-cache)
- Microsoft.Extensions.SecretManager.Tools (
dotnet user-secrets)
- Optional: you can remove the <DotNetCliToolReference> element for
Microsoft.VisualStudio.Web.CodeGeneration.Tools. You can replace this tool with a globally installed version by running
dotnet tool install -g dotnet-aspnet-codegenerator.
- For 2.1, a Razor Class Library is the recommended solution to distribute Razor files. If your app uses embedded views, or otherwise relies on runtime compilation of Razor files, add <CopyRefAssembliesToPublishDirectory>true</CopyRefAssembliesToPublishDirectory> to a <PropertyGroup> in your project file.
The following markup shows the template-generated 2.0 project file:
<Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netcoreapp2.0</TargetFramework> <UserSecretsId>aspnet-{Project Name}-{GUID}</UserSecretsId> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.9" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.0.3" PrivateAssets="All" /> <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.0.4" PrivateAssets="All" /> </ItemGroup> <ItemGroup> <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.3" /> <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.2" /> <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.4" /> </ItemGroup> </Project>
The following markup shows the template-generated 2.1 project file:
<Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netcoreapp2.1</TargetFramework> <UserSecretsId>aspnet-{Project Name}-{GUID}</UserSecretsId> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.App" /> <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.1.1" PrivateAssets="All" /> </ItemGroup> </Project>
Rules for projects targeting the shared framework
A shared framework is a set of assemblies (.dll files) that are not in the app's folders. The shared framework must be installed on the machine to run the app. For more information, see The shared framework.
ASP.NET Core 2.1 includes the following shared frameworks: Microsoft.AspNetCore.App and Microsoft.AspNetCore.All.
The version specified by the package reference is the minimum required version. For example, a project referencing the 2.1.1 versions of these packages won't run on a machine with only the 2.1.0 runtime installed.
Known issues for projects targeting a shared framework:
The .NET Core 2.1.300 SDK (first included in Visual Studio 15.6) set the implicit version of Microsoft.AspNetCore.App to 2.1.0, which caused conflicts with Entity Framework Core 2.1.1. The recommended solution is to upgrade the .NET Core SDK to 2.1.301 or later. For more information, see Packages that share dependencies with Microsoft.AspNetCore.App cannot reference patch versions.
All projects that must use Microsoft.AspNetCore.All or Microsoft.AspNetCore.App should add a package reference for the package in the project file, even if they contain a project reference to another project using Microsoft.AspNetCore.All or Microsoft.AspNetCore.App.
Example:
- MyApp has a package reference to Microsoft.AspNetCore.App.
- MyApp.Tests has a project reference to MyApp.csproj.
- Add a package reference for Microsoft.AspNetCore.App to MyApp.Tests. For more information, see Integration testing is hard to set up and may break on shared framework servicing. A minimal test project file for this scenario is sketched below.
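The following project file is a sketch of that rule applied to the hypothetical MyApp.Tests project from the example above. The project names, relative path, and SDK choice are assumptions; depending on your SDK version you may also need to add your test framework packages or pin a package version.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Reference the app project under test -->
    <ProjectReference Include="..\MyApp\MyApp.csproj" />
    <!-- Explicit package reference, even though MyApp already references it -->
    <PackageReference Include="Microsoft.AspNetCore.App" />
  </ItemGroup>
</Project>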
Update to the 2.1 Docker images
In ASP.NET Core 2.1, the Docker images migrated to the dotnet/dotnet-docker GitHub repository. Change the FROM lines in your Dockerfile to use the new image names and tags published in that repository, as sketched in the example below. For more information, see Migrating from aspnetcore docker repos to dotnet.
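The Dockerfile below is a hedged sketch of the change. It assumes the app is named WebApp1 and has already been published to a local ./publish folder, and it uses the 2.1-era runtime image name from the dotnet/dotnet-docker repository; verify the exact image and tag against that repository for your scenario.
# Before (2.0):
# FROM microsoft/aspnetcore:2.0
# After (2.1):
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "WebApp1.dll"]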
Changes to take advantage of the new code-based idioms that are recommended in ASP.NET Core 2.1
Changes to Main
The following images show the changes made to the templated generated Program.cs file.
The preceding image shows the 2.0 version with the deletions in red.
The following image shows the 2.1 code. The code in green replaced the 2.0 version:
The following code shows the 2.1 version of Program.cs:
namespace WebApp1 { public class Program { public static void Main(string[] args) { CreateWebHostBuilder(args).Build().Run(); } public static IWebHostBuilder CreateWebHostBuilder(string[] args) => WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>(); } }
The new Main replaces the call to BuildWebHost with CreateWebHostBuilder. IWebHostBuilder was added to support a new integration test infrastructure.
Changes to Startup
The following code shows the changes to 2.1 template generated code. All changes are newly added code, except that
UseBrowserLink has been removed:
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace WebApp1
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            services.Configure<CookiePolicyOptions>(options =>
            {
                // This lambda determines whether user consent for
                // non-essential cookies is needed for a given request.
                options.CheckConsentNeeded = context => true;
                options.MinimumSameSitePolicy = SameSiteMode.None;
            });

            services.AddMvc()
                .SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Error");
                app.UseHsts();
            }

            app.UseHttpsRedirection();
            app.UseStaticFiles();
            app.UseCookiePolicy();

            // If the app uses Session or TempData based on Session:
            // app.UseSession();

            app.UseMvc();
        }
    }
}
The preceding code changes are detailed in:
- GDPR support in ASP.NET Core for CookiePolicyOptions and UseCookiePolicy.
- HTTP Strict Transport Security Protocol (HSTS) for UseHsts.
- Require HTTPS for UseHttpsRedirection.
- SetCompatibilityVersion for SetCompatibilityVersion(CompatibilityVersion.Version_2_1).
Changes to authentication code
ASP.NET Core 2.1 provides ASP.NET Core Identity as a Razor Class Library (RCL).
The default 2.1 Identity UI doesn't currently provide significant new features over the 2.0 version. Replacing Identity with the RCL package is optional. The advantages to replacing the template generated Identity code with the RCL version include:
- Many files are moved out of your source tree.
- Any bug fixes or new features to Identity are included in the Microsoft.AspNetCore.App metapackage. You automatically get the updated Identity when Microsoft.AspNetCore.App is updated.
If you've made non-trivial changes to the template generated Identity code:
- The preceding advantages probably do not justify converting to the RCL version.
- You can keep your ASP.NET Core 2.0 Identity code, it's fully supported.
Identity 2.1 exposes endpoints with the Identity area. For example, the 2.0 endpoint /Account/Login becomes /Identity/Account/Login in 2.1, and /Account/Manage becomes /Identity/Account/Manage.
Applications that have code using Identity and replace the 2.0 Identity UI with the 2.1 Identity Library need to take into account that Identity URLs have the /Identity segment prepended to the URIs. One way to handle the new Identity endpoints is to set up redirects, for example from /Account/Login to /Identity/Account/Login, as shown in the sketch below.
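One way such a redirect could be wired up is with the URL rewriting middleware from Microsoft.AspNetCore.Rewrite. The rule below is an illustrative sketch, not part of the official migration steps; adjust the pattern to the endpoints your app actually uses.
// In Startup.Configure, before UseMvc:
using Microsoft.AspNetCore.Rewrite;

app.UseRewriter(new RewriteOptions()
    // Redirect legacy 2.0 Identity URLs to the 2.1 Identity area.
    .AddRedirect("^Account/(.*)", "Identity/Account/$1"));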
Update Identity to version 2.1
The following options are available to update Identity to 2.1.
- Use the Identity UI 2.0 code with no changes. Using Identity UI 2.0 code is fully supported. This is a good approach when significant changes have been made to the generated Identity code.
- Delete your existing Identity 2.0 code and Scaffold Identity into your project. Your project will use the ASP.NET Core Identity Razor Class Library. You can generate code and UI for any of the Identity UI code that you modified. Apply your code changes to the newly scaffolded UI code.
- Delete your existing Identity 2.0 code and Scaffold Identity into your project with the option to Override all files.
Replace Identity 2.0 UI with the Identity 2.1 Razor Class Library
This section outlines the steps to replace the ASP.NET Core 2.0 template generated Identity code with the ASP.NET Core Identity Razor Class Library. The following steps are for a Razor Pages project, but the approach for an MVC project is similar.
- Verify the project file is updated to use 2.1 versions
- Delete the following folders and all the files in them:
- Controllers
- Pages/Account/
- Extensions
- Build the project.
- Scaffold Identity into your project:
- Select the project's existing _Layout.cshtml file.
- Select the + icon on the right side of the Data context class. Accept the default name.
- Select Add to create a new Data context class. Creating a new data context is required in order to scaffold. You remove the new data context in the next section.
Update after scaffolding Identity
Delete the Identity scaffolder generated
IdentityDbContextderived class in the Areas/Identity/Data/ folder.
Delete Areas/Identity/IdentityHostingStartup.cs.
Update the _LoginPartial.cshtml file:
- Move Pages/_LoginPartial.cshtml to Pages/Shared/_LoginPartial.cshtml.
- Add asp-area="Identity" to the form and anchor links.
- Update the <form /> element to <form asp-area="Identity" asp-page="/Account/Logout" ...> so that it posts to the Identity area's Logout page.
The following code shows the updated _LoginPartial.cshtml file:
@using Microsoft.AspNetCore.Identity > }
Update
ConfigureServices with the following code:
public void ConfigureServices(IServiceCollection services) { services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))); services.AddDefaultIdentity<ApplicationUser>() .AddEntityFrameworkStores<ApplicationDbContext>() .AddDefaultTokenProviders(); services.AddMvc(); // Register no-op EmailSender used by account confirmation and password reset // during development services.AddSingleton<IEmailSender, EmailSender>(); }
Changes to Razor Pages projects Razor files
The layout file
Move Pages/_Layout.cshtml to Pages/Shared/_Layout.cshtml
In Areas/Identity/Pages/_ViewStart.cshtml, change Layout = "/Pages/_Layout.cshtml" to Layout = "/Pages/Shared/_Layout.cshtml".
The _Layout.cshtml file has the following changes:
- <partial name="_CookieConsentPartial" /> is added. For more information, see GDPR support in ASP.NET Core.
- jQuery changes from 2.2.0 to 3.3.1.
_ValidationScriptsPartial.cshtml
- Pages/_ValidationScriptsPartial.cshtml moves to Pages/Shared/_ValidationScriptsPartial.cshtml.
- jquery.validate/1.14.0 changes to jquery.validate/1.17.0.
New files
The following files are added:
- Privacy.cshtml
- Privacy.cshtml.cs
See GDPR support in ASP.NET Core for information on the preceding files.
Changes to MVC projects Razor files
The layout file
The Layout.cshtml file has the following changes:
- <partial name="_CookieConsentPartial" /> is added.
- jQuery changes from 2.2.0 to 3.3.1
_ValidationScriptsPartial.cshtml
jquery.validate/1.14.0 changes to jquery.validate/1.17.0
New files and action methods
The following are added:
- Views/Home/Privacy.cshtml
- The
Privacyaction method is added to the Home controller.
See GDPR support in ASP.NET Core for information on the preceding files.
Changes to the launchSettings.json file
As ASP.NET Core apps now use HTTPS by default, the Properties/launchSettings.json file has changed.
The following JSON shows the earlier 2.0 template-generated launchSettings.json file:
{ "iisSettings": { "windowsAuthentication": false, "anonymousAuthentication": true, "iisExpress": { "applicationUrl": "", "sslPort": 0 } }, "profiles": { "IIS Express": { "commandName": "IISExpress", "launchBrowser": true, "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" } }, "WebApp1": { "commandName": "Project", "launchBrowser": true, "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" }, "applicationUrl": "" } } }
The following JSON shows the new 2.1 template-generated launchSettings.json file:
{ "iisSettings": { "windowsAuthentication": false, "anonymousAuthentication": true, "iisExpress": { "applicationUrl": "", "sslPort": 44390 } }, "profiles": { "IIS Express": { "commandName": "IISExpress", "launchBrowser": true, "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" } }, "WebApp1": { "commandName": "Project", "launchBrowser": true, "applicationUrl": ";", "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" } } } }
For more information, see Enforce HTTPS in ASP.NET Core.
Breaking changes
FileResult Range header
FileResult no longer processes the Accept-Ranges header by default. To enable the Accept-Ranges header, set EnableRangeProcessing to true.
ControllerBase.File and PhysicalFile Range header
The following ControllerBase methods no longer process the Accept-Ranges header by default:
- Overloads of ControllerBase.File
- ControllerBase.PhysicalFile
To enable the Accept-Ranges header, set the EnableRangeProcessing parameter to true, as in the sketch below.
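A minimal controller-action sketch that opts a single result back in to Range processing follows. The file path and content type are placeholders; the EnableRangeProcessing property is set on the result object here, which is equivalent to passing the corresponding parameter on the File helper overloads.
public IActionResult Download()
{
    var stream = System.IO.File.OpenRead("sample.bin");

    // Opt this response back in to HTTP Range request processing.
    return new FileStreamResult(stream, "application/octet-stream")
    {
        EnableRangeProcessing = true
    };
}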
Additional changes
- If hosting the app on Windows with IIS, install the latest .NET Core Hosting Bundle.
- SetCompatibilityVersion
- Transport configuration
Feedback
Introduction to Omnichannel Insights dashboard.
Important Dynamics 365 Customer Service,.
Customer service managers or supervisors are responsible for managing the agents who work to resolve customer queries every day through various service channels, including Chat for Dynamics 365 Customer Service. They need to know key operational metrics to ensure that their agents are providing quality support. Supervisors can see trends in these metrics over a period of time to understand how agents and queues are performing, so that they can take corrective measures, provide appropriate guidance to agents, and improve the customer support experience.
Supervisors can use Omnichannel Insights to perform the following tasks:
Monitor operational metrics across channels, queues, and agents.
Monitor support quality via sentiment analysis across channels, queues, and agents.
Note
Contact your system administrator for configuration errors or if you are unable to view the dashboards. To learn more, see Configure Omnichannel Insights dashboards.
Prerequisites
Verify the following prerequisites before you use the Omnichannel Chat and Sentiment Analysis dashboards:
Omnichannel supervisor role is assigned. To learn more, see Assign roles and enable users for Omnichannel for Customer Service.
A user is added in a supervisor configuration. To learn more, see Add users to supervisor configuration.
See also
Configure Omnichannel Insights dashboards
View and understand Omnichannel Insights dashboards
Feedback
This is an archived version of the documentation for SonarQube version 4.5 & 4.5.x LTS.
See the current documentation for up-to-date functionality.
If the SCM Activity Plugin is installed and active, then it is possible to:
- use the "Time Changes" action from any tab to see which lines were touched during a specific period.
- filter the source code to show only the lines changed during the selected period.
- decorate the source code to show the last committer and last commit date for each line of code.
Note that while the "Modified Lines" filter is available only on the SCM tab, the "Time Changes" action is available on all tabs.
Upon starting a build using IL2CPP, scripts. See documentation on Platform-dependent compilation for further information.
A DataKeeper Volume resource provides two functions that are used by the Microsoft Cluster service to check for availability and health of the DataKeeper Volume resource. A simple check LooksAlive and a more rigorous check IsAlive.
The Cluster service calls the LooksAlive function based on the specified interval. The default is every 20 seconds on a freshly installed system, or 60 seconds after upgrading DataKeeper Cluster Edition from a version prior to 8.4.0. The LooksAlive function performs a quick check of the volume device. When the LooksAlive test fails, the cluster service will call the IsAlive test immediately.
Performs a thorough check to determine if the specified resource is online (available for use). The default is 120 seconds on a freshly installed system, or 300 seconds after upgrading DataKeeper Cluster Edition from a version prior to 8.4.0. If the device for the mirror becomes unreachable by DataKeeper, the IsAlive check will detect this condition and will mark the resource as Failed.
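These intervals correspond to the standard cluster resource common properties LooksAlivePollInterval and IsAlivePollInterval, which can be inspected or adjusted with the Failover Clustering PowerShell module if needed. The resource name below is a placeholder, values are in milliseconds, and this is only a sketch of the kind of query an administrator might run, not a SIOS-documented procedure.
# Inspect and (optionally) adjust the poll intervals for a DataKeeper Volume resource
$res = Get-ClusterResource "DataKeeper Volume E"
$res | Format-List Name, ResourceType, LooksAlivePollInterval, IsAlivePollInterval
$res.LooksAlivePollInterval = 20000    # 20 seconds
$res.IsAlivePollInterval    = 120000   # 120 seconds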
Getting Started with Amazon WorkDocs
Amazon WorkDocs uses a directory to store and manage organization information for your users and their documents. You can create a Simple AD directory using Quick Start or Standard Setup, or create an AD Connector directory to connect to your on-premises directory. Alternatively, you can enable Amazon WorkDocs to work with an existing AWS directory, or you can have Amazon WorkDocs create a directory for you. You can also create a trust relationship between your AWS Directory Service service and a AWS Managed Microsoft AD Directory.
Note
If you are part of a compliance program, such as PCI, FedRAMP, or DoD, you must set up a AWS Managed Microsoft AD Directory to meet compliance requirements.
Getting Started¶
Welcome to Binary Ninja. This introduction document is meant to quickly guide you over some of the most common uses of Binary Ninja.
Directories¶
Binary Ninja uses two main locations. The first is the install path of the binary itself and the second is the user folders for user-installed content.
Binary Path¶
Binaries are installed in the following locations by default:
- OS X:
/Applications/Binary Ninja.app
- Windows:
C:\Program Files\Vector35\BinaryNinja
- Linux: Wherever you extract it! (No standard location)
Warning
Do not put any user content in the install-path of Binary Ninja. The auto-update process of Binary Ninja may replace any files included in these folders.
User Folder¶
The base locations of user folders are:
- OS X:
~/Library/Application Support/Binary Ninja
- Linux:
~/.binaryninja
- Windows:
%APPDATA%\Binary Ninja
Contents of the user folder includes:
lastrun: A text file containing the directory of the last BinaryNinja binary path -- very useful for plugins to resolve the install locations in non-default settings or on linux.
license.dat: License file
plugins/: Folder containing all manually installed user plugins
repositories/: Folder containing files and plugins managed by the Plugin Manager API
settings.json: Advanced settings (see settings)
License¶
When you first run Binary Ninja, it will prompt you for your license key. You should have received your license key via email after your purchase. If not, please contact support.
Once the license key is installed, you can change it, back it up, or otherwise inspect it simply by looking inside the base of the user folder for
license.dat.
Linux Setup¶
Because linux install locations can vary widely, we do not assume a Binary Ninja has been installed in any particular folder on linux. Rather, you can simply run
binaryninja/scripts/linux-setup.sh after extracting the zip and various file associations, icons, and other settings will be set up. Run it with
-h to see the customization options.
You can load files in many ways:
- Drag-and-drop a file onto the Binary Ninja window
- Use the File/Open menu or the Open button on the start screen
- Clicking an item in the recent files list
- Running Binary Ninja with an optional command-line parameter
- Opening a file from a URL via the ⌘-l or ⌃-l hotkey
- Opening a file using the binaryninja: url handler. For security reasons, the url handler requires you to confirm a warning before opening a file via the url handler. The url handler can open remote URLs as well as local files like binaryninja://bin/ls in cases where you wish to script up Binary Ninja from a local webapp.
Analysis¶
As soon as you open a file, Binary Ninja begins its auto-analysis.
Even while Binary Ninja is analyzing a binary, the UI should be responsive. Not only that, but because the analysis prioritizes user-requested analysis, you can start navigating a binary immediately and any functions you select will be added to the top of the analysis queue. The current progress through a binary is shown in the status bar, but note that the total number of items left to analyze will go up as well as the binary is processed and more items are discovered that require analysis.
Errors or warnings during the load of the binary are also shown in the status bar, along with an icon (in the case of the image above, a large number of warnings were shown). The most common warnings are from incomplete lifting and can be safely ignored. If the warnings include a message like "Data flow for function at 0x41414141 did not terminate", then please report the binary to the bug database.
Interacting¶
Navigating¶
Navigating code in Binary Ninja is usually a case of just double-clicking where you want to go. Addresses, references, functions, jmp edges, etc, can all be double-clicked to navigate. Additionally, The
g hotkey can navigate to a specific address in the current view.
Switching views happens multiple ways. In some instances, it's automatic (clicking a data reference from graph view will navigate to linear view as data is not shown in the graph view), and there are multiple ways to manually change views as well. While navigating, you can use the view hotkeys (see below) to switch to a specific view at the same location as the current selection. Alternatively, the view menu in the bottom-right can be used to change views without navigating to any given location.
Hotkeys¶
h: Switch to hex view
p: Create a function
[ESC]: Navigate backward
[CMD] [(OS X) : Navigate backward
[CMD] ](OS X) : Navigate forward
[CTRL] [(Windows/Linux) : Navigate backward
[CTRL] ](Windows/Linux) : Navigate forward
[SPACE]: Toggle between linear view and graph view
g: Go To Address dialog
n: Name a symbol
u: Undefine a symbol
e: Edits an instruction (by modifying the original binary -- currently only enabled for x86, and x64)
x: Focuses the cross-reference pane
;: Adds a comment
i: Switches between disassembly and low-level il in graph view
y: Change type
a: Change the data type to an ASCII string
- [1248] : Change type directly to a data variable of the indicated widths
a: Change the data type to an ASCII string
d: Switches between data variables of various widths
r: Change the data type to single ASCII character
o: Create a pointer data type
[CMD-SHIFT] +(OS X) : Graph view zoom in
[CMD-SHIFT] -(OS X) : Graph view zoom out
[CTRL-SHIFT] +(Windows/Linux) : Graph view zoom in
[CTRL-SHIFT] -(Windows/Linux) : Graph view zoom out
Graph View¶
The default view in Binary Ninja when opening a binary is a graph view that groups the basic blocks of disassembly into visually distinct blocks with edges showing control flow between them.
Features of the graph view include:
- Ability to double click edges to quickly jump between locations
- Zoom (CTRL-mouse wheel)
- Vertical Scrolling (Side scroll bar as well as mouse wheel)
- Horizontal Scrolling (Bottom scroll bar as well as SHIFT-mouse wheel)
- Individual highlighting of arguments, addresses, immediate values
- Edge colors indicate whether the path is the true or false case of a conditional jump (a color-blind option in the preferences is useful for those with red-green color blindness)
- Context menu that can trigger some function-wide actions as well as some specific to the highlighted instruction (such as inverting branch logic or replacing a specific function with a NOP)
View Options¶
Each of the views (Hex, Graph, Linear) have a variety of options configurable in the bottom-right of the UI.
Current options include:
- Hex
- Background highlight
- None
- Column
- Byte value
- Color highlight
- None
- ASCII and printable
- Modification
- Contrast
- Normal
- Medium
- Highlight
- Graph
- Show address
- Show opcode bytes
- Assembly
- Lifted IL
- Show IL flag usage (if showing Lifted IL)
- Low Level IL
- Show basic block register state (if showing Low Level IL)
- Linear
- Show address
- Show opcode bytes
Hex View¶
The hexadecimal view is useful for viewing raw binary files that may or may not even be executable binaries. The hex view is particularly good for transforming data in various ways via the Copy as, Transform, and Paste from menus. Note that Transform menu options will transform the data in-place, and that these options will only work when the Hex View is in the Raw mode as opposed to any of the binary views (such as "ELF", "Mach-O", or "PE").
Note that any changes made in the Hex view will take effect immediately in any other views open into the same file (new views can be created via the Split to new tab or Split to new window options under View). This can, however, cause large amounts of re-analysis, so be warned before making large edits or transformations in a large binary file.
Xrefs View¶
The xrefs view in the lower-left shows all cross-references to a given location or reference. Note that the cross-references pane will change depending on whether an entire line is selected (all cross-references to that address are shown), or whether a specific token within the line is selected.
One fun trick that the xrefs view has up its sleeve: when in Hex View, a large range of memory addresses can be selected and the xrefs pane will show all xrefs to any location within that range of data.
Linear View¶
Linear view is a hybrid view between a graph-based disassembly window and the raw hex view. It lists the entire binary's memory in a linear fashion and is especially useful when trying to find sections of a binary that were not properly identified as code or even just examining data.
Linear view is most commonly used for identifying and adding type information for unknown data.
Function List¶
The function list in Binary Ninja shows the list of functions currently identified. As large binaries are analyzed, the list may grow during analysis. The function list starts with known functions such as the entry point, exports, or using other features of the binary file format and explores from there to identify other functions.
The function list also highlights imports, and functions identified with symbols in different colors to make them easier to identify.
Tip
To search in the function list, just click to make sure it's focused and start typing!
Script (Python) Console¶
The integrated script console is useful for small scripts that aren't worth writing as full plugins.
To trigger the console, either use <CTRL>-<BACKTICK>, or use the View / Script console menu.
Once loaded, the script console can be docked in different locations or popped out into a stand-alone window. Note that at this time window locations are not saved on restart.
Multi-line input is possible just by doing what you'd normally do in python. If you leave a trailing : at the end of a line, the box will automatically turn into a multi-line edit box, complete with a command-history. To submit that multi-line input, use <CTRL>-<ENTER>.
By default the interactive python prompt has a number of convenient helper functions and variables built in:
- here / current_address: address of the current selection
- bv / current_view: the current BinaryView
- current_function: the current Function
- current_basic_block: the current BasicBlock
- current_llil: the current LowLevelILFunction
- current_mlil: the current MediumLevelILFunction
- current_selection: a tuple of the start and end addresses of the current selection
- write_at_cursor(data): function that writes data to the start of the current selection
- get_selected_data(): function that returns the data in the current selection
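For example, a few lines you might type at the console once a binary is loaded, using only the built-ins listed above (output will of course depend on the binary and on where your cursor is):
# print the current cursor address and the enclosing function's name
print(hex(here))
print(current_function.name)

# walk every function the analysis has found so far
for f in bv.functions:
    print(hex(f.start), f.name)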
Note
The current script console only supports Python at the moment, but it's fully extensible for other programming languages for advanced users who wish to implement their own bindings.
Using Plugins¶
Plugins can be installed by one of two methods. First, they can be manually installed by adding the plugin (either a .py file or a folder implementing a python module with a __init__.py file) to the appropriate path:
- OS X:
~/Library/Application Support/Binary Ninja/plugins/
- Linux:
~/.binaryninja/plugins/
- Windows:
%APPDATA%\Binary Ninja\plugins
Alternatively, plugins can be installed with the new pluginmanager API.
For more detailed information, see the plugin guide.
PDB Plugin¶
Binary Ninja supports loading PDB files through the built in PDB plugin. When selected from the plugin menu it attempts to find where the corresponding PDB file is located using the following search order:
- Look in the same directory as the opened file/bndb (e.g. if you have c:\foo.exe or c:\foo.bndb open, the pdb plugin looks for c:\foo.pdb)
- Look in the local symbol store. This is the directory specified by the local-store-relative or local-store-absolute settings. The format of this directory is foo.pdb\<guid>\foo.pdb.
- Attempt to connect and download the PDB from the list of symbol servers specified in the symbol-server-list setting.
- Prompt the user for the pdb.
Preferences/Updates¶
Binary Ninja automatically updates itself by default. This functionality can be disabled in the preferences by turning off the
Update to latest version automatically option. Updates are silently downloaded in the background and when complete an option to restart is displayed in the status bar. Whenever Binary Ninja restarts next, it will replace itself with the new version as it launches.
On windows, this is achieved through a separate launcher that loads first and replaces the installation before launching the new version. On OS X and Linux, the original installation is overwritten after the update occurs as these operating systems allow files to be replaced while running. The update on restart is thus immediate.
Settings¶
Settings are stored in the user directory in the file settings.json. Each top-level object in this file represents a different plugin. Below is an example settings.json (current as of build 860) setting various options:
{ "ui" : { "activeContent" : false, "colorblind" : false, "debug" : true } "pdb" : { "local-store-absolute" : "C:\Symbols", "local-store-relative" : "", "symbol-server-list" : [""] } }
Getting Support¶
Vector 35 offers a number of ways to get Binary Ninja support.
Secondary Namespaces¶
What is a secondary namespace?¶
A secondary namespace is one that is referenced indirectly by the main schema, that is, one schema imports another one as shown below:
a.xsd imports b.xsd b.xsd imports c.xsd
(using a, b and c as the respective namespace prefixes for a.xsd, b.xsd and c.xsd):
a.xsd declares b:prefix b.xsd declares c:prefix
The GeoTools encoder does not honour these namespaces and writes out:
"a:" , "b:" but NOT "c:"
The result is c’s element being encoded as:
<null:cElement/>
When to configure for secondary namespaces¶
If your application spans several namespaces which may be very common in application schemas.
A sure sign that calls for secondary namespace configuration is when prefixes for namespaces are printed out as the literal string “null” or error messages like:
java.io.IOException: The prefix "null" for element "null:something" is not bound.
Note
When using secondary namespaces, requests involving complex featuretypes must be made to the global OWS service only, not to Virtual Services. This is because virtual services are restricted to a single namespace, and thus are not able to access secondary namespaces.
In order to allow GeoServer App-Schema to support secondary namespaces, please follow the steps outlined below:
Using the sampling namespace as an example.
Step 1: Create the Secondary Namespace folder¶
Create a folder to represent the secondary namespace in the data/workspaces directory, in our example that will be the “sa” folder.
Step 2: Create the configuration files¶
Inside the new "sa" folder, create two files: namespace.xml and workspace.xml.
Step 3: Edit content of files¶
Contents of these files are as follows:
namespace.xml (uri is a valid uri for the secondary namespace, in this case the sampling namespace uri):
<namespace> <id>sa_workspace</id> <prefix>sa</prefix> <uri></uri> </namespace>
workspace.xml:
<workspace> <id>sa_workspace</id> <name>sa</name> </workspace>
That’s it.
Your workspace is now configured to use a Secondary Namespace.
To access the MOREAL platform, registered users have to log in with their credentials: the email address used for account activation by Crypteia Networks administrators, and the password that was generated by MOREAL's automatic account activation procedure and sent by email notification to the address given by the user.
The login page can be accessed through the following URL:
The Login page
If Crypteia Networks has provided you with a Zendesk account, you can also log in to the MOREAL platform with your Zendesk credentials using the Login with Service Desk Account button, which provides full integration of the Zendesk Ticketing System with MOREAL's Alerting mechanism.
EdX transfers course data to the data czars at our partner institutions in regularly generated data packages. Data packages can be accessed only by the data czar at each partner institution. This section describes how data czars can set up and use the credentials and public/private key pairs they need so that they can download and decrypt the edX data package.
The data czar who is selected at each institution sets up keys for securely transferring files from edX to the partner institution. Meanwhile, the Analytics team at edX sets up credentials so that the data czar can log in to the site where data packages are stored.
After these steps for setting up credentials are complete, the data czar can download data packages on an ongoing basis.
To ensure the security of data packages, edX encrypts all files before making them available to a partner institution. As a result, when you receive a data package (or other files) from edX, you must decrypt the files that it contains before you use them.
The cryptographic processes of encrypting and decrypting data files require that you create a pair of keys: the public key in the pair, which you send to the edX Analytics team, is used to encrypt data. You use your corresponding private key to decrypt any files that have been encrypted with that public key.
To create the keys needed for this encryption and decryption process, you use GNU Privacy Guard (GnuPG or GPG). Essentially, you install a cryptographic application on your local computer and then supply your email address and a secret passphrase (a password).
Important
The result is the public key that you send to edX to use in encrypting data files for your institution, and the private key which you keep secret and use to decrypt the encrypted files that you receive. Creating these keys is a one- time process that you coordinate with your edX partner manager. Instructions for creating the keys on Windows or Macintosh follow.
For more information about GPG encryption and creating key pairs, see the Gpg4win Compendium.
Important
Do not reveal your passphrase, or share your private key, with anyone else. If you need another person to be able to transfer and decrypt files, work with edX to set her or him up as an additional data czar. Data czars must create and use their own passphrases.
Go to the GPG Tools website. Scroll down to the GPG Suite section of the page and select Download GPG Suite.
When the download is complete, select the .dmg file to begin the installation.
When installation is complete, GPG Keychain Access opens a web page with First Steps and a dialog box.
Enter your name and email address. Be sure to enter your official university or institution email address. EdX cannot use public keys that are based on personal or other non-official email addresses to encrypt data.
Select Generate key. A dialog box opens to prompt you for a passphrase.
Enter a strong passphrase. Be sure to select a passphrase that you can remember, or use a secure method of retaining it for reuse in the future: you use this passphrase when you decrypt your data packages.
To send only your public key to your edX partner manager, select the key and then select Export. A dialog box opens.
Specify a file name and location to save the file. Make sure that Format is set to ASCII and that Allow secret key export is not selected.
When you select Save, only the public key is saved in the resulting
.asc file. Do not share your private key with edX or any third party.
Compose an email message to your edX partner manager. Attach the .asc file that you saved in the previous step to the message, then send the message.
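If you prefer a terminal, the same key pair can be created and exported with the gpg command-line tool; this is only a sketch, and the email address and output file name are placeholders:

gpg --full-generate-key
gpg --armor --export [email protected] > university-public-key.asc

The --armor flag produces the ASCII (.asc) form of the public key that edX expects; the private key never leaves your machine.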
The data packages that edX prepares for each partner organization are uploaded to the Amazon Web Service (AWS) Simple Storage Service (Amazon S3). The edX Analytics team creates an individual account to access this storage service for each data czar. The credentials for accessing this account are called an Access Key and a Secret Key.
After edX creates these access credentials for you, edX uses the public encryption key that you sent your edX partner manager to encrypt the credentials into a credentials.csv.gpg file. EdX then sends the file to you as an email attachment.
The credentials.csv.gpg file is likely to be the first file that you decrypt with your private GPG key. You use the same process to decrypt the data package files that you retrieve from Amazon S3. See Decrypt an Encrypted File.
To work with an encrypted .gpg file, you use the same GNU Privacy Guard program that you used to create your public/private key pair.
To use your private key to decrypt the Amazon S3 credentials file and the files in your data packages, follow these steps.
Save the encrypted file in an accessible location.
On a Windows computer, open Windows Explorer. On a Macintosh, open Finder.
Navigate to the file and right-click it.
On a Windows computer, select Decrypt and verify, and then select Decrypt/Verify. Do not change any other setting.
On a Macintosh, select Services, and then select OpenPGP: Decrypt File.
Enter your passphrase. The GNU Privacy Guard program decrypts the file.
For example, when you decrypt the credentials.csv.gpg file the result is a credentials.csv file. Open the decrypted credentials.csv file to see that it contains your email address, your Access Key, and your Secret Key.
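The same decryption can also be done from a terminal with the gpg tool; a minimal sketch, assuming the encrypted file is in the current directory:

gpg --output credentials.csv --decrypt credentials.csv.gpg

You are prompted for the passphrase you chose when creating your key pair.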
Once you have your decrypted credentials, you can use them to access Amazon S3 and download your data package. | https://edx.readthedocs.io/projects/devdata/en/latest/access/credentials.html | 2018-12-10T05:31:08 | CC-MAIN-2018-51 | 1544376823303.28 | [] | edx.readthedocs.io |
Contribute to Documentation¶
While we are doing our best to make sure our documentation fulfills all your needs, there is always place for improvement. If you'd like to contribute to our docs, you can do the following:
How to contribute to documentation¶
This documentation is written on GitHub and generated into a static site. It is organized in branches. Each branch is a version of documentation (which in turn corresponds to a version of eZ Platform).
If you are familiar with the git workflow, you will find it easy to contribute. Please create a Pull Request for any, even the smallest change you want to suggest.
Contributing through the GitHub website¶
To quickly contribute a fix to a page, find the correct *.md file in the GitHub repository and select "Edit this file".
Introduce your changes, at the bottom of the page provide a title and a description of what you modified and select "Propose file change".
This will lead to a screen for creating a Pull Request. Enter the name and description and select "Create pull request".
Your pull request will be reviewed by the team and, when accepted, merged with the rest of the repository. You will be notified of all activity related to the pull request by email.
Contributing through git¶
You can also contribute to the documentation using regular git workflow. If you are familiar with it, this should be quick work.
Assuming you have a GitHub account and a git command line tool installed, fork the project and clone it into a folder:
git clone XXX .
Add your own fork as a remote:
git remote add fork <address of your fork>.
Choosing a branch
Always contribute to the earliest branch that a change applies to.
For example, if a change concerns versions v1.7 and v1.13, make your contribution to the v1.7 branch.
The changes will be merged forward to be included in later versions as well.
Create a new local branch:
git checkout -b <name of your new branch>.
Now introduce whatever changes you wish, either modifying existing files, or creating new ones.
Once you are happy with your edits, add your files to the staging area. Use git add . to add all changes.
Commit your changes, with a short, clear description of your changes:
git commit -m "Description of commit".
Now push your changes to your fork:
git push fork <name of your branch>.
Finally, you can go to the project's page on GitHub and you should see a "Compare and pull request" button. Activate it, write a description and select "Create pull request". If your contribution solves a JIRA issue, start the pull request's name with the issue number. Now you can wait for your changes to be reviewed and merged.
Contributing outside git and GitHub¶
- Create a JIRA issue. You can also report any omissions or inaccuracies you find by creating a JIRA issue. See Report and follow issues on how to do this. Remember to add the "Documentation" component to your issue to make sure we don't lose track of it
- Visit Slack. The #documentation-contrib channel on the eZ Community Slack team is the place to drop your comments, suggestions, or proposals for things you'd like to see covered in documentation. (You can use the link to get an auto-invite to Slack)
- Contact the Doc Team. If you'd like to add to any part of the documentation, you can also contact the Doc Team directly at [email protected]
Writing guidelines¶
(see Style Guide below for more details)
- Write in (GitHub-flavored) Markdown
- Try to keep lines no longer than 120 characters. If possible, break lines in logical places, for example at sentence end.
- Use simple language
- Call the user "you" (not "the user", "we", etc.). Use gender-neutral language: the visitor has their account, not his, her, his/her, etc.
Do not be discouraged if you are not a native speaker of English and/or are not sure about your style. Our team will proofread your contribution and make sure any problems are fixed. Any edits we do are not intended to be criticism of your work. We may simply modify the language of your contributions according to our style guide, to make sure the terminology is consistent throughout the docs, and so on.
Markdown writing tools¶
You can write and edit Markdown in any text editor, including the most simple notepad-type applications, as well as most common IDEs. You can also make use of some Markdown-dedicated tools, both online and desktop. While we do not endorse any of the following tools, you may want to try out:
- online: dillinger.io, jbt.github.io/markdown-editor or stackedit.io
- desktop (open source): atom.io or brackets.io
Markdown primer¶
(see below for more detailed markdown conventions we apply)
Markdown is a light and simple text format that allows you to write quickly using almost any tool, and lets us generate HTML based on it. Even if you are not familiar with Markdown, writing in it is very similar to writing plain text, with a handful of exceptions. Here's a list of most important Markdown rules as we use them:
- Each paragraph must be separated by a blank line. A single line break will not create a new paragraph.
- A heading starts with a number of hash marks (#): a level 1 heading starts with #, a level 2 heading with ##, and so on.
- In an unordered list each item starts with a dash (-) and a space. Items within one list are not separated with blank lines.
- In an ordered list each item starts with a number, period and a space. Here items within one list are also not separated.
- You can put emphasis on text by surrounding it with single asterisks (*), and bold the text using double asterisks.
- You can mark part of a text as code (monospace) by surrounding it with single backticks (`).
- If you need a longer, multi-line piece of code, put it in a separate paragraph and add a line with three backticks (```)
- To add a link, enter the link title in square brackets immediately followed by the link proper in regular brackets.
- To add an image, start with an exclamation mark (!), then provide the alt text in square brackets immediately followed by the link to the image in regular brackets.
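Putting these rules together, a short sample in this flavor of Markdown might look like the following (the names and URLs are only illustrative):

## Installing the bundle

To install the bundle, run `composer require acme/demo-bundle` and clear the cache.

- supports *emphasis* and **bold** text
- supports [links](https://doc.ezplatform.com) and images: ![Logo alt text](img/logo.png)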
You can find a detailed description of all features of Markdown in its syntax doc.
This page is written in Markdown. View it on GitHub and select Raw in the upper right corner to see an example of a document in Markdown.
Style Guide¶
(see above for a summary or writing guidelines)
Phrasing¶
- Address the reader with "you", not "the user."
- Do not use "we", unless specifically referring to the company.
- Avoid using other personal pronouns. If necessary, use "they," not "he," "he or she," "he/she."
- Use active, not passive as much as possible.
- Clearly say which parts of instructions are obligatory ("To do X you need to/must do Y") and which are optional ("If you want A, you may do B.")
- Do not use Latin abbreviations, besides "etc." and "e.g."
Punctuation¶
- Use American English spelling.
- Use American-style dates: January 31, 2016 or 01/31/2016.
- Use sentence-style capitalization for titles and headings (only capitalize words that would have capital letters in a normal sentence).
- Do not use periods (full stops) or colons at the end of headings.
- Do not use a space before question mark, colon (:) or semi-colon (;).
- Do not use symbols instead of regular words, for example "&" for "and" or "#" for "number".
- Do not end list items with a comma or period, unless the item contains a whole sentence.
- Place commas and periods inside quotation marks and other punctuation outside quotations.
- Use the Oxford comma (especially when it clarifies meaning)
- pluralize acronyms with a simple "s", without apostrophe: "URLs", "IDs", not URL's, ID's
Formatting¶
- Mark interface elements with bold the first time they appear in a given section (not necessarily every single time).
- Capitalize interface elements the way they are capitalized in the interface.
- Capitalize domain names.
- Capitalize names of third-party products/services, etc., unless they are explicitly spelled otherwise (e.g. use "GitHub" NOT "github", but "git" not "Git"; "Composer", not "composer"), or unless used in commands (composer update).
- When linking, provide a description of the target in the link text (e.g. "See the templating documentation", NOT "Click for more info").
- If possible, link to specific heading, not just to a general page (especially with longer pages).
- Use numbered lists to list steps in a procedure or items that are explicitly counted (e.g.: "There are three ways to ..." followed by a numbered list). In other cases, use a bullet list.
- If a procedure has long steps that would require multiple paragraphs, consider using numbered low-level headings instead.
- Use code marking (backtick quotes) for commands, parameters, file names, etc.
Naming¶
- use eZ Platform to refer to the product in general, or eZ Platform Enterprise Edition (eZ Enterprise in short) to refer to the commercial edition.
- use Studio (or Studio UI) to refer to the feature set and interface specific to eZ Enterprise.
Conventions for some problematic words¶
- add-on has a hyphen
- backup is a noun ("Make a backup"); back up is a verb ("Back up you data")
- content is uncountable, if you have more than one piece of content, call it a Content item
- login is a noun ("Enter your login"); log in is a verb ("Log in to the application")
- open source is used after a verb ("This software is open source"); open-source is used when describing a noun ("This is open-source software")
- reset is written as one word
- setup is a noun ("Setup is required"); set up is a verb ("You must set up this or that")
- back end is a noun ("This is done on the back end"); back-end is an adjective ("On the back-end side")
- hard-coded has a hyphen
- click something, not "click on" ("Click the button" not "Click on the button")
- if possible, use select or activate instead of click
- vs. is followed by a period (full stop)
Some common grammatical and spelling mistakes¶
- its is a possessive ("This app and its awesome features"); it's is short for "it is" ("This app is awesome and it's open source")
- allow must be followed by "whom", -ing or a noun ("This allows you to do X", "This allows doing X" or "This allows X", but NOT just "This allows to do X")
Detailed markdown conventions¶
- Headings: Always put page title in H1, do not use H1 besides page titles.
- Headings: Do not create headings via underlines (setext-style headings).
- Whiteline: Always divide paragraphs, headings, code blocks, lists and pretty much everything else with one (and only one) whiteline.
- Code: Mark all commands, filenames, paths and folder names, parameters and GitHub repo names as code.
- Code: In code blocks, where relevant, put the name of the file they concern in the first line in a comment proper for the language.
- Code: In code blocks, if possible, always provide the language. Pygments does not have syntax highlighting for Twig, so use html instead.
- Lists: Use dashes for unordered lists and "1." for ordered list (yes, always "1", it will be interpreted as proper numbers in the list).
- Images: Always add the alt text in square brackets. Add a title in quotation marks after the image link (inside the parentheses) if you want a caption under the image.
- Note boxes: Write them in the following way. Possible types are note, tip, caution.
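A typical admonition, assuming MkDocs-style admonition syntax (the title and text here are placeholders):

!!! note "This is note title"

    This is note text, indented. Can span more paragraphs, all indented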
Which will result in:
This is note title
This is note text, indented. Can span more paragraphs, all indented
- Table of contents: Insert a table of contents of the headings inside a page using [TOC]. | https://ez-systems-developer-documentation.readthedocs-hosted.com/en/latest/community_resources/documentation/ | 2018-12-10T04:26:28 | CC-MAIN-2018-51 | 1544376823303.28 | [] | ez-systems-developer-documentation.readthedocs-hosted.com
Waits for all the elements in the specified array to receive a signal, using an int value to specify the time interval and specifying whether to exit the synchronization domain before the wait.
- waitHandles
-
A WaitHandle array containing the objects for which the current instance will wait. This array cannot contain multiple references to the same object (duplicates).
- millisecondsTimeout
-
The number of milliseconds to wait, or Timeout.Infinite (-1) to wait indefinitely.
- exitContext
-
true to exit the synchronization domain for the context before the wait (if in a synchronized context), and reacquire it afterward; otherwise, false.
true when every element in waitHandles has received a signal; otherwise, false.
If millisecondsTimeout is zero, the method does not block. It tests the state of the wait handles and returns immediately.
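A minimal C# sketch of waiting on several handles with a timeout (the events and the 5000 ms value are illustrative only):

using System;
using System.Threading;

class WaitAllExample
{
    static void Main()
    {
        var handles = new WaitHandle[] { new ManualResetEvent(false), new ManualResetEvent(false) };

        ThreadPool.QueueUserWorkItem(_ => ((ManualResetEvent)handles[0]).Set());
        ThreadPool.QueueUserWorkItem(_ => ((ManualResetEvent)handles[1]).Set());

        // true only if every handle is signaled within 5000 ms; false on timeout.
        bool allSignaled = WaitHandle.WaitAll(handles, 5000, false);
        Console.WriteLine(allSignaled);
    }
}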
System.Threading.AbandonedMutexException is new in the .NET Framework version 2.0. In previous versions, the WaitHandle.WaitAll(WaitHandle[], int, bool) method returns true if the wait completes because a mutex is abandoned.

The WaitHandle.WaitAll(WaitHandle[], int, bool) method is not supported on threads that have STAThreadAttribute.

The exitContext parameter has no effect unless the WaitHandle.WaitAll(WaitHandle[], int, bool) method is called from inside a nondefault managed context. When it is, specifying true for exitContext causes the thread to exit the nondefault managed context (that is, to transition to the default context) before executing the wait. The thread returns to the original nondefault context after the call to the WaitHandle.WaitAll(WaitHandle[], int, bool) method completes.
This can be useful when the context-bound class has the System.Runtime.Remoting.Contexts.SynchronizationAttribute attribute. In that case, all calls to members of the class are automatically synchronized, and the synchronization domain is the entire body of code for the class. If code in the call stack of a member calls the WaitHandle.WaitAll(WaitHandle[], int, bool) method and specifies true for exitContext, the thread exits the synchronization domain, allowing a thread that is blocked on a call to any member of the object to proceed. When the WaitHandle.WaitAll(WaitHandle[], int, bool) method returns, the thread that made the call must wait to reenter the synchronization domain. | http://docs.go-mono.com/monodoc.ashx?link=M%3ASystem.Threading.WaitHandle.WaitAll(System.Threading.WaitHandle%5B%5D%2CSystem.Int32%2CSystem.Boolean) | 2018-12-10T04:05:30 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.go-mono.com |
Represents a chain-building engine for System.Security.Cryptography.X509Certificates.X509Certificate2 certificates.
See Also: X509Chain Members
The System.Security.Cryptography.X509Certificates.X509Chain object has a global error status called X509Chain.ChainStatus that should be used for certificate validation. The rules governing certificate validation are complex, and it is easy to oversimplify the validation logic by ignoring the error status of one or more of the elements involved. The global error status takes into consideration the status of each element in the chain. | http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Security.Cryptography.X509Certificates.X509Chain | 2018-12-10T04:32:25 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.go-mono.com |
Platform / Folders
Folders
The more stuff you have on your desk, the less likely it becomes to actually find what you're looking for. The same goes for your Archilogic 3d models. Thankfully there is a way to clean up that mess. Archilogic allows you to create your own folders and store your models in them.
By clicking on the folder icon you open the folder list.
In the folder list you can see every folder that you’ve created. You can either click on one of them to open them and see its content or click on + add new to add a new empty folder to the list.
To get back from one folder to the model overview just click on the folder icon again and then click on all models.
Add 3d model to folder
If you want to put the 3d model into a folder you have to first click on the edit button and then on the Add folder link and select from an already existing folder or create a new one.
| https://docs.archilogic.com/en/platform/folders/ | 2018-12-10T05:44:31 | CC-MAIN-2018-51 | 1544376823303.28 | ['/assets/images/Platform-Folder-Icon.jpg' (Folder Icon), '/assets/images/Platform-Folder-List.jpg' (Folder List), '/assets/images/Platform-Dashboard-Model-Folder.gif' (Change The Model Folder)] | docs.archilogic.com
Translation guide¶
Introduction¶
GroupServer is written in English, and the interface has been partly translated into French and German. In this guide we work through how to translate GroupServer, how to add internationalisation (i18n), and finally we discuss how to update the products.
Translate GroupServer¶
Anyone can help improving the translations of GroupServer! We use the Transifex system to help make the translations: it is a web-based system that allows you to translate GroupServer bit by bit. All you need is a browser. If you have any questions please feel free to ask away in the GroupServer development group.
Add internationalisation (i18n)¶
Adding internationalisation support to a product that lacks it is a development task. If you come across a component of GroupServer that lacks a translation please ask for i18n to be added in the GroupServer development group. The person who responds (probably either Michael JasonSmith or Alice Rose) will then carry out the following tasks.
Identify the product. (The element identifiers in the HTML often point to the product that needs to be changed.)
Add a locales directory to the product, in the same directory that has the configure.zcml file.
Add i18n to the Python code.
In the __init__.py for the product, instantiate a message factory, passing the name of the product as an argument:
from zope.i18nmessageid import MessageFactory GSMessageFactory = MessageFactory('gs.groups')
Identify the Python products that contain strings that need translating. To each, add the following import:
from . import GSMessageFactory as _
Add i18n to all the strings:
All strings, including the simple ones, get a label with the default (English) text following. The label make Transifex much easier to deal with.
@form.action(name="change", label=_('change-action', 'Change'), failure='handle_change_action_failure') def handle_invite(self, action, data):
When actually adding i18n to a command button in a zope.formlib form, remember to set a name; that way the element identifier will be the same no matter the language.
Complex strings have a mapping keyword argument, and a ${} template syntax (rather than {} for Python format-strings).
_('start-status', 'The group ${groupName} has been started.', mapping={'groupName': r})
Add i18n to the page templates.
Add the i18n namespace to the page template, using the product name as the domain.

<html xmlns:
Add i18n:translate attributes to all elements that require translation. Always set the translation ID.

<p id="group-id-error" style="display:none;" class="alert" i18n: <strong class="label alert-label">Group ID In Use:</strong> The Group ID <code>above</code> is already being used. Please pick another group ID. </p><!--group-id-error-->
Add i18n:name attributes to dynamic content.

<span class="group" i18n:name="group">this group</span>
Add i18n:attributes attributes to dynamic attributes.

<a title="Change this About box" i18n:attributes="title">Change</a>
Add i18n to the Zope Configuration file.
Add the i18n namespace:

<configure xmlns="http://namespaces.zope.org/zope" xmlns:i18n="http://namespaces.zope.org/i18n">
Add the registerTranslations element:

<i18n:registerTranslations directory="locales" />
Run the latest version of i18n.sh [1] in the base directory of the product to create and update the translation.
Fill out the English translation, which is used as the source translation for Transifex.
Commit the changes.
Add the product to Transifex [2].
In the GroupServer organisation on Transifex, click on Add project.
Fill in the Project Details form:
- Use the GroupServer product identifier as the name (e.g.
gs.site.about).
- Source language is always English.
- The License is always “Permissive open-source”
- Source Code URL is the GitHub URL.
Assign to the project to the GroupServer Team.
Skip “Add content”.
Create the project.
View the new project.
Choose the Manage link.
Under Project URL, add hyphens where Transifex has removed dots from the project name (e.g. gssiteabout → gs-site-about).
Optionally add a Long Description from the Introduction section of the product README.rst.
Save.
Update the README.rst to include a Transifex link in the Resources section.
- Translations:
Initialise the product, accepting the defaults:
$ tx init
Run tx-set.sh [3] in the base directory of the product.
Sync local source and translations to remote:
$ tx push -s -t
Pull the translations, now modified by Transifex from remote to local:
$ tx pull -a
Commit the Transifex configuration (.tx/) and the modified translations to version control.
Push all the changes to the repositories.
Update the products¶
Transifex and the product need to be kept in sync with each other. When the product changes it is necessary to update Transifex with the new strings. Likewise, when some translations have been completed it is necessary to update the product with the new translations.
Update Transifex with the new strings¶
To update a Transifex project with the new strings in a product carry out the following tasks.
Update the pot file and the po files by running the i18n.sh script [1] in the root of the product.
Update the English po file, copying the default text into the msgstr. This is the source language that supplies the example text in Transifex. (Without this step the translations can still take place, but the translators see the message identifiers, rather than the default text.)
Push the changes in the source file to Transifex, using the Transifex client (tx):
$ tx push -s
Commit and push the changes to the source-code repositories.
Update the product with the new translations¶
To update a product with the new translations in a Transifex project carry out the following tasks.
Pull the updated translations (in po files) from the Transifex project using the Transifex client (tx):
$ tx pull -a
Ensure that Zope is set up to automatically compile the po files to mo files:
$ export zope_i18n_compile_mo_files=true
Start your development system. Change the language in your browser to test the different translations.
Note
Browsing the Web with a changed language will result in Google, Microsoft, the NSA, and Yahoo! getting some funny ideas about the languages you can comprehend.
Commit and push the changes to the source-code repositories. | http://groupserver.readthedocs.io/en/master/translations.html | 2017-09-19T20:30:17 | CC-MAIN-2017-39 | 1505818686034.31 | [] | groupserver.readthedocs.io |
docker network connect
Description: Connects a running or stopped container to a network. | https://docs.docker.com/engine/reference/commandline/network_connect/ | 2017-09-19T20:49:26 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.docker.com
Creating selling rules
Last updated on March 16, 2017
Create selling rules for the inventory (ad space available on a website or app; the basic unit of inventory for OpenX is an ad unit) that you want to make available for real-time selling.
To create a selling rule:
Go to the Inventory tab.
Click the OpenX Market Rules tab.
Note: The brand name of your market appears as the name of this tab, such as OpenX US Rules, or OpenX Market Japan Rules. The same is true for the Add Rule button.
Click Add. This displays the Basic Information panel of the Create screen.
Specify the following required details for the selling rule:
Name. Type in a unique name for the selling rule.
Account. Select the ad network account that this rule is for. (An ad network is an OpenX account type, which represents a business that manages other businesses and typically contains and manages both publisher accounts and advertiser accounts.)
- For Video. Make sure you change the default value to the Publisher-specific Network name.
Specify the Inventory Type.
Important: Always choose Linear Video for Video (In Beta) inventory, whether web or mobile.
In the Floor CPM field, keep the default of None, or type in the minimum price to accept for the ad space defined by this rule. (The floor is the minimum price a publisher is willing to accept for a given impression; CPM is cost per mille, a pricing method which calculates cost based on the number of impressions, per 1000.)
Define the inventory for the selling rule by setting Targeting criteria for it. You can specify the following targeting details:
Content targeting (a targeting dimension that describes the context and layout that the ad space exists within)

Geographic targeting (a targeting dimension that describes a viewer's physical location, such as their city or state)

Technographic targeting (a targeting dimension that describes the technologies a user employs in their computing environment, such as their computer's operating system; also referred to as "technology and devices targeting")

Audience segment targeting, if enabled (an audience segment is a group of users with similar traits or characteristics)

Custom targeting (a targeting dimension that describes custom key-value pairs that a publisher defines based on what they know about their visitors)
Click Show advanced OpenX Market rule details... and choose Filters as needed. You can specify Buyer, Brand, Industry, Domain and Category filters, which give you more control over the ads that display on your sites but potentially decrease revenue by blocking bids that would otherwise compete for your ad units.
For each Buyer Source in the Buyer Filter list, you can select to allow all advertisers, block all advertisers, or allow or block specific advertisers. (A buyer is a company that pays a demand partner to purchase ad inventory on OpenX Ad Exchange.)
In the Brand Filter list, you can select to allow all brands, block all brands, or allow or block specific brands.
In the Industry Filter list, you can select to allow all industries, block all industries, or allow or block specific industries.
In the Domain Filter list, you can specify individual domains to allow or block.
In the Blocked Creative Types list, type in and select the creative types to block. (A creative is the media asset associated with an ad, such as an image or video file.)
Tip: For Video (In Beta), remove In-Banner Video Ad (Auto Play) and In-Banner Video Ad (User Initiated) as some DSPs will not bid on your inventory if they interpret these settings to mean you don't support auto play and user initiated video ads. (A banner is an ad that appears on a web page which is typically hyperlinked to an advertiser's website. Banners can be images (GIF, JPEG, PNG), JavaScript programs or multimedia objects, for example Java.)
In the Blocked Content Attributes list, type in and select the content attributes to block.
In the Blocked Languages list, type in and select the languages to block.
Click Create.
As necessary, you can change settings for a particular selling rule.
| https://docs.openx.com/Content/publishers/userguide_inventory_indirectrules_adding.html | 2017-09-19T20:31:55 | CC-MAIN-2017-39 | 1505818686034.31 | ['../Resources/Images/AdExchangeLozenge.png' (This topic applies to Ad Exchange.), '../Resources/Images/ProgrammaticDirectLabel.png' (This topic applies to Programmatic Direct.), '../Resources/Images/SSPLozenge.png' (This topic applies to SSP. Most SSP activities are completed by OpenX.)] | docs.openx.com
Writes the value of each of its arguments to the file. The arguments must be strings or numbers. To write other values, use tostring() or string.format() before writing.
In the normal mode, this function writes to the standard output (stdout) which defaults to the Corona Simulator Console if io.output() has not been called with a file name. This is equivalent to io.output():write. In short, it's similar to print(), but no newline character (\n) is appended to the output string.
If you intend to write data to a file, file:write() should be used instead of io.write().
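A minimal sketch of the file-based approach (the file name and content are placeholders):

local path = system.pathForFile( "log.txt", system.DocumentsDirectory )
local file, errorString = io.open( path, "w" )
if file then
    file:write( "session started\n" )
    io.close( file )
else
    print( "File error: " .. errorString )
end

Note that the sketch writes to system.DocumentsDirectory, which ties into the restriction described next.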
For security reasons, you are not allowed to write files in the system.ResourceDirectory (the directory where the application is stored). You must specify the system.DocumentsDirectory, system.ApplicationSupportDirectory, system.TemporaryDirectory, or system.CachesDirectory parameter in the system.pathForFile() function when opening the file for writing. See io.open() for details. | http://docs.coronalabs.com.s3-website-us-east-1.amazonaws.com/api/library/io/write.html | 2017-09-19T20:39:05 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.coronalabs.com.s3-website-us-east-1.amazonaws.com
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
List aliases
GET
List aliases
Self-service TLS/SSL Beta: This API is available as part of the self-service TLS/SSL Beta release.
Returns a list of all the aliases in the keystore.
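For reference, a hedged curl sketch against the resource URL shown below (it assumes the public Edge management API endpoint and basic-auth credentials; substitute your own organization, environment, and keystore names):

curl -u email:password \
  "https://api.enterprise.apigee.com/v1/organizations/{org_name}/environments/{env_name}/keystores/{keystore_name}/aliases"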
Resource URL: /organizations/{org_name}/environments/{env_name}/keystores/{keystore_name}/aliases | http://ja.docs.apigee.com/management/apis/get/organizations/%7Borg_name%7D/environments/%7Benv_name%7D/keystores/%7Bkeystore_name%7D/aliases | 2017-09-19T20:37:00 | CC-MAIN-2017-39 | 1505818686034.31 | [] | ja.docs.apigee.com
Future work¶
Short term¶
- RTT-based communities: extend support to add NO_EXPORT / NO_ADVERTISE
- Informative community with the measured RTT of the announcing peer
- New feature: CLI option to build configs based on templates/groups only and avoid client specific settings
Mid term¶
- OpenBGPD: consider dropping the use of macros for ASN and prefix lists
- New feature: group clients by AFI/ASN (OpenBGPD only)
- Split configuration in multiple files
- Doc: better documentation
- Doc: contributing section
- Doc: schema of data that can be used within J2 templates
Long term¶
- New feature: path-hiding mitigation technique on OpenBGPD
- New feature: routing policies based on RPSL import-via/export-via
- New feature: other BGP speakers support (GoBGP, ...)
- New feature: balance clients among n different configurations (for multiple processes - see Scaling BIRD Routeservers) | http://arouteserver.readthedocs.io/en/latest/FUTUREWORK.html | 2017-09-19T20:27:44 | CC-MAIN-2017-39 | 1505818686034.31 | [] | arouteserver.readthedocs.io |
13. Experimental Features¶
This is a list of experimental features in CouchDB. They are included in a release because the development team is requesting feedback from the larger developer community. As such, please play around with these features and send us feedback, thanks!
Use at your own risk! Do not rely on these features for critical applications.
13.1. NodeJS Query Server¶
The NodeJS Query Server is an alternative runtime environment for the default JavaScript Query Server that runs on top of Node.JS and not SpiderMonkey like the default Query Server. | http://docs.couchdb.org/en/stable/experimental.html | 2017-09-19T20:28:33 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.couchdb.org
The container's setOutput method lets you redirect the container console (stdout) to a file. This is the first file you should check in case of problems.
Example: Starting Tomcat 4.x specifying an output console log file
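A minimal hedged sketch of what such a snippet typically looks like with the Cargo Java API (the container is assumed to already be created and configured, the log file path is a placeholder, and method names may vary slightly between Cargo versions):

// Redirect the Tomcat console to a file before starting the container.
container.setOutput("target/tomcat4x-console.log");
container.start();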
Use the container.setAppend(true|false) method to decide whether the log file is recreated or whether it is appended to, keeping the previous execution logs.
Turning on container logs
Cargo is able to configure containers to generate various levels of logs. There are 3 levels defined: "low", "medium" and "high". They represent the quantity of information you wish in the generated log file. You can turn on: | http://docs.codehaus.org/pages/viewpage.action?pageId=13158 | 2014-12-18T02:30:51 | CC-MAIN-2014-52 | 1418802765584.21 | [] | docs.codehaus.org
Task Class
Definition
Represents Reporting Services tasks.
public ref class Task
public class Task
Public Class Task
- Inheritance
-
Remarks
Tasks cannot be modified and additional tasks cannot be added to a report server.
A Task object is returned as output by the GetRoleProperties, and ListTasks methods and is passed as input to the CreateRole and SetRoleProperties methods. | https://docs.microsoft.com/en-us/dotnet/api/reportservice2005.task?redirectedfrom=MSDN&view=sqlserver-2016 | 2018-06-17T22:29:18 | CC-MAIN-2018-26 | 1529267859817.15 | [] | docs.microsoft.com |
To Add a Custom Policy to API Manager
After you have created the YAML and XML files, or downloaded the files, you make the new custom policy available in API Manager.
In Anypoint Platform, click API Manager.
In API Administration, choose Custom policies.
Click Add Custom Policy.
In Add Custom Policy, give the new policy a name, for example myPolicy.
Browse to and select the YAML and XML files you created or downloaded.
Click Add
| https://docs.mulesoft.com/api-manager/1.x/add-custom-policy-task | 2022-05-16T12:42:26 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.mulesoft.com
{ "field1": "Annie", "field2": "Point", "field3": "Stuff" }
Work with Functions and Lambdas in DataWeave
In DataWeave, functions and lambdas (anonymous functions) can be passed as values or be assigned to variables.

When using lambdas within the body of a DataWeave file in conjunction with a function such as map, their attributes can either be explicitly named or left anonymous, in which case they can be referenced as $, $$, etc.
Declare and Invoke a Function
You can declare a function in the header or body of a DataWeave script by using the
fun keyword. Then you can invoke the function at any point in the body of the script.
You refer to functions using this form:
functionName() or
functionName(arg1, arg2, argN)
You can pass an expression in between the parentheses for each argument. Each expression between the parentheses is evaluated, and the result is passed as an argument used in the execution of the function body.
%dw 2.0 output application/json fun toUser(obj) = { firstName: obj.field1, lastName: obj.field2 } --- { "user" : toUser(payload) }
{ "user": { "firstName": "Annie", "lastName": "Point" } }
Assign a Lambda to a Var
You can define a function as a variable with a constant directive through var:
{ "field1": "Annie", "field2": "Point", "field3": "Stuff" }
%dw 2.0 output application/json var toUser = (user) -> { firstName: user.field1, lastName: user.field2 } --- { "user" : toUser(payload) }
{ "user": { "firstName": "Annie", "lastName": "Point" } }
Use Named Parameters in a Lambda
This example uses a lambda with an attribute that is explicitly named as
name.
%dw 2.0 output application/json var names = ["john", "peter", "matt"] --- users: names map((name) -> upper(name))
{ "users": ["JOHN","PETER","MATT"] }
Use Anonymous Parameters in a Lambda
This example uses a lambda with an attribute that’s not explicitly named, and so is referred to by default as
$.
%dw 2.0 output application/json var names = ["john", "peter", "matt"] --- users: names map upper($)
{ "users": ["JOHN","PETER","MATT"] } | https://docs.mulesoft.com/dataweave/2.2/dataweave-functions-lambdas | 2022-05-16T11:51:23 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.mulesoft.com |
Deployment-related questions and errorsDeployment-related questions and errors
- How do I deploy Streamlit on a domain so it appears to run on a regular port (i.e. port 80)?
- How can I deploy multiple Streamlit apps on different subdomains?
- How do I deploy Streamlit on Heroku, AWS, Google Cloud, etc...?
- Invoking a Python subprocess in a deployed Streamlit app
- Does Streamlit support the WSGI Protocol? (aka Can I deploy Streamlit with gunicorn?)
- Argh. This app has gone over its resource limits.
- App is not loading when running remotely
- Authentication without SSO
- I don't have SSO. How do I sign in to Streamlit Cloud?
- How do I share apps with viewers outside my organization?
- Upgrade the Streamlit version of your app on Streamlit Cloud
- Organizing your apps with workspaces on Streamlit Cloud
- How do I increase the upload limit of st.file_uploader on Streamlit Cloud? | https://docs.streamlit.io/knowledge-base/deploy | 2022-05-16T13:09:05 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.streamlit.io
eazyBI Private licensing
Private eazyBI
General Info
- The eazyBI Private Server licenses are annual subscription based.
- The billing period is 12 months (not perpetual).
- After a year, the license has to be renewed at the initial purchase price.
- When the license expires, users cannot import new data, there is a red warning at the top of the window.
- eazyBI prices on the pricing page are without taxes (VAT). VAT is applied only if sold to Latvian companies and/or buyers from EU without a valid VAT.
Generating Trial Licenses for Customers
How to create a new eazyBI Private account for a customer/partner/reseller?
- Start a new eazyBI Private Trial
- Enter your own billing details
- If reselling, please enter Customer's company name in the "Licensee" field. Optionally enter the purchase order number, if required.
- The trial license is valid for 30 days whether a customer purchases a commercial license or not.
- All standard integrations and many custom data sources.
- Full eazyBI installation on your own computer.
- Customize your visual appearance – your logo, headers, footers, colors, etc.
- Develop additional integrations with other applications.
- Extend and customize Private eazyBI functionality with the Ruby programming language.
Quotes, Invoices, Purchases, Renewals
You can manually generate eazyBI Private Quotes, Invoices, as well as renew or purchase eazyBI private subscriptions from the eazyBI license account on eazybi.com.
- To log-in, go to eazybi.com and log in with your email address (used when starting the eazyBI Private trial)
- Open account subscription page by clicking on the Account name (Usually company name) on the top-right corner—select "PRIVATE subscription" from the dropdown.
- On the Subscription page, you can generate a Quote, an Invoice, extend the license period, change subscription plan.
You can pay with a credit card or a bank wire transfer.
When extending the license, the new license expiration period will be 12 months from the end of current license. So you can renew the license a month or few before it expires to make sure there are no service interruptions. The new license can be used immediately (no need to wait for the old license to expire).
Account Transfers
To transfer the eazyBI license and billing account to a customer, in the account user manager, add customer's email to add access to the account. Once they accept, you can give them "account owner" permissions. | https://docs-staging.eazybi.com/eazybi/getting-started/eazybi-private-licensing | 2022-05-16T12:35:19 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs-staging.eazybi.com |
'Not Visualized' section. Note that you cannot perform hidden sorts using attributes.
You can also drag and drop attributes into the slice with color section, which appears for certain chart types, such as column or bar charts, if you have more than one attribute in a search. Note that you can only use an attribute to slice with color.
Slicing with color enables you to separate data already sorted by an attribute into subcategories, based on another attribute in your search.
In the following example, we sliced sales for each department by store region.
To drag and drop columns to the correct axis, or to the slice with color section, follow these steps:
Click the chart configuration icon on the top right.
Drag and drop a measure or attribute from the not visualized section to the correct axis, or from an axis to the not visualized section. Or, drag an attribute to the slice with color. | https://docs.thoughtspot.com/cloud/latest/chart-column-configure | 2022-05-16T12:05:53 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.thoughtspot.com |
Introduction
This section will guide you through discovering and searching for commercial geospatial datasets using the catalog user interface. The catalog allows you to either place orders and download the readily available geospatial datasets or acquire fresh datasets through tasking operations.
After selecting the datasets that suit your needs, you can purchase from the catalog and download the assets from the storage. The delivery time can range from a few minutes to 24 hours, depending on the image archive type (long-term or online archive).
Geospatial Datasets
The archive geospatial datasets available on the catalog are displayed in the table below.
To proceed, please go to Data Search. | https://docs.up42.com/data/catalog/data-discovery | 2022-05-16T11:35:03 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.up42.com |
public abstract class TopicsContext extends Object
TopicsContext is a per-application singleton tracking all registered topics.
It is erroneous to communicate with the Topic not registered via TopicsContext.
Application developer obtains instance of TopicsContext via static lookup() method.
When TopicsContext is being looked up for the first time, it triggers creation of PushContext via PushContextFactory that is configured as a RichFaces service.
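A short usage sketch (the topic name and payload are placeholders):

try {
    TopicsContext topicsContext = TopicsContext.lookup();
    TopicKey topicKey = new TopicKey("chat");
    topicsContext.getOrCreateTopic(topicKey);
    topicsContext.publish(topicKey, "Hello from the server");
} catch (MessageException e) {
    // publishing failed for the given topic key
}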
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public TopicsContext()
protected abstract Topic createTopic(TopicKey key)
public Topic getOrCreateTopic(TopicKey key)
Creates topic for given key or returns existing one when it was already created.
This method is thread-safe.
public Topic getTopic(TopicKey key)
public void removeTopic(TopicKey key)
public void publish(TopicKey key, Object data) throws MessageException
Publishes data through the topic with given key.
The provided topic key can contain expressions as the name of topic or its subtopic. In such case, the topic name or subtopic name will be first evaluated, which will form actual topic key that will be used to publish a message.
MessageException- when topic with given key fails to publish given data object.
public static TopicsContext lookup()
TopicsContexttracking all registered topics.
protected TopicKey getTopicKeyWithResolvedExpressions(TopicKey key) | https://docs.jboss.org/richfaces/4.5.X/4.5.13.Final/javadoc/org/richfaces/application/push/TopicsContext.html | 2022-05-16T13:14:28 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.jboss.org |
Authorize your Google Analytics account
How to integrate NS8 with Google Analytics.
Authorizing your Google Analytics (GA) account with the NS8 platform lets you prevent low-scoring traffic from being added to your retargeting audience. This also prevents bots and other invalid traffic from draining your marketing budget. To view your estimated savings, on the dashboard, view Retargeting Fraud.
Before you authorize your GA account, configure it with your ecommerce platform. To do this, see your ecommerce platform’s documentation.
If you don’t have a GA account, set one up on the GA main page.
To open your GA account, from Settings/Support, select Settings.
Scroll down to Google Analytics Integration. If you have not authorized an account, the setting will look like the following image. You can link one GA account to one NS8 account at the same time. If you change your account, follow these steps to authorize the new account.
Select Click here to authorize.
If you're not signed in to your Google account, you’re prompted to sign in.
You’re prompted to confirm that we can access your GA account. To continue, select ALLOW.
Select the GA account that you want to link. In most cases, you will only have one account available. If you have several sites that you manage with the same GA account, select the one that you want to use. Select Accept.
The Google Analytics Integration setting appears as Active. The web property ID appears. You can confirm the property ID with your GA account.
| https://docs.ns8.com/docs/authorizing-your-google-analytics-account | 2022-05-16T11:30:04 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.ns8.com
Crate xenon
Fixed-point math, for any static or dynamic precision in any base.
Differences from floating-point:
- exact results, except where explicitly rounded
- supports rounding in any base, such as decimal
- simpler interface (no NaN, infinities)
- better performance
- more compact storage for some value ranges
- unnormalized, so sigfigs of result can be determined by sigfigs of inputs
Differences from bignums:
- if you know the "shape" of your inputs/calculations, the statically typed fixed-points can be more efficient
- XeN is optimized for values that fit in a relatively small range
Dynamic precision? Isn't that floating point?
Technically yes, but the Xe API is much simpler than typical floating point; it's designed for applications like working with a collection of values that are known to be fixed-precision decimal of the same type, but the number of decimal places won't be known until runtime. | https://docs.rs/xenon/latest/xenon/ | 2022-05-16T13:24:53 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.rs |
Handling of sensitive data
The Data Intelligence Service (DIS) handles user- and company-related sensitive data to secure the organization's data privacy.
Description
DIS does not upload or host any company or tenant-specific information. It is not possible to determine which company or tenant is the source of data inventoried.
Company and user information
User information is removed during file processing in Snow Update Service (SUS). If user information appears in the user's file path, the user information is removed. For example, if a file path is C:/Users/myname/…, it becomes C:/Users/{User}/…
However, there are limitations:
Removal of the user information only applies to Windows OS with an English language framework.
If you include company information in a file path, the information is present in the DIS database in the executable file path information of the software row, for example, company name. In this case, company information cannot be removed as it is required for recognition.
Exclude data from inventory
If you have strict regulations on data that cannot leave your company, you can configure the Snow Inventory Agents to exclude the folders containing such data from the computer file system scan. Since the folders are excluded from the scan, the data in the folders will not be inventoried, and it will not be included in the data normalization process.
Normalization logic for software data
Data on devices that are scanned by Snow Inventory Agents’ default scanning methods, that align with the data information inventoried, is transferred to the DIS database. If this includes a company name in a file path, this information is transferred to the DIS database. However, DIS only creates normalization logic for software data that is publicly available. DIS does not create normalization logic for software data used for bespoke software. | https://docs.snowsoftware.com/data-intelligence-service/en/UUID-d2fb4f9a-9ec9-3480-060b-3648887f6647.html | 2022-05-16T12:33:56 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.snowsoftware.com |
Release Guide
This METplus Release Guide provides detailed instructions for METplus developers for creating software releases for the METplus component repositories. This Release Guide is intended for developers creating releases and is not intended for users of the software.
Stages of the METplus Release Cycle
Development Release
Beta
Beta releases are a pre-release of the software to give a larger group of users the opportunity to test the recently incorporated new features, enhancements, and bug fixes. Beta releases allow for continued development and bug fixes before an official release. There are many possible configurations of hardware and software that exist and installation of beta releases allow for testing of potential conflicts.
Release Candidate (rc)
A release candidate is a version of the software that is nearly ready for official release but may still have a few bugs. At this stage, all product features have been designed, coded, and tested through one or more beta cycles with no known bugs. It is code complete, meaning that no entirely new source code will be added to this release. There may still be source code changes to fix bugs, changes to documentation, and changes to test cases or utilities.
Official Release
An official release is a stable release and is basically the release candidate, which has passed all tests. It is the version of the code that has been tested as thoroughly as possible and is reliable enough to be used in production.
Bugfix Release
A bugfix release introduces no new features, but fixes bugs in previous official releases and targets the most critical bugs affecting users.
Instructions Summary
Instructions are provided for three types of software releases:
Official Release (e.g. vX.Y.Z) from the develop branch (becomes the new main_vX.Y branch). | https://metplus.readthedocs.io/en/latest/Release_Guide/index.html | 2022-05-16T12:54:47 | CC-MAIN-2022-21 | 1652662510117.12 | [] | metplus.readthedocs.io
NavBarControl.GroupCollapsed Event
Fires immediately after a group has been collapsed.
Namespace: DevExpress.XtraNavBar
Assembly: DevExpress.XtraNavBar.v21.2.dll
Declaration
Event Data
The GroupCollapsed event's data class is NavBarGroupEventArgs. The following properties provide information specific to this event:
Remarks
This event is raised after a group has been collapsed by the end-user or via code. Note that it provides only a notification and you cannot cancel the action. The event parameter’s NavBarGroupEventArgs.Group property identifies the collapsed group.
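A minimal handler sketch in C# (the control name navBarControl1 is an assumption):

// Subscribe once, for example in the form constructor.
navBarControl1.GroupCollapsed += OnGroupCollapsed;

void OnGroupCollapsed(object sender, DevExpress.XtraNavBar.NavBarGroupEventArgs e)
{
    // e.Group identifies the group that was just collapsed.
    Console.WriteLine("Collapsed group: " + e.Group.Caption);
}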
| https://docs.devexpress.com/WindowsForms/DevExpress.XtraNavBar.NavBarControl.GroupCollapsed | 2022-05-16T13:26:40 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.devexpress.com
You can debug your application in odo with the odo debug command.
Download the sample application that contains the necessary debug run step within its devfile:
$ odo create nodejs --starter
Validation ✓ Checking devfile existence [11498ns] ✓ Checking devfile compatibility [15714ns] ✓ Creating a devfile component from registry: DefaultDevfileRegistry [17565ns] ✓ Validating devfile component [113876ns] Starter Project ✓ Downloading starter project nodejs-starter from [428ms] Please use `odo push` command to create the component with source deployed
Push the application with the --debug flag, which is required for all debugging deployments:
$ odo push --debug
Validation ✓ Validating the devfile [29916ns] Creating Kubernetes resources for component nodejs ✓ Waiting for component to start [38ms] Applying URL changes ✓ URLs are synced with the cluster, no changes are required. Syncing to component nodejs ✓ Checking file changes for pushing [1ms] ✓ Syncing files to the component [778ms] Executing devfile commands for component nodejs ✓ Executing install command "npm install" [2s] ✓ Executing debug command "npm run debug" [1s] Pushing devfile component nodejs ✓ Changes successfully pushed to component
Port forward to the local port to access the debugging interface:
$ odo debug port-forward
Started port forwarding at ports - 5858:5858
Check that the debug session is running in a separate terminal window:
$ odo debug info
Debug is running for the component on the local port : 5858
Attach the debugger that is bundled in your IDE of choice. Instructions vary depending on your IDE, for example: VSCode debugging interface. | https://docs.okd.io/4.10/cli_reference/developer_cli_odo/creating_and_deploying_applications_with_odo/debugging-applications-in-odo.html | 2022-05-16T12:50:04 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.okd.io |
In OKD 4.10, you can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content.
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You created a registry on your mirror host and obtained the
imageContentSources data for your version of OKD.
You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide the ReadWriteMany access mode.
In the install-config.yaml file, set the value of platform.vsphere.clusterOSImage to the image location or name. For example:
platform:
  vsphere:
    clusterOSImage: <URL or name of the mirrored Fedora CoreOS image>

VMware vSphere configuration parameters are described in the following table:

The following is a sample install-config.yaml file for an installer-provisioned VMware vSphere cluster; the numbered callouts correspond to the parameter descriptions:

apiVersion: v1
baseDomain: example.com (1)
compute: (2)
- hyperthreading: Enabled (3)
  name: worker
  replicas: 3
  platform:
    vsphere: (4)
      cpus: 2
      coresPerSocket: 2
      memoryMB: 8196
      osDisk:
        diskSizeGB: 120
controlPlane: (2)
  hyperthreading: Enabled (3)
  name: master
  replicas: 3
  platform:
    vsphere: (4)
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
metadata:
  name: cluster (5)
platform:
  vsphere:
    vcenter: your.vcenter.server
    username: username
    password: password
    datacenter: datacenter
    defaultDatastore: datastore
    folder: folder
    diskType: thin (6)
    network: VM_Network
    cluster: vsphere_cluster_name (7)
    apiVIP: api_vip
    ingressVIP: ingress_vip
    clusterOSImage: <URL of the mirrored image> (8)
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' (9)
sshKey: 'ssh-ed25519 AAAA...'
additionalTrustBundle: | (10)
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources:
See About remote health monitoring for more information about the Telemetry service
If necessary, you can opt out of remote health reporting.
Set up your registry and configure registry storage. | https://docs.okd.io/4.10/installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.html | 2022-05-16T11:41:08 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.okd.io |
Connect Streamlit to data sourcesConnect Streamlit to data sources
These step-by-step guides demonstrate how to connect Streamlit apps to various databases & APIs. They use Streamlit's secrets management and caching to provide secure and fast data access.
- AWS S3
- BigQuery
- Snowflake
- Microsoft SQL Server
- Firestore (blog)
- MongoDB
- MySQL
- PostgreSQL
- Tableau
- Private Google Sheet
- Public Google Sheet
- TigerGraph
- Deta Base
- Supabase
- Google Cloud Storage | https://docs.streamlit.io/knowledge-base/tutorials/databases | 2022-05-16T12:50:44 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.streamlit.io |
Admin Emailconfigured in the checkout process, meaning that the server is completely ready to be used. To start working with it, just follow the next steps:
Create an accountbutton, and fill the form to create a new user profile using the
Admin Emailaddress provided while configuring the instance (any other address will not be authorized to sign up). | https://docs.thinger.io/server/deployment/thinger.io-cloud-server | 2022-05-16T11:22:36 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.thinger.io |
Zextras Suite Changelog - Release 3.6.0 Release Date October 4th, 2021 Solved Issues Auth *Issue ID:* AUTH-300 Title: OTP Label is now customizable Description: Users can now edit labels of newly generated TOTPs. Drive *Issue ID:* DRIV-1207 Title: Window title now reads Drive in place of Zimbra Drive. Description: The title of Drive window for external users has changed to "Drive". Powerstore *Issue ID:* PS-325 Title: Enhanced Mailbox move speed Description: Optimizations to the MailboxMove command operation now speeds up the operation thereby reducing the time taken. *Issue ID:* PS-342 Title: Tika indexing exceptions management enhanced Description: Documents that raise 204 (no content) and 422 (unprocessable entity) HTTP codes as a result of Tika parsing are no longer re-tried and a log is reported in the mailbox.log file. Team *Issue ID:* TEAMS-2317 Title: Optimized Team performance to address lag Description: Zextras Team now performs better and no longer lags after prolonged usage. *Issue ID:* TEAMS-2440 Title: Updated logs for ChatAutoCleanup procedure Description: The ChatAutoCleanup procedure now no longer shows incomplete logs. *Issue ID:* TEAMS-2510 Title: Teams no longer saves mute status of exiting users Description: Users' mute status is no longer remembered by Teams and users returning to an ongoing conversation are not muted — irrespective of their mute status when exiting. *Issue ID:* TEAMS-3034 Title: Improved Team file download Description: Users sometimes faces issues when downloading files sent by other users. The download of a file from one-to-one chat now no longer suffers casual failures due to buffer issues. *Issue ID:* TEAMS-3108 Title: Fixed a paste issue on Chrome versions higher than 91. Description: Fixed issue where users, on chrome version higher than 92, experienced issues when pasting text in a conversation. | https://docs.zextras.com/zextras-suite-documentation/3.6.0/changelogs/3.6.0.html | 2022-05-16T12:18:08 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.zextras.com |
Utilities¶
Newforms exposes various utilities you may want to make use of when working with forms, as well as some implementation details which you may need to make use of for customisation purposes.
- validateAll(form, formsAndFormsets)¶
Extracts data from a <form> using formData() and validates it with a list of Forms and/or FormSets.
- util.formatToArray(str, obj[, options])¶
Replaces '{placeholders}' in a string with same-named properties from a given Object, but interpolates into and returns an Array instead of a String.
By default, any resulting empty strings are stripped out of the Array before it is returned. To disable this, pass an options object with a 'strip' property which is false.
This is useful for simple templating which needs to include ReactElement objects.
- util.makeChoices(list, submitValueProp, displayValueProp)¶
Creates a list of [submitValue, displayValue] choice pairs from a list of objects.
If any of the property names correspond to a function in an object, the function will be called with the object as the this context. | https://newforms.readthedocs.io/en/v0.10.0/util_api.html | 2022-05-16T12:43:49 | CC-MAIN-2022-21 | 1652662510117.12 | [] | newforms.readthedocs.io |
Editing¶
These preferences control how several tools will interact with your input.
Objects¶
New Objects¶
- Link Materials to:
- Object Data
Any created material will be created as part of the Object Data data-block.
- Object
Any created material will be created as part of the Object data-block.
See also
Read more about Blender’s Data System.
- Align to
- World
New objects align with world coordinates.
- View
New object align with view coordinates.
- 3D Cursor
New objects align to the 3D cursor’s orientation.
- Enter Edit Mode
If selected, Edit Mode is automatically activated when you create a new object.
- Instance Empty Size
The display size for empties when a new collection instance is created. Duplicate Data list.
3D Cursor¶
- Cursor Surface Project
When placing the cursor by clicking, the cursor is projected onto the surface under the cursor.
- Cursor Lock Adjust
When the viewport is locked to the cursor, moving the cursor avoids the view jumping based on the new offset.
Annotations¶
- Default Color
The default color for new Annotate layers.
- Eraser Radius
The size of the eraser used with the Annotate Tool.
See also
Read more about Annotations..
Grease Pencil¶
- Distance
- Manhattan
The minimum number of pixels the mouse should have moved either horizontally or vertically before the movement is recorded. Decreasing this should work better for curvy lines.
- Euclidean
The minimum distance that mouse has to travel before movement is recorded.
See also
Read more about Grease Pencil.
Miscellaneous¶
- Sculpt Overlay Color
Defines Sidebar.
- Node Auto-offset Margin
Margin to use for offsetting nodes. | https://docs.blender.org/manual/en/2.93/editors/preferences/editing.html | 2022-05-16T11:59:37 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.blender.org |
Version End of Life. Not Yet Set
Release 6.1.9 is a minor bug fix release containing a fix for a critial bug within the Replicator related to the handling of Timezone.
Improvements, new features and functionality
A new script is available, tungsten_generate_haproxy_for_api. This script will read all available
INI files and dump out corresponding
haproxy.cfg entries with properly incrementing ports; the
composite parent will come first, followed by the composite children in alphabetical order.
This script will be embedded as a tpm command in a future release.
Issues: CT-1385
tpm update no longer fails when using the staging method to upgrade to a new version.
Issues: CT-1381
tungsten_find_orphaned no longer hangs when the password keyword exists by itself under
[client] in
my.cnf, which caused mysqlbinlog to
issue a password prompt.
Issues: CT-1387 | https://docs.continuent.com/release-notes/release-notes-tc-6-1-9.html | 2022-05-16T12:15:10 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.continuent.com |
SparklineAxisScaling Enum
Lists values used to specify how to calculate the minimum and maximum values for the vertical axis of a sparkline group.
Namespace: DevExpress.Spreadsheet
Assembly: DevExpress.Spreadsheet.v21.2.Core.dll
Declaration
Remarks
The values listed by this enumeration are used to set the SparklineVerticalAxis.MinScaleType and SparklineVerticalAxis.MaxScaleType properties.
Related GitHub Examples
The following code snippet (auto-collected from DevExpress Examples) contains a reference to the SparklineAxisScaling enum.
Note
The algorithm used to collect these code examples remains a work in progress. Accordingly, the links and snippets below may produce inaccurate results. If you encounter an issue with code examples below, please use the feedback form on this page to report the issue. | https://docs.devexpress.com/OfficeFileAPI/DevExpress.Spreadsheet.SparklineAxisScaling | 2022-05-16T13:20:50 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.devexpress.com |
Lowest Quality Campaigns
The Lowest Quality Campaigns card displays the advertising campaigns that have referred the lowest-quality traffic in the last 24 hours. The quality of campaign traffic is determined by the average score of its referred users.
Columns
To sort the columns in this table, select the triangles next to each column heading.
Latest Campaign
The name of the latest campaign from the UTM code The name of the latest campaign from the campaign referral settings.
Score
The average score of the visitors associated with this campaign. Campaigns that have a lower score attracted lower-quality traffic and potential fraud.
Sessions
The total number of sessions and the percentage of traffic from these sessions for a campaign.
Updated over 1 year ago | https://docs.ns8.com/docs/the-lowest-quality-campaigns-card | 2022-05-16T11:17:25 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.ns8.com |
contract: The address of the token contract.
owner: The address of the token owner.
spender: The address of the token spender.
address: The address for which token balances will be checked.
contractAddresses: An array of contract addresses, or the string "DEFAULT_TOKENS" to fetch all tokens.
address: The address for which token balances were checked.
tokenBalances: An array of token balance objects. Each object contains:
contractAddress: The address of the contract.
tokenBalance: The balance of the contract, as a string representing a base-10 number.
error: An error string. One of this or
tokenBalancewill be
null.
address: The address of the token contract.
name: The token's name.
nullif not defined in the contract and not available from other sources.
symbol: The token's symbol.
nullif not defined in the contract and not available from other sources.
decimals: The token's decimals.
nullif not defined in the contract and not available from other sources.
logo: URL of the token's logo image.
nullif not available.
web3.eth.subscribe("pendingTransactions"), but differs in that it emits full transaction information rather than just transaction hashes.
"alchemy_fullPendingTransactions", which is different from the string used in raw
eth_subscribeJSON-RPC calls, where it is
"alchemy_newFullPendingTransactions"instead. This is confusing, but it is also consistent with the existing Web3 subscription APIs (for example:
web3.eth.subscribe("pendingTransactions")vs
"newPendingTransactions"in raw JSON-RPC).
fromBlock: in hex string or "latest". optional (default to latest)
toBlock: in hex string or "latest". optional (default to latest)
fromAddress: in hex string. optional
toAddress: in hex string. optional.
contractAddresses: list of hex strings. optional.
category: list of any combination of
external,
token. optional, if blank, would include both.
excludeZeroValue:a
Boolean. optional (default
true)
maxCount: max number of results to return per call. optional (default
1000)
fromBlockand
toBlockare inclusive. Both default to
latestif not specified.
fromAddressand
toAddresswill be
ANDed together when filtering. If left blank, will indicate a wildcard (any address).
contractAddressesonly applies to
tokencategory transfers (eth log events). The list of addresses are
ORed together. This filter will be
ANDed with
fromAddressand
toAddressfor eth log events. If empty, or unspecified, it will be taken as a wildcard (any contract addresses).
category:
externalfor primary level eth transfers,
tokenfor contract event transfers.
excludeZeroValue:an optional
Booleanto exclude asset transfers with a value field of zero (defaults to
true)
maxCount: The maximum number of results to return per call. Default and max will be 1000.
pageKey: If left blank, will return the first 1000 or
maxCountnumber of results. If more results are available, a uuid pageKey will be returned in the response. Pass that uuid into
pageKeyto fetch the next 1000 or maxCount. See section on pagination.
web3.eth.getFeeHistory(blockRange, startingBlock, percentiles[])
blockRange: The number of blocks for which to fetch historical fees. Can be an integer or a hex string.
startingBlock: The block to start the search. The result will look backwards from here. Can be a hex string or a predefined block string e.g. "latest".
percentiles: (Optional) An array of numbers that define which percentiles of reward values you want to see for each block.
oldestBlock: The oldest block in the range that the fee history is being returned for.
baseFeePerGas: An array of base fees for each block in the range that was looked up. These are the same values that would be returned on a block for the
eth_getBlockByNumbermethod.
gasUsedRatio: An array of the ratio of gas used to gas limit for each block.
reward: Only returned if a percentiles parameter was provided. Each block will have an array corresponding to the percentiles provided. Each element of the nested array will have the tip provided to miners for the percentile given. So if you provide [50, 90] as the percentiles then each block will have a 50th percentile reward and a 90th percentile reward.
web3.eth.getMaxPriorityFeePerGas()
maxPriorityFeePerGasin EIP 1559 transactions. Rather than using
feeHistoryand making a calculation yourself you can just use this method to get a quick estimate. Note: this is a geth-only method, but Alchemy handles that for you behind the scenes.
maxPriorityFeePerGassuggestion. You can plug this directly into your transaction field. | https://docs.alchemy.com/alchemy/documentation/alchemy-web3/enhanced-web3-api | 2022-05-16T12:12:02 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.alchemy.com |
Spacing GuidelinesSpacing Guidelines
Within the context of the NMS, it's important to have consistency within the design so that all pages feel unified. As such, most spacing throughout the app follows an 8px scaling factor as shown below:
To help facilitate this better, we leverage Material-UI's
theme.spacing() helper which too uses an 8px scaling factor.
const theme = createMuiTheme(); theme.spacing(0.5) // = 8 * 0.5 (4px) theme.spacing(1) // = 8 * 1 (8px) theme.spacing(2) // = 8 * 2 (16px) theme.spacing(3) // = 8 * 3 (24px) theme.spacing(4) // = 8 * 4 (32px) theme.spacing(5) // = 8 * 5 (40px)
With this in mind, always try and leverage the scaling system when building out components rather than using static
px values. Reason being, in the case the scaling factor is ever changed in the future, it will automatically update across all sizing. | https://docs.magmacore.org/docs/nms/dev_spacing | 2022-05-16T11:16:57 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.magmacore.org |
Understanding a Windows PowerShell Module.
The main purpose of a module is to allow the modularization (ie, reuse and abstraction) of Windows PowerShell code. For example, the most basic way of creating a module is to simply save a Windows PowerShell script as a .psm1 file. Doing so allows you to control (ie, make public or private) the functions and variables contained in the script. Saving the script as a .psm1 file also allows you to control the scope of certain variables. Finally, you can also use cmdlets such as Install-Module to organize, install, and use your script as building blocks for larger solutions.
Module Components and Types
A module is made up of four basic components:
Some sort of code file - usually either a PowerShell script or a managed cmdlet assembly.
Anything else that the above code file may need, such as additional assemblies, help files, or scripts.
A manifest file that describes the above files, as well as stores metadata such as author and versioning information.
A directory that contains all of the above content, and is located where PowerShell can reasonably find it.
Note
none of these components, by themselves, are actually necessary. For example, a module can technically be only a script stored in a .psm1 file. You can also have a module that is nothing but a manifest file, which is used mainly for organizational purposes. You can also write a script that dynamically creates a module, and as such doesn't actually need a directory to store anything in. The following sections describe the types of modules you can get by mixing and matching the different possible parts of a module together.
Script Modules
As the name implies, a script module is a file (
.psm1) that contains any valid Windows
PowerShell code. Script developers and administrators can use this type of module to create modules
whose members include functions, variables, and more. At heart, a script module is simply a Windows
PowerShell script with a different extension, which allows administrators to use import, export, and
management functions on it.
In addition, you can use a manifest file to include other resources in your module, such as data files, other dependent modules, or runtime scripts. Manifest files are also useful for tracking metadata such as authoring and versioning information.
Finally, a script module, like any other module that isn't dynamically created, needs to be saved in a folder that PowerShell can reasonably discover. Usually, this is on the PowerShell module path; but if necessary you can explicitly describe where your module is installed. For more information, see How to Write a PowerShell Script Module.
Binary Modules
A binary module is a .NET Framework assembly (
.dll) that contains compiled code, such as C#.
Cmdlet developers can use this type of module to share cmdlets, providers, and more. (Existing
snap-ins can also be used as binary modules.) Compared to a script module, a binary module allows
you to create cmdlets that are faster or use features (such as multithreading) that are not as easy
to code in Windows PowerShell scripts.
As with script modules, you can include a manifest file to describe additional resources that your module uses, and to track metadata about your module. Similarly, you probably should install your binary module in a folder somewhere along the PowerShell module path. For more information, see How to How to Write a PowerShell Binary Module.
Manifest Modules
A manifest module is a module that uses a manifest file to describe all of its components, but
doesn't have any sort of core assembly or script. (Formally, a manifest module leaves the
ModuleToProcess or
RootModule element of the manifest empty.) However, you can still use the
other features of a module, such as the ability to load up dependent assemblies or automatically run
certain pre-processing scripts. You can also use a manifest module as a convenient way to package up
resources that other modules will use, such as nested modules, assemblies, types, or formats. For
more information, see How to Write a PowerShell Module Manifest.
Dynamic Modules
A dynamic module is a module that is not loaded from, or saved to, a file. Instead, they are
created dynamically by a script, using the New-Module
cmdlet. This type of module enables a script to create a module on demand that does not need to be
loaded or saved to persistent storage. By its nature, a dynamic module is intended to be
short-lived, and therefore cannot be accessed by the
Get-Module cmdlet. Similarly, they usually do
not need module manifests, nor do they likely need permanent folders to store their related
assemblies.
Module Manifests
A module manifest is a
.psd1 file that contains a hash table. The keys and values in the hash
table do the following things:
Describe the contents and attributes of the module.
Define the prerequisites.
Determine how the components are processed.
Manifests are not required for a module. Modules can reference script files (
.ps1), script module files (
.psm1), manifest files (
.psd1), formatting and type files (
.ps1xml), cmdlet and provider assemblies (
.dll), resource files, Help files, localization files, or any other type of file or resource that is bundled as part of the module. For an internationalized script, the module folder also contains a set of message catalog files. If you add a manifest file to the module folder, you can reference the multiple files as a single unit by referencing the manifest.
The manifest itself describes the following categories of information:
Metadata about the module, such as the module version number, the author, and the description.
Prerequisites needed to import the module, such as the Windows PowerShell version, the common language runtime (CLR) version, and the required modules.
Processing directives, such as the scripts, formats, and types to process.
Restrictions on the members of the module to export, such as the aliases, functions, variables, and cmdlets to export.
For more information, see How to Write a PowerShell Module Manifest.
Storing and Installing a Module
Once you have created a script, binary, or manifest module, you can save your work in a location that others may access it. For example, your module can be stored in the system folder where Windows PowerShell is installed, or it can be stored in a user folder.
Generally speaking, you can determine where you should install your module by using one of the paths
stored in the
$ENV:PSModulePath variable. Using one of these paths means that PowerShell can
automatically find and load your module when a user makes a call to it in their code. If you store
your module somewhere else, you can explicitly let PowerShell know by passing in the location of
your module as a parameter when you call
Install-Module.
Regardless, the path of the folder is referred to as the base of the module (ModuleBase), and the name of the script, binary, or manifest module file should be the same as the module folder name, with the following exceptions:
Dynamic modules that are created by the
New-Modulecmdlet can be named using the
Nameparameter of the cmdlet.
Modules imported from assembly objects by the
Import-Module -Assemblycommand are named according to the following syntax:
"dynamic_code_module_" + assembly.GetName().
For more information, see Installing a PowerShell Module and about_PSModulePath.
Module Cmdlets and Variables
The following cmdlets and variables are provided by Windows PowerShell for the creation and management of modules.
New-Module cmdlet This cmdlet creates a new dynamic module that exists only in memory. The module is created from a script block, and its exported members, such as its functions and variables, are immediately available in the session and remain available until the session is closed.
New-ModuleManifest cmdlet This cmdlet creates a new module manifest (.psd1) file, populates its values, and saves the manifest file to the specified path. This cmdlet can also be used to create a module manifest template that can be filled in manually.
Import-Module cmdlet This cmdlet adds one or more modules to the current session.
Get-Module cmdlet This cmdlet retrieves information about the modules that have been or that can be imported into the current session.
Export-ModuleMember cmdlet This
cmdlet specifies the module members (such as cmdlets, functions, variables, and aliases) that are
exported from a script module (.psm1) file or from a dynamic module created by using the
New-Module cmdlet.
Remove-Module cmdlet This cmdlet removes modules from the current session.
Test-ModuleManifest cmdlet This cmdlet verifies that a module manifest accurately describes the components of a module by verifying that the files that are listed in the module manifest file (.psd1) actually exist in the specified paths.
$PSScriptRoot This variable contains the directory from which the script module is being executed. It enables scripts to use the module path to access other resources.
$env:PSModulePath This environment variable contains a list of the directories in which Windows PowerShell modules are stored. Windows PowerShell uses the value of this variable when importing modules automatically and updating Help topics for modules.
See Also
Writing a Windows PowerShell Module
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/powershell/scripting/developer/module/understanding-a-windows-powershell-module?view=powershell-7.1 | 2022-05-16T13:46:17 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.microsoft.com |
You can use New Relic's NerdGraph GraphiQL explorer to query your distributed tracing data. This document explains:
- Trace metadata that's only available with NerdGraph
- Example queries of trace data
Trace metadata
In addition to span event and transaction event data, we calculate additional metadata about the trace and its span relationships. To query this metadata, go to the NerdGraph GraphiQL explorer at api.newrelic.com/graphiql.
Additional trace-level data:
Additional span-level data:
For more about trace structure and span relationships, see Trace structure.
Trace data query examples
Here are example NerdGraph queries of distributed tracing data: | https://docs.newrelic.com/kr/docs/apis/nerdgraph/examples/nerdgraph-distributed-trace-data-tutorial/ | 2022-05-16T11:21:52 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.newrelic.com |
We strive to keep our resources operating efficiently so that our services are available to all our users. To prevent data usage spikes in one New Relic account from impacting other customers' accounts, we have various data volume and rate limits in place. We reserve the right to enforce these limits to protect our system and to avoid issues for you and other customers.
If your New Relic account, whether by configuration or by error, exceeds one of these limits, it or its child accounts might experience one or both of the following:
- Sampling of data
- Temporary pause or cessation of data collection
To learn more about how hitting a limit can affect your data, see View limits. If you have further questions about these limits, your contract, or a limit you've reached, contact your New Relic account representative. We can work with you to adjust any rate limits to meet your needs.
View limits and manage data
For information about system and account limits, and for links to data ingest API limits, go to View limits.
To manage your data ingest, storage, and limits for organization or billing purposes, go to Manage data. | https://docs.newrelic.com/kr/docs/licenses/license-information/general-usage-licenses/new-relic-data-usage-limits-policies/ | 2022-05-16T12:11:59 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.newrelic.com |
Onboarding
SocialOS makes it quick and easy for users to sign up.
Using OAuth 2.0, users can sign up or sign in to SocialOS via
- Slack
Users can also create accounts using traditional email and password,
with support for standard confirmation and password reset by email.
SocialOS can also act as an OAuth 2.0 provider, allowing users to log
in to other applications using their SocialOS credentials.
For details on authentication methods and security models, see the
detailed topics below.
Related Topics
API Reference
Updated over 4 years ago | https://docs.socialos.io/docs/onboarding | 2022-05-16T12:18:41 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.socialos.io |
Session StateSession State
Session State is a way to share variables between reruns, for each user session. In addition to the ability to store and persist state, Streamlit also exposes the ability to manipulate state using Callbacks.
Check out this Session State basics tutorial video by Streamlit Developer Advocate Dr. Marisa Smith to get started:
Initialize values in Session StateInitialize values in Session State
The Session State API follows a field-based API, which is very similar to Python dictionaries:
# Initialization if 'key' not in st.session_state: st.session_state['key'] = 'value' # Session State also supports attribute based syntax if 'key' not in st.session_state: st.session_state.key = 'value'
Reads and updatesReads and updates
Read the value of an item in Session State and display it by passing to
st.write :
# Read st.write(st.session_state.key) # Outputs: value
Update an item in Session State by assigning it a value:
st.session_state.key = 'value2' # Attribute API st.session_state['key'] = 'value2' # Dictionary like API
Curious about what is in Session State? Use
st.write or magic:
st.write(st.session_state) # With magic: st.session_state
Streamlit throws a handy exception if an uninitialized variable is accessed:
st.write(st.session_state['value']) # Throws an exception!
Delete itemsDelete items
Delete items in Session State using the syntax to delete items in any Python dictionary:
# Delete a single key-value pair del st.session_state[key] # Delete all the items in Session state for key in st.session_state.keys(): del st.session_state[key]
Session State can also be cleared by going to Settings → Clear Cache, followed by Rerunning the app.
Session State and Widget State associationSession State and Widget State association
Every widget with a key is automatically added to Session State:
st.text_input("Your name", key="name") # This exists now: st.session_state.name
Use Callbacks to update Session StateUse Callbacks to update Session State
A callback is a python function which gets called when an input widget changes.
Order of execution: When updating Session state in response to events, a callback function gets executed first, and then the app is executed from top to bottom.
Callbacks can be used with widgets using the parameters
on_change (or
on_click),
args, and
kwargs:
Parameters
- on_change or on_click - The function name to be used as a callback
- args (tuple) - List of arguments to be passed to the callback function
- kwargs (dict) - Named arguments to be passed to the callback function
Widgets which support the
on_change event:
st.checkbox
st.color_picker
st.date_input
st.multiselect
st.number_input
st.radio
st.select_slider
st.selectbox
st.slider
st.text_area
st.text_input
st.time_input
st.file_uploader
Widgets which support the
on_click event:
st.button
st.download_button
st.form_submit_button
To add a callback, define a callback function above the widget declaration and pass it to the widget via the
on_change (or
on_click ) parameter.
Forms and CallbacksForms and Callbacks
Widgets inside a form can have their values be accessed and set via the Session State API.
st.form_submit_button can have a callback associated with it. The callback gets executed upon clicking on the submit button. For example:
def form_callback(): st.write(st.session_state.my_slider) st.write(st.session_state.my_checkbox) with st.form(key='my_form'): slider_input = st.slider('My slider', 0, 10, 5, key='my_slider') checkbox_input = st.checkbox('Yes or No', key='my_checkbox') submit_button = st.form_submit_button(label='Submit', on_click=form_callback)
Caveats and limitationsCaveats and limitations
Only the
st.form_submit_buttonhas a callback in forms. Other widgets inside a form are not allowed to have callbacks.
on_changeand
on_clickevents are only supported on input type widgets.
Modifying the value of a widget via the Session state API, after instantiating it, is not allowed and will raise a
StreamlitAPIException. For example:
slider = st.slider( label='My Slider', min_value=1, max_value=10, value=5, key='my_slider') st.session_state.my_slider = 7 # Throws an exception!
Setting the widget state via the Session State API and using the
valueparameter in the widget declaration is not recommended, and will throw a warning on the first run. For example:
st.session_state.my_slider = 7 slider = st.slider( label='Choose a Value', min_value=1, max_value=10, value=5, key='my_slider')
Setting the state of button-like widgets:
st.button,
st.download_button, and
st.file_uploadervia the Session State API is not allowed. Such type of widgets are by default False and have ephemeral True states which are only valid for a single run. For example:
if 'my_button' not in st.session_state: st.session_state.my_button = True st.button('My button', key='my_button') # Throws an exception! | https://docs.streamlit.io/library/api-reference/session-state | 2022-05-16T12:22:28 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.streamlit.io |
newforms¶
An isomorphic JavaScript form-handling library for React.
(Formerly a direct port of the Django framework’s
django.forms library)
Getting newforms¶
- Node.js
Newforms can be used on the server, or bundled for the client using an npm-compatible packaging system such as Browserify or webpack.
npm install newforms
var forms = require('newforms')
- Browser bundles
The browser bundles expose newforms as a global
formsvariable and expects to find a global
Reactvariable to work with.
The uncompressed bundle is in development mode, so will log warnings about potential mistakes.
You can find it in the dist/ directory.
- Source
Newforms source code and issue tracking is on GitHub:
Documentation¶
Note
Unless specified otherwise, documented API items live under the
forms
namespace object in the browser, or the result of
require('newforms') in
Node.js.
Documentation Contents¶
Guide Documentation¶
- Quickstart
- Overview
- React Components
- Interactive Forms with React
- Customising Form display
- Forms
- Form fields
- Form and Field validation
- Widgets
- Formsets
- Locales
API Reference¶
- Forms API
- BoundField API
- Fields API
- Validation API
- Widgets API
- Formsets API
- Utilities | https://newforms.readthedocs.io/en/v0.13.2/index.html | 2022-05-16T12:53:47 | CC-MAIN-2022-21 | 1652662510117.12 | [] | newforms.readthedocs.io |
At the time of this edit, Deadline Funnel does not have an API integration with Teachable so you cannot use 'deadlinetext' in your emails, but you can:
Use an animated email countdown timer
Use an email link in your Teachable emails to link to your sales page and ensure that your subscribers are tracked accurately
How to add an Animated Email Countdown to Teachable Email:
1. Navigate to Emails in the Teachable dashboard:
2. In the email composer 1) choose your email recipients, 2) give your email a subject line and click the 'code' icon to edit your email:
3. In your Deadline Funnel admin navigate to Edit Campaign > Emails and click to copy your Email Timer Code:
4. Paste the email timer code into your email editor in location where you want your animated countdown to appear:
5. Click the source code icon again to preview your email, you will see your animated countdown:
The animated timer will then show up in your subscriber's email and look similar to this:
How to use an Email Link in your Teachable Email:
Copy and paste the Email Link URL into your emails to link to any pages that have Deadline Funnel active on them. If someone clicks the email link before their countdown expires, visitors will be redirected to your Before Deadline URL. If their deadline has already expired, visitors will be redirected to your After Deadline URL.
You can create additional Email Links by adding new pages to Campaigns > Edit Campaign > Pages. Each URL will have a separate Email Link.
1. Navigate to 'Emails' and select the email link that corresponds to your page URL:
2. Copy the corresponding Email Link and use it in your Teachable email.
That's it. :)
If you have any questions, please let us know at [email protected]. | https://docs.deadlinefunnel.com/en/articles/4160478-how-to-use-deadline-funnel-with-teachable-email | 2021-09-17T00:28:55 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867327/276424e15beb69388978f77b/file-QNHp1E9RGy.png',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867342/f945e8c61cb880dbc1cc3386/file-c95DCuCt6R.png',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867344/bd199b71716ccb3afbf85f59/file-BNBygoKvsO.png',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867347/c7240f4fd23cd60812c45331/file-9avEReLjUY.png',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867349/c03df609529e9e4f9bc3d79d/file-MhZX7kAUs4.png',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867352/c00b5cd599ac638dcbb974b2/file-g0dZd9Diwx.png',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217867357/6daa130b28b1555fe0706d0d/file-7MNCQr7WTj.png',
None], dtype=object) ] | docs.deadlinefunnel.com |
:
- Using the Audit Log
- Viewing the Replication Status
- Using the Traffic Capture Tool
- Using the Capacity Report
- Participating in the Customer Experience Improvement Program
- Monitoring DNS Transactions
- Viewing DNS Alert Indicator Status
- Configuring DNS Alert Thresholds
In addition, if Grid members manage Microsoft servers, Grid Manager creates a synchronization log file for each managed Microsoft server. For information, see Viewing Synchronization Logs.
Using the Audit Log
The audit log contains a record of all Infoblox administrative activities. It provides the following detailed information:
- Timestamp of the change. If you have different admin accounts with different time zone settings, the appliance uses the time zone of the admin account that you use to log in to the appliance to display the date and timestamp.
- Administrator name
- Changed object name
- New value of the object. If you change multiple properties of an object, the audit log lists all changes in a comma-separated log entry. You can also search the audit log to find the new value of an object.
The appliance logs the following successful operations:
- Logins to Grid Manager and the API.
- Logout events, including when users log out by clicking the Logout button, when the Grid Manager GUI times out, and when users are logged out due to an error.
- Write operations such as the addition, modification, and deletion of objects.
- System management operations such as service restarts and appliance reboots.
- Scheduled tasks such as adding an A record or modifying a fixed address.
Enabling Audit Log Rolling:
- From the Grid tab, select the Grid Manager tab -> Members tab, and then click Grid Properties -> Edit from the Toolbar.
- In the Grid Properties editor, select the Security tab, and then select Enable Audit Log Rolling.
Specifying the Audit Log Type
Select either the Detailed (default) or Brief audit log type as follows:
- From the Grid tab, select the Grid Manager tab -> Members tab, and then click Grid Properties -> Edit from the Toolbar.
- In the Grid Properties editor, select the General tab, and then select one of the following:
- Detailed: This is the default type. When you select this, Grid Manager displays detailed information on all administrative changes such as the timestamp of the change, administrator name, changed object name, and the new values of all properties in the logged message.
- Brief: Provides information on administrative changes such as the changed object name and action in the log message. The logged message does not show timestamp or admin name.
Viewing the Audit Log
To view an audit log:
- From the Administration tab, select the Logs tab -> Audit Log tab.
- Optionally, use the filters to narrow down the audit log messages you want to view. Click Show Filters to enable the filters. Configure the filter criteria, and then click Apply.
Based on your filter criteria (if any), Grid Manager displays the following in the Audit Log viewer:
- Timestamp: The date, time, and time zone the task was performed. The time zone is the time zone configured on the member.
- Admin: The admin user who performed the task.
Note: The admin user displayed as $admin group name$ represents an internal user. You can create a admin filter with “matches expression” equals ^[^$] to filter out internal users.
- Action: The action performed. This can be CALLED, CREATED, DELETED, LOGIN_ALLOWED, LOGIN_DENINED, MESSAGE, and MODIFIED.
- Object Type: The object type of the object involved in this task. This field is not displayed by default. You can select this for display.
- Object Name: The name of the object involved in this task.
- Execution Status: The execution status of the task. Possible values are Executed, Normal, Pending Approval and Scheduled.
- Message: Detailed information about the performed task.
You can also do the following in the log viewer:
- Toggle between the single line view and the multi-line view for display.
- Navigate to the next or last page of the file using the paging buttons.
- Refresh the audit log view.
- Click the Follow icon to have the appliance automatically refresh the log every five seconds.
- Download the log.
- Clear the contents of the audit log.
-.
- Export or print the content of the log.
Searching in the Audit Log
Instead of paging through the audit log file to locate messages, you can have the appliance search for messages with certain text strings.
To search for specific messages:
- Enter a search value in the search field below the filters, and then click the Search icon.
The appliance searches through the audit log file and highlights the search value in the viewer. You can use the arrow keys next to the Search icon to locate the previous or next message that contains the search value.
Downloading the Audit Log
You can download the audit log file to a specified directory, if you want to analyze it later. To download an audit log file:
- From the Administration tab, select the Logs tab -> Audit Log tab, and then click the Download icon.
- Navigate to a directory where you want to save the file, optionally change the file name (the default name is auditLog.tar.gz), and then click OK. If you want to download multiple audit log files to the same location, rename each downloaded file before downloading the next.
Note: If your browser has a pop-up blocker enabled, you must turn off the pop-up blocker or configure your browser to allow pop-ups for downloading files.
Viewing the Replication:
- Name: The FQDN (fully qualified domain name) of the appliance.
- Send Queue: The size of the queue from the Grid Master to the Grid member.
- Last Send: The timestamp of the last replication information sent by the Grid Master.
- Receive Queue: The size of the queue from the Grid member to the Grid Master.
- Last Receive: The timestamp of the last replication information sent received by the Grid Master.
- Member Replication Status: The replication status between the member and the Grid Master. Grid Manager displays the status in green when the status is fine or red when the member is offline.
- HA Replication Status: The HA replication status between the active and passive nodes. The status is at the member level, not at the node level. Grid Manager displays the status in red when one of the nodes is offline.
- Status: The current operational status of the appliance. The status can be one of the following:
- Green: The appliance is operating normally in a "Running" state.
- Yellow: The appliance is connecting or synchronizing with its Grid Master.
- Red: The Grid member is offline, is not licensed (that is, it does not have a DNSone license with the Grid upgrade that permits Grid membership), is upgrading or downgrading, or is shutting down.
- IPv4 Address: The IPv4 address of the appliance or the VIP of an HA pair.
- IPv6 Address: The IPv6 address of the appliance or the VIP of an HA pair.
- Identify: This field appears only if your appliance has the unit identification button. This can be On or Off. When you identify the appliance by pressing the UID button on the appliance or through the GUI or CLI command, this field displays On. Otherwise, this is Off.
- DHCP, DNS, TFTP, HTTP,FTP, NTP, bloxTools, Captive Portal, DNS Accelerator Usage, Discovery, Reporting: The current status of the service. The status can be one of the following:
- Green: The service is enabled and running properly.
- Yellow: The service is enabled, but there may be some issues that require attention.
- Red: The service is enabled, but it is not running properly. A red status icon can also appear temporarily when a service is enabled and begins running, but the monitoring mechanism has not yet notified the Infoblox GUI.
- Gray: The service is not configured or it is disabled.
- Hardware Type: The hardware type of the appliance, such as IB-1400.
- Serial Number: The serial number of the appliance.
- DB Utilization: The percentage of the database that is currently in use.
- Comment: Information about the appliance.
- Site: The location to which the member belongs. This is one of the predefined extensible attributes.
- HA: Indicates whether the member is an HA pair. If the member is an HA pair, Grid Manager displays the status of the HA pair.
- Hardware Model: The hardware model of the appliance.
You can do the following:
-.
- Modify some of the data in the table. Double click a row of data, and either edit the data in the field or select an item from a drop-down list. Note that some fields are read-only. For more information about this feature, see Modifying Data in Tables.
- Edit the properties of a member.
- Click the check box beside a member, and then click the Edit icon.
- Delete a member.
- Click the check box beside a member, and then click the Delete icon.
- Export or print the list.
Using the Traffic Capture Tool
You can capture the traffic on one or all of the ports on a NIOS appliance, and then view it using a third-party network protocol analyzer application, such as the Wireshark – Network Protocol Analyzer™. 4 NIOS admin users.
Note: The NIOS appliance always saves a traffic capture file as tcpdumpLog.tar.gz. If you want to download multiple traffic capture files to the same location, rename each downloaded file before downloading the next.
You can also capture traffic on the NIOS appliance through the Infoblox CLI using the
set traffic_capture command. For more information, refer to the Infoblox CLI Guide. Grid Manager displays the traffic capture status and it allows you to download the captured traffic, irrespective of whether the traffic capture is initiated from the Infoblox CLI or from Grid Manager.
To capture traffic on a member:
- From the Grid tab, select the Grid Manager tab -> Members tab, and then click Traffic Capture from the Toolbar.
- In the Traffic Capture dialog box, complete the following:
- Member: Grid Manager displays the selected member on which you want to capture traffic. If no member is displayed or if you want to specify a different member, click Select. When there are multiple members, Grid Manager displays the Member Selector dialog box from which you can select one. You cannot capture traffic on an offline member.
- Interface: Select the port on which you want to capture traffic. Note that if you enabled the LAN2 failover feature, the LAN and LAN2 ports generate the same output. (For information about the LAN2 failover feature, see About Port Redundancy.)
- LAN: Select this to capture all the traffic the LAN port receives and transmits.
- MGMT: Select this to capture all the traffic the MGMT port receives and transmits.
- LAN2: Select to capture all the traffic the LAN2 port (if enabled) receives and transmits.
- All: Select this to capture the traffic addressed to all ports. Note that the NIOS appliance only captures traffic that is addressed to it.
- LANx nnnn: If you have configured VLANs on the LAN1 or LAN2 port, the appliance displays the VLANs in the format LANx nnnn, where x represents the port number and nnnn represents the associated VLAN ID.
Note: Riverbed virtual appliances support capturing traffic only on the LAN port.
- Seconds to run: Specify the number of seconds you want the traffic capture tool to run.
3. Capture Control: Click the Start icon to start the capture. A warning message appears indicating that this report will overwrite the existing file. Click Yes. You can click the Stop icon to stop the capture after you start it.
4. Transfer To: Select the destination to transfer the traffic capture file. You can select My Computer, TFTP, FTP, or SCP from the drop-down list. Note that you cannot transfer the traffic capture file when the traffic capture is in progress.
- My Computer: Transfer the traffic capture file to a local directory on your computer. This is the default.
- TFTP: Transfer the traffic capture file to a TFTP server.
- Filename: Enter the directory path and the file name of the traffic capture file. For example, you can enter
/home/test/Infoblox_2016_03_01.
- IP Address of TFTP Server: Enter the IP address of the TFTP server to which you want to transfer the traffic capture file.
- FTP: Transfer the traffic capture file to an FTP server.
- Filename: Enter the directory path and the file name of the traffic capture file. For example, you can enter
/home/test/Infoblox_2016_03_01.
- IP Address of FTP Server: The IP address of the FTP server.
- Username: Enter the username of your FTP account.
- Password: Enter the password of your FTP account.
- SCP: Transfer the traffic capture file to an SCP server.
- Filename: Enter the directory path and the file name of the traffic capture file. For example, you can enter
/home/test/Infoblox_2016_03_01.
- IP Address of SCP Server: The IP address of the SCP server.
- Username: Enter the username of your SCP account.
- Password: Enter the password of your SCP account.
5. Uncompressed Capture File Size: Click Download to download the captured traffic after the capture stops and then save the file. You can rename the file if you want. You cannot download the traffic report when the tool is running. Grid Manager updates the size of the report when the capture tool is running.
Note: The NIOS appliance must have free disk space of at least 500MB + size of the traffic capture file (4 GB/1 GB, depending on the appliance model) to download the traffic capture file.
6. Use terminal window commands (Linux) or a software application (such as StuffIt™ or WinZip™) to extract the contents of the .tar.gz file.
7. When you see the traffic.cap file in the directory where you extract the .tar.gz file, open it with a third-party network protocol analyzer application.
Using the Capacity Report:
- From the Grid tab, select the Grid Manager tab -> Members tab -> member check box, and then click Capacity Report from the Toolbar.
The capacity summary contains the following information:
- Name: The name of the appliance.
- Role: The role of the appliance. The value can be Grid Master, Grid Master Candidate, Grid Member, or Standalone.
- Hardware Type: The type of hardware. For an HA pair, the report displays the hardware type for both the active and passive nodes.
- Object Capacity: The maximum number of objects the appliance can support.
- Total Objects: The total number of objects currently in the database.
- % Capacity Used: The percentage of the capacity in use.
The capacity report filters object types you can manage through the appliance. You can configure the object types you want to see in the following table by completing the following in the Minimum Object Total filter:
- Minimum Object Total: Enter the minimum number of objects within an object type of which Grid Manager displays. In the Object Type table, Grid Manager displays only the object types that contain at least the specified number of objects you enter in this field.
The capacity report displays the following information:
- Object Type: The type of objects. For example, DHCP Lease, Admin Group, or PTR Record. For objects that are only used for internal system operations, the report groups and shows them under Other.
- Total: The total number of objects for the specific object type. You can print the object type information or export it to a CSV file.
Participating in the Customer Experience Improvement:
- The phone home feature version.
- The report type, such as periodic and test.
- The time of the report.
- The Infoblox Support ID that was assigned to the account.
- Information about the Grid, such as its NIOS version, name, VIP, Grid Master hostname, LAN IP, and the number of Grid members and appliances in the Grid.
- The upgrade history of the Grid.
- Information about each Grid member, such as the hostname, IP address, status, role (such as standalone, master), and if the member is an HA pair. If the member is a peer in a DHCP failover association, the report also includes the DHCP failover status.
- Hardware information, such as the hardware type, serial number, HA status, and uptime.
- Information about the interfaces, such as the interface name and IP addresses.
- Resource usage information, such as CPU and system temperature, and CPU, database, disk, and memory usage.
Note that if the appliance is configured to send email notifications to an SMTP relay server, as described in Notifying Administrators, the appliance sends the phone home reports to the relay server as well.
To configure the Grid Master to email status reports:
- From the Grid tab, select the Grid Manager tab -> Members tab.
- Expand the Toolbar and click Grid Properties -> Edit.
- In the Grid Properties editor, select the Customer Improvement tab, and then complete the following:
- Participate in Infoblox Customer Experience Improvement Program: Select the check box to send product usage data to Infoblox on a periodic basis. Infoblox uses this data to improve product functionality.
- Support ID: Enter the Infoblox Support ID that was assigned to your account. It must be a number with four to six digits. Infoblox includes this ID in the data report.
- Send notifications to:
- Infoblox Support: Select this to email the reports to Infoblox Technical Support.
- Additional email addresses: Optionally, you can specify up to 16 additional recipients. Click the Add icon and enter the email addresses of the recipients.
- Send Test Report: Click this to send a test report to the specified recipients.
4. Save the configuration and click Restart if it appears at the top of the screen.
Monitoring DNS Transactions:
- There are no outstanding DNS requests from the port on which the response arrives.
- The TXID of the DNS response matches the TXID of an outstanding request. However, the request was sent from a port other than the port on which the response arrives.:
- The attacks that the appliance monitors do not happen over TCP.
- DNS responses are sent only from port 53. The appliance discards DNS responses that are sent from other ports.
To monitor invalid ports and invalid TXIDs on the Infoblox DNS server, follow these procedures:
- Enable DNS network monitoring and DNS alert monitoring. For information, see Enabling and Disabling DNS Alert Monitoring.
- Configure the thresholds for DNS alert indicators. For information, see Configuring DNS Alert Thresholds
- Enable SNMP traps and e-mail notifications. For information, see Configuring SNMP.
- Review the DNS alert status. For information, see Viewing DNS Alert Indicator Status
- Identify the source of the attack by reviewing the DNS alert status, syslog file, and SNMP traps. For information on SNMP traps for DNS alerts, see Threshold Crossing Traps.
To mitigate cache poisoning, you can limit incoming traffic or completely block connections from specific sources, as follows:
- Enable rate limiting on the DNS server. For information, see Enabling and Disabling Rate Limiting from External Sources.
- Configure rate limit traffic rules from specific sources. For information, see Configuring Rate Limiting Rules.
You can verify the rate limiting rules after you configure them. For information, see Viewing Rate Limiting Rules
Enabling and Disabling DNS Alert Monitoring:
- Log in to the Infoblox CLI as a superuser account.
- Enter the following CLI command:bookmark2831.
Viewing DNS Alert Indicator Status
To view DNS alert indicator status:
- Log in to the Infoblox CLI as a superuser account.
- Enter the following CLI command:.
Configuring DNS Alert Thresholds:
- Log in to the Infoblox CLI as a superuser account.
- Enter the following CLI command:.
Viewing DNS Alert Thresholds
You can view the DNS alert thresholds. The appliance displays the current thresholds. If you have not configured new thresholds, the appliance displays the default thresholds, which are 50% for both invalid port and TXID.
To view the DNS alert thresholds:
- Log in to the Infoblox CLI as a superuser account.
- Enter the following CLI command:
show monitor dns alert
The appliance displays the threshold information as shown in the following example:
DNS Network Monitoring is enabled. Alerting is enabled.
DNS Alert Threshold (per minute)
===========================================
portover 70% of packets
txidover 100 packets
Enabling and Disabling Rate Limiting from External Sources
You can mitigate cache poisoning on your DNS server by limiting the traffic or blocking connections from UDP port 53. To enable rate limiting from sources:
- Log in to the Infoblox CLI as a superuser account.
- Enter the following CLI command: bookmark2834.
You can also disable rate limiting by entering the following command:
set ip_rate_limit off
When you disable rate limiting, the appliance stops applying the rate limiting rules.
Configuring:
- Log in to the Infoblox CLI as a superuser account.
- Enter the following CLI:
- To block all traffic from host 10.10.1.1, enter the following command:
set ip_rate_limit add source 10.10.1.1 limit 0
- To limit traffic to five packets per minute from host 10.10.1.2, enter the following command:
set ip_rate_limit add source 10.10.1.2 limit 5/m
- To limit the traffic to five packets per minute from host 10.10.2.1/24 with an allowance for burst traffic of 10 packets, enter the following command:
set ip_rate_limit add source 10.10.2.1/24 limit 5/m burst 10
- To limit the traffic to 5000 packets per minute from all sources, enter the following command:
set ip_rate_limit add source all limit 5000/m
Removing Rate Limiting Rules
You can remove the existing rate limiting rules that limit access or block connections from UDP port 53. To remove all the existing rules:
- Log in to the Infoblox CLI as a superuser account.
- Enter the following CLI command:
- To remove the rate limiting rule that limits traffic from all sources, enter:
set ip_rate_limit remove source all
or
- To remove all of the rate limiting rules from all sources, enter:
set ip_rate_limit remove all
To remove one of the existing rules for an existing host:
- Log in to the Infoblox CLI as a superuser account.
- Enter the following CLI command:
set ip_rate_limit remove source ip-address[/mask]
Viewing Rate Limiting Rules
You can view the existing rate limiting rules that limit access or block connections from UDP port 53. To view rate limiting rules:
- Log in to the Infoblox CLI as a superuser account.
- Enter the following CLI
This page has no comments. | https://docs.infoblox.com/display/NAG8/Monitoring+Tools | 2021-09-17T01:47:37 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.infoblox.com |
Broadcast join
Today, regular joins are executed on a single cluster node. Broadcast join is an execution strategy of join that distributes the join over cluster nodes. This strategy is useful when left side of the join is small (up to few tens of MBs). In this case, a broadcast join will be more performant than a regular join. Run the following query to get the estimated size of the left side in bytes:
lookupSubQuery | summarize sum(estimate_data_size(*))
If left side of the join is a small dataset, then you may run join in broadcast mode using the following syntax (hint.strategy = broadcast):
lookupTable | join hint.strategy = broadcast (factTable) on key
Performance improvement will be more noticeable in scenarios where the join is followed by other operators such as
summarize. for example in this query:
lookupTable | join hint.strategy = broadcast (factTable) on Key | summarize dcount(Messages) by Timestamp, Key | https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/broadcastjoin | 2021-09-17T02:08:26 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.microsoft.com |
Voucher Definitions
On the Wireless Protection > Hotspots > Voucher Definitions tab you can manage different voucher definitions for voucher type hotspots.
To create a voucher definition, proceed as follows:
Click Add Voucher Definition.
The Add Voucher Definition dialog box opens.
Make the following settings:
Name: Enter a descriptive name for this voucher definition.
Validity period: Enter the time span for which a voucher with this definition will be valid. Counting is started at the first login. It is highly recommended to enter a time period.
Note – The maximum time for the Validity Period is two years.
Time quota: Here you can restrict the allowed online time. Enter the maximum online time after which a voucher of this definition expires. Counting is started at login and is stopped at logout. Additionally, counting is stopped after 5 minutes of inactivity.
Note – The maximum time for the Time Quota is two years.
Data volume: Here you can restrict the allowed data volume. Enter the maximum data volume to be transmitted with this voucher definition.
Note – The maximum Data Volume is 100 GB.
Comment (optional): Add a description or other information.
Click Save.
The voucher definition will be created. It can now be selected when creating a voucher-type hotspot.
To either edit or delete a voucher definition, click the corresponding buttons.
Cross Reference – Find information about customizing hotspot vouchers in the Sophos Knowledge Base. | https://docs.sophos.com/nsg/sophos-utm/utm-on-aws/9.707/help/en-us/Content/utm/utmAdminGuide/WirelessHotspotsVoucherDefinitions.htm | 2021-09-16T23:54:35 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.sophos.com |
Ternate Chabacano
Facts
- Language: Ternate Chabacano
- Alternate names: Chabacano, Chabakano, Zamboangueño
- Language code: dcbkt
-: 3000
- Script: Braille script. Latin script, primary usage.
More information:
Introduction.
The Ternate Chavacano Verb.
Tense-Aspect-Mood markers
- Ø: generic, past and present time reference
- ya/a: perfective, past time reference
- ta: imperfective, past and present time reference
- di: contemplated, future time reference | https://docs.verbix.com/Languages/ChavacanoTernate | 2021-09-16T23:53:58 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.verbix.com |
Create a New Fixed Asset
To create a new fixed asset:
Create a New Item
A dashboard for the Fixed assets directory opens.
1. On the Codejig ERP Main menu, click the Fixed assets module, and then select Fixed assets.
A dashboard for the Fixed assets directory opens.
2. On the dashboard page, click + Add new.
You are taken to a page for entering fixed asset details.
3. Enter information about your fixed asset:
- General information: item type (required), name, stock keeping unit, description, image, base unit of measurement.
- Then, provide the following information in the Details section, under the following Details tabs:
- Accounting tab - configure accounts associated with the fixed asset that are required for: 1) fixed asset management after the purchase of the fixed asset, 2) accounting for the cost of the fixed asset and income generated during the sales process, 3) other specific operations, such as inventory adjustment, depreciation of fixed assets, etc.When you create new fixed assets, accounts under the Accounting tab are auto-completed being retrieved from the My company settings. But, it is recommended to revise default accounts and change them to accounts that are more appropriate for the specific fixed asset. Specified accounts become fixed asset’s default accounts and will be used in documents and all fixed asset-related transactions.
- VAT tab - fill in fields under this tab in case it is planned to sell the fixed asset or derecognize it before the end of its useful life period. If reduced VAT rates are applicable to this fixed asset, select a specific reduced VAT rate for it.
- Codes tab - add fixed asset codes, such as EAN 13, QR Code.
4. Click Save.
It is not obligatory to provide all information under the Details tab immediately after creating a new fixed asset. You can fill in some details later on in the course of working with the system. You provide information under the Details tab to simplify and automate the creation of documents.
The page of fixed asset records consists of the following sections:
- General area
- Details section
- Details tabs
- Accounting tab
- VAT tab
- Codes tab
- Reports tabs
- List of transaction tab
- Fixed asset tab
- Stock tab
- Prices tab
- Charts tab
To be able to save the new fixed asset, you have to fill in the required fields which are marked with an asterisk (*) in General area and under the Accounting tab.
For more details about creating new fixed assets, see Create Items in Codejig ERP, Item: General Area, Item: Details Section.
Adding new fixed assets to the system and defining their details is only the first step in the process of creating fixed assets. In order to complete the creation of a fixed asset, you have to recognize it.
For more details on recognition of fixed assets, see Recognition, Create New Recognition Document.
More information
Recognition
Create a New Recognition Document
Fixed Assets: Dashboard | https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427397356 | 2021-09-17T00:50:38 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.codejig.com |
HighLevel Emails to Match Your Deadline
Navigate to your HighLevel Workflow and ensure that you have a tag or other starting point to mark when users are subscribed, and also check to be sure that you have followed the API setup steps to add your webhook URL to the Workflow:
In this example, we've set the campaign to be 3 days and to end at 11:59 PM EST on the final day:
So we see that when the user signs up, three things happen. A tag is added to the user which kicks off our Workflow, and then a webhook is sent back to Deadline Funnel to trigger the deadline, and our first email goes out to our subscriber. We then want to add two conditional statements which will allow us to line up our emails with the deadline:
Set the first wait condition to wait on all days until 12 AM:
Then set the second wait condition to wait on all days until 8 AM: (which will be the next morning)
Now we'll set the emails to go out each day until the last day is reached: HighLevel is set to EST. You can set this time zone to whatever you wish, but the time zones set in Deadline Funnel and HighLevel must be the same.
That's it! Please be sure to test your campaign.
We are available on chat or at [email protected] if you have any questions. | https://docs.deadlinefunnel.com/en/articles/5529918-how-to-time-your-highlevel-emails-to-match-your-deadline | 2021-09-17T00:03:22 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['https://downloads.intercomcdn.com/i/o/381622085/3b5321d15cdd6b6699ac82a6/image.png',
None], dtype=object)
array(['https://deadline-funnel-d18ee6fb3b00.intercom-attachments-1.com/i/o/217866456/b04247f1ef6c4b86d72ef6dd/file-IXzLd5yUIZ.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/381630073/4286be3c8b52c5894373aaa4/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/381631296/721da0068da7e5d79619e356/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/381632620/9c7931c6fc02e5334489bca3/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/382308008/6b21e159c1435fe0a27631ef/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/382308547/d99b366f74b13e8e260fe04f/image.png',
None], dtype=object) ] | docs.deadlinefunnel.com |
Zhyler
Facts
- Language: Zhyler
- Created: 2001
- Alternate names:
- Language code:
- Language family: personal language
- Script:
A constructed language by David J. Peterson.
Zhyler features 57 noun cases, in part due to a diachronic change where all postpositions in the language fused onto the nouns they modified, at which point a vowel harmony system came into being. Zhyler also has 17 noun classes, not unlike those found in the Bantu language family.
Language sources: Turkish, Swahili, Middle Egyptian, Mbasa, Kamakawi, and English.
Word building and derivation are tied into the noun class system. The language has only nouns and adjectives, with a few adverbs. Either nouns or adjectives can be inflected as verbs.
No. | https://docs.verbix.com/Conlangs/zhyler | 2021-09-17T01:52:46 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.verbix.com |
sh.removeRangeFromZone()
On this page
Definition
sh.
removeRangeFromZone( namespace, minimum, maximum )
New in version 3.4.
Removes the association between a range of shard key values and a zone.
sh.removeRangeFromZone()takes the following arguments:
Use
sh.removeRangeFromZone()to remove the association between unused, out of date, or conflicting ranges and a zone.
If no range matches the minimum and maximum bounds passed to
removeShardFromZone(), nothing is removed.
Only issue
sh.removeTagRange()when connected to a
mongosinstance.
Behavior
sh.removeShardFromZone() does not remove the zone associated to the specified range.
See the zone manual page for more information on zones in sharded clusters.
Balancer.
Security
For sharded clusters running with authentication, you must authenticate as either:
a user whose privileges include the specified actions on various collections in the
configdatabase:
or, alternatively
a user whose privileges include
enableShardingon the cluster resource (available starting in version 3.6.16).
The
clusterAdmin or
clusterManager built-in roles have the appropriate permissions for issuing
sh.removeRangeFromZone(). See the documentation page for Role-Based Access Control for more information.
Example
Given a sharded collection
exampledb.collection with a shard key of
{ a : 1 }, the following operation removes the range with a lower bound of
1 and an upper bound of
10:
sh.removeRangeFromZone() does not remove anything.
Compound Shard Key
Given a sharded collection
exampledb.collection with a shard key of
{ a : 1, b : 1 }, the following operation removes the range with a lower bound of
{ a : 1, b : 1} and an upper bound of
{ a : 10, b : 10 }:
Given the previous example, if there was an existing range with a lower bound of
{ a : 1, b : 5 } and an upper bound of
{ a : 10, b : 1 }, the operation would not remove that range, as it is not an exact match of the minimum and maximum passed to
sh.removeRangeFromZone(). | https://www.docs4dev.com/docs/en/mongodb/v3.6/reference/reference-method-sh.removeRangeFromZone.html | 2021-09-17T01:40:19 | CC-MAIN-2021-39 | 1631780053918.46 | [] | www.docs4dev.com |
Design Debt Quantification
Roots, as well as anti-patterns, can be considered as design debt, a type of technical debt (TD). The rationale is that if these design problems are not fixed, they may continue to generate additional maintenance costs, the same way that a monetary debt accumulates interest. Using DV8, the user can calculate (1) the added maintenance costs due to each instance of each anti-pattern and (2) the extra maintenance costs of each root.
(1) The maintenance costs of design debt: As an example, the following tables summarizes the anti-patterns detected in a real industrial project [9] , their scopes, and maintenance costs. The first line shows 322 files (21% of all the files) involved in 26 Clique instances. These files were changed 1,790 times involving 26,294 LOC, 41% of all the LOC altered for the entire project. 643 of the changes are for bug fixing, involving 16,557 LOC, 45% of all the LOC spent for bug fixing. This table shows that Cliques are very expensive to maintain in this project. The table below indicates that Clique1 involves 99 files and incurred the most maintenance costs, definitely worth attention. Clique5, although it contains just 16 files, also appears to be very costly.
Using this table, the user can prioritize which flaws need to be addressed in which order. By comparing with system average bug and change rates, we can see that files involved in these flaws are causing high maintenance difficulty.
Table: Design anti-patterns
Pt. : Percentage; Flaw CF - BC : maintenance costs, quantified by CF, CC, BF and BC, of the files in each flaw
Table: Maintenance costs of Clique instances
We can similarly calculate the maintenance costs incurred on each root, as exemplified in the following table. The first row shows that the first root involves 147 files. These files were changed 1,109 times, consuming 13,487 LOC. Of these changes, 414 were bug fixes involving 9,347 LOC. As we can see from the table, even though a Root only covers a small portion of the system, it is a hotspot where much maintenance effort was spent.
Table: Maintenance costs of each root
%: percentage; Rt. CF - BC: the total CF - BC of all files in each root
(2) Extra maintenance costs of a design debt: For each design debt, in the form of a root, an anti-pattern instance, or a hotspot, DV8 provides a debt calculator to compute the penalty it incurred. This penalty is calculated as the difference between the actual maintenance effort spent on the debt, and the expected maintenance effort. We use the average change/bug rate of all the project files as its expected maintenance effort [2]. For example, the columns "ExtraCF "- "ExtraBC" in the following table represent the debt's cumulative maintenance penalty. For example,"615" in the second row of "ExtraBF " column indicates that the 222 files in root1 and root2 are involved in bug fixes 615 times more often than average files. The "Percentage" row presents the percentage of the extra maintenance effort as compared with project averages. The last row indicates that 28% of all the changes, 41% of all the LOC, 40% of bug-fixing changes, and 47% of bug-fixing LOC spent on the entire project are incurred by these roots.
These extra maintenance costs can be considered as expected savings once these debts are removed. All these data are aggregated into Return on Investment (ROI) spreadsheets in the root, anti-pattern, and hotspot folders, respectively.
Table: Extra maintenance costs of architecture roots.
| https://docs.archdia.net/DesignDebtQuantification.html | 2021-09-17T00:04:56 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['lib/NewItem94.png', None], dtype=object)
array(['lib/NewItem93.png', None], dtype=object)
array(['lib/NewItem92.png', None], dtype=object)
array(['lib/NewItem91.png', None], dtype=object)] | docs.archdia.net |
, goals, sales tracking, custom fields, and testing
Recommended reading: Important links and a free training resource
Introduction
In this guide, we will show you how to create an evergreen campaign in Deadline Funnel that connects to your automation in HighLevel.
We recommend starting with this guide if you are using HighLevel. 🙂
But if you do not want to create an evergreen campaign - and you want to create a fixed-date campaign - please refer to this guide instead.
🕐 Estimated time:
30 minutes
⚠️ Before you start:
Make sure you have already created a campaign in HighLevel
Write at least two or three emails in the campaign your HighLevel campaign (or trigger) (required)
One or more Deadline Funnel email links in your campaign emails (required)
One or more Deadline Funnel email timers in your campaign emails (optional)
Key concepts
Please watch this quick video about how Deadline Funnel integrates with HighLevel:
⏰ THE WEBHOOK: The trigger that starts each subscriber's deadline
Each subscriber who goes through the webhook in your campaign will be added to your Deadline Funnel campaign.
Example: In a 3-day evergreen campaign, if someone goes through the webhook on Monday, their deadline will be Thursday. And someone else who goes through the webhook go through the Deadline Funnel webhook in your campaign..{{contact.email}}) instead of.
When a subscriber clicks on a Deadline Funnel email link, Deadline Funnel will look up their deadline (based on when they went through the webhook) and either redirect them to your special offer page or the expired page.
Core setup
These three steps are required in order to integrate Deadline Funnel + HighLevel. your campaign to start each subscriber's unique deadline
After you've created your Deadline Funnel campaign, set up the integration between Deadline Funnel and HighLevel.
This integration determines where you start each subscriber's Deadline Funnel tracking as they are going through your automation.
➡️ Please visit our guide for more details about how to add the Deadline Funnel webhook to your automation. w
Recommended reading
Here's your recommended reading list:
Create a Deadline Funnel campaign
Add Deadline Funnel email links to your automation emails
Add email timers to your emails
If you have any questions, please reach out to us via the Messenger. | https://docs.deadlinefunnel.com/en/articles/5265097-highlevel | 2021-09-17T01:46:28 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['https://downloads.intercomcdn.com/i/o/217889499/9bd91ebc24dc041c7475a301/index.png?expires=1620321325&signature=659057b034cebfd66ff1d4f0481819723985ed7ff60874f56b02c1e4c319884e',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/340931909/7ae9c8a8d56a7541b71e35e2/HighLevel+Webhook.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/340934145/f900bda9742b694502214568/image.png',
None], dtype=object) ] | docs.deadlinefunnel.com |
SAP High Availability Interface 7.73
Feedback
Thanks for your feedback.
*Beginning in v9.5.0 SIOS has released the new SAP HANA Application Recovery Kit. SIOS will continue to support the SAP HANA gen/app based Recovery Kit with the 9.4.x releases until March 31, 2022. If you are using SIOS Protection Suite for Linux v9.5 or later you must use the new (built-in) SAP HANA Application Recovery Kit.
!The existing SAP HANA gen/app based Recovery Kit is not supported with v9.5.0. Users who wish to upgrade to the SIOS Protection Suite for Linux v9.5.0 must convert their existing SAP HANA gen/app based Recovery Kit to the new SAP HANA Recovery Kit. Refer to Upgrading from the SAP HANA Gen/App to the SAP HANA Recovery Kit for details.
*NOTE: Operating system versions built for enhanced SAP support (such as Red Hat Enterprise Linux for SAP Business Applications, Red Hat Enterprise Linux for SAP Solutions, and SUSE Linux Enterprise Server for SAP Applications) are also supported as long as the running Linux kernel version is the same as one of the supported OS versions listed above.
Post your comment on this topic. | https://docs.us.sios.com/spslinux/9.5.1/en/topic/sios-protection-suite-for-sap-solution-page | 2021-09-17T00:23:45 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.us.sios.com |
Sãotomense
Facts
- Language: Sãotomense
- Alternate names: Forro, Santomense, São Tomense
- Language code: cri
-: 69900
- Script:
More information:
Introduction
Forro Creole is a Portuguese creole language spoken in São Tomé and Príncipe. It is also called by its native speakers as sãotomense creole or santomense creole.
The Sãotomense Verb. | https://docs.verbix.com/Languages/PortugueseSaotomense | 2021-09-17T01:12:28 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.verbix.com |
Legal Information
Please read and understand the following important legal information and disclaimer:
Tibbo Technology ("TIBBO") is a Taiwan corporation that designs and/or manufactures a number of hardware products, software products, and applications ("PRODUCTS"). TIBBO PRODUCT range includes BASIC-programmable devices ("PROGRAMMABLE DEVICES") that can run a variety of applications written in Tibbo BASIC ("BASIC APPLICATIONS").
As a precondition to your purchase and/or use of any TIBBO PRODUCT, including PROGRAMMABLE DEVICES, you acknowledge and agree to the following:
A. By purchasing any TIBBO PRODUCT, you agree and acknowledge that the design of all aspects of any COMBINATORIAL PRODUCT/SYSTEM or END PRODUCT is solely your responsibility. You agree that TIBBO shall have no obligation to indemnify or defend you in the event that a third party asserts that your COMBINATORIAL PRODUCT/SYSTEM or END PRODUCT violates third party patents, copyrights, or other proprietary rights.
B. You waive any right to cause TIBBO to defend or indemnify you or any of your customers in connection with a demand related to TIBBO PRODUCTS, including but not limited to any such right as may be imposed or implied by law, statute, or common law.
C. If a demand or proceeding is brought against TIBBO based on an allegation that your COMBINATORIAL PRODUCT/SYSTEM or END PRODUCT violates a patent, copyright, database right, trademark, or other intellectual property right, you shall defend such demand of proceeding and indemnify us and hold us harmless for, from and against all damages and costs awarded against us on the same basis and subject to the same conditions as were applicable to you. | http://docs.tibbo.com/soism/legal.htm | 2019-03-18T15:38:26 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.tibbo.com |
Create a standalone cluster running on Windows Server
You can use Azure Service Fabric to create Service Fabric clusters on any virtual machines or computers running Windows Server. This means you can deploy and run Service Fabric applications in any environment that contains a set of interconnected Windows Server computers, be it on premises or with any cloud provider. Service Fabric provides a setup package to create Service Fabric clusters called the standalone Windows Server package.
This article walks you through the steps for creating a Service Fabric standalone cluster.
Note
This standalone Windows Server package is commercially available and may be used for production deployments. This package may contain new Service Fabric features that are in "Preview". Scroll down to "Preview features included in this package." section for the list of the preview features. You can download a copy of the EULA now.
Get support for the Service Fabric for Windows Server package
- Ask the community about the Service Fabric standalone package for Windows Server in the Azure Service Fabric forum.
- Open a ticket for Professional Support for Service Fabric. Learn more about Professional Support from Microsoft here.
- You can also get support for this package as a part of Microsoft Premier Support.
- For more details, please see Azure Service Fabric support options.
- To collect logs for support purposes, run the Service Fabric Standalone Log collector.
Download the Service Fabric for Windows Server package
To create the cluster, use the Service Fabric for Windows Server package (Windows Server 2012 R2 and newer) found here:
Download Link - Service Fabric Standalone Package - Windows Server
Find details on contents of the package here.
The Service Fabric runtime package is automatically downloaded at time of cluster creation. If deploying from a machine not connected to the internet, please download the runtime package out of band from here:
Download Link - Service Fabric Runtime - Windows Server
Find Standalone Cluster Configuration samples at:
Standalone Cluster Configuration Samples
Create the cluster. The nodes section describes the three nodes in the cluster: name, IP address, node type, fault domain, and upgrade domain. The properties section defines the security, reliability level, diagnostics collection, and types of nodes for the cluster.
The cluster created in this article is unsecure. Anyone can connect anonymously and perform management operations, so production clusters should always be secured using X.509 certificates or Windows security. Security is only configured at cluster creation time and it is not possible to enable security after the cluster is created. Update the config file enable certificate security or Windows security. Read Secure a cluster to learn more about Service Fabric cluster security.
Step 1: Create the cluster
Scenario A: Create an unsecured local development cluster
Service Fabric can be deployed to a one-machine development cluster by using the ClusterConfig.Unsecure.DevCluster.json file included in Samples.
Unpack the standalone package to your machine, copy the sample config file to the local machine, then run the CreateServiceFabricCluster.ps1 script through an administrator PowerShell session, from the standalone package folder.
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.DevCluster.json -AcceptEULA
See the Environment Setup section at Plan and prepare your cluster deployment for troubleshooting details.
If you're finished running development scenarios, you can remove the Service Fabric cluster from the machine by referring to steps in section "Remove a cluster".
Scenario B: Create a multi-machine cluster
After you have gone through the planning and preparation steps detailed at Plan and prepare your cluster deployment, you are ready to create your production cluster using your cluster configuration file.
The cluster administrator deploying and configuring the cluster must have administrator privileges on the computer. You cannot install Service Fabric on a domain controller.
The TestConfiguration.ps1 script in the standalone package is used as a best practices analyzer to validate whether a cluster can be deployed on a given environment. Deployment preparation lists the pre-requisites and environment requirements. Run the script to verify if you can create the development cluster:
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json
You should see output similar to the following. If the bottom field "Passed" is returned as "True", sanity checks have passed and the cluster looks to be deployable based on the input configuration.
Create the cluster: Run the CreateServiceFabricCluster.ps1 script to deploy the Service Fabric cluster across each machine in the configuration.
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA
Note
Deployment traces are written to the VM/machine on which you ran the CreateServiceFabricCluster.ps1 PowerShell script. These can be found in the subfolder DeploymentTraces, based in the directory from which the script was run. To see if Service Fabric was deployed correctly to a machine, find the installed files in the FabricDataRoot directory, as detailed in the cluster configuration file FabricSettings section (by default c:\ProgramData\SF). As well, FabricHost.exe and Fabric.exe processes can be seen running in Task Manager.
Scenario C: Create an offline (internet-disconnected) cluster
The Service Fabric runtime package is automatically downloaded at cluster creation. When deploying a cluster to machines not connected to the internet, you will need to download the Service Fabric runtime package separately, and provide the path to it at cluster creation.
The runtime package can be downloaded separately, from another machine connected to the internet, at Download Link - Service Fabric Runtime - Windows Server. Copy the runtime package to where you are deploying the offline cluster from, and create the cluster by running
CreateServiceFabricCluster.ps1 with the
-FabricRuntimePackagePath parameter included, as shown in this example:
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -FabricRuntimePackagePath .\MicrosoftAzureServiceFabric.cab
.\ClusterConfig.json and .\MicrosoftAzureServiceFabric.cab are the paths to the cluster configuration and the runtime .cab file respectively.
Step 2: Connect to the cluster
Connect to the cluster to verify the cluster is running and available. The ServiceFabric PowerShell module is installed with the runtime. You can connect to the cluster from one of the cluster nodes or from a remote computer with the Service Fabric runtime. The Connect-ServiceFabricCluster cmdlet establishes a connection to the cluster.
To connect to an unsecure cluster, run the following PowerShell command:
Connect-ServiceFabricCluster -ConnectionEndpoint <*IPAddressofaMachine*>:<Client connection end point port>
For example:
Connect-ServiceFabricCluster -ConnectionEndpoint 192.13.123.2345:19000
See Connect to a secure cluster for other examples of connecting to a cluster. After connecting to the cluster, use the Get-ServiceFabricNode cmdlet to display a list of nodes in the cluster and status information for each node. HealthState should be OK for each node.
PS C:\temp\Microsoft.Azure.ServiceFabric.WindowsServer> Get-ServiceFabricNode |Format-Table NodeDeactivationInfo NodeName IpAddressOrFQDN NodeType CodeVersion ConfigVersion NodeStatus NodeUpTime NodeDownTime HealthState -------------------- -------- --------------- -------- ----------- ------------- ---------- ---------- ------------ ----------- vm2 localhost NodeType2 5.6.220.9494 0 Up 00:03:38 00:00:00 OK vm1 localhost NodeType1 5.6.220.9494 0 Up 00:03:38 00:00:00 OK vm0 localhost NodeType0 5.6.220.9494 0 Up 00:02:43 00:00:00 OK
Step 3: Visualize the cluster using Service Fabric explorer
Service Fabric Explorer is a good tool for visualizing your cluster and managing applications. Service Fabric Explorer is a service that runs in the cluster, which you access using a browser by navigating to.
The cluster dashboard provides an overview of your cluster, including a summary of application and node health. The node view shows the physical layout of the cluster. For a given node, you can inspect which applications have code deployed on that node.
Add and remove nodes
You can add or remove nodes to your standalone Service Fabric cluster as your business needs change. See Add or Remove nodes to a Service Fabric standalone cluster for detailed steps.
Remove a cluster
To remove a cluster, run the RemoveServiceFabricCluster.ps1 PowerShell script from the package folder and pass in the path to the JSON configuration file. You can optionally specify a location for the log of the deletion.
This script can be run on any machine that has administrator access to all the machines that are listed as nodes in the cluster configuration file. The machine that this script is run on does not have to be part of the cluster.
# Removes Service Fabric from each machine in the configuration .\RemoveServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -Force
# Removes Service Fabric from the current machine .\CleanFabric.ps1
Telemetry data collected and how to opt out of it
As a default, the product collects telemetry on the Service Fabric usage to improve the product. The Best Practice Analyzer that runs as a part of the setup checks for connectivity to. If it is not reachable, the setup fails unless you opt out of telemetry.
- The telemetry pipeline tries to upload the following data to once every day. It is a best-effort upload and has no impact on the cluster functionality. The telemetry is only sent from the node that runs the failover manager primary. No other nodes send out telemetry.
- The telemetry consists of the following:
- Number of services
- Number of ServiceTypes
- Number of Applications
- Number of ApplicationUpgrades
- Number of FailoverUnits
- Number of InBuildFailoverUnits
- Number of UnhealthyFailoverUnits
- Number of Replicas
- Number of InBuildReplicas
- Number of StandByReplicas
- Number of OfflineReplicas
- CommonQueueLength
- QueryQueueLength
- FailoverUnitQueueLength
- CommitQueueLength
- Number of Nodes
- IsContextComplete: True/False
- ClusterId: This is a GUID randomly generated for each cluster
- ServiceFabricVersion
- IP address of the virtual machine or machine from which the telemetry is uploaded
To disable telemetry, add the following to properties in your cluster config: enableTelemetry: false.
Preview features included in this package
None.
Note
Starting with the new GA version of the standalone cluster for Windows Server (version 5.3.204.x), you can upgrade your cluster to future releases, manually or automatically. Refer to Upgrade a standalone Service Fabric cluster version document for details.
Next steps
- Deploy and remove applications using PowerShell
- Configuration settings for standalone Windows cluster
- Add or remove nodes to a standalone Service Fabric cluster
- Upgrade a standalone Service Fabric cluster version
- Create a standalone Service Fabric cluster with Azure VMs running Windows
- Secure a standalone cluster on Windows using Windows security
- Secure a standalone cluster on Windows using X509 certificates
Feedback
We'd love to hear your thoughts. Choose the type you'd like to provide:
Our feedback system is built on GitHub Issues. Read more on our blog. | https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server | 2019-03-18T16:13:36 | CC-MAIN-2019-13 | 1552912201455.20 | [array(['media/service-fabric-cluster-creation-for-windows-server/sfx.png',
'Service Fabric Explorer'], dtype=object) ] | docs.microsoft.com |
The ant plugin
The
ant plugin is useful for ant-based parts.
The ant build system is commonly used to build Java projects. The plugin requires a build.xml in the root of the source tree.
This plugin uses the common plugin keywords as well as those for “sources”. For more information, see Snapcraft parts metadata.
Additionally, this plugin uses the following plugin-specific keywords:
ant-properties(object)
A dictionary of key-value pairs. Set the following properties when running ant.
ant-build-targets(list of strings)
Run the given ant targets.
For examples, search GitHub for projects already using the plugin.
This is a snapcraft plugin. See Snapcraft plugins and Supported plugins for further details on how plugins are used.
Last updated 4 months ago. Help improve this document in the forum. | https://docs.snapcraft.io/the-ant-plugin/8507 | 2019-03-18T16:19:36 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.snapcraft.io |
:
Windows Server 2008 R2 and Windows Server 2012 R2 are supported for the purposes of running Horizon Client in nested mode. For more information, see Features Supported in Nested Mode.
Connection Server, Security Server, and View Agent or Horizon Agent
Latest maintenance release of Horizon 6 version 6.x and later releases.
If client systems connect from outside the corporate firewall, VMware recommends that you use a security server or Unified Access Gateway appliance so that client systems do not require a VPN connection.
Display protocols.
For Windows 7 SP1, install the Platform update for Windows 7 SP1 and Windows Server 2008 R2 SP1. For information, go to. | https://docs.vmware.com/en/VMware-Horizon-Client-for-Windows/4.7/horizon-client-windows-installation/GUID-D223AA9A-F2FF-439E-AD82-3C469AC0F1ED.html | 2019-03-18T15:28:39 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.vmware.com |
Customizing the 2007 Office Fluent Ribbon for Developers (Part 3 of 3)
Summary: This article is the companion to the part one and part two articles of the same name. This article contains a list of frequently asked questions regarding the Microsoft Office Fluent user interface. (33 printed pages)
Frank Rice, Microsoft Corporation
Ken Getz, MCW Technologies, LLC
Published: May 2006
Updated: May 2008
Applies to: Microsoft Office Access 2007, Microsoft Office Excel 2007, Microsoft Office PowerPoint 2007, Microsoft Office Outlook 2007, Microsoft Office Word 2007, Microsoft Visual Studio 2005 Tools for the 2007 Microsoft Office System, Microsoft Visual Studio 2005
Contents
The Ribbon and the Office Fluent User Interface
Ribbon and Office Fluent UI Frequently Asked Questions
Conclusion
Additional Resources
The Ribbon and the Office Fluent User Interface
The Ribbon is a part of the new Microsoft Office Fluent user interface (UI) for some of the applications in the 2007 release of Microsoft Office, including Microsoft Office Access 2007, Microsoft Office Excel 2007, Microsoft Office Outlook 2007, and Microsoft Office Word 2007. The Office Fluent UI is a departure from the system of menus, toolbars, and dialog boxes that were part of earlier releases of Office.
Ribbon and Office Fluent UI Frequently Asked Questions
These are some of the questions that are asked most frequently about how to customize the Fluent UI.
When I moved to Beta 2 TR, I noticed some differences in element names. What are these, specifically?
The <advanced> element was renamed to <dialogBoxLauncher>. The <fileMenu><menu id="fileMenu"> element was renamed to <officeMenu>. A loadImage callback was added with the following signatures.
Sub LoadImage(imageID, ByRef image)
public object loadImage(string imageId)
In addition, many of the idMso values changed.
How do you expose the internal IDs of built-in controls?
You can see each individual idMso value within the applications by using the following procedure.
To find the idMso value for an application
Click the Microsoft Office Button, and then click Application Options.
Click Customize, and then select the item you want information about.
Move the pointer over the item. The dialog box displays the control's idMso value in a ScreenTip, in parentheses.
What are the control IDs for controls that I need to disable or repurpose?
There are a set of Ribbon controls whose published control IDs are not the same as control IDs that should be used for their disabling/repurposing. This is due to how these specific controls are implemented internally. These previously unpublished control IDs only apply to the <commands> section of the XML used to define the Ribbon when repurposing and disabling commands. For all other scenarios (inserting controls with insertAfterMso/insertBeforeMso, cloning controls with <control idMso=""/>, re-using images with imageMso, and so forth), the published control ID is the correct ID.
The following table lists the affected IDs–the second column is the published control ID, and the third column is the ID that should be used to disable or repurpose the controls.
Table 1. Control IDs for disabling and repurposing controls
What are some of the limitations on attributes that I need to know about?
The getShowImage, getShowlabel, showImage, showLabel attributes are ignored on large controls.
The description and getDescription attributes only apply to menu items.
The getSize and size attributes do not apply to menu items. Instead the size is based on the item size for menu items.
The getVisible and visible attributes are ignored for ButtonGroup and Separator elements.
How do I display error messages for the Fluent UI?
You can control the error message display by setting a general option in each application.
To display error messages for the Fluent UI
Click the Microsoft Office Button, and then click Application Options to display the dialog box.
Click Advanced, and then find the General section of the options.
Select Show add-in user interface errors.
How do I change the UI dynamically? For example, I want to update labels and images, hide and show buttons, or refresh the content of a list while my code is running.
See the section "Dynamically Updating the Fluent UI" in the article Customizing the 2007 Office Fluent Ribbon for Developers (Part 1 of 3).
In Excel 2007, I am not able to run macros from the Quick Access Toolbar or from the Ribbon when my worksheet is in Print Preview. Is this the expected behavior?
Yes, this is the expected behavior. The ability to run macros while in Print Preview is disabled in Excel 2007. For example, assume that you have added a custom button to the Quick Access Toolbar. If you click the Microsoft Office Button, point to Print, and then click Print Preview, the current worksheet is displayed in Print Preview mode. The default buttons on the Quick Access Toolbar are disabled. If you then click the custom button, nothing happens; that is, any macro attached to the button is not executed and no dialog box is displayed.
Is there a way to programmatically control how the UI used by my add-in scales in size as controls are added or removed?
As currently implemented, custom groups do not resize themselves. They remain large, effectively getting a higher priority.
Is there a way to reset the UI and remove all customizations?
Yes. To reset the UI, uninstall your add-ins and then close any open documents. This restores the default UI.
Can I dynamically change the number of results in a Gallery control?
Yes. You can dynamically fill the gallery by supplying callbacks for the getItemCount, getItemLabel, or getItemImage attributes.
Are custom controls in the Fluent UI supported?
No. As an alternative, for scenarios that are not supported by the built-in controls, you can use the custom task pane feature to host your custom controls. You can find more information in the article Creating Custom Task Panes in the 2007 Office System.
Are all the controls in the Office applications available to my own customizations?
No, some controls are not available. For example, the splitButtonGallery control is not available to your customizations. (An example of the splitButtonGallery control is the Text Highlight Color control in Word 2007.)
What parts of the Fluent UI are not customizable by using the new extensibility model?
You cannot customize the status bar, the Mini toolbar, or context menus, although you can customize context menus by using the command bars object model.
Can I turn off the Mini toolbar?
Yes. The following procedure gives the steps.
To turn off the Mini toolbar
Click the Microsoft Office Button, and then click Application Options to display the Options dialog box.
Click Popular.
Clear the Show Mini Toolbar on selection option.
My Microsoft Office Access 2003 solution hides all Access menus and toolbars and displays custom menus and toolbars. What happens when users open this solution in Access 2007? Will my custom menus and toolbars appear on the Add-Ins tab?
Access 2007 can detect when an Access 2003 application includes settings to hide menus and toolbars, and to display only custom menus and toolbars. In this case, Access 2007 does not display the custom menus and toolbars on the Add-Ins tab.
How does attached Fluent UI customization XML work in Access 2007? Can I store the custom UI in the database? If so, how?
Because Access databases do not implement the new Office Open XML Formats file structure, Microsoft Visual Basic for Applications (VBA) solutions in Access usually store their markup in a table in the database. Create a table named USysRibbons and store two columns (RibbonName, a 255-character field, and RibbonXml, a memo field) that contain names and markup. You can then select a Ribbon by name from the table, by using the Options dialog box. You can also use standard data manipulation techniques to read XML content from a table, and call the Application.LoadCustomUI method to apply the new Ribbon content. You can find more information on the Office Fluent User Interface Developer Portal Web site.
What happens when two add-ins try to repurpose the same built-in control?
The last add-in that attempts to repurpose the control becomes the active add-in.
Can I programmatically remove items from built-in galleries?
You cannot programmatically remove items from built-in galleries by using extensibility. You may be able to remove them by using the application's object model.
Can I programmatically customize the Quick Access Toolbar, at least in a start-from-scratch scenario?
Yes. You can customize the Quick Access Toolbar by setting the startFromScratch attribute of the Ribbon element to true. However, we recommend not customizing the Quick Access Toolbar unless there is a good business reason—this feature is really meant for user customization.
How do I localize my UI?
You have two options. If you use COM, you can return different XML files, based on the current UI language. If you use VBA, you can have multiple VBA files for each language, or you can have a callback that returns the appropriate label for all of your controls.
Can I remove the Microsoft Office Button?
You can disable or hide all of the items on the Microsoft Office Button menu, but you cannot remove the button itself.
How do I write a VBA add-in that uses the Fluent UI, but that uses command bars in Office 2003 applications?
You can create one VBA document that uses the functionality of both Office 2003 and the 2007 Microsoft Office system. One way to do this is to check the version of Office, by using the Application.Version property. If the value is less than "12" (for 2007 Office applications), run your command bars code. Your Fluent UI XML markup is ignored by the converter that enables a document created in a 2007 Office application to be opened in an Office 2003 application. If the value is "12", you do not need to do any special processing. The file that contains your Fluent UI XML markup is loaded from the Office Open XML Formats file, and your callbacks are made available.
I cannot use extensibility to control the status bar. How do I programmatically hide the status bar?
You can hide the status bar by using the following line of code.
Application.CommandBars("Status Bar").Visible = False
How do I create two add-ins that add items to the same group or tab?
The idQ property of controls exists to enable multiple add-ins to share containers, such as custom tabs and groups.
In the following VBA example, two Excel add-ins share the same "Contoso" group on the add-ins tab; each adds one button to it. The key is specifying the same unique namespace in the <customUI> tag. Then, controls can reference this namespace by using idQ.
CustomUI for add-in 1
<customUI xmlns="" xmlns: <ribbon> <tabs> <tab idMso="TabAddIns"> <group idQ="x:Contoso" label="Contoso"> <button id="C1" label="Contoso Button 1" size="large" imageMso="FileSave" onAction="c_action1" /> </group> </tab> </tabs> </ribbon> </customUI>
CustomUI for add-in 2
<customUI xmlns="" xmlns: <ribbon> <tabs> <tab idMso="TabAddIns"> <group idQ="x:Contoso" label="Contoso"> <button id="C2" label="Contoso Button 2" size="large" imageMso="FileSave" onAction="c_action2" /> </group> </tab> </tabs> </ribbon> </customUI>
If you use a COM add-in to customize the Fluent UI, the namespace name must be the ProgID of the COM add-in, but the behavior is otherwise the same. When you use a shared add-in, the ProgID is AddInName.Connect. When you use Microsoft Visual Studio 2005 Tools for the 2007 Microsoft Office System (Visual Studio 2005 Tools for Office Second Edition) to create the add-in, the ProgID is the name of the add-in.
How can I assign KeyTips to my controls?
KeyTips are the keyboard shortcuts that appear on the Ribbon when you press the ALT key. You can assign your own KeyTips by using the keytip and getKeytip attributes. (The getKeytip attribute supplies the name of a callback procedure that provides the KeyTip.)
(VBA only) If two documents have the same callback signatures, the callbacks from the active document are called. How do I ensure my UI calls only the callbacks associated with my document?
This is an issue that was also present in Office 2003. As a workaround, you can make your callback names unique by adding your add-in or solution name to the callback name. You can also put your callbacks in a module, and then refer to your callbacks by using the full name of the procedure. For example, if you put your callbacks in a module named "MyAddInXYZ", you can refer to the callbacks by using "MyAddInXYZ.myCallback".
Can I interact with Fluent UI controls from VBA?
The Application.CommandBars class provides the following methods for interacting with Fluent UI controls.
Table 2. Methods for the Application.CommandBars class
How can I determine the Ribbon ID for Ribbons in the various applications?
The following table lists the Ribbon IDs for the different applications. Each application passes this ID to your solution in the getCustomUI method of the IRibbonExtensibility interface. This enables your application (or add-in) to determine which application has loaded your code, and you can return a different set of XML content depending on the identity of the host application.
Table 3. Ribbon IDs by application
Can I add images to my Enhanced ScreenTips?
No. You can add only text, by using the Supertip property.
How do I start a new line in an Enhanced ScreenTip?
Type the following character code where you want the new line to start:
How do I invalidate a control that has a qualified ID (idQ)?
You can call a callback procedure and pass the ID of the control in the following way.
Assume that idQ="x:test_idq"
You invoke the callback by using the following method.
InvalidateControl("test_idq")
You cannot set callbacks or invalidate controls from a different add-in (even though they are specified by using the idQ attribute in the current add-in's XML). Only the add-in that has the ProgID namespace gets callbacks and can invalidate the control.
How do I write a shim for my COM add-in?
See the information in the MSDN article Isolating Office Extensions with the COM Shim Wizard.
How do I display large menu items?
In the <menu> tag in the Fluent UI XML file, set itemSize="large". For any element that supports the itemSize attribute, set the value to large to cause the item to appear large (set the value to normal for normal-sized items).
Can I have two callbacks with the same name but different signatures?
Although you can do this, we recommended that you have different callbacks for each control (and not count on built-in overloading to handle the distinction between the two callbacks). For example, assume that you write a Fluent UI add-in with two callbacks of the same name, as in the following code.
public void doSomething(IRibbonControl control, bool pressState); public void doSomething(IRibbonControl control);
Also assume that your XML markup defines a toggleButton control and a button control, and that each of them has an onAction="doSomething" callback.
In this instance, only the toggleButton control will work, because of the Visual Basic and Visual C# auto-generated IDispatch implementation. If you write a C++ add-in and implement IDispatch yourself, this case will work. (In other words, it is best not to do this.)
How can I determine the correct signatures for each callback procedure?
The following table lists all of the callbacks, along with their procedure signatures for C++, VBA, C#, and Visual Basic.
Table 4. List of all C#, VBA, C++, and Visual Basic callbacks and signatures
How do I find out what each Ribbon attribute indicates?
The following table lists all of the Ribbon attributes and includes a short description of each.
Table 5. Ribbon attributes
I am looking for guidance about how to create a consistent end-user experience when customizing the Fluent UI directly with XML files or through add-ins. Can you help?
You can find the 2007 Office system guidance document UI Style Guide for Solutions and Add-Ins on the Microsoft Download Center.
Is it possible to line up (either right-justify or left-justify) text boxes in my custom Fluent UI?
No. However, you might be able to get a similar effect by using the box control. The box control is a container for other controls that has a boxStyle attribute that can be set to horizontal or vertical.
I have a document that I created from a template containing several macros. I have tried calling the macros from the Ribbon onAction callbacks without success. How can I call existing macros from Ribbon controls without modifying the original macros?
It is not possible to call macros that were created for an earlier version of Office directly from a Ribbon control without modifying the macros to include a reference to the control. However, there is a workaround. You can create a new module that contains a macro that hosts all of the Ribbon callbacks. When a call is made to the new macro from a Ribbon control, the older macro is called. The following code shows an example.
New Ribbon extensibility module
Sub RibbonX_macroname(control as IRibbonControl) Select Case control button1 macroname1 button2 macroname2 End Select End Sub
How do I get the selected index or item ID for a combo box control?
The onChange callback returns the selected string. The following code shows the signature.
Sub OnChange(control as IRibbonControl, text as String)
Whenever the value of the combo box is selected, the onChange callback receives the text. However, it is not possible to get the index of the selection.
Is it possible to predict or control the order in which callbacks are called?
No. You should not add logic to your Fluent UI solutions that depends on callbacks being called in a certain order.
In an application that uses command-bar controls, the Tag property was useful for storing arbitrary strings. How can I use the IRibbonControl.Tag property in my Fluent UI solutions?
The 2007 Microsoft Office applications do not use the Tag property, so you can use it to store arbitrary strings and then retrieve them at run time. In your XML, you can set the tag as in the following code.
<button id="mybutton" tag="some string" onAction="MyFunction"/>
When MyFunction is called, you can get the IRibbonControl.Tag property, which will be "some string".
Normally, you can distinguish between your controls by using the IRibbonControl.Id property, but there are restrictions on what IDs can contain (no non-alphanumeric characters, and they must all be unique). The Tag property does not have these restrictions, so it can be used in the following situations, where the Id property does not work:
If you need to store a special string with your control, such as a file name, as in this example: tag="C:\path\to\my\file.xlsm"
If you want multiple controls to be treated the same way by your callbacks, but you do not want to have a list of all of their IDs (which have to be unique). For example, you could have buttons on different tabs all with tag="blue", and then just check the Tag property instead of the ID for some action in the callback.
Is it possible to display an image in a ScreenTip or Enhanced ScreenTip similar to the Chart button in the Illustrations group on the Insert tab?
No. This is not currently supported in Fluent UI extensibility.
Assume I have a custom Ribbon defined for Outlook 2007 and a different Ribbon defined for Word 2007. If I use Word for my e-mail editor, which Ribbon will I see when I create or edit an e-mail message?
When a new Inspector type is created, Outlook will call the GetCustomUI method and pass in the Ribbon ID as an argument. Even though Outlook uses Word APIs, it is still an Outlook container and uses the Outlook Ribbon.
Conclusion
The articles that make up this set provide you with the information that you need to produce professional-looking solutions that are tailored to the needs of your customers. The customization samples presented in Customizing the 2007 Office Fluent Ribbon for Developers (Part 1 of 3) can be used as a jumping-off point for creating a UI that places the controls and options that are most important to your customers within easy reach. The reference information described in Customizing the 2007 Office Fluent Ribbon for Developers (Part 2 of 3) gives you detailed control over the look and feel of the Fluent UI. This article answers many of the questions that might arise as you create your own customized Fluent UI. By applying the information presented in these articles to your own applications, you can create more innovative, attractive solutions that set you apart from your competition. | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/aa722523(v=office.12) | 2019-03-18T15:42:27 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.microsoft.com |
User-Mode Dump Files
This section includes:
For information on analyzing a dump file, see Analyzing a User-Mode Dump File.
Varieties of User-Mode Dump Files
There are several kinds of user-mode crash dump files, but they are divided into two categories: full user-mode dumps and minidumps.
The difference between these dump files is one of size. Minidumps are usually more compact, and can be easily sent to an analyst.
Note Much information can be obtained by analyzing a dump file. However, no dump file can provide as much information as actually debugging the crash directly with a debugger.
Full User-Mode Dumps
A full user-mode dump is the basic user-mode dump file. It includes the entire memory space of a process: the program's executable image, the handle table, and other information that is useful to the debugger.
Minidumps
A user-mode dump file that includes only selected parts of the memory associated with a process is called a minidump.
The size and contents of a minidump file vary depending on the program being dumped and the application doing the dumping. Sometimes, a minidump file is fairly large and includes the full memory and handle table. Other times, it is much smaller -- for example, it might only contain information about a single thread, or only contain information about modules that are actually referenced in the stack.
The name "minidump" is misleading, because the largest minidump files actually contain more information than the "full" user-mode dump. For example, .dump /mf or .dump /ma will create a larger and more complete file than .dump /f. For this reason, .dump /m[MiniOptions] recommended over .dump /f for all user-mode dump file creation.
If you are creating a minidump file with the debugger, you can choose exactly what information to include. A simple .dump /m command will include basic information about the loaded modules that make up the target process, thread information, and stack information. This can be modified by appending any of the MiniOptions flags listed in .dump (Create Dump File).
These options can be combined. For example, the command .dump /mfiu can be used to create a fairly large minidump, or the command .dump /mrR can be used to create a minidump that preserves the user's privacy. For full syntax details, see .dump (Create Dump File).
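To make these commands concrete, a debugging session might look like the following rough sketch (the output paths are hypothetical). The first command creates the largest, most complete minidump; the second creates a minidump that preserves the user's privacy, as described above:

0:000> .dump /ma C:\Dumps\myapp_full.dmp
0:000> .dump /mrR C:\Dumps\myapp_private.dmp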
Creating a User-Mode Dump File
There are several different tools that can be used to create a user-mode dump file: CDB, WinDbg, Windows Error Reporting (WER), UserDump, and ADPlus.
For information about creating a user-mode dump file through ADPlus, see ADPlus.
For information about creating a user-mode dump file through WER, see Windows Error Reporting.
Choosing the Best Tool
There are several different tools that can create user-mode dump files. In most cases, ADPlus is the best tool to use.
The following table shows the features of each tool.
CDB and WinDbg
CDB and WinDbg can create user-mode dump files in a variety of ways.
Creating a Dump File Automatically
When an application error occurs, Windows can respond in several different ways, depending on the postmortem debugging settings. If these settings instruct a debugging tool to create a dump file, a user-mode memory dump file will be created. For more information, see Enabling Postmortem Debugging.
Creating Dump Files While Debugging
When CDB or WinDbg is debugging a user-mode application, you can also use the .dump (Create Dump File) command to create a dump file.
This command does not cause the target application to terminate. By selecting the proper command options, you can create a minidump file that contains exactly the amount of information you wish.
Shrinking an Existing Dump File
CDB and WinDbg can also be used to shrink a dump file. To do this, begin debugging an existing dump file, and then use the .dump command to create a dump file of smaller size.
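As a rough sketch (file names hypothetical), you could open an existing dump file with CDB and write a smaller one from it:

cdb -z C:\Dumps\myapp_full.dmp
0:000> .dump /mf C:\Dumps\myapp_smaller.dmp
0:000> q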
UserDump
The UserDump tool (Userdump.exe), also known as User-Mode Process Dump, can create user-mode dump files.
UserDump and its documentation are part of the OEM Support Tools package.
For more info and to download these tools, see How to use the Userdump.exe tool to create a dump file and follow the instructions on that page. Additionally, when CDB or WinDbg is debugging a user-mode application, you can also use the .dump (Create Dump File) command to create a dump file.
Integration Guide
This document explains the components necessary to install Calico on Kubernetes for integrating with custom configuration management.
The self-hosted installation method will perform these steps automatically for you and is strongly recommended for most users. These instructions should only be followed by users who have a specific need that cannot be met by the self-hosted installation method.
- Requirements
- About the Calico Components
- Installing calico/node
- Installing the Calico CNI plugins
- Installing the Calico network policy controller
- Role-based access control (RBAC)
- Configuring Kubernetes
Requirements
- An existing Kubernetes cluster running Kubernetes >= v1.1. To use NetworkPolicy, Kubernetes >= v1.3.0 is required.
- An etcd cluster accessible by all nodes in the Kubernetes cluster
- Calico can share the etcd cluster used by Kubernetes, but in some cases it’s recommended that a separate cluster is set up. A number of production users do share the etcd cluster between the two, but separating them gives better performance at high scale.
NOTE:
Calico can also be installed without a dependency on etcd, but that is not covered in this document.
About the Calico Components
There are three components of a Calico / Kubernetes integration.
- The Calico per-node docker container, calico/node
- The cni-plugin network plugin binaries.
- This is the combination of two binary executables and a configuration file.
- When using Kubernetes NetworkPolicy, the Calico policy controller is also required.
The calico/node docker container must be run on the Kubernetes master and each Kubernetes node in your cluster. It contains the BGP agent necessary for Calico routing to occur, and the Felix agent which programs network policy rules.

The cni-plugin network plugin integrates directly with the Kubernetes kubelet process on each node to discover which pods have been created, and adds them to Calico networking.

The calico/kube-policy-controller container runs as a pod on top of Kubernetes and implements the NetworkPolicy API. This component requires Kubernetes >= 1.3.0.
Installing calico/node

Run calico/node and configure the node.

The Kubernetes master and each Kubernetes node require the calico/node container. Each node must also be recorded in the Calico datastore.

The calico/node container can be run directly through docker, or it can be done using the calicoctl utility.
# Download and install `calicoctl` wget sudo chmod +x calicoctl # Run the calico/node container sudo ETCD_ENDPOINTS=http://<ETCD_IP>:<ETCD_PORT> ./calicoctl node run --node-image=quay.io/calico/node:v2.5.1
See the calicoctl node run documentation for more information.
Example systemd unit file (calico-node.service)
If you’re using systemd as your init system then the following service file can be used.
[Unit] Description=calico node After=docker.service Requires=docker.service [Service] User=root Environment=ETCD_ENDPOINTS=http://<ETCD_IP>:<ETCD_PORT> PermissionsStartOnly=true ExecStart=/usr/bin/docker run --net=host --privileged --name=calico-node \ -e ETCD_ENDPOINTS=${ETCD_ENDPOINTS} \ -e NODENAME=${HOSTNAME} \ -e IP= \ -e NO_DEFAULT_POOLS= \ -e AS= \ -e CALICO_LIBNETWORK_ENABLED=true \ -e IP6= \ -e CALICO_NETWORKING_BACKEND=bird \ -e FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT \ -v /var/run/calico:/var/run/calico \ -v /lib/modules:/lib/modules \ -v /run/docker/plugins:/run/docker/plugins \ -v /var/run/docker.sock:/var/run/docker.sock \ -v /var/log/calico:/var/log/calico \ quay.io/calico/node:v2.5.1 ExecStop=/usr/bin/docker rm -f calico-node Restart=always RestartSec=10 [Install] WantedBy=multi-user.target
Replace <ETCD_IP>:<ETCD_PORT> with your etcd configuration.

NOTE:

To ensure reasonable dataplane programming latency on a system under load, calico/node requires a CPU reservation of at least 0.25 cores, with additional benefits up to 0.5 cores.
Installing the Calico CNI plugins

The Kubernetes kubelet should be configured to use the calico and calico-ipam plugins.
Install the Calico plugins
Download the binaries and make sure they’re executable
wget -N -P /opt/cni/bin wget -N -P /opt/cni/bin chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
The Calico CNI plugins require a standard CNI config file. The policy section is only required when deploying the calico/kube-policy-controller for NetworkPolicy.
mkdir -p /etc/cni/net.d cat >/etc/cni/net.d/10-calico.conf <<EOF { "name": "calico-k8s-network", "cniVersion": "0.1.0", "type": "calico", "etcd_endpoints": "http://<ETCD_IP>:<ETCD_PORT>", "log_level": "info", "ipam": { "type": "calico-ipam" }, "policy": { "type": "k8s" }, "kubernetes": { "kubeconfig": "</PATH/TO/KUBECONFIG>" } } EOF
Replace <ETCD_IP>:<ETCD_PORT> with your etcd configuration.

Replace </PATH/TO/KUBECONFIG> with your kubeconfig file. See kubernetes kubeconfig for more information about kubeconfig.
For more information on configuring the Calico CNI plugins, see the configuration guide
Install standard CNI lo plugin
In addition to the CNI plugin specified by the CNI config file, Kubernetes requires the standard CNI loopback plugin.
Download the loopback binary and copy it to the CNI binary directory.
wget tar -zxvf cni-v0.3.0.tgz sudo cp loopback /opt/cni/bin/
Installing the Calico network policy controller
The calico/kube-policy-controller implements the Kubernetes NetworkPolicy API by watching the Kubernetes API for Pod, Namespace, and NetworkPolicy events and configuring Calico in response. It runs as a single pod managed by a Deployment.
To install the policy controller:
- Download the policy controller manifest.
- Modify <ETCD_ENDPOINTS> to point to your etcd cluster.
- Install it using kubectl.
$ kubectl create -f policy-controller.yaml
After a few moments, you should see the policy controller enter the Running state:
$ kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE calico-policy-controller 1/1 Running 0 1m
For more information on how to configure the policy controller, see the configuration guide.
Role-based access control (RBAC)
When installing Calico on Kubernetes clusters with RBAC enabled, it is necessary to provide Calico access to some Kubernetes APIs. To do this, subjects and roles must be configured in the Kubernetes API, and Calico components must be provided with the appropriate tokens or certificates to present, which identify them as the configured API user.
Detailed instructions for configuring Kubernetes RBAC are outside the scope of this document. For more information, please see the upstream Kubernetes documentation on the topic.
The following yaml file defines the necessary API permissions required by Calico when using the etcd datastore.
kubectl apply -f
Configuring Kubernetes
Configuring the Kubelet
The Kubelet needs to be configured to use the Calico network plugin when starting pods.
The kubelet can be configured to use Calico by starting it with the following options:
--network-plugin=cni
--cni-conf-dir=/etc/cni/net.d
--cni-bin-dir=/opt/cni/bin
For Kubernetes versions prior to v1.4.0, the cni-conf-dir and cni-bin-dir options are not supported. Use --network-plugin-dir=/etc/cni/net.d instead.
See the kubelet documentation for more details.
Configuring the Kube-Proxy
In order to use Calico policy with Kubernetes, the kube-proxy component must be configured to leave the source address of service-bound traffic intact. This feature is first officially supported in Kubernetes v1.1.0 and is the default mode starting in Kubernetes v1.2.0.
We highly recommend using the latest stable Kubernetes release, but if you’re using an older release there are two ways to enable this behavior.
- Option 1: Start the kube-proxy with the --proxy-mode=iptables option.
- Option 2: Annotate the Kubernetes Node API object with net.experimental.kubernetes.io/proxy-mode set to iptables (see the kubectl command sketched below).
See the kube-proxy documentation for more details. | https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/integration | 2019-03-18T15:53:43 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.projectcalico.org |
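As a rough sketch of option 2 (the node name is a placeholder, and you should verify the annotation key against your Kubernetes version), the annotation could be applied with kubectl:

kubectl annotate node <node-name> net.experimental.kubernetes.io/proxy-mode=iptables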
Generate an abstract of a given portion of text. The syntax is
abstract(text[, maxsize[, style[, query]]])
The abstract will be less than maxsize characters long, and will attempt to end at a word boundary. If maxsize is not specified (or is less than or equal to 0) then a default size of 230 characters is used.
The style argument is a string or integer, and allows a choice between several different ways of creating the abstract. Note that some of these styles require the query argument as well, which is a Metamorph query to look for:
dumb (0): Start the abstract at the top of the document.

smart (1): This style will look for the first meaningful chunk of text, skipping over any headers at the top of the text. This is the default if neither style nor query is given.

querysingle (2): Center the abstract contiguously on the best occurrence of query in the document.

querymultiple (3): Like querysingle, but also break up the abstract into multiple sections (separated with "...") if needed to help ensure all terms are visible. Also take care with URLs to try to show the start and end.

querybest: An alias for the best available query-based style; currently the same as querymultiple. Using querybest in a script ensures that if improved styles become available in future releases, the script will automatically "upgrade" to the best style.
If no query is given for the query... modes, they fall back to dumb mode. If a query is given with a non-query... mode (dumb/smart), the mode is promoted to querybest. The current locale and index expressions also have an effect on the abstract in the query... modes, so that it more closely reflects an index-obtained hit.
SELECT abstract(STORY, 0, 1, 'power struggle')
FROM ARTICLES
WHERE ARTID = 'JT09115' ; | https://docs.thunderstone.com/site/texisman/abstract.html | 2019-03-18T15:41:28 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.thunderstone.com |
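As a further illustrative sketch (the table and column names reuse the example above; the size and query string are arbitrary), the style can also be passed by name together with an explicit maximum size:

SELECT abstract(STORY, 300, 'querymultiple', 'trade agreement')
FROM ARTICLES
WHERE ARTID = 'JT09115' ;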
What is this website?
You are now reading the R-hub documentation website, currently a work in progress. You can find its source on GitHub.
Package builder
R-hub package builder
What platform to use?
R-hub package
You can interact with the R package builder via its API client,
rhub, which is documented on its dedicated website.
Submit your package with the webform or with the rhub package's check_on_cran() function.
How do R-hub and CRAN platforms compare?

How can I send my secret API key, token, etc. to R-hub?
You can do that with the rhub package, via the check_args argument of check functions.
You cannot do that with the web interface. See more reasons to use the package rather than the web interface. artefacts i.e. the built package that you can download and send to your colleague; and
rhub itself should soon get a method for retrieving artefacts.
Be careful, artefacts only remain online for a few days so download them as soon as you can.
Regularly
If you want to regularly build and deploy your package on different platforms, what you are looking for is continuous integration.
R-hub CI. Not at the moment, but eventually R-hub will offer a CI service, unsurprisingly specific to R packages.
How to inform R-hub of system requirements for my package?
An Act to repeal 50.36 (3g) and 50.36 (6m) (a) 1.; to amend 50.35, 50.36 (1), 50.36 (2) (a), 50.36 (2) (b), 50.36 (3m), 50.36 (4), 50.36 (6m) (a) (intro.), 50.36 (6m) (a) 2., 50.36 (6m) (a) 3., 50.36 (6m) (b), 50.37 (intro.), 50.37 (4), 50.39 (1) and 323.19 (1); and to create 50.33 (1c), 50.33 (3), 50.36 (1m), 50.36 (3) (am) and 50.36 (3L) of the statutes; Relating to: regulation of hospitals, granting rule-making authority, and requiring the exercise of rule-making authority. (FE) | http://docs.legis.wisconsin.gov/2013/proposals/sb560 | 2019-03-18T15:41:05 | CC-MAIN-2019-13 | 1552912201455.20 | [] | docs.legis.wisconsin.gov |
Techpaper
List of terms
TSD: stands for Total Software Deployment and represents its name and trademark.
Admin unit (main unit): a GUI application operated by the user. It's installed on a workstation or a server computer and is used for computer scanning, as well as for viewing and deploying software.
Scanning: a process of collecting hardware and software information from a computer or a device.
Deployment: a process of installing software on a computer.
Network storage: a database of scanned network computers.
Software storage: a database of software for deployment.
Minimum system requirements for the admin unit
Database mechanism
TSD works with 2 independent databases (also known as Storages): the software storage and the network storage. A Storage is a user-created folder on the hard drive.
In the Network storage, each scanned asset is represented by a separate file. Auxiliary data is stored separately from the asset files and includes user information, logins and passwords for remote access, etc. All data is encrypted.
In the Software storage, each program is represented by a separate folder containing the program installer, the Deployment package(s) and the deployment history. Auxiliary data is stored separately.
It's possible to create several separate storages and switch between them at any time. Asset-related data can be copied to another storage by copying the corresponding file. | http://docs.softinventive.com/tsd/techpaper | 2017-01-16T15:02:35 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.softinventive.com |
Inserts an object into a group.
Inserting display objects into a group also removes the object from its current group (objects cannot be in multiple groups). All display objects are part of the stage object when first created. At this time, Corona only has one stage, which is the entire screen area.
group:insert( [index,] child, [, resetTransform] )
Number. Inserts child at index into group, shifting up other elements as necessary. The default value of index is n+1, where n is the number of children in the group.

An easy way to move an object above all its siblings (top) is to re-insert it: object.parent:insert( object ).
If a group has 3 display objects:

group[1] is at the bottom of the group.
group[2] is in the middle of the group.
group[3] is at the top of the group.

Objects at the higher index numbers will be displayed on top of objects with lower index numbers (if the objects overlap).
DisplayObject. Object to be inserted into the group.
Boolean. Determines what happens to child’s transform. When false, child’s local position, rotation, and scale properties are preserved, except the local origin is now relative to the new parent group, not its former parent. When true, child’s transform is reset (i.e. the x, y, rotation, xScale, and yScale properties of child are reset to 0, 0, 0, 1, and 1, respectively). The default value for resetTransform is false.
local txt = display.newText( "Hello", 0, 0 )
local g1 = display.newGroup()
local g2 = display.newGroup()

-- Insert text object into g1
g1:insert( txt )

-- Insert same text object into g2
g2:insert( txt )

print( "g1[1]: " .. tostring(g1[1]) )  -- prints nil
print( "g2[1]: " .. tostring(g2[1]) )  -- prints textObject
print( "number of children in g1 and g2: " .. g1.numChildren, g2.numChildren )
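The following additional sketch illustrates the resetTransform parameter; the object names and sizes are arbitrary:

local group = display.newGroup()
group.x, group.y = 100, 200                -- move the group away from the origin

local rect = display.newRect( 50, 50, 40, 40 )
rect.rotation = 45

-- Default behavior: the rectangle keeps its local transform
group:insert( rect )

-- Re-insert with resetTransform = true: x, y, rotation reset to 0; scale resets to 1
group:insert( rect, true )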
Sugar
Sugar is a cross-platform library of low-level classes and APIs that provide functionality that normally is platform-specific in ways that are platform agnostic. By using Sugar APIs instead of platform-specific APIs, you can make your code more portable between .NET, Cocoa and Java.
Examples of functionality provided by Sugar are generic container classes (such as Lists and Dictionaries), support for reading and writing common data formats (such as XML and JSON), and access to low-level system APIs that are available (but different) on each platform (such as file and network access).
We recommend checking out our range of tutorials on Writing Cross Platform Code with Sugar, as well as the Sugar API Reference.
Sugar is Open Source and available on GitHub, but of course ships in the box with Elements. | https://docs.elementscompiler.com/Platforms/CrossPlatform/Sugar/ | 2017-01-16T15:08:23 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.elementscompiler.com |
XCTest Sample
Guide on getting existing XCTest test projects running on the Bitbar Testing cloud. To start testing in the cloud, the application package and the test package need to be uploaded to the cloud.
Compiling Unit tests - This should be the default setting, but it’s worth double checking. Open the Build action settings for the scheme in the Scheme Editor. Verify that in the Run column, your test targets are checked. This means that when typing Command-B or even running the app, the tests are compiled too.
In order for the classes under test to be available within the test bundle, they need to be included with test target membership.
In the example project, MyModel.swift was a class under test, so it needed to be added to the test target membership. Normally this isn’t required with Swift, because the
@testable annotation imports the required modules.
To compile the tests for a device, select Real device from the menu and press Command-B:
In Xcode 7 one can right click on .xctest under Product, and select Show in Finder:
In Xcode 8 the .xctest bundle can be found inside the .app, so in order to locate it, right-click the app and select Show in Finder. Then right-click the app again in Finder and select Show Package Contents.
Then go to the Plugins folder, right-click the .xctest bundle, and select Compress (a command-line alternative is sketched below):
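If you prefer the terminal, the same archive can be produced with zip; the paths and bundle name below are placeholders for your own app and test target:

cd MyApp.app/Plugins
zip -r MyAppTests.xctest.zip MyAppTests.xctest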
Now the XCTest package (zip) exists and can be uploaded to the Bitbar Testing cloud together with the IPA package created earlier.
XCTest Test Run
To run XCTest tests, an XCTest test project needs to be created in Bitbar Testing.
When creating the test run, upload the .ipa file when asked for the application file, and the zip package for the test cases.
None], dtype=object)
array(['http://docs.testdroid.com/assets/xcode/xctest/xc-xctest-2.png',
None], dtype=object)
array(['http://docs.testdroid.com/assets/xcode/xctest/xc-xctest-3.png',
None], dtype=object)
array(['http://docs.testdroid.com/assets/xcode/xctest/xc-xctest-4.png',
None], dtype=object)
array(['http://docs.testdroid.com/assets/xcode/xctest/xc-xctest-5.png',
None], dtype=object)
array(['http://docs.testdroid.com/assets/xcode/xctest/xc-xctest-6.png',
None], dtype=object)
array(['http://docs.testdroid.com/assets/xcode/xctest/xc-xctest-7.png',
None], dtype=object) ] | docs.testdroid.com |
Configuring mail settings
Configuring the built-in MTA and mail server
Configuring the host name, port numbers, relay, mail queue and DSN
Configuring relay server options
Configure an SMTP relay, if needed, to which the FortiMail unit will relay outgoing email. This is typically provided by your Internet service provider (ISP), but could be a mail relay on your internal network.

Server relay is ignored if the FortiMail unit is operating in transparent mode, and “Relaying using FortiMail’s built-in MTA versus unprotected SMTP servers” on page 266 is enabled.

Server relay is ignored for email that matches an antispam or content profile where you have enabled Deliver to alternate host.
GUI item
Description
Relay server name
Enter the domain name of an SMTP relay.
Relay server port
Enter the TCP port number on which the SMTP relay listens.
This is typically provided by your Internet service provider (ISP).
Use SMTPs
Enable to initiate SSL- and TLS-secured connections to the SMTP relay if it supports SSL/TLS.
When disabled, SMTP connections from the FortiMail unit’s built-in MTA or proxy to the relay will occur as clear text, unencrypted.
This option must be enabled to initiate SMTPS connections.
Authentication Required

If the relay server requires use of the SMTP AUTH command, enable this option, click the arrow to expand the section, and configure:

• User name: Enter the name of the FortiMail unit’s account on the SMTP relay.

• Password: Enter the password for the FortiMail unit’s user name.

• Authentication type: Available SMTP authentication types include:

• AUTO (automatically detect and use the most secure SMTP authentication type supported by the relay server)

• PLAIN (provides an unencrypted, scrambled password)

• LOGIN (provides an unencrypted, scrambled password)

• DIGEST-MD5 (provides an encrypted hash of the password)

• CRAM-MD5 (provides an encrypted hash of the password, with hash replay prevention, combined with a challenge and response mechanism)
Difference between revisions of "Presentations/Fall 2010"
Contents
Full Presentations
Lightning Talks
Resources are presentation files or links.
Presentation Series
Linux
Below is a really rough outline.
Masterpiece Theater
- Keep copyright in mind; films should be in the public domain or licensed such that we can show them. (See [4] for ideas)
Installation Media
We provide ready to use ISO images that include pre-installed software. These can either be used "live" without installation, or they can be used to set up a Fedora 33 Workstation based system that includes a set of Neuroscience software.
Computational Neuroscience
A Fedora 33 based ISO image is available for download from the Fedora website.
It includes a variety of simulators and analysis tools used in computational neuroscience, such as: auryn, bionetgen, calcium-calculator, COPASI, qalculate, getdp, genesis-simulator, gnuplot, moose, nest, neuron, neurord, octave, paraview, python3, brian2, ipython, nest, neuron, libNeuroML, neo, nineml, PyLEMS, and smoldyn.
It also includes the Python Science stack: matplotlib, jupyter notebook, numpy, pandas, pillow, scikit-image, scikit-learn, scipy, statsmodels, and sympy. Additionally, it also includes Julia and R programming languages that are commonly used for analysis.
More software from the Fedora repositories can be installed using the package manager. | https://docs.fedoraproject.org/it/neurofedora/install-media/ | 2021-02-24T21:52:53 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.fedoraproject.org |
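For example, assuming a standard Fedora system and the package names listed above (exact names may vary by release), a tool can be installed with:

sudo dnf install nest neuron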
Other Documentations
How to open RSVP form inside eventcard
April 17, 2017
RSVP-ing to an event will open a lightbox type RSVP form as shown below. This is the default behavior of RSVP addon for eventON.
You also have the option to allow your website visitors to open the RSVP form within the eventCard of the event. On the event edit page in wp-admin, select Show RSVP form within EventCard instead of lightbox.
Once this is enabled and saved for the event, the RSVP form will no longer open as a lightbox for this event; rather, it will open as an inline form inside the eventCard, as shown below.
TIPS:
If you are having issues with the lightbox version of the RSVP form not appearing correctly on certain devices, the inline RSVP form is a viable alternative.
None], dtype=object)
array(['https://www.myeventon.com/wp-content/uploads/2017/04/Capture-2.png',
None], dtype=object)
array(['https://www.myeventon.com/wp-content/uploads/2017/04/Capture-3.png',
None], dtype=object) ] | docs.myeventon.com |
Permalink is the custom URL structure for your website. After installing WP Crowdfunding, you must use the Post Name structure, because the dashboard URL and other dependent links are set up based on the slug that you choose.
To set up the correct permalink type for your WordPress website, go to
wp-admin (WordPress Dashboard) → Settings → Permalinks
If you have set anything other than Post Name, then there is a very high chance of your visitors getting a 404 error on the frontend section of your website.
| https://docs.themeum.com/wp-crowdfunding/basic/permalink-settings/ | 2021-02-24T20:45:06 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['https://docs.themeum.com/wp-content/uploads/2020/08/wpcf-permalink-1.png',
None], dtype=object) ] | docs.themeum.com |
Percentages and per mille

Students can use percent and per mille in the answers. The symbol percent (%) must be typed with the keyboard, while the symbol per mille (‰) has a button in the answer toolbar. If you expect the students to answer with per mille, you must provide them with the MathType toolbar. Alternatively, you can provide that symbol in the question wording for them to copy & paste. They can also use the combination ALT+0137 (numeric keypad).

These symbols are similar to physical units: you can do basic arithmetic with them, and also convert() them. You can make an algorithm with these symbols; CalcMe will translate them as percent and permille.
Why we ask devs to do a coding test
Companies on OfferZen expect our developers to be able to pass a coding test, so we have designed our own test which we ask developers to complete when they apply.
We give developers 7 days to complete the test. We look at the test result and the dev's profile to determine whether or not they are a fit for what companies are looking for. Once devs complete the test, our team reviews the results within 48 hours.
There are 3 tasks in the test and devs have 90 minutes to complete them. They can be solved in any order. Unfortunately the test cannot be taken a second time so it's important for someone taking the test to ensure they won't be interrupted.
The solutions can each be written in the dev's preferred language. The supported languages are: C, C#, C++, Go, Java, JavaScript, Lua, Objective-C, PHP, Pascal, Perl, Python, Ruby, Scala, Swift 2, Swift 3 or VB.NET.
Cheating is easy for us to discover. We will notice if a dev copies ready-made solutions.
We use Codility.com for as our testing service, you can find out more info about Codility here:
- Codility FAQ: find out more about Codility tests
- Example feedback: see an example of what you will see after completing the test
- Example test report: see an example of the report on which you will be graded
What happens if I don't pass the coding test?
If you don't pass the coding test you can reapply to do the test in 6-12 months. | http://docs.offerzen.com/en/articles/745235-offerzen-coding-test | 2021-02-24T20:41:26 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.offerzen.com |
Glossary
A
- Admin Node Manager
API Gateway component responsible for managing API Gateway instances in a domain. For example, this includes collecting monitoring information, managing dynamic settings, and deploying API and policy configuration. The Admin Node Manager must be running to use the API Gateway management tools that connect to it (for example, Policy Studio and API Gateway Manager).
- API
Application Programming Interface (API) is a set of business services that an enterprise can expose to external customers, partners, or employees using a range of different technologies on a range of different devices. For example, APIs typically support HTTP requests and JSON or XML responses to enable mobile client applications.
- API Catalog
Contains the APIs that have been registered in API Manager and are available for use by client application developers. They can browse these APIs and their associated documentation, and invoke APIs using the built-in test capability.
- API Gateway
Server-side application that manages, delivers, and secures APIs. API Gateway provides services such as API transformation, control and governance, security, monitoring, development lifecycle, and administration.
- API Manager
Web-based API administration and partner management tool that is layered on API Gateway. API administrators use API Manager to administer the managed APIs that are exposed to API consumers.
- API package
The complete package of artifacts associated with an API registered in API Manager. This is used to export and import the API in a single package to enable promotion from sandbox to production APIs.
- API Portal
Self-service developer portal that enables client application developers to browse and consume APIs for use in their applications.
B
- Base64
Method of encoding 8-bit characters as ASCII printable characters. It is typically used to encode binary data so that it may be sent over text-based protocols such as HTTP and SMTP. Base64 is a scheme where 3 bytes are concatenated, and split to form 4 groups of 6-bits each. Each 6-bits is translated to an encoded printable ASCII character, using a table lookup. The specification is described in RFC 2045.
- Back-end API
- Used in the context of API Manager. The actual API exposed by the back-end service. API Manager imports a back-end API and forwards requests from the corresponding front-end API to it.
C
- CA
Certificate Authority (CA) issues digital certificates (especially X.509 certificates), and vouches for the binding between the data items in a certificate.
- cacerts
File used to keep the root certificates of signing authorities. This is typically stored in ..\jre\lib\security\cacerts. Each entry is identified by a unique alias, and is a key entry or a certificate entry. Key entries consist of a key pair, and certificate entries consist of just a certificate. Because you implicitly trust all CAs in the cacerts file for code signing and verification, you must manage the cacerts file carefully. The cacerts file should contain only certificates of the CAs you trust.
- CMS
Content Management System
- CRL
Certificate Revocation List (CRL) is a signed list indicating a set of certificates that are no longer considered valid by the certificate issuer. CRLs may be used to identify revoked public-key certificates or attribute certificates, and may represent revocation of certificates issued to authorities or to users. The term CRL is also commonly used as a generic term applying to different types of revocation lists.
D
- DName
Distinguished Name (DName or DN) is an identifier that uniquely represents an object in the X.500 Directory Information Tree (DIT). A DName is a set of attribute values that identify the path leading from the base of the DIT to the object that is named. An X.509 public-key certificate or CRL contains a DName that identifies its issuer, and an X.509 attribute certificate contains a DN or other form of name that identifies its subject.
- Domain
Multiple groups of API Gateways spanning multiple host machines. An API Gateway domain is a distinct administrative entity, which is managed separately by tools such as API Gateway Manager and API Gateway Analytics.
F
- Filter
Executable rule that performs a specific type of processing on a message. For example, the Message Size filter rejects messages that are greater or less than a specified size. Many categories of message filters are available with API Gateway (for example, Authentication, Authorization, Content filtering, Conversion, Trust, and so on). In Policy Studio, a filter is displayed as a block of business logic that forms part of an execution flow known as a policy.
- Front-end API
- Used in the context of API Manager. The API that API Manager exposes to client applications. A front-end API is created by virtualizing a back-end API, typically adding security and other policies.
G
- Group
One or more API Gateway instances that are managed as a unit and run the same configuration to virtualize the same APIs and execute the same policies. API Gateway groups enable you to organize API Gateway instances by solution type and manage them as a single entity.
H
- HTTP
Hypertext Transfer Protocol (HTTP) is a protocol for distributed hypermedia systems. HTTP is the foundation of data communication for the World Wide Web. For more details, see the HTTP specification.
- HTTPS
Hypertext Transfer Protocol Secure (HTTPS) is a protocol for secure communication over a computer network, and which is widely deployed on the Internet. It is the result of layering HTTP on top of the SSL/TLS protocol. For more details, see the HTTPS (HTTP over TLS) specification.
J
- JMS
Java Message Service (JMS) is a messaging standard that enables application components based on Java 2 Enterprise Edition (J2EE) to create, send, receive, and read messages. It enables communication between different components of a distributed application to be loosely coupled, reliable, and asynchronous. For more details, see the JMS specification.
- JSON
JavaScript Object Notation (JSON) is a lightweight data-interchange format, which is easy for humans to read and write, and easy for machines to parse and generate. JSON is based on a subset of the JavaScript programming language. Its text format is programming language independent, but uses conventions that are familiar to programmers of the C family of languages (for example, C, C++, C#, Java, JavaScript, Perl, and Python). For more details, see the JSON specification.
- JSON Path
JSON Path enables you to locate and process specific parts of a JSON document. It is available in programming languages such as JavaScript, Java, Python and PHP. For more details, see the JSON specification.
K
- Keystore
The keystore file of the JDK contains your public and private keys. It has a file name of .keystore (the leading dot makes the file hidden on Unix). It is stored in PKCS #12 format, contains both public and private keys, and is protected by a passphrase.
- KPS
Key Property Store (KPS) is a data management component in the API Gateway. Data in a KPS table is assumed to be read frequently and seldom written, and can be changed without incurring an API Gateway service outage. KPS tables are shared across an API Gateway group.
L
- LDAP
LDAP is a lightweight version of Directory Access Protocol (DAP), which is part of X.500, a standard for directory services in a network. An LDAP directory stores information on resources in a hierarchical fashion, which makes data retrieval very efficient.
N
- Node Manager
API Gateway component responsible for managing API Gateway instances on a host machine. There must be one Node Manager on each managed host machine. A single Admin Node Manager communicates with all Node Managers in a domain to perform management operations.
O
- OCSP
Online Certificate Status Protocol (OCSP) is an automated certificate checking network protocol. A client will query the OCSP responder for the status of a certificate. The responder returns whether the certificate is still trusted by the CA that issued it.
P
- PEM
Privacy Enhanced Mail (PEM) was originally intended for securing email using various encryption techniques. Its scope widened for use in a broader range of applications, such as Web servers. Its format is essentially a base64-encoded certificate wrapped in BEGIN CERTIFICATE and END CERTIFICATE directives.
- PKCS#12
Standard for storing private keys and X.509 certificates securely (for example, in a .p12 file).
- Policy
Network of API Gateway filters in which each filter is a modular unit that processes a message. Messages can traverse different paths through the policy, depending on which filters succeed or fail. For example, you could configure policies routing messages that pass a Schema Validation filter to a back-end system, and routing messages that pass a different Schema Validation filter to another system. A policy can also contain other policies, which enables you to build modular reusable policies.
- Private key
Secret component of a pair of cryptographic keys used for asymmetric cryptography.
- Public key
Publicly-disclosed component of a pair of cryptographic keys used for asymmetric cryptography.
R
- RBAC
Role-Based Access Control (RBAC) restricts system access to authorized users based on assigned roles. Permissions to perform specific system operations are assigned to specific roles, and system users are granted permission to perform specific operations only through their roles. This simplifies system administration because users do not need to be assigned permissions directly, and instead acquire them through their assigned roles.
- REST
Representational State Transfer (REST) is an architectural style for building large-scale distributed software that uses the technologies and protocols of the World Wide Web (for example, JSON/XML and HTTP). For more details, see Roy Fielding's dissertation, which defines REST.
S
- SAML
Security Assertion Markup Language (SAML) is an XML standard for establishing trust between entities. SAML assertions contain identity information about users (authentication assertions), and information about user access permissions of (authorization assertions). When a user is authenticated at a site, the site issues a SAML authentication assertion to the user. The user can use this assertion in requests at other affiliated sites. These sites need only check the details in the authentication assertion to authenticate the user. In this way, SAML allows authentication and authorization information to be shared between different sites.
- Selector
Special syntax that enables API Gateway configuration settings to be evaluated and expanded at runtime based on metadata values (for example, from a KPS, message attribute, or environment variable).
- Signature
- Value computed with a cryptographic algorithm and added to a data object in such a way that any recipient of the data can use the signature to verify its origin and integrity.
- SOAP
Simple Object Access Protocol (SOAP) is an XML-based object invocation protocol. SOAP was originally developed for distributed applications to communicate over HTTP and corporate firewalls. SOAP defines the use of XML and HTTP to access services, objects, and servers in a platform-independent way. SOAP is a wire protocol that can be used to facilitate highly ultra-distributed architecture. For more details, see the SOAP specification.
- SSL
Secure Sockets Layer (SSL) is an encrypted communication protocol for sending information securely across the Internet. It sits just above the transport layer, and below the application layer and transparently handles the encryption and decryption of data when a client establishes a secure connection to the server. It optionally provides peer entity authentication between client and server.
T
- TLS
Transport Layer Security (TLS) is the successor to SSL 3.0. Like SSL, it allows applications to communicate over a secure channel.
U
- UDDI
Universal Description, Discovery, and Integration (UDDI) is an XML-based lookup service for locating Web services on the Internet. For more details, see the UDDI standard.
- URI
Uniform Resource Identifier (URI) is a platform-independent way to specify a file or resource on the Web. Strictly speaking, every URL is also a URI, but not every URI is also a URL. For more details on URI formats, see RFC 2396 and RFC 2732.
W
- WSDL
Web Services Description Language (WSDL) is an XML format for describing network services as a set of endpoints operating on messages containing document-oriented or procedure-oriented information. Operations and messages are described abstractly, and bound to a concrete network protocol and message format to define an endpoint. Related concrete endpoints are combined into abstract endpoints (services). WSDL is extensible to allow description of endpoints and messages regardless of what message formats or network protocols are used. For more details, see the WSDL specification.
X
- X.509
Standard that defines the contents and data format of a public key certificate.
- XKMS
XML Key Management Specification (XKMS) uses XML to provide key management services so that a Web service can query the trustworthiness of a user’s certificate over the Internet. XKMS aims to simplify application building by separating digital-signature handling and encryption from the applications themselves. For more details, see the XML Key Management specification.
- XML
Extensible Markup Language (XML) is a subset of Structured General Markup Language (SGML). Its goal is to enable generic SGML to be served, received, and processed on the Web in a similar way to HTML. See the XML Specification for more details.
- XPath
XML Path (XPath) is a language that describes how to locate and process specific parts of an XML document. For more details, see the XML Path Language specification.
- XSL
XML Stylesheet Language (XSL) is used to convert XML documents into different formats, the most common of which is HTML. In a typical scenario, an XML document references an XSL stylesheet, which defines how the XML elements of the document should be displayed as HTML. This enables a clear separation of content and presentation.
- XSLT
Extensible Stylesheet Language Transformation (XSLT) is used to convert XML documents into other XML documents or other formats (for example, HTML, plain text, or XSL Formatting Objects).
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::CodeGuruProfiler::Types::GetFindingsReportAccountSummaryResponse
- Inherits:
- Struct
- Object
- Struct
- Aws::CodeGuruProfiler::Types::GetFindingsReportAccountSummaryResponse
- Defined in:
- (unknown)
Overview
The structure representing the GetFindingsReportAccountSummaryResponse.
Returned by:
Instance Attribute Summary collapse
- #next_token ⇒ String

  The nextToken value to include in a future GetFindingsReportAccountSummary request.

- #report_summaries ⇒ Array<Types::FindingsReportSummary>

  The returned list of FindingsReportSummary objects that contain summaries of analysis results for all profiling groups in your AWS account.
Instance Attribute Details
#next_token ⇒ String

The nextToken value to include in a future GetFindingsReportAccountSummary request. When the results of a GetFindingsReportAccountSummary request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.
#report_summaries ⇒ Array<Types::FindingsReportSummary>

The returned list of FindingsReportSummary objects that contain summaries of analysis results for all profiling groups in your AWS account.
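A rough Ruby sketch of paging through all summaries follows; the region and client construction are illustrative, and the operation options should be confirmed against the SDK reference:

require 'aws-sdk-codeguruprofiler'

client = Aws::CodeGuruProfiler::Client.new(region: 'us-east-1')

summaries = []
next_token = nil
loop do
  params = {}
  params[:next_token] = next_token if next_token
  resp = client.get_findings_report_account_summary(params)
  summaries.concat(resp.report_summaries)
  next_token = resp.next_token
  break if next_token.nil?  # nil means there are no more pages
end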
Installing standalone Cloud Portal Web Application
This topic describes how to install the Cloud Portal Web Application (also called the My Cloud Services console, the End User Portal, or clmui) as a standalone application.
The following BMC Communities video describes how to install a standalone BMC Cloud Lifecycle Management Web Portal.
Supported BMC Cloud Lifecycle Management versions
You must be using BMC Cloud Lifecycle Management version 4.5.
Minimum system requirements
To install Cloud Portal Web Application, your system must meet the following minimum system requirements.
Additional requirements for the Cloud Portal Web Application
In addition to the minimum system requirements, ensure that the following requirements are met:
- You are an Administrator or the root user of the computer on which you will install the Cloud Portal Web Application.
- All BMC Cloud Lifecycle Management component products (such as BMC Remedy AR System, BMC Server Automation, and so on) are accessible from the computer on which you will install the Cloud Portal Web Application.
- RSCD Agent is installed on the computer on which you will install the Cloud Portal Web Application. You can use the RSCD installer bundled with your BMC Cloud Lifecycle Management installer files (and not the directory of your installed BMC Cloud Lifecycle Management solution) in the Applications\BL-RSCD directory. For more information about installing RSCD, see Installing only the RSCD agent (Linux and UNIX) and Installing an RSCD agent (Windows).
On Linux systems, ensure that you have execute permission for the JRE directory.
If you are using Microsoft Internet Explorer, ensure that it is not running in Compatibility Mode.
Configure JRE_HOME in your PATH
Ensure that JRE_HOME is in your system PATH variable.
Running the Cloud Portal Web Application in a 2-AR System server environment with LDAP after an upgrade
If you run the Cloud Portal Web Application in a 2-AR System server environment – for example, after you upgraded from 3.1 – users have encountered problems if they have integrated LDAP only with the Enterprise-AR server. Specifically, users could not log on to the new user interface nor could they see their blueprints.
If you are using LDAP in your upgraded environment, BMC recommends the following courses of action:
- As soon as possible, merge the two AR System servers into one AR System server after you finish upgrading. For more information, see Merging two AR System servers into one AR System server.
- If merging the two servers together is not possible at the current time, you must also integrate the Cloud-AR server with LDAP, not just the Enterprise-AR server, before you install the Cloud Portal Web Application.
To install the Cloud Portal Web Application
- Obtain the installer.zip file:
- Download the installer.zip file from EPD to the computer on which you want to install the Cloud Portal Web Application.
- Navigate to the ..\Applications\CLM-UI\Windows folder and locate installer.zip.
- Unzip the installer file.
Do one of the following:
- Review the Welcome panel, and then click Next.
The installer copies files to the target server, verifies free space, and so on.
Take a VM snapshot of the target host, and then click Next.
Review the Destination Directory (by default, C:\Program Files\BMC Software\CloudPortalWeb Application for Windows or /opt/bmc for Linux), and then click Next.
If the path does not exist, you are prompted to accept that the directory will be created. Make sure that you enter a directory with enough space to perform the installation.
Note
For Linux installations, the path must not contain spaces.
- Select the Use Bundled JRE option to simplify SSL configuration (among other advantages), and then click Next.
You can also enter the directory path to an external 64-bit Oracle 1.8 JRE directory – for example, C:\Program Files\Java\jre8.
Review the HTTPS or HTTP port numbers (the HTTPS default port is 8443) used to start up (9070) and shut down (9005) the Cloud Portal Web Application server, then click Next.
Make sure that you use an unused port.
In the Custom CA Certificate Configuration panel, review the certificate information (the default is NO), and then click Next.
You can choose to install using the existing self-signed certificate, or you can provide the location of a third-party certificate and password.
Note: You must copy the third-party certificate to the target host.
- In the Tomcat Web Server Certificate Information panel, review the keystore information or update it as needed, and then click Next.
- In the Common Name (CN) field, enter the FQDN for your host under Common Name (CN).
- In the State Name (S) field, enter the full name of the state or province and not its abbreviation (California, not CA).
Click Next.
In the Configuration Inputs panel, enter the Platform Manager and Self-Check Monitor details (as shown in the following screenshot), and then click Next.
The installer provides sample URLs to open the Platform Manager and Self-Check Monitor. Use the product hosts in your environment to construct the URLs. For example:
https://<PlatformManager>:9443/csm
https://<PlatformManager>:7070/csm
https://<SelfCheckerServer>:8443/health
In the Installation Preview panel, review the information, and then click Install.
In the Installation Summary panel, review the information, and then click Done.
Before you go any further, verify the installation and then configure the My Cloud Services Console.
Where to go from here
Verifying the Cloud Portal Web Application installation
Configuring the My Cloud Services Console | https://docs.bmc.com/docs/cloudlifecyclemanagement/45/installing/performing-the-installation/installing-standalone-cloud-portal-web-application | 2021-02-24T21:19:36 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.bmc.com |
To aggregate the results from a loop in a formula, use the last step of the loop as an aggregator. For example, you can make a JavaScript step called aggregator and use the following code snippet:
let arr = steps.aggregator ? steps.aggregator.arr : [];
<insert custom logic>
done({arr:arr})
This JavaScript checks whether the aggregator step has already run. If it has, it retrieves the results from the previous iteration and adds to them; if it has not, it starts with an empty array. A fuller sketch is shown below.
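An illustrative, more complete sketch (the aggregator step name matches the snippet above; the upstream step and field names are hypothetical):

// Carry the accumulated array across loop iterations.
let arr = steps.aggregator ? steps.aggregator.arr : [];

// Example custom logic: collect one field from a hypothetical upstream step.
arr.push(steps.getContact.response.body.email);

// Return the array so the next iteration (or the step after the loop) can use it.
done({ arr: arr });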
Alerting system status
Let's imagine a management user wants to check the status of the Devo alerting system today.
The user accesses the Security Operations application by clicking Applications → Security Operations in the Devo navigation pane. The first thing that the user sees is the Overview Dashboard, which shows that there are some alerts that have not been triaged. This information can be easily seen in the Most Critical & Not Triaged Alerts widget, at the top of the Alerts group. The user also checks the activity of each entity in the Entities by Impact widget of the Analytics group.
Step 1
To start working with the not triaged critical alerts, the user clicks the Critical button in the Most Critical & Not Triaged Alerts widget. In the window that appears, he clicks Triage to apply the filter and access the Triage area of the application directly.
In the Triage area, the user will see only critical alerts detected in the last 24 hours, ordered by criticality and date, and grouped by entities (IP address, host, user...). Alerts without entities appear at the end of the list. Triggered alerts always appear grouped.
Step 2
A group of Power Shell Exec Bypass alerts is found, and we want to triage them. To do it, the user clicks the alert name to check the individual triggered alerts in that group. In the Alerts Timeline, we see that the alerts are related to the IP address 10.52.60.69, which phished and downloaded attack tools to compromise other systems.
The user wants to add these alerts to an investigation, so he clicks the + button at the top of the window. In the window that appears, he keeps the default option New investigation and clicks Create investigation.
Step 3
The user is redirected to the Investigations area, where he sets the parameters for the new investigation (Name, Importance, ATT&CK Behavior, Details, and so on). He names the investigation RDP Infection Test. All the alerts assigned to the investigation appear under the Detections group of the Evidence area, since Detection is the type of the alerts added. The user clicks Save to record the investigation.
Step 4
The user has noticed that the IP address 10.52.60.69 is causing problems, so he looks for other alerts that may be related to that IP. To do this, the user goes back to the Triage area, enters the IP address as a Keyword, and selects All in the Alert Priority field to check all the incidents related to the IP. Then he clicks Filter.
Previously, the user filtered only Critical alerts, so the new filter surfaces alerts related to the suspicious IP at other priority levels. It returns an alert called New Domain Observed Client, which has the previously detected suspicious IP as an entity, along with another one: 42.62.11.210.
Step 5
The user wants to add this alert to the previously created investigation, so he clicks the + icon, switches the toggle to Add to investigation, and selects the investigation he created (RDP Name Investigation).
Step 6
The investigation now has two different groups of alerts. The first group includes alerts of the Detection type, and the new group has alerts of the Observation type (you can find these under the Observations group in the Evidence area of the investigation).
The user can now go to the Entities and Associations sections to see all the entities (IP addresses, hostnames, and so on) of the added alerts, as well as the relationships between them.
Finally, the user clicks Save to save any modifications to the investigation.
Step 7
Now that the user knows that the IP address 42.62.11.210 is related to suspicious events, he goes to the Hunting area to check events that contain that IP. To do this, the user enters the table ids.bro.http as the Target table, chooses destHost as the Filter key, and enters 42.62.11.210 as the Filter value.
He clicks Add to add the filter to the query, then clicks Filter to see the results that match the specified criteria.
Step 8
Finally, the user adds the hunting results to the investigation he created earlier (RDP Name Investigation). He clicks Add to investigation, switches the toggle to Add to investigation, and selects the required investigation from the list. To end the process, he clicks Add to investigation.
Kinderpedia allows managers and teachers to be added to multiple school or kindergarten accounts using the same email address.
To switch between school accounts, hover over your name in the top-right corner. A menu appears with options for your account, along with a list of the school accounts you have been added to. To switch to a different school account, simply click the name of the account you want. The first school name listed under your name is the account that is currently selected.
A sample is a selection of rows from your dataset, which can be used as the basis for building the transformation steps in your recipe. The Trifacta application automatically creates initial samples of your data whenever you create a new recipe for a dataset and enables you to create additional samples at any time using a variety of sampling techniques.
Initial Sample
When you create a new recipe and load it in the Transformer page, the Trifacta application displays the initial sample of the dataset. The initial sample consists of the first X rows of the dataset, where X is determined by the following factors:
- The number of columns in the dataset
- The amount of data in each cell
- The maximum permitted size of each sample
Take a Sample
These first rows are displayed for you to begin your work in the Transformer page. However, you may begin to run into limitations with this sample. For example, suppose your dataset is organized by date, with earliest dates listed first. There may be significant changes in the data later in the time period that do not appear in the initial sample. You may decide that you need to take a different sample that captures some of these changes.
Steps:
- In the Transformer page, click the Eyedropper icon at the top of the page.
The Samples panel is displayed.
Figure: Samples panel
At the top of the panel, you can review the Current Sample.
Tip: If the current sample indicates Full Data, then the entire dataset is displayed in the data grid. Unless you wish to use a specific sampling technique to filter down your data, sampling may not be useful across the entire dataset.
For more information, see Samples Panel.
Sampling and Memory
NOTE: After you generate a sample, all steps in a recipe that occur after the step selected when you generated the sample are executed in browser memory on the sample data and then displayed in the data grid.
The above statement is best explained by example: suppose your recipe contains 30 steps and you generated your current sample with step 10 selected. Steps 11 through 30 are then executed in your browser's memory against the sample data each time the data grid is displayed.
Implications:
- As you add steps to your recipe without resampling, your recipe and sample consume more memory in your browser.
- When you perform complex multi-dataset operations, such as joins or unions, your recipe/sample combination consumes a lot more memory.
- If you continue adding steps:
- Performance in the browser can be impacted. Basic operations such as selection of data or new recipe steps can become slow to respond.
- The browser can crash.
Sampling Considerations
Tip: When resources permit, it's a good habit to take a new sample after you have added a few multi-dataset operations, or other operations that change the number of rows in your dataset, to your recipe.
Other considerations:
- Generating samples takes time. This is particularly true for Full Scan samples.
- Sampling can cost money. In some cloud-based environments, generating a sample consumes compute resources, which can add to your computing bill.
- You may need multiple samples. For long or complex recipes, you may need to take multiple samples.
- Reference datasets should begin with a sample. When you create a recipe for a reference dataset, you should start by generating a new sample for it.
Invalid samples
Samples can become invalid. If your recipe steps change the number of rows or otherwise reshape your dataset, using transformations such as pivot or join, in the steps leading up to where you took the current sample, your existing sample may no longer be valid.
When the application determines that a sample is invalid:
- The sample can no longer be used. It is now listed under the Unavailable tab in the Samples panel.
The application automatically reverts to the last known good sample.
NOTE: Depending on when the last known good sample was generated, this reversion could suddenly force a large number of steps to be processed in the browser's memory.
- You should consider generating a new sample immediately.
For more information, see Overview of Sampling.