Dataset columns: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars).
Add Anveo Users Anveo Users must be set up for the Anveo Client Suite so that they can log in to the Anveo Mobile App. Any number of roles, in which the access rights are defined, can be allocated to each web user. The roles you specify here differ from the roles in Microsoft Dynamics NAV 2009R2; apart from their basic structure, they have nothing in common with each other. However, the rights defined in the Anveo Client Suite do not override the rights in Microsoft Dynamics NAV 2009R2; the Anveo Client Suite only restricts them further. For the quick setup of Anveo Users, please read how to create an Anveo User here. This requires a fully established Anveo Client Suite with all the necessary services and applications (for more details, see the installation manual).
https://docs.anveogroup.com/en/manual/anveo-mobile-app/setup-of-anveo-users/?product_platform=Microsoft%20Dynamics%20NAV%202009R2&product_name=anveo-mobile-app
2021-07-24T01:36:38
CC-MAIN-2021-31
1627046150067.87
[]
docs.anveogroup.com
The Wireless Clients: Specific Client dashboard displays a suite of reports that present identifying and connection information specific to the individual client selected from within the Wireless Clients report. If the client name is not available or cannot be determined, the client's MAC address is displayed. The client name is determined by Wi-Fi authentication and may not be present under all authentication schemes. Please note, if a client is displayed as 0.0.0.0, this indicates the device's controller is unable to obtain its IP address or the access point cannot determine it. The Bandwidth report displays traffic in and out. The Session Details report displays the following information: The Associations report displays connection time by percentage for each associated access point and SSID. The Signal Quality report displays radio signal strength indicator and signal-to-noise ratio percentages. Please note, this dashboard is completely static. Additional reports cannot be added and the charts described previously cannot be removed. However, you can modify the date range, expand any individual dashboard report to a full-page view, and specify the minimum and maximum number of items to be displayed.
https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/43212.htm
2021-07-24T02:39:28
CC-MAIN-2021-31
1627046150067.87
[]
docs.ipswitch.com
FileContextActionResult.IsSuccess Property Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Gets a value indicating whether the action was successful. public: property bool IsSuccess { bool get(); }; public bool IsSuccess { get; } member this.IsSuccess : bool Public ReadOnly Property IsSuccess As Boolean
https://docs.microsoft.com/en-us/dotnet/api/microsoft.visualstudio.workspace.filecontextactionresult.issuccess?view=visualstudiosdk-2019
2021-07-24T02:37:48
CC-MAIN-2021-31
1627046150067.87
[]
docs.microsoft.com
Notes Follow standard procedures to install or update the New Relic integration for Kubernetes. Changelog Added: samples for Statefulsets, Daemonsets, Endpoints and Services. Added: API Server metrics can now be queried using the secure port. Configure the port using the API_SERVER_SECURE_PORT environment variable. The ClusterRole has been updated to allow this query to happen. Changed: The integration now uses the infrastructure agent v1.8.32-bundle. For more information, refer to the Infrastructure agent release notes between versions v1.8.23 and v1.8.32. The bundle container contains a subset of on-host integrations that are supported by New Relic. This also includes the ability to "auto discover" services running on Kubernetes in a similar way to our container auto-discovery. Changed: The integration has been renamed from nr-kubernetes to nri-kubernetes.
https://docs.newrelic.com/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/kubernetes-integration-1130/
2021-07-24T02:00:21
CC-MAIN-2021-31
1627046150067.87
[]
docs.newrelic.com
Method: Get the latest device status
Response:
{
  "Result": {
    "UserId": "UUID",
    "Heartbeat Date": "YYYY-MM-DDTHH:MM:SS",
    "WiFi Enabled": true,
    "GPS Enabled": true,
    "Mobile Data Enabled": false,
    "Latitude": 0.0,
    "Longitude": 0.0,
    "Extended Data": {
      "Application": "",
      "Device Model": "",
      "Device OS Version": "",
      "GPS Permission Granted": "",
      "Low Power Mode": "",
      "Low Precise Location (iOS)": "",
      "Motion & Fitness Permission Granted (iOS)": "",
      "Motion Activity Permission Granted (Android)": "",
      "SDK Version": ""
    }
  },
  "Status": 200,
  "Title": "",
  "Errors": []
}
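A minimal Python sketch of calling this endpoint and reading the response, assuming a GET request with bearer-token authentication; the base URL, path, query parameter name, and auth scheme below are illustrative assumptions, not the documented values:

```python
# Sketch only: fetch the latest device status and read a few fields.
# The base URL, endpoint path, parameter name, and auth header are assumptions.
import requests

BASE_URL = "https://api.example-telematics.com"      # hypothetical host
ENDPOINT = f"{BASE_URL}/device-status/latest"        # hypothetical path


def get_latest_device_status(user_id: str, token: str) -> dict:
    """Return the parsed 'Result' object from the device status response."""
    response = requests.get(
        ENDPOINT,
        params={"UserId": user_id},                  # hypothetical parameter name
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    payload = response.json()
    if payload.get("Status") != 200 or payload.get("Errors"):
        raise RuntimeError(f"API reported errors: {payload.get('Errors')}")
    return payload["Result"]


if __name__ == "__main__":
    result = get_latest_device_status("UUID", "YOUR_ACCESS_TOKEN")
    print(result["Heartbeat Date"], result["GPS Enabled"], result["WiFi Enabled"])
```

The check on "Status" and "Errors" mirrors the response envelope shown above; everything else should be adapted to the actual Device Status API reference.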
https://docs.telematicssdk.com/docs/device-status-api
2021-07-24T02:18:08
CC-MAIN-2021-31
1627046150067.87
[]
docs.telematicssdk.com
WANG Wan-jo Introduction WANG Wan-jo holds an MA in Screenwriting from Exeter University. She was an important production team member of the literary documentary series The Inspired Island. Her first feature-length documentary, River Without Banks (2012), depicts a full picture of the prominent Taiwanese poet LO Fu. In 2017, her second work, A Foley Artist, was theatrically released in Taiwan and Hong Kong.
https://docs.tfi.org.tw/en/filmmakers/3351
2021-07-24T00:43:56
CC-MAIN-2021-31
1627046150067.87
[array(['https://docs.tfi.org.tw/sites/default/files/styles/maker_photo/public/photo/Director%20Chen%20Uen%20%E7%8E%8B%E5%A9%89%E6%9F%94_01.jpg?itok=Xu5nc0TT&c=9c1ab9b6cd9daa0e25e60c036b749820', None], dtype=object) ]
docs.tfi.org.tw
Recurring Action Log records WhatsUp Gold activity related to scheduled or recurring actions. It is useful for validating action completion, actions preempted due to a blackout period, and so on. Recurring Action Log event data can be exported, reused, and distributed. Select export to access the following options:
https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/41486.htm
2021-07-24T01:05:04
CC-MAIN-2021-31
1627046150067.87
[]
docs.ipswitch.com
Tutorial: Add sign-in to Microsoft to an ASP.NET web app In this tutorial, you build an ASP.NET MVC web app that signs in users by using the Open Web Interface for .NET (OWIN) middleware and the Microsoft identity platform. When you've completed this guide, your application will be able to accept sign-ins of personal accounts from the likes of outlook.com and live.com. Additionally, work and school accounts from any company or organization that's integrated with the Microsoft identity platform will be able to sign in to your app. In this tutorial: - Create an ASP.NET Web Application project in Visual Studio - Add the Open Web Interface for .NET (OWIN) middleware components - Add code to support user sign-in and sign-out - Register the app in the Azure portal - Test the app Prerequisites - Visual Studio 2019 with the ASP.NET and web development workload installed How the sample app generated by this guide works The sample application you create is based on a scenario where you use the browser to access an ASP.NET website that prompts a user to authenticate through a sign-in button. In this scenario, most of the work to render the web page occurs on the server side. Libraries This guide uses the following libraries: Set up your project This section describes how to install and configure the authentication pipeline through OWIN middleware on an ASP.NET project by using OpenID Connect. Prefer to download this sample's Visual Studio project instead? Download a project and skip to the Register your application to configure the code sample before executing. Create your ASP.NET project - In Visual Studio: Go to File > New > Project. - Under Visual C#\Web, select ASP.NET Web Application (.NET Framework). - Name your application and select OK. - Select Empty, and then select the check box to add MVC references. Add authentication components In Visual Studio: Go to Tools > NuGet Package Manager > Package Manager Console. Add OWIN middleware NuGet packages by typing the following in the Package Manager Console window: Install-Package Microsoft.Owin.Security.OpenIdConnect Install-Package Microsoft.Owin.Security.Cookies Install-Package Microsoft.Owin.Host.SystemWeb About these libraries These libraries enable single sign-on (SSO) by using OpenID Connect through cookie-based authentication. After authentication is completed and the token representing the user is sent to your application, OWIN middleware creates a session cookie. The browser then uses this cookie on subsequent requests so that the user doesn't have to retype the password, and no additional verification is needed. Configure the authentication pipeline The following steps are used to create an OWIN middleware Startup class to configure OpenID Connect authentication. This class is executed automatically when your IIS process starts. Tip If your project doesn't have a Startup.cs file in the root folder: - Right-click the project's root folder, and then select Add > New Item > OWIN Startup class. - Name it Startup.cs. Make sure the class selected is an OWIN Startup class and not a standard C# class. Confirm this by verifying that you see [assembly: OwinStartup(typeof({NameSpace}.Startup))] above the namespace. 
Add OWIN and Microsoft.IdentityModel references to Startup.cs: using System.Threading.Tasks; using Microsoft.Owin; using Owin; using Microsoft.IdentityModel.Protocols.OpenIdConnect; using Microsoft.IdentityModel.Tokens; using Microsoft.Owin.Security; using Microsoft.Owin.Security.Cookies; using Microsoft.Owin.Security.OpenIdConnect; using Microsoft.Owin.Security.Notifications; Replace the Startup class with the following code: public class Startup { // The Client ID is used by the application to uniquely identify itself to Microsoft identity platform. string clientId = System.Configuration.ConfigurationManager.AppSettings["ClientId"]; // RedirectUri is the URL where the user will be redirected to after they sign in. string redirectUri = System.Configuration.ConfigurationManager.AppSettings["RedirectUri"]; // Tenant is the tenant ID (e.g. contoso.onmicrosoft.com, or 'common' for multi-tenant) static string tenant = System.Configuration.ConfigurationManager.AppSettings["Tenant"]; // Authority is the URL for authority, composed of the Microsoft identity platform and the tenant name (e.g.) string authority = String.Format(System.Globalization.CultureInfo.InvariantCulture, System.Configuration.ConfigurationManager.AppSettings["Authority"], tenant); // Configure OWIN to use OpenID Connect and cookie authentication public void Configuration(IAppBuilder app) { app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType); app.UseCookieAuthentication(new CookieAuthenticationOptions()); app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions { // Sets the ClientId, authority, RedirectUri as obtained from web.config ClientId = clientId, Authority = authority, RedirectUri = redirectUri, // PostLogoutRedirectUri is the page that users will be redirected to after sign-out. In this case, it is using the home page PostLogoutRedirectUri = redirectUri, Scope = OpenIdConnectScope.OpenIdProfile, // ResponseType is set to request the code id_token - which contains basic information about the signed-in user ResponseType = OpenIdConnectResponseType.CodeIdToken, // ValidateIssuer set to false to allow personal and work accounts from any organization to sign in to your application // To only allow users from a single organization, set ValidateIssuer to true and the 'tenant' setting in web.config to the tenant name // To allow users from only a list of specific organizations, set ValidateIssuer to true and use the ValidIssuers parameter TokenValidationParameters = new TokenValidationParameters() { ValidateIssuer = false // This is a simplification }, // OpenIdConnectAuthenticationNotifications configures OWIN to send notification of failed authentications to the OnAuthenticationFailed method Notifications = new OpenIdConnectAuthenticationNotifications { AuthenticationFailed = OnAuthenticationFailed } }); } // Handle failed authentication requests by redirecting the user to the home page with an error in the query string private Task OnAuthenticationFailed(AuthenticationFailedNotification<OpenIdConnectMessage, OpenIdConnectAuthenticationOptions> context) { context.HandleResponse(); context.Response.Redirect("/?errormessage=" + context.Exception.Message); return Task.FromResult(0); } } Note Setting ValidateIssuer = false is a simplification for this quickstart. In real applications, you must validate the issuer. See the samples to learn how to do that. More information The parameters you provide in OpenIdConnectAuthenticationOptions serve as coordinates for the application to communicate with the Microsoft identity platform. Because the OpenID Connect middleware uses cookies in the background, you must also set up cookie authentication as the preceding code shows. The ValidateIssuer value tells OpenIdConnect not to restrict access to one specific organization. Add a controller to handle sign-in and sign-out requests To create a new controller to expose sign-in and sign-out methods, follow these steps: Right-click the Controllers folder and select Add > Controller. Select MVC (.NET version) Controller – Empty. Select Add. Name it HomeController and then select Add. Add OWIN references to the class: using Microsoft.Owin.Security; using Microsoft.Owin.Security.Cookies; using Microsoft.Owin.Security.OpenIdConnect; Add the following two methods to handle sign-in and sign-out to your controller by initiating an authentication challenge: /// <summary> /// Send an OpenID Connect sign-in request.
/// Alternatively, you can just decorate the SignIn method with the [Authorize] attribute /// </summary> public void SignIn() { if (!Request.IsAuthenticated) { HttpContext.GetOwinContext().Authentication.Challenge( new AuthenticationProperties{ RedirectUri = "/" }, OpenIdConnectAuthenticationDefaults.AuthenticationType); } } /// <summary> /// Send an OpenID Connect sign-out request. /// </summary> public void SignOut() { HttpContext.GetOwinContext().Authentication.SignOut( OpenIdConnectAuthenticationDefaults.AuthenticationType, CookieAuthenticationDefaults.AuthenticationType); } Create the app's home page for user sign-in In Visual Studio, create a new view to add the sign-in button and to display user information after authentication: Right-click the Views\Home folder and select Add View. Name the new view Index. Add the following HTML, which includes the sign-in button, to the file: <html> <head> <meta name="viewport" content="width=device-width" /> <title>Sign in with Microsoft Guide</title> </head> <body> @if (!Request.IsAuthenticated) { <!-- If the user is not authenticated, display the sign-in button --> <a href="@Url.Action("SignIn", "Home")" style="text-decoration: none;"> <svg xmlns="" xml: <style type="text/css">.fil0:hover {fill: #4B4B4B;} .fnt0 {font-size: 260px;font-family: 'Segoe UI Semibold', 'Segoe UI'; text-decoration: none;}</style> <rect class="fil0" x="2" y="2" width="3174" height="517" fill="black" /> <rect x="150" y="129" width="122" height="122" fill="#F35325" /> <rect x="284" y="129" width="122" height="122" fill="#81BC06" /> <rect x="150" y="263" width="122" height="122" fill="#05A6F0" /> <rect x="284" y="263" width="122" height="122" fill="#FFBA08" /> <text x="470" y="357" fill="white" class="fnt0">Sign in with Microsoft</text> </svg> </a> } else { <span><br/>Hello @System.Security.Claims.ClaimsPrincipal.Current.FindFirst("name").Value;</span> <br /><br /> @Html.ActionLink("See Your Claims", "Index", "Claims") <br /><br /> @Html.ActionLink("Sign out", "SignOut", "Home") } @if (!string.IsNullOrWhiteSpace(Request.QueryString["errormessage"])) { <div style="background-color:red;color:white;font-weight: bold;">Error: @Request.QueryString["errormessage"]</div> } </body> </html> More information This page adds a sign-in button in SVG format with a black background: For more sign-in buttons, go to the Branding guidelines. Add a controller to display user's claims This controller demonstrates the uses of the [Authorize] attribute to protect a controller. This attribute restricts access to the controller by allowing only authenticated users. The following code makes use of the attribute to display user claims that were retrieved as part of sign-in: Right-click the Controllers folder, and then select Add > Controller. Select MVC {version} Controller – Empty. Select Add. Name it ClaimsController. Replace the code of your controller class with the following code. 
This adds the [Authorize]attribute to the class: [Authorize] public class ClaimsController : Controller { /// <summary> /// Add user's claims to viewbag /// </summary> /// <returns></returns> public ActionResult Index() { var userClaims = User.Identity as System.Security.Claims.ClaimsIdentity; //You get the user's first and last name below: ViewBag.Name = userClaims?.FindFirst("name")?.Value; // The 'preferred_username' claim can be used for showing the username ViewBag.Username = userClaims?.FindFirst("preferred_username")?.Value; // The subject/ NameIdentifier claim can be used to uniquely identify the user across the web ViewBag.Subject = userClaims?.FindFirst(System.Security.Claims.ClaimTypes.NameIdentifier)?.Value; // TenantId is the unique Tenant Id - which represents an organization in Azure AD ViewBag.TenantId = userClaims?.FindFirst("")?.Value; return View(); } } More information Because of the use of the [Authorize] attribute, all methods of this controller can be executed only if the user is authenticated. If the user isn't authenticated and tries to access the controller, OWIN initiates an authentication challenge and forces the user to authenticate. The preceding code looks at the list of claims for specific user attributes included in the user's ID token. These attributes include the user's full name and username, as well as the global user identifier subject. It also contains the Tenant ID, which represents the ID for the user's organization. Create a view to display the user's claims In Visual Studio, create a new view to display the user's claims in a web page: Right-click the Views\Claims folder, and then select Add View. Name the new view Index. Add the following HTML to the file: <html> <head> <meta name="viewport" content="width=device-width" /> <title>Sign in with Microsoft Sample</title> <link href="@Url.Content("~/Content/bootstrap.min.css")" rel="stylesheet" type="text/css" /> </head> <body style="padding:50px"> <h3>Main Claims:</h3> <table class="table table-striped table-bordered table-hover"> <tr><td>Name</td><td>@ViewBag.Name</td></tr> <tr><td>Username</td><td>@ViewBag.Username</td></tr> <tr><td>Subject</td><td>@ViewBag.Subject</td></tr> <tr><td>TenantId</td><td>@ViewBag.TenantId</td></tr> </table> <br /> <h3>All Claims:</h3> <table class="table table-striped table-bordered table-hover table-condensed"> @foreach (var claim in System.Security.Claims.ClaimsPrincipal.Current.Claims) { <tr><td>@claim.Type</td><td>@claim.Value</td></tr> } </table> <br /> <br /> @Html.ActionLink("Sign out", "SignOut", "Home", null, new { @class = "btn btn-primary" }) </body> </html> Register your application To register your application and add your application registration information to your solution, you have two options: Option 1: Express mode To quickly register your application, follow these steps: - Go to the Azure portal - App registrations quickstart experience. - Enter a name for your application and select Register. - Follow the instructions to download and automatically configure your new application in a single click. Option 2: Advanced mode To register your application and add the app's registration information to your solution manually, follow these steps: Open Visual Studio, and then: - in Solution Explorer, select the project and view the Properties window (if you don't see a Properties window, press F4). - Change SSL Enabled to True. - Right-click the project in Visual Studio, select Properties, and then select the Web tab. 
In the Servers section, change the Project Url setting to the SSL URL. - Copy the SSL URL. You'll add this URL to the list of Redirect URIs in the Registration portal's list of Redirect URIs in the next step. If you have access to multiple tenants, use the Directory + subscription filter in the top menu to select the tenant in which you want to register an application. Under Manage, select App registrations > New registration. Enter a Name for your application, for example ASPNET-Tutorial. Users of your app might see this name, and you can change it later. Add the SSL URL you copied from Visual Studio in step 1 (for example,) in Redirect URI. Select Register. Under Manage, select Authentication. In the Implicit grant and hybrid flows section, select ID tokens, and then select Save. Add the following in the web.config file, located in the root folder in the configuration\appSettingssection: <add key="ClientId" value="Enter_the_Application_Id_here" /> <add key="redirectUri" value="Enter_the_Redirect_URL_here" /> <add key="Tenant" value="common" /> <add key="Authority" value="{0}/v2.0" /> Replace ClientIdwith the Application ID you just registered. Replace redirectUriwith the SSL URL of your project. Test your code To test your application in Visual Studio, press F5 to run your project. The browser opens to the:{port} location, and you see the Sign in with Microsoft button. Select the button to start the sign-in process. When you're ready to run your test, use an Azure AD account (work or school account) or a personal Microsoft account (live.com or outlook.com) to sign in. Permissions and consent in the Microsoft identity platform Applications that integrate with the Microsoft identity platform follow an authorization model that gives users and administrators control over how data can be accessed. After a user authenticates with the Microsoft identity platform to access this application, they will be prompted to consent to the permissions requested by the application ("View your basic profile" and "Maintain access to data you have given it access to"). After accepting these permissions, the user will continue on to the application results. However, the user may instead be prompted with a Need admin consent page if either of the following occur: - The application developer adds any additional permissions that require Admin consent. - Or the tenant is configured (in Enterprise Applications -> User Settings) where users cannot consent to apps accessing company data on their behalf. For more information, refer to Permissions and consent in the Microsoft identity platform. View application results After you sign in, the user is redirected to the home page of your website. The home page is the HTTPS URL that's specified in your application registration info in the Microsoft Application Registration Portal. The home page includes a "Hello <user>" welcome message, a link to sign out, and a link to view the user's claims. The link for the user's claims connects to the Claims controller that you created earlier. View the user's claims To view the user's claims, select the link to browse to the controller view that's available only to authenticated users. View the claims results After you browse to the controller view, you should see a table that contains the basic properties for the user: Additionally, you should see a table of all claims that are in the authentication request. For more information, see the list of claims that are in an ID token. 
Test access to a method that has an Authorize attribute (optional) To test access as an anonymous user to a controller that's protected by the Authorize attribute, follow these steps: - Select the link to sign out the user, and complete the sign-out process. - In your browser, type:{port}/claims to access your controller that's protected by the Authorizeattribute. Expected results after access to a protected controller You're prompted to authenticate to use the protected controller view. Advanced options Protect your entire website To protect your entire website, in the Global.asax file, add the AuthorizeAttribute attribute to the GlobalFilters filter in the Application_Start method: GlobalFilters.Filters.Add(new AuthorizeAttribute()); Restrict who can sign in to your application By default when you build the application created by this guide, your application will accept sign-ins of personal accounts (including outlook.com, live.com, and others) as well as work and school accounts from any company or organization that's integrated with Microsoft identity platform. This is a recommended option for SaaS applications. To restrict user sign-in access for your application, multiple options are available. Option 1: Restrict users from only one organization's Active Directory instance to sign in to your application (single-tenant) This option is frequently used for LOB applications: If you want your application to accept sign-ins only from accounts that belong to a specific Azure AD instance (including guest accounts of that instance), follow these steps: - In the web.config file, change the value for the Tenantparameter from Commonto the tenant name of the organization, such as contoso.onmicrosoft.com. - In your OWIN Startup class, set the ValidateIssuerargument to true. Option 2: Restrict access to users in a specific list of organizations You can restrict sign-in access to only those user accounts that are in an Azure AD organization that's on the list of allowed organizations: - In your OWIN Startup class, set the ValidateIssuerargument to true. - Set the value of the ValidIssuersparameter to the list of allowed organizations. Option 3: Use a custom method to validate issuers You can implement a custom method to validate issuers by using the IssuerValidator parameter. For more information about how to use this parameter, see TokenValidationParameters class. Help and support If you need help, want to report an issue, or want to learn about your support options, see Help and support for developers. Next steps Learn about calling protected web APIs from web apps with the Microsoft identity platform:
https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-v2-asp-webapp
2021-07-24T02:24:59
CC-MAIN-2021-31
1627046150067.87
[array(['media/active-directory-develop-guidedsetup-aspnetwebapp-use/aspnetsigninbuttonsample.png', 'Sign in with Microsoft button'], dtype=object) array(['media/active-directory-develop-guidedsetup-aspnetwebapp-test/aspnetbrowsersignin.png', 'Sign in with Microsoft button shown on browser logon page in browser'], dtype=object) array(['media/active-directory-develop-guidedsetup-aspnetwebapp-test/aspnetbrowsersignin2.png', 'Sign in to your Microsoft account'], dtype=object) ]
docs.microsoft.com
Your enterprise can benefit from the data that is discovered and stored in the OnCommand Insight Data Warehouse. The OnCommand Insight Data Warehouse is a centralized repository that stores data from multiple information sources and transforms it into a common, multidimensional data model for efficient querying and analysis. From this repository, you can generate custom reports such as chargeback, consumption analysis, and forecasting reports that answer questions such as the following: Using the data model provided with OnCommand Insight Reporting, you can use report authoring tools to design and schedule reports.
https://docs.netapp.com/oci-73/topic/com.netapp.doc.oci-repg-733/GUID-1C586BA8-67EF-4F9B-BC53-B8365B8C6E6D.html
2021-07-24T01:24:42
CC-MAIN-2021-31
1627046150067.87
[]
docs.netapp.com
Content Pack Installation Cortex XSOAR Content Pack dependencies, errors, and warning messages. Troubleshooting Content Pack installation. Marketplace Before you install a Content Pack, you should review the Content Pack to see what it includes, any dependencies that are required, reviews, etc. When selecting a Content Pack, you can view the following information: - Details: general information about the Content Pack including installation, content, version, author, status, etc. - Content: information about the content of the Content Pack such as automations, integrations, etc. - Release Notes: contains information about each version including fixes, improvements, and version. - Dependencies: details of any Required Content Packs and Optional Content Packs that may need to be installed with the Content Pack. - Review: You can view or add a review to the Content Pack (you need to be logged in). If you experience timeout issues when downloading content, see Marketplace Troubleshooting. Dependencies In Cortex XSOAR, some objects are dependent on other objects. For example, a playbook may be dependent on other playbooks, scripts, integrations, incident fields, etc. In the figure above, you can see that an incident type is dependent on a playbook, an incident layout, and an incident field. A widget is dependent on an incident field and a script. A script is dependent on another script, an integration, etc. When you install a content pack, mandatory dependencies including required content packs are added automatically to ensure that it installs correctly. Some content, while not essential for installation, ensures that the content runs successfully. These dependencies include optional content packs, which can be added or removed in the Cart. If you delete a content pack that other Content Packs depend on, those Content Packs may not run correctly. Required Content Packs Required Content Packs are mandatory Content Packs, which download automatically with the Content Pack. The Content Pack you are installing depends on these required Content Packs, and without them installation fails. If a Content Pack is dependent on one or more Content Packs, you have to install all of them. For example, if Content Pack A requires Content Pack B and Content Pack B requires Content Pack C, when you install Content Pack A, all of the other Content Packs are installed. You cannot remove the Required Content Packs when installing a Content Pack. Also, if you roll back to an earlier version of a content pack, other content packs might be affected. For example, if Content Pack A depends on layouts from Content Pack B Version 2, reverting to Content Pack B Version 1 could cause Content Pack A to stop working. In this example, the Impossible Traveler Premium Content Content Pack requires: the Active Directory Query v2 and Base Content Packs (both of which are installed), and the Rasterize Content Pack (which needs to be installed). Optional Content Packs Optional Content Packs are used by the Content Pack you want to install, but are not necessary for installation. You can choose which optional Content Pack to install in the Cart. When you install optional Content Packs, mandatory dependencies are automatically included. For example, in the Active Directory Query Content Pack, there are various optional Content Packs used by the Content Pack, such as Microsoft Graph Mail. You can install the Content Pack without Microsoft Graph Mail, if your organization does not need it.
Errors and Warning Messages You may receive an error message when you try to install a Content Pack. If you receive an error message, you need to fix the error before installing the Content Pack. If a warning message is issued, you can still download the Content Pack, but you should fix the problem, otherwise the content may not work correctly. Error Message Example In this example, we want to install the Impossible Traveler Premium Content pack, but we already have a custom playbook with the same name/ID. When we try to install the Content Pack, installation fails: When clicking view errors, you can see the error: Warning Message Example In this example, we want to update the Custom Scripts pack. When we try to install, the following message is issued about a missing Docker image: If you click Install Anyway, the Content Pack installs, but you need to add the missing Docker image for the content to run correctly.
https://docs.paloaltonetworks.com/cortex/cortex-xsoar/6-1/cortex-xsoar-admin/marketplace/marketplace-subscriptions/content-pack-installation.html
2021-07-24T00:28:11
CC-MAIN-2021-31
1627046150067.87
[array(['/content/dam/techdocs/en_US/dita/_graphics/6-1/cortex-xsoar/dependencies-flow-new.png/_jcr_content/renditions/original', None], dtype=object) array(['/content/dam/techdocs/en_US/dita/_graphics/6-1/cortex-xsoar/market-required.png/_jcr_content/renditions/original', None], dtype=object) array(['/content/dam/techdocs/en_US/dita/_graphics/6-1/cortex-xsoar/market-optional-active.png/_jcr_content/renditions/original', None], dtype=object) array(['/content/dam/techdocs/en_US/dita/_graphics/6-1/cortex-xsoar/market-cart-active.png/_jcr_content/renditions/original', None], dtype=object) array(['/content/dam/techdocs/en_US/dita/_graphics/6-1/cortex-xsoar/market-install-fail.png/_jcr_content/renditions/original', None], dtype=object) array(['/content/dam/techdocs/en_US/dita/_graphics/6-1/cortex-xsoar/market-error.png/_jcr_content/renditions/original', None], dtype=object) array(['/content/dam/techdocs/en_US/dita/_graphics/6-1/cortex-xsoar/market-warning-docker.png/_jcr_content/renditions/original', None], dtype=object) ]
docs.paloaltonetworks.com
UiPath.Box.Activities.File.DownloadFile The Download File activity uses the Box DownloadFile API to download a file (File Id and Version) to a specified folder (Local file path). You have the option to overwrite an existing file that has the same name (Overwrite). Use the Download File activity inside the Box Scope activity. - Enter values for the Input properties. - Your input property values are sent in the DownloadFile API operation request. - Local file path - The location where you want to download the file. This field supports only Strings or String variables. This is a folder location. The downloaded file will use the name of the file as the file name locally. Misc - Private - If selected, the values of variables and arguments are no longer logged at Verbose level. Options - Overwrite - If selected, the activity overwrites an existing file with the same name. If not selected, the activity throws an exception if a file with the same name exists in the specified Local file path. - Version - The specific version of the file that you want to download. This field supports only Strings or String variables. If there is a specific version of a file that you want to download, use this property to specify the file version ID.
https://docs.uipath.com/activities/lang-zh_CN/docs/box-download-file
2021-07-24T01:15:39
CC-MAIN-2021-31
1627046150067.87
[array(['https://files.readme.io/3147c4b-DownloadFile_MSC.png', 'DownloadFile_MSC.png'], dtype=object) array(['https://files.readme.io/3147c4b-DownloadFile_MSC.png', 'Click to close...'], dtype=object) ]
docs.uipath.com
If you are about to create your user segmentation, we recommend that you use custom data attributes. You can pass any user information within a custom attribute. Compared to tags, attributes provide more flexibility and more data types. Find out in these articles how to handle Segmentation using attributes and how to implement attributes into your code. However, if you still want to use tags for defining user groups, you can implement the following commands to tag and untag users. How it works 1. Dev task: Implement your tags A tag can be any string. However, we recommend limiting the length to 128 characters. You can use the following tag commands // Tag Command Userlane('tag', 'admin'); // Multiple Tags Command Userlane('tag', 'admin', 'exampleTag', 'anotherOne'); // Multiple Tags Command with user identification Userlane('tag', <user_ID>, ['tag1', 'tag2', 'moreTag']); // Remove Tags Command Userlane('untag', 'myCustomTag'); // Remove Multiple Tags Command Userlane('untag', 'myCustomTag', 'exampleTag', 'anotherOne'); // identify the current user Userlane('user', 'user_ID'); Positioning and page reloads Set all segmentation commands before you initialize Userlane with Userlane('init', yourPropertyId);. The 'init' command only needs to be called once in your snippet, after all segmentation commands, in order to confirm the changes. In this way, Userlane will automatically adjust the assistant to reflect changes in the tags and in the segmentation. It is not necessary to call the 'init' command after each individual segmentation command. The tags are not persistent across page reloads. On every page reload, the user starts with empty tags. This means you have to call the tag or untag command(s) after every page reload. 2. Manager Task: Create and apply your segmentation in the Userlane Dashboard Follow this userlane to create a User Segment with the implemented tags and this tour to apply the User Segment to a specific chapter/userlane. Related articles How to segment your userlanes / chapters based on users Best practices: Create a Solid Segmentation Concept How to create custom attributes How to implement custom attributes into your code Do you need more information?
https://docs.userlane.com/en/articles/2413435-how-to-implement-your-segments-using-tags
2021-07-24T01:54:00
CC-MAIN-2021-31
1627046150067.87
[]
docs.userlane.com
Microsoft Endpoint Configuration Manager documentation Official product documentation for the following components of Microsoft Endpoint Manager: Configuration Manager, co-management, and Desktop Analytics Co-management Real-time management Infrastructure simplification Core infrastructure OS deployment Configuration Manager community & support Configuration Manager blog News and announcements Twitter: #ConfigMgr Keep current with our active community on Twitter Configuration Manager forums Ask technical questions in the product forums Configuration Manager troubleshooting Support articles to help you diagnose and fix issues In-console feedback Send a smile or frown to the engineering team UserVoice product feedback Share product ideas with the engineering team Product support Get professional help from Microsoft support System Center 2012 R2 Configuration Manager Documentation for the previous version of Configuration Manager Other content Other community sites Found a problem with Configuration Manager docs? Let us know!
https://docs.microsoft.com/da-DK/mem/configmgr/
2021-07-24T02:17:50
CC-MAIN-2021-31
1627046150067.87
[]
docs.microsoft.com
py_compile — Compile Python source files¶ Source code: Lib/py_compile.py py_compile.PyCompileError¶ Exception raised when an error occurs while attempting to compile the file. py_compile.compile(file, cfile=None, dfile=None, doraise=False, optimize=-1, invalidation_mode=PycInvalidationMode.TIMESTAMP, quiet=0)¶ Compile a source file to byte-code and write out the byte-code cache file. The source code is loaded from the file named file. The byte-code is written to cfile, which defaults to the PEP 3147/PEP 488 path, ending in .pyc. For example, if file is /foo/bar/baz.py, cfile will default to /foo/bar/__pycache__/baz.cpython-32.pyc for Python 3.2. If dfile is specified, it is used as the name of the source file in error messages instead of file. If doraise is true, a PyCompileError is raised when an error is encountered while compiling file. If doraise is false (the default), an error string is written to sys.stderr, but no exception is raised. This function returns the path to the byte-compiled file, i.e. whatever cfile value was used. If the path that cfile becomes (either explicitly specified or computed) is a symlink or non-regular file, FileExistsError will be raised. The optimize parameter controls the optimization level and is passed to the built-in compile() function; the default of -1 selects the optimization level of the current interpreter. Changed in version 3.2: Changed default value of cfile to be PEP 3147-compliant. Previous default was file + 'c' ('o' if optimization was enabled). Also added the optimize parameter. Changed in version 3.4: Changed code to use importlib for the byte-code cache file writing. This means file creation/writing semantics now match what importlib does, e.g. permissions, write-and-move semantics, etc. Also added the caveat that FileExistsError is raised if cfile is a symlink or non-regular file. py_compile.main(args=None)¶ Compile several source files. The files named in args (or on the command line, if args is None) are compiled and the resulting byte-code is cached in the normal manner. See also - Module compileall: Utilities to compile all Python source files in a directory tree.
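As a small, self-contained illustration of the compile() behavior described above (the module path here is hypothetical):

```python
# Sketch: byte-compile one source file and surface errors explicitly.
import sys
import py_compile

source = "example_module.py"  # hypothetical source file for illustration

try:
    # doraise=True raises PyCompileError instead of printing to sys.stderr.
    # optimize=-1 keeps the current interpreter's optimization level.
    cache_path = py_compile.compile(source, doraise=True, optimize=-1)
    print(f"Byte-compiled {source} -> {cache_path}")
except py_compile.PyCompileError as err:
    print(f"Compilation failed: {err.msg}", file=sys.stderr)
```

The returned path is the cfile that was actually written, which by default lands under __pycache__ next to the source file.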
https://docs.python.org/3.8/library/py_compile.html
2021-07-24T02:02:31
CC-MAIN-2021-31
1627046150067.87
[]
docs.python.org
Welcome and Device Tabs# This page describes the functionality available on the Welcome page and the tabbed interface that is displayed once you connect or select a device. Tabs include: Simplicity Studio® 5 (SSv5) opens to a Welcome page. To return to the Welcome page at any time, click Welcome on the tool bar. On the Welcome Page you can: Select a Target Device# SSv5’s purpose is to provide a development environment directed toward a specific target device. Therefore, one of the first things to do is to define that target device. Once a target device is selected, a tabbed interface to features specific to that device is available, starting on the Overview tab. The device can be a physical piece of hardware, or a virtual part. Physical If you have one or more devices physically connected, either on a development kit or on customer hardware with a supported debug adapter, they are displayed in the Debug Adapters view, where you can get started simply by clicking a device to select it. They are also displayed in the Get Started area’s Connected Devices drop down. Click a device to select it, then click Start. Virtual If you do not have a physical device, but would like to explore some of SSv5’s functions, or get started with developing for a part you will receive later, select a virtual device either in the My Products view or in the Get Started area. In the My Products view, start typing a product name and select the product of interest. Under Get Started, click All Products. Use the checkboxes to limit the search list to kits, boards, or parts. Click in the search products field and start typing. When you see the target, select it and click Start. The next time you return to the Welcome page, that device will be shown in the My Products view. Start a New Project# You can start a new project from the Welcome page, but you must immediately select a target device. See Start a Project for more information. Learn and Support# Expand the Learn and Support section for access to a variety of resources related to developing for a Silicon Labs target. Device-Specific Tabs# Device-Specific tabs include: Overview Tab# Once you have selected a target device, the Launcher perspective editor area changes to the OVERVIEW tab specific to that part. For a physically-connected device you have general device information, as well as details about the hardware components. Each hardware component is pictured in a card and has a View Documents drop down where you can see related hardware documentation. Finally, links to recommended quick start guides from compatible protocol SDKs are provided. SSv5 displays similar information for virtual devices (selected in the My Products view) and devices connected to a supported debug adapter (for example, SEGGER J-Link or a Wireless Starter Kit mainboard in debug OUT mode). The settings in the General Information card vary depending on the target device. General Device Information# Configure Connection Device Connection shows how the device is connected to SSv5. Click Configure to explore or modify connection parameters. If your device firmware is not up to date, you will be invited to update it. If you are targeting an EFR32-based Silicon Labs kit, you will almost certainly also see the following question: The CTUNE value is used to tune the external crystal capacitors to hit the exact frequency they are intended to hit.
Because this varies from board to board, in production Silicon Labs measures it during board tests and programs a unique CTUNE value for each board into the EEPROM. Each SDK also has a default CTUNE value programmed in a manufacturing token. This message asks if you want to overwrite the default CTUNE value with the one found in the EEPROM. Because the EEPROM value is more accurate, Silicon Labs recommends you click Yes. The default CTUNE value will also work, but could under some circumstances, such as temperature extremes, bring the radio frequency out of spec. Debug Mode The Debug Mode controls the interface to the wireless starter kit mainboard onboard debugger. Changing Debug Mode opens the Adapter Configuration tab of the J-Link Configuration tool. The debug modes are: Onboard Device (MCU) (default): The debugger built into the development board is connected to the on-board target device. External Device (OUT): The on-board debugger is configured for connection to an external device such as your custom hardware. External Debugger (IN): An external debugger is connected to the device on the development board. See your kit's User’s Guide for more information on the debug modes available. Adapter Firmware This shows the firmware version running on the debug controller of your Silicon Labs development kit and whether or not an update is available. Silicon Labs strongly recommends that you update adapter firmware to the current version. The Changelog shows the firmware release notes. When you update adapter firmware SSv5 will ask you to confirm before proceeding. When firmware is up to date the interface says Latest. Secure Firmware Series 2 devices contain a Secure Element, a tamper-resistant component used to securely store sensitive data and keys, and to execute cryptographic functions and secure services. The Secure Element firmware can also be updated. When you have a Series 2 device connected, the General Information card includes a Secure FW line. This shows the firmware version running on the Secure Element and whether or not an update is available. Silicon Labs strongly recommends that you update firmware to the current version. The Changelog shows the firmware release notes. When you update, you are warned that user software, including any factory-installed applications such as RangeTest, will be deleted. After upgrade, the installed and available versions are the same. Preferred SDK Some developers may have more than one GSDK version installed. The preferred SDK shows the currently selected SDK. Click Manage SDKs to see other options. Example Projects & Demos Tab#. Click RUN on any demo to install it on a target device. Click CREATE on any project to create it. This is equivalent to creating a project from the OVERVIEW tab, except that the project is already selected. By default, the tab enables showing both demos and examples. Demos have a blue tag in the upper left of the card. To see only one type of file, disable the other type. Use the checkboxes and search box to filter the list. Checkboxes include: Technology Type Provider Quality Provider allows you to filter on example sources. Peripheral Examples, provided from the Silicon Labs GitHub repository peripheral-examples, are specific to your connected or selected board, and allow you to exercise various peripheral functions. SSv5 allows you to add other GitHub repositories containing examples. To add a GitHub repository, go to Preferences > Simplicity Studio > External Repos. 
Here you can add, edit, and delete repos, and select from repos that are already added. Adding a repo is done in two steps: cloning and then selecting the branch, tag, or commit to add. The default branch is Master. application_examples and peripheral_examples are official Silicon Labs repos and may not be edited or deleted. You must be connected to the Internet to create a project from an example in a remote GitHub repository. Projects created from GitHub repositories are created in the location specified in the project creation dialog. The cloned project does not have any Git-related information in it. You cannot sync/checkout/pull code from Git. If you specify a repository installed locally, SSv5 will not synchronize the repo and all Git-related interface items will not be presented. Documentation Tab# The DOCUMENTATION tab shows all documentation compatible with the selected part. Use the checkboxes or text filter field to find a resource of interest. The technology filter corresponding to your development environment will show you most software documents relevant to that environment. Compatible Tools Tab# The COMPATIBLE TOOLS tab shows the tools compatible with the selected product. The Tools button on the toolbar shows all tools unfiltered.
https://docs.silabs.com/simplicity-studio-5-users-guide/latest/ss-5-users-guide-about-the-launcher/welcome-and-device-tabs
2021-07-24T02:31:05
CC-MAIN-2021-31
1627046150067.87
[array(['/ss-5-users-guide-about-the-launcher/0.2/images/welcome-screen.png', 'The SSv5 Welcome page'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/welcome-icon.png', 'welcome icon'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/welcome-all-products.png', 'welcome all products'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/welcome-new-project.png', 'welcome new project'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/welcome-learn-and-support.png', 'welcome learn and support'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-tab-overview-physical.png', 'launcher tab overview physical'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-tab-overview-virtual.png', 'launcher tab overview virtual'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-device-firmware-update.png', 'launcher device firmware update'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-device-ctune.png', 'launcher device ctune'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-device-adapter-configuration.png', 'Launcher general information Debug Mode'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-device-firmware.png', 'Launcher general information adapter firmware'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-overview-tab-adapter-firmware-after-update.png', 'Launcher general information adapter firmware after update'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-tab-overview-firmware-update-project-config.png', 'Launcher general information Secure firmware'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-tab-overview-firmware-update-warning.png', 'Launcher general information Secure firmware confirm update'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-tab-overview-firmware-after-update.png', 'Launcher general information Secure firmware after update'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-device-preferred-sdk.png', 'launcher general information preferred SDK'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-tab-demos-and-examples.png', 'launcher Example Projects & demos tab'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-tab-demos-examples-peripheral.png', 'launcher Example Projects & demos tab'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/github-add.png', 'Adding a github repo'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-tab-documentation.png', 'launcher documentation tab'], dtype=object) array(['/ss-5-users-guide-about-the-launcher/0.2/images/launcher-tab-tools.png', 'launcher tab tools'], dtype=object) ]
docs.silabs.com
Unreal Engine (UE) can deliver assets outside of the main executable of your applications in the form of .pak files. To do this, you need to organize your assets into chunks, groups of asset files that are recognized by the cooking process. This how-to will teach you how to organize assets into chunks from within Unreal Editor. When you have finished, you will have a sample project that will produce .pak files you can deliver with a patching system. Recommended Assets For this guide, you will be using the assets for the characters Crunch, Boris, and Khaimera from Paragon, which you can download from the Unreal Marketplace for free. As long as you have assets that you can safely group into separate folders, you do not need to use these specific assets. The Paragon character assets make a convenient test case since UE already organizes them this way. Required Setup Projects do not generate chunks during cooking or packaging by default. To set your project up for chunking, open your Project Settings and navigate to Project > Packaging, then make sure that Use Pak File and Generate Chunks are both enabled. Organizing Your Chunking Scheme Now that you have enabled chunking and set up your plugins, you need to organize your assets and package them into chunks. For more information about the chunking process, refer to Cooking and Chunking. Inside the ParagonBoris folder, right-click, navigate to Create Advanced Asset > Miscellaneous, then create a new Data Asset. Choose Primary Asset Label as the base class for the new data asset. You can create subclasses of PrimaryAssetLabel in C++ to add extra metadata. If you create subclasses of PrimaryAssetLabel in Blueprint, they will not work for chunking purposes. Name the new Primary Asset Label Label_Boris. Open Label_Boris and fill in the following properties: Repeat steps 1 through 4 for ParagonCrunch and ParagonKhaimera. In this example we set the ChunkID to 1002 for Crunch and 1003 for Khaimera. Package or cook content for your project. Final Result If everything is set up correctly, you will see the .pak files in your build directory, under /WindowsNoEditor/PatchingDemo/Content/Paks, when UE has finished packaging them. UE will name each of them for the Chunk ID that you designated, and each will contain the assets for one of the three characters. You can also click Window > Asset Audit to view your chunks in the Asset Audit window. You can find more information about Asset Audit in Cooking and Chunking.
https://docs.unrealengine.com/4.26/ko/SharingAndReleasing/Patching/GeneralPatching/ChunkingExample/
2021-07-24T02:07:18
CC-MAIN-2021-31
1627046150067.87
[array(['./../../../../../Images/SharingAndReleasing/Patching/GeneralPatching/ChunkingExample/ParagonAssets.jpg', 'ParagonAssets.png'], dtype=object) array(['./../../../../../Images/SharingAndReleasing/Patching/GeneralPatching/ChunkingExample/PackagingSettings.jpg', 'PackagingSettings.png'], dtype=object) array(['./../../../../../Images/SharingAndReleasing/Patching/GeneralPatching/ChunkingExample/FinalPakFiles.jpg', 'FinalPakFiles.png'], dtype=object) ]
docs.unrealengine.com
UpdateLayer Updates a specified layer. Required Permissions: To use this action, an IAM user must have a Manage permissions level for the stack, or an attached policy that explicitly grants permissions. For more information on user permissions, see Managing User Permissions. Request Syntax { "Attributes": { " string" : " string" }, "AutoAssignElasticIps": boolean, "AutoAssignPublicIps": boolean, "CloudWatchLogsConfiguration": { "Enabled": boolean, "LogStreams": [ { "BatchCount": number, "BatchSize": number, "BufferDuration": number, "DatetimeFormat": " string", "Encoding": " string", "File": " string", "FileFingerprintLines": " string", "InitialPosition": " string", "LogGroupName": " string", "MultiLineStartPattern": " string", "TimeZone": " string" } ] }, "CustomInstanceProfileArn": " string", "CustomJson": " string", "CustomRecipes": { "Configure": [ " string" ], "Deploy": [ " string" ], "Setup": [ " string" ], "Shutdown": [ " string" ], "Undeploy": [ " string" ] }, "CustomSecurityGroupIds": [ " string" ], "EnableAutoHealing": boolean, "InstallUpdatesOnBoot": boolean, "LayerId": " string", "LifecycleEventConfiguration": { "Shutdown": { "DelayUntilElbConnectionsDrained": boolean, "ExecutionTimeout": number} }, "Name": " string", "Packages": [ " string" ], "Shortname": " string", "UseEbsOptimizedInstances": boolean, "VolumeConfigurations": [ { "Encrypted": boolean, "Iops": number, "MountPoint": " string", "NumberOfDisks": number, "RaidLevel": number, "Size": number, "VolumeType": " string" } ] } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - Attributes One or more user-defined key/value pairs to be added to the stack attributes. Type: String to string map Valid Keys: EcsClusterArn | EnableHaproxyStats | HaproxyStatsUrl | HaproxyStatsUser | HaproxyStatsPassword | HaproxyHealthCheckUrl | HaproxyHealthCheckMethod | MysqlRootPassword | MysqlRootPasswordUbiquitous | GangliaUrl | GangliaUser | GangliaPassword | MemcachedMemory | NodejsVersion | RubyVersion | RubygemsVersion | ManageBundler | BundlerVersion | RailsStack | PassengerVersion | Jvm | JvmVersion | JvmOptions | JavaAppServer | JavaAppServerVersion Required: No - AutoAssignElasticIps Whether to automatically assign an Elastic IP address to the layer's instances. For more information, see How to Edit a Layer. Type: Boolean Required: No - AutoAssignPublicIps For stacks that are running in a VPC, whether to automatically assign a public IP address to the layer's instances. For more information, see How to Edit a Layer. Type: Boolean Required: No - CloudWatchLogsConfiguration Specifies CloudWatch Logs configuration options for the layer. For more information, see CloudWatchLogsLogStream. Type: CloudWatchLogsConfiguration object Required: No - CustomInstanceProfileArn The ARN of an IAM profile to be used for all of the layer's EC2 instances. For more information about IAM ARNs, see Using Identifiers. Type: String Required: No - CustomJson A JSON-formatted string containing custom stack configuration and deployment attributes to be installed on the layer's instances. For more information, see Using Custom JSON. Type: String Required: No - CustomRecipes A LayerCustomRecipesobject that specifies the layer's custom recipes. Required: No - CustomSecurityGroupIds An array containing the layer's custom security group IDs. Type: Array of strings Required: No - EnableAutoHealing Whether to disable auto healing for the layer. 
Type: Boolean Required: No - InstallUpdatesOnBoot Whether to install operating system and package updates when the instance boots. The default value is true. To control when updates are installed, set this value to false. You must then update your instances manually by using CreateDeployment to run the update_dependencies stack command or by manually running yum (Amazon Linux) or apt-get (Ubuntu) on the instances. Note We strongly recommend using the default value of true, to ensure that your instances have the latest security updates. Type: Boolean Required: No - LayerId The layer ID. Type: String Required: Yes - LifecycleEventConfiguration Type: LifecycleEventConfiguration object Required: No - Name The layer name, which is used by the console. Layer names can be a maximum of 32 characters. Type: String Required: No - Packages An array of Package objects that describe the layer's packages. Type: Array of strings Required: No - Shortname For custom layers only, use this parameter to specify the layer's short name, which is used internally by AWS OpsWorks Stacks and by Chef. The short name is also used as the name for the directory where your app files are installed. It can have a maximum of 32 characters and must be in the following format: /\A[a-z0-9\-\_\.]+\Z/. Built-in layer short names are defined by AWS OpsWorks Stacks. For more information, see the Layer reference in the AWS OpsWorks User Guide. Type: String Required: No - UseEbsOptimizedInstances Whether to use Amazon EBS-optimized instances. Type: Boolean Required: No - VolumeConfigurations A VolumeConfigurations object that describes the layer's Amazon EBS volumes. Type: Array of VolumeConfiguration objects Required: No
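For reference, here is a minimal sketch of calling UpdateLayer from Python with boto3. The layer ID, names, and recipe names below are placeholders, not values taken from this page.

import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Update a layer's name and custom recipes; only LayerId is required.
opsworks.update_layer(
    LayerId="11111111-2222-3333-4444-555555555555",  # hypothetical layer ID
    Name="PHP App Server",
    Shortname="php-app",
    EnableAutoHealing=True,
    InstallUpdatesOnBoot=True,
    CustomRecipes={
        "Setup": ["phpapp::setup"],
        "Deploy": ["phpapp::deploy"],
    },
)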
https://docs.aws.amazon.com/opsworks/latest/APIReference/API_UpdateLayer.html
2022-01-16T23:20:50
CC-MAIN-2022-05
1642320300244.42
[]
docs.aws.amazon.com
Defensible Delete Report for Files Overview This report provides information about the files that are deleted from inSync using Federated Search or Sensitive Data Governance (Compliance) File violations. Access Path On the inSync Management Console menu bar, click Reports and then click Defensible Delete Report for Files. If you are an inSync GovCloud customer and do not see this report, it might not be enabled for your organization. Contact Support for assistance. Description
https://docs.druva.com/Endpoints/Alerts%2C_Reports%2C_and_Diagnostics/Reports/Defensible_Delete_Report_for_Files
2022-01-16T22:58:54
CC-MAIN-2022-05
1642320300244.42
[]
docs.druva.com
PageSetup.PrintGridlines property (Excel) True if cell gridlines are printed on the page. Applies only to worksheets. Read/write Boolean. Syntax expression.PrintGridlines expression A variable that represents a PageSetup object. Example This example prints cell gridlines when Sheet1 is printed. Worksheets("Sheet1").PageSetup.PrintGridlines = True Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
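The VBA property above acts on an open Excel instance. If you generate workbooks from Python instead, openpyxl exposes a comparable print setting; this is a sketch of that library's API, not part of the Excel object model documented here.

from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.title = "Sheet1"
# Comparable effect to PageSetup.PrintGridlines = True in VBA.
ws.print_options.gridLines = True
wb.save("sheet1.xlsx")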
https://docs.microsoft.com/en-us/office/vba/api/Excel.pagesetup.printgridlines
2022-01-16T21:58:12
CC-MAIN-2022-05
1642320300244.42
[]
docs.microsoft.com
You can update the license information on the DNS/DHCP Server. Ensure that the license client ID is 15 characters long. The activation key contains five sets of five alpha-numeric characters <XXXXX-XXXXX-XXXXX-XXXXX-XXXXX>. Attention: This service cannot be configured on DNS/DHCP Servers operating in an xHA pair. Example { "version": "1.0.0", "services": { "license": { "configurations": [ { "licenseConfiguration": { "clientID": "ANEWCUSTOMER123", "key": "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX" } } ] } } } Parameters - clientID—enter the license client ID. - key—enter the activation key for the client ID.
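Before submitting the configuration, it can help to sanity-check both values against the stated format rules. A small illustrative check in Python; the example values below are fake.

import re

client_id = "ANEWCUSTOMER123"
key = "AB1CD-EF2GH-IJ3KL-MN4OP-QR5ST"

# The client ID must be exactly 15 characters long.
assert len(client_id) == 15, "license client ID must be 15 characters long"

# The key must be five groups of five alpha-numeric characters, separated by dashes.
assert re.fullmatch(r"(?:[A-Za-z0-9]{5}-){4}[A-Za-z0-9]{5}", key), \
    "activation key must be five sets of five alpha-numeric characters"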
https://docs.bluecatnetworks.com/r/Address-Manager-API-Guide/License/9.3.0
2022-01-16T21:25:46
CC-MAIN-2022-05
1642320300244.42
[]
docs.bluecatnetworks.com
- Source files and rendered web locations - Contributing to docs - Markdown and styles - Folder structure and files - Metadata - Move or rename a page - Merge requests for GitLab documentation - GitLab - Docs site architecture - Previewing the changes live - Testing - Danger Bot - Automatic screenshot generator
reading_time: If you want to add an indication of the approximate reading time of a page, you can set reading_time to true. This uses a simple algorithm to calculate the reading time based on the number of words.
When you move or rename a page, leave a redirect file in the old location:
--- This document was moved to [another location](../path/to/file/index.md). <!-- This redirect file can be deleted after <YYYY-MM-DD>. --> <!-- Before deletion, see: -->
To preview the changes live, a script run with the docs deploy flag triggers the "Triggered from gitlab-org/gitlab 'review-docs-deploy' job" pipeline trigger in the gitlab-org/gitlab-docs project for the $DOCS_BRANCH (defaults to master). The preview URL is shown both at the job output and in the merge request widget. You also get the link to the remote pipeline in the gitlab-org/gitlab-docs project.
For more information about documentation testing, see the Documentation testing guide.
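The reading_time front matter mentioned above is derived from a simple word count. As a rough illustration only (this is not the docs site's actual implementation):

def reading_time_minutes(text, words_per_minute=200):
    # Round up so that short pages still report at least one minute.
    words = len(text.split())
    return max(1, -(-words // words_per_minute))

print(reading_time_minutes("word " * 450))  # -> 3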
https://docs.gitlab.com/13.12/ee/development/documentation/
2022-01-16T22:15:59
CC-MAIN-2022-05
1642320300244.42
[]
docs.gitlab.com
Working with motion graphics overlays You can use the motion graphics overlay feature to superimpose a motion image onto the video in a MediaLive channel. The motion image is based on an HTML5 motion graphic asset. To set up for motion graphics overlay, you must perform work in two areas: You must choose an HTML5 authoring system. You must use this authoring system to prepare an HTML5 asset, and you must continually publish the asset to a location outside of MediaLive. On MediaLive, you must enable motion graphics in each channel where you want to include a motion graphic overlay. After you have started the channel, you use the schedule feature in MediaLive to insert the motion graphic in the running channel. As soon as the schedule receives the action, MediaLive starts to download and render the content. It continually downloads and renders the content for as long as the motion graphics action is active. At any time, you can deactivate the image by creating a deactivate action in the schedule. There is a charge for running a channel that has the motion graphics overlay feature enabled. There is a charge even when there is no motion graphics overlay currently inserted in the channel. The charge is based on the largest video output in the channel. To stop this charge, you must disable the feature. For information on charges for using this mode, see the MediaLive price list.
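To insert or remove the overlay through the schedule programmatically, here is a hedged sketch using boto3; the channel ID and the HTML asset URL are placeholders, and the action names can be anything unique within the schedule.

import boto3

medialive = boto3.client("medialive", region_name="us-east-1")

# Activate the motion graphics overlay immediately on a running channel.
medialive.batch_update_schedule(
    ChannelId="1234567",  # hypothetical channel ID
    Creates={
        "ScheduleActions": [
            {
                "ActionName": "activate-overlay",
                "ScheduleActionSettings": {
                    "MotionGraphicsImageActivateSettings": {
                        "Url": "https://example.com/overlay/index.html"
                    }
                },
                "ScheduleActionStartSettings": {
                    "ImmediateModeScheduleActionStartSettings": {}
                },
            }
        ]
    },
)

# Later, deactivate it with a second schedule action.
medialive.batch_update_schedule(
    ChannelId="1234567",
    Creates={
        "ScheduleActions": [
            {
                "ActionName": "deactivate-overlay",
                "ScheduleActionSettings": {
                    "MotionGraphicsImageDeactivateSettings": {}
                },
                "ScheduleActionStartSettings": {
                    "ImmediateModeScheduleActionStartSettings": {}
                },
            }
        ]
    },
)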
https://docs.aws.amazon.com/medialive/latest/ug/feature-mgi.html
2022-01-16T22:13:43
CC-MAIN-2022-05
1642320300244.42
[]
docs.aws.amazon.com
OLEFormat.Open method (Word) Opens the specified OLEFormat object. Syntax expression.Open expression Required. A variable that represents an 'OLEFormat' object. See also Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
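The same call can be driven from Python through COM automation with pywin32; a sketch under the assumption that the document's first inline shape is an embedded OLE object (the document path is a placeholder):

import win32com.client

word = win32com.client.Dispatch("Word.Application")
word.Visible = True
doc = word.Documents.Open(r"C:\temp\report.docx")  # hypothetical document path

shape = doc.InlineShapes(1)  # the first inline shape in the document
if shape.Type == 1:          # wdInlineShapeEmbeddedOLEObject
    shape.OLEFormat.Open()   # equivalent to calling Open in VBA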
https://docs.microsoft.com/en-us/office/vba/api/Word.oleformat.open
2022-01-16T22:38:45
CC-MAIN-2022-05
1642320300244.42
[]
docs.microsoft.com
AWS Account Management Community Edition users are able to connect their own AWS account and take advantage of the powerful lab deployment and management capabilities of the Snap Labs platform, for free. This means you only pay AWS directly for the lab infrastructure you actually use! Creating an AWS Account We strongly recommend creating a dedicated AWS account for use with the Snap Labs platform. This offers several advantages as it allows you to: - Easily and accurately monitor your billing - Segment your Snap Labs infrastructure from personal or company infrastructure - More easily clean up lab resources if you choose to disconnect your account from the Snap Labs platform To sign up for a free AWS account, check out the AWS Free Tier. EC2 Limits For newly created accounts, the Running Instances limit may be quite small (5 vCPUs). Most lab templates will require more than this, so we recommend requesting a limit increase as soon as you set up your AWS account. You can request a service limit increase here. We deploy labs into the US-East-1 region currently, so be sure to request the increase in that region. Make sure to request an increase for "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances". We recommend at least 120 vCPUs to allow running multiple lab environments simultaneously. Other limits you might run into include: - VPC Limits (1 per lab) - Elastic IP Address (1 per lab) Connecting your AWS Account to Snap Labs Connecting your AWS account to your Snap Labs account is simple! If you've already got an AWS account, you can be up and running in just a couple of minutes. Follow the instructions in the AWS Connection Wizard to connect your accounts in just a few clicks. Behind the scenes, Snap Labs will: - Deploy a CloudFormation Stack - Create an IAM Role with the required permissions - Automatically detect when your accounts are connected - Assume this IAM Role to deploy and manage labs within your own account Snap Labs IAM Permission Requirements The Snap Labs platform needs certain permissions within your AWS account to successfully deploy and manage your lab systems. We DO NOT need full administrative access!! The IAM role we create has the minimal permissions required for the platform to function. We require most permissions to interact with EC2, including permissions to create, modify, and delete resources like EC2 instances, EBS volumes, snapshots, etc. For a full list of the specific permissions required, be sure to review the CloudFormation stack created when connecting your AWS and Snap Labs accounts. Disconnecting your AWS Account To disconnect your account, browse to the Account Settings page and select Disconnect. Stranded Snap Labs Resources Be careful when disconnecting your AWS account from Snap Labs without first deleting your deployed labs! Upon disconnecting your account, Snap Labs will lose the ability to manage resources on your behalf. If you disconnect before deleting your existing labs, there may be resources left in your AWS Account, including running EC2 instances, that may significantly affect your monthly bill. If you do elect to leave labs deployed, but disconnect your account, you can identify Snap Labs managed resources for future cleanup by the SnapLabs tag.
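If you do end up with stranded resources after disconnecting, one way to review them from Python with boto3 is to filter on the SnapLabs tag. This only lists EC2 instances; other resource types (VPCs, Elastic IPs, volumes) would need similar queries.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List instances carrying the SnapLabs tag so you can review them before cleanup.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(Filters=[{"Name": "tag-key", "Values": ["SnapLabs"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])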
https://docs.snaplabs.io/docs/aws-account-management
2022-01-16T21:37:44
CC-MAIN-2022-05
1642320300244.42
[]
docs.snaplabs.io
Import datasets. Navigate and select the dataset to import. - The dataset is imported into the flow. Import Reference Dataset For any flow, you can create a reference to a recipe in it. This reference enables the output of the recipe, after execution, to be used elsewhere. When you import this reference into another flow, you create a reference dataset. Steps: - In the source flow,. - Snapshot of recipe in development: - In the source flow, select a specific step in your recipe in the Recipe panel. - From the panel context menu, select Download Sample as CSV. - The recipe steps up to the selected step are performed on the current sample, and the current state of the sample is download in CSV format to your local desktop. - Through the Import Data page, you can import this generated file. - For more information, see Take a Snapshot. - Snapshot of job results: - In the source flow, select your recipe in the Recipe panel. - Select the output object icon above the recipe. - In the side panel, click Run..
https://docs.trifacta.com/pages/diffpages.action?originalId=174749532&pageId=177686680
2022-01-16T22:49:16
CC-MAIN-2022-05
1642320300244.42
[]
docs.trifacta.com
Create a Private Signed Cert Example Last time I created a privately signed cert I did it this way: $ cd /usr/share/ssl/ $ openssl req -config openssl.cnf -new -out /usr/local/ssl/certs/webmail.epicserve.com.csr Answer the following prompts: Enter PEM pass phrase: <enter something that is at least 4 chars> Verifying - Enter PEM pass phrase: <re-enter pass phrase> Country Name (2 letter code) [GB]: US State or Province Name (full name) [Berkshire]: Kansas Locality Name (eg, city) [Newbury]: Manhattan Organization Name (eg, company) [My Company Ltd]: Epicserve Organizational Unit Name (eg, section) []: Web Hosting Common Name (eg, your name or your server's hostname) []: webmail.epicserve.com Email Address []: A challenge password []: An optional company name []: Run the following command: $ openssl rsa -in privkey.pem -out /usr/local/ssl/private/webmail.epicserve.com.key Answer the following prompts: Enter pass phrase for privkey.pem: <enter the same pass phrase you entered in the last step> Then run the following: $ openssl x509 -in /usr/local/ssl/certs/webmail.epicserve.com.csr \ -out /usr/local/ssl/certs/webmail.epicserve.com.crt \ -req -signkey /usr/local/ssl/private/webmail.epicserve.com.key \ -days 365 Restart Apache
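To double-check the resulting certificate without calling openssl again, here is a short Python sketch using the cryptography package (assuming it is installed and the path matches the one above):

from cryptography import x509

with open("/usr/local/ssl/certs/webmail.epicserve.com.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Print the subject and the validity window of the self-signed cert.
print(cert.subject.rfc4514_string())
print(cert.not_valid_before, "->", cert.not_valid_after)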
https://epicserve-docs.readthedocs.io/en/latest/sys_admin/create-a-private-signed-cert-example.html
2022-01-16T22:49:59
CC-MAIN-2022-05
1642320300244.42
[]
epicserve-docs.readthedocs.io
- Prerequisites - Step 1: Prepare a container image for the AWS Fargate task - Step 2: Push the container image to a registry - Step 3: Create an EC2 instance for GitLab Runner - Step 4: Install and configure GitLab Runner on the EC2 instance - Step 5: Create an ECS Fargate cluster - Step 6: Create an ECS task definition - Step 7: Test the configuration - Clean up - Troubleshooting Autoscaling GitLab CI on AWS Fargate The GitLab custom executor driver for AWS Fargate automatically launches a container on the Amazon Elastic Container Service (ECS) to execute each GitLab CI job. After you complete the tasks in this document, the executor can run jobs initiated from GitLab. Each time a commit is made in GitLab, the GitLab instance notifies the runner that a new job is available. The runner then starts a new task in the target ECS cluster, based on a task definition that you configured in AWS ECS. You can configure an AWS ECS task definition to use any Docker image, so you have complete flexibility in the type of builds that you can execute on AWS Fargate. This document shows an example that’s meant to give you an initial understanding of the implementation. It is not meant for production use; additional security is required in AWS. For example, you might want two AWS security groups: - One used by the EC2 instance that hosts GitLab Runner and only accepts SSH connections from a restricted external IP range (for administrative access). - One that applies to the Fargate Tasks and that allows SSH traffic only from the EC2 instance. You can use CloudFormation or Terraform to automate the provisioning and setup of your AWS infrastructure. Prerequisites Before you begin, you should have: - An AWS IAM user with permissions to create and configure EC2, ECS and ECR resources. - AWS VPC and subnets. - One or more AWS security groups. Step 1: Prepare a container image for the AWS Fargate task Prepare a container image. You will upload this image to a registry, where it will be used to create containers when GitLab jobs run. - Ensure the image has the tools required to build your CI job. For example, a Java project requires a Java JDKand build tools like Maven or Gradle. A Node.js project requires nodeand npm. - Ensure the image has GitLab Runner, which handles artifacts and caching. Refer to the Run stage section of the custom executor docs for additional information. - Ensure the container image can accept an SSH connection through public-key authentication. The runner uses this connection to send the build commands defined in the gitlab-ci.ymlfile to the container on AWS Fargate. The SSH keys are automatically managed by the Fargate driver. The container must be able to accept keys from the SSH_PUBLIC_KEYenvironment variable. View a Debian example that includes GitLab Runner and the SSH configuration. View a Node.js example. Step 2: Push the container image to a registry After you create your image, publish the image to a container registry for use in the ECS task definition. - To create a repository and push an image to ECR, follow the Amazon ECR Repositories documentation. - To use the AWS CLI to push an image to ECR, follow the Getting Started with Amazon ECR using the AWS CLI documentation. - To use the GitLab Container Registry, you can use the Debian or NodeJS example. The Debian image is published to registry.gitlab.com/tmaczukin-test-projects/fargate-driver-debian:latest. 
The NodeJS example image is published to registry.gitlab.com/aws-fargate-driver-demo/docker-nodejs-gitlab-ci-fargate:latest. Step 3: Create an EC2 instance for GitLab Runner Now create an AWS EC2 instance. In the next step you will install GitLab Runner on it. - Go to. - For the instance, select the Ubuntu Server 18.04 LTS AMI. The name may be different depending on the AWS region you selected. - For the instance type, choose t2.micro. Click Next: Configure Instance Details. - Leave the default for Number of instances. - For Network, select your VPC. - Set Auto-assign Public IP to Enable. - Under IAM role, click Create new IAM role. This role is for test purposes only and is not secure. - Click Create role. - Choose AWS service and under Common use cases, click EC2. Then click Next: Permissions. - Select the check box for the AmazonECS_FullAccess policy. Click Next: Tags. - Click Next: Review. - Type a name for the IAM role, for example fargate-test-instance, and click Create role. - Go back to the browser tab where you are creating the instance. - To the left of Create new IAM role, click the refresh button. Choose the fargate-test-instancerole. Click Next: Add Storage. - Click Next: Add Tags. - Click Next: Configure Security Group. - Select Create a new security group, name it fargate-test, and ensure that a rule for SSH is defined ( Type: SSH, Protocol: TCP, Port Range: 22). You must specify the IP ranges for inbound and outbound rules. - Click Review and Launch. - Click Launch. - Optional. Select Create a new key pair, name it fargate-runner-managerand click the Download Key Pair button. The private key for SSH is downloaded on your computer (check the directory configured in your browser). - Click Launch Instances. - Click View Instances. - Wait for the instance to be up. Note the IPv4 Public IPaddress. Step 4: Install and configure GitLab Runner on the EC2 instance Now install GitLab Runner on the Ubuntu instance. - Go to your GitLab project’s Settings > CI/CD and expand the Runners section. Under Set up a specific Runner manually, note the registration token. - Ensure your key file has the right permissions by running chmod 400 path/to/downloaded/key/file. SSH into the EC2 instance that you created by using: ssh ubuntu@[ip_address] -i path/to/downloaded/key/file When you are connected successfully, run the following commands: sudo mkdir -p /opt/gitlab-runner/{metadata,builds,cache} curl -s "" | sudo bash sudo apt install gitlab-runner Run this command with the GitLab URL and registration token you noted in step 1. 
sudo gitlab-runner register --url --registration-token TOKEN_HERE --name fargate-test-runner --run-untagged --executor custom -n Run sudo vim /etc/gitlab-runner/config.tomland add the following content: concurrent = 1 check_interval = 0 [session_server] session_timeout = 1800 [[runners]] name = "fargate-test" url = "" token = "__REDACTED__" executor = "custom" builds_dir = "/opt/gitlab-runner/builds" cache_dir = "/opt/gitlab-runner/cache" [runners.custom] config_exec = "/opt/gitlab-runner/fargate" config_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "config"] prepare_exec = "/opt/gitlab-runner/fargate" prepare_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "prepare"] run_exec = "/opt/gitlab-runner/fargate" run_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "run"] cleanup_exec = "/opt/gitlab-runner/fargate" cleanup_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "cleanup"] The section of the config.tomlfile shown below is created by the registration command. Do not change it. concurrent = 1 check_interval = 0 [session_server] session_timeout = 1800 name = "fargate-test" url = "" token = "__REDACTED__" executor = "custom" Run sudo vim /etc/gitlab-runner/fargate.tomland add the following content: LogLevel = "info" LogFormat = "text" [Fargate] Cluster = "test-cluster" Region = "us-east-2" Subnet = "subnet-xxxxxx" SecurityGroup = "sg-xxxxxxxxxxxxx" TaskDefinition = "test-task:1" EnablePublicIP = true [TaskMetadata] Directory = "/opt/gitlab-runner/metadata" [SSH] Username = "root" Port = 22 - Note the value of Cluster, as well as the name of the TaskDefinition. This example shows test-taskwith :1as the revision number. If a revision number is not specified, the latest active revision is used. - Choose your region. Take the Subnetvalue from the Runner Manager instance. To find the security group ID: - In AWS, in the list of instances, select the EC2 instance you created. The details are displayed. - Under Security groups, click the name of the group you created. - Copy the Security group ID. In a production setting, follow AWS guidelines for setting up and using security groups. - The port number of the SSH server is optional. If omitted, the default SSH port (22) is used. Install the Fargate driver: sudo curl -Lo /opt/gitlab-runner/fargate "" sudo chmod +x /opt/gitlab-runner/fargate Step 5: Create an ECS Fargate cluster An Amazon ECS cluster is a grouping of ECS container instances. - Go to. - Click Create Cluster. - Choose Networking only type. Click Next step. - Name it test-cluster(the same as in fargate.toml). - Click Create. - Click View cluster. Note the region and account ID parts from the Cluster ARNvalue. - Click Update Cluster button. - Next to Default capacity provider strategy, click Add another provider and choose FARGATE. Click Update. Refer to the AWS documentation for detailed instructions on setting up and working with a cluster on ECS Fargate. Step 6: Create an ECS task definition In this step you will create a task definition of type Fargate with a reference to the container image that you are going to use for your CI builds. - Go to. - Click Create new Task Definition. - Choose FARGATE and click Next step. - Name it test-task. (Note: The name is the same value defined in the fargate.tomlfile but without :1). - Select values for Task memory (GB) and Task CPU (vCPU). - Click Add container. Then: - Name it ci-coordinator, so the Fargate driver can inject the SSH_PUBLIC_KEYenvironment variable. 
- Define image (for example registry.gitlab.com/tmaczukin-test-projects/fargate-driver-debian:latest). - Define port mapping for 22/TCP. - Click Add. - Click Create. - Click View task definition. SSH_PUBLIC_KEYenvironment variable in containers with the ci-coordinatorname only. You must have a container with this name in all task definitions used by the Fargate driver. The container with this name should be the one that has the SSH server and all GitLab Runner requirements installed, as described above. Refer to the AWS documentation for detailed instructions on setting up and working with task definitions. At this point the GitLab Runner Manager and Fargate Driver are configured and ready to start executing jobs on AWS Fargate. Step 7: Test the configuration Your configuration should now be ready to use. In your GitLab project, create a simple .gitlab-ci.ymlfile: test: script: - echo "It works!" - for i in $(seq 1 30); do echo "."; sleep 1; done - Go to your project’s CI/CD > Pipelines. - Click Run Pipeline. - Update the branch and any variables and click Run Pipeline. imageand servicekeywords in your gitlab-ci.ymlfile are ignored. The runner only uses the values specified in the task definition. Clean up If you want to perform a cleanup after testing the custom executor with AWS Fargate, remove the following objects: - EC2 instance, key pair, IAM role and security group created in step 3. - ECS Fargate cluster created in step 5. - ECS task definition created in step 6. Troubleshooting Application execution failed error when testing the configuration error="starting new Fargate task: running new task on Fargate: error starting AWS Fargate Task: InvalidParameterException: No Container Instances were found in your cluster." The AWS Fargate Driver requires the ECS Cluster to be configured with a default capacity provider strategy. Further reading: - A default capacity provider strategy is associated with each Amazon ECS cluster. If no other capacity provider strategy or launch type is specified, the cluster uses this strategy when a task runs or a service is created. - If a capacityProviderStrategyis specified, the launchTypeparameter must be omitted. If no capacityProviderStrategyor launchTypeis specified, the defaultCapacityProviderStrategyfor the cluster is used.
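The default capacity provider strategy from step 5 (and the troubleshooting note above) can also be set programmatically. A hedged boto3 sketch, using the cluster name and region from the example configuration:

import boto3

ecs = boto3.client("ecs", region_name="us-east-2")

# Make FARGATE the cluster's default capacity provider strategy,
# which the Fargate driver requires (see the troubleshooting note).
ecs.put_cluster_capacity_providers(
    cluster="test-cluster",
    capacityProviders=["FARGATE"],
    defaultCapacityProviderStrategy=[{"capacityProvider": "FARGATE", "weight": 1}],
)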
https://docs.gitlab.com/13.12/runner/configuration/runner_autoscale_aws_fargate/
2022-01-16T23:06:01
CC-MAIN-2022-05
1642320300244.42
[]
docs.gitlab.com
ICMP Anomaly Detection At Security Onion Conference 2016, Eric Conrad shared some IDS rules for detecting unusual ICMP echo requests/replies and identifying C2 channels that may utilize ICMP tunneling for covert communication. Usage We can add the rules to /opt/so/rules/nids/local.rules and the variables to suricata.yaml so that we can gain better insight into ICMP echoes or replies over a certain size, containing particularly suspicious content, etc. Presentation You can find Eric's presentation here: Download You can download the rules here:
https://docs.securityonion.net/en/2.3/icmp-anomaly-detection.html
2022-01-16T21:09:49
CC-MAIN-2022-05
1642320300244.42
[]
docs.securityonion.net
TVS Manager enables single click log generation and bundling versus what would normally be tens of clicks on a variety of screens. After TVS Manager is installed, an action, Collect Logs, will be available on all adapter instances of TVS Management Packs. To collect logs: - Select the correct adapter instance from Inventory (or Inventory Explorer). - Click the gear on the left side of the toolbar - Click Collect Logs - A pop-up window will request additional parameters; these parameters are: - Collections: The number of collections to wait to complete (default: 10) - Timeout: The length of time (in minutes) to wait for the specified number of collections to complete (default: 60 minutes) - Restart?: Whether to restart the adapter instance before collecting logs (default: true) After submitting the action, it will show up under History > Recent Tasks on the vROps Administration tab. Status messages will be shown after selecting the action in the recent tasks list. When the the task completes check for the status message Bundle ID:. This will tell you which Support Bundle was generated by the task.You can download the bundle from Administration > Support > Support Bundles Possible Error Messages: - Unable to run collectLogsAction. collectLogsis already running on Collector ID 1 - This means that Collect Logsis already running on the given collector node. Wait until the other action is finished. - Unable to find result for action collectLogs - This means that the result was removed before it was retrieved. If this error happens, Contact Support. - The likely cause is a TVS Manager Adapter instance being restarted by a user and a collect logs thread was not properly cleaned up. - Extended Log file long_run_$SOLUTIONID_$RESOURCEID.logwas not created. - The log file wasn't created (reason unknown). Contact Support. - Likely causes are: no write permissions on File/folder, or lack of disk space. - Unabled to locate log4j.properties file at path $LOG4J_PROPERTIES_FILE_LOCATION - The known path for log4j.properties is incorrect on the customer's system. Contact Support. - Only one adapter instance per action is supported for the Collect Logs action. - User attempted to collect logs for more than one adapter instance. Select only one adapter instance to run an action against. - No adapter instance given to the Collect Logs action. - The user should not encounter this error (this parameter is given by vROps). Contact Support. - Missing required fields for log collection: any of {timeout, collections, resourceId, adapterKindKey, collectorId, restartAdapterInstance} - The user should not encounter this error (The parameter validation is done by vrops before given to the adapter instance action). Contact Support. - Cannot collect log files for an adapter on a different collector. TVS Collector ID: 1; $adapterKind Collector ID: 2 - The user should not encounter this error, The adapter instance to run the action on is selected by vROps and should always be the one on the same collector. Contact Support. - Collecting logs for $adapterKind is not supported. - The user should not encounter this error, if they do, TVS Manager has a bug and should be reported. - log file LOG_FILE does not exist on collector 1 - The user should not encounter this error. If they do, that means that vrops did not create a log file for the adapter instance they're trying to collect from.
https://docs.vmware.com/en/VMware-vRealize-True-Visibility-Suite/1.0/tvs-manager/GUID-A748A798-955A-46E4-8EE3-864BBDAAFFC5.html
2022-01-16T22:44:02
CC-MAIN-2022-05
1642320300244.42
[]
docs.vmware.com
If your business spends countless hours extracting data from documents and forms, Appian is here to help. Appian includes a rich set of AI features that accelerate the low-code development of document extraction processes. Leverage the power of artificial intelligence to minimize repetitive and manual data extraction, and eliminate the need for expensive, high-maintenance optical character recognition (OCR) software. Any structured and semi-structured PDFs are well-suited for AI-based document extraction. Structured documents follow a fixed layout such as tax and hospital forms. Semi-structured documents contain similar data in a variety of layouts such as invoices, receipts, and utility bills. Unstructured documents, such as legal contracts and emails with free-flowing paragraphs of text, are best supported with other Appian features. We are so excited for you to start automating your document extraction processes that we offer a pre-built Intelligent Document Processing (IDP) application that supports automatic document classification and extraction, performance monitoring, and secure processing across multiple teams right out of the box. You are ready to begin automating your document-centric processes after a few, simple configuration steps without creating custom process models or interfaces. You can also choose to build your own document extraction process using the integrated document extraction smart services and functions in the process modeler. For more information, see Create a Doc Extraction Process. Document extraction identifies data relationships within a PDF document as key-value pairs. For example, an invoice document contains several form fields, so the process will identify the field names and values that are paired together e.g. Invoice Number and INV-12. Using either Google Cloud Document AI or Appian's native document extraction functionality, each key will be mapped to a data type field. This mapping gets smarter over time as you reconcile and correct the extracted data. To reconcile extracted data, Appian auto-generates a form for human-in-the-loop validation of automated extraction results. After this manual reconciliation, Appian will store and recall the mapping of the extracted key to an Appian field. For example, if you provide mappings, then eventually Appian Document Extraction will recognize that Invoice Number, Invoice #, and Invoice No. all map to the invoiceNumber Appian data type field. Document extraction is a powerful tool to use in your business, but before you put in the work to create your own process, think about what you want to do. For example: If you want the ability to customize these aspects of the document extraction process, like how the data moves post-extraction or who corrects results, you may want to create your own document extraction process. Get started by evaluating document extraction features to use in your process. If your goal is to extract data and collect insights quickly with minimal to no set up, you may want to use the pre-built Intelligent Document Processing (IDP) application. IDP uses a standardized document extraction process in conjunction with automatic document classification. All you have to do is upload your document. To take advantage of Appian's full-stack automation features, consider pairing your document extraction process with other Appian AI features and robotic process automation (RPA). We want to make sure you understand where your data goes when you use Appian document extraction features. 
Document extraction provides data privacy and protection because it secures your data with Appian as well as Google Cloud. See Data Security in Document Extraction for more information.
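Conceptually, the learned key mapping described above behaves like a lookup from observed field labels to a canonical data type field. This is only a toy sketch of the idea, not Appian's implementation:

# Map observed form labels to a canonical Appian field name (illustrative only).
KEY_ALIASES = {
    "invoice number": "invoiceNumber",
    "invoice #": "invoiceNumber",
    "invoice no.": "invoiceNumber",
}

def map_key(extracted_key):
    return KEY_ALIASES.get(extracted_key.strip().lower())

print(map_key("Invoice #"))    # -> invoiceNumber
print(map_key("Invoice No."))  # -> invoiceNumber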
https://docs.appian.com/suite/help/21.1/Appian_Doc_Extraction.html
2022-01-16T22:51:12
CC-MAIN-2022-05
1642320300244.42
[]
docs.appian.com
- Overview - Create a new base virtual machine - Create a new runner - How it works - Checklist for Windows VMs VirtualBox VirtualBox allows you to use VirtualBox’s virtualization to provide a clean build environment for every build. This executor supports all systems that can be run on VirtualBox. The only requirement is that the virtual machine exposes its SSH server and provide a bash-compatible shell. Overview The project’s source code is checked out to: ~/builds/<namespace>/<project-name>. Where: <namespace>is the namespace where the project is stored on GitLab <project-name>is the name of the project as it is stored on GitLab To override the ~/builds directory, specify the builds_dir option under the [[runners]] section in config.toml. You can also define custom build directories per job using the GIT_CLONE_PATH. Create a new base virtual machine - Install VirtualBox. - If running from Windows and VirtualBox is installed at the default location (for example %PROGRAMFILES%\Oracle\VirtualBox), GitLab Runner will automatically detect it. Otherwise, you will need to add the installation folder to the PATHenvironment variable of the gitlab-runnerprocess. - Import or create a new virtual machine in VirtualBox - Configure Network Adapter 1 as “NAT” (that’s currently the only way the GitLab Runner is able to connect over SSH into the guest) - (optional) Configure another Network Adapter as “Bridged networking” to get access to the internet from the guest (for example) - Log into the new virtual machine - If Windows VM, see Checklist for Windows VMs - Install the OpenSSH server - Install all other dependencies required by your build - If you want to upload job artifacts, install gitlab-runnerinside the VM - Log out and shut down the virtual machine It’s completely fine to use automation tools like Vagrant to provision the virtual machine. Create a new runner - Install GitLab Runner on the host running VirtualBox - Register a new runner with gitlab-runner register - Select the virtualboxexecutor - Enter the name of the base virtual machine you created earlier (find it under the settings of the virtual machine General > Basic > Name) - Enter the SSH userand passwordor path to identity_fileof the virtual machine How it works When a new build is started: - A unique name for the virtual machine is generated: runner-<short-token>-concurrent-<id> - The virtual machine is cloned if it doesn’t exist - The port-forwarding rules are created to access the SSH server - GitLab Runner starts or restores the snapshot of the virtual machine - GitLab Runner waits for the SSH server to become accessible - GitLab Runner creates a snapshot of the running virtual machine (this is done to speed up any next builds) - GitLab Runner connects to the virtual machine and executes a build - If enabled, artifacts upload is done using the gitlab-runnerbinary inside the virtual machine. - GitLab Runner stops or shuts down the virtual machine Checklist for Windows VMs - Install Cygwin - Install sshd and Git from Cygwin (do not use Git For Windows, you will get lots of path issues!) - Install Git LFS - Configure sshd and set it up as a service (see Cygwin wiki) - Create a rule for the Windows Firewall to allow incoming TCP traffic on port 22 - Add the GitLab server(s) to ~/.ssh/known_hosts - To convert paths between Cygwin and Windows, use the cygpathutility
https://docs.gitlab.com/13.12/runner/executors/virtualbox.html
2022-01-16T21:23:51
CC-MAIN-2022-05
1642320300244.42
[]
docs.gitlab.com
unreal.EditorUtilityWidget - class unreal.EditorUtilityWidget(outer=None, name='None') Bases: unreal.UserWidget Editor Utility Widget C++ Source: Module: Blutility File: EditorUtility. always_reregister_with_windows_menu (bool): [Read-Write] Should this widget always be re-added to the windows menu once it's opened auto_run_default_action (bool): [Read-Write] Should this blueprint automatically run OnDefaultActionClicked, or should it open up a details panel to edit properties and/or offer multiple buttons. help_text (str): [Read-Write] Help Text auto_run_default_action: [Read-Only] Should this blueprint automatically run OnDefaultActionClicked, or should it open up a details panel to edit properties and/or offer multiple buttons - Type: bool
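For context, a common way to open such a widget from the Unreal Python console; the asset path is hypothetical, and EditorUtilitySubsystem.spawn_and_register_tab is assumed to be available in this engine version.

import unreal

# Load an Editor Utility Widget Blueprint asset (hypothetical path).
widget_bp = unreal.EditorAssetLibrary.load_asset("/Game/Tools/EUW_MyTool")

# Spawn it as a registered editor tab.
subsystem = unreal.get_editor_subsystem(unreal.EditorUtilitySubsystem)
subsystem.spawn_and_register_tab(widget_bp)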
https://docs.unrealengine.com/4.27/en-US/PythonAPI/class/EditorUtilityWidget.html
2022-01-16T23:24:33
CC-MAIN-2022-05
1642320300244.42
[]
docs.unrealengine.com
OSM manual demo The manual demo is a step-by-step walkthrough of the automated demo. The OSM project builds on the ideas and implementations of many cloud native ecosystem projects including Linkerd, Istio, Consul, Envoy, Kuma, Helm, and the SMI specification. OSM runs an Envoy-based control plane on Kubernetes, can be configured with SMI APIs, and works by injecting an Envoy proxy as a sidecar container next to each instance of your application. The proxy contains and executes rules around access control policies, implements routing configuration, and captures metrics. The control plane continually configures proxies to ensure policies and routing rules are up to date and ensures proxies are healthy. OSM is under active development and is NOT ready for production workloads. Please search open issues on GitHub, and if your issue isn't already represented please open a new one. The OSM project maintainers will respond to the best of their abilities. The manual demo is a step-by-step walkthrough of the automated demo. The automated demo is a set of scripts anyone can run that shows how OSM can manage, secure, and provide observability for microservice environments.
https://release-v0-9.docs.openservicemesh.io/docs/getting_started/
2022-01-16T22:42:06
CC-MAIN-2022-05
1642320300244.42
[]
release-v0-9.docs.openservicemesh.io
Translate Instances Node The Translate Instances node. The Translate Instances node moves top-level geometry instances in local or global space. The Instances page contains more information about geometry instances. Inputs - Geometry Standard geometry input. - Selection Boolean field used to determine if an instance will be translated. - Translation The vector to translate the instances by. - Local Space If enabled, the instances are translated relative to their initial rotation. Otherwise they are translated in the local space of the modifier object. Properties This node has no properties. Outputs - Geometry Standard geometry output.
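When building node groups from a script, the node can be added with bpy. A sketch assuming a recent Blender build where the node's identifier is GeometryNodeTranslateInstances and the node group named below already exists:

import bpy

# Hypothetical existing Geometry Nodes group.
group = bpy.data.node_groups["Geometry Nodes"]

node = group.nodes.new("GeometryNodeTranslateInstances")
# Move instances one unit up on Z, in the modifier object's space.
node.inputs["Translation"].default_value = (0.0, 0.0, 1.0)
node.inputs["Local Space"].default_value = False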
https://docs.blender.org/manual/ko/dev/modeling/geometry_nodes/instances/translate_instances.html
2022-01-16T21:29:26
CC-MAIN-2022-05
1642320300244.42
[array(['../../../_images/modeling_geometry-nodes_instances_translate-instances_node.png', '../../../_images/modeling_geometry-nodes_instances_translate-instances_node.png'], dtype=object) ]
docs.blender.org
Removing. NOTE: Duplicate key exposures may occur when pasting with the Enforce Key Exposure option selected. - In the Timeline view, select the layer that contains duplicate key exposures. - In the Timeline toolbar, click the Remove Duplicate Key Exposure button (you may have to customize the toolbar to display it).
https://docs.toonboom.com/help/harmony-14/advanced/timing/remove-duplicate-key-exposure.html
2022-01-16T21:59:30
CC-MAIN-2022-05
1642320300244.42
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
CloudsBands - class CloudsBands[source] Bases: eoreader.bands.bands._Bands Clouds bands class Methods - pop(k[, d]) -> v, remove specified key and return the corresponding value. If key is not found, d is returned if given, otherwise KeyError is raised. - popitem() -> (k, v), remove and return some (key, value) pair as a 2-tuple; but raise KeyError if D is empty. - update([E, ]**F)
https://eoreader.readthedocs.io/en/0.8.0/api/eoreader.bands.bands.CloudsBands.html
2022-01-16T23:01:17
CC-MAIN-2022-05
1642320300244.42
[]
eoreader.readthedocs.io
The API URLs / endpoints for our playground environment are found here. Use any fake test data you like, just make sure it passes our validations, for example the postal code format. Do not use any real personal details in our playground environment - data is saved, and if another user is using the same email + zip combination they might get prefilled with your test data.
https://docs.klarna.com/klarna-checkout/in-depth-knowledge/testing/
2022-01-16T22:26:14
CC-MAIN-2022-05
1642320300244.42
[]
docs.klarna.com
- Vectorizing or rendering starts a process named AnimatePro.exe. Locate it and select it. - Click End Process. The Task Manager stops the process and removes it from the Processes tab.
https://docs.toonboom.com/help/harmony-12-2/premium-server/installation/batch-processing/stop-windows-process.html
2022-01-16T21:13:24
CC-MAIN-2022-05
1642320300244.42
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/CCenter_Server/BatchProcessing/hmy_028_killprocess_001.png', None], dtype=object) ]
docs.toonboom.com
1 Introduction At Mendix World on September 8th, 2021, the keynote from Mendix CTO Johan den Haan announced a number of features which Mendix is releasing. Many of these are available immediately, but some are planned for future release in the months following Mendix World 2021. This document contains the calendar of expected release dates for these features. Johan divided the new features into these major announcements, which are described in the sections below: “GA” means General Availability for all users. A GA release is different than a Beta release. For more information on Private Beta and Public Beta releases, see Beta Releases. If you want to watch Johan’s keynote again, you can find it at Mendix World 2021 — you will need to register or have already registered for Mendix World 2021 to see this.
https://docs.mendix.com/releasenotes/mx-world-2021/
2022-01-16T22:43:07
CC-MAIN-2022-05
1642320300244.42
[array(['attachments/index/innovations.png', 'Announcements are solutions platform, app services framework, end-user services and studio, control center, next-level front-end, page bot and machine learning toolkit, studio pro experience, data hub 2.0, intelligent automation, and hybrid cloud automation.'], dtype=object) ]
docs.mendix.com
Automatically apply a retention label to retain or delete content Microsoft 365 licensing guidance for security & compliance. Note This scenario is not supported for regulatory records or default labels for an organizing structure such as a document set or library in SharePoint, or a folder in Exchange. These scenarios require a published retention label policy. One of the most powerful features of retention labels is the ability to apply them automatically to content that matches specified conditions. In this case, people in your organization don't need to apply the retention labels. Microsoft 365 does the work for them. Auto-applying apply retention labels to content automatically when that content doesn't already have a retention label applied and contains sensitive information, keywords or searchable properties, or a match for trainable classifiers. Now in preview, you can also automatically apply a retention label to cloud attachments that are stored in SharePoint or OneDrive. Tip Use searchable properties to identify Teams meeting recordings and items that have a sensitivity label applied. The processes to automatically apply a retention label based on these conditions: Use the following instructions for the two admin steps. Note Auto-policies use service-side labeling with conditions to automatically apply retention labels to items. You can also automatically apply a retention label with a label policy when you do the following: - Apply a retention label to a document understanding model in SharePoint Syntex - Apply a default retention label for SharePoint and Outlook - Apply a retention label to email by using Outlook rules For these scenarios, see Create and apply retention labels in apps. Before you begin The global admin for your organization has full permissions to create and edit retention labels and their policies. If you aren't signing in as a global admin, see Permissions required to create and manage retention policies and. How to auto-apply a retention label First, create your retention label. Then create an auto-policy to apply that label. If you have already created your retention label, skip to creating an auto-policy. Navigation instructions depend on whether you're using records management or not. Instructions are provided for both scenarios. Step 1: Create a retention label In the Microsoft 365 compliance center, navigate to one of the following locations: If you are using records management: - Solutions > Records management > File plan tab > + Create a label > Retention label If you are not using records management: - Solutions > Information governance > Labels tab > + Create a label Don't immediately see your solution in the navigation pane? First select Show all. Follow the prompts in the configuration. For more information about the retention settings, see Settings for retaining and deleting content. However, if the label will be used for cloud attachments, make sure you configure the start of the retention period to be When items were labeled. If you are using records management: For information about the file plan descriptors, see Use file plan to manage retention labels Auto-apply this label to a specific type of content, and then select Done To edit an existing label, select it, and then select the Edit label option to start the Edit retention label configuration that lets you change the label descriptions and any eligible settings from step 2. 
Step 2: Create an auto-apply policy When you create an auto-apply policy, you select a retention label to automatically apply to content, based on the conditions that you specify. In the Microsoft 365 compliance center, navigate to one of the following locations: If you are using records management: Information governance: - Solutions > Records management > Label policies tab > Auto-apply a label If you are not using records management: - Solutions > Information governance > Label policies tab > Auto-apply a label Don't immediately see your solution in the navigation pane? First select Show all. Enter a name and description for this auto-labeling policy, and then select Next. For Choose the type of content you want to apply this label to, select one of the available conditions. For more information about the choices, see the Configuring conditions for auto-apply retention labels section on this page.. Follow the prompts in the wizard to select a retention label, and then review and submit your configuration choices. To edit an existing auto-apply policy, select it to start the Edit retention policy configuration that lets you change the selected retention label and any eligible settings from step 2. After content is labeled by using an auto-apply label policy, the applied label can't be automatically removed or changed by changing the content or the policy, or by a new auto-apply label policy. For more information, see Only one retention label at a time. Note An auto-apply retention label policy will never replace an existing retention label that's applied to content. If you want to relabel content by using the conditions you configure, you'll need to manually remove the current retention label from existing content. Configuring conditions for auto-apply retention labels You can apply retention labels to content automatically when that content contains: Specific types of sensitive information Specific keywords or searchable properties that match a query you create A match for trainable classifiers Or, you can automatically apply retention labels to newly shared cloud attachments. When you configure retention labels to auto-apply based on sensitive information, keywords or searchable properties, or trainable classifiers, use the following table to identify when retention labels can be automatically applied. Exchange: SharePoint and OneDrive: Additionally, SharePoint items that are in draft or that have never been published aren't supported for this scenario. Auto-apply labels to content with specific types of sensitive information Important For emails that you auto-apply by identifying sensitive information, all mailboxes are automatically included, which includes mailboxes from Microsoft 365 groups. Although group mailboxes would usually be included by selecting the Microsoft 365 Groups location, for this specific policy configuration, the groups location includes only SharePoint sites connected to a Microsoft 365 group. When you create auto-apply retention label policies for sensitive information, you see the same list of policy templates as when you create a data loss prevention (DLP) policy. Each template is preconfigured to look for specific types of sensitive information. In the following example, the sensitive info types are from the Privacy category, and U.S Personally Identifiable Information (PII) Data template: To learn more about the sensitivity information types, see Learn about sensitive information types. 
Currently, Learn about exact data match based sensitive information types and document fingerprinting are not supported for this scenario. After you select a policy template, you can add or remove any types of sensitive information, and you can change the confidence level and instance count. In the previous example screenshot, these options have been changed so that a retention label will be auto-applied only when: The type of sensitive information that's detected has a match accuracy (or confidence level) of at least Medium confidence for two of the sensitive info types, and High confidence for one. Many sensitive information types are defined with multiple patterns, where a pattern with a higher match accuracy requires more evidence to be found (such as keywords, dates, or addresses), while a pattern with a lower match accuracy requires less evidence. The lower the confidence level, the easier it is for content to match the condition but with the potential for more false positives. The content contains between 1 and 9 instances of any of these three sensitive info types. The default for the to value is Any. For more information about these options, see the following guidance from the DLP documentation Tuning rules to make them easier or harder to match. Important Sensitive information types have two different ways of defining the max unique instance count parameters. To learn more, see Instance count supported values for SIT. To consider when using sensitive information types to auto-apply retention labels: If you use custom sensitive information types, these can't auto-label existing items in SharePoint and OneDrive. For emails, you can't select specific recipients to include or exclude; only the All recipients setting is supported and for this configuration only, it includes mailboxes from Microsoft 365 groups. Auto-apply labels to content with keywords or searchable properties You can auto-apply labels to content by using a query that contains specific words, phrases, or values of searchable properties. You can refine your query by using search operators such as AND, OR, and NOT. For more information about the query syntax that uses Keyword Query Language (KQL), see Keyword Query Language (KQL) syntax reference. Query-based auto-apply policies use the same search index as eDiscovery content search to identify content. For more information about the searchable properties that you can use, see Keyword queries and search conditions for Content Search. Some things to consider when using keywords or searchable properties to auto-apply retention labels: For SharePoint, crawled properties and custom properties aren't supported for these KQL queries and you must use only predefined managed properties for documents. However, you can use mappings at the tenant level with the predefined managed properties that are enabled as refiners by default (RefinableDate00-19, RefinableString00-99, RefinableInt00-49, RefinableDecimals00-09, and RefinableDouble00-09). For more information, see Overview of crawled and managed properties in SharePoint Server, and for instructions, see Create a new managed property. If you map a custom property to one of the refiner properties, wait 24 hours before you use it in your KQL query for a retention label. Although SharePoint managed properties can be renamed by using aliases, don't use these for KQL queries in your labels. Always specify the actual name of the managed property, for example, "RefinableString01". 
To search for values that contain spaces or special characters, use double quotation marks ( " ") to contain the phrase; for example, subject:"Financial Statements". Use the DocumentLink property instead of Path to match an item based on its URL. Suffix wildcard searches (such as *cat) or substring wildcard searches (such as *cat*) aren't supported. However, prefix wildcard searches (such as cat*) are supported. Be aware that partially indexed items can be responsible for not labeling items that you're expecting, or labeling items that you're expecting to be excluded from labeling when you use the NOT operator. For more information, see Partially indexed items in Content Search. Examples queries: More complex examples: The following query for SharePoint identifies Word documents or Excel spreadsheets when those files contain the keywords password, passwords, or pw: (password OR passwords OR pw) AND (filetype:doc* OR filetype:xls*) The following query for Exchange identifies any Word document or PDF that contains the word nda or the phrase non disclosure agreement when those documents are attached to an email: (nda OR "non disclosure agreement") AND (attachmentnames:.doc* OR attachmentnames:.pdf) The following query for SharePoint identifies documents that contain a credit card number: sensitivetype:"credit card number" The following query contains some typical keywords to help identify documents or emails that contain legal content: ACP OR (Attorney Client Privilege*) OR (AC Privilege) The following query contains typical keywords to help identify documents or emails for human resources: (resume AND staff AND employee AND salary AND recruitment AND candidate) Note that this final example uses the best practice of always including operators between keywords. A space between keywords (or two property:value expressions) is the same as using AND. By always adding operators, it's easier to see that this example query will identify only content that contains all these keywords, instead of content that contains any of the keywords. If your intention is to identify content that contains any of the keywords, specify OR instead of AND. As this example shows, when you always specify the operators, it's easier to correctly interpret the query. Microsoft Teams meeting recordings Note The ability to retain and delete Teams meeting recordings won't work before recordings are saved to OneDrive or SharePoint. For more information, see Use OneDrive for Business and SharePoint Online or Stream for meeting recordings. To identify Microsoft Teams meeting recordings that are stored in users' OneDrive accounts or in SharePoint, specify the following for the Keyword query editor: ProgID:Media AND ProgID:Meeting Most of the time, meeting recordings are saved to OneDrive. But for channel meetings, they are saved in SharePoint. 
Identify files and emails that have a sensitivity label To identify files in SharePoint or OneDrive and Exchange emails that have a specific sensitivity label applied, specify the following for the Keyword query editor: InformationProtectionLabelId:<GUID> To find the GUID, use the Get-Label cmdlet from Security & Compliance Center PowerShell: Get-Label | Format-Table -Property DisplayName, Name, Guid Auto-apply labels to content by using trainable classifiers When you choose the option for a trainable classifier, you can select one or more of the pre-trained or custom trainable classifiers: Caution We are deprecating the Offensive Language pre-trained classifier because it has been producing a high number of false positives. Don't use this classifier and if you are currently using it, we recommend you move your business processes off it and instead use the Targeted Harassment, Profanity, and Threat pre-trained classifiers. To automatically apply a label by using this option, SharePoint sites and mailboxes must have at least 10 MB of data. For more information about trainable classifiers, see Learn about trainable classifiers. Tip If you use trainable classifiers for Exchange, see How to retrain a classifier in content explorer. To consider when using trainable classifiers to auto-apply retention labels: - You can't auto-label SharePoint and OneDrive items that are older than six months. Auto-apply labels to cloud attachments Note This option is gradually rolling out in preview and is subject to change. You might need to use this option if you're required to capture and retain all copies of files in your tenant that are sent over communications by users. You use this option in conjunction with retention policies for the communication services themselves, Exchange and Teams. Important When you select a label to use for auto-applying retention labels for cloud attachments, ensure that the label retention setting Start the retention period based on is When items were labeled. Cloud attachments, sometimes also known as modern attachments, are a sharing mechanism that uses embedded links to files that are stored in the cloud. They support centralized storage for shared content with collaborative benefits, such as version control. Cloud attachments are not attached copies of a file or a URL text link to a file. You might find it helpful to refer to the visual checklists for supported cloud attachments in Outlook and Teams. When you choose the option to apply a retention label to cloud attachments, for compliance purposes, a copy of that file is created at the time of sharing. Your selected retention label is then applied to the copy that can then be identified using eDiscovery. Users are not aware of the copy that is stored in the Preservation Hold library. The retention label is not applied to the message itself, or to the original file. If the file is modified and shared again, a new copy of the file as a new version is saved in the Preservation Hold library. For more information, including why you should use the When items were labeled label setting, see How retention works with cloud attachments. The cloud attachments supported for this option are files such as documents, videos, and images that are stored in SharePoint and OneDrive. For Teams, cloud attachments shared in chat messages, and standard and private channels are supported. Cloud attachments shared over meeting invites and apps other than Teams or Outlook aren't supported. 
The cloud attachments must be shared by users; cloud attachments sent via bots aren't supported. Although not required for this option, we recommend that you ensure versioning is enabled for your SharePoint sites and OneDrive accounts so that the version shared can be accurately captured. If versioning isn't enabled, the last available version will be retained. Documents in draft or that have never been published aren't supported. When you select a label to use for auto-applying retention labels for cloud attachments, make sure the label retention setting Start the retention period based on is When items were labeled. When you configure the locations for this option, you can select: - SharePoint sites for shared files stored in SharePoint communication sites, team sites that aren't connected by Microsoft 365 groups, and classic sites. - Microsoft 365 Groups for shared files that are stored in team sites connected by Microsoft 365 groups. - OneDrive accounts for shared files stored in users' OneDrive. You will need to create separate retention policies if you want to retain or delete the original files, email messages, or Teams messages. Note If you want retained cloud attachments to expire at the same time as the messages that contained them, configure the retention label to have the same retain and then delete actions and timings as your retention policies for Exchange and Teams. To consider when auto-applying retention labels to cloud attachments: Only newly shared cloud attachments will be auto-labeled for retention. Cloud attachments shared outside Teams and Outlook aren't supported. The following items aren't supported as cloud attachments that can be retained: - SharePoint sites, pages, lists, forms, folders, document sets, and OneNote pages. - Files shared by users who don't have access to those files. - Files that are deleted before the cloud attachment is sent. This can happen if a user copies and pastes a previously shared attachment from another message, without first confirming that the file is still available. Or, somebody forwards an old message when the file is now deleted. - Files that are shared by guests or users outside your organization. - Files in draft emails and messages that aren't sent. - Empty files. How long it takes for retention labels to take effect When you auto-apply retention labels based on sensitive information, keywords or searchable properties, or trainable classifiers, it can take up to seven days for the retention labels to be applied: If the expected labels don't appear after seven days, check the Status of the auto-apply policy by selecting it from the Label policies page in the compliance center. If you see the status of Off (Error) and in the details for the locations see a message that it's taking longer than expected to deploy the policy (for SharePoint) or to try redeploying the policy (for OneDrive), try running the Set-RetentionCompliancePolicy PowerShell command to retry the policy distribution: Connect to Security & Compliance Center PowerShell. 
Run the following command: Set-RetentionCompliancePolicy -Identity <policy name> -RetryDistribution Updating retention labels and their policies For auto-apply retention label policies that are configured for sensitive information, keywords or searchable properties, or a match for trainable classifiers: When a retention label from the policy is already applied to content, a change in configuration to the selected label and policy will be automatically applied to this content in addition to content that's newly identified. For auto-apply retention label policies that are configured for cloud attachments: Because this policy applies to newly shared files rather than existing files, a change in configuration to the selected label and policy will be automatically applied to newly shared content only. retention label policies, that aren't configured for event-based retention, or mark items as regulatory records. For retention labels that you can delete, if they have been applied to items, the deletion fails and you see a link to content explorer to identify the labeled items. However, it can take up to two days for content explorer to show the items that are labeled. In this scenario, the retention label might be deleted without showing you the link to content explorer. Locking the policy to prevent changes If you need to ensure that no one can turn off the policy, delete the policy, or make it less restrictive, see Use Preservation Lock to restrict changes to retention policies and retention label policies. Next steps See Use retention labels to manage the lifecycle of documents stored in SharePoint for an example scenario that uses an auto-apply retention label policy with managed properties in SharePoint, and event-based retention to start the retention period.
https://docs.microsoft.com/en-gb/microsoft-365/compliance/apply-retention-labels-automatically?view=o365-worldwide
Creating a New WP Model¶ While NFLWin ships with a fairly robust default model, there is always room for improvement. Maybe there’s a new dataset you want to use to train the model, a new feature you want to add, or a new machine learning model you want to evaluate. Good news! NFLWin makes it easy to train a new model, whether you just want to refresh the data or to do an entire refit from scratch. We’ll start with the simplest case: Default Model, New Data¶ Refreshing the data with NFLWin is a snap. If you want to change the data used by the default model but keep the source as nfldb, all you have to do is override the default keyword arguments when calling the train_model() and validate_model() methods. For instance, if for some insane reason you wanted to train on the 2009 and 2010 regular seasons and validate on the 2011 and 2012 playoffs, you would do the following: >>> from nflwin.model import WPModel >>> new_data_model = WPModel() >>> new_data_model.train_model(training_seasons=[2009, 2010], training_season_types=["Regular"]) >>> new_data_model.validate_model(validation_seasons=[2011, 2012], validation_season_types=["Postseason"]) (21.355462918011327, 565.56909036318007) If you want to supply your own data, that’s easy too - simply set the source_data kwarg of train_model() and validate_model() to be a Pandas DataFrame of your training and validation data (respectively): >>> from nflwin.model import WPModel >>> new_data_model = WPModel() >>> training_data.head() gsis_id drive_id play_id offense_team yardline down yards_to_go \ 0 2012090500 1 35 DAL -15.0 0 0 1 2012090500 1 57 NYG -34.0 1 10 2 2012090500 1 79 NYG -34.0 2 10 3 2012090500 1 103 NYG -29.0 3 5 4 2012090500 1 125 NYG -29.0 4 5 home_team away_team offense_won quarter seconds_elapsed curr_home_score \ 0 NYG DAL True Q1 0.0 0 1 NYG DAL False Q1 4.0 0 2 NYG DAL False Q1 11.0 0 3 NYG DAL False Q1 55.0 0 4 NYG DAL False Q1 62.0 0 curr_away_score 0 0 1 0 2 0 3 0 4 0 >>> new_data_model.train_model(source_data=training_data) >>> validation_data.head() gsis_id drive_id play_id offense_team yardline down yards_to_go \ 0 2014090400 1 36 SEA -15.0 0 0 1 2014090400 1 58 GB -37.0 1 10 2 2014090400 1 79 GB -31.0 2 4 3 2014090400 1 111 GB -26.0 1 10 4 2014090400 1 132 GB -11.0 1 10 home_team away_team offense_won quarter seconds_elapsed curr_home_score \ 0 SEA GB True Q1 0.0 0 1 SEA GB False Q1 4.0 0 2 SEA GB False Q1 30.0 0 3 SEA GB False Q1 49.0 0 4 SEA GB False Q1 88.0 0 curr_away_score 0 0 1 0 2 0 3 0 4 0 >>> new_data_model.validate_model(source_data=validation_data) (8.9344062502671591, 265.7971863696315) Building a New Model¶ If you want to construct a totally new model, that’s possible too. Just instantiate WPModel, then replace the model attribute with either a scikit-learn classifier or Pipeline. From that point train_model() and validate_model() should work as normal. Note If you create your own model, the column_descriptions attribute will no longer be accurate unless you update it manually. Note If your model uses a data structure other than a Pandas DataFrame, you will not be able to use the source_data="nfldb" default kwarg of train_model() and validate_model(). If you want to use nfldb data, query it through nflwin.utilities.get_nfldb_play_data() first and convert it from a DataFrame to the format required by your model. Using NFLWin’s Preprocessors¶ While you can completely roll your own WP model from scratch, NFLWin comes with several classes designed to aid in preprocessing your data. 
These can be found in the appropriately named preprocessing module. Each of these preprocessors inherits from scikit-learn’s BaseEstimator class, and therefore is fully compatible with scikit-learn Pipelines. Available preprocessors include: ComputeElapsedTime: Convert the time elapsed in a quarter into the total seconds elapsed in the game. ComputeIfOffenseIsHome: Create an indicator variable for whether or not the offense is the home team. CreateScoreDifferential: Create a column indicating the difference between the offense and defense point totals (offense-defense). Uses home team and away team plus an indicator giving if the offense is the home team to compute. MapToInt: Map a column of values to integers. Useful for string columns (e.g. a quarter column with “Q1”, “Q2”, etc). CheckColumnNames: Ensure that only the desired data gets passed to the model in the right order. Useful to guarantee that the underlying numpy arrays in a Pandas DataFrame used for model validation are in the same order as they were when the model was trained. To see examples of these preprocessors in use to build a model, look at nflwin.model.WPModel.create_default_pipeline(). Model I/O¶ To save a model to disk, use the nflwin.model.WPModel.save_model() method. Note If you do not provide a filename, the default model will be overwritten and in order to recover it you will need to reinstall NFLWin (which will then overwrite any non-default models you have saved). To load a model from disk, use the nflwin.model.WPModel.load_model() class method. By default this will load the standard model that comes bundled with pip installs of NFLWin. Simply specify the filename kwarg to load a non-standard model. Note By default, models are saved to and loaded from the path given by nflwin.model.WPModel.model_directory, which by default is located inside your NFLWin install. Estimating Quality of Fit¶ When you care about measuring the probability of a classification model rather than getting a yes/no prediction it is challenging to estimate its quality. This is an area I’m actively looking to improve upon, but for now NFLWin does the following. First, it takes the probabilities given by the model for each play in the validation set, then produces a kernel density estimate (KDE) of all the plays as well as just the ones that were predicted correctly. The ratio of these two KDEs is the actual WP measured from the test data set at a given predicted WP. While all of this is measured in validate_model(), you can plot it for yourself by calling the plot_validation() method, which will generate a plot like that shown on the home page. From there NFLWin computes both the maximum deviation at any given percentage and the total area between the estimated WP from the model and what would be expected if the model was perfect - that’s what is actually returned by validate_model(). This is obviously not ideal given that it’s not directly estimating uncertainties in the model, but it’s the best I’ve been able to come up with so far. If anyone has an idea for how to do this better I would welcome it enthusiastically.
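Putting the workflow from this page together, here is a minimal end-to-end sketch. It uses only the methods described above (train_model, validate_model, plot_validation, save_model, load_model) and assumes a working nfldb install; the seasons and file name are illustrative, and the exact keyword names for saving and loading should be checked against your installed version.

```python
from nflwin.model import WPModel

# Build a model with the default preprocessing/estimation pipeline.
model = WPModel()

# Train and validate on nfldb data for the chosen seasons.
model.train_model(training_seasons=[2013, 2014])
max_deviation, area = model.validate_model(validation_seasons=[2015])
print("max deviation: %.3f, area between curves: %.3f" % (max_deviation, area))

# Inspect the KDE-ratio validation plot described above.
model.plot_validation()

# Persist under a non-default name, then load it back later.
model.save_model(filename="custom_wp_model.nflwin")
reloaded = WPModel.load_model(filename="custom_wp_model.nflwin")
```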
http://nflwin.readthedocs.io/en/stable/model.html
See Also: TreeView Members In this topic: Customizing the User Interface The System.Web.UI.WebControls System.Web.UI.WebControls.SiteMapDataSource control. Node text that can be displayed as either plain text or hyperlinks. Programmatic access to the System.Web.UI.WebControls.TreeView object model to create trees, populate nodes, set properties, and so on dynamically. Client-side node population (on supported browsers). The ability to display a check box next to each node. Customizable appearance through themes, user-defined images, and styles. The System.Web.UI.WebControls.TreeView control is made up of nodes. Each entry in the tree is called a node and is represented by a System.Web.UI.WebControls.TreeNode object. Node types are defined as follows: A node that contains other nodes is called a parent node. The node that is contained by another node is called a child node. A node that has no children is called a leaf node. The node that is not contained by any other node but is the ancestor to all the other nodes is the root node. A node can be both a parent and a child, but root, parent, and leaf nodes are mutually exclusive. Several visual and behavioral properties of nodes are determined by whether a node is a root, child, or leaf node. Although a typical tree structure has only one root node, the System.Web.UI.WebControls.TreeView control allows you to add multiple root nodes to your tree structure. This is useful when you want to display item listings without displaying a single root node, as in a list of product categories. Each node has a TreeNode.Text property and a TreeNode.Value property. The value of the TreeNode.Text property is displayed in the System.Web.UI.WebControls.TreeView, while the TreeNode TreeNode.NavigateUrl property for the node to a value other than an empty string (""). To put a node into selection mode, set the TreeNode.NavigateUrl property for the node to an empty string (""). Some Internet browsers have a limitation that can affect the performance of the System.Web.UI.WebControls.TreeView control. For example, Microsoft Internet Explorer 6.0 has a URL character limit of 2067 characters that it posts. If the number of characters in a URL of a node is larger than that number, expanding that node will fail and no exception is thrown. The simplest data model of the System.Web.UI.WebControls.TreeView control is static data. To display static data using declarative syntax, first nest opening and closing <Nodes> tags between the opening and closing tags of the System.Web.UI.WebControls.TreeView control. Next, create the tree structure by nesting <asp:TreeNode> elements between the opening and closing <Nodes> tags. Each <asp:TreeNode> element represents a node in the tree and maps to a System.Web.UI.WebControls. The System.Web.UI.WebControls.TreeView control can also be bound to data. You can use either of two methods to bind the System.Web.UI.WebControls.TreeView control to the appropriate data source type: The System.Web.UI.WebControls.TreeView control can use any data source control that implements the System.Web.UI.IHierarchicalDataSource interface, such as an System.Web.UI.WebControls.XmlDataSource control or a System.Web.UI.WebControls.SiteMapDataSource control. To bind to a data source control, set the DataBoundControl.DataSourceID property of the System.Web.UI.WebControls.TreeView control to the System.Web.UI.Control.ID value of the data source control. The System.Web.UI.WebControls.TreeView control automatically binds to the specified data source control. 
This is the preferred method to bind to data. The System.Web.UI.WebControls.TreeView control can also be bound to an System.Xml.XmlDocument object or a System.Data.DataSet object with relations. To bind to one of these data sources, set the BaseDataBoundControl.DataSource property of the System.Web.UI.WebControls.TreeView control to the data source, and then call the BaseDataBoundControl.DataBind method. using the TreeView.DataBindings collection. The TreeView System.Web.UI.WebControls.TreeNodeBinding. A malicious user can create a callback request and get data for the nodes of the System.Web.UI.WebControls.TreeView control that the page developer is not displaying. Therefore, security of the data must be implemented by the data source. Do not use the TreeView.MaxDataBindDepth property to hide data. Sometimes, it is not practical to statically define the tree structure because the data source returns too much data or because the data to display depends on information that you get at run time. Because of this, the System.Web.UI.WebControls.TreeView control supports dynamic node population. When the TreeNode.PopulateOnDemand property for a node is set to true, that node gets populated at run time when the node is expanded. To populate a node dynamically, you must define an event-handling method that contains the logic to populate a node for the TreeView.TreeNodePopulate event. Browsers that support callback scripts can also take advantage of client-side node population. (This includes Internet Explorer 5.5 and later and some other browsers.) Client-side node population enables the System.Web.UI.WebControls.TreeView control to populate a node using client script when users expand the node, without requiring a round trip to the server. For more information on client-side node population, see TreeView.PopulateNodesFromClient. There are many ways to customize the appearance of the System.Web.UI.WebControls.TreeView control. First, you can specify a different style (such as font size and color) for each of the node types. If you use cascading style sheets (CSS) to customize the appearance of the control, use either inline styles or a separate CSS file, but not both. Using both inline styles and a separate CSS file could cause unexpected results. For more information on using style sheets with controls, see ASP.NET Web Server Controls and CSS Styles. The following table lists the available node styles. You can also control the style of nodes at specific depths within the tree by using the TreeView. If a style is defined for a certain depth level using the TreeView.LevelStyles collection, that style overrides any root, parent, or leaf node style settings for the nodes at that depth. Another way to alter the appearance of the control is to customize the images that are displayed in the System.Web.UI.WebControls.TreeView control. You can specify your own custom set of images for the different parts of the control by setting the properties shown in the following table. You do not need to customize every image property. If an image property is not explicitly set, the built-in default image is used. The System.Web.UI.WebControls.TreeView control also allows you to display a check box next to a node. When the TreeView.ShowCheckBoxes property is set to a value other than TreeNodeTypes.None, check boxes are displayed next to the specified node types. The TreeView.ShowCheckBoxes property can be set to a bitwise combination of the System.Web.UI.WebControls.TreeNodeTypes enumeration member values. 
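For example (myTreeView is a placeholder for your control instance), enabling check boxes for more than one node type is a matter of combining the enumeration members with the bitwise OR operator:

```csharp
// Show check boxes next to parent and leaf nodes, but not the root node.
myTreeView.ShowCheckBoxes = TreeNodeTypes.Parent | TreeNodeTypes.Leaf;
```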
Each time the page is posted to the server, the TreeView.CheckedNodes collection is automatically populated with the selected nodes. When check boxes are displayed, you can use the TreeView.TreeNodeCheckChanged event to run a custom routine whenever the state of a check box changes between posts to the server. The System.Web.UI.WebControls.TreeView control provides several events that you can program against. This allows you to run a custom routine whenever an event occurs. The following table lists the events that are supported by the System.Web.UI.WebControls.TreeView control. The System.Web.UI.WebControls.TreeView control does not have built-in scrolling. To add scrolling, place the System.Web.UI.WebControls.TreeView control in a System.Web.UI.WebControls.Panel control and add scrollbars to the System.Web.UI.WebControls.Panel control. For more information, see Panel Web Server Control Overview. The markup rendered by default for this control might not conform to accessibility standards. For details about accessibility support for this control, see ASP.NET Controls and Accessibility. Example <asp:TreeView AccessKey="string" AutoGenerateDataBindings="
http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Web.UI.WebControls.TreeView
ServiceStack’s API design ServiceStack Services lets you return any kind of POCO, including naked collections: [Route("/reqstars")] public class GetReqstars : IReturn<List<Reqstar>> { } public class ReqstarsService : Service { public object Get(GetReqstars request) => Db.Select<Reqstar>(); } That your C# clients can call with just: List<Reqstar> response = client.Get(new GetReqstars()); This will make a GET call to the custom /reqstars url, making it the minimum effort required in any Typed REST API in .NET! When the client doesn’t contain the [Route] definition it automatically falls back to using ServiceStack’s pre-defined routes - saving an extra LOC! Using explicit Response DTO A popular alternative to returning naked collections is to return explicit Response DTO, e.g: [Route("/reqstars")] public class GetReqstars : IReturn<GetReqstarsResponse> { } public class GetReqstarsResponse { public List<Reqstar> Results { get; set; } public ResponseStatus ResponseStatus { get; set; } } public class ReqstarsService : Service { public object Get(GetReqstars request) { return new GetReqstarsResponse { Results = Db.Select<Reqstar>() }; } } Whilst slightly more verbose this style benefits from better versionability and more coarse-grained APIs as additional results can be added to the Response DTO without breaking existing clients. You’ll also need to follow the above convention if you also wanted to support SOAP clients and Wrapper IResponse Response { get; } //HTTP Response Wrapper IServiceGateway Gateway { get; } //Built-in Service Gateway IVirtualPathProvider VirtualFileSources //Virtual FileSystem Sources IVirtualFiles VirtualFiles { get; } //Writable Virtual FileSystem ICacheClient Cache { get; } //Registered Caching Provider MemoryCacheClient LocalCache { get; } //Local InMemory Caching Provider IDbConnection Db { get; } //Registered ADO.NET IDbConnection IRedisClient Redis { get; } //Registered RedisClient IMessageProducer MessageProducer { get; } //Message Producer for Registered MQ Server IServiceGateway Gateway { get; } //Service Gateway IAuthRepository AuthRepository { get; } //Registered User Repository ISession SessionBag { get; } //Dynamic Session Bag TUserSession SessionAs<TUserSession>(); //Resolve Typed UserSession T TryResolve<T>(); //Resolve dependency at runtime T ResolveService<T>(); //Resolve an auto-wired service void PublishMessage(T message); //Publish messages to Registered MQ Server bool IsAuthenticated { get; } //Is Authenticated Request void Dispose(); //Override to implement custom Dispose } Basic example - Handling Any HTTP Verb Lets revisit the Simple example from earlier: [Route("/reqstars")] public class GetReqstars : IReturn<List<Reqstar>> { } public class ReqstarsService : Service { public object Get(GetReqstars request) => Db.Select<Reqstar>(); } ServiceStack maps HTTP Requests to your Services Actions. An Action is any method that: - Is public - Only contains a single argument - the typed Request DTO - Has a Method name matching a HTTP Method or Any which is used as a fallback if it exists - Can specify either Tor objectReturn type, both have same behavior The above example will handle any GetReqstars request made on any HTTP Verb or endpoint and will return the complete List<Reqstar> contained in your configured RDBMS. ‘dependency-free’ for maximum accessibility and potential re-use. 
Our recommendation is to follow our Recommended Physical Project Structure and keep your DTOs in a separate ServiceModel project which ensures a well-defined ServiceContract decoupled from their implemenation("/reqstars")] public class GetReqstars : IReturn<List<Reqstar>> { } public class Reqstar { ... } Which can used in any ServiceClient with: var client = new JsonServiceClient(BaseUri); List<Reqstar> response = client.Get(new GetReqstars()); Which makes a GET web request to the /reqstarsReqstars());<Reqstar>>("/reqstars"); All these Service Client APIs have async equivalents with an *Asyncsuffix.Reqstars Request DTO you can just add /Views/GetReqstars}/reqstars" ReqstarsService : Service { [ClientCanSwapTemplates] public object Get(GetReqstars request) => Db.Select<Reqstar>(); } ReqstarsService : Service { [EnableCors] public void Options(GetReqstar request) {} } Which if you now make an OPTIONS request to the above service, will emit the default [EnableCors] headers: var webReq = (HttpWebRequest)WebRequest.Create(Host + "/reqstars");("/reqstars/{Id}", "PATCH")] public class UpdateReqstar : IReturn<Reqstar> { public int Id { get; set; } public int Age { get; set; } } public Reqstar Patch(UpdateReqstar request) { Db.Update<Reqstar>(request, x => x.Id == request.Id); return Db.Id<Reqstar>(request.Id); } And the client call is just as easy as you would expect: var response = client.Patch(new UpdateReqstar {<Reqstar> Post(Reqstar request) { if (!request.Age.HasValue) throw new ArgumentException("Age is required"); Db.Insert(request.TranslateTo<Reqstar>()); return Db.Select<Reqstar>(); } This will result in an Error thrown on the client if it tried to create an empty Reqstar: try { var response = client.Post(new Reqstar()); }Reqstars()); }("/reqstars")] public class Reqstar {} [Route("/reqstars", "GET")] public class GetReqstars {} [Route("/reqstars/{Id}", "GET")] public class GetReqstar {} [Route("/reqstars/{Id}/{Field}")] public class ViewReqstar {} [Route("/reqstars/{Id}/delete")] public class DeleteReqstar {} [Route("/reqstars/{Id}", "PATCH")] public class UpdateReqstar {} [Route("/reqstars/reset")] public class ResetReqstar {} [Route("/reqstars/search")] [Route("/reqstars/aged/{Age}")] public class SearchReqstars {} These are results for these HTTP Requests GET /reqstars => GetReqstars POST /reqstars => Reqstar GET /reqstars/search => SearchReqstars GET /reqstars/reset => ResetReqstar PATCH /reqstars/reset => ResetReqstar PATCH /reqstars/1 => UpdateReqstar GET /reqstars/1 => GetReqstar GET /reqstars/1/delete => DeleteReqstar GET /reqstars/1/foo => ViewReqstar Advanced Usages Custom Hooks The ability to extend ServiceStack’s service execution pipeline with Custom Hooks is an advanced customisation feature that for most times is not needed as the preferred way to add composable functionality to your services is to use Request / Response Filter attributes or apply them globally with Global Request/Response Filters.Context requestContext, TRequest request) { // Called just before any Action is executed } public override object OnAfterExecute(IRequestContext requestContext, object response) { // Called just after any Action is executed, you can modify the response returned here as well } public override object HandleException(IRequestContext requestContext, TRequest request,Reqstars>, IGet<SearchReqstars>, IPost<Reqstar> { public object Any(GetReqstars request) { .. } public object Get(SearchReqstars request) { .. } public object Post(Reqstar request) { .. 
} } This has no effect on the runtime behaviour and your services will work the same way with or without the added interfaces.
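As a recap of how the custom routes listed above map onto typed client calls, here is a short sketch reusing only DTOs and calls that appear on this page (the property values are arbitrary examples):

```csharp
var client = new JsonServiceClient(BaseUri);

// GET /reqstars -> handled by Any(GetReqstars)
List<Reqstar> all = client.Get(new GetReqstars());

// PATCH /reqstars/1 -> handled by Patch(UpdateReqstar)
Reqstar updated = client.Patch(new UpdateReqstar { Id = 1, Age = 20 });

// POST /reqstars -> handled by Post(Reqstar)
var created = client.Post(new Reqstar { Age = 20 });
```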
http://docs.servicestack.net/api-design
Extensions:
- Docker Images
- Cycle Ibo Prometheus Connector
- Cycle Trisotech Connector
- Elastic Search Extension
- FEEL Scala Extension
- Grails Plugin
- GraphQL API
- Migration API
- Mockito Testing Library
- Needle Testing Library
- OSGi Integration
- PHP SDK
- Process Test Coverage
- Reactor Event Bus
- Scenario Testing Library
- Single Sign On for JBoss
- Spring Boot Starter
- Tasklist Translations
- Custom Batch
https://docs.camunda.org/manual/latest/introduction/extensions/
Contents: Registered users of this product or Trifacta Wrangler Enterprise should login to Product Docs through the application. Contents: Sorts the dataset based on one or more columns in ascending or descending order. You can also sort based on the order of rows when the dataset was created.. Basic Usage sort order:LastName Output: Dataset is sorted in alphabetically ascending order based on the values in the LastName column, assuming that the values are strings. Parameters sort order:column_ref For more information on syntax standards, see Language Documentation Syntax Notes. order Identifies the column or set of columns by which the dataset is sorted. - Multiple column names can be separated by commas. - Ranges of columns cannot be specified. The order can be reversed by adding a negative sign in front of the column name: sort order: -ProductNameMulti-column sorts: You can also specify multi-column sorts. The following example sorts first by the inverse order of ProductName, and within that sort, rows are sorted by ProductColor: sort order: -ProductName,ProductColorSort by original row numbers: As an input value, this parameter also accepts the SOURCEROWNUMBERfunction, which performs the sort according to the original order of rows when the dataset was created. sort order: SOURCEROWNUMBER()See SOURCEROWNUMBER Function. Usage Notes: Data is sorted based on the data type of the source: Examples: set col:LastOrder value:NUMFORMAT(LastOrder, '####.00') Now, you're interested in the highest value for your customers' most recent orders. You can apply the following sort: sort order: -LastOrder: sort order: State,City In the generated output, the data is first sorted by the State value. Each set of rows within the same State value is also sorted by the City value. To revert to the original sort order, use the following ORIGINALORDER function: sort order:ORIGINALORDER() Example - Sort by original row numbers sorted the heat time columns a few times to exam the best performance in each heat according to the sample. You then notice that the data contains headers, and you forget how it was originally sorted. The data now looks like the following: Tip: In the above example, the row numbers remain unchanged despite the sort steps. To assist with sorting operations, you might find it useful to enable this option under the data grid options. See Transformer Page.: sort order:SOURCEROWNUMBER() Then, you can create the header with the following simple step: header If you need to retain the sort order and not revert to the original, you can do the following to the previous example data: header sourcerownumber:1 Results: After you have applied the last header transform, your data should look like the following: You can sort by the Racer column in ascending order to return to the original sort order. This page has no comments.
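Written out as a transform step (assuming the column is named Racer, as in the example above), that final re-sort is simply:

```
sort order: Racer
```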
https://docs.trifacta.com/display/PE/Sort+Transform
Create a File Query From the Cortex® XDR™ management console, you can create a query to investigate the connections between file activity and endpoints. From the Query Builderyou can investigate connections between file activity and endpoints. The Query Builder searches your logs and endpoint data for the file activity that you specify. To search for files on endpoints instead of file-related activity, use the XQL Search. Some examples of file queries you can run include: - Files modified on specific endpoints. - Files related to process activity that exist on specific endpoints. To build a file query: - From Cortex XDR, select.INVESTIGATIONQuery Builder - SelectFILE. - Enter the search criteria for the file events query. - File activity—Select the type or types of file activity you want to search:All,Create,Read,Rename,Delete, orWrite. - File attributes—Define any additional process attributes for which you want to search. Use a pipe (|) to separate multiple values (for examplenotepad.exe|chrome.exe). By default, Cortex XDR will return the events that match the attribute you specify. To exclude an attribute value, toggle the=option to=!. Attributes are: To specify an additional exception (match this value except), click the+to the right of the value and specify the exception value. - NAME—File name. - PATH—Path of the file. - PREVIOUS NAME—Previous name of a file. - PREVIOUS PATH—Previous path of the file. - MD5—MD5 hash value of the file. - SHA256—SHA256 hash value of the file. - DEVICE TYPE—Type of device used to run the file: Unknown, Fixed, Removable Media, CD-ROM. - DEVICE SERIAL NUMBER—Serial number of the device type used to run the file. - .
https://docs.paloaltonetworks.com/cortex/cortex-xdr/cortex-xdr-pro-admin/investigation-and-response/search-queries/query-builder/create-a-file-query.html
Universal Blob Storage¶ Ververica Platform provides centralized configuration of blob storage for its services. - Configuration - Services Configuration¶ In order to enable universal blob storage configure a base URI for your blob storage provider. Add the following snippet to your Helm values.yaml file: vvp: blobStorage: baseUri: s3://my-bucket/vvp The provided base URI will be picked up by all services that can make use of blob storage, for example Application Manager or Artifact Management. Storage Providers¶ (✓): With custom Flink image Additional Provider Configuration¶ Some supported storage providers have additional options that can be configured in the blobStorage section of the values.yaml file, scoped by provider. The following is a complete listing of supported additional storage provider configuration options: blobStorage: s3: endpoint: "" region: "" oss: endpoint: "" Credentials¶ Ververica Platform supports using a single set of credentials to access your configured blob storage, and will automatically distribute these credentials to Flink jobs that require them. These credentials can be either specified directly in values.yaml, or added to a Kubernetes secret out-of-band and referenced in values.yaml by name. Option 1: values.yaml¶ The following is a complete listing of the credentials that can be given for each storage provider, with example values: blobStorageCredentials: azure: connectionString: DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=vvpArtifacts;AccountKey=VGhpcyBpcyBub3QgYSB2YWxpZCBBQlMga2V5LiAgVGhhbmtzIGZvciB0aG9yb3VnaGx5IHJlYWRpbmcgdGhlIGRvY3MgOikgIA==; s3: accessKeyId: AKIAEXAMPLEACCESSKEY secretAccessKey: qyRRoU+/4d5yYzOGZVz7P9ay9fAAMrexamplesecretkey hdfs: # Apache Hadoop® configuration files (core-site.xml, hdfs-site.xml) # and optional Kerberos configuration files. Note that the keytab # has to be base64 encoded. core-site.xml: | <?xml version="1.0" ?> <configuration> ... </configuration> hdfs-site.xml: | <?xml version="1.0" ?> <configuration> ... </configuration> krb5.conf: | [libdefaults] ticket_lifetime = 10h ... keytab: BQIAA...AAAC keytab-principal: flink Option 2: Pre-create Kubernetes Secret¶ To use a pre-created Kubernetes secret, its keys must match the pattern <provider>.<key>. For example, s3.accessKeyId and s3.secretAccessKey. To configure Ververica Platform to use this secret, add the following snippet to your Helm values.yaml file: blobStorageCredentials: existingSecret: my-blob-storage-credentials Important The values in a Kubernetes secret must be base64-encoded. Example: Apache Hadoop® HDFS¶ For UBS with Apache Hadoop® HDFS we recommend to pre-create a Kubernetes secret with the required configuration files in order to avoid duplication of the configuration files in the Ververica Platform values.yaml file. kubectl create secret generic my-blob-storage-credentials \ --from-file hdfs.core-site.xml=core-site.xml \ --from-file hdfs.hdfs-site.xml=hdfs-site.xml \ --from-file hdfs.krb5.conf=krb5.conf \ --from-file hdfs.keytab=keytab \ --from-file hdfs.keytab-principal=keytab-principal After you have created the Kubernetes secret, you can reference it in the values.yaml as an existing secret. Note that the Kerberos configuration is optional. Advanced Configuration¶ AWS EKS¶ When running on AWS EKS or AWS ECS your Kubernetes Pods inherit the roles attached to the underlying EC2 instances. 
If these roles already grant access to the required S3 resources you only need to configure vvp.blobStorage.baseUri without configuring any blobStorageCredentials. Apache Hadoop® Versions¶ UBS with Apache Hadoop® HDFS uses a Hadoop 2 client for communication with the HDFS cluster. Hadoop 3 preserves wire compatibility with Hadoop 2 clients and you are able to use HDFS blob storage with both Hadoop 2 and Hadoop 3 HDFS clusters. But note that there may be incompatabilities between Hadoop 2 and 3 with respect to the configuration files core-site.xml and hdfs-site.xml. As an example, Hadoop 3 allows to configure durations with a unit suffix such as 30s which results in a configuration parsing error with Hadoop 2 clients. It’s generally possible to work around these issues by limiting configuration to Hadoop 2 compatible keys/values. Apache Flink® Hadoop Dependency¶ When using HDFS UBS, Ververica Platform dynamically adds the Hadoop dependency flink-shaded-hadoop-2-uber to the classpath. You can use the following annotation to skip this step: kind: Deployment spec: template: metadata: annotations: ubs.hdfs.hadoop-jar-provided: true This is useful if you your Docker image provides a Hadoop dependency. If you use this annotation without a Hadoop dependency on the classpath, your Flink application will fail. Services¶ The following services make use of the universal blob storage configuration. Apache Flink® Jobs¶ Flink jobs are configured to store blobs at the following locations: User-provided configuration has precedence over universal blob storage. Artifact Management¶ Artifacts are stored in the following location: ${baseUri}/artifacts/namespaces/${ns} SQL Service¶ The SQL Service depends on blob storage for storing deployment information and JAR files of user-defined functions. SQL Deployments¶ Before a SQL query can be deployed it needs to be optimized and translated to a Flink job. SQL Service stores the Flink job and all JAR files that contain an implementation of a user-defined function which is used by the query at the following locations: After a query has been deployed, Application Manager maintains the same blobs as for regular Flink jobs, i.e., checkpoints, savepoints, and high-availability files. UDF Artifacts¶ The JAR files of UDF Artifacts that are uploaded via the UI are stored in the following location: ${baseUri}/sql-artifacts/namespaces/${ns}/udfs/${udfArtifact} Connectors, Formats, and Catalogs¶ The JAR files of Custom Connectors and Formats and Custom Catalogs that are uploaded via the UI are stored in the following location: ${baseUri}/sql-artifacts/namespaces/${ns}/custom-connectors/
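As a closing note on the credentials options described earlier: a pre-created secret for the S3 provider follows the same <provider>.<key> key pattern as the HDFS example. This is a sketch with placeholder values; use whatever secret name you reference under blobStorageCredentials.existingSecret.

```bash
kubectl create secret generic my-blob-storage-credentials \
  --from-literal s3.accessKeyId=AKIAEXAMPLEACCESSKEY \
  --from-literal s3.secretAccessKey=qyRRoU+/4d5yYzOGZVz7P9ay9fAAMrexamplesecretkey
```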
https://docs.ververica.com/platform_operations/blob_storage.html
Accessing the panel¶ Note The “staff” flag, which controls whether the user is allowed to log in to the admin interface, can be set by the admin panel itself. The panel can be reached from Admin link of the User Menu in the navigation bar (see the picture below) or through this URL: http://<your_geonode_host>/admin. The Admin Link of the User Menu¶ When clicking on that link the Django-based Admin Interface page opens and shows you all the Django models registered in GeoNode. The GeoNode Admin Interface¶ Reset or Change the admin password¶ From the Admin Interface you can access the CHANGE PASSWORD link on the right side of the navigation bar. The Change Password Link¶ It allows you to access the Change Password Form through which you can change your password. The Change Password Form¶ Once the fields have been filled out, click on CHANGE MY PASSWORD to perform the change. Simple Theming¶ GeoNode provides by default some theming options manageable directly from the Administration panel. Most of the times those options allows you to easily change the GeoNode look and feel without touching a single line of HTML or CSS. As an administrator go to http://<your_geonode_host>/admin/geonode_themes/geonodethemecustomization/. List of available Themes¶ The panel shows all the available GeoNode themes, if any, and allows you to create new ones. Warning Only one theme at a time can be activated (aka enabled). By disabling or deleting all the available themes, GeoNode will turn the gui back to the default one. Editing or creating a new Theme, will actually allow you to customize several properties. At least you’ll need to provide a Name for the Theme. Optionally you can specify also a Description, which will allow you to better identify the type of Theme you created. Theme Name and Description¶ Just below the Description field, you will find the Enabled checkbox, allowing you to toggle the Theme. Theme Name and Description¶ Jumbotron and Get Started link¶ Note Remember, everytime you want to apply some changes to the Theme, you must save the Theme and reload the GeoNode browser tab. In order to quickly switch back to the Home page, you can just click the VIEW SITE link on the top-right corner of the Admin dashboard. The next section, allows you to define the first important Theme properties. This part involves the GeoNode main page sections. Jumbotron and Logo options¶ By changing those properties as shown above, you will easily change your default home page from this to this Updating Jumbotron and Logo¶ It is possible to optionally hide the Jumbotron text and/or the Call to action button Hide Jumbotron text and Call to action button¶ Slide show¶ To switch between a slide show and a jumbotron, flip the value of the welcome theme from “slide show” to “jumbotron” and vice versa to either display a jumbotron with a “get started” link or a slide show in the home page For example, to display a slide show, change the welcome theme from jumbotron background to slide show Before creating a slide show, make sure you have slides to select from (in the multi-select widget) to make up the slide show. If no slides exist, click the plus (+) button beside the slide show multi-select widget to add a new slide. Fill in the slide name, slide content using markdown formatting, and upload a slide image (the image that will be displayed when the slide is in view). 
For slide images that already contain text, hide slide content by checking the checkbox labeled “Hide text in the jumbotron slide” as shown below, then save the slide. It is also possible to hide a slide from all slide show themes that use it by unchecking the checkbox labeled “Is enabled” as shown below. Selecting the above slide in a slide show and enabling slide show (using the “welcome theme” configuration) will create a slide show with a slide as shown below: Partners¶ GeoNode simple theming, allows also a Partners section, in order to easily list links to third-party institutions collaborating to the project. The example below shows the Partners section of WorldBank CHIANG MAI URBAN FLOODING GeoNode instance made through integrating theming options. Urban-flooding GeoNode Partners Section¶ The Partners items can be managed through the http://<your_geonode_host>/admin/geonode_themes/partner/ Admin section GeoNode Partners Admin Section¶ From here it is possible to add, modify or delete partners items. A new partner is defined by few elements, a Logo, a Name, a Display Name and a Website In order to attach or detach a Partner to an existing Theme on GeoNode, you will need to edit the Theme and go to the Partners section From here you will be able to either to change the Partners title text and/or select/deselect Partners from the multi-select box. Note In order to select/deselect elements from the multi-select box, you must use the CTRL+CLICK button combination. Switching between different themes¶ In the case you have defined more Themes, switching between them is as easy as enabling one and disabling the others. Remember to save the Themes everytime and refresh the GeoNode home page on the browser to see the changes. It is also important that there is only one Theme enabled at a time. In order to go back to the standard GeoNode behavior, just disable or delete all the available Themes. Add a new user¶ In GeoNode, administrators can manage other users. For example, they can Add New Users through the following form. The form above can be reached from the Admin Panel at the following path: Home > People > Users. Click on ADD USER + to open the form page. The Add User button in the Users List page¶ It is also available, in the GeoNode UI, the Add User link of the About menu in the navigation bar. To perform the user creation fill out the required fields (username and password) and click on SAVE. You will be redirected to the User Details Page which allows to insert further information about the user. The user will be visible into the Users List Page of the Admin Panel and in the People Page (see Viewing other users information). The User in the People page¶ Activate/Disable a User¶ When created, new users are active by default. You can check that in the User Details Page from the Admin Panel (see the picture below). New Users Active by default¶ Change a User password¶ GeoNode administrators can also change/reset the password for those users who forget it. As shown in the picture below, click on this form link from the User Details Page to access the Change Password Form. Changing Users Passwords¶ The Change User Password Form should looks like the following one. Insert the new password two times and click on CHANGE PASSWORD. Changing Users Passwords¶ Promoting a User to Staff member or superuser¶ Active users have not access to admin tools. GeoNode makes available those tools only to Staff Members who have the needed permissions. 
Superusers are staff members with full access to admin tools (all permissions are assigned to them). Administrators can promote a user to Staff Member by ticking the Staff status checkbox in the User Details Page. To make some user a Superuser, the Superuser status checkbox should be ticked. See the picture below. Staff and Superuser permissions¶ Creating a Group¶ The Create Groups link of About menu in the navigation bar allows administrators to reach the Group Creation Page. The following form will open. Fill out all the required fields and click Create to create the group. The Group Details Page will open. The new created group will be searchable in the Groups List Page. Note The Create a New Group button on the Groups List Page allows to reach the Group Creation Form. The Groups Section on the Admin Panel¶ As you can see, GeoNode provides two types of groups. You will learn more about that in the next paragraph. Types of Groups¶ In GeoNode users can be grouped through a Group Profile, an enhanced Django group which can be enriched with some further information such as a description, a logo, an email address, some keywords, etc. It also possible to define some Group Categories based on which those group profiles can be divided and filtered. A new Group Profile can be created as follow: click on the Group Profile + Add button fill out all the required fields (see the picture below), Group Profiles can be explicitly related to group categories click on SAVE to perform the creation, the new created group profile will be visible in the Group Profiles List Group Categories¶ Group Profiles can also be related to Group Categories which represents common topics between groups. In order to add a new Group Category follow these steps: click on the Group Categories + Add button fill out the creation form (type name and description) click on SAVE to perform the creation, the new created category will be visible in the Group Categories List The Group Categories List¶ Filtering Layers by Group Category¶ Managing a Group¶ Through the Groups link of About menu in the navigation bar, administrators can reach the Groups List Page. The Groups Link in the navigation bar¶ In that page all the GeoNode Group Profiles are listed. Group Profiles List Page¶ For each group some summary information (such as the title, the description, the number of members and managers) are displayed near the Group Logo. Administrators can manage a group from the Group Profile Details Page which is reachable by clicking on the title of the group. Group Profile Details Page¶ As shown in the picture above, all information about the group are available on that page: the group Title; the Last Editing Date which shows a timestamp corresponding to the last editing of the group properties; the Keywords associated with the group; Permissions on the group (Public, Public(invite-only), Private); Members who join the group; Managers who manage the group. There are also four links: The Edit Group Details link opens the Group Profile Form through which the following properties can be changed: Title. Logo (see next paragraphs). Description. Keywords, a comma-separated list of keywords. Access, which regulates permissions: Public: any registered user can view and join a public group. Public (invite-only): only invited users can join, any registered user can view the group. Private: only invited users can join the group, registered users cannot see any details about the group, including membership. 
Categories, the group categories the group belongs to. Group Profile Details Page¶ Managing Group Members (see next paragraphs). the Delete this Group, click on it to delete the Group Profile. GeoNode requires you to confirm this action. the Group Activities drives you to the Group Activities Page where you can see all layers, maps and documents associated with the group. There is also a Comments tab which shows comments on those resources. Group Logo¶ Each group represents something in common between its members. So each group should have a Logo which graphically represents the idea that identify the group. On the Group Profile Form page you can insert a logo from your disk by click on Browse…. Managing Group members¶ The Manage Group Members link opens the Group Members Page which shows Group Members and Group Managers. Managers can edit group details, can delete the group, can see the group activities and can manage memberships. Other Members can only see the group activities. Adding a new Member to the Group¶ The following picture shows you the results. New Members of the Group¶ If you want to change the role of group members after adding them, you can use the “promote” button to make a member into a manager, and the “demote” button to make a manager into a regular member. Group based advanced data workflow¶ By default GeoNode is configured to make every resource (Layer, Document or Map) suddenly available to everyone, i.e. publicly accessible even from anonymous/non-logged in users. It is actually possible to change few configuration settings in order to allow GeoNode to enable an advanced publication workflow. With the advanced workflow enabled, your layer, document or map won’t be automatically published (i.e. made visible and accessible for all, contributors or simple users). For now, your item is only visible by yourself, the manager of the group to which the layer, document or map is linked (this information is filled in the metadata), the members of this group, and the GeoNode Administrators. Before being published, the layer, document or map will follow a two-stage review process, which is described below: From upload to publication: the review process on GeoNode¶ How to enable the advanced workflow¶ You have to tweak the GeoNode settings accordingly. Please see the details of the following GeoNode Settings: Summarizing, when all the options above of the Advanced Workflow are enabled, upon a new upload we will have: - The “unpublished” resources will be hidden to anonymous users only. The registered users will be still able to access the resources (if they have the rights to do that, of course). - The “unpublished” resources will remain hidden to users if the permission (see Admin Guide section: ‘Manage Permissions’) will be explicitly removed - During the upload, whenever the advanced workflow is enabled, the owner’s Groups are automatically allowed to access the resource, even if the “anonymous” flag has been disabled. Those permissions can be removed later on - During the upload, “managers” of the owner’s Groups associated to the resource, are always allowed to edit the resource, the same as they are admin for that resource - “managers” of the owner’s Groups associated to the resource are allowed to “publish” also the resources, not only to “approve” them Change the owner rights in case of advanced workflow is on¶ After switching ADMIN_MODERATE_UPLOADS to True and resource is approved owner is no longer able to modify it. 
He will then see a new button on the resource detail page: Request change. Clicking it opens a view with a short form, where the user can write a short message explaining why they want to modify the resource. This message is sent through the messaging and email system to the administrators: After an administrator unapproves the resource, the owner is able to modify it again. The group Manager approval¶ The approbation process of an item by a Manager¶ Following this approval, the GeoNode Administrators receive a notification informing them that an item is now waiting for publication. An approved layer, waiting for publication by the GeoNode administrators¶ The publication by the GeoNode Administrator¶ Prior to the public release of an approved layer, a document or a map, the Administrator of the platform performs a final validation of the item and its metadata, notably to check that it is in line with license policies. If needed, the Administrator can request further changes before publishing the item. Manage profiles using the admin panel¶ So far GeoNode implements two distinct roles that can be assigned to resources such as layers, maps or documents: the party who authored the resource the party who can be contacted for acquiring knowledge about or acquisition of the resource These two profiles can be set in the GeoNode interface by accessing the metadata page and setting the Point of Contact and Metadata Author fields respectively. It is possible for an administrator to add new roles if needed, by clicking on the Add Role button in the Base -> Contact Roles section: Clicking on the People section (see figure) will open a web form with some personal information plus a section called Users. It is important that this last section is not modified here unless the administrator is very confident in that operation. Manage layers using the admin panel¶ Some of the Layers information can be edited directly through the admin interface, although the best place for this is the Layer -> Metadata Edit in GeoNode. Clicking on the Admin > Layers link will show the list of available layers. Warning It is not recommended to modify the Layers' Attributes or Styles directly from the Admin dashboard unless you are aware of the consequences of your actions. The Metadata information can be changed for multiple Layers at once through the Metadata batch edit action. By clicking on a Layer link, a detail page opens allowing you to modify some of the resource info like the metadata, the keywords, the title, etc. Note It is strongly recommended to always use the GeoNode Metadata Wizard or Metadata Advanced tools in order to edit the metadata info. The Permissions can also be changed for multiple Layers at once through the Set layers permissions action. By clicking on a Layer link, a detail page opens allowing you to modify the permissions for the selected resources. Manage the maps using the admin panel¶ Similarly to the Layers, it is possible to manage the available GeoNode Maps through the Admin panel as well. Move to Admin > Maps to access the Maps list. The Metadata information can be changed for multiple Maps at once through the Metadata batch edit action. By clicking on a Map link, a detail page opens allowing you to modify some of the resource info like the metadata, the keywords, the title, etc. Note It is strongly recommended to always use the GeoNode Metadata Wizard or Metadata Advanced tools in order to edit the metadata info.
Notice that by enabling the Featured option here, will allow GeoNode to show the Map thumbnail and the Map detail link on the Home Page Manage the documents using the admin panel¶ Similarly to the Layers and Maps, it is possible to manage the available GeoNode Documents through the Admin panel also. Move to Admin > Documents to access the Documents list. The Metadata information can be changed for multiple Documents at once through the Metadata batch edit action. By clicking over one Document link, it will show a detail page allowing you to modify some of the resource info like the metadata, the keywords, the title, etc. Note It is strongly recommended to always use the GeoNode Metadata Wizard or Metadata Advanced tools in order to edit the metadata info. Manage the base metadata choices using the admin panel¶ Admin > Base contains almost all the objects you need to populate the resources metadata choices. Admin dashboard Base Panel¶ In other words the options available from the select-boxes of the Metadata Wizard and Metadata Advanced panels. Note When editing the resource metadata through the Metadata Wizard, some fields are marked as mandatory and by filling those information the Completeness progress will advance accordingly. Even if not all the fields have been filled, the system won’t prevent you to update the metadata; this is why the Mandatory fields are mandatory to be fully compliant with an ISO 19115 metadata schema, but are only recommended to be compliant with GeoNode. Also the Completeness indicates how far the metadata is to be compliant with an ISO 19115 metadata schema. Of course, it is highly recommended to always fill as much as possible at least all the metadata fields marked as Mandatory. This will improve not only the quality of the data stored into the system, but will help the users to easily search for them on GeoNode. All the Search & Filter panels and options of GeoNode are, in fact, based on the resources metadata fields. Too much generic descriptions and too empty metadata fields, will give highly un-precise and very wide search results to the users. Hierarchical keywords¶ Through the Admin > Base > Hierarchical keywords panel it will be possible to manage all the keywords associated to the resources. Hierarchical keywords list¶ Hierarchical keywords edit¶ The Name is the human readable text of the keyword, what users will see. The Slug is a unique label used by the system to identify the keyword; most of the times it is equal to the name. Notice that through the Position and Relative to selectors, it is possible to establish a hierarchy between the available keywords. The hierarchy will be reflected in the form of a tree from the metadata panels. By default each user with editing metadata rights on any resource, will be able to insert new keywords into the system by simply typing a free text on the keywords metadata field. It is possible to force the user to select from a fixed list of keywords through the FREETEXT_KEYWORDS_READONLY setting. When set to True keywords won’t be writable from users anymore. Only admins can will be able to manage them through the Admin > Base > Hierarchical keywords panel. Licenses¶ Through the Admin > Base > Licenses panel it will be possible to manage all the licenses associated to the resources. Metadata editor Licenses¶ The license description and the info URL will be shown on the resource detail page. The license text will be shown on the catalogue metadata XML documents. 
Resource Metadata ISO License¶ Warning It is strongly recommended to not publish resources without an appropriate license. Always make sure the data provider specifies the correct license and that all the restrictions have been honored. Metadata Regions¶ Through the Admin > Base > Metadata Regions panel it will be possible to manage all the admin areas associated to the resources. Resource Metadata Regions¶ Notice that those regions are used by GeoNode to filter search results also through the resource list view. GeoNode filtering by Metadata Regions¶ Note GeoNode tries to guess the Regions intersecting the data bounding boxes when uploading a new layer. Those should be refined by the user layer on anyway. Metadata Restriction Code Types and Spatial Representation Types¶ Through the Admin > Base > Metadata Restriction Code Types and Admin > Base > Metadata Spatial Representation Types panels, it will be possible to update only the metadata descriptions for restrictions and spatial representation types. Such lists are read-only by default since they have been associated to the specific codes of the ISO 19115 metadata schema. Changing them would require the system to provide a custom dictionary through the metadata catalog too. Such functionality is not supported actually by GeoNode. Metadata Topic Categories¶ Through the Admin > Base > Metadata Topic Categories panel it will be possible to manage all the resource metadata categories avaialble into the system. Notice that by default, GeoNode provides the standard topic categories available with the ISO 19115 metadata schema. Changing them means that the system won’t be compliant with the standard ISO 19115 metadata schema anymore. ISO 19115 metadata schema extensions are not currently supported natively by GeoNode. It is worth notice that GeoNode allows you to associate Font Awesome Icons to each topic category through their fa-icon code. Those icons will be used by GeoNode to represent the topic category on both the Search & Filter menus and Metadata panels. Warning The list of the Metadata Topic Categories on the home page is currently fixed. To change it you will need to update or override the GeoNode index.html HTML template. By default the Metadata Topic Categories are writable. Meaning that they can be removed or created by the Admin panel. It is possible to make them fixed (it will be possible to update their descriptions and icons only) through the MODIFY_TOPICCATEGORY setting. Announcements¶ As an Administrator you might need to broadcast announcements to the world about your portal or simply to the internal contributors. GeoNode Announcements allow actually to do that; an admin has the possibility to create three types of messages, accordingly to their severity, decide their validity in terms of time period (start date and expiring date of the announcement), who can view them or not (everyone or just the registerd members) and whenever a user can hide the message or not and how long. A GeoNode announcement actually looks like this: A sample Warning Announcement¶ There are three types of announcements accordingly to their severity level: General, Warning and Critical The difference is mainly the color of the announcement box. Only administrators and staff members can create and manage announcements. 
Currently there are two ways to access and manage the announcements list: Via the GeoNode interface, from the Profile panel Via the GeoNode Admin panel The functionality is almost the same for both interfaces, except that from the Admin panel it is also possible to manage the dismissals. Dismissals are basically records of members that have read the announcement and closed the message box. An announcement can have one dismissal type among the three below: No Dismissal Allowed it won't be possible to close the announcement's message box at all. Session Only Dismissal (*) the default one, it will be possible to close the announcement's message box for the current browser session. It will show up again at the next access. Permanent Dismissal Allowed once the announcement's message box is closed, it won't appear again for the current member. How to create and manage Announcements¶ From the Profile panel, click on the Announcements link Announcements List from the Profile panel¶ Click either on New Announcement to create a new one or on the title of an existing one to manage its contents. Creating a new announcement is quite straightforward; you have to fill in the fields provided by the form. Warning In order to be visible, you will need to check the Site wide option in any case. You might also want to hide the message from anonymous users by enabling the Members only option. Create Announcement from the Profile panel¶ Managing announcements from the Admin panel is basically the same; the fields of the form will be exactly the same. Create Announcement from the Admin panel¶ Accessing the announcements options from the Admin panel also allows you to manage dismissals. Through this interface you will be able to selectively decide which members can or cannot view a specific announcement, or force them to see the messages again by deleting the corresponding dismissals. Create Dismissal from the Admin panel¶ OAuth2 Access Tokens¶ This small section won't cover the GeoNode OAuth2 security integration entirely; this is explained in detail in other sections of the documentation (refer to OAuth2 Fixtures Update and Base URL Migration and OAuth2 Tokens and Sessions). Here we will focus mainly on the Admin > DJANGO/GEONODE OAUTH TOOLKIT panel items, with specific attention to Access token management. The Admin > DJANGO/GEONODE OAUTH TOOLKIT panel (as shown in the figure below) allows an admin to manage everything related to GeoNode OAuth2 grants and permissions. As better explained in other sections of the documentation, this is needed to correctly handle the communication between GeoNode and GeoServer. DJANGO/GEONODE OAUTH TOOLKIT Admin panel¶ Specifically, from this panel an admin can create, delete or extend OAuth2 Access tokens. The section OAuth2 Tokens and Sessions better explains the concepts behind OAuth2 sessions; here we just recap the basic concepts: If the SESSION_EXPIRED_CONTROL_ENABLED setting is set to True (the default) a registered user cannot log in to either GeoNode or GeoServer without a valid Access token. When logging in to GeoNode through the sign-in form, GeoNode checks if a valid Access token exists and creates a new one if not, or extends the existing one if expired.
New Access tokens expire automatically after the ACCESS_TOKEN_EXPIRE_SECONDS setting (by default 86400 seconds) When an Access token expires, the user will be kicked out of the session and forced to log in again Create a new token or extend an existing one¶ From the Admin > DJANGO/GEONODE OAUTH TOOLKIT panel it is possible to create a new Access token for a user. In order to do that, just click on the Add button beside the Access tokens topic Add a new ``Access token``¶ On the new form Create an ``Access token``¶ select the following: User; use the search tool in order to select the correct user. The form wants the user PK, which is a number, not the username. The search tool will do everything for you. Source refresh token; this is not mandatory, leave it blank. Token; write here any alphanumeric string. This will be the access_token that the member can use to access the OWS services. We suggest using a random string generator service in order to produce a strong token string. Application; select GeoServer, this is mandatory Expires; select an expiration date by using the date-time widgets. Scope; select write, this is mandatory. Do not forget to Save. From now on, GeoNode will use this Access Token to control the user session (notice that the user needs to log in again after closing the browser session), and the user will be able to access the OWS Services by using the new Access Token, e.g.: Notice the ...quest=GetCapabilities&access_token=123456 (access_token) parameter at the end of the URL. Force a User Session to expire¶ Everything said about the creation of a new Access Token also applies to its deletion. From the same interface an admin can either select an expiration date or delete all the Access Tokens associated with a user, in order to force their session to expire. Remember that the user could activate another session by logging in again to GeoNode with their credentials. In order to be sure the user won't force GeoNode to refresh the token, first reset their password or de-activate the account.
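As a concrete illustration of the last point, here is a minimal Python sketch of a client calling the GeoServer OWS endpoint with such a token. The host name and the token value 123456 are placeholders modeled on the example URL pattern above, not real values; only the access_token query parameter itself comes from this documentation.

```python
import requests

# Hypothetical GeoServer OWS endpoint and Access Token (placeholders).
GEOSERVER_OWS = "https://example.org/geoserver/ows"
ACCESS_TOKEN = "123456"

# Ask for the WMS capabilities document, authenticating the request by
# appending the access_token query parameter, as shown in the URL above.
params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetCapabilities",
    "access_token": ACCESS_TOKEN,
}

response = requests.get(GEOSERVER_OWS, params=params, timeout=30)
response.raise_for_status()

# The capabilities document is plain XML; show the first few lines.
print("\n".join(response.text.splitlines()[:5]))
```

The same access_token parameter should work for any other OWS request (WFS, WCS) issued on behalf of that user, as long as the token has not expired or been deleted.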
https://docs.geonode.org/en/master/admin/admin_panel/index.html
2021-06-12T14:40:31
CC-MAIN-2021-25
1623487584018.1
[array(['../../_images/admin_link.png', '../../_images/admin_link.png'], dtype=object) array(['../../_images/django_geonode_admin_interface.png', '../../_images/django_geonode_admin_interface.png'], dtype=object) array(['../../_images/change_password_link.png', '../../_images/change_password_link.png'], dtype=object) array(['../../_images/change_password_form.png', '../../_images/change_password_form.png'], dtype=object) array(['../../_images/themes.png', '../../_images/themes.png'], dtype=object) array(['../../_images/theme-def-0001.png', '../../_images/theme-def-0001.png'], dtype=object) array(['../../_images/theme-def-0002.png', '../../_images/theme-def-0002.png'], dtype=object) array(['../../_images/theme-def-0003c.png', '../../_images/theme-def-0003c.png'], dtype=object) array(['../../_images/theme-def-0003.png', '../../_images/theme-def-0003.png'], dtype=object) array(['../../_images/theme-def-0003a.png', '../../_images/theme-def-0003a.png'], dtype=object) array(['../../_images/theme-def-0003b.png', '../../_images/theme-def-0003b.png'], dtype=object) array(['../../_images/theme-def-0003d.png', '../../_images/theme-def-0003d.png'], dtype=object) array(['../../_images/theme-def-0003e.png', '../../_images/theme-def-0003e.png'], dtype=object) array(['../../_images/theme-def-0007a.png', '../../_images/theme-def-0007a.png'], dtype=object) array(['../../_images/theme-def-0007b.png', '../../_images/theme-def-0007b.png'], dtype=object) array(['../../_images/theme-def-0007c.png', '../../_images/theme-def-0007c.png'], dtype=object) array(['../../_images/theme-def-0007d.png', '../../_images/theme-def-0007d.png'], dtype=object) array(['../../_images/theme-def-0007e.png', '../../_images/theme-def-0007e.png'], dtype=object) array(['../../_images/theme-def-0007f.png', '../../_images/theme-def-0007f.png'], dtype=object) array(['../../_images/theme-def-0007g.png', '../../_images/theme-def-0007g.png'], dtype=object) array(['../../_images/theme-def-0007h.png', '../../_images/theme-def-0007h.png'], dtype=object) array(['../../_images/theme-def-0005.png', '../../_images/theme-def-0005.png'], dtype=object) array(['../../_images/theme-def-0005a.png', '../../_images/theme-def-0005a.png'], dtype=object) array(['../../_images/theme-def-0005b.png', '../../_images/theme-def-0005b.png'], dtype=object) array(['../../_images/theme-def-0005c.png', '../../_images/theme-def-0005c.png'], dtype=object) array(['../../_images/add_user_form.png', '../../_images/add_user_form.png'], dtype=object) array(['../../_images/add_user_button.png', '../../_images/add_user_button.png'], dtype=object) array(['../../_images/add_user_link.png', '../../_images/add_user_link.png'], dtype=object) array(['../../_images/user_details_admin_page.png', '../../_images/user_details_admin_page.png'], dtype=object) array(['../../_images/new_user_in_people.png', '../../_images/new_user_in_people.png'], dtype=object) array(['../../_images/new_user_active.png', '../../_images/new_user_active.png'], dtype=object) array(['../../_images/new_user_disabled.png', '../../_images/new_user_disabled.png'], dtype=object) array(['../../_images/change_user_password_link.png', '../../_images/change_user_password_link.png'], dtype=object) array(['../../_images/chenge_user_password_form.png', '../../_images/chenge_user_password_form.png'], dtype=object) array(['../../_images/staff_and_superuser_permissions.png', '../../_images/staff_and_superuser_permissions.png'], dtype=object) array(['../../_images/create_group_page_link.png', 
'../../_images/create_group_page_link.png'], dtype=object) array(['../../_images/group_creation_form.png', '../../_images/group_creation_form.png'], dtype=object) array(['../../_images/group_details_page.png', '../../_images/group_details_page.png'], dtype=object) array(['../../_images/groups_list_page.png', '../../_images/groups_list_page.png'], dtype=object) array(['../../_images/groups_admin_section.png', '../../_images/groups_admin_section.png'], dtype=object) array(['../../_images/layers_group_category.png', '../../_images/layers_group_category.png'], dtype=object) array(['../../_images/groups_link.png', '../../_images/groups_link.png'], dtype=object) array(['../../_images/group_profiles_list_page.png', '../../_images/group_profiles_list_page.png'], dtype=object) array(['../../_images/group_profile_details_page.png', '../../_images/group_profile_details_page.png'], dtype=object) array(['../../_images/editing_group_logo.png', '../../_images/editing_group_logo.png'], dtype=object) array(['../../_images/group_logo.png', '../../_images/group_logo.png'], dtype=object) array(['../../_images/add_new_member.png', '../../_images/add_new_member.png'], dtype=object) array(['../../_images/new_members.png', '../../_images/new_members.png'], dtype=object) array(['../../_images/adv_data_workflow_001.jpg', '../../_images/adv_data_workflow_001.jpg'], dtype=object) array(['../../_images/approbation_manager.gif', '../../_images/approbation_manager.gif'], dtype=object) array(['../../_images/unpublished.png', '../../_images/unpublished.png'], dtype=object) array(['../../_images/admin-roles-add.png', '../../_images/admin-roles-add.png'], dtype=object) array(['../../_images/admin-people.png', '../../_images/admin-people.png'], dtype=object) array(['../../_images/admin-profiles-contactroles.png', '../../_images/admin-profiles-contactroles.png'], dtype=object) array(['../../_images/admin-layers.png', '../../_images/admin-layers.png'], dtype=object) array(['../../_images/admin-layers-batch.png', '../../_images/admin-layers-batch.png'], dtype=object) array(['../../_images/set_layers_permissions_action.png', '../../_images/set_layers_permissions_action.png'], dtype=object) array(['../../_images/set_layers_permissions_form.png', '../../_images/set_layers_permissions_form.png'], dtype=object) array(['../../_images/admin-maps.png', '../../_images/admin-maps.png'], dtype=object) array(['../../_images/admin-layers-batch.png', '../../_images/admin-layers-batch.png'], dtype=object) array(['../../_images/admin-maps-featured-001.png', '../../_images/admin-maps-featured-001.png'], dtype=object) array(['../../_images/admin-maps-featured-002.png', '../../_images/admin-maps-featured-002.png'], dtype=object) array(['../../_images/admin-documents.png', '../../_images/admin-documents.png'], dtype=object) array(['../../_images/admin-layers-batch.png', '../../_images/admin-layers-batch.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0001.png', '../../_images/admin-panel-metadata-contents-0001.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0002.png', '../../_images/admin-panel-metadata-contents-0002.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0003.png', '../../_images/admin-panel-metadata-contents-0003.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0003a.png', '../../_images/admin-panel-metadata-contents-0003a.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0004.png', 
'../../_images/admin-panel-metadata-contents-0004.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0005.png', '../../_images/admin-panel-metadata-contents-0005.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0006.png', '../../_images/admin-panel-metadata-contents-0006.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0007.png', '../../_images/admin-panel-metadata-contents-0007.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0008.png', '../../_images/admin-panel-metadata-contents-0008.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0009.png', '../../_images/admin-panel-metadata-contents-0009.png'], dtype=object) array(['../../_images/admin-panel-metadata-contents-0010.png', '../../_images/admin-panel-metadata-contents-0010.png'], dtype=object) array(['../../_images/admin-announcments-001.png', '../../_images/admin-announcments-001.png'], dtype=object) array(['../../_images/admin-announcments-002.png', '../../_images/admin-announcments-002.png'], dtype=object) array(['../../_images/admin-announcments-003.png', '../../_images/admin-announcments-003.png'], dtype=object) array(['../../_images/admin-announcments-004.png', '../../_images/admin-announcments-004.png'], dtype=object) array(['../../_images/admin-announcments-007.png', '../../_images/admin-announcments-007.png'], dtype=object) array(['../../_images/admin-announcments-008.png', '../../_images/admin-announcments-008.png'], dtype=object) array(['../../_images/admin-announcments-009.png', '../../_images/admin-announcments-009.png'], dtype=object) array(['../../_images/admin-announcments-010.png', '../../_images/admin-announcments-010.png'], dtype=object) array(['../../_images/admin-panel-tokens-0001.png', '../../_images/admin-panel-tokens-0001.png'], dtype=object) array(['../../_images/admin-panel-tokens-0002.png', '../../_images/admin-panel-tokens-0002.png'], dtype=object) array(['../../_images/admin-panel-tokens-0003.png', '../../_images/admin-panel-tokens-0003.png'], dtype=object)]
docs.geonode.org
How It Works¶ In Quickstrom, a tester writes specifications for web applications. When checking a specification, the following happens: Quickstrom navigates to the origin page, and awaits the readyWhen condition, i.e. that a specified element is present in the DOM. It generates a random sequence of actions to simulate user interaction. Many types of actions can be generated, e.g. clicks, key presses, focus changes, reloads, navigations. Before each new action is picked, the DOM state is checked to find only the actions that are possible to take. For instance, you cannot click buttons that are not visible. From that subset, Quickstrom picks the next action to take. After each action has been taken, Quickstrom queries and records the state of relevant DOM elements. The sequence of actions taken and observed states is called a behavior. The specification defines a proposition, a logical formula that evaluates to true or false, which is used to determine if the behavior is accepted or rejected. When a rejected behavior is found, Quickstrom shrinks the sequence of actions to the smallest, still failing, behavior. The tester is presented with a minimal failing test case based on the original larger behavior. Now, how do you write specifications and propositions? Let's have a look at The Specification Language.
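The checking loop described above can be condensed into a small conceptual sketch. This is not Quickstrom's actual implementation (Quickstrom is not written in Python), and the navigate/ready/possible_actions/observe/proposition callbacks are invented for illustration; the sketch only mirrors the navigate, pick-action, observe, evaluate cycle the text describes.

```python
import random
from typing import Callable, Dict, List

def check_once(navigate: Callable[[], None],
               ready: Callable[[], bool],
               possible_actions: Callable[[], List[Callable[[], None]]],
               observe: Callable[[], Dict],
               proposition: Callable[[List[Dict]], bool],
               max_actions: int = 50) -> List[Dict]:
    """Run one random behavior; return its recorded states if it is rejected."""
    navigate()                        # go to the origin page
    while not ready():                # await the readyWhen condition
        pass
    states = [observe()]              # record the initial DOM state
    for _ in range(max_actions):
        actions = possible_actions()  # only actions valid in the current DOM state
        if not actions:
            break
        random.choice(actions)()      # pick and perform the next action
        states.append(observe())      # query and record the resulting state
    if proposition(states):
        return []                     # behavior accepted
    return states                     # behavior rejected
```

A real checker repeats this many times and, on rejection, shrinks the action sequence down to a minimal failing behavior, a step omitted from this sketch.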
https://docs.quickstrom.io/en/latest/topics/how-it-works.html
2021-06-12T15:19:26
CC-MAIN-2021-25
1623487584018.1
[]
docs.quickstrom.io
Editing and deleting views As an administrator, you can modify, delete, and manage the capacity views in the TrueSight console. To revert an edited out-of-the-box view to its default settings, you need to reinstall the view. Also, if you accidentally delete an out-of-the-box view and then want to restore it, you must reinstall the view. When you install a view, any prior customization done to the view is lost. For more information about how to install a view, see Installing capacity views. For more information, see the following sections: Before you begin - Ensure that TrueSight Capacity Optimization is registered with the Presentation Server. For more information, see Installing the Presentation Server. - Ensure that your user group has the Capacity_Administration user group assigned to it. Otherwise, the options to edit a view or pages in a view are not available to you. For more information, see Configuring users and user groups. To enable or disable editing of a view You can lock a view for editing or deletion by disabling edit access to the view. By default, the out-of-the-box views are locked for editing while custom views can be edited. - In the TrueSight console, navigate to Administration > Capacity views. - In the View table, Clickfor the view that you want to enable or disable editing for. - Select the appropriate option: - To lock a view, click Lock view (disable editing). - To unlock a locked view, click Unlock view (enable editing). To edit a view - From the> Capacity > Views > Custom Views, open the required view. On the view page, click and select Edit view. Info Alternatively, you can perform the following steps: - In the TrueSight console, select> Administration > Capacity Views. In the Capacity Views page, in the table, clickcorresponding to the view you want to edit and click Edit view. In the Edit view page, specify the following properties and click Save: To edit an existing page in a view - From the> Capacity > Views > Custom Views, open the required view. - Open the required view page. On the view page, clickand select Edit the page. Info Alternatively, perform the following steps: - In the TrueSight console, select> Capacity > Views > Custom Views. - From the list of custom views, click the custom view that you want to edit. - In the custom view, clickand select Edit the page. Depending on the view page template of the page, edit the required fields and click Save. The custom view page is displayed with the new configuration. To add a page to an existing custom view - From the> Capacity > Views > Custom Views, open the required view. - Clickand select Add a new page. - In the Add page dialog box, do the following: - In the Page name box, type the name of the page. - From the Display the page list, select if you want to display the page As a tab of the page menu, or In full screen, without page menu. - Select a template. Click on a template to select it. - Click Create. The page is added to the custom view. The Create button is available for selection only after you select a template. The custom view is created and is displayed. - Configure the view further. The sections of the view that can be edited have a icon located at the top-right. For more information, see View page templates. - Click Save. The page is added to the view. To delete a page in a view - Navigate to the view whose page you want to delete. - Open the view page that you want to delete. - Clickand select Delete page. - Click Yes in the confirmation box. The view page is deleted. 
To delete a view - Ensure that you have the rights to edit or delete a view. For more information, see To enable or disable editing of a view. - In the TrueSight console, select> Administration > Capacity Views. - In the Capacity Views page, clickcorresponding to the view you want to delete and click Delete view as shown in the following image: - Click Yes in the confirmation box. The view is deleted and is no longer displayed in the console. To modify the access rights for a view - In the TrueSight console, select> Administration > Capacity Views. - From the action menu that is located next to the view you want to edit the access rights for, click Edit access rights. - In the Grant visibility to access groups dialog box, select the access groups this view will be visible and have access to. The selected groups appear in the Selected access groups list. You can apply the selected access groups either to a particular view or to a view group it belongs to. - Click Apply.
https://docs.bmc.com/docs/TSCapacity/110/editing-and-deleting-views-674155367.html
2021-06-12T13:38:26
CC-MAIN-2021-25
1623487584018.1
[]
docs.bmc.com
How to create a new email template If you want to use email templates for your campaigns or funnels this is the tutorial to read Last update 3 months ago If you manage different kinds of email templates you can group them into different folders Here is how to double-check your email outbound for Outlook and Hotmail If you want to add the Customerly SPF record on top of what you are already using follow this tutorial If you want to deliver a message to all your users or a segment of them, you can use the campaigns feature and deliver via live chat If you want to know how to unsubscribe or block your leads or users, here are the steps you should take. Learn how to check the newsletter stats If you want to know how to see a newsletter preview this article is for you Here are a few easy steps on how to delete a newsletter
https://docs.customerly.help/campaigns
2021-06-12T14:27:59
CC-MAIN-2021-25
1623487584018.1
[]
docs.customerly.help
Important - Starting in the second half of 2021, Google is deprecating web-view sign-in support. If you're using Google federation for B2B invitations or Azure AD B2C, or if you're using self-service sign-up with Gmail, Google Gmail users won't be able to sign in if your apps authenticate users with an embedded web-view. Learn more. - Starting October 2021, Microsoft will no longer support the redemption of invitations by creating unmanaged Azure AD accounts and tenants for B2B collaboration scenarios. In preparation, we encourage customers to opt into email one-time passcode authentication, which is now generally available.
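For context on the invitation redemption changes mentioned above: B2B collaboration invitations can be created programmatically through the Microsoft Graph invitations endpoint. The sketch below is illustrative only; it assumes you have already acquired a bearer token with the User.Invite.All permission (for example via MSAL), and the guest email address and redirect URL are placeholders.

```python
import requests

# Placeholder bearer token; acquiring it (e.g. with MSAL) is out of scope here.
GRAPH_TOKEN = "<access-token-with-User.Invite.All>"

invitation = {
    # Email address of the external (guest) user to invite.
    "invitedUserEmailAddress": "partner@example.com",
    # Where the guest lands after redeeming the invitation.
    "inviteRedirectUrl": "https://myapplications.microsoft.com",
    # Let Microsoft send the invitation email on your behalf.
    "sendInvitationMessage": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=invitation,
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("inviteRedeemUrl"))
```

How the guest then authenticates (their own Azure AD account, email one-time passcode, or a Google account) depends on the identity providers enabled in the tenant, as discussed above.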
https://docs.microsoft.com/en-in/azure/active-directory/external-identities/what-is-b2b
2021-06-12T13:45:40
CC-MAIN-2021-25
1623487584018.1
[array(['media/what-is-b2b/add-a-b2b-user-to-azure-portal.png', 'Screenshot showing the New Guest User invitation entry page'], dtype=object) array(['media/what-is-b2b/consentscreen.png', 'Screenshot showing the Review permissions page'], dtype=object) array(['media/what-is-b2b/tutorial-mfa-policy-2.png', 'Screenshot showing the Conditional Access option'], dtype=object) array(['media/what-is-b2b/access-panel-manage-app.png', 'Screenshot showing the Access panel for a guest user'], dtype=object) array(['media/what-is-b2b/identity-providers.png', 'Screenshot showing the Identity providers page'], dtype=object) array(['media/what-is-b2b/self-service-sign-up-user-flow-overview.png', 'Screenshot showing the user flows page'], dtype=object) ]
docs.microsoft.com
8.1 defpat 1 defpat source code: This module provides the forms defpat and pat-lambda. see also define/match from racket/match and ~define from generic-bind pat-lambda is a version of lambda where (again) the arguments can be match patterns. see also match-lambda, match-lambda*, and match-lambda** from racket/match, and ~lambda from generic-bind The arg-pat can’t start with a [ though, because square brackets are used to specify optional arguments: expands to (defpat head (pat-lambda args body ...)) like lambda, except that each arg-pat can be an arbitrary match pattern. Just as with defpat, the arg-pat can’t start with a [, and you have to use square brackets to specify an optional argument It is very similar to match-lambda**, except that it doesn’t support multiple clauses, and it allows optional arguments, keyword arguments, and a rest argument. As an example, expands to and for keyword-arguments, expands to 2 match-case-lambda like case-lambda, except that each arg-pat can be an arbitrary match pattern. As an example, is equivalent to Clauses with the same arity are grouped together into a single case-lambda clause with multiple match* clauses within it. 3 opt-case-lambda like case-lambda, except that it supports optional arguments. 4 opt-match-case-lambda like match*-case-lambda, except that it supports optional arguments.
https://docs.racket-lang.org/defpat-main/index.html
2021-06-12T13:51:24
CC-MAIN-2021-25
1623487584018.1
[]
docs.racket-lang.org
FullStackedBar3DSeriesView Class Represents a series view of the 3D Full-Stacked Bar type. Namespace: DevExpress.XtraCharts Assembly: DevExpress.XtraCharts.v21.1.dll Declaration public class FullStackedBar3DSeriesView : StackedBar3DSeriesView Public Class FullStackedBar3DSeriesView Inherits StackedBar3DSeriesView Remarks The FullStackedBar3DSeriesView class provides the functionality of a series view of the 3D full-stacked bar type within a chart control. The FullStackedBar3DSeriesView class inherits properties and methods from the base StackedBar3DSeriesView class which defines the common settings of the stacked bar series views. Note that a particular view type can be defined for a series via its SeriesBase.View property. For more information on series views of the stacked bar type, please see the Full-Stacked Bar Chart topic. Example
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.FullStackedBar3DSeriesView
2021-06-12T14:07:24
CC-MAIN-2021-25
1623487584018.1
[]
docs.devexpress.com
Breadcrumb In Scuba 2.x, this refers to the user's query history in the Explorer. In Scuba 3.0, this is used to show hierarchy between content, including knowledge objects and contexts. This is a "breadcrumb trail" of your hierarchy. Click the previous breadcrumbs to go back to previous sections of the hierarchy.
https://docs.scuba.io/lexicon/Breadcrumb
2021-06-12T15:12:17
CC-MAIN-2021-25
1623487584018.1
[]
docs.scuba.io
Note UCP is now MKE. The product formerly known as Universal Control Plane (UCP) is now Mirantis Kubernetes Engine (MKE). This image has commands to install and manage MKE on a Mirantis Container Runtime. You can configure the commands using flags or environment variables. When using environment variables, use the docker container run -e VARIABLE_NAME syntax to pass the value from your shell, or docker container run -e VARIABLE_NAME=value to specify the value explicitly on the command line. The container running this image needs to be named ucp and bind-mount the Docker daemon socket. Below you can find an example of how to run this image. Additional information is available for each command with the --help flag. docker container run -it --rm \ --name ucp \ -v /var/run/docker.sock:/var/run/docker.sock \ mirantis/ucp:3.X.Y \ command [command arguments] Note Depending on the version of MKE 3.X.Y in use, it may be necessary to substitute docker/ucp:3.X.Y for mirantis/ucp:3.X.Y to get the appropriate image (look in and to confirm correct usage).
https://docs.mirantis.com/containers/v3.1/mke-ops-guide/cli-ref.html
2021-06-12T15:13:41
CC-MAIN-2021-25
1623487584018.1
[]
docs.mirantis.com
[−][src]Crate tracing_line_filter A tracing filter for enabling individual spans and events by line number. tracing is a framework for instrumenting Rust programs to collect scoped, structured, and async-aware diagnostics. The tracing-subscriber crate's EnvFilter type provides a mechanism for controlling what tracing spans and events are collected by matching their targets, verbosity levels, and fields. In some cases, though, it can be useful to toggle on or off individual spans or events with a higher level of granularity. Therefore, this crate provides a filtering Layer that enables individual spans and events based on their module path/file path and line numbers. Since the implementation of this filter is rather simple, the source code of this crate is also useful as an example to tracing users who want to implement their own filtering logic. Usage First, add this to your Cargo.toml: tracing-line-filter = "0.1" Examples Enabling events by line: use tracing_line_filter::LineFilter; mod some_module { pub fn do_stuff() { tracing::info!("i'm doing stuff"); tracing::debug!("i'm also doing stuff!"); } } fn main() { use tracing_subscriber::prelude::*; let mut filter = LineFilter::default(); filter .enable_by_mod("my_crate::some_module", 6) .enable_by_mod("my_crate", 25) .enable_by_mod("my_crate", 27); tracing_subscriber::registry() .with(tracing_subscriber::fmt::layer().pretty()) .with(filter) .init(); tracing::info!("i'm not enabled"); tracing::debug!("i'm enabled!"); some_module::do_stuff(); tracing::trace!("hi!"); } Chaining a LineFilter with a tracing_subscriber EnvFilter: use tracing_line_filter::LineFilter; use tracing_subscriber::EnvFilter; mod some_module { pub fn do_stuff() { tracing::info!("i'm doing stuff"); tracing::debug!("i'm also doing stuff!"); // This won't be enabled, because it's at the TRACE level, and the // `EnvFilter` only enables up to the DEBUG level. tracing::trace!("doing very verbose stuff"); } } fn main() { use tracing_subscriber::prelude::*; let mut filter = LineFilter::default(); filter .enable_by_mod("with_env_filter", 30) .enable_by_mod("with_env_filter", 33) // use an `EnvFilter` that enables DEBUG and lower in `some_module`, // and everything at the ERROR level. .with_env_filter(EnvFilter::new("error,with_env_filter::some_module=debug")); tracing_subscriber::registry() .with(tracing_subscriber::fmt::layer().pretty()) .with(filter) .init(); tracing::info!("i'm not enabled"); tracing::debug!("i'm enabled!!"); some_module::do_stuff(); tracing::trace!("hi!"); // This will be enabled by the `EnvFilter`. tracing::error!("an error!"); }
https://docs.rs/tracing-line-filter/0.1.0/tracing_line_filter/
2021-06-12T14:17:42
CC-MAIN-2021-25
1623487584018.1
[]
docs.rs
- request (PutBucketRequest) - The PutBucketRequest that defines the parameters of the operation. Depending on your latency and legal requirements, you can specify a location constraint that will affect where your data physically resides. Buckets are similar to Internet domain names. Just as Amazon is the only owner of the domain name Amazon.com, only one person or organization can own a bucket within Amazon S3. The similarities between buckets and domain names are not a coincidence - there is a direct mapping between Amazon S3 buckets and subdomains of s3.amazonaws.com. Objects stored in Amazon S3 are addressable using the REST API under the domain bucketname.s3.amazonaws.com. For example, the object homepage.html stored in the Amazon S3 bucket mybucket can be addressed as http://mybucket.s3.amazonaws.com/homepage.html. To conform with DNS requirements, the following constraints apply: - Bucket names should not contain underscores (_) - Bucket names should be between 3 and 63 characters long - Bucket names should not end with a dash - Bucket names cannot contain adjacent periods - Bucket names cannot contain dashes next to periods (e.g., "my-.bucket.com" and "my.-bucket" are invalid) - Bucket names cannot contain uppercase characters There is no limit to the number of objects that can be stored in a bucket and no variation in performance when using many buckets or just a few. You can store all of your objects in a single bucket or organize them across several buckets. This example shows how to create a bucket in a specific region and with a canned ACL configuring the bucket to be publicly readable. // Create a client AmazonS3Client client = new AmazonS3Client(); // Construct request PutBucketRequest request = new PutBucketRequest { BucketName = "SampleBucket", BucketRegion = S3Region.EU, // set region to EU CannedACL = S3CannedACL.PublicRead // make bucket publicly readable }; // Issue call PutBucketResponse response = client.PutBucket(request);
https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/M_Amazon_S3_AmazonS3_PutBucket.htm
2018-07-16T02:23:16
CC-MAIN-2018-30
1531676589172.41
[array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/CopyCode.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object)]
docs.aws.amazon.com
NewsPoems from google news NewsPoems excerpted from weekly Top Stories - Google News PALIN Resigning Governor's Job; Future Unclear - New York Times Palin stepping down as Alaska governor - San Francisco Chronicle Palin's Resignation Has Many Asking, What Next? - FOXNews Analysis: Palin Was Fed Up - CBS News France threatens blacklist for Yemenia over crash - AFP Aid workers kidnapped in Darfur - Aljazeera.net UN chief gambles on Burma breakthrough - BBC News Germany: Demjanjuk Cleared for Trial - New York Times Palin Resigning Governor's Job; Future Unclear - New York Times North Korea 'tests two missiles' - BBC News Iran Cleric Says British Embassy Staff to Stand Trial - New York Times Ousted Honduran president riled old guard, business - Reuters Failed strategy in Afghanistan - guardian.co.uk Sanford heads for Florida for tense reunion with wife - New York Daily News Fun Fourth of July Facts: A Pop Quiz!
- CBS News Germany: Demjanjuk Cleared for Trial - New York Times Talking Business Ire at Madoff Swings Toward the Referee - New York Times For Banks, Wads of Cash and Loads of Trouble - New York Times Airline BA to cut capacity, delay new planes - The Associated Press The iphone 3GS' un-fun feature: idrain - Los Angeles Times MySpace victim's mom disappointed by ruling - msnbc.com Employee shot, wounded at Virginia Apple store - CNET News Disappointment, but the British Are Used to It - New York Times Nash re-ups with Jackets for 8 years - The Associated Press Armstrong Says He's All About The Team - Washington Post Thaw in Senate Talks, but No Hints of Power-Sharing Deal - New York Times France threatens blacklist for Yemenia over crash - AFP Aid workers kidnapped in Darfur - Aljazeera.net World learns from Mexico's A/H1N1 experience: PAHO director - Xinhua Revisions to Health Bill Are Unveiled by Democrats - New York Times Woman at heart of Hep-C probe convinced Springs center not to ... - Colorado Springs Gazette Tyeb Mehta gave shape and meaning to Indian art - Hindu Art Venegas leaves as UCLA men's track coach - The Associated Press Art Imitating Lunch - New York Times 'Performing Democracy — Pakistan Art 2009' exhibited at KAS office - Daily Times Hobart City Art Prize - ABC Online SFMOMA may have shot at Fishers' art collection - San Francisco Chronicle Olympic public art program works unveiled - Vancouver Sun Pigeons can tell good art from bad - Telegraph.co.uk Why art is vital to freedom - Christian Science Monitor Planning Michael Jackson's Memorial A 'Phenomenal Undertaking' - MTV.com Strong Sales Continue for Jackson Albums - New York Times San Franciso Bay Area July 4th fireworks and other events calendar - San Jose Mercury News Jackson custody case: the legal issues - BBC News Diprivan risk well-known to doctors - CNN Michael Jackson fans stick it out at Neverland Ranch - Los Angeles Times 'Public Enemies' misses its mark - Los Angeles Times Concert promoter expects to erase Jackson's debts - Reuters Billy Mays Funeral -- Death of a Salesmen - TMZ.com Director Forman stars at opening of Karlovy Vary film festival - AFP Depp film comes out with all guns blazing - WalesOnline I'm very much unlike the Bebo of 'Kambakkht...': Kareena - Hindu At Outfest, redefining gay film - Los Angeles Times Group helps keep Campbell summer film series alive - San Jose Mercury News Censored 1966 film Hands Up reveals subversive humour in East Germany - Times Online Online film-making for timid Spielbergs - Financial Times Quintessential American film is 'for the kids' - St. Louis Post-Dispatch Film shoots on sea link will cost Rs 1L a day - Times of India A judge's poetry - Daily Monitor Sex writing can be sublime in poetry - Livemint "Friends of Poetry" Inspire Imagination - Kalamazoo Weekly San Diego Poetry Annual to host reading - Examiner.com Local writer hosts poetry reading - Barrie Advance National Youth Poetry Slam Team to give free performance - The Plain Dealer - cleveland.com The Sandbox: "Flower"'s Video Game Poetry - IFC Local author releases her latest book of poetry - Grand Junction Free Press UpSurge! poetry and jazz offers musical fireworks alternative to ... - Examiner.com Katie Holmes on US dance show - The Press Association Choreographer Renowned for Innovative, Unconventional Works - Washington Post 'So You Think You Can Dance': The kiss of death! 
- Los Angeles Times MADONNA TO PAY HOMAGE TO JACKSON'S DANCE MOVES AT LONDON SHOW - Contactmusic.com Audience pan ''inebriated'' dance contestant - Melbourne Herald Sun Ugly dance - Jerusalem Post Dance Fever: Not for feminists - Daily Star - Lebanon Dance of the Boards - Seattle Post Intelligencer Dance Gal Dance wins Wandering Cloud Stakes - ESPN Week Two: Examiner.com's New York performance art preview - Examiner.com Apollo 11 As Performance Art - Digital City Life as Performance Art - Commercial Record Young artist going places - Kormorant Orlan's art of sex and surgery - guardian.co.uk Weekly performance art openings July 5th – July 11th - Examiner.com Pencil This In: Cantastoria Performance @ Manual Archives, Art ... - LAist Dallas Museum of Art to Examine Relationships Between Performing ... - Art Daily Experience excitement of synchronised drumming - Express & Echo Andrew Grice: Tories fear 'scorched earth' policy by Government - Independent Wealthy investors still cautious, fear further price falls: Poll - AsiaOne Villagers in Lalgarh still in grip of fear - Hindu FAM cancel Merdeka meet due to A(H1N1) virus fear - Malaysia Star Liverpool fear price war over Inler move - Daily Mail Steven Wells: fear and loathing in ER - Times Online Andy Murray: I have nothing to fear - Mirror.co.uk Hondurans fear crisis will turn 'ugly' - Financial Times Fear as tremor hits Italy's G8 venue - Independent Online Wives left behind in Mexico by migrants suffer 'poorer mental health' - Los Angeles Times Mental health centre to lose 90 beds, 240 staff - Globe and Mail Mental health meetings - NW Evening Mail Budget fight may affect health-service providers - Philadelphia Inquirer Mental health centers worry about funding - SunHerald.com (registration) Health Buzz: Complex Genetics Behind Mental Disorders and Other ... - U.S. News & World Report £4m mental health genetics centre - BBC News A balance of nutrients essential to physical and mental health - Canada.com A minister's mental health message - New Zealand Doctor Online Obama speech cues 'guilt free' holidays - Irish Times Kill First, Find Guilt Later - Atlantic Online When gayness was out in open, not a matter of guilt - Times of India "You can't imagine the guilt" - Grays Harbor Daily World Weekend Menu: Me-time with a side order of Guilt - TAWKN.com Guilt trip - the only major excursion for some Canadians this year - ITBusiness.ca Health-care aide admits guilt in death of elderly man - Philadelphia Daily News Finding time - Manila Bulletin Sentenced to life in Pickering murder, Cyr denies guilt - Newsdurhamregion.com Court acquits man of father's murder 'by reason of insanity' - New Straits Times Health Care Insurance, or, What is the Definition of Insanity?... - Daily Kos ON THE EDGE: More insanity from our hired help - Naples Daily News Review: Insanity in Los Angeles - Broadway World Theater review: 'Insanity' at NoHo Arts Center - Los Angeles Times North Cape utility — Stop the insanity - Cape Coral Daily Breeze Montreal woman sentenced for son's drowning - CTV.ca Stop the insanity! - Foreign Policy Finally, a cure for birthday party insanity - Examiner.com Poll: Media went over the top with Jackson coverage - Belfast Telegraph Jackson media frenzy faulted - Los Angeles Times Media delivering what people want - Boston Globe COMMENT: Media ethics —Shaukat Qadir - Daily Times Govt vs media: When the frog calleth the toad ugly! 
- Daily Monitor The Media Equation A Publisher Stumbles Publicly at the Post - New York Times On Digital Media May Start Pay-TV to Rival Naspers This Year - Bloomberg Troubled times suit PBL Media's hard man - The Australian The Media Reacts to Sarah Palin's Resignation - Huffington Post Rights Advocacy Group Expresses Concern over Human Rights in Guinea - Voice of America Minister Without Portfolio denies human rights violation in Cabinda - AngolaPress Turkmenistan, EU Hold Human Rights Talks In Brussels - RadioFreeEurope/RadioLiberty Church turns to human rights - The Age Lawyer's complaint abuses the human rights process - Vancouver Sun Human Rights Activist on Gambia killings - Ghana Broadcasting Corporation Prominent scholar slams Lee administration for denial of human rights - 한겨레 Human rights in South Korea have deteriorated: AI researcher - 한겨레 Obama and the Human Rights Council. Uh-- Mr. President, Did You ... - TPMCafé Emperor and Empress kick off 12-day tour - Globe and Mail Emperor remains head of Shintoism in Japan - Vancouver Sun Coins to mark 20th anniversary of Emperor's enthronement - The Japan Times President Obama: Another Jimmy Carter or America's First Emperor? - Yorktown Patriot Moghul descendant saved from slums by coal board - Independent Mughal emperor's descendent gets a job - Times of India In the garden: Red Emperor doesn't require ginger care for ... - Fort Worth Star Telegram The Villians of Dissidia Final Fantasy: The Emperor - Gamespy.com Archaeologists Seek New Clues to the Riddle of Emperor Qin's Terra ... - Science Magazine (subscription)
http://baby-docs.com/feed2js/lastrss3.php
2009-07-03T19:12:57
crawl-002
crawl-002-010
[]
baby-docs.com
Blocks (Available in all TurboCAD Variants) One or more objects can be combined and stored as a block. A block is treated as a single object for purposes of selecting and editing. Each block is stored in the drawing's internal library, and each instance of the block is a reference to this source. This means that numerous instances of a block can be added to the model without significantly increasing the file size. Groups are similar, but they are not linked to sources; each group contains its own drawing data. Note: A drawing's block library is internal to the drawing, and is stored with the file. Symbol libraries are similar but are stored separately, and can be accessed while in any drawing. If you need to create a group of objects that will be used in multiple drawings, create a symbol.If you want to import the entire contents of another file (TurboCAD or other format) as a block. Because blocks can contain individual objects, groups, and other blocks, they can be complex hierarchical structures. For block manipulation, use the Blocks Palette (View / Blocks). Tip: You can use the TC Explorer Palette to view blocks of any open drawing, and to drag blocks to and from drawings. You will find details on using block and the Blocks palette on the following pages: Additional Block Controls Show Selected: Toggles the Show Selection option. When on the result will be that anytime a single block is selected int the drawing space, that block will be selected and highlighted in the Blocks palette. Block name prefix: If names are automatically generated, you can enter a string that appears before the item name. The "@" character is a placeholder for the automatic number. Prompt for name: You will receive a prompt each time a new item is created. Generate block names: Names will be automatically assigned. Insert blocks when creating: Each block will be inserted into the drawing once it is created. Compensate for the offset of the base point at the block references: Prevents updating of the reference points for inserted blocks, when a the block reference point is relocated. Show Selection any time an inserted block is selected: Toggles the Show Selection option. When on the result will be that anytime a single block is selected int the drawing space, that block will be selected and highlighted in the Blocks palette. Block Attributes Default UI Menu: Draw/Block/Block Attributes Ribbon UI Menu: A block attribute is AutoCAD- informational text associated with a block, that you can enter whenever you insert a block. TurboCAD reads and displays block attributes from AutoCAD drawings (DWG) and DXF files. - Create the objects that comprise the blocks. (You can also add a block attribute after a block has been created, in Edit mode. This is done the same way as adding another geometric object. ) - Select Block Attribute Definition. Select the start point for the text, preferably on or near the block objects. - Type the "tag" name for the block attribute, such as "COST." This name is used to uniquely identify the attribute within the block, since more than one attribute can be created. If the drawing will be sent to AutoCAD, do not use spaces (use underscores instead). Note: This tool works like the text tool, in terms of alignment and local menu options. See Inserting Text - Enter the prompt and default value in the Inspector Bar, or you can enter these properties later. For example, the Prompt can be "How much does it cost?" and Default can be $0.00. - Press Enter to finish the definition. 
You can create multiple attributes, such as Part Number, Owner, etc. Attribute values are entered when the block is inserted in TurboCAD, as it takes place in AutoCAD when the variable ATTDIA is set to 1. When the block attributes are defined, simply include them in the selection of objects that will make up the new block. Setting Block Attributes: An attribute will be blank if no value is assigned. For blank attributes you will have to select the existing blocks and assign values to those attributes. Extracting Block Attributes: This example has three blocks used to mark windows, doors, and slabs. Here are the three blocks in the Blocks Palette. - Select Extract Attributes. In this window you can select the blocks and attributes that will be included in the schedule or report. Note: You can re-order a column by dragging its header to the new location. Scan Entire Drawing: Attributes will be extracted from all paper spaces and model space. Scan Model Space: Attributes will only be extracted from model space. Scan Current Space: Attributes will be extracted from the current model space or paper space. Scan Selected Entities: Attributes will be extracted only from currently selected objects. Scan Groups: If any groups contain blocks, these blocks will be scanned for attributes. Scan Nested Block: If blocks contain nested blocks, these nested blocks will be scanned for attributes. Include Xrefs: The content of Xrefs will also be scanned. The Blocks list contains all blocks that have attribute definitions. The Properties list shows all attributes found for the blocks checked in the Blocks list. Show Summary List: The Properties list contains all attributes for all blocks checked in the Blocks list. Show Selected Block Properties: The Properties list contains attributes only for the block that is currently checked in the Blocks list. Show Visible Properties Only: If selected, only attributes that are visible will be shown. - You can select attributes for each block that will be included. For example, click Show Selected Block Properties at the bottom, and select the "Door Mark" block. Check only the "COST" and "TYPE" attributes. - Select the "Room" block and check "AREA," "COST," and "TYPE." - Select the "Window Mark" block and check "COST" and "TYPE." You can right-click on any field under Blocks or Properties to get a popup menu in which you can check or uncheck all, or change the display name. - When the blocks and properties are defined, click Next. TurboCAD scans the file, and the Preview window displays the results. If TurboCAD Table is checked, the report will be inserted into the file. If you want to export the results, click External File. You can click on any column header to change the sorting order, or hide or rename a column. - Click Finish. If the table is to be inserted into TurboCAD, you will see the Insert Table window. Here you can define the column and row sizes. - Click OK, and then click where you want to place the table. You can make changes to the table formatting in the Selection Info palette. Creating a Block Editing a Block Exploding a Block: Click the Explode icon. If you explode a block that contains nested groups or blocks, the nested groups will remain intact. Each sub-block must be exploded separately. External References Default UI Menu: Insert/Create External Reference Ribbon UI Menu: An external reference (xref) is a kind of block in that it is stored in the current drawing's block library. 
However, unlike a block, the objects associated with an xref definition are not stored in the current drawing; they are stored in another drawing file. When you create an xref, the entire contents of this other file are imported as a block. Note: You can also access an external reference via the Blocks Palette. Xrefs are usually used to display the geometry of a common base drawing in the current drawing, such as a frame. They can be taken from files in any format readable by TurboCAD. Only files that have objects in Model Space can be added as xrefs. - To import another drawing as a block (xref), select Insert / Create External Reference. - In the External Reference File Location window, select a file type and locate the desired file. - The selected file is added to the block library of the current file. You can view it and insert it using the Blocks Palette. However, you cannot edit an xref in the Blocks Palette - you must change the original file. Note: If you edit the original file from which the xref was created, the block in the current library will not change. You will need to recreate the xref. External References Panel If you click on the External References button at the top of the Block palette you will see the External References panel at the bottom of the palette. When xrefs are nested in other drawings they are shown in a tree format listed below that referenced drawing. Right-clicking on any of the xrefs listed in the External References panel will open the following local menu: Open: will open the xref in TurboCAD. Reload: will reload the reference file, including any updates. Detach: will detach the referenced file, and any insertions of that file in the drawing will be deleted. You cannot detach nested xrefs. You must open the file to which they are attached to remove them. Bind: embeds the selected XREF as a Block in the drawing. All attachment to the external drawing is lost. VISRETAIN: Through the Design Director it is possible to edit the various properties of the layers within an XREF. These changes do not affect the original drawing. Even if the external referenced drawing is altered these layer changes will be retained. However, if the XREF is reloaded the changes will be lost. It is possible to disable this feature by changing the $VISRETAIN variable through the DCExplorer Palette. Exploding XREFs: When an XREF has been bound to a drawing it becomes a block. Instances of that block in the drawing can be exploded so that you can edit the geometry directly in the drawing. In Place Editing of Groups and Blocks You can edit groups or blocks in place within the drawing. - Select the block or group. - Right click to open the local menu and select Edit Tool. All other elements in the drawing will fade out, while the selected entity remains clear. - Proceed by making your desired changes to the object. This can include changing properties, moving geometry, adding geometry, editing geometry, and deleting geometry. - If you are editing a block, other instances of the block will show the changes you are making simultaneously. - To finish, right click and select Finish block/group editing from the local menu, or Finish Edit Content from the Block Palette toolbar, or Finish to Edit Block/Finish to Edit Group from the Tools menu. Block attributes that are edited, added or deleted will not be updated in existing block insertions, including the one that you selected. Only the "Original" block in the palette will reflect attribute changes. 
As of TurboCAD latest version, you can now snap to objects outside the block or group while in block/group mode. Inserting a Block To insert a block into the drawing, simply drag it out of the Blocks Palette and drop it into your drawing. The inserted block will still be selected after you place it, so that you can move, scale, or rotate it. Blocks are placed on Layer 0, even if their components are on other layers. Layer 0 should always be left visible, or blocks will instantly "disappear." Tip: You can use the TC Explorer Palette to drag blocks to and from drawings. Block Insertion Properties These properties can be used if you want to change any aspect of the block instance - its location, scale, angle, or the block reference itself. For any block, open the Properties window and open the Block Insertion page. For example, a block was inserted, then moved, rotated, and resized. Its Block Insertion page contains the current values for Position, Rotation, and Scale. You can change the values in this window, or use the Select Edit tools and see the updated values in these fields. To replace a selected block with another block, select the replacement block from the list and click Replace with. Click OK to implement the change. Inserting Blocks into Another File or Application You can also use the drag-and-drop technique to insert blocks into another open file. Dragging a block into another drawing accomplishes two things: it inserts the block into the target drawing, and it places the block into the library of the target document. The target drawing must be open and its window must be visible on the screen. (Use Window / Tile to see all open windows.) After you drag the block, the target file becomes the active window. Drag-and-drop can also be used to place blocks, symbols, or any selected objects into other Windows applications, such as Microsoft Word or graphics programs. Note: You can also use File / Extract To to export all blocks into another file. Inserting Blocks from Another File The Insert / File tool can be used to insert some or all blocks from another file into the current drawing. If both drawing have blocks with identical names, you can choose whether to ignore or replace them. Tip: You can also use File / Extract From to insert selected components like blocks (or layers or other settings) from another file into your drawing. However, this method will insert all blocks, without enabling you to pick and choose. - Select Auto Naming and make sure that Prompt for Name is checked for Blocks. - Select Insert / File and choose a file containing one or more blocks you wish to insert. - Use the Add Blocks window to select the blocks to import: The left panel displays the blocks found in the selected file, and the right panel displays any blocks that exist in the current drawing. Select the mode (Add, Replace, or Ignore) and click the relevant button at the top right (Add, Add / Replace All, etc.) to generate the blocks. If you want to pick and choose the blocks to add, make sure Process all additional blocks is not checked. Generate name: Assigns a new name to a block you wish to add. Modes: The options here depend on the selected block, and whether a block with the same name already exists in the current drawing. Add block(s): Adds the selected block. Replace block(s): The blocks from the external file will replace those in the current drawing. Ignore block(s): Click Ignore All and the blocks will not be added. 
Options Process all additional blocks: Adds and/or replaces all blocks found in the source file. Generate block name with prefix: Assigns a name automatically, with the specified prefix, to the inserted blocks. Using Insert / File also adds all drawing objects found in the source file. However, you can press Undo (Ctrl+Z) immediately after using AddBlocks to clear the imported objects, leaving only the imported blocks. You may have to undo twice, to remove objects both in Model Space and Paper Space. Other source file components like layers, lights, and views will also be inserted, but they can be deleted manually if needed. Warning: If the source file and current drawing have layers or other components with identical names, the layers will be replaced with those of the inserted file. There are other ways to import blocks from another drawing, without importing other components: Open both the source file and new file, and select Windows / Tile so that you can see both drawing windows. Use the Blocks Palette to drag blocks from the source file to the new file. This method imports the blocks only. In the source file, select the blocks you want to export (select in the drawing area, not in the Blocks Palette). Copy the blocks (Ctrl+C or Edit / Copy), and paste them (Ctrl+V) into the destination file. The AddBlocks window will appear. This method imports both the blocks and the layers the blocks are on.
http://docs.imsidesign.com/projects/TurboCAD-2019-User-Guide-Publication/TurboCAD-2019-User-Guide/Groups-Blocks-and-the-Library/Blocks/
2021-09-16T21:04:21
CC-MAIN-2021-39
1631780053759.24
[array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0001.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0002.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0003.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0004.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/blocks-2019-02-15.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0006.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0007.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0008.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0009.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0010.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0011.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0012.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0013.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0014.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0015.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0016.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0017.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0018.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0019.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0020.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0021.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0022.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0023.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0024.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0025.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0026.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0027.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/blocks-2019-02-15-1.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0029.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0030.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0031.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0032.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0033.png', 'img'], dtype=object) 
array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0034.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0035.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0036.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0037.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0038.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0039.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0040.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0041.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0042.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0043.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0044.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0045.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0046.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0047.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0048.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0049.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0050.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0051.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0052.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0053.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/blocks-2019-02-15-2.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0055.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0057.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0058.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0059.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0060.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0061.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/14-2-blocks-img0062.png', 'img'], dtype=object) ]
docs.imsidesign.com
ITable Data Interface Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. The ITableData provides an abstraction indicating how the system properties for a given table data model are to be serialized when communicating with the clients. The uniform serialization of system properties ensures that the clients can process the system properties uniformly across platforms. public interface ITableData type ITableData = interface Public Interface ITableData - Derived -
https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.mobile.server.tables.itabledata?view=azure-dotnet
2021-09-16T23:14:05
CC-MAIN-2021-39
1631780053759.24
[]
docs.microsoft.com
The global the-color-database object is an instance of color-database<%>. It maintains a database of standard RGB colors for a predefined set of named colors (such as “black” and “light gray”). See find-color for information on how color names are normalized. The following colors are in the database: Finds a color by name (character case is ignored). If no color is found for the name, #f is returned, otherwise the result is an immutable color object. Examples: Changed in version 1.16 of package draw-lib: Changed normalization to more generally remove spaces. Returns an alphabetically sorted list of case-folded color names for which find-color returns a color% value.
https://docs.racket-lang.org/draw/color-database___.html
2021-09-16T22:26:12
CC-MAIN-2021-39
1631780053759.24
[]
docs.racket-lang.org
The Django Project is managed by a team of volunteers pursuing three goals. Changes to this document require a four fifths majority of votes cast in a core team vote and no veto by the technical board.
https://getdocs.org/Django/docs/2.2.x/internals/organization
2021-09-16T21:33:47
CC-MAIN-2021-39
1631780053759.24
[]
getdocs.org
Drafting Reference Point Note: It is important that the Create Editing History option is on (Options|ACIS) if you want boolean operations to be recorded in a part or assembly. Otherwise the part/assembly will be deleted since it is essentially a new object. You can adjust the location of the Reference point for a Drafting object in the following manner. Reference point functionality applies only to the WorldCS and WorkplaneCS; it isn't allowed for EntityCS. WorldCS is the preferred mode. Note: it is important to understand that using the WorkplaneCS will mean the composite WorkplaneCS of all the source 3D entities for the Drafting Part/Assembly. The default Reference point will therefore be the Center of Extents of all these entities, not the origin as is the case with the WorldCS. You can also turn Reference point functionality on and off. - Create the drafting object from a sphere with the following options turned on. - With the new drafting object created, the reference point position will be defined automatically according to the center of extents of the source entities. The Reference point is defined for the DRAFTING PART (with all of its drafting objects), not for each single drafting object within it: - Add other 3D parts to the sphere to change the drafting object (the position is always related to the reference point). - To define a new reference point, right click on the PART object in the drafting tree and select Define Reference Point: - Click to define the new reference point position for the PART in Model Space: - All drafting objects within a part where the reference point position was changed will be updated according to the new reference point position:
http://docs.imsidesign.com/projects/TurboCAD-2019-User-Guide-Publication/TurboCAD-2019-User-Guide/Drafting-Palette-Creating-Standard-Views/Drafting-Reference-Point/
2021-09-16T21:00:25
CC-MAIN-2021-39
1631780053759.24
[array(['../../Storage/turbocad-2019-user-guide-publication/16-3-drafting-reference-point-img0001.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/16-3-drafting-reference-point-img0002.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/16-3-drafting-reference-point-img0003.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/16-3-drafting-reference-point-img0004.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/16-3-drafting-reference-point-img0005.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/16-3-drafting-reference-point-img0006.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/16-3-drafting-reference-point-img0007.png', 'img'], dtype=object) ]
docs.imsidesign.com
Editing 3D Objects using Selection Info Note: For general information on this palette, see the Selection Info Palette section. TC Mesh Simplification (Available only in Platinum) Default UI Menu: Modify/Modify 3D Objects/TC Surface Simplification Ribbon UI Menu: Simplifies meshes by reducing the total polygon count. For example, you can use it to reduce the number of polygons in a laser-scanned model (e.g. from 400000 to 4000 triangles). TC Surface Simplification is enabled when a 3D Boolean operation has been performed on two or more SMeshes. - Select a mesh or TC Surface. - Select the Percent to Keep. - Click the Finish button or select Finish from the local menu. Local Menu Options Ignore Boundaries, Contract Boundaries at end, Fix Boundaries: this switch tells the simplifier how it should process the model's boundaries (a boundary is the set of edges where each edge belongs to only one triangle). Do Full Update before simplification: you should use this setting if the simplifier fails or its result is incorrect. Usually it means that the simplifier's input model was incorrect. You may try to heal the model by using "Do Full Update" in this case. XClip (Available in all TurboCAD Variants) Default UI Menu: Modify/Clip/XClip Ribbon UI Menu: The XClip tool creates a cropped/clipped display of a selected external reference or block reference based upon a selected boundary. You can use any circle or closed polyline consisting of only straight segments as a boundary. - Select an xref or a block, or a group of xrefs or blocks. - In the local menu select the Rectangular/Polygonal option. - Draw the desired cutting area. - Press Finish from the local menu or Inspector Bar. In the local menu, if you click the "Generate Polyline" option, it will create a polyline of the area you have cut. A cropped version of the xref or block is created; the original xref or block insertion is destroyed. The xclip and the selected boundary are not associative, so updating the boundary does not update the xclip. If the xref/block contains 3D objects, these objects are shown as "hollowed" whether they are surfaces or ACIS solids. Xclipping does not create new geometry for the clipped entities, so the missing faces are simply not displayed. In this regard xclips are not like booleans. Regardless of the current UCS, the clip depth is applied parallel to the clipping boundary. XClip Properties XClip properties provide added control for how xclips are displayed. Display only result: if this option is unchecked the clipping boundary will be ignored, and all of the geometry of the clipped blocks or xrefs will be displayed. Enable front clip: If this option is selected the xclip will clip everything in the clipped entities above a specified height. Front clipping always occurs parallel to the original clip boundary. Front Clip: Sets the height for front clipping. Enable back clip: If this option is selected the xclip will clip everything in the clipped entities below a specified height. Back clipping always occurs parallel to the original clip boundary. Back Clip: Sets the depth for back clipping. In the following picture the xclip has a Front Clip of 12in. and a Back Clip of 1in.
http://docs.imsidesign.com/projects/TurboCAD-2019-User-Guide-Publication/TurboCAD-2019-User-Guide/Editing-in-3D/Editing-3D-Objects-using-Selection-Info/
2021-09-16T21:49:28
CC-MAIN-2021-39
1631780053759.24
[array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0001.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0002.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0003.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0004.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0005.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0006.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0007.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0008.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0009.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0010.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0011.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0012.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0013.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0014.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0015.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0016.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0017.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0018.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0019.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0020.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0021.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0022.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0023.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0024.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0025.png', 'img'], dtype=object) 
array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0026.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0027.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0028.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0029.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/editing-3d-objects-using-selection-info-2019-02-14.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0031.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0032.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0033.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/10-5-editing-3d-objects-using-selection-info-img0034.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/editing-3d-objects-using-selection-info-2019-02-14-1.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/editing-3d-objects-using-selection-info-2019-07-04-1.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/editing-3d-objects-using-selection-info-2019-07-04-2.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/editing-3d-objects-using-selection-info-2019-07-04-4.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/editing-3d-objects-using-selection-info-2019-07-04-7.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/editing-3d-objects-using-selection-info-2019-07-04-8.png', 'img'], dtype=object) array(['../../Storage/turbocad-2019-user-guide-publication/editing-3d-objects-using-selection-info-2019-07-04-9.png', 'img'], dtype=object) ]
docs.imsidesign.com
Installing TurboCAD To install TurboCAD, insert the CD into your CD-ROM drive. If the installation process does not start automatically, select Start / Run from the Windows taskbar and type D:\setup.exe (where D is the drive letter of the CD-ROM). After you have installed TurboCAD, the Setup program creates a program folder. If you choose the default settings, TurboCAD is installed in the C:\Program Files\IMSIDesign\TCWP2019 folder. This folder contains several subfolders that contain TurboCAD program files and related files such as templates, sample drawings, and symbols. Note: The Setup program also creates a program group containing the TurboCAD application icon, as well as shortcuts to the Help and the Readme file. The program group is accessed through the Start menu. Before you start the program, please read the Readme file, which contains the latest information on TurboCAD. To remove TurboCAD from your computer: - In Windows, select Start / Settings / Control Panel. - Double-click Add/Remove Programs. - Select TurboCAD from the list. - Click Add/Remove and follow the instructions on the screen.
http://docs.imsidesign.com/projects/TurboCAD-2019-User-Guide-Publication/TurboCAD-2019-User-Guide/Getting-Started-with-TurboCAD-2019/Installing-TurboCAD/
2021-09-16T22:16:04
CC-MAIN-2021-39
1631780053759.24
[]
docs.imsidesign.com
Building Container Images¶ Two make targets exists to build container images automatically based on the locally checked out branch: Developer images¶ Run make dev-docker-image to build a cilium-agent Docker image that contains your local changes. ARCH=amd64 DOCKER_DEV_ACCOUNT=quay.io/myaccount DOCKER_IMAGE_TAG=jane-developer-my-fix make dev-docker-image Run make docker-operator-generic-image (respectively, docker-operator-aws-image or docker-operator-azure-image) to build the cilium-operator Docker image: ARCH=amd64 DOCKER_DEV_ACCOUNT=quay.io/myaccount DOCKER_IMAGE_TAG=jane-developer-my-fix make docker-operator-generic-image The commands above assumes that your username for quay.io is myaccount. Race detection¶ See section on compiling Cilium with race detection. Official release images¶ Anyone can build official release images using the make target below. DOCKER_IMAGE_TAG=v1.4.0 make docker-images-all Experimental Docker BuildKit and Buildx support¶ Docker BuildKit allows build artifact caching between builds and generally results in faster builds for the developer. Support can be enabled by: export DOCKER_BUILDKIT=1 Multi-arch image build support for arm64 (aka aarch64) and amd64 (aka x86-64) can be enabled by defining: export DOCKER_BUILDX=1 Multi-arch images are built using a cross-compilation builder by default, which uses Go cross compilation for Go targets, and QEMU based emulation for other build steps. You can also define your own Buildx builder if you have access to both arm64 and amd64 machines. The “cross” builder will be defined and used if your current builder is “default”. Buildx targets push images automatically, so you must also have DOCKER_REGISTRY and DOCKER_DEV_ACCOUNT defined, e.g.: export DOCKER_REGISTRY=docker.io export DOCKER_DEV_ACCOUNT=your-account Currently the cilium-runtime and cilium-builder images are released for amd64 only (see the table below). This means that you have to build your own cilium-runtime and cilium-builder images: make docker-image-runtime After the build finishes update the runtime image references in other Dockerfiles ( docker buildx imagetools inspect is useful for finding image information). Then proceed to build the cilium-builder: make docker-image-builder After the build finishes update the main Cilium Dockerfile with the new builder reference, then proceed to build Hubble from github.com/cilium/hubble. Hubble builds via buildx QEMU based emulation, unless you have an ARM machine added to your buildx builder: export IMAGE_REPOSITORY=${DOCKER_REGISTRY}/${DOCKER_DEV_ACCOUNT}/hubble CONTAINER_ENGINE="docker buildx" DOCKER_FLAGS="--push --platform=linux/arm64,linux/amd64" make image Update the main Cilium Dockerfile with the new Hubble reference and build the multi-arch versions of the Cilium images: make docker-images-all Official Cilium repositories¶ The following table contains the main container image repositories managed by Cilium team. It is planned to convert the build process for all images based on GH actions. Image dependency: [docker|quay].io/cilium/cilium quay.io/cilium/cilium-envoy depends on: quay.io/cilium/cilium-envoy-builder Update cilium-builder and cilium-runtime images¶ cilium-builder depends on cilium-runtime so one needs to update cilium-runtime first. Steps 4 and 7 will fetch the digest of the image built by GitHub actions. $ make -C images/ update-runtime-image Commit your changes and create a PR in cilium/cilium. 
$ git commit -s -a -m "update cilium-{runtime,builder}" Ping one of the members of team/build to approve the build that was created by GitHub Actions here. Note that at this step cilium-builder build failure is expected since we have yet to update the runtime digest. Wait for cilium-runtime build to complete. Only after the image is available run: $ make -C images/ update-runtime-image update-builder-image Commit your changes and re-push to the PR in cilium/cilium. $ git commit --amend -s -a Ping one of the members of team/build to approve the build that was created by GitHub Actions here. Wait for the build to complete. Only after the image is available run: $ make -C images/ update-runtime-image update-builder-image Commit your changes and re-push to the PR in cilium/cilium. $ git commit --amend -s -a.
https://docs.cilium.io/en/stable/contributing/development/images/
2021-09-16T20:42:29
CC-MAIN-2021-39
1631780053759.24
[]
docs.cilium.io
By default, core dumps from crashing programs are now stored by systemd-coredump, rather than created in the crashing process’s current working directory by ABRT. They may be extracted using the coredumpctl tool. For example, simply run coredumpctl gdb to view a backtrace for the most recent crash in gdb. For more information on this change, refer to the manpages coredumpctl(1), systemd-coredump(8), and coredump.conf(5).
https://docs.fedoraproject.org/hu/fedora/f26/release-notes/developers/Development_Tools/
2021-09-16T22:57:39
CC-MAIN-2021-39
1631780053759.24
[]
docs.fedoraproject.org
Date: Tue, 18 Jun 2013 09:32:24 +0200 (CEST) From: FreeBSD Security Advisories <[email protected]> To: FreeBSD Security Advisories <[email protected]> Subject: FreeBSD Security Advisory FreeBSD-SA-13:06.mmap Message-ID: <[email protected]>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=0+0+archive/2013/freebsd-security-notifications/20130623.freebsd-security-notifications
2021-09-16T22:17:26
CC-MAIN-2021-39
1631780053759.24
[]
docs.freebsd.org
Legacy EXTRA! macros and Visual Basic applications that contain parameterized properties cause an error to occur in Reflection 2008. When you run the macro or application, the HostOptions object returns the error "Method or property not found" when it encounters one of the following parameterized properties: AttributeForeground, AttributeBackground, or Color. Reflection 2008 is based on C# and doesn't support parameterized properties; therefore, statements that get and set these properties will have no effect. To avoid this problem, replace the parameterized properties with an equivalent form. To edit the property: For example, change HostOptions.AttributeForeground(x) = y to HostOptions.set_AttributeForeground x, y -or- y = HostOptions.get_AttributeForeground x
https://docs.attachmate.com/Reflection/2008/R1/Guide/pt/vba_guide/16474.htm
2021-09-16T22:04:41
CC-MAIN-2021-39
1631780053759.24
[]
docs.attachmate.com
Date: Wed, 17 Aug 2011 18:08:18 +0000 From: "Miller, Vincent (Rick)" <[email protected]> To: FreeBSD <[email protected]> Subject: Re: How much disk space required for make release? Message-ID: <CA717B7D.3BE2%[email protected]> In-Reply-To: <CALFgp2M_xk-EE9q2F91Y1L=Dhw2BF5nuXCLC0rvYn=4hhu27vg@mail.gmail.com> I want to thank everyone for their suggestions. I ended up creating a larger swap and /tmp and reran make release with much better results. It's not completely finished yet, but has certainly progressed much further than the other day. == Vincent (Rick) Miller Systems Engineer [email protected] t: 703-948-4395 21345 Ridgetop Cir Dulles, VA 20166 VerisignInc.com On 8/16/11 5:50 PM, "Edwin L. Culp W." <[email protected]> wrote: >On Tue, Aug 16, 2011 at 1:38 PM, Miller, Vincent (Rick) ><[email protected]> wrote: >> Hello all, >> >> I am attempting to 'make release' 8.2-RELEASE. After running for a few >>hours, it died citing lack of disk space. The filesystem has >>approximately 80GB available. How much disk space is required when >>making a release? >> >> == >> Vincent (Rick) Miller >> Systems Engineer >> [email protected] >> >> t: 703-948-4395 >> 21345 Ridgetop Cir Dulles, VA 20166 >> >> VerisignInc.com >> _______________________________________________ >> [email protected] mailing list >> >> To unsubscribe, send any mail to >>"[email protected]" >> >I am not running 8. Only 7, soon to be updated to 9, but I just >finished building a release on Current amd64. I doubt that it will >help much but . . . > ># du -s -m release >8693 release > >A little less than 9G. I wouldn't want to have less that 10G free, if >I were going to build regularly. > >Boy am I glad that disks are so much cheaper now. > >ed
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=509164+0+archive/2011/freebsd-questions/20110821.freebsd-questions
2021-09-16T22:10:54
CC-MAIN-2021-39
1631780053759.24
[]
docs.freebsd.org
StackTraces Base.StackTraces.StackFrame— Type. StackFrame Stack information representing execution context, with the following fields: func::Symbol The name of the function containing the execution context. linfo::Union{Core.MethodInstance, CodeInfo, Nothing} The MethodInstance containing the execution context (if it could be found). file::Symbol The path to the file containing the execution context. line::Int The line number in the file containing the execution context. from_c::Bool True if the code is from C. inlined::Bool True if the code is from an inlined frame. pointer::UInt64 Representation of the pointer to the execution context as returned by backtrace. Base.StackTraces.StackTrace— Type. StackTrace An alias for Vector{StackFrame} provided for convenience; returned by calls to stacktrace. Base.StackTraces.stacktrace— Function. stacktrace([trace::Vector{Ptr{Cvoid}},] [c_funcs::Bool=false]) -> StackTrace Returns a stack trace in the form of a vector of StackFrames. (By default stacktrace doesn't return C functions, but this can be enabled.) When called without specifying a trace, stacktrace first calls backtrace. The following methods and types in Base.StackTraces are not exported and need to be called e.g. as StackTraces.lookup(ptr). Base.StackTraces.lookup— Function. lookup(pointer::Union{Ptr{Cvoid}, UInt}) -> Vector{StackFrame} Given a pointer to an execution context (usually generated by a call to backtrace), looks up stack frame context information. Returns an array of frame information for all functions inlined at that point, innermost function first. Base.StackTraces.remove_frames!— Function. remove_frames!(stack::StackTrace, name::Symbol) Takes a StackTrace (a vector of StackFrames) and a function name (a Symbol) and removes the StackFrame specified by the function name from the StackTrace (also removing all frames above the specified function). Primarily used to remove StackTraces functions from the StackTrace prior to returning it. remove_frames!(stack::StackTrace, m::Module) Returns the StackTrace with all StackFrames from the provided Module removed.
https://docs.julialang.org/en/v1.0/base/stacktraces/
2021-09-16T21:10:14
CC-MAIN-2021-39
1631780053759.24
[]
docs.julialang.org
3.274 text-align-white-space-006 Expected Results There is no red visible on the page. Actual Results IE8 Mode (Internet Explorer 8) There is a red block in the top-left corner of the green block. The test fails because the value of text-align does not remain as justify when the value of white-space is set to pre-line. The value of text-align is incorrectly reset to the initial value.
https://docs.microsoft.com/en-us/openspecs/ie_standards/ms-css21/a896dbfe-be2b-4cd3-ab3d-26dd185c082f
2021-09-16T23:27:57
CC-MAIN-2021-39
1631780053759.24
[]
docs.microsoft.com
Mlt Field Query The more_like_this_field query is the same as the more_like_this query, except it runs against a single field. It provides a nicer query DSL over the generic more_like_this query, and supports typed field queries (automatically wraps typed fields with a type filter to match only on the specific type). { "more_like_this_field" : { "name.first" : { "like_text" : "text like this one", "min_term_freq" : 1, "max_query_terms" : 12 } } } - Note - more_like_this_field can be shortened to mlt_field. The more_like_this_field top level parameters include:
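For reference, a minimal Python sketch of issuing this query with pyes follows. The host address, index name, and document type are placeholders, and it assumes that ES.search() accepts a raw query dict (wrap it under a "query" key if your pyes version expects a full request body); check your pyes version's API before relying on it.

# Hypothetical sketch: run a more_like_this_field (mlt_field) query with pyes.
# The host, index name, and doc type below are placeholders.
from pyes import ES

conn = ES("127.0.0.1:9200")  # assumes a local Elasticsearch node

# Raw query dict mirroring the JSON example above.
query = {
    "more_like_this_field": {
        "name.first": {
            "like_text": "text like this one",
            "min_term_freq": 1,
            "max_query_terms": 12,
        }
    }
}

# Assumes search() accepts a raw query dict.
results = conn.search(query, indices=["my-index"], doc_types=["person"])
for doc in results:
    print(doc)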
https://pyes.readthedocs.io/en/latest/guide/reference/query-dsl/mlt-field-query.html
2021-09-16T22:27:26
CC-MAIN-2021-39
1631780053759.24
[]
pyes.readthedocs.io
Configure diagnostic trace 9 minute read By default, API Gateway produces diagnostic trace and debugging information to record details about its runtime execution. For example, this includes services starting or stopping, exceptions, and messages sent through the gateway. This information can then be used by administrators and developers for diagnostics and debugging purposes, and is useful when contacting Axway Support. You can view and search the contents of API Gateway tracing in the following locations: - Logs > Trace view in API Gateway Manager - A console window for the running server - Trace files in the following locations: - Admin Node Manager: INSTALL_DIR/trace - API Gateway instance: INSTALL_DIR/groups/<group-id>/<instance-id>/trace - API Gateway Analytics: INSTALL_DIR/trace You can view and search the contents of the gateway trace log, domain audit log, and transaction logs in the Logs view in API Gateway Manager. This section explains how to configure the trace log only. For more details, see Configure logging and events. For details on how to redact sensitive data from trace files (for example, user passwords or credit card details), see Hide sensitive data in API Gateway Manager. The trace level you set impacts the redaction. View API Gateway trace files Each time the gateway starts up, by default, it writes a trace file to the trace directory in your gateway installation (for example, INSTALL_DIR/groups/group-2/server1/trace). The following example shows an extract from a default: TraceLevel: INFO Timestamp: 15/Jun/2012:09:54:01.047 (day:hours:minutes:seconds:milliseconds) Thread-id: [1b10] TraceMessage: Realtime monitoring enabled Set API Gateway trace levels The possible trace levels in order of least to most verbose output are as follows: FATAL ERROR INFO DEBUG DATA FATAL is the least verbose and DATA the most verbose trace level. The default trace level is INFO. Set the trace level You can set the trace level using the following different approaches: Startup trace: When Admin Node Manager is starting up, it gets its trace level from the tracelevel attribute of the SystemSettings element in /system/conf/nodemanager. You can set the trace level in this file if you need to diagnose boot up issues. Default Settings trace: When the gateway has started, it reads its trace level from the Default Settings for the gateway instance. To set this trace level in the Policy Studio, click the Server Settings node in the Policy Studio tree, then click the General option, then select a Tracing level from the drop-down list. Dynamic trace: You can also change dynamic gateway trace levels on-the-fly in API Gateway Manager. For more details, see Configure logging and events. Configure API Gateway trace files. By default, trace.xml contains the following setting: <FileRolloverTrace maxfiles="500" filename="%s_%Y%m%d%H%M%S.trc"/> This setting means that API Gateway writes Node Manager trace output to nodemanager;onhostname_timestamp .trc (for example, nodemanager;on127.0.0.1_20130118160212.trc) in the trace directory of the API Gateway installation. And, the maximum number of files that the trace directory can contain is 500. Configure rollover settings The FileRolloverTrace element can contain the following attributes: filename File name used for trace output. Defaults to the tracecomponent attribute read from the SystemSettings element. directory Directory where the trace file is written. Defaults to INSTALL_DIR/trace when not specified. 
- If you change the trace directory, you will not be able to view the trace files in API Gateway Manager. For the recommended way to change the trace directory, see the following Axway knowledge base article. maxlen Maximum size of the trace file in bytes before it rolls over to a new file. Defaults to 16777216 (16 MB). maxfiles Maximum number of files that the trace directory contains for this filename. Defaults to 500. rollDaily Whether the trace file is rolled at the start of the day. Defaults to true. The following setting shows example attributes: <FileRolloverTrace maxfiles="5" maxlen="10485760" rollDaily="true"/> This setting means that the trace file rolls over when it reaches 10485760 bytes (10 MB) or at the start of each day, and that at most five trace files are kept for this filename. Write output to syslog On Linux, you can send API Gateway trace output to syslog. In your INSTALL_DIR/system/conf/trace.xml file, add a SyslogTrace element, and specify a facility. For example: <SyslogTrace facility="local0"/> Run trace at DEBUG level When troubleshooting, it can be useful to set the trace level to DEBUG for more verbose output. When running a trace at DEBUG level, the gateway writes the status of every policy and filter that it processes into the trace file. Debug a filter: The trace output for a specific filter starts with the filter name followed by an opening brace, and the trace for that filter is indented inside the braces. For more details, see API Gateway logs and Set API Gateway trace levels. Run trace at DATA level When the trace level is set to DATA, the gateway writes the contents of the messages that it receives and sends to the trace file. This enables you to see what messages the gateway has received and sent (for example, to reassemble a received or sent message). Note: When the trace level is set to DATA, passwords provided during login are logged in plain text. Search by thread ID Every HTTP request handled by the gateway is processed in its own thread, and threads can be reused when an HTTP transaction is complete. You can see what has happened to a message in the gateway by following the trace by thread ID. Because multiple messages can be processed by the gateway at the same time, following the thread ID lets you isolate the trace for a single message. Integrate trace output with Apache log4J Apache log4j is included on the API Gateway classpath. This is because some third-party products that API Gateway interoperates with require log4j. The configuration for log4j is found in the gateway INSTALL_DIR/system/conf directory in the log4j2.yaml file. For example, to specify that the log4j appender sends output to the gateway trace file, add the following setting to your log4j2.yaml file: Root: AppenderRef: - ref: STDOUT - ref: VordelTrace level: debug Environment variables These variables override the trace.xml file settings, which enables the logging behavior to be defined at runtime. APIGW_LOG_TRACE_TO_FILE=[true | false] - true = Write trace files to disk - false = Do not write trace files to disk APIGW_LOG_TRACE_JSON_TO_STDOUT=[true | false] - true = Output JSON formatted trace to stdout - false = Do not output JSON formatted trace to stdout
https://axway-open-docs.netlify.app/docs/apim_administration/apigtw_admin/tracing/
2021-09-16T21:03:18
CC-MAIN-2021-39
1631780053759.24
[]
axway-open-docs.netlify.app
- Method used for displaying images on the screen
- Memory cache limit (in megabytes)
- Enable OpenGL multi-sampling, only for systems that support it, requires restart
- Number of frames to render ahead during playback (sequencer only)
- Generate Image Mipmaps on the GPU
- Use international fonts
- Scale textures for the 3D View (looks nicer but uses more memory and slows image reloading)
- Allow user to choose any codec (Windows only, might generate instability)
- Draw tool/property regions over the main region, when using Triple Buffer
- Allow any .blend file to run scripts automatically (unsafe with blend files from an untrusted source)
- Automatically convert all new tabs into spaces for new and loaded text files
- Draw user interface text anti-aliased
- Use textures for drawing international fonts
- Translate interface
- Translate new data names (when adding/creating some)
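These are the tooltip descriptions of properties on Blender's UserPreferencesSystem type. As a rough illustration, such preferences can be read or changed from Blender's Python console via bpy; the property identifiers used below (memory_cache_limit, use_international_fonts, use_translate_interface) are assumptions for the 2.69-era API and should be verified against the class reference.

# Hypothetical sketch for the Blender 2.69-era Python API: adjust a few
# UserPreferencesSystem settings. Property names are assumptions -- verify
# them in the API reference before use.
import bpy

system = bpy.context.user_preferences.system

# Memory cache limit (in megabytes)
system.memory_cache_limit = 512

# Use international fonts and translate the interface
system.use_international_fonts = True
system.use_translate_interface = True

print("cache limit:", system.memory_cache_limit, "MB")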
https://docs.blender.org/api/blender_python_api_2_69_8/bpy.types.UserPreferencesSystem.html
2021-09-16T21:38:01
CC-MAIN-2021-39
1631780053759.24
[]
docs.blender.org
Profiling Profile.@profile— Macro @profile @profile <expression> runs your expression while taking periodic backtraces. These are appended to an internal buffer of backtraces. The methods in Profile are not exported and need to be called e.g. as Profile.print(). Profile.clear— Function clear() Clear any existing backtraces from the internal buffer. :count sorts in order of number of collected samples, and :overhead sorts by the number of samples incurred by each function by itself. noisefloor – Limits frames that exceed the heuristic noise floor of the sample. mincount – Limits the printout to only those lines with at least mincount occurrences. recur – Controls the recursion handling in :tree format. :off (default) prints the tree as normal. :flat instead compresses any recursion (by ip), showing the approximate effect of converting any self-recursion into an iterator. :flatc does the same but also includes collapsing of C frames (may do odd things around jl_apply). Profile.init— Function init(; n::Integer, delay::Real) Configure the delay between backtraces (measured in seconds), and the number n of instruction pointers that may be stored. Profile.fetch— Function fetch() -> data Returns a copy of the buffer of profile backtraces. Note that the values in data have meaning only on this machine in the current session, because it depends on the exact memory addresses used in JIT-compiling. This function is primarily for internal use; retrieve may be a better choice for most users. Profile.callers— Function callers(funcname, [data, lidict], [filename=<filename>], [linerange=<start:stop>]) -> Vector{Tuple{count, lineinfo}}
https://docs.julialang.org/en/v1.5/stdlib/Profile/
2021-09-16T21:57:16
CC-MAIN-2021-39
1631780053759.24
[]
docs.julialang.org
Nodes Management node The management node (sometimes abbreviated as mNode) interacts with a storage cluster to perform management actions, but is not a member of the storage cluster. Management nodes periodically collect information about the cluster through API calls and report this information to Active IQ for remote monitoring (if enabled). Management nodes are also responsible for coordinating software upgrades of the cluster nodes. The management node is a virtual machine that runs in parallel with one or more Element software-based storage clusters. In addition to upgrades, it is used to provide system services including monitoring and telemetry, manage cluster assets and settings, run system tests and utilities, and enable NetApp Support access for troubleshooting. As of the Element 11.3 release, the management node functions as a microservice host, allowing for quicker updates of select software services outside of major releases. These microservices or management services, such as the Active IQ collector, QoSSIOC for the vCenter Plug-in, and management node service, are updated frequently as service bundles. Storage nodes NetApp HCI storage nodes are hardware that provides the storage resources for the NetApp HCI system. Compute nodes NetApp HCI compute nodes are hardware that provides compute resources for the NetApp HCI system. Regardless of whether it is a four-node storage cluster or a two-node storage cluster, the minimum number of compute nodes remains two for a NetApp HCI deployment. Witness Nodes NetApp HCI Witness Nodes are virtual machines that run on compute nodes in parallel with an Element software-based storage cluster. Witness Nodes do not host slice or block services. A Witness Node enables storage cluster availability in the event of a storage node failure. You can manage and upgrade Witness Nodes in the same way as other storage nodes. A storage cluster can have up to four Witness Nodes. Their primary purpose is to ensure that enough cluster nodes exist to form a valid ensemble quorum.
https://docs.netapp.com/us-en/hci/docs/concept_hci_nodes.html
2021-09-16T23:00:59
CC-MAIN-2021-39
1631780053759.24
[]
docs.netapp.com
yevent.target: Y.AbstractType The shared type that this event was created on. This event describes the changes on target.
yevent.currentTarget: Y.AbstractType The current target of the event as the event traverses through the (deep) observer callbacks. It refers to the type on which the event handler (observe/observeDeep) has been attached. Similar to Event.currentTarget.
yevent.transaction: Y.Transaction The transaction in which this event was created.
yevent.path: Array<String|number> Computes the path from the Y.Doc to the changed type. You can traverse to the changed type by calling ydoc.get(path[0]).get(path[1]).get(path[2]).get( ...
yevent.changes.delta: Delta Computes the changes in the array-delta format. See more in the Delta Format section. The text delta is only available on Y.TextEvent (ytextEvent.delta).
yevent.changes.keys: Map<string, { action: 'add' | 'update' | 'delete', oldValue: any }> Computes changes on the attributes / key-value map of a shared type. In Y.Map it is used to represent changed keys. In Y.Xml it is used to describe changes on the XML-attributes.
https://docs.yjs.dev/api/y.event
2021-09-16T22:51:59
CC-MAIN-2021-39
1631780053759.24
[]
docs.yjs.dev
pyes.queryset¶ The main QuerySet implementation. This provides the public API for the ORM. Taken from django one and from django-elasticsearch. - class pyes.queryset. QuerySet(model=None, using=None, index=None, type=None, es_url=None, es_kwargs={})¶ Represents a lazy database lookup for a set of objects. aggregate(*args, **kwargs)¶ Returns a dictionary containing the calculations (aggregation) over the current queryset If args is present the expression is passed as a kwarg using the Aggregate object’s default alias. all()¶ Returns a new QuerySet that is a copy of the current one. This allows a QuerySet to proxy for a model manager in some cases. annotate(*args, **kwargs)¶ Return a query set in which the returned objects have been annotated with data aggregated from related fields. bulk_create(objs, batch_size=None)¶ Inserts each of the instances into the database. This does not call save() on each of the instances, does not send any pre/post save signals, and does not set the primary key attribute if it is an autoincrement field. complex_filter(filter_obj)¶ Returns a new QuerySet instance with filter_obj added to the filters. filter_obj can be a Q object (or anything with an add_to_query() method) or a dictionary of keyword lookup arguments. This exists to support framework features such as ‘limit_choices_to’, and usually it will be more natural to use other methods. count()¶ Performs a SELECT COUNT() and returns the number of records as an integer. If the QuerySet is already fully cached this simply returns the length of the cached results set to avoid multiple SELECT COUNT(*) calls. create(**kwargs)¶ Creates a new object with the given kwargs, saving it to the database and returning the created object. dates(field_name, kind, order='ASC')¶ Returns a list of datetime objects representing all available dates for the given field_name, scoped to ‘kind’. defer(*fields)¶ Defers the loading of data for certain fields until they are accessed. The set of fields to defer is added to any existing set of deferred fields. The only exception to this is if None is passed in as the only parameter, in which case all deferrals are removed (None acts as a reset option). evaluated()¶ Lets check if the queryset was already evaluated without accessing private methods / attributes exclude(*args, **kwargs)¶ Returns a new QuerySet instance with NOT (args) ANDed to the existing set. get(*args, **kwargs)¶ Performs the query and returns a single object matching the given keyword arguments. get_or_create(**kwargs)¶ Looks up an object with the given kwargs, creating one if necessary. Returns a tuple of (object, created), where created is a boolean specifying whether an object was created. latest(field_name=None)¶ Returns the latest object, according to the model’s ‘get_latest_by’ option or optional given field_name. only(*fields)¶ Essentially, the opposite of defer. Only the fields passed into this method and that are not already specified as deferred are loaded immediately when the queryset is evaluated. ordered¶ Returns True if the QuerySet is ordered – i.e. has an order_by() clause or a default ordering on the model. update(**kwargs)¶ Updates all elements in the current QuerySet, setting all the given fields to the appropriate values.
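A brief usage sketch of the Django-like API above, following the documented constructor signature; the Elasticsearch URL, index, type, and field names are assumptions for illustration only:

```python
# Hedged sketch of pyes.queryset usage. Index name, type, URL, and
# field names are made up; adjust them to your own mapping.
from pyes.queryset import QuerySet

people = QuerySet(index="test-index", type="person",
                  es_url="http://localhost:9200")

# Lazy lookup: nothing hits Elasticsearch until the result is actually needed.
active = people.filter(status="active").exclude(name="excluded-user")
print(active.count())  # issues a count query, or reuses the cached result set

# get() returns a single object; get_or_create() returns (object, created).
person, created = people.get_or_create(name="alice", age=30)
```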
https://pyes.readthedocs.io/en/stable/references/pyes.queryset.html
2021-09-16T21:18:25
CC-MAIN-2021-39
1631780053759.24
[]
pyes.readthedocs.io
MOM Template for ITIL Problem Review
After multiple incidents have been reported to the service desk and a problem has been recognized, the ITIL methodology recommends convening a MOM meeting to review the problem and make recommendations on how to resolve it. During the ITIL MOM meeting, similar problems that occurred in the past are reviewed, in the hope that the same (or a similar) solution can be used to resolve the current problem; the MOM template is used to record the meeting's findings.
The MOM (Minutes of Meeting) template records the following issues and details:
- The major problem which was discussed during the meeting
- The location of the meeting
- Who recorded the minutes, and who was the organizer (chair) of the meeting
- Basic details: date, start hour, and duration of the meeting
- The participants of the meeting, with the following information for each participant:
  - Name
  - Title
  - Means of communication (e-mail, phone, etc.). This is optional
- The agenda of the meeting, with the following information:
  - Start and finish hour of each topic
  - The topic which will be discussed
  - Who will present the topic
- The action items decided upon in the meeting (see the sketch after this list). These usually appear in the form of a table, and include the following columns:
  - Number of the action item. Usually a simple running number
  - The action item itself. This is the main column of the table and should explain in detail exactly what needs to be done in order to solve the problem (or a part of it)
  - Owner: who is responsible for performing the action item. Completing it may require more than one person, but the column shouldn't list more than one; that single owner is accountable for the action item being completed. The owner may appear by name or by role
  - Due date: when the owner of the action item needs to complete it
  - Comments: this column may be filled in during the meeting, or before the next meeting in which it will be reviewed
How It Fits Into the ITIL Methodology
In order for the organization to be as efficient as possible, each problem should be reviewed before its solution is authorized. This review should include past solutions to similar problems and brainstorming the recommended solution. Meeting in small groups can achieve this goal, and the result of each meeting should be clear action items written in an MOM. The service desk should always have a recommendation on how to solve the problem, and this should serve as an agenda for the meeting. Of course, the recommendation doesn't necessarily have to be approved.
Best Practices for MOM
- The MOM should be displayed to all the participants during the meeting so that the action items are visible to all.
- Any presentations or other relevant material should be distributed to the participants in advance of the meeting.
- The group should be made up of many different roles, in order to avoid groupthink and to force the members to explain themselves in simple language.
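Purely as an illustration (ITIL itself prescribes no code), here is a small Python sketch of how the action-item columns described above could be modeled and rendered; the class and field names are assumptions, not part of any official ITIL artifact:

```python
# Hypothetical model of the MOM action-item table: number, action, owner,
# due date, comments. Data values below are made up for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    number: int         # running number in the table
    action: str         # what needs to be done to solve (part of) the problem
    owner: str          # single accountable owner, by name or role
    due_date: date      # when the owner must complete the item
    comments: str = ""  # filled in during or before the next review meeting

def render_table(items):
    """Render the action items as a simple text table for the minutes."""
    header = f"{'#':<3} {'Action':<45} {'Owner':<18} {'Due':<12} Comments"
    rows = [
        f"{i.number:<3} {i.action:<45.45} {i.owner:<18} {i.due_date.isoformat():<12} {i.comments}"
        for i in items
    ]
    return "\n".join([header, *rows])

if __name__ == "__main__":
    items = [
        ActionItem(1, "Review past incidents with a similar root cause", "Service Desk Lead", date(2021, 10, 1)),
        ActionItem(2, "Propose a workaround for affected users", "Problem Manager", date(2021, 10, 5), "Draft ready"),
    ]
    print(render_table(items))
```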
https://www.itil-docs.com/blogs/problem-management/mom-template-for-itil-problem-review
2021-09-16T21:51:28
CC-MAIN-2021-39
1631780053759.24
[array(['https://cdn.shopify.com/s/files/1/0576/7063/1573/files/ITIL-MoM-Template-for-Major-Problem-Review-1_480x480.png?v=1625311789', 'MOM Template, MOM Template for ITIL, ITIL MOM for Major Problems Review'], dtype=object) ]
www.itil-docs.com
You can search for folders in the Folders section. To perform this kind of search, add as many conditions as you want on the folders' metadata. It is possible to restrict the search to a parent folder or a specific language, and if you select a template you will also be able to put conditions on the template's fields. Click on Search and the results will appear.
https://docs.logicaldoc.com/en/search/search-folder
2021-09-16T22:38:19
CC-MAIN-2021-39
1631780053759.24
[array(['/images/stories/en/folder_searchform.gif', None], dtype=object) array(['/images/stories/en/folder_search.gif', None], dtype=object)]
docs.logicaldoc.com
Handling Signals
SignalHandler
This plugin registers handlers for system signals on the engine's bus.
class cherrypy.process.plugins.SignalHandler(bus)
set_handler(signal, listener=None)
signals = {…, 17: 'SIGCHLD', 18: 'SIGCONT', 19: 'SIGSTOP', 20: 'SIGTSTP', 21: 'SIGTTIN', 22: 'SIGTTOU', 23: 'SIGURG', 24: 'SIGXCPU', 25: 'SIGXFSZ', 26: 'SIGVTALRM', 27: 'SIGPROF', 28: 'SIGWINCH', 29: 'SIGPOLL', 30: 'SIGPWR', 31: 'SIGSYS', 34: 'SIGRTMIN', 64: 'SIGRTMAX'}
A map from signal numbers to names.
Windows Console Events
Microsoft Windows uses console events to communicate some signals, like Ctrl-C. When deploying CherryPy on Windows platforms, you should obtain the Python for Windows Extensions; once you have them installed, CherryPy will handle Ctrl-C and other console events (CTRL_C_EVENT, CTRL_LOGOFF_EVENT, CTRL_BREAK_EVENT, CTRL_SHUTDOWN_EVENT, and CTRL_CLOSE_EVENT) automatically, shutting down the bus in preparation for process exit.
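Returning to the SignalHandler plugin itself, a minimal subscription sketch (assuming CherryPy 3.x on a platform with POSIX signals; the application class and mount point are made up). When starting the engine manually rather than via cherrypy.quickstart(), the common pattern is to subscribe the handler yourself:

```python
# Minimal sketch: manually subscribing the signal handler so that
# SIGTERM/SIGHUP shut the bus down (or restart it) cleanly.
import cherrypy

class Root(object):
    def index(self):
        return "Hello"
    index.exposed = True

cherrypy.tree.mount(Root(), "/")

# engine.signal_handler is a SignalHandler instance created for the engine
# on platforms that support signals; subscribing it registers the handlers.
if hasattr(cherrypy.engine, "signal_handler"):
    cherrypy.engine.signal_handler.subscribe()

cherrypy.engine.start()
cherrypy.engine.block()
```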
http://docs.cherrypy.org/en/3.3.0/refman/process/plugins/signalhandler.html
2015-03-27T03:33:35
CC-MAIN-2015-14
1427131294307.1
[]
docs.cherrypy.org
For latest update, see the Agent section of the User Guide and the respective Demos.
http://docs.codehaus.org/pages/diffpages.action?pageId=131432512&originalId=231736730
2015-03-27T03:41:04
CC-MAIN-2015-14
1427131294307.1
[]
docs.codehaus.org
Table of Contents
… or higher, Spring 2.5.6 or higher *, Adobe BlazeDS 3.2 or higher **
* As of the 1.0.2.RELEASE version, Spring BlazeDS Integration is forward-compatible with Spring 3.0.x
** As of the 1.0.3.RELEASE version, Spring BlazeDS Integration is forward-compatible with BlazeDS 4
<!-- The front controller of this Spring Web application, responsible for handling all application requests -->
As of release 1.0.2 of Spring BlazeDS Integration:
package flex.samples.product;

import org.springframework.flex.remoting.RemotingDestination;
import org.springframework.flex.remoting.RemotingExclude;
import org.springframework.flex.remoting.RemotingInclude;
import org.springframework.stereotype.Service;

@Service("productService")
@RemotingDestination(channels={"my-amf","my-secure-amf"})
public class ProductServiceImpl implements ProductService {

    @RemotingInclude
    public Product read(String id) { ... }

    @RemotingExclude
    public Product create(Product product) { ... }

    @RemotingInclude
    public Product update(Product product) { ... }

    @RemotingExclude
    public void delete(Product product) { ... }
}
The project can then be imported into Eclipse for running the application via WTP. The sample build requires Maven 2.0.9. Change to …/spring-flex-samples/spring-flex-testdrive and execute:
mvn clean install
This will first build all of the individual Flex projects and then finally assemble the 'testdrive' WAR project. As of release 1.0.2 of Spring BlazeDS Integration, the Test Drive's Maven build includes an additional profile for building the samples to use Spring 3 and Spring Security 3. To build the samples using this profile, execute:
mvn clean install -P spring_3_0
As a convenience for anyone who is averse to using Maven and just wants to get the Test Drive up and running quickly in Eclipse, pre-packaged builds of the Test Drive can be downloaded directly via the following links: Spring BlazeDS Integration Test Drive with Spring 2.5.6, Spring BlazeDS Integration Test Drive with Spring 3.0.
http://docs.spring.io/spring-flex/docs/1.0.x/reference/htmlsingle/spring-flex-reference.html
2015-03-27T04:06:25
CC-MAIN-2015-14
1427131294307.1
[]
docs.spring.io
Cross tabulations A cross tabulation allows you to summarize the values in a column based on the values in two or more other columns and display the result as a matrix. In the tutorials in this section, you will learn how to perform a cross tabulation. After you have performed a cross tabulation, you will learn how to create a computed column from your cross tabulation. A cross tabulation can help you gain granularity from a summary without losing the highest level of data summarization. For example, in addition to finding the total amount of sales for each store in your chain, you can also obtain the sales figures for the individual departments within each store and how they compare to the total. To illustrate this concept, shown below is the Sales by Store summary from Perform a tabulation. This result was reached simply by adding up the total sales for each store and assigning the total for each result to a single row in the table. Another way to think about this is that you created a "bucket" for each store and placed the amount of each transaction in the bucket of the store where the transaction took place. What if you want to know which departments contributed to the sales of each store? The best way to get this information, without losing sales totals for each store, is to perform a cross tabulation. Effectively, a cross tabulation groups on a second metric and summarizes it, placing the data into one column for each unique value in the group. Each row will contain sales figures for each store, and a new set of columns will be created to show the sales totals for each department.
https://docs.1010data.com/TRSGettingStartedGuide/CrossTabulations.html
2021-09-17T02:58:57
CC-MAIN-2021-39
1631780054023.35
[array(['Screens/Tabulation/SalesByStore.png', None], dtype=object)]
docs.1010data.com
Important You are viewing documentation for an older version of Confluent Platform. For the latest, click here. Install Using Docker Images. Considerations¶ You should consider the following before using the Docker images. - Multi-node Environment - For more information, see Configure Multi-Node Environment. - Examples: Examples:. Tutorials and Demos¶ Examples are available on GitHub for many components. The following tutorials leverage these examples and can help you get started.
https://docs.confluent.io/5.1.4/installation/docker/installation/index.html
2021-09-17T05:01:15
CC-MAIN-2021-39
1631780054023.35
[]
docs.confluent.io
Date: Mon, 18 Jan 2010 21:29:05 -0800 From: Gary Kline <[email protected]> To: FreeBSD Mailing List <[email protected]> Subject: can't build pidgin... Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help when I do a make install clean in net-im/pidgin I constantly get rejects about the datestamp being wrong and the file is not retrieved. any help will be greatly appreciated. tia... . -- Gary Kline [email protected] Public Service Unix The 7.79a release of Jottings: Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=361756+0+archive/2010/freebsd-questions/20100124.freebsd-questions
2021-09-17T05:30:01
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
The Credentials Library stores community string information for SNMP devices in your WhatsUp Gold database to be used whenever a read or write community string is needed to monitor a device. In WhatsUp Gold, credentials are used to limit access to a device's SNMP data. Devices need SNMP credentials assigned to them before SNMP-based Active Monitors can be applied. Configure the credential's fields, including the read and write community strings, to create an SNMPv1 credential.
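To make the role of a read community string concrete, here is a hedged Python sketch using the third-party pysnmp library (this is not WhatsUp Gold code; the device address and community value are placeholders):

```python
# Illustrative only: polling a device's sysDescr over SNMPv1 with a read
# community string, the same kind of value stored in a credential library.
# Assumes `pip install pysnmp`; host and community are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=0),        # mpModel=0 selects SNMPv1
        UdpTransportTarget(("192.0.2.10", 161)),   # placeholder device address
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr.0
    )
)

if error_indication:
    print("SNMP error:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```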
https://docs.ipswitch.com/NM/WhatsUpGold2018/03_Help/1033/41089.htm
2021-09-17T03:56:03
CC-MAIN-2021-39
1631780054023.35
[]
docs.ipswitch.com
java.lang.Object
  org.jboss.netty.logging.InternalLoggerFactory

public abstract class InternalLoggerFactory

Creates an InternalLogger or changes the default factory implementation. This factory allows you to choose what logging framework Netty should use. The default factory is JdkLoggerFactory. You can change it to your preferred logging framework before other Netty classes are loaded:

InternalLoggerFactory.setDefaultFactory(new Log4JLoggerFactory());

Please note that the new default factory is effective only for the classes which were loaded after the default factory is changed. Therefore, setDefaultFactory(InternalLoggerFactory) should be called as early as possible and shouldn't be called more than once.

public InternalLoggerFactory()
public static InternalLoggerFactory getDefaultFactory()
Returns the default factory. The initial default factory is JdkLoggerFactory.
public static void setDefaultFactory(InternalLoggerFactory defaultFactory)
public static InternalLogger getInstance(Class<?> clazz)
public static InternalLogger getInstance(String name)
public abstract InternalLogger newInstance(String name)
https://docs.jboss.org/netty/3.2/api/org/jboss/netty/logging/InternalLoggerFactory.html
2021-09-17T04:45:41
CC-MAIN-2021-39
1631780054023.35
[]
docs.jboss.org
special sponsors # Platform.sh (beta) Platform.sh (opens new window) is the end-to-end web platform for agile teams. With it you can build, evolve, and scale your website fleet—with zero infrastructure management investment. Get hosting, CI/CD, automated updates, global 24x7 support. And much more. This integration is currently in development and as such it has the following serious caveats: - This should be considered at an betalevel of readiness - This has only been tested against Platform.sh's phpproject templates - This currently only supports Platform.sh's phpapplication container - It's not yet clear how much customization to your project is currently supported However, if you'd like to try it out and give your feedback on what worked and what didn't then please continue. You can also read about some more caveats here. You can report any issues or feedback over here (opens new window) or check out - Getting Started - Configuration - Environment variables - Platform CLI - Application Tooling - Accessing relationships - External access - Pulling and pushing relationships and mounts - Importing databases - Caveats and known issues - Development # on one of your Platform.sh sites. # Go through interactive prompts to get your site from platformsh lando init --source platformsh # OR do it non-interactively # NOTE: You will want to make sure you set $PLATFORMSH_CLI_TOKEN # and $PLATFORMSH_SITE_NAME to values that make sense for you lando init \ --source platformsh \ --platformsh-auth "$PLATFORMSH_CLI_TOKEN" \ --platformsh-site "$PLATFORMSH_SITE_NAME" # OR if you already have your platform code locally cd /path/to/repo lando init \ --source cwd \ --recipe platformsh # Start it up lando start # Import any relevant relationships or mounts # NOTE: You will likely need to change the below to specify # relationships and mounts that make sense for your application # See further below for more information about lando pull lando pull -r database -m web/sites/default/files # List information about this app. lando info Note that if your platformsh project requires environment variables set in the Platform Management Console (opens new window) you will need to set those manually! See the Environment Variables section below for details. # Configuration While Lando recipes sets: platformsh config: id: YOURSITEID overrides: {} You will immediately notice that the default platformsh recipe Landofile does not contain much. This is because Lando uses the exact same images and configuration mechanisms locally as Platform.sh does in production. This means that instead of modifying your Landofile to add, edit or remove the services, dependencies, build steps, etc you need to run your application you will want to modify your Platform.sh configuration according to their documentation and then do the usual lando rebuild for those changes to be applied. Of course, since this is still a Lando recipe you can continue to extend and override your Landofile in the usual way for any additional power you require locally. Here are some details on how Lando interprets the various Platform.sh configuration files: # routes.yaml Lando will load your routes.yaml (opens new window) and use for its own proxy configuration. # routes.yaml "https://{default}/": type: upstream upstream: "app:http" cache: enabled: true # Base the cache on the session cookie and custom Drupal cookies. Ignore all other cookies. 
cookies: ['/^SS?ESS/', '/^Drupal.visitor/'] ".{default}/": type: redirect to: "https://{default}/" The above routes configuration example will produce the following Lando pretty proxy URLs, assuming {default} resolves to my-app.lndo.site. Note, however, that Lando will only use routes that contain the {default} placeholder. FQDN routes will not be used since these generally will be pointing at your production site and not Lando. If you would still like to use these routes then we recommend you review our proxy docs on how to add them back into the mix. # services.yaml Lando will load your services.yaml (opens new window) and spin up exactly the same things there as you have running on your Platform.sh site, including any advanced configuration options you may have specified for each like schemas, endpoints, extensions, properties, etc. This means that Lando knows how to handle more complex configuration such as in the below example: # services.yaml db: type: mariadb:10.4 disk: 2048 configuration: schemas: - main - legacy endpoints: admin: default_schema: main privileges: main: admin legacy: admin db2: type: postgresql:12 disk: 1025 configuration: extensions: - pg_trgm - hstore We currently only support the below services and we highly recommend you consult the Platform.sh docs for how to properly configure each. - Elasticsearch (opens new window) - Headless Chrome (opens new window) - InfluxDB (opens new window) - Kafka (opens new window) - MariaDB/MySQL (opens new window) - Memcached (opens new window) - MongoDB (opens new window) - PostgreSQL (opens new window) - RabbitMQ (opens new window) - Redis (opens new window) - Solr (opens new window) - Varnish (opens new window) Also note that you will need to run a lando rebuild for configuration changes to manifest in the same way you normally would for config changes to your Landofile. # .platform.app.yaml Lando will load your .platform.app.yaml (opens new window) and spin up exactly the same things there as you have running on your Platform.sh site. This means that similarly to Platform.sh Lando will also: - Install any dependencies specificed in the build.flavoror dependencieskeys - Run any buildor deployhooks - Set up needed relationships, variables, webconfig, crontasks, etc. We currently only support the below langauges and we highly recommend you consult the Platform.sh docs for how to properly configure each. Also note that you will need to run a lando rebuild for configuration changes to manifest in the same way you normally would for config changes to your Landofile. # Multiple applications Lando should support Platform.sh's multiple applications configurations (opens new window) although they are not extensively tested at this point so YMMV. If you have a multiple application setup then you will need to navigate into either the directory that contains the .platform.app.yaml or the source.root specified in your .platform/applications.yaml file to access the relevant tooling for that app. This is how tooling works for our multiapp example (opens new window). 
# Get access to tooling for the "base" application lando # Access tooling for the "discreet" application cd discreet lando # Access tooling for the "php" application cd ../php lando # Environment Application containers running on Lando will also set up the same PLATFORM_* provided environment variables (opens new window) so any service connection configuration, like connecting your Drupal site to mysql or redis, you use on Platform.sh with these variables should also automatically work on Lando. Lando does not currently pull variables you have set up in the Platform.sh dashboard so you will need to add those manually. # Overriding config Platform.sh application language and service configuration is generally optimized for production. While these values are usually also suitable for local development purposes Lando also provides a mechanism to override both application language and service configuration with values that make more sense for local. name: myproject recipe: platformsh config: id: PROJECTID overrides: app: variables: env: APP_ENV: dev d8settings: skip_permissions_hardening: 1 db: configuration: properties: max_allowed_packet: 63 Note that app in the above example should correspond to the name of the Platform.sh application you want to override and db should correspond to the name of one of the services in your services.yaml. Also note that you will need to lando rebuild for this changes to apply. # Environment variables Lando will also set and honor any variables (opens new window) that have been set up in your .platform.app.yaml or applications.yaml. However, some of these, such as APP_ENV=prod do not make a ton of sense for local development. In these situations you can override any Platform.sh variable directly from your Landofile with values that make more sense for local. Here is an example: name: platformsh-drupal8 recipe: platformsh config: id: PROJECTID overrides: app: variables: env: APP_ENV: dev d8settings: skip_permissions_hardening: 1 Perhaps more importantly, Lando will not automatically pull and set up environment variables that have been set in the Platform Management Console (opens new window). This means that if your build hook requires these environment variables then it will likely fail. To remediate we recommend you manually add these variables into a local environment file that is also in your .gitignore and then lando rebuild. Here are some steps on how to do that. - Update your Landofile so it knows to load an environment file. env_file: - platformsh.local.env - Make sure you add it to your .gitignorefile. platformsh.local.env - Create the env file touch platformsh.local.env - Discover envvars by running lando platform var - Use the information from above to populate platformsh.local.env SPECIAL_KEY=mysecret - Run lando rebuildto trigger the build process using the newly added envvars. # Platform CLI Every application container will contain the Platform.sh CLI (opens new window); automatically authenticated for use with the account and project you selected during lando init. # Who am i? lando platform auth:info # Tell me about my project lando platform project:info If you find yourself unauthenticated for whatever reason. You should try the following: # Reauthenticate using already pulled down code lando init --source cwd --recipe platformsh # Rebuild your lando app lando rebuild -y # Application Tooling Lando will also setup useful tooling commands based on the type of your application container. 
These can be used to both relevant tooling and utilities that exist inside the application container. Here are the defaults we provide for the php application container. lando composer Runs composer commands lando php Runs php commands # Usage # Install some composer things lando composer require drush/drush # Run a php script lando php myscript.php Of course the user can also lando ssh and work directly inside any of the containers Lando spins up for your app. # Attach to the closest applicaiton container lando ssh # Attach to the db service lando ssh -s db Note that Lando will surface commands for the closest application it finds. Generally, this will be the .platform.app.yaml located in your project root but if you've cd multiappsubdir then it will use that instead. # Adding additional tooling While Lando will set up tooling routes for the obvious utilities for each application type it tries to not overwhelm the user with all the commands by providing a minimally useful set. It does this because it is very easy to specify more tooling commands in your Landofile. tooling: # Here are some utilities that should exist in every application # container node: service: app npm: service: app ruby: service: app # And some utilities we installed in the `build.flavor` # or `dependencies` key grunt: service: app sass: service: app drush: service: app Note that the service should match the name of your application in the associated .platform.app.yaml. Very often this is just app. Now run lando again and see that extra commands! lando composer Runs composer commands lando drush Runs drush commands lando grunt Runs grunt commands lando node Runs node commands lando npm Runs npm commands lando php Runs php commands lando ruby Runs ruby commands lando sass Runs sass commands lando drush cr lando npm install lando grunt compile:things lando ruby -v lando node myscript.js If you are not sure whether something exists inside your application container or not you can easily test using the -c option provided by l lando ssh # Does yarn exist? lando ssh -c "yarn" Also note that Lando tooling is hyper-powerful so you might want to check out some of its more advanced features. # Accessing relationships Lando will also set up tooling commands so you can directly access the relationships specified in your .platform.app.yaml. These are contextual so they will connect via the tool that makes the most sense eg mysql for mariadb and redis-cli for redis. As an example say you have the following relationships in your .platform.app.yaml. relationships: database: 'db:mysql' redis: 'cache:redis' Then you'd expect to see the following commands and usage: lando database Connects to the database relationship lando redis Connects to the database relationship # Drop into the mysql shell using the database relationship creds lando database # Drop into the redis-cli shell using the redis relationship creds lando redis Note that some services eg solr provide web based interfaces. In these cases Lando will provide a localhost address you can use to access that interface. # External access If you would instead like to connect to your database, or some other service, from your host using a GUI client like SequelPro, instead of via the Lando CLI you can run lando info and use the external_connection information and any relevant creds for the service you want to connect to. 
Here is example connection info for a multi-endpoint mariadb service called db below: lando info --service db --format default { service: 'db', urls: [], type: 'platformsh-mariadb', healthy: true, creds: [ { internal_hostname: 'database2.internal', password: '3ac01938c66f0ce06304a6357da17c34', path: 'main', port: 3306, user: 'admin' }, { internal_hostname: 'reports.internal', password: 'd0c99f580a0d646d62904568573f5012', port: 3306, user: 'reporter' }, { internal_hostname: 'imports.internal', password: 'a6bf5826a81f7e9a3fa42baa790207ef', path: 'legacy', port: 3306, user: 'importer' } ], internal_connection: { host: 'db', port: '3306' }, external_connection: { host: '127.0.0.1', port: '32915' }, config: {}, version: '10.4', meUser: 'app', hasCerts: false, hostnames: [ 'db.landod8.internal' ] }, Note that you must have a relationship from your app to a given service in order for it to have credentials. Also note that this is slightly different than the normal output from lando info because platformsh services work slightly different. While you can use the internal_connection:host and internal_connection:port for internal connections we recommend you use the host and port indicated for the relevant cred you want to connect to instead. So if you wanted to connect to the main db you would use the following depending on whether you are connecting externally or internally: external creds host: 127.0.0.1 port: 32915 user: admin password: 3ac01938c66f0ce06304a6357da17c34 database: main internal creds host: database2.internal port: 3306 user: admin password: 3ac01938c66f0ce06304a6357da17c34 database: main Of course, it is always preferrable to just use PLATFORM_RELATIONSHIPS for all your internal connections anyway. # Pulling and pushing relationships and mounts Lando also provides wrapper commands called lando pull and lando push. With lando pull you can import data and download files from your remote Platform.sh site. With lando push you can do the opposite, export data or upload files to your remote Platform.sh site. Note that only database relationships are currently syncable. 
lando pull Pull relationships and/or mounts from Platform.sh Options: --help Shows lando or delegated command help if applicable --verbose, -v Runs with extra verbosity --auth Platform.sh API token --mount, -m A mount to download --relationship, -r A relationship to import # Interactively pull relationships and mounts lando pull # Import the remote database relationship and drupal files mount lando pull -r database -m web/sites/default/files # Import multiple relationships and mounts lando pull -r database -r migrate -r readonly -m tmp -m private # You can also specify a target for a given mount using -m SOURCE:TARGET lando pull -m tmp:/var/www/tmp -m /private:/somewhere/else # You can also specify a target db/schema for a given relationships using -r RELATIONSHIP:SCHEMA lando pull -r admin:legacy # Skip the mounts part lando pull -r database -m none # Effectively "do nothing" lando pull -r none -m none lando push Push relationships and/or mounts to Platform.sh Options: --help Shows lando or delegated command help if applicable --verbose, -v Runs with extra verbosity --auth Platform.sh API token --mount, -m A mount to push up --relationship, -r A relationship to push up # Interactively push relationships and mounts lando push # Import the remote database relationship and drupal files mount lando push -r database -m web/sites/default/files # Import multiple relationships and mounts lando push -r database -r migrate -r readonly -m tmp -m private # You can also specify a target for a given mount using -m SOURCE:TARGET lando push -m tmp:/var/www/tmp -m /private:/somewhere/else # You can also specify a target db/schema for a given relationships using -r RELATIONSHIP:SCHEMA lando push -r admin:legacy -r admin:main # Skip the relationships part lando push -r none -m tmp # Effectively "do nothing" lando push -r none -m none # Importing databases If you have data that exists outside Platform.sh eg a dump.sql file you'd like to import you can leverage the special lando commands we give you to access each relationship. You will need to make sure that the relationship you connect with has the appropriate permissions needed to import your dump file. # Import to the main schema using the database relationships lando database main < dump.sql # Caveats and known issues Since this is a currently an beta release there are a few known issues, and workarounds, to be aware of. We also recommend you consult GitHub for other Platform.sh tagged issues (opens new window). We also highly encourage you to post an issue (opens new window) if you see a problem that doesn't already have an issue. # $HOME considerations Platform.sh sets $HOME to /app by default. This makes sense in a read-only hosting context but is problematic for local development since this is also where your git repository lives and you probably don't want to accidentally commit your $HOME/.composer cache into your repo. Lando changes this behavior and sets $HOME to its own default of /var/www for most user initiated commands and automatic build steps. It also will override any PLATFORM_VARIABLES that should be set differently for local dev. For a concrete example of this Platform.sh's Drupal 8 template will set the Drupal /tmp directory to /app/tmp, Lando will instead set this to /tmp. However, it's probable at this early stage that we have not caught all the places where we need to do both of the above. As a result you probably want to: # 1. 
Look out for caches, configs, or other files that might normally end up in Do you due diligence and make sure you git status before you git add. If you see something that shouldn't be there let us know (opens new window) and then add it to your .gitignore until we have resolved it. # 2. Consider LANDO specific configuration If you notice your application is not working quite right it's possible you need to tweak some of the defaults for your application's configuration so they are set differently on Lando. We recommend you do something like the below snippet. settings.local.php $platformsh = new \Platformsh\ConfigReader\Config(); if ($config->environment === 'lando') { $settings['file_private_path'] = '/tmp'; $config['system.file']['path']['temporary'] = '/tmp'; } Note that the above is simply meant to be illustrative. # Redirects Lando will currently not perform redirects specified in your routes.yaml. Instead it will provide separate http and https routes. Adding redirect support is being discussed in this ticket: (opens new window). # Local considerations There are some application settings and configuration that Platform.sh will automatically set if your project is based on one of their boilerplates. While most of these settings are fine for local development, some are not. If these settings need to be altered for your site to work as expected locally then Lando will modify them. For example if your project is based on the Drupal 8 Template (opens new window) then Lando will set the tmp directory and set skip_permissions_hardening to TRUE. Lando will likely not do this in the future in favor of a better solution but until then you can check out what we set over here (opens new window). # Memory limits Some services eg Elasticsearch require A LOT of memory to run. Sometimes this memory limit is above the defaults set by Docker Desktop. If you are trying to start an app with memory intensive services and it is hanging try to bump the resources allocated to Docker Desktop and try again. See the below docs: # Xdebug You can enable and use xdebug by turning on the extension in your .platform.app.yaml and doing a lando rebuild. runtime: extensions: - redis - xdebug Due to how Platform.sh sets up xdebug it should be ok to have this on even in production. However, if you would like to enable it only on Lando you can override the extensions in your Landofile. Note that the entire array is replaced in the overrides so your Landofile should reflect all the extensions you want to use not just the difference. recipe: platformsh config: id: PROJECT_ID overrides: app: runtime: extensions: - redis - xdebug Lando will also make a best effort attempt to set the correct xdebug configuration so that it works "out of the box". If you find that things are not working as expected you can modify the configuration to your liking using the same override mechanisn. config: id: PROJECT_ID overrides: app: runtime: extensions: - redis - xdebug php: # XDEBUG 2 xdebug.remote_enable: 1 xdebug.remote_mode: req xdebug.remote_port: 9000 xdebug.remote_connect_back: 0 # XDEBUG 3 xdebug.discover_client_host: true xdebug.mode: debug # Platformsh.agent errors When you run lando start or lando rebuild you may experience either Lando hanging or an error being thrown by something called the platformsh.agent. 
We are attempting to track down the causes of some of these failures but they are generally easy to identify and workaround: # Check if a container for your app has exited docker ps -a # Inspect the cause of the failure # # Change app to whatever you named your application # in your .platform.app.yaml lando logs -s app # Try again # Running lando start again seems to work around the error lando start # Persistence across rebuilds We've currently only verified that data will persist across lando rebuilds for the MariaDB/MySQL and PostgreSQL services. It may persist on other services but we have not tested this yet so be careful before you lando rebuild on other services. # Multiapp If you are using .platform/applications.yaml to configure multiple applications and you have two apps with the same source.root then Lando will currently use the first application for tooling. As a workaround you can use lando ssh with the -s option to access tooling for other applications with that source.root. In the below example, assume there are three php applications with the same source.route. # Go into a directory that has many apps with that same source.route # See the php version of the first app with source.root at this directory lando php -v # Access another app with same source.root lando -s app2 -c "php -v" # Unsupported things There are a few things that are currently unsupported at this time, athough we hope to add support in the future. - Non phpapplication containers. #2368 (opens new window) workersand the network_storageservice #2393 (opens new window) # Development If you are interested in working on the development of this recipe we recommend you check out: - The Lando contrib docs - The Dev Docs (opens new window) for this recipe
https://docs.lando.dev/config/platformsh.html
2021-09-17T04:55:11
CC-MAIN-2021-39
1631780054023.35
[]
docs.lando.dev
CurrentProject.Properties property (Access) Returns a reference to a CurrentProject object's AccessObjectProperties collection. Read-only. Syntax expression.Properties expression A variable that represents a CurrentProject object. Remarks The AccessObjectProperties collection object is the collection of all the properties related to a CurrentProject object. You can refer to individual members of the collection by using the member object's index or a string expression that is the name of the member object. The first member object in the collection has an index value of 0, and the total number of member objects in the collection is the value of the AccessObjectProperties collection's Count property minus 1. Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
https://docs.microsoft.com/en-us/office/vba/api/access.currentproject.properties
2021-09-17T05:19:05
CC-MAIN-2021-39
1631780054023.35
[]
docs.microsoft.com
#include <mitkVtkInterpolationProperty.h> Encapsulates the enumeration vtkInterpolation. Valid values are (VTK constant/Id/string representation): VTK_FLAT/0/Flat, VTK_GOURAUD/1/Gouraud, VTK_PHONG/2/Phong Default is the Gouraud interpolation Definition at line 31 of file mitkVtkInterpolationProperty.h. Constructor. Sets the representation to a default value of surface(2) Constructor. Sets the interpolation to the given value. If it is not valid, the interpolation is set to gouraud(1) Constructor. Sets the interpolation to the given value. If it is not valid, the representation is set to gouraud(1) this function is overridden as protected, so that the user may not add additional invalid interpolation types. Reimplemented from mitk::EnumerationProperty. Adds the enumeration types as defined by vtk to the list of known enumeration values. Reimplemented from mitk::EnumerationProperty. Definition at line 34 of file mitkVtkInterpolationProperty.h. Reimplemented from mitk::EnumerationProperty. Returns the current interpolation value as defined by VTK constants. Definition at line 40 of file mitkVtkInterpolationProperty.h. Definition at line 42 of file mitkVtkInterpolationProperty.h. Sets the interpolation type to VTK_FLAT. Sets the interpolation type to VTK_WIREFRAME. Sets the interpolation type to VTK_SURFACE.
https://docs.mitk.org/nightly/classmitk_1_1VtkInterpolationProperty.html
2021-09-17T03:00:06
CC-MAIN-2021-39
1631780054023.35
[]
docs.mitk.org
One of the most powerful features of Scandi is its Plugin Mechanism, giving extensions virtually unlimited possibilities to alter the theme's behavior. In this tutorial, we will be using the Plugin Mechanism to implement an example extension that will allow the user to switch to a dark theme. What you will learn: Writing Scandi plugins Creating and styling new components Working with Redux and browser local storage CSS variables Inverting the colors of a web app Scandi extension developing practices For this tutorial, you will need to have a Scandi theme set up and running. If you don't, you can set it up in minutes by using the create-scandipwa-app (CSA) script. There is no need for a local Magento instance as long as you have an internet connection. Before you learn to develop with Scandi, you need to have a basic understanding of JavaScript, a scripting language for the web. The MDN developer docs are a great resource for JavaScript documentation. You should also be familiar with React, the UI library that Scandi uses. You don't need to read all of this documentation right now, but this is a great place to start if you get lost in code. The first thing we need to do to get started is creating an extension. An extension is a reusable package that can be installed on any Scandi theme. Once you are done with this tutorial, you will be able to use this extension in other projects, as long as their version is compatible — and even share it with others! To create an extension, navigate to the root of your CSA application in the terminal. You can create a new extension using a scandipwa script: scandipwa extension create scandi-dark-theme If you haven't installed the Scandi CLI script, you can do so with npm i -g scandipwa-cli This script will initialize a new extension named packages/scandi-dark-theme and configure your theme to install it. It will also enable it by setting scandipwa.extensions["scandi-dark-theme"] to true in package.json. We should now verify that the extension is working properly. For testing purposes, we will create a plugin that simply logs something to the console. In the src/plugin directory of your extension, create a file named Header.component.plugin.js with the following contents: src/plugin/Header.component.plugin.jsexport const testPlugin = (args, callback, instance) => {console.log("Extension is working!");return callback(...args);};export default {"Component/Header/Component": {"member-function": {render: testPlugin,},},}; Above, we define a plug-in testPlugin that logs a message to the console before passing control to the callback function. Once the callback function returns, we return the value it produces. We then export a configuration that specifies that this plugin should be used for the render method of the class with the namespace Component/Header/Component. The plugin mechanism will wrap the render method of the Header component with our custom plugin - whenever render is called, our plugin will be used instead. However, we don't want to alter the value returned by render, so we must call callback (which represents the original render function, possibly wrapped in other plugins) and pass on its return value. If this still seems confusing, feel free to refer to the plugin documentation. Now, whenever the render method of the Header component will be called, our message should appear in the console. And indeed it does! You might have to restart your app for the plugin to be registered. 
We want the user to be able to enable or disable dark mode, so we need a way for our application to keep track of whether Dark Mode is turned on. Since this state is global to the entire application, the best place to put it is in a Redux store. Redux is a global state container library. Scandi uses Redux to keep track of its global state and has certain conventions for how Redux should be used. Create a new store called DarkMode. When you have created the necessary boilerplate for the Redux store, we will create an action for it, and implement the reducer. Then, we will register the reducer in the global store. The quickest way to create a new store in VSCode is with the ScandiPWA Development Toolkit add-on. Open your extension's directory in a new window - then press Ctrl+Shift+P to open the command pop-up and search for the ScandiPWA: Create a store command. In our Redux store, DarkMode.action.js should contain a function for creating actions. In Redux terminology, an action is a simple JavaScript object that describes a state update (but doesn't do anything itself). In our case, we need an action creator for enabling or disabling Dark Mode. src/store/DarkMode/DarkMode.action.jsexport const DARKMODE_ENABLE = 'DARKMODE_ENABLE';/** @namespace ScandiDarkTheme/Store/DarkMode/Action/enableDarkMode */export const enableDarkMode = (enabled) => ({type: DARKMODE_ENABLE,enabled}); Nothing complicated here – enableDarkMode(true) returns { type: 'DARKMODE_ENABLE', enable: true }, and enableDarkMode(false) returns { type: 'DARKMODE_ENABLE', enable: false }. These Redux Actions are simple objects that don't do anything until we write code that interprets their meaning and updates the store, called reducers. The Reducer is the part that determines how the Redux store should be updated in response to actions. src/store/DarkMode/DarkMode.reducer.jsimport { DARKMODE_ENABLE } from './DarkMode.action';/** @namespace ScandiDarkTheme/Store/DarkMode/Reducer/getInitialState */export const getInitialState = () => ({enabled: false});/** @namespace ScandiDarkTheme/Store/DarkMode/Reducer/DarkModeReducer */export const DarkModeReducer = (state = getInitialState(), action) => {switch (action.type) {case DARKMODE_ENABLE:const { enabled } = action;return {enabled};default:return state;}};export default DarkModeReducer; Our reducer maintains a single field in its state, enabled. Whenever it receives a DARKMODE_SET-type action, it returns (updates) the state with a new enabled value. Note that this function will be called by Redux. Our only responsibility is to define how the state should update. We have defined DarkModeReducer, but, like any function, it doesn't do anything until it's called. Reducer functions should be managed by Redux and some core Scandi code. All the existing Reducers are registered in store/index.js, in the function getStaticReducers. We can register our reducer by writing a plug-in for this function: src/plugin/getStaticReducers.plugin.jsimport DarkModeReducer from "../store/DarkMode/DarkMode.reducer";export const getStaticReducers = (args, callback) => ({...callback(args),DarkModeReducer,});export default {"Store/Index/getReducers": {function: getStaticReducers,},}; Now, the reducer should be registered. You can check with the Redux DevTools extension for Chrome or Firefox that there is now a DarkModeReducer in the store. Next, we'll need a way for the user to change the value in this Redux store. We already wrote a testPlugin for the Header component that technically works, but doesn't do much. 
Instead of logging to the console, we want to render a toggle button for enabling dark mode: src/plugin/Header.component.plugin.jsimport ModeToggleButton from "../component/ModeToggleButton";import "./Header.style.plugin";export const renderTopMenu = (args, callback, instance) => {return (<>{callback(...args)}<div block="Header" elem="DarkModeToggle"><ModeToggleButton /></div></>);};export default {"Component/Header/Component": {"member-function": {renderTopMenu,},},}; This code will render a ModeToggleButton right after the top menu. However, for this to work, we will also have to define the ModeToggleButton – otherwise, our plugin will attempt to render a non-existent component. How can we find the namespace to plug in to? This can be achieved by using React Developer Tools - a browser extension that allows you to inspect the rendered React elements. I knew that I wanted to render the button at the top of the page, so I checked which element renders it. Once I had the name of the element (Header), I could easily search for it in the codebase and find the corresponding namespace. You can create a new component in VSCode with the ScandiPWA Development Toolkit add-on by using the ScandiPWA: Create a component command. Enable the "connected to the global state" option. When you've created the ModeToggleButton component, you will see that it contains several files: Containers are for business logic. In our case, that means connecting to the Redux store to provide the current DarkMode state (enabled or disabled), and a function to dispatch actions to update the state. This will "connect" it to the Redux store we created in the previous section. src/component/ModeToggleButton/ModeToggleButton.container.jsimport { connect } from "react-redux";import { enableDarkMode } from "../../store/DarkMode/DarkMode.action";import ModeToggleButton from "./ModeToggleButton.component";/** @namespace ScandiDarkTheme/Component/ModeToggleButton/Container/mapStateToProps */export const mapStateToProps = (state) => ({isDarkModeEnabled: state.DarkModeReducer.enabled,});/** @namespace ScandiDarkTheme/Component/ModeToggleButton/Container/mapDispatchToProps */export const mapDispatchToProps = (dispatch) => ({enableDarkMode: (enabled) => dispatch(enableDarkMode(enabled)),});export default connect(mapStateToProps,mapDispatchToProps)(ModeToggleButton); mapStateToProps has access to the Redux store - we want the component to get isDarkModeEnabled as a prop. mapDispatchToProps is connected to the Redux dispatcher - by dispatching enableDarkMode, we can now enable or disable the dark mode configuration in the Redux store. The .component file is responsible for rendering the user interface. In this case, we render a simple button – and when it's clicked, we toggle the Dark Mode Setting. 
src/component/ModeToggleButton/ModeToggleButton.component.jsimport PropTypes from "prop-types";import { PureComponent } from "react";import "./ModeToggleButton.style";/** @namespace ScandiDarkTheme/Component/ModeToggleButton/Component/ModeToggleButtonComponent */export class ModeToggleButtonComponent extends PureComponent {static propTypes = {isDarkModeEnabled: PropTypes.bool.isRequired,enableDarkMode: PropTypes.func.isRequired,};render() {const { isDarkModeEnabled, enableDarkMode } = this.props;return (<buttonblock="ModeToggleButton"aria-label={ __("Toggle Dark Mode") }onClick={() => enableDarkMode(!isDarkModeEnabled)}>{ __("Toggle Dark Mode") }</button>);}}export default ModeToggleButtonComponent; Now we have a button that toggles the state in our Dark Mode Redux store (you can check this with the Redux DevTools). Next, we need to implement a component that will read from this state and use a dark Scandi theme dark if Dark Mode is enabled. There are several ways we can implement dark mode: Adjusting the values of all theme colors using CSS variables Using the filter property to invert the brightness of the entire app Using an all-white overlay with the difference blending mode, resulting in inverted colors Adjusting CSS variables would be a neat solution, and it would give us control over each color individually. However, in Scandi, many color values do not use the theme variables but are instead hardcoded. This limits how much control we can have on the app's colors via CSS variables, so this technique wouldn't work. Another approach would be setting filter: invert() hue-rotate(180deg) on the root HTML element to invert the brightness, but keep the same hue for all colors. This would be an elegant solution, but after experimenting with it I noticed that, even though it worked well in Chromium, it can cause layout bugs in Firefox: After some testing, I concluded the last method — using an overlay with a blending mode — works well in Scandi, so is what we'll be using for the purposes of this tutorial. It feels a bit "hacky" but unlike the other techniques, it works. This is how we will implement it: Create a component that covers the page with a color-inverting overlay if Dark Mode is enabled Create a plugin that would render this component on the page Make some adjustments to fix colors that are broken as a result of Dark Mode First, we create a new component responsible for implementing dark mode, called DarkModeProvider. Like the dark mode toggle button, this component needs access to the dark mode configuration. However, it should render something different: src/component/DarkModeProvider/DarkModeProvider.component.js// [...]render() {const { children, isDarkModeEnabled } = this.props;// we specify a modifier called `isEnabled` in the `mods` prop// if isDarkModeEnabled is true, the modifier will be added, otherwise notreturn (<div block="DarkModeProvider" mods={{ isEnabled: isDarkModeEnabled }}>{children}</div>);}// [...] Now, let's create a plugin that wraps the entire application in a DarkModeProvider. We can do this by plugging into the renderRouter function of the App component – the entire application is rendered inside this. 
src/plugin/App.component.plugin.js

```js
import DarkModeProvider from "../component/DarkModeProvider";

export const renderRouter = (args, callback, instance) => {
    return <DarkModeProvider key="router">{ callback(...args) }</DarkModeProvider>;
};

export default {
    "Component/App/Component": {
        "member-function": {
            renderRouter,
        },
    },
};
```

The DarkModeProvider component makes use of the Block-Element-Modifier (BEM) methodology. This is a set of guidelines for formatting CSS classes so that components can be easily styled, composed, and maintained. In this example, the block is "DarkModeProvider" and the element has 1 modifier: isEnabled, which is either true or false. If it is false, the modifier does not get added. If it is true, the element gets an additional class: DarkModeProvider_isEnabled. We will be using this class selector in CSS, to ensure that dark mode is only active when the modifier is added:

src/component/DarkModeProvider/DarkModeProvider.style.scss

```scss
.DarkModeProvider {
    // the ::after pseudo-element is what we use to invert all of the colors
    &::after {
        // by default (when dark mode is off), we don't want it to be visible
        // so we set the opacity to 0.
        // it is overridden with opacity: 1 in .DarkModeProvider_isEnabled::after
        opacity: 0;

        // defines a smooth transition when enabling or disabling dark mode
        transition: opacity ease-out 100ms;

        content: ""; // needed for ::after to be rendered at all

        // 1. make sure the element covers the entire page
        display: block;
        position: fixed;
        top: 0;
        bottom: 0;
        right: 0;
        left: 0;

        // 2. make sure the element is white, and "above" all the other layers
        z-index: 99999;
        background-color: white;

        // 3. magic. by using the difference blending mode with a white color,
        // all the colors in the app become inverted.
        // this works in all modern browsers.
        mix-blend-mode: difference;

        // we want click events to "pass through" this element,
        // so that it wouldn't interfere with using the app
        pointer-events: none;
    }

    // styles that are only applied if dark mode is enabled
    &_isEnabled {
        &::after {
            // makes the inverting ::after element (from above) visible
            opacity: 1;
        }
    }
}
```

Now, our dark mode turns the theme dark, as expected. However, there are still some issues. As you might notice, all of the images appear inverted. In addition, the theme's accent colors are inverted as well, so they no longer match the original palette. Our next steps will be to fix these issues.
To fix incorrect colors, we find the CSS variables responsible for incorrectly colored elements, and we invert their hues whenever dark mode is enabled:

src/component/DarkModeProvider/DarkModeProvider.style.scss

```scss
// [...]
&_isEnabled {
    // adjust-hue is a SCSS function that "rotates" the hue of a specific color
    // in this case, we use it to create complementary colors of the same brightness
    --primary-error-color: #{adjust-hue(#dc6d6d, 180deg)};
    --primary-success-color: #{adjust-hue(#7fcd91, 180deg)};
    --primary-info-color: #{adjust-hue(#ffd166, 180deg)};
    --primary-base-color: var(
        --imported_primary_base_color,
        #{adjust-hue($default-primary-base-color, 180deg)}
    );
    --primary-dark-color: var(
        --imported_primary_dark_color,
        #{adjust-hue($default-primary-dark-color, 180deg)}
    );
    --primary-light-color: var(
        --imported_primary_light_color,
        #{adjust-hue($default-primary-light-color, 180deg)}
    );
    --secondary-base-color: var(
        --imported_secondary_base_color,
        #{adjust-hue($default-secondary-base-color, 180deg)}
    );
    --secondary-dark-color: var(
        --imported_secondary_dark_color,
        #{adjust-hue($default-secondary-dark-color, 180deg)}
    );
    --secondary-light-color: var(
        --imported_secondary_light_color,
        #{adjust-hue($default-secondary-light-color, 180deg)}
    );
    --link-color: var(--primary-base-color);
    --cart-overlay-totals-background: var(--secondary-base-color);
    --overlay-desktop-border-color: var(--primary-light-color);
    --menu-item-figure-background: var(--secondary-base-color);
    --menu-item-hover-color: var(--primary-base-color);
    --newsletter-subscription-placeholder-color: var(--secondary-dark-color);
    --newsletter-subscription-button-background: var(--link-color);
    --button-background: var(--primary-base-color);
    --button-border: var(--primary-base-color);
    --button-hover-background: var(--primary-dark-color);
    --button-hover-border: var(--primary-base-color);
    // [...]
}
```

To fix image appearance, we want to re-invert all images so that they appear normal when the entire page is inverted. We plug into the render method of the Image component to wrap its contents in a ColorInverter component (which we haven't yet defined):

src/plugin/Image.component.plugin.js

```js
import ColorInverter from "../component/ColorInverter";

// wraps the output of the Image.render function in our ColorInverter component
export const render = (args, callback, instance) => {
    return <ColorInverter>{ callback(...args) }</ColorInverter>;
};

// export a configuration specifying the namespace we want to plug in to
// as well as the type of plugin
export default {
    "Component/Image/Component": {
        "member-function": {
            render,
        },
    },
};
```

The ColorInverter component is very similar to our existing DarkModeProvider component - it inverts the colors of its child elements. The difference is that ColorInverter can use the filter property to invert the colors without causing bugs. The container file is exactly the same as the one for DarkModeProvider (except for the different component name) — all it needs to do is provide the current Dark Mode state to the component. The component file is also similar:

src/component/ColorInverter/ColorInverter.component.js

```js
// [...]
export class ColorInverterComponent extends PureComponent {
    static propTypes = {
        isDarkModeEnabled: PropTypes.bool.isRequired,
        children: ChildrenType.isRequired,
    };

    render() {
        const { isDarkModeEnabled, children } = this.props;

        // we specify a modifier called `isInverted` in the `mods` prop
        // if isDarkModeEnabled is true, the modifier will be added, otherwise not
        return (
            <div block="ColorInverter" mods={ { isInverted: isDarkModeEnabled } }>
                { children }
            </div>
        );
    }
}
// [...]
```
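If you need a starting point for that container, a minimal sketch could look like the following (it assumes the same Redux wiring as the ModeToggleButton container shown earlier and that only the state mapping is needed here, since this component dispatches nothing; adjust the namespaces and paths to match your extension):

src/component/ColorInverter/ColorInverter.container.js

```js
// sketch – mirrors the containers shown earlier in this tutorial
import { connect } from "react-redux";

import ColorInverter from "./ColorInverter.component";

/** @namespace ScandiDarkTheme/Component/ColorInverter/Container/mapStateToProps */
export const mapStateToProps = (state) => ({
    // read the dark mode flag from the DarkMode Redux store
    isDarkModeEnabled: state.DarkModeReducer.enabled,
});

/** @namespace ScandiDarkTheme/Component/ColorInverter/Container/mapDispatchToProps */
export const mapDispatchToProps = () => ({});

export default connect(mapStateToProps, mapDispatchToProps)(ColorInverter);
```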
Now, in the stylesheet, all we need to do is invert the colors:

src/component/ColorInverter/ColorInverter.style.scss

```scss
.ColorInverter {
    filter: invert(0);
    transition: filter ease-out 100ms;

    // these styles will only apply to elements whose Block is "ColorInverter"
    // and that have the { isInverted: true } prop
    // the corresponding CSS class for these elements is .ColorInverter_isInverted
    &_isInverted {
        filter: invert(1);
    }
}
```

Now, images look good regardless of whether dark mode is enabled.

Optional exercises you can complete to make sure you have understood the code:

- We fixed product images, but configurable product color options are still inverted. Override the ProductCard and ProductAttributeValue components to fix these colors in PLP and PDP.
- The dark mode toggle button can be distracting. Instead of rendering it at the top of the page, put it in the My Account page, in the Dashboard section.

Now that you have created your extension, you can use it on any of your projects, or publish it to share it with others. We hope this tutorial was useful for learning the principles of Scandi plugin development, and can't wait to see what you will create!

Written by Reinis Mazeiks. Feel free to ask questions and share feedback in the Slack channel. Thanks!
https://docs.scandipwa.com/tutorials/dark-mode-extension
2021-09-17T02:58:51
CC-MAIN-2021-39
1631780054023.35
[]
docs.scandipwa.com
Date: Tue, 19 Jan 2010 21:21:06 +0000
From: RW <[email protected]>
To: [email protected]
Subject: Re: 8.0-RELEASE Hanging on boot-up/Harvesting
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
References: <[email protected]>

On Tue, 19 Jan 2010 18:34:19 +0000 (GMT)
Andy Hiscock <[email protected]> wrote:

> Thought I'd give version 8.0-RELEASE a go on a server I'm building for
> someone. All went well except when it comes to boot-up. It works
> through the config until it gets to some sort of networking
> routine/initiating. The line said something about "Harvesting
> ppp/Ethernet"?

I think that's probably a red herring - if the line ends with
"quickstart" you can rule it out. Commonly that initrandom output is
the last line to display before fsck runs; the actual sequence is:

  /etc/rc.d/initrandom
  /etc/rc.d/geli
  /etc/rc.d/gbde
  /etc/rc.d/encswap
  /etc/rc.d/ccd
  /etc/rc.d/swap1
  /etc/rc.d/fsck

My guess is that it's hanging on fsck or possibly swap1.
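The boot ordering quoted above can be reproduced on any FreeBSD system,
and a hung filesystem check can be run by hand from single-user mode;
for example (the device name is only an example - substitute your own
root partition):

  # print the order in which the rc.d scripts run at boot
  rcorder /etc/rc.d/* | less

  # from single-user mode, check the root filesystem manually
  fsck -y /dev/ad4s1a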
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=517314+0+archive/2010/freebsd-questions/20100124.freebsd-questions
2021-09-17T05:16:34
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org
4.6 Release Notes

Welcome to the 4.6 release of Scuba. In addition to the rudimentary bug fixes and performance improvements, there are a number of updates in this version that we hope you will appreciate. Some of the key highlights include:

- Round to Week Functionality — You can now round to a week, with the dates aligned to either Sunday or Monday using the ROUND_TO_SUNDAY or ROUND_TO_MONDAY functions.
- Queries — Perform operations on fields with empty values in the expression builder, as well as compare enumerated values.
- Flow Properties — Flow Properties relate to specific Flows. New UI enhancements make this connection clear.
- Enhanced Data Model Discoverability — Don't know whether a Knowledge Object is an Event, Actor or Flow? Use the All tab for greater discoverability.
- Improved In-Application Filtering — Find your Knowledge Objects and Dashboards more easily with searching.
- Monitoring — Back-end work to support Prometheus monitoring.

Query

Round to Week Functionality

Most functionality that lets you round to a week will make a choice for you about when a week starts. However, different use cases might want to delimit the start of a week differently. That's why we are rolling out two new functions, ROUND_TO_SUNDAY and ROUND_TO_MONDAY, which will round to weeks based on when you wish the week to begin.

Exposed Addition in Expression Builder

Often, users will want to use arithmetic in the Expression Builder in order to add filter logic. For example, let's say that you have the ability to "Like" a comment and "Like" a video; you might want to add the number of Likes across these fields. Suppose we have two fields, Likes_Comment and Likes_Video. Currently, if we add the columns together with the syntax =[Likes_Comment]+[Likes_Video], we will return an empty result for any row where either field has an empty value.

With the new ADD_WITH_NULL_AS_ZERO() function, we will be able to sum across columns that contain null values. In this case, if we specified =ADD_WITH_NULL_AS_ZERO([Likes_Comment], [Likes_Video]), we would return the sum of the two fields, with empty values treated as zero.

Usability

Flow Properties

Every Flow Property is derived from a flow. Now, in the UI, when you navigate to the Flows tab while exploring your Data Model, you'll see how many Flow Properties are associated with a given flow. For example, we can see that for the Lifecycle Flow, there are 6 properties associated with that flow. From there, one can then click and review the Flow Properties associated with the Flow being referenced. This is meant to make the connection between a Flow and its properties clear.

Enhanced Data Model Discoverability and In-Application Filtering

Don't remember whether the Knowledge Object you are looking for is an Actor Property, Event Property, Flow, or Flow Property? The All tab makes it easy to search for any of the objects you are looking for. Additionally, with the enhanced filtering logic you can filter to your name and see all the properties that are associated with your username.

Back-End

Prometheus

We are switching to Prometheus for our in-application monitoring and have done some behind-the-scenes work to lay the groundwork for that switch. Looking forward, one of the benefits of Prometheus is PromQL, which will let us set up much better import monitoring.

4.6.1 Update

Import - lz4 compressed input is now accepted.
https://docs.scuba.io/release-notes/4.6-Release-Notes.1310687393.html
2021-09-17T04:40:33
CC-MAIN-2021-39
1631780054023.35
[]
docs.scuba.io
Config Endpoints

New in version 1.12.0.

These endpoints facilitate access and modification of the configuration in a granular way. Config sent to the endpoints must be in the same format as returned by the corresponding GET request. When posting the configuration succeeds, the posted configuration is immediately applied, except for changes that require a restart. Query /rest/config/restart-required to check if a restart is required.

For all endpoints supporting PATCH, the existing config is taken and the given JSON object is unmarshalled on top of it. This means all child objects will replace the existing objects, not extend them. For example, for RawListenAddresses in options, which is an array of strings, sending {RawListenAddresses: ["tcp://10.0.0.2"]} will replace all existing listen addresses.

/rest/config

GET returns the entire config and PUT replaces it.

/rest/config/restart-required

GET returns whether a restart of Syncthing is required for the current config to take effect.

/rest/config/folders, /rest/config/devices

GET returns all folders (respectively, all devices) as an array. PUT takes an array and POST a single object. In both cases, if a given folder/device already exists, it's replaced, otherwise a new one is added.

/rest/config/folders/*id*, /rest/config/devices/*id*

Put the desired folder or device ID in place of *id*. GET returns the folder/device for the given ID, PUT replaces the entire folder/device object, PATCH replaces only the given child objects and DELETE removes the folder/device.

/rest/config/options, /rest/config/ldap, /rest/config/gui

GET returns the respective object, PUT replaces the entire object and PATCH replaces only the given child objects.
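As a quick usage sketch, the endpoints can be exercised with curl; the API key and address below are placeholders (your real API key is shown in the GUI settings and stored in the <apikey> element of config.xml), and the PATCH body follows the RawListenAddresses example above:

```sh
API_KEY="your-api-key-here"      # placeholder
BASE="http://127.0.0.1:8384"     # default GUI/REST listen address

# fetch the entire configuration
curl -s -H "X-API-Key: $API_KEY" "$BASE/rest/config"

# check whether a restart is required for the current config to take effect
curl -s -H "X-API-Key: $API_KEY" "$BASE/rest/config/restart-required"

# PATCH only the options object; note that array fields are replaced, not extended
curl -s -X PATCH \
     -H "X-API-Key: $API_KEY" \
     -H "Content-Type: application/json" \
     -d '{"RawListenAddresses": ["tcp://10.0.0.2"]}' \
     "$BASE/rest/config/options"
```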
https://docs.syncthing.net/rest/config.html
2021-09-17T04:40:22
CC-MAIN-2021-39
1631780054023.35
[]
docs.syncthing.net
Bootstrap

Note: When referring to "PostgreSQL cluster" in this section, the same concepts apply to both PostgreSQL and EDB Postgres Advanced, unless differently stated.

This section describes the options you have to create a new PostgreSQL cluster and the design rationale behind them. There are primarily two ways to bootstrap a new cluster:

- from scratch (initdb)
- from an existing PostgreSQL cluster, either directly (pg_basebackup) or indirectly (recovery)

Important: Bootstrapping from an existing cluster opens up the possibility to create a replica cluster, that is an independent PostgreSQL cluster which is in continuous recovery, synchronized with the source and that accepts read-only connections.

Warning: Cloud Native PostgreSQL requires both the postgres user and database to always exist. Using the local Unix Domain Socket, it needs to connect as the postgres user to the postgres database via peer authentication in order to perform administrative tasks on the cluster. DO NOT DELETE the postgres user or the postgres database!!!

The bootstrap section

The bootstrap method can be defined in the bootstrap section of the cluster specification. Cloud Native PostgreSQL currently supports the following bootstrap methods:

- initdb: initialize an empty PostgreSQL cluster (default)
- recovery: create a PostgreSQL cluster by restoring from an existing cluster via a backup object store, and replaying all the available WAL files or up to a given point in time
- pg_basebackup: create a PostgreSQL cluster by cloning an existing one of the same major version using pg_basebackup via the streaming replication protocol - useful if you want to migrate databases to Cloud Native PostgreSQL, even from outside Kubernetes.

Unlike the initdb method, both recovery and pg_basebackup create a new cluster based on another one (either offline or online) and can be used to spin up replica clusters. They both rely on the definition of external clusters.

API reference: Please refer to the API reference for the bootstrap section for more information.

The externalClusters section

The externalClusters section allows you to define one or more PostgreSQL clusters that are somehow related to the current one. While in the future this section will enable more complex scenarios, it is currently intended to define a cross-region PostgreSQL cluster based on physical replication, and spanning over different Kubernetes clusters or even traditional VM/bare-metal environments.

As far as bootstrapping is concerned, externalClusters can be used to define the source PostgreSQL cluster for either the pg_basebackup method or the recovery one. An external cluster needs to have:

- a name that identifies the origin cluster, to be used as a reference via the source option
- at least one of the following:
  - information about the streaming connection
  - information about the recovery object store, which is a Barman Cloud compatible object store that contains the backup files of the source cluster - that is, base backups and WAL archives.

Note: A recovery object store is normally an AWS S3 or an Azure Blob Storage compatible source that is managed by Barman Cloud.

When only the streaming connection is defined, the source can be used for the pg_basebackup method. When only the recovery object store is defined, the source can be used for the recovery method. When both are defined, any of the two bootstrap methods can be chosen. Furthermore, in the case of pg_basebackup or of a full recovery (one that is not limited to a given point in time), the cluster is eligible for replica cluster mode.
This means that the cluster is continuously fed from the source, either via streaming, via WAL shipping through PostgreSQL's restore_command, or both.

API reference: Please refer to the API reference for the externalClusters section for more information.

Bootstrap an empty cluster (initdb)

The initdb bootstrap method is used to create a new PostgreSQL cluster from scratch. It is the default one unless specified differently. The following example contains the full structure of the initdb configuration:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-initdb
spec:
  instances: 3
  superuserSecret:
    name: superuser-secret
  bootstrap:
    initdb:
      database: app
      owner: app
      secret:
        name: app-secret
  storage:
    size: 1Gi
```

The above example of bootstrap will:

- create a new PGDATA folder using PostgreSQL's native initdb command
- set a password for the postgres superuser from the secret named superuser-secret
- create an unprivileged user named app
- set the password of the latter (app) using the one in the app-secret secret (make sure that the username matches the name of the owner)
- create a database called app owned by the app user.

Thanks to the convention over configuration paradigm, you can let the operator choose a default database name (app) and a default application user name (same as the database name), as well as randomly generate a secure password for both the superuser and the application user in PostgreSQL.

Alternatively, you can generate your passwords, store them as secrets, and use them in the PostgreSQL cluster - as described in the above example. The supplied secrets must comply with the specifications of the kubernetes.io/basic-auth type. As a result, the username in the secret must match the one of the owner (for the application secret) and postgres for the superuser one.

The following is an example of a basic-auth secret:

```yaml
apiVersion: v1
data:
  username: YXBw
  password: cGFzc3dvcmQ=
kind: Secret
metadata:
  name: app-secret
type: kubernetes.io/basic-auth
```

The application database is the one that should be used to store application data. Applications should connect to the cluster with the user that owns the application database.

Important: Future implementations of the operator might allow you to create additional users in a declarative configuration fashion.

The postgres superuser and the postgres database are supposed to be used only by the operator to configure the cluster. In case you don't supply any database name, the operator will proceed by convention and create the app database, and add it to the cluster definition using a defaulting webhook. The user that owns the database defaults to the database name instead. The application user is not used internally by the operator, which instead relies on the superuser to reconcile the cluster with the desired status.

Important: For now, changes to the name of the superuser secret are not applied to the cluster.

The actual PostgreSQL data directory is created via an invocation of the initdb PostgreSQL command.
If you need to add custom options to that command (e.g., to change the locale used for the template databases or to add data checksums), you can add them to the options section like in the following example:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-initdb
spec:
  instances: 3
  bootstrap:
    initdb:
      database: app
      owner: app
      options:
        - "-k"
        - "--locale=en_US"
  storage:
    size: 1Gi
```

The user can also specify a custom list of queries that will be executed once, just after the database is created and configured. These queries will be executed as the superuser (postgres), connected to the postgres database:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-initdb
spec:
  instances: 3
  bootstrap:
    initdb:
      database: app
      owner: app
      options:
        - "-k"
        - "--locale=en_US"
      postInitSQL:
        - CREATE ROLE angus
        - CREATE ROLE malcolm
  storage:
    size: 1Gi
```

Warning: Please use the postInitSQL option with extreme care, as queries are run as a superuser and can disrupt the entire cluster.

Compatibility Features

EDB Postgres Advanced adds many compatibility features to the plain community PostgreSQL. You can find more information about that in the EDB Postgres Advanced documentation. Those features are already enabled during cluster creation on EPAS and are not supported on the community PostgreSQL image. To disable them you can use the redwood flag in the initdb section like in the following example:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-initdb
spec:
  instances: 3
  imageName: <EPAS-based image>
  licenseKey: <LICENSE_KEY>
  bootstrap:
    initdb:
      database: app
      owner: app
      redwood: false
  storage:
    size: 1Gi
```

Important: EDB Postgres Advanced requires a valid license key (trial or production) to start.

Bootstrap from another cluster

Cloud Native PostgreSQL enables the bootstrap of a cluster starting from another one of the same major version. This operation can happen by connecting directly to the source cluster via streaming replication (pg_basebackup), or indirectly via a recovery object store (recovery). The source cluster must be defined in the externalClusters section, identified by name (our recommendation is to use the same name as the origin cluster).

Bootstrap from a backup (recovery)

The recovery bootstrap mode lets you create a new cluster from an existing backup, namely a recovery object store. There are two ways to achieve this result in Cloud Native PostgreSQL:

- using a recovery object store, that is a backup of another cluster created by Barman Cloud and defined via the barmanObjectStore option in the externalClusters section
- using an existing Backup object in the same namespace (this was the only option available before version 1.8.0).

Both recovery methods enable either full recovery (up to the last available WAL) or up to a point in time. When performing a full recovery, the cluster can also be started in replica mode.

Note: You can find more information about backup and recovery of a running cluster in the "Backup and recovery" page.

Recovery from an object store

You can recover from a backup created by Barman Cloud and stored on a supported object storage. Once you have defined the external cluster, including all the required configuration in the barmanObjectStore section, you need to reference it in the .spec.bootstrap.recovery.source option.
The following example defines a recovery object store in a blob container in Azure:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  [...]
  superuserSecret:
    name: superuser-secret
  bootstrap:
    recovery:
      source: clusterBackup
  externalClusters:
    - name: clusterBackup
      barmanObjectStore:
        destinationPath:
        azureCredentials:
          storageAccount:
            name: recovery-object-store-secret
            key: storage_account_name
          storageKey:
            name: recovery-object-store-secret
            key: storage_account_key
```

Recovery from a Backup object

In case a Backup resource is already available in the namespace in which the cluster should be created, you can specify its name through .spec.bootstrap.recovery.backup.name, as in the following example:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example-initdb
spec:
  instances: 3
  superuserSecret:
    name: superuser-secret
  bootstrap:
    recovery:
      backup:
        name: backup-example
  storage:
    size: 1Gi
```

This bootstrap method allows you to specify just a reference to the backup that needs to be restored.

Additional considerations

Whether you recover from a recovery object store or an existing Backup resource, the following considerations apply:

- The application database name and the application database user are preserved from the backup that is being restored. The operator does not currently attempt to back up the underlying secrets, as this is part of the usual maintenance activity of the Kubernetes cluster itself.
- In case you don't supply any superuserSecret, a new one is automatically generated with a secure and random password. The secret is then used to reset the password for the postgres user of the cluster.
- By default, the recovery will continue up to the latest available WAL on the default target timeline (current for PostgreSQL up to 11, latest for version 12 and above). You can optionally specify a recoveryTarget to perform a point in time recovery (see the "Point in time recovery" section).

Point in time recovery (PITR)

Instead of replaying all the WALs up to the latest one, we can ask PostgreSQL to stop replaying WALs at any given point in time, after having extracted a base backup. PostgreSQL uses this technique to achieve point-in-time recovery (PITR).

Note: PITR is available from recovery object stores as well as Backup objects.
The operator will generate the configuration parameters required for this feature to work in case a recovery target is specified, like in the following example that uses a recovery object stored in Azure:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore-pitr
spec:
  instances: 3
  storage:
    size: 5Gi
  bootstrap:
    recovery:
      source: clusterBackup
      recoveryTarget:
        targetTime: "2020-11-26 15:22:00.00000+00"
  externalClusters:
    - name: clusterBackup
      barmanObjectStore:
        destinationPath:
        azureCredentials:
          storageAccount:
            name: recovery-object-store-secret
            key: storage_account_name
          storageKey:
            name: recovery-object-store-secret
            key: storage_account_key
```

Besides targetTime, you can use the following criteria to stop the recovery:

- targetXID: specify a transaction ID up to which recovery will proceed
- targetName: specify a restore point (created with pg_create_restore_point) to which recovery will proceed
- targetLSN: specify the LSN of the write-ahead log location up to which recovery will proceed
- targetImmediate: specify to stop as soon as a consistent state is reached

You can choose only a single one among the targets above in each recoveryTarget configuration. Additionally, you can specify targetTLI to force recovery to a specific timeline.

By default, the previous parameters are considered to be exclusive, stopping just before the recovery target. You can request inclusive behavior, stopping right after the recovery target, by setting the exclusive parameter to false, like in the following example relying on a blob container in Azure:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore-pitr
spec:
  instances: 3
  storage:
    size: 5Gi
  bootstrap:
    recovery:
      source: clusterBackup
      recoveryTarget:
        targetName: "maintenance-activity"
        exclusive: false
  externalClusters:
    - name: clusterBackup
      barmanObjectStore:
        destinationPath:
        azureCredentials:
          storageAccount:
            name: recovery-object-store-secret
            key: storage_account_name
          storageKey:
            name: recovery-object-store-secret
            key: storage_account_key
```

Bootstrap from a live cluster (pg_basebackup)

The pg_basebackup bootstrap mode lets you create a new cluster (target) as an exact physical copy of an existing and binary compatible PostgreSQL instance (source), through a valid streaming replication connection. The source instance can be either a primary or a standby PostgreSQL server.

The primary use case for this method is represented by migrations to Cloud Native PostgreSQL, either from outside Kubernetes or within Kubernetes (e.g., from another operator).

Warning: The current implementation creates a snapshot of the origin PostgreSQL instance when the cloning process terminates and immediately starts the created cluster. See "Current limitations" below for details.

Similar to the case of the recovery bootstrap method, once the clone operation completes, the operator will take ownership of the target cluster, starting from the first instance. This includes overriding some configuration parameters, as required by Cloud Native PostgreSQL, resetting the superuser password, creating the streaming_replica user, managing the replicas, and so on. The resulting cluster will be completely independent of the source instance.

Important: Configuring the network between the target instance and the source instance goes beyond the scope of Cloud Native PostgreSQL documentation, as it depends on the actual context and environment.
The streaming replication client on the target instance, which will be transparently managed by pg_basebackup, can authenticate itself on the source instance in any of the following ways:

- via username and password
- via TLS client certificate

The latter is the recommended one if you connect to a source managed by Cloud Native PostgreSQL or configured for TLS authentication. The first option is, however, the most common form of authentication to a PostgreSQL server in general, and might be the easiest way if the source instance is in a traditional environment outside Kubernetes. Both cases are explained below.

Requirements

The following requirements apply to the pg_basebackup bootstrap method:

- target and source must have the same hardware architecture
- target and source must have the same major PostgreSQL version
- source must not have any tablespace defined (see "Current limitations" below)
- source must be configured with enough max_wal_senders to grant access from the target for this one-off operation by providing at least one walsender for the backup plus one for WAL streaming
- the network between source and target must be configured to enable the target instance to connect to the PostgreSQL port on the source instance
- source must have a role with REPLICATION LOGIN privileges and must accept connections from the target instance for this role in pg_hba.conf, preferably via TLS (see "About the replication user" below)
- target must be able to successfully connect to the source PostgreSQL instance using a role with REPLICATION LOGIN privileges

See also: For further information, please refer to the "Planning" section for Warm Standby, the pg_basebackup page and the "High Availability, Load Balancing, and Replication" chapter in the PostgreSQL documentation.

About the replication user

As explained in the requirements section, you need to have a user with either the SUPERUSER or, preferably, just the REPLICATION privilege in the source instance.

If the source database is created with Cloud Native PostgreSQL, you can reuse the streaming_replica user and take advantage of client TLS certificate authentication (which, by default, is the only allowed connection method for streaming_replica).

For all other cases, including outside Kubernetes, please verify that you already have a user with the REPLICATION privilege, or create a new one by following the instructions below. As the postgres user on the source system, please run:

```sh
createuser -P --replication streaming_replica
```

Enter the password at the prompt and save it for later, as you will need to add it to a secret in the target instance.

Note: Although the name is not important, we will use streaming_replica for the sake of simplicity. Feel free to change it as you like, provided you adapt the instructions in the following sections.

Username/Password authentication

The first authentication method supported by Cloud Native PostgreSQL with the pg_basebackup bootstrap is based on username and password matching.
Make sure you have the following information before you start the procedure:

- location of the source instance, identified by a hostname or an IP address and a TCP port
- replication username (streaming_replica for simplicity)

You might need to add a line similar to the following to the pg_hba.conf file on the source PostgreSQL instance:

```
# A more restrictive rule for TLS and IP of origin is recommended
host replication streaming_replica all md5
```

The following manifest creates a new PostgreSQL 13.4 cluster, called target-db, using the pg_basebackup bootstrap method to clone an external PostgreSQL cluster defined as source-db (in the externalClusters array). As you can see, the source-db definition points to the source-db.foo.com host and connects as the streaming_replica user, whose password is stored in the password key of the source-db-replica-user secret.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: target-db
spec:
  instances: 3
  imageName: quay.io/enterprisedb/postgresql:13.4
  bootstrap:
    pg_basebackup:
      source: source-db
  storage:
    size: 1Gi
  externalClusters:
    - name: source-db
      connectionParameters:
        host: source-db.foo.com
        user: streaming_replica
      password:
        name: source-db-replica-user
        key: password
```

All the requirements must be met for the clone operation to work, including the same PostgreSQL version (in our case 13.4).

TLS certificate authentication

The second authentication method supported by Cloud Native PostgreSQL with the pg_basebackup bootstrap is based on TLS client certificates. This is the recommended approach from a security standpoint.

The following example clones an existing PostgreSQL cluster (cluster-example) in the same Kubernetes cluster.

Note: This example can be easily adapted to cover an instance that resides outside the Kubernetes cluster.

The manifest defines a new PostgreSQL 13.4 cluster called cluster-clone-tls, which is bootstrapped using the pg_basebackup method from the cluster-example external cluster. The host is identified by the read/write service in the same cluster, while the streaming_replica user is authenticated thanks to the provided keys, certificate, and certification authority information (respectively in the cluster-example-replication and cluster-example-ca secrets).

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-clone-tls
spec:
  instances: 3
  imageName: quay.io/enterprisedb/postgresql:13.4
  bootstrap:
    pg_basebackup:
      source: cluster-example
  storage:
    size: 1Gi
  externalClusters:
    - name: cluster-example
      connectionParameters:
        host: cluster-example-rw.default.svc
        user: streaming_replica
        sslmode: verify-full
      sslKey:
        name: cluster-example-replication
        key: tls.key
      sslCert:
        name: cluster-example-replication
        key: tls.crt
      sslRootCert:
        name: cluster-example-ca
        key: ca.crt
```

Current limitations

Missing tablespace support

Cloud Native PostgreSQL does not currently include full declarative management of PostgreSQL global objects, namely roles, databases, and tablespaces. While roles and databases are copied from the source instance to the target cluster, tablespaces require a capability that this version of Cloud Native PostgreSQL is missing: definition and management of additional persistent volumes. When dealing with base backup and tablespaces, PostgreSQL itself requires that the exact mount points in the source instance must also exist in the target instance, in our case, the pods in Kubernetes that Cloud Native PostgreSQL manages.
For this reason, you cannot directly migrate a PostgreSQL instance that takes advantage of tablespaces into Cloud Native PostgreSQL (you first need to remove them from the source or, if your organization requires this feature, contact EDB to prioritize it).

Snapshot copy

The pg_basebackup method takes a snapshot of the source instance in the form of a PostgreSQL base backup. All transactions written from the start of the backup to the correct termination of the backup will be streamed to the target instance using a second connection (see the --wal-method=stream option for pg_basebackup).

Once the backup is completed, the new instance will be started on a new timeline and diverge from the source. For this reason, it is advised to stop all write operations to the source database before migrating to the target database in Kubernetes.

Important: Before you attempt a migration, you must test both the procedure and the applications. In particular, it is fundamental that you run the migration procedure as many times as needed to systematically measure the downtime of your applications in production. Feel free to contact EDB for assistance.

Future versions of Cloud Native PostgreSQL will enable users to control PostgreSQL's continuous recovery mechanism via Write-Ahead Log (WAL) shipping by creating a new cluster that is a replica of another PostgreSQL instance. This will open up two main use cases:

- replication over different Kubernetes clusters in Cloud Native PostgreSQL
- zero cutover time migrations to Cloud Native PostgreSQL with the pg_basebackup bootstrap method
https://docs.enterprisedb.io/cloud-native-postgresql/1.8.0/bootstrap/
2021-09-17T02:58:36
CC-MAIN-2021-39
1631780054023.35
[]
docs.enterprisedb.io
Enum D6JointDriveType

Type of drives that can be used for moving or rotating bodies attached to the joint.

Namespace: FlaxEngine
Assembly: FlaxEngine.CSharp.dll

Syntax

```cs
[Unmanaged]
[Tooltip("Type of drives that can be used for moving or rotating bodies attached to the joint.")]
public enum D6JointDriveType
```

Remarks

Each drive is an implicit force-limited damped spring:

force = spring * (targetPosition - position) + damping * (targetVelocity - velocity)

Alternatively, the spring may be configured to generate a specified acceleration instead of a force.

A linear axis is affected by drive only if the corresponding drive flag is set. There are two possible models for angular drive: swing/twist, which may be used to drive one or more angular degrees of freedom, or SLERP, which may only be used to drive all three angular degrees simultaneously.
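As a rough C# usage sketch of how a drive of this type is typically configured on a D6Joint (the SetMotion/SetDrive method names, the D6JointAxis/D6JointMotion enums and the D6JointDrive field names below are assumptions based on the underlying PhysX D6 drive model, so verify them against the D6Joint API reference):

```cs
using FlaxEngine;

public class SlerpDriveExample : Script
{
    public D6Joint Joint;

    /// <inheritdoc />
    public override void OnStart()
    {
        // Free up all angular axes so the drive can rotate the attached body
        // (method and enum names assumed, see note above)
        Joint.SetMotion(D6JointAxis.Twist, D6JointMotion.Free);
        Joint.SetMotion(D6JointAxis.SwingY, D6JointMotion.Free);
        Joint.SetMotion(D6JointAxis.SwingZ, D6JointMotion.Free);

        // Configure the SLERP drive: an implicit damped spring that rotates the
        // body towards the joint's target orientation on all three angular axes
        Joint.SetDrive(D6JointDriveType.Slerp, new D6JointDrive
        {
            Stiffness = 1000.0f,         // the "spring" term in the formula above
            Damping = 100.0f,            // the "damping" term
            ForceLimit = float.MaxValue, // no force limit for this sketch
            Acceleration = true,         // generate acceleration instead of force
        });
    }
}
```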
https://docs.flaxengine.com/api/FlaxEngine.D6JointDriveType.html
2021-09-17T04:13:26
CC-MAIN-2021-39
1631780054023.35
[]
docs.flaxengine.com
Date: Fri, 18 Jan 2008 19:02:23 -0700
From: Chad Perrin <[email protected]>
To: freebsd general questions <[email protected]>
Subject: Re: Gutman Method on Empty Space
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
References: <[email protected]>

Have you looked into the `shred` utility (gshred on FreeBSD)?

--
CCD CopyWrite Chad Perrin [ ]
Kent Beck: "I always knew that one day Smalltalk would replace Java. I
just didn't know it would be called Ruby."
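Typical usage looks something like the following (gshred is installed by
the sysutils/coreutils port; the file names and pass counts are only
examples):

  # overwrite a single file with several random passes, add a final
  # zero pass, then unlink it
  gshred -v -u -z -n 7 secret.dat

  # one way to scrub free space: fill the filesystem with a junk file
  # (dd stops when the disk is full), then shred and remove that file
  dd if=/dev/zero of=/usr/fill.tmp bs=1m
  gshred -v -u -z -n 7 /usr/fill.tmp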
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1305306+0+archive/2008/freebsd-questions/20080120.freebsd-questions
2021-09-17T05:28:48
CC-MAIN-2021-39
1631780054023.35
[]
docs.freebsd.org