MySiteBlogFeatureReceiver Constructor

C++/CLI: public: MySiteBlogFeatureReceiver();
C#: public MySiteBlogFeatureReceiver ();
VB: Public Sub New ()
Application Allow List Example
Keep in mind that you do not need to capture every application that might be in use on your network in your initial inventory. Instead you should focus on the applications (and general types of applications) that you want to allow. Temporary rules in the best practice rulebase will catch any additional applications that may be in use on your network so that you are not inundated with complaints of broken applications during your transition to application-based policy. The following is an example application allow list for an enterprise gateway deployment.
The guidance below is organized by application type, describing the best practice for securing each type.
SaaS Applications
SaaS application service providers own and manage the software and infrastructure, but you retain full control of the data, including who can create, access, share, and transfer it.
Generate a SaaS applications usage report
to check if SaaS applications currently in use have unfavorable hosting characteristics such as past data breaches or lack of proper certifications. Based on business needs and the amount of risk you’re willing to accept, use the information to:
Block existing applications with unfavorable hosting characteristics immediately.
Create granular policies that block applications with unfavorable hosting characteristics to prevent future violations.
Identify network traffic trends of the top applications that have unfavorable hosting characteristics so you can adjust policy accordingly.
Sanctioned Applications
These are the applications that your IT department administers specifically for business use within your organization or to provide infrastructure for your network and applications. For example, in an internet gateway deployment these applications fall into the following categories:
Infrastructure Applications
—These are the applications that you must allow to enable networking and security, such as ping, NTP, SMTP, and DNS.
IT Sanctioned Applications
—These are the applications that you provision and administer for your users. These fall into two categories:
IT Sanctioned On-Premise Applications
—These are the applications you install and host in your data center for business use. With IT sanctioned on-premise applications, the application infrastructure and the data reside on enterprise-owned equipment. Examples include Microsoft Exchange and active sync, as well as authentication tools such as Kerberos and LDAP.
IT Sanctioned SaaS Applications
—These are SaaS applications that your IT department has sanctioned for business purposes, for example, Salesforce, Box, and GitHub.
Administrative Applications
—These are applications that only a specific group of administrative users should have access to in order to administer applications and support users (for example, remote desktop applications).
Tag all sanctioned applications
with the predefined
Sanctioned
tag. Panorama and firewalls consider applications without the Sanctioned tag as unsanctioned applications.
General Types of Applications
Besides the applications you officially sanction and deploy, you will also want to allow your users to safely use other types of applications:
General Business Applications
—For example, allow access to software updates, and web services, such as WebEx, Adobe online services, and Evernote.
Personal Applications
—For example, you may want to allow your users to browse the web or safely use web-based mail, instant messaging, or social networking applications, including consumer versions of some SaaS applications.
Begin with wide application filters to gain an understanding of what applications are in use on your network. You can then decide how much risk you are willing to assume and begin to pare down the application allow list. For example, suppose multiple messaging applications are in use, each with the inherent risk of data leakage, transfer of malware-infected files, etc. The best approach is to officially sanction a single messaging application and then begin to phase out the others by slowly transitioning from an allow policy to an alert policy, and finally, after giving users ample warning, a block policy for all messaging applications except the one you choose to sanction. In this case, you might also choose to enable a small group of users to continue using an additional messaging application as needed to perform job functions with partners.
Custom Applications Specific to Your Environment
If you have proprietary applications on your network or applications that you run on non-standard ports, it is a best practice to
create custom applications
for each of them. This way you can allow the application as a sanctioned application (and apply the predefined Sanctioned tag) and lock it down to its default port. Otherwise you would either have to open up additional ports (for applications running on non-standard ports), or allow unknown traffic (for proprietary applications), neither of which is recommended in a best practice Security policy.
Servlet

To use Sentry in a servlet application, declare a dependency on the sentry-servlet module (the io.sentry group ID is assumed from the sentry-java repository listed below):

<dependency>
    <groupId>io.sentry</groupId>
    <artifactId>sentry-servlet</artifactId>
    <version>5.0.1</version>
</dependency>
For other dependency managers, see the central Maven repository.
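For example, with Gradle the same artifact can be declared as follows (again assuming the io.sentry group ID):

implementation 'io.sentry:sentry-servlet:5.0.1'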
The sentry-servlet module comes with a ServletContainerInitializer that registers a ServletRequestListener, which enhances each Sentry event triggered within the scope of an HTTP request with request information such as the HTTP method, query string, URL, and HTTP headers.
Create a SentryInitializer servlet container initializer that initializes Sentry on application startup:
package sentry.sample;

import java.util.Set;

import io.sentry.Sentry;

import javax.servlet.ServletContainerInitializer;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;

public final class SentryInitializer implements ServletContainerInitializer {

  @Override
  public void onStartup(Set<Class<?>> c, ServletContext ctx) throws ServletException {
    Sentry.init(options -> {
      options.setDsn("");
    });
  }
}
Create a file in src/main/resources/META-INF/services named javax.servlet.ServletContainerInitializer, with the fully qualified name of your custom SentryInitializer class as its content:

sentry.sample.SentryInitializer
- Version: 5.0.1
- Repository: https://github.com/getsentry/sentry-java
Job Example 12: Extracting Rows and Sending Them in Delimited Format
Job Objective
Extract rows from Teradata Database tables and write them to an external target file as delimited data.
Data Flow Diagram
Figure 37 shows a diagram of the job elements for Job Example 12.

Figure 37: Job Example PTS00016, PTS00017 -- Extracting Rows and Sending Them in Delimited Format
Sample Script
For the sample script that corresponds to this job, see the following scripts in the sample/userguide directory:
PTS00016: Extracting Rows and Writing Them in Delimited Format using the Export operator.
PTS00017: Extracting Rows and Writing Them in Delimited Format using the SQL Selector operator.
Rationale
This job uses the:
One of the first things you will want to know about your Varnish setup is whether or
not content is cached. In this tutorial we’ll provide a vcl snippet to achieve
this as well as explain how to leverage it via
varnishlog and
varnishncsa.
The code is fairly straightforward:

sub vcl_recv {
    unset req.http.x-cache;
}

sub vcl_hit {
    set req.http.x-cache = "hit";
}

sub vcl_miss {
    set req.http.x-cache = "miss";
}

sub vcl_pass {
    set req.http.x-cache = "pass";
}

sub vcl_pipe {
    set req.http.x-cache = "pipe uncacheable";
}

sub vcl_synth {
    set req.http.x-cache = "synth synth";
}

sub vcl_deliver {
    if (obj.uncacheable) {
        set req.http.x-cache = req.http.x-cache + " uncacheable";
    } else {
        set req.http.x-cache = req.http.x-cache + " cached";
    }
    # uncomment the following line to show the information in the response
    # set resp.http.x-cache = req.http.x-cache;
}
The x-cache header is where we store the information, using two terms that cover slightly different aspects of the content.
The first word can be:
hit: we are delivering an object from the cache
miss: the object comes from the backend after failing to find it in the cache.
pass: the object comes from the backend because the request bypassed the cache.
synth: we created a synthetic object on-the-fly to satisfy the request.
The second word will be:
cached: the object will be reused.
uncacheable: the object comes from the backend but will not be reused.
synth: synthetic object.
You can either copy-paste the above snippet at the top of your vcl file (after
the
vcl 4.X; statement), or you can save it as
/etc/varnish/hit-miss.vcl
and include it:
vcl 4.0;
import std;
include "hit-miss.vcl";
...
You’ll need to uncomment the last line to see this information as a respone header. This is not done by default as it outputs information that should possibly be hidden from regular users.
You can then use the
-i switch in a
curl command to see the headers:
$ curl -i
HTTP/1.1 200 OK
Date: Tue, 24 Jul 2018 18:43:16 GMT
Server: Varnish
Content-Type: text/html; charset=utf-8
X-Varnish: 32773
Age: 0
Via: 1.1 varnish
connection: close
x-cache: miss cached
Content-Length: 282
...
Note: Piped responses can’t be modified, so you can’t get the information for them that way.
varnishncsa grabs information about an HTTP transaction and formats it,
usually as a line for easy logging (
man varnishncsa):
# log the full header
varnishncsa -F '<%{x-cache}i> %U %s'
varnishlog (and
varnishncsa) uses a powerful VSL query language (
man
vsl-query) that allows for versatile filters.
For example, you can:
# only show passes:
varnishlog -q 'ReqHeader:x-cache[1] eq pass'

# only show cached objects
varnishlog -q 'ReqHeader:x-cache[2] eq cached'
Note: you can apply the same -q arguments to varnishncsa.
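For example, to log only the requests that missed the cache, together with their x-cache value and URL (a small combination of the options shown above):

varnishncsa -q 'ReqHeader:x-cache[1] eq miss' -F '<%{x-cache}i> %U %s'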
Once you’ve confirmed that payments from the pretend accounts are successful, you can now enable the PayPal Account.
First, you’ll need to change your PayPal API app from Sandbox to Live API. To get Live API Credentials, go to Tools > Business Setup > Offer PayPal checkout on your website. Click Set Up Online Payments followed up Get Your API Credentials
Next, you’ll need to update the PayPal information on your OJS distribution setting with live API credentials and unselect ‘Test Mode’ | https://docs.pkp.sfu.ca/using-paypal-for-ojs-and-ocs/en/your-real-paypal-account.html | 2021-07-24T05:47:07 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.pkp.sfu.ca |
System Integration Overview
In this article we will briefly cover the different ways you can integrate Ucommerce with 3rd party systems and what to consider.
Performance considerations
When dealing with integrations, it is always a good idea to get an overview of how the integrations should work before deciding on the model you want to go with. Important factors include:
- How many integrations do I need? (Price updates, Stock updates, product imports, orders export)
- How often should my integrations run? (daily, hourly, every 2 minutes, on demand)
- How much data is being changed?
Be realistic. Sometimes the clients want to update stock and prices for all 100.000 products every minute. This is not going to happen. So talk with your clients and find a sweet spot that suits both them and you.
Web API integrations
You can enable Ucommerce in a Web API request and, as such, do your usual CRUD operations directly within a webservice or a scheduled task running under the website. Everything is in place; you just need to start consuming the APIs you need.
If you are interested in this model, you can read how to add a new webservice.
Integrations outside website
The model that gives you the best performance in terms of dedicated resources is to run your integrations outside the website. This frees resources for your visitors, who will get a faster shopping experience, as the website is not spending resources on system integration. This model also sets the highest requirement in terms of setup. You can read how to enable the Ucommerce APIs outside webcontext to learn how to do that.
First, you create a remote app access client in the region-specific Workspace ONE Access for the integration with NSX-T Data Center. Then, you use the certificate thumbprint, ClientID, and shared secret, to register NSX-T Data Center to identify it as a trusted consumer of the Workspace ONE Access identity and authentication services.
Procedure
- In a Web browser, log in to the region-specific Workspace ONE Access instance in Region A by using the administration interface.
- On the main navigation bar, from the Catalog drop-down menu, select Settings.
- In the left pane, click Remote app access.
- Click Clients and click Create client.
- In the Create client dialog box, configure these settings, and click Add.
- In a Web browser, log in to the NSX-T Manager for the workload domain by using the user interface.
- On the main navigation bar, click System.
- In the left pane, click Users, click the Configuration tab, and click Edit.
- In the Edit VMware Identity Manager configuration dialog box, configure these settings and click Save.
Results
Important:
After you configure Workspace ONE Access as an identity provider, the NSX-T Manager URL for a local account login is appended with /login.jsp?local=true.
From the Settings page, under the Project menu, you can add file extensions you want Waydev to ignore when analyzing your codebase. The default ignored extensions are ‘svg, gif, png, jpg, ttf, map, eot, woff, woff2, plist, json, 000, po, mo, sql.’
You can edit the extensions you want Waydev to ignore by clicking the ‘Edit’ button on the right side of the page.
You can also set a custom regex to ignore commits with a message that matches the regex you set. For example, if you set the setting to "/\bweb\b/i", Waydev will ignore all the commits which contain the word "web" in the commit message.
Now you can adjust the time frame for Churn, Legacy Refactor & Helping Others according to your sprint duration. It's set as default for 21 days, but if your sprints are not two-weeks long, you can set a custom time frame from the Settings page.
You can set Waydev to ignore commits that have more than your chosen number of lines of code. By default, Waydev ignores commits that have over 8000 lines of code.
By default, Waydev only takes into account the commits that are not merges, except for the Work Log. If you want to see them in other pages/stats, you can turn "Include Merges" on.
You can set a new stats timezone and all the stats will be converted according to the selected timezone. For example, if you select UTC, all the stats will be converted according to the UTC timezone.
If you select Local, commit stats will be converted according to the local time of execution, and pull requests will be converted to the UTC timezone.
If you select Custom, you will be able to select any timezone from the drop-down.
By default, commits are displayed according to the local time of execution, and pull requests are displayed according to the UTC timezone.
You can also select a custom time of the day and timezone you want Waydev to process your commits at.
If you need to update the data before we do it automatically, you can click on the "Manually Start Cloning Process" button and we will start updating your data right away. You can only do this once every 2 hours.
High Dynamic Range and Wide Color Gamut
High Dynamic Range (HDR) displays a greater difference in light intensity from white to black, and Wide Color Gamut (WCG) provides a greater range of colors.
High Dynamic Range and Wide Color Gamut Overview
In DXGI 1.5 there is support for HDR and WCG, both using a minimum of 10 bits (rather than 8 bits) per color. DXGI 1.5 provides support for HDR10, a 10 bit HDR/WCG format.
Current maximum brightness of displays is designed to support diffuse reflected surfaces, with nothing brighter than paper, referred to as "paper white". Paper white defines how bright white should be, for example in a controlled dark environment like a movie theater, 80 nits is typically used, in contrast to a PC monitor which could be 220 nits (one "nit" is short for one candela per square meter, and is a unit of light intensity). This enables a monitor screen to closely resemble what can be printed out. HDR is the display of pixel values above this paper white level so you can more accurately represent things like light sources, reflections of light sources, and similar bright objects, on screen, which is currently simulated by using a tone mapping operator, such as Reinhard.
Because of this additional capability, title content creators can now:
Represent more detail in bright and dark areas. The image below compares the color values the standards ST.2084 and sRGB can represent over a range of light intensity, measured in nits. The standard sRGB, in violet, shows that when light intensity reaches less than 0.1 nit or greater than 100 nit, there is no more differentiation in the color value.
Clearly differentiate diffuse areas from specular highlights, for example metal surfaces now look much more like actual metal.
Differentiate specular highlights from light sources of different colors.
Differentiate true light sources from reflections.
Typically the areas of high intensity are small peaks, and are dynamic (quickly come and go) so the "average" intensity over a series of frames is usually not significantly different from a Standard Dynamic Range (SDR) frame. Users typically adjust their display to set an average luminance of an SDR frame for optimal eye comfort. If applications have too many high intensity frames, users can get fatigue.
The umbrella term Ultra-High Definition (UHD) for TV displays refers to a combination of HDR (branded as "UHD Premium" on current TVs), WCG, along with a high frame rate and greater pixel resolution. UHD is not synonymous though with 4K displays. Also, technically HDR refers only to the difference between the brightest whites and darkest blacks, though sometimes is used as a generic term to include WCG.
Currently most content is developed assuming paper white to be 80 to 100 "nits". Most current monitors peak at around 250 to 300 nits. This is well short of the sun's direct light on a metallic surface (around 10,000 nits), the sun's reflection on a glass or metallic surface (around 300,000 nits), and tiny compared with the Sun itself, at a blistering 1.6 billion nits.
At the other end of the spectrum, moonlight can be around 1 nit, and starlight down to 0.000001 of a nit.
A move to HDR displays will increase the peak light intensity, typically to around 1000 nits for LCD TV screens, and up to around 800 nits for OLED TV screens.
The relationship between the values stored for red, green and blue, and the actual colors rendered on a display, is determined by a color standard, such as the 8 bit variation of the color standard BT.709 (used on many current TVs, and very similar to the 8 bit sRGB on computer monitors). BT.709 as a standard also supports 10 bit color. The following images shows the increase in color range provided by the color standard BT.2020 (which has 10 and 12 bit variants). Although the most obvious improvement is the range of green values, note also the deeper reds, yellows and purples.
The images themselves are "xy chrominance" diagrams, which map out the "gamut" (range of valid colors) within a colorspace, and ignore the luminance (intensity) values. The overall horseshoe shape consists of all colors that are perceivable by the average human. The curved line that surrounds this shape with the blue numbers is the "spectral locus" (which is a plot of the monochromatic wavelength colors - laser light - going from 700nm to 380nm). The straight line at the bottom from violet to red are "nonpure" colors that can’t be represented with monochromatic light. This outside boundary of the horseshoe represents the purest (most saturated) colors that humans can perceive.
D65 is a definition of a "whitepoint". This whitepoint is used in most consumer electronics colorspaces, including sRGB. The triangle you see shows all of the colors that can be represented in a "3 channel additive colorspace" (combinations of R G and B light) such as the output from an LCD display. The xyY colorspace is defined such that you can calculate all of the possible colors that you can obtain via a combination of two lights (for example, a point at pure red and second point at pure green LCD output) by drawing a straight line between these two points.
As displays support greater ranges of color and luminance (e.g. HDR), apps should take advantage of this by increasing bit depth. 10-bit/channel color is an excellent starting point. 16-bit/channel color may work well in some cases. Games that want to use HDR to drive an HDR display (a TV or monitor) will want to use at least 10-bit color, but could also consider 16-bit floating point for the format of the final swap chain.
HDR and WCG APIs
In order to enable HDR and WCG in your app, refer to the following APIs.
- IDXGISwapChain4::SetHDRMetaData : sets High Dynamic Range (HDR) and Wide Color Gamut (WCG) header metadata.
- DXGI_HDR_METADATA_HDR10 : structure containing the metadata settings.
- DXGI_HDR_METADATA_TYPE : enum identifying the type of header metadata.
- DXGI_COLOR_SPACE_TYPE : defines the colorspace (sRGB, YCbCr), color range, gamma settings, and other details of the color format.
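As a rough sketch of how these pieces fit together for HDR10 output: the interface, structure, and enum names below are the DXGI identifiers listed above, but the unit-scaling factors and the sample chromaticity/luminance values are illustrative assumptions to verify against the DXGI_HDR_METADATA_HDR10 reference, and error handling is omitted.

#include <dxgi1_5.h>

// Tag the swap chain as HDR10 (ST.2084 / BT.2020) and describe the mastering display.
void EnableHdr10(IDXGISwapChain4* swapChain)
{
    swapChain->SetColorSpace1(DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020);

    DXGI_HDR_METADATA_HDR10 md = {};
    // Chromaticity coordinates encoded as coordinate * 50000 (assumed scale); BT.2020 primaries, D65 white.
    md.RedPrimary[0]   = static_cast<UINT16>(0.708f  * 50000.0f);
    md.RedPrimary[1]   = static_cast<UINT16>(0.292f  * 50000.0f);
    md.GreenPrimary[0] = static_cast<UINT16>(0.170f  * 50000.0f);
    md.GreenPrimary[1] = static_cast<UINT16>(0.797f  * 50000.0f);
    md.BluePrimary[0]  = static_cast<UINT16>(0.131f  * 50000.0f);
    md.BluePrimary[1]  = static_cast<UINT16>(0.046f  * 50000.0f);
    md.WhitePoint[0]   = static_cast<UINT16>(0.3127f * 50000.0f);
    md.WhitePoint[1]   = static_cast<UINT16>(0.3290f * 50000.0f);
    // Mastering luminance in units of 0.0001 nit (assumed); content light levels in nits.
    md.MaxMasteringLuminance     = static_cast<UINT>(1000.0f * 10000.0f);
    md.MinMasteringLuminance     = static_cast<UINT>(0.001f  * 10000.0f);
    md.MaxContentLightLevel      = 1000;
    md.MaxFrameAverageLightLevel = 200;

    swapChain->SetHDRMetaData(DXGI_HDR_METADATA_TYPE_HDR10, sizeof(md), &md);
}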
Related topics
Issues Resolved
- We noticed a few performance issues when users exited the Slyce camera and returned numerous times in a row. This has been improved in this release.
- We noticed users would tap multiple times on the same image, creating duplicate search requests. We added a debounce of 1 second to prevent users from creating duplicate results by tapping multiple times on the camera in quick succession.
- When users performed a second search, the loading layer would not display. We resolved that issue.
- We fixed a crash that would happen in the rare case where an image was submitted at exactly 400px.
Link to prev
On this page:
Syntax
<txp:link_to_prev>
The link_to_prev tag can be used as a single tag or a container tag to return the permanent URL of the previous article. The showalways attribute controls whether output is still displayed when no previous article exists.
- Values: 0 (no) or 1 (yes).
- Default: 0.
Examples
Example 1: Link to previous article using its title
<txp:link_to_prev> <txp:prev_title /> </txp:link_to_prev>
Other tags used: prev_title.
Example 2: Link to previous article using static text
<txp:link_to_prev showalways="1">Previous</txp:link_to_prev>
This will always display the text ‘Previous’, even when there is no previous article.
Note: While
showalways will enable this tag to display what is wrapped inside it, prev_title returns nothing if there is no previous article. You can also build the anchor element yourself from the previous article's title, and apply a class to it:
<a class="link--prev" href="<txp:link_to_prev />" title="<txp:prev_title />"> ← Previous article </a>
Other tags used: prev_title.
Configure legacy on-premises public folders for a hybrid deployment
Summary: Use the steps in this article to synchronize public folders between Office 365 and your Exchange Server 2010 on-premises deployment.
In a hybrid deployment, your users can be in Exchange Online , on-premises, or both, and your public folders are either in Exchange Online or on-premises. Public folders can reside in only one place, so you must decide whether your public folders will be in Exchange Online or on-premises. They can't be in both locations. Public folder mailboxes are synchronized to Exchange Online by the Directory Synchronization service. However, mail-enabled public folders aren't synchronized across premises.
This topic describes how to synchronize mail-enabled public folders if your users are in Office 365 and your Exchange Server 2010 SP3 public folders are on-premises. However, an Office 365 user who is not represented by a MailUser object on-premises (local to the target public folder hierarchy) won't be able to access legacy or modern on-premises public folders.
Note
This topic refers to the Exchange Server 2010 SP3 servers as the legacy Exchange server.
You will sync your mail-enabled public folders by using the following scripts, which are initiated by a Windows task that runs in the on-premises environment:
Sync-MailPublicFolders.ps1: This script synchronizes mail-enabled public folder objects from your local Exchange on-premises deployment with Office 365. It uses the local Exchange on-premises deployment as master to determine what changes need to be applied to O365. The script will create, update, or delete mail-enabled public folder objects on O365 Active Directory based on what exists in the local on-premises Exchange deployment.
SyncMailPublicFolders.strings.psd1: This is a support file used by the preceding synchronization script and should be copied to the same location as the preceding script.
When you complete this procedure your on-premises and Office 365 users will be able to access the same on-premises public folder infrastructure.
What hybrid versions of Exchange will work with public folders?
The following table describes the version and location combinations of user mailboxes and public folders that are supported. "Hybrid not applicable" is still a supported scenario, but is not considered a hybrid scenario since both the public folders and the users are residing in the same location.
Note
Outlook 2016 does not support accessing Exchange 2007 legacy public folders. If you have users who are using Outlook 2016, you must move your public folders to a more recent version of Exchange Server. More information about Outlook 2016 and Office 2016 compatibility with Exchange 2007 and earlier versions can be found in this article.
Step 1: What do you have to know before you begin?
These instructions assume that you have used the Hybrid Configuration Wizard to configure and synchronize your on-premises and Exchange Online environments, and that the DNS records that are used for the Autodiscover service for most users reference an on-premises end point. For more information, see Hybrid Configuration Wizard.
These instructions assume that Outlook Anywhere is enabled and functional on all the on-premises legacy Exchange public folder servers. For information about how to enable Outlook Anywhere, see Outlook Anywhere.
Implementing legacy public folder coexistence for a hybrid deployment of Exchange with Office 365 may require you to fix conflicts during the import procedure. Conflicts can occur because a non-routable email address that's assigned to mail-enabled public folders, conflicts with other users and groups in Office 365, and other reasons.
These instructions assume that your Exchange Online organization has been upgraded to a version that supports public folders.
In Exchange Online, you must be a member of the Organization Management role group. This role group is different from the permissions assigned to you when you subscribe to Exchange Online. For information about how to enable the Organization Management role group, see Manage Role Groups.
In Exchange 2010, you must be a member of the Organization Management or Server Management RBAC role groups. For details, see Add Members to a Role Group
To access public folders cross-premises, users must upgrade their Outlook clients to the November 2012 Outlook public update or a later version, and download the update in your preferred language from the download dialog box.
Outlook 2016 for Mac (and earlier versions) and Outlook for Mac for Office 365 are not supported for cross-premises legacy public folders. Users must be in the same location as the public folders to access them with Outlook for Mac or Outlook for Mac for Office 365. Additionally, users whose mailboxes are in Exchange Online won't be able to access on-premises public folders using Outlook Web App.
After you follow the instructions in this article to configure your on-premises public folders for a hybrid deployment, users who are external to your organization won't be able to send messages to your on-premises public folders unless you take additional steps. You can either set the accepted domain for the public folders to Internal Relay (see Manage accepted domains in Exchange Online) or you can disable Directory Based Edge Blocking (DBEB) (see Use Directory Based Edge Blocking to reject messages sent to invalid recipients).
Step 2: Make remote public folders discoverable
Create an empty mailbox database on each public folder server.
For Exchange 2010, run the following command. This command excludes the mailbox database from the mailbox provisioning load balancer. This prevents new mailboxes from being added automatically to this database.
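A minimal sketch of such a command, assuming the IsExcludedFromProvisioning parameter is available on New-MailboxDatabase in your Exchange 2010 SP3 environment (substitute your own database and server names):

New-MailboxDatabase -Name <NewMDBforPFs> -Server <PFServerName> -IsExcludedFromProvisioning $true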
Create a proxy mailbox within the new mailbox database, and hide the mailbox from the address book. The SMTP of this mailbox will be returned by AutoDiscover as the DefaultPublicFolderMailbox SMTP, so that by resolving this SMTP the client can reach the legacy exchange server for public folder access.
New-Mailbox -Name <PFMailbox1> -Database <NewMDBforPFs>
Set-Mailbox -Identity <PFMailbox1> -HiddenFromAddressListsEnabled $true

Repeat these steps on each public folder server in your organization.
Step 3: Download the scripts

Download the Sync-MailPublicFolders.ps1 and SyncMailPublicFolders.strings.psd1 files described earlier and copy them to the legacy Exchange server.

Step 4: Configure directory synchronization
The Directory Synchronization service doesn't synchronize mail-enabled public folders. Running the following script will synchronize the mail-enabled public folders across premises. On the legacy Exchange server, run the following command to synchronize mail-enabled public folders from your local on-premises Active Directory to O365.
Sync-MailPublicFolders.ps1 -Credential (Get-Credential) -CsvSummaryFile "<sync_summary.csv>"
Where you're prompted for your Office 365 username and password, and <sync_summary.csv> is the name of the file to which the synchronization summary is written.

Step 5: Configure Exchange Online users to access on-premises public folders
The final step in this procedure is to configure the Exchange Online organization and to allow access to the legacy on-premises public folders.
Enable the Exchange Online organization to access the on-premises public folders. You will point to all of the proxy public folder mailboxes that you created in Step 2: Make remote public folders discoverable.
Run the following command in Exchange Online PowerShell:
Set-OrganizationConfig -PublicFoldersEnabled Remote -RemotePublicFolderMailboxes PFMailbox1,PFMailbox2,PFMailbox3

Office 365 randomly selects one of the public folder mailboxes that's supplied in this command.
Important
An Office 365 user who is not represented by a MailUser object on-premises (local to the target public folder hierarchy) won't be able to access legacy or Exchange 2013 on-premises public folders. See the Knowledge Base article Exchange Online users can't access legacy on-premises public folders for a solution.
How do I know this worked?
Log on to Outlook for a user who is in Exchange Online, and then run the following public folder tests:
View the hierarchy.
Check permissions.
Create and delete public folders.
Post content to and delete content from a public folder.
Feature: #19157 - Add option to exclude all hidden records in EXT:impexp ¶
See Issue #19157
Description ¶
The export configuration of EXT:impexp has been extended to allow to completely deactivate exporting of hidden/deactivated records. This behaviour can be controlled via a new option which is checked by default.
Furthermore, if the inclusion of hidden records is activated (which is now an explicit choice), then an additional button is shown, allowing users to preselect all hidden records for manual exclusion.
Tables ¶
There are several ways to create tables in reST. Use what works best for your use case.
Grid Table ¶
+----------+----------+
| Header 1 | Header 2 |
+==========+==========+
| 1        | one      |
+----------+----------+
| 2        | two      |
+----------+----------+
You can use this table generator to create a grid table.
Simple Table ¶
======== ========
Header 1 Header 2
======== ========
1        one
2        two
======== ========
Csv Tables ¶
.. csv-table:: Numbers
   :header: "Header 1", "Header 2"
   :widths: 15, 15

   1, "one"
   2, "two"
t3-field-list-table Tables ¶
t3-field-list-table
is a custom directive, created by the t3SphinxThemeRtd
template. If you want your .rst file to be correctly rendered on other
platforms as well (e.g. GitHub), you should not use this.
.. t3-field-list-table::
   :header-rows: 1

   - :Header1: Header1
     :Header2: Header2

   - :Header1: 1
     :Header2: one

   - :Header1: 2
     :Header2: two
Example:
JavaScript Console commands
You can use commands to send messages and perform other tasks in the JavaScript Console window of Visual Studio. For examples that show how to use that window, see QuickStart: Debug JavaScript. The information in this topic applies to Windows Store apps, Windows Phone Store apps, and apps created using Visual Studio Tools for Apache Cordova. For info on supported console commands in Cordova apps, see Debug Your App. For info on using the console in Internet Explorer F12 tools, see this topic.
If the JavaScript Console window is closed, you can open it while you're debugging in Visual Studio by choosing Debug > Windows > JavaScript Console.
Note
If the window is not available during a debugging session, make sure that the debugger type is set to Script in the Debug properties for the project.
You can use the console object commands in your code or in the JavaScript Console window. Use window.console.[command] if you need to avoid possible confusion with local objects named console.
Tip
Older versions of Visual Studio do not support the complete set of commands. Use IntelliSense on the console object to get quick information about supported commands.
Miscellaneous commands
These commands are also available in the JavaScript Console window (they are not available from code).
Checking whether a console command exists
You can check whether a specific command exists before attempting to use it. This example checks for the existence of the
console.log command. If
console.log exists, the code calls it.
if (console && console.log) { console.log("msg"); }
Examining objects in the JavaScript Console window
You can interact with any object that's in scope when you use the JavaScript Console window. To inspect an out-of-scope object in the console window, use
console.log ,
console.dir, or other commands from your code. Alternatively, you can interact with the object from the console window while it is in scope by setting a breakpoint in your code (Breakpoint > Insert Breakpoint).
Formatting console.log output
If you pass multiple arguments to
console.log, the console will treat the arguments as an array and concatenate the output.
var user = new Object(); user.first = "Fred"; user.last = "Smith"; console.log(user.first, user.last); // Output: // Fred Smith
console.log also supports "printf" substitution patterns to format output. If you use substitution patterns in the first argument, additional arguments will be used to replace the specified patterns in the order they are used.
The following substitution patterns are supported:
%s - string
%i - integer
%d - integer
%f - float
%o - object
%b - binary
%x - hexadecimal
%e - exponent
Here are some examples of using substitution patterns in
console.log:
var user = new Object(); user.first = "Fred"; user.last = "Smith"; user.age = 10.01; console.log("Hi, %s %s!", user.first, user.last); console.log("%s is %i years old!", user.first, user.age); console.log("%s is %f years old!", user.first, user.age); // Output: // Hi, Fred Smith! // Fred is 10 years old! // Fred is 10.01 years old!
See Also
QuickStart: Debug JavaScript
Quickstart: Debug HTML and CSS
Introduction¶
The Pylomer API platform represents a collection of data access best-practices and industrial-strength security.
Organizations’ proprietary and sensitive data is often locked up in spreadsheets or behind corporate firewalls, making it inaccessible outside the office. With all the recent and very public data breaches, business owners have good reason to be concerned about the security of their important data. And with the move toward making that data accessible outside the office and on mobile devices, the problem is compounded.
Additionally, there is the problem of flexibility of spreadsheets and monolithic databases which function as large, integrated units. Anyone using spreadsheets extensively has seen the number of columns increase to the point where a spreadsheet becomes difficult to manage. Database tables often become similarly unwieldy, especially as they grow organically over time.
Microservices¶
The modern solution to these problems is the microservices model. Each type of data, or table, is kept small, and contains only the attributes closely associated with that type of data. Each table can be accessed directly over the Internet at its secure HTTP URL, or endpoint, with no need for special code to connect to a database. This opens the door to easy access by mobile devices, and preserves your options to access the data in other, possibly unanticipated, ways.
The Pylomer API Platform¶
The Pylomer API platform offers the best in security and flexibility for your data. Data is stored on Google’s hardened infrastructure and secured with industrial-strength authentication. No user without an authorized Gmail address can access your data. Period. Your data resides in the same NoSQL Datastore which Google uses both internally and for its public-facing applications such as AdWords.
The Pylomer API admin console allows easy management of the data and highlights the power and simplicity of data access. The source code for the admin console is provided as a reference guide for your web designer to access the data. The source code for the back end is also provided and can be customized and redeployed by a web developer as desired.
In summary, preventing vendor lock-in and intruder break-in should be at the forefront of any decisions by management when considering a platform for publishing data on the Internet for either public or internal use. The modern web services architecture and the Pylomer API platform offer both.
Is your company entitled to deduct input tax?
The value added tax is calculated as a percentage of the reward and makes up together with it the price the beneficiary has to pay.
The value added tax identification number (short: VAT ID) is a unique identifier of a taxable company within the European Union.
Why does one need a value added tax identification number?
The value added tax identification number authorizes participation in the EU-wide domestic market. Holders of such a number have the possibility to deliver goods free of tax to other EU member states, provided that the trading partner can also show a valid VAT ID.
When exporting to another EU country, no taxes are charged if the VAT ID is stated; instead, the recipient has to tax the income at the tax rates of the destination country.
Because of this destination principle, taxation is shifted to the recipient country.
Where can I find the VAT ID in STAFFOMATIC?
In your STAFFOMATIC account you will find the VAT ID under "account", "plans and billing" and "billing information and payment method".
Dear Staffomatic user,
May 16, 2019 is the day we want to take the next big step with you and make Staffomatic even more future-proof.
What does it mean?
On Thursday, 16 May 2019, from 18:00 until the early hours of the next morning, we will be making improvements to our server setup to pave the way for more exciting features and innovations.
How can I use Staffomatic during this time?
Our servers will not be able to process any changes during this period. Staffomatic is in "read-only mode"; data storage is not possible during this period. For this reason, we ask you not to make any changes to the duty roster or to any other data. Time tracking is also not usable during the work.
What do I have to do?
There's nothing specific to do on your side. Since we are hosting the new server setup in a new data center (still in Germany), there will be a small adjustment of the data processing contract. A message will be displayed in Staffomatic. From a data protection point of view, nothing changes, as the new data center is still located in Germany.
How long is this going to take?
The complete process will continue into the early morning hours. In rare cases, a second working day is required to transfer 100% of the system data.
We will inform you the day before and a few hours before the start of the restrictions and keep you up to date during the work. If you have any questions you can reach us as usual at [email protected]
Thank you for your understanding!
Your Staffomatic Team
You can use the following formatting options in the cards' titles:
- New line: Shift + Enter
- Italic: _italic_ or *italic*
- Strong: __strong__ or **strong**
- Link: [StoriesOnBoard]()
Please note that when formatted titles are edited in the connected integrated systems (including GitHub), the formatting will be lost as these systems don’t have support for formatting in titles.
Did you know that markdown formatting is also available in card details? | http://docs.storiesonboard.com/en/articles/826841-simple-formatting-options-in-cards-titles | 2019-07-15T20:18:21 | CC-MAIN-2019-30 | 1563195524111.50 | [array(['https://uploads.intercomcdn.com/i/o/26878297/2a71d348551e5226922a3884/image.png',
None], dtype=object) ] | docs.storiesonboard.com |
Material Categories
Below are previews of the different material categories.
Way 3 - TCP Balanced Round Robin with HAPROXY and PROXY
Important
This only applies to Web Safety version 6.0 and up. Older versions do not support PROXY protocol management from Admin UI.
In this case we will deploy a haproxy node in front of many proxy nodes. Browsers will connect to the haproxy node, which will distribute TCP connections to the proxy nodes using a round-robin scheme.
This deployment is different from the previously described Way 2 because the haproxy and Squid instances will be connected using the PROXY protocol. This protocol is used to notify Squid of the real IP addresses of haproxy clients (browsers). It allows full policy member matching by Active Directory name and by IP address, ranges, and subnets, removing the limitations described in the Way 2 article.

Note how each server is marked with the send-proxy directive:

server squid2 192.168.178.12:3128 check send-proxy
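A minimal sketch of the corresponding haproxy configuration section (the bind address, the squid1 entry, and the balance setting are assumptions for illustration; only the squid2 line comes from this setup):

listen squid_cluster
    bind 192.168.178.10:3128
    mode tcp
    balance roundrobin
    server squid1 192.168.178.11:3128 check send-proxy
    server squid2 192.168.178.12:3128 check send-proxy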
Enable support for the PROXY protocol in UI / Squid / Settings / Network by setting the Require presence of PROXY protocol header checkbox and providing haproxy's IP address in the address field as indicated on the following screenshot. Click Save and Restart.
If Active Directory integration is required, additional configuration steps are needed.
$ docker pull registry.redhat.io/openshift3/perl-516-rhel7
$ docker pull registry.redhat.io/rhscl/perl-520-rhel7
$ docker pull registry.redhat.io/rhscl/perl-524-rhel7
OpenShift Online provides S2I enabled Perl images for building and running Perl applications. The Perl S2I builder image assembles your application source with any required dependencies to create a new image containing your Perl application. This resulting image can be run either by OpenShift Online or by a container runtime.
RHEL 7 images are available through the Red Hat Registry:
$ docker pull registry.redhat.io/openshift3/perl-516-rhel7
$ docker pull registry.redhat.io/rhscl/perl-520-rhel7
$ docker pull registry.redhat.io/rhscl/perl-524-rhel7
You can use these images through the perl image streams. OpenShift Online also provides a sample template for a Perl Dancer application. This template builds and deploys the sample application on Perl 5.24 with a MySQL database, using a persistent volume for storage.
The sample application can be built and deployed using the
rhscl/perl-524-rhel7 image with the following command:
$ oc new-app --template=dancer-mysql-persistent
Update Branched Last Known Good¶
Action¶
Respond to the ticket and take ownership.
Rsync images from the tree that QA claims is LNG to alt. Do this from a system that mounts /mnt/koji, such as releng1. E.g., syncing the images from 20100315 as LNG:
$ rsync -avHh --progress --stats --exclude Packages \
    --exclude repodata --exclude repoview --exclude debug \
    --exclude drpms --exclude source \
    /mnt/koji/mash/branched-20100315/13/ \
    secondary1:/srv/pub/alt/stage/branched-20100315/
Update the lng symlink:
$ ssh secondary1 ln -sfT branched-20100315 /srv/pub/alt/stage/branched-lng
Update the ticket when complete and close it.
class ProgressTracker
A progress tracker helps surface information about the progress of an operation to a user interface or API of some kind. It lets you define a set of steps that represent an operation. A step is represented by an object (typically a singleton).
Steps may logically be children of other steps, which models the case where a large top level operation involves sub-operations which may also have a notion of progress. If a step has children, then the tracker will report the steps children as the "next step" after the parent. In other words, a parent step is considered to involve actual reportable work and is a thing. If the parent step simply groups other steps, then you'll have to step over it manually.
Each step has a label. It is assumed by default that the label does not change. If you want a label to change, then you can emit a ProgressTracker.Change.Rendering object on the ProgressTracker.Step.changes observable stream after it changes. That object will propagate through to the top level trackers changes stream, which renderers can subscribe to in order to learn about progress.
An operation can move both forwards and backwards through steps, thus, a ProgressTracker can represent operations that include loops.
A progress tracker is not thread safe. You may move events from the thread making progress to another thread by using the Observable subscribeOn call.
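A minimal Kotlin sketch of the usual pattern (the step names and labels here are invented for illustration):

import net.corda.core.utilities.ProgressTracker

// Steps are singleton objects with human-readable labels.
object FETCHING : ProgressTracker.Step("Fetching required data")
object SIGNING : ProgressTracker.Step("Signing the transaction")
object FINALISING : ProgressTracker.Step("Recording the transaction")

fun tracker() = ProgressTracker(FETCHING, SIGNING, FINALISING)

fun runOperation() {
    val progressTracker = tracker()
    // A renderer or API can observe progress via the changes stream.
    progressTracker.changes.subscribe { change -> println(change) }

    progressTracker.currentStep = FETCHING
    // ... do work ...
    progressTracker.currentStep = SIGNING
    // ... do work ...
    progressTracker.currentStep = FINALISING
}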
This guide will walk you through setting up the Garden framework.
Please follow the guide for your operating system:
If you'd like to run Kubernetes locally, please see our local Kubernetes guide for installation and usage information.
If you want to install Garden from source, see the instructions in our contributor guide.
For Mac, we recommend the following steps to install Garden. You can also follow the manual installation steps below if you prefer.
If you haven't already set up Homebrew, please follow their installation instructions.
You can easily install Garden using Homebrew or using our installation script.
brew tap garden-io/garden
brew install garden-cli
To later upgrade to the newest version, simply run
brew update and then
brew upgrade garden-cli.
curl -sL | bash
To later upgrade to the latest version, simply run the script again.
To install Docker, Kubernetes and kubectl, we recommend Docker for Mac.
Please refer to their installation guide for how to download and install it (which is a pretty simple process).
If you'd like to use a local Kubernetes cluster, please refer to the local Kubernetes guide for further information.
You can run Garden on Windows 10 Home, Pro or Enterprise editions.
Note: The Home edition doesn't support virtualization, but you can still use Garden if you're working with remote Kubernetes and in-cluster building.
To install the Garden CLI and its dependencies, please use our installation script. To run the script, open PowerShell as an administrator and run:
Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString(''))
The things the script will check for are the following:
The Chocolatey package manager. The script installs it automatically if necessary.
git and rsync . The script will install or upgrade those via Chocolatey.
Whether you have Hyper-V available and enabled. This is required for Docker for Windows. If it's available, the
installer will also ask if you'd like to install Docker for Windows. If you do not already have Hyper-V enabled,
the script will enable it, but you will need to restart your computer before starting Docker.
If applicable, whether Kubernetes is enabled in your Docker for Windows installation.
To later upgrade to the newest version, simply re-run the above script.
You need the following dependencies on your local machine to use Garden:
Git
rsync
And if you're building and running services locally, you need the following:
Use your preferred method or package manager to install
git and
rsync. On Ubuntu, that's
sudo apt install git rsync.
You can use our installation script to install Garden automatically:
curl -sL | bash
To later upgrade to the latest version, simply run the script again.
Or if you prefer to do it manually, download the Garden CLI for your platform from our latest release page, extract and make sure it is on your PATH. E.g. by extracting to
~/.garden/bin and adding
export PATH=$PATH:~/.garden/bin to your
.bashrc or
.zshrc file.
If you're installing manually, please make sure you copy all the files in the release package to the directory you're including in your PATH. For Windows and Linux, there's a
garden binary and
static directory, and for macOS there's an additional
fse.node binary. The
garden CLI expects these files to be next to the
garden binary.
To install Docker, please follow the instructions in the official documentation.
If you'd like to use a local Kubernetes cluster, please refer to the local Kubernetes guide for installation and usage information.
If you're running Garden behind a firewall, you may need to use a proxy to route external requests. To do this, you need to set the
HTTP_PROXY,
HTTPS_PROXY and
NO_PROXY environment variables. For example:
export HTTP_PROXY= # <- Replace with your proxy address.
export HTTPS_PROXY=$HTTP_PROXY # <- Replace if you use a separate proxy for HTTPS.
export NO_PROXY=local.app.garden,localhost,127.0.0.1 # <- This is important! See below.
The
NO_PROXY variable should include any other hostnames you might use for local development, since you likely don't want to route local traffic through the proxy. | https://docs.garden.io/basics/installation | 2019-07-15T20:30:54 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.garden.io |
Here is how to install the PHP version of the server monitoring agent on a cPanel hosting account, step by step.
Step 1: Download the “hetrixtools_agent.php” file from GitHub and save it on your local computer.
(a)
Right-click on the link below, then select “Save Link As…”:
Step 2: Upload the “hetrixtools_agent.php” file to your webhosting cPanel account.
(a)
Open your cPanel web interface and click on “File Manager”:
(b)
In the “File Manager” make sure you are currently in the home folder of your account, not in the public_html folder:
(c)
Create a new folder called “hetrixtools” in your home directory:
(d)
Open the newly created “hetrixtools” folder:
(e)
Upload the “hetrixtools_agent.php” file from your local computer into the “hetrixtools” folder:
(f)
Select the “hetrixtools_agent.php” file that you have downloaded at Step 1 Paragraph (a) in this guide.
Step 3: Edit the “hetrixtools_agent.php” file.
(a)
Get the SID (Server ID) from the Agent Install screen:
(b)
In your cPanel, select the “hetrixtools_agent.php” file and click on “Edit”:
(c)
In the Editor window, locate the SIDPLACEHOLDER text:
(d)
Replace SIDPLACEHOLDER with the Server ID you have gotten at Step 3 Paragraph (a) and then save the changes to the file:
(e)
Before closing the Editor window, make sure to copy the path of the “hetrixtools_agent.php” file. You can find this in the top left corner of the Editor window:
The “hetrixtools_agent.php” file location should look something like this: /home/yourusername/hetrixtools/hetrixtools_agent.php
It differs based on your cPanel username and based on the location of your cPanel account on the actual server, so make sure you get your correct path from the Editor window as explained above.
Step 4: Configuring the cronjob that runs the agent.
(a)
From your cPanel web interface go to Cronjobs:
(b)
Here you’ll have to create a new cronjob for the agent.
The “Command” field must be: “php – q” (without the quote marks) followed by a space, followed by the script path that you got at Step 3 Paragraph (e).
And you’re all done. If the steps above have been executed properly, our platform will start receiving data within (2) two minutes from when you’ve added the cronjob. | https://docs.hetrixtools.com/install-the-php-version-of-the-server-monitoring-agent/ | 2019-07-15T20:17:15 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.hetrixtools.com |
.
extension ViewController: SlyceViewControllerDelegate { func slyceViewController(_ viewController: SlyceViewController, shouldDisplayDefaultDetailFor itemDescriptor: SlyceItemDescriptor) -> Bool let result = itemDescriptor.item; // Present your view controller here after retrieving the desired data from the item descriptors. return false } func slyceViewController(_ viewController: SlyceViewController, shouldDisplayDefaultListFor itemDescriptors: [SlyceItemDescriptor]) -> Bool { let results = itemDescriptors; let topResult = itemDescriptors[0].item; // Present your view controller here after retrieving the desired data from the item descriptors. return false } }
Bug Fixes:
We discovered the way the timezone was being passed was causing some issues calculating some metrics with our analytics. This has been resolved in the 5.2.1 release, and we highly recommend the upgrade if you are utilizing the Slyce Analytics. | https://docs.slyce.it/hc/en-us/articles/360017697651-iOS-5-2-1-Release-Notes- | 2019-07-15T20:58:47 | CC-MAIN-2019-30 | 1563195524111.50 | [array(['/hc/article_attachments/360012956631/Exit.jpeg', 'Exit.jpeg'],
dtype=object) ] | docs.slyce.it |
pongon success¶
pongon successful contact. It does not make sense in playbooks, but it is useful from
/usr/bin/ansibleto verify the ability to login and that a usable Python is configured.
See also
# Test we can logon to 'webservers' and execute python with json lib. # ansible webservers -m ping # Example from an Ansible Playbook - ping: # Induce an exception to see what happens - ping: data: crash
Common return values are documented here, the following are the fields unique to this module:
More information about Red Hat’s support of this module is available from this Red Hat Knowledge Base article. | https://docs.ansible.com/ansible/latest/modules/ping_module.html | 2019-07-15T21:08:14 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.ansible.com |
Elementarily, a data lake is a storage for the monstrous amount of data, both structured and unstructured in their native formats which handles the three Vs of big data (Volume, Velocity, and Variety). Data lake eliminates all the restrictions of a typical data warehouse system by providing unlimited storage, unrestricted file size, schema-on-read, and various ways to access data ( including SQL-like queries and ad hoc queries using Presto, Apache Impala etc.)
This article will focus on how to connect to a Hevo Data Lake as a destination.
Prerequisites
Hevo Data Lake needs to access the S3 bucket. Copy the following Bucket Policy to the bucket.
{
"Version": "2012-10-17",
"Id": “access-to-hevo-data-lake",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS Account ID>:role/<EMR Role for EC2>"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::<S3 Bucket Name>/*",
"arn:aws:s3:::<S3 Bucket Name>"
]
}
]
}
AWS Account ID: You can find your Account ID Number on the AWS Management Console, choose Support on the navigation bar on the upper-right, and then choose Support Center. Your currently signed-in account number (ID) appears in the Support Center title bar.
EMR Role for EC2: An IAM role is an IAM identity that you can create in your account that has specific permissions. You can find more about the Role of the EMR here.
S3 Bucket Name: The name of the S3 bucket involved here.
Setup Guide
- A destination can either be added while creating a pipeline or by directly heading to Destinations option under the Admin tab on the left app bar and clicking Add Destination button.
- Select Destination type as Data Lake from the Select Destination Type drop-down
- Configure the Tenant Settings for the execution layer of the Data Lake.
- Create a new Tenant by clicking on Add New Tenant or select an existing one by selecting the radio button. It is highly recommended to not make a new tenant pointing to the existing tenant's cluster.
- Tenant Name: A unique name for the tenant
- Executor Host: Host IP of the master node where of your EMR Cluster
- Executor Port: Port of the Livy Server running on your EMR Cluster, it is 8998 by default
- Metastore Host: Host IP of the Hive Metastore
- Metastore Port: Port of the Hive Metastore
- JDBC Host: Host IP of the JDBC Server
- JDBC Port: Port of the JDBC Server
- Click on Save Tenant to continue with setting up the storage layer.
- Configure the storage layer of the Data Lake.
- Destination Name: A unique name for the destination.
- Database Name: The database where all the tables will be, if it doesn’t exist, it will be created for you.
- Bucket Name: Since we're using S3 as the data store, this bucket denotes S3 Bucket name you want to dump the data to.
- Prefix: A location prefix where you want your data to be in.
- File Format: Select one of the file formats appropriate to your use case.
- Click on Continue to save the destination. You can test the connection. Hevo tries to create and delete a dummy table with name ‘dummy_table’. You’ll know if it encounters a failure.
Note: You’ll get a thrift URI which you can use to plug any external query engine like Presto or Apache Impala.
Please sign in to leave a comment. | https://docs.hevodata.com/hc/en-us/articles/360013439694-Hevo-Data-Lake | 2019-07-15T21:19:20 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.hevodata.com |
@ThreadSafe public interface AdvancedCacheLoader<K,V> extends CacheLoader<K,V>
CacheLoaderinterface that allows processing parallel iteration over the existing entries.
contains, init, load
start, stop
void process(AdvancedCacheLoader.KeyFilter<K> filter, AdvancedCacheLoader.CacheLoaderTask<K,V> task, Executor executor, boolean fetchValue, boolean fetchMetadata)
CacheLoaderTask#processEntry(org.infinispan.marshall.core.MarshalledEntry, TaskContext)is invoked. Before passing an entry to the callback task, the entry should be validated against the filter. Implementors should build an
AdvancedCacheLoader.TaskContextinstance (implementation) that is fed to the
AdvancedCacheLoader.CacheLoaderTaskon every invocation. The
AdvancedCacheLoader.CacheLoaderTaskmight invoke
AdvancedCacheLoader.TaskContext.stop()at any time, so implementors of this method should verify TaskContext's state for early termination of iteration. The method should only return once the iteration is complete or as soon as possible in the case TaskContext.stop() is invoked.
filter- to validate which entries should be feed into the task. Might be null.
task- callback to be invoked in parallel for each stored entry that passes the filter check
executor- an external thread pool to be used for parallel iteration
fetchValue- whether or not to fetch the value from the persistent store. E.g. if the iteration is intended only over the key set, no point fetching the values from the persistent store as well
fetchMetadata- whether or not to fetch the metadata from the persistent store. E.g. if the iteration is intended only ove the key set, then no pint fetching the metadata from the persistent store as well
PersistenceException- in case of an error, e.g. communicating with the external storage
int size()
PersistenceException- in case of an error, e.g. communicating with the external storage
Copyright © 2014 JBoss, a division of Red Hat. All Rights Reserved. | https://docs.jboss.org/infinispan/6.0/apidocs/org/infinispan/persistence/spi/AdvancedCacheLoader.html | 2019-07-15T21:41:57 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.jboss.org |
The. to use the Docker containerizer. This means that to use the Docker containerizer you need to upgrade Docker on the agent nodes each time a new version of Docker comes out.
- The UCR is more stable and allows deployment at scale.
- The UCR offers features not available in the Docker containerizer, such as GPU and CNI support.
- The UCR allows you to take advantage of continuing innovation within both the Mesos and DC/OS, including features such as IP per container, strict container isolation, and more.
Provision Containers with the Universal Container Runtime from the DC/OS Web InterfaceProvision Containers with the Universal Container Runtime from the DC/OS Web Interface
PrerequisitePrerequisite
If your service pulls Docker images from a private registry, you must specify the
cluster_docker_credentials_path in your
config.yaml file before installing DC/OS.
Specify the UCR from the web interface. Go to Services > Run a Service > Single Container > More Settings. In the Container Runtime section, choose the Universal Container Runtime radio button.
In the Container Image field, enter your container image.
Provision Containers with the Universal Container Runtime from the DC/OS CLIProvision Containers with the Universal Container Runtime from the DC/OS CLI
PrerequisitePrerequisite
If your service pulls Docker images from a private registry, you must specify the
cluster_docker_credentials_path in your
config.yaml file before installing DC/OS.
- Specify the container type
MESOSand a the appropriate object in your Marathon application definition. Here, we specify a Docker container with the
dockerobject. is a preview feature in DC/OS 1.9.
- The UCR does not support the following: runtime privileges, Docker options, force pull, named ports, numbered ports, bridge networking, port mapping, private registries with container authentication. | https://docs.mesosphere.com/1.9/deploying-services/containerizers/ucr/ | 2019-07-15T20:43:38 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.mesosphere.com |
If a KVM node hosting the Virtualized Control Plane has failed and recovery is not possible, you can recreate the KVM node from scratch with all VCP VMs that were hosted on the old KVM node. The replaced KVM node will be assigned the same IP addresses as the failed KVM node. | https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/openstack-operations/manage-vcp/replace-kvm.html | 2021-01-16T02:38:25 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.mirantis.com |
CloudWatchDestination
An object that defines an Amazon CloudWatch destination for email events. You can use Amazon CloudWatch to monitor and gain insights on your email sending metrics.
Contents
- DimensionConfigurations
An array of objects that define the dimensions to use when you send email events to Amazon CloudWatch.
Type: Array of CloudWatchDimensionConfiguration objects
Required: Yes
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/pinpoint-email/latest/APIReference/API_CloudWatchDestination.html | 2021-01-16T03:54:24 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.aws.amazon.com |
- Product
- Customers
- Solutions
This check monitors Ambari through the Datadog Agent.
The Ambari check is included in the Datadog Agent package. No additional installation is needed on your server.
To configure this check for an Agent running on a host:
Edit the
ambari.d/conf.yaml file, in the
conf.d/ folder at the root of your Agent’s configuration directory to start collecting your Ambari performance data. See the sample ambari.d/conf.yaml for all available configuration options.
init_config: instances: ## @param url - string - required ## The URL of the Ambari Server, include http:// or https:// # - url: localhost
Available for Agent versions >6.0
Collecting logs is disabled by default in the Datadog Agent. Enable it in your
datadog.yaml file:
logs_enabled: true
Edit your
ambari.d/conf.yaml by uncommenting the
logs lines at the bottom. Update the logs
path with the correct path to your Ambari log files.
logs: - type: file path: /var/log/ambari-server/ambari-alerts.log source: ambari service: ambari log_processing_rules: - type: multi_line name: new_log_start_with_date # 2019-04-22 15:47:00,999 pattern: \d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01]) ...
ambari under the Checks section.
This integration collects for every host in every cluster the following system metrics:
If service metrics collection is enabled with
collect_service_metrics this integration collects for each whitelisted service component the metrics with headers in the white list.
ambari.can_connect:
Returns
OK if the cluster is reachable, otherwise returns
CRITICAL.
ambari.state:
Returns
OK if the service is installed or running,
WARNING if the service is stopping or uninstalling,
or
CRITICAL if the service is uninstalled or stopped.
Ambari does not include any events.
Need help? Contact Datadog support. | https://docs.datadoghq.com/integrations/ambari/ | 2021-01-16T03:03:58 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.datadoghq.com |
Eucalyptus provides access to the current view of service state and the ability to manipulate the state. You can inspect the service state to either ensure system health or to identify faulty services. You can modify a service state to maintain activities and apply external service placement policies.
Use the
euserv-describe-services command to view the service state. The output indicates:
-aflag.
You can also make requests to retrieve service information that is filtered by either:
-eventsto return a summary of the last fault. You can retrieve extended information (primarily useful for debugging) by specifying
-events -events-verbose. provides a list of components and their respective statuses. This allows you to find out if a service is enabled without requiring cloud credentials.
To modify a service:
Enter the following command on the CLC, Walrus, or SC machines:
systemctl stop eucalyptus-cloud.service
On the CC, use the following command:
systemctl stop eucalyptus-cluster.service
If you want to shut down the SC for maintenance. The SC is
SC00 is
ENABLED and needs to be
DISABLED for maintenance.
To stop
SC00 first verify that no volumes or snapshots are being created and that no volumes are being attached or detached, and then enter the following command on SC00:
systemctl stop eucalyptus-cloud.service
To check status of services, you would enter:
euserv-describe-services
When maintenance is complete, you can start the eucalyptus-cloud process on
SC00 , which will enter the
DISABLED state by default.
systemctl start eucalyptus-cloud.service
Monitor the state of services using
euserv-describe-services until
SC00 is
ENABLED . | https://docs.eucalyptus.cloud/eucalyptus/5/admin_guide/managing_system/system_tasks/inspect_health/ | 2021-01-16T02:44:07 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.eucalyptus.cloud |
The Document Modeling Plugin is designed to assist in modeling the document structure. This plugin uses concepts from the OMG SysML standard, familiar to systems engineers. The plugin is designed not only for systems engineers, but also for systems analysts, systems architects, or anyone who needs to model a document structure for a specific project. The Document Modeling Plugin allows you to review a prepared document structure in the document preview dialog and save the document in .pdf, .html, or .xml file formats. The following SysML concepts were used for the document modeling:
- Conform
- Viewpoint
- View
- Expose | https://docs.nomagic.com/display/DMP190SP4/Document+Modeling+Plugin?reload=true | 2021-01-16T03:17:08 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.nomagic.com |
To enable users to log into the SS management console, you create user accounts and assign them roles, which are sets of permissions. You can add individual users or import users in bulk.
Adding a new user and assigning roles
- Log on to the product's Management Console. In the "Configure" menu, click "Users and Roles" to access "System User Store."
For example,
Then click on the "Users" link.
Note
The "Users" link is only visible to users with "Admin" permission. It is used to add new user accounts and modify or delete existing accounts.
- Click on the "Add New User" link.
- The "Add User" window opens. The first step requires you to enter the user name and password. If you want to add a user with the default "Everyone" role, click "Finish". Else, click "Next" to define a user role other than the default.
- If you proceed to the next step, a window will appear for you to select the roles to be assigned to the user. This can be done by selecting the appropriate check-boxes or using the "Select all"/"Unselect all" links.
- Click "Finish" once done. A new user account will be created with the specified roles. The user name is displayed in the "Users" list..
Overview
Content Tools
Activity | https://docs.wso2.com/pages/viewpage.action?pageId=28711597 | 2021-01-16T02:56:36 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.wso2.com |
Using Service-Linked Roles for Managed Blockchain
Amazon Managed Blockchain uses AWS Identity and Access Management (IAM) service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to Managed Blockchain. Service-linked roles are predefined by Managed Blockchain and include all the permissions that the service requires to call other AWS services on your behalf.
A service-linked role makes setting up Managed Blockchain easier because you don’t have to manually add the necessary permissions. Managed Blockchain defines the permissions of its service-linked roles, and unless defined otherwise, only Managed Blockchain can assume its roles. The defined permissions include the trust policy and the permissions policy. The permissions policy cannot be attached to any other IAM entity.
You can delete a service-linked role only after first deleting its related resources. This protects your Managed Blockchain Managed Blockchain
Managed Blockchain uses the service-linked role named AWSServiceRoleForAmazonManagedBlockchain. This role enables access to AWS Services and Resources used or managed by Amazon Managed Blockchain.
The AWSServiceRoleForAmazonManagedBlockchain service-linked role trusts the following services to assume the role:
managedblockchain.amazonaws.com
The role permissions policy allows Managed Blockchain to complete actions on the specified resources shown in the following example policy.
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "logs:CreateLogGroup" ], "Effect": "Allow", "Resource": "arn:aws:logs:*:*:log-group:/aws/managedblockchain/*" }, { "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:*:*:log-group:/aws/managedblockchain/*:log-stream:*" ] } ] }
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. For more information, see Service-Linked Role Permissions in the IAM User Guide.
Creating a Service-Linked Role for Managed Blockchain
You don't need to manually create a service-linked role. When you create a network, a member, or a peer node, Managed Blockchain creates the service-linked role for you. It doesn't matter if you use the AWS Management Console, the AWS CLI, or the AWS API. The IAM entity performing the action must have permissions to create the service-linked role. After the role is created in your account, Managed Blockchain can use it for all networks and members.
If you delete this service-linked role, and then need to create it again, you can use the same process to recreate the role in your account. When you create a network, member, or node, Managed Blockchain creates the service-linked role for you again.
Editing a Service-Linked Role for Managed Blockchain
Managed Blockchain does not allow you to edit the AWSServiceRoleForAmazonManagedBlockchain Managed Blockchain Managed Blockchain service is using the role when you try to delete the resources, then the deletion might fail. If that happens, wait for a few minutes and try the operation again.
To manually delete the service-linked role
Use the IAM console, the AWS CLI, or the AWS API to delete the AWSServiceRoleForAmazonManagedBlockchain service-linked role. For more information, see Deleting a Service-Linked Role in the IAM User Guide.
Supported Regions for Managed Blockchain Service-Linked Roles
Managed Blockchain supports using service-linked roles in all of the Regions where the service is available. For more information, see AWS Regions and Endpoints. | https://docs.aws.amazon.com/managed-blockchain/latest/hyperledger-fabric-dev/using-service-linked-roles.html | 2021-01-16T03:31:21 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.aws.amazon.com |
Workaround of the Data Synchronization
If you need several PHP application servers in your environment, you can easily add them without worries about additional configurations.
The newly added instances can be synchronized with the first added node. To achieve this, you just need to follow the next workflow:
1. Log in to the Jelastic dashboard.
2. Click Create environment to set up a new.
Note:
- can scale it in to a single node and then up to a needed number of instances. Also, you can use WebDAV module or perform manual synchronization via configuration manager.
- You can use the initial (master) node of the layer as your storage server for sharing data within the whole layer. | https://docs.jelastic.com/data-synchronization/ | 2021-01-16T02:53:56 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['01-environment-wizard.png', 'environment wizard'], dtype=object)
array(['02-upload-application-archive.png', 'upload application archive'],
dtype=object)
array(['03-deploy-application.png', 'deploy application'], dtype=object)
array(['04-data-synchronization-during-scaling.png',
'data synchronization during scaling'], dtype=object)] | docs.jelastic.com |
Running Multiple Domain Names on PHP Server
Using multiple domains provides you with ability to increase the usability, efficiency and scalability of your PHP application and of course saves your money without necessity to set up separate instances.
So, let’s see how to run multiple domains on PHP application server to make your PHP application even more scalable and effective.
1. Log in to Jelastic Manager.
2. Click Create environment at the top left corner of the dashboard.
3. In the opened wizard navigate to PHP tab, pick application server and specify the number of resources your application needs. After that enter the name for environment and click Create button.
In some seconds your environment will appear on the Jelastic dashboard.
4. You need to have the names in DNS, resolving to your IP address. So, buy domain names for your environment. It can be done in two ways: by adding CNAME record or by setting A Records. You can read more here.
5. After that click the Settings button for your environment and bind your domains. As an example we use the following URLs: mydomain.com and myseconddomain.com.
6. Now you can upload zip packages with your apps to the Deployment manager and deploy them to the environment you’ve created earlier.
7. Once your applications are successfully deployed you need to specify your virtual host configurations.
- for Apache
Click Config button next to the Apache server and open the httpd.conf file (in conf directory). Set VirtualHost parameters for two domain names separately by specifying the paths to the deployed contexts and the names of domains:
- for NGINX
Click Config button next to the NGINX server and open the nginx.conf file in the conf directory.
Specify your settings in the server block
- server_name (your domain)
- ROOT (the context you stated while deploying)
Note that you need to have a separate server block with its own settings for each domain which you bind.
In our case the settings will be following:
8. Don’t forget to Save the changes and Restart application server in order to apply new settings.
9. Now you can check the results to ensure that all works properly.
Hope this instruction will be useful for you. Domain names are very crucial pieces of your online identity so don’t forget to protect them. With Jelastic it takes just a few minutes. Enjoy! | https://docs.jelastic.com/multiple-domains-php/ | 2021-01-16T02:22:40 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['01-environment-wizard.png', 'environment wizard'], dtype=object)
array(['02-php-environment-for-multi-domains.png',
'PHP environment for multi domains'], dtype=object)
array(['03-bind-domain.png', 'bind domain'], dtype=object)
array(['04-upload-first-application.png', 'upload first application'],
dtype=object)
array(['05-upload-second-application.png', 'upload second application'],
dtype=object)
array(['06-apache-httpd-conf.png', 'Apache httpd conf'], dtype=object)
array(['07-nginx-conf.png', 'NGINX nginx conf'], dtype=object)
array(['08-restart-apache.png', 'restart Apache'], dtype=object)
array(['09-php-application-in-browser.gif', 'PHP application in browser'],
dtype=object) ] | docs.jelastic.com |
17.15. Clipping and merging raster layers¶
Note
In this lesson we will see another example of spatial data preparation, to continue using geoalgorithms in real-world scenarios.
For this lesson, we are going to calculate a slope layer for an area surrounding a city area, which is given in a vector layer with a single polygon. The base DEM is divided in two raster layers that, together, cover an area much larger than that around the city that we want to work with. If you open the project corresponding to this lesson, you will see something like this.).
Both of them are easily solvable with the appropriate geoalgorithms.
First, we create a rectangle defining the area that we want. To do it, we create a layer containing the bounding box of the layer with the limits of the city area, and then we buffer it, so as to have a raster layer that covers a bit more that the strictly necessary.
To calculate the bounding box , we can use the Polygon from layer extent algorithm
To buffer it, we use the Fixed distance buffer algorithm, with the following parameter values.
Warning.
Note GDAL Merge algorithm.
Note
You can save time merging first and then cropping, and you will avoid calling the clipping algorithm twice. However, if there are several layers to merge and they have a rather big size, you will end up with a large layer than it can later be difficult to process. In that case, you might have to call the clipping algorithm several times, which might be time consuming, but don’t worry, we will soon see that there are some additional tools to automate that operation. In this example, we just have two layers, so you shouldn’t worry about that now.
With that, we get the final DEM we want.
Now it is time to compute the slope layer.
A slope layer can be computed with the Slope, Aspect, Curvature algorithm, but the DEM obtained in the last step is not suitable as input, since elevation values are in meters but cellsize is not expressed in meters (the layer uses a CRS with geographic coordinates). A reprojection is needed. To reproject a raster layer, the Warp (reproject) algorithm can be used again. We reproject into a CRS with meters as units .
Warning
todo: Add image
The reprojection processes might have caused the final layer to contain data outside the bounding box that we calculated in one of the first steps. This can be solved by clipping it again, as we did to obtain the base DEM. | https://docs.qgis.org/3.16/en/docs/training_manual/processing/cutting_merging.html | 2021-01-16T03:24:53 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['../../../_images/medfordarea.png',
'../../../_images/medfordarea.png'], dtype=object)
array(['../../../_images/bbox.png', '../../../_images/bbox.png'],
dtype=object)
array(['../../../_images/buffer_dialog.png',
'../../../_images/buffer_dialog.png'], dtype=object)
array(['../../../_images/buffer1.png', '../../../_images/buffer1.png'],
dtype=object)
array(['../../../_images/buffer_squared.png',
'../../../_images/buffer_squared.png'], dtype=object)
array(['../../../_images/warp.png', '../../../_images/warp.png'],
dtype=object)
array(['../../../_images/clip1.png', '../../../_images/clip1.png'],
dtype=object)
array(['../../../_images/merge.png', '../../../_images/merge.png'],
dtype=object)
array(['../../../_images/finaldem.png', '../../../_images/finaldem.png'],
dtype=object)
array(['../../../_images/slope.png', '../../../_images/slope.png'],
dtype=object)
array(['../../../_images/slopereproj.png',
'../../../_images/slopereproj.png'], dtype=object)
array(['../../../_images/metricconversions.png',
'../../../_images/metricconversions.png'], dtype=object)] | docs.qgis.org |
Timetabling
Importing timetables from third-parties, XUNO timetabling, and configurations
- Exporting your timetable from Timetabler
- How to add new students to your Basic Timetable
- How to create attendance types
- How to generate a basic timetable within XUNO
- How to import your timetable
- How to manage attendance and timetable data
- How to move students between classes in Xuno timetables
- How to setup your timetable periods
- How XUNO handles team teaching, composite classes, yard-duties and attendance
- Importing your timetable from EdVal
- Importing your timetable from First Class
- Importing your timetable from MAZE
- Timetable import - specifications to create a csv
- Troubleshooting: Student, Staff or Class details not showing or updating in XUNO?
- Video: How to import your timetable into XUNO
- Video: How to manage your timetable and attendance data | https://docs.xuno.com.au/category/14-timetabling | 2021-01-16T03:44:27 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.xuno.com.au |
Help Center
Make a Zivver account
Introduction
This document explains how to create a free Zivver account.
Create a Zivver account
- Go to the Zivver WebApp signup page.
- Enter your e-mail address.
- Click REGISTER.
You will now receive an email from Zivver in the email box of your email address.
- Go to your mailbox.
- Open the email from Zivver.
- Click Complete registration.
A new window opens.
- Enter your full name.
- Create a strong password.
- Enter the password again.
- Tick the option I agree with the General Terms and Conditions for Consumers and the Privacy and Cookie Statement.
- Click ACTIVATE.
You will now receive an email with a recovery code in the inbox of your email address. Save this recovery code. You will need it when your password changes.
- Click Continue.
You have successfully created a Zivver account.
You must set up two-factor authentication (2FA) for your Zivver account before you can send secure messages with Zivver. There are multiple ways to do so, depending on what you wish to use as a second factor:
- Log in with an SMS code
- Use an Authenticator App
- Use the Chrome Authenticator extension | https://docs.zivver.com/en/guest/signup-for-zivver.html | 2021-01-16T02:05:57 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.zivver.com |
User Management allows you to add additional user login access to the appbase.io dashboard. You can access User Management from the
Security view in your dashboard.
Users need to sign in via the login URL provided in the dashboard using the provided username and password. Passwords are encrypted and can't be seen once set.
Create A New User
When creating a new user, you can either provide admin privileges or set the specific scopes for actions that this user can take via the UI or APIs. Please read about each scope and its privileges carefully before granting access to the user.
Develop scope users can access Elasticsearch APIs. For example, they can do the following actions:
- Create/delete indices
- Import data to any index
- Browse data
- Access request logs
- Search Relevancy users with this scope has access to
Search Relevancyand
Developviews. This scope is suitable for users who maintain the search in your team. Check the
Search Relevancydocs to know more.
- Analytics users with analytics scope can access all the analytics views to evaluate the search performance. Additionally, they'll get the monthly insights report.
- Curated Insights scope users can access the
Curated Insightsview in
dashboardto subscribe to appbase.io curated insights. You can read more about it here.
Access Control users can access the API Credentials, Role Based Access Control and Search Template views.
User Management scope users can do the following actions:
- Create new users,
- Edit existing users. For example, modify the privileges of other users,
- Delete the existing users.
Billing scope allows access to the
Billingpage in dashboard. A user with this scope can add / edit payment methods and make changes to the subscription.
Downtime Alerts In case of service downtime, appbase.io send emails to all the users with the
Downtime Alertsscope.
User Management vs API Credentials
When you need to provide GUI access to the dashboard, we recommend creating a new user. On the other hand, when you need to offer programmatic access to a subset of the APIs or set restrictions based on IPs, HTTP Referers, fields, time, we recommend creating an API credential. | https://docs.appbase.io/docs/security/user-management/ | 2021-05-06T12:52:01 | CC-MAIN-2021-21 | 1620243988753.97 | [array(['https://i.imgur.com/yoCzEmG.png', 'usermanagement'], dtype=object)
array(['https://i.imgur.com/ocOgZpD.png', 'Create a New User'],
dtype=object)
array(['https://i.imgur.com/WdY8mMx.png', 'develop'], dtype=object)
array(['https://i.imgur.com/FcQysyK.png', 'relevancy'], dtype=object)
array(['https://i.imgur.com/Ts78oD2.png', 'analytics'], dtype=object)] | docs.appbase.io |
Does Spotlight PRO work without a valid license?
No, Spotlight PRO requires a valid and active license key to function.
If a license is left to expire or is deactivated from a website, Spotlight will fall back to using the free version and its features. For example, if you were using Spotlight PRO's Highlight layour and your license expired, that Instagram feed will switch to the Grid layout and use only the available free features from your feed's Design options. | https://docs.spotlightwp.com/article/777-does-spotlight-pro-work-without-a-valid-license | 2021-05-06T12:46:15 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.spotlightwp.com |
Queens Series Release Notes¶
8.4.1-153]
Enabled collectd on overcloud nodes to connect to local QDR running on each overcloud node in metrics_qdr container.
Add a role specific parameter, ContainerCpusetCpus, default to ‘all’, which allows to limit the specific CPUs or cores a container can use. To disable it and rely on container engine default, set it to ‘’.
deep_compare is now enabled by default for stonith resources, allowing their properties to be updated via stack update. To disable it set ‘tripleo::fencing::deep_compare: false’.
Add new parameter ‘GlanceImageImportPlugins’, to enable plugins used by image import process. Add parameter ‘GlanceImageConversionOutputFormat’, to provide desired output format for image conversion plugin..
ServiceNetMap now handles any network name when computing the default network for each service in ServiceNetMapDefaults.
Partial backport from train to use bind mounts for certificates. The UseTLSTransportForNbd is not available in queens..
8.4.1¶
New Features¶
Created a ExtraKernelPackages parameter to allow users to install additional kernel related packages prior to loading the kernel modules defined in ExtraKernelModules..
Adds support for Ironic Networking Baremetal. Networking Baremetal is used to integrate the Bare Metal service with the Networking service.
Avoid life cycle issues with Cinder volumes by ensuring Cinder has a default volume type. The name of the default volume type is controlled by a new CinderDefaultVolumeType parameter, which defaults to “tripleo”. Fixes bug 1782217..
Fixes an issue whereby TLS Everywhere brownfield deployments were timing out because the db entry for cell0 in the database was not being updated in step 3. This entry is now updated in step 3..
Other Notes¶.
8.3.1¶
8.3.0¶
New Features¶
Added support for containerized networking-ansible Ml2 plugin.
Added support for networking-ansible ML2 plugin.
Add OctaviaEventStreamDriver parameter to specify which driver to use for syncing Octavia and Neutron LBaaS databases..
The default Octavia event_streamer_driver has changed from queue_event_streamer to noop_event_streamer. See
Fixed an issue where if Octavia API or Glance API were deployed away from the controller node with internal TLS, the service principals wouldn’t be created..
CephOSD/Compute nodes crash under memory pressure unless custom tuned profile is used (bug 1800232).
8.2.0¶
New Features¶
Add support for ODL deployment on IPv6 networks.
Added Dell EMC SC multipath support This change adds support for cinder::backend::dellsc_iscsi::use_multipath_for_image_xfer Added a new parameter CinderDellScMultipathXfer.
8.1.0¶
New Features¶
Allow plugins that support it to create VLAN transparent networks The vlan_transparent determines if plugins that support it to create VLAN transparent networks or not
Add ‘neutron::plugins::ml2::physical_network_mtus’ as a NeutronML2PhysicalNetworkMtus in heat template to allow set MTU in ml2 plugin”.
Bug Fixes¶
Launchpad bug 1788337 that affected the overcloud deployment with TLS Everywhere has been fixed. The manila bootstrap container no longer fails to connect securely to the database..
Ping the default gateways before controllers in validation script. In certain situations when using IPv6 its necessary to establish connectivity to the router before other hosts..
8.0.7¶
New Features¶
Add cleanup services for neutron bridges that work with container based deployments.
Introduce NovaLibvirtRxQueueSize and NovaLibvirtTxQueueSize to set virtio-net queue sizes as a role parameter. Valid values are 256, 512 and 1024
Deprecation Notes¶.
Bug Fixes¶
Previously the default throughput-performance was set on the computes. Now virtual-host is set as default for the Compute roles. For compute NFV use case cpu-partitioning, RT realtime-virtual-host and HCI throughput-performance.
Previously, when blacklisting all servers of the primary role, the stack would fail since the bootstrap server id was empty. The value is now defaulted in case all primary role servers are blacklisted..
8.0.5¶
New Features¶
Adds docker service for Neutron SFC. NFS configuration of storage backend for Nova. This way the instance files will be stored on a shared NFS storage.
Upgrade Notes¶.
Bug Fixes¶.
This fixes an issue with the yaml-nic-config-2-script.py script that converts old-style nic config files to new-style. It now handles blank lines followed by a comment line.).
Moving to file logging for ODL as docker logs, sometimes, miss older logs due to journal rollover.
8.0.4¶
New Features¶
Adds support for configuring the cinder-backup service with an NFS backend..
Adds network_plugin_ipv6_enabled, emc_ssl_cert_verify and emc_ssl_cert_path options for Manila VNX driver.
Upgrade Notes¶
Containerized memcached logs to stdout/stderr instead of a file. Its logs may be picked up via journald.
Deprecation Notes¶
The Debug parameter do not activate Memcached debug anymore. You have to pass MemcachedDebug explicitly.
Bug Fixes¶
Fixes bug 1744174.
Fix a typo in the manila-share pacemaker template which was causing failures on upgrades and updates.
Fixes update and upgrade along with modifying configuration for OpenDaylight deployments. See
Fixes minor updates issue for ovn dbs pacemaker bundle resource by tagging the docker image used for ovn dbs pacemaker resource with pcmklatest and adding required missing tasks in “update_tasks” and “upgrade_tasks” section of the service file.
8.0.3¶
New Features¶
Makes collectd deployment default output metrics data to Gnocchi instance running on overcloud nodes.
Adds possibility to override default polling interval for collectd and set default value to 120 seconds, because current default (10s) was too aggressive.
Add support for Neutron LBaaSV2 service plugin in a containerized deployment.
Allow users to specify SSH name and public key to add to Octavia amphorae.
Adds network_plugin_ipv6_enabled, emc_ssl_cert_verify and emc_ssl_cert_path options for Manila Unity driver.
Previously, get-occ-config.sh could configure nodes out of order when deploying with more than 10 nodes. The script has been updated to properly sort the node resource names by first converting the names to a number.
Default Octavia SSH public key to ‘default’ keypair from undercloud..
8.0.2¶
New Features¶
Deprecation Notes¶
Using ‘client’ for OvsVhostuserMode parameter. See ‘vhost-user’ section at
odl-dlux-all feature for OpenDaylight is no longer supported and removed from default installed OpenDaylightFeatures. See
Bug Fixes¶ missing type “flat” from the default allowed network types for the ODL OVS parameter HostAllowedNetworkTypes. See
Fixes default of vhostuser_mode in ODL-OVS to be server, and clarifies the configuration parameter. See
Delete ODL data folder while updating/upgrading ODL..
{{role.name}}ExtraConfigwill now be honored even when using deprecated params in roles_data.yaml. Previously, its value was ignored and never used even though it is defined as a valid parameter in the rendered template.
9.0.0.0b1¶
New Features¶
Containers are now the default way of deploying. There is still a way to deploy the baremetal services in environments/baremetal-services.yaml, but this is expected to eventually disappear..
Upgrade Notes¶
Environment files originally referenced from environments/services-docker should be altered to the environments/services paths. If some of the deployed baremetal services need to be retained as non-containerized, update its references to environments/services-baremetal instead of environments/services.
Note
Overcloud upgrades to baremetal services (non-containerized), or mixed services is no more tested nor verified..
Bug Fixes¶
Fixes OpenDaylight container service not starting due to missing config files in /opt/opendaylight/etc directory.
Fixes failure to create Neutron certificates for roles which do not contain Neutron DHCP agent, but include other Neutron agents (i.e. default Compute role).
8.0.0¶
New Features¶
This exposes the GnocchiStorageSwiftEndpointType parameter, which sets the interface type that gnocchi will use to get the swift endpoint.
Configure ODL to log karaf logs to file for non-containarised deployment and to console for containarised deployment.
Add neutron-plugin-ml2-cisco-vts as a Neutron Core service template in support of the cisco VTS controller ml2 plugin.
Add support for Mistral event engine.
Add Mistral to the provided controller roles.
Add support for deploying Service Function Chaining api service with neutron networking-sfc.
Added support for providing Octavia certificate data through heat parameters.
Add configuration of octavia’s ‘service_auth’ parameters.
Manila now supports the CephNFS back end. Deploy using the ControllerStorageNFS role and ‘-n network_data_ganesha.yaml’, along with manila-cephfsganesha-config-docker.yaml.
Adds ability to configure metadata agent for networking-ovn based deployments.
Add neutron-plugin-ml2-cisco-vts as a dockerized Neutron Core service template in support of the cisco VTS controller ml2 plugin.
Introduces a puppet service to configure AIDE Intrusion Detection. This service init’s the database and copies the new database to the active naming. It also sets a cron job, when parameter AideEmail is populated, otherwise reports are sent to /var/log/aide/.
AIDE rules can be supplied as a hash, and should the rules ever be changed, the service will populate the new rules and re-init a fresh integrity database.
This patch allows to attach optional volumes to and set optional environment variables in the neutron-api, heat-api and nova-compute containers. This makes it easier to plug plugins to that containers.
Add KernelIpForward configuration to enable/disable the net.ipv4.ip_forward configuration.
Configure OpenDaylight SNAT to use conntrack mechanism with OVS and controller based mechanism with OVS-DPDK.
Barbican API added to containarised overcloud deployment
With the move to containers, Ceph OSDs may be combined with other Ceph services and dedicated Ceph monitors on controllers may be used less. Popular Ceph roles which include OSDs are Ceph file, object and nodes which run all Ceph services. This pattern also applies to Hyper Converged (HCI) roles. The following pre-composed roles have been added to make it easier to deploy in this pattern. - CephAll: Standalone Storage Full Role - CephFile: Standalone Scale-out File Role - CephObject: Standalone Scale-out Object Role - HciCephAll: HCI Full Stack Role - HciCephFile: HCI Scale-out File Role - HciCephObject: HCI Scale-out Object Role - HciCephMon: HCI Scale-out Block Full Role - ControllerNoCeph: OpenStack Controller without any Ceph Services
Support added for per-service deploy_steps_tasks which are run every step on the overcloud nodes.
Default values for OctaviaFlavorProperties have been added and OctaviaManageNovaFlavor is now enabled by default so a usable OpenStack flavor will be available for creating Octavia load balancers immediately after deployment.
Service templates now support an external_post_deploy_tasks interface, this works in the same way as external_deploy_tasks but runs after all other deploy steps have completed..
Support for Instance HA is added. This configures the control plane to do fence for a compute node that dies, then a nova –force-down and finally and evacuation for the vms that were running on the failed node.
Encryption of the internal network’s communications through IPSec has been added. To enable you need to add the OS::TripleO::Services::Ipsec service to your roles if it’s not there already. And you need to add the file environments/ipsec.yaml to your overcloud deploy.
The
IpsecVarsparameter was added in order to configure the parameters in the tripleo-ipsec ansible role that configures IPSec tunnels if they’re enabled.
Add support for Dell EMC Isilon manila driver
Allow to easily personalize Kernel modules and sysctl settings with two new parameters. ExtraKernelModules and ExtraSysctlSettings are dictionaries that will take precedence over the defaults settings provided in the composable service.
Allow to configure extra Kernel modules and extra sysctl settings per role and not only global to the whole deployment. The two parameters that can be role-specific are ExtraKernelModules and ExtraSysctlSettings.
Mistral is now deployed with Keystone v3 options (authtoken).
The memcached service now reacts to the Debug flag, which will make its logs verbose. Also, the MemcachedDebug flag was added, which will just add this for the individual service.
When containerizing mistral-executor, we need to mount /var/lib/mistral so our operators can get the config-download logs when the undercloud is containerized and config-download is used to deploy the overcloud.
Add new CinderRbdExtraPools Heat parameter, which specifies a list of Ceph pools for use with RBD backends for Cinder. An extra Cinder RBD backend driver is created for each pool in the list. This is in addition to the standard RBD backend driver associated with the CinderRbdPoolName. The new parameter is optional, and defaults to an empty list. All of the pools are associated with a single Ceph cluster.
Added new real-time roles for NFV (ComputeOvsDpdkRT and ComputeSriovRT)
Add MinPoll and MaxPoll options to NTP module. These options specify the minimum and maximum poll intervals for NTP messages, in seconds to).
Enables deploying OpenDaylight with TLS. Open vSwitch is also configured to communicate with OpenDaylight via TLS.
Add support for ODL OVS Hardware Offload. This feature requires Linux Kernel >= 4.13 Open vSwitch >= 2.8 iproute >= 4.12.
Endpoint is added for ODL. Public access is not allowed for ODL so public endpoint is not added.
Support containerized ovn-controller
Support containerized OVN Dbs without HA
Support containerized OVN DBs with HA
Add support for OVS Hardware Offload. This feature requires Linux Kernel >= 4.13 Open vSwitch >= 2.8 iproute >= 4.12.
Add the ability to deploy PTP. Precision Time Protocol (PTP) is a protocol used to synchronize clocks throughout a compute network. With hardware timestamping support on the host, PTP can achieve clock accuracy in the sub-microsecond range. PTP can be used as an alternative to NTP for high precision clock calibration.
A new parameter, RabbitNetTickTime, allows tuning the Erlang net_ticktime parameter for rabbitmq-server. The default value is 15 seconds. This replaces previous tunings in the RabbitMQ environment file which set socket options forcing TCP_USER_TIMEOUT to 15 seconds.
Neutron no longer accesses octavia through a neutron service plugin.
Introduce a new service to configure RHSM with Ansible, by calling ansible-role-redhat-subscription in host_prep_tasks.
When using RHSM proxy, TripleO will now verify that the proxy can be reached otherwise we’ll stop early and not try to subscribe nodes.
The parameters KeystoneChangePasswordUponFirstUse, KeystoneDisableUserAccountDaysInactive, KeystoneLockoutDuration, KeystoneLockoutFailureAttempts, KeystoneMinimumPasswordAge, KeystonePasswordExpiresDays, KeystonePasswordRegex, KeystonePasswordRegexDescription, KeystoneUniqueLastPasswordCount were introduced. They all correspond to keystone configuration options that belong to the security_compliance group.
A new role ComputeSriov has been added to the roles definition to create a compute with SR-IOV capabilities. The SR-IOV services has been removed from the default Compute role, so that a cluster can have general Compute along with ComputeSriov roles.
Add support for Dell EMC Unity Manila driver
Add support for Dell EMC VMAX Iscsi cinder driver
Add support for Dell EMC VMAX Manila driver VNX cinder driver
Add support for Dell EMC VNX Manila driver.
force_config_drive is now set to False in Nova. Instances will now fetch their metadata from the metadata service instead from the config drive.).
Each service template may optionally define a fast_forward_upgrade_tasks key, which is a list of ansible tasks to be performed during the fast-forward upgrade process. As with Upgrade steps each task is associated to a particular step provided as a variable and used along with a release variable by a basic conditional that determines when the task should run.
Add ODL upgradability Steps of upgrade are as follows 1. Block OVS instances to connect to ODL done in upgrade_tasks 2. Set ODL upgrade flag to True done in upgrade_tasks 3. Start ODL. This is done via docker config step 1 4. Start Neutron re-sync triggered by starting of Neutron server container in step 4 of docker config 5. Delete OVS groups and ports 6. Stop OVS 7. Unblock OVS ports 8. Start OVS 9. Unset ODL upgrade flag Steps 5 to 9 are done in post_upgrade_steps
The Heat API Cloudwatch service has been removed from heat in Queens and would not be available for deployment.
When deploying with RHSM, sat-tools 6.2 will be installed instead of 6.1. The new version is supported by RHEL 7.4 and provides katello-agent package.
If a existing cluster has enabled SR-IOV with Compute role, then the service OS::TripleO::Services::NeutronSriovAgent has to be added to the Compute role in their roles_data.yaml. If the existing cluster has created SR-IOV role as a custom role (other than Compute), then this change will not affect.
Upgrade Heat templates version to queens in order to match Heat requirements.
Since we are now running the ansible-playbooks with a step variable rather than via Heat SoftwareConfig/Deployments, the per service upgrade_tasks need to use “when: step|int == [0-6]” rather than “tags: step[0-6]” to signal the step at which the given task is to be invoked. This also applies to the update_tasks that are used for the minor update. This also deprecates the upgrade_batch_tasks
Deprecation Notes¶
This patch removes Contrail templates from tripleo as a preparation for the new microservice based templates.
The pre-existing environment files which previously enabled the deployment of Ceph on baremetal, via puppet-ceph, will all be’ migrated to deploy Ceph in containers, via ceph-ansible.
The CeilometerWorkers parameter, unused, is now deprecated.
The Heat API Cloudwatch API is deprecated in Pike and so it is now not deployed by default. You can override this behaviour with the environments/heat-api-cloudwatch.yaml environment file in the tripleo-heat-templates.
Deprecates the OpenDaylightConnectionProtocol heat parameter. This parameter is now decided based on using TLS or non-TLS deployments.
Parameter “OpenDaylightPort” is deprecated and will be removed from R.
Security Issues¶.
Live migration over TLS has been disabled since the settings it was using don’t meet the required security standards. It is currently not possible to enable it via t-h-t..
Expose panko expirer params to enable and configure it.
Add s3 driver option and params associated with it.
Allow the configuration of image_member_quota in Glance API. This error blocks the ability of sharing images if the default value (128) is reached.
Enabling ceilometer automatically enables keystone notifications through the ‘notifications’ topic (which was the default).
Deployments with Ceph now honor the DeploymentServerBlacklist parameter. Previously, this meant that changes could still be triggered for servers in the blacklist.
Added hiera for network_virtual_ips in vip_data to allow composable networks to be configured in puppet.
Allow containerized services to be executed on hosts with SELinux in the enforcing mode.
If docker-puppet.py fails on any config_volume, it can be difficult to reproduce the failure given all the other entries in docker-puppet.json. Often to reproduce a single failure, one has to modify the json file, and remove all other entries, save the result to a new file, then pass that new file as $CONFIG. The ability to specify $CONFIG_VOLUME, which will cause docker-puppet.py to only run the configuration for the specified entry in docker-puppet.json whose config_volume value matches the user specified value has been added..
Drop redundant MetricProcessingDelay param from gnocchi base templates. This is already done in metricd templates, so lets drop it to avoid duplicates in config file.
Enable the ntp iburst configuration for each server by default. As some services are very sensitive to time syncronization, this will help speed up the syncronization when servers are unavailable for a time. See LP#1731883
Ensure Debug is a boolean Oslo has trouble when Debug is not a proper python boolean. See
Fixes Heat resource for API networks to be the correct name of “InternalApiNetwork” instead of “InternalNetwork”.
Fixes dynamic networks to fallback to ctlplane network when they are disabled.
Fixes missing Keystone authtoken password for Tacker.
Fixes issue in OpenDaylight deployments where SSL between Neutron DHCP agent with OVS did not work due to missing SSL certificate/key configuration.
The “neutron_admin_auth_url” is now properly set using KeystoneInternal rather than using the NeutronAdmin endpoint.
Fixes GUI feature loaded into OpenDaylight, which fixes the GUI as well as the URL used for Docker healthcheck.
Fixes missing SSL/TLS configuration for OpenDaylight docker deployments.
Fixes bug where neutron port status was not updated with OpenDaylight deployments due to firewall blocking the websocket port used to send the update (port 8185).
Fixes generation public certificates for haproxy in a non-containerized TLS deployment scenario.
Removes hardcoded network names. The networks are now defined dynamically by network_data.yaml.
When Horizon is enabled, the _member_ Keystone role will now be created. (bug 1741066).
Disables QoS with OpenDaylight until officially supported.
–.
Changes the default RabbitMQ partition handling strategy from ‘pause_minority’ to ‘ignore’, avoiding crashes due to race conditions with nodes starting and stopping concurrently.
Remove Ceilometer Collector, Expirer and Api from the roles data and templates. Both these services have been deprecated in Pike release and targeted for removal in the current Queens release.
Remove unused nova ports 3773 and 8773 from being opened in iptables.
Restore ceilometer templates to disable Api, Collector and Expirer. These are required for fast forward upgrades to remove the services during the upgrades..
Allow to configure SR-IOV agent with agent extenstions.
Start sequence at 1 for the downloaded deploy steps playbook instead of 0. The first step should be 1 since that is what the puppet manifests expect. on a containerized environment..
Processes are storing important health and debug data in some files within /var/cache/swift, and these files must be shared between all swift-* processes. Therefore it is needed to mount this directory on all Swift containers, which is required to make swift-recon working.
Add swift_config puppet tag to the dockerized proxy service to ensure the required hash values in swift.conf are set properly. This is required when deploying a proxy node without the storage service at the same time.
The standalone Telemetry role at roles/Telemetry.yaml had an incorrect list of services. The list has been updated to remove services such as MySQL and RabbitMQ and the services common to all TripleO roles have been added.
Change the default ManageEventPipeline to true. This is because we want the event pipeline publishers overridden by heat templates to take effect over the puppet defaults. Once we drop panko:// from the pipeline we can switch this back to false.
In the deploy steps playbook downloaded via “openstack overcloud config download”, all the tasks require sudo. The tasks now use “become: true”.
Use StrictHostKeyChecking=no to inject the temporary ssh key in enable-ssh-admin.sh. The user provides the list of hosts for ssh, so we can safely assume that they intend to ssh to those hosts. Also, for the ovb case the hosts will have new host ssh keys which have not yet been accepted.
Other Notes¶
With the migration from puppet-ceph to ceph-ansible for the deployment of Ceph, the format of CephPools parameter changes because the two tools use a different format to represent the list of additional pools to create.. | https://docs.openstack.org/releasenotes/tripleo-heat-templates/queens.html | 2021-05-06T13:25:48 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.openstack.org |
Stein Series Release Notes¶
10.6.2-112¶
New Features¶ Octavia anti-affinity parameters.
Added support for running the Octavia driver agent in a container. This will enable features such as the OVN load balancer provider in octavia as well as other third party providers..
Enabling additional healtchecks for Swift to monitor account, container and object replicators as well as the rsync process..
Fixes an issue where filtering of networks for kerberos service principals was too aggressive, causing deployment failure. See bug 1854846.
Fixed an issue where containers octavia_api and octavia_driver_agent would fail to start on node reboot.
Fix Swift ring synchronization to ensure every node on the overcloud has the same copy to start with. This is especially required when replacing nodes or using manually modifed rings.
10.6.2¶
New Features¶
Added the “connection_logging” parameter for the Octavia service.).
Bug Fixes¶
Restart certmnonger after registering system with IPA. This prevents cert requests not completely correctly when doing a brownfield update.”.
10.
When running config-download manually, fact gathering at the play level can now be controlled with the gather_facts Ansible boolean variable..
10.6.0¶
New Features¶.
Introduce new tag into roles that will create external_bridge (usable only for multiple-nics)..
Upgrade Notes¶
During upgrade user will need to create custom roles_data.yaml and remove external_bridge from tags to be sure that bridge will be not added. only OVN Tunnel Encap Type that we are supporting in OVN is Geneve and this is set by default in ovn puppet. So there are no need to set it in TripleO
Bug Fixes¶
Fixed launchpad bug 1831122 with the NetApp Backend..
10.5.0¶
New Features¶
Added the configuration option to disable Exact Match Cache (EMC)
A new parameter, CinderEtcdLocalConnect, is available for the CinderVolume service. When deploying the service A/A, the parameter can be set to true which willconfigure cinder-volume to connect to Etcd locally through the node’s own IP instead of going through a VIP.
The Etcd service is added to the DistributedCompute and DistributedComputeHCI roles for Active/Active management of the CinderVolume service.
Added ability to rewrap project KEKs (key encryption keys) when doing an upgrade. This allows deployers to rewrap KEKs whenever they rotate the master KEK and HMAC keys when using the PKCS#11 plugin behind Barbican.
Also added some needed ordering for master key creation, sync and update when using a Thales HSM behind Barbican.
Podman is now the default ContainerCli unless you deploy Pacemaker. then you must run Docker when deploying on CentOS7..
ContainerHealthcheckDisabled is a new parameter which allows to disable the container healthcheck management in Paunch.
Adds the ability to set
external_resource_network_idfor the network,
external_resource_vip_idfor the network VIP,
external_resource_subnet_idfor the subnet(s), and
external_resource_segment_idfor the segment(s) to network_data.yaml. When setting these properties, the external_id attribute will be set on the corresponding Heat resources. This causes Heat to not re-create these resources and instead adopt them from outside the stack.
A new service, OS::TripleO::Services::NovaAZConfig, is available which can be used to create a host aggregate and availabiity zone in Nova during the deployment. Compute nodes in the deployment will also be added to the zone. The zone name is set with the parameter value NovaComputeAvailabilityZone. If let unset, it will default to the root stack name. By default the service is mapped to None, but can be enabled by including environments/nova-az-config.yaml..
By adding parameter OctaviaAmphoraImageFormat, it adds flexibility to select amphora image format without forcing to use of the NovaEnableRbdBackend parameter.
When deploying with internal TLS, the Octavia API now runs as an Apache WSGI application improving support for IPv6 and performance.
Using Ansible timezone module to manage the system timezone for the deployed systems.
The get_attr function is now used to read the
gateway_ipof a ports subnet. The gateway_ip value is passed to nic config templates using the
%network%InterfaceDefaultRouteparameter. (This parameter is only used if the network is present in the roles
default_route_networks.) Using get_attr ensures that the correct gateway ip address is used when networks have multiple subnets.
Upgrade Notes¶
Removes UpgradeRemoveUnusedPackages parameter and some service upgrade_tasks that use this parameter to remove any unused packages.
When deploying with internal TLS, previous versions configured a separate TLS proxy to provide a secure access point for the Octavia API. This is now implemented by running the Octavia API as an Apache WSGI application and the Octavia TLS Proxy will be removed during updates and upgrades.
Deprecation Notes¶
The nova-placement service is deprecated in Stein and will be replaced in Train by an extracted Placement API service..
Lets deprecate it also in tripleo so that it can be removed in a later release.
[1]
Managing timezone via puppet is now deprecated.
Bug Fixes¶.
ServiceNetMap now handles any network name when computing the default network for each service in ServiceNetMapDefaults.
With large number of OSDs, where each OSD need a connection, the default nofile (1024) of nova_compute is too small. This changes the default DockerNovaComputeUlimit to 131072 what is the same for cinder.
With cellsv2 multicell in each cell there needs to be a novnc proxy as the console token is stored in the cell conductor database. This change adds the NovaVncProxy service to the CellController role and configures the endpoint to the local public address of the cell..
10.4.0¶
New Features¶
Adds a specific upgrade hiera file. This is currently used to override variables during upgrade.
Introduce new parameter, ContainerLogStdoutPath. Must be an absolute path to a directory where podman will output all containers stdout. The existence of the directory is ensured directly as a host_prep_task..
Adds a new GlobalConfigExtraMapData parameter that can be used to inject global_config_settings hieradata into the deployment. Any values generated in the stack will override those passed in by the parameter value.
Add neutron-plugin-ml2-mlnx-sdn-assist as a containerized Neutron Core service template to support Mellanox SDN ml2 plugin.
Adds functionality wheter to enable/disable KSM on compute nodes. Especially in NFV use case one wants to disable the service. Because ksm has little benefit in overcloud nodes it gets disabled per default but can be set via NovaComputeEnableKsm.
Added a new Barbican option BarbicanPkcs11AlwaysSetCkaSensitive. The default value is true.
Allow Neutron DHCP agent to use broadcast in DHCP replies
Add the ability to configure the cinder-volume service to run in active-active (A/A) mode using the cluster name specified by the new CinderVolumeCluster parameter. Note that A/A mode requires the backend driver support running A/A. Cinder’s RBD driver supports A/A, but most other cinder drivers currently do not.
ContainerImagePrepareDebug is a parameter that allows to run the tripleo container image prepare command with –debug. It is set to ‘False’ by default for backward compatibility.
Docker is deprecated in Stein and will be removed in Train. It is being replaced by Podman and Buildah.
Deprecated services now live in deployment/deprecated directory.
The
baremetalML2 mechanism driver is enabled in the Networking Service (neutron) in the overcloud by default when the Baremtal Service (ironic) is enabled. Previously the user would have to enable this driver manually by overriding the
NeutronMechanismDriversparameter...
The RabbitMQ management plugin (
rabbitmq_management) is now enabled. By default RabbitMQ managment is available on port 15672 on the localhost (
127.0.0.1) interface..
Add container for the Swift container sharder service. This service is required for sharding containers. It is disabled by default and can be enabled by setting the SwiftContainerSharderEnabled to true.
The Shared File Systems service (manila) API has been switched to running behind httpd, and it now supports configuring TLS options.
This patch switches the default mechanism driver for neutron from openvswitch to OVN. DVR is now enabled by default which in the case of OVN means that we’re distributing FIP N/S traffic as E/W is anyways distributed
When deploying mistral-executor, create a tripleo-admin user on the undercloud for running external deploy tasks with ansible.
Add new CinderNetappPoolNameSearchPattern parameter, which controls which Netapp FlexVol volumes represent pools in Cinder.
Known Issues¶
Add OvnDbInternal to EndpointMap and use it for ovn_db_host
OVN controller/metadata use ovn_dbs_vip hiera key to configure the central ovn DB. This key is not available on split control plane or multi cell setup and therefore installation fails.
With this change a new entry gets created in the EndpointMap named OvnDbInternal. This can then be exported for an overcloud stack and can be used as an input for the cell stack.
The information from the EndpointMap is used for ovn-metadata and ovn-controller as the ovn_db_host information in puppet-tripleo
Upgrade Notes¶.
Installing haproxy services on baremetal is no longer supported.
Installing MySQL Server services on baremetal is no longer supported.
Installing Redis services on baremetal is no longer supported.
Installing sahara services on baremetal is no longer supported.
During upgrade from ml2/ovs please remember to provide similar environment file to environments/updates/update-from-ml2-ovs-from-rocky.yaml. This is good also to remember to provide this file as a first to avoid overwriting custom modification by upgrade environment file. If you will not provide such file during upgrade from ml2/ovs you will see error and notification about problems witch mutually exclusive network drivers.
Deprecation Notes¶
Duplicate environment files
environments/neutron-sriov.yamland
environments/neutron-ovs-dpdk.yamlfile are deprecated.
Xinetd tripleo service is no longer managed. The xinetd service hasn’t been managed since the switch to containers. OS::TripleO::Services::Xinetd is disabled by default and dropped from the roles. The OS::TripleO::Services::Xinetd will be removed in Train.
docker_puppet_tasks is deprecated in favor of container_puppet_tasks. docker_puppet_tasks is still working in Stein but will be removed in Train.
The NodeDataLookup parameter type was changed from string to json
Removed ‘glance-registry’ related changes since it’s been deprecated from glance & no longer been used.
The TLS-related environment files in the environments/ directory were deleted. The ones in the environments/ssl/ are preferred instead. Namely, the following files:: enable-internal-tls.yaml, enable-tls.yaml, inject-trust-anchor-hiera.yaml, inject-trust-anchor.yaml, no-tls-endpoints-public-ip.yaml, tls-endpoints-public-dns.yaml tls-endpoints-public-ip.yaml, tls-everywhere-endpoints-dns.yaml.
TripleO UI is deprecated in Stein and will be removed in Train.
The CinderNetappStoragePools parameter is deprecated in favor of the new CinderNetappPoolNameSearchPattern parameter. The previously deprecated CinderNetappEseriesHostType parameter has been removed.
The /var/lib/docker-puppet is deprecated and can now be found under /var/lib/container-puppet. We don’t have Docker anymore so we try to avoid confusion in the directories. The directory still exists but a readme file points to the right directory.
Bug Fixes¶.
Bug 1784967 invalid JSON in NodeDataLookup error message should be more helpful.
Other Notes¶
Paramter
ConfigDebugnow also controls the paunch logs verbosity.
Octavia may be deployed for a standalone cloud, which has yet Nova services available for Amphorae SSH keys management. For that case, the parameter
OctaviaAmphoraSshKeyFilemust be defined by a user. Otherwise, it takes an empty value by usual for overcloud deployments meanings and Nova will be used to create a key-pair for Octavia instead.
The utility script
tools/merge-new-params-nic-config-script.pypreviously used the
Controllerrole by default if the
--role-nameargument was not specified. The argument (
--role-name) no longer have a default. It is now mandatory to specify the role when merging new parameters into existing network configuration templates.
Remove
NeutronExternalNetworkBridgeHeat parameter. Option
external_network_bridgeis deprecated and should not be used in Neutron.
10.3.0¶
New Features¶
Added code in the barbican-api.yaml template to allow barbican to be configured to run with either an ATOS or Thales HSM back-end. Also added environment files with all the required variables. The added code installs and configures the client software on the barbican nodes, generates the required kets for the PKCS#11 plugin, and configures barbican correctly. For the Thales case, it also contacts the RFS server to add the new clients to the HSM.
Add new CinderNfsSnapshotSupport parameter, which controls whether cinder’s NFS driver supports snapshots. The default value is True.
Composable Networks now support creating L3 routed networks. L3 networks use multiple L2 network segments and multiple ip subnets. In addition to the base subnet automatically created for any composable network, additional subnets can be defined under the
subnetskey for each network in the data file (
network_data.yaml) used by composable networks. Please refer to the
network_data_subnets_routed.yamlfile for an example demonstrating how to define composable L3 routed networks.
For composable roles it is now possible to control which subnet in a L3 routed network will host network ports for the role. This is done by setting the subnet for each network in the role defenition (
roles_data.yaml). For example:
- name: <role_name> networks: InternalApi: subnet: internal_api_leaf2 Tenant: subnet: tenant_leaf2 Storage: subnet: storage_leaf2
To enable control of which subnet is used for virtual IPs on L3 routed composable networks the new parameter
VipSubnetMapwhere added. This allow the user to override the subnet where the VIP port should be hosted. For example:
parameter_defaults: VipSubnetMap: ctlplane: ctlplane-leaf1 InternalApi: internal_api_leaf1 Storage: storage_leaf1 redis: internal_api_leaf1
New roles for DistributedCompute and DistributedComputeHCI are added. These roles match the existing Compute roles, but also include the CinderVolume service. The CinderVolume service is included using the BlockStorageCinderVolume service name so that it can be mapped independently from CinderVolume.
Add new parameter ‘GlanceImageImportPlugins’, to enable plugins used by image import process. Add parameter ‘GlanceImageConversionOutputFormat’, to provide desired output format for image conversion plugin.
Allow to output HAProxy in a dedicated file
Adds new HAProxySyslogFacility param
Add parameter NovaHWMachineType which allows to explicitly set machine_type across all compute nodes during deployment, to allow migration compatibility from compute nodes with higher host OS version to compute nodes with lower host OS version.
The network data for composible networks have been extended to enable configuration of the maximum transmission unit (MTU) that is guaranteed to pass through the data path of the segments in the network. The MTU property is set on the neutron networks in the undercloud. The MTU information is used in the nic-config templates so that overcloud node networking is configured with the correct MTU settings. DB urls as part 1. Nova support added here - transport urls as part 2. Nova support added here -
The MTU defined for the
Tenantnetwork in network_data is now used to set neutron’s
global_physnet_mtuunless the
NeutronGlobalPhysnetMtuparameter is used to override the default. (Neutron uses the
global_physnet_mtuvalue to calculate MTU for all virtual network components. For flat and VLAN networks, neutron uses this value without modification. For overlay networks such as VXLAN, neutron automatically subtracts the overlay protocol overhead from this value.).
Deployments using custom names for subnets must also set the subnet to use for the roles used in the deployment. I.e if
NetworkNameSubnetNameparameter was used to define a non-default subnet name for any network, the role defenition (
roles_data.yaml) and
VipSubnetMapparameter must use the same value.
Warning
The update will fail if
<NetworkName>SubnetNamewas used to set a custom subnet name, and the role defenition and/or the
VipSubnetMapis not set to match the custom subnet name.
Installing Aodh services on baremetal is no longer supported.
Installing glance on Baremetal is no longer supported
Installing Ironic on baremetal is no longer supported
Installing Keepalived service on baremetal is no longer supported.
Deploying keystone on baremetal is no longer supported.
Installing memcached services on baremetal is no longer supported.
Installing zaqar on baremetal is no longer supported
Tags are now used on the
ctlplanenetwork to store the list of cidrs associated with the subnets on the
ctlplanenetwork. Users of Deployed Server (pre-provisioned servers) need to update the port map (
DeployedServerPortMap) to include the required data. For example:
parameter_defaults: DeployedServerPortMap: controller0-ctlplane: fixed_ips: - ip_address: 192.168.24.9 subnets: - cidr: 192.168.24.0/24 network: tags: - 192.168.24.0/24 - 192.168.25.0/24 compute0-ctlplane: fixed_ips: - ip_address: 192.168.25.8 subnets: - cidr: 192.168.25.0/24 network: tags: - 192.168.24.0/24 - 192.168.25.0/24
Prior to upgrading any custom nic-config templates must have the MTU associated parameters introduced in this release added. As an example the following must be added to all nic-config templates when network isolation is used:
ControlPlaneMtu: default: 1500 description: The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the network. (The parameter is automatically resolved from the ctlplane network's mtu attribute.) type: number StorageMtu: default: 1500 description: The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Storage network. type: number StorageMgmtMtu: default: 1500 description: The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the StorageMgmt network. type: number InternalApiMtu: default: 1500 description: The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the InternalApi network. type: number TenantMtu: default: 1500 description: The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Tenant network. type: number ExternalMtu: default: 1500 description: The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the External network. type: numbe ManagementMtu: default: 1500 description: The maximum transmission unit (MTU) size(in bytes) that is guaranteed to pass through the data path of the segments in the Management network. type: number
The hiera bootstrap_nodeid_ip key has been replaced with per-service SERVICE_bootstrap_node_ip where SERVICE is the service_name from the composable service templates. If any out-of-tree services use this key they will need to adjust to the new interface on upgrade.
We don’t run the upgrade_tasks Ansible tasks that stop systemd services and since all services are now containerized. However, we decided to keep the tasks that remove the rpms in case some of deployments didn’t cleanup them in previous releases, they can still do it now. These tasks were useful in Rocky when we converted the Undercloud from baremetal to containers but in Stein this is not useful anymore. It’s actually breaking upgrades for Podman, as containers are now seen by systemd, and these tasks conflicts with the way containers are managed in Paunch.
Deprecation Notes¶
For deploying with hw offloading, we should use the “environments/ovs-hw-offload.yaml” file beside neutron, opendaylight or ovn environments files, no needs to have seperated files as before
The recommended API for checking when OpenDaylight is up and ready has changed. Use the new ODL Infrautils diagstatus REST API endpoint, vs the old netvirt:1 endpoint.
The NtpServer default set now includes multiple pool.ntp.org hosts to ensure that the time can be properly synced during the deployment. Having only a single timesource can lead to deployment failures if the time source is unavailable during the deployment. It is recommended that you either set multiple NtpServers or use the NtpPool configuration to ensure that enough time sources are available for the hosts. Note that the NtpPool configuration is only available when using chrony. See LP#1806521
Novajoin now log’s to
/var/log/containerin the same way other TripleO container services do. See Bug: 1796658..
/opt/opendaylight/data folder is mounted on host. This folder contains information about installed features in ODL. Mounting this folder on container makes ODL believe that features are installed and it doesnot generate required for proper boot. Thus this folder is no longer mounted to host so that ODL can boot properly on restart.
CephOSD/Compute nodes crash under memory pressure unless custom tuned profile is used (bug 1800232).
Other Notes¶
HostPrepConfig has been removed. The resource isn’t used anymore. It was using the old fashion to run Ansible via Heat, which we don’t need anymore with config-download by default in Rocky.
MongoDB hasn’t been supported since Pike, it’s time to remove the deployment files. Starting in Stein, it’s not possible to deploy MongoDB anymore.
10.2.0¶
New Features¶
Add CinderStorageAvailabilityZone parameter that configures cinder’s DEFAULT/storage_availability_zone. The default value of ‘nova’ matches cinder’s own default value.
Add several CinderXXXAvailabilityZone parameters, where XXX is any of the cinder volume service’s storage backends. The parameters are optional, and when set they override the “backend_availability_zone” for the corresponding backend.
Octavia default timeouts for backend member and frontend client can be set by params exposed in template:
OctaviaTimeoutClientData: Frontend client inactivity timeout
OctaviaTimeoutMemberConnect: Backend member connection timeout
OctaviaTimeoutMemberData: Backend member inactivity timeout
OctaviaTimeoutTcpInspect: Time to wait for TCP packets for content inspection
The value for all of these options is expected to be in milliseconds.
The default timesync service has changed from NTP to Chrony.
Added Dell EMC SC multipath support This change adds support for cinder::backend::dellsc_iscsi::use_multipath_for_image_xfer Added a new parameter CinderDellScMultipathXfer.
Add GlanceCacheEnabled parameter which will enable the glance image cache by seetting up the flavor value to ‘keystone+cachemanagement’ in glance-api.conf
It is now possible to enable support for routed networks in the undercloud when the undercloud is updated or upgraded. To enable support for routed networks set
enable_routed_networksto
Truein
undercloud.confand re-run the undercloud installer.
ContainerCliallows ‘docker’ (deprecated) and ‘podman’ for Neutron L3/DHCP and OVN metadata rootwrap containers managed by agents. Parameters
OVNWrapperDebugand
NeutronWrapperDebug(Defaults to False) allow to log debug messages for the wrapper scripts managing rootwrap containers. It is also controled by the global
Debugsetting.
Upgrade Notes¶
swift worker count parameter defaults have been changed from ‘auto’ to 0. If not provided, puppet module default would instead be used and the number of server processes will be limited to ‘12’.
Octavia amphora images are now expected to be located in directory /usr/share/openstack-octavia-amphora-images on the undercloud node for uniformization across different OpenStack distributions.
Deprecation Notes¶
NTP timesync has been deprecated for Chrony and will be removed in T.
The environments/docker.yaml is no longer necessary as the default registry points to containerized services too. The environment file is now deprecated (and emptied) and will be removed in the future.
The
Fluentdservice is deprecated and it will be removed in future releases. It will be replaced by rsyslog. Rsyslog is not integrated yet, so Fluentd will be an option as long as rsyslog is not integrated.
Sensu service will be remove in the future releases.
The dynamic tripleo firewall_rules, haproxy_endpoints, haproxy_userlists that are configured with dots are deprecated with the update to puppet 5. They will no longer work and must be switched to the colon notation to continue to function. For example tripleo.core.firewall_rules must be converted to tripleo::core::firewall_rules. Similarly the haproxy endpoints and userlists that are dynamic using dots must also be converted to use colons.
Ensure Octavia amphora image files are placed in directory /usr/share/openstack-octavia-amphora-images on the undercloud node.
Parameter
DockerAdditionalSocketsis deprecated. No sockets are expected to bind mount for podman. So it only works for the docker runtime.
Bug Fixes¶
When masqurading was eneabled on the Undercloud the networks
192.168.24.0/24and
10.0.0.0/24was always masqueraded. (See bug: 1794729.)
Directory /var/lib/gnocchi/tmp is created by gnocchi-upgrade with root ownership. It is now ensured that the directory is created before upgrade with proper ownership. For details see:.
Nova metadata api is running via http wsgi in its own service. Therefore we can cleanup ports being opened by nova api service.
Fix an issue where Octavia amphora images were not accessible during overcloud deployment..
The deployed-server get-occ-config.sh script now allows $SSH_OPTIONS to be overridden.
Neutron/OVN rootwrap containers are managed by agents and will no longer be deleted, when the parent container restarts.
10.1.0¶
New Features¶
Add support for ODL deployment on IPv6 networks. nova file_backed_memory and memory_backing_dir support for qemu.conf.
Running Nova with file_backed_memory requires libvirt version 4.0.0 and qemu version 2.6.0
The Dell EMC SC configuration option excluded_domain_ip has been deprecated and will be removed in a future release. Deployments should now migrate to the option excluded_domain_ips for equivalent functionality..
The environment file puppet-pacemaker.yaml has been removed, make sure that you no longer reference it. The docker-ha.yaml file should have already been used in place of puppet-pacemaker.yaml during upgrade from Ocata to Pike. The environment file puppet-pacemaker-no-restart.yaml has been removed too, it was only used in conjunction with puppet-pacemaker.yaml.
The environment file deployed-server-pacemaker-environment.yaml has been removed, make sure that you no longer reference it. Its current contents result in no tangible difference from the default resource registry state, so removing the file should not change the overcloud.
Remove zaqar wbesocket service when upgrading from non-containerized environment.
Deprecation Notes¶
All references to the logging_source output in the services templates have been removed, since it’s been unused for a couple of releases now..
Make sure all Swift services are disabled after upgrading to a containerized undercloud..
SELinux can be configured on the Standalone deployment by setting SELinuxMode.
10.0.0¶
New Features¶ OctaviaEventStreamDriver parameter to specify which driver to use for syncing Octavia and Neutron LBaaS databases.
Upgrade Notes¶
The default Octavia event_streamer_driver has changed from queue_event_streamer to noop_event_streamer. See
Deprecation Notes¶
The environments/standalone.yaml has been deprecated and should be replaced with environments/standalone/standalone-tripleo.yaml when using the ‘openstack tripleo deploy’ command.
All references to the logging_group output in the services templates have been removed, since it’s been unused for a couple of releases now.
Bug Fixes¶.
The baremetal API version is no longer hardcoded in
stackrc. This allows easy access to new features in ironicclient as they are introduced. If you need to use a fixed API version, set the
OS_BAREMETAL_API_VERSIONenvironment variable.. | https://docs.openstack.org/releasenotes/tripleo-heat-templates/stein.html | 2021-05-06T13:30:09 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.openstack.org |
Once you add the Slang Voice Assistant to your app, you might be interested in customizing the Assistant to suit your needs better.
This section talks about the various aspects that are available for you to customize. The following things are available for you to customize:
Training the Assistant to recognize additional data
Enable and disable User journeys
Customize greeting messages
Customize prompts and statements that the Assistant speaks
The languages recognized by the Assistant
The UI of the Assistant inside the app | https://docs.slanglabs.in/slang/getting-started/advanced-topics/customizing-the-assistant | 2021-05-06T12:44:29 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.slanglabs.in |
Identifying semantic changes and workarounds
As SQL Developer, Analyst, or other Hive user, you need to know potential problems with queries due to semantic changes. Some of the operations that changed were not widely used, so you might not encounter any of the problems associated with the changes.
Over the years, Apache Hive committers enhanced versions of Hive supported in legacy releases of CDH and HDP, with users in mind. Changes were designed to maintain compatibility with Hive applications. Consequently, few syntax changes occurred over the years. A number of semantic changes, described in this section did occur, however. Workarounds are described for these semantic changes. | https://docs.cloudera.com/cdp-private-cloud/latest/upgrade/topics/cdp_data_migration_hive_semantics.html | 2021-05-06T13:41:34 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.cloudera.com |
Here's how each view in the Marketing Performance tab can help you.
Overview
In this view, you can easily compare all traffic and the revenue from all channels.
Source/ medium view
In this view, you can compare channels by returns
Referral view
It lets you monitor your influencer marketing efforts, other organic referrals and campaigns with coupons.
Social view
The social view shows you only traffic and revenue coming from social networks so you can compare how organic performs vs paid.
Campaign view
In this view, you see revenue from defined campaigns - either defined with a UTM parameter (Metrilo catches them automatically) or defined in the "Manage referrals" field in the tab (top right corner).
Other uses of the Marketing performance tab
Generate unique tracking link for a campaign and follow its performance.
Track each influencer's coupon.
Tag all campaigns to easily differentiate them from the organic traffic.
Import a list of coupons to monitor them as a whole campaign.
Check the tab daily to monitor daily targets and adjust ad spend. | https://docs.metrilo.com/en/articles/2476656-how-the-marketing-performance-tab-can-help-you | 2021-05-06T13:01:02 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.metrilo.com |
scipy.stats.nhypergeom¶
scipy.stats.
nhypergeom(*args, **kwds) = <scipy.stats._discrete_distns.nhypergeom_gen object>[source]¶
A negative hypergeometric discrete random variable.
Consider a box containing \(M\) balls:, \(n\) red and \(M-n\) blue. We randomly sample balls from the box, one at a time and without replacement, until we have picked \(r\) blue balls.
nhypergeomis the distribution of the number of red balls \(k\) we have picked.
As an instance of the
rv_discreteclass,
nhypergeomobject inherits from it a collection of generic methods (see below for the full list), and completes them with details specific for this particular distribution.
Notes
The symbols used to denote the shape parameters (M, n, and r) are not universally accepted. See the Examples for a clarification of the definitions used here.
The probability mass function is defined as,\[f(k; M, n, r) = \frac{{{k+r-1}\choose{k}}{{M-r-k}\choose{n-k}}} {{M \choose n}}\]
for \(k \in [0, n]\), \(n \in [0, M]\), \(r \in [0, M-n]\), and the binomial coefficient is:\[\binom{n}{k} \equiv \frac{n!}{k! (n - k)!}.\]
It is equivalent to observing \(k\) successes in \(k+r-1\) samples with \(k+r\)’th sample being a failure. The former can be modelled as a hypergeometric distribution. The probability of the latter is simply the number of failures remaining \(M-n-(r-1)\) divided by the size of the remaining population \(M-(k+r-1)\). This relationship can be shown as:\[NHG(k;M,n,r) = HG(k;M,n,k+r-1)\frac{(M-n-(r-1))}{(M-(k+r-1))}\]
where \(NHG\) is probability mass function (PMF) of the negative hypergeometric distribution and \(HG\) is the PMF of the hypergeometric distribution.
The probability mass function above is defined in the “standardized” form. To shift distribution use the
locparameter. Specifically,
nhypergeom.pmf(k, M, n, r, loc)is identically equivalent to
nhypergeom.pmf(k - loc, M, n, r).
References
- 1
Negative Hypergeometric Distribution on Wikipedia
- 2
Negative Hypergeometric Distribution from
Examples
>>> from scipy.stats import nhypergeom >>> import matplotlib.pyplot as plt
Suppose we have a collection of 20 animals, of which 7 are dogs. Then if we want to know the probability of finding a given number of dogs (successes) in a sample with exactly 12 animals that aren’t dogs (failures), we can initialize a frozen distribution and plot the probability mass function:
>>> M, n, r = [20, 7, 12] >>> rv = nhypergeom(M, n, r) >>> x = np.arange(0, n+2) >>> pmf_dogs = rv.pmf(x)
>>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> ax.plot(x, pmf_dogs, 'bo') >>> ax.vlines(x, 0, pmf_dogs, lw=2) >>> ax.set_xlabel('# of dogs in our group with given 12 failures') >>> ax.set_ylabel('nhypergeom PMF') >>> plt.show()
Instead of using a frozen distribution we can also use
nhypergeommethods directly. To for example obtain the probability mass function, use:
>>> prb = nhypergeom.pmf(x, M, n, r)
And to generate random numbers:
>>> R = nhypergeom.rvs(M, n, r, size=10)
To verify the relationship between
hypergeomand
nhypergeom, use:
>>> from scipy.stats import hypergeom, nhypergeom >>> M, n, r = 45, 13, 8 >>> k = 6 >>> nhypergeom.pmf(k, M, n, r) 0.06180776620271643 >>> hypergeom.pmf(k, M, n, k+r-1) * (M - n - (r-1)) / (M - (k+r-1)) 0.06180776620271644
Methods | https://docs.scipy.org/doc/scipy-1.6.1/reference/generated/scipy.stats.nhypergeom.html | 2021-05-06T14:11:01 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.scipy.org |
An activity that contains and runs multiple embedded activities or views.This activity has similar interface and usage to the TabActivity, with some enhancements, such as scroll and fling gestures.
ScrollableTabHost
ScrollableTabWidget
Returns the
ScrollableTabHost the activity is using to host its tabs.
ScrollableTabHostthe activity is using to host its tabs.
Returns the
ScrollableTabWidget the activity is using to draw the actual tabs.
ScrollableTabWidgetthe activity is using to draw the actual tabs.
Updates the screen state (current list and other views) when the content changes.
Sets the default tab that is the first tab highlighted.
Sets the default tab that is the first tab highlighted.(). | http://docs.droidux.com/libs/2.6/commonpack/reference/com/droidux/pack/commons/widget/tabs/ScrollableTabActivity.html | 2020-03-28T11:38:58 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.droidux.com |
Accessing data
This page shows how to run your own code and access data hosted in your cloud account. It assumes that you know how to run a Spark application on Data Mechanics.
Specify data in your argumentsSpecify data in your arguments
On Google Cloud Platform, suppose that:
- you want to run a word count Scala Application hosted at
gs://<your-bucket>/wordcount.jar
- that reads input files in
gs://<your-bucket>/input/*
- and writes to
gs://<your-bucket>/output
- The main class is
org.<your-org>.wordcount.WordCount <input> <output>
Here is the payload you would submit:
curl -X POST \https://<your-cluster-url>/api/apps/ \-H 'Content-Type: application/json' \-d '{"jobName": "word-count","configOverrides": {"type": "Scala","sparkVersion": "3.0.0","mainApplicationFile": "gs://<your-bucket>/wordcount.jar","mainClass": "org.<your-org>.wordcount.WordCount","arguments": ["gs://<your-bucket>/input/*", "gs://<your-bucket>/output"]}}'
The command above fails because the Spark pods do not have sufficient permissions to access the code and the data:
Caused by: com.google....json.GoogleJsonResponseException: 403 Forbidden{"code" : 403,"errors" : [ {"domain" : "global","message" : "<service-account-name>@developer.gserviceaccount.com does not have storage.objects.get access to <your-bucket>/wordcount.jar.","reason" : "forbidden"} ],"message" : "<service-account-name>@developer.gserviceaccount.com does not have storage.objects.get access to <your-bucket>/wordcount.jar."}
Permissions on node instancesPermissions on node instances
- GCP
- AWS
Find the service account used by GCE instances running as Kubernetes nodes. Depending on your setup, it can be the default Compute Engine service account, of the form
[email protected]
or another service account that you created yourself of the form
SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
The error log in the previous section shows the service account currently used by the Spark pods, which is the GCE instances' service account.
Once you have found the service account, grant it sufficient permissions using IAM roles. The list of IAM roles for GCS is here.
The Spark application above should now work, without modifying the payload. | https://docs.datamechanics.co/docs/accessing-data/ | 2020-03-28T12:33:01 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.datamechanics.co |
3.1.4.11 LsarLookupSids (Opnum 15)
The LsarLookupSids method translates a batch of security principal SIDs to their name forms. It also returns the domains that these names are a part of.
NTSTATUS LsarLookupSids( [in] LSAPR_HANDLE PolicyHandle, [in] PLSAPR_SID_ENUM_BUFFER SidEnumBuffer, [out] PLSAPR_REFERENCED_DOMAIN_LIST* ReferencedDomains, [in, out] PLSAPR_TRANSLATED_NAMES TranslatedNames, [in] LSAP_LOOKUP_LEVEL LookupLevel, [in, out] unsigned long* MappedCount );
PolicyHandle: Context handle obtained by an LsarOpenPolicy or LsarOpenPolicy2 call.
SidEnumBuffer: Contains the SIDs to be translated. The SIDs in this structure can be that of users, groups, computers, Windows-defined well-known security principals, or domains.
ReferencedDomains: On successful return, contains the domain information for the domain to which each security principal belongs. The domain information includes a NetBIOS domain name and a domain SID for each entry in the list.
TranslatedNames: On successful return, contains the corresponding name form for security principal SIDs in the SidEnumBuffer parameter. It MUST be ignored on input.
LookupLevel: Specifies what scopes are to be used during translation, as specified in section 2.2.16.
MappedCount: On successful return, contains the number of names that are translated completely to their Name forms. It MUST be ignored on input.
Return Values: The following table contains a summary of the return values that an implementation MUST return, as specified by the message processing shown after the table.
The behavior required when receiving an LsarLookupSids message MUST be identical to that when receiving an LsarLookupSids2 message, with the following exceptions:
Elements in the TranslatedNames output structure do not contain a Flags field.
Due to the absence of LookupOptions and ClientRevision parameters, the RPC server MUST assume that LookupOptions is 0 and ClientRevision is 1.
The server MUST return STATUS_ACCESS_DENIED if neither of the following conditions is true: | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-lsat/eb7ac899-e697-4883-93de-1e60c7720c02?redirectedfrom=MSDN | 2020-03-28T13:09:10 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.microsoft.com |
Credits
Some of the components included in Splunk Enterprise are licensed under free or open source licenses. We wish to thank the contributors to those projects.
A complete listing of third-party software information for Splunk Enterprise is available as a PDF file for download: Splunk Enterprise 8.0 Third-party software credits.
Splunk Enterprise version 8.0.0 and later includes the Splunk Analytics Workspace app by default. The list of third-party software used in the app is included in the Splunk Enterprise 8.0 Third-party software credits PDF file.
This documentation applies to the following versions of Splunk® Enterprise: 8.0.0
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/8.0.0/ReleaseNotes/Credits | 2020-03-28T12:38:06 | CC-MAIN-2020-16 | 1585370491857.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Improving collection filtering
Locksmith supports hiding locked products across your shop, preventing them from appearing to unauthorized customers in collection listings, search results, and anywhere else product lists appear.
However, if a collection has some products which are locked, and some that aren't, the result can be a collection with empty or partially-empty pages. This is because Shopify only permits filtering products out of a collection page by page - there's no way to reshuffle products so that every page appears full.
If your collections don't contain a lot of products, you could try simply turning up the maximum number of products per page. This isn't a direct fix, but it can make pages appear less empty. And, if you have less than 50 products (the maximum in Shopify) in your collection, this can eliminate the issue completely.
If that doesn't work in your case, try these steps:
- Create versions of each collection that are geared toward each audience. For example, if you have a "Staff" collection that has some manager-only products, create one "Staff" collection with just non-manager products, and another "Staff" collection with all the manager-friendly products.
- Next, add links to all versions to your shop's navigation menus. We'll take care of filtering the links themselves in the next step, but for now, make sure your shop's navigation gives your visitors a way to get to the collection that's right for them.
- Finally, lock each collection, making sure to check the box labeled "Hide any links to this collection and its products in my shop's navigation menus". This will instruct Locksmith to only show a collection link to the visitor if they're qualified to open its lock.
Handling the "All" collection
Shopify generates a default collection called "All", located at the
/collections/all url of your shop. Out of the box, this collection contains your entire product catalog.
Because this collection is subject to the same conditions that are described above, it may be useful to override this collection with one that just contains the products in your shop that are public friendly.
To do that, simply create a new collection in your shop called "All", and manually specify the products (either individually or using conditions) that should be visible to the public. This will override the default collection, and visitors who open it will see normal, full collection pages, containing your public-friendly products.
Handling "Frontpage" collections, and other theme-specific product areas
Some themes include the ability to feature a collection of products on the frontpage, or elsewhere. Most themes don't support swapping collections based on the visitor, so it will require custom code to use audience-specific collections in these cases. This kind of thing would only work with specific key types such as customer tags or e-mail addresses.
Feel free to get in touch if you have questions about any of this! You can do that by just hitting the message icon on the bottom right of this screen ↘️ | https://docs.uselocksmith.com/article/228-improving-collection-filtering | 2020-03-28T11:19:10 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.uselocksmith.com |
Dymo Printer Settings for Check-In¶
While TouchPoint Check-In does work with Dymo printers, we do not recommend you using these, due to the cost of labels. However, should you choose to use a Dymo, below are steps for setting it up.
Printer Settings on your PC¶
- On your PC, go to Start > Devices and Printers. Then right-click on Dymo printer and select printing preferences.
- On the Layout tab, Select Orientation > landscape
- Click Advanced in the bottom right
- From the Paper size drop down, choose the correct label size. This number should be listed on the outside of the Dymo Label box. The two most common labels are the:
- White Address Label - 30572 or the 30252 (3 1/2” x 1 1/8”)
- White Shipping Label - 30573 or the 30256 (2 1/8” x 4”)
- Click OK on that advanced page, and then OK on the next page.
Printer Settings in Check-In¶
- When you start up Check-In, select the Dymo printer in the Printer drop down.
- Check the box Use Advanced Page Size
- Click the button from printer and that will pull the label size from the driver (100 = 1”).
- For the 3.5” x 1 1/8 inch label, it pulls 350 x 109. Adjust it to 335 Width x 100 Height, which seems to fit everything on the label.
- If you have another sized label, make a similar adjustment. Make adjustment of 5-10 increments down in size until you find the correct adjustment.
- If you need to adjust the alignment after printing a label, close Check-In and restart it. The previous settings will be saved, so just adjust the advanced page size height and width down (again - in increments of 5-10) to see how it looks. Continue doing that until the label prints with everything properly aligned (so none of the information is running off the side or bottom of the label). | http://docs.touchpointsoftware.com/Checkin/DymoPrinter.html | 2020-03-28T10:57:53 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.touchpointsoftware.com |
Are you often frustrated when you use FOSS code and can’t find the license or copyright holders so that you can do disclosure and attribution? Do you wish there was an easy way to get the data you need to be compliant? Are you worried when you start to use FOSS components that have ambiguous metadata?
If so, you’ll love ClearlyDefined, a community of people coming together to generate better metadata for FOSS projects. We’d love to work with you to hook our systems up to your workflow.
At ClearlyDefined, original projects
We are building a clearing house of FOSS metadata and we’d like you to consider consuming that data. If you are using FOSS code now, it may be the case that you are already doing some of your own scanning and discovery of license metadata for yourself. ClearlyDefined can provide some of the data you need so you don’t have to do all this work on your own anymore. Even better, the data is curated by a broad community. As a consumer you get API access to harvested and curated data on a range of FOSS projects.
ClearlyDefined is a diverse and inclusive community of technical experts who are passionate about quality FOSS. If you or a colleague are interested in this role, check out for more information. | https://docs.clearlydefined.io/roles/data-consumer | 2020-03-28T12:20:26 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.clearlydefined.io |
Lean Adoption in the Chemical Industry
As chemical companies continue to focus on attaining the next level of operational excellence, more and more are adopting or expanding their use of Lean or Lean Six Sigma across their organization. As this adoption matures, companies are looking for ways to standardize their improvement processes, accelerate their time-to-value in these programs and find better ways to measure and monitor the value they are getting for their investments. We’ve seen it happen in other industries adopting lean, and we are starting to see it in chemicals. A phrase that captures this scenario well is "process-rich and tools-poor". I certainly didn't coin the phrase, but it aptly describes the situation many chemical companies face.
One of the most effective ways to realize this value is by using software. Software can help standardize the improvement process providing teams with a consistent way to do work, and providing management with a consistent way to analyze results. Software can help accelerate the insight of practitioners by automating the analysis of the process and quickly focus teams in on the most impactful improvement opportunities. Software can also help companies manage the execution and performance of their program to quickly understand if they are getting the results desired. Finally, software can quickly bring teams together, even if they are separated by physically in order to better share work products, best practices, and results. Microsoft and our partners provide the software that helps companies realize these capabilities.
A partner of ours, the Orlando Software Group, provides some unique value in this space. I've worked with OSGI for a number of years, and I have seen them deliver a lot of value to their customers.
If you are a chemical company - or any company for that matter - looking to take your program to the next level, and are facing challenges with standardized work and analysis, quicker insight, and better deployment of the program across the organization, you should check out OSGI. If you want to view a 5 minute demo, go here. | https://docs.microsoft.com/en-us/archive/blogs/chemicals/lean-adoption-in-the-chemical-industry | 2020-03-28T12:37:34 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.microsoft.com |
PegaSys Plus Enterprise Ethereum Client
What is PegaSys Plus?
PegaSys Plus is a commercially-licensed Ethereum client built on Hyperledger Besu. Hyperledger Besu is an Ethereum client written in Java that runs private permissioned networks or public networks. PegaSys Plus extends Hyperledger Besu by providing additional enterprise features such as security configurations, event streaming, and advanced monitoring.
Why use PegaSys Plus?
PegaSys Plus is designed for enterprises that want to accelerate their blockchain solution to production quickly. Users gain all of the benefits and features of Hyperledger Besu like Solidity-based smart contracts, simple Ethereum-based digital asset models, and multiple consensus algorithms, along with additional features that ensure the security, reliability, and scalability of their blockchain solution.
PegaSys Plus includes functionality that addresses enterprise requirements in the following areas:
Security
Secure your data at rest by encrypting the blockchain’s internal database. Encryption keys are held securely in keystores or vaults (for example, Hashicorp Vault). This provides an extra layer of security in case hackers tamper with your infrastructure – even if they gain access to your data, it is inaccessible due to encryption.
Reliability
Monitor and capture metrics in real-time to view the health of validator nodes in the network. By capturing metrics like the last time a node produced a block or a decrease in performance, you can use real-time insights to resolve problems before they impact your business.
Visibility
Track blockchain events in real-time and set alerts to notify team members of specific events. Subscribe to any websocket-based event using a streaming platform (for example Apache Kafka or AWS Kinesis). This subscription type allows scalable and reliable event tracking without requiring a websocket subscription. Use the streaming platforms to set alerts or functionality triggers for blockchain events, and watch transactions as they arrive on the network, get included in blocks, or get dropped.
Need more information?
PegaSys Plus is designed to take your enterprise blockchain solution from proof of concept to production. To learn more about how PegaSys Plus features and vendor support can meet your enterprise’s requirements, contact us. | https://docs.plus.pegasys.tech/en/latest/ | 2020-03-28T12:19:09 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.plus.pegasys.tech |
You:
Now you need to create a rectangular grid of points separated 80 meters from each other:
Bemerkung.
Bemerkung:
Now you have a new column with plot names that are meaningful to you. For the systematic_plots_clip layer, change the field used for labeling to your new Plot_id field.:
..note:: The GPX format accepts only this CRS, if you select a different one, QGIS will give no error but you will get an empty file. Working with GPS Data in the QGIS User Manual.
Save your QGIS project now.. | https://docs.qgis.org/2.18/de/docs/training_manual/forestry/systematic_sampling.html | 2020-03-28T12:48:08 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.qgis.org |
method r
Documentation for method
r assembled from the following types:
class IO::Special
From IO::Special
(IO::Special) method r
method r(IO::Special: --> Bool)
The 'read access' file test operator, returns
True if and only if this instance represents the standard input handle(
<STDIN>).
class IO::Path
(IO::Path) method r
Defined as:
method r(--> Bool)
Returns
True if the invocant is a path that exists and is accessible. The method will
fail with
X::IO::DoesNotExist if the path points to a non-existent filesystem entity. | https://docs.raku.org/routine/r | 2020-03-28T11:38:17 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.raku.org |
How to Write a FunctionalTest
FunctionalTest test your applications
Controller instances and anything else which requires a web request. The
core of these tests are the same as
SapphireTest unit tests but add several methods for creating SS_HTTPRequest
and receiving SS_HTTPResponse objects. In this How To, we'll see how to write a test to query a page, check the
response and modify the session within a test.
mysite/tests/HomePageTest.php
<?php class HomePageTest extends FunctionalTest { /** * Test generation of the view */ public function testViewHomePage() { $page = $this->get('home/'); // Home page should load.. $this->assertEquals(200, $page->getStatusCode()); // We should see a login form $login = $this->submitForm("LoginFormID", null, array( 'Email' => '[email protected]', 'Password' => 'wrongpassword' )); // wrong details, should now see an error message $this->assertExactHTMLMatchBySelector("#LoginForm p.error", array( "That email address is invalid." )); // If we login as a user we should see a welcome message $me = Member::get()->first(); $this->logInAs($me); $page = $this->get('home/'); $this->assertExactHTMLMatchBySelector("#Welcome", array( 'Welcome Back' )); } } | https://docs.silverstripe.org/en/3/developer_guides/testing/how_tos/write_a_functionaltest/ | 2020-03-28T11:38:42 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.silverstripe.org |
TOPICS×
Adobe Primetime authentication and Adobe Primetime DRM
Adobe Primetime authentication ( ) provides user/device authentication and authorization across multiple content providers. The user must have a valid cable TV or satellite TV subscription.
Adobe Primetime authentication can be used along with Adobe Primeitme DRM for protecting the media content. In this scenario, The video player (SWF) can load another SWF called the Access Enabler , which is hosted by Adobe Systems. The Access Enabler is used to connect to the Adobe Primetime authentication Primetime authentication Primetime DRM Primetime authentication provides a media token validator Java library that can be deployed to a server. When using the Primetime DRM server for content protection, you can integrate the media token validator with a Primetime DRM server-side plug-in to automatically issue a generic license after successfully validating the media token. The content is then streamed from the CDN servers to the client. To acquire a content license, the short-lived media token can be submitted to the Primetime DRM server, where the validity of the token is verified and a license can be issued.
The long-lived AuthN token is used generally by the Access Enabler across all content developers to represent the AuthN for that MVPD subscriber. In addition, the Primetime DRM Server and Token Verifier can be operated by the CDN or a service provider on behalf of the content provider. | https://docs.adobe.com/content/help/en/primetime/drm/drm-sdk-5-3-1/additional-deployment-scenarios/adobe-pass-and-adobe-access.html | 2020-07-02T13:03:50 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['/content/dam/help/primetime.en/help/digital-rights-management/drm-sdk-overview/adobe-access-components/additional-deployment-scenarios/assets/AdobePass_web.png',
None], dtype=object) ] | docs.adobe.com |
Landing Page
To set a page to be full width, first select full page width under the layout settings below where you added your text. Click Update.
Next, if you'd like the page to be full-width and not 2/3 of the page, type "full title-center" (no quotes) as shown in the above screenshot in the custom body class box. | https://docs.designbybloom.co/article/176-landing-page | 2020-07-02T12:16:19 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.designbybloom.co |
Customers Menu
The Customers menu provides access to customer account management tools, and gives you the ability to see who is currently online in your store.
Customers Menu
Display the Customers menu
On the Admin sidebar, click Customers.
Menu options
All Customers
Lists all customers who have registered for an account with your store or were added by the administrator.
Now Online
Lists all customers and visitors who are currently online in your store.
Customer Groups
The customer group determines which discounts are available to shoppers and the tax class for the purchase.
Segments
Dynamically display content and promotions to specific customers based on properties such as customer address, order history, shopping cart contents, and more.
Companies
Lists all active company accounts and pending requests, regardless of status setting, and provides the tools needed to create and manage company accounts. | https://docs.magento.com/user-guide/customers/customers-menu.html | 2020-07-02T13:33:31 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.magento.com |
Refer to Amazon Sales Channel 4.0+ for updated documentation.
Onboarding: Product Listing Actions
Step 2 Options for Listing Settings
If you are managing a store that is in “Active” or “Inactive” status, see Product Listing Actions.
The Product Listing Actions section defines how your catalog interacts with Amazon. These settings include:
Indicate if your Magento catalog products that meet Amazon eligibility requirements are automatically sent to your Amazon Seller Central account to create new listings.
Set the default handling time for an order. This value defines the number of days generally required for you to process and ship an order. For example, if someone selects 2-day shipping, that shipping transit time does not start until processing completes and packages are handed off to a carrier. The total delivery time is (handling time + transit time + any holidays).
These settings are part of your store’s Listing Settings. Update these configurations during onboarding through the Listing Settings step.
To configure Product Listing Actions settings:
Expand the Product Listing Actions section.
For Automatic List Action (required), choose an option in drop-down:
Automatically List Eligible Products: Choose when you want your Magento catalog products (that meet Amazon’s eligibility requirements) to automatically push to Amazon and create new Amazon Listings.
Do Not Automatically List Eligible Products: Choose when you want to manually select your eligible Magento catalog products and create new Amazon Listings. When selected, catalog products that meet your listing criteria and contain all required information display on the Ready to List tab for manual publishing. The Ready to List tab only displays when this option is selected.
For Default Handling Time (required), enter a numerical amount of lead time days needed before shipment. The default value is 2 days.
This default handing time value is only effective for Amazon listings created through Amazon Sales Channel. Any Amazon listings that were created in your Amazon Seller Central account use the default handling time set for the listing in Amazon.
When complete, continue to the Third Party Listings section.
Product Listing Actions | https://docs.magento.com/user-guide/sales-channels/amazon/ob-product-listing-actions.html | 2020-07-02T13:09:45 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['/user-guide/images/images/sales-channels/amazon/onboarding-step-listing-settings.png',
None], dtype=object) ] | docs.magento.com |
3.2.4.1.1 RpcReplyOpenPrinter (Opnum 58)
RpcReplyOpenPrinter establishes a context handle from a print server to a print client.<388> The server uses the RPC context handle returned by this method to send notification data to the client machine.
DWORD RpcReplyOpenPrinter( [in, string] STRING_HANDLE pMachine, [out] PRINTER_HANDLE* phPrinterNotify, [in] DWORD dwPrinterRemote, [in] DWORD dwType, [in, range(0,512)] DWORD cbBuffer, [in, unique, size_is(cbBuffer), disable_consistency_check] BYTE* pBuffer );
pMachine: A string that specifies the print client computer name. It is synonymous with pName, as specified in Print Server Name Parameters (section 3.1.4.1.4).
phPrinterNotify: A pointer to a remote printer RPC context handle that is used by a print server to send notifications to a print client. RPC context handles are specified in [C706].
dwPrinterRemote: A value that is supplied to the server by the dwPrinterLocal parameter of a corresponding call to RpcRemoteFindFirstPrinterChangeNotification (section 3.1.4.10.3) or RpcRemoteFindFirstPrinterChangeNotificationEx (section 3.1.4.10.4). This value MUST NOT be zero.
dwType: A value that MUST be 0x00000001. client MUST validate parameters by verifying that the pMachine parameter corresponds to the current machine.
This method SHOULD execute without further access checks.
If parameter validation fails, the client MUST fail the operation immediately and return a nonzero error response to the server. Otherwise, the client MUST process the message as follows: | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-rprn/7fcd3036-d45a-4ec7-b081-f2b860e66676 | 2020-07-02T13:43:02 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.microsoft.com |
Circle Datetime
This feature is supported in wearable applications only.
The circle datetime component extends the datetime component (
elm_datetime) by visualizing the selected field. If a rotary event is activated by the
eext_rotary_object_event_activated_set()function, the circle datetime increases or decreases the value of the selected field in the
elm_datetime component through the clockwise or counter-clockwise rotary event.
For more information, see the Efl Extension Circle Datetime API.
Figure: Circle datetime component
Adding a Circle Datetime Component
To create a circle datetime component, use the
eext_circle_object_datetime_add() function:
- The
elm_datetimehandle must be passed as the first parameter.
- If a circle surface is passed as the second parameter, a circle object connected with a circle surface is created, and it is rendered by the circle surface. If you pass
NULLinstead of a circle surface, the new circle object is managed and rendered by itself.
Evas_Object *datetime; Evas_Object *circle_datetime; datetime = elm_datetime_add(parent); circle_datetime = eext_circle_object_datetime_add(datetime, surface);
The circle datetime component is created with the
default style.
Activating a Rotary Event
To activate or deactivate the circle datetime, use the
eext_rotary_object_event_activated_set() function:
eext_rotary_object_event_activated_set(circle_datetime, EINA_TRUE);
If the second parameter is
EINA_TRUE, the circle datetime can receive rotary events.
Configuring the Circle Properties
To configure the circle properties of the circle datetime:
You can disable the circle object within the circle datetime component using the following functions:
eext_circle_object_disabled_set()
eext_circle_object_disabled_get()
Related Information
- Dependencies
- Tizen 4.0 and Higher for Wearable | https://docs.tizen.org/application/native/guides/ui/efl/wearable/component-circle-datetime/ | 2020-07-02T12:36:02 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['../media/circle_datetime.png', 'Circle datetime component'],
dtype=object) ] | docs.tizen.org |
Rendering
The SDK is the “back end” of the editor. It handles image rendering and image modification. If you’re interested in building your own UI, or not using a UI at all, this is the way to go.
Rendering using the SDK requires an input image as well as a
canvas HTML element that it should
render to. Let’s create the
canvas element first:
<canvas id="canvas" />
Now let’s create an
Image, load it, instantiate the SDK and render the image to the canvas:
window.onload = function () { const canvas = document.getElementById('canvas') const image = new Image() image.addEventListener('load', () => { const sdk = new PhotoEditorSDK('webgl', { canvas: canvas, image: image }) sdk.render() }) image.src = 'image.png' }
The canvas should now display your image. | https://docs.photoeditorsdk.com/guides/html5/v3_6/concepts/rendering | 2020-07-02T12:01:26 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.photoeditorsdk.com |
Oracle RDBMS
Oracle is an Industry-leading enterprise Relational Database Management System.
This integration installs and configures Telegraf and a custom Python script to send Oracle metrics into Wavefront. Telegraf is a light-weight server process capable of collecting, processing, aggregating, and sending metrics to a Wavefront proxy. The custom script uses the Dynamic Performance views that Oracle provides to gather metrics.
In addition to setting up the metrics flow, this integration also sets up a dashboard.
To see a list of the metrics for this integration, select the integration from.
Oracle RDBMS. Create wavefront User in Oracle
CREATE USER wavefront IDENTIFIED BY <yourpassword>; GRANT select_catalog_role TO wavefront; GRANT CREATE SESSION TO wavefront;
Step 3. Install Python
- Make sure python 3.6 or higher is installed on the Telegraf agent server.
- Install python package cx_Oracle. Use the following snippet.
python3 -m pip install cx_Oracle --upgrade
Step 4. Create a Script to Gather Oracle RDBMS Metrics
- Download wavefront_oracle_metrics.py onto your Telegraf agent server.
- Test the script execution using this command:
python wavefront_oracle_metrics.py
You should get a response similar to this:
usage: wavefront_oracle_metrics.py [-h] -u USER -p PASSWD -s SID wavefront_oracle_metrics.py: error: the following arguments are required: -u/--user, -p/--passwd, -s/--sid
If the script is not executing, adjust the file permission and the Python path.
- Download exec_oracle_python.sh onto your Telegraf agent server.
- Edit the script to change the environment variables, and python execution path for your Telegraf agent server.
- Change the
wavefront password&
sidparameters in exec_oracle_python.sh file.
# Example. /usr/bin/python "/home/oracle/Documents/wavefront_oracle_metrics.py" -u "wavefront" -p "wavefront123" -s "orcl"
- Note down the full paths for files downloaded and saved from steps 1 & 3 above.
Step 5. Configure Telegraf Exec Input Plugin
For Linux Telegraf agent server.
Create a file called
oracle.conf in
/etc/telegraf/telegraf.d and enter the following snippet:
[[inputs.exec]] commands = ["/home/oracle/Documents/exec_oracle_python.sh"] timeout = "5s" data_format = "influx"
NOTE: use the path of the exec_oracle_python.sh.
For Windows Telegraf agent server.
Edit the
telegraf.conf file located at
Program Files\Telegraf and enter the following snippet:
[[inputs.exec]] commands = [ 'python "C:\Wavefront\wavefront_oracle_metrics.py" -u "wavefront" -p "<password>" -s "<sid>"' ] timeout = "5s" data_format = "influx"
Change the
password and
sid in the code snippet.
NOTE: use the path of the wavefront_oracle_metrics.py.
Step 6. Restart Telegraf
For Linux
Run
sudo service telegraf restart to restart your Telegraf agent.
For Windows
Restart the Telegraf service using the Windows Services Management Console or from the command prompt:
net stop telegraf net start telegraf | https://docs.wavefront.com/oracle.html | 2020-07-02T12:01:43 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['images/Oracle-DB-metrics.png', 'images/Oracle-DB-metrics.png'],
dtype=object) ] | docs.wavefront.com |
path you can then pass to
--include in verbatim to only restore the single file or directory.
There are case insensitive variants of of
--exclude and
--include called
--iexclude and
--iinclude. These options will behave the same way but
ignore the casing of paths. When finished, quit with Ctrl-c or umount the mountpoint.
Mounting repositories via FUSE is not possible on OpenBSD, Solaris/illumos
and Windows. For Linux, the
fuse kernel module needs to be loaded. For
FreeBSD, you may need to install FUSE and load the kernel module (
kldload
fuse)..
Printing files to stdout¶
Sometimes it’s helpful to print files to stdout so that other programs can read the data directly. This can be achieved by using the dump command, like this:
$ restic -r /srv/restic-repo dump latest production.sql | mysql
If you have saved multiple different things into the same repo, the
latest
snapshot may not be the right one. For example, consider the following
snapshots in a repo:
$ restic -r /srv/restic-repo snapshots ID Date Host Tags Directory ---------------------------------------------------------------------- 562bfc5e 2018-07-14 20:18:01 mopped /home/user/file1 bbacb625 2018-07-14 20:18:07 mopped /home/other/work e922c858 2018-07-14 20:18:10 mopped /home/other/work 098db9d5 2018-07-14 20:18:13 mopped /production.sql b62f46ec 2018-07-14 20:18:16 mopped /home/user/file1 1541acae 2018-07-14 20:18:18 mopped /home/other/work ----------------------------------------------------------------------
Here, restic would resolve
latest to the snapshot
1541acae, which does
not contain the file we’d like to print at all (
production.sql). In this
case, you can pass restic the snapshot ID of the snapshot you like to restore:
$ restic -r /srv/restic-repo dump 098db9d5 production.sql | mysql
Or you can pass restic a path that should be used for selecting the latest snapshot. The path must match the patch printed in the “Directory” column, e.g.:
$ restic -r /srv/restic-repo dump --path /production.sql latest production.sql | mysql
It is also possible to
dump the contents of a whole folder structure to
stdout. To retain the information about the files and folders Restic will
output the contents in the tar format:
$ restic -r /srv/restic-repo dump /home/other/work latest > restore.tar | https://restic.readthedocs.io/en/v0.9.6/050_restore.html | 2020-07-02T13:07:01 | CC-MAIN-2020-29 | 1593655878753.12 | [] | restic.readthedocs.io |
Configuring content for translation
Translation services can process the following types of page content:
- Content of editable regions
- Values of page fields (entered on the Form tab of the Pages application)
- The properties of web parts and widgets.
-? | https://docs.kentico.com/k11/multilingual-websites/configuring-translation-services/configuring-content-for-translation | 2018-09-18T20:33:02 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.kentico.com |
You can set applications from the Mac or the virtual machines to be used to open different categories of URLs.
About this task
You can open the following categories of URLs:
RSS feeds (feed)
File transfers (FTP, SFTP)
Web pages (HTTP, HTTPS)
Mail (mailto)
VMRC (VMware Remote Console)
Newsgroups (news)
Remote sessions (Telnet, SSH)
If you make a Web browser the default from within a virtual machine, the default setting for how Fusion handles URLs does not change. The next time you start or resume the virtual machine, or change the URL preferences, the Fusion settings overwrite the changes that you make in the guest machine.
Procedure
- Select a virtual machine in the Virtual Machine Library window and click Settings.
- Under System Settings in the Settings window, click Default Applications.
- Click Configure.
- Set or change the preference. | https://docs.vmware.com/en/VMware-Fusion/10.0/com.vmware.fusion.using.doc/GUID-2626A5BE-469F-424A-9FC5-8C32E913C1A6.html | 2018-09-18T19:45:49 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.vmware.com |
CudaPackage¶
Different from other packages,
CudaPackage does not represent a build
system. Instead its goal is to simplify and unify usage of
CUDA in other
packages.
Provided variants and dependencies¶
CudaPackage provides
cuda variant (default to
off) to enable/disable
CUDA, and
cuda_arch variant to optionally specify the architecture.
It also declares dependencies on the
CUDA package
depends_on('cuda@...')
based on the architecture as well as specifies conflicts for certain compiler versions.
Usage¶
In order to use it, just add another base class to your package, for example:
class MyPackage(CMakePackage, CudaPackage): ... def cmake_args(self): spec = self.spec if '+cuda' in spec: options.append('-DWITH_CUDA=ON') cuda_arch = spec.variants['cuda_arch'].value if cuda_arch is not None: options.append('-DCUDA_FLAGS=-arch=sm_{0}'.format(cuda_arch[0])) else: options.append('-DWITH_CUDA=OFF') | https://spack.readthedocs.io/en/latest/build_systems/cudapackage.html | 2018-09-18T18:58:00 | CC-MAIN-2018-39 | 1537267155676.21 | [] | spack.readthedocs.io |
This is a guide to many pandas tutorials, geared mainly for new users.
pandas own 10 Minutes to pandas
More complex recipes are in the Cookbook.
For more resources, please visit the main repository.
This guide is a comprehensive introduction to the data analysis process using the Python data ecosystem and an interesting open dataset. There are four sections covering selected topics as follows:
Practice your skills with real data sets and exercises. For more resources, please visit the main repository.
© 2008–2012, AQR Capital Management, LLC, Lambda Foundry, Inc. and PyData Development Team
Licensed under the 3-clause BSD License. | http://docs.w3cub.com/pandas~0.22/tutorials/ | 2018-09-18T20:12:54 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.w3cub.com |
. will execute the archetype and generate the code. If this is your first time running this command, Maven will download will also be the default Mule version used for the generated artifact.
mvn org.mule.tools:mule-project-archetype:3.1.1:create ...
The plug-in prompts you to answer several questions about the project you are writing. These may vary according to the options you select. An example of the output is shown below. will add the namespaces for those transports to the configuration file. | https://docs.mulesoft.com/mule-user-guide/v/3.5/creating-project-archetypes | 2017-03-23T06:09:32 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.mulesoft.com |
Kongregate Integration
Integrate Kongregate purchases for web-browser games
Introduction
In this guide, you will learn how to set up Kongregate purchases with Gamedonia. It is a pretty straightforward process, let's see how it's done.
Kongregate purchases are only available for the Gamedonia Actionscript web SDK.
Gamedonia calls will only work under a valid Gamedonia session. This means there needs to be a user logged in with Gamedonia. The main exceptions are, for obvious reasons, creating and logging users in.
Kongregate setup
First of all, to be able to purchase items in Kongregate, you need to create items to purchase. You can do it visiting the site of your game at Kongregate and adding /items to the URL like this:
There you can manage your items for purchase. Let's create a new one by clicking on New Item.
Set up the next fields:
- Identifier - This will be referenced fom your code.
- Name - The public name of the item.
- Description - A short description of the product.
- Price - The price of the product.
Now set up any optional fields you may need and click on Create.
Then you will see your created product with its price and attributes. Copy the product identifier, because you will need it later.
Before you're done at the Kongregate site, we first need to obtain your Kongregate API key. You can do that at the URL of your Kongregate game adding /api in the end. The URL would be like this:
There you can see the some keys for your game. Copy the one called API key. You will need it later.
Dashboard setup
Go to the Gamedonia Dashboard and open the Social Networks > Settings tab. Here you will just need to set up a single field called Game API Key with the API key of your game you got from the Kongregate website. Then click on the Update button just below the Kongregate Settings and your API Key will be stored in Gamedonia.
Code
Now on to the actual process of purchasing an item in Kongregate using the Gamedonia Actionscript web SDK.
Initialize and request
First you need to initialize Gamedonia with the right Kongregate options. Let's see how it's done:
import com.gamedonia.sdk.GDOptions; import com.gamedonia.sdk.Gamedonia; import com.gamedonia.sdk.GamedoniaInAppPurchases; import com.gamedonia.sdk.GamedoniaStoreEvent; import com.gamedonia.sdk.GamedoniaUsers; import com.gamedonia.sdk.social.GamedoniaKongregate; protected function init():void { // Create an options object with Kongregate API key // This way GamedoniaSDK will initialize the Kongregate API var options:GDOptions = new GDOptions(); options.inAppPurchases = true; options.stage = stage; options.kong_api_key = "your_kongregate_API_key"; // Allow the API access to this SWF Security.loadPolicyFile("xmlsocket://webapi.gamedonia.com:1843"); Gamedonia.initializeWithOptions("your_gamedonia_api_key", "your_gamedonia_game_secret", "", "v1", options, handleInit); // Add event listeners for all 3 purchase events GamedoniaInAppPurchases.instance.addEventListener(GamedoniaStoreEvent.PRODUCTS_REQUESTED, onProductsRequested); GamedoniaInAppPurchases.instance.addEventListener(GamedoniaStoreEvent.PRODUCT_PURCHASED_OK, onProductPurchasedOk); GamedoniaInAppPurchases.instance.addEventListener(GamedoniaStoreEvent.PRODUCT_PURCHASED_KO, onProductPurchasedKo); } protected function handleInit(response:Object):void { GamedoniaUsers.authenticate(Gamedonia.CredentialsType_KONGREGATE, null, handleLogin); } protected function handleLogin(success:Boolean):void { var productsList:Array = new Array( "product_id" ); GamedoniaInAppPurchases.instance.requestProducts( productsList ); }
As you can see, what we first did is initialize Gamedonia adding some Kongregate configuration using the options parameter. It's important that you set your Kongregate API key that you obtained before at the Kongregate website.
Once the initialization is done, you need to login your user using Kongregate credentials. You can do this using the authenticate method. Then you will want to request the products you want to have available from your Kongregate item list. The id of each product has to be exactly the same as the one you set up on the Kongregate website.
Event callbacks
Next thing to do is to process the event listener callbacks. You can handle each event ( PRODUCTS_REQUESTED, PRODUCT_PURCHASED_OK, PRODUCT_PURCHASED_KO) as you prefer. The code would look like this:
private function onProductsRequested(event:GamedoniaStoreEvent):void { // Your products_requested processing } private function onProductPurchasedOk(event:GamedoniaStoreEvent):void { // Your purchase success processing } private function onProductPurchasedKo(event:GamedoniaStoreEvent):void { // Your purchase fail processing }
Buy a product
To actually buy a product in Kongregate you just need a single line of code:
GamedoniaInAppPurchases.instance.buyProduct("product_id");
It's important that you set exactly the same product id in your code as in the Kongregate web. When buyProduct is called from Kongregate, a pop-up should appear with the details of the transaction. The user can confirm the purchase by clicking on the Checkout button, or he may close and cancel the transaction.
Then the transaction event of success or failure will trigger and you will be able to manage each one adequately. | https://docs.gamedonia.com/guides/kongregate | 2017-03-23T06:10:30 | CC-MAIN-2017-13 | 1490218186780.20 | [array(['/sites/default/files/images/kongregate-items.jpg', None],
dtype=object)
array(['/sites/default/files/images/kongregate-creacion_prod.jpg', None],
dtype=object)
array(['/sites/default/files/images/kongregate-p_creado.jpg', None],
dtype=object)
array(['/sites/default/files/images/kongregate-apis.jpg', None],
dtype=object)
array(['/sites/default/files/images/kongregate-dashboard.jpg', None],
dtype=object)
array(['/sites/default/files/images/kongregate-transaction.jpg', None],
dtype=object) ] | docs.gamedonia.com |
Handsontable performs multiple calculations to display the grid properly. The most demanding actions are performed on load, change and scroll events. Every single operation decreases the performance, but most of them are unavoidable.
We use Performance Lab to measure the execution times in various configurations. Some tests have shown that there are methods which may potentially boost the performance of your application. Those work only in certain cases, but we hope they can be successfully applied to your app as well.
Set constant size
You can try setting a constant size for your table's columns. This way, Handsontable won't have to calculate the optimal width for each column. In order to do that, define the column widths in the colWidths property of your Handsontable instance configuration, for example:
var hot = new Handsontable(obj, { // other options colWidths: [50, 150, 45] });
For more information, see our documentation.
As Handsontable won't do the column width calculations, you need to make sure, that your table contents fit inside the columns with the provided widths.
Turn off autoRowSize and/or autoColumnSize
You can tweak the value of the
autoRowSize and
autoColumnSize options.
They allow you to define the amount of width/height-related calculations
made during the table's initialization.
For more information, see our documentation for rows and columns.
Define the number of pre-rendered rows and columns
You can explicitly specify the number of rows and columns to be rendered outside of the visible part of the table. In some cases you can achieve better results by setting a lower number (as less elements get rendered), but sometimes setting a larger number may also work well (as less operations are being made on each scroll event). Tweaking these settings and finding the sweet spot may improve the feeling of your Handsontable implementation.
For more information, see our documentation for rows and columns.
Rule of thumb: don't use too much styling
Changing your background, font colors etc. shouldn't lower the performance, however adding too many CSS animations, transitions and other calculation-consuming attributes may impact the performance, so keep them it a reasonable level. | https://docs.handsontable.com/pro/1.5.1/tutorial-performance-tips.html | 2017-03-23T06:14:06 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.handsontable.com |
Multiple mlab scene models example¶
Example showing a dialog with multiple embedded scenes.
When using several embedded scenes with mlab, you should be very careful always to pass the scene you want to use for plotting to the mlab function used, elsewhere it uses the current scene. In this example, failing to do so would result in only one scene being used, the last one created.
The trick is to use the ‘mayavi_scene’ attribute of the MlabSceneModel, and pass it as a keyword argument to the mlab functions.
For more examples on embedding mlab scenes in dialog, see also: the examples Mlab interactive dialog example, and Lorenz ui example, as well as the section of the user manual Embedding a Mayavi scene in a Traits dialog.
Python source code:
multiple_mlab_scene_models.py
import numpy as np from traits.api import HasTraits, Instance, Button, \ on_trait_change from traitsui.api import View, Item, HSplit, Group from mayavi import mlab from mayavi.core.ui.api import MlabSceneModel, SceneEditor class MyDialog(HasTraits): scene1 = Instance(MlabSceneModel, ()) scene2 = Instance(MlabSceneModel, ()) button1 = Button('Redraw') button2 = Button('Redraw') @on_trait_change('button1') def redraw_scene1(self): self.redraw_scene(self.scene1) @on_trait_change('button2') def redraw_scene2(self): self.redraw_scene(self.scene2) def redraw_scene(self, scene): # Notice how each mlab call points explicitely to the figure it # applies to. mlab.clf(figure=scene.mayavi_scene) x, y, z, s = np.random.random((4, 100)) mlab.points3d(x, y, z, s, figure=scene.mayavi_scene) # The layout of the dialog created view = View(HSplit( Group( Item('scene1', editor=SceneEditor(), height=250, width=300), 'button1', show_labels=False, ), Group( Item('scene2', editor=SceneEditor(), height=250, width=300, show_label=False), 'button2', show_labels=False, ), ), resizable=True, ) m = MyDialog() m.configure_traits() | http://docs.enthought.com/mayavi/mayavi/auto/example_multiple_mlab_scene_models.html | 2017-03-23T06:12:14 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.enthought.com |
Using Stetl¶
This section explains how to use Stetl for your ETL. It assumes Stetl is installed and you are able to run the examples. It may be useful to study some of the examples, especially the core ones found in the examples/basics directory. These examples start numbering from 1, building up more complex ETL cases like (INSPIRE) transformation using Jinja2 Templating.
In addition there are example cases like the Dutch Topo map (Top10NL) ETL in the examples/top10nl directory .
The core concepts of Stetl remain pretty simple: an input resource like a file or a database table is mapped to an output resource (also a file, a database, a remote HTTP server etc) via one or more filters. The input, filters and output are connected in a pipeline called a processing chain or Chain. This is a bit similar to a current in electrical engineering: an input flows through several filters, that each modify the current. In our case the current is (geospatial) data. Stetl design follows the so-called Pipes and Filters Architectural Pattern.
Stetl Config¶
Stetl components (Inputs, Filters, Outputs) and their interconnection (the Pipeline/Chain)
are specified in a Stetl config file. The file format follows the Python
.ini file-format.
To illustrate, let’s look at the example 2_xslt.
This example takes the input file
input/cities.xml and transforms this file to a valid GML file called
output/gmlcities.gml. The Stetl config file looks as follows.
[etl] chains = input_xml_file|transformer_xslt|output_file
Most of the sections in this ini-file specify a Stetl component: an Input, Filter or Output component.
Each component is specified by its (Python) class and per-component specific parameters.
For example
[input_xml_file] uses the class
inputs.fileinput.XmlFileInput reading and parsing the
file
input/cities.xml specified by the
file_path property.
[transformer_xslt] is a Filter that
applies XSLT with the script file
cities2gml.xsl that is in the same directory. The
[output_file]
component specifies the output, in this case a file.
These components are coupled in a Stetl Chain using the special .ini section
[etl]. That section specifies one
or more processing chains. Each Chain is specified by the names of the component sections, their interconnection using
a the Unix pipe symbol “|”.
So the above Chain is
input_xml_file|transformer_xslt|output_file. The names
of the component sections like
[input_xml_file] are arbitrary.
Note: since v1.1.0 a datastream can be split (see below) to multiple
Outputs using
() like :
[etl] chains = input_xml_file|transformer_xslt|(output_gml_file)(output_wfs)
In later versions also combining
Inputs and
Filter-splitting will be provided.
Configuring Components¶
Most Stetl Components, i.e. inputs, filters, outputs, have properties that can be configured within their
respective [section] in the config file. But what are the possible properties, values and defaults?
This is documented within each Component class using the
@Config decorator much similar to the standard Python
@property, only with
some more intelligence for type conversions, defaults, required presence and documentation.
It is loosely based on and Bruce Eckel’s with a fix/hack for Sphinx documentation.
See for example the
stetl.inputs.fileinput.FileInput documentation.
For class authors: this information is added
via the Python Decorators much similar to
@property. The
stetl.component.Config
is used to define read-only properties for each Component instance. For example,
class FileInput(Input): """ Abstract base class for specific FileInputs, use derived classes. """ # Start attribute config meta # Applying Decorator pattern with the Config class to provide # read-only config values from the configured properties. @Config(ptype=str, default=None, required=False) def file_path(self): """ Path to file or files or URLs: can be a dir or files or URLs or even multiple, comma separated. For URLs only JSON is supported now. Required: True Default: None """ pass @Config(ptype=str, default='*.[gxGX][mM][lL]', required=False) def filename_pattern(self): """ Filename pattern according to Python glob.glob for example: '\*.[gxGX][mM][lL]' Required: False Default: '\*.[gxGX][mM][lL]' """ pass # End attribute config meta def __init__(self, configdict, section, produces): Input.__init__(self, configdict, section, produces) # Create the list of files to be used as input self.file_list = Util.make_file_list(self.file_path, None, self.filename_pattern, self.depth_search)
This defines two configurable properties for the class FileInput.
Each
@Config has three parameters:
p_type, the Python type (
str,
list,
dict,
bool,
int),
default (default value if not present) and
required (if property in mandatory or optional).
Within the config one can set specific config values like,
[input_xml_file] class = inputs.fileinput.XmlFileInput file_path = input/cities.xml
This automagically assigns
file_path to
self.file_path without any custom code and assigns the
default value to
filename_pattern. Automatic checks are performed: if
file_path (
required=True) is present, if its type is string.
In some cases type conversions may be applied e.g. when type is
dict or
list. It is guarded that the value is not
overwritten and the docstrings will appear in the auto-generated documentation, each entry prepended with a
CONFIG tag.
Running Stetl¶
The above ETL spec can be found in the file
etl.cfg. Now Stetl can be run, simply by typing
stetl -c etl.cfg
Stetl will parse
etl.cfg, create all Components by their class name and link them in a Chain and execute
that Chain. Of course this example is very trivial, as we could just call XSLT without Stetl. But it becomes interesting
with more complex transformations.
Suppose we want to convert the resulting GML to an ESRI Shapefile. As we cannot use GDAL
ogr2ogr on the input
file, we need to combine XSLT and ogr2ogr. See example
3_shape. Now we replace the output
by using outputs.ogroutput.Ogr2OgrOutput, which can execute any ogr2ogr command, converting
whatever it gets as input from the previous Filter in the Chain.
[etl] chains = input_xml_file|transformer_xslt|output_ogr_shape [input_xml_file] class = inputs.fileinput.XmlFileInput file_path = input/cities.xml [transformer_xslt] class = filters.xsltfilter.XsltFilter script = cities2gml.xsl # The ogr2ogr command-line. May be split over multiple lines for readability. # Backslashes not required in that case. [output_ogr_shape] class = outputs.ogroutput.Ogr2OgrOutput temp_file = temp/gmlcities.gml ogr2ogr_cmd = ogr2ogr -overwrite -f "ESRI Shapefile" -a_srs epsg:4326 output/gmlcities.shp temp/gmlcities.gml
Using Docker¶
A convenient way to run Stetl is via a Docker image. See the installation instructions at Install with Docker. A full example can be viewed in the Smart Emission project:.
In the simplest case you run a Stetl Docker container from your own built image or the Dockerhub-provided one, justb4/stetl:latest basically as follows:
sudo docker run -v <host dir>:<container dir> -w <work dir> justb4/stetl:latest <any Stetl arguments>
For example within the current directory you may have an
etl.cfg Stetl file:
WORK_DIR=`pwd` sudo docker run -v ${WORK_DIR}:${WORK_DIR} -w ${WORK_DIR} justb4/stetl:latest -c etl.cfg
A more advanced setup would be (network-)linking to a PostGIS Docker image like kartoza/postgis:
# First run Postgis, remains running, sudo docker run --name postgis -d -t kartoza/postgis:9.4-2.1 # Then later run Stetl STETL_ARGS="-c etl.cfg -a local.args" WORK_DIR="`pwd`" sudo docker run --name stetl --link postgis:postgis -v ${WORK_DIR}:${WORK_DIR} -w ${WORK_DIR} justb4/stetl:latest ${STETL_ARGS}
The last example is used within the SmartEmission project. Also with more detail and keeping all dynamic data (like PostGIS DB), your Stetl config and results, and logs within the host. For PostGIS see: and Stetl see:.
Stetl Integration¶
Note: one can also run Stetl via its main ETL class:
stetl.etl.ETL.
This may be useful for integrations in for example Python programs
or even OGC WPS servers (planned).
Reusable Stetl Configs¶
What we saw in the last example is that it is hard to reuse this etl.cfg when we have for example a different input file or want to map to different output files. For this Stetl supports parameter substitution. Here command line parameters are substituted for variables in etl.cfg. A variable is declared between curly brackets like {out_xml}. See example 6_cmdargs.
[etl] chains = input_xml_file|transformer_xslt|output_file [input_xml_file] class = inputs.fileinput.XmlFileInput file_path = {in_xml} [transformer_xslt] class = filters.xsltfilter.XsltFilter script = {in_xsl} [output_file] class = outputs.fileoutput.FileOutput file_path = {out_xml}
Note the symbolic input, xsl and output files. We can now perform this ETL using the stetl -a option in two ways. One, passing the arguments on the commandline, like
stetl -c etl.cfg -a "in_xml=input/cities.xml in_xsl=cities2gml.xsl out_xml=output/gmlcities.gml"
Two, passing the arguments in a properties file, here called etl.args (the name of the suffix .args is not significant).
stetl -c etl.cfg -a etl.args
Where the content of the etl.args properties file is:
# Arguments in properties file in_xml=input/cities.xml in_xsl=cities2gml.xsl out_xml=output/gmlcities.gml
This makes an ETL chain highly reusable. A very elaborate Stetl config with parameter substitution can be seen in the Top10NL ETL.
Connection Compatibility¶
During ETL Chain processing Components typically pass data to a next
stetl.component.Component .
A
stetl.filter.Filter Component both consumes and produces data, Inputs produce data and
Outputs only consume data.
Data and status flows as
stetl.packet.Packet objects between the Components. The type of the data in these Packets needs
to be compatible only between two coupled Components.
Stetl does not define one unifying data structure, but leaves this to the Components themselves.
Each Component provides the type of data it consumes (Filters, Outputs) and/or produces (Inputs, Filters). This is indicated in its class definition using the consumes and produces object constructor parameters. Some Components can produce and/or consume multiple data types, like a single stream of records or a record array. In those cases the produces or consumes parameter can be a list (array) of data types.
During Chain construction Stetl will check for compatible formats when connecting Components. If one of the formats is a list of formats, the actual format is determined by:
- explicit setting: the actual input_format and/or output_format is set in the Component .ini configuration
- no setting provided: the first format in the list is taken as default
Stetl will only check if these input and output-formats for connecting Components are compatible when constructing a Chain.
The following data types are currently symbolically defined in the
stetl.packet.FORMAT class:
any- ‘catch-all’ type, may be any of the types below.
etree_doc- a complete in-memory XML DOM structure using the
lxmletree
etree_element- each Packet contains a single DOM Element (usually a Feature) in
lxmletree format
etree_feature_array- each Packet contains an array of DOM Elements (usually Features) in
lxmletree format
geojson_feature- as
structbut following naming conventions for a single Feature according to the GeoJSON spec:
geojson_collection- as
structbut following naming conventions for a FeatureCollection according to the GeoJSON spec:
ogr_feature- a single Feature object from an OGR source (via Python SWIG wrapper)
ogr_feature_array- a Python list (array) of a single Feature objects from an OGR source
record- a Python
dict(hashmap)
record_array- a Python list (array) of
dict
string- a general string
struct- a JSON-like generic tree structure
xml_doc_as_string- a string representation of a complete XML document
xml_line_stream- each Packet contains a line (string) from an XML file or string representation (DEPRECATED)
Many components, in particular Filters, are able to transform data formats.
For example the XmlElementStreamerFileInput can produce an
etree_element, a subsequent XmlAssembler can create small in-memory etree_doc s that
can be fed into an XsltFilter, which outputs a transformed etree_doc. The type any is a catch-all,
for example used for printing any object to standard output in the
stetl.packet.Component.
An etree_element may also be interesting to be able to process single features.
Starting with Stetl 1.0.7 a new
stetl.filters.formatconverter.FormatConverterFilter class provides a Stetl Filter
to allow almost any conversion between otherwise incompatible Components.
TODO: the Packet typing system is still under constant review and extension. Soon it will be possible to add new data types and converters. We have deliberately chosen not to define a single internal datatype like a “Feature”, both for flexibility and performance reasons.
Multiple Chains¶
Usually a complete ETL will require multiple steps/commands. For example we need to create a database, maybe tables and/or making tables empty. Also we may need to do postprocessing, like removing duplicates in a table etc. In order to have repeatable/reusable ETL without any manual steps, we can specify multiple Chains within a single Stetl config. The syntax: chains are separated by commas (steps are sill separated by pipe symbols).
Chains are executed in order. We can even reuse the specified components from within the same file. Each will have a separate instance within a Chain.
For example in the Top10NL example we see three Chains:
[etl] chains = input_sql_pre|schema_name_filter|output_postgres, input_big_gml_files|xml_assembler|transformer_xslt|output_ogr2ogr, input_sql_post|schema_name_filter|output_postgres
Here the Chain input_sql_pre|schema_name_filter|output_postgres sets up a PostgreSQL schema and creates tables. input_big_gml_files|xml_assembler|transformer_xslt|output_ogr2ogr does the actual ETL and input_sql_post|schema_name_filter|output_postgres does some PostgreSQL postprocessing.
Chain Splitting¶
In some cases we may want to split processed data to multiple
Filters or
Outputs.
For example to produce output files in multiple formats like GML, GeoJSON etc
or to publish converted (Filtered) data to multiple remote services (SOS, SensorThings API)
or just for simple debugging to a target
Output and
StandardOutput.
See issue and the Chain Split example.
Here the Chains are split by using
() in the ETL Chain definition:
# Transform input xml to valid GML file using an XSLT filter and pass to multiple outputs. # Below are two Chains: simple Output splitting and splitting to 3 sub-Chains at Filter level. [etl] chains = input_xml_file | transformer_xslt |(output_file)(output_std), input_xml_file | (transformer_xslt|output_file) (output_std) (transformer_xslt|output_std) [output_std] class = outputs.standardoutput.StandardOutput | http://stetl.readthedocs.io/en/latest/using.html | 2017-03-23T06:06:55 | CC-MAIN-2017-13 | 1490218186780.20 | [] | stetl.readthedocs.io |
- Alerts and Monitoring >
- Alerts >
- Manage Alert Configurations
Manage Alert Configurations¶
On this page
Overview¶
An alert configuration defines the conditions that trigger an alert and the alert’s notification methods. This tutorial describes how to create and manage the alert configurations for a specified group. To create and manage global alert configurations, see Manage Global Alerts.
Default Alert Configurations¶
Ops Manager creates the following alert configurations for a group automatically upon creation of the group:
If you enable Backup, Ops Manager creates the following alert configurations for the group, if they do not already exist:.
Ops Manager will fill in the default values automatically when a user selects that option when creating an alert configuration. If the key, token, or URL that is used to send the notification becomes invalid, Ops, Ops Ops, Ops Manager cancels the open alerts whether or not they have been acknowledged and sends no further notifications.
Disable or Enable an Alert Configuration¶
When you disable an alert configuration, Ops: | https://docs.opsmanager.mongodb.com/v3.4/tutorial/manage-alert-configurations/ | 2017-03-23T06:13:26 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.opsmanager.mongodb.com |
This page will explain the different features of the Advanced tab..
NowSecure Lab allows you to quickly run any linux command and command-line applications on the Santoku Operating System by using the Linux Shell.
The shell will automatically starts in the assessment directory where all of the NowSecure Lab. | https://docs.nowsecure.com/lab-workstation/step-by-step-guide/advanced/ | 2017-03-23T06:18:10 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.nowsecure.com |
Estimating partition size
Determining how much data your DataStax or Cassandra partitions can hold.
For efficient operation, partitions must be sized within certain limits in DataStax Enterprise and Apache Cassandra™. Two measures of partition size are the number of values in a partition and the partition size on disk. The practical limit of cells per partition is two. | https://docs.datastax.com/en/landing_page/doc/landing_page/planning/planningPartitionSize.html | 2017-03-23T06:10:24 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.datastax.com |
in our marketplace. Get in touch for more details.
Last but not least, since v0.15.0 we support ECMAScript 6 and follow the Airbnb JavaScript style guide so those standards are required for the new plugins.
Handsontable currently supports the following features:
- Arrows
- Classes
- Enhanced Object Literals
- Template Strings
- Destructing
- Default + Rest + Spread
- Let + Const
- Iterators + For..Of
- Generators
- Unicode (partialy)
- Modules
- Module Loader
- Map + Set + WeakMap (our shim) + WeakSet
- Proxy
- Symbol
- Math + Number + String + Object APIs
- Binary and Octal Literals
- Promises | https://docs.handsontable.com/pro/1.5.1/tutorial-custom-plugin.html | 2017-03-23T06:13:50 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.handsontable.com |
Online Help
Viewing discovered installations
You can view the computers where each of the software products you manage in the Software Catalog have been discovered.
To view the computers where a software product was discovered:
In the Software Catalog, double-click the software product to track. The Software Product [Product Name] window opens.
On the Discovered Installation tab, view the list of discovered installations of this product. You can select a record and click Show Details to view the details of the audit snapshot in the Audit Snapshot Viewer.
Click OK. | https://docs.alloysoftware.com/alloydiscovery/help/administrative-settings/software-catalog/viewing-discovered-installations.htm | 2022-09-24T22:32:28 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.alloysoftware.com |
Disabling the self-provisioners role
Disabling self-provisioners role¶
By default, when a user authenticates with Openshift via Oauth, it is part of the
self-provisioners group. This group provides the ability to create new projects. On CentOS CI we do not want users to be able to create their own projects, as we have a system in place where we create a project and control the administrators of that project.
To disable the self-provisioner role do the following as outlined in the documentation[1]. subjects that the self-provisioners role applies to.
oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}'
Verify the change occurred successfully
oc describe clusterrolebinding.rbac self-provisioners Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- ---------
When the cluster is updated to a new version, unless we mark the role appropriately, the permissions will be restored after the update is complete.
Verify that the value is currently set to be restored after an update:
oc get clusterrolebinding.rbac self-provisioners -o yaml
apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" ...
We wish to set this
rbac.authorization.kubernetes.io/autoupdate to
false. To patch this do the following.
oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }'
Resources¶
- [1] | https://docs.infra.centos.org/operations/ci/disabling_self_provisioner_role/ | 2022-09-24T21:46:47 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.infra.centos.org |
Foreword
This tutorial was provided by community member Liam Tart (PogoP), and is published here with permission.
Thank you, Liam!
This tutorial will explain how master materials and material instancing can be used to quickly and efficiently build a set of tileable materials that can be applied to meshes in your scene. By texturing your meshes using tileable materials, you do not have to rely on baking from a high poly model and you can rapidly change the look and feel of your environment.
This tutorial has been written alongside the production of the Sci-fi Bunk Room scene on the Unreal Launcher Learn tab, so it is recommended to download that in order to see this use of material instancing in practice.
This tutorial will show you how to set up an efficient master material that can be instanced to make a variety of different material types, such as various metals and plastics, which can then be applied to different material IDs on your mesh. This master material will also use a number of switches to enable you to turn specific functions on/off depending on the material in question, such as parallax and detail normal maps.
Before starting this tutorial, it is recommended that you read the Materials docs to get a basic understanding of how materials and material instancing work. This tutorial will show how to apply these concepts to a working environment.
Goals
The end goal of setting up this master material is to create a material that can be instanced and give access to a number of parameters that will allow us to tweak our materials quickly. You can put whatever options and features you like into your master material, but as a base, it is useful to allow the option to input specific albedo, roughness, metallic, and normal maps, and also have fine control over these features by adding in an albedo tint, and a roughness and metallic value.
The screenshot below shows just how much flexibility this will give your environments, allowing you to quickly change the way your scene looks by tweaking a few values.
Master Material Setup
We will start by creating your master material. This is what all of the materials in your scene will be instanced from, so it needs to be set up correctly and efficiently, as it will likely get quite complex. One advantage to using a master material is that if you want to add more features to your materials later on, you need only apply it to the master material and it will propagate down to the instances.
Here is an example showing the final master material setup. All of the input textures and constants in this material are parameterized to allow access in the instanced material. To do this, you can right-click on the value or map and click Convert to parameter, and then you can assign it a name. This will allow you to change it quickly in the material instance.
Also, the Static Switch parameter is used extensively in this material, which allows you to choose whether or not to use certain setups such as detail normal maps, parallax, etc. This will make your final material instance cheaper, as you are able to switch off unneeded features. More about this can be read here: Parameter Expressions
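If it helps to reason about why switched-off features cost nothing, here is a loose analogy in ordinary C++ (not engine or shader source). The UseDetailRoughness? switch name comes from this tutorial, while RoughnessValue and the multiply in the switched branch are placeholders for illustration only. A static switch behaves like a compile-time constant whose unused branch is stripped from the instance's compiled shader, whereas scalar and vector parameters stay editable per instance without a recompile:

```cpp
// Loose analogy only, not UE4 source: why a disabled static switch adds no cost.
float EvaluateRoughness(float DetailRoughnessSample)
{
    constexpr bool bUseDetailRoughness = false; // static switch: baked in per instance;
                                                // flipping it yields a different shader
    float RoughnessValue = 0.35f;               // scalar parameter: tweak live in the
                                                // instance, no recompile needed

    if constexpr (bUseDetailRoughness)          // the switched-off branch is compiled
    {                                           // out, so the detail sample and multiply
        return RoughnessValue * DetailRoughnessSample; // cost nothing when disabled
    }
    return RoughnessValue;
}
```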
Albedo Setup
In this section, you want to be able to choose between using a specific albedo map or a simple color value, which will allow you to rapidly prototype materials by specifying a simple value to get the correct albedo. You can then bring that value through to Photoshop as a base for your final albedo map. This will also apply to the Roughness Setup section of this tutorial.
As stated previously, this setup allows you to choose between using an albedo map or a color value for your base color. You choose between the two by using the static switch parameter called UseDiffuseMap? in your instanced material. If this is set to true, you can then input a specific albedo texture, which is multiplied with a DiffTint constant value that allows you to tint the color of the specified albedo map. If you do not use an albedo map, you can define the albedo value using DiffuseValue.
To the left of the DiffMap texture input, you can see that a node has been plugged into the UV slot. This connects to the Parallax part of the master material. You could add a UV tiling setup into this as well, which would allow you to control the tiling of your material in UE4 rather than in your 3d modeling package.
The following example shows the flexibility of this system. On the left, the material is using a single color for the albedo map. In the middle, it is using just an albedo map. On the right, it is using an albedo map that has been tinted orange. This shows how you can use the same maps to get a lot of variety in the engine.
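For readers who find it easier to see the logic outside the node graph, the albedo section boils down to the following. This is a minimal conceptual sketch in plain C++ (not engine code); the Color type and SampledDiffMap argument are stand-ins, while the parameter names match the ones described above. The metallic, roughness, occlusion, and emissive sections that follow use the same map-or-value pattern, just with scalar values instead of a color.

```cpp
// Conceptual sketch only (plain C++, not engine code) of the albedo map-or-value switch.
struct Color { float R, G, B; };

Color Multiply(const Color& A, const Color& B)
{
    return { A.R * B.R, A.G * B.G, A.B * B.B };
}

Color GetAlbedo(bool bUseDiffuseMap, const Color& SampledDiffMap,
                const Color& DiffTint, const Color& DiffuseValue)
{
    // UseDiffuseMap? static switch: texture sample tinted by DiffTint when true,
    // the flat DiffuseValue color when false.
    return bUseDiffuseMap ? Multiply(SampledDiffMap, DiffTint) : DiffuseValue;
}
```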
Metallic Setup
This is very simple and lets you choose between using a metallic map or a single value to define your materials' metallic properties.
This is useful because it allows you to use a single value, which would be good for a tiling metal material for example, but also gives you the option to specify a metallic map if you want certain parts of your material to be non-metal whilst others are metallic.
The following example shows this in action. The material on the left uses a metallic value of 0, the middle a value of 1, and the one on the right uses a specific metallic map.
Roughness Setup
Similar to the albedo setup, this allows you to choose between using a roughness map or a single value to define your overall roughness. This is great for quickly assigning roughness values to materials and then later taking this value through to Photoshop to create a unique roughness map for your material.
The following example shows this working. The material on the left uses a roughness value of 1, the middle uses a roughness value of 0, and the one on the right uses a specific roughness map.
The UseDetailRoughness? static switch lets you use a detail roughness map which is plugged into the DetailTiling constant which is multiplied with a TexCoord, to allow you to tile your detail roughness map. This value also plugs into the detail normal map. This is useful for materials that require a texture that can only be viewed up close and can have a very high tiling rate.
The following example shows this in action. The material on the left does not use a detail roughness/normal map, the one in the middle uses a detail map with a tiling rate of 1, and the one on the right uses a tiling rate of 4.
Occlusion Setup
This setup uses a static switch parameter to choose between using an occlusion map or a single value. Most materials will use an occlusion value of 1 which is a simple white color, however this gives you the option to use a specific ambient occlusion map if your material requires it, for example if this material is for a uniquely unwrapped model that has darker areas where light may not reach.
Emissive Setup
This setup lets you choose between using an emissive on your material or not (by default it is set to false to increase performance). There is also another static switch used which lets you choose to specify a specific emissive map. If you choose to not use a specific emissive map, you can use the EmissiveScale parameter to set your emissive value. However, this EmissiveScale parameter will also be used if you do specify an EmissiveMap, allowing you greater control of the brightness of your material.
The following example shows how this emissive setup works. The material on the left has an emissive scale set to 0, the middle set to 1, and the one on the right is set to 2.
Normal Setup
The normal map setup is fairly simple even if it may look slightly more complex. By default, this material will default to a flat normal map texture, though you could add a switch to turn the normal map off and use a constant value instead, which may save on texture memory.
This setup also allows you to combine a normal map with a detail normal map. The DetailTiling constant is multiplied with a TexCoord which allows you to control the tiling rate. The DetailNormalStrength lets you control the strength of the detail normal map.
Parallax Setup
This parallax setup lets you use a BumpOffset node which gives your materials an illusion of depth if a heightmap is applied. More about this can be read here (Bump Offset).
This setup is plugged into the UV slots of all the aforementioned sections, meaning if parallax is enabled then all your maps will receive the effect.
The following example shows how the Parallax effect works. On the left, Parallax is disabled, and on the right, Parallax is set to 0.007. It's a fairly subtle effect but may work well depending on the material in question.
Material Instancing
Now that your master material is setup, you can right click on your master material and crate a material instance.
You can then name this material and organize it in whichever folder you like. I'd recommend setting up folders per material type, ie, a folder for all your metals, a folder for plastics, etc.
You can now double click on your newly created material instance and you will see the following:
As you can see, all of the parameterized values and static switches that we set up in the master material are visible here in the material instance. Enabling switches such as UseDiffuseMap? will enable you to input your own diffuse/albedo map, and will also enable certain parameters tied to this, such as the diffuse multiply option in this case.
Here is my material setup for the Sci-fi Bunk scene. You can see that I have created a library of tiling materials with various colors and material properties, which can easily be applied to meshes in the scene.
Application to Meshes
Now that you have a material library set up, you can easily apply these tiling materials to your meshes. As you can see in the following image, this mesh doesn't use any uniquely baked normal maps, but instead relies on tiling materials to build up a believable surface. When you have an entire library of meshes built this way, you can see the advantages to using material instances, as you are able to tweak one single value and it will propagate throughout your scene. For example, if I wanted to change the wood texture used in this scene, I'd only have to change it once in the wood material instance, and my entire scene would update. This is far quicker than having to re-texture several unique diffuse maps.
Extra Features
Now that your base master material and material library is set up, you can easily expand your master material and give it more features. For example, in this material setup, there is no option for vertex painting. You could add a basic lerp function combined with vertex paint to allow you to, for example, paint dirt on to your material surfaces. More information on this can be read here: Create a Material for 2-Way Texture Blending.
That should be all you need to get a basic master material setup for your scene. Remember to check out the Sci-fi Bunk Room scene to see exactly how this can be applied. Good luck! | https://docs.unrealengine.com/4.26/en-US/Resources/Community/SciFiBunk_MaterialInstancing/ | 2022-09-24T23:40:54 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/Image_0_Left.jpg',
'Material Instance Color: Orange'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/Image_0_Right.jpg',
'Material Instance Color: White'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_1.jpg',
'image_1.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_2.jpg',
'image_2.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_3.jpg',
'image_3.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_4.jpg',
'image_4.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_6.jpg',
'image_6.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_8.jpg',
'image_8.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_9.jpg',
'image_9.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_11.jpg',
'image_11.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_12.jpg',
'image_12.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/Image_13_Left.jpg',
'Parallax Disabled'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/Image_13_Right.jpg',
'Parallax Enabled'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_14.jpg',
'image_14.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_15.jpg',
'image_15.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_16.jpg',
'image_16.jpg'], dtype=object)
array(['./../../../../Images/Resources/Community/SciFiBunk_MaterialInstancing/image_17.jpg',
'image_17.jpg'], dtype=object) ] | docs.unrealengine.com |
Container Activity
Description
A Container activity provides a way to organize a workflow and encapsulate logic. An author can drill into a container to reveal the logic within it. You can also nest Container activities.
Usage
Consider a workflow that asks a user for input, performs processing and displays a result. At a high level there are only three steps to the workflow. Each of these could be a Container activity. At this level the workflow appears simple. Each container represents a subworkflow with the logic to implement that step. There might be several activities required to prompt a user for input. The container hides this complexity from the rest of the workflow.
Inputs
This activity has no. | https://docs.vertigisstudio.com/workflow/latest/help/Content/wf5/help/activities/container.htm | 2022-09-24T23:06:42 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.vertigisstudio.com |
4.0. In the new version, we fixed some exacting bugs and changed the design. The detail on the changelog is given below.
Updating
We're happy to inform you that we've launched our new version 4.0. If you have already purchased it then you may be interested to update the system. Before performing the update please take a backup of your current system. In order to update Productify to the new version first download the new source code from your Envato account. Then delete all the files except the .env for the current Productify root folder. Finally, upload the new files to the root directory. If required then update your database info from the .env file. We are always available for support so feel free to inform us if you need any help with the update. Please stay with us :)
Server >= 8.0.0
- BCMath PHP Extension
- Ctype PHP Extension
- Fileinfo PHP extension
- JSON PHP Extension
- Mbstring PHP Extension
- OpenSSL PHP Extension
- PDO PHP Extension
- XML PHP Extension
Note: By default, most of the popular hosting provider has all of the above requirements. If you are having any issue to install the application feel free to inform us we will try to assist you with the installation :)
Installation
The installation of the Productify is super easy and similar like other PHP and laravel based systems. Please follow the below steps to install our Productify system.
Note: Here we will show the cpanel (Hosting) installation process. If you want to install the system in your cloud server or if you are interested to install it using FileZilla of if you want something else then let us know we will help you with the installation..
The below.
In this step, you need to grant the directed permissions respectively for instance.
Open your Cpanel, and open your root directory then go to the Storage folder there you will find two folders(framework, logos) and the other(cache) folder you will found in the Bootstrap folder. Select any of the three folders (For example framework) right-click on this, and now click on the change permission, enter the relevant vale 775, and click on the change permission on the bottom again. In the same manner, grant permission for the rest of the two folders by right-clicking on the folder and selecting change permission.
You may check the below screenshot:
In the next step, you need to complete the environment settings wizard.
In this step continue entering some, your hosting information like DB host, DB Post, DB Name, User of DB and Password then press Setup Application button to next to Application Step Or Edit .env file for install DB.
You need to enter the following information for environment setup.
You need to enter the following information for database setup.
For classic editor you will get all of the settings(.env file) together
You need to enter the following information for mail setup. The mail is required for changing the password in case you forgot your password.
A defult admin user has created with the following login credentials.
Note: If you are having any problem with the installation process then feel free to contact us([email protected]) and we will help you with the installation.
Dashboard
The page that you are going to after login to the system. From this page, you will be able to see an overall overview of the system. From here using the last side nave you can navigate to any other page. After the fresh installation,.
Note: Red star marked fields are required and you can't leave them empty.
You need to enter the following information for system setup.
Other Settings:
The things that you need to know about payment methods, processing steps, sizes settings, showrooms, unit settings.
Profile
This is the admin profile page. From this page, you can change your name, email, password, and profile picture. Name and email fields are required here and you can't leave them blank. Check the attached screenshot.
Staff
Peoples who are working for your company and involved in your manufacturing process. Each Staff is related to products module. Single or multiple staff will be involved in a processing step.
Suppliers
Peoples who supplied the raw materials for your company. Each purchase will belong to a supplier. You can store name, email, phone number, company name, address etc of a supplier.
Users
People who are able to access the system. You can add multiple users and each user will be able to access the system. You can also define admin and general users. General users will not be able to add another user.! | https://docs.codeshaper.tech/productify/ | 2022-09-24T23:51:32 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['assets/media/installation/i-8.png', None], dtype=object)
array(['assets/media/installation/st3-issues.png', None], dtype=object)
array(['assets/media/backend-guide/dash-00.png', None], dtype=object)
array(['assets/media/backend-guide/1.png', None], dtype=object)] | docs.codeshaper.tech |
Deploying Mac Packages
Deploying package files to computers is a powerful workflow. This is a fairly technical process, so reach out to our support team by emailing [email protected] if you have any questions.
The Jamf Fundamentals plan is required to deploy packages. For more information, see Changing Your Service Plan.
Packages need to be signed and built as a distribution package. For more information, see Building and Signing Mac Packages. If you already have a PKG on your local computer, deploying an existing PKG with Jamf Now is easy.
Packages must be 20 GB or smaller.
- Log in to Jamf Now.
- Click Apps.
- Click Add an App.
- Click Upload Your App.
- Drag your custom .pkg file onto the upload page or click Browse to search for it on your computer.
When the file has been uploaded, it will appear in your Apps page and is ready for deployment via a Blueprint.
You cannot uninstall Mac packages from computers with Jamf Now. To remove packaged apps from a Mac, we recommend contacting the app's developer to see if a package uninstaller exists for the specific packaged application. You can then upload that uninstaller in to Jamf Now and deploy it to the Mac to uninstall the app. | https://docs.jamf.com/jamf-now/documentation/Deploying_Mac_Packages.html | 2022-09-24T21:57:07 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.jamf.com |
Choose your operating system:
Windows
macOS
Linux
Enabling the Google PAD Plugin
-
Excluding Chunks From the OBB File
-
Recommended Implementation
Google Play Asset Delivery (Google PAD) is the Google Play Store's solution for delivering asset packs to applications once they are installed on a device. This solution is intended for use alongside the Android App Bundle distribution format. While the App Bundle distributes a customized .apk to the end user, handling code and binaries for the initial installation, the Play Asset Delivery system provides models, textures, sounds, shaders, and other large asset files separately from the .apk. This makes it possible for apps distributed through Google Play to manage the space taken up by content by delivering content as it is needed.
For more general information about Google Play Asset Delivery, refer to the official Android documentation at
Unreal Engine 4.25 and newer includes Google PAD integration through a plugin, making this system simple to implement in your own projects. This plugin provides a function library with calls for managing downloads and requesting information from the Play Asset Delivery system.
UGooglePADFunctionLibrary is available in both C++ and Blueprint.
For additional information on shipping with Android App Bundles, see the page on Packaging Android Projects.
Enabling the Google PAD Plugin
The Google PAD plugin can be found in the Unreal Editor's Plugins window, under the Android section. It is enabled by default in Unreal Engine 4.25.0 and newer. To use Google PAD, you must be using Android App Bundles as your packaging format.
To fully enable the plugin, open Project Settings and navigate to Plugins > GooglePAD > Packaging. Click the Enable Plugin checkbox, and the module will be available on startup for Android projects.
If you want to use Google PAD for install-time assets, you also need to navigate to Platforms > Android > APK Packaging and disable Package Data inside APK. The main .obb file will then be delivered as an install-time asset pack automatically.
Click to enlarge image
Creating Asset Packs
Asset packs for Google PAD are packaged inside Android App Bundle builds, and managed by the Google Play Store when they are uploaded. This section explains how to package and organize asset packs for inclusion in your App Bundle.
Asset Pack Delivery Modes
Chunks are Unreal Engine's format for organizing external assets. Chunk 0 represents the base installation of the game, while all other chunks are .pak files containing assets outside of the game's main installation.
To utilize Google PAD, you must group your game's assets into chunks, and you must group those chunks into asset packs based on the delivery mode you want to use for them. Google Play Asset Delivery supports the following delivery modes for asset packs:
You can create a total of 50 asset packs per application. You can only have one Install-Time and one Fast-Follow asset pack per project, but can use as many On-Demand asset packs as you want as long as you do not exceed this limit.
Creating Chunks
Open your **Project Settings and navigate to Project > Packaging and make sure Generate Chunks is enabled.
Now you can organize assets into chunks using the asset manager or primary asset labels.
You can create a primary asset label by right-clicking in the Content Browser and clicking Miscellaneous > Data Asset.
Click to enlarge image.
You will be prompted to pick a Data Asset Class. Select PrimaryAssetLabel, then click Select.
Click to enlarge image.
Name your new Primary Asset Label, then double-click it to edit its information.
Click to enlarge image.
Enable Label Assets in my Directory to designate all the assets in the same folder as belonging to this asset label. Set the Chunk ID to any value higher than 0 to designate which chunk the assets belonging to this label will belong to. You can also add assets directly to the asset label using the Explicit Assets list.
The Asset Manager is located in Project Settings, under Game > Asset Manager.
Click to enlarge image.
Here you can designate rules that your project will use to procedurally group assets into chunks. See the Cooking and Chunking page for more details.
Once you have designated which assets belong to specific chunks, packaging your project will output your chunks as .pak files. You can find them in your project folder under
Saved\StagedBuilds[PlatformName][ProjectName]\Content\Paks.
Click to enlarge image.
Including Chunks in your App Bundle Build
Each delivery mode for Play Asset Delivery has different requirements for incorporating chunks into App Bundles.
For Install-Time assets, you do not need to make any changes.
For Fast-Follow or On-Demand assets, select the .pak files you want to include and move them to your project's
Build/Android/gradle/assetpacks directory. Each delivery mode has a different subfolder:
Fast-Follow asset packs must be placed in
Build/Android/gradle/assetpacks/fast-follow/[assetpackname]/src/main/assets
On-Demand asset packs must be placed in
Build/Android/gradle/assetpacks/on-demand/[assetpackname]/src/main/assets
Replace [assetpackname] with the name of the asset pack that the chunks will be bundled into. You can create multiple different named asset packs with different sets of .pak files. However, the names of your asset packs must be unique, and they may not be re-used between fast-follow and on-demand. This name will be the one that you use when querying for asset packs with the API.
Finally, you need to add a build.gradle file in the asset pack folder containing the following code:
apply plugin: 'com.android.asset-pack' def fileparts = projectDir.absolutePath.replaceAll('\\\\', '/').tokenize('/') def assetPackName = fileparts[fileparts.size()-1] def assetPackType = fileparts[fileparts.size()-2] assetPack { packName = assetPackName dynamicDelivery { deliveryType = assetPackType instantDeliveryType = assetPackType } }
After you have met these requirements, package the project as an app bundle again, and it will include each of these asset packs in your build. When you upload the App Bundle to the Google Play Store, the asset packs will be available for download using the Google PAD API.
This workflow will be streamlined further in Unreal Engine 4.26.
Excluding Chunks From the OBB File
By default, .pak files are included in the OBB file generated alongside your project. To exclude them, you need to open your
DefaultEngine.ini file and filter them using OBB filters under Android Runtime Settings.
[/Script/AndroidRuntimeSettings.AndroidRuntimeSettings] +ObbFilters="-*pakchunk1*" +ObbFilters="-*pakchunk2*" +ObbFilters="-*pakchunk3*" +ObbFilters="-*pakchunk4*" +ObbFilters="-*pakchunk5*" +ObbFilters="-*pakchunk6*" +ObbFilters="-*pakchunk7*" +ObbFilters="-*pakchunk8*" +ObbFilters="-*pakchunk9*"
In the example above, the OBB filters will catch any .pak files containing any of the text provided. For instance,
+ObbFilters="-*pakchunk1*" will omit any pak file whose name contains "pakchunk1".
API Reference
The following sections detail the available functions in the Google PAD function library and their usage.
Requests and Error Handling
All requests in the Google PAD function library return an
EGooglePADErrorCode denoting whether or not the operation succeeded and, if not, what specific error prevented the request from being completed. The possible error codes are as follows:
In addition to this return value, request functions will have an out variable providing the requested information. If you get a result of
AssetPack_NO_ERROR, you can proceed with the provided information normally. Otherwise, you should use flow control to react to the provided error code appropriately.
Getting the Location of Downloaded Files
The function
GetAssetPackLocation fetches the location of an asset pack that has been downloaded and caches information about it locally. If the asset is available, it will output an integer handle that can be used to access the cached information as needed.
Calling
GetAssetsPath and providing the location handle will output a string with the asset path for the desired asset pack.
GetStorageMethod will output an
EGooglePADStorageMethod stating the way the asset pack is stored on the user's device. Once you know the asset path and storage method, you can then use appropriate calls to access the assets.
The possible storage methods are as follows:
Once you are done using the above information, you must pass the location handle to
ReleaseAssetPackLocation to free the cached location info.
If
GetAssetPackLocation returns an error code of
AsetPack_UNAVAILABLE or
AssetPack_DOWNLOAD_NOT_FOUND, then the desired asset pack is unavailable and must be downloaded.
Requesting Information about Asset Packs
The function
RequestInfo takes in a
TArray of asset pack names and returns an
EGooglePADErrorCode denoting their current status. RequestInfo is not required to initiate a download, but can be used to determine whether remote asset packs are valid.
Requesting or Cancelling a Download
The function
RequestDownload takes in a
TArray of strings representing the names of the asset packs you would like to download, then sends a request to the remote service to begin downloading those asset packs. If
RequestDownload shows no errors, the asset packs will be downloaded and transferred to the app asynchronously in the background.
Because this functionality is asynchronous, the
RequestDownload function does not return information about the downloaded asset pack, other than an error code denoting whether the request was successful. You must use the functions detailed in the Monitoring Download Status section below to check for the download's current status, and to access the asset pack itself you must use
GetAssetPackLocation once the download is complete.
The function
CancelDownload also uses a list of asset pack names, and will cancel downloading the designated asset packs.
Getting Cellular Data Status
The function
ShowCellularDataConfirmation will prompt the user for whether they want to download data using their cellular network. If the prompt is already present, you can use GetShowCellularDataConfirmationStatus to return an
EGooglePADCellularDataConfirmStatus stating whether or not the user has approved the download.
Results of
AssetPack_CONFIRM_UNKNOWN and
AssetPack_CONFIRM_PENDING mean the user has not given approval yet, and the application should stand by until approval is given.
A result of
AssetPack_CONFIRM_USER_CANCELLED means that the user has chosen not to allow the use of cellular data, and downloads should not be permitted at this time.
A result of
AssetPack_CONFIRM_USER_APPROVED means that the user has given express approval to use cellular data and downloads should be allowed to proceed. Additionally, If this function returns an
EGooglePADErrorCode with a result of
AssetPack_NETWORK_UNRESTRICTED, the user is on their wi-fi network and does not need to use cellular data, therefore downloads should be permitted without the need to continue checking this function.
Monitoring Download Status
GetDownloadState will locally cache the download status of an asset pack and return a download handle providing access to the cached information. This function takes in the name of the asset pack that you want to download and outputs the handle as an integer. You should keep the download handle cached so that you can continue to monitor the download, otherwise, you will need to re-acquire it.
With a valid download handle, you can call
GetDownloadStatus to return the status of the desired asset pack as an
EGooglePADDownloadStatus. This enum represents the status of a download as one of several possible states, which are as follows:
You can also use the download state handle to call
GetBytesDownloaded, which will return the number of bytes currently downloaded to the user's device, and
GetTotalBytesToDownload, which will return the total target size of the download.
Once you have finished using the download status information, you must call
ReleaseDownloadState and provide the handle to release the cached download information from memory.
Removing Asset Packs
The function
RequestRemoval takes in an asset pack name and removes the specified asset pack from the user's device asynchronously. The asset pack's removal status can be monitored with
GetDownloadStatus as above.
Recommended Implementation
Implementation of the Google PAD API can be modeled as a cycle of different states for each download.
Implementing your solution in a custom GameState class will enable you to track a download continuously even as you change scenes and game modes. Alternatively, you may want to implement your solution in a front-end game mode that loads on startup so that you can perform necessary patches and updates before starting the game. The exact details of your solution will depend on your project's specific needs for updating assets. | https://docs.unrealengine.com/5.0/en-US/using-google-play-asset-delivery-in-unreal-engine/ | 2022-09-24T23:54:12 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['./../../Images/sharing-and-releasing-projects/android/packaging-and-publishing/GooglePlayAssetDeliveryReference/GooglePADPlugin.jpg',
'Google PAD Plugin'], dtype=object)
array(['./../../Images/sharing-and-releasing-projects/android/packaging-and-publishing/GooglePlayAssetDeliveryReference/GooglePADPlugin_2.jpg',
'Google PAD Plugin Options'], dtype=object)
array(['./../../Images/sharing-and-releasing-projects/android/packaging-and-publishing/GooglePlayAssetDeliveryReference/GenerateChunks.jpg',
'Activate Generate Chunks in the Project > Packaging section'],
dtype=object) ] | docs.unrealengine.com |
About the VertiGIS Studio Web Installation Help
The VertiGIS Studio Web Installation Help explains how to install the On-Premises version of VertiGIS Studio Web and Web Designer. For information about using Web Designer to configure Web apps, see the VertiGIS Studio Web Designer Help.
For documentation about manual configuration and custom development, see the VertiGIS Studio Developer Center.
This help provides instructions for configuring VertiGIS Studio software only. For help installing or configuring third-party software, consult the documentation provided by the third-party vendor:
For documentation about ArcGIS Enterprise, see Esri's online help.
Conventions Used in this Guide
This guide uses the following conventions:
User input and interface references: In procedural sections, when you are instructed to do something in the user interface, the UI components are in bold typeface to stand out from the rest of the text, for example:
In the Name box, type a name.
Code examples: Code snippets are presented in a different typeface and may have background shading, for example:
<ElementName ID="1" DisplayName="My Component" />
Cautions, Notes, and Tips: Information that is useful or important is set apart and emphasized by adding an icon to draw attention to it and identify what type of information it is. There are three levels of information:
Cautions are indicated by the exclamation icon on a yellow background. Cautions indicate information that could cause you to lose data if you do not follow the instructions.
Notes are indicated by the sticky note icon. Notes include information that is important for you to know.
Tips are indicated by the light bulb icon. Tips contain information that makes a task easier or provides extra information that is useful but not essential. | https://docs.vertigisstudio.com/webviewer/latest/install-help/Content/gwv/about-the-gwv-designer-guide.htm | 2022-09-24T23:40:22 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.vertigisstudio.com |
# Custom batch operations
Apostrophe piece type modules have access to a batch operation system in the user interface. This allows editors to take an action, such as archiving, on many pieces at once. By default, all piece types have batch operation UI for archiving pieces and restoring pieces from the archive, for example.
We can add additional custom batch operations using the provided module API. Let's look at how we would add a batch operation that resets piece field values to the configured defaults. This involves two major steps:
- Configuring the batch operation itself
- Adding the API route that powers the batch operation
# Configuring the batch operation
Batch operations are a "cascading" configuration, so they use
add and, optionally,
group sub-properties to inherit existing batch operations properly. Here is an example of what the "Reset" batch operation configuration might look like. We'll then walk through each piece of this.
module.export = { batchOperations: { add: { reset: { label: 'Reset', icon: 'recycle-icon', messages: { progress: 'Resetting {{ type }}...', completed: 'Reset {{ count }} {{ type }}.' }, if: { archived: false }, modalOptions: { title: 'Reset {{ type }}', description: 'Are you sure you want to reset {{ count }} {{ type }}?', confirmationButton: 'Yes, reset the selected content' } }, } }, icons: { 'recycle-icon': 'Recycle' }, };
Our new batch operation,
reset, is in the
add object, telling Apostrophe that this is a new operation to add to the module. It then has a number of configuration properties:
label: 'Reset',
label defines its legible label. The label is used for accessibility when this is an ungrouped operation and is used as the primarily interface label when the operation is grouped. We should always include a label.
icon: 'recycle-icon',
The
icon setting is the primary visible interface when the operation is not in an operation group (see below for more on that). Note that this icon is configured in the
icons module setting in the example.
messages: { progress: 'Resetting {{ type }}...', completed: 'Reset {{ count }} {{ type }}.' },
The
messages object properties are used in notifications that appear to tell the editor what is happening behind the scenes. The
progress message appears when the operation begins and the
completed messages appears when it is done.
They both can use the
type interpolation key, which Apostrophe replaces with the piece type label. The
completed message can also include a
count interpolation key, which is replaced by the number of pieces that were updated.
if: { archived: false },
if is an optional property that allows you to define filter conditions when the option is available. In this case, the "Reset" operation is only available when the
archived filter is
false (the editor is not looking at archived pieces). This might be because archived pieces should be left as they are and not reset to their defaults. This property works similar to conditional schema fields, but in this case the conditions are for manager filters, not fields.
modalOptions: { title: 'Reset {{ type }}', description: 'Are you sure you want to reset {{ count }} {{ type }}?', confirmationButton: 'Yes, reset the selected content' }
The
modalOptions object configures the confirmation modal that appears when an editor initiates a batch operation. This confirmation step helps to prevent accidental changes to possibly hundreds of pieces. If this is not included, the batch operation's
label is used for the title, there is no description, and the standard confirmation button label is used (e.g., "Yes, continue.").
With these configuration, we should immediately see a button for the "Reset" operation in the article piece manager.
# Adding the API route
Right now if we clicked that new button and confirmed to continue nothing would happen except for an error notification saying something like "Batch operation Reset failed." Since the batch operation is called
reset, the manager is going to look for an API route at
/v1/api/article/reset (the piece type's base API path, plus
/reset). We need to add that route to the piece type.
Batch operation route handlers will usually have a few steps in common, so we can look at those elements in the example below.
module.export = { // `batchOperations` and other module settings... apiRoutes(self) { return { post: { reset(req) { // Make sure there is an `_ids` array provided. if (!Array.isArray(req.body._ids)) { throw self.apos.error('invalid'); } // Ensure that the req object and IDs are using the same locale // and mode. req.body._ids = req.body._ids.map(_id => { return self.inferIdLocaleAndMode(req, _id); }); // Run the batch operation as a "job," passing the iterator function // as an argument to actually make the changes. return self.apos.modules['@apostrophecms/job'].runBatch( req, self.apos.launder.ids(req.body._ids), resetter, { action: 'reset' } ); // The iterator function that updates each individual piece. async function resetter (req, id) { const piece = await self.findOneForEditing(req, { _id: id }); if (!piece) { throw self.apos.error('notfound'); } // 🪄 Do the work of resetting piece field values. await self.update(req, piece); } } } }; } };
Let's look at the pieces of this route, focusing on the parts that are likely to be common among most batch operations.
apiRoutes(self) { return { post: { reset(req) { // ... } } }; }
We're adding our route to the
apiRoutes customization function as a
POST route since the route will need to receive requests with a
body object.
if (!Array.isArray(req.body._ids)) { throw self.apos.error('invalid'); }
The Apostrophe user interface should take care of this for you, but it is always a good idea to include a check to make sure that the body of the reqest includes an
_ids array.
req.body._ids = req.body._ids.map(_id => { return self.inferIdLocaleAndMode(req, _id); });
This step may not be obvious, but since Apostrophe documents have versions in various locales, as well as both "live" and "draft" modes, it's important to use the
self.inferIdLocaleAndMode() method on the IDs in most cases. In this context it is primarily used to update the
req object to match the document IDs.
return self.apos.modules['@apostrophecms/job'].runBatch( req, self.apos.launder.ids(req.body._ids), resetter, { action: 'reset' } );
This is more or less the last part (though we'll also need to take a look at that
resetter iterator). The job module,
@apostrophecms/job, has methods to process long-running jobs, including
runBatch for batch operations.
runBatch takes the following arguments:
- the
reqobject
- an array of IDs,
req.body._ids, used to find database documents to update (we're running it through a method that ensures they are ID-like)
- an iterator function (more on that below)
- an options object, which we always use to include to define the
actionname for client-side event handlers
async function resetter (req, id) { const piece = await self.findOneForEditing(req, { _id: id }); if (!piece) { throw self.apos.error('notfound'); } // 🪄 Do the work of resetting piece field values here... await self.update(req, piece); }
Finally, the iterator,
resetter in this example, will receive the request object and a single document ID. This is where we as developers need to do the work of updating each selected piece. Our example here finds the piece, throws an error if not found, then eventually uses the
update method to update the piece document. The magic
🪄 comment is where we would add the additional functionality to actually reset values.
With that API route added, when we restart the website and run the batch operation again we should see our notifications indicating that it completed successfully.
| https://v3.docs.apostrophecms.org/guide/batch-operations.html | 2022-09-24T22:43:38 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['/images/archive-button.png',
'An article piece manager modal with arrow pointing at the archive button at top left'],
dtype=object)
array(['/images/batch-operation-recycle-button.png',
'The article piece manager, now with a button using the recycle symbol'],
dtype=object)
array(['/images/batch-operation-complete.png',
'The articles manager modal with two notifications indicating that the batch operation completed successfully'],
dtype=object) ] | v3.docs.apostrophecms.org |
Python Cookbook Examples¶
There are large number of code examples available in OpenEye Python Cookbook that use OpenEye Toolkits to solve a wide range of cheminformatics and molecular modeling problems.
2D Depiction chapter contains examples that illustrate how the OEDepict TK and Grapheme TK can be utilized to depict molecules and visualize properties calculated with other OpenEye toolkits.
Visualizing 3D Information chapter contains examples that illustrate how the Grapheme TK can be utilized to project complex 3D information into the 2D molecular graph.
Cheminformatics chapter contains examples that solve various cheminformatics problems such as similarity search, ring perception and manipulating molecular graphs.
OpenEye Python Cookbook examples using OEGrapheme TK
Visualizing Torsional Angle Distribution
Visualizing Electron Density
Visualizing Molecular Dipole Moment
-
Depicting Molecular Properties
Visualizing Shape and Color Overlap
Protein-ligand visualization: | https://docs.eyesopen.com/toolkits/python/graphemetk/cookbook-examples.html | 2022-09-24T22:35:49 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.eyesopen.com |
Push Certificate Mismatch Error
A push certificate allows for communication between Jamf Now and the Apple Push Notification service (APNs), which allows for device communication with managed devices in Jamf Now. Push certificates need to be renewed annually to allow for continued device communication.
When uploading a push certificate to be renewed, Jamf Now compares the Common Name of the current push certificate to the Common Name of the push certificate you are attempting to upload. The "Push Certificate Mismatch" error indicates that the push certificate you are uploading does not match your current push certificate in Jamf Now.
Resolving a Push Certificate Mismatch
The "Push Certificate Mismatch" error can be corrected by locating the originally created APNs certificate and uploading it to Jamf Now. To locate this original certificate, we recommend signing in to any Apple ID that could have been used to create the original push certificate.
Apple will also send email reminders to the Apple ID used to create the original push certificate with the subject "Apple Push Notification Service certificate expiration". You can search your email inbox to investigate what Apple ID could have created the original push certificate.
Once a push certificate is located in the APNs portal, you can compare the Common Name displayed in the portal to what is displayed in Jamf Now. If they match, you have located the original and correct push certificate for renewal purposes. Continue with the renewal process and upload this renewed push certificate to Jamf Now.
Locating the Common Name on Devices
Managed devices also display the Common Name of the push certificate used to enroll the device with Jamf Now. This is shown as the "Topic" on the device.
For iOS, navigate to.
For macOS, navigate to.
Avoiding Push Certificate Mismatch Errors
Once the correct push certificate is located, renewed, and uploaded, you should ensure you saved the correct Apple ID in the final step in the push certificate renewal process. You can then reference this Apple ID a year from now for next year's push certificate renewal process.
| https://docs.jamf.com/jamf-now/documentation/Push_Certificate_Mismatch_Error.html | 2022-09-24T22:12:59 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['images/PushCertMismatch.png',
'Screenshot of the Push Certificate mismatch error message, with the option button to Replace Certificate.'],
dtype=object)
array(['images/PushCertsPortal.png',
'Screenshot of the Apple Push Certificates Portal, highlighting the Common Name displayed in the Subject DN section.'],
dtype=object)
array(['images/PushCertSaveID.png',
'Screenshot of step 4, with a text box to type and save the Apple ID used for your push certificate.'],
dtype=object) ] | docs.jamf.com |
Calendar Export-iCal buttons.png
From Carleton Moodle Docs
Calendar_Export-iCal_buttons.png (330 × 252 pixels, file size: 19 KB, MIME type: image/png)
File history
Click on a date/time to view the file as it appeared at that time.
- You cannot overwrite this file.
File usage
The following page links to this file: | https://docs.moodle.carleton.edu/index.php?title=File:Calendar_Export-iCal_buttons.png&oldid=4411 | 2022-09-24T22:46:56 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.moodle.carleton.edu |
The core idea behind G-API is portability – a pipeline built with G-API must be portable (or at least able to be portable). It means that either it works out-of-the box when compiled for new platform, or G-API provides necessary tools to make it running there, with little-to-no changes in the algorithm itself.
This idea can be achieved by separating kernel interface from its implementation. Once a pipeline is built using kernel interfaces, it becomes implementation-neutral – the implementation details (i.e. which kernels to use) are passed on a separate stage (graph compilation).
Kernel-implementation hierarchy may look like:
A pipeline itself then can be expressed only in terms of
A,
B, and so on, and choosing which implementation to use in execution becomes an external parameter.
G-API provides a macro to define a new kernel interface – G_TYPED_KERNEL():
This macro is a shortcut to a new type definition. It takes three arguments to register a new type, and requires type body to be present (see below). The macro arguments are:
std::function<>-like signature which defines API of the kernel;
Kernel declaration may be seen as function declaration – in both cases a new entity must be used then according to the way it was defined.
Kernel signature defines kernel's usage syntax – which parameters it takes during graph construction. Implementations can also use this signature to derive it into backend-specific callback signatures (see next chapter).
Kernel may accept values of any type, and G-API dynamic types are handled in a special way. All other types are opaque to G-API and passed to kernel in
outMeta() or in execution callbacks as-is.
Kernel's return value can only be of G-API dynamic type – cv::GMat, cv::GScalar, or
cv::GArray<T>. If an operation has more than one output, it should be wrapped into an
std::tuple<> (which can contain only mentioned G-API types). Arbitrary-output-number operations are not supported.
Once a kernel is defined, it can be used in pipelines with special, G-API-supplied method "::on()". This method has the same signature as defined in kernel, so this code:
is a perfectly legal construction. This example has some verbosity, though, so usually a kernel declaration comes with a C++ function wrapper ("factory method") which enables optional parameters, more compact syntax, Doxygen comments, etc:
so now it can be used like:
In the current version, kernel declaration body (everything within the curly braces) must contain a static function
outMeta(). This function establishes a functional dependency between operation's input and output metadata.
Metadata is an information about data kernel operates on. Since non-G-API types are opaque to G-API, G-API cares only about
G* data descriptors (i.e. dimensions and format of cv::GMat, etc).
outMeta() is also an example of how kernel's signature can be transformed into a derived callback – note that in this example,
outMeta() signature exactly follows the kernel signature (defined within the macro) but is different – where kernel expects cv::GMat,
outMeta() takes and returns cv::GMatDesc (a G-API structure metadata for cv::GMat).
The point of
outMeta() is to propagate metadata information within computation from inputs to outputs and infer metadata of internal (intermediate, temporary) data objects. This information is required for further pipeline optimizations, memory allocation, and other operations done by G-API framework during graph compilation.
Once a kernel is declared, its interface can be used to implement versions of this kernel in different backends. This concept is naturally projected from object-oriented programming "Interface/Implementation" idiom: an interface can be implemented multiple times, and different implementations of a kernel should be substitutable with each other without breaking the algorithm (pipeline) logic (Liskov Substitution Principle).
Every backend defines its own way to implement a kernel interface. This way is regular, though – whatever plugin is, its kernel implementation must be "derived" from a kernel interface type.
Kernel implementation are then organized into kernel packages. Kernel packages are passed to cv::GComputation::compile() as compile arguments, with some hints to G-API on how to select proper kernels (see more on this in "Heterogeneity"[TBD]).
For example, the aforementioned
Filter2D is implemented in "reference" CPU (OpenCV) plugin this way (NOTE – this is a simplified form with improper border handling):
Note how CPU (OpenCV) plugin has transformed the original kernel signature:
GCPUFilter2D::run()takes one argument more than the original kernel signature.
The basic intuition for kernel developer here is not to care where that cv::Mat objects come from instead of the original cv::GMat – and just follow the signature conventions defined by the plugin. G-API will call this method during execution and supply all the necessary information (and forward the original opaque data as-is).
Sometimes kernel is a single thing only on API level. It is convenient for users, but on a particular implementation side it would be better to have multiple kernels (a subgraph) doing the thing instead. An example is goodFeaturesToTrack() – while in OpenCV backend it may remain a single kernel, with Fluid it becomes compound – Fluid can handle Harris response calculation but can't do sparse non-maxima suppression and point extraction to an STL vector:
A compound kernel implementation can be defined using a generic macro GAPI_COMPOUND_KERNEL():
It is important to distinguish a compound kernel from G-API high-order function, i.e. a C++ function which looks like a kernel but in fact generates a subgraph. The core difference is that a compound kernel is an implementation detail and a kernel implementation may be either compound or not (depending on backend capabilities), while a high-order function is a "macro" in terms of G-API and so cannot act as an interface which then needs to be implemented by a backend. | https://docs.opencv.org/5.x/d0/d25/gapi_kernel_api.html | 2022-09-24T22:56:22 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.opencv.org |
5. Neutron Physics
There are limited differences between physics treatments used in the continuous-energy and multi-group modes. If distinctions are necessary, each of the following sections will provide an explanation of the differences. Otherwise, replacing any references of the particle’s energy (E) with references to the particle’s energy group (g) will suffice.
5.1. Sampling Distance to Next Collision
As a particle travels through a homogeneous material, the probability distribution function for the distance to its next collision \(\ell\) is

\[ p(\ell) \, d\ell = \Sigma_t e^{-\Sigma_t \ell} \, d\ell \tag{1} \]
where \(\Sigma_t\) is the total macroscopic cross section of the material. Equation (1) tells us that the farther the distance to the next collision, the less likely the particle is to travel that distance. In order to sample the probability distribution function, we first need to convert it to a cumulative distribution function

\[ F(\ell) = \int_0^{\ell} d\ell' \, \Sigma_t e^{-\Sigma_t \ell'} = 1 - e^{-\Sigma_t \ell} \tag{2} \]
By setting the cumulative distribution function equal to \(\xi\), a random number on the unit interval, and solving for the distance \(\ell\), we obtain a formula for sampling the distance to next collision:

\[ \ell = -\frac{\ln (1 - \xi)}{\Sigma_t} \tag{3} \]
Since \(\xi\) is uniformly distributed on \([0,1)\), this implies that \(1 - \xi\) is also uniformly distributed on \([0,1)\). Thus, the formula usually used to calculate the distance to next collision is

\[ \ell = -\frac{\ln \xi}{\Sigma_t} \tag{4} \]
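As an illustration, this sampling formula takes only a few lines of code. The sketch below is not OpenMC's internal implementation, and the cross-section value in the example call is an arbitrary placeholder.

```python
import math
import random

def sample_distance_to_collision(sigma_t: float) -> float:
    """Sample a free-flight distance from p(l) = Sigma_t * exp(-Sigma_t * l).

    sigma_t: total macroscopic cross section of the material [1/cm].
    """
    xi = random.random()                  # uniform on [0, 1)
    # -ln(1 - xi) and -ln(xi) give the same distribution; using 1 - xi
    # avoids evaluating log(0) when xi happens to be exactly zero.
    return -math.log(1.0 - xi) / sigma_t

# Illustrative example: a material with Sigma_t = 1.0 cm^-1
print(sample_distance_to_collision(1.0))
```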
5.2. \((n,\gamma)\) and Other Disappearance Reactions
Absorption reactions other than fission do not produce any secondary neutrons. As a result, these are the easiest types of reactions to handle. When a collision occurs, the first step is to sample a nuclide within a material. Once the nuclide has been sampled, then a specific reaction for that nuclide is sampled. Since the total absorption cross section is pre-calculated at the beginning of a simulation, the first step in sampling a reaction is to determine whether a “disappearance” reaction occurs where no secondary neutrons are produced. This is done by sampling a random number \(\xi\) on the interval \([0,1)\) and checking whether

\[ \xi < \frac{\sigma_a - \sigma_f}{\sigma_t} \tag{5} \]
where \(\sigma_t\) is the total cross section, \(\sigma_a\) is the absorption cross section (this includes fission), and \(\sigma_f\) is the total fission cross section. If this condition is met, then the neutron is killed and we proceed to simulate the next neutron from the source bank.
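A minimal sketch of this disappearance test is shown below. The cross sections passed to the example call are placeholder numbers, not data for any particular nuclide.

```python
import random

def is_disappearance(sigma_t: float, sigma_a: float, sigma_f: float) -> bool:
    """Return True if the sampled reaction is a disappearance reaction,
    i.e. an absorption that is not fission (no secondary neutrons)."""
    xi = random.random()
    return xi < (sigma_a - sigma_f) / sigma_t

# Placeholder cross sections (same units for all three, e.g. barns)
if is_disappearance(sigma_t=10.0, sigma_a=3.0, sigma_f=1.0):
    print("Neutron killed; simulate the next neutron from the source bank.")
else:
    print("Reaction produces secondary neutrons; continue sampling.")
```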
Note that photons arising from \((n,\gamma)\) and other neutron reactions are not produced in a microscopically correct manner. Instead, photons are sampled probabilistically at each neutron collision, regardless of what reaction actually takes place. This is described in more detail in Photon Production.
5.3. Elastic Scattering
Note that the multi-group mode makes no distinction between elastic or inelastic scattering reactions. The specific multi-group scattering implementation is discussed in the Multi-Group Scattering section.
Elastic scattering refers to the process by which a neutron scatters off a nucleus and does not leave it in an excited state. It is referred to as “elastic” because in the center-of-mass system, the neutron does not actually lose energy. However, in lab coordinates, the neutron does indeed lose energy. Elastic scattering can be treated exactly in a Monte Carlo code thanks to its simplicity.
Let us discuss how OpenMC handles two-body elastic scattering kinematics. The first step is to determine whether the target nucleus has any associated motion. Above a certain energy threshold (400 kT by default), all scattering is assumed to take place with the target at rest. Below this threshold, though, we must account for the thermal motion of the target nucleus. Methods to sample the velocity of the target nucleus are described later in section Effect of Thermal Motion on Cross Sections. For the time being, let us assume that we have sampled the target velocity \(\mathbf{v}_t\). The velocity of the center-of-mass system is calculated as

\[ \mathbf{v}_{cm} = \frac{\mathbf{v}_n + A \mathbf{v}_t}{A + 1} \tag{6} \]
where \(\mathbf{v}_n\) is the velocity of the neutron and \(A\) is the atomic mass of the target nucleus measured in neutron masses (commonly referred to as the atomic weight ratio). With the velocity of the center-of-mass calculated, we can then determine the neutron’s velocity in the center-of-mass system:

\[ \mathbf{V}_n = \mathbf{v}_n - \mathbf{v}_{cm} \tag{7} \]
where we have used uppercase \(\mathbf{V}\) to denote the center-of-mass system. The direction of the neutron in the center-of-mass system is

\[ \mathbf{\Omega}_n = \frac{\mathbf{V}_n}{\left\| \mathbf{V}_n \right\|} \tag{8} \]
At low energies, elastic scattering will be isotropic in the center-of-mass system, but for higher energies, there may be p-wave and higher order scattering that leads to anisotropic scattering. Thus, in general, we need to sample a cosine of the scattering angle which we will refer to as \(\mu\). For elastic scattering, the secondary angle distribution is always given in the center-of-mass system and is sampled according to the procedure outlined in Sampling Angular Distributions. After the cosine of the angle of scattering has been sampled, we need to determine the neutron’s new direction \(\mathbf{\Omega}'_n\) in the center-of-mass system. This is done with the procedure in Transforming a Particle’s Coordinates. The new direction is multiplied by the speed of the neutron in the center-of-mass system to obtain the new velocity vector in the center-of-mass:

\[ \mathbf{V}'_n = \left\| \mathbf{V}_n \right\| \mathbf{\Omega}'_n \tag{9} \]
Finally, we transform the velocity in the center-of-mass system back to lab coordinates:

\[ \mathbf{v}'_n = \mathbf{V}'_n + \mathbf{v}_{cm} \tag{10} \]
In OpenMC, the angle and energy of the neutron are stored rather than the velocity vector itself, so the post-collision angle and energy can be inferred from the post-collision velocity of the neutron in the lab system.
For tallies that require the scattering cosine, it is important to store the scattering cosine in the lab system. If we know the scattering cosine in the center-of-mass, the scattering cosine in the lab system can be calculated as

\[ \mu_{lab} = \frac{1 + A\mu}{\sqrt{A^2 + 2A\mu + 1}} \tag{11} \]
However, equation (11) is only valid if the target was at rest. When the target nucleus does have thermal motion, the cosine of the scattering angle can be determined by simply taking the dot product of the neutron’s initial and final direction in the lab system.
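The full sequence of equations (6)–(10) can be condensed into a short routine. The sketch below assumes isotropic scattering in the center-of-mass system and uses illustrative function and variable names rather than OpenMC's internal API; the lab-frame cosine is recovered from the dot product of the incoming and outgoing velocities, as described above for a moving target.

```python
import numpy as np

def elastic_scatter(v_n, v_t, A, rng=np.random.default_rng()):
    """Two-body elastic scattering: lab-frame velocities in, lab-frame out.

    v_n : neutron velocity in the lab frame (3-vector)
    v_t : sampled target velocity in the lab frame (3-vector)
    A   : target mass in neutron masses (atomic weight ratio)
    """
    v_n = np.asarray(v_n, dtype=float)
    v_t = np.asarray(v_t, dtype=float)

    # Velocity of the center of mass, Eq. (6)
    v_cm = (v_n + A * v_t) / (A + 1.0)

    # Neutron velocity in the CM frame, Eq. (7); its norm is unchanged by
    # elastic scattering in the CM frame.
    V_n = v_n - v_cm
    speed_cm = np.linalg.norm(V_n)

    # Sample an outgoing CM direction. For isotropic CM scattering, a
    # direction uniform on the sphere is equivalent to rotating the incoming
    # direction by a sampled cosine; anisotropic laws would rotate instead.
    mu = 2.0 * rng.random() - 1.0
    phi = 2.0 * np.pi * rng.random()
    omega_out = np.array([mu,
                          np.sqrt(1.0 - mu * mu) * np.cos(phi),
                          np.sqrt(1.0 - mu * mu) * np.sin(phi)])

    # New CM velocity, Eq. (9), transformed back to the lab frame, Eq. (10)
    v_n_out = speed_cm * omega_out + v_cm

    # Lab-frame scattering cosine from the dot product of unit vectors
    mu_lab = np.dot(v_n, v_n_out) / (np.linalg.norm(v_n) * np.linalg.norm(v_n_out))
    return v_n_out, mu_lab

# Illustrative call: fast neutron (arbitrary units) on a moving H-1 target
v_out, mu_lab = elastic_scatter(v_n=[1.0e7, 0.0, 0.0], v_t=[1.0e5, 2.0e4, 0.0], A=0.9992)
print(v_out, mu_lab)
```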
5.4. Inelastic Scattering
Note that the multi-group mode makes no distinction between elastic or inelastic scattering reactions. The specific multi-group scattering implementation is discussed in the Multi-Group Scattering section.
The major algorithms for inelastic scattering were described in previous sections. First, a scattering cosine is sampled using the algorithms in Sampling Angular Distributions. Then an outgoing energy is sampled using the algorithms in Sampling Energy Distributions. If the outgoing energy and scattering cosine were given in the center-of-mass system, they are transformed to laboratory coordinates using the algorithm described in Transforming a Particle’s Coordinates. Finally, the direction of the particle is changed also using the procedure in Transforming a Particle’s Coordinates.
Although inelastic scattering leaves the target nucleus in an excited state, no secondary photons from nuclear de-excitation are tracked in OpenMC.
5.5. \((n,xn)\) Reactions
Note that the multi-group mode makes no distinction between elastic or inelastic scattering reactions. The specific multi-group scattering implementation is discussed in the Multi-Group Scattering section.
These types of reactions are just treated as inelastic scattering and as such are subject to the same procedure as described in Inelastic Scattering. For reactions with integral multiplicity, e.g., \((n,2n)\), an appropriate number of secondary neutrons are created. For reactions that have a multiplicity given as a function of the incoming neutron energy (which occasionally occurs for MT=5), the weight of the outgoing neutron is multiplied by the multiplicity.
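The weight-multiplication path for a tabulated, energy-dependent multiplicity can be sketched as follows; the tabulated yield values are placeholders, not evaluated nuclear data, and integral multiplicities such as \((n,2n)\) would instead create that number of secondary neutrons.

```python
import numpy as np

def apply_yield(weight: float, E: float, E_grid, yield_grid) -> float:
    """Scale the outgoing neutron weight by an energy-dependent multiplicity
    (used when the yield is tabulated rather than integral)."""
    y = np.interp(E, E_grid, yield_grid)   # linear-linear interpolation
    return weight * y

# Placeholder yield table as a function of incident energy [eV]
E_grid = [1.0e6, 5.0e6, 2.0e7]
yield_grid = [1.0, 1.3, 2.1]
print(apply_yield(weight=1.0, E=7.5e6, E_grid=E_grid, yield_grid=yield_grid))
```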
5.6. Multi-Group Scattering¶
In multi-group mode, a scattering collision requires that the outgoing energy group of the simulated particle be selected from a probability distribution, the change-in-angle selected from a probability distribution according to the outgoing energy group, and finally the particle’s weight adjusted again according to the outgoing energy group.
The first step in selecting an outgoing energy group for a particle in a given incoming energy group is to select a random number (\(\xi\)) between 0 and 1. This number is then compared to the cumulative distribution function produced from the outgoing group (g’) data for the given incoming group (g):
If the scattering data is represented as a Legendre expansion, then the value of \(\Sigma_{s,g \rightarrow g'}\) above is the 0th order for the given group transfer. If the data is provided as tabular or histogram data, then \(\Sigma_{s,g \rightarrow g'}\) is the sum of all bins of data for a given g and g’ pair.
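Selecting the outgoing group amounts to inverting a discrete CDF built from the row of the group-transfer matrix. A minimal sketch (hypothetical names, not OpenMC's API) is:

```python
import numpy as np

def sample_outgoing_group(sigma_s_row, rng):
    """Sample outgoing group g' given sigma_s_row[g'] = Sigma_{s,g->g'}.

    sigma_s_row : 1-D array of non-negative group-transfer cross sections
                  for the particle's incoming group g
    rng         : numpy random Generator
    """
    cdf = np.cumsum(sigma_s_row)
    xi = rng.random() * cdf[-1]          # scale xi by the total scattering cross section
    return int(np.searchsorted(cdf, xi, side='right'))
```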
Now that the outgoing energy is known the change-in-angle, \(\mu\) can be determined. If the data is provided as a Legendre expansion, this is done by rejection sampling of the probability distribution represented by the Legendre series. For efficiency, the selected values of the PDF (\(f(\mu)\)) are chosen to be between 0 and the maximum value of \(f(\mu)\) in the domain of -1 to 1. Note that this sampling scheme automatically forces negative values of the \(f(\mu)\) probability distribution function to be treated as zero probabilities.
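A sketch of this rejection scheme is shown below, assuming the Legendre coefficients are stored with their normalization already applied and that the maximum of \(f(\mu)\) on \([-1,1]\) has been precomputed:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def sample_mu_legendre(coeffs, f_max, rng):
    """Rejection-sample mu from a Legendre-series PDF f(mu) on [-1, 1].

    coeffs : Legendre expansion coefficients of f(mu), normalization included
    f_max  : precomputed maximum of f(mu) on [-1, 1]
    rng    : numpy random Generator
    """
    while True:
        mu = 2.0 * rng.random() - 1.0        # candidate angle, uniform on [-1, 1)
        y = rng.random() * f_max             # candidate ordinate in [0, f_max)
        if y < legval(mu, coeffs):           # negative f(mu) can never be accepted
            return mu
```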
If the angular data is instead provided as a tabular representation, then the value of \(\mu\) is selected as described in the Tabular Angular Distribution section with a linear-linear interpolation scheme.
If the angular data is provided as a histogram representation, then the value of \(\mu\) is selected in a similar fashion to that described for the selection of the outgoing energy (since the energy group representation is simply a histogram representation), except that the CDF is composed of the angular bins rather than the energy groups. However, since we are interested in a specific value of \(\mu\) rather than a bin, an angle is then selected from a uniform distribution within the chosen angular bin.
The final step in the scattering treatment is to adjust the weight of the neutron to account for any production of neutrons due to \((n,xn)\) reactions. This data is obtained from the multiplicity data provided in the multi-group cross section library for the material of interest. The scaled value will default to 1.0 if no value is provided in the library.
5.7. Fission¶
While fission is normally considered an absorption reaction, as far as it concerns a Monte Carlo simulation it actually bears more similarities to inelastic scattering since fission results in secondary neutrons in the exit channel. Other absorption reactions like \((n,\gamma)\) or \((n,\alpha)\), on the contrary, produce no neutrons. There are a few other idiosyncrasies in treating fission. In an eigenvalue calculation, secondary neutrons from fission are only “banked” for use in the next generation rather than being tracked as secondary neutrons from elastic and inelastic scattering would be. On top of this, fission is sometimes broken into first-chance fission, second-chance fission, etc. The nuclear data file either lists the partial fission reactions with secondary energy distributions for each one, or a total fission reaction with a single secondary energy distribution.
When a fission reaction is sampled in OpenMC (either total fission or, if data exists, first- or second-chance fission), the following algorithm is used to create and store fission sites for the following generation. First, the average number of prompt and delayed neutrons must be determined to decide whether the secondary neutrons will be prompt or delayed. This is important because delayed neutrons have a markedly different spectrum from prompt neutrons, one that has a lower average energy of emission. The total number of neutrons emitted \(\nu_t\) is given as a function of incident energy in the ENDF format. Two representations exist for \(\nu_t\). The first is a polynomial of order \(N\) with coefficients \(c_0,c_1,\dots,c_N\). If \(\nu_t\) has this format, we can evaluate it at incoming energy \(E\) by using the equation
The other representation is just a tabulated function with a specified interpolation law. The number of prompt neutrons released per fission event \(\nu_p\) is also given as a function of incident energy and can be specified in a polynomial or tabular format. The number of delayed neutrons released per fission event \(\nu_d\) can only be specified in a tabular format. In practice, we only need to determine \(\nu_t\) and \(\nu_d\). Once these have been determined, we can calculate the delayed neutron fraction
We then need to determine how many total neutrons should be emitted from fission. If no survival biasing is being used, then the number of neutrons emitted is
where \(w\) is the statistical weight and \(k_{eff}\) is the effective multiplication factor from the previous generation. The number of neutrons produced is biased in this manner so that the expected number of fission neutrons produced is the number of source particles that we started with in the generation. Since \(\nu\) is not an integer, we use the following procedure to obtain an integral number of fission neutrons to produce. If \(\xi > \nu - \lfloor \nu \rfloor\), then we produce \(\lfloor \nu \rfloor\) neutrons. Otherwise, we produce \(\lfloor \nu \rfloor + 1\) neutrons. Then, for each fission site produced, we sample the outgoing angle and energy according to the algorithms given in Sampling Angular Distributions and Sampling Energy Distributions respectively. If the neutron is to be born delayed, then there is an extra step of sampling a delayed neutron precursor group since they each have an associated secondary energy distribution.
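A sketch of the bookkeeping described above — choosing an integral number of fission sites and deciding whether each is delayed — is given below (hypothetical helper names):

```python
import math

def num_fission_neutrons(w, nu_t, keff, rng):
    """Integral number of fission sites to bank (analog, no survival biasing)."""
    nu = (w / keff) * nu_t
    n = math.floor(nu)
    if rng.random() <= nu - n:     # produce floor(nu)+1 with probability nu - floor(nu)
        n += 1
    return n

def is_delayed(nu_d, nu_t, rng):
    """Decide whether a fission neutron is delayed (probability beta = nu_d / nu_t)."""
    return rng.random() < nu_d / nu_t
```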
The sampled outgoing angle and energy of fission neutrons along with the position of the collision site are stored in an array called the fission bank. In a subsequent generation, these fission bank sites are used as starting source sites.
The above description is similar for the multi-group mode except the data are provided as group-wise data instead of in a continuous-energy format. In this case, the outgoing energy of the fission neutrons are represented as histograms by way of either the nu-fission matrix or chi vector.
5.8. Secondary Angle-Energy Distributions¶
Note that this section is specific to continuous-energy mode since the multi-group scattering process has already been described including the secondary energy and angle sampling.
For a reaction with secondary products, it is necessary to determine the outgoing angle and energy of the products. For any reaction other than elastic and level inelastic scattering, the outgoing energy must be determined based on tabulated or parameterized data. The ENDF-6 Format specifies a variety of ways that the secondary energy distribution can be represented. ENDF File 5 contains uncorrelated energy distributions whereas ENDF File 6 contains correlated energy-angle distributions. The ACE format specifies its own representations based loosely on the formats given in ENDF-6. OpenMC’s HDF5 nuclear data files use a combination of ENDF and ACE distributions; in this section, we will describe how the outgoing angle and energy of secondary particles are sampled.
One of the subtleties in the nuclear data format is the fact that a single reaction product can have multiple angle-energy distributions. This is mainly useful for reactions with multiple products of the same type in the exit channel such as \((n,2n)\) or \((n,3n)\). In these types of reactions, each neutron is emitted corresponding to a different excitation level of the compound nucleus, and thus in general the neutrons will originate from different energy distributions. If multiple angle-energy distributions are present, they are assigned incoming-energy-dependent probabilities that can then be used to randomly select one.
Once a distribution has been selected, the procedure for determining the outgoing angle and energy will depend on the type of the distribution.
5.8.2. Product Angle-Energy Distributions¶
If the secondary distribution for a product was given in file 6 in ENDF, the angle and energy are correlated with one another and cannot be sampled separately. Several representations exist in ENDF/ACE for correlated angle-energy distributions.
5.8.2.3. N-Body Phase Space Distribution¶
Reactions in which there are more than two products of similar masses are sometimes best treated by using what’s known as an N-body phase space distribution. This distribution has the following probability density function for the outgoing energy and angle of the \(i\)-th particle in the center-of-mass system:
where \(n\) is the number of outgoing particles, \(C_n\) is a normalization constant, \(E_i^{max}\) is the maximum center-of-mass energy for particle \(i\), and \(E'\) is the outgoing energy. We see in equation (47) that the angle is simply isotropic in the center-of-mass system. The algorithm for sampling the outgoing energy is based on algorithms R28, C45, and C64 in the Monte Carlo Sampler. First we calculate the maximum energy in the center-of-mass using the following equation:
where \(A_p\) is the total mass of the outgoing particles in neutron masses, \(A\) is the mass of the original target nucleus in neutron masses, and \(Q\) is the Q-value of the reaction. Next we sample a value \(x\) from a Maxwell distribution with a nuclear temperature of one using the algorithm outlined in Maxwell Fission Spectrum. We then need to determine a value \(y\) that will depend on how many outgoing particles there are. For \(n = 3\), we simply sample another Maxwell distribution with unity nuclear temperature. For \(n = 4\), we use the equation
where \(\xi_i\) are random numbers sampled on the interval \([0,1)\). For \(n = 5\), we use the equation
After \(x\) and \(y\) have been determined, the outgoing energy is then calculated as
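Putting the pieces together, a sketch of the N-body energy sampling might look as follows. The expressions used for \(y\) when \(n = 4\) and \(n = 5\) are the standard forms from the Monte Carlo Sampler rules cited above and are assumptions here, since the document’s equations are not reproduced in this text; `sample_maxwell` stands in for the unit-temperature Maxwellian sampler of Maxwell Fission Spectrum.

```python
import math

def sample_nbody_energy(n, E_max, sample_maxwell, rng):
    """Sample outgoing CM energy and cosine from an N-body phase space distribution.

    n              : number of outgoing particles (3, 4, or 5)
    E_max          : maximum center-of-mass energy for this particle
    sample_maxwell : function(rng) sampling a Maxwellian with unit nuclear temperature
    rng            : numpy random Generator
    """
    x = sample_maxwell(rng)
    if n == 3:
        y = sample_maxwell(rng)
    elif n == 4:
        xi1, xi2, xi3 = (rng.random() for _ in range(3))
        y = -math.log(xi1 * xi2 * xi3)                       # assumed standard form
    elif n == 5:
        xi = [rng.random() for _ in range(6)]
        y = (-math.log(xi[0] * xi[1] * xi[2] * xi[3])
             - math.log(xi[4]) * math.cos(math.pi / 2.0 * xi[5])**2)
    else:
        raise ValueError("n must be 3, 4, or 5")
    E_out = x / (x + y) * E_max
    mu = 2.0 * rng.random() - 1.0    # isotropic in the center of mass
    return E_out, mu
```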
There are two important notes to make regarding the N-body phase space distribution. First, the documentation (and code) for MCNP5-1.60 has a mistake in the algorithm for \(n = 4\). That being said, there are no existing nuclear data evaluations which use an N-body phase space distribution with \(n = 4\), so the error would not affect any calculations. In the ENDF/B-VII.1 nuclear data evaluation, only one reaction uses an N-body phase space distribution at all, the \((n,2n)\) reaction with H-2.
5.9. Transforming a Particle’s Coordinates¶
Since all the multi-group data exists in the laboratory frame of reference, this section does not apply to the multi-group mode.
Once the cosine of the scattering angle \(\mu\) has been sampled either from an angle distribution or a correlated angle-energy distribution, we are still left with the task of transforming the particle’s coordinates. If the outgoing energy and scattering cosine were given in the center-of-mass system, then we first need to transform these into the laboratory system. The relationship between the outgoing energy in center-of-mass and laboratory is
where \(E'_{cm}\) is the outgoing energy in the center-of-mass system, \(\mu_{cm}\) is the scattering cosine in the center-of-mass system, \(E'\) is the outgoing energy in the laboratory system, and \(E\) is the incident neutron energy. The relationship between the scattering cosine in center-of-mass and laboratory is
where \(\mu\) is the scattering cosine in the laboratory system. The scattering cosine still only tells us the cosine of the angle between the original direction of the particle and the new direction of the particle. If we express the pre-collision direction of the particle as \(\mathbf{\Omega} = (u,v,w)\) and the post-collision direction of the particle as \(\mathbf{\Omega}' = (u',v',w')\), it is possible to relate the pre- and post-collision components. We first need to uniformly sample an azimuthal angle \(\phi\) in \([0, 2\pi)\). After the azimuthal angle has been sampled, the post-collision direction is calculated as
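A sketch of this rotation, using the standard expressions for the post-collision direction in terms of \(\mu\) and a uniformly sampled azimuthal angle \(\phi\) (with the nearly vertical case handled separately), is:

```python
import math

def rotate_direction(u, v, w, mu, rng):
    """Rotate the unit vector (u, v, w) through acos(mu) about a uniformly random azimuth."""
    phi = 2.0 * math.pi * rng.random()
    sin_theta = math.sqrt(max(0.0, 1.0 - mu * mu))
    a = math.sqrt(max(0.0, 1.0 - w * w))
    if a > 1e-10:
        u_new = mu * u + sin_theta * (u * w * math.cos(phi) - v * math.sin(phi)) / a
        v_new = mu * v + sin_theta * (v * w * math.cos(phi) + u * math.sin(phi)) / a
        w_new = mu * w - sin_theta * a * math.cos(phi)
    else:
        # original direction is (nearly) parallel to the z-axis
        u_new = sin_theta * math.cos(phi)
        v_new = sin_theta * math.sin(phi)
        w_new = mu * math.copysign(1.0, w)
    return u_new, v_new, w_new
```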
5.10. Effect of Thermal Motion on Cross Sections¶
Since all the multi-group data should be generated with thermal scattering treatments already, this section does not apply to the multi-group mode.
When a neutron scatters off of a nucleus, it may often be assumed that the target nucleus is at rest. However, the target nucleus will have motion associated with its thermal vibration, even at absolute zero (This is due to the zero-point energy arising from quantum mechanical considerations). Thus, the velocity of the neutron relative to the target nucleus is in general not the same as the velocity of the neutron entering the collision.
The effect of the thermal motion on the interaction probability can be written as
where \(v_n\) is the magnitude of the velocity of the neutron, \(\bar{\sigma}\) is an effective cross section, \(T\) is the temperature of the target material, \(\mathbf{v}_T\) is the velocity of the target nucleus, \(v_r = || \mathbf{v}_n - \mathbf{v}_T ||\) is the magnitude of the relative velocity, \(\sigma\) is the cross section at 0 K, and \(M (\mathbf{v}_T)\) is the probability distribution for the target nucleus velocity at temperature \(T\) (a Maxwellian). In a Monte Carlo code, one must account for the effect of the thermal motion on both the integrated cross section as well as secondary angle and energy distributions. For integrated cross sections, it is possible to calculate thermally-averaged cross sections by applying a kernel Doppler broadening algorithm to data at 0 K (or some temperature lower than the desired temperature). The most ubiquitous algorithm for this purpose is the SIGMA1 method developed by Red Cullen and subsequently refined by others. This method is used in the NJOY and PREPRO data processing codes.
The effect of thermal motion on secondary angle and energy distributions can be accounted for on-the-fly in a Monte Carlo simulation. We must first qualify where it is actually used however. All threshold reactions are treated as being independent of temperature, and therefore they are not Doppler broadened in NJOY and no special procedure is used to adjust the secondary angle and energy distributions. The only non-threshold reactions with secondary neutrons are elastic scattering and fission. For fission, it is assumed that the neutrons are emitted isotropically (this is not strictly true, but is nevertheless a good approximation). This leaves only elastic scattering that needs a special thermal treatment for secondary distributions.
Fortunately, it is possible to directly sample the velocity of the target nuclide and then use it directly in the kinematic calculations. However, this calculation is a bit more nuanced than it might seem at first glance. One might be tempted to simply sample a Maxwellian distribution for the velocity of the target nuclide. Careful inspection of equation (55) however tells us that target velocities that produce relative velocities which correspond to high cross sections will have a greater contribution to the effective reaction rate. This is most important when the velocity of the incoming neutron is close to a resonance. For example, if the neutron’s velocity corresponds to a trough in a resonance elastic scattering cross section, a very small target velocity can cause the relative velocity to correspond to the peak of the resonance, thus making a disproportionate contribution to the reaction rate. The conclusion is that if we are to sample a target velocity in the Monte Carlo code, it must be done in such a way that preserves the thermally-averaged reaction rate as per equation (55).
The method by which most Monte Carlo codes sample the target velocity for use in elastic scattering kinematics is outlined in detail by [Gelbard]. The derivation here largely follows that of Gelbard. Let us first write the reaction rate as a function of the velocity of the target nucleus:
where \(R\) is the reaction rate. Note that this is just the right-hand side of equation (55). Based on the discussion above, we want to construct a probability distribution function for sampling the target velocity to preserve the reaction rate – this is different from the overall probability distribution function for the target velocity, \(M ( \mathbf{v}_T )\). This probability distribution function can be found by integrating equation (56) to obtain a normalization factor:
Let us call the normalization factor in the denominator of equation (57) \(C\).
5.10.1. Constant Cross Section Model¶
It is often assumed that \(\sigma (v_r)\) is constant over the range of relative velocities of interest. This is a good assumption for almost all cases since the elastic scattering cross section varies slowly with velocity for light nuclei, and for heavy nuclei where large variations can occur due to resonance scattering, the moderating effect is rather small. Nonetheless, this assumption may cause incorrect answers in systems with low-lying resonances that can cause a significant amount of up-scatter that would be ignored by this assumption (e.g. U-238 in commercial light-water reactors). We will revisit this assumption later in Energy-Dependent Cross Section Model. For now, continuing with the assumption, we write \(\sigma (v_r) = \sigma_s\) which simplifies (57) to
The Maxwellian distribution in velocity is
where \(m\) is the mass of the target nucleus and \(k\) is Boltzmann’s constant. Notice here that the term in the exponential is dependent only on the speed of the target, not on the actual direction. Thus, we can change the Maxwellian into a distribution for speed rather than velocity. The differential element of velocity is
Let us define the Maxwellian distribution in speed as
To simplify things a bit, we’ll define a parameter
Substituting equation (62) into equation (61), we obtain
Now, changing variables in equation (58) by using the result from equation (61), our new probability distribution function is
Again, the Maxwellian distribution for the speed of the target nucleus has no dependence on the angle between the neutron and target velocity vectors. Thus, only the term \(|| \mathbf{v}_n - \mathbf{v}_T ||\) imposes any constraint on the allowed angle. Our last task is to take that term and write it in terms of magnitudes of the velocity vectors and the angle rather than the vectors themselves. We can establish this relation based on the law of cosines which tells us that
Thus, we can infer that
Inserting equation (66) into (64), we obtain
This expression is still quite formidable and does not lend itself to any natural sampling scheme. We can divide this probability distribution into two parts as such:
In general, any probability distribution function of the form \(p(x) = f_1(x) f_2(x)\) with \(f_1(x)\) bounded can be sampled by sampling \(x'\) from the distribution
and accepting it with probability
The reason for dividing and multiplying the terms by \(v_n + v_T\) is to ensure that the first term is bounded. In general, \(|| \mathbf{v}_n - \mathbf{v}_T ||\) can take on arbitrarily large values, but if we divide it by its maximum value \(v_n + v_T\), then it ensures that the function will be bounded. We now must come up with a sampling scheme for equation (69). To determine \(q(v_T)\), we need to integrate \(f_2\) in equation (68). Doing so we find that
Thus, we need to sample the probability distribution function
Now, let us do a change of variables with the following definitions
Substituting equation (73) into equation (72) along with \(dx = \beta dv_T\) and doing some crafty rearranging of terms yields
It’s important to make note of the following two facts. First, the terms outside the parentheses are properly normalized probability distribution functions that can be sampled directly. Secondly, the terms inside the parentheses are always less than unity. Thus, the sampling scheme for \(q(x)\) is as follows. We sample a random number \(\xi_1\) on the interval \([0,1)\) and if
then we sample the probability distribution \(2x^3 e^{-x^2}\) for \(x\) using rule C49 in the Monte Carlo Sampler which we can then use to determine the speed of the target nucleus \(v_T\) from equation (73). Otherwise, we sample the probability distribution \(\frac{4}{\sqrt{\pi}} x^2 e^{-x^2}\) for \(x\) using rule C61 in the Monte Carlo Sampler.
With a target speed sampled, we must then decide whether to accept it based on the probability in equation (70). The cosine can be sampled isotropically as \(\mu = 2\xi_2 - 1\) where \(\xi_2\) is a random number on the unit interval. Since the maximum value of \(f_1(v_T, \mu)\) is \(4\sigma_s / \sqrt{\pi} C'\), we then sample another random number \(\xi_3\) and accept the sampled target speed and cosine if
If the sampled target speed and cosine are not accepted, the process is repeated until a combination is found that satisfies equation (76).
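The complete constant-cross-section sampling loop can be sketched as follows. The branching probability used to choose between the two speed distributions is written here as \(2/(2 + \sqrt{\pi}\,\beta v_n)\), which is assumed to correspond to the criterion in equation (75); \(\beta\) is the Maxwellian parameter defined earlier in the text.

```python
import math

def sample_target_velocity(v_n, beta, rng):
    """Sample target speed and cosine for free-gas elastic scattering (constant sigma_s).

    v_n  : speed of the incoming neutron in the lab
    beta : the Maxwellian parameter defined in the text (depends on A and kT)
    rng  : numpy random Generator
    """
    while True:
        # Choose which of the two normalized distributions to sample x = beta*v_T from
        if rng.random() < 2.0 / (2.0 + math.sqrt(math.pi) * beta * v_n):
            # sample p(x) ~ 2 x^3 exp(-x^2)          (rule C49)
            x = math.sqrt(-math.log(rng.random() * rng.random()))
        else:
            # sample p(x) ~ (4/sqrt(pi)) x^2 exp(-x^2)  (rule C61)
            c = math.cos(math.pi / 2.0 * rng.random())
            x = math.sqrt(-math.log(rng.random()) - math.log(rng.random()) * c * c)
        v_T = x / beta
        mu = 2.0 * rng.random() - 1.0                      # isotropic cosine
        v_rel = math.sqrt(v_n * v_n + v_T * v_T - 2.0 * v_n * v_T * mu)
        # Accept with probability ||v_n - v_T|| / (v_n + v_T)
        if rng.random() < v_rel / (v_n + v_T):
            return v_T, mu
```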
5.10.2. Energy-Dependent Cross Section Model¶
As was noted earlier, assuming that the elastic scattering cross section is constant in (56) is not strictly correct, especially when low-lying resonances are present in the cross sections for heavy nuclides. To correctly account for energy dependence of the scattering cross section entails performing another rejection step. The most common method is to sample \(\mu\) and \(v_T\) as in the constant cross section approximation and then perform a rejection on the ratio of the 0 K elastic scattering cross section at the relative velocity to the maximum 0 K elastic scattering cross section over the range of velocities considered:
where it should be noted that the maximum is taken over the range \([v_n - 4/\beta, v_n + 4/\beta]\). This method is known as Doppler broadening rejection correction (DBRC) and was first introduced by Becker et al. OpenMC has an implementation of DBRC as well as an accelerated sampling method that samples the relative velocity directly.
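A sketch of the additional DBRC rejection step, layered on top of the constant-cross-section sampler above, is shown below; `sigma_0K` stands in for a lookup of the 0 K elastic scattering cross section and is not an actual OpenMC function.

```python
import math

def sample_target_velocity_dbrc(v_n, beta, sigma_0K, sigma_max, rng):
    """Free-gas target sampling with the DBRC rejection step.

    sigma_0K  : function returning the 0 K elastic scattering cross section
                at a given relative speed (assumed lookup, hypothetical here)
    sigma_max : maximum of sigma_0K over [v_n - 4/beta, v_n + 4/beta]
    """
    while True:
        v_T, mu = sample_target_velocity(v_n, beta, rng)   # constant-XS sampler sketched above
        v_rel = math.sqrt(v_n * v_n + v_T * v_T - 2.0 * v_n * v_T * mu)
        if rng.random() < sigma_0K(v_rel) / sigma_max:      # DBRC rejection
            return v_T, mu
```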
5.11. S(\(\alpha,\beta,T\)) Tables¶
Note that S(\(\alpha,\beta,T\)) tables are only applicable to continuous-energy transport.
For neutrons with thermal energies, generally less than 4 eV, the kinematics of scattering can be affected by chemical binding and crystalline effects of the target molecule. If these effects are not accounted for in a simulation, the reported results may be highly inaccurate. There is no general analytic treatment for the scattering kinematics at low energies, and thus when nuclear data is processed for use in a Monte Carlo code, special tables are created that give cross sections and secondary angle/energy distributions for thermal scattering that account for thermal binding effects. These tables are mainly used for moderating materials such as light or heavy water, graphite, hydrogen in ZrH, beryllium, etc.
The theory behind S(\(\alpha,\beta,T\)) is rooted in quantum mechanics and is quite complex. Those interested in first principles derivations for formulae relating to S(\(\alpha,\beta,T\)) tables should be referred to the excellent books by [Williams] and [Squires]. For our purposes here, we will focus only on the use of already processed data as it appears in the ACE format.
Each S(\(\alpha,\beta,T\)) table can contain the following:
Thermal inelastic scattering cross section;
Thermal elastic scattering cross section;
Correlated energy-angle distributions for thermal inelastic and elastic scattering.
Note that when we refer to “inelastic” and “elastic” scattering now, we are actually using these terms with respect to the scattering system. Thermal inelastic scattering means that the scattering system is left in an excited state; no particular nucleus is left in an excited state as would be the case for inelastic level scattering. In a crystalline material, the excitation of the scattering could correspond to the production of phonons. In a molecule, it could correspond to the excitation of rotational or vibrational modes.
Both thermal elastic and thermal inelastic scattering are generally divided into incoherent and coherent parts. Coherent elastic scattering refers to scattering in crystalline solids like graphite or beryllium. These cross sections are characterized by the presence of Bragg edges that relate to the crystal structure of the scattering material. Incoherent elastic scattering refers to scattering in hydrogenous solids such as polyethylene. As it occurs in ACE data, thermal inelastic scattering includes both coherent and incoherent effects and is dominant for most other materials including hydrogen in water.
5.11.1. Calculating Integrated Cross Sections¶
The first aspect of using S(\(\alpha,\beta,T\)) tables is calculating cross sections to replace the data that would normally appear on the incident neutron data, which do not account for thermal binding effects. For incoherent inelastic scattering, the cross section is stored as a linearly interpolable function on a specified energy grid. For coherent elastic data, the cross section can be expressed as
where \(E_i\) are the energies of the Bragg edges and \(s_i\) are related to crystallographic structure factors. Since the functional form of the cross section is just 1/E and the proportionality constant changes only at Bragg edges, the proportionality constants are stored and then the cross section can be calculated analytically based on equation (78). For incoherent elastic data, the cross section can be expressed as
where \(\sigma_b\) is the characteristic bound cross section and \(W'\) is the Debye-Waller integral divided by the atomic mass.
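For coherent elastic scattering, evaluating the cross section reduces to finding the last Bragg edge below the neutron energy. A sketch, assuming the structure factors are stored as cumulative sums (an assumption made here for simplicity), is:

```python
import numpy as np

def coherent_elastic_xs(E, bragg_edges, factors):
    """Coherent elastic cross section at energy E.

    bragg_edges : sorted array of Bragg-edge energies E_i
    factors     : cumulative structure-factor sums, i.e. factors[i] is the sum
                  of s_j over all edges with E_j <= E_i (assumed storage)
    """
    i = np.searchsorted(bragg_edges, E, side='right') - 1
    if i < 0:
        return 0.0            # below the first Bragg edge there is no coherent scattering
    return factors[i] / E     # sigma(E) proportional to 1/E between Bragg edges
```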
5.11.2. Outgoing Angle for Coherent Elastic Scattering¶
Another aspect of using S(\(\alpha,\beta,T\)) tables is determining the outgoing energy and angle of the neutron after scattering. For incoherent and coherent elastic scattering, the energy of the neutron does not actually change, but the angle does change. For coherent elastic scattering, the angle will depend on which Bragg edge scattered the neutron. The probability that edge \(i\) will scatter the neutron is given by
After a Bragg edge has been sampled, the cosine of the angle of scattering is given analytically by
where \(E_i\) is the energy of the Bragg edge that scattered the neutron.
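A sketch of the corresponding angle sampling, again assuming cumulative structure-factor sums and that \(E\) lies above the first Bragg edge, is:

```python
import numpy as np

def coherent_elastic_angle(E, bragg_edges, factors, rng):
    """Sample the scattering cosine for coherent elastic scattering at energy E.

    The probability of edge i is taken proportional to its contribution to the
    cross section below E; 'factors' holds cumulative structure-factor sums.
    """
    i_max = np.searchsorted(bragg_edges, E, side='right')   # edges with E_i < E
    cdf = factors[:i_max]
    xi = rng.random() * cdf[-1]
    i = int(np.searchsorted(cdf, xi, side='right'))
    return 1.0 - 2.0 * bragg_edges[i] / E                   # mu = 1 - 2 E_i / E
```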
5.11.3. Outgoing Angle for Incoherent Elastic Scattering¶
For incoherent elastic scattering, OpenMC has two methods for calculating the cosine of the angle of scattering. The first method uses the Debye-Waller integral, \(W'\), and the characteristic bound cross section as given directly in an ENDF-6 formatted file. In this case, the cosine of the angle of scattering can be sampled by inverting equation 7.4 from the ENDF-6 Format:
where \(\xi\) is a random number sampled on the unit interval and \(c = 2EW'\). In the second method, the probability distribution for the cosine of the angle of scattering is represented as a series of equally-likely discrete cosines \(\mu_{i,j}\) for each incoming energy \(E_i\) on the thermal elastic energy grid. First the outgoing angle bin \(j\) is sampled. Then, if the incoming energy of the neutron satisfies \(E_i < E < E_{i+1}\), the cosine of the angle of scattering is
where the interpolation factor is defined as
To better represent the true, continuous nature of the cosine distribution, the sampled value of \(\mu'\) is then “smeared” based on the neighboring values. First, values of \(\mu\) are calculated for outgoing angle bins \(j-1\) and \(j+1\):
Then, a final cosine is calculated as:
where \(\xi\) is again a random number sampled on the unit interval. Care must be taken to ensure that \(\mu\) does not fall outside the interval \([-1,1]\).
5.11.4. Outgoing Energy and Angle for Inelastic Scattering¶
Each S(\(\alpha,\beta,T\)) table provides a correlated angle-energy secondary distribution for neutron thermal inelastic scattering. There are three representations used in the ACE thermal scattering data: equiprobable discrete outgoing energies, non-uniform yet still discrete outgoing energies, and continuous outgoing energies with corresponding probability and cumulative distribution functions provided in tabular format. These three representations all represent the angular distribution in a common format, using a series of discrete equiprobable outgoing cosines.
5.11.4.1. Equi-Probable Outgoing Energies¶
If the thermal data was processed with \(iwt = 1\) in NJOY, then the outgoing energy spectrum is represented in the ACE data as a set of discrete and equiprobable outgoing energies. The procedure to determine the outgoing energy and angle is as follows. First, the interpolation factor is determined from equation (84). Then, an outgoing energy bin is sampled from a uniform distribution and then interpolated between values corresponding to neighboring incoming energies:
where \(E_{i,j}\) is the j-th outgoing energy corresponding to the i-th incoming energy. For each combination of incoming and outgoing energies, there is a series of equiprobable outgoing cosines. An outgoing cosine bin is sampled uniformly and then the final cosine is interpolated on the incoming energy grid:
where \(\mu_{i,j,k}\) is the k-th outgoing cosine corresponding to the j-th outgoing energy and the i-th incoming energy.
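A sketch of this sampling scheme is shown below; the linear interpolation with the factor \(f\) from equation (84) is the assumed form of the interpolations above.

```python
import numpy as np

def sample_equiprobable(E, E_in_grid, E_out_table, mu_table, rng):
    """Sample outgoing energy and cosine from equiprobable thermal inelastic data.

    E_in_grid   : incoming-energy grid E_i
    E_out_table : E_out_table[i][j], equiprobable outgoing energies
    mu_table    : mu_table[i][j][k], equiprobable cosines for each (i, j)
    """
    i = np.searchsorted(E_in_grid, E) - 1
    f = (E - E_in_grid[i]) / (E_in_grid[i + 1] - E_in_grid[i])   # interpolation factor

    j = rng.integers(len(E_out_table[i]))        # uniformly sampled outgoing-energy bin
    E_out = (1.0 - f) * E_out_table[i][j] + f * E_out_table[i + 1][j]

    k = rng.integers(len(mu_table[i][j]))        # uniformly sampled outgoing-cosine bin
    mu = (1.0 - f) * mu_table[i][j][k] + f * mu_table[i + 1][j][k]
    return E_out, mu
```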
5.11.4.2. Skewed Equi-Probable Outgoing Energies¶
If the thermal data was processed with \(iwt=0\) in NJOY, then the outgoing energy spectrum is represented in the ACE data according to the following: the first and last outgoing energies have a relative probability of 1, the second and second-to-last energies have a relative probability of 4, and all other energies have a relative probability of 10. The procedure to determine the outgoing energy and angle is similar to the method discussed above, except that the sampled probability distribution is now skewed accordingly.
5.11.4.3. Continuous Outgoing Energies¶
If the thermal data was processed with \(iwt=2\) in NJOY, then the outgoing energy spectrum is represented as a continuous distribution in tabular form with linear-linear interpolation. The sampling of the outgoing energy portion of this format is very similar to Correlated Energy and Angle Distribution, but the sampling of the correlated angle is performed as it was in the other two representations discussed in this sub-section. In the Law 61 algorithm, we found an interpolation factor \(f\), statistically sampled an incoming energy bin \(\ell\), and sampled an outgoing energy bin \(j\) based on the tabulated cumulative distribution function. Once the outgoing energy has been determined with equation (34), we then need to decide which angular distribution data to use. Like the linear-linear interpolation case in Law 61, the angular distribution closest to the sampled value of the cumulative distribution function for the outgoing energy is utilized. The actual algorithm utilized to sample the outgoing angle is shown in equation (88). As in the case of incoherent elastic scattering with discrete cosine bins, the sampled cosine is smeared over neighboring angle bins to better approximate a continuous distribution.
5.12. Unresolved Resonance Region Probability Tables¶
Note that unresolved resonance treatments are only applicable to continuous-energy transport.
In the unresolved resonance energy range, resonances may be so closely spaced that it is not possible for experimental measurements to resolve all resonances. To properly account for self-shielding in this energy range, OpenMC uses the probability table method. For most thermal reactors, the use of probability tables will not significantly affect problem results. However, for some fast reactors and other problems with an appreciable flux spectrum in the unresolved resonance range, not using probability tables may lead to incorrect results.
Probability tables in the ACE format are generated by the UNRESR module in NJOY following the method of Levitt. A similar method employed for the RACER and MC21 Monte Carlo codes is described in a paper by Sutton and Brown. For the discussion here, we will focus only on use of the probability table data as it appears in the ACE format.
Each probability table for a nuclide contains the following information at a number of incoming energies within the unresolved resonance range:
Cumulative probabilities for cross section bands;
Total cross section (or factor) in each band;
Elastic scattering cross section (or factor) in each band;
Fission cross section (or factor) in each band;
\((n,\gamma)\) cross section (or factor) in each band; and
Neutron heating number (or factor) in each band.
It should be noted that unresolved resonance probability tables affect only integrated cross sections and no extra data need be given for secondary angle/energy distributions. Secondary distributions for elastic and inelastic scattering would be specified whether or not probability tables were present.
The procedure for determining cross sections in the unresolved range using probability tables is as follows. First, the bounding incoming energies are determined, i.e. find \(i\) such that \(E_i < E < E_{i+1}\). We then sample a cross section band \(j\) using the cumulative probabilities for table \(i\). This allows us to then calculate the elastic, fission, and capture cross sections from the probability tables by interpolating between neighboring incoming energies. If linear interpolation is specified, the cross sections are calculated as
where \(\sigma_{i,j}\) is the j-th band cross section corresponding to the i-th incoming neutron energy and \(f\) is the interpolation factor defined in the same manner as (84). If logarithmic interpolation is specified, the cross sections are calculated as
where the interpolation factor is now defined as
A flag is also present in the probability table that specifies whether an inelastic cross section should be calculated. If so, this is done from a normal reaction cross section (either MT=51 or a special MT). Finally, if the cross sections defined above are specified to be factors and not true cross sections, they are multiplied by the underlying smooth cross section in the unresolved range to get the actual cross sections. Lastly, the total cross section is calculated as the sum of the elastic, fission, capture, and inelastic cross sections.
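A sketch of the band selection and interpolation for a single reaction is shown below; the linear and logarithmic interpolation forms are the standard ones and are assumed here to match the equations above.

```python
import math
import numpy as np

def prob_table_xs(E, energies, cdf_bands, xs_bands, log_interp, rng):
    """Sample a band cross section from unresolved-resonance probability tables.

    energies  : incoming-energy grid E_i of the tables
    cdf_bands : cdf_bands[i][j], cumulative band probabilities at E_i
    xs_bands  : xs_bands[i][j], band cross section (or factor) at E_i
                (one such array per reaction: total, elastic, fission, capture)
    """
    i = np.searchsorted(energies, E) - 1

    # Sample the cross section band from the cumulative probabilities of table i
    xi = rng.random()
    j = int(np.searchsorted(cdf_bands[i], xi, side='right'))

    if log_interp:
        f = (math.log(E) - math.log(energies[i])) / \
            (math.log(energies[i + 1]) - math.log(energies[i]))
        return math.exp((1.0 - f) * math.log(xs_bands[i][j])
                        + f * math.log(xs_bands[i + 1][j]))
    else:
        f = (E - energies[i]) / (energies[i + 1] - energies[i])
        return (1.0 - f) * xs_bands[i][j] + f * xs_bands[i + 1][j]
```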
5.13. Variance Reduction Techniques¶
5.13.1. Survival Biasing¶
In problems with highly absorbing materials, a large fraction of neutrons may be killed through absorption reactions, thus leading to tallies with very few scoring events. To remedy this situation, an algorithm known as survival biasing or implicit absorption (or sometimes implicit capture, even though this is a misnomer) is commonly used.
In survival biasing, absorption reactions are prohibited from occurring and instead, at every collision, the weight of neutron is reduced by probability of absorption occurring, i.e.
where \(w'\) is the weight of the neutron after adjustment and \(w\) is the weight of the neutron before adjustment. A few other things need to be handled differently if survival biasing is turned on. Although fission reactions never actually occur with survival biasing, we still need to create fission sites to serve as source sites for the next generation in the method of successive generations. The algorithm for sampling fission sites is the same as that described in Fission. The only difference is in equation (14). We now need to produce
fission sites, where \(w\) is the weight of the neutron before being adjusted. One should note this is just the expected number of neutrons produced per collision rather than the expected number of neutrons produced given that fission has already occurred.
Additionally, since survival biasing can reduce the weight of the neutron to very low values, it is always used in conjunction with a weight cutoff and Russian rouletting. Two user-adjustable parameters \(w_c\) and \(w_s\) are given: the weight below which neutrons should undergo Russian roulette, and the weight they are assigned should they survive Russian roulette. The algorithm for Russian rouletting is as follows. After a collision, if \(w < w_c\), then the neutron is killed with probability \(1 - w/w_s\). If it survives, the weight is set equal to \(w_s\). One can confirm that the average weight following Russian roulette is simply \(w\), so the game can be considered “fair”. By default, the cutoff weight in OpenMC is \(w_c = 0.25\) and the survival weight is \(w_s = 1.0\). These parameters vary from one Monte Carlo code to another.
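A sketch of the per-collision weight adjustment and the associated Russian roulette game described above is shown below; the weight reduction by the absorption probability \(\sigma_a/\sigma_t\) is the assumed form of the adjustment.

```python
def collide_survival_biasing(w, sigma_a, sigma_t, rng, w_c=0.25, w_s=1.0):
    """Adjust a particle's weight at a collision under survival biasing.

    sigma_a / sigma_t is the absorption probability at the collision site.
    Returns the new weight, or 0.0 if the particle is killed by Russian roulette.
    """
    w_new = w * (1.0 - sigma_a / sigma_t)   # remove the absorbed fraction of the weight

    # Russian roulette if the weight has dropped below the cutoff
    if w_new < w_c:
        if rng.random() < 1.0 - w_new / w_s:
            return 0.0                      # killed
        return w_s                          # survives with the survival weight
    return w_new
```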
5.13.2. Weight Windows¶
In fixed source problems, it can often be difficult to obtain sufficiently low variance on tallies in regions that are far from the source. The weight window method was developed to increase the population of particles in important spatial regions and energy ranges by controlling particle weights. Each spatial region and particle energy range is assigned upper and lower weight bounds, \(w_u\) and \(w_\ell\), respectively. When a particle is in a given spatial region / energy range, its weight, \(w\), is compared to the lower and upper bounds. If the weight of the particle is above the upper weight bound, the particle is split into \(N\) particles, where
and \(N_{max}\) is a user-defined maximum number of splits. To ensure a fair game, each of the \(N\) particles is assigned a weight \(w/N\). If the weight is below \(w_\ell\), it is Russian rouletted as described in Survival Biasing with a survival weight \(w_s\) that is set equal to
where \(f_s\) is a user-defined survival weight ratio greater than one.
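A sketch of the weight window logic is given below. Since the splitting and survival-weight equations are not reproduced in this text, the forms \(N = \min(N_{max}, \lceil w/w_u \rceil)\) and \(w_s = f_s w_\ell\) used here are assumptions, not necessarily OpenMC’s exact expressions.

```python
import math

def apply_weight_window(w, w_lower, w_upper, n_max, f_s, rng):
    """Apply a weight window to a particle of weight w.

    Returns (n_particles, new_weight): n_particles copies, each with new_weight;
    zero particles means the particle was rouletted.
    """
    if w > w_upper:
        n = min(n_max, math.ceil(w / w_upper))      # split into N particles (assumed form)
        return n, w / n                             # fair game: each copy carries w/N
    elif w < w_lower:
        w_s = f_s * w_lower                         # survival weight (assumed form)
        if rng.random() < 1.0 - w / w_s:
            return 0, 0.0                           # killed by Russian roulette
        return 1, w_s
    return 1, w
```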
On top of the standard weight window method described above, OpenMC implements two additional checks intended to mitigate problems with long histories. First, particles with a weight that falls below some very small cutoff (defaults to \(10^{-38}\)) are killed with no Russian rouletting. Additionally, the total number of splits experienced by a particle is tracked and if it reaches some maximum value, it is prohibited from splitting further.
At present, OpenMC allows weight windows to be defined on all supported mesh types.
References
- Gelbard
Ely M. Gelbard, “Epithermal Scattering in VIM,” FRA-TM-123, Argonne National Laboratory (1979).
- Squires
G. L. Squires, Introduction to the Theory of Thermal Neutron Scattering, Cambridge University Press (1978).
- Williams
M. M. R. Williams, The Slowing Down and Thermalization of Neutrons, North-Holland Publishing Co., Amsterdam (1966). Note: This book can be obtained for free from the OECD. | https://docs.openmc.org/en/stable/methods/neutron_physics.html | 2022-09-24T21:51:53 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.openmc.org |