content: string (lengths 0 to 557k)
url: string (lengths 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (lengths 9 to 15)
segment: string (lengths 13 to 17)
image_urls: string (lengths 2 to 55.5k)
netloc: string (lengths 7 to 77)
Mortgage rates in Canada have been at historic lows for some time now, which is helping to fuel the recovery of the Canadian economy. During the first four months of 2010, housing sales in Canada rose significantly, with increases in both the average selling price and the number of homes sold. The following four months over the summer were somewhat softer. The housing market is now becoming more balanced, with increased inventory tempered by increased demand. The Canadian government is tightening the rules for qualifying for a mortgage, which prompted some buyers to jump into the housing market before the new rules took effect. The HST, which took effect on July 1st, sent buyers in Ontario and British Columbia racing to purchase homes. Pent-up demand from the recent economic downturn also spurred home sales. Interest rates have come off their historic lows but are still relatively low. All of the major banks in Canada are forecasting that interest rates will rise over the next year and a half. While forecasts differ on the exact size of the increase in mortgage rates, the consensus seems to be that the overnight lending rate will be somewhere between 2.5% and 3.5% by the end of 2011. Fixed interest rates are also expected to rise, since they are tied to bond yields. By the end of 2011, borrowers with an excellent credit score may be looking at a long-term fixed rate of 5.36%. What does this mean for the home buyer? It means that the next few months are a great time to buy in Ontario and other parts of Canada. With house prices stabilizing and more homes to choose from, there are undoubtedly homes that will be excellent buys. Interest rates are still very affordable despite being up slightly from their lowest points. By locking in a fixed rate on a long-term mortgage, buyers can be assured of low monthly payments even as variable rates rise. Recently, home prices have softened from the overheated conditions of the spring while interest rates remain near record lows. Even the long-term fixed rate is having trouble sustaining a climb and instead seems to be pulling back a bit. Housing inventory in Canada is still keeping pace with the lower demand, as many sellers either pulled their homes off the market or were unwilling to list at lower prices. As a result, good deals can be found right now, and very low mortgage rates make it an attractive time to buy. The hot housing market of the previous spring has given way to a calmer market today, which means buyers and sellers are closer to an equilibrium that is neither pushing prices up nor making them fall. The current market allows a qualified buyer to take the time to find the right home and a willing seller to get a fair price. Since interest rates remain near historic lows, whether you choose a long-term fixed mortgage rate or a variable rate, the interest cost of buying a home will stay near record lows. Where Canadian housing prices head from here is not certain, but the low-rate environment combined with softer home prices makes buying more affordable for those considering a home purchase.
http://www.ogi-docs.com/mortgage-rates-in-canada-which-direction-are-rates-going/
2021-04-10T22:09:35
CC-MAIN-2021-17
1618038059348.9
[]
www.ogi-docs.com
How to version Jupyter notebooks¶ Introduction¶ This guide will show you how to: Install neptune-notebook extension for Jupyter Notebook or JupyterLab Connect Neptune to your Jupyter environment Log your first notebook checkpoint to Neptune and see it in the UI By the end of it, you will see your first notebook checkpoint in Neptune! Before you start¶ Make sure you meet the following prerequisites before starting: Have Python 3.x installed Have Jupyter Notebook or JupyterLab installed Have node.js installed if you are using JupyterLab Note If you are using conda you can install node with: conda install -c conda-forge nodejs Step 1 - Install neptune-notebooks¶ Jupyter Notebook To install neptune-notebooks on Jupyter Notebook: Install neptune-notebooks extension: pip install neptune-notebooks Enable the extension for Jupyter Notebook: jupyter nbextension enable --py neptune-notebooks JupyterLab To install neptune-notebooks on JupyterLab go to your terminal and run: jupyter labextension install neptune-notebooks Note Remember that you need to have node.js installed to use JupyterLab extensions. Step 2 - Create some Notebook¶ Create a notebook with some analysis in it. For example, you can use this code to create an interactive Altair chart of the cars dataset: Add installation command in the first cell pip install altair vega_datasets Create Altair chart in the second cell import altair as alt from vega_datasets import data source = data.cars() brush = alt.selection(type='interval') points = alt.Chart(source).mark_point().encode( x='Horsepower:Q', y='Miles_per_Gallon:Q', color=alt.condition(brush, 'Origin:N', alt.value('lightgray')) ).add_selection( brush ) bars = alt.Chart(source).mark_bar().encode( y='Origin:N', color='Origin:N', x='count(Origin):Q' ).transform_filter( brush ) chart = points & bars chart Run both cells and see the interactive Altair visualization. Step 3 - Configure Neptune API token¶ Now, you need to connect your notebook to Neptune. Copy your Neptune API token. Click on the Neptune icon and paste your API token there. Step 4 - Save Notebook Checkpoint to Neptune¶ Click on the Upload button. You will be prompted to: Choose which project you want to send this notebook to Add a description of the notebook Step 5 - See your notebook checkpoint in Neptune¶ Click on the green link that was created at the bottom of your notebook or go directly to the Notebooks section of your Neptune project. Your notebook checkpoint was tracked and you can explore it now or later. Conclusion¶ You’ve learned how to: Install neptune-notebook extension for Jupyter Notebook or JupyterLab Connect Neptune to your Jupyter environment Log your first notebook checkpoint to Neptune and see it in the UI What’s next¶ Now that you know how to save notebook checkpoints to Neptune you can learn:
https://docs-legacy.neptune.ai/getting-started/quick-starts/how-to-version-notebooks.html
2021-04-10T21:41:47
CC-MAIN-2021-17
1618038059348.9
[array(['../../_images/get_token.gif', 'Get Neptune API token'], dtype=object) ]
docs-legacy.neptune.ai
This section includes information on locating Address Manager in your network, such as factors affecting the placement of Address Manager and DNS/DHCP Server appliances in your network as well as the required and optional ports used by Address Manager and DNS/DHCP Server. Address Manager should always be installed in a trusted part of the network. If you require remote access to Address Manager within the trusted part of the network, the use of a virtual private network (VPN) is recommended. Topology designs should take into account that Address Manager is designed for use on the internal network and does not contain its own firewall. DNS/DHCP Server appliances contain a packet-filtering firewall that is dynamically configured according to the services in use on the appliance. Therefore, DNS/DHCP Server appliances can be safely deployed in any part of the network. DNS/DHCP Server is designed to be secure for use on hostile network segments, such as in DMZ environments. Address Manager needs to be accessible to administrators and it needs to contact other servers. Address Manager also must be able to receive notifications from the servers it is managing. Address Manager and DNS/DHCP Server both require communication on various TCP and UDP ports depending on the services that are configured on the appliances. For information on services and ports, refer to Address Manager service ports and DNS/DHCP Server firewall requirements. Network topology Address Manager makes possible many different types of server topologies. Traditional DNS best practices still apply to much of the topology of a Address Manager-designed network. Beyond the recommendation that Address Manager reside in a trusted part of the network, the rest of the topology can change.
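As a small, generic sketch (not from the BlueCat documentation), connectivity on the required TCP ports between Address Manager and a managed DNS/DHCP Server can be spot-checked with a few lines of Python; the host address and port numbers below are placeholders, and the authoritative lists are in the "Address Manager service ports" and "DNS/DHCP Server firewall requirements" topics referenced above:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical check from an Address Manager host toward a managed DNS/DHCP Server.
for port in (53, 443):
    state = "reachable" if tcp_port_open("192.0.2.10", port) else "blocked or closed"
    print(f"TCP {port}: {state}")
```

Note that this only exercises TCP; UDP services (such as DNS queries or DHCP) need protocol-specific checks.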
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Network-requirements/9.0.0
2021-04-10T21:19:01
CC-MAIN-2021-17
1618038059348.9
[]
docs.bluecatnetworks.com
BuildMaster has two related features that help you share resources across applications: There are a few types of credentials that are built-in to BuildMaster: There are also a few built-in resource types: However, because these are implemented as extensions, many integrations like Azure DevOps, Jira, Jenkins, etc. will add additional types. You manage secure credentials under Application Settings > Credentials or Administration > Secure Credentials. All secure credentials have the following properties: $CredentialProperty variable function to extract secure properties Depending on the type of secure credential, there will be other fields like Username or Password that you can edit. Secret fields (like Password) will be displayed as an empty textbox, and entering a value in the textbox will change it. You can click the "Show Secret fields" button to show these values if you have the appropriate permissions. Most Operations that work with resources and/or credentials will let you use a secure resource name and/or a secure credential name. For example, although the AzureDevOps::Get-Source operation has all the properties necessary to connect to an Azure DevOps repository (InstanceUrl, Username, Token, Project, etc.), you can simply specify a configured secure resource name instead. AzureDevOps::Get-Source ( From: MyAzureDevOpsResource ); At runtime, the operation will search for a secure resource named MyAzureDevOpsResource. If that secure resource has a secure credential name configured, then the associated secure credential will also be used. Note that, if the credential has "restricted to environment use" configured, then this will only be permitted if the deployment plan is running in the same environment. This can be frustrating and unintuitive to your users, so be careful when using it. You may want to permit or restrict certain users from accessing certain secure credentials, such as allowing Developers to manage credentials in the Integration and Testing environments. This is done by associating credentials with an environment, and then creating the appropriate access controls scoped to that environment. There are two task attributes you can use to control this access: On the manage credentials page, users will only see the credentials they have permission to manage, and will only be able to create credentials in permitted environments. Most Operations will simply input the name of a Secure Resource or Secure Credential, and use the properties you've configured. However, there may be times when you need to access the configured properties, such as if you want to pass a username or password to a script or command-line utility. To access the properties of a configured Secure Credential or Secure Resource from within OtterScript, you can use the $SecureCredentialProperty and $SecureResourceProperty functions. set $host = $CredentialProperty(InternalProGet, ServerUrl); exec sometool.exe --host $host; By default, accessing secret properties of a secure credential will raise an error. This is to prevent malicious OtterScript like this: # NOTE - this won't actually work unless you explicitly allow it Send-Email ( To: [email protected], Subject: Username is $HdarsUser and password is $HdarsPassword ); If you absolutely need to access a secret field on a Secure Credential (such as a Password or API Token), you'll need to enable variable usage on that credential.
Here is a case where you may need to do that: set $HDarsUser = $SecureCredentialProperty(UsernamePassword::HDarsUser, UserName); set $HDarsPassword = $SecureCredentialProperty(UsernamePassword::HDarsUser, Password); exec hdars-tool.exe --user $HdarsUser --pass $HdarsPassword; If you have important credentials, you should strongly consider writing a custom operation that can securely handle the credentials. Prior to BuildMaster 6.2, a single feature called "Resource Credentials" was used instead of Secure Credentials and Secure Resources. These will appear with a warning icon on the secure credentials page. While Resource Credentials are considered a "Legacy Feature", you can still create and edit them as needed. In some cases, such as custom/community extensions, they may be the only option, until the extension is adapted to use the new features. Many operations that utilize a Secure Resource will attempt to search for a Resource Credential if the specified Secure Resource was not found. This enables upgrades without breaking previous OtterScript plans and modules. You can "convert" a Resource Credential into a Secure Credential and/or Secure Resource from the manage secure credentials page. This is a one-way conversion, so make sure to save important information (like the password) if you need to recreate it. For example, a "GitHub Resource Credential" would convert to a "GitHub Account" (Secure Credential) and a "GitHub Project" (Secure Resource), unless the Username was empty; in that case, it will only be converted to a Secure Resource. BuildMaster v6.1.10 introduced "cascading resource credentials", which allowed for credentials to be created at the application-group level. A resource credential could also "inherit" from another resource credential, which would cause specific properties to be overridden. For example, an individual application's resource credentials could specify a system-level parent that has username/password information stored for a full system, and then override just the "repository URL". This way, you can specify the secret value in one location (with elevated privileges required to access it), and connect to different repositories consistently throughout all applications. For security reasons, the environment of the parent credential must match the inheritor's exactly, or the parent environment must not be specified (indicating that it applies to all environments). When resolving a resource credential at execution time, candidates by name are selected in order of matching properties: application + environment, application, application group, ancestor application-group, environment, system. Is this documentation incorrect or incomplete? Help us by contributing! This documentation is licensed under CC-BY-SA-4.0 and stored in GitHub. Generated from commit 99e6640e on master
https://docs.inedo.com/docs/buildmaster/administration/resource-credentials?utm_source=buildmaster&utm_medium=product&utm_campaign=buildmaster6
2021-04-10T21:27:44
CC-MAIN-2021-17
1618038059348.9
[]
docs.inedo.com
The command has the following fields. In the roles field, you can specify both built-in roles and user-defined roles. To specify a role that exists in the same database where revokeRolesFromUser runs, you can refer to the role either by its name alone or with a document; to specify a role that exists in a different database, you must use a document that names both the role and its database.
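As an illustrative sketch (not from the original page), here is how the revokeRolesFromUser command might be issued from Python with PyMongo; the connection string, database, user name, and role names are placeholders:

```python
from pymongo import MongoClient

# Hypothetical connection details; adjust for your deployment.
client = MongoClient("mongodb://localhost:27017")
db = client["products"]

# Revoke one role defined in the same database and one defined in another database.
# Extra keyword arguments to db.command() are appended to the command document.
result = db.command(
    "revokeRolesFromUser",
    "accountUser01",                        # the user whose roles are revoked
    roles=[
        "readWrite",                        # role in the same database: name only
        {"role": "read", "db": "stock"},    # role in a different database: document form
    ],
)
print(result)  # e.g. {'ok': 1.0}
```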
https://docs.mongodb.com/v4.2/reference/command/revokeRolesFromUser/
2021-04-10T22:23:39
CC-MAIN-2021-17
1618038059348.9
[]
docs.mongodb.com
Troubleshooting - The Locksmith in-app resource search bar - Can Locksmith hide content from my in-store search? - Why is Locksmith adding information to my orders? - My featured collections on my home page only show one product, or I'm seeing a ProductDrop error in that section - I switched themes, and Locksmith isn't working. - Why aren't my locks working? - My passcode or newsletter prompt is not updating on my store when I change it - Why isn't my passcode, secret link, or newsletter key working? - Solving installation issues - I can't lock my custom page template! - What should I do if my site is loading slowly? - How to clear cache for a single website - Broken search results, collection filtering, or general theme display issues - I want to hide links to my registration page, but I don't see the option to - My account is suspended! - Solving uninstallation issues - Liquid errors when using server keys - I'm the administrator of my site and I cannot access pages because of Locksmith locks - Why are my customers seeing a reCAPTCHA when logging in? - My infinite scrolling doesn't show all of my products!
https://docs.uselocksmith.com/category/156-troubleshooting?sort=popularity
2021-04-10T21:33:49
CC-MAIN-2021-17
1618038059348.9
[]
docs.uselocksmith.com
Date: Thu, 11 Jun 2015 09:19:04 -0500 (CDT) From: "Valeri Galtsev" <[email protected]> To: "Matthew Seaman" <[email protected]> Cc: [email protected] Subject: Re: FreeBSD and Docker Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <1507965.zgzlHR604A@thinkpad> <[email protected]> <[email protected]> <[email protected]>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=356663+0+/usr/local/www/mailindex/archive/2015/freebsd-questions/20150614.freebsd-questions
2021-11-27T14:04:07
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Authority management¶ By default, you can log in with the GOD role ( nebula as the default username and nebula as the default password). And the ADMIN account can be created or deleted on the authority management page. You cannot perform operations on other accounts using the ADMIN account you created except for viewing the username, the role, and the creation time. Last update: November 11, 2021
https://docs.nebula-graph.io/2.6.1/nebula-dashboard-ent/5.account-management/
2021-11-27T15:01:11
CC-MAIN-2021-49
1637964358189.36
[array(['../figs/ds-032.png', 'god'], dtype=object) array(['../figs/ds-031.png', 'admin'], dtype=object)]
docs.nebula-graph.io
Date: Mon, 15 Feb 1999 10:46:26 -0000 From: "Barry Scott" <[email protected]> To: "Freebsd-Isdn@Freebsd. Org" <[email protected]> Subject: ISDN status monitor - status Message-ID: <[email protected]> I have been quiet for too long after posting the screen shot of the ISDN status monitor. Here is my current status. The original version was written in Perl, but it had a number of problems with the graphs, and I hate maintaining Perl code. Further, the gdchart software is feature-poor for this application (it cannot chart Tx and Rx against time). The current version is implemented in Python and is a nice clear and maintainable implementation. The Python version has the following features: * Three ways to graph the Tx and Rx data * Control over the length of time the graph shows * Control over number of log lines shown * Control over number of accounting lines shown * Style sheets used to allow user colour preferences to be used. I cannot easily release a kit until the Python Image Library (PIL) comes out of beta. I've been finding bugs in it that will be fixed in the next PIL kit. The Python and PIL folks are talking about releasing at the start of March. I'll take the release kits and run against them, then release ISDN status. Barry To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-isdn" in the body of the message
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=18319+0+/usr/local/www/mailindex/archive/1999/freebsd-isdn/19990214.freebsd-isdn
2021-11-27T14:24:24
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
9.0.005.06 Genesys Softphone Release Notes Helpful Links Releases Info Product Documentation Genesys Products What's New This release contains the following new features and enhancements: - Genesys Softphone can now be deployed in the Citrix XenApp 7 and XenDesktop 7 virtual desktop infrastructures. Deployment requires Genesys Softphone VDI Adapter to be installed on the client workstations. Refer to the Supported OS and the Supported Virtualization system guides for details about the supported versions. (SOFTPHONE-504) Resolved Issues This release contains the following resolved issues: Genesys Softphone no longer fails to initialize when it is configured with HTTP Connector and the Workspace configuration contains a Softphone log option (sipendpoint.system.diagnostics.log_*) different from the configuration specified in the softphone.config configuration file. (SOFTPHONE-608) Upgrade Notes No special procedure is required to upgrade to release 9.0.005.06. This page was last edited on February 28, 2019, at 15:45.
https://docs.genesys.com/Documentation/RN/9.0.x/gsp90rn/gsp9000506
2021-11-27T13:52:06
CC-MAIN-2021-49
1637964358189.36
[]
docs.genesys.com
Testing environment When connected to your dashboard, you’ll notice a Test/Live toggle in the middle of the application's upper section. By default, your account will be set to Test mode. This environment lets you fully integrate Snipcart on your website and even process dummy transactions. Every environment (Test/Live) is isolated. Any configuration changes made within one environment won't affect the other. This allows you to experiment without any impact on your Live environment and real customers. General settings, promo codes, shipping methods or any other configuration are all isolated within their respective environment. For instance, all the promo codes you create within your Test environment will only be available when using your Test API Key. They are completely isolated from the Live environment, so you don't have to worry about creating test promo codes: your Live clients will not be able to use them. Getting your Test API key To get your Test API Key, log into the dashboard and make sure you’re in the Test environment. Under Account → API keys, you'll find your Test API key. Important notes The toggle allows you to access the Test and Live environment data of your account. It only affects the data and configuration displayed and does not activate a mode or another on your site. To start processing real transactions, you need to switch API keys—please read this entry on Going live.
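As a hedged illustration (not from the Snipcart docs themselves), one common pattern is to keep the Test and Live secret API keys in environment variables and pick one at startup, so the same code runs against either isolated environment. The environment-variable names and the /api/orders endpoint below are assumptions to verify against Snipcart's API reference:

```python
import os
import requests

# Choose the secret API key based on which environment we target.
# SNIPCART_ENV, SNIPCART_TEST_SECRET_KEY and SNIPCART_LIVE_SECRET_KEY are
# hypothetical variable names used only for this sketch.
env = os.environ.get("SNIPCART_ENV", "test")
secret_key = os.environ["SNIPCART_TEST_SECRET_KEY" if env == "test" else "SNIPCART_LIVE_SECRET_KEY"]

# Snipcart's REST API is assumed here to accept the secret key via HTTP Basic
# auth (key as username, empty password); a test key only ever sees test data.
response = requests.get("https://app.snipcart.com/api/orders", auth=(secret_key, ""))
response.raise_for_status()
print(response.json())
```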
https://docs.snipcart.com/v3/testing/environment
2021-11-27T15:32:35
CC-MAIN-2021-49
1637964358189.36
[array(['https://snipcartweb-10f3.kxcdn.com/media/1419/docs-snipcart-test-live-toggle.png', 'Test toggle'], dtype=object) ]
docs.snipcart.com
DIBS Configure DIBS Log in to the DIBS administration. MD5 keys Click Integration -> MD5 Keys in the menu to the left. Check the Perform MD5 control and click Update. Return values Next click on the Integration -> Return values menu item. Check the following items – Order ID, Paytype, Card number with last four digits unmasked and All fields exclusive of card information response. API user Click Setup -> User Setup -> API users and create a new API user. Configure Tea Commerce Create a payment method and select DIBS as the payment provider. Now configure the settings. DIBS supports a wide range of different settings which you can read more about in their documentation.
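The MD5 control enabled above works by appending an MD5 checksum, computed from your two MD5 keys, to the payment parameters. As a rough, unofficial sketch only (the parameter string, its ordering, and the md5(key2 + md5(key1 + params)) scheme are assumptions based on the legacy DIBS FlexWin convention; DIBS's own integration documentation is authoritative), the calculation might look like this:

```python
import hashlib

def dibs_md5_key(key1: str, key2: str, merchant: str, order_id: str,
                 currency: str, amount_in_minor_units: int) -> str:
    """Compute a DIBS-style MD5 authentication key for a payment request.

    Assumed scheme: md5(key2 + md5(key1 + parameter_string)).
    Verify the exact parameter string against the DIBS documentation.
    """
    params = (f"merchant={merchant}&orderid={order_id}"
              f"&currency={currency}&amount={amount_in_minor_units}")
    inner = hashlib.md5((key1 + params).encode("utf-8")).hexdigest()
    return hashlib.md5((key2 + inner).encode("utf-8")).hexdigest()

# Hypothetical values for illustration only.
print(dibs_md5_key("k1...", "k2...", "123456", "ORDER-1001", "208", 19900))
```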
https://docs.teacommerce.net/3.3.1/payment-providers/dibs/
2021-11-27T14:06:48
CC-MAIN-2021-49
1637964358189.36
[array(['/img/5d0bbd8-dibs-1.png', 'dibs-1.png'], dtype=object) array(['/img/855d0fa-dibs-2.png', 'dibs-2.png'], dtype=object) array(['/img/77f476f-dibs-3.png', 'dibs-3.png'], dtype=object) array(['/img/9deedac-dibs-4.png', 'dibs-4.png'], dtype=object)]
docs.teacommerce.net
2018-10-23 Release Notes Welcome: Kira Makagon, EVP of Innovation at RingCentral; Paul Chapman, CIO of Box; and Leo Laporte, founder of the TWiT Netcast Network. They'll discuss their experiences migrating their organizations' IT into the cloud, their lessons learned once there, and more... Cloud Agents The latest award of a broadband provider-based Cloud Agent goes to Washington (state)! Welcome to the Seattle (AT&T) Cloud Agent. Enterprise Agents In with the new, out with the old... In: We have new bulk actions for Enterprise Agents. Out: Red Hat, CentOS and Oracle Linux versions 6.7, 7.2 and 7.3 are approaching their term limits. Details below. Bulk Actions. End of Life for Linux package Enterprise Agents on Red Hat 6.7, 7.2 and 7.3. Snapshot creation via API. Endpoint Agent new Wireless View and metrics. Minor enhancements & bug fixes. Questions and comments Got feedback for us? Want to vote for your favorite feature or blog post? Send us an email!
https://docs.thousandeyes.com/archived-release-notes/2018/2018-10-23-release-notes
2021-11-27T14:38:57
CC-MAIN-2021-49
1637964358189.36
[]
docs.thousandeyes.com
See Also: UIPageControl Members This control appears as a bar on which a number of dots represent available pages (a similar control is at the bottom of the iOS home screen). When the application user selects one of the pages, it raises the UIControl.ValueChanged event. The application developer can then use the UIPageControl.CurrentPage property to determine what page to display. When the number of pages gets large, it may be impossible to display a separate dot per page. In that situation, the UIKit.UIPageControl clips the display of page indicators.
http://docs.go-mono.com/monodoc.ashx?link=T%3AUIKit.UIPageControl
2021-11-27T15:05:57
CC-MAIN-2021-49
1637964358189.36
[]
docs.go-mono.com
The predictor's __init__() constructor is only called on one thread, whereas its predict() method can run on any of the available threads (which is configured via the threads_per_process field in the API's predictor configuration). If threads_per_process is set to 1 (the default value), then there is no concern, since __init__() and predict() will run on the same thread. However, if threads_per_process is greater than 1, predict() may be called from a different thread than the one that ran __init__(). One way to deal with __init__() and predict() running in separate threads is sketched below, inside the predict() method:
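The following is a minimal, hypothetical sketch (not taken from the Cortex docs) of one way to make a TensorFlow 1.x-style session safe to use when predict() may run on a different thread than __init__(): keep references to the graph and session created in __init__() and re-enter that graph, guarded by a lock, inside predict(). The class and method names follow Cortex's Python predictor convention; the trivial identity graph stands in for a real model, and the payload field name is a placeholder.

```python
import threading

import tensorflow as tf  # assumes TensorFlow 1.x-style graph/session APIs via tf.compat.v1


class PythonPredictor:
    def __init__(self, config):
        # Build the graph and session once, on the thread that runs __init__().
        self._lock = threading.Lock()
        self._graph = tf.Graph()
        with self._graph.as_default():
            self._input = tf.compat.v1.placeholder(tf.float32, shape=[None, 4], name="input")
            self._output = tf.identity(self._input, name="output")  # stand-in for a real model
            self._session = tf.compat.v1.Session(graph=self._graph)

    def predict(self, payload):
        # predict() may run on another thread, so explicitly re-enter the graph
        # built in __init__() and serialize access to the session.
        with self._lock, self._graph.as_default():
            result = self._session.run(
                self._output, feed_dict={self._input: [payload["features"]]}
            )
        return result.tolist()
```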
https://docs.cortex.dev/v/0.22/troubleshooting/tf-session-in-predict
2021-11-27T14:34:25
CC-MAIN-2021-49
1637964358189.36
[]
docs.cortex.dev
Properties for Inventory Devices on Apply Allocations and Exemptions Page FlexNet Manager Suite 2020 R2 (On-Premises) This page displays the following columns for inventory devices. Note that some of the columns display properties of hardware asset records, and are populated only for inventory devices that are linked to assets. Some columns are displayed by default whereas others can be displayed through the column chooser. For displaying columns and other UI options, see the topics under Managing Columns in a Table. FlexNet Manager Suite (On-Premises) 2020 R2
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/topics/AllocExemption_InventoryDevProps.html
2021-11-27T14:05:53
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
Date: Wed, 14 Oct 2020 12:23:18 +0200 From: Ralf Mardorf <[email protected]> To: [email protected] Subject: Re: A couple of questions about SSDs Message-ID: <20201014122318.56f4ee4e@archlinux> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> On Wed, 14 Oct 2020 10:32:02 +0100, Jon Schneider wrote: >smartctl tool Better use a vendor's tool to do so. The SSD might not be in the smartctl database at all, or it might not provide correct information. On Linux I'm running the vendor's Linux software, to get smart data that most likely is correct and to update the firmware, while the SSDs are in use. Mine are cheap Toshiba SSDs. $ sudo ocz-ssd-utility Not all vendors support FLOSS operating systems, hence I cared for this before I bought my SSDs.
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=109532+0+/usr/local/www/mailindex/archive/2020/freebsd-questions/20201018.freebsd-questions
2021-11-27T14:05:28
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Polywrap vs. JavaScript SDKs tip Polywrap is a substantial improvement over JavaScript SDKs and produces a seamless and intuitive Web3 protocol integration experience for blockchain developers. The table below highlights the contrasts between JavaScript SDKs and Polywrap wrappers as it concerns compatibility & maintainability, ease of use, bundle size, and upgrades & patches.
https://docs.polywrap.io/getting-started/polywrap-vs-javascript-sdks/
2021-11-27T14:58:07
CC-MAIN-2021-49
1637964358189.36
[]
docs.polywrap.io
Upgrade from 2.0.2 to 2.1.0¶ Updating Servers to SecureDrop 2.1.0¶ Servers running Ubuntu 20.04 will be updated to the latest version of SecureDrop automatically within 24 hours of the release. Updating Workstations to SecureDrop 2.1.0¶ Important If you do see the warning “refname ‘2.1.0’ is ambiguous” in the output, we recommend that you contact us immediately at [email protected] (GPG encrypted). Finally, run the following commands: ./securedrop-admin setup ./securedrop-admin tailsconfig Updating Tails¶ Follow the graphical prompts to update to the latest version of the Tails operating system on your Admin and Journalist Workstations. Important Older versions of Tails had problems with automatic updates, which SecureDrop tries to correct automatically. Check the version of Tails on your Admin and Journalist Workstations (Applications ▸ Tails ▸ About Tails). If you are running a version older than Tails 4.23, and did not receive an automatic upgrade prompt after connecting to the Internet, perform a manual update. If this also fails, please don’t hesitate to contact us.
https://docs.securedrop.org/en/latest/upgrade/2.0.2_to_2.1.0.html
2021-11-27T13:48:06
CC-MAIN-2021-49
1637964358189.36
[array(['../_images/securedrop-updater.png', '../_images/securedrop-updater.png'], dtype=object)]
docs.securedrop.org
Enterprise Release 6.8.2 Outdated release! Latest docs are Release 8.7. Install Software To install Trifacta®, please review and complete the following sections in the order listed below. Topics: Install Dependencies without Internet Access, Install for Docker, Install on CentOS and RHEL, Install on Ubuntu, License Key, Install Hadoop Dependencies.
https://docs.trifacta.com/pages/viewpage.action?pageId=148809550
2021-11-27T15:31:21
CC-MAIN-2021-49
1637964358189.36
[]
docs.trifacta.com
This page provides information on the Phoenix Grid Texture. Page Contents Overview The Phoenix Grid Texture can be created from the Material Editor. It loads and exposes a grid channel of a selected Phoenix Simulator as a procedural texture. This texture can then be plugged into materials as color or opacity and applied to surfaces of liquids or meshes in general. The texture can be plugged into the Fire/Smoke Simulator's volumetric options in order to modulate the opacity of fire and smoke or to color the fire and smoke. The main application of this texture is for shading the meshes of simulated liquids which were exported with an RGB grid channel. The technique of mixing colored liquids and rendering their colors is available in the Paints Quick Setup preset. You could also use it as a blending mask, as described in the Milk & Chocolate tutorial. Another use of the Grid Texture is for rendering via an external volumetric Shader such as the V-Ray Environment Fog. See the External Volumetric Shader section on the Tips and Tricks page for more information. Parameters Source node | node – Allows you to specify a Fire Smoke Simulator or a Liquid Simulator. Note that if the Grid Texture is plugged into a Particle Shader's Color Map slot and the Grid Texture's Coordinate Source is Object XYZ, then Liquid Simulator should also be enabled and connected in the Particle Shader, otherwise the Grid Texture wouldn't know how to get mapped because the Particle Shader has no grid box like the Phoenix Simulator. Channel | channel – Specifies the channel retrieved from the Phoenix node: Rendering Fire Color - Returns the resulting color for the Fire, as specified in the Rendering → Volumetric Options → Fire roll-out of the Phoenix FD Simulator. Rendering Smoke Color - Returns the resulting color for the Smoke, as specified in the Rendering → Volumetric Options → Smoke roll-out of the Phoenix FD Simulator. Rendering Smoke Opacity - Returns the resulting Smoke Opacity, as specified in the Rendering → Volumetric Options → Smoke Opacity roll-out of the Phoenix FD Simulator. Channel Speed - Returns the contents of the Speed simulation channel. Speed Grid Channel Output has to be enabled on the Phoenix FD Simulator for this to work. Channel Velocity - Returns the contents of the Velocity simulation channel. Velocity Grid Channel Output has to be enabled on the Phoenix FD Simulator for this to work. Channel RGB - Returns the contents of the RGB simulation channel. RGB Grid Channel Output has to be enabled on the Phoenix FD Simulator for this to work. Channel Temperature/Liquid - Returns the contents of the Temperature/Liquid simulation channel. Temper./Liquid Grid Channel Output has to be enabled on the Phoenix FD Simulator for this to work. Channel Smoke - Returns the contents of the Smoke simulation channel. Smoke Grid Channel Output has to be enabled on the Phoenix FD Simulator for this to work. Channel Fuel - Returns the contents of the Fuel simulation channel. Fuel Grid Channel Output has to be enabled on the Phoenix FD Simulator for this to work Channel Viscosity - Returns the contents of the Viscosity simulation channel. Viscosity Grid Channel Output has to be enabled on the Phoenix FD Simulator for this to work. Tiling mode | tiling – Specifies how to handle sampling outside of the box. Single – The texture will not be tiled and the region outside the box will be empty. Single Clamped – The texture will not be tiled but its edges will be clamped, thus "stretching" the edge pixels. 
Wrap – Tiles the texture by repeating it infinitely. Mirror – Tiles the texture by flipping it back and forth infinitely. Sampler | sampler_type – between this method and the Linear method becomes less noticeable. Color Scale | output_scale – Multiplies the color output values of the Grid Texture. Color Offset | output_offset – Multiplies the color output values of the Grid Texture. The values are first scaled by Color Scale and then the Color Offset is added. Rescale Grid Channel – Rescales the output values of a Grid Channel to a certain range. Rescaling using this helper simply measures the data range of the selected Channel and changes the Color Scale and Color Offset options. This means that if you simulate or load a new cache sequence in the Simulator, you will need to do this rescaling again to make sure the Grid Texture values are still in the Min-Max range. Rendering channels such as the Fire Color and Smoke Color/Opacity cannot be rescaled. Rescaling is especially useful when you are reading channels such as Channel Speed or Channel Temperature which can range up to several hundred or thousand, and you want to use them as color or opacity, so you need to scale them down between 0-1, which would require you to set a very small Color Scale multiplier. Min – Sets the minimum value you want for the Grid Texture output. Max – Sets the maximum value you want for the Grid Texture output. Rescale to Current Frame – Rescales the Grid Channel based on the Channel values only for the current frame. If the Channel changes its range for other timeline frames, the Grid Texture will output values outside the Min-Max range or smaller than that range. Rescale to Entire Sequence – Rescales the channel based on the lowest and highest values of the Channel in the entire cache sequence. This will make sure that the values output by the Grid Texture will not exceed Max or go below Min for any frame of the cache sequence. Skip the Displacement | skip_fine_displ – When enabled, the content is sampled without adding Phoenix displacement. Alpha is the Color's Intensity | alpha_intensity – When enabled, the alpha for the texture will be based on the color's intensity. When disabled the alpha will be solid (1.0).
https://docs.chaosgroup.com/display/PHX3MAX/Grid+Texture+%7C+PhoenixFDGridTex
2019-10-13T22:52:14
CC-MAIN-2019-43
1570986648343.8
[]
docs.chaosgroup.com
Overview In this tutorial we're going to be covering the use of the Batch Render tool. The Batch Render tool is useful when you have multiple scenes that need to be rendered out, but have objects that may change between scenes. With Batch Render you can set up all of your scene tabs, and render each individual scene tab with the press of one button. V-Ray also recognizes items turned off and on in each scene tab (our provided scene illustrates this) so, for example, if you have renderings for a client with different variations of a design to show, you can quickly and easily set up multiple renderings via Sketchups Scene Tabs, and just click Batch Render and walk away to let V-Ray do its thing. First lets review what the Batch Render tab looks like. In your V-Ray Tool set you'll see a button labeled BR short for Batch Render (). By clicking this tool you will prompt the Batch Render function. The Batch Render function can be cancelled the same way cancelling any rendering is done, by closing the Frame Buffer window or using the cancel render button in the V-Ray Progress Window. Now that we know where the Batch Render function is located and what it does, lets try it out. Download the test scene located here and lets get started. This folder contains all the necessary assets needed to run a batch render in sketchup. You can find the final output files located in the Final Output folder, here you can see what the renders will look like based on the content provided. Batch Rendering 1.1. Open the scene named “Batch Scene Start.skp” *note: this sketchup scene has been set up ahead of time, to learn more about Scene tabs and how they work check out this tutorial here. 1.2. Once the Sketchup file is open go ahead and click the Batch Render tool (). 1.3. At this point you’ll be prompted to provide some “Save Output” information. This is normal. 1.4. Go ahead and open your V-Ray Options Editor tool () and select your Output dialogue. There you will have control over your output size and where it will save the rendered file too. 1.5. Click the save output box. 1.6. Create a name for your output file (in my case I used 001 and a PNG File and selected the “Batch Render Tutorial” folder as a save location). You can path your output file to save anywhere you like. *Note: In SketchUp it is important to turn off scene transitions and set the scene delay to 0. You can do so from your SketchUp Model Info window, under the Animation settings 1.7. Now that you have your Output Location setup, click the Batch Render button () and go grab a cup of coffee (or tea) and come back to check your renders periodically to see which have finished. Now that you know how it works, test it out, play around and enjoy!
https://docs.chaosgroup.com/pages/viewpage.action?pageId=5081342
2019-10-13T23:42:11
CC-MAIN-2019-43
1570986648343.8
[]
docs.chaosgroup.com
Click Modes The RadMap supports both single and double mouse clicks. It provides you with a predefined behaviors for them out of the box. The possible values are to be found in the MouseBehavior enumeration: Center - positions the clicked or double clicked point into the center of the map. None - the click or double click does nothing. ZoomPointToCenter - zooms in to the clicked or double clicked point and positions it into the center of the map. ZoomToPoint - zooms in to the clicked or double clicked point. In order to configure the behavior for the single click you have to set the MouseClickMode property. For the double click mode set the MouseDoubleClickMode property. Here is an example: <telerik:RadMap x:Name="radMap" MouseClickMode="Center" MouseDoubleClickMode="ZoomToPoint" / Also you can set these properties to None in order to prevent the users from zooming. Additionally setting the MouseDragMode property to None will disable them from panning. <telerik:RadMap x: private void radMap_MapMouseClick( object sender, MapMouseRoutedEventArgs e ) { //implement logic regarding single click here } private void radMap_MapMouseDoubleClick( object sender, MapMouseRoutedEventArgs e ) { //implement logic regarding double click here } Private Sub radMap_MapMouseClick(sender As Object, e As MapMouseRoutedEventArgs) 'implement logic regarding single click here' End Sub Private Sub radMap_MapMouseDoubleClick(sender As Object, e As MapMouseRoutedEventArgs) 'implement logic regarding double click here' End Sub
https://docs.telerik.com/devtools/wpf/controls/radmap/features/click-modes
2019-10-13T22:13:36
CC-MAIN-2019-43
1570986648343.8
[]
docs.telerik.com
Parameters: *objects (Dataset or DataArray) – Objects to align.
join ({'outer', 'inner', 'left', 'right', 'exact', 'override'}, optional) – Method for joining the indexes of the passed objects along each dimension.
copy (bool, optional) – If copy=True, data in the return values is always copied. If copy=False and reindexing is unnecessary, or can be performed with only slice operations, then the output may share memory with the input. In either case, new xarray objects are always returned.
indexes (dict-like, optional) – Any indexes explicitly provided with the indexes argument should be used in preference to the aligned indexes.
exclude (sequence of str, optional) – Dimensions that must be excluded from alignment.
fill_value (scalar, optional) – Value to use for newly missing values.
Returns: aligned – Tuple of objects with aligned coordinates.
Return type: tuple of DataArray or Dataset
Raises: ValueError – If any dimensions without labels on the arguments have different sizes, or a different size than the size of the aligned dimension labels.
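For concreteness, here is a small usage sketch of xarray.align with made-up data, showing how the join and fill_value arguments behave:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(4), dims="x", coords={"x": [10, 20, 30, 40]})
b = xr.DataArray(np.arange(3) * 10, dims="x", coords={"x": [20, 30, 50]})

# join="inner" keeps only the x labels common to both objects (20 and 30),
# so the aligned arrays share identical indexes and can be combined safely.
a2, b2 = xr.align(a, b, join="inner")
print(a2.x.values)  # [20 30]

# join="outer" instead takes the union of labels and fills gaps with fill_value.
a3, b3 = xr.align(a, b, join="outer", fill_value=-1)
print(b3.values)    # values at x=10 and x=40 are filled with -1
```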
https://xray.readthedocs.io/en/stable/generated/xarray.align.html
2019-10-13T23:16:59
CC-MAIN-2019-43
1570986648343.8
[]
xray.readthedocs.io
This represents tarmaks global flags ClusterAmazon offers Amazon-specific settings for that instance pool Contains the cluster apply flags Contains the cluster destroy flags This contains the cluster specifc operation flags Contains the cluster images build flags Contains the cluster images flags Contains the cluster kubeconfig flags Configure the cluster internal deployment of prometheus Contains the cluster logs flags Contains the cluster plan flags EgressRule parameters for the firewall Contains the environment destroy flags This contains the environment specific operation flags Firewall contains the configuration a user expects to be applied. IngressRule parameters for the firewall Amazon specific settings for that instance pool InstaceSpecManifest defines location and hash for a specific manifest InstaceSpecManifest defines the state and hash of a run manifest Label structure for instancepool node labels Taint structure for instancepool node taints
http://docs.tarmak.io/release-0.6/api-docs.html
2019-10-13T23:05:11
CC-MAIN-2019-43
1570986648343.8
[]
docs.tarmak.io
file.. Some examples: The following constants are available: The following track variables are available: Dynamic Track selection example¶. ?filter=(type=="audio"&&systemBitrate<100000)||(type=="video"&&systemBitrate<800000) Scenario 2: Select the highest audio track and the two highest video tracks. ?filter=(type=="audio"&&systemBitrate>100000)||(type=="video"&&systemBitrate>1300000) Attention The filter expression is passed as a query parameter, e.g. ?filter=, and must be properly escaped (preferably using a standard function from your favourite framework). Note that when testing the URL with a command-line tool like cURL you have to take into account the escaping rules for your command-line as well. So for curl the '&' needs to be escaped in the bash shell: #!/bin/bash curl -v '(type=="audio"%26%26systemBitrate<100000)||(type=="video"%26%26systemBitrate<800000)' TS playout when packaging the content. To do so, it is necessary to order the tracks when packaging. To do this: - Create separate fMP4's that contain all of the stream's track of a specific type (audio, video, subtitles) - When creating these fMP4's, make sure that the track that a player should start playback with is added on the command-line first - Create a server manifest based on these fMP4's like you normally would - For HLS playout, the tracks that were first in order will be signaled as DEFAULT=YESin the HLS Master Playlist For example, using the command-lines will ensure that the tracks from tears-of-steel-avc1-1000k.mp4 and tears-of-steel-aac-128k.mp4 are set to DEFAULT=YES: #! mp4split -o tos_aac-sorted.isma \ tears-of-steel-aac-128k.mp4 \ tears-of-steel-aac-64k.mp4 mp4split -o tears-of-steel-sorted.ism \ tos_avc1-sorted.ismv \ tos_aac-sorted.isma.
http://docs.unified-streaming.com/documentation/vod/playout-control.html
2019-10-13T22:56:56
CC-MAIN-2019-43
1570986648343.8
[]
docs.unified-streaming.com
Enable and Disable the Built-in Administrator Account Applies To: Windows 8, Windows 8.1, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 Log on by using audit mode Use the Local Users and Groups MMC (server versions only) Use an answer file with Windows® System Image Manager (Windows SIM). The following sample answer file shows how to enable the Administrator account, specify an Administrator password, and automatically log on to the system. Note Both the Microsoft-Windows-Shell-Setup\Autologon section and the Microsoft-Windows-Shell-Setup\UserAccounts\AdministratorPassword section are needed for automatic logon in audit mode to work. The auditSystem configuration pass must include both these settings. Disabling the Built-in Administrator Account Use either of the following methods to disable the built-in administrator account: Run the sysprep /generalize command When you run the sysprep /generalize command, the next time that the computer starts, the built-in Administrator account will be disabled. Use the net user command Run the following command to disable the Administrator account: net user administrator /active:no Configuring the Built-in Administrator Password Note In Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008, the default password policy requires a strong password for all user accounts. To configure a weak password, you can use an answer file that includes the Microsoft-Windows-Shell-Setup\UserAccounts\AdministratorPassword setting. You cannot configure a weak password, either manually or by using a script such as the net user command. See Also Concepts Other Resources Windows Deployment Options
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-8.1-and-8/hh825104%28v%3Dwin.10%29
2019-10-13T23:24:58
CC-MAIN-2019-43
1570986648343.8
[]
docs.microsoft.com
Todo Update a lot of the links in here. Use: nova-api service nova-api-metadata service The nova-api-metadata service is generally used when you run in multi-host mode with nova-network installations. For details, see Metadata service in the Compute Administrator Guide. nova-compute service nova-placement-api service nova-scheduler service nova-conductor module For more information, see the conductor section in the Configuration Options.
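As a small illustrative sketch (not part of the original page), the Compute services described above can be listed with the openstacksdk Python library; the cloud name comes from your clouds.yaml and is a placeholder here:

```python
import openstack

# Connect using the credentials defined for the "mycloud" entry in clouds.yaml.
conn = openstack.connect(cloud="mycloud")

# List the nova services (nova-compute, nova-scheduler, nova-conductor, ...)
# along with the host they run on and whether they are currently up.
for service in conn.compute.services():
    print(f"{service.binary:20s} host={service.host:20s} state={service.state}")
```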
https://docs.openstack.org/nova/pike/install/get-started-compute.html
2019-10-13T23:34:32
CC-MAIN-2019-43
1570986648343.8
[]
docs.openstack.org
Detailed Manager Information¶ This page documents all the internals of the Managers in depth and is not intended for the general user nor should be required for setting up and running them. This page is for those who are interested in the inner workings and detailed flow of the how the Manager interacts with the Server, the Adapter, the Scheduler, and what it is abstracting away from the user. Since this is intended to be a deep dive into mechanics, please let us know if something is missing or unclear and we can expand this page as needed.
https://qcfractal.readthedocs.io/en/stable/managers_detailed.html
2019-10-13T22:14:21
CC-MAIN-2019-43
1570986648343.8
[]
qcfractal.readthedocs.io
Therma-Tru Doors Door System Maintenance All Therma-Tru door systems and associated components should be inspected and checked at least once a year for the following conditions: fading of door finishes, weatherstrip seal inadequacies, door bottom gasket or sill gasket wear, and vinyl threshold or oak riser splitting or cracking. Upon inspection if any of these components fail to function, they should be repaired or replaced as follows. Door Finishes Clearcoats and Stains All exterior finishes are affected by exposure and weathering from the sun, moisture and air pollutants. A simple application of a maintenance coat of topcoat will renew the protection over the stained surface on Classic-Craft and Fiber-Classic door slabs. Before top coat fails, reapply a fresh coat. First, clean it with a house hold detergent and water. Rinse and let dry. Reapply the topcoat approximately every 3-5 years or when the gloss starts to fade. Paint on Classic-Craft or Fiber-Classic Doors For fading, cracking, splitting, etc., of painted Classic-Craft or Fiber-Classic doors, stripping and refinishing may be required. Paint on Smooth-Star or Steel Doors For cracking, splitting or deteriorating paint finishes on steel doors, lightly sand surface of door and touch up to match overall finish. Note for Outswing Door Systems Swing-out doors must have all the edges -sides, top and bottom - finished. Inspect and maintain these edges regularly as all other surfaces. Weatherstripping If the weatherstripping fail to perform (i.e, not sealing the door system properly, cracking, tearing, etc.) the weatherstrip needs to be replaced. remove the existing weatherstripping and replace. Door Bottom and Sill Gaskets If the door bottom gasket fails to perform (i.e., splitting, cracking, pulling away from door slab, etc.) the door bottom needs to be replaced with a new door bottom. If the sill gaskets on outswing sills fail to perform (i.e., splitting, cracking, etc.) the sill gasket needs to be replaced. Pull the existing gasket and replace the gasketing for proper functioning of the sill. Risers for Adjustable Sills If the riser for an adjustable sill fails to perform (i.e., splitting, cracking, etc.), the riser needs to be replaced. Remove the existing riser and replace. Vinyl Thresholds If the vinyl threshold fails to perform (i.e., splitting, cracking, etc.), the vinyl threshold needs to be replaced. Remove the existing threshold and replace. Corner Seal Pads If corner seal pads are torn or missing, replace them. Sealing/Resealing Areas If a caulk seal fails to perform (i.e. waterleakage), remove existing seal and reseal area. Stripping to Refinish - 1 Choose a standard paint stripper. Paint or stain and topcoat can be removed with most methylene chloride based strippers. Follow the paint stripper manufacturer's directions and cautions for correct use. Check with the product manufacturer of your fiberglass door/stainable polyurethane product for details. - 2 Apply the stripper, working on the small areas at a time Tip:For a fiberglass door, apply the stripper to the (A) glass frame first and (B) the raised panels second, before moving on to the rest of the door - 3 - Remove the stripper within 2-3 minutes. If your fiberglass door/stainable polyurethane product has a facory-applied primer, it might be removed ith long exposure to paint strippers. Tip:Use a nylong bristle brush for easier removal of paint and stain from the wood-grain texture. For fiberglass doors, grade 000 steel wood can also be used. 
- 4 - Wash off the remaining stripper. After the stain or paint has been removed, clean with mild soap and water to completely remove any stripper residue. Rinse well and wipe dry. Make sure the product is completely clean and dry before refinishing.
https://docs.grandbanksbp.com/article/93-therma-tru-doors
2019-10-13T22:30:30
CC-MAIN-2019-43
1570986648343.8
[]
docs.grandbanksbp.com
Setting up multilingual websites Kentico websites can host content in multiple languages (cultures). Multilingual websites have separate versions of pages for each language. The system allows you to automatically offer culture versions of the website to visitors based on various settings. Users can also switch between individual languages manually using a dedicated web part. You can find a list of available cultures in the Localization application on the Cultures tab. All major cultures are provided by default. You can assign cultures to websites and then translate the content to the respective languages. Multilingual user and administration interface The administration interface of Kentico can also be multilingual. To learn more, refer to the Setting up a multilingual user interface chapter. Setting up multiple cultures for websites To add more cultures to a website: - Open the Sites application. - Edit () your site. - Switch to the Cultures tab. - Click Add cultures. - In the selection dialog, choose the cultures you wish to use. - Click Select. The cultures are now assigned to the website and editors can start translating pages. Open the Pages application. You can now switch between languages using the selector below the content tree. To learn how to manage pages for different cultures, continue with Editing the content of multilingual websites. Setting the default content culture When you create a new website, you specify the default culture in the New site wizard. If you create a website based on a web template, the site uses the culture of the given web template. All default Kentico web templates use English - United States as their default culture. You can modify the default culture settings of a website in: - Settings -> Content -> Default content culture - Sites -> Edit a site -> Default content culture The two settings are interlinked. Values configured in one section are reflected in the other location.
https://docs.kentico.com/k9/multilingual-websites/setting-up-multilingual-websites
2019-10-13T23:47:59
CC-MAIN-2019-43
1570986648343.8
[]
docs.kentico.com
Selection.WordOpenXML property (Word) Returns a String that represents the XML contained within the selection in the Microsoft Word Open XML format. Read-only. Syntax expression. WordOpenXML expression An expression that returns a Selection object. Remarks This property returns only the XML in the document that is needed to represent the specified selection. See also Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
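The property itself is used from VBA or from any COM client. As a small illustrative sketch (assuming Windows, an installed copy of Word, and the pywin32 package; not part of the original page), the same read-only property can be accessed through COM automation from Python; the document path is a placeholder:

```python
import win32com.client

# Attach to (or start) a Word instance and open a document.
word = win32com.client.Dispatch("Word.Application")
word.Visible = True
doc = word.Documents.Open(r"C:\temp\example.docx")

# Select the first paragraph, then read the Word Open XML for just that selection.
doc.Paragraphs(1).Range.Select()
xml = word.Selection.WordOpenXML  # read-only String in the Word Open XML format

print(xml[:200])  # print the beginning of the returned XML package
```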
https://docs.microsoft.com/en-us/office/vba/api/word.selection.wordopenxml
2019-10-13T23:39:41
CC-MAIN-2019-43
1570986648343.8
[]
docs.microsoft.com
Working with Database Projects. Important If you have the older type of database projects with the .dbp extension, you must upgrade them to the new type of database project. .dbp projects are no longer supported in Visual Studio. Common High-Level Tasks See Also Concepts Creating and Managing Databases and Data-tier Applications in Visual Studio
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/xee70aty(v%3Dvs.100)
2019-10-14T00:22:27
CC-MAIN-2019-43
1570986648343.8
[]
docs.microsoft.com
Merging SubTools as DynaMesh Combining objects when updating a DynaMesh can also be done through the SubTool sub-palette, in a similar way to the Remesh All function. When doing this, DynaMesh will use the SubTool operator icons found in every SubTool. For more information on Remesh All and its operators see here. The currently selected SubTool must be in DynaMesh mode. The SubTool to be merged must also be assigned the white polygroup (Group As DynaMesn Sub). If both of these are true, performing a Tool >> SubTool >> Merge Down operation will subtract the merged mesh from the current one. Follow along with these steps to use any SubTool as a DynaMesh subtractive: 1. Make sure that the DynaMesh SubTool is above the SubTool you wish to merge with. 2. The SubTool that is immediately below your selected DynaMesh SubTool must have the Difference icon selected. This is the second icon in the SubTool icons. 3. Now select the second SubTool, and in the Tool >> PolyGroups sub-palette click the Group As DynaMesh Sub button. This will convert the SubTool that will be used as a subtraction into a white polygroup. When using DynaMesh a white polygroup is an indicator for ZBrush to use that mesh as a subtraction. 4. Select the DynaMesh SubTool (the sphere in this example), and click on the MergeDown button found in the Tool >> SubTool sub-palette. 5. Hold CTRL and Click+Drag anywhere in the open document to perform a DynaMesh re-mesh. ZBrush will use the Cylinder to create a hole through the sphere. When using the Merge Down command for subtraction, make sure to have the DynaMesh selected. If you instead have the subtractive mesh selected, ZBrush will see this as an addition and combine the SubTools instead of subtracting. Note: The function Merge Down cannot be undone. If you are not sure of the result of your operation, you can duplicate the SubTools as a backup and hide them. Convert Inserted Meshes from Positive to Negative Previously, if you hadn’t pressed the ALT key while inserting a mesh, it wasn’t possible to use the new mesh as a negative one and subtract it from the current DynaMesh object. The new Tool >> Polygroups >> Group as Dynamesh Sub function will also allow an inserted mesh to be converted to a subtraction. Please refer to the Polygroup chapter of this documentation for more information on this feature.
http://docs.pixologic.com/user-guide/3d-modeling/modeling-basics/creating-meshes/dynamesh/with-subtools/
2019-10-13T22:51:16
CC-MAIN-2019-43
1570986648343.8
[array(['http://docs.pixologic.com/wp-content/uploads/2013/01/DynaMesh-SubTool01.jpg', 'Merging SubTools 01'], dtype=object) array(['http://docs.pixologic.com/wp-content/uploads/2013/01/DynaMesh-SubTool02.jpg', 'Merging SubTools 02'], dtype=object) ]
docs.pixologic.com
Lightning Scheduler: Optimize Customer Appointment Scheduling Quickly schedule customer appointments with a group of professionals—like a financial advisor, CPA, and attorney—with multi-resource scheduling in Lightning Scheduler. Concurrent scheduling lets you book multiple customer appointments during the same time slot, maximizing calendar availability. Where: This change applies to Lightning Scheduler in Enterprise, Performance, and Unlimited editions.
http://releasenotes.docs.salesforce.com/en-us/winter20/release-notes/rn_fsc_lt_scheduler.htm?edition=&impact=
2019-10-13T22:21:43
CC-MAIN-2019-43
1570986648343.8
[]
releasenotes.docs.salesforce.com
This option allows users to create Businesses and admins to create Business Categories and Business Plans on your website. Users can also claim a Business created by another user. To do this, follow the steps below. Watch the video below: - Log in to your “Admincp” - Click on “Manage Features” - Click on “Business Manager” - Click on “Add Category” for a category and “Add Plan” for a Business Plan. - Fill in the “Form” and click on “Add Category” to save your added category. - Fill in the “Form” to add a “Business Plan” and click on “Add Plan” to save your added plan. Thanks for Reading This
http://docs.crea8social.com/docs/site-management/how-to-add-category-and-business-plan/
2019-10-13T23:47:56
CC-MAIN-2019-43
1570986648343.8
[array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-17.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-6-30.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-10-24.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-11-23.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-12-21.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-13-21.png', None], dtype=object) ]
docs.crea8social.com
Appendix A: Custom Recipes This appendix describes how to use custom recipes in Driverless AI. You’re welcome to create your own recipes, or you can select from a number of recipes available in the repository. Notes: - Recipes only need to be added once. After a recipe is added to an experiment, that recipe will then be available for all future experiments. - In most cases, MOJOs will not be available for custom recipes (the Python Scoring Pipeline, however, features full support for custom recipes). Unless the recipe is simple, creating the MOJO is only possible with additional MOJO runtime support. Contact [email protected] for more information about creating MOJOs for custom recipes. Additional Resources - Custom Recipes FAQ: For answers to common questions about custom recipes. - How to Write a Recipe: A guide for writing your own recipes. - Model Template: A template for creating your own Model recipe. - Scorer Template: A template for creating your own Scorer recipe. - Transformer Template: A template for creating your own Transformer recipe.
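For orientation, here is a rough sketch of what a minimal transformer recipe looks like, modeled on examples in the public driverlessai-recipes repository. It only runs inside Driverless AI (the h2oaicore package is not available elsewhere), and the base class, required attributes, and method signatures below are assumptions that should be verified against the Transformer Template linked above for your version.

```python
# Illustrative transformer recipe sketch (not an official template).
# Assumes the CustomTransformer base class and signatures used in the public
# driverlessai-recipes examples; verify against the Transformer Template above.
import datatable as dt
import numpy as np
from h2oaicore.transformer_utils import CustomTransformer


class ExampleLogTransformer(CustomTransformer):
    _numeric_output = True

    @staticmethod
    def get_default_properties():
        # operate on a single numeric column
        return dict(col_type="numeric", min_cols=1, max_cols=1, relative_importance=1)

    def fit_transform(self, X: dt.Frame, y: np.array = None):
        return self.transform(X)

    def transform(self, X: dt.Frame):
        # log1p of the input column as a simple engineered feature
        return np.log1p(X.to_numpy().astype(float))
```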
http://docs.h2o.ai/driverless-ai/latest-stable/docs/userguide/custom-recipes.html
2019-10-13T23:34:42
CC-MAIN-2019-43
1570986648343.8
[]
docs.h2o.ai
Queens Series Release Notes. Versions covered: 12.4.0-7, 12.3.0, 12.2.0. New Features: Adds the use_journal option for configuring oslo.log. This will enable passing the logs to journald. Heat has an additional configuration option for the plugin_dirs parameter. This parameter provides a list of directories to search for plug-ins. This change allows configuration of the plugin_dirs parameter in the heat.conf file.
https://docs.openstack.org/releasenotes/puppet-heat/queens.html
2019-10-13T23:33:50
CC-MAIN-2019-43
1570986648343.8
[]
docs.openstack.org
Deploy and run Splunk Enterprise inside Docker containers You deploy Splunk Enterprise inside a Docker container by downloading and launching the required Splunk Enterprise image in Docker. The image is an executable package that includes everything you need to run Splunk Enterprise. A container is a runtime instance of an image. This documentation applies to the following Splunk Enterprise versions: 7.2.8, 7.3.0, 7.3.1, 7.3.2.
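As a sketch of the same idea driven from code rather than the docker CLI, the snippet below launches the public splunk/splunk image with the Docker SDK for Python. The image name, published port, and the SPLUNK_START_ARGS / SPLUNK_PASSWORD environment variables are assumptions based on that image's public documentation; verify them for the version you deploy.

```python
# Minimal sketch (not from the Splunk docs themselves): launching the public
# splunk/splunk image with the Docker SDK for Python. Image name, port and
# environment variable names are assumptions to verify for your version.
import docker

client = docker.from_env()
container = client.containers.run(
    "splunk/splunk:latest",
    detach=True,
    name="splunk-test",
    ports={"8000/tcp": 8000},                # Splunk Web
    environment={
        "SPLUNK_START_ARGS": "--accept-license",
        "SPLUNK_PASSWORD": "changeme123!",   # placeholder admin password
    },
)
print(container.status)
```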
https://docs.splunk.com/Documentation/Splunk/7.2.4/Installation/DeployandrunSplunkEnterpriseinsideDockercontainers
2019-10-13T22:49:01
CC-MAIN-2019-43
1570986648343.8
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
A registry is a content store and a metadata repository for various artifacts such as services, WSDLs and configuration files. These artifacts are keyed by unique paths where a path is similar to a Unix file path. In WSO2 products, all configurations pertaining to modules, logging, security, data sources and other service groups are stored in the registry by default. The registry kernel of WSO2 provides the basic registry and repository functionality. WSO2 products use the services provided by the registry kernel to establish their own registry spaces, which are utilized for storing data and persisting configuration. Here are some of the features provided by the WSO2 Registry interface: - Provides the facility to organize resources into collections. - Keeps multiple versions of resources. - Manages social aspects such as rating of resources. - Provides AtomPub interfaces to publish, view and manage resources from remote or non-Java clients. The Registry space of any WSO2 product contains three major partitions: - Local Repository : Used to store configuration and runtime data that is local to the server. This partition is not to be shared with multiple servers. Mount point is /_system/local - Configuration Repository : Used to store product-specific configurations. This partition can be shared across multiple instances of the same product (e.g., sharing ESB configurations across an ESB cluster). Mount point is /_system/config. - Governance Repository : Used to store configuration and data that are shared across the whole platform. This typically includes services, service descriptions, endpoints or datasources. Mount point of this registry is /_system/governance. You can browse the contents of the registry using the product's management console. This section provides the following information: - Managing the Registry - Searching the Registry - Using Remote Registry Instances for the Registry Partitions
https://docs.wso2.com/display/ADMIN44x/Working+with+the+Registry
2019-10-13T23:17:47
CC-MAIN-2019-43
1570986648343.8
[]
docs.wso2.com
.NET Core command-line interface (CLI) tools The .NET Core command-line interface (CLI) is a new cross-platform toolchain for developing .NET applications. The CLI is a foundation upon which higher-level tools, such as Integrated Development Environments (IDEs), editors, and build orchestrators, can rest. Installation Either use the native installers or use the installation shell scripts: - The native installers are primarily used on developer's machines and use each supported platform's native install mechanism, for instance, DEB packages on Ubuntu or MSI bundles on Windows. These installers install and configure the environment for immediate use by the developer but require administrative privileges on the machine. You can view the installation instructions in the .NET Core installation guide. - Shell scripts are primarily used for setting up build servers or when you wish to install the tools without administrative privileges. Install scripts don't install prerequisites on the machine, which must be installed manually. For more information, see the install script reference topic. For information on how to set up CLI on your continuous integration (CI) build server, see Using .NET Core SDK and tools in Continuous Integration (CI). By default, the CLI installs in a side-by-side (SxS) manner, so multiple versions of the CLI tools can coexist on a single machine. Determining which version is used on a machine where multiple versions are installed is explained in more detail in the Driver section. CLI commands The following commands are installed by default: Basic commands Project modification commands Advanced commands The CLI adopts an extensibility model that allows you to specify additional tools for your projects. For more information, see the .NET Core CLI extensibility model topic. Command structure CLI command structure consists of the driver ("dotnet"), the command (or "verb"), and possibly command arguments and options. You see this pattern in most CLI operations, such as creating a new console app and running it from the command line as the following commands show when executed from a directory named my_app: dotnet new console dotnet build --output /build_output dotnet /build_output/my_app.dll Driver The driver is named dotnet and has two responsibilities, either running a framework-dependent app or executing a command. The only time dotnet is used without a command is when it's used to start an application. To run a framework-dependent app, specify the app after the driver, for example, dotnet /path/to/my_app.dll. When executing the command from the folder where the app's DLL resides, simply execute dotnet my_app.dll. When you supply a command to the driver, dotnet.exe starts the CLI command execution process. First, the driver determines the version of the SDK to use. If the version isn't specified in the command options, the driver uses the latest version available. To specify a version other than the latest installed version, use the --fx-version <VERSION> option (see the dotnet command reference). Once the SDK version is determined, the driver executes the command. Command ("verb") The command (or "verb") is simply a command that performs an action. For example, dotnet build builds your code. dotnet publish publishes your code. The commands are implemented as a console application using a dotnet {verb} convention. Arguments The arguments you pass on the command line are the arguments to the command invoked. 
For example when you execute dotnet publish my_app.csproj, the my_app.csproj argument indicates the project to publish and is passed to the publish command. Options The options you pass on the command line are the options to the command invoked. For example when you execute dotnet publish --output /build_output, the --output option and its value are passed to the publish command. Migration from project.json If you used Preview 2 tooling to produce project.json-based projects, consult the dotnet migrate topic for information on migrating your project to MSBuild/.csproj for use with release tooling. For .NET Core projects created prior to the release of Preview 2 tooling, either manually update the project following the guidance in Migrating from DNX to .NET Core CLI (project.json) and then use dotnet migrate or directly upgrade your projects.
https://docs.microsoft.com/en-us/dotnet/core/tools/
2018-12-10T00:38:07
CC-MAIN-2018-51
1544376823228.36
[]
docs.microsoft.com
Locking Files from Your Laptop's File System You can lock files and folders on your laptop from the Edge options on your laptop's file system. Before You Begin Before you can lock files on your laptop, you must have enabled Data Loss Prevention (DLP) on your laptop. See Enable Data Loss Prevention. We recommend that you create a DLP passkey before you lock items on your laptop. If you lock items before creating a DLP passkey, you can only unlock the files when your laptop has connectivity to the CommServe host. See Creating a Data Loss Prevention Passkey. Procedure - On your laptop, open your file system and go to the file or folder you want to lock. - Right-click the file or folder, point to Edge, and click Lock. Related Links Unlocking Files and Folders on Your Laptop Configuring DLP Settings from the Web Console Viewing and Removing DLP Authorized Users
http://docs.snapprotect.com/netapp/v11/article?p=features/dlp/end_user/locking_files_from_file_system.htm
2018-12-10T00:16:15
CC-MAIN-2018-51
1544376823228.36
[]
docs.snapprotect.com
Here you can find the keys to success when setting up your Device Magic Dashboard. Here you can find helpful information about your account and how to adjust aspects of your Dashboard to customize your experience. Here you will find information about our Form Builder, including information about the different form fields you can use as well as how to build different types of questions... Destinations are how you set up where you want your data to go once you submit a form. These integrations allow two-way data flow between Device Magic and 3rd party software. Resources are files you can upload to the Device Magic Dashboard and use in your forms. An example would be using an Excel file as options for a Select List question... Learn more about the app and how to navigate and use it. Here you can learn more about how to pre-populate answers/data in existing forms and send them to a member or members of your team. Here you can find information about how to set up a two-way data flow between Device Magic and an outside system. Here you will find code samples which will help when building a custom integration utilizing our open REST API. Here you can learn about which operating systems Device Magic operates on.
https://docs.devicemagic.com/
2018-12-09T23:40:24
CC-MAIN-2018-51
1544376823228.36
[]
docs.devicemagic.com
Description / Features The plugin enables analysis of Groovy projects within Sonar. It leverages CodeNarc for coding rules violations, Gmetrics for cyclomatic complexity and Cobertura for code coverage. Installation - Install the Groovy plugin through the Update Center or download it into the SONAR_HOME/extensions/plugins directory - Restart the Sonar server Usage Run a Sonar Analysis with the Sonar Runner (Recommended Way) To launch a Sonar analysis of your Groovy project, use the Sonar Runner. A sample project is available on github that can be browsed or downloaded: /projects/languages/groovy/groovy-sonar-runner. Known Limitations The source directory must be added to the pom.xml, even if the project is built with Maven. This comes from the fact that Sonar does not call the gmaven plugin. Change Log Release 0.6 (8 issues) Release 0.5 (3 issues) Release 0.4 (3 issues) Release 0.3 (11 issues) Release 0.2 (5 issues)
http://docs.codehaus.org/pages/viewpage.action?pageId=230397868
2014-09-16T05:10:50
CC-MAIN-2014-41
1410657113000.87
[array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) ]
docs.codehaus.org
Deletes faces from a collection. You specify a collection ID and an array of face IDs to remove from the collection. This operation requires permissions to perform the rekognition:DeleteFaces action. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. delete-faces --collection-id <value> --face-ids <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --collection-id (string) Collection from which to remove the specific faces. --face-ids (list) An array of face IDs to delete. Example: To delete faces from a collection. The following delete-faces command deletes the specified face from a collection. aws rekognition delete-faces \ --collection-id MyCollection --face-ids '["0040279c-0178-436e-b70a-e61b074e96b0"]' Output: { "DeletedFaces": [ "0040279c-0178-436e-b70a-e61b074e96b0" ] } For more information, see Deleting Faces from a Collection in the Amazon Rekognition Developer Guide.
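The same call can be made from the AWS SDK for Python; the sketch below mirrors the CLI example above (the collection ID and face ID are reused from it) and assumes credentials and region are already configured in your environment.

```python
# DeleteFaces via boto3, mirroring the CLI example above.
import boto3

rekognition = boto3.client("rekognition")
response = rekognition.delete_faces(
    CollectionId="MyCollection",
    FaceIds=["0040279c-0178-436e-b70a-e61b074e96b0"],
)
print(response["DeletedFaces"])  # the IDs that were removed
```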
https://docs.aws.amazon.com/cli/latest/reference/rekognition/delete-faces.html
2021-05-06T14:13:18
CC-MAIN-2021-21
1620243988753.97
[]
docs.aws.amazon.com
Theme Editor - Header Builder - Category Page Builder - Product Page Builder Theme Editor is a module that comes with every Argento theme and allows you to customize its styles, layout, etc. Please check the following pages with a description of available customizable options and customization examples for every Argento design: - Argento Essence - Argento Flat - Argento Pure2 - Argento Mall - Argento Luxury - Argento Stripes - Argento Force - Argento Home Header Builder With Header Builder, you can replace the static header with a fully customizable one. Features: - move header blocks between three rows and nine columns with an easy-to-use drag-and-drop interface - remove blocks from the header - set column settings and assign CSS classes to it - preview the header layout before applying it on the frontend Blocks available in header Preview After you have moved blocks or changed column settings in the layout, press the Preview Header button to see how your changes will look on the frontend. You can resize the preview by dragging its corner to test the layout at different window sizes. If you like the new layout, do not forget to save the config and clear the cache to apply it on the frontend. Category Page Builder Category Page Builder is a set of options with a preview feature to modify the look of the category page. It is available for every theme in the Argento package. The general layout of the category page is two columns with a left sidebar. The left sidebar contains category filters and some other blocks (e.g. wishlist). Layout - dropdown to change page layout. Values: - theme defined; - one column; - two columns (left sidebar); - two columns (right sidebar); - three columns. Content width - dropdown to change width of main content. Values: - full width; - limited width. Max width - available only when limited width is selected. Sets the max allowed width for main content. Description position - dropdown to change category description position. Values: - theme defined; - after products list into main column; - after all columns before footer; - before products list into main column; - before all columns after title. Layered Navigation position - dropdown to change layered navigation position. Values: - theme defined; - before products list into main column; - sidebar. Product list mode - dropdown to set the look of the product list. Values: - Grid Only - Formats the list as a grid of rows and columns. Each product appears in a single cell of the grid. - List Only - Formats the list with each product on a separate row. - Grid (default / List) - By default, products appear in Grid view and can be toggled to List view. - List (default / Grid) - By default, products appear in List view and can be toggled to Grid view. Grid Mode In this subsection you can determine the number of products displayed in grid view and set the number of columns for the grid. List Mode List mode for the category page is not very common, but it can still give your store a fresh look when it is configured properly. Product Page Builder Product Page Builder is a set of options with a preview feature to modify the look of the product page. It is available for every theme in the Argento package. The Pure2 and Flat Argento themes have a product page with a two-column, right-sidebar layout. You can change it here. Other valuable options are the content width, the width of the image block, and the width of the block with main product info. You can get a pretty fresh look for the product page when you set the image and main product info width to 100%. Image This subsection allows you to change the position of image thumbnails. 
It duplicates the config of the Swissup Lightbox Pro module (check the store view level config if you see no changes on the storefront). Values: - Theme defined. - Horizontal. - Vertical. - Hidden. To Cart Form Here you can change the “Add to cart form” position on the product page. You can move the form under the product image, similar to the Argento Stripes design, or move it back to its normal position. Values: - Theme defined. - Product Info Main. - Product Media (bottom). Tabs This subsection is powered by the Swissup Easytabs module. You can change the tabs layout and tabs position. Possible tabs layouts: - Collapsed tabs (traditional layout). - Expanded tabs. - Accordion. Possible tabs positions: - Theme Defined. - Main Content - the most common position for tabs (under the product image and “add to cart” form blocks). - Product Info Main (bottom) - like the tabs on the product page of the Argento Luxury design, on the right side under the “add to cart” form.
https://docs.swissuplabs.com/m2/argento/customization/theme-editor/
2021-05-06T12:07:04
CC-MAIN-2021-21
1620243988753.97
[array(['/images/m2/argento/customization/theme-editor/header-builder.png', 'Header Builder Configuration'], dtype=object) array(['/images/m2/argento/customization/theme-editor/preview.png', 'Header Builder Preview'], dtype=object) array(['/images/m2/argento/customization/theme-editor/category/config-with-instant-preview.gif', 'Category page config with instant preview'], dtype=object) array(['/images/m2/argento/customization/theme-editor/category/config-grid.png', 'Category page grid config'], dtype=object) array(['/images/m2/argento/customization/theme-editor/category/config-list.png', 'Category page list config'], dtype=object) array(['/images/m2/argento/customization/theme-editor/product/config.png', 'Product page config'], dtype=object) array(['/images/m2/argento/customization/theme-editor/product/config-image.png', 'Product image config'], dtype=object) array(['/images/m2/argento/customization/theme-editor/product/config-to-cart-form.png', 'Product add to cart form config'], dtype=object) array(['/images/m2/argento/customization/theme-editor/product/config-tabs.png', 'Product tabs config'], dtype=object) ]
docs.swissuplabs.com
Creating a token without payment This case corresponds to a simple token creation. The merchant website transmits buyer details to the payment gateway, in particular the e-mail address, which is mandatory. - The buyer verifies the information displayed on the payment page. - He or she clicks on the payment method that will be registered. The payment page displays the buyer details once again and prompts the buyer to enter the banking details. - The buyer's token is created, which they can later use for another financial operation. The processing of a token creation request without payment results in the creation of a VERIFICATION type transaction.
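To make the flow above concrete, here is a hedged sketch of the kind of form-field set a merchant site might build for a token creation without payment. Only the mandatory buyer e-mail comes from the text above; every other field name and value (including the REGISTER page action and the placeholder identifiers) is an assumption to verify against the hosted payment page reference, and the mandatory signature field is deliberately left out.

```python
# Illustrative only: hypothetical form fields for a token creation without payment.
# Field names follow the vads_ convention used by this payment gateway; all values
# below are placeholders or assumptions except the buyer e-mail, which the text
# above states is mandatory. The real form must also carry a signature computed
# as described in the platform's form-payment reference (not shown here).
form_fields = {
    "vads_page_action": "REGISTER",            # assumed action for token-only requests
    "vads_action_mode": "INTERACTIVE",         # assumed
    "vads_ctx_mode": "TEST",                   # assumed test/production switch
    "vads_site_id": "12345678",                # placeholder shop identifier
    "vads_trans_date": "20210506120000",       # placeholder UTC timestamp
    "vads_cust_email": "buyer@example.com",    # buyer e-mail (mandatory per the text)
    "vads_url_return": "https://shop.example.com/return",  # placeholder return URL
}

# The merchant page would render these as hidden <input> fields in a POST form
# targeting the payment page URL given in the gateway documentation.
for name, value in form_fields.items():
    print(f'<input type="hidden" name="{name}" value="{value}" />')
```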
https://docs.lyra.com/en/collect/form-payment/subscription-token/1creating-a-token-without-payment.html
2021-05-06T13:07:54
CC-MAIN-2021-21
1620243988753.97
[]
docs.lyra.com
See Modifying Configuration Properties Using Cloudera Manager. - Click Save Changes to commit the changes. - Restart the role. - Restart the service. Expose HBase Metrics to Ganglia Using the Command Line - Edit /etc/hbase/conf/hadoop-metrics2-hbase.properties on the master or RegionServers you want to monitor, and add the following properties, substituting the server information with your own: hbase.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31 hbase.sink.ganglia.servers=<Ganglia server>:<port> hbase.sink.ganglia.period=10 - Restart the master or RegionServer.
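As a small convenience sketch for the command-line steps above, the snippet below appends the three quoted sink properties to the metrics file on a host. The file path and property values are the ones given in the procedure; the Ganglia host and port are placeholders.

```python
# Append the Ganglia sink properties quoted above to the HBase metrics2 file.
PROPS_FILE = "/etc/hbase/conf/hadoop-metrics2-hbase.properties"
GANGLIA_SERVER = "ganglia.example.com:8649"   # placeholder for <Ganglia server>:<port>

lines = [
    "hbase.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31",
    f"hbase.sink.ganglia.servers={GANGLIA_SERVER}",
    "hbase.sink.ganglia.period=10",
]

with open(PROPS_FILE, "a") as f:
    f.write("\n" + "\n".join(lines) + "\n")
# Remember to restart the Master or RegionServer afterwards, as described above.
```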
https://docs.cloudera.com/documentation/enterprise/5-13-x/topics/admin_hbase_ganglia.html
2021-05-06T14:22:08
CC-MAIN-2021-21
1620243988753.97
[]
docs.cloudera.com
Managing timeouts Payment session concept A "payment session" is the time spent by a buyer on the payment page. The payment session begins as soon as the payment gateway receives the payment form. The payment session delay is 10 minutes (except for certain payment methods). This delay is: - sufficient to enable each buyer to make his or her payment - a firm deadline: it is not reset by every action of the user - non-modifiable: it is fixed by the payment gateway because of technical constraints. After this delay, the payment session times out and the session data is purged. Expiration of the payment session In some cases the payment session will expire before the buyer has completed the payment. Most frequent cases: - Once redirected to the payment page, the buyer realizes, for example, that it is time to go to lunch. An hour later, the buyer decides to continue his or her payment and clicks on the logo corresponding to his or her payment method. The buyer's payment session has already expired, so the payment gateway displays an error message indicating that the buyer was disconnected due to an extended period of inactivity. If no return URL has been configured in the merchant's Expert Back Office and no vads_url_return field is transmitted in the payment form, the buyer cannot be redirected back to the merchant website. - Once redirected to the payment page, the buyer closes the browser (by mistake or because he or she no longer wants to make the payment). Notification in case of session expiration It is possible to notify the merchant website in case of expiration of the payment session. To do so, the merchant must configure and enable the "Instant Payment Notification URL on cancellation" notification rule (see chapter Setting up notifications in case of abandoned or canceled payments).
https://docs.lyra.com/en/collect/form-payment/subscription-token/managing-timeouts.html
2021-05-06T12:08:13
CC-MAIN-2021-21
1620243988753.97
[]
docs.lyra.com
This documentation applies to the following versions of Splunk® Enterprise: 8.1.0, 8.1.1, 8.1.2, 8.1.3.
https://docs.splunk.com/Documentation/Splunk/8.1.3/Indexer/ConfigureSmartStore
2021-05-06T13:18:23
CC-MAIN-2021-21
1620243988753.97
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Configuration There are three options available: - Enabled: Disable or enable module functionality. - Api Key: Google Maps API Key. See Get API Key. - Street Number Placement: Ability to place the street number at the start/end of Street Line 1, or place it directly into Street Line 2. You may set these options at global or store view level depending on your needs. Next Up: Get API Key
https://docs.swissuplabs.com/m1/extensions/address-autocomplete/configuration/
2021-05-06T12:38:19
CC-MAIN-2021-21
1620243988753.97
[]
docs.swissuplabs.com
FeatureCollection A FeatureCollection is a collection of Features similar to a JDBC ResultSet. Overview FeatureCollection is similar to a Java Collection<Feature>. The crucial difference is the requirement to close each FeatureIterator after use in order to prevent memory and connection leaks. In addition to the above key requirement, FeatureCollection provides methods to review the FeatureType of the members, ask for the bounds (rather than just the size) and so on. With this in mind: FeatureCollection is method compatible with java.util.Collection where possible; each Iterator needs to be closed. As provided: try (SimpleFeatureIterator iterator = featureCollection.features()) { while (iterator.hasNext()) { SimpleFeature feature = iterator.next(); // process feature } } All the content is of the same FeatureType, as indicated by: FeatureType type = featureCollection.getSchema(); We cannot support the Java ‘for each’ loop syntax, as we need to be sure to close our iterator(). We can support the Java try-with-resource syntax: try (SimpleFeatureIterator iterator = featureCollection.features()){ while( iterator.hasNext() ){ SimpleFeature feature = iterator.next(); ... } } FeatureCollection The interface provides the following methods: public interface FeatureCollection<T extends FeatureType, F extends Feature> { // feature access - close when done! FeatureIterator<F> features() // feature access without the loop void accepts(FeatureVisitor, ProgressListener); T getSchema(); String getID(); // sub query FeatureCollection<T,F> subCollection(Filter); FeatureCollection<T,F> sort(SortBy); // summary information ReferencedEnvelope getBounds() boolean isEmpty() int size() boolean contains(Object) boolean containsAll(Collection<?>) // convert to array Object[] toArray() <O> O[] toArray(O[]) } Streaming Results A FeatureCollection is not an in-memory snapshot of your data (as you might expect); we work with the assumption that GIS data is larger than you can fit into memory. Most implementations of FeatureCollection provide a memory footprint close to zero; the data is loaded each time you access it. Please note that you should not treat a FeatureCollection as a normal in-memory Java collection - these are heavyweight objects and we must ask you to close any iterators you open: FeatureIterator<SimpleFeature> iterator = featureCollection.features(); try { while( iterator.hasNext() ){ SimpleFeature feature = iterator.next(); ... } } finally { iterator.close(); } We ask that you treat interaction with FeatureCollection like a ResultSet, carefully closing each object when you are done with it. In Java 7 this becomes easier with the try-with-resource syntax: try (FeatureIterator<SimpleFeature> iterator = featureCollection.features()){ while( iterator.hasNext() ){ SimpleFeature feature = iterator.next(); ... } } SimpleFeatureCollection Because Java Generics (i.e. <T> and <F>) are a little hard to read we introduced SimpleFeatureCollection to cover the common case: public interface SimpleFeatureCollection extends FeatureCollection<SimpleFeatureType,SimpleFeature> { // feature access - close when done! 
SimpleFeatureIterator features() // feature access with out the loop void accepts(FeatureVisitor, ProgressListener); SimpleFeatureType getSchema() String getID() // sub query SimpleFeatureCollection subCollection(Filter) SimpleFeatureCollection sort(SortBy) // summary information ReferencedEnvelope getBounds() boolean isEmpty() int size() boolean contains(Object) boolean containsAll(Collection<?>) // convert to array Object[] toArray() <O> O[] toArray(O[]) } This SimpleFeatureCollection interface is just syntactic sugar to avoid typing in FeatureCollection<SimpleFeatureType,SimpleFeature> all the time. If you need to safely convert you can use the DataUtilities.simple method: SimpleFeatureCollection simpleCollection = DataUtilities.simple(collection); Creating a FeatureCollection is usually done for you as a result of a query, although we do have a number of implementations you can work with directly. From DataStore¶ The most common thing to do is grab a FeatureCollection from a file or service.: File file = new File("example.shp"); Map map = new HashMap(); map.put( "url", file.toURL() ); DataStore dataStore = DataStoreFinder.getDataStore( Map map ); SimpleFeatureSource featureSource = dataStore.getFeatureSource( typeName ); SimpleFeatureCollection collection = featureSource.getFeatures(); Please be aware that this is not a copy - the SimpleFeatureCollection above should be considered to be the same thing as the example.shp. Changes made to the collection will be written out to the shapefile. Using a Query to order your Attributes Occasionally you will want to specify the exact order in which your attributes are presented to you, or even leave some attributes out altogether.: Query query = new Query( typeName, filter); query.setPropertyNames( "geom", "name" ); SimpleFeatureCollection sorted = source.getFeatures(query); Please note that the resulting SimpleFeatureCollection.getSchema()will not match SimpleFeatureSource.getFeatureType(), since the attributes will now be limited to (and in the order) specified. Using a Queryto Sort a SimpleFeatureCollection Sorting is available: Query query = new Query( typeName, filter); SortBy sort = filterFactory.sort( sortField, SortOrder.DESCENDING); query.setSortBy( new SortBy[] { sort } ); SimpleFeatureCollection sorted = source.getFeatures(query); Load into Memory If you would like to work with an in-memory copy, you will need to explicitly take the following step: SimpleFeatureCollection collection = myFeatureSource.getFeatures(); SimpleFeatureCollection memory = DataUtilities.collection( collection ); However as mentioned above this will be using the default TreeSetbased feature collection implementation and will not be fast. How not fast? Well your shapefile access on disk may be faster (since it has a spatial index). DefaultFeatureCollection¶ GeoTools provides a default implementation of feature collection that can be used to gather up your features in memory; prior to writing them out to a DataStore. This default implementation of SimpleFeatureCollection uses a TreeMap sorted by FeatureId; so it does not offer very fast performance. 
To create a new DefaultFeatureCollection: DefaultFeatureCollection featureCollection = new DefaultFeatureCollection(); You can also create your collection with an “id”, which will can be used as a handle to tell your collections apart.: DefaultFeatureCollection featureCollection = new DefaultFeatureCollection("internal"); You can create new features and add them to this FeatureCollection as needed: SimpleFeatureType TYPE = DataUtilities.createType("location","geom:Point,name:String"); DefaultFeatureCollection featureCollection = new DefaultFeatureCollection("internal",TYPE); WKTReader2 wkt = new WKTReader2(); featureCollection.add( SimpleFeatureBuilder.build( TYPE, new Object[]{ wkt.read("POINT(1,2)"), "name1"}, null) ); featureCollection.add( SimpleFeatureBuilder.build( TYPE, new Object[]{ wkt.read("POINT(4,4)"), "name2"}, null) ); To FeatureSource¶ You often need to “wrap” up your FeatureCollection as a feature source in order to make effective use of it ( SimpleFeatureSource supports the ability to query the contents, and can be used in a MapLayer for rendering).: SimpleFeatureSource source = DataUtilities.source( collection ); Existing Content¶ The DataUtilities class has methods to create a feature collection from a range of sources: DataUtilities.collection(FeatureCollection<SimpleFeatureType, SimpleFeature>) DataUtilities.collection(FeatureReader<SimpleFeatureType, SimpleFeature>) DataUtilities.collection(List<SimpleFeature>) DataUtilities.collection(SimpleFeature) DataUtilities.collection(SimpleFeature[]) DataUtilities.collection(SimpleFeatureIterator) For more information see DataUtilities. Performance Options¶ For GeoTools 2.7 we are making available a couple new implementations of FeatureCollection. These implementations of SimpleFeatureCollection will each offer different performance characteristics: TreeSetFeatureCollection: the traditional TreeSetimplementation used by default. Note this does not perform well with spatial queries as the contents are not indexed. However finding a feature by “id” can be performed quickly. It is designed to closely mirror the experience of working with content on disk (even down to duplicating the content it gives you in order to prevent any trouble if another thread makes a modification). DataUtilities.source(featureCollection)will wrap TreeSetFeatureCollectionin a CollectionFeatureSource. ListFeatureCollection: uses a list to hold contents; please be sure not to have more then one feature with the same id. The benefit here is being able to wrap a List you already have up as a FeatureCollectionwithout copying the contents over one at a time. The result does not perform well as the contents are not indexed in anyway (either by a spatial index, or by feature id). DataUtilities.source(featureCollection)will wrap ListFeatureCollectionin a CollectionFeatureSource. 
Here is an example using the ListFeatureCollection: SimpleFeatureType TYPE = DataUtilities.createType("location","geom:Point,name:String"); WKTReader2 wkt = new WKTReader2(); ArrayList<SimpleFeature> list = new ArrayList<SimpleFeature>(); list.add( SimpleFeatureBuilder.build( TYPE, new Object[]{ wkt.read("POINT(1,2)"), "name1"}, null) ); list.add( SimpleFeatureBuilder.build( TYPE, new Object[]{ wkt.read("POINT(4,4)"), "name2"}, null) ); SimpleFeatureCollection collection = new ListFeatureCollection(TYPE,list); // O(N) access SimpleFeatureSource source = DataUtilities.source( collection ); SimpleFeatureCollection features = source.getFeatures( filter ); Please keep in mind that the original list is being used by the ListFeatureCollection; the contents will not be copied, making this a lean solution for getting your features bundled up. The flip side is that you should use the FeatureCollection methods to modify the contents after creation (so it can update the bounds). SpatialIndexFeatureCollection: uses a spatial index to hold on to contents for fast visual display in a MapLayer; you cannot add more content to this feature collection once it is used. DataUtilities.source(featureCollection) will wrap SpatialIndexFeatureCollection in a SpatialIndexFeatureSource that is able to take advantage of the spatial index. Here is an example using the SpatialIndexFeatureCollection: final SimpleFeatureType TYPE = DataUtilities.createType("location","geom:Point,name:String"); WKTReader2 wkt = new WKTReader2(); SimpleFeatureCollection collection = new SpatialIndexFeatureCollection(); collection.add( SimpleFeatureBuilder.build( TYPE, new Object[]{ wkt.read("POINT(1,2)"), "name1"}, null )); collection.add( SimpleFeatureBuilder.build( TYPE, new Object[]{ wkt.read("POINT(4,4)"), "name1"}, null )); // Fast spatial Access SimpleFeatureSource source = DataUtilities.source( collection ); SimpleFeatureCollection features = source.getFeatures( filter ); The SpatialIndexFeatureCollection is fast, but tricky to use. It will store the features itself, using a JTS STRtree spatial index. This means the contents of the feature collection cannot be modified after the index is set up, and the index is set up the first time you query the collection (asking for size, bounds, or pretty much anything other than add). To get the full benefit you need to use SimpleFeatureSource as shown above; it will make use of the spatial index when performing a filter. Contents A SimpleFeatureCollection is method compatible with Java Collection<Feature>; this means that an Iterator is available for you to access the contents. However, you will need to close your iterator after use so that any resources (such as database connections) are returned. Direct The following lists several ways of reading data so you can choose the approach that suits your needs. You may find the use of Iterator comfortable (but a bit troubling with the try/catch code needed to close the iterator). FeatureVisitor involves the fewest lines of code (but it “gobbles” all the error messages). On the other extreme, FeatureReader makes all the error messages visible, requiring a lot of try/catch code. Finally we have FeatureIterator when working on Java 1.4 code before generics were available. 
Using FeatureIterator Use of iterator is straight forward; with the addition of a try/finally statement to ensure the iterator is closed after use.: CoordinateReferenceSystem crs = features.getMemberType().getCRS(); BoundingBox bounds = new ReferencedEnvelope( crs ); FeatureIterator<SimpleFeature> iterator = features.iterator(); try { while( iterator.hasNext()){ SimpleFeature feature = iterator.next(); bounds.include( feature.getBounds() ); } } finally{ iterator.close(); } Invalid Data Currently GeoTools follows a “fail first” policy; that is if the data does not exactly meet the requirements of the SimpleFeatureTypea RuntimeExceptionwill be thrown. However often you may in want to just “skip” the troubled Feature and carry on; very few data sets are perfect.: SimpleFeatureCollection featureCollection = featureSource.getFeatures(filter); FeatureIterator iterator = null; int count; int problems; try { for( iterator = features.features(); iterator.hasNext(); count++){ try { SimpleFeature feature = (SimpleFeature) iterator.next(); ... } catch( RuntimeException dataProblem ){ problems++; lastProblem = dataProblem; } } } finally { if( iterator != null ) iterator.close(); } if( problems == 0 ){ System.out.println("Was able to read "+count+" features."); else { System.out.println("Read "+count + "features, with "+problems+" failures"); } Individual DataStoresmay be able to work with your data as it exists (invalid or not). Use of FeatureVisitor FeatureVisitorlets you traverse a FeatureCollectionwith less try/catch/finally boilerplate code.: CoordinateReferenceSystem crs = features.getMemberType().getCRS(); final BoundingBox bounds = new ReferencedEnvelope( crs ); features.accepts( new AbstractFeatureVisitor(){ public void visit( Feature feature ) { bounds.include( feature.getBounds() ); } }, new NullProgressListener() ); You do not have to worry about exceptions, open or closing iterators and as an added bonus this may even be faster (depending on the number of cores you have available). Comparison with SimpleFeatureReader SimpleFeatureReaderis a “low level” version of Iterator that is willing to throw IOExceptions, it is a little bit more difficult to use but you may find the extra level of detail worth it.: SimpleFeatureReader reader = null; try { reader = dataStore.getFeatureReader( typeName, filter, Transaction.AUTO_COMMIT ); while( reader.hasNext() ){ try { SimpleFeature feature = reader.next(); } catch( IllegalArgumentException badData ){ // skipping this feature since it has invalid data } catch( IOException unexpected ){ unexpected.printStackTrace(); break; // after an IOException the reader is "broken" } } } catch( IOException couldNotConnect){ couldNotConnect.printStackTrace(); } finally { if( reader != null ) reader.close(); } Aggregate Functions¶ One step up from direct access is the use of an “aggregate” function that works on the entire FeatureCollection to build you a summary. Traditionally functions that work on a collection are called “aggregate functions”. In the world of databases and SQL these functions include min, max, average and count. GeoTools supports these basic concepts, and a few additions such as bounding box or unique values. Internally these functions are implemented as a FeatureVisitor; and are often optimized into raw SQL on supporting DataStores. Here are the aggregate functions that ship with GeoTools at the time of writing. For the authoritative list check javadocs. 
Sum of a FeatureCollection Here is an example of using Collection_Sum on a FeatureCollection: FilterFactory2 ff = CommonFactoryFinder.getFilterFactory2(); Function sum = ff.function("Collection_Sum", ff.property("age")); Object value = sum.evaluate( featureCollection ); assertEquals( 41, value ); Max of a FeatureCollection Here is an example of using Collection_Max on a FeatureCollection: FilterFactory2 ff = CommonFactoryFinder.getFilterFactory2(); Function max = ff.function("Collection_Max", ff.property("age")); Object value = max.evaluate( featureCollection ); assertEquals( 41, value ); As an alternative you could directly use MaxVisitor: Expression expression = ff.property("age"); MaxVisitor maxVisitor = new MaxVisitor(expression); collection.accepts(maxVisitor, null); CalcResult result = maxVisitor.getResult(); Object max = result.getValue(); MaxVisitor is pretty good about handling numeric and string types (basically anything that is comparable should work). CalcResult is used to hold the value until you are interested in it; you can run the same visitor across several collections and look at the maximum for all of them. Group By Visitor This visitor allows us to group features by some attributes and apply an aggregation function on each group. This visitor acts like the SQL group by command with an aggregation function. This visitor is implemented as a feature visitor that produces a calculation result. Internally, the aggregation function is mapped to a corresponding visitor, and for each group of features a different instance of that visitor will be applied. For SQL data stores that support group by statements and are able to handle the aggregation function, this visitor will be translated to raw SQL, significantly optimizing its execution. In particular, the following conditions apply to JDBC data stores: Aggregation and grouping on property names are supported Simple math expressions of the above are also supported (subtract, add, multiply, divide) Functions may be supported, or not, depending on the filter capabilities of the data store. At the time of writing only PostgreSQL supports a small set of functions (e.g., dateDifference, floor, ceil, string concatenation and the like). 
Here are the currently supported aggregate functions: Follow some examples about how to use the group by visitor to compute some stats about the following example data: Average energy consumption per building type: SimpleFeatureType buildingType = ...; FeatureCollection featureCollection = ...; GroupByVisitor visitor = new GroupByVisitorBuilder() .withAggregateAttribute("energy_consumption", buildingType) .withAggregateVisitor("Average") .withGroupByAttribute("building_type", buildingType) .build(); featureCollection.accepts(visitor, new NullProgressListener()); CalcResult result = visitor.getResult(); The result of a group by visitor can be converted to multiple formats, in this case we will use the Map conversion: Map values = result.toMap(); The content of the Map will be something like this: List("School") -> 63.333 List("Hospital") -> 387.5 List("Fabric") -> 137.5 Max energy consumption per building type and energy type: GroupByVisitor visitor = new GroupByVisitorBuilder() .withAggregateAttribute("energy_consumption", buildingType) .withAggregateVisitor("Max") .withGroupByAttribute("building_type", buildingType) .withGroupByAttribute("energy_type", buildingType) .build(); The content of the Map will be something like this: List("School", "Wind") -> 75.0 List("School", "Solar") -> 65.0 List("Hospital", "Nuclear") -> 550.0 List("Hospital", "Solar") -> 225.0 List("Fabric", "Fuel") -> 125.0 List("Fabric", "Wind") -> 150.0 As showed in the examples multiple group by attributes can be used but only one aggregate function and only one aggregate attribute can be used. To compute several aggregations multiple group by visitors need to be created and executed. Histogram by energy consumption classes: FilterFactory ff = dataStore.getFilterFactory(); PropertyName pn = ff.property("energy_consumption")); Expression expression = ff.function("floor", ff.divide(pn, ff.literal(100))); GroupByVisitor visitor = new GroupByVisitorBuilder() .withAggregateAttribute("energy_consumption", buildingType) .withAggregateVisitor("Count") .withGroupByAttribute(expression) .build(); The expression creates buckets of size 100 and gives each one an integer index, 0 for the first bucket (x >= 0 and x < 100), 1 for the second (x >= 100 and x <200), and so on (each bucket contains its minimum value and excludes its maximum value, this avoids overlaps). A bucket with no results will be skipped. The result is: List(0) -> 3 List(1) -> 2 List(2) -> 1 List(5) -> 1 Buckets 3 and 4 are not present as no value in the data set matches them. Classifier Functions¶ Another set of aggregate functions are aimed at splitting your FeatureCollection up into useful groups. These functions produce a Classifier for your FeatureCollection, this concept is similar to a histogram. These classifiers are used: With the function “classifier” to sort features into groups With gt-brewer to produce attractive styles for visualization of your data. Here are some examples of defining and working with classifiers: Create Classifier You can produce a Classifierfor your FeatureCollectionas follows: FilterFactory2 ff = CommonFactoryFinder.getFilterFactory2(); Function classify = ff.function("Quantile", ff.property("name"), ff.literal(2)); Classifier groups = (Classifier) classify.evaluate(collection); The following classifier functions are available. 
EqualInterval- classifier where each group represents the same sized range Jenks- generate the Jenks’ Natural Breaks classification Quantile- classifier with an even number of items in each group StandardDeviation- generated using the standard deviation method UniqueInterval- variation of EqualIntervalthat takes into account unique values These functions produce the Java object Classifieras an output. Customizing your Classifier You can think of the Classifieras a series of groups or bins into which you will sort Features. Each partition has a title which you can name as you please.: groups.setTitle(0, "Group A"); groups.setTitle(1, "Group B"); Using Your Classifierto group Features You can then use this Classifier to sort features into the appropriate group: // groups is a classifier with "Group A" and "Group B" Function sort = ff.function("classify", ff.property("name"), ff.literal(groups)); int slot = (Integer) sort.evaluate(feature); System.out.println(groups.getTitle(slot)); // ie. "Group A" You can think of a Classifier as a filter function similar to a Java switch statement. Join¶ GeoTools does not have any native ability to “Join” FeatureCollections; even though this is a very common request. References: gt-validationadditional examples Filter example using filters Join FeatureCollection You can go through one collection, and use each feature as a starting point for making a query resulting in a “Join”. In the following example we have: outer: whileloop for each polygon inner: FeatureVisitorlooping through each point Thanks to Aaron Parks for sending us this example of using the bounding box of a polygon to quickly isolate interesting features; which can then be checked one by one for “intersects” (i.e. the features touch or overlap our polygon). void polygonInteraction() { SimpleFeatureCollection polygonCollection = null; SimpleFeatureCollection fcResult = null; final DefaultFeatureCollection found = new DefaultFeatureCollection(); FilterFactory2 ff = CommonFactoryFinder.getFilterFactory2(); SimpleFeature feature = null; Filter polyCheck = null; Filter andFil = null; Filter boundsCheck = null; String qryStr = null; try (SimpleFeatureIterator it = polygonCollection.features()) { while (it.hasNext()) { feature = it.next(); BoundingBox bounds = feature.getBounds(); boundsCheck = ff.bbox(ff.property("the_geom"), bounds); Geometry geom = (Geometry) feature.getDefaultGeometry(); polyCheck = ff.intersects(ff.property("the_geom"), ff.literal(geom)); andFil = ff.and(boundsCheck, polyCheck); try { fcResult = featureSource.getFeatures(andFil); // go through results and copy out the found features fcResult.accepts( new FeatureVisitor() { public void visit(Feature feature) { found.add((SimpleFeature) feature); } }, null); } catch (IOException e1) { System.out.println("Unable to run filter for " + feature.getID() + ":" + e1); continue; } } } } Joining two Shapefiles The following example is adapted from some work Gabriella Turek posted to the GeoTools user email list. 
Download: Here is the interesting bit from the above file: private static void joinExample(SimpleFeatureSource shapes, SimpleFeatureSource shapes2) throws Exception { SimpleFeatureType schema = shapes.getSchema(); String typeName = schema.getTypeName(); String geomName = schema.getGeometryDescriptor().getLocalName(); SimpleFeatureType schema2 = shapes2.getSchema(); String typeName2 = schema2.getTypeName(); String geomName2 = schema2.getGeometryDescriptor().getLocalName(); FilterFactory2 ff = CommonFactoryFinder.getFilterFactory2(); Query outerGeometry = new Query(typeName, Filter.INCLUDE, new String[] {geomName}); SimpleFeatureCollection outerFeatures = shapes.getFeatures(outerGeometry); SimpleFeatureIterator iterator = outerFeatures.features(); int max = 0; try { while (iterator.hasNext()) { SimpleFeature feature = iterator.next(); try { Geometry geometry = (Geometry) feature.getDefaultGeometry(); if (!geometry.isValid()) { // skip bad data continue; } Filter innerFilter = ff.intersects(ff.property(geomName2), ff.literal(geometry)); Query innerQuery = new Query(typeName2, innerFilter, Query.NO_NAMES); SimpleFeatureCollection join = shapes2.getFeatures(innerQuery); int size = join.size(); max = Math.max(max, size); } catch (Exception skipBadData) { } } } finally { iterator.close(); } System.out.println( "At most " + max + " " + typeName2 + " features in a single " + typeName + " feature"); } When run on the uDig sample data set available here: You can run an intersection test between bc_pubsand bc_municipality: Welcome to GeoTools:2.5.SNAPSHOT At most 88 bc_pubs features in a single bc_municipality feature Here are a couple other examples for innerFilterto think about: ff.intersects(ff.property(geomName2), ff.literal( geometry )); // 88 pubs ff.dwithin(ff.property(geomName2), ff.literal( geometry ),1.0,"km"); // 60 pubs ff.not( ff.disjoint(ff.property(geomName2), ff.literal( geometry )) ); // 135 pubs! ff.beyond(ff.property(geomName2), ff.literal( geometry ),1.0,"km"); // 437 pubs
https://docs.geotools.org/stable/userguide/library/main/collection.html
2021-05-06T12:51:56
CC-MAIN-2021-21
1620243988753.97
[]
docs.geotools.org
Changelog Version 1.4.2 Jan 13, 2021 - Convert section “Swissup Checkout” into item “Checkout” under section “Swissup” at System Config page. Version 1.4.1 Jul 1, 2020 - Added fix for Italian addresses to use proper province code. Requires enabling advanced formatting to apply the fix. Version 1.4.0 Jun 16, 2020 - Enabled module at customer account page. - Added ability to restrict address search to the currently selected country. - Added ability to extend or completely redefine Address Mapping Settings. - Unit number support added. - Postcode suffix support added. - Scripts no longer added to the frontend if module is disabled. Version 1.3.3 May 22, 2020 - Fixed City detection for Brazilian addresses Version 1.3.2 May 4, 2020 - Magento 2.3.5 CSP compatibility Version 1.3.2.2 Oct 16, 2019 - Fixed ‘undefined’ word in input field after pressing ‘tab’ key. (Happens when API key is invalid). Version 1.2.1 May 23, 2019 - Improved Czech Republic addresses autocompletion Version 1.2.0 Jan 15, 2019 - Added ability to fill housenumber into custom address field. Both Magento Commerce Edition and our AddressFieldManager are supported. Version 1.1.1 Nov 01, 2018 - Fixed non-working autocomplete when form doesn’t have ID attribute (Some rare third-party modules) Version 1.0.8 - Translation file added Version 1.0.7 - Fixed API authFailure processing on slow networks - Code refactoring Version 1.0.6 - Add country config option (Restrict the search to specific countries) - Js code style was improved (jscs, eslint) Version 1.0.5 - Fixed region field autocompletion Version 1.0.4 - Improved logic for UK addresses. Fixed missing town field. Version 1.0.3 - Using street.long_name instead of street.short_name - Added js gm_authFailure callback to fix disabled street address line, when API key is invalid Version 1.0.2 - Acl fixes - Updated module dependencies Version 1.0.1 - Added configuration option to enable/disable module Version 1.0.0 - Initial release
https://docs.swissuplabs.com/m2/extensions/address-autocomplete/changelog/
2021-05-06T13:31:11
CC-MAIN-2021-21
1620243988753.97
[]
docs.swissuplabs.com
Prisma Cloud compliance checks 1. Overview Prisma Cloud Labs compliance checks are designed by our research team and fill gaps not offered by other benchmarks. Like all compliance checks, Prisma Cloud’s supplementary checks monitor and enforce a baseline configuration across your environment. Prisma Cloud Labs compliance checks can be enabled or disabled in custom rules. New rules can be created under Defend > Compliance > Policy. 2. Checks - 597 — Secrets in clear text environment variables (container and serverless function check) Checks if a running container (instantiated from an image) or serverless function contains sensitive information in its environment variables. These env vars can be easily exposed with docker inspect, and thus compromise privacy. - 598 — Container app is running with weak settings Weak settings incidents indicate that a well-known service is running with a non-optimal configuration. This covers settings for common applications, specifically: Mongo, Postgres, Wordpress, Redis, Kibana, Elasticsearch, RabbitMQ, Tomcat, Haproxy, KubeProxy, Httpd, Nginx, MySql, and registries. These check for things such as the use of default passwords, requiring SSL, etc. The output for a failed compliance check will contain a "Cause" field that gives specifics on the exact settings detected that caused a failure. - 599 — Container is running as root (container check) Checks if the user value in the container configuration is root. If the user value is 0, root, or "" (empty string), the container is running as a root user, and the policy’s configured effect (ignore, alert, or block) is actuated. - 420 — Image is not updated to latest (image check) For running containers, Prisma Cloud checks that the creation time of each layer in image:tag is the same as its corresponding image:tag in the registry. For any image pulled from a password protected registry/repo, the registry must be configured in Prisma Cloud. To add a registry, go to Defend > Vulnerabilities > Registry. If an image does not belong to any user configured registry, and the image origin is Docker Hub, the image is compared against image:tag in Docker Hub. Each layer in the image is assessed separately. If a layer cannot be found in the registry, it is skipped, and the next layer is assessed. - 422 — Image contains malware (image check) Checks if any binary in the image matches the md5 checksum for known malicious software. - 423 — Image is not trusted (image check) Checks if unauthorized (untrusted) images are pulled or loaded into your environment. Prisma Cloud provides a mechanism to specify specific registries, repositories, and images that are considered trusted. Enable this check to prevent unauthorized containers from running in your critical environment. For more information, see Trusted images. - 424 — Sensitive information provided in environment variables (image and serverless function check) Checks if images or serverless functions contain sensitive information in their environment variables. Container images define environment variables with the Dockerfile ENV instruction. These environment variables can be easily exposed with docker inspect. - 425 — Private keys stored in image (image and serverless function check) Searches for private keys stored in an image or serverless function. If found, the policy effect (ignore, alert, block) is applied on deployment. - 426 — Image contains binaries used for crypto mining (image check) Detects when there are crypto miners in an image. 
Attackers have been quietly poisoning registries and injecting crypto mining tools into otherwise legitimate images. When you run these images, they perform their intended function, but also mine Bitcoin for the attacker. This check is based on research from Prisma Cloud Labs. For more information, see Real World Security: Software Supply Chain. - 448 — Package binaries should not be altered Checks the integrity of package binaries in an image. During an image scan, every binary’s checksum is compared with its package info. If there’s a mismatch, a compliance issue is raised. Besides scan time, this compliance issue can also be raised at run time if a modified binary is spawned. 3. Prisma Cloud Labs Istio compliance checks The Istio compliance checks help you enforce a secure Istio configuration and address risks such as misconfigured TLS settings and universally scoped service roles. The goals of the compliance rules are to: Ensure mutual TLS is configured correctly (enabled and over HTTPS). Ensure the RBAC policy is configured with service-level access control (service x can only talk with service y). Ensure the RBAC policy is not too permissive. 3.1. Checks 427 — Configure TLS per service using Destination Rule traffic policy 450 — Enable mesh-wide mutual TLS authentication using Peer Authentication Policy 451 — Avoid using permissive authorization policies without rules as it can compromise the target services 452 — Enable Istio access control on all workloads in the mesh using Authorization Policies
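To make checks 597 and 424 (sensitive information in environment variables) more concrete, below is a minimal, hypothetical Python sketch of the kind of pattern matching such a check can perform; the regular expressions, variable names, and sample values are illustrative assumptions and do not reproduce Prisma Cloud's actual detection logic.

import re

# Illustrative patterns only -- not Prisma Cloud's real rules.
SUSPICIOUS_NAME = re.compile(r"(PASSWORD|SECRET|TOKEN|API_?KEY|PRIVATE_?KEY)", re.IGNORECASE)
LONG_OPAQUE_VALUE = re.compile(r"^[A-Za-z0-9+/=_\-]{20,}$")

def find_secret_env_vars(env):
    """Return names of environment variables that look like they carry secrets.

    `env` is a mapping of variable names to values, for example the Config.Env
    entries reported by `docker inspect`, split on the first '='.
    """
    findings = []
    for name, value in env.items():
        if SUSPICIOUS_NAME.search(name) or LONG_OPAQUE_VALUE.match(value or ""):
            findings.append(name)
    return findings

if __name__ == "__main__":
    sample = {"PATH": "/usr/bin", "DB_PASSWORD": "s3cr3t", "AWS_SECRET_ACCESS_KEY": "A" * 24}
    print(find_secret_env_vars(sample))  # ['DB_PASSWORD', 'AWS_SECRET_ACCESS_KEY']

A real policy engine would additionally whitelist known-benign variables and apply the configured effect (ignore, alert, or block) per rule.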
https://docs.twistlock.com/docs/compute_edition/compliance/prisma_cloud_compliance_checks.html
2021-05-06T13:51:25
CC-MAIN-2021-21
1620243988753.97
[]
docs.twistlock.com
Promise.reduce( Iterable<any>|Promise<Iterable<any>> input, function(any accumulator, any item, int index, int length) reducer, [any initialValue] ) -> Promise Given an Iterable(arrays are Iterable), or a promise of an Iterable, which produces promises (or a mix of promises and values), iterate over all the values in the Iterable into an array and reduce the array to a value using the given reducer function. If the reducer function returns a promise, then the result of the promise is awaited, before continuing with next iteration. If any promise in the array is rejected or a promise returned by the reducer function is rejected, the result is rejected as well. Read given files sequentially while summing their contents as an integer. Each file contains just the text 10. Promise.reduce(["file1.txt", "file2.txt", "file3.txt"], function(total, fileName) { return fs.readFileAsync(fileName, "utf8").then(function(contents) { return total + parseInt(contents, 10); }); }, 0).then(function(total) { //Total is 30 }); If initialValue is undefined (or a promise that resolves to undefined) and the iterable contains only 1 item, the callback will not be called and the iterable's single item is returned. If the iterable is empty, the callback will not be called and initialValue is returned (which may be undefined). Promise.reduce will start calling the reducer as soon as possible, this is why you might want to use it over Promise.all (which awaits for the entire array before you can call Array#reduce on it). © 2013–2018 Petka Antonov Licensed under the MIT License.
https://docs.w3cub.com/bluebird/api/promise.reduce
2021-05-06T12:31:12
CC-MAIN-2021-21
1620243988753.97
[]
docs.w3cub.com
Get the code¶ The development of the Two!Ears Auditory Model happens independently for its different modules. All of them are hosted as git repositories on github. For an overview go to. In order to get started you should familiarize yourself with git and then clone the repository whose code you would like to change. First you have to get the main repository, as all other modules need it as a basis: $ git clone Warning At github you will also find a repository that is used for the official releases of the model (). Please don’t use this for development. Work with the whole Two!Ears model¶ Let us now assume you want to take part in the development of the complete Two!Ears Auditory Model. What you have to do then is the following. In the main repository you will find the file TwoEars.xml which defines all modules that are part of the Two!Ears Auditory Model: <?xml version="1.0" encoding="utf-8"?> <!-- Configure which parts of the Two!Ears model should be started --> <requirements> <TwoEarsPart sub="src" startup="startBinauralSimulator">binaural-simulator</TwoEarsPart> <TwoEarsPart sub="API_MO" startup="SOFAstart">sofa</TwoEarsPart> <TwoEarsPart startup="startAuditoryFrontEnd">auditory-front-end</TwoEarsPart> <TwoEarsPart startup="startBlackboardSystem">blackboard-system</TwoEarsPart> </requirements> First clone all necessary repositories: $ mkdir git $ mkdir git/twoears/ $ cd git/twoears $ git clone # if not already done $ git clone $ git clone $ git clone $ git clone Then you have to define the paths of the individual modules. In the main repository copy the file TwoEarsPaths_Example.xml to TwoEarsPaths.xml and adjust it to the paths you use, in our case the default settings: <?xml version="1.0" encoding="utf-8"?> <!-- Configuration file for the Two!Ears auditory model --> <repoPaths> <!-- --> <binaural-simulator>~/git/twoears/binaural-simulator</binaural-simulator> <!-- --> <auditory-front-end>~/git/twoears/auditory-front-end</auditory-front-end> <!-- --> <blackboard-system>~/git/twoears/blackboard-system</blackboard-system> <!-- --> <sofa>~/git/twoears/SOFA</sofa> </repoPaths> After that you can simply run the following command in the folder of the main repository in order to work with the whole model: >> startTwoEars; Work with a single module¶ If you would like to work only on a single module, you only need to get its repository as well as all its dependencies. If you are not sure what the dependencies are, you can have a look at its XML configuration file. For example, in the case of the Binaural simulator it is stored in the file BinauralSimulator.xml in the main directory of the binaural-simulator repository: <?xml version="1.0" encoding="utf-8"?> <!-- Configure dependency of Two!Ears modules for the Binaural Simulator --> <!-- Start the Two!Ears Binaural Simulator with startTwoEars('BinauralSimulator.xml'); --> <requirements> <TwoEarsPart sub="src" startup="startBinauralSimulator">binaural-simulator</TwoEarsPart> <TwoEarsPart sub="API_MO" startup="SOFAstart">sofa</TwoEarsPart> </requirements> This means in this case you have to clone the sofa repository in order to use the Binaural simulator: $ mkdir git $ mkdir git/twoears $ cd git/twoears $ git clone $ git clone After that you can start the Binaural simulator in Matlab with the following command in order to test it: >> startTwoEars('BinauralSimulator.xml');
http://docs.twoears.eu/en/1.5/dev/development-system/get-the-code/
2021-05-06T12:06:21
CC-MAIN-2021-21
1620243988753.97
[]
docs.twoears.eu
Spectro-temporal modulation spectrogram¶ Neuro-physiological studies suggest that the responses of neurons in the primary auditory cortex of mammals are tuned to specific spectro-temporal patterns [Theunissen2001], [Qiu2003]. This response characteristic of neurons can be described by the so-called STRF. As suggested by [Qiu2003], the STRF can be effectively modelled by two-dimensional (2D) Gabor functions. Based on these findings, a spectro-temporal filter bank consisting of 41 Gabor filters has been designed by [Schaedler2012]. This filter bank has been optimised for the task of ASR, and the respective real parts of the 41 Gabor filters are shown in Fig. 36. The input is a log-compressed rate-map with a required resolution of 100 Hz, which corresponds to a step size of 10 ms. To reduce the correlation between individual Gabor features and to limit the dimensions of the resulting Gabor feature space, a selection of representative rate-map frequency channels is automatically performed for each Gabor filter [Schaedler2012]. For instance, the reference implementation based on 23 frequency channels produces a 311-dimensional Gabor feature space. The Gabor feature processor is demonstrated by the script DEMO_GaborFeatures.m, which produces the two plots shown in Fig. 37. A log-compressed rate-map with 25 ms time frames and 23 frequency channels spaced between 124 and 3657 Hz is shown in the left panel for a speech signal. These rate-map parameters have been adjusted to meet the specifications recommended in the ETSI standard [ETSIES]. The corresponding Gabor feature space with 311 dimensions is presented in the right panel, where vowel transitions (e.g. at time frames around 0.2 s) are well captured. This aspect might be particularly relevant for the task of ASR.
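To illustrate the idea behind the Gabor feature extraction described above, the following Python sketch builds a single complex 2D Gabor kernel and convolves its real part with a log-compressed rate-map; the kernel size, modulation frequencies, and random rate-map are made-up placeholders and do not reproduce the 41-filter bank or the channel selection of [Schaedler2012].

import numpy as np
from scipy.signal import convolve2d

def gabor_2d(num_freq, num_time, omega_f, omega_t):
    # Complex 2D Gabor function: a spectro-temporal carrier under a Hann envelope.
    # omega_f and omega_t are the spectral and temporal modulation frequencies
    # (cycles per channel and cycles per frame); the values used below are illustrative.
    f = np.arange(num_freq) - num_freq // 2
    t = np.arange(num_time) - num_time // 2
    envelope = np.outer(np.hanning(num_freq), np.hanning(num_time))
    carrier = np.exp(2j * np.pi * (omega_f * f[:, None] + omega_t * t[None, :]))
    return envelope * carrier

# Placeholder log-compressed rate-map: 23 channels x 200 frames (10 ms steps, i.e. 2 s).
rate_map = np.log10(1e-6 + np.random.rand(23, 200))

kernel = gabor_2d(num_freq=9, num_time=15, omega_f=0.1, omega_t=0.05)
# Filtering with the real part yields one spectro-temporal feature map; the full
# front end repeats this for all 41 filters and then keeps representative channels.
feature_map = convolve2d(rate_map, np.real(kernel), mode="same")
print(feature_map.shape)  # (23, 200)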
http://docs.twoears.eu/en/latest/afe/available-processors/spectro-temporal-modulation-spectrogram/
2021-05-06T11:53:20
CC-MAIN-2021-21
1620243988753.97
[]
docs.twoears.eu
Recording your session in ZBrush is as simple as pressing Movie: Record. By default, you will only record the document and your interface items will be skipped. - To record a movie, press Movie: Record To record a movie of the entire Interface follow the steps below: - Press Movie: Window - Set the final output size by selecting Movie: Small, Movie: Medium, Movie: Large. Movie Small is 25% of your screen size. Movie Large is 100% of your screen size. Movie Medium is 50% of your screen size. - To show menus, unpress Movie: Modifiers: Skip Menus. - Set the frames per second for the recording by adjusting Movie: Modifiers: Recording FPS - Set the frames per second for the Playback FPS by adjusting Movie: Modifiers: Playback FPS - Press Movie: Record - Start sculpting - When you are done press Movie: Save As Create A TimeLapse Video Using TimeLapse can significantly reduce the length (and file size) of your movie. Time lapse causes frames to be recorded only when the mouse is doing something that affects the document or model; sculpting or painting, basically. Even actions such as rotating the model are not shown, although once the model has been rotated to its new position, a frame will be recorded to show the new orientation. - Set the duration of each snapshot with Movies: Modifiers: Snapshot Time. - Turn TimeLapse on. Press Movie: TimeLapse - Start sculpting
http://docs.pixologic.com/user-guide/movies/
2019-06-16T06:50:00
CC-MAIN-2019-26
1560627997801.20
[]
docs.pixologic.com
Pass parameters to a URL by using the ribbon Applies to Dynamics 365 for Customer Engagement apps version 9.x Ribbon actions are defined in the <Actions> element of a <CommandDefinition> element. There are several ways to pass contextual Dynamics 365 for Customer Engagement information as query string parameters to a URL by using the ribbon. Use a <Url> element. Within the Url element, use the PassParams attribute. Use a <Url> element together with a <CrmParameter> element. When used from a Url element, the name attribute value must be set. Use a <JavaScriptFunction> element together with a <CrmParameter> element. Language codes are four-digit or five-digit locale IDs. Valid locale ID values can be found in the Locale ID (LCID) Chart. Note We recommend that you use the entity name instead of the entity type code because the entity type code may be different between Dynamics 365 for Customer Engagement installations. Example The following sample shows the URL without parameters: The following sample shows how to open a Dynamics 365 for Customer Engagement record or view by using the approach described in Open forms, views, dialogs, and reports. More information: HttpRequest.QueryString Property. More information: Sample: Passing Multiple Values to a Web Page Web Resource Through the Data Parameter See also Customize commands and the ribbon Open Forms And Views with a URL Define Ribbon Tab Display Rules Sample: Export Ribbon Definitions
https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/customize-dev/pass-parameters-url-by-using-ribbon
2019-06-16T07:21:14
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
Tip: Display Administrative Tools on the Windows 7 Start Menu The Administrative Tools menu is not displayed by default in Windows 7. If you want to display this menu on your computer or for a user with administrator privileges, you need to customize the Start menu. You can add the Administrative Tools menu to either the Start menu or to the Start menu and the All Programs submenu of the Start menu by completing the following steps: 1. Right-click Start, and then click Properties. The Taskbar And Start Menu Properties dialog box is displayed with the Start Menu tab selected by default. 2. Click Customize. Scroll down the list until you can see the System Administrative Tools heading. 3. At this point, you have two options: - If you want to display the Administrative Tools menu as a submenu of the All Programs menu, select Display On The All Programs Menu. - If you want to display the Administrative Tools menu directly on the Start menu and as a submenu of the All Programs menu, select Display On The All Programs Menu And The Start Menu. 4. Click OK twice. From the Microsoft Press book Windows 7 Administrator’s Pocket Consultant by William R. Stanek. Looking for More Tips? For more tips on Windows 7 and other Microsoft technologies, visit the TechNet Magazine Tips library.
https://docs.microsoft.com/en-us/previous-versions/technet-magazine/ff700232%28v%3Dmsdn.10%29
2019-06-16T07:02:44
CC-MAIN-2019-26
1560627997801.20
[]
docs.microsoft.com
HR security HR Service Delivery provides Restricted Caller Access, Encryption Support, and Edge Encryption security features. Restricted caller access for HR Restricted caller access (RCA) defines cross-scope access to HR Service Delivery applications. RCA is available to help secure sensitive information in HR scoped tables and script include APIs. Without RCA, tables that are not private to a scope are susceptible to queries by any server-side script. Encryption Support for HR and Employee Document Management HR Service Delivery and Employee Document Management provide encryption support to secure sensitive information. To encrypt employee documents or fields in HR, activate the Encryption Support [com.glide.encryption] plugin. Encryption prevents unauthorized users from downloading and viewing employee documents or viewing specific fields. After the plugin has been activated: Reveal the Encryption Context field to the sn_hr_ef.encryption_context role. Note: The base system does not reveal the Encryption Context field on the Role form. This field defines the encryption key used to encrypt fields and documents. Also, ensure the Application field has Employee Document Management selected. See Roles. From the Encryption Context field, select an existing or add an encryption context. See Set up encryption contexts. Add the sn_hr_ef.encryption_context role to the user adding employee documents. Users with this role can access encrypted documents. Employees can view their own documents when HR Service Delivery is licensed, activated, and the document type allows employee access. The sn_hr_ef.encryption_context role is not required for employees to view their own documents that are encrypted. See Define policies for a document type. Note: Documents created prior to plugin activation are not encrypted. See Encryption Support. Edge Encryption for HR and Employee Document Management HR Service Delivery and Employee Document Management provide edge encryption to secure sensitive information. Edge encryption provides you with direct control over your data security. Encryption and key management are performed on your intranet between your browser and your ServiceNow instance. See Understanding Edge Encryption. Because edge encryption is enabled on a proxy server on your side of the network, there is significant planning, network administration and management, and setup required. See Planning for Edge Encryption. To install edge encryption, see Edge Encryption installation. To configure edge encryption, see Edge Encryption configuration. Edge encryption for HR You can encrypt columns (fields) or attachments associated with an HR table. See Encrypt fields using encryption configurations. Note: There are limitations when using edge encryption. See Edge Encryption limitations.
https://docs.servicenow.com/bundle/london-hr-service-delivery/page/product/human-resources/concept/hr-security.html
2019-06-16T07:32:29
CC-MAIN-2019-26
1560627997801.20
[]
docs.servicenow.com
Overview cPanel & WHM allows you to deny cPanel users the ability to create certain domains or use certain top-level domains (TLDs). This feature is useful, for example, to deny cPanel users the ability to park a well-known domain (such as google.com) on top of a domain. For more information about the domains that cPanel users can create, read our Aliases and Addon Domains documentation. Add to the list of user-denied domains To add to the list of domains that cPanel & WHM does not allow users to create, perform the following steps: From the command line, run the following command to view the default list of domains that cPanel & WHM does not allow users to create. cat /usr/local/cpanel/etc/commondomains If the file contains a large number of entries, run the grep 'example.com' /usr/local/cpanel/etc/commondomains command to determine whether the example.com domain exists in the list. Warning: Do not edit this file directly. System updates overwrite any changes to this file. Instead, follow the next step to create a new file. - With a text editor, add the domains and TLDs that you do not want to allow users to create to the /var/cpanel/commondomains file. Add each domain or TLD on a separate line, and do not prepend or append a dot to the domains. For example: a.com b.com c.com .cat .ninja Only add domain names and TLDs to this file (for example, example.com). When you list a domain name, cPanel & WHM will automatically prevent the creation of subdomains for that domain or domains under a TLD. For example, if you list the example.com domain, users also cannot create the sub.example.com domain. - In the Domains section of WHM's Tweak Settings interface (WHM >> Home >> Server Configuration >> Tweak Settings), set the Prevent cPanel users from creating specific domains setting to On. Additional documentation
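Because /var/cpanel/commondomains is just a newline-delimited list, a small helper can keep it tidy when you add entries in bulk. The Python sketch below is an unofficial convenience script, not a cPanel-provided tool; the example entries are placeholders, and it must run as root to write under /var/cpanel.

#!/usr/bin/env python3
# Append domains or TLDs to /var/cpanel/commondomains without creating duplicates.
from pathlib import Path

DENY_FILE = Path("/var/cpanel/commondomains")  # never edit the copy under /usr/local/cpanel/etc

def add_denied_domains(new_entries):
    existing = set()
    if DENY_FILE.exists():
        existing = {line.strip() for line in DENY_FILE.read_text().splitlines() if line.strip()}
    to_add = [e.strip() for e in new_entries if e.strip() and e.strip() not in existing]
    if to_add:
        with DENY_FILE.open("a") as fh:
            for entry in to_add:
                fh.write(entry + "\n")  # one domain or TLD per line, as the documentation requires
    return to_add

if __name__ == "__main__":
    print(add_denied_domains(["a.com", "b.com", ".ninja"]))  # placeholder entries

Remember to also enable the Prevent cPanel users from creating specific domains setting in WHM's Tweak Settings, as described above.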
https://hou-1.docs.confluence.prod.cpanel.net/display/CKB/How+to+Prevent+cPanel+Users+from+Creating+Certain+Domains
2019-06-16T06:53:54
CC-MAIN-2019-26
1560627997801.20
[]
hou-1.docs.confluence.prod.cpanel.net
How to install Document Manager Dynamics 365 add-in Default Integration Document Manager is an add-in available for installation only through the AppSource Store. It is designed to extend the default integration between Dynamics 365 and SharePoint Online. Even though you can install the add-in without the integration configured between the systems, the described features will not work until that setup is in place. Document Manager Follow the Office Store wizard and install the add-in. 03 Admin Configuration Page Navigate to the add-in configuration page by clicking on the Display Name of the installed solution. The “Getting Started” link will provide you with up-to-date information on how to configure and use the add-in. There are three configuration steps you need to complete: – Authentication – Document Locations – License 04 Authentication Document Manager uses Azure Active Directory authentication. When opening the Authentication page for the first time, you will be asked to authenticate and grant the admin consent for the Document Manager Azure app. Review the requested permissions and provide consent. 05 Document Locations Document Manager relies on the built-in settings about documents in Dynamics. It reuses the values in the system lists Document Locations and SharePoint Sites. Each entity that has documents must have corresponding system values in these lists. These system values identify the SharePoint location of the folder that contains the entity’s documents. Document Manager works in two modes: Default: Select this mode if you already have the default integration configured. Document Manager will continue to use this setup. Newly created locations will follow the default pattern. Custom: Select this mode if you already have custom logic for creating document locations. Document Manager won’t create any location records. 06 Choose licensing Document Manager offers a 30-day free trial with the full set of features. After that you can continue to use Document Manager only on a subscription basis. Read more… 07 Add Documents Grid control on Entity’s form Once the previous steps are completed, you are ready to use the Documents Grid control. This control lets users interact with the documents. Documents Grid is an HTML web resource named singens_docm.documents.html. You can add it to an entity’s form as a regular web resource. #7.1 – Insert a new section and name it Documents – Select it and click on Insert “Web Resource” #7.2 – Select the web resource “singens_docm.documents.html“ #7.3 – Apply formatting according to your form specifics – Save and Publish the form
http://docs.singens.com/document-manager/how-to-install/
2019-06-16T07:36:29
CC-MAIN-2019-26
1560627997801.20
[array(['https://docs.singens.com/wp-content/uploads/2018/06/Upload.png', 'Default Integration'], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2018/06/Mapping.png', 'Prerequisite'], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2019/02/DM-solutions.png', 'Install Document Manager'], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2019/02/Document-Manager-Config.png', 'Admin Configuration Page'], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2019/02/Document-Manager-Consent.png', 'Authentication'], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2019/02/Document_manager-Locations.png', 'Document Locations'], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2019/02/Document-Manager-Trial.png', 'Choose licensing'], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2019/02/Singens-Document-Manager.png', "Add Documents Grid control on Entity's form"], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2019/02/Document-Manager-Add-Section.png', None], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2019/02/Document-Manager-Add-WebResource-02.png', None], dtype=object) array(['https://docs.singens.com/wp-content/uploads/2019/02/Document-Manager-Add-WebResource-03.png', None], dtype=object) ]
docs.singens.com
https://docs.armory.io/docs/installation/armory-operator/op-manifest-reference/op-webhook/
2021-11-27T02:24:00
CC-MAIN-2021-49
1637964358078.2
[]
docs.armory.io
OpenLDAP Overlays¶ OpenLDAP server supports overlays which can be added to a LDAP database to modify its functionality. The overlays listed below are enabled by the debops.slapd role by default. Sync Provider overlay¶ The role will by default enable the Sync Provider ( syncprov) dynamic module and overlay, in both the cn=config configuration database, and the main OpenLDAP database. The Sync Provider functionality is used in different data replication strategies. Enabling it by default, even on a standalone OpenLDAP server, should be harmless - the replication requires additional configuration defined in each OpenLDAP database. The overlay is enabled first to keep the X-ORDERED index number consistent between the cn=config database and the main database. Manual page: slapo-syncprov(5) Password Policy overlay¶ The debops.slapd role will by default import the ppolicy LDAP schema, load the ppolicy dynamic module and enable the Password Policy overlay in the main OpenLDAP database. The Password Policy overlay is used to maintain the security and quality of various passwords stored in the LDAP database. By default the overlay will ensure that the cleartext passwords passed to the OpenLDAP server are hashed using the algorithms specified in the olcPasswordHash parameter (salted SHA-512 via crypt(3) function is set by default by the debops.slapd role). The LDAP administrators can define default and custom Password Policies in the main database, which can enforce additional password requirements, like minimum password length, different types of characters used, lockout policy, etc. Manual page: slapo-ppolicy(5) Attribute Uniqueness overlay¶ The Attribute Uniqueness overlay is used to enforce that specific LDAP attributes are unique acrosse the LDAP directory. The default configuration enforces the uniqueness of the uidNumber and gidNumber attributes in the entire LDAP directory, and the uid, gid and ou=People,dc=example,dc=org subtree of the directory. Manual page: slapo-unique(5) Reverse Group Membership Maintenance overlay¶ The memberOf overlay is used to update the LDAP objects of group members when they are added or removed from a particular groupOfNames or groupOfEntries objects, as well as "role occupants" defined in a given organizationalRole object. The overlay also maintains reverse membership information of the groupOfURLs objects maintained by the AutoGroup overlay. Applications and services can search for objects with the memberOf attribute with specific values to get the list of groups or roles a given user belongs to. Manual page: slapo-memberof(5) Referential Integrity overlay¶ The refint overlay is used to update Distinguished Name references in other LDAP objects when a particular object is renamed or removed. This ensures that the references between objects in the LDAP database are consistent. Manual page: slapo-refint(5) Audit Logging overlay¶ The auditlog overlay records all changes performed in the LDAP database using an external log file. Changes are stored in the LDIF format, that includes a timestamp and the identity of the modifier. The role will automatically ensure that the audit log files are rotated periodically using the logrotate service to keep the disk usage under control. Manual page: slapo-auditlog(5) Attribute Constraints overlay¶ The constraint overlay can be used to place constraints on specific LDAP attributes, for example number of possible values, size or format. 
Manual page: slapo-constraint(5) AutoGroup overlay¶ The autogroup overlay is yet another attempt at creating dynamic groups in the LDAP directory. Normally using the combination of the slapo-dynlist(5) and the slapo-dyngroup(5) overlays the LDAP directory can support dynamic group objects which define membership in a group using LDAP search URLs. However these groups are "virtual" and don't really exist, using the dynamic attributes in searches will not include these groups. Also, the reverse membership information defined by the memberOf attribute cannot be implemented this way. With autogroup overlay, the directory server checks on each add, modify or delete operation on an object if that object is included in a search of a particular groupOfURLs group and statically adds or removes a reference to it in the member attribute as needed. With addition of the memberof overlay which maintains reverse membership information of a given object using the memberOf attribute, the AutoGroup overlay can be used to provide two-way dynamic group support in the LDAP directory. The write performance might be an issue with large datasets. The dynamic groups are defined using the groupOfURLs LDAP object. The memberURL attribute(s) define the LDAP search URLs (RFC 4516) used to specify the members of the group. Warning During development of the feature in DebOps, crashes of the slapd daemon were observed in multi-master replication mode on older Debian releases. The OpenLDAP version included in Debian Buster seems to work fine, though. LastBind overlay¶ The lastbind overlay and the corresponding OpenLDAP module can be used to maintain information about last login time of a LDAP account, similar to the lastLogon functionality from Active Directory. The primary purpose of the lastbind overlay is detection of inactive user accounts; it shouldn't be relied on for real-time login tracking. The time of the last successful authenticated bind operation of a given LDAP object is stored in the authTimestamp operational attribute (not replicated, not visible in normal queries, has to be specifically requested). By default the timestamp is updated once a day to avoid performance issues in larger environments. Manual page: slapo-lastbind(5)
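To illustrate how a client application can consume the reverse membership information maintained by the memberof and autogroup overlays described above, here is a small example using the Python ldap3 library; the server URI, bind DN, password, and group DN are placeholders for your own environment, and this snippet is not part of the debops.slapd role.

from ldap3 import Server, Connection, SUBTREE

# Placeholder connection details -- adjust to your own directory.
server = Server("ldaps://ldap.example.org")
conn = Connection(server,
                  user="cn=admin,dc=example,dc=org",
                  password="CHANGE-ME",
                  auto_bind=True)

# Find every account whose memberOf attribute points at a given group;
# the overlay keeps this attribute up to date on the member entries.
group_dn = "cn=admins,ou=Groups,dc=example,dc=org"
conn.search(search_base="ou=People,dc=example,dc=org",
            search_filter=f"(memberOf={group_dn})",
            search_scope=SUBTREE,
            attributes=["uid", "memberOf"])

for entry in conn.entries:
    print(entry.entry_dn, list(entry.uid))

Note that memberOf is maintained server-side, so the client only needs a plain equality filter; no recursive group expansion is required.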
https://docs.debops.org/en/master/ansible/roles/slapd/slapd-overlays.html
2021-11-27T02:55:02
CC-MAIN-2021-49
1637964358078.2
[]
docs.debops.org
Date: Sat, 9 Dec 1995 02:09:42 -0700 (MST) From: Terry Lambert <[email protected]> To: [email protected] (Gary D. Kline) Cc: [email protected], [email protected], [email protected], [email protected] Subject: Re: tcpdump Message-ID: <[email protected]> In-Reply-To: <[email protected]> from "Gary D. Kline" at Dec 8, 95 10:22:38 pm Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help > Will enabling the pcaudio driver let xset work to do > key-clicks?? Uh.. if you hack your console driver to handle "keyclick on/off" and make noise for it, and if you hack you XFree86 to ask for keyclicks on and off, then the plain-ol-speaker driver will do that. No one who spends enough time at a keyboard that the changes would be trivial can stand keyclick, though. 8-). Terry Lambert [email protected] --- Any opinions in this posting are my own and not those of my present or previous employers. Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=854194+0+/usr/local/www/mailindex/archive/1995/freebsd-questions/19951203.freebsd-questions
2021-11-27T03:13:34
CC-MAIN-2021-49
1637964358078.2
[]
docs.freebsd.org
Lab Overview - HOL-1928-HCI - VxRail Getting Started. This lab introduces VxRail 4.5 and VMware Virtual SAN. In this lab, you will learn and explore the following Lab Module List: Module 1: Getting Started (10 Minutes) - Basic Module 2: Monitoring and Maintenance (10 minutes) - Intermediate Module 3: Cluster Expansion - Adding nodes (10 minutes) - Basic Module 4: High Availability options for Rack and Datacenter failures - (5 Minutes) Basic Module 5: Space efficient options - (5 Minutes) Basic Module 6: Data Security (5 Minutes) - Basic Module 7: Data Protection (5 Minutes) - Basic Module 8: Upgrading Infrastructure Software (10 minutes) - Intermediate. Dell EMC VxRail Appliances, the fastest growing hyper-converged systems worldwide, are the standard for simplifying and modernizing VMware environments, regardless of where an organization starts or ends its IT transformation. The only HCI appliances powered by Dell EMC PowerEdge platforms and fully integrated and pre-tested with VMware vSAN, they seamlessly extend existing VMware environments, simplifying deployment and enabling IT organizations to leverage in-house expertise and operational processes. Tight integration with VMware technologies ensures that VxRail Appliances across the infrastructure can be easily managed from a central location. VxRail accelerates the adoption of HCI and creates certainty in IT transformation. VxRail Appliances are the standard for transforming VMware environments. They quickly and easily integrate into existing VMware ecosystems, removing IT lifecycle complexity while simplifying deployment, administration and management. As such they are an integral infrastructure option for modernizing and automating IT via IT Transformation, Digital Transformation and Workplace Transformation. For more information on VxRail please visit:. As an optimized and supported VMware-based solution, the appliances also integrate with VMware's cloud management platform and end-user computing solutions. VxRail is also a foundational infrastructure platform which makes it simple to introduce advanced VMware SDDC offerings like NSX, vRealize Automation, and vRealize Operations. The following image describes the software that is included with each and every VxRail. As the world's most configurable appliances, Dell EMC VxRail provides extreme flexibility with purpose-built appliances that are designed to address any use case, including big data, analytics, 2D/3D visualization, or collaboration applications. VxRail Appliances, built with the latest PowerEdge servers based on Intel Xeon Scalable processors, deliver more predictable high performance with up to 2x more IOPS while cutting response times in half. The VxRail Appliance family offers GPU optimized, storage dense, high performance computing, and entry level options - to give you the perfect match for your specific HCI workload requirements. Module 1: Getting Started Connecting to your vSphere Client - You will now connect to the vSphere Web Client session which you will use throughout the lab. Click on the Google Chrome icon located on the Windows Taskbar or on the Desktop. Log in to the VMware vSphere Web Client using the following credentials: User name: [email protected] Password: Password123! We will now verify the configuration of the new VxRail Appliance, checking the resources that are currently under the management of the vCenter Server. Click on the Hosts and Clusters button at the Navigator pane on the left 1. Click on vcenter01.demo.local on the Navigator pane 2. 
Select the Summary tab 3. Observe on the Navigator pane that under the vCenter Server vcenter01.demo.local there is one datacenter, vlab-dc, and one cluster, vlab-cluster. vlab-cluster is the VxRail cluster. 4. Note that the vCenter in this lab environment is running version 6.5 Build 8024368. 1. Select vcenter01.demo.local on the Navigator pane 2. Select the Configure tab on the main pane 3. Select Storage Providers 4. Click on the Synchronize button Congratulations on completing Module 1. Proceed to any module below which interests you most. To end your lab click on the END button. Module 2 - Monitoring and maintaining the logical and physical health of the VxRail cluster In this module you will navigate through the VxRail Manager interface to become more familiar with the options available to monitor the health indicators of the VxRail cluster, and how these functions can simplify the management of your environment. You will also have the opportunity to execute a few hardware maintenance simulations. Make sure you are connected to the VxRail Manager interface. If not, click on the Google Chrome icon located on the Windows Taskbar or on the Desktop. If asked to acknowledge the security exception, click on the Advanced link, and then click on Proceed to vxm.demo.local; otherwise go directly to the Log On page 1. Use the following credentials to log in to VxRail Manager: 2. Click Authenticate We will have to temporarily unmute the cluster's health status. If you see the orange notice on the VxRail main page, please perform the following steps: 1. Click on Config on the right tab 2. In the Config page select General and scroll down to "Cluster Health Monitoring" 3. Select On to turn on Health Monitoring 4. Click Apply The VxRail Manager dashboard shows system health and support resources at a glance, including expansion status, overall system health, support, community activity, and event history. Here is a brief explanation of the information sections in the dashboard: When a new upgrade package is downloaded or installed, the status of the upgrade task will be displayed in the dashboard. When a new node is detected by VxRail Manager, the node information will be displayed in the upper left portion of the screen. You should already see the node EMCVLAB40000000 in your dashboard. Note: this node will be employed in the next lab module. The Overall System Health area shows the high-level system status of your VxRail Appliance. Status is shown as one of the following: Note: A Critical status message may be displayed during the execution of the labs. This can be ignored. VxRail community shows the most recent articles and other content from the online VxRail community. Support displays status and links to support resources, including: Access to ESRS (EMC Secure Remote Services) is not allowed in this virtual lab environment, which explains the heartbeat message in the support area. Event history displays the most recent system events. 1. Click on the Events tab within VxRail Manager The VxRail Manager Events tab displays a list of current events. The events list can be sorted by ID, Severity, or Time. New critical events are displayed in red. If a physical component is listed in the 'Event Details', the 'Component ID' field will have a link to the Health > Physical screen to facilitate visualization and identification of the component. A CSV file with the event messages can be created, and exported/downloaded by the web browser. 1. Select HEALTH on the vertical bar within VxRail Manager 2. 
Select the Logical tab at the top of the screen. This screen displays CPU, memory, and storage usage for your entire cluster, individual appliances, and individual nodes. The color-coded status for storage IOPS, CPU usage, and memory usage indicates the following: Observe that in the storage information display, VxRail Manager provides a summary of total provisioned and used capacity which can be used to identify over-provisioning levels. Note on the upper part of the screen that you can select to display information either for the 'Cluster' as a whole or for a specific node. The product serial numbers of the hosts (PSNT) are used to identify the hosts. You may need to scrowdown the page to see the VxRail Manager Appliance information with Health > Logical tab. 1. Click on the appliance serial number EMCVLAB1000000 Scroll down to the ESXi Nodes area Note that in this screen we have the host name associated with the PSNT selected, node6001-dev.demo.local in our example. This view provides the status of the host components. By clicking on the '>' expand sign we can obtain more information about the components. This is a fast way to check the status of the host components. 1. Scroll up and click Physical tab The Physical tab of the VxRail Manager Health window displays information about the hardware components of your appliance. A graphical representation of the appliances in your cluster makes it easy to navigate for event and status information. 1. Make sure you are on the Physical tab view. 2. Click on the Appliance image You can view an appliance's status and information such as ID, serial number, service tag, and individual appliance components. You can drill down to see status and information for appliance components such as disks, compute nodes, NIC ports, and power supplies. In the upper left part of the screen you can see that the service tag of our first appliance is 5HB4YK2. This is a P570 Model. In the main part of the screen we have a detailed view about the front end and back end characteristics of the appliance. In case of problems with any of the "Customer Replaceable" hardware components, the failed component is highlighted to facilitate identification. The front view provides disk drive information. To simplify serviceability, VxRail has pre-defined slots for the capacity drives as well as cache drives of each disk group. In the P570 models we can have up to 4 Disk Groups per node with a maximum of 5 capacity disks per group. The first 20 slots that we see in the front view image are reserved for capacity drives and the last 4 slots are reserved for cache drives. We can observe that we only have 3 capacity disks in our first disk group. Scroll down to the disk information display Observe that the disk type is HDD, and the capacity available for use is 1.09TB This is the cache drive of disk group 1 Observe that this is an SSD drive Note that the 'Remaining Write Endurance' is displayed for all flash drives. Monitoring of the wear level of the flash drives is done automatically by VxRail Manager. In case the endurance of any flash drive falls below a pre-determined threshold, the system will send alert messages to the support center, in addition to logging an event. From the same disk information window we can initiate a drive replacement procedure. 1. Click on the Replace Disk link Note that in this virtual lab we will execute a simulated drive replacement procedure, for illustration only. 1. 
Click on Continue A pre-check is executed to ensure that the hosts are in the appropriate state and that the cluster health allows the execution of the procedure. After the pre-check is complete, click on Continue In a real environment VxRail performs a disk clean-up and displays a status bar showing progress. At the end of the cleanup the disk will be ready to be replaced. Because we are in a virtual environment, this cleanup procedure will fail. Please Click on Cancel Click Confirm on the 'Abort Disk Replacement' pop-up dialog box Click Done 1. Hover the mouse over the graphics and click on the node in the Back View Under node information we can obtain the BIOS firmware revision, ESXi and VIB versions, Boot device information, and BMC firmware revision. The easy access to this information can facilitate serviceability. 1. Hover the mouse over the graphics and click on the Network interface card. This screen will provide us with the MAC addresses, link speed and status of the ports. 1. Hover the mouse over the graphics and click on the Power Supply This screen will provide us with the serial number, revision number and part number. The node shut down procedure can be quite useful when replacing certain hardware components or performing other maintenance procedures. This procedure perform a series of checks to ensure that the host and the cluster are in a state that will allow the execution of a clean procedure. 1. For a simulation Hover the mouse over the graphics and click on the Node 2. Click on the Shutdown link Leave unchecked the box: "Move powered-off and suspended virtual machines to the other hosts" Click Confirm on the 'Shut Down Node' pop-up dialog box. Like in the disk replacement procedure, a pre-check is executed to ensure that the hosts are in the appropriate state and that the cluster health allows the execution of a node shutdown procedure, noting that in the case of a node shutdown additional verifications have to be executed. After the pre-check is complete, click Continue The procedure will put the host in maintenance mode and then shut it down. Wait for the message indicating that the node is powered off before proceeding During host maintenance procedures VxRail Manager mutes the health monitoring. When this happens, an alert is displayed in orange on the top of the screen. Once completed the node information will display that the node is powered off We will have to unmute the System Health Monitoring before proceeding to the next exercise. 1. Click on the Config icon 2. Select the General tab on top of the screen 3. Scroll down to Cluster Health Monitoring 4. Select the option On for Health Monitoring 5. Click Apply Ignore alert messages about maintenance activity in progress. The orange bar should disappear. There are situations in which the shutdown of the entire cluster is required; for example, when the appliances are being physically relocated. For these situations VxRail manager provides a cluster shutdown function that simplifies and automates the time of this entire process. This can be quite useful, especially when the cluster has a large number of hosts. On the same Config > General view, scroll down to Shut Down Cluster Click Shut Down button Click Confirm on the 'Shut Down Cluster' pop-up dialog box The confirmation for shut down cluster will display. Press Confirm. As we saw in the previous exercises, a pre-check has to be executed to ensure that the cluster and nodes are in the proper state for a normal shutdown. 
One check in particular is that all customer virtual machines have been shut down, to ensure a graceful shutdown and a clean restart afterwards. After the Pre-check is complete, click the Shut Down button. After the previous host maintenance procedure, it will be necessary to unmute the System Health Monitoring. Scroll up to Cluster Health Monitoring Select the option On for Health Monitoring Click Apply Congratulations on completing Module 2. Proceed to any module below which interests you most. To end your lab click on the END button. Module 3 - Adding a node to an existing VxRail Cluster In this module you will learn how to add a node to your VxRail cluster, which is a very simple process. One of the core benefits provided by VxRail is to allow a configuration to start small, at the right cost to satisfy the current demands, and then grow the configuration as needed, in small increments. Note: Starting with VxRail software version 4.5.150 the first 3 nodes in a cluster must be identical (previous versions require the first 4 nodes to be identical). Additionally, VxRail clusters must be entirely all flash or entirely hybrid. When we power on a new ESXi node that is connected to the same network as the VxRail cluster, this node is automatically discovered by VxRail Manager. The information that a new node has been detected is displayed on the VxRail Manager dashboard. 1. Click Dashboard on the vertical bar 2. Select the node to be added to the cluster 3. Click on Add Nodes Note: Up to 6 nodes can be selected at a time for the cluster expansion procedure. Enter the following credentials: Click Next During the installation of this system we provisioned 4 IP addresses to each network, but only configured 3 hosts. Because we have an extra IP available in each network we can proceed without any changes. However, if only 3 IP addresses were provisioned, we would have to explicitly enter a new IP address for each of the 3 networks being maintained in this step: Management, vMotion and vSAN. Click Next The final steps of the node expansion process are to Validate the configuration and Confirm the Build request. The Cluster Expansion process can be monitored on the VxRail Manager Dashboard while in progress. Upon completion, the Cluster Expansion section in the Dashboard disappears, and the health and other information about the new node can be observed as already demonstrated in the previous Cluster Monitoring and Maintenance module. We will NOT carry out the Validate and Build steps because of resource constraints in the virtual lab. In a production VxRail cluster, a node can take between 7 and 10 minutes to be added. Click Cancel now In this module we demonstrated the process to perform the cluster expansion. Please proceed to the next lab module. Congratulations on completing Module 3. Proceed to any module below which interests you most. To end your lab click on the END button. Module 4 - Defining fault domains in the cluster configuration When Fault Domains are enabled, vSAN ensures that each protection component is placed in a separate fault domain. Enabling fault domains is necessary when trying to protect against rack, room, floor and local site failures. The purpose of this module is to increase your level of familiarity with the fault domain definition. You will now connect to the vSphere Web Client. Click on the Google Chrome icon located on the Windows Taskbar or on the Desktop. Log in to the VMware vSphere Web Client using the following credentials: User name: [email protected] Password: Password123! 
Navigate to Fault Domains & Stretched Cluster configuration page. 1. Select the vlab-cluster within your Hosts and Clusters page 2. Click on the Configure tab 3. Scroll down to Fault Domains & Stretched Cluster 4. Click on the + sign In the New Fault Domains configuration enter the following information: 1. In the Name field, type FD01 2. Select a host to be inserted in the Fault Domain. In this first example, select node6001-dev.demo.local 3. Click OK Repeat the process for all hosts, creating three different fault domains(FD01 to FD03) At the end of the configuration you will have three Fault Domains defined (you may need to click on the Refresh icon for the fault domains you have just created to be displayed). In our example we only have three hosts, but consider a larger configuration with 16 hosts and 4 racks; in this case, we would be able to allocate 4 hosts to each of the 4 fault domains that we have defined, and place the hosts of each FD in their own Rack, providing then an efficient way to protect against rack failures. Without the Fault Domain definition, each host would be its own Fault Domain. Congratulations on completing Module 4. Proceed to any module below which interests you most. To end your lab click on the END button. Module 5 - Deduplication and Compression The De-duplication and Compression feature is enabled in this vSAN Cluster. In this module we will check the space savings obtained from de-duplication and observe the object types created for metadata management. We will not enable/disable the de-duplication feature because it requires a rolling reformat of all the disks. You will now connect to the vSphere Web Client. Click on the Google Chrome icon located on the Windows Taskbar or on the Desktop. Login to the VMware vSphere Web Client using the following credentials: User name: [email protected] Password: Password123! Once logged in to vCenter Web Client navigate to Hosts and Clusters 1. Select the vlab-cluster within your Hosts and Clusters page 2. Click on the Configure tab 3. Scroll down to vSAN General Settings Observe in the vSAN is Turned ON area that the state of Deduplication and compression feature is Enabled Let's go to the next step. We will not execute the enable / disable function because it requires a disk reformat that can take more than 20 minutes in this lab environment. You will now check the space savings from Deduplication and Compression. 1. Navigate to Storage tab within the Navigator panel 2. Select the VxRail-Virtual-SAN-Datastore 1. On the Navigator pane, ensure that the VxRail-Virtual-SAN-Datastore is selected 2. On the main view, select Monitor 3. Select vSAN The Deduplication and Compression Overview section provides the capacity figures via Used Before and Used After charts. Let us study the numbers on the right side of the main pane reported as USED BEFORE and USED AFTER. Before data reduction we were using near 177 GB in our virtual cluster. After dedup and compression was enabled, and after running a "dedup friendly" workload, the amount of used capacity has reduced to approximately 15 GB, this is a reduction of ~12X and reported as a ratio. We want to note that the ratio of data reduction is totally dependent on the application data. It is also important to observe that Deduplication and Compression reserves about 5% of the total raw capacity to store the deduplication metadata. In our virtual lab the Deduplication and compression overhead is 7.62 GB and our total allocated area is 143.95 GB. 
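For reference, the data reduction ratio reported by vSAN is simply the logical capacity used before deduplication and compression divided by the physical capacity used afterwards; the short Python sketch below redoes the arithmetic with the approximate figures observed in this lab, which is where the ~12x number comes from.

# Approximate capacity figures observed in this lab environment.
used_before_gb = 177.0   # logical capacity before deduplication and compression
used_after_gb = 15.0     # physical capacity consumed after data reduction
overhead_gb = 7.62       # space reserved for deduplication metadata (about 5% of raw)

ratio = used_before_gb / used_after_gb
saved_gb = used_before_gb - used_after_gb
print(f"Reduction ratio: {ratio:.1f}x, saved: {saved_gb:.0f} GB, metadata overhead: {overhead_gb} GB")
# Reduction ratio: 11.8x, saved: 162 GB, metadata overhead: 7.62 GB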
Congratulations on completing Module 5. Proceed to any module below which interests you most. To end your lab click on the END button. Module 6 - VxRail Encryption Native support for Data at Rest Encryption (DARE) was introduced with vSAN 6.6, and is available on VxRail release 4.5. Encryption can be enabled on both Hybrid and All Flash models. In this module we will illustrate a few concepts and components needed to enable the native Data at Rest Encryption. We highly recommend reading the VMware's Data at Rest Encryption guide that is available for download at The core benefit of implementing Data at Rest Encryption at the storage level is that we can provide a higher level of data security without losing the benefits of data reduction features, such as deduplication and compression. When encryption occurs within a virtual machine at the host level, the chances of finding duplicate data blocks are significantly reduced. Also, data that might have once been easily compressible, is likely to be no longer as compressible. By moving the encryption to the storage system, encryption can be done after data reduction services are applied, as data is being written to persistent media, preserving the ability to optimize the use of the storage capacity. There are three parties participating in vSAN Encryption domain of trust: 1. Key Management Server (KMS) or a KMS Cluster 2. vCenter 3. vSphere Hosts with vSAN enabled (vSAN host) VMware vCenter and vSphere hosts can only use a KMS after establishing a trust with the KMS. A digital certificate must be provided to the KMS from the vCenter environment. A Key Management Server (KMS) has to be available to provide standards-compliant lifecycle management of encryption keys. Tasks such as key creation, activation, deactivation, and deletion of encryption keys are performed by Key Management Servers. vCenter Server provides a central location for Key Management Server configuration that is available to be used by either vSAN Encryption or VM Encryption. Certificates used to establish the trust with the KMS are persisted into the VMware Endpoint Certificate Store (VECS). These certificates are shared by both vSAN Encryption and VM Encryption. To ensure proper trust between the hosts and the KMS, certificates and the KEK_ID (Key Encryption Key) are pushed to vSphere hosts for vSAN Encryption. Using the KEK_ID and KMS configuration, hosts can directly communicate with the KMS cluster without a dependency of vCenter being available. The KMS should be external to the vSphere-VSAN cluster being encrypted. Choosing a KMS Server solution that provides a resilient and available KMS infrastructure is an important part of the vSAN Encryption design. A list of Key Management Server solutions compatible with vSAN Encryption and VM Encryption can be found in the VMware site. Key Managers are provided by 3rd party vendors and at the time of this writing, two vendors are on the hardware compatibility list for Key Managers: HyTrust and Dell/EMC Cloudlink () Once the trust is established between the KMS and vCenter, a vSAN cluster (with vSAN Enterprise Licensing) may use vSAN Encryption. After the KMS has been configured, vSAN Encryption is easily enabled through the cluster management UI in the vSphere Web Client, by configuring vSAN's general settings. vSAN Encryption is a configuration option that affects the entire cluster, and requires all disks to be reformatted. 
A common recommendation is to enable encryption before loading applications onto the system in order to avoid the overhead of the reformatting. When encrypting a system that is already in use, consider the following options: Congratulations on completing Module 6. Proceed to any module below which interests you most. To end your lab click on the END button. Module 7 - Creating and Managing Snapshots VMware's Snapshots provide the ability to capture a point-in-time state of a Virtual Machine. This includes the VM's storage, memory and other devices such as Virtual NICs. Using the Snapshot Manager in vSphere Web Client, administrators can create, revert or delete VM snapshots. A chain of up to 32 snapshots per VM is supported. This module briefly demonstrates the Snapshot functionality. You will now connect to the vSphere Web Client. Click on the Google Chrome icon located on the Windows Taskbar or on the Desktop. Log in to the VMware vSphere Web Client using the following credentials: User name: [email protected] Password: Password123! At the Navigator pane of your vSphere Web Client session 1. Right-click the VxRail Manager VM 2. Click Snapshots 3. Click Take Snapshot Note: Starting with vSAN 6.0 U2, you have the ability to take 32 snapshots of a single VM. A name is automatically generated with a timestamp to facilitate the identification of an image when reverting, but this name can be modified at will. You can choose to include the VM's memory as part of the snapshot operation. However, when the memory content is part of the snapshot, the snapshot takes longer to complete. Click OK The 'Create virtual machine snapshot' task should complete in seconds. At the Navigator pane of your vSphere Web Client session Right-click the VxRail Manager VM Click Snapshots Click Manage Snapshots You will now revert the VM image to the state it was in when we took the snapshot. The revert image process will suspend the VM. You will power it on and reconnect later. Congratulations on completing Module 7. Proceed to any module below which interests you most. To end your lab click on the END button. Module 8 - Upgrading infrastructure software This is the last module. In this module you will install updates for the system software installed on your VxRail Appliance. The software that makes up the VxRail Appliance includes VMware ESXi, VMware vCenter, vSAN and VxRail Manager. When a new software version is available for upgrade, CONFIG in the left navigation bar displays a highlighted number. In this virtual lab, to expedite the process, we executed the initial step of the upgrade, which is to load the composite bundle. We will now re-initiate the upgrade process, but will not have to wait for the file download. Enter the following credentials: Click Login The page will be refreshed. The page should display that VxRail is ready to upgrade your cluster. Note that VxRail Manager is the only component included in our bundle. 1. This is an upgrade from VxRail Manager 4.5.100 to 4.5.150. 2. Click Continue Root privileges are required to continue. Enter the credentials for the VxRail Manager upgrade 1. vCenter Server Administrator Account <<<< Please note here that the domain in this case is vsphere.local >>>> 2. VxRail Manager account 3. Click Submit. Monitor progress. The time to execute this upgrade will vary depending on the amount of resources available in the lab infrastructure. It will likely take more than 5 minutes for the lab to complete. 
In a production VxRail environment, the upgrade of VxRail Manager takes a few minutes to complete and is followed by a reboot of the VxRail Manager VM. This is the last step of the lab. You can either conclude the lab now or wait for the completion. If you decide to wait for the completion of the upgrade process: Congratulations on completing Module 8. Proceed to any module below which interests you most. To end your lab click on the END button. Conclusion Thank you for participating in the VMware Hands-on Labs. Be sure to visit to continue your lab experience online. Lab SKU: HOL-1928-01-HCI Version: 20200210-210718
https://docs.hol.vmware.com/HOL-2019/hol-1928-01-hci_html_en/
ImunifyAV antivirus is a website antivirus that scans user websites for detecting malware codes and monitoring the domain blacklists of Google, Yandex, and other resources. The antivirus detects and deletes malware scripts such as web-shells, backdoors, phishing pages, trojans, etc. ImunifyAV official website. Note ImunifyAV does not scan archives. Licenses The module has two versions that deliver the following functions: Revisium Antivirus Free - unlimited checks; - only administrators can run antivirus checks; - "By users" mode allows scanning the whole directory of a selected user including his websites starting from /var/www/<user>/; - "By domain" mode allows scanning the whole directory of a web-domain; - can't cure and delete infected files. Revisium Antivirus Premium - only administrators can run antivirus checks; - scheduled website scanning; - cure and delete infected files; - store copies of cured files; Installing the module The module with a free version is installed automatically. Navigate to Integration → Modules → ImunifyAV (ex. Revisium). Click on Trial to activate a free version or Buy to order the Premium license. Before you install the module, make sure that: - a public IP address is assigned to the server with ISPmanager; - PHP 7.1 can use the functions putenv and passthru. Go to Web-server settings→ PHP → select PHP 7.1 → Settings. Check that "putenv" and "passthru" are not specified for the "disable_functions" variable. Note PHP 7.1 and the required extensions (ioncube, posix, intl, json) will be installed and activated automatically when installing the antivirus module. Configuring the module Note The system configures the same antivirus settings both for domains and users. Perform the following steps to set up the module: - Go to Tools→ ImunifyAV (ex. Revisium) → Settings. - Select the file types to scan: - Quick-check — the antivirus will check critical files only ( ph*,htm*, js,txt,tpl and other critical files). This helps reduce server load and increase scanning speed dramatically. - Diasable Quick-check for full scanning. Skip media files — select the checkbox not to scan media files and documents. You can select the checkbox Optimize by speed to scan files from cache folders selectively. It speeds up the scanning process with the same level of malware detection; - Max concurrent threads. Possible values: "1", "2", "4". The optimal value is 0,5 *number of available server kernels. - Max allowed memory per scanning (Mb) — configures how much memory is allowed for a single scanning process. If some websites fail to scan try to increase this value. Possible values: "256Mb", "384Mb", "512Mb", "1024Mb". - Set the Log level to increase the logging level. Possible values:"Full' and 'Regular". - Select the Max. scanning time for 1 site to set the time to scan a website. Possible values:"1 hour", "3 hours", "12 hours", "24 hours", "Unlimited". - Check domain blacklisted status — if the option is on, the antivirus will check a domain for blacklisted status in Google and antivirus services. - Enable the option Auto update antivirus databases to keep the ImunifyAV bases up to date. - Automatic scanning parameters: - Scheduled scanning — set the interval of automatic website scanning. Possible values: "Never", "Once a month", "Daily", "Once a week", "Once a month". - In the Start at fieldset the time when the scanning process will start automatically. - — select the checkbox to notify administrator on malware detection after scheduled scanning. 
Enter the Email for notifications in the field that will open. - Select the checkbox to use an external SMTP server instead of the common PHP mail() function. SMTP server — enter the URL of the SMTP server; SMTP user — enter the user login for the SMTP server; SMTP password — enter the user password for the SMTP server; SMTP port — enter the port to connect to the SMTP server. Enable the option Enable SSL for SMTP when the SMTP connection needs to go over SSL. - Banner settings: - Select the Malware detection banner checkbox to show the banner in ISPmanager upon malware detection. - Select the Misconfigured notifications banner checkbox to show the banner in ISPmanager when email notifications are not configured. - Number of days to keep sets a period in days to keep original versions of cleaned files. This option is available only in the Premium version. Possible values: "7", "14", "30". - Trim malicious files instead of deleting them — select the checkbox not to delete files when malware is detected but to trim them instead. The website will work correctly after automatic scanning if the malicious files are not included in other files or the database. This option is available only in the Premium version. Management tools Navigate to Tools → ImunifyAV (ex. Revisium). Scanning modes There are two scanning modes: - By users — the system will check user directories, including all of their domains. Domain reputation is not checked. - By domains — the system will check domain directories for viruses and check domain reputation for blacklist statuses. To change the mode click By users or By domains. Scanning To start the scanning process, click the following buttons: - Scan all — scan all domains/users; - Scan — scan the selected domain/user only. If the system detects malware objects, the infected domain/user will be marked as "infected". You will see the following buttons on the toolbar: - Report — view the detailed report to see detected files; - Cleanup — cure the files according to the scanning settings. Once completed, the status in the list of domains/users will change to "Cured". The list will show the number of cured threats and the date and time when the cleanup process started. Clicking the "Undo" button will restore the original files. Note You can undo the operation only for all cured domains/users that have original copies. You cannot restore a single file. Copies of the files before they were cured are stored in the temporary directory /usr/local/mgr5/var/raisp_data/backups/. ImunifyAV logs are stored in /usr/local/mgr5/var/raisp_data/log.
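As a quick sanity check of the PHP prerequisites and the paths mentioned above, the following shell sketch can be run on the ISPmanager server. The path to the PHP binary is an assumption (it depends on how PHP 7.1 is installed on your system); the data paths are the ones documented above.

# Confirm the PHP version and that putenv/passthru are not disabled
# (replace /usr/bin/php with the PHP 7.1 binary ISPmanager actually uses).
/usr/bin/php -v
/usr/bin/php -i | grep -i disable_functions

# Inspect the ImunifyAV data locations documented above.
ls /usr/local/mgr5/var/raisp_data/backups/        # copies of files saved before curing (Premium)
tail -n 50 /usr/local/mgr5/var/raisp_data/log/*   # most recent ImunifyAV log output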
https://docs.ispsystem.com/ispmanager6-lite/integrations/integration-with-imunifyav
Share, clone, and mirror dashboards in Splunk Observability Cloud 🔗 Splunk Observability Cloud dashboards are groupings of charts and visualizations of metrics that make it quick and easy to find the metrics you monitor. This topic explains how to share, clone and mirror dashboards to suit your specific needs. See the following sections for more information on how to: Share a dashboard 🔗 The following section describes how to share a dashboard from Splunk Observability Cloud. Use the share menu option 🔗 This method allows you to share a copy of the current state of a dashboard. Copies include unsaved changes at the time you share, and auto-expire unless the recipient saves them. Sharing a copy is useful for when you make a change that you want to show to team members, but don’t want to modify the original dashboard. In the share menu there are two ways to share the dashboard: Share directly 🔗 To share a dashboard copy, select Share from the Dashboard actions menu. A pop-out window will open with sharing options. To share directly, Click Add Recipients and add email addresses or select any available notification integrations as your sharing method. After adding recipients, click Share. Recipients will receive a link to the dashboard copy. When they open it, they can edit and save their copy without affecting the original. Caution Administrators can add email addresses of people who aren’t members of your organization. Recipients who aren’t members will be asked to create a user account before they can view the shared content. Be sure the email addresses you enter for non-members are correct, especially if the item you are sharing contains any sensitive or proprietary information. Copy link 🔗 Alternatively, you might want to send out an email or post a link to the dashboard copy on an internal communication tool as opposed to sharing directly to each individual member. To do this, click Copy next to the link provided in the pop-out window and paste this link into your communication. If you share the dashboard link with a group, be aware that only members of your organization with an account are able to view the dashboard. Use the browser URL 🔗 You can share a dashboard browser URL. However, using the URL shares the original dashboard rather than a copy. Share browser URLs for a dashboard with caution; any changes made to the dashboard are visible to all viewing the dashboard, and can overwrite changes others have made to the dashboard. Clone a dashboard 🔗 There are various reasons you might want to clone a dashboard. Dashboard cloning allows you to modify an existing dashboard without making changes to the original. You can also clone a dashboard that is read-only or that you don’t have write permissions for in order to modify it. To clone a dashboard, select Save as from the dashboard actions menu. You’ll be asked to specify a dashboard name and the dashboard group in which to save the new dashboard. Rename the dashboard to avoid multiple dashboards with the same name. You can save the dashboard to an existing custom or user dashboard group, or you can create a new dashboard group. If you create a new group, the group is added as a Custom Dashboard group. Mirror a dashboard 🔗 Available in Enterprise Edition Dashboard mirroring allows the same dashboard to be added to multiple dashboard groups or multiple times to one dashboard group. A dashboard can be edited from any of its mirrors and the changes made are reflected on all mirrors. 
However the dashboard name, filters, and dashboard variables can all be customized at the mirror level, without affecting other mirrors. These local customizations allow users to see the same metrics in the same charts, but the mirror can be filtered so that each user is presented with the metrics relevant to them. Why mirror dashboards? 🔗 Common use cases for dashboard mirrors: You create. You have created a dashboard in your user dashboard group, which another user in your organization has found useful. They want to follow any changes you make to the dashboard so they add a mirror of your dashboard to their user dashboard group. Dashboard mirror example 🔗 The following example provides a common use case of dashboard mirroring: In this example, there is a non-mirrored dashboard named CPU Utilization in dashboard group Project‑1. The dashboard is filtered on AWS availability zone us‑east‑1a. The Project-2 dashboard group needs the same dashboard but filtered on AWS availability zone us‑east‑1b. Since filters are customizable within each mirrored dashboard this can be accomplished by adding a mirror of this dashboard in the Project‑2 dashboard group, and filtering on AWS availability zone us‑east‑1b. Now there are two mirrors of the same dashboard, seen in two different places with different filters. If dashboard group Project-1 edited the mirror in group Project‑1, by adding a chart “Mean CPU Utilization”, the filter in this dashboard is still AWS availability zone us‑east‑1a. When they open the mirror in group Project‑2, they will see the added chart, but with the groups AWS availability zone us‑east‑1b filter applied. Create a mirror 🔗 Any Splunk Observability Cloud user can create a mirror of any custom or user dashboard. Users simply need write permission for the dashboard group where they want to place the mirror. Note If you are working with a dashboard you control, be sure to set appropriate write permissions on the dashboard, to prevent inadvertent edits by other users who might be viewing a mirror of the dashboard. To create a mirror, select Add a mirror from the dashboard actions menu. When you create a mirror, you have a number of ways to customize how the mirror will be displayed in the target dashboard group. Dashboard mirrors can also be added to the same group as the current dashboard. This is useful if you want to have quick access to the same set of charts but with different filters or dashboard variable settings. Select a dashboard group 🔗 Select or search for a group where you want the mirror to be placed. Dashboard groups for which you don’t have write permissions will not be available as targets for the mirror. Customize the dashboard name and description 🔗 Specify a name for the mirror in the target group. The default name suggested when creating a new dashboard mirror is the name of the original dashboard, which may be different from the displayed name of the dashboard you are currently mirroring if that dashboard itself is a mirror. Specify a new description for the mirror in the target group. As with the name, the default will come from the dashboard. A dashboard or mirror’s description is visible when you select Dashboard Info from the Actions menu. Customize dashboard filters 🔗 Specify any filters you want applied to the mirror. By default, the mirror will have the same filter(s) as the dashboard you are mirroring. Setting filters here means the target mirror will have different default filters applied. 
Filters can also be set later by any user with write permissions for that group. Once the dashboard mirror is created, there are two ways to customize the dashboard filters; from the Overrides bar or the Dashboard Info tab. As with any dashboard, changes you make to filters on the Overrides bar are applied immediately, which lets you modify your view and explore your data in real time. If you apply filters and want them to be displayed on the mirror by default, click Save to save the mirror with the filters applied. Once saved, the new filters will be stored in the customization section in the dashboard info tab. On the Dashboard Info tab, anyone with dashboard write permissions can apply filters to the dashboard (in the top portion of the tab). These filters will be applied to all mirrors that don’t have filter customizations applied.. Customize dashboard variables 🔗 You can specify various dashboard variable settings that will apply to this mirror in this dashboard group. Select Dashboard Variables from the mirror’s Actions menu. When these settings are saved, the dashboard variable and the suggested values now reflect the customizations you specified. Implementation notes about fitler and variable customization on mirrored dashboards: You can make changes directly on the Overrides bar; if you save the mirror, these settings will be saved as default values in the Variable Details section of the Dashboard Variables tab. When you save customization options that you set in the Dashboard Variables tab, these changes are automatically saved as default settings for this mirror. On the Dashboard Variables tab, anyone with dashboard write permissions can add, delete, and edit dashboard variables and their settings. These variables will be applied to all mirrors that don’t have variable customizations applied. If you want to override the dashboards default variables with no variables, you can leave the value blank. Doing so means you are overriding the dashboard variable default value with a setting of “no default value.” Dashboard mirrors and permissions 🔗 Dashboard mirrors can only inherit permissions from the dashboard group where they are saved to. Therefore, when you create a new dashboard mirror, teams and users with read and/or write permissions on the dashboard group will have the same permissions on all mirrors. The following table shows the prerequisites you need to do dashboard mirror actions. * When you view the Mirrors of this dashboard list on the Dashboard Info page of a dashboard, not all mirrors might show up. The list only shows mirrors for which you have read permissions. ** When a dashboard has one or more mirrors, the Delete dashboard option is not available; it is replaced with the Remove mirror option. If all mirrors have been removed from the groups in which they were placed, the Delete dashboard option will be available on the last mirror. *** If you want to delete the last dashboard mirror in the same group as the original dashboard, and the original dashboard inherits permissions from this group, you have to change the permission settings of the original dashboard so that it inherits permissions from another group.
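The sharing, cloning, and mirroring workflows above are all UI-driven. If you want to script a quick inspection of a dashboard before mirroring it (for example, to review which filters you will override on the mirror), the Splunk Observability Cloud (SignalFx) REST API exposes dashboard objects. Treat the endpoint path, realm placeholder, and X-SF-TOKEN header below as assumptions to verify against the current API reference; the realm, token, and dashboard ID are placeholders.

import json
import urllib.request

REALM = "us1"                  # your organization's realm (placeholder)
TOKEN = "YOUR_ACCESS_TOKEN"    # API access token (placeholder)
DASHBOARD_ID = "AbCdEfGhIjK"   # taken from the dashboard URL (placeholder)

# Assumed endpoint shape: https://api.<realm>.signalfx.com/v2/dashboard/<id>
url = "https://api.{}.signalfx.com/v2/dashboard/{}".format(REALM, DASHBOARD_ID)
req = urllib.request.Request(url, headers={"X-SF-TOKEN": TOKEN})

with urllib.request.urlopen(req) as resp:
    dashboard = json.load(resp)

# Review the name and filters before deciding what to customize on the mirror.
print(dashboard.get("name"))
print(json.dumps(dashboard.get("filters", {}), indent=2))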
https://docs.splunk.com/observability/data-visualization/dashboards/dashboard-share-clone-mirror.html
Admin Tech Command Use the Admin Tech command to collect system status information for a device in a tar file, to aid in troubleshooting and diagnostics. - From the device table, select the device. - Click the More Actions icon to the right of the row and click Admin Tech. - In the Generate admin-tech File window, limit the contents of the Admin Tech tar file if desired: - The Include Logs checkbox is selected by default. Deselect this checkbox to omit any log files from the compressed tar file. Log files are stored in the /var/log directory on the local device. - Select the Include Cores checkbox to include any core files. Core files are stored in the /var/crash directory on the local device. - Select the Include Tech checkbox to include any files related to device processes (daemons) and operations. These files are stored in the /var/tech directory on the local device. - Click Generate. A tar file is created which contains the contents of various files on the local device. This file has a name similar to 20150709-032523-admin-tech.tar.gz, where the numeric fields are the date and time. - Send the admin-tech.tar.gz file to your Viptela customer support contact. Interface Reset Command Use the Interface Reset command to shut down and then restart an interface on a device in a single operation, without having to modify the device's configuration. - From the device table, select the device. - Click the More Actions icon to the right of the row and click Interface Reset. - In the Interface Reset window, select the desired interface. - Click Reset.
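Returning to the Admin Tech command: once the admin-tech archive has been downloaded, it can be unpacked and reviewed before being sent to support. The commands below are a minimal sketch using the example file name from above; the internal layout of the archive depends on the device and the options selected.

tar -tzf 20150709-032523-admin-tech.tar.gz | head    # list what was collected
mkdir admin-tech
tar -xzf 20150709-032523-admin-tech.tar.gz -C admin-tech
ls admin-tech                                        # expect logs, core files, and tech output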
https://sdwan-docs.cisco.com/Product_Documentation/vManage_Help/Release_18.1/Tools/Operational_Commands
How can I set up the right permissions in BigQuery? To use this functionality, first create the service account you want to impersonate. Then grant users that you want to be able to impersonate this service account the roles/iam.serviceAccountTokenCreator role on the service account resource. Then, you also need to grant the service account the same role on itself. This allows it to create short-lived tokens identifying itself, and allows your human users (or other service accounts) to do the same. More information on this scenario is available here. Once you've granted the appropriate permissions, you'll need to enable the IAM Service Account Credentials API. Enabling the API and granting the role are eventually consistent operations, taking up to 7 minutes to fully complete, but usually fully propagating within 60 seconds. Give it a few minutes, then add the impersonate_service_account option to your BigQuery profile configuration.
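As a hedged, concrete sketch of those steps: the gcloud commands below grant the token-creator role (to a human user and to the service account itself) and enable the IAM Service Account Credentials API, and the profile snippet shows where impersonate_service_account goes. The project, account names, and dataset are placeholders; double-check the profile fields against the dbt-bigquery documentation for your version.

# Grant the token-creator role on the service account (names are placeholders).
gcloud iam service-accounts add-iam-policy-binding dbt-runner@my-project.iam.gserviceaccount.com \
  --member="user:analyst@example.com" \
  --role="roles/iam.serviceAccountTokenCreator"

gcloud iam service-accounts add-iam-policy-binding dbt-runner@my-project.iam.gserviceaccount.com \
  --member="serviceAccount:dbt-runner@my-project.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"

# Enable the API used to mint short-lived tokens.
gcloud services enable iamcredentials.googleapis.com --project=my-project

# profiles.yml: a minimal OAuth-based target with impersonation.
my_profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: my-project
      dataset: analytics
      threads: 4
      impersonate_service_account: dbt-runner@my-project.iam.gserviceaccount.com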
https://6167222043a0b700086c2b31--docs-getdbt-com.netlify.app/faqs/bq-impersonate-service-account-setup
BigQuery configurations Use project and dataset in configurations schemais interchangeable with the BigQuery concept dataset databaseis interchangeable with the BigQuery concept of project For our reference documentation, you can declare project in place of database. This will allow you to read and write from multiple BigQuery projects. Same for dataset. Using table partitioning and clusteringUsing table partitioning and clustering Partition clausePartition clause BigQuery supports the use of a partition by clause to easily partition a table by a column or expression. This option can help decrease latency and cost when querying large tables. Note that partition pruning only works when partitions are filtered using literal values (so selecting partitions using a subquery won't improve performance). The partition_by config can be supplied as a dictionary with the following format: {"field": "<field name","data_type": "<timestamp | date | datetime | int64 >","granularity": "< hour | day | month | year >"# Only required if data_type is "int64""range": {"start": <int>,"end": <int>,"interval": <int>}} Partitioning by a date or timestampPartitioning by a date or timestamp When using a datetime or timestamp column to partition data, you can create partitions with a granularity of hour, day, month, or year. A date column supports granularity of day, month and year. Daily partitioning is the default for all column types. If the data_type is specified as a date and the granularity is day, dbt will supply the field as-is when configuring table partitioning. - Source code - Compiled code {{ config(materialized='table',partition_by={"field": "created_at","data_type": "timestamp","granularity": "day"})}}selectuser_id,event_name,created_atfrom {{ ref('events') }} Partitioning with integer bucketsPartitioning with integer buckets If the data_type is specified as int64, then a range key must also be provied in the partition_by dict. dbt will use the values provided in the range dict to generate the partitioning clause for the table. - Source code - Compiled code {{ config(materialized='table',partition_by={"field": "user_id","data_type": "int64","range": {"start": 0,"end": 100,"interval": 10}})}}selectuser_id,event_name,created_atfrom {{ ref('events') }} Additional partition configsAdditional partition configs If your model has partition_by configured, you may optionally specify two additional configurations: require_partition_filter(boolean): If set to true, anyone querying this model must specify a partition filter, otherwise their query will fail. This is recommended for very large tables with obvious partitioning schemes, such as event streams grouped by day. Note that this will affect other dbt models or tests that try to select from this model, too. partition_expiration_days(integer): If set for date- or timestamp-type partitions, the partition will expire that many days after the date it represents. E.g. A partition representing 2021-01-01, set to expire after 7 days, will no longer be queryable as of 2021-01-08, its storage costs zeroed out, and its contents will eventually be deleted. Note that table expiration will take precedence if specified. {{ config(materialized = 'table',partition_by = {"field": "created_at","data_type": "timestamp","granularity": "day"},require_partition_filter = true,partition_expiration_days = 7)}} Clustering ClauseClustering Clause BigQuery tables can be clustered to colocate related data. 
Clustering on a single column: {{config(materialized = "table",cluster_by = "order_id",)}}select * from ... Clustering on a multiple columns: {{config(materialized = "table",cluster_by = ["customer_id", "order_id"],)}}select * from ... Managing KMS EncryptionManaging KMS Encryption Customer managed encryption keys can be configured for BigQuery tables using the kms_key_name model configuration. Using KMS EncryptionUsing KMS Encryption To specify the KMS key name for a model (or a group of models), use the kms_key_name model configuration. The following example sets the kms_key_name for all of the models in the encrypted/ directory of your dbt project. name: my_projectversion: 1.0.0...models:my_project:encrypted:+kms_key_name: 'projects/PROJECT_ID/locations/global/keyRings/test/cryptoKeys/quickstart' Labels and TagsLabels and Tags Specifying labelsSpecifying labels dbt supports the specification of BigQuery labels for the tables and views that it creates. These labels can be specified using the labels model config. The labels config can be provided in a model config, or in the dbt_project.yml file, as shown below. Configuring labels in a model file {{config(materialized = "table",labels = {'contains_pii': 'yes', 'contains_pie': 'no'})}}select * from {{ ref('another_model') }} Configuring labels in dbt_project.yml models:my_project:snowplow:+labels:domain: clickstreamfinance:+labels:domain: finance Specifying tagsSpecifying tags BigQuery table and view tags can be created by supplying an empty string for the label value. {{config(materialized = "table",labels = {'contains_pii': ''})}}select * from {{ ref('another_model') }} Policy tagsPolicy tags BigQuery enables column-level security by setting policy tags on specific columns. dbt enables this feature as a column resource property, policy_tags (not a node config). version: 2models:- name: policy_tag_tablecolumns:- name: fieldpolicy_tags:- 'need_to_know' Please note that in order for policy tags to take effect, column-level persist_docs must be enabled for the model, seed, or snapshot. Merge behavior (incremental models)Merge behavior (incremental models) The incremental_strategy config controls how dbt builds incremental models. dbt uses a merge statement on BigQuery to refresh incremental tables. The incremental_strategy config can be set to one of two values: merge(default) insert_overwrite Performance and costPerformance and cost The operations performed by dbt while building a BigQuery incremental model can be made cheaper and faster by using clustering keys in your model configuration. See this guide for more information on performance tuning for BigQuery incremental models. Note: These performance and cost benefits are applicable to incremental models built with either the merge or the insert_overwrite incremental strategy. The merge strategy The merge incremental strategy will generate a merge statement that looks something like: merge into {{ destination_table }} DESTusing ({{ model_sql }}) SRCon SRC.{{ unique_key }} = DEST.{{ unique_key }}when matched then update ...when not matched then insert ... The merge approach has the benefit of automatically updating any late-arriving facts in the destination incremental table. The drawback of this approach is that BigQuery must scan all source tables referenced in the model SQL, as well as the entirety of the destination table. This can be slow and costly if the incremental model is transforming very large amounts of data. 
Note: The unique_key configuration is required when the merge incremental strategy is selected. The insert_overwrite strategy The insert_overwrite strategy generates a merge statement that replaces entire partitions in the destination table. Note: this configuration requires that the model is configured with a Partition clause. The merge statement that dbt generates when the insert_overwrite strategy is selected looks something like: /*Create a temporary table from the model SQL*/create temporary table {{ model_name }}__dbt_tmp as ({{ model_sql }});/*If applicable, determine the partitions to overwrite byquerying the temp table.*/declare dbt_partitions_for_replacement array<date>;set (dbt_partitions_for_replacement) = (select as structarray_agg(distinct date(max_tstamp))from `my_project`.`my_dataset`.`sessions`);/*Overwrite partitions in the destination table which matchthe partitions in the temporary table*/merge into {{ destination_table }} DESTusing {{ model_name }}__dbt_tmp SRCon FALSEwhen not matched by source and {{ partition_column }} in unnest(dbt_partitions_for_replacement)then deletewhen not matched then insert ... For a complete writeup on the mechanics of this approach, see this explainer post. Determining partitions to overwriteDetermining partitions to overwrite dbt is able to determine the partitions to overwrite dynamically from the values present in the temporary table, or statically using a user-supplied configuration. The "dynamic" approach is simplest (and the default), but the "static" approach will reduce costs by eliminating multiple queries in the model build script. Static partitionsStatic partitions To supply a static list of partitions to overwrite, use the partitions configuration. {% set partitions_to_replace = ['timestamp(current_date)','timestamp(date_sub(current_date, interval 1 day))'] %}{{config(materialized = 'incremental',incremental_strategy = 'insert_overwrite',partition_by = {'field': 'session_start', 'data_type': 'timestamp'},partitions = partitions_to_replace)}}with events as (select * from {{ref('events')}}{% if is_incremental() %}-- recalculate yesterday + todaywhere date(event_timestamp) in ({{ partitions_to_replace | join(',') }}){% endif %}),... rest of model ... This example model serves to replace the data in the destination table for both today and yesterday every day that it is run. It is the fastest and cheapest way to incrementally update a table using dbt. If we wanted this to run more dynamically— let’s say, always for the past 3 days—we could leverage dbt’s baked-in datetime macros and write a few of our own. Think of this as "full control" mode. You must ensure that expressions or literal values in the the partitions config have proper quoting when templated, and that they match the partition_by.data_type ( timestamp, datetime, date, or int64). Otherwise, the filter in the incremental merge statement will raise an error. Dynamic partitionsDynamic partitions If no partitions configuration is provided, dbt will instead: - Create a temporary table for your model SQL - Query the temporary table to find the distinct partitions to be overwritten - Query the destination table to find the max partition in the database When building your model SQL, you can take advantage of the introspection performed by dbt to filter for only new data. The max partition in the destination table will be available using the _dbt_max_partition BigQuery scripting variable. 
Note: this is a BigQuery SQL variable, not a dbt Jinja variable, so no jinja brackets are required to access this variable. Example model SQL: {{config(materialized = 'incremental',partition_by = {'field': 'session_start', 'data_type': 'timestamp'},incremental_strategy = 'insert_overwrite')}}with events as (select * from {{ref('events')}}{% if is_incremental() %}-- recalculate latest day's data + previous-- NOTE: The _dbt_max_partition variable is used to introspect the destination tablewhere date(event_timestamp) >= date_sub(date(_dbt_max_partition), interval 1 day){% endif %}),... rest of model ... Controlling table expirationControlling table expiration By default, dbt-created tables never expire. You can configure certain model(s) to expire after a set number of hours by setting hours_to_expiration. {{ config(hours_to_expiration = 6) }}select ... Authorized ViewsAuthorized Views If the grant_access_to config is specified for a model materialized as a view, dbt will grant the view model access to select from the list of datasets provided. See BQ docs on authorized views for more details. models:+grant_access_to:- project: project_1dataset: dataset_1- project: project_2dataset: dataset_2 {{ config(grant_access_to=[{'project': 'project_1', 'dataset': 'dataset_1'},{'project': 'project_2', 'dataset': 'dataset_2'}]) }} Views with this configuration will be able to select from objects in project_1.dataset_1 and project_2.dataset_2, even when they are located elsewhere and queried by users who do not otherwise have access to project_1.dataset_1 and project_2.dataset_2. LimitationsLimitations The grant_access_to config is not thread-safe when multiple views need to be authorized for the same dataset. The initial dbt run operation after a new grant_access_to config is added should therefore be executed in a single thread. Subsequent runs using the same configuration will not attempt to re-apply existing access grants, and can make use of multiple threads.
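Tying several of the options above together, here is one way a daily event model might combine time partitioning, a partition filter requirement, clustering, labels, and the insert_overwrite strategy. All configuration keys come from the sections above; the column and model names are illustrative.

{{
  config(
    materialized = 'incremental',
    incremental_strategy = 'insert_overwrite',
    partition_by = {
      "field": "created_at",
      "data_type": "timestamp",
      "granularity": "day"
    },
    require_partition_filter = true,
    cluster_by = ["user_id"],
    labels = {'domain': 'clickstream'}
  )
}}

select
    user_id,
    event_name,
    created_at
from {{ ref('events') }}
{% if is_incremental() %}
  -- rebuild only recent partitions; _dbt_max_partition is set by dbt on BigQuery
  where date(created_at) >= date_sub(date(_dbt_max_partition), interval 1 day)
{% endif %}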
https://6167222043a0b700086c2b31--docs-getdbt-com.netlify.app/reference/resource-configs/bigquery-configs
In this article Overview The Version Details page is the space where you can view information about a Version of the API, the Assets it contains, and Permissions. Besides performing all lifecycle actions for an API published on the API Portal, you can view information about your API Versions, as well as view, add, and delete API policies. The page displays the following details: - Created: The date when the API is created. - Owner: The user who owns the Asset. Typically, the owner inherits the asset when it is moved or the creator is no longer a user in the Org. - Updated: The time the API Version is last updated. - Versions: The status of the API version. Currently, all API versions are unpublished. - Policy: The policies applied against the API Version, listed by label. Click to display the API policy dialog window. - Type: The type of API policy for each policy listed. - Owner: The owner of the API policy. - Updated: The time the API policy is last updated. - Status: Indicates if the API policy is Enabled or Disabled. You can enable or disable an API policy in the settings dialog window. - Description: The description of the API that is added during creation of the API or API Version. - Tags: Metadata added to the API and API version. For details about the assets used in the API version, see Managing API Version Assets. Viewing API Policy Information To view the API policies applied at the various levels for a version of an API: - In the API Manager console, click the target API version. - In the Version Details tab of the API version, click View Applied Policies to display the Related Policies dialog window. - In the Applied Policies tab, the following information is displayed: - Name: The name entered in the Label field for the API Policy. Click the name to display the settings dialog window for that API policy. - Type: The type of policy. - Path: The path of the policy. - Click the Hierarchy tab to view the level (Org, API, version) at which the API policy is applied. - Click Close to return to the Version Details tab. You can change policies from the Version Details page. Editing Version Details You can edit the description and tags for an API Version without having to republish the API. You can also change the Snaplex if the API version is unpublished. - Navigate to the target API, and click the version whose details you are changing. - Click Edit Details, and change the content in the following fields: - Description Known Issue In 4.26, to change a Snaplex for an API Version, you must create a new version, update the Snaplex, and then publish a new version of the API. Clicking Edit Details from the API > Versions page displays the Server field for selecting another Snaplex, but the field is disabled. Generating the Specification for your API You can preserve the API specification used in the version in the published Developer Portal. Clicking Generate Specification enables the API consumer to examine its contents in the Open API Specification format when viewing the documentation for an API on the Developer Portal.
https://docs-snaplogic.atlassian.net/wiki/spaces/DRWIP/pages/1989411097/Version+Details
This reference page is linked to from the following overview topics: Dependency Graph Plug-in Basics, Example: Voxelizer Node, Example: Bounding Box Deformer, Parent class descriptions, MPxNode and its derived classes, Deformers and Topology, Implementing a Deformer Node, Deformer Node Example, Using the Maya Python API. Base class for user-defined deformers. Deformers are full dependency nodes and can have attributes and a deform() method. In general, to derive the full benefit of the Maya deformer base class, it is suggested that you do not write your own compute() method. Instead, write the deform() method, which is called by the MPxDeformerNode's compute() method. However, there are some exceptions when you would instead write your own compute(), namely: In the case where you cannot simply override the deform() method, the following example code shows one possible compute() method implementation. This compute() example creates an iterator for the deformer set corresponding to the output geometry being computed. Note that this sample compute() implementation does not do any deformation, and does not implement handling of the nodeState attribute. If you do choose to override compute() in your node, there is no reason to implement the deform() method, since it will not be called by the base class. #include <MPxDeformerNode.h> Deformation details: the setDeformationDetails() method allows the plug-in node to inform the system that it intends to deform components other than just positions. It should typically be called in advance of any deformation taking place (e.g. in postConstructor()), not in the deform() method. If it is called from deform(), the setting will take effect the next time the DG causes the deformation to be calculated. A companion accessor retrieves the value set by setDeformationDetails(); see the documentation of that method for the interpretation of the value. One callback method can be overridden and is called whenever the set this deformer is operating on is modified; it passes in a selection list of the items being added or removed. There is also a method that returns the name of this class (reimplemented from MPxNode).
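Because the example referenced in the text did not survive extraction, here is a separate minimal sketch of the recommended approach (overriding deform() rather than compute()). The class name and the trivial Y-offset deformation are invented for illustration, and the usual plug-in registration boilerplate (creator(), initialize(), initializePlugin()) is omitted.

#include <maya/MPxDeformerNode.h>
#include <maya/MItGeometry.h>
#include <maya/MDataBlock.h>
#include <maya/MDataHandle.h>
#include <maya/MMatrix.h>
#include <maya/MPoint.h>
#include <maya/MStatus.h>
#include <maya/MTypeId.h>

// Illustrative deformer: offsets each point along Y, scaled by the envelope.
class OffsetDeformer : public MPxDeformerNode
{
public:
    static MTypeId id;

    virtual MStatus deform(MDataBlock& block,
                           MItGeometry& iter,
                           const MMatrix& /*localToWorldMatrix*/,
                           unsigned int /*geomIndex*/)
    {
        MStatus status;

        // envelope is inherited from the deformer base class.
        MDataHandle envHandle = block.inputValue(envelope, &status);
        if (!status)
            return status;
        const float env = envHandle.asFloat();

        // The iterator visits only the points in this deformer's set membership.
        for (; !iter.isDone(); iter.next())
        {
            MPoint pt = iter.position();
            pt.y += env;               // trivial deformation for illustration
            iter.setPosition(pt);
        }
        return MS::kSuccess;
    }
};

MTypeId OffsetDeformer::id(0x00000001);  // placeholder id; use a properly registered id in practice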
https://docs.autodesk.com/MAYAUL/2013/ENU/Maya-API-Documentation/cpp_ref/class_m_px_deformer_node.html
Dataform provides methods that enable you to easily reference another dataset in your project using the ref function. This provides two advantages: In this step you'll learn how to manage dependencies in Dataform. You'll now create a second table called customers, following the same process as before: click New Dataset, select the table template, name it customers, and click Create. Define your dataset:

SELECT
  customers.id AS id,
  customers.first_name AS first_name,
  customers.last_name AS last_name,
  customers.email AS email,
  customers.country AS country,
  COUNT(orders.id) AS order_count,
  SUM(orders.amount) AS total_spent
FROM
  dataform-demos.dataform_tutorial.crm_customers AS customers
  LEFT JOIN ${ref('order_stats')} orders
    ON customers.id = orders.customer_id
WHERE
  customers.id IS NOT NULL
  AND customers.first_name <> 'Internal account'
  AND country IN ('UK', 'US', 'FR', 'ES', 'NG', 'JP')
GROUP BY 1, 2, 3, 4, 5

Add this query to customers.sqlx, below the config block. Note the ref function: it enables you to reference any other table defined in a Dataform project. In the compiled query, the ref function has been replaced with the fully qualified table name. Once you can see that your query is valid you can publish the table to your warehouse by clicking on Publish Table. View the dependency tree on the Dependency Tree tab. You now have two tables created in your warehouse, one called order_stats and one called customers. customers depends on order_stats and will start running when order_stats is completed. For more detailed info on managing dependencies in Dataform, see our docs.
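For orientation, a customers.sqlx file would look roughly like the sketch below, with the query from this step pasted under the config block. The config options shown here (type and description) and the abridged column list are assumptions about typical SQLX usage, not something this tutorial requires.

config {
  type: "table",
  description: "CRM customers enriched with order statistics"
}

SELECT
  customers.id AS id,
  customers.country AS country,
  COUNT(orders.id) AS order_count
FROM
  dataform-demos.dataform_tutorial.crm_customers AS customers
  LEFT JOIN ${ref('order_stats')} orders
    ON customers.id = orders.customer_id
GROUP BY 1, 2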
https://docs.dataform.co/getting-started-tutorial/managing-dependencies
Hello everyone! My company has been working on migrating users from their previous version of windows to Windows 10 1909. We use SCCM to deploy the update. Here's a little background info: We created a security group for Windows 10 1909 update and we have added all of our computers to that security group so they may receive the update. Ideally what was supposed to happen is once the computer is added to the group the computer should automatically receive the update in their Software Center. We have computers still on Windows 10 1703 and 1803 and we are moving them straight to 1909. We deployed the 1909 feature update to all of our computers through Software Central. A lot of our users who pass the compatibility test are having the issue where the feature 1909 update is not appearing in their -updates tab- in Software center. Though other computers have no issue viewing the update in their Software Center. We believe one issue could be that the Security Group for Windows 1909 Update is not putting the update file in the correct location. One solution we have found is manually adding the affected computers to the designated file or file path for the update to appear. However this is proving to be very tedious and time consuming, can someone help pinpoint where the issue is or can someone help present other solutions for the update to appear in the affected computers Software center?
https://docs.microsoft.com/en-us/answers/questions/574503/windows-10-1909-deployment-issues.html
Overview of Microsoft IntelliTest IntelliTest enables you to find bugs early, and reduces test maintenance costs. Using an automated and transparent testing approach, IntelliTest can generate a candidate suite of tests for your .NET code. Test suite generation can be further guided by correctness properties you specify. IntelliTest will even evolve the test suite automatically as the code under test evolves. Note IntelliTest is available in Enterprise edition only. It is supported for C# code that targets the .NET Framework. .NET Core and .NET Standard are not currently supported. Characterization tests IntelliTest enables you to determine the behavior of code in terms of a suite of traditional unit tests. Such a test suite can be used as a regression suite, forming the basis for tackling the complexity associated with refactoring legacy or unfamiliar code. Guided test input generation IntelliTest uses an open code analysis and constraint solving approach to automatically generate precise test input values; usually without the need for any user intervention. For complex object types, it automatically generates factories. You can guide test input generation by extending and configuring the factories to suit your requirements. Correctness properties specified as assertions in code will also be used automatically to further guide test input generation. IDE integration IntelliTest is fully integrated into the Visual Studio IDE. All of the information gathered during test suite generation (such as the automatically generated inputs, the output from your code, the generated test cases, and their pass or fail status) appears within the Visual Studio IDE. You can easily iterate between fixing your code and rerunning IntelliTest, without leaving the Visual Studio IDE. The tests can be saved into the solution as a Unit Test Project, and will be automatically detected afterwards by Visual Studio Test Explorer. Complement existing testing practices Use IntelliTest to complement any existing testing practices that you may already follow. If you want to test: - Algorithms over primitive data, or arrays of primitive data: - write parameterized unit tests - Algorithms over complex data, such as compiler: - let IntelliTest first generate an abstract representation of the data, and then feed it to the algorithm - let IntelliTest build instances using custom object creation and data invariants, and then invoke the algorithm - Data containers: - write parameterized unit tests - let IntelliTest build instances using custom object creation and data invariants, and then invoke a method of the container and recheck invariants afterwards - write parameterized unit tests that call different methods of the implementation, depending on the parameter values - An existing code base: - use Visual Studio's IntelliTest Wizard to get started by generating a set of parameterized unit tests (PUTs) The Hello World of IntelliTest IntelliTest finds inputs relevant to the tested program, which means you can use it to generate the famous Hello World! string. This assumes that you have created a C# MSTest-based test project and added a reference to Microsoft.Pex.Framework. If you are using a different test framework, create a C# class library and refer to the test framework documentation on how to set up the project. 
The following example creates two constraints on the parameter named value so that IntelliTest will generate the required string: using System; using Microsoft.Pex.Framework; using Microsoft.VisualStudio.TestTools.UnitTesting; [TestClass] public partial class HelloWorldTest { [PexMethod] public void HelloWorld([PexAssumeNotNull]string value) { if (value.StartsWith("Hello") && value.EndsWith("World!") && value.Contains(" ")) throw new Exception("found it!"); } } Once compiled and executed, IntelliTest generates a set of tests such as the following set: - "" - "\0\0\0\0\0" - "Hello" - "\0\0\0\0\0\0" - "Hello\0" - "Hello\0\0" - "Hello\0World!" - "Hello World!" Note For build issues, try replacing Microsoft.VisualStudio.TestPlatform.TestFramework and Microsoft.VisualStudio.TestPlatform.TestFramework.Extensions references with a reference to Microsoft.VisualStudio.QualityTools.UnitTestFramework. Read Generate unit tests with IntelliTest to understand where the generated tests are saved. The generated test code should include a test such as the following code: [TestMethod] [PexGeneratedBy(typeof(global::HelloWorldTest))] [PexRaisedException(typeof(Exception))] public void HelloWorldThrowsException167() { this.HelloWorld("Hello World!"); } It's that easy! Additional resources: - Watch the Channel 9 video - Read this overview on MSDN Magazine Important attributes - PexClass marks a type containing PUT - PexMethod marks a PUT - PexAssumeNotNull marks a non-null parameter using Microsoft.Pex.Framework; [..., PexClass(typeof(Foo))] public partial class FooTest { [PexMethod] public void Bar([PexAssumeNotNull]Foo target, int i) { target.Bar(i); } } - PexAssemblyUnderTest binds a test project to a project - PexInstrumentAssembly specifies an assembly to instrument [assembly: PexAssemblyUnderTest("MyAssembly")] // also instruments "MyAssembly" [assembly: PexInstrumentAssembly("Lib")] Important static helper classes - PexAssume evaluates assumptions (input filtering) - PexAssert evaluates assertions - PexChoose generates new choices (additional inputs) - PexObserve logs live values to the generated tests [PexMethod] void StaticHelpers(Foo target) { PexAssume.IsNotNull(target); int i = PexChoose.Value<int>("i"); string result = target.Bar(i); PexObserve.ValueForViewing<string>("result", result); PexAssert.IsNotNull(result); } Limitations This section describes the limitations of IntelliTest: Nondeterminism IntelliTest assumes that the analyzed program is deterministic. If it is not, IntelliTest will cycle until it reaches an exploration bound. IntelliTest considers a program to be non-determistic if it relies on inputs that IntelliTest cannot control. IntelliTest controls inputs provided to parameterized unit tests and obtained from the PexChoose. In that sense, results of calls to unmanaged or uninstrumented code are also considered as "inputs" to the instrumented program, but IntelliTest cannot control them. If the control flow of the program depends on specific values coming from these external sources, IntelliTest cannot "steer" the program towards previously uncovered areas. In addition, the program is considered to be non-determistic if the values from external sources change when rerunning the program. In such cases IntelliTest loses control over the execution of the program and its search becomes inefficient. Sometimes it is not obvious when this happens. Consider the following examples: - The result of the GetHashCode() method is provided by unmanaged code, and is not predictable. 
- The System.Random class uses the current system time to deliver truly random values. - The System.DateTime class provides the current time, which is not under the control of IntelliTest. Concurrency IntelliTest does not handle multithreaded programs. Native code IntelliTest does not understand native code, such as x86 instructions called through P/Invoke. It does not know how to translate such calls into constraints that can be passed to the constraint solver. Even for .NET code, it can only analyze code it instruments. IntelliTest cannot instrument certain parts of mscorlib, including the reflection library. DynamicMethod cannot be instrumented. The suggested workaround is to have a test mode where such methods are located in types in a dynamic assembly. However, even if some methods are uninstrumented, IntelliTest will try to cover as much of the instrumented code as possible. Platform IntelliTest is supported only on the X86, 32-bit .NETframework. Language In principle, IntelliTest can analyze arbitrary .NET programs, written in any .NET language. However, in Visual Studio it supports only C#. Symbolic reasoning IntelliTest uses an automatic constraint solver to determine which values are relevant for the test and the program under test. However, the abilities of the constraint solver are, and always will be, limited. Incorrect stack traces Because IntelliTest catches and "rethrows" exceptions in each instrumented method, the line numbers in stack traces will not be correct. This is a limitation by design of the "rethrow" instruction.
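A common way to work around the nondeterminism limitation described above is to pass time (or randomness) into the code under test as parameters, so that IntelliTest controls them like any other symbolic input. The following sketch is illustrative only; DiscountIsActive stands in for a hypothetical method under test and is not part of the IntelliTest samples.

using System;
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public partial class DiscountTest
{
    // Hypothetical logic under test: a discount is active until its expiry time.
    // Taking 'now' as a parameter (instead of reading DateTime.Now internally)
    // keeps the behavior deterministic from IntelliTest's point of view.
    private static bool DiscountIsActive(DateTime now, DateTime expiresAt)
    {
        return now <= expiresAt;
    }

    [PexMethod]
    public void DiscountRespectsExpiry(DateTime now, DateTime expiresAt)
    {
        bool active = DiscountIsActive(now, expiresAt);
        PexObserve.ValueForViewing<bool>("active", active);
        PexAssert.IsTrue(active == (now <= expiresAt));
    }
}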
https://docs.microsoft.com/en-us/visualstudio/test/intellitest-manual/?view=vs-2019
Fallback Test Bank To access the fallback test bank in the playground environment, the country DE and sort code 88888888 has to be entered when selecting a bank. In the "bank login method" selection, choose Test Transportweg Deutschland. Entering the following values as a PIN in the bank login step instructs the underlying systems to execute different scenarios that can happen at different banks: If "complete" is used as a PIN in the first step there is also the possibility to enter specific values for the next step to test even further: Also for the authorization-step there are some special keywords to create specific behaviors:
https://docs.openbanking.klarna.com/xs2a/test-bank-fallback.html
Use the Office 365 URLs and IP address ranges reference to make sure your network is configured correctly. Note These DNS records also apply to Teams, especially in a hybrid Teams and Skype for Business scenario, where certain federation issues could arise. External DNS records required for Office 365 Single Sign-On External DNS records required for SPF Important SPF records help prevent spoofing of your domain; the SPF record for Office 365 references spf.protection.outlook.com.
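As a concrete illustration, the SPF entry for a domain that sends outbound mail only through Office 365 is typically published as a single TXT record like the zone-file sketch below. The domain is a placeholder, and the exact value should be taken from the Microsoft 365 admin center for your tenant; keep in mind that a domain should publish only one SPF record.

; zone-file sketch; contoso.com is a placeholder domain
contoso.com.   3600   IN   TXT   "v=spf1 include:spf.protection.outlook.com -all"

You can verify what is currently published with a TXT lookup, for example: nslookup -type=TXT contoso.com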
https://docs.microsoft.com/en-us/microsoft-365/enterprise/external-domain-name-system-records?view=o365-worldwide
Prepare for cluster peering Before creating a cluster peering relationship, you must verify that the time on each cluster is synchronized with an external Network Time Protocol (NTP) server, and determine the subnets, ports, and passphrases that you want to use. If you are running ONTAP 9.2 or earlier, determine the passphrase that you want to use for each cluster peer relationship. The passphrase must include at least eight characters. Starting with ONTAP 9.3, you can generate the passphrase from the remote cluster while creating the cluster peer relationship. Identify the subnets, IP addresses, and ports that you will use for intercluster LIFs. By default, the IP address is automatically selected from the subnet. If you want to specify the IP address manually, you must ensure that the IP address either is already available in the subnet or can be added to the subnet later. Information about subnets is available in the Network tab. The following table assumes that each cluster has four nodes. If a cluster has more than four nodes, you can record the ports on another piece of paper.
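The preparation steps above correspond to a handful of ONTAP CLI commands. The sketch below is illustrative only: the addresses, ports, node names, and NTP server are placeholders, and the intercluster LIF syntax shown is the pre-ONTAP 9.6 style (-role intercluster), so check the command reference for your release.

# Keep cluster time synchronized with an external NTP server.
cluster time-service ntp server create -server time.example.com
cluster time-service ntp server show

# Create an intercluster LIF on a node (repeat per node; values are placeholders).
network interface create -vserver cluster1 -lif ic01 -role intercluster \
  -home-node cluster1-01 -home-port e0c -address 192.168.10.11 -netmask 255.255.255.0

# Create the peer relationship; on ONTAP 9.3 and later the passphrase can be
# generated on the remote cluster instead of being entered on both sides.
cluster peer create -peer-addrs 192.168.10.21,192.168.10.22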
https://docs.netapp.com/us-en/ontap-sm-classic/peering/task_preparing_for_cluster_peering.html
How do I…¶ This section contains a number of smaller topics with links and examples meant to provide relatively concrete answers for specific tool development scenarios. … deal with index/reference data?¶ Galaxy’s concept of data tables are meant to provide tools with access reference datasets or index data not tied to particular histories or users. A common example would be FASTA files for various genomes or mapper-specific indices of those files (e.g. a BWA index for the hg19 genome). Galaxy data managers are specialized tools designed to populate tool data tables. … cite tools without an obvious DOI?¶ In the absence of an obvious DOI, tools may contain embedded BibTeX directly. Futher reading: - bibtex.xml (test tool with a bunch of random examples) - bwa-mem.xml (BWA-MEM tool by Anton Nekrutenko demonstrating citation of an arXiv article) - macros.xml (Macros for vcflib tool demonstrating citing a github repository) … declare a Docker container for my tool?¶ Galaxy tools can be decorated to with container tags indicated Docker container ids that the tools can run inside of. The longer term plan for the Tool Shed ecosystem is to be able to automatically build Docker containers for tool dependency descriptions and thereby obtain this Docker functionality for free and in a way that is completely backward compatible with non-Docker deployments. Further reading: - Complete tutorial on Github by Aaron Petkau. Covers installing Docker, building a Dockerfile, publishing to Docker Hub, annotating tools and configuring Galaxy. - Another tutorial from the Galaxy User Group Grand Ouest. - Landing page on the Galaxy Wiki - Impementation details on Pull Request #401 … do extra validation of parameters?¶ Tool parameters support a validator element (syntax) to perform validation of a single parameter. More complex validation across parameters can be performed using arbitrary Python functions using the code file syntax but this feature should be used sparingly. Further reading: - validator XML tag syntax on the Galaxy wiki. - fastq_filter.xml (a FASTQ filtering tool demonstrating validator constructs) - gffread.xml (a tool by Jim Johnson demonstrating using regular expressions with validatortags) - code_file.xml, code_file.py (test files demonstrating defining a simple constraint in Python across two parameters) - deseq2 tool by Björn Grüning demonstrating advanced codefile validation. … check input type in command blocks?¶ Input data parameters may specify multiple formats. For example <param name="input" type="data" format="fastq,fasta" label="Input" /> If the command-line under construction doesn’t require changes based on the input type - this may just be referenced as $input. However, if the command-line under construction uses different argument names depending on type for instance - it becomes important to dispatch on the underlying type. In this example $input.ext - would return the short code for the actual datatype of the input supplied - for instance the string fasta or fastqsanger would be valid responses for inputs to this parameter for the above definition. While .ext may sometimes be useful - there are many cases where it is inappropriate because of subtypes - checking if .ext is equal to fastq in the above example would not catch fastqsanger inputs for instance. To check if an input matches a type or any subtype thereof - the is_of_type method can be used. For instance $input.is_of_type('fastq') would check if the input is of type fastq or any derivative types such as fastqsanger. 
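As a short, hedged sketch of that dispatch pattern (the executable name, its flags, and the output parameter name are invented for illustration), a command block can branch on is_of_type like this:

<command><![CDATA[
#if $input.is_of_type('fastq')
    ## any fastq subtype (e.g. fastqsanger) takes the fastq-specific flag
    hypothetical_tool --fastq '$input' --output '$out_file'
#else
    hypothetical_tool --fasta '$input' --output '$out_file'
#end if
]]></command>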
… handle arbitrary output data formats?¶ If the output format of a tool’s output cannot be known ahead of time, Galaxy can be instructed to “sniff” the output and determine the data type using the same method used for uploads. Adding the auto_format="true" attribute to a tool’s output enables this. <output name="out1" auto_format="true" label="Auto Output" /> … determine the user submitting a job?¶ The variable $__user_email__ (as well as $__user_name__ and $__user_id__) is available when building up your command in the tool’s <command> block. The following tool demonstrates the use of this and a few other special parameters available to all tools. … test with multiple value inputs?¶ To write tests that supply multiple values to a multiple="true" select or data parameter - simply specify the multiple values as a comma seperated list. Here are examples of each: … test dataset collections?¶ Here are some examples of testing tools that consume collections with type="data_collection" parameters. Here are some examples of testing tools that produce collections with output_collection elements. … test discovered datasets?¶ Tools which dynamically discover datasets after the job is complete, either using the <discovered_datasets> element, the older default pattern approach (e.g. finding files with names like primary_DATASET_ID_sample1_true_bam_hg18), or the undocumented galaxy.json approach can be tested by placing discovered_dataset elements beneath the corresponding output element with the designation corresponding to the file to test. <test> <param name="input" value="7" /> <output name="report" file="example_output.html"> <discovered_dataset designation="world1" file="world1.txt" /> <discovered_dataset designation="world2"> <assert_contents> <has_line line="World Contents" /> </assert_contents> </discovered_dataset> </output> </test> The test examples distributed with Galaxy demonstrating dynamic discovery and the testing thereof include: … test composite dataset contents?¶ Tools which consume Galaxy composite datatypes can generate test inputs using the composite_data element demonstrated by the following tool. Tools which produce Galaxy composite datatypes can specify tests for the individual output files using the extra_files element demonstrated by the following tool. … test index (.loc) data?¶ There is an idiom to supply test data for index during tests using Planemo. To create this kind of test, one needs to provide a tool_data_table_conf.xml.test beside your tool’s tool_data_table_conf.xml.sample file that specifies paths to test .loc files which in turn define paths to the test index data. Both the .loc files and the tool_data_table_conf.xml.test can use the value ${__HERE__} which will be replaced with the path to the directory the file lives in. This allows using relative-like paths in these files which is needed for portable tests. An example commit demonstrating the application of this approach to a Picard tool can be found here. These tests can then be run with the Planemo test command. … test exit codes?¶ A test element can check the exit code of the underlying job using the check_exit_code="n" attribute. … test failure states?¶ Normally, all tool test cases described by a test element are expected to pass - but on can assert a job should fail by adding expect_failure="true" to the test element. … test output filters work?¶ If your tool contains filter elements, you can’t verify properties of outputs that are filtered out and do not exist. 
… test metadata?¶
Output metadata can be checked using metadata elements in the XML description of the output.
… test tools installed in an existing Galaxy instance?¶
Do not use planemo; Galaxy should be used to test its tools directly. The following two commands can be used to test Galaxy tools in an existing instance.
$ sh run_tests.sh --report_file tool_tests_shed.html --installed
The above command specifies the --installed flag when calling run_tests.sh, which tells the test framework to test Tool Shed installed tools, and only those tools.
$ GALAXY_TEST_TOOL_CONF=config/tool_conf.xml sh run_tests.sh --report_file tool_tests_tool_conf.html functional.test_toolbox
The second command sets the GALAXY_TEST_TOOL_CONF environment variable, which restricts the testing framework to considering a single tool conf file (such as the default tools that ship with Galaxy in config/tool_conf.xml.sample, which must have their dependencies set up manually). The last argument to run_tests.sh, functional.test_toolbox, tells the test framework to run all the tool tests in the configured tool conf file.
Note
Tip: To speed up tests you can use a pre-migrated database file the way Planemo does by setting the following environment variable before running run_tests.sh.
$ export GALAXY_TEST_DB_TEMPLATE=""
… test tools against a package or container in a bioconda pull request?¶
First, obtain the artifacts of the PR by adding this comment: @BiocondaBot please fetch artifacts. In the reply one finds the links to the built package and docker image.
In order to test the tool with the package, add the following to the planemo call:
$ planemo test ... --conda_channels LINK_TO_PACKAGE,conda-forge,bioconda,defaults ...
For containerized testing, the docker image needs to be loaded:
$ curl -L "LINK_TO_DOCKER_IMAGE.tar.gz" | gzip -dc | docker load
A planemo test will then simply use this image:
$ planemo test ... --biocontainers --no_conda_auto_init ...
… interactively debug tool tests?¶
It can be desirable to interactively debug a tool test. In order to do so, start planemo test with the option --no_cleanup and inspect the output. After Galaxy starts up, the tests commence. At the start of each test one finds a message: ( <TOOL_ID> ) > Test-N. After some upload jobs, the actual tool job is started (it is the last one before the next test is executed). There you will find a message like
Built script [/tmp/tmp1zixgse3/job_working_directory/000/3/tool_script.sh]
In this case /tmp/tmp1zixgse3/job_working_directory/000/3/ is the job dir. It contains some files and directories of interest:
- tool_script.sh: the bash script generated from the tool's command and version_command tags plus some boilerplate code
- galaxy_3.sh (note that the number may be different): a shell script setting up the environment (e.g. paths and environment variables), starting tool_script.sh, and postprocessing (e.g. error handling and setting metadata)
- working: the job working directory
- outputs: a directory containing the job stderr and stdout
For a tool test that uses a conda environment to resolve the requirements, one can simply change into working and execute ../tool_script.sh (this works as long as no special environment variables are used; in that case ../galaxy_3.sh needs to be executed after cleaning the job dir).
By editing the tool script one may understand and fix problems in the command block faster than by rerunning planemo test over and over again. Alternatively, one can change into the working dir and load the conda environment (the code to do so can be found in tool_script.sh: . PATH_TO_CONDA_ENV activate). Afterwards one can execute individual commands, e.g. those found in tool_script.sh, or variants of them.
For a tool test that uses Docker to resolve the requirements, one needs to execute ../galaxy_3.sh, because it executes docker run ... tool_script.sh in order to rerun the job (with a possibly edited version of the tool script). In order to run the docker container interactively, execute the docker run ... /bin/bash command that you find in ../galaxy_3.sh (i.e. omitting the call of tool_script.sh) with the added parameter -it.
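A condensed sketch of this interactive debugging loop for a conda-resolved tool (the temporary path is the example quoted above and will differ on every run, and my_tool.xml is a hypothetical tool file name):

$ planemo test --no_cleanup my_tool.xml
# find the "Built script [...]" message in the test output, then:
$ cd /tmp/tmp1zixgse3/job_working_directory/000/3/working
$ ../tool_script.sh       # rerun the generated command directly
# edit ../tool_script.sh and rerun until the command behaves as expected,
# then port the fix back into the tool's command block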
https://planemo.readthedocs.io/en/latest/writing_how_do_i.html
2021-11-27T02:17:10
CC-MAIN-2021-49
1637964358078.2
[]
planemo.readthedocs.io
Deployments
What is a deployment?
A deployment is the process of delivering your web app to your provisioned server(s). Cleavr lets you manage various facets of the deployment process to fit your particular needs.
Trigger a deployment
Before you trigger a deployment, ensure you have your web app's environment variables set up on your servers and the appropriate hooks configured and enabled. There are several ways to trigger a deployment:
- Clicking the Deploy button for the web app
- Push-to-deploy will deploy when new commits are submitted
- GitHub Actions, available for NodeJS apps, will trigger a deployment with either method above
Cancel a deployment
If you need to cancel an active deployment for whatever reason, you can cancel on the web app's deployment page.
Deployment status
As a deployment is occurring, a deployment in process status will be visible. Once complete, the status will show as Active or Error. If the deployment was cancelled, the status will show as Cancelled for that deployment.
Health checks
Once a deployment is complete, Cleavr will display ping results and status codes from various global locations.
Deployment troubleshooting
In case of an error when deploying a web app, select the deployment row of interest to view the deployment actions. Each deployment action will show a status for that action. If one action fails, then the subsequent steps will be marked as Aborted.
For the action that errors, select the row and then select the Log at the bottom of the page to view the log details for that action. Typically, the reason for failure can be found in the log.
If more information is required, first double-check that the order of deployment hooks and the details of the hooks make sense for the application you are deploying. If the hooks are appropriate, then the next recommended place to check for troubleshooting is the logs. Helpful logs are located in:
- Web App Log Report (for NodeJS applications)
- PM2 Logs - click the Load PM2 Logs button in the deployment details page for NodeJS apps
- Services Logs - located in the server section
For app specific troubleshooting help, check out
Deployment Rollback
If you need to roll back to a previous deployment, select the Rollback option located under the overflow menu.
https://docs.cleavr.io/deployments/
2021-11-27T02:51:00
CC-MAIN-2021-49
1637964358078.2
[array(['/images/deployment/cleavr-deployment-details.png', 'Cleavr deployment details for active and past deployments'], dtype=object) array(['/images/deployment/cleavr-deployment-ping-results.png', 'Cleavr deployment ping results from global servers'], dtype=object) array(['/images/deployment/cleavr-deployment-step-error.png', 'Cleavr deployment step error details'], dtype=object) array(['/images/deployment/cleavr-deployment-rollback.png', 'Cleavr deployment rollback'], dtype=object) ]
docs.cleavr.io
Upcoming Webinars
There are currently no scheduled webinars. Sign up to receive updates about upcoming webinars and trainings. You can also check out our YouTube channel for more videos.
Previous Webinars
Getting Started
- Webinar series: Intro to Flywheel
- Webinar series: Intro to the Flywheel CLI
- Webinar series: Advance UI - Data Management
Importing Data
Manage Data
- OHBM 2020: Data Curation and Machine Learning Presentation
- OHBM 2020: Open science, reuse, and reproducibility
https://docs.flywheel.io/hc/en-us/articles/360044852353-Webinars
2021-11-27T03:11:22
CC-MAIN-2021-49
1637964358078.2
[]
docs.flywheel.io
- Gitaly and NFS deprecation - Known kernel version incompatibilities - Fast lookup of authorized SSH keys - NFS server - NFS client - Testing NFS - NFS in a Firewalled Environment - Known issues - Troubleshooting. File system performance can impact overall GitLab performance, especially for actions that read or write to Git repositories. For steps you can use to test file system performance, see File system Performance Benchmarking. Gitaly and NFS deprec need to unset the feature flag by using: sudo gitlab-rake gitlab:features:unset_rugged If the Rugged feature flag is explicitly set to either true or false, GitLab uses:. We have noticed this behavior in an issue about refs not found after a push, where newly added loose refs can be seen as missing on a different client with a local dentry cache, as described in this issue. is still significant. Upgrade to Gitaly Cluster as soon as possible. Avoid will also affect performance. We recommend that the log files be stored on a local volume. For more details on the experience of using a cloud-based file systems with GitLab, see this Commit Brooklyn 2019 video. Avoid Finding the requests that are being made to NFS In case of NFS-related problems, it can be helpful to trace the file system requests that are being made by using perf: sudo perf trace -e 'nfs4:*' -p $(pgrep -fd ',' puma && pgrep -fd ',' unicorn) On Ubuntu 16.04, use: sudo perf trace --no-syscalls --event 'nfs4:*' -p $(pgrep -fd ',' puma && pgrep -fd ',' unicorn)
https://docs.gitlab.com/13.12/ee/administration/nfs.html
2021-11-27T02:04:51
CC-MAIN-2021-49
1637964358078.2
[]
docs.gitlab.com
Purpose
The CursorDBC parcel returns a cursor row identifier to the application after a SELECT FOR CURSOR statement.
Usage Notes
The information returned is meaningful only for use in a subsequent CursorHost parcel. The CursorDBC parcel is generated by the database.
Parcel Data
The following table lists field information for the CursorDBC parcel.
Fields
- Processor identifies the location of the row.
- Row identifies the row associated with the cursor.
https://docs.teradata.com/r/bh1cB~yqR86mWktTVCvbEw/uhguR7Ltana9mP94Jo_RUQ
2021-11-27T03:16:26
CC-MAIN-2021-49
1637964358078.2
[]
docs.teradata.com
Renders a Sprite for 2D graphics.

//This example outputs Sliders that control the red, green and blue elements of a sprite's color
//Attach this to a GameObject and attach a SpriteRenderer component
using UnityEngine;

public class Example : MonoBehaviour
{
    SpriteRenderer m_SpriteRenderer;
    //The Color to be assigned to the Renderer's Material
    Color m_NewColor;
    //These are the values that the Color Sliders return
    float m_Red, m_Blue, m_Green;

    void Start()
    {
        //Fetch the SpriteRenderer from the GameObject
        m_SpriteRenderer = GetComponent<SpriteRenderer>();
        //Set the GameObject's Color quickly to a set Color (blue)
        m_SpriteRenderer.color = Color.blue;
    }

    void OnGUI()
    {
        //Use the Sliders to manipulate the RGB components of the Color
        //Use the Label to identify the Slider
        GUI.Label(new Rect(0, 30, 50, 30), "Red: ");
        //Use the Slider to change the amount of red in the Color
        m_Red = GUI.HorizontalSlider(new Rect(35, 25, 200, 30), m_Red, 0, 1);

        //This Slider manipulates the amount of green in the GameObject
        GUI.Label(new Rect(0, 70, 50, 30), "Green: ");
        m_Green = GUI.HorizontalSlider(new Rect(35, 60, 200, 30), m_Green, 0, 1);

        //This Slider decides the amount of blue in the GameObject
        GUI.Label(new Rect(0, 105, 50, 30), "Blue: ");
        m_Blue = GUI.HorizontalSlider(new Rect(35, 95, 200, 30), m_Blue, 0, 1);

        //Set the Color to the values gained from the Sliders
        m_NewColor = new Color(m_Red, m_Green, m_Blue);
        //Set the SpriteRenderer to the Color defined by the Sliders
        m_SpriteRenderer.color = m_NewColor;
    }
}
https://docs.unity3d.com/2018.2/Documentation/ScriptReference/SpriteRenderer.html
2021-11-27T02:15:09
CC-MAIN-2021-49
1637964358078.2
[]
docs.unity3d.com
Armory-extended Halyard Deprecation
What does this mean?
On 2021 Sept 01, Armory-extended Halyard, Armory's proprietary extension of Halyard, will be deprecated with an End of Support date of 2021 Dec 30. This means that after 2021 Sept 01, no new development is done specific to the feature except for critical (P0/1) bugs and security CVE fixes. Support for critical (P0/1) bugs and security CVE fixes will end on 2021 Dec 30.
Additionally, releases after 2.26 require you to use the Armory Operator. Any current workflows that use Armory-extended Halyard may break if you try to upgrade to a release that is later than 2.26. We encourage all Armory-extended Halyard users to migrate to the Armory Operator, the only supported method of installing and configuring Armory's products beyond the 2.26 release.
Why is Armory removing support?
The Armory Operator has a superset of the functionality provided by Armory-extended Halyard, including a more native Kubernetes installation experience. In an effort to create the best possible installation experience, we are consolidating supported installation tools to only the Armory Operator. Focusing on a single installation experience allows us to develop requested features at a faster pace.
We understand that this decision may be disruptive to you, so we are giving as much advance notice as possible. For more information about how Armory handles feature and technology deprecation, please see.
Am I affected?
If your company currently uses Armory-extended Halyard, then yes. You can still access Armory-extended Halyard until 2021 Dec 30; however, Armory is unable to support issues solely related to Armory-extended Halyard that you may experience after 2021 Sept 01.
What do I need to do?
To ensure the best user experience possible, we recommend migrating to the Armory Operator. We have instructions for migrating to the Operator on this page:.
What happens if I don't act in time?
As mentioned above, if your company uses Armory-extended Halyard, you will not be able to install any releases beyond 2.26 unless you migrate to the Armory Operator. Additionally, after 2021 Dec 30, Armory cannot guarantee the availability of Armory-extended Halyard.
https://docs.armory.io/docs/feature-status/deprecations/halyard-deprecation/
2021-11-27T01:55:08
CC-MAIN-2021-49
1637964358078.2
[]
docs.armory.io
Deleting an endpoint Endpoints can serve content until they are deleted. Delete the endpoint if it should no longer respond to playback requests. You must delete all endpoints from a channel before you can delete the channel. If you delete an endpoint, the playback URL stops working. You can use the AWS Elemental MediaPackage console, the AWS CLI, or the MediaPackage API to delete an endpoint. For information about deleting an endpoint through the AWS CLI or MediaPackage API, see the AWS Elemental MediaPackage API Reference. To delete an endpoint (console) Access the channel that the endpoint is associated with, as described in Viewing channel details. On the channel details page, choose the endpoint name. On the endpoint details page, choose Delete endpoint. On the Delete Endpoints page, choose Save all.
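For reference, deleting an endpoint with the AWS CLI looks roughly like the following sketch; the channel and endpoint IDs are placeholders, and the exact options should be confirmed against the AWS CLI reference for MediaPackage.

# list the endpoints on a channel to find the endpoint ID
aws mediapackage list-origin-endpoints --channel-id my-channel
# delete the endpoint; its playback URL stops working
aws mediapackage delete-origin-endpoint --id my-channel-hls-endpoint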
https://docs.aws.amazon.com/mediapackage/latest/ug/endpoints-delete.html
2021-11-27T03:48:35
CC-MAIN-2021-49
1637964358078.2
[]
docs.aws.amazon.com
Getting Started¶ Welcome to the Krita Manual! In this section, we’ll try to get you up to speed. If you are familiar with digital painting, we recommend checking out the Introduction Coming From Other Software category, which contains guides that will help you get familiar with Krita by comparing its functions to other software. If you are new to digital art, just start with Installation, which deals with installing Krita, and continue on to Starting Krita, which helps with making a new document and saving it, Basic Concepts, in which we’ll try to quickly cover the big categories of Krita’s functionality, and finally, Navigation, which helps you find basic usage help, such as panning, zooming and rotating. When you have mastered those, you can look into the dedicated introduction pages for functionality in the User Manual, read through the overarching concepts behind (digital) painting in the General Concepts section, or just search the Reference Manual for what a specific button does. Contents:
https://docs.krita.org/en/user_manual/getting_started.html
2021-11-27T02:40:59
CC-MAIN-2021-49
1637964358078.2
[]
docs.krita.org
Stream.Dispose Method
Definition
Important: Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Overloads: Dispose(), Dispose(Boolean)
Dispose(Boolean)
protected:
 virtual void Dispose(bool disposing);
protected virtual void Dispose (bool disposing);
abstract member Dispose : bool -> unit
override this.Dispose : bool -> unit
Protected Overridable Sub Dispose (disposing As Boolean)
Parameters
disposing (Boolean): true to release both managed and unmanaged resources; false to release only unmanaged resources.
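The disposing parameter described above is normally wired up through the standard dispose pattern. The following is a minimal C# sketch of that pattern; the LogWriter class and the FileStream it wraps are invented for illustration and are not part of the Stream documentation.

using System;
using System.IO;

public class LogWriter : IDisposable
{
    private Stream _stream;
    private bool _disposed;

    public LogWriter(string path)
    {
        _stream = new FileStream(path, FileMode.Append);
    }

    // Public entry point: dispose managed state and suppress finalization.
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
        {
            return;
        }
        if (disposing)
        {
            // Called from Dispose(): safe to release managed resources.
            _stream?.Dispose();
            _stream = null;
        }
        // Release any unmanaged resources here (none in this sketch).
        _disposed = true;
    }
}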
https://docs.microsoft.com/en-us/dotnet/api/system.io.stream.dispose?view=net-5.0
2021-11-27T03:51:12
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
. SR-D18879 · Issue 500640 Logic update made to PrepareColors to resolve calculation ambiguity Resolved in Pega Version 8.3.121527 · Issue 491736 Hidden filters will not be shown on the resulting report Resolved in Pega Version 8.3.122885 · Issue 494162 Added check for empty class properties to report scheduler Resolved in Pega Version 8.3.1189 Updated support for numeric values in text columns during Excel export Resolved in Pega Version 8.3.1369 Return from drilldown in frameless portal corrected Resolved in Pega Version 8.3.127687 · Issue 500051 Mashup Export to Excel works on first use Resolved in Pega Version 8.3.1. SR-D30272 · Issue 499438 Primary page paramter explictly set to resolve fix/edit Scheduled Tasks issue Resolved in Pega Version 8.3.1 After upgrade, the top section of report details was getting blanked out when using the Reporting | Components | Scheduled Task to open a report and "update" it to fix issues. This was an unintended side effect of updating the Review harness to display the Perform harness when clicking Update, which was done to correct an earlier issue with UI validations when using the out of the box schedule report functionality. To resolve this, pxScheduleTaskWrapper has been updated to explicitly set the parameter page according to the primary page's values when not in the report browser.
https://docs.pega.com/platform/resolved-issues?f%5B0%5D=%3A29991&f%5B1%5D=resolved_capability%3A9031&f%5B2%5D=resolved_version%3A31246&f%5B3%5D=resolved_version%3A32541&f%5B4%5D=resolved_version%3A34296
2021-11-27T02:26:40
CC-MAIN-2021-49
1637964358078.2
[]
docs.pega.com
UDN
Search public documentation: TexturingGuidelines
UE3 Home > Texture Artist > Epic Games Texturing Guidelines
Epic Games Texturing Guidelines
Overview
This document summarizes some lessons we have learned about creating textures. After switching to Lightmass for our Global Illumination lighting solution, we found many of the textures we had created were too dark, had too much contrast and contained too much noise. Some information shared here may feel counterproductive to making good looking visuals. One thing to remember is that in the past we created textures that represented many aspects of a material with one texture, so a lot of data was "baked" into a single texture. Now that lighting and materials have become more complex, we should separate and remove some of that data so that we represent materials in a realistic manner.
Gamma Space vs Linear Space
Before talking about specific textures and techniques, we should understand what happens when a texture is manipulated in Photoshop and imported into Unreal Engine. When you create a texture in Photoshop you should be using a color profile of sRGB. This means your texture is stored with a gamma curve applied, which is called Gamma Space. When your texture is imported into UE3 with default settings (sRGB enabled), you are telling the engine to convert your texture to Linear Space before calculating lighting. There's a lot of math and technical information behind what is happening and why we do it, but the important thing to remember is that values you painted in gamma space are darker in linear space. For example, if you wanted to paint a surface that reflected 50% of light, Photoshop would tell you 50% grey is 127,127,127. 127,127,127 in gamma space converts to 55,55,55, or 21% light reflected, in Linear Space. This means your scene would be darker and reflect much less light when calculating Global Illumination than you had intended. The curve below depicts the distribution of values in gamma space vs linear space. When you paint an image in Photoshop, it is storing the image and measuring the brightness using this curve.
Diffuse Textures
The first thing to consider when creating diffuse textures is that you are not painting a final image of a surface. You're painting a surface property that represents light reflected at many angles. This means the diffuse texture should not have large shadows or light variances in value. Any large amount of ambient occlusion should be represented by Lightmass AO or SSAO. Sometimes details represented in normal maps will still require a small amount of AO represented in the diffuse texture, but this value should not be too dark.
Secondly, diffuse textures should be bright and have low contrast values. If you create a texture that is too dark, you are limiting its ability to be bright in the game. Remember that the texture you are creating describes how bright that surface is when lit by a 100% bright white light. Also consider that if you paint a texture too dark, or include ambient occlusion that is too dark, you will inhibit the surface's ability to show shadows and lighting. Textures with too much noise and too much contrast will also make it difficult to read a surface's shape and lighting. If you want a Photoshop guideline for what will be middle grey when rendered, you can use 186 on the histogram as "middle".
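The numbers quoted above can be checked with a few lines of Python using the common gamma 2.2 approximation of the sRGB curve (the exact sRGB formula gives nearly identical results); this is only a sanity-check sketch and is not part of any Unreal tooling:

def gamma_to_linear(value_8bit, gamma=2.2):
    """Convert an 8-bit gamma-space value to 0-1 linear reflectance."""
    return (value_8bit / 255.0) ** gamma

for v in (127, 186):
    linear = gamma_to_linear(v)
    print(v, "->", round(linear * 255), "({:.0f}% light reflected)".format(linear * 100))

# 127 comes out near 55 (roughly 21-22% reflected), while 186 lands near 50%,
# which is why 186 is suggested above as the "middle grey" guideline.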
Specular Textures
For various reasons in the past (using renderers that were not in linear space and approximating complex materials in simple lighting), artists have painted specular maps with varying amounts of color. A good example is skin. In the past artists would paint blue or sometimes orange specular textures. Skin's specular response is actually white. Very few materials actually have a colored specular; most of them are metals. Here you can see that the specular reflection is white.
Specular Power/Masks
Specular power and masks are often used to drive a lerp of two values. This means that you usually do not want this texture to go through the gamma-correction process. You can either pack these textures in an alpha channel (which does not get gamma corrected) or pack them in channels together and uncheck sRGB in the texture's properties. Doing this means that a value of 0 will be a 100% blend of value A, a value of 255 will be a 100% blend of value B, and a value of 127 will be an absolute 50% blend of the two.
Emissive Textures
Emissive textures should attempt to use the full range of values to avoid compression and precision artifacts. In the past some emissive textures had been created as dim textures multiplied by large numbers in the material to compensate. This will create banding and incorrect colors in the emissive rendering. It's best to create your emissive texture then use the Levels or Curves tools in Photoshop to maximize the range it utilizes. Below is an emissive texture that only uses a small range of values and is multiplied by a very large number in the material, compared to one that uses a broader range of values and is multiplied by a more reasonable number in the material.
Normal maps
One important thing to remember about editing normal maps is that it is always good to normalize your blending operations. This means either blend normal maps using Crazybump or, if you have blended them in Photoshop, import them into Crazybump and re-export; it will normalize the map for you (a short normalization sketch appears at the end of this page). Also remember that multiplying detail normal maps by values higher than 1,1,0 can expose DXT compression artifacts and create materials that do not light correctly in all situations.
Special Notes on Skin
Skin is a complex material that we now represent with lighting models more complex than Phong. In DirectX 11 we do this with Screen Space Sub-Surface Scattering; in DirectX 9 we do this with CustomLighting that utilizes a colored lighting falloff with a soft normal. Both techniques treat the diffuse texture as the epidermal layer and colorize it to represent the meaty red lighting under the skin. This means the diffuse texture for skin should be much lighter and paler than we have typically created. Photos of skin already have the "material" in them and contain the redness that is created by SSS, so when you create your texture you essentially have to remove that from the texture so that it can be done by the material.
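As a companion to the note above about normalizing blended normal maps, here is a small Python/NumPy sketch of what the normalization step does; in practice Crazybump (or any equivalent tool) performs this for you, so this is illustrative only:

import numpy as np

def renormalize_normal_map(rgb_u8):
    """Re-normalize a blended 8-bit tangent-space normal map."""
    n = rgb_u8.astype(np.float32) / 255.0 * 2.0 - 1.0       # unpack to [-1, 1]
    length = np.linalg.norm(n, axis=-1, keepdims=True)
    n /= np.maximum(length, 1e-6)                            # force unit length
    return np.clip((n * 0.5 + 0.5) * 255.0, 0.0, 255.0).astype(np.uint8)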
https://docs.unrealengine.com/udk/Three/TexturingGuidelines.html
2021-11-27T02:43:58
CC-MAIN-2021-49
1637964358078.2
[array(['rsrc/Three/TexturingGuidelines/gamma_curve.jpg', 'gamma_curve.jpg'], dtype=object) array(['rsrc/Three/TexturingGuidelines/gamma_vs_Linear.jpg', 'gamma_vs_Linear.jpg'], dtype=object) array(['rsrc/Three/TexturingGuidelines/histogram.jpg', 'histogram.jpg'], dtype=object) array(['rsrc/Three/TexturingGuidelines/dark_textures.jpg', 'dark_textures.jpg'], dtype=object) array(['rsrc/Three/TexturingGuidelines/dark_spheres.jpg', 'dark_spheres.jpg'], dtype=object) array(['rsrc/Three/TexturingGuidelines/gamma_affects_on_Lighting.jpg', 'gamma_affects_on_Lighting.jpg'], dtype=object) array(['rsrc/Three/TexturingGuidelines/detail_difference.jpg', 'detail_difference.jpg'], dtype=object) array(['rsrc/Three/TexturingGuidelines/emissive.jpg', 'emissive.jpg'], dtype=object) array(['rsrc/Three/TexturingGuidelines/skin.jpg', 'skin.jpg'], dtype=object) ]
docs.unrealengine.com
You can also configure writing of events to the vCenter Server streaming facility. Streaming events is supported only for the vCenter Server. The streaming of events to a remote syslog server is disabled by default. You can enable and configure the streaming of vCenter Server events to a remote syslog server from the vCenter Server Management Interface.
Procedure
- In the vSphere Client, navigate to the vCenter Server instance.
- Select the Configure tab.
- Expand the Settings option, and select Advanced Settings.
- Click EDIT SETTINGS.
- Click on the filter text box in the Name column of the table header. Type vpxd.event, and press Enter.
- Enable or disable the vpxd.event.syslog.enabled option. By default, this option is enabled.
- Click SAVE.
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.monitoring.doc/GUID-FD51CE83-8B2A-4EBA-A16C-75DB2E384E95.html
2021-11-27T03:41:04
CC-MAIN-2021-49
1637964358078.2
[]
docs.vmware.com
All Kubernetes clusters have two categories of users: service accounts and normal users. Kubernetes manages authentication for service accounts, but the cluster administrator, or a separate service, manages authentication for normal users.
Konvoy configures the cluster to use OpenID Connect (OIDC), a popular and extensible user authentication method, and installs Dex, a popular, open-source software that integrates your existing Identity Providers with Kubernetes.
To begin, set up an Identity Provider with Dex, then use OIDC as the Authentication method.
Identity Provider
An Identity Provider (IdP) is a service that lets you manage identity information for users, including groups. A Konvoy cluster uses Dex as its IdP. Dex, in turn, delegates to one or more external IdPs. If you already use one or more of the following IdPs, you can configure Dex to use them:
Identity Provider Procedures
Set up an identity provider with Dex
Authentication
OpenID Connect is an extension of the OAuth2 authentication protocol. As required by OAuth2, the client must be registered with the IdP; in this case, Dex. Do this by passing the name of the application and a callback/redirect URI. These handle the processing of the OpenID token after the user authenticates successfully. Upon registration, Dex returns a "client_id" and a "secret". Authentication requests use these, between the client and Dex, to identify the client.
Users access Konvoy in two ways:
- To interact with the Kubernetes API, usually through kubectl.
- To interact with the Konvoy Ops Portal, which has GUI dashboards for Konvoy, Kommander, Prometheus, Kibana, etc.
In Konvoy, Dex comes pre-configured with a client for the above access use cases. The clients talk to Dex for authentication. Dex talks to the configured IdP (Identity Provider, for example LDAP, SAML, etc.) to perform the actual task of authenticating the user. If the user authenticates successfully, Dex pulls the user's information from the IdP and forms an OpenID token. The token contains this information and is returned to the respective client's callback URL. The client or end user uses this token for communicating with the Konvoy Ops Portal or Kubernetes API respectively (a kubectl configuration sketch appears at the end of this page).
This figure illustrates these components and their interaction at a high level:
Authentication Procedures
Users & Groups
Users & Groups Procedures
Troubleshoot
Troubleshoot OIDC and Dex
Set up an identity provider with Dex How to set up an identity provider with Dex…
Generate a Client Access Token How to generate a Client Access Token…
Change the Group Prefix Access and change the OIDC Group Prefix…
Change the Access Token Lifetime Changing the Access Token Lifetime.…
Troubleshoot OIDC and Dex How to troubleshoot OpenID Connect (OIDC) and Dex…
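As an illustration of how the OIDC token ends up being used against the Kubernetes API, a kubectl credential can be configured roughly as follows. This is a sketch, not Konvoy documentation: the issuer URL, client ID/secret and tokens are placeholders you would obtain from your Dex setup, and newer clusters may prefer an exec-based credential plugin over the legacy oidc auth provider shown here.

# register an OIDC user backed by Dex
kubectl config set-credentials konvoy-user \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://dex.example.com/dex \
  --auth-provider-arg=client-id=kubernetes \
  --auth-provider-arg=client-secret=REDACTED \
  --auth-provider-arg=id-token=<id_token> \
  --auth-provider-arg=refresh-token=<refresh_token>

# point a context at that user and try it out
kubectl config set-context konvoy --cluster=konvoy --user=konvoy-user
kubectl --context konvoy get nodes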
https://docs.d2iq.com/dkp/konvoy/1.7/access-authentication/oidc/
2021-11-27T03:00:15
CC-MAIN-2021-49
1637964358078.2
[array(['./oidc-auth-flow-with-dex.png', 'OIDC authentication flow'], dtype=object) ]
docs.d2iq.com
IP pools group ranges of IP addresses. They can be designated as public and private IP addresses. For operations with pools, enter Network → IP pools. To create an IP. To assign an IP address to a server from a certain pool when ordering and releasing that server: - Press Pool general settings. - Select the Pools for deallocation of servers. - Select the Pools for deallocation of servers. IP addresses from this pool will be assigned to servers in racks that are not assigned with any pool. - Press Save. Note Only pools with IPv4 addresses can be used to release and provision servers.
https://docs.ispsystem.com/dcimanager-admin/networks/pools-management
2021-11-27T01:56:43
CC-MAIN-2021-49
1637964358078.2
[]
docs.ispsystem.com
Restoring a Snapshot backup for Linux and Windows
Contributors
If data loss or data corruption occurs, you can restore Unified Manager to the previous stable state with minimum loss of data. You can restore the Unified Manager Snapshot database to a local or remote operating system by using the Unified Manager maintenance console.
What you'll need
- You must have the root user credentials for the Linux host and administrative privileges for the Windows host machine on which Unified Manager is installed.
- You must have a user ID and password authorized to log in to the maintenance console of the Unified Manager server.
The restore feature is platform-specific and version-specific. You can restore a Unified Manager backup only on the same version of Unified Manager.
- Connect to the IP address or fully qualified domain name of the Unified Manager system.
- Log in to the system with the root user credentials.
- Enter the command maintenance_console and press Enter.
- In the maintenance console Main Menu, enter the number for the Backup Restore option.
- Enter the number for Backup and Restore using NetApp Snapshot.
  If you are performing a restore onto a new server, after installing Unified Manager do not launch the UI or configure any clusters, users, or authentication settings when the installation is complete.
- Enter the number for Configure NetApp Snapshot Backup and configure the Snapshot backup settings as they were configured on the original system.
- Enter the number for Restore using NetApp Snapshot.
- Select the Snapshot backup file that you want to restore and press Enter.
- After the restore process is complete, log in to the Unified Manager user interface.
After you restore the backup, if the Workflow Automation server does not work, perform the following steps:
- On the Workflow Automation server, change the IP address of the Unified Manager server to point to the latest machine.
- On the Unified Manager server, reset the database password if the acquisition fails in step 1.
https://docs.netapp.com/us-en/active-iq-unified-manager/health-checker/task_restore_snapshot_backup.html
2021-11-27T03:38:37
CC-MAIN-2021-49
1637964358078.2
[]
docs.netapp.com
and Desktops operations at any time. To grant the user specific permissions at any point, associate them with the respective role, at the DataCenter level at a minimum.. To ensure that you use a clean base image for creating new Create AppDisks Valid for VMware vSphere minimum version 5.5 and XenApp and XenDesktop minimum version 7.8. Delete AppDisks Valid for VMware vSphere minimum version 5.5 and XenApp and XenDesktop minimum version 7.8. Obtain and import a certificateObtain need to right-click on Internet Explorer and choose Run as Administrator to download or install the certificate. - Open your web browser and make a secure web connection to the vCenter server (for example). - Accept the security warnings. - Click on on the address bar displaying the certificate error. - View the certificate. Import the certificate into the certificate store on each Cloud Connector. - Click Install certificate,: In the connection creation wizard: - Select the VMware connection type. - Specify the address of the access point for the vCenter SDK. - Specify the credentials for a VMware user account you set up earlier that has permissions to create new VMs. Specify the username in the form domain/username. VMware SSL thumbprintVM Citrix Virtual Apps and Desktops, even if not by the Controllers. When creating a vSphere host connection in Studio, a dialog box allows you to view the certificate of the machine you are connecting to. You can then choose whether to trust it.
https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/install-configure/install-prepare/vmware.html
2018-10-15T16:16:10
CC-MAIN-2018-43
1539583509326.21
[]
docs.citrix.com
Contents CX Contact Solutions Guide CX Contact is part of a fully customizable cloud solution that enables you to set up, run, and fine-tune your outbound campaigns. Through a series of Genesys PureEngage Cloud products, you can define your routing application, import contact lists, set up a campaign, apply compliance rules, and monitor and assess your campaign through real-time and historical reports. Here are the four stages you'll go through to make the most out of CX Contact in PureEngage Cloud: Stage 1—Configuring outbound routing Stage 2—Setting up a campaign Stage 3—Dialing and call handling Stage 4—Monitoring a campaign Stage 1—Configuring outbound routing Genesys handles the initial configuration of outbound routing. Using Platform Administration, we configure the routing points, agent groups, and virtual queues. We use Designer to configure voice scripting and call flow. After that, you can log into these applications at any time to tweak the settings. Here are a few instructions: - How to set up Outbound Routing - How to create and modify DNs - How to set up agents - How to add the Route Call Block in Designer Stage 2—Setting up a campaign Before you sign into the CX Contact application to set up your campaigns, you should visit the Campaign Structure and Terminology page in the CX Contact Help manual to learn about the following five components that make up a campaign: After that, you can log into CX Contact to set up and manage your campaigns. Many of the key features and tasks available to you in CX Contact are listed in the table below. Note: Clicking any of the links in this table will take you to the CX Contact Help manual. Stage 3—Dialing and bridging to an agent Dialing When you set up a campaign in CX Contact, you'll need to choose a dialing mode or IVR mode that best suits your campaign. Your choice will depend on the type of campaign you're running, the number of agents (if any) assigned to the campaign, and compliance regulations. Depending on the dialing mode or IVR mode selected, you can also apply pacing and optimization parameters to influence dialing behavior. Refer to the Dialing modes and IVR modes page for a complete description of each dialing mode and IVR mode as well as the pacing options and optimization parameters that apply to each. Before starting a dialing event, the system refers to all selected pacing options, optimization parameters, call treatments, and compliance rules in place. It places the call and then, for agent-assisted campaigns, hands the call off to an agent once it detects a voice on the line. Call handling All interactions take place via the Agent Desktop application. The way in which an agent handles a call depends on the dialing mode or IVR mode used in a campaign. For Predictive and Progressive dialing and IVR modes, the outbound calls are directed to the agents' workstation and dialed automatically. The video to the left shows you how agents handle these calls. In a Preview dialing mode, agents preview the customer case information and then manually dial the customer's phone number. For a complete description of how agents use Agent Desktop to handle each type of outbound call, refer to the Outbound campaigns page in the Agent Desktop Help manual. Stage 4—Monitoring a campaign Once you've set up your campaigns, you'll want to monitor the status of those that are still running and look at the results of those that have ended. You can use a series of Genesys reporting tools to accomplish these tasks. 
Real-time reporting To monitor the status of an ongoing campaign in real-time, you have two options: the CX Contact campaigns dashboard and Genesys Pulse. CX Contact campaigns dashboard The CX Contact campaigns dashboard provides a statistical overview of call activity for each campaign group that is currently running. If a campaign group contains multiple contact lists, the data is broken down by contact list. For a complete description of fields and metrics displayed on the campaigns dashboard, refer to the Campaigns dashboard page in the CX Contact Help menu. Genesys Pulse Use Genesys Pulse to generate in-depth reports on agent activity and campaign activity. In the Genesys Pulse application, you can add a report widget to your report dashboard, choose a template or define your own, select objects and statistics to include in your report, and specify default settings – like the name, refresh rate, and type of widget. And then you can save and download your report as a CSV file. For a complete list of available agent statistics through Pulse, refer to the Agent Statistics page in the Pulse Help manual. For a complete list of available campaign statistics through Pulse, refer to the Campaign Statistics page in the Pulse Help manual. Historical reports Now you want to retrieve statistics of a campaign that has ended. OK, you have two options: - CX Contact List Export - Genesys Interactive Insights CX Contact List Export Using the List Automation feature in CX Contact, you can schedule an automatic list export for when a campaign ends. This list export will contain call result fields. For more information about this feature, refer to the List Automation page in CX Contact. Interactive Insights Interactive Insights uses data stored in our database and presents it in readable reports. There are four Outbound Engagement Reports, described in the table below. Refer to the Interactive Insights Outbound Contact reports page in the Historical Reporting manual for more information. Feedback Comment on this article:
https://docs.genesys.com/Documentation/PSAAS/latest/Administrator/CXCSolutions
2018-10-15T15:24:27
CC-MAIN-2018-43
1539583509326.21
[]
docs.genesys.com
Troubleshooting¶ The following steps may help you troubleshoot your setup. Check if the Sqreen extension is loaded¶ By taking a look at the phpinfo() page, the Sqreen extension section should be present. If it's not, then PHP was probably not correctly configured to load the Sqreen PHP extension. Please visit the manual installation of the PHP extension section guide. (A quick command-line check is also sketched at the end of this page.) The following information can be retrieved from this table: the extension version, the daemon version, and the connection status between the extension and the daemon (on the Connected line, in the first section). Ensure the agent is running¶ A sqreen-agent process should be running: Shell$ ps aux | grep sqreen sqreen 19456 0,0 0,0 2522856 6656 s013 S+ Ven10 0:55.18 sqreen-agent If it's not, please ensure the sqreen-agent was installed and configured to start automatically. Check PHP logs¶ The PHP errors (in the FPM logs or in the Apache logs) may contain Sqreen-related entries. Check the agent logs¶ The sqreen-agent logs should inform you that the agent was successfully started: Shell$ cat /var/log/sqreen/sqreen.log Check the agent debug logs¶ The agent can be configured to report debug logs as well: Shell$ sqreen-agent --log_level=DEBUG Ensure the extension can reach the daemon¶ Since the PHP extension must be able to reach the daemon, run the following command (from the PHP host if different from the daemon host) to confirm the setup is OK: Shell$ curl 127.0.0.1:7773 If you are using a different host for the daemon and your PHP host, replace 127.0.0.1 with the address/port of the host running the daemon. The command will either: time out (the connection was successfully performed), or terminate with a Connection refused error. In the case of a timeout, everything is fine: the daemon is listening. In case of a Connection refused error, this host cannot reach the daemon. Check your network configuration. Retrieve additional logs¶ The Sqreen extension may be configured in debug mode. In the sqreen.ini file, add the following directive: Inisqreen.log_level = 'debug' Restart Apache or FPM and visit your website. The extension should create log files in the /tmp directory or the directory configured in the log_location configuration key. Please share those logs with us for further investigation.
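In addition to phpinfo(), a quick command-line check can confirm whether the CLI's PHP configuration loads the extension. Note that the CLI, FPM and Apache may load different ini files, so this complements rather than replaces the phpinfo() check:

Shell$ php -m | grep -i sqreen
$ php --ini                # shows which .ini files the CLI actually loads
$ php -i | grep -i sqreen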
https://docs.sqreen.io/sqreen-for-php/troubleshooting/
2018-10-15T14:51:55
CC-MAIN-2018-43
1539583509326.21
[]
docs.sqreen.io
Rich Presence
Introduction
Rich Presence (RP) is a brief overview of what active players are currently doing in their game. To have RP in a game you need a Rich Presence Script (RPS), which is created by developers. The script checks the player's game memory and, as programmed, reports the values of certain addresses with definitions assigned by the developer, such as which stage the player is on, how many lives they have, if the game is paused, what game mode they are playing, what the player has accomplished, etc. This information is reported back to the website. Every game published should have at least a basic RPS.
Example of RP in action:
To see the RP live in a game, click on the RetroAchievements menu in your emulator and then click on Rich Presence Monitor. A small window will show you your active RP. (Good for debugging.)
The best way to understand Rich Presence is to look at various examples in game, look at the addresses used, and look at how the text is displayed in the Rich Presence Monitor and on site.
How Does it work?
Every time a game is launched, it fetches the achievements in a 'patch' file for the ROM which details all the achievements and memory addresses (and leaderboards) that can be watched for. It will also request a Rich Presence Script for the currently loaded ROM. The emulator will report back to the website every 120 seconds. Similarly, every 120 seconds or so, the 'active players' box on the frontpage will refresh, detailing the last known activity of all active players.
If there isn't a rich presence script given, the text will be 'earning achievements' if playing a game with achievements, 'playing [game]' if playing a game without achievements, or 'developing achievements' if the memory dialog is open and visible.
The RPS for each game can be found under the development section on each game's page:
Example (Super Mario Bros.)
Format:Digit
FormatType=VALUE
Lookup:Mode
0=[Demo]
2=[World Complete]
Lookup:Paused
0x81=▌▌
0x80=▌▌
1=▌▌
Lookup:Star
5=🌟
4=🌟
3=🌟
2=🌟
1=🌟
Lookup:Powerup
0=Small
1=Super
2=Fire
Lookup:Swimming
1= swimming
Lookup:Status
0= [Loading]
1= taking a vine warp
2= entering a warp pipe
3= entering a warp pipe
4= 🚩
5= [Stage Complete]
6= [Game Over]
7= [Entering Area]
9= growing
0xA= shrinking
0xB= 💀
0xC= powering up
Lookup:Quest
0x0=1st
0x1=2nd
Display:
@Mode(0xh770)@Paused(0xh776)@Star(0xM79f_0xN79f_0xo79f_0xP79f_0xQ79f_0xR79f)@Powerup(0xh0756) Mario in @Digit(0xh75f_v1)-@Digit(0xh75c_v1)@Swimming(0xh704)@Status(0xhe), 🚶:@Digit(0xh75a_v1), @Quest(0xh7fc) Quest
It breaks down into a series of Lookup objects, Format objects and one Display object.
Lookups
Lookups are defined like this:
Lookup:NameOfLookup
Value1=Text When This Value
Value2=Text When Another Value
...
We give the Lookup a value, consisting of a series of memory addresses and modifiers. More about this later.
Format
Format tables are defined like this:
Format:Score
FormatType=VALUE
Begin with Format:, then the name of the format type. On the next line, give FormatType=, then one of the following: VALUE, SCORE (or POINTS), TIME (or FRAMES), SECS, MILLISECS, or OTHER.
VALUE: generic value, no leading zeroes.
SCORE / POINTS: "000130 points"
TIME / FRAMES: value describes the number of frames elapsed, and will be turned into 00:00.00
SECS: value describes the number of seconds elapsed, and will be turned into 00:00
MILLISECS: value describes the number of hundredths of a second elapsed, and will be turned into 00:00.00
Display
Display will be a string that gets shown in the 'Active Players' box on the front page. It refers to the previously defined Lookup and Format objects using a single '@'. It then specifies a name for the lookup or format (case sensitive!), and immediately after, in brackets, a series of memory values specifying what to send to that lookup or format object.
@Powerup(0xh756)
This means use the Lookup or Format that's called Powerup, and give it whatever value is in 0xh756.
Example Lookup Breakdown
- @Mode(0xh770) - Lookup for the address that shows if the game is in demo mode or a world has been completed.
- @Paused(0xh776) - Lookup for the address that shows if the game is paused (3 values are used, two of them are for pausing and unpausing).
- @Star(0xM79f_0xN79f_0xo79f_0xP79f_0xQ79f_0xR79f) - Lookup for the address of whether Mario has Star invincibility. More on this later.
- @Powerup(0xh756) - Lookup for the address that shows if Mario is small, big, or has fire power.
- Mario in - Static text to string lookup and format objects together.
- @Digit(0xh75f_v1) - Digit is a format object defined as a value. The address 0xh75f is the World minus 1 (because it is 0-based, as in it starts counting at 0). _v1 means + value 1. _v+1 is also correct.
- "-" - More static text to split World and Level, as in the hyphen in World 1-1.
- @Digit(0xh75c_v1) - Another use of the Digit format object. This time it's looking up the stage, as in World 1-X.
- @Swimming(0xh704) - Lookup for the address that shows if the player is swimming.
- @Status(0xhe) - Lookup for the address that shows Mario's status, such as going through pipes.
- , 🚶: - More static text. 🚶 is a symbol for lives.
- @Digit(0xh75a_v1) - Third use of the Digit format object. This time it's checking the player lives address.
- , - Static text.
- @Quest(0xh7fc) - A lookup to see if the player is on the normal or the 2nd quest, hard mode.
- Quest - Static text.
Address size
To specify what size of address you are checking, various characters are used (capitalization is ignored).
- A 16bit address is the default and has no character designation. At 0x10 the address is two bytes - 16 bits.
- An 8bit address's character is h (or H). At 0xh10 the address is one byte - 8 bits. xxxx xxxx
- An upper4 address's character is u (or U). At 0xu10 the address is one nibble - 4 bits. xxxx 0000
- A lower4 address's character is l (or L). At 0xl10 the address is one nibble - 4 bits. 0000 xxxx
- A bit0 address's character is m (or M). At 0xm10 the address is one bit, the lowest bit: 0000 000x
- A bit1 address's character is n (or N). At 0xn10 the address is one bit, the second bit: 0000 00x0
- A bit2 address's character is o (or O). At 0xo10 the address is one bit, the third bit: 0000 0x00
- A bit3 address's character is p (or P). At 0xp10 the address is one bit, the fourth bit: 0000 x000
- A bit4 address's character is q (or Q). At 0xq10 the address is one bit, the fifth bit: 000x 0000
- A bit5 address's character is r (or R). At 0xr10 the address is one bit, the sixth bit: 00x0 0000
- A bit6 address's character is s (or S). At 0xs10 the address is one bit, the seventh bit: 0x00 0000
- A bit7 address's character is t (or T).
At 0xt10 the address is one bit, the top bit: x000 0000
- A 32bit address's character is x (or X). At 0xx10 the address is four bytes - 32 bits.
Summarizing in a table:
Size   | Character | Example | Bits read
16bit  | (none)    | 0x10    | two bytes
8bit   | h         | 0xh10   | one byte
Upper4 | u         | 0xu10   | high nibble
Lower4 | l         | 0xl10   | low nibble
Bit0   | m         | 0xm10   | 0000 000x
Bit1   | n         | 0xn10   | 0000 00x0
Bit2   | o         | 0xo10   | 0000 0x00
Bit3   | p         | 0xp10   | 0000 x000
Bit4   | q         | 0xq10   | 000x 0000
Bit5   | r         | 0xr10   | 00x0 0000
Bit6   | s         | 0xs10   | 0x00 0000
Bit7   | t         | 0xt10   | x000 0000
32bit  | x         | 0xx10   | four bytes
Conditional Display Strings
Display:
?0x 000085=0?Title Screen
?0xT00007c=1?Custom Map in @Landscape(0xH00016c)
Playing Battle @Battle(0x 00007c*0.2) in @Landscape(0xH00016c)
The existing Display: marker is still used to indicate the start of the display block. If the next line starts with a question mark, it is considered to be a conditional display string. The portion of the line between the two question marks is the conditional clause. If the conditional clause evaluates true, then the remaining portion of the line is used as the display string. If it does not evaluate true, then processing proceeds to the next line. If it starts with a question mark, the same process repeats. If it does not start with a question mark, the entire line is used as the default display string.
NOTE: A default display string is required in case none of the conditional display strings match. If you only have conditional display strings, the script will appear to do nothing.
Looking at this example, if the 16-bit value at $0085 is 0, the display string is Title Screen. If not, the next line is examined. If the 7th bit of $007C is 1, the display string is Custom Map in @Landscape(0xH00016c). If not, the final line does not have a conditional clause and is used.
Display strings associated with a conditional clause support all of the same syntax as the default display string. In this example, you can see the @Landscape lookup is used in both the conditional display string and the default display string. The lookup itself only has to be defined once.
The conditional phrase supports all of the previously mentioned address accessors as well as AND (_) and OR (S) logic. Note that OR clauses still require a 'core' group, just like achievements.
?0xH1234=32_0xH2345=0?and example
If the 8-bit value at $1234 is 32 and the 8-bit value at $2345 is 0, display "and example".
?0xH1234=32_0xH2345=1S0xH2345=2?or example
If the 8-bit value at $1234 is 32 and the 8-bit value at $2345 is 1 or 2, display "or example".
Binary Coded Decimal (BCD)
BCD is when values are stored in an address as 0-9 (one digit) or 0-99 (two digits). Keep in mind that most often values are stored in hexadecimal, but sometimes games will store them in this way, and here's the best way to handle these addresses in your display. BCD decoding treats each hex character as a decimal digit. If the memory inspector shows 86 (in hex), the result of BCD decoding the value would be 86 (in decimal). (A short decoding sketch appears at the end of this page.)
For value objects you can use the BCD prefix, as in b0xh1234. This also works with leaderboard values. Note that you still need to specify the size of the BCD memory address: b0x1234 reads a 16-bit value, b0xh1234 reads an 8-bit value, and b0xX1234 reads a 32-bit value.
NOTE: Support for 16-bit and 32-bit BCD decoding is a feature of the 0.075 toolkit.
This is most commonly used for score and time, but also for other types of display values.
Limits
- 16,000 character limit for the script
- 100 character limit for what is displayed
- Unicode characters are allowed
- Using & in text will cut off the script after the &
- The character + will not display
Syntax Details
- Lookup keys are decimal by default and hex if you add the prefix 0x. This means 1 == 0x1, 2 == 0x2, 9 == 0x9, 10 == 0xa, 100 == 0x64, etc.
- Lookup/Format names are case specific and must exactly match the usage in the Display string: @test(0x1234) will not find "Format:Test"
- Lookup/Format names cannot contain spaces before or after the lookup name. @test(0x1234) will not find "Format:test " or "Format: test"
- If your Lookup/Format cannot be found, nothing will be displayed. This is a @test(0x1234). will result in This is a . if test cannot be found.
- Comments can be added anywhere in the script. A double slash (//) indicates the remaining portion of the line should be ignored when processing the script. Note: comments still apply toward the script size limit.
Tips and Tricks
- Lookup names can be as short as a single character if you need to squeeze in a few extra characters.
- Leading zeros can be removed from addresses (0xh0001 can be shortened to 0xh1).
- Turning all your values from hex into decimal will take up fewer characters.
- Unicode characters don't always "take up less space"; they often take up to four system characters.
- If a lookup doesn't contain a mapping for a value, it will result in a blank (no space) value. You can change this behavior by adding a single line to the lookup mapping '*' to the desired fallback value.
- Each Lookup or Format named mapping can be referenced multiple times with the same or different addresses. You can define a single Format:Number FormatType=VALUE instead of defining individual ones for Lives, Score, Level, etc.
- Putting spaces in your lookups, sometimes before or after, can allow you to hide certain lookups when they are not needed, like how @Pause, @Star, @Swimming, and @Mode do.
Value Properties
When using lookup and format objects @object() it's possible to combine addresses and perform calculations. This can be used to correctly display a score, in-game time, etc., or to make more advanced lookups.
Example
@Score(0x28*10_0x29*1000_0x26*100000) points
This means use the Lookup or Format Score, and give it the sum of:
- 0x28 times 10, ADD
- 0x29 times 1000, ADD
- 0x26 times 100000
_ adds the addresses together.
- Or you can add a static value: 0x28_v10. This adds 10 to your total, as in whatever the value of 0x28 + 10 is will be displayed. You can also subtract: 0x28_v-10.
- If you'd like to subtract an address you need to multiply the address by -1: 0x29*-1. 0x29 is now negative.
- If you'd like to perform division you'll need to multiply by a decimal: 0x26*.5. 0x26 will output 1/2 of the value at 0x26.
- And you can string everything together: 0x28*10_0x29*-1_0x26*.625_v-10.
- You can also add addresses together to give you lookups based on the sums of various addresses. This is used in the example in @Star. It's looking up the sum of the 6 lowest bits of the address 0xh79f. The way this address works is that so long as there is a value there, Mario is invincible star Mario, and it counts down from hex value 0x23 (35 decimal) to 0. 0x23 in binary is 0010 0011, meaning the max sum of these bits could be 5, during 0001 1111, when the countdown reaches hex value 0x1f (31 decimal).
Unicode Standard Symbols
▌▌=Paused
🔁=Loop number
🚶=Lives. Other symbols that represent the game clearly are also suitable. 🐰=in a Bugs Bunny game, 🐵=in a Donkey Kong Country game, ✈=in a jet plane game,
💞=Continues
💯=Points
⏰=In Game Time/Game Clock
🔑=Keys
💣=Bombs
☰=Menu
❤️ or ❤=In a game with hearts (e.g. Zelda)
💰=Money
⚖=Difficulty
Developing Rich Presence
The toolkit does not currently have an integrated Rich Presence editor, but you can test local changes before putting them on the server.
Once you've started a game and the current Rich Presence has been downloaded from the server, you can find it in RACache\Data\XXX-Rich.txt where XXX is the game ID. The Rich Presence Monitor (openable from the RetroAchievements menu) reads this file and shows the current value every second while the window is open.
If you make changes to the XXX-Rich.txt file and reselect the menu option, it will read the new changes and allow you to immediately test them without applying the changes to the server. Continue to make changes and reselect the menu option until the script is behaving as you expect, then copy the contents to the server page to make it available to everyone else.
NOTE: The XXX-Rich.txt file is overwritten with the current server data each time the game is opened. As long as you still have the file open in an editor, you can always save your changes over the updated file after reopening the game.
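To make the rule from the Binary Coded Decimal section above concrete, here is a small Python sketch (not part of any RetroAchievements tooling) showing how each hex nibble is read as a decimal digit:

def bcd_decode(raw, num_bytes=1):
    """Read each hex nibble of `raw` as a decimal digit, high nibble first."""
    value = 0
    for shift in reversed(range(num_bytes * 2)):
        nibble = (raw >> (4 * shift)) & 0xF
        value = value * 10 + nibble
    return value

print(bcd_decode(0x86))        # -> 86, as described above
print(bcd_decode(0x1234, 2))   # -> 1234 for a 16-bit BCD value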
http://docs.retroachievements.org/Rich-Presence/
2018-10-15T15:08:30
CC-MAIN-2018-43
1539583509326.21
[array(['https://i.imgur.com/E5097sz.png', 'Example of RP in action'], dtype=object) array(['https://i.imgur.com/XkCZoLG.png', 'Rich Presence Monitor'], dtype=object) array(['https://i.imgur.com/sqxOjyL.png', 'Dev click'], dtype=object) array(['https://i.imgur.com/e7qoaNx.png', 'RP shown'], dtype=object)]
docs.retroachievements.org