Factory Tab
Switch to this page if you want to generate random phrases from scratch, using one of several KIM Factories, or generate new variations of existing phrases that have been generated previously.
Select a Factory from the pop-up menus in order to generate new Phrases, or select an already generated phrase to recall its original factory settings. Every generated phrase is basically a new Factory, of which you can generate more variations, or alter its settings and spawn off a new series of different phrases.
- Bundles
- Select from this pop-up menu a KIM Bundle of factories.
- Factories
- Select from this pop-up menu a KIM Factory from the current bundle.
- Factory User Interface
- Each Factory has a distinct user interface that reflects its structure. Think of it as a pre-wired modular synthesizer. Navigate the structure of the Factory using the tabs.
- Tip: Open the Help Browser for information on each Factory.
- Generates one or ten new phrases based on the current settings.
- Marks a phrase as a favorite. With this, you can eventually clean up a pool to only retain the phrases you marked this way.
- Deletes the currently selected phrase.
Important: Every generated phrase retains the settings used to generate it, so you can return to it later and continue to spawn off new variations. A carefully configured generated phrase therefore is a new Factory in its own right.
Tip: Of course you can edit a generated phrase manually in any way. Phrase generation is only the first step meant to make it easier to come up with new phrases from scratch.
Note: This feature is available with the Pro edition.
Checkboxes and Multiselect can have multiple items selected, whereas Select and Radio Buttons only allow a single item selection.
Multiple Items¶
Fields where multiple items can be selected (Checkboxes and Multiselect) can be used as a tag pair to access each selected item.
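As an illustration (the field name my_field is a placeholder), the tag pair loops over each selected item:
{my_field} {item}<br> {/my_field}
Each {item} is replaced with the label of one selected option.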
Note
You can use a single variable for Checkboxes and Multiselect, e.g. {field_name}, and you will get a comma-separated list of the labels.
Single Items¶
Single-choice fields, such as Select and Radio Buttons, output the selected item's label with a single {field_name} variable. The markup parameter can also be used to render items as an HTML list, for example:
{field_name markup='ul'}
Which will render as
<ul> <li>Green</li> <li>Blue</li> <li>Orange</li> </ul>
Backspace Parameter¶
When used as a tag pair, the multi option fields can also take a backspace= parameter, which removes the given number of characters (including spaces and line breaks) from the end of the rendered output.
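For illustration (the field name my_field and the separator are placeholders), a tag pair that joins the items with a comma and trims the trailing separator might look like this:
{my_field backspace="2"}{item}, {/my_field}
Here backspace="2" strips the final ", " left over after the last item.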
Running your Program
This section will explain how to compile a C++ program that uses the Panda3D libraries. On Windows, this is done using the Microsoft Visual C++ compiler; on all other systems, this can be done with either the Clang or the GCC compiler. Choose the section that is appropriate for your operating system.
Compiling your Program with Visual Studio
The descriptions below assume Microsoft Visual Studio 2015, but they should also work for 2017, 2019 and 2022.
Setting up the project
When creating a new project in Visual Studio, be sure to select the template for a “Win32 Console application” under the Visual C++ templates category. We recommend disabling “Precompiled headers” for now. (Don’t worry, you can still change these things later.)
When you created your project, the first thing you’ll need to do is change “Debug” to “Release” below the menu bar, as shown in the image below. This is because the SDK builds of Panda3D are compiled in Release mode as well. The program will crash mysteriously if the setting doesn’t match up with the setting that was used to compile Panda3D. This goes for the adjacent platform selector as well; select “x64” if you use the 64-bit Panda3D SDK, and “x86” if you use the 32-bit version.
Now, open up the project configuration pages. Change the “Platform Toolset” in the “General” tab to “v140_xp” (if you wish your project to be able to work on Windows XP) or “v140”.
Furthermore, we need to go to C/C++ -> “Preprocessor Definitions” and remove
the
NDEBUG symbol from the preprocessor definitions. This was
automatically added when we switched to “Release” mode, but having this
setting checked removes important debugging checks that we still want to keep
until we are ready to publish the application.
Now we are ready to add the paths to the Panda3D directories. Add the following paths to the appropriate locations (replace the path to Panda3D with the directory you installed Panda3D into, of course):
Include Directories
C:\Panda3D-1.10.11-x64\include
Library Directories
C:\Panda3D-1.10.11-x64\lib
Then, you need to add the appropriate Panda3D libraries to the list of “Additional Dependencies” your project should be linked with. The exact set to use varies again depending on which features of Panda3D are used. This list is a reasonable default set:
libp3framework.lib libpanda.lib libpandaexpress.lib libp3dtool.lib libp3dtoolconfig.lib libp3direct.lib
This should be enough to at least build the project. Press F7 to build your project and start the compilation process. You may see several C4267 warnings; these are harmless, and you may suppress them in your project settings.
There is one more step that needs to be done in order to run the project, though. We need to tell Windows where to find the Panda3D DLLs when we run the project from Visual Studio. Go back to the project configuration, and under “Debugging”, open the “Environment” option. Add the following setting, once again adjusting for your specific Panda3D installation directory:
PATH=C:\Panda3D-1.10.11-x64\bin;%PATH%
Now, assuming that the project built successfully, you can press F5 to run the program. Of course, not much will happen yet, because we don’t have any particularly interesting code added. The following tutorial will describe the code that should be added to open a Panda3D window and start rendering objects.
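If you want something to test the build setup with right away, the sketch below is a minimal program of the kind the tutorial builds up to; it only opens an empty window and runs the main loop, and the window title is an arbitrary placeholder.

#include "pandaFramework.h"
#include "pandaSystem.h"

int main(int argc, char *argv[]) {
    // Open the framework and create a single window
    PandaFramework framework;
    framework.open_framework(argc, argv);
    framework.set_window_title("My Panda3D Window");  // placeholder title
    WindowFramework *window = framework.open_window();
    if (window != nullptr) {
        // Model loading and scene setup would go here
        framework.main_loop();  // runs until the window is closed
    }
    framework.close_framework();
    return 0;
}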
Compiling your Program with GCC or Clang
On platforms other than Windows, we use the GNU compiler or a compatible
alternative like Clang. Most Linux distributions ship with GCC out of the
box; some provide an easily installable package such as
build-essential
on Ubuntu or the XCode Command-Line Tools on macOS. To obtain the latter, you
may need to register for an account on the
Apple developer site.
Having these two components, we can proceed to compile. The first step is to
create an .o file from our .cxx file. We need to specify the location of the
Panda3D include files. Please change the paths in these commands to the
appropiate locations. If using clang, use
clang++ instead of
g++.
g++ -c filename.cxx -o filename.o -std=gnu++11 -O2 -I{panda3dinclude}
You will need to replace
{panda3dinclude} with the location of the
Panda3D header files. On Linux, this is likely
/usr/include/panda3d/.
On macOS, this will be in
/Library/Developer/Panda3D/include/ in Panda3D
1.10.5 and higher or
/Developer/Panda3D/include/ in older versions.
To generate an executable, you can use the following command:
g++ filename.o -o filename -L{panda3dlibs} -lp3framework -lpanda -lpandafx -lpandaexpress -lp3dtoolconfig -lp3dtool -lp3direct
As above, change {panda3dlibs} to point to the Panda3D libraries. On Linux
this will be
/usr/lib/panda3d or
/usr/lib/x86_64-linux-gnu/panda3d,
whereas on macOS it will be
/Library/Developer/Panda3D/lib or
/Developer/Panda3D/lib, depending on your exact version of Panda3D.
Here is an equivalent SConstruct file, organized for clarity:
pandaInc = '/usr/include/panda3d' pandaLib = '/usr/lib/panda3d' Program('filename.cpp', CCFLAGS=['-fPIC', '-O2', '-std=gnu++11'], CPPPATH=[pandaInc], LIBPATH=pandaLib, LIBS=[ 'libp3framework', 'libpanda', 'libpandafx', 'libpandaexpress', 'libp3dtoolconfig', 'libp3dtool', 'libp3direct'])
To run your newly created executable, type:
./filename
If it runs, congratulations! You have successfully compiled your own Panda3D program!
Open TFS-Controlled Project
Test Studio provides built-in integration with TFS source control.
The steps below guide you through opening a project that is stored on the TFS server and creating a local copy of it on your hard drive.
In general, Test Studio does not check out a project from source control when the project opens. However, if one or more project files are out of date (for example, they were created by an older version of Test Studio), Test Studio will attempt to check them out for an update. This will continue until an up-to-date version of the files is checked into source control.
Breaking: #96044 - Harden method signature of logicalAnd() and logicalOr()¶
See forge#96044
Description¶
The method signature of
\TYPO3\CMS\Extbase\Persistence\QueryInterface::logicalAnd()
and
\TYPO3\CMS\Extbase\Persistence\QueryInterface::logicalOr() changed.
As a consequence the method signature of
\TYPO3\CMS\Extbase\Persistence\Generic\Query::logicalAnd()
and
\TYPO3\CMS\Extbase\Persistence\Generic\Query::logicalOr() changed as well.
Both methods no longer accept an array as the first parameter. Furthermore, both
methods have two mandatory
\TYPO3\CMS\Extbase\Persistence\Generic\Qom\ConstraintInterface
parameters to form a logical conjunction on.
Both methods do indeed accept an infinite number of further constraints.
The
logicalAnd() method does now reliably return an instance of
\TYPO3\CMS\Extbase\Persistence\Generic\Qom\AndInterface instance
while the
logicalOr() method returns a
\TYPO3\CMS\Extbase\Persistence\Generic\Qom\OrInterface
instance.
Impact¶
This change impacts all usages of said methods with:
- either just one array parameter containing all constraints
- or just a single constraint passed
An array is no longer accepted as first parameter because it does not guarantee the minimum number (2) of constraints is given.
Just one constraint is no longer accepted because in that case, the
incoming constraint would have simply been returned which is not compatible
with returning either a
AndInterface or a
OrInterface but just
a
ConstraintInterface.
Affected Installations¶
All installations that passed all constraints as array or that passed just one constraint.
Migration¶
The migration is the same for
logicalAnd() and
logicalOr()
since their param signature is the same. The upcoming example will show a
migration for a
logicalAnd() call.
Example:
$query = $this->createQuery(); $query->matching($query->logicalAnd([ $query->equals('propertyName1', 'value1'), $query->equals('propertyName2', 'value2'), $query->equals('propertyName3', 'value3'), ]));
In this case an array is used as one and only method argument. The migration is easy and quickly done. Simply don't use an array:
$query = $this->createQuery(); $query->matching($query->logicalAnd( $query->equals('propertyName1', 'value1'), $query->equals('propertyName2', 'value2'), $query->equals('propertyName3', 'value3'), ));
Things become a little more tricky as soon as the number of constraints is below 2 or unknown before runtime.
Example:
$constraints = []; if (...) { $constraints[] = $query->equals('propertyName1', 'value1'); } if (...) { $constraints[] = $query->equals('propertyName2', 'value2'); } $query = $this->createQuery(); $query->matching($query->logicalAnd($constraints));
In this case there needs to be a distinction of number of constraints in the code base:
$constraints = []; if (...) { $constraints[] = $query->equals('propertyName1', 'value1'); } if (...) { $constraints[] = $query->equals('propertyName2', 'value2'); } $query = $this->createQuery(); $numberOfConstraints = count($constraints); if ($numberOfConstraints === 1) { $query->matching(reset($constraints)); } elseif ($numberOfConstraints >= 2) { $query->matching($query->logicalAnd(...$constraints)); } | https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/12.0/Breaking-96044-HardenMethodSignatureOfLogicalAndAndLogicalOr.html | 2022-09-24T23:41:48 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.typo3.org |
Configuring the Internal Image Registry
Image Registry¶
Prerequisites¶
- Cluster administrator permissions.
- A cluster on bare metal.
- Provision persistent storage for your cluster, such as Red Hat OpenShift Container Storage. To deploy a private image registry, your storage must provide ReadWriteMany access mode.
- Must have "100Gi" capacity.
To start the image registry, you must change the ManagementState in the Image Registry Operator configuration from Removed to Managed. Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
$ oc edit configs.imageregistry/cluster

apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  ...
spec:
  ...
  managementState: Managed
  storage:
    pvc:
      claim:
We want to enable the image pruner to occasionally prune images in the registry.
$ oc edit imagepruner.imageregistry/cluster

apiVersion: imageregistry.operator.openshift.io/v1
kind: ImagePruner
metadata:
  name: cluster
spec:
  suspend: false
  ...
Check the status of the deployment:
oc get clusteroperator image-registry | https://docs.infra.centos.org/operations/ci/configuring_image_registry/ | 2022-09-24T22:44:36 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.infra.centos.org |
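If you also want to confirm that the registry pods themselves are running, a quick check (assuming the default openshift-image-registry namespace) is:

oc get pods -n openshift-image-registry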
Updated on September 8, 2022
Get PALM tokens
The PALM token is the native token for the ecosystem and is required to cover transaction (gas) costs on the network.
Use the request form
Users can request testnet or mainnet PALM by completing this form:
Use the bridge
The Palm bridge can be used to transfer tokens between the Palm network and Ethereum. When assets are transferred from Ethereum to the Palm network, the bridge will top up the depositor’s account with a small amount of PALM.
Palm provides the following environment-specific bridges:
See how to use the Palm Mainnet bridge.
Use the faucet
Palm partners have access to faucet APIs to top up accounts with PALM. These faucets are available across Palm network environments. For more information on getting access to these faucet APIs, please contact us. | https://docs.palm.io/Get-Started/Tokens/ | 2022-09-24T23:51:47 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.palm.io |
M-Stream™ File Transfer Acceleration
last updated on: April 21, 2022
Transferring large volumes of data can be very difficult. Historically it would take users hours or even days to move very large files. The file would be first downloaded from the source system onto their local desktop (if there’s enough room). Only when that’s successful can an upload to the target server begin and fingers crossed an unreliable network doesn’t derail the transfer.
M-Stream™ File Transfer Acceleration, available with the Enterprise File Fabric™ platform, allows users to transfer files much more quickly. M-Stream transfers files in parallel streams, speeding up large file downloads, uploads, and movement between storage tiers.
M-Stream File Transfer Acceleration is activated when moving and copying large files, folders and objects. It's supported both in desktop tools (uploading and downloading) and in the appliance itself (server-to-server copies). It may also be used by applications using the Enterprise File Fabric API.
Configuration
The minimum file size and number of threads used for M-Stream are configurable. The number of segments may be determined based on the total size of the file and the specific target provider.
To change the minimum file size and number of threads, log in as the Appliance Administrator (appladmin) and navigate to Settings > Site Functionality.
The setting M-Stream minimum file size is the size in megabytes above which a file will split into segments for copy/move operations using multiple system threads. For best performance, we recommend a minimum file size of 100 MB. A value of '0' disables M-Stream.
The setting M-Stream number of threads controls the number of threads that will be used during an M-Stream copy or move operation. For best performance, we recommend one thread per CPU core. The default is 2 and the maximum number of threads that can be created is 10.
Both “M-Stream™ number of threads” and “M-Stream™ upload number of threads” are internally capped at 10. If you set a larger value, the effective value will be 10.
Provider Support
M-Stream File Transfer Acceleration is supported for storage providers that support multipart uploading and range reads (S3 and OpenStack Swift APIs) as well as file systems that support random I/O such as block-storage providers like CIFS and NFS.
Chunking must be disabled for the target provider. (An Organization Administrator can change from the Dashboard via the Provider Settings.)
MPU/DLO/SLO Requirements
For the following provider types to support M-Stream multi-stream uploads of large files, whether directly or as the destination of copy or move operations, MPU must be enabled in the provider settings.
- OpenS3
- Cloudian
- Scality
- Mini Object Storage
- ActiveScale
- Dell EMC ECS S3
- Dell EMC Elastic Cloud Storage
- Backblaze B2 Cloud Storage
- Caringo Swarm (DataCore)
- Ceph
- Cleversafe
- HostingSolutions.it
- Igneous
- Leonovus
For Swift/OpenStack providers to support M-Stream multi-stream uploads of large files, whether directly or as the destination of copy or move operations, DLO or SLO must be enabled in the provider settings.
See Multipart Uploading (MPU, DLO) for more information on configuring storage providers for M-Stream. | https://docs.storagemadeeasy.com/mstream | 2022-09-24T23:33:30 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['/_media/mstream/m-stream-graphic.png', None], dtype=object)] | docs.storagemadeeasy.com |
Core Payment Options
PayPal
- Enter your email address in the PayPal Email field, which is the most important information you need to configure before taking payments.
If you don't have an account with PayPal, be sure to sign up for a business account - It's free.
Sandbox
You can enable PayPal Sandbox to test your checkout process during your store's development. With sandbox enabled, no payments/money is taken. You can read more on the PayPal developer website here.
Once you've tested your checkout process with the Sandbox enabled and are confident with the results, be sure to disable the Sandbox before launching your site so you can accept payments from your customers.
Shipping Options
There are two shipping options:
- Send shipping details to PayPal - You can opt to have shipping addresses sent to PayPal to create shipping labels instead of billing addresses.
- Address override option - While we recommend you keep it disabled, it can be used to prevent address information from being changed.
Note: When selecting the option to send Shipping details, PayPal verifies the address and can often reject the customer if it doesn't fully recognise the address.
Advanced Options
Under the Advanced Options, there are a few fields you can use as needed.
- Receiver Email can be used if the email address that receives payments is different from the primary email address on your PayPal account.
- Invoice Prefix to add a prefix to PayPal invoices. This is helpful if you're using the same PayPal account for more than one website/store.
- Payment Action lets you choose to capture funds immediately or only authorise.
- Page Style is an optional setting where you can enter the name of a custom page from your PayPal account.
- PayPal Identity Token can be used to verify payments if you have any IPN issues.
API Credentials
Three fields to paste your PayPal API credentials into: API Username, API Password, and API Signature. Learn how to get that information from PayPal here.
Data passed to PayPal
The PayPal gateway passes individual line items to PayPal (product name, price, and quantity) unless:
- Your prices are defined as including tax.
- You have more than nine line items, including shipping - PayPal only supports up to nine items.
This is to prevent rounding errors and to ensure correct totals are charged. When line items are not sent, the items are grouped and named "Order #x".
PayPal IPN URL
It's good practice to set up your PayPal IPN URL. In your PayPal Business account, go to: "Profile > Profile and Settings > My Selling Tools". Click on "Instant Payment Notifications" to set your URL.
Use the following, replacing example.com with your own URL:
Auto-Return
You can set up auto-return in your PayPal account, which will take customers to a receipt page. For example, use the following URL and replace example.com with your own URL:
example.com/checkout/order-received/
Regardless of this setting, it redirects dynamically to the correct receipt page.
Add ?utm_nooverride=1 to the end of your URL to ensure that transactions (i.e. conversions) are credited to the original traffic source, rather than PayPal.
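For example, combined with the auto-return URL above (replace example.com with your own URL):
example.com/checkout/order-received/?utm_nooverride=1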
I have pending orders, but no payment was received.
If the customer abandons the order in PayPal (and not your website), the order shows as Pending (unpaid). No action is necessary.
When the hold stock time is reached, the order will be automatically cancelled.
Do I need an SSL certificate?
The payment is made offsite on PayPal's website and not your checkout. Adding an SSL certificate is optional but definitely encouraged. More and more an SSL certificate is becoming a ranking factor in search results, and it keeps any information transmitted to and from your website secure.
I'm getting payments, but orders are still pending.
In this case the PayPal IPN is failing. Open up a support ticket.
Why do I get an Internal Server Error?
If you see an "Internal Server Error" message after hitting the purchase button, the email address you entered in the PayPal settings is incorrect.
What is PayPal IPN?
PayPal Instant Payment Notifications (PayPal IPN) tells your store that payment has been successful (or not). To learn more about how to set this up on PayPal, click here. There can only be one IPN URL set per PayPal account, so you need to use an account that is not already using an IPN for any other purpose.
Can I use the same PayPal account for more than one website?
We use an "Invoice" system. There is a setting that "blocks duplicate invoices". With two websites using invoices, it is inevitable that there will be duplicate invoice numbers. By de-selecting that option in PayPal, the invoices have no issue. Please check that the Invoice Prefix in your website's store settings (Under Checkout > PayPal) is different for each website.
Do my customers need a PayPal account?
PayPal has an option that allows customers to checkout without needing an account. The setting is called "PayPal account optional", and can be found inside your PayPal account in the My Profile section. Go to "My Selling Tools > Website Preferences", and turn on PayPal account optional. To learn more, click here.
Is PayPal Seller Protection in effect?
Yes, if you use your Shipping address. No, if you use your Billing address (default). The PayPal API does not recognise more than one address.
Message says "Seller only accepts payments from unencrypted payments", why?
This message is from PayPal. The error displays when your PayPal account profile is set to only accept payments from "encrypted" buttons but your item button code is not encrypted. This interrupts the payment process and displays the error message. To turn off this option, head to your PayPal account profile section, and go to "Selling Preferences > Website Payment Preferences". In the "Encrypted Website Payments" section, select "off".
Message says "This invoice has already been paid", why?
Order numbers may not be unique if you're running multiple stores or use your PayPal account for other things. To avoid this issue, head to "Store Settings > Checkout > PayPal" in your admin dashboard, and set a unique prefix for your store's invoices.
Why don't customers see a link to download virtual/downloadable products after paying and getting redirected back to my website?
This could be a sign that the IPN isn't working correctly with your website. A possible workaround is to enable Payment Data Transfer (PDT). Within your PayPal account settings, head to "Selling Preferences > Website Payment Preferences", turn on "Auto Return", add a return URL, and turn on "Payment Data Transfer". Once you have saved the settings, navigate back to the Website Settings page and you will see the PDT Identity Token. You can paste the PDT Identity Token in your admin panel under "Store Settings > Checkout > PayPal" in the "PayPal Identity Token" field.
Cheque
Overview
The Cheque gateway is a payment gateway that doesn't require payment to be made online. Orders made using Cheque are set to "On Hold".
You, as the store owner, should confirm that cheques have cleared before processing orders. It's important to verify that you are paid before shipping an order and marking it complete. For more info, see "Managing Orders".
Setup
In your admin panel, go to "Settings > Store Settings > Checkout > Cheque", and configure your settings.
- Enable/Disable
- Title - Choose the title shown to customers at checkout.
- Description - Add info which will be shown to customers if they choose the Cheque option at checkout.
- Instructions - Add instructions explaining to the customer how they can pay you by cheque.
Cash on Delivery
Overview
Cash on Delivery (COD) is a payment gateway that doesn’t require payment to be made online. Orders using Cash on Delivery are set to Processing until payment is made upon delivery of the order by you or your shipping method.
You, as the store owner, need to confirm payment was collected before marking orders complete. For more information, see "Managing Orders".
Setup
In your admin panel, go to "Settings > Store Settings > Checkout > Cash on Delivery", and configure your settings.
- Enable/Disable
- Title - Choose the title shown to customers at checkout.
- Description - Add info which will be shown to customers if they choose the Cash on Delivery option at checkout.
- Instructions - Explain how to pay via Cash on Delivery.
- Enable for shipping methods - Choose which shipping methods will offer Cash on Delivery.
- Enable for virtual orders - Tick the box to allow COD for virtual products.
BACS (Bank Transfer)
Overview
BACS (Bankers' Automated Clearing Services), commonly known as direct bank transfer or wire transfer, is a payment gateway that doesn't require payment to be made online.
Orders made using BACS are set as "On Hold". You, as the store owner, need to confirm payment has cleared before shipping or marking orders "Processing" or "Complete". For more information, see "Managing Orders".
Setup
In your admin panel, go to "Settings > Store Settings > Checkout > BACS", and configure your settings.
- Enable/Disable
- Title - Choose the title shown to customers at checkout.
- Description - Add info which will be shown to customers if they choose the BACS option at checkout.
- Instructions - Add instructions explaining to the customer how to pay via BACS to your bank account(s).
- Account Details - Enter account name and number, bank name, routing number, IBAN and/or SWIFT/BIC numbers shown to customers on the Order Recieved page and in order emails after checking out. | https://docs.thatwebsiteguy.net/core-payment-options/ | 2022-09-24T22:52:29 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.thatwebsiteguy.net |
Introduction
This sample demonstrates the PayloadFactory mediator as an alternative to the XSLT mediator for performing transformations, which is demonstrated in Sample 8: Introduction to Static and Dynamic Registry Resources and Using XSLT Transformations. In this sample, the ESB implements the Message Translator EIP and acts as a translator between the client and the back-end server. The client sends a message in this format:
<p:getquote xmlns: <p:request> <p:code>IBM</p:code> </p:request> </p:getquote>
But the service expects the message in this format:
<p:getquote xmlns: <p:request> <p:symbol>IBM</p:symbol> </p:request> </p:getquote>
Similarly, the service will send the response in this format:
<m:checkpriceresponse xmlns: <m:symbol>IBM</m:symbol> <m:last>84.76940826343248</m:last> </m:checkpriceresponse>
But the client expects the response in this format:
<m:checkpriceresponse xmlns: <m:code>IBM</m:code> <m:price>84.76940826343248</m:price> </m:checkpriceresponse>
To resolve this discrepancy, we will use the PayloadFactory mediator to transform the message into the request format required by the service and the response format required by the client.
Prerequisites
Refer to Prerequisites section in ESB Samples Setup page.
Building the Sample
1. Start the ESB with sample 17, using the configuration shown below:
<definitions xmlns=""> <sequence name="main"> <in> <!-- using payloadFactory mediator to transform the request message --> <payloadFactory> <format> <m:getQuote xmlns: <m:request> <m:symbol>$1</m:symbol> </m:request> </m:getQuote> </format> <args> <arg xmlns: </args> </payloadFactory> </in> <out> <!-- using payloadFactory mediator to transform the response message --> <payloadFactory> <format> <m:CheckPriceResponse xmlns: <m:Code>$1</m:Code> <m:Price>$2</m:Price> </m:CheckPriceResponse> </format> <args> <arg xmlns: <arg xmlns: </args> </payloadFactory> </out> <send/> < multiple modes. For instructions on this sample client and its operation modes, refer to Stock Quote Client.
1. Run the custom quote client as '
ant stockquote -Dmode=customquote ...' from
<ESB_HOME>/samples/axis2Client directory.
ant stockquote -Daddurl= -Dtrpurl= -Dmode=customquote
2. Analyze the ESB's debug log output. The incoming message is transformed by the PayloadFactory mediator.
gamepadshift – Tracks button presses read through a shift register.¶
Note
gamepadshift is deprecated in CircuitPython 7.0.0 and will be removed in 8.0.0.
Use
keypad instead.
Available on these boards
- Adafruit EdgeBadge
- Adafruit PyGamer
- Adafruit Pybadge
- The Open Book Feather
The button presses are accumulated until the get_pressed method is called, at which point the button state is cleared, and the new button presses start to be recorded.
Only one gamepadshift.GamePadShift may be used at a time.
get_pressed(self) → int¶
Get the status of buttons pressed since the last call and clear it.
Returns an 8-bit number, with bits that correspond to buttons, which have been pressed (or held down) since the last call to this function set to 1, and the remaining bits set to 0. Then it clears the button state, so that new button presses (or buttons that are held down) can be recorded for the next call. | https://circuitpython.readthedocs.io/en/7.0.x/shared-bindings/gamepadshift/index.html | 2021-10-16T08:30:10 | CC-MAIN-2021-43 | 1634323584554.98 | [] | circuitpython.readthedocs.io |
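A small usage sketch for a PyBadge/PyGamer-style board is shown below; the shift-register pin names (board.BUTTON_CLOCK, board.BUTTON_OUT, board.BUTTON_LATCH) and the bit mask values are assumptions that vary by board, so check your board's documentation.

import board
import digitalio
from gamepadshift import GamePadShift

# Bit masks for the buttons (assumed PyBadge-style layout - check your board)
BUTTON_B = 1
BUTTON_A = 2
BUTTON_START = 4
BUTTON_SELECT = 8

# The shift register is driven by three pins: clock, data out and latch
pad = GamePadShift(digitalio.DigitalInOut(board.BUTTON_CLOCK),
                   digitalio.DigitalInOut(board.BUTTON_OUT),
                   digitalio.DigitalInOut(board.BUTTON_LATCH))

while True:
    pressed = pad.get_pressed()  # also clears the accumulated state
    if pressed & BUTTON_A:
        print("A pressed")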
.
Features
Building blocks give you more control to quickly create and launch any kind of website you want!
GuTemplate uses default Gutenberg blocks & some custom blocks that will help you create the most incredible website!
100% compatibility
The plugin is fully compatible with other Gutenberg plugins and themes.
There are 3 pro custom blocks available.
The GuTemplate Library comes with the following templates:
- Headings – 15 types of headings
- Columns – 4 types of columns
- Icon Boxes – 12 types of icon boxes
- Pricing Tables – 6 types of pricing tables
- Support Team – 4 types of support team boxes
- Images gallery – 2 types of images gallery
- Buttons – 4 types of custom buttons
Full Features List:
- Easy Install
- Online Documentation & How to Videos
- Works with any WordPress Theme
- 6 icon packs available – Iconic, Brands, Icon Moon free, Icon Moon Pro, Typicons, Linecons, Material Icons & Font Awesome
- Fully customizable icons
- Icon styles – Color & Size
- Background Style – Background Color, Opacity & Border Radius
- Custom Margins, Paddings, Shadow, Border
- Icon Alignment – Align to left, right or center
Budgeting for projects in Cerebro is a system to plan expenses for a specific task. You can specify budget for any task and keep records of related expenditures. These properties are summed up in the hierarchy of tasks. So, the budget of the project is equal to the sum of the budgets of all the tasks within. Costs are calculated similarly.
In the Task properties there is a Budget section, where you can specify the value of the tasks and maintain records of expenditure
Budget of a task is represented in Budget field, its value represented in conditional units. The Operating costs cell indicates the amount of funds spent on the current task.
The cell Total represents the sum of all planned and current expenditures, respectively.
To receive a report on the costs, press the "+" on the right side of the Budget panel:
After that, the panel will have additional tools for writing expense reports, i.e. compiling records of payments. The table displays a list of all payments related to the task.
Payment creation fields are located below the table. They are: Payment cost, date/time and comments. Once you fill out these fields, click the Send payment button.
The sum of all payments is shown in the Operating costs box, excluding payments with the canceled status. If you want to cancel a payment record, click the right mouse button and select Cancel payment from the menu. You can also remove a payment with the Delete button.
In Navigator, budgeting is represented by three columns - budget, costs and balance. These values are calculated as follows:
budget - planned total budget of a task and all its subtasks;
costs - the amount of current expenses for a task and all its subtasks;
balance - the difference between the budget and costs.
Configuration input ¶
Input definition sources ¶
ECS Files Composer aims to provide a lot of flexibility to users in how they want to configure the job definition. The inspiration for the files input schema comes from AWS CloudFormation ConfigSets files, which can be defined in JSON or YAML.
So in that spirit, so long as the file can be parsed down into an object that complies to the JSON Schema , the source can be varied.
From environment variable ¶
The primary way to override configuration on the fly with containers is either change ENTRYPOINT/CMD or environment variables.
So in that spirit, you can define a specific environment variable or simply use the default one, ECS_CONFIG_CONTENT.
You can do the JSON string encoding yourself, or more simply you could do that in docker-compose, as follows
version: "3.8" services: files-sidecar: environment: AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds" AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION:-eu-west-1} ECS_CONFIG_CONTENT: | files: /opt/files/test.txt: content: >- test from a yaml raw content owner: john group: root mode: 600 /opt/files/aws.template: source: S3: BucketName: ${BUCKET_NAME:-sacrificial-lamb} Key: aws.yml /opt/files/ssm.txt: source: Ssm: ParameterName: /cicd/shared/kms/arn commands: post: - file /opt/files/ssm.txt /opt/files/secret.txt: source: Secret: SecretId: GHToken
From JSON or YAML file ¶
If you prefer to use ECS Files Composer as a CLI tool, or simply to test (don’t forget about IAM permissions!) the configuration itself, you can write the configuration into a simple file.
So, you could have the following config file for the execution
files:
  /tmp/test.txt:
    content: >-
      test from a yaml raw content
    owner: john
    group: root
    mode: 600
  /tmp/aws.template:
    source:
      S3:
        BucketName: ${BUCKET_NAME:-sacrificial-lamb}
        Key: aws.yml
  /tmp/ssm.txt:
    source:
      Ssm:
        ParameterName: /cicd/shared/kms/arn
    commands:
      post:
        - file /tmp/ssm.txt
  /tmp/secret.txt:
    source:
      Secret:
        SecretId: GHToken
  /tmp/public.json:
    source:
      Url:
        Url:
  /tmp/aws.png:
    source:
      Url:
        Url:
And run
ecs_files_composer -f test.yaml
From AWS S3 / SSM / SecretsManager ¶
This allows the ones who might need to generate the job instruction/input through CICD and store the execution file into AWS services.
Hint
If you need to retrieve the configuration file from another account, you can specify a Role ARN to use.
Hint
When running on ECS and storing the above configuration, you can use AWS ECS Task Definition Secrets which creates an environment variable for you. Therefore, you could just indicate to use that.
JSON Schema ¶
The input for ECS Files Composer has to follow the JSON Schema below.
{ "$schema": "", "type": "object", "typeName": "input", "description": "Configuration input definition for ECS Files Composer", "required": [ "files" ], "properties": { "files": { "type": "object", "uniqueItems": true, "patternProperties": { "^/[\\x00-\\x7F]+$": { "$ref": "#/definitions/FileDef" } } }, "certificates": { "type": "object", "additionalProperties": false, "properties": { "x509": { "uniqueItems": true, "patternProperties": { "^/[\\x00-\\x7F]+$": { "$ref": "#/definitions/X509CertDef" } } } } }, "IamOverride": { "type": "object", "$ref": "#/definitions/IamOverrideDef" } }, "definitions": { "FileDef": { "type": "object", "additionalProperties": true, "properties": { "path": { "type": "string" }, "content": { "type": "string", "description": "The raw content of the file to use" }, "source": { "$ref": "#/definitions/SourceDef" }, "encoding": { "type": "string", "enum": [ "base64", "plain" ], "default": "plain" }, "group": { "type": "string", "description": "UNIX group name or GID owner of the file. Default to root(0)", "default": "root" }, "owner": { "type": "string", "description": "UNIX user or UID owner of the file. Default to root(0)", "default": "root" }, "mode": { "type": "string", "description": "UNIX file mode", "default": "0644" }, "context": { "type": "string", "enum": [ "plain", "jinja2" ], "default": "plain" }, "ignore_if_failed": { "type": "boolean", "description": "Whether or not the failure to retrieve the file should stop the execution", "default": false }, "commands": { "type": "object", "properties": { "post": { "$ref": "#/definitions/CommandsDef", "description": "Commands to run after the file was retrieved" }, "pre": { "$ref": "#/definitions/CommandsDef", "description": "Commands executed prior to the file being fetched, after `depends_on` completed" } } } } }, "SourceDef": { "type": "object", "properties": { "Url": { "$ref": "#/definitions/UrlDef" }, "Ssm": { "$ref": "#/definitions/SsmDef" }, "S3": { "$ref": "#/definitions/S3Def" }, "Secret": { "$ref": "#/definitions/SecretDef" } } }, "UrlDef": { "type": "object", "properties": { "Url": { "type": "string", "format": "uri" }, "Username": { "type": "string" }, "Password": { "type": "string" } } }, "SsmDef": { "type": "object", "properties": { "ParameterName": { "type": "string" }, "IamOverride": { "$ref": "#/definitions/IamOverrideDef" } } }, "SecretDef": { "type": "object", "required": [ "SecretId" ], "properties": { "SecretId": { "type": "string" }, "VersionId": { "type": "string" }, "VersionStage": { "type": "string" }, "IamOverride": { "$ref": "#/definitions/IamOverrideDef" } } }, "S3Def": { "type": "object", "required": [ "BucketName", "Key" ], "properties": { "BucketName": { "type": "string", "description": "Name of the S3 Bucket" }, "BucketRegion": { "type": "string", "description": "S3 Region to use. 
Default will ignore or retrieve via s3:GetBucketLocation" }, "Key": { "type": "string", "description": "Full path to the file to retrieve" }, "IamOverride": { "$ref": "#/definitions/IamOverrideDef" } } }, "IamOverrideDef": { "type": "object", "description": "When source points to AWS, allows to indicate if another role should be used", "properties": { "RoleArn": { "type": "string" }, "SessionName": { "type": "string", "default": "S3File@EcsConfigComposer", "description": "Name of the IAM session" }, "ExternalId": { "type": "string", "description": "The External ID to use when using sts:AssumeRole" }, "RegionName": { "type": "string" }, "AccessKeyId": { "type": "string", "description": "AWS Access Key Id to use for session" }, "SecretAccessKey": { "type": "string", "description": "AWS Secret Key to use for session" }, "SessionToken": { "type": "string" } } }, "CommandsDef": { "type": "array", "description": "List of commands to run", "items": { "type": "string", "description": "Shell command to run" } }, "X509CertDef": { "type": "object", "additionalProperties": true, "required": [ "certFileName", "keyFileName" ], "properties": { "dir_path": { "type": "string" }, "emailAddress": { "type": "string", "format": "idn-email", "default": "[email protected]" }, "commonName": { "type": "string", "format": "hostname" }, "countryName": { "type": "string", "pattern": "^[A-Z]+$", "default": "AW" }, "localityName": { "type": "string", "default": "AWS" }, "stateOrProvinceName": { "type": "string", "default": "AWS" }, "organizationName": { "type": "string", "default": "AWS" }, "organizationUnitName": { "type": "string", "default": "AWS" }, "validityEndInSeconds": { "type": "number", "default": 8035200, "description": "Validity before cert expires, in seconds. Default 3*31*24*60*60=3Months" }, "keyFileName": { "type": "string" }, "certFileName": { "type": "string" }, "group": { "type": "string", "description": "UNIX group name or GID owner of the file. Default to root(0)", "default": "root" }, "owner": { "type": "string", "description": "UNIX user or UID owner of the file. Default to root(0)", "default": "root" } } } } }
AWS IAM Override ¶
ECS Files Composer uses AWS Boto3 as the SDK. So wherever you are running it, the SDK will follow the priority chain of credentials in order to figure out which to use.
In the case of running it on AWS ECS, your container will have an IAM Task Role associated with it. You are responsible for configuring the permissions you want to give to your service.
The IamOverride definition allows you to define whether the credentials used by the tool should be used to acquire temporary credential by assuming another role.
This can present a number of advantages, such as retrieving files from another AWS Account than the one you are currently using to run the application.
IAM Override Priority ¶
When building the boto3 session to use, by default the boto3 SDK will pick the first valid set of credentials.
If you specify the IamOverride properties at the root level , as follows
files:
  /path/to/file1:
    source:
      S3:
        BucketName: some-bucket
        Key: file.txt
IamOverride:
  RoleArn: arn:aws:iam::012345678901:role/role-name
Then all subsequent API calls to AWS will be made by assuming this IAM role.
If however you needed to change the IamOverride for a single file, or use two different profiles for different files, then apply the IamOverride at that source level, as follows.
files:
  /path/to/file1:
    source:
      S3:
        BucketName: some-bucket
        Key: file.txt
        IamOverride:
          RoleArn: arn:aws:iam::012345678901:role/role-name
  /path/to/file2:
    source:
      Ssm:
        ParameterName: /path/to/parameter
        IamOverride:
          RoleArn: arn:aws:iam::012345678901:role/role-name-2
  /path/to/file3:
    source:
      S3:
        BucketName: some-other-other-bucket
        Key: file.txt
In the above example,
/path/to/file1, assume role and use arn:aws:iam::012345678901:role/role-name
/path/to/file2, assume role and use arn:aws:iam::012345678901:role/role-name-2
/path/to/file3, use the default credentials found by the SDK
Attention
If the SDK cannot find the credentials and an AWS Source is defined, the program will throw an exception.
Environment Variables substitution ¶
ECS Files composer was created with the primary assumption that you might be running it in docker-compose or on AWS ECS. When you define environment variables in docker-compose or ECS Compose-X , the environment variables are by default interpolated.
docker compose allows you to not interpolate environment variables, but it is all or nothing, which might not be flexible enough.
So to solve that, ECS Files Composer's environment variable substitution uses the AWS CloudFormation !Sub function syntax to declare literal variables that shall not be interpolated.
So for example, if you have in docker-compose the following
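The original example is not reproduced here, but an illustrative snippet (the service name, bucket variable, and file path are made up) could look like this:

services:
  files-sidecar:
    environment:
      ENV: stg
      ECS_CONFIG_CONTENT: |
        files:
          /opt/app/config.json:
            source:
              S3:
                BucketName: ${CONNECT_BUCKET}
                Key: ${ENV}/config.json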
docker-compose and compose-x would interpolate the value for ${ENV} and ${CONNECT_BUCKET} from the execution context. But here, you defined that ENV value should be stg , and it will create an environment variable that gets exposed to the container at runtime.
To avoid this situation and have the environment variable interpolated at runtime within the context of your container, not the context of docker-compose or ECS Compose-X, simply write it with ${!ENV_VAR_NAME} .
So this would give us the following as a result.
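Continuing the illustrative snippet above, the literal form would be:

services:
  files-sidecar:
    environment:
      ENV: stg
      ECS_CONFIG_CONTENT: |
        files:
          /opt/app/config.json:
            source:
              S3:
                BucketName: ${CONNECT_BUCKET}
                Key: ${!ENV}/config.json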
When running, ECS Compose-X (or ECS Files Composer) will not interpolate the environment variable and remove the ! from the raw string and allow the environment variable name to remain intact once rendered. | https://docs.files-composer.compose-x.io/input.html | 2021-10-16T09:22:03 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.files-composer.compose-x.io |
The Pipe Bending Die Data node (the PipeBendingDieData sheet in the Pipe Bending Manufacturability Rules.xls workbook) defines die data for pipe bending machines.
Absolute Pipe Bend Radius (Optional)
Type the largest pipe bend radius for the die.
Default Minimum Tangent Length Between Bends (Optional)
Enter the minimum distance between two bends that the die can produce. You can enter zero. This value is used as the minimum tangent length between bends when you do not define the minimum tangent length in the minimum bend-to-bend tangent length data on the basis of the bend type.
Minimum Grip Length (Required)
Enter the minimum grip length for the die. You can enter zero length if applicable. This length overrides the applicable value from the Minimum Pipe Length rule.
Minimum Pull Length (Required)
Enter the minimum pull length for the die. You can enter zero length if applicable. This length overrides the applicable value from the Minimum Pipe Length rule.
Nominal Piping Diameter (Required)
Enter the die NPD.
Nominal Piping Diameter UOM (Required)
Specify the units of measurement, for example in or mm, for the Nominal Piping Diameter column.
Pipe Bend Radius Multiplier (Optional)
Type the pipe bend radius multiplier that applies to this die.
Pipe Bending Machine Name (Required)
Enter the bending machine code for which you are defining die data. The bending machines codes are defined in the Pipe Bending Machine Name column on the PipeBendingMachineData sheet. The sheet is located in the Pipe Bending Manufacturability Rules.xls workbook. | https://docs.hexagonppm.com/r/en-US/Intergraph-Smart-3D-Piping-Reference-Data/13/64526 | 2021-10-16T09:29:02 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.hexagonppm.com |
Column Advanced Tab - Positioning
This article will show you how to access and use the Positioning options on the column advanced tab.
1. Select the specific column you want to customize.
2. Click on Advanced Settings (cog icon), then on the Advanced Options button to show the OP3 builder sidebar.
3. On the OP3 builder sidebar, click the Positioning arrow to show the available options.
There you can customize the column gutter, margin, padding, and column height. Don't forget to save the changes when you are done editing.
What phone technology do you use?
The technology behind smrtPhone
Written by Tudor Deak
Updated over a week ago
smrtPhone is built using the Twilio platform. Twilio is simply the most advanced cloud communications platform on the web, and it is used in many modern VoIP systems such as CallRail, AirCall, Uber and many more.
You deploy three Global Manager nodes and join them to form a cluster.
Procedure
- Deploy Global Manager Nodes: You deploy three Global Manager nodes in the VMware Cloud Foundation management domain.
- Join Global Manager Nodes to Form a Cluster: Join the three Global Manager nodes you deployed in the VMware Cloud Foundation management domain to form a cluster.
- Create Anti-Affinity Rule for Global Manager Cluster in VMware Cloud Foundation: Create an anti-affinity rule to ensure that the Global Manager nodes run on different ESXi hosts. If an ESXi host is unavailable, the Global Manager nodes on the other hosts continue to provide support for the NSX management and control planes.
- Assign a Virtual IP Address to Global Manager Cluster: To provide fault tolerance and high availability to Global Manager nodes, assign a virtual IP address (VIP) to the Global Manager cluster in VMware Cloud Foundation.
VideoContentMetadata()
Initialize an instance of the current class using the new keyword along with this constructor.
Properties:
- title (string): Title of the video
- description (string): Video description
- duration (number): Duration of the video in milliseconds
- closedCaptions (object): Object containing information about the closed captions available
- contentType (string): A string indicating the type of content in the stream (ex. "video").
- hostedAtURL (string): The url the video is being hosted from
...
- Datasets: See View for Imported Datasets.
- This category includes reference datasets that were imported from other flows.
- Recipes: See View for Recipes.
- Outputs: See View for Outputs.
- References: This category covers reference datasets that are created from the recipes in your flow. See View for Reference Datasets.
Crucial Points to Keep in Mind When Selecting a Suitable Online Pharmacy
Choosing the right online pharmacy matters, and it is not easy to choose if you are new. Quacks have increased over the past years, and you can never be too careful. Family and friends are the closest to you, so they will focus on what will benefit you. That means you can trust them, because they will always tell you the truth and guide you towards getting the best services. Using them as a source of information can be helpful because they have already received the services and understand how the provider works. Consider looking for sources that are genuine to avoid falling into trouble.
Other sources that might help in this journey of obtaining information are the internet, magazines, and newspapers. You can rarely go wrong with these sources, because only the best providers are advertised there. You should also note that the internet has a wide range of information.
If you want to know what is latest, seek information from magazines and newspapers. Providers go through a lot of vetting before they are featured in magazines, which means that magazines only publish articles about professionals. A strong foundation is everything, especially for providers featured in magazines. You will also get to see pictures that may change your mind when it comes to choosing a particular provider. The process of being featured in magazines takes time. Read the following points to familiarize yourself with the provider that you should look for.
Look for a provider that is diverse in terms of the services offered and that can serve many clients, so that different clients can get various services in one place. It is also crucial to inquire about the time they have been in business, because experienced providers reliably meet deadlines. Frustrations will be the least of your worries because such a provider will serve you well.
What to Consider When Picking the Best Collection Service Provider
Choosing a solid collection service provider is very tough because there are a large number of them out there. To find the best collection service provider, it is essential to do thorough research. Take the time to guarantee that the provider's credentials are valid. You should use a collection service provider with strong experience. A collection service provider that has been offering services for at least five years ought to be the ideal one to employ. Time and cash are among the things that you will save once you do this.
Asking for recommendations is an additional tip that you cannot dismiss during your search for the best collection service provider. This tip will make your search simple.
Another significant point that you need to consider once you are out there to locate the best collection service provider is price. A dependable collection service provider to enlist is the one with the ability to offer you excellent services at a rate that you can afford.
Run an actor or task and retrieve data via API
Learn how to perform the most common integration workflow - run the job => wait => collect data. Integrate Apify actors with your applications.
The most common integration of Apify with your system is usually very simple. You need to run an actor or task, wait for it to finish, then collect the data. With all the features Apify provides, new users may not be sure of the standard/easiest way to implement this. So, let's dive in and see how you can do it.
Remember to check out our API documentation with examples in different languages and live API console. We also recommend testing the API with a nice desktop client like Postman.
There are 2 ways to use the API:
- Synchronously – Runs shorter than 5 minutes.
- Asynchronously – Runs longer than 5 minutes.
Run an actor or task
API endpoints for actors and tasks and their usage (for both sync and async) are essentially the same. If you are unsure of the difference between an actor and task, read about it in the tasks documentation. In brief, tasks are just pre-configured inputs for actors.
To run (or "call" in API language) an actor/task, you will need a few things:
Name or ID of the actor/task. The name is in the format username~actorName or username~taskName.
Your API token. You can find it on the Integrations page in the Apify app (make sure it does not leak anywhere!).
Possibly an input or other settings if you want to change the default values (e.g. memory or build). For tasks, there is no reason to provide a build since a task already has only one specific actor build.
Input JSON
Most actors would not be much use if you could not pass any input to change their behavior. And even though each task already has an input, it is handy to be able to always overwrite with the API call.
An actor or task's input can be any JSON object, so its structure really depends only on the specific actor. This input JSON should be sent in the POST request's body.
If you want to run one of the major actors from Apify Store, you usually do not need to provide all possible input fields. Good actors have reasonable defaults for most fields.
Let's try to run the most popular actor – the generic Web Scraper.
The full input with all possible fields is pretty long and ugly, so we will not show it here. As it has default values for most of its fields, we can provide just a simple JSON input.
We will send a POST request to the endpoint below and add the JSON as a body.
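As a sketch (the actor name apify~web-scraper and the token are placeholders — substitute your own), the request goes to an endpoint of the form:

POST https://api.apify.com/v2/acts/apify~web-scraper/runs?token=<YOUR_API_TOKEN>

with the simple JSON input described above as the request body.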
This is how it can look in Postman.
If we press Send, it will immediately return some info about the run. The
status will be either
READY (which means that it is waiting to be allocated on a server) or
RUNNING (99% of cases).
We will later use this run info JSON to retrieve the data. You can also get this info about the run with another call to the Get run endpoint.
Synchronous flow
If each of your runs is shorter than 5 minutes, you can use a single synchronous endpoint. The connection is held for up to 5 minutes.
If your run exceeds this time limit, the response will be a run object containing information about the run and the status
RUNNING. If that happens, you need to restart the run asynchronously and wait for the run to finish.
Synchronous runs with dataset output
Most actor runs will store their data in the default dataset. The Apify API provides run-sync-get-dataset-items endpoints for actors and tasks. These endpoints allow you to run an actor and receive the items from the default dataset.
A simple example of calling a task and logging the dataset items in Node.js.
// Use your favourite HTTP client
const got = require('got');

// Specify your API token
// (find it on the Integrations page in the Apify app)
const myToken = 'rWLaYmvZeK55uatRrZib4xbZs';

// Start apify/google-search-scraper actor
// and pass some queries into the JSON body
const response = await got({
    // run-sync-get-dataset-items endpoint for the apify/google-search-scraper actor
    url: `https://api.apify.com/v2/acts/apify~google-search-scraper/run-sync-get-dataset-items?token=${myToken}`,
    method: 'POST',
    json: { queries: 'web scraping\nweb crawling' },
    responseType: 'json',
});

const items = response.body;

// Log each non-promoted search result for both queries
items.forEach((item) => {
    const { nonPromotedSearchResults } = item;
    nonPromotedSearchResults.forEach((result) => {
        const { title, url, description } = result;
        console.log(`${title}: ${url} --- ${description}`);
    });
});
Synchronous runs with key-value store output
Key-value stores are useful for storing files like images, HTML snapshots or JSON data. The Apify API provides run-sync endpoints for
actors
and tasks. These allow you to run a specific task and receive the output. By default, they return the
OUTPUT record from the default key-value store.
For more detailed information, check the API reference.
Asynchronous flow
For runs longer than 5 minutes the process consists of three steps.
Wait for the run to finish
There may be cases where we need to simply run the actor and go away. But in any kind of integration, we are usually interested in its output. We have three basic options for how to wait for the actor/task to finish.
waitForFinish parameter
This solution is quite similar to the synchronous flow. To make the POST request wait, add the
waitForFinish parameter. It can have a value from
0 to
300, which is time in seconds (maximum wait time is 5 minutes). You can extend the example URL like this:
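Extending the run endpoint sketched earlier, the URL would look roughly like this (actor name and token are placeholders):

https://api.apify.com/v2/acts/apify~web-scraper/runs?token=<YOUR_API_TOKEN>&waitForFinish=300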
Again, the final response will be the run info object, however now its status should be
SUCCEEDED or
FAILED. If the run exceeds the
waitForFinish duration, the status will still be
RUNNING.
You can also use
waitForFinish parameter with the GET Run endpoint to implement a smarter polling system.
Webhooks
If you have a server, webhooks are the most elegant and flexible solution. You can simply set up a webhook for any actor or task, and that webhook sends a POST request to your server after an event occurs.
Usually, this event is a successfully finished run, but you can also set a different webhook for failed runs, etc.
The webhook will send you a pretty complicated JSON, but usually you are only interested in the
resource object. It is essentially the run info JSON from the previous sections. You can leave the payload template as is as for our use case, since it is what we need.
Once you receive this request from the webhook, you know the event happened and you can ask for the complete data. Do not forget to respond to the webhook with a 200 status. Otherwise, it will ping you again.
Polling
There are cases where you do not have a server and the run is too long to use a synchronous call. In these cases, periodic polling of the run status is the solution.
You run the actor with the usual API call shown above. This will run the actor and give you back the run info JSON. From this JSON, extract the
id field. It is the ID of the actor run that you just started.
Then, you can set an interval that will poll the Apify API (let's say every 5 seconds). In every interval, you will call the Get run endpoint to retrieve the run's status. Simply replace the
RUN_ID with the ID you extracted earlier in the following URL.
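Based on the Get run endpoint, the polling request would look roughly like this (RUN_ID and the token are placeholders):

https://api.apify.com/v2/actor-runs/RUN_ID?token=<YOUR_API_TOKEN>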
Once you receive a
status of
SUCCEEDED or
FAILED, you know the run has finished and you can cancel the interval and collect the data.
Collect the data
Unless you have used one of the synchronous endpoints described above, you will need one more API call to collect the data from the run's default dataset or key-value store.
Retrieve a dataset
If you are scraping products or any list of items with similar fields, the dataset is your storage of choice. Do not forget, though, that dataset items are immutable: you can only add. You can learn about them in the documentation. We will only mention that you can pass a
format parameter that transforms the response into popular formats like CSV, XML, Excel, RSS, etc.
The items are paginated, which means you can ask only for a subset of the data. Specify this using the
limit and
offset parameters. There is actually an overall limit of 250,000 items that the endpoint can return per request. To retrieve more, you will need to send more requests incrementing the
offset parameter.
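As a sketch, a paginated request for dataset items would look roughly like this (the dataset ID usually comes from the defaultDatasetId field of the run info JSON; the format, limit and offset values are examples):

https://api.apify.com/v2/datasets/<DATASET_ID>/items?format=json&limit=1000&offset=0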
Retrieve a key-value store
Key-value stores are mainly useful if you have a single output or any kind of files that cannot be stringified such as images or PDFs.
When you want to retrieve a record's content, call the Get record endpoint. Again, there is no need for a token for simple GET requests.
If you do not know the keys (names) of the records in advance, you can retrieve just the keys with List keys endpoint.
Keep in mind that you can get a maximum of 1000 keys per request, so you will need to paginate over the keys using the
exclusiveStartKey parameter if you have more than 1000 keys. To do this, after each call, take the last record key and provide it as the
exclusiveStartKey parameter. You can do this until you get 0 keys back. | https://docs.apify.com/tutorials/integrations/run-actor-and-retrieve-data-via-api | 2021-10-16T08:50:13 | CC-MAIN-2021-43 | 1634323584554.98 | [array(['https://apify-docs.s3.amazonaws.com/master/assets/tutorials/images/run-actor-postman.webp',
'Run an actor via API in Postman'], dtype=object)
array(['https://apify-docs.s3.amazonaws.com/master/assets/tutorials/images/run-info-postman.webp',
'Actor run info in Postman'], dtype=object)
array(['https://apify-docs.s3.amazonaws.com/master/assets/tutorials/images/webhook.webp',
'Webhook example'], dtype=object) ] | docs.apify.com |
Other Information / Our Release Process
Note: You are currently reading the documentation for Bolt 2.2. Looking for the documentation for Bolt 5.0 instead?
Since branching Bolt v1 and commencing on the path to Bolt v2, we've spent time thinking about how, as a community-based project, to handle our releases.
Analyze Cost for GCP
Harness Cloud Cost Management (CCM) allows you to view your Google Cloud Platform (GCP) costs, understand what is costing the most, and analyze cost trends. CE displays data for all your GCP products (such as Compute Engine, Cloud Storage, BigQuery, and so on), projects, SKUs, location, and labels and also provides details on:
- GCP cloud cost spending trends
- The GCP products costing the most in a selected time range. For example, how much Compute Engine cost last week
- Primary cost contributor, such as product, project, SKUs, or region
- GCP spending by region, such as us-west1 or us-east4
In this topic:
Before You Begin
- Cloud Cost Management Overview
- Enable Continuous Efficiency for Google Cloud Platform (GCP)
- Cost Explorer Walkthrough
Step: Analyze GCP Cost
- In Cloud Cost Management, click Explorer, and then click GCP in the top navigation. The GCP costs are displayed. Use the Group by options to view the costs by:
- Products: Each of your active products with their cloud costs is displayed.
- Projects: Each of your Cloud projects with their cloud costs is displayed.
- SKUs: Each SKU you are using.
- Location: Each Google Cloud service location you are currently running services in.
- Label: Each label that you are using to organize your Google Cloud instances.
- No Grouping: The total GCP cloud cost.
- To get further granular details, use Filter by options.
Step: Analyze Discounts
GCP provides certain discounts depending on your usage and commitment, such as:
- Sustained use discounts are automatic discounts for running specific Compute Engine resources a significant portion of the billing month.
- Committed use discounts (CUDs) provide discounted prices in exchange for your commitment to use a minimum level of resources for a specified term. The discounts are flexible, cover a wide range of resources, and are ideal for workloads with predictable resource needs.
You can view the discounts for your GCP products, projects, SKUs, location, and labels. To analyze and understand the discounts, perform the following steps:
- In Cloud Cost Management, click Explorer, and then click GCP in the top navigation. The GCP products are displayed.
- Select a date range, Group by, and Filter by options. Based on your selection, Discounts are calculated and displayed.
- Uncheck the Discounts checkbox to exclude discount detail from the table and the chart.
- Uncheck the Discounts checkbox in the Select Columns to exclude the discounts detail from the table view only. | https://docs.harness.io/article/oo4vs4exhz-analyze-cost-for-gcp | 2021-10-16T09:12:14 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.harness.io |
I want to use the Graph API to create a folder in a mailbox that belongs to Exchange Online.
After getting the token, I was able to create a folder with the following command.
However, it cannot be created as a child folder of the specified folder.
Is it possible to create directly under the inbox?
$url = ""
$Body = @"
{
"displayName": "TestFolder",
"parentFolderId": "**"
}
"@
Invoke-RestMethod -Uri $url -Method Post -Body $Body -ContentType 'application/json; charset=utf-8' -Headers $headerParams | ConvertTo-Json | https://docs.microsoft.com/en-us/answers/questions/22570/create-mailbox-folder-with-graph-api.html | 2021-10-16T10:34:42 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.microsoft.com |
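For reference, this is a sketch rather than the original code: the Graph API creates nested folders by posting to the parent folder's childFolders collection, so a folder directly under the Inbox would typically be created with something like the following, reusing the same headers (the inbox well-known folder name is used here; adjust the user segment to match your setup):

$url = "https://graph.microsoft.com/v1.0/me/mailFolders/inbox/childFolders"
$Body = @"
{
    "displayName": "TestFolder"
}
"@
Invoke-RestMethod -Uri $url -Method Post -Body $Body -ContentType 'application/json; charset=utf-8' -Headers $headerParams | ConvertTo-Json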
🎥 Using and Sharing Templates in the Personal Template Cloud
The Personal Template Cloud allows you to not only save and use your templates, but you can also share them.
Once you have saved your pages and sections in the Personal Template Cloud, you'll need to know how to manage, use, and share those templates.
The following video will explain in detail how to do that:
How to Use Your Saved Page Templates
To use a Page Template, you can go to "Optimizepress3 > Create New Page" and then choose the "My Templates" category:
Once you choose a template, you would just create a page the same as you would any other templates in the template library.
How to Use Your Saved Section Templates
To use a Section Template, you'll need to be in the OptimizeBuilder editing a page. Click on the Sections Icon:
Go to the "My Sections" category, and choose either light or dark and you'll see your section templates.
Note that when you save section templates you can choose to save them as a light or dark template.
You simply drag the section template over to where you want to insert the section on the page:
How to Share Your Templates - Method 1:
Each OptimizePress account has a unique "Share ID" that can be found on the "Your Templates" page. If someone shares that ID with you, you can share the page template with them by logging into my.optimizepress.com and clicking on "Your Templates" and click on the blue share icon below the template:
Then just enter the share ID in the share form and click the "Share Now" Button
Your template will then be copied to the user account that the share ID came from.
Shared templates do not count towards the template allowances.
How to Share Your Templates - Method 2:
We also now have a way to share templates with a unique share link. From within the "My Templates" area of my.optimizepress.com you can click the yellow icon there to share the template via a special share link
Once you do that, then you would just need to copy the share link and share it with anyone you want.
When someone clicks on that link, they'll be asked to sign into their OptimizePress account to accept the shared template.
They will also see the name of the template, a screenshot, as well as a link to view the full template. If you would like to see how this works, you may use the following link to grab the Pricing Table Switcher template I created for this demo by clicking here. The link opens in a new window so you can continue with the rest of this guide.
Share Templates and Earn Affiliate Commission
If you noticed the "Get OptimizePress Now" button on the template share page, you can add your affiliate ID to that
To add your affiliate ID, you would get that from the FirstPromoter service we use to track affiliates.
If you are not an affiliate you may signup through this link (opens in new window):
Then, go to your profile in my.optimizepress.com
Then click "Edit Profile" button
Then enter your Affiliate ID and click save.
Now you should see that the link on the template sharing page should lead to your own affiliate link.
If you have any questions about anything in this guide, please reach out to our support team and they'll be happy to guide you further. | https://docs.optimizepress.com/article/2326-using-and-sharing-templates-in-the-personal-template-cloud | 2021-10-16T08:57:26 | CC-MAIN-2021-43 | 1634323584554.98 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/614799502b380503dfdf2855/file-xN4arOFQ2I.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/61479c4d0754e74465f1196b/file-xvfZBlffV9.gif',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/61479e1f2b380503dfdf2860/file-9WJB0bYWHO.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/61479f1300c03d672075860d/file-X5LoPAf0vL.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/61479f9d2b380503dfdf2862/file-FzZmppusLy.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/6147a04f00c03d672075860e/file-JHqilWqBrY.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/6147a2ec0332cb5b9e9abcc0/file-PjVbhDATjV.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/6147a4e00332cb5b9e9abcc5/file-8WDT1yX4cM.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/6147a4eb00c03d6720758616/file-owBbaqW2u8.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/598dc2f12c7d3a73488be89a/images/6147a50e0332cb5b9e9abcc6/file-1NgSfWU7ro.jpg',
None], dtype=object) ] | docs.optimizepress.com |
Agree Command Setup
Server Raids? Not with Poni!
The agree module is an entry gate to your server. Before they join, they must declare they agree to the rules of the server and in a pinch, you can
-lock the channel to stop anyone new from entering.
This can be achieved in two ways, displayed below.
Taking away @everyone's permissions
When we specify @everyone, we don't mean all of your members in your server, we're referring to the role. The Agree command is a very crucial module for stopping server raids right at the front door of the server itself. To make sure this works properly, we need to make sure @everyone has no read or write permissions as pictured below. This measure is put in place to stop them from accessing core parts of your server and forcing them to pass through your agree channel.
Modifying @Member
Now we need to provide what we just took away from the @everyone back into the @Member role. This will be the base role in your server so we recommend taking away embedding, image, and TTS.
Creating the -agree channel and setting some permissions
To create a new channel, right click and select
Create Channel. Give it an appropriate name and click
Create Channel as pictured below.
Once the channel exists, click the edit channel option beside it. From here, enter the Permissions section. Here we can apply the needed permission rules for this channel. Enable permission for users with the @everyone role to Read Messages, Send Messages and Read Message History as pictured below. Repeat and apply to all channels you believe new members should see EG: rules.
Next, click the small plus icon and select our Member role. We now need to do the opposite by denying the Member role access to Read Messages, Send Messages and Read Message History as pictured below.
Entry method #1: manual
Greeting message
Now we're almost there, but we need to let our members know they need to type -agree to get into the core features of the server. The best way to do this is with an embed. Let's create one with Poni! The base embed command is executed by typing the following:
Syntax
-embed your_text_here
A more advanced embed exists called -rembed or a Rich Embed. It follows the same syntax but with this kind of embed you can customize the colour and thumbnail in post.
Let's Test It!
Invite a friend and ask for them to execute the
-agree command. If all goes well, they should receive the Member role and have access to the rest of your server!
Entry method #2: reaction
With this method, you are able to generate a permanent message for users to click on to agree. This is far simpler but new discord users may not understand that a message reaction can equal entry. You must make it clear in your agree message. This can be generated by doing the following:
Syntax
-agree_message your_text_here
To gain the member role, all that has to be done is click the pre-applied check mark emoji reaction to the message. do not clear this reaction.
Let's Test It!
Invite a friend and ask for them to click the reaction. If all goes well, they should receive the Member role and be given access to the rest of your server! | https://docs.ponibot.com/agree/ | 2021-10-16T08:59:27 | CC-MAIN-2021-43 | 1634323584554.98 | [array(['../everyone.png', None], dtype=object)
Overview
This document provides guides for setting appropriate values and various user environment settings of kernel parameters for installing and operating Altibase in the AIX (Advanced Interactive eXecutive) Operating System.
In this document, the guide is presented only for operating system related items to be set before Altibase is installed.
Kernel Parameters
When operating Altibase on the AIX OS, it describes the types of kernel parameters that need to be changed, and why the settings need to be changed, and introduces how to change the kernel parameters.
For details related to each kernel parameter, refer to the guide provided by IBM.
Posix AIO
Posix AIO is a kernel parameter in AIX that allows a process to simultaneously process disk I/O processing and application program operations, resulting in improved performance.
If the corresponding kernel parameter is not set, Altibase cannot be used, so it must be set in advance.
However, starting from AIX 6.1, the default value of Posix AIO is 'Available', there is no need to set it separately.
File Cache
This kernel parameter is not an essential element to change, but the proper file cache setting prevents the swap out of the memory area used by Altibase and suppresses the requirement for swap out. It is recommended to minimize the phenomenon that the I/O delay time leads to the degradation of Altibase.
For reference, there are no relevant parameters under AIX 5.2 ML03, and there is no change starting from AIX 6.1 so it is not necessary to refer to it depending on the operation system version.
File cache is a kind of system buffer managed at the operating system level to solve the bottleneck caused by the speed difference between main memory and auxiliary memory. These file caches are managed by unique policies of each operating system, but commonly have a direct correlation with the swap policy.
Swapping itself has the usefulness of handling applications or data files larger than main memory, but in systems where long-term resident applications such as DBMS are operated, the disk I/O delay of the operating system layer due to swapping since the response time of the DBMS may be irregular or delayed with time. So file cache is a consideration factor depending on the system use.
Therefore, in order to guarantee Altibase's consistent response time, it is recommended to set the file cache and swap-related kernel parameters in advance so that swap does not occur as much as possible.
Configuration on AIX
The default memory manager of AIX is to convert unused memory areas to file cache as much as possible.
In this state, if an additional memory allocation request occurs and there is insufficient free memory, the memory manager swaps out memory that is in use or areas of the file cache that are not often accessed, and then allocates it.
At this time, when a transaction approaching the swapped out area occurs, the performance is not uniform.
In this way, the process by which the memory manager acquires the memory requested by the process or the file cache in order to allocate additional memory is called stealing. The stealing target can be specified by the file cache-related kernel parameter lru_file_repage.
Generally, three kernel parameters are changed as well as lru_file_repage.
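The values themselves are not listed here; the settings commonly recommended for database servers on AIX are shown below as an assumption, to be confirmed with your AIX engineer:

lru_file_repage = 0
minperm%        = 3
maxperm%        = 90
maxclient%      = 90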
When set as above, stealing occurs only in the file cache when the file cache share compared to the total memory is used above minperm.
Since this setting can be checked during AIX operation, if performance or other problems occur, it is recommended to change it in consultation with AIX engineers.
Resource Limitations
In the case of AIX, some of the resource limit items are set through the following kernel parameter changes, not through the user configuration file setting using ulimit, which is commonly used.
Altibase is a single-process, multi-thread-based application program, so if the system does not have any application programs other than Altibase, there is no need to specifically consider the number of process limits per user. In some cases, it may be necessary to appropriately expand the limit on the number of processes per user by predicting the number.
The relevant kernel parameters are as follows.
How to Change
To change the kernel parameters related to file cache, use the virtual memory-related kernel utility vmo; for the other changes, the method using the system kernel-related utility smit is described.
Generally, the user needs to connect to the root account, and it is recommended to restart the system after changing to properly apply even the kernel parameters applied in real-time.
Posix AIO
After running smit, move the items in the order of "Devices", "Asynchronous I/O" to change the 'Defined' state of "Configured Defined Asynchronous I/O" to 'Available'.
However, if only this process is performed, the following process must be performed as the previous Posix AIO setting may be reset when the system is restarted.
After running smit, move the items in the order of “Devices”, “Asynchronous I / O”, “Posix Asynchronous I / O”, and “Change / Show Characteristics of Asynchronous I / O”, change the item of “STATE to be configured at system restart” to Available.
The current configuration of AIO-related kernel parameter can be checked as follows.
Check AIO related kernel parameter settings
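One way to check the AIO device state on AIX is with lsdev, for example the following (the exact device name can differ by system and AIX version):

lsdev -C | grep aio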
File Cache
The change method is as follows.
How to change file cache
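A sketch of the change using the vmo utility; the -p flag makes the change persist across reboots, and the values are the commonly recommended ones mentioned above:

vmo -p -o lru_file_repage=0 -o minperm%=3 -o maxperm%=90 -o maxclient%=90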
The confirmation method is as follows.
How to check file cache
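The current values can be listed with vmo, for example:

vmo -a | egrep "lru_file_repage|minperm%|maxperm%|maxclient%"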
Resource Limitation
After running smit, move to "System Environments", "Change/Show Characteristics of Operating System", and change the "Maximum number of PROCESSES allowed per user" value to the number of processes that can be run simultaneously.
User Settings
This describes resource limits, environment variables, and various environment settings of user accounts in the system for operating Altibase in the AIX OS.
For instructions and specifics related to configuration, refer to the guide provided by IBM.
Resource Limitations
In the UNIX operating system, logical limits are set for available resources on a user account basis. Among the resource limit items, the items that need to be expanded for stable service operation as follows.
It is intended to remove problems that may occur due to logical limitations even when there are abundant physical resources when expanding the memory and data file area used by a specific user. This setting has no effect on other processes. It is recommended to set the maximum value allowed by (unlimited if possible).
For example, open files covers not only the files accessed by the process, but also the number of communication sockets. If Altibase is operated in an environment where this value is limited to 10, no more than 10 sessions can be connected simultaneously. (Considering the files Altibase itself keeps open, there may be no capacity left for sessions at all.)
To change these limits, edit the environment configuration file, use the ulimit command, edit the system resource configuration file, or use the kernel-related utilities provided for each operating system.
Hard-Limit & Soft-Limit
Resource limit values are divided into the concept of hard-limit and soft-limit. The hard-limit means the maximum value of kernel-level resource limit that cannot be changed except the system administrator account (root), and the soft-limit means that the current user account can change up to the hard-limit. (refer to the ulimit -S/-H option for details.)
The soft-limit is effective while the user maintains a session by accessing it, and changes are immediately reflected. However, when other sessions of the same user account are connected, the existing soft-limit is reflected, so the ulimit command is often added to the user account configuration file.
However, this method may not be intended due to the global hard-limit, so it is recommended to systematically apply it through editing system-wide resource configuration files rather than applying user account units using environment configuration files.
For reference, the system configuration file related to the user resource limit in AIX is /etc/security/limits.
Environment Variables
Setting for Multi-Threaded Applications
Separate environment variables need to be set for Altibase, a multi-thread based application program. For reference, this document mentions only representative ones, but it should be noted that all environment variables related to multi-threads supported by AIX need to be considered.
The following items are recommended environment variables to prevent performance degradation in multi-threaded SMP systems. Altibase can be operated without setting these environment variables, but they must be set because failures may otherwise occur due to unknown causes.
More information on environmental variables can be found on the IBM website.
Summary
For stable operation of Altibase on the AIX operating system, kernel parameter settings and user environment settings must be performed in advance. If the setting is not performed properly, it should be noted that the problem can be caused by each limit value even though the system has sufficient resources.
Setting Examples
Kernel Parameters
Refer to the table below and set the appropriate kernel parameters. For reference, in AIX, some of the resource limit items are adjusted by changing kernel parameters.
User Resource Limits
Refer to the table below, if possible, set it to unlimited.
User Environment Variables
In the case of sh, bash, and ksh, examples of settings required environment variables using the environment setting file are as follows. In the case of csh, it is declared through a shell command such as setenv instead of export.
User Environment Variable Setting Examples
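For example, in sh/bash/ksh syntax the AIX thread-tuning variables usually cited for multi-threaded servers would be set like this — treat the exact set and values as assumptions to confirm against the Altibase and IBM documentation:

export AIXTHREAD_SCOPE=S
export SPINLOOPTIME=1000
export YIELDLOOPTIME=4
export MALLOCMULTIHEAP=1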
For reference, in the case of ksh, an error may occur when defining another environment variable using the environment variable with it being predefined.
Enclosure
AIX Memory Related Patches
There is a bug that can cause a memory leak in the heapmin function provided by the AIX platform.
Related IBM official document is as follows.
As a measure of this, the user must patch or upgrade to the AIX native compiler where AIX bug IV28577 is resolved.
It can be checked whether or not the patch is done with the following command.
Check heapmin Related Patch
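One way to check whether an AIX APAR such as IV28577 is installed is instfix; the exact command used in the original guide may differ:

instfix -ik IV28577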
If there is no patch, no value is displayed. It is recommended to perform patch or upgrade through AIX engineer.
In addition, it is recommended to apply the latest patch to avoid various problems known in AIX.
Limit of the Number of IPC Channels
Among the semaphore parameters of AIX, the semume value basically limits the number of semaphore undo entries. It is automatically set in AIX and set to 1024, and cannot be changed by the user.
Altibase uses undo entries to ensure resources between IPC connections, and from Altibase 5.1.5.72 or later, undo entries for each IPC channel have been changed from the previous two to use three.
Since the semume (number of undo entry resources) is fixed at 1024, Altibase uses 2 or 3 undo entries per IPC channel based on version 5.1.5.72, so the maximum number of IPC channels that can be used is limited.
Therefore, the maximum number of IPC channels that can be used for each version is as follows:
Up to 512 IPC channels (= 1024/2) can be used under 5.1.5.72 version
Up to 341 IPC channels (= 1024/3) can be used in 5.1.5.72 or later | https://docs.altibase.com/display/arch/AIX+Setup+Guide+for+Altibase | 2022-06-25T13:10:52 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.altibase.com |
community.crypto.acme_account_info module – Retrieves information on ACME accounts. To use it in a playbook, specify: community.crypto.acme_account_info.
Synopsis
Allows to retrieve information on accounts with a CA supporting the ACME protocol, such as Let's Encrypt.
This module only works with the ACME v2 protocol.
Requirements
The below requirements are needed on the host that executes this module.
either openssl or cryptography >= 1.5
ipaddress
Parameters
Notes
Note
The community.crypto.acme_account module allows to modify, create and delete ACME accounts.
This module was called acme_account_facts before Ansible 2.8. The usage did not change.
See Also
- community.crypto.acme_account
Allows to create, modify or delete an ACME account.
Examples
- name: Check whether an account with the given account key exists
  community.crypto.acme_account_info:
    account_key_src: /etc/pki/cert/private/account.key
  register: account_data

- name: Verify that account exists
  assert:
    that:
      - account_data.exists

- name: Print account URI
  ansible.builtin.debug:
    var: account_data.account_uri

- name: Print account contacts
  ansible.builtin.debug:
    var: account_data.account.contact

- name: Check whether the account exists and is accessible with the given account key
  acme_account_info:
    account_key_content: "{{ acme_account_key }}"
    account_uri: "{{ acme_account_uri }}"
  register: account_data

- name: Verify that account exists
  assert:
    that:
      - account_data.exists

- name: Print account contacts
  ansible.builtin.debug:
    var: account_data.account.contact
Return Values
Common return values are documented here, the following are the fields unique to this module:
Collection links
Issue Tracker Repository (Sources) Submit a bug report Request a feature Communication | https://docs.ansible.com/ansible/latest/collections/community/crypto/acme_account_info_module.html | 2022-06-25T13:40:09 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.ansible.com |
theforeman.foreman.job_invocation module – Invoke Remote Execution Jobs
Note
This module is part of the theforeman.foreman collection (version 3.4.0).
You might already have this collection installed if you are using the
ansible package.
It is not included in
ansible-core.
To check whether it is installed, run
ansible-galaxy collection list.
To install it, use:
ansible-galaxy collection install theforeman.foreman.
To use it in a playbook, specify:
theforeman.foreman.job_invocation.
New in version 1.4.0: of theforeman.foreman
Synopsis
Invoke and schedule Remote Execution Jobs
Requirements
The below requirements are needed on the host that executes this module.
requests
Parameters
Examples
- name: "Run remote command on a single host once" job_invocation: search_query: "name ^ (foreman.example.com)" command: 'ls' job_template: "Run Command - SSH Default" ssh: effective_user: "tester" - name: "Run ansible command on active hosts once a day" job_invocation: bookmark: 'active' command: 'pwd' job_template: "Run Command - Ansible Default" recurrence: cron_line: "30 2 * * *" concurrency_control: concurrency_level: 2
Return Values
Common return values are documented here, the following are the fields unique to this module:
Collection links
Issue Tracker Homepage Repository (Sources) | https://docs.ansible.com/ansible/latest/collections/theforeman/foreman/job_invocation_module.html | 2022-06-25T14:33:12 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.ansible.com |
When a PR (PR #1) is labeled as ready for merging, Aviator picks the head commit of your base branch (typically master or main), creates a new branch (branch A), and applies all the commits of the labeled PR as a squash commit in branch A. Aviator will then validate the CI for this new squash commit. If a second PR (PR #2) is labeled while the CI for the first one is still running, Aviator will now pick the latest commit from branch A, create a new branch (branch B), and apply the commits of the second PR as a squash commit to branch B. This way the CI validation can happen in parallel using a speculative commit strategy.
If the CI passes for PR #1, Aviator fast-forwards the head of your base branch to the head of branch A. Likewise, if the CI for PR #2 passes, it fast-forwards the head of your base branch to the head of branch B. By using this strategy, Aviator can both maintain the linear history of your PRs and also ensure that the builds pass on each commit.
If the CI fails for PR #1, Aviator removes it from the queue, discards branch A, and will recreate branch B with only the commits of PR #2. This way Aviator detects failures before the commits hit the base branch, ensuring that the base branch will always remain green.
To set it up (a configuration sketch follows this list):
- Set the merge_mode type to parallel in the yaml configuration.
- Set use_fast_forwarding to true in the parallel_mode configuration.
- Define your CI validations in the required_checks section. You can also override separate required checks for the parallel pipeline.
- Add aviatorbot in Restrict who can push to matching branches, only if you use this GitHub branch protection setting.
Note that Aviator validates the CI on temporary branches prefixed with mq-tmp- rather than directly on your base branch (master or main) for CI execution.
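A minimal configuration sketch is shown below. It assumes the .aviator/config.yml layout; the labels section and the check names are illustrative placeholders, so confirm the exact keys against the Aviator configuration reference for your version.

merge_rules:
  labels:
    trigger: "mergequeue"
  merge_mode:
    type: "parallel"
    parallel_mode:
      use_fast_forwarding: true
      # optional: separate required checks for the parallel pipeline
      override_required_checks:
        - "build"
        - "test"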
PulseSensor_Spark (community library)
Summary
PulseSensor Amped library v1.4 for Particle devices
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved. PulseSensor Library adapted for Spark Core by Paul Kourany, June 2015 from:
Pulse Sensor Amped 1.4
Works with Particle Core, Photon and Electron !!!
Update Jan 16/2018
Fixed conflict with Signal variable and Particle firmware definition of the same name. Also removed SparkIntervalTimer library dependency in properties file so user can attach the latest Build (IDE) version.
Update Oct 24/2016
Removed the outdated SparkIntervalTimer library code so user can attach the latest Build (IDE) version.
In order for this app to compile correctly, the following Particle Build (Web IDE) library MUST be attached:
- SparkIntervalTimer
This code is for Pulse Sensor Amped by Joel Murphy and Yury Gitman
Pulse Sensor purple wire goes to Analog Pin A2 (see PulseSensor_Spark.h for details)
Pulse Sensor sample acquisition and processing happens in the background via a hardware Timer interrupt, at a 2 mS sample rate. PWM on selectable pins A0 and A1 will not work when using this code, because the first allocated timer is TIMR2! Variables such as rawSignal are automatically updated, and the D7 LED (onboard LED) will blink with the heartbeat. If you want to use pin D7 for something else, specify a different pin in Interrupt.h. It will also fade an LED on pin fadePin with every beat; put an LED and series resistor from fadePin to GND. Check here for a detailed code walkthrough:
New to v1.5 (Sept 2015)
Works with Particle Core and Photon!
New to v1.4
ASCII Serial Monitor Visuals - See the User's Pulse & BPM via the Serial port. Open a serial monitor for this ASCII visualization.
To Turn On ASCII Visualizer, change:
```
// Regards Serial OutPut -- Set This Up to your needs
static boolean serialVisual = false;   // Set to 'false' by Default.
```
to:
```
// Regards Serial OutPut -- Set This Up to your needs
static boolean serialVisual = true;    // Re-set to 'true' to do ASCII Visual Pulse : )
```
That's it! Upload and open your Serial Monitor.
Pulse Sensor Amped 1.4 by Joel Murphy and Yury Gitman
Browse Library Files | https://docs.particle.io/reference/device-os/libraries/p/PulseSensor_Spark/ | 2022-06-25T14:44:41 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.particle.io |
# Policies Configuration With the CLI
You can use the Insights CLI to manage the configuration of Policies. Be sure to first read the Insights CLI documentation which covers installation and preparation.
Check out the Policy Configurator documentation on use cases for configuring Policies.
# Pushing Policies Configuration to Insights
When pushing configuration of Policies to Insights, the CLI expects a
settings.yaml file in the current directory.
The file should follow the following format:
checks:
  $reportType:          # You can find this in the Action Items or Policy UI (e.g. `polaris`)
    $eventType:         # You can find this in the Action Items or Policy UI (e.g. `runAsRootAllowed`)
      severity: <critical/high/medium/low/none>
      ci:
        block: <true/false>
      admission:
        block: <true/false>
For OPA policies, the
$reportType is
opa and the
$eventType is the Policy name.
Once the file has been created, use the following command to push the Policies Configuration:
insights-cli push settings
# Pushing Policies Configuration Example
Create the
settings.yaml file:
checks:
  polaris:
    runAsRootAllowed:
      severity: medium
    livenessProbeMissing:
      severity: high
      ci:
        block: true
      admission:
        block: false
Next use the Insights CLI to push these configurations to Insights:
insights-cli push settings
The customizations in settings.yaml will override any previous customizations made in Insights. For example, if the above yaml was later pushed without livenessProbeMissing, that Policy would revert to the default values.
# Verifying the Configuration of Policies
- In Insights, go to the Policy page
- In the Policies table, for the Configuration column select the Customized filter
This should show you the Policies that have been modified using the
settings.yaml file.
# Pushing Policies Configuration Along With Other Configurations
Configuration of Policies can be pushed to Insights along with other Insights configurations using the single command
insights-cli push all. For additional information see | https://insights.docs.fairwinds.com/configure/cli/settings/ | 2022-06-25T13:24:03 | CC-MAIN-2022-27 | 1656103035636.10 | [] | insights.docs.fairwinds.com |
8. Gradient descent¶
Gradient Descent is an optimization method for finding the parameter values that minimize a cost function. You start by defining initial parameter values, and from there Gradient Descent iteratively adjusts the values, using calculus, so that they minimize the given cost function.
8.1. Gradient¶
A gradient measures how much the output of a function changes if you change the inputs a little bit. Said more mathematically, a gradient is the partial derivative of a function with respect to its inputs.
8.2. Cost Function¶
It is a way to determine how well the machine learning model has performed, given the different values of each parameter. In the linear regression model \(y = mx + \beta\), the parameters are the two coefficients, \(\beta\) and \(m\).
The cost function is the sum of squared errors (least squares), for example \(J(m, \beta) = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - (m x_i + \beta)\right)^2\). Since the cost function is a function of the parameters \(\beta\) and \(m\), we can plot the cost for each value of the coefficients (i.e. given the value of each coefficient, we can refer to the cost function to know how well the machine learning model has performed). Plotted over the parameter space, the cost function looks like a bowl-shaped surface:
- during the training phase, we are focused on selecting the ‘best’ value for the parameters (i.e. the coefficients), \(x\)’s will remain the same throughout the training phase
- for the case of linear regression, we are finding the value of the coefficients that will reduce the cost to the minimum a.k.a the lowest point in the mountainous region.
8.3. Method¶
Picture the cost plotted as a surface over the parameter space:
- Each point in this two-dimensional space represents a line. The height of the function at each point is the error value for that line. Some lines yield smaller error values than others (i.e., fit our data better). When we run gradient descent search, we will start from some location on this surface and move downhill to find the line with the lowest error.
- The horizontal axes represent the parameters (\(w\) and \(\beta\)) and the cost function \(J(w, \beta)\) is represented on the vertical axes. You can also see in the image that gradient descent is a convex function.
- we want to find the values of \(w\) and \(\beta\) that correspond to the minimum of the cost function (marked with the red arrow). To start with finding the right values we initialize the values of \(w\) and \(\beta\) with some random numbers and Gradient Descent then starts at that point.
- Then it takes one step after another in the steepest downside direction till it reaches the point where the cost function is as small as possible.
8.4. Algorithm¶
Moving forward to find the lowest error (deepest valley) in the cost function (with respect to one weight) we need to tweak the parameters of the model. Using calculus, we know that the slope of a function is the derivative of the function with respect to a value. This slope always points to the nearest valley.
We can see the graph of the cost function (named \(Error\) with symbol \(J\)) against just one weight. Now if we calculate the slope (let's call this \(\frac{dJ}{dw}\)) of the cost function with respect to this one weight, we get the direction we need to move towards in order to reach the local minima (nearest deepest valley).
Note
The gradient (or derivative) tells us the incline or slope of the cost function. Hence, to minimize the cost function, we move in the direction opposite to the gradient.
8.4.1. Steps¶
- Initialize the weights \(w\) with random values.
- Calculate the gradient \(G\) of the cost function with respect to the parameters.
- Update the weights by an amount proportional to \(G\), i.e. \(w = w - ηG\)
- Repeat until the cost \(J(w)\) stops reducing, or some other pre-defined termination criteria is met.
Important
In step 3, \(\eta\) is the learning rate which determines the size of the steps we take to reach a minimum. We need to be very careful about this parameter. High values of η may overshoot the minimum, and very low values will reach the minimum very slowly.
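To make the update rule concrete, here is a minimal sketch of batch gradient descent for a one-feature linear regression; the data, learning rate and iteration count are arbitrary illustrative choices.

def gradient_descent(x, y, lr=0.01, epochs=1000):
    """Fit y ≈ w * x + b by batch gradient descent on the mean squared error."""
    w, b = 0.0, 0.0                      # arbitrary initial parameters
    n = len(x)
    for _ in range(epochs):
        y_pred = [w * xi + b for xi in x]
        # Gradients of J(w, b) = (1/n) * sum((y_pred - y)^2)
        grad_w = (2.0 / n) * sum((yp - yi) * xi for xi, yi, yp in zip(x, y, y_pred))
        grad_b = (2.0 / n) * sum(yp - yi for yi, yp in zip(y, y_pred))
        w -= lr * grad_w                 # step against the gradient
        b -= lr * grad_b
    return w, b

# Example: noisy points along y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]
w, b = gradient_descent(xs, ys)
print(w, b)   # should approach 2 and 1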
8.5. Learning Rate¶
How big the steps are that Gradient Descent takes into the direction of the local minimum are determined by the so-called learning rate. It determines how fast or slow we will move towards the optimal weights. In order for Gradient Descent to reach the local minimum, we have to set the learning rate to an appropriate value, which is neither too low nor too high.
This is because if the steps it takes are too big, it maybe will not reach the local minimum because it just bounces back and forth between the convex function of gradient descent like you can see on the left side of the image below. If you set the learning rate to a very small value, gradient descent will eventually reach the local minimum but it will maybe take too much time like you can see on the right side of the image.
Note
When you’re starting out with gradient descent on a given problem, just simply try 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1 etc. as it’s learning rates and look which one performs the best.
8.6. Convergence¶
Once the agent, after many steps, realize the cost does not improve by a lot and it is stuck very near a particular point (minima), technically this is known as convergence. The value of the parameters at that very last step is known as the ‘best’ set of parameters, and we have a trained model.
8.7. Types of Gradient Descent¶
Three popular types of Gradient Descent, that mainly differ in the amount of data they use.
8.7.1. Batch Gradient Descent¶
Batch gradient descent, also called vanilla gradient descent, calculates the error for each example within the training dataset, but the model only gets updated after all training examples have been evaluated. This whole process is like a cycle and is called a training epoch.
- Advantages of it are that it’s computational efficient, it produces a stable error gradient and a stable convergence.
- Batch Gradient Descent has the disadvantage that the stable error gradient can sometimes result in a state of convergence that isn’t the best the model can achieve. It also requires that the entire training dataset is in memory and available to the algorithm.
8.7.2. Stochastic gradient descent (SGD)¶
In vanilla (batch) gradient descent we calculate the gradient over the whole training set before each update; in stochastic gradient descent we instead pick observations at random. It is called stochastic because samples are selected randomly (or shuffled) instead of as a single group (as in standard gradient descent) or in the order they appear in the training set. This means that it updates the parameters for each training example, one by one. This can make SGD faster than Batch Gradient Descent, depending on the problem.
- One advantage is that the frequent updates allow us to have a pretty detailed rate of improvement. The frequent updates are more computationally expensive as the approach of Batch Gradient Descent.
- The frequency of those updates can also result in noisy gradients, which may cause the error rate to jump around, instead of slowly decreasing.
8.7.3. Mini-batch Gradient Descent¶
Is a combination of the concepts of SGD and Batch Gradient Descent. It simply splits the training dataset into small batches and performs an update for each of these batches. Therefore it creates a balance between the robustness of stochastic gradient descent and the efficiency of batch gradient descent.
- Common mini-batch sizes range between 50 and 256, but like for any other machine learning techniques, there is no clear rule, because they can vary for different applications. It is the most common type of gradient descent within deep learning.
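The difference between the three variants is only which examples feed each update; a rough sketch of producing mini-batches is shown below (the batch size and shuffling policy are illustrative choices).

import random

def minibatch_indices(n, batch_size=64):
    """Yield index batches covering one epoch in random order."""
    order = list(range(n))
    random.shuffle(order)                       # stochastic: random order each epoch
    for start in range(0, n, batch_size):
        yield order[start:start + batch_size]   # batch_size=1 gives SGD,
                                                # batch_size=n gives batch gradient descent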
System.Microseconds
From Xojo Documentation
Method
Returns the number of microseconds (1,000,000th of a second) that have passed since the user's device was started.
Syntax
result=Microseconds
Notes
Because modern operating systems can stay running for so long, it's possible for the device's internal counters to "roll over." This means that if you are using this function to determine how much time has elapsed, you may encounter a case where this time is inaccurate.
The machine's internal counters might or might not continue to advance while the machine is asleep, or in a similar power-saving mode. Therefore, this function might not be suitable for use as a long-term timer.
Sample Code
This code displays in a message box the approximate number of minutes the device has been on:
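A sketch of what such code might look like (the rounding and format string are illustrative choices):

Var minutes As Double = System.Microseconds / 60000000
MessageBox("This device has been on for about " + Format(minutes, "#,##0") + " minutes.")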
Installing and Configuring MPIO
Applies To: Windows Server 2008 R2
This section explains how to install and configure Microsoft Multipath I/O (MPIO) on Windows Server 2008 R2.
Install MPIO on Windows Server 2008 R2
MPIO is an optional feature in Windows Server 2008 R2, and is not installed by default. To install MPIO on your server running Windows Server 2008 R2, perform the following steps.
To add MPIO on a server running Windows Server 2008 R2, use the Add Features Wizard in Server Manager and select the Multipath I/O feature.
MPIO configuration and DSM installation
Open the MPIO control panel either by using the Windows Server 2008 R2 Control Panel or by using Administrative Tools.
To open the MPIO control panel by using the Windows Server 2008 R2 Control Panel
On the Windows Server 2008 R2 desktop, click Start, click Control Panel, and then in the Views list, click Large Icons.
Click MPIO.
On the User Account Control page, click Continue.
To open the MPIO control panel by using Administrative Tools
On the Windows Server 2008 R2 desktop, click Start, point to Administrative Tools, and then click MPIO.
On the User Account Control page, click Continue.
The MPIO control panel opens to the Properties dialog box.
Note
To access the MPIO control panel on Server Core installations, open a command prompt and type MPIOCPL.EXE.
Note
In the Add MPIO Support dialog box, the vendor ID (VID) and product ID (PID) that are needed are provided by the storage provider, and are specific to each type of hardware. You can list the VID and PID for storage that are already connected to the server by using the mpclaim tool at the command prompt. The hardware ID is an 8-character VID plus a 16-character PID. This combination is sometimes referred to as a VID/PID. For more information about the mpclaim tool, see Referencing MPCLAIM Examples.
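As an illustration, the hardware IDs of connected storage can typically be listed with a command along these lines — treat the exact flag as an assumption and check mpclaim's built-in help:

mpclaim -e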
Note
Devices that are connected by using Microsoft Internet SCSI (iSCSI) are not displayed on the Discover Multi-Paths tab..
Note
We recommend using vendor installation software to install the vendor’s DSM. If the vendor does not have a DSM setup tool, you can alternatively install the vendor’s DSM by using the DSM Install tab on the MPIO control panel..
Claim iSCSI-attached devices for use with MPIO
Note
This process causes the Microsoft DSM to claim all iSCSI-attached devices regardless of their vendor ID and product ID settings. For information about how to control this behavior on an individual VID/PID basis, see Referencing MPCLAIM Examples.
To claim an iSCSI-attached device for use with MPIO
Open the MPIO control panel, and then click the Discover Multi-Paths tab.
Select the Add support for iSCSI devices check box, and then click Add.
Configure the load-balancing policy setting for a Logical Unit Number (LUN)
MPIO LUN load balancing is integrated with Disk Management. To configure MPIO LUN load balancing, open the Disk Management graphical user interface.
Note
Before you can configure the load-balancing policy setting by using Disk Management, the device must first be claimed by MPIO. If you need to preselect a policy setting for disks that are not yet present, see Referencing MPCLAIM Examples.
To configure the load-balancing policy setting for a LUN, in the Select the MPIO policy list, click the load-balancing policy setting that you want.
If desired, click Details to view additional information about the currently configured DSM.
Note
When using a DSM other than a Microsoft DSM, the DSM vendor may use a separate interface to manage these policies.
Note
For information about DSM timer counters, see Configuring MPIO Timers..
To configure the preferred path setting, double-click the path that you want to designate as a preferred path.
Note
The setting only works with the Failover Only MPIO policy setting.
- Select the Preferred check box, and then click OK.
Install MPIO on Server Core installations of Windows Server 2008 R2. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee619752(v=ws.10)?redirectedfrom=MSDN | 2022-06-25T14:58:14 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.microsoft.com |
Microsoft Security Bulletin MS15-016 - Important
Vulnerability in Microsoft Graphics Component Could Allow Information Disclosure (3029944)
Published: February 10, 2015
Version: 1.0
Executive Summary
This security update resolves a privately reported vulnerability in Microsoft Windows. The vulnerability could allow information disclosure if a user browses to a website containing a specially crafted TIFF image. This vulnerability would not allow an attacker to execute code or to elevate their user rights directly, but it could be used to obtain information that could be used to try to further compromise the affected system.
This security update is rated Important for all supported releases of Microsoft Windows. For more information, see the Affected Software section.
The security update addresses the vulnerability by correcting how Windows processes TIFF image format files. For more information about the vulnerability, see the Vulnerability Information section.
For more information about this update, see Microsoft Knowledge Base Article 3029944..
Vulnerability Information
TIFF Processing Information Disclosure Vulnerability - CVE-2015-0061
An information disclosure vulnerability exists when Windows fails to properly handle uninitialized memory when parsing certain, specially crafted TIFF image format files. The vulnerability could allow information disclosure if an attacker runs a specially crafted application on an affected system.
An attacker could host a specially crafted website that is designed to exploit this vulnerability and then convince a user to view the website. This could also include an instant messenger or email message that takes users to the attacker's website. It could also be possible to display specially crafted web content by using banner advertisements or by using other methods to deliver web content to affected systems.
Revisions
V1.0 (February 10, 2015): Bulletin published.
Page generated 2015-02-10 8:32Z-08:00. | https://docs.microsoft.com/en-us/security-updates/SecurityBulletins/2015/ms15-016?redirectedfrom=MSDN | 2022-06-25T14:21:31 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.microsoft.com |
PLATINUM
This is a collection of rules based on the presence of indicators of compromise publicly reported as associated with this malicious actor.
PLATINUM is an activity group that has been active since at least 2009. This group has targeted victims associated with governments and related organizations in South and Southeast Asia.
Other names for this threat
TwoForOne
Did this page help you? | https://docs.rapid7.com/insightidr/platinum/ | 2022-06-25T13:19:55 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.rapid7.com |
To refactor the messenger’s package name to your own package name,
1. You have to be on the Android part of your android studio
2. Right click on com.procrea8.
silebookMessenger -> Rename Package. 3. Then, type in your package name -> Refactor
Note:
All package name in the code must be the same, most especially
1. the second package name in google service.json file
2. the java code package name.
3. the applicationId in your build.gradle or app file
| http://docs.crea8social.com/docs/site-management/package-name-in-crea8social-messenger/ | 2019-10-13T22:20:21 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['https://image.prntscr.com/image/Dy9obOGnQ8GFrvbSMycynw.png', None],
dtype=object)
array(['https://image.prntscr.com/image/f0pvP96XSHOwluFh1Cdn8g.png', None],
dtype=object)
array(['https://image.prntscr.com/image/pjk_qWQjRbiKuqUtxU3mbw.png', None],
dtype=object) ] | docs.crea8social.com |
Content Style Guide¶
Contents
Writing Style¶
When using pronouns in reference to a hypothetical person, gender neutral pronouns (they/their/them) should be used.
All documents (other than those for internal use only) must be written clearly and simply so that a non-expert is able to understand them. Preferably documents should be readable by students.
Any jargon used needs to be clearly explained and should be considered as a glossary definition.
Capitalisation Rules¶
In the majority of cases capitalisation should not be used for keywords and titles, with the following exceptions, where the phrase refers to a commonly used term that is often capitilised in the literature:
- Computer Science.
- Computational Thinking.
- Digital Technologies (note that this is the correct form to refer to the subject area in NZ, with caps and plural; if it’s referring to something other than the subject then use lower case e.g. “smartphones and other digital technologies”, or even better, avoid the phrase e.g. “smartphones and other digital devices”).
- Sorting Network.
- Numeracy.
- Literacy.
The following wouldn’t be capitalised: binary number(s), digits, binary digits
Extra notes for specific content¶
Glossary¶
The following are added to the glossary and linked to where the words are used:
- All Computer Science, programming, and Computational Thinking jargon.
- All education jargon.
- All curriculum language that is not broadly used internationally.
Topic¶
Topic description must apply to all the units within the topic. It is one introductory paragraph, less than 150 words, which gives a big picture overview of why this topic is being taught/is relevant, and what it will cover.
Learning outcomes¶
Learning outcomes are written using language familiar to teachers and simple enough that it is understandable for students. Learning outcomes always begin with a verb.
Programming challenges¶
There needs to be enough scaffolding to support students to be able to achieve a result, independently.
When listing Scratch block:
Separate out all the blocks that “click” together, leaving all the information inside where the parameter is written. All duplicates of a block should be displayed.
Where a variable is inserted into another block, those blocks stay together, example below:
All join blocks are displayed as one and all the variables/text are included, example below:
For blocks containing join blocks keep the join block within the parent block, example below.
Loops should keep the condition blocks, but the blocks within the loop should be extracted. | https://cs-unplugged.readthedocs.io/en/develop/author/content_style_guide.html | 2019-10-13T23:46:07 | CC-MAIN-2019-43 | 1570986648343.8 | [] | cs-unplugged.readthedocs.io |
- Cloud V2 for Developers
- API Overview
- Basic Topics
- API PaaS Tutorials
- Building Custom Search Integrations Using Coveo Cloud PaaS
- Search API
- Usage Analytics Write API
- Usage Analytics Read API
- Push API
- Overview
- Usage Example
- Tutorials
- Limits
- Leading Practices
- Managing Items and Permissions
- Managing Security Identities in a Security Identity Provider
- Updating the Status of a Push Source
- Resetting a Push Source
- Creating a File Container
- Understanding the orderingId Parameter
- Understanding the API Processing Delay
- Troubleshooting API Error Codes
- API Reference
- Activity API
- Authorization Server API
- Field API
- Index API
- Notification API
- Platform API
- Security Cache API
- Source API
- Source Logs API
- Indexing Pipeline Customization Tools Overview
- Indexing Pipeline Extensions
- Coveo On-Premises Crawling Module
- Coveo on Elasticsearch
- Coveo Cloud V2 - API Reference
Push API
The Push API exposes services which allow you to push items and their permission models into a source, and security identities into a security identity provider, rather than letting standard Coveo crawlers pull this content.
There is no graphical user interface sitting on top of the Push API, so you need to perform your own HTTP calls when you want to use its services.
You may want to consider using the open source Coveo Push API SDK for Python to interact with the Push API.
The following diagram provides a visual overview of the main interactions between custom crawlers, the Push API and a Coveo Cloud organization.
The articles in this section explain how to use the Push API.
Interactive generated reference documentation is also available through Swagger UI (see Coveo Cloud Platform API - Push API). | https://docs.coveo.com/en/68/ | 2019-10-13T23:00:45 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.coveo.com |
Extensions¶
Fundamentally, JupyterLab is designed as an extensible environment. JupyterLab extensions can customize or enhance any part of JupyterLab. They can provide new themes, file viewers and editors, or renderers for rich outputs in notebooks..
JupyterLab extensions are npm packages (the standard package format in Javascript development). You can search for the keyword jupyterlab-extension on the npm registry to find extensions. For information about developing extensions, see the developer documentation.
Note
If you are a JupyterLab extension developer, please note that the extension developer API is not stable and will evolve in the near future.
In order to install JupyterLab extensions, you need to have Node.js installed.
If you use
conda, you can get it with:
conda install -c conda-forge nodejs
If you use Homebrew on Mac OS X:
brew install node
You can also download Node.js from the Node.js website and install it directly.
Using the Extension Manager¶
To manage your extensions, you can use the extension manager. By default, the manager is disabled. You can enable it by searching Extension Manager in the command palette.
You can also
You can install new extensions into the application using the command:
jupyter labextension install my-extension
where
my-extension is the name of a valid JupyterLab extension npm package
on npm. Use the
my-extension@version
syntax to install a specific version of an extension, for example:
jupyter labextension install [email protected]
You can also install an extension that is not uploaded to npm, i.e.,
my-extension can be a local directory containing the extension, a gzipped
tarball, or a URL to a gzipped tarball.
We encourage extension authors to add the
jupyterlab-extension
GitHub topic to any repository with a JupyterLab extension to facilitate
discovery. You can see a list of extensions by searching GitHub for the
jupyterlab-extension
topic.
You can list the currently installed extensions by running the command:
jupyter labextension list
Uninstall an extension by running the command:
jupyter labextension uninstall my-extension
where
my-extension is the name of the extension, as printed in the
extension list. You can also uninstall core extensions using this
command (you can always re-install core extensions later).
Installing:
jupyter lab build:.
Disabling Extensions¶
You can disable specific JupyterLab extensions (including core extensions) without rebuilding the application by running the command:
jupyter labextension disable my-extension
This will prevent the extension from loading in the browser, but does not require a rebuild.
You can re-enable an extension using the command:
jupyter labextension enable my-extension
Advanced Usage¶
Any information that JupyterLab persists is stored in its application directory,
including settings and built assets.
This is separate from where the Python package is installed (like in
site_packages)
so that the install directory is immutable.
The application directory.
Note that the application directory is expected to contain the JupyterLab
static assets (e.g. static/index.html). If JupyterLab is launched
and the static assets are not present, it will display an error in the console and in the browser.
JupyterLab Build Process¶
To rebuild the app directory, run
jupyter lab build. By default, the
jupyter labextension install command builds the application, so you
typically do not need to call
build directly.
Building consists of:
Populating the
staging/directory using template files
Handling any locally installed packages
Ensuring all installed assets are available
Bundling the assets
Copying the bundled assets to the
staticdirectory
jupyter lab --core-mode.
JupyterLab Application Directory¶
The JupyterLab application directory contains the subdirectories
extensions,
schemas,
settings,
staging,
static, and
themes. The default application directory mirrors the location where
JupyterLab was installed. For example, in a conda environment, it is in
<conda_root>/envs/<env_name>/share/jupyter/lab. The directory can be
overridden by setting a
JUPYTERLAB_DIR environment variable.
It is not recommended to install JupyterLab in a root location (on Unix-like
systems). Instead, use a conda environment or
pip install --user jupyterlab
so that the application directory ends up in a writable location.
Note: this folder location and semantics do not follow the standard Jupyter config semantics because we need to build a single unified application, and the default config location for Jupyter is at the user level (user’s home directory). By explicitly using a directory alongside the currently installed JupyterLab, we can ensure better isolation between conda or other virtual environments.
extensions¶
The
extensions directory has the packed tarballs for each of the
installed extensions for the app. If the application directory is not
the same as the
sys-prefix directory, the extensions installed in
the
sys-prefix directory will be used in the app directory. If an
extension is installed in the app directory that exists in the
sys-prefix directory, it will shadow the
sys-prefix version.
Uninstalling an extension will first uninstall the shadowed extension,
and then attempt to uninstall the
sys-prefix version if called
again. If the
sys-prefix version cannot be uninstalled, its plugins
can still be ignored using
ignoredPackages metadata in
settings.
schemas¶
The
schemas directory contains JSON
Schemas that describe the settings used by
individual extensions. Users may edit these settings using the
JupyterLab Settings Editor.
settings¶
The
settings directory may contain
page_config.json,
overrides
are an array of strings. The following sequence of checks are performed
against the patterns in
disabledExtensions and
deferredExtensions.
If an identical string match occurs between a config value and a package name (e.g.,
"@jupyterlab/apputils-extension"), then the entire package is disabled (or deferred).
If the string value is compiled as a regular expression and tests positive against a package name (e.g.,
"disabledExtensions": ["@jupyterlab/apputils*$"]), then the entire package is disabled (or deferred).
If an identical string match occurs between a config value and an individual plugin ID within a package (e.g.,
"disabledExtensions": ["@jupyterlab/apputils-extension:settings"]), then that specific plugin is disabled (or deferred).
If the string value is compiled as a regular expression and tests positive against an individual plugin ID within a package (e.g.,
"disabledExtensions": ["^@jupyterlab/apputils-extension:set.*$"]), then that specific plugin is disabled (or deferred).
An example of a
page_config.json file is:
{ "disabledExtensions": [ "@jupyterlab/toc" ], "terminalsAvailable": false }
overrides.json
You can override default values of the extension settings by
defining new default values in an
overrides.json file.
So for example, if you would like
to set the dark theme by default instead of the light one, an
overrides.json file containing the following lines needs to be
added in the application settings directory (by default this is the
share/jupyter/lab/settings folder).
{ "@jupyterlab/apputils-extension:themes": { "theme": "JupyterLab Dark" } }
build_config.json
The
build_config.json file is used to track the local directories
that have been installed using
jupyter labextension install <directory>, as well as core extensions
that have been explicitly uninstalled. An example of a
build_config.json file is:
{ "uninstalled_core_extensions": [ "@jupyterlab/markdownwidget-extension" ], "local_extensions": { "@jupyterlab/python-tests": "/path/to/my/extension" } }
staging and static¶
The
static directory contains the assets that will be loaded by the
JuptyerLab application. The
staging directory is used to create the
build and then populate the
static directory.
Running
jupyter lab will attempt to run the
static assets in the
application directory if they exist. You can run
jupyter lab --core-mode to load the core JupyterLab application
(i.e., the application without any extensions) instead.
JupyterLab User Settings Directory¶
The user settings directory contains the user-level settings for Jupyter extensions.
By default, the location is
~/.jupyter/lab/user-settings/, where
~ is the user’s home directory. This folder is not in the JupyterLab application directory,
because these settings are typically shared across Python environments.
The location can be modified using the
JUPYTERLAB_SETTINGS_DIR environment variable. Files are automatically created in this folder as modifications are made
to settings from the JupyterLab UI. They can also be manually created. The files
follow the pattern of
<package_name>/<extension_name>.jupyterlab-settings.
They are JSON files with optional comments. These values take precedence over the
default values given by the extensions, but can be overridden by an
overrides.json
file in the application’s settings directory.
JupyterLab Workspaces Directory¶
JupyterLab sessions always reside in a workspace. Workspaces contain the state
of JupyterLab: the files that are currently open, the layout of the application
areas and tabs, etc. When the page is refreshed, the workspace is restored.
By default, the location is
~/.jupyter/lab/workspacess/, where
~ is the user’s home directory. This folder is not in the JupyterLab application directory,
because these files are typically shared across Python environments.
The location can be modified using the
JUPYTERLAB_WORKSPACES_DIR environment variable. These files can be imported and exported to create default “profiles”,
using the workspace command line tool. | https://jupyterlab.readthedocs.io/en/stable/user/extensions.html | 2019-10-13T22:38:01 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['../_images/extension_manager_default.png',
'../_images/extension_manager_default.png'], dtype=object)
array(['../_images/extension_manager_discover.png',
'Screenshot showing the discovery extension listing.'],
dtype=object)
array(['../_images/extension_manager_search.png',
'Screenshot showing an example search result'], dtype=object)
array(['../_images/extension_manager_rebuild.png',
'Screenshot showing the rebuild indicator'], dtype=object)] | jupyterlab.readthedocs.io |
Represents a persistable Grails domain class.
The name of the default ORM implementation used to map the class
Returns a map of constraints applied to this domain class with the keys being the property name and the values being ConstrainedProperty instances
Returns the default property name of the GrailsClass. For example the property name for a class called "User" would be "user"
Retreives the validator for this domain class
Sets the validator for this domain class
validator- The domain class validator to set | http://docs.grails.org/4.0.0/api/grails/core/GrailsDomainClass.html | 2019-10-13T23:46:44 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.grails.org |
True geometry additions and subtraction for complex creations
ZBrush includes a complete Boolean system in addition to its other Boolean-style features like DynaMesh and Remesh All. These features all use the SubTool operators to define if a SubTool will be used as an Addition, Subtraction or Intersection model.
This Boolean tool is composed of two main elements:
- The Live Boolean mode found in the Render >> Render Booleans sub-palette lets you preview in real-time the results of Boolean operations on your SubTools. You can move, scale, rotate, duplicate, change the operation mode and even sculpt in this mode. In the default ZBrush UI, the Live Boolean switch is readily accessed to the left of the Edit mode button.
- The Make Boolean Mesh function, found in the Tool >> SubTool >> Boolean sub-palette converts all Boolean operations to a new Tool. These results can be reused for further Boolean operations inside of ZBrush or exported to other 3D applications.
This Boolean tool has been optimized to be ultra-fast. A model composed of several millions of polygons can be converted from the preview to real geometry in just a few seconds, although highly complex models can take up to a few minutes.
The Boolean function will work with almost all ZBrush features, so long as the models are some form of PolyMesh 3D. DynaMesh models and those with multiple subdivision levels will work perfectly, as well as low polygon models created using the ZModeler brush. Advanced features like ArrayMesh and NanoMesh are supported as well since the resulting models are PolyMeshes.
3D Primitives, ZSpheres ZSketches, or other render-time effects (such as MicroMesh) are not supported by the Boolean system until they are converted to PolyMeshes.
To learn more about Live Boolean explore the pages below:
- Important Notice about Boolean Operations
- Boolean Process
- Data Support and Preservation
- Live Boolean Mode
- Basic Boolean Process in Action
- Advanced Boolean Process in Action
- Performance
- Boolean Resulting Topology Issues and Errors
- Main Boolean Functions
- Geometry and Topology Analysis Functions
- Boolean Preferences | http://docs.pixologic.com/user-guide/3d-modeling/modeling-basics/creating-meshes/live-boolean/ | 2019-10-13T23:20:33 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.pixologic.com |
Creating InsertMesh and InsertMultiMesh Brushes
Combined with DynaMesh, insertion brushes become an incredibly powerful tool in your ZBrush repertoire. Because of this importance, you can now quickly create new brushes in a few simple steps by transforming your active Tool (and its SubTools) into an Insert brush, allowing you to insert this model into another mesh.
Creating an InsertMesh or InsertMultiMesh Brush
Follow the steps below to create your own InsertMesh or InsertMultiMesh Brush:
1. Load your Tool and define the position that you wish it to have when inserting it on the surface of future meshes.
- The orientation used will be in relation to the screen plane, i.e. how the mesh appears on the canvas. You can create several brushes with different orientations if you want, simply by rotating the model on screen and creating a brush each time.
- For an InsertMultiMesh brush created from subtools the same orientation will be used for all subtool meshes.
2. To avoid potential scale issues, you can (optionally) unify the mesh by clicking on the Tool >> Deformation >> Unify button. This automatically resizes the model to the ideal size for ZBrush to work with. For a model with subtools, you can press Tool >> Deformation >> Repeat To Other after unifying one subtool to unify the others.
3. Create your Insert brush:
- To create an InsertMesh brush, click on the Brush >> Create InsertMesh button. A new brush will appear in the Brush palette with an icon corresponding to the current Tool.
- To create an InsertMultiMesh brush, click on the Brush >> Create InsertMultiMesh button. (This requires a model with multiple SubTools.) A new brush will appear in the Brush palette with an icon corresponding to the last SubTool. The creation of the brush doesn’t take account of the visibility of SubTools, so that even if some SubTools are hidden, the brush will include them.
- You can also create an InsertMultiMesh brush from single meshes by using the Append option when using the Create InsertMesh button.
Note:
Make sure you’re happy with the SubTool names before you create an InsertMultiMesh brush! Each mesh within the brush will be identified by the SubTool name. ABC selection will therefore be a lot easier with helpful SubTool names.
You may also use only part of your model as an Insert mesh. This is done by hiding the polygons that you do not wish to become part of the brush. Only the visible polygons will be converted to an InsertMesh.
The Demo Soldier has been converted to a MultiMesh Insert Brush. Each of its SubTools has become a mesh ready to be inserted
Notes:
For DynaMesh it is advised to use volumes. In this case you would not want to hide polygons. For using the Insert brushes to replace polygons within another model, the mesh must have an opening and so you will often need to hide polygons before creating the InsertMesh.
Depending the shape of a replacement part, it may be useful to crease the mesh edges before converting the surface to an InsertMesh. The inserted mesh will smooth together with the model it’s being inserted into. Creasing before creating the Insert brush can avoid having to crease every time you use the brush!
If you wish to use your InsertMesh or InsertMultiMesh brush in future sessions, you must save it after creating it! Be sure to use Brush >> Save As to retain the brush for future use.
InsertMesh brushes & PolyPaint
You can include polypaint in an InsertMesh brush. When using the brush, turn off Colorize while inserting to preserve the polypaint. | http://docs.pixologic.com/user-guide/3d-modeling/sculpting/sculpting-brushes/insert-mesh/creating-imms/ | 2019-10-13T23:06:43 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['http://docs.pixologic.com/wp-content/uploads/2013/02/DemoS-IMM.jpg',
'DemoSoldier IMM brush'], dtype=object) ] | docs.pixologic.com |
Therma-Tru Installation & Finishing Instructions
Therma-Tru Recommended Best Practices Use Water Resistive Barrier and Flexible Flashing:
We recommend the use of a Water Resistive Barrier (WRB) applied to the exterior sheathing (OS or other) and the use of an adhesive or flexible flashing product to seal around the opening. The WRB should be cut in the opening (following manufacturer’s guidelines) with the head of the flap taped up, to be sealed later in Step 11. The flashing should be applied in an overlapping manner as shown, always working from the bottom up (follow manufacturer’s guidelines).
Use a Sill Pan: We recommend you first “dry fit” the sill pan in the opening, folowing the instructions furnished with a sill pan. Place the right and left sill pan ends tight against the sides of the opening. Check the center section for proper length and if necessary, cut with a hack saw or tin snips. Be sure to allow 2 inches of overlap at the joints
Note
Use only the PVC cement provided in the sill pan kit to flue the pieces together. The sill pan must be sealed to the sub-floor using an Elastomeric or Polyurethane sealant, but do not apply sealant to the bottom of the sill when using a sill pan.
Step 1: Check Door Unit
- A
- Check width and height.
Check and Prepare Opening
- A
- Is the opening the correct size for the door unit? Check it against the door frame size now, before installation. The opening should be frame height plus 1/2” and frame width plus 1/2” to 3/4.” Fix any problems now.
- B
- Is the sub floor level and solid? Provide a flat, level, clean weight bearing surface so the sill pan or sill can be properly caulked and sealed to the opening. Scrape sand or fill as required
Note
If additional floor covering clearance is required, attach the shim board to the sub floor. Be sure to caulk well under the shim board.
Is the opening square? Check all corners with a framing square. Double check by comparing diagonal measurements. Fix any problems now.
Check to be sure the framing walls around the opening are in the same lane. Do this by performing a “ ring test” for plumb. String Test for Plumb: Attach a string diagonally across the opening from the outside, as shown. The string(s) should gently touch in the center, if not the opening is “out of plum” by twice that distance and needs to be corrected. Flip the string over itself to check both planes. Fix any problems now. **An “out of plumb” condition is one of the most common reasons door units leak air and water.
Caulk the Sub Floor
- A
- On the sub floor at opening place 3 very large beads of sealant. Run beads full width of the opening.
- B
- Use Only Elastomeric or Polyurethane sealant
- C
- Use an Entire Tube when Caulking along the Sub Floor
Installation With a Sill Pan
- A
- Place the right and left sill pan ends onto the caulk beads and tightly against the side of the opening.
- B
- Then liberally coat the overlapped areas and the recessed areas of the pieces with the PV cement provided. Place center section(s) in position and hold pieces together long enough to ensure a good bond.
- C
- For added protection spread a bead of caulk along the glue joints and to prevent air infiltration, run a bead of caulk along the lower interior edge of a sill an. Additional caulking could affect the performance of the sill pan.
Do Not Caulk the bottom of the Sill when using a Sill Pan.
Installation Without a Sill Pan
Lay the door unit on edge or face so that the bottom surface of the sill can be caulked.Place very large beads of caulk across the full width of the sill. Additionally, place beads of caulk along the junction of the sill and the jamb and on the bottom surface of the jambs and brickmould.
Note
If a sill extender is used, place a large bead of caulk at the junction of the extender and the sill approach.
Step 3 cont.: Caulking Back Side of Brickmould
Important!
Apply sealant to the back side of brickmould around the entire perimeter of the door unit. A 1/2 - 5/8” bead of Elastomeric or Polyurethane caulk is essential.
Step 4: Place Unit in Opening and Temporarily Fasten
- A
- Lift the unit up. With top edge tilted away from opening, center the unit and place sill down onto sill pan or caulk beads and tilt into opening.
- B
- For all door unit configurations, note the hinge location and mark those locations on the jamb faces near the door surfaces. Pre-drill 1/8 inch diameter holes at these locations for screw placement. A counter sink bit will help conceal the screw heads.
-.
-.
Step 4 cont.:Plumb Hinge Side Jamb
- A
- Work from side of the door that is weatherstripped.
- B
- Use a 6’ level and plumb the hinge side jamb both ways (right to left and inside to outside).
- C
- Place screws through the hinge side jamb into the studs, at each remaining hinge location, as shown in the diagrams. Use #8 x 2-1/2” or 3” exterior grade screws.
Do Not, drive the screws completely in at this time.For Single or Double Doors, place screws at each hinge location, so shims can be placed behind hinges above screws. The screws will keep the shims from falling down while adjustments are being made.
- D
- For Sidelite units, fasten the jamb on then hinge side of the door.
- E
- For Double Door and Patio Units, fasten the fixed or passive side of the unit first.
Step 5: Shim at Hinge Locations and Secure Hinge Jamb.
- A
- Leave door fastened and closed with transport clip..
- B
- Shim above screws, behind each hinge location, between the opening and the jamb.
- C
- Use a 6 foot level and re-check hinge jamb to ensure it is plumb and straight.
- D
- Finish driving screws tight in the middle first then top and bottom last.
Step 6: Adjust Rest of Frame and Fasten
- A
- From the weatherstrip side of the door, check weatherstrip margins and contact.
- B
- Make frame adjustments so the weatherstrip contacts the door surface equally at the top, middle and bottom, an even 3/8” to 1/2” when fully closed.
- C
- Secure the lock side jamb with #8 x 2-1/2 or 3” screws through pre-drilled holes at the top and bottom. Do not drive screws tight at this time.
- D
- From the swing side of the door, shim above the screw locations and make adjustments so the margins between the door and frame are even top to bottom.
Note
For Double Doors, make adjustments that effect the alignment, margins and weatherstrip contact between the doors. Also follow the Astragal Site Package Instructions for details on properly setting the slide bolt hole locations.
Adjust Rest of Frame and Fasten
Re-Check everywhere for plumb and square, and an even weatherstrip contact.
Finish driving all screws tight
Step 7: Remove Transport Clip and Open Door
- A
- Remove the transport clip.
- B
- Open and close door to check for smooth operation.
- C
- With the door open, drill 1/8” diameter pilot holes in the top hinge in the 2 screw hole locations closest to the weatherstrip. Then, install the #10 x 2-1/2” screws (provided) through the hinge, into the stud, to anchor the door frame and prevent sagging.
For Sidelite and Patio Units: With the door open, check to determine if the 2-1/2" long hinge screws were pre-installed in the hinges. If not, drill 1/8" diameter pilot holes and install the long hinge screws in the hole locations closest to the weatherstrip.
- D
- Close the door and carefully shim between the jamb and the opening behind the adjustable strike plate area
- E
- Open the door and drill 1/8" diameter pilot holes and install the #8 x 2-1/2" screws (provided) through the strike plate holes to secure the lock side jamb and provide security
- F
- Adjust strike plate in or out for proper weatherstrip contact and door operation, then finish tightening screws.
Step 8: Adjust Sill
- A
- Your door unit may have an adjustable threshold cap. When properly adjusted, it should be snug and slightly difficult to pull a dollar bill out from under the door when it is fully closed. The dollar bill should be able to be removed without tearing.
This check should be performed at each adjustment screw location
- B
- After adjusting the threshold cap, ensure that the weatherstrip is flush with the top of the threshold cap. Trim as necessary.
Step 9: Install Corner Seal Pads - Inswing Units Only
- A
- Apply sealant (Polyurethane or Elastomeric) at the joint where the threshold cap meets the door jambs.
- B
- Remove the self-stick paper from the corner seal pads and apply to the door jamb, with the bottom lined up evenly with the top of the threshold cap. When the pad is correctly installed, the tab is on top and the narrow part is on the bottom.
- C
- The bottom of the pad is the same width of the threshold cap to help with alignment during installation
Step 10: Additional Frame Anchoring:
If sill is prepared for anchoring screws, place appropriate screws through the sill into the sub floor where needed. (Primary on Outswing Sills)
We recommend that you provide additional frame anchoring as shown here. Certain states or jurisdictions, notably Florida and the coast of Texas, have specific installation requirements and may require installation in strict accordance with the product approval for a specific product. You should always check with the local authority having jurisdiction for any specific instalation requirements that may apply. Specific product approval installation instructions, including those required for the High Velocity Zone (HVHZ) are also available at
Doors With Sidelites:
Shim above mull post or jambs separating doors and sidelites. Screw through the frame into the header, adjacent to the shims.
Double Doors:
Place temporary shims above the center of the head frame, where doors meet. Pre-drill and insert a screw through frame into header, then remove the temporary shims.
Patio Doors:
Shim above the mull post(s), pre-drill and insert a screw through the frame into the header, at either side of the post.
Step 11: Weatherproof, Finish and Maintain
- A
- Provide and maintain a properly installed cap or head flashing to protect top of surfaces from water intrusion damage. Tape and properly seal the top flap of the Water Resistive Barrier (WRB) over the head flashing.
- B
- Caulk around entire "weather" side of unit, sealing along the brickmould to the flashing material or siding and seal all joints between the jambs and moldings.
- C
- Seal the joints between the exterior hardware trim and the door face to prevent air and water infiltration.
- D
- Place and set galvanized finish nails through the brickmold around the perimeter. Use exterior grade screws if you are installing a storm door to the brickmould. Countersink all fasteners and cover with exterior grade putty.
- E
- Add insulation material to the cavity between the opening and the unit to reduce air infiltration and heat transfer.
- F
- All Therma-Tru Steel doors must be finished within several days of the installation date for continued warranty coverage. For Fiberglass doors the finishing requirement is within 6 months of installation.
- G
- Paint or stain according to Therma-Tru Finishing instructions. Do Not paint or stain the weatherstrip, it is "friction-fit" and easily removed for painting or staining.All 6 sides of the doors must be finished. For out-swing doors the sides, top and bottom must be inspected and maintained as regularly as all other surfaces. All bare wood surfaces such as the door frame exposed to weather should be primed and painted or stained and top coated within two weeks of exposure for best performance.
Maintain or replace sealants and finishes as soon as any deterioration is evident. For semi-gloss or glossy paint or clear coats, do this when the surface becomes dull or rough. More sever climates and exposures will require more frequent maintenance
Finishing Instructions
Work only when temperatures are between 50° and 90°F and with relative humidity below 80%. Do not finish in direct sunlight
Steel and Smooth-Star Doors:
To Paint Doors: Clean first with a mild detergent and water or use a TSP (tri-sodium phosphate) solution. Rinse well and allow it to dry completely. Mask off the hardware, glass and remove weatherstripping before painting. Use high-uqlity acrylic latex house paint, following manufacturer's directions for application. Use exterior grade finishes for outside surfaces. Paint edges and exposed ends of door.
Doorlite Frames:
Remove any exess glazing ealant by first spraying with a window cleaner or water. Use a single edge razor blade to score the glazing along the edge of the frame. Holding the razor blade at a 45 degree angle, scrape glazing from glass. Wipe remaining residue off with window cleaner or mineral spirits. Clean frame with a mild detergent and water, or use a TSP solution. Rinse well and allow to dry completely. Mask off glass. Prime door lite frames with an alkyd- or acrylic-based primer. Allow primer to dry before applying finish paint coats. Use high-quality acrylic latex house paint, following manufacturer's application instructions. Use exterior grade finishes for outside surfaces.
Classic-Craft and Fiber-Classic Doors:
To Finish Doorlite Frames and Panel Inserts: Remove any excess glazing sealant by first sprying with a window cleaner or water. Use a single edge razor blade to score the glazing along the edge of the frame. Holding the razor blade at a 45° angle, scrape glazing from glass. Wipe remaining residue off with window cleaner or mineral spirits. Mask off glass. Paint or stain using same materials as for the door.
To Paint Doors:
Clean first with a mild detergent and water or use a TSP (tri-sodium phosphate) solution. Rinse well and allow to dry completely. Prime with an alkyd- or acrylic- based primer. Allow primer to dry completely, then paint with acrylic latex house paint, following the paint manufactuer's application instructions. Use a primer and paint that areas that are compatible. Use exterior grade finishes for outside surfaces. Paint edges and exposed ends of door.
To Stain Doors:
Clean first with a clean cloth and mineral spirits and allow to air dry or wash door with mild detergent and water, or a TSP (tri-sodium phosphate) solution. Rinse well and allow to dry completely. For stained surfaces, we only recommend the use of the stain and clear coat products found the the Therma-Tru Same-Day Stain Finishing Kit. Apply stain with a rag. The longer the stain is left to "setup" before wiping off, the darker the color will be. Using a clean rag, wpe off the stain to the color shade you desire. Remove any excess stain from the panel grooves with the foam brush provided; allow the stain to dry for at least 6 hours before applying topcoat. See Therma-Tru Same-Day stain Finishing Kit instructions for compete details.
Warning:
Modification or machining of this product can release wood dust, a substance known to the State of California to cause cancer. | https://docs.grandbanksbp.com/article/92-therma-tru-installation-finishing-instructions | 2019-10-13T22:30:59 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e05edec69791085647d19e/file-PajjNLF0wu.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e08d9dc69791085647d3ae/file-phn0hzBBGM.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e08e119033600a29528b4b/file-oX9nJzpuy2.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e08e23c69791085647d3af/file-4z9z1AtERE.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e0943cc69791085647d3bf/file-kNDayadqJg.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e094cb9033600a29528b61/file-lN4PItZmlT.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e0955ac69791085647d3c3/file-RO2CKxkQVW.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e0a4a39033600a29528b9d/file-aboL2BCpFI.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e0a4f5c69791085647d402/file-a8YO4LySLN.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e0a60b9033600a29528ba8/file-BL7Jj7vhgg.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e0a78ec69791085647d414/file-eGsLjrKsc0.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e0ab039033600a29528bcd/file-hE7FEaKTEU.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e0b02fc69791085647d432/file-SpyMKgyVNA.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55e0b0a5c69791085647d435/file-UdEZCKq6B4.png',
None], dtype=object) ] | docs.grandbanksbp.com |
All content with label amazon+aws+client_server+dist+gridfs+import+infinispan+listener+migration+mvcc+notification.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, release, query, jbossas, lock_striping, nexus, guide, schema, cache, s3, memcached, grid,
jcache, api, xsd, ehcache, maven, documentation, write_behind, ec2, 缓存, hibernate, custom_interceptor, setup, clustering, eviction, concurrency, out_of_memory, jboss_cache, index, events, batch, configuration, hash_function, buddy_replication, loader, write_through, cloud, remoting, tutorial, read_committed, jbosscache3x, xml, distribution, cachestore, data_grid, cacheloader, resteasy, cluster, development, permission, websocket, async, transaction, interactive, xaresource, build, searchable, demo, installation, scala, client, non-blocking, jpa, filesystem, tx, gui_demo, eventing, testng, infinispan_user_guide, standalone, hotrod, webdav, snapshot, repeatable_read, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, jsr-107, docbook, jgroups, lucene, locking, rest, hot_rod
more »
( - amazon, - aws, - client_server, - dist, - gridfs, - import, - infinispan, - listener, - migration, - mvcc, - notification )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/amazon+aws+client_server+dist+gridfs+import+infinispan+listener+migration+mvcc+notification | 2019-10-13T23:49:53 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.jboss.org |
All content with label archetype+as5+cache+ec2+ehcache+eviction+gridfs+gui_demo+hash_function+infinispan+jcache+listener.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, intro, pojo_cache, lock_striping, jbossas, nexus, guide,
schema, amazon, s3, grid, memcached, test, api, xsd, maven, documentation, youtube, userguide, write_behind, 缓存, hibernate, aws, interface, clustering, setup, fine_grained, concurrency, out_of_memory, jboss_cache, import, index, events, configuration, buddy_replication, loader, pojo, write_through, cloud, remoting, mvcc, tutorial, notification, presentation, murmurhash2, xml, read_committed, jbosscache3x, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, transaction, async, interactive, xaresource, build, searchable, demo, scala, installation, client, non-blocking, migration, jpa, filesystem, tx, user_guide, article, eventing, client_server, testng, infinispan_user_guide, murmurhash, snapshot, repeatable_read, webdav, hotrod, docs, consistent_hash, batching, store, whitepaper, jta, faq, spring, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - archetype, - as5, - cache, - ec2, - ehcache, - eviction, - gridfs, - gui_demo, - hash_function, - infinispan, - jcache, - listener )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/archetype+as5+cache+ec2+ehcache+eviction+gridfs+gui_demo+hash_function+infinispan+jcache+listener | 2019-10-13T23:42:45 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.jboss.org |
All content with label as5+batching+cache+eviction+gridfs+index+infinispan+jbosscache3x+listener+replication+searchable+snapshot.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server,, concurrency, fine_grained, out_of_memory, jboss_cache, import, events, configuration, batch, hash_function, buddy_replication, pojo, write_through, cloud, mvcc, notification, tutorial, presentation, read_committed, xml, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, websocket, transaction, interactive, xaresource, build, demo, installation, scala, client, migration, non-blocking, jpa, filesystem, tx, user_guide, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, repeatable_read, webdav, hotrod, docs, consistent_hash, store, whitepaper, jta, faq, 2lcache, spring, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - as5, - batching, - cache, - eviction, - gridfs, - index, - infinispan, - jbosscache3x, - listener, - replication, - searchable, - snapshot )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/as5+batching+cache+eviction+gridfs+index+infinispan+jbosscache3x+listener+replication+searchable+snapshot | 2019-10-13T23:59:56 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.jboss.org |
Items
This article explains how the different types of items supported in RadLayoutControl can be used. The items types are shown in Figure1.
Figure 1: Item Types
- LayoutControlItem: This item holds the controls added to a RadLayoutControl. Each control is added to an item and then the item is added to the layout control. This grants you control over the control sizing and position. The most important properties are:
- MinSize: Gets or sets the item’s minimum size.
- MaxSize: Gets or sets the item’s maximum size.
- DrawText: specifies if the item's text should be drawn.
- TextPosition: Gets or set the text position according to the hosted control.
- TextSizeMode: Controls the size mode of the item's text part. The possible values are ProportionalSize (the text is resized proportionally) and FixedSize (the text size is fixed). Depending on this value the corresponding size property is used.
- TextProportionalSize: Gets or sets the proportional text size when the proportional size mode is used.
- TextFixedSize: Gets or sets the text size when the fixed size mode is used.
- LayoutControlLabelItem: Basic item that allows displaying text and/or image.
- LayoutControlSeparatorItem: Stands as a separator, its orientation is determined by the position it is placed in. You can use the Thickness property to set the item width.
- LayoutControlGroupItem: A group with a header and another layout container inside of it, can be collapsed, has its own Items collection. Its most important properties are:
- HeaderElement: Gives access to the underlying CollapsiblePanelHeaderElement.
- HeaderHeight: Gets or set the header height.
- IsExpanded: Indicates if the group is expanded.
- TextAlignment: Gets or set the header text alignment.
- ShowHeaderLine: With this property you can hide the horizontal line in the group header.
- LayoutControlSplitterItem: This item allows the resizing of the containers on its both ends. You can set its width with the Thickness property.
- LayoutControlTabbedGroup: Holds LayoutControlGroupItems and allows switching between them in a tabbed interface. Its most important properties are:
- TabStrip: Gives access to the tabs. This way you can change their properties.
- ItemGroups: This is the collection which holds the underlying groups.
- SelectedGroup: Gets or sets the selected group. | https://docs.telerik.com/devtools/winforms/controls/layoutcontrol/items | 2019-10-13T22:59:07 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['images/layoutcontrol-items001.png', 'layoutcontrol-items 001'],
dtype=object) ] | docs.telerik.com |
- Download the product from the WSO2 product page.
If you are using the Admin Services to count the users and roles, follow the below steps:>
Access the WSDL of UserStoreCountService service by browsing. If the WSDL is loading, access the methods of the service through SoapUI. Here, you will have access to additional methods (CountByClaimsInDomain, countClaims) than from the Management Console.
By default, only JDBC user store implementations support this service but the functionality can be extended to LDAP user stores or any other type of user store as well. | https://docs.wso2.com/display/IS520/Counting+users+and+roles+using+Management+Console+and+Admin+Services | 2019-10-13T22:52:03 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.wso2.com |
How To Setup Google FCM
FCM is used for push notification which you need to setup on android studio and your website. First register your FCM account at and create a project for your app as show in the image below
Open the newly created project to your credential details, which you will provide to us when create a app build request. Follow the below steps
- Open the newly created project for your site
- As show in image below, click on the settings icon -> project settings to get the details
- Click on CLOUD Message tab to get the FCM Web Api Key
- Take note of the Server Key as FCM Web Api Key you will provide that for the app setup as shown in the image below
- Now click on add app, if you have not added the android app before as show in the image below.
- Select Add Firebase to your Android app
- In the package name type com.procrea8.sitename please replace sitename with your domain For example : mysite.com the package name will be com.procrea8.mysite
- Now click the add App button
- Now download the google-service.json file and attach with the app build request as shown below
- Provide these details in your app setup request as show below
Thanks for reading | http://docs.crea8social.com/docs/setup-2/google-settings-instruction/ | 2019-10-13T23:05:17 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['http://image.prntscr.com/image/03dfebda082b473db174e956df4edc91.png',
None], dtype=object) ] | docs.crea8social.com |
Configurations are interface layouts that you have either created or that come with ZBrush. ZBrush ships with several configurations that you can access by pressing the Load Next User Interface Layout button in the upper right area of the interface. You can also save your own by pressing Store Config in the Preference > Config sub-palette.
Saving A New Default Configuration
- Press Preferences: Custom UI: Customize
- Customize the interface. To learn how to customize the interface click here.
- When done, press Preferences:Config:Store Config to store the current interface layout as you standard interface.
- ZBrush will give you a message that your settings have been saved successfully. The keyboard shortcut for this action is Ctrl+Shift+I.
Saving A Configuration
To save an interface configuration that you only occassionaly use do the following:
- Press Preferences: Custom UI: Customize
- Customize the interface. To learn how to customize the interface click here.
- When done, press Preferences:Config:Save UI.
- Press Preferences: Config: Load Ui
- Navigate to the interface you want to load and press Enter.
Restoring ZBrush’s Default Interface
You can restore ZBrush’s interface to the interface that ZBrush shipped with.
- Press Preferences: Restore Standard UI
This returns you to the same layout that ZBrush shipped with. Any colors and special settings within the palettes (such as memory management) will not be changed. In other words, only the positions of elements are changed by clicking on any of these layout buttons. Your personal settings will remain unaffected regardless of the layout.
You can also restore your own custom interface when in a different configuration.
- Press Preferences: Config: Restore Custom UI
You will be returned to the custom layout that you assigned earlier. | http://docs.pixologic.com/user-guide/customizing-zbrush/interface-layout/ | 2019-10-13T22:41:42 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.pixologic.com |
Important: #84910 - Deny direct FAL commands for form definitions¶
In order to control settings in user provided form definitions files and only
allow manipulations by using the backend form editor (or direct file access
using e.g. SFTP) form file extensions have been changed form simple
.yaml
to more specific
.form.yaml.
Direct file commands by using either the backend file list module or implemented
invocations of the file abstraction layer (FAL) API are denied per default and
have to allowed explicitly for the following commands for files ending with the
new file suffix
.form.yaml:
- plain command invocations
- create (creating new, empty file having
.form.yamlsuffix)
- rename (renaming to file having
.form.yamlsuffix)
- replace (replacing an existing file having
.form.yamlsuffix)
- move (moving to different file having
.form.yamlsuffix)
- command and content invocations - content signature required
- add (uploading new file having
.form.yamlsuffix)
- setContents (changing contents of file having
.form.yamlsuffix)
In order to grant those commands,
\TYPO3\CMS\Form\Slot\FilePersistenceSlot
has been introduced (singleton instance).
// Allowing content modifications on a $file object with // given $newContent information prior to executing the command $slot = GeneralUtility::makeInstance(FilePersistenceSlot::class); $slot->allowInvocation( FilePersistenceSlot::COMMAND_FILE_SET_CONTENTS, $file->getCombinedIdentifier(), $this->filePersistenceSlot->getContentSignature($newContent) ); $file->setContents($newContent);
In contrast to plain command invocations, those having content invocations
(
add and
setContents, see list of commands above) require a content signature
as well in order to be executed. The previous example demonstrates that for the
setContents command.
Extensions that are modifying (e.g. post-processing) persisted form definition files using the file abstraction layer (FAL) API need to adjust and extend their implementation and allow according invocations as outlined above.
See Issue #84910 .. index:: Backend, FAL, ext:form | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/8.7.x/Important-84910-DenyDirectFALCommandsForFormDefinitions.html | 2019-10-14T00:00:43 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.typo3.org |
After you complete the installation, you can reset the following passwords:
Instructions on resetting each of these passwords are included in the sections that follow.‑service/bin/apigee‑service apigee‑openldap \ change‑ldap‑password ‑o OLD_PASSWORD ‑n NEW_PASSWORD
- Run the following command to store the new password for access by the Management Server:
/opt/apigee/apigee‑service/bin/apigee‑service edge‑management‑server \ store_ldap_credentials ‑p NEW_PASSWORD
To reset the system admin password:
- Edit the silent config file that you used to install the Edge UI to set the following properties:
APIGEE_ADMINPW=NEW_PASSWORD.
- On the UI node, stop the Edge UI:
/opt/apigee/apigee-service/bin/apigee-service edge-ui stop
- Use the
apigee-setuputility.
- On the Management Server, create a new XML file. In this file, set the user ID to "admin" and define the password, first name, last name, and email address using the following format:
<User id="admin"> <Password><![CDATA[password]]></Password> <FirstName>first_name</FirstName> <LastName>last_name</LastName> <EmailId>email_address</EmailId> </User>
- On the Management Server, execute the following command:
curl ‑u "admin_email_address:admin_password" ‑H \ "Content‑Type: application/xml" ‑H "Accept: application/json" ‑X POST \ "" ‑d @your_data_file
Where your_data_file is the file you created in the previous step.
Edge updates your admin password on the Management Server.
- Delete the XML file you created. Passwords should never be permanently stored in clear text., as the following example shows:
‑service/bin/apigee‑service apigee‑setup reset_user_password ‑u [email protected] ‑p Foo12345 ‑a [email protected] ‑P adminPword
cp ~/Documents/tmp/hybrid_root/apigeectl_beta2_a00ae58_linux_64/README.md ~/Documents/utilities/README.md
Shown below is an example config file that you can use with the "-f" option:
[email protected] USER_PWD="Foo12345" APIGEE_ADMINPW=ADMIN_PASSWORD
You can also use the Update user API to change the user password.
SysAdmin must:
- Set the password on any one Cassandra node and it will be broadcast to all Cassandra nodes in the ring
- Update the Management Server, Message Processors, Routers, Qpid servers, and Postgres servers on each node with the new password
For more information, see.
To reset the Cassandra password:
-is the IP address of the Cassandra node.
9042is the Cassandra port.
- The default user is
cassandra.
- The default password is '
cassandra'. If you changed the password previously, use the current password. If the password contains any special characters, you must wrap it in single quotes.
- Execute the following command as the
cqlsh>prompt to update the password:
ALTER USER cassandra WITH PASSWORD 'NEW_PASSWORD';
If the new password contains a single quote character, escape it by preceding it with a single quote character.
- Exit the
cqlshtool:)
The Cassandra password is now changed.
Reset directories to
/opt/apigee/apigee-postgresql/pgsql/bin.
- Set the PostgreSQL "postgres" user password:
- Login to PostgreSQL database using the command:
psql -h localhost -d apigee -U postgres
- When prompted, enter the existing "postgres" user password as "postgres".
- At the PostgreSQL command prompt, enter the following command to change the default password:
ALTER USER postgres WITH PASSWORD 'new_password';
On success, PostgreSQL responds with the following:
ALTER ROLE
- Exit PostgreSQL database using the following command:
:
ALTER USER apigee WITH PASSWORD 'new_password';
- Exit PostgreSQL database using the command:
\q
You can set the "postgres" and "apigee" users' passwords to the same value or different values.
- Set
APIGEE_HOME:
export APIGEE_HOME=/opt/apigee/edge-postgres-server
- Encrypt the new password:
sh /opt/apigee/edge-postgres-server/utils/scripts/utilities/passwordgen.sh new_password
This command returns an encrypted password. The encrypted password starts after the ":" character and does not include the ":"; for example, the encrypted password for "apigee1234"file to set the following properties. If this file does not exist, create it.
- Make sure the file is owned by "apigee" user:
chown apigee:apigee management-server.properties
- Update all Postgres Server and Qpid Server nodes with the new encrypted password.
- On the Postgres Server or Qpid Server node, change to the following directory:
/opt/apigee/customer/application
- Open the following files for edit:
postgres-server.properties
qpid-server.properties
If these files do not exist, create them.
- Add the following properties to the files:
- Update the SSO component (if SSO is enabled):
Connect to or log in to the node on which the
apigee-ssocomponent is running. This is also referred to as the SSO server.
In AIO or 3-node installations, this node is the same node as the Management Server.
If you have multiple nodes running the
apigee-ssocomponent, you must perform these steps on each node.
- Open the following file for edit:
/opt/apigee/customer/application/sso.properties
If the file does not exist, create it.
- Add the following line to the file:
conf_uaa_database_password=new_password_in_plain_text
For example:
conf_uaa_database_password=apigee1234
- Execute the following command to apply your configuration changes to the
apigee-ssocomponent:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso configure
- Repeat these steps for each SSO server.
- Restart the following components in the following
- SSO server:
/opt/apigee/apigee-service/bin/apigee-service apigee-sso restart | https://docs.apigee.com/private-cloud/v4.19.06/resetting-passwords | 2019-10-13T23:26:51 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.apigee.com |
Welcome to the Kentico 7 Installer documentation!
What is the Kentico Installer?
The Kentico Installer (or in short just Installer) is our new tool for installing Kentico. It offers an easy installation for inexperienced users in just a few clicks and provides developers with a clear and compact installation interface.
The previous installation procedure
This tool replaces the previous four-step installation procedure (Setup, Web installer, Database setup and New site wizard) and also the Silent Install tool. If you are looking for the documentation of the installation procedure before the Kentico Installer (Kentico CMS 7.0 version before 5/2013), see the Kentico CMS 7 Developers Guide.
Where to start?
We strongly recommend to uninstall old Kentico program files before installing this new version:
- In Windows Start menu, type Add or remove programs and press Enter.
- Select Kentico CMS 7.0 in the Programs and Features list.
- Click Uninstall.
You can begin by installing Kentico for evaluation purposes using the Quick installation button.
Or you can check if your development server meets the recommended configuration and then:
- Install Kentico on your local computer with your own preferred settings using the Custom installation button. If you come across any difficulties with selecting the right options, see Installing Kentico (Questions and Answers) for recommendations.
-- or --
- Develop your website on a remote server or set up a production environment. You will find detailed procedures for this type of installation in the Deploying Kentico to a live server section.
How to get Kentico Installer if I do not have it yet?
You can read about the release of the new Installer in this blog post .
Where to get more information?
To learn how to develop websites, see the Tutorial.
To see complete Kentico CMS 7 documentation, open the Kentico CMS 7 Developers Guide.
You can also visit our portal for developers with blogs, forums, knowledge base and other documentation material at DevNet.
If you need advice on how to use Kentico CMS, feel free to write to [email protected]. The support team operates non-stop and will be happy to help you. | https://docs.kentico.com/display/K7I/Kentico+7+Installer+Home | 2019-10-13T23:48:52 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.kentico.com |
Be careful using this function with non-zero amqp.timeout (you may check at AMQPConnection::getTimeout), because it looks like timeout value says how long to wait for a new message from broker before die in way like
Fatal error: Uncaught exception 'AMQPConnectionException' with message 'Resource temporarily unavailable' in /path/to/your/file.php:12
Stack trace:
#0 /path/to/your/file.php(12): AMQPQueue->consume(Object(Closure), 128)
#1 {main}
thrown in /path/to/your/file.php on line 12
As for notes about blocking, system resources greediness and so and so, you can investigate how it works by looking in amqp_queue.c for read_message_from_channel C function declaration and PHP_METHOD(amqp_queue_class, consume) method declaration. For me it works perfectly without any uncommon resources usage or I/O performance degradation under the load of 10k 64b message per second with delivery time for less than 0.001 sec.
OS: FreeBSD *** 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Sat Mar ****** 2011 root@*****:**** amd64
PHP: PHP Version => 5.3.10, Suhosin Patch 0.9.10, Zend Engine v2.3.0
php AMQP extnsion:
amqp
Version => 1.0.9
Revision => $Revision: 327551 $
Compiled => Dec 2* 2012 @ *****
AMQP protocol version => 0-9-1
librabbitmq version => 0.2.0
Directive => Local Value => Master Value
amqp.auto_ack => 0 => 0
amqp.host => localhost => localhost
amqp.login => guest => guest
amqp.password => guest => guest
amqp.port => 5672 => 5672
amqp.prefetch_count => 3 => 3
amqp.timeout => 0 => 0
amqp.vhost => / => /
AMQP broker: RabbitMQ 3.0.1, Erlang R14B04
Definitely, such loop will block main thread, but due to single-thread PHP nature it's completely normal behavior. To exit this consumption loop your callback function or method (i prefer to use closures, btw) should return FALSE.
The benefit of this function is that you don't have manually iteration for all messages, and what is more important, if there is no unprocessed messages in queue it will wait for such for you.
So you have just to run you consumer (one or many) and optionally time to time check whether they still alive just for reason if you are not sure about callbacks or memory-limit-critical stuff | http://docs.php.net/manual/fr/amqpqueue.consume.php | 2014-03-07T15:00:03 | CC-MAIN-2014-10 | 1393999645327 | [] | docs.php.net |
Packages generated via a scaffold make use of a system created by Ian Bicking named PasteDeploy. PasteDeploy defines a way to declare WSGI application configuration in an .ini file.
Pyramid uses this configuration file format, all Pyramid scaffolds render any .ini file. Such a section should consists of global parameters that are shared by all the applications, servers and middleware defined within the configuration file. The values in a [DEFAULT] section will be passed to your application’s main function as global_config (see the reference to the main function in __init__.py). | http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/paste.html | 2014-03-07T14:59:52 | CC-MAIN-2014-10 | 1393999645327 | [] | docs.pylonsproject.org |
Updating the support groups for change templates
You can give other support groups the permission to use this change template. These groups cannot modify or delete the template. To update support groups, you first must save the change template. When you finish defining the change template, open the Change Template form in search mode and locate your change template. You then can configure the relationships settings.
To update the support groups
- From the Application Administration Console, click the Custom Configuration tab.
- From the Application Settings list, choose Change Management > Template > Template, then click Open. The Change Template form appears.
- In the Authoring For Groups tab, click Update. The Add Support Group dialog box appears.
- Add or delete any support groups that you have permissions for.
- Close the dialog box. The information is updated in the Groups that use this template table.
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/itsm91/updating-the-support-groups-for-change-templates-608491279.html | 2020-10-20T06:24:18 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.bmc.com |
A basic entity record is the top level of any documents containing the results of entity queries. If you enable the option to collect additional detail and/or meta data, these additional document fragments are appended as children to the top-level record. If the basic record contains a reference to an organization, the organization data is also appended as a child to the top-level record. You can use XPATH expressions to map data at any level of the document.
For more information, see XML feed configuration.
The feed configuration for vCloud collection extends the XML feed configuration. There are additional properties to support an optional entity filter, as well as enabling or disabling the collection of entity detail data and/or user-entered meta data fields.
For more information on filters, see Query Filter Expressions in the vCloud API Programming Guide.(numberOfVMs!=0;isPrimary==true)
All records produced reflect the job’s
selectDate parameter as the usage end date. After aggregation, the usage is assumed to be for the entire day (24 hours). The minimum interval supported for calculation of charges is one day.
Because the majority of supported attributes are distinct across entity types, there are no common identifiers for all entity types. It is recommended to write the XML output to a file to use as a reference when mapping all attributes for an entity type.
For more information on XML output files, see Application notes.
The sample collection job collects the virtual machine entity type and captures the
numberOfCpus and
memoryMB attributes as resources. These attributes are available in the basic virtual machine record.
(c) Copyright 2017-2020 Hewlett Packard Enterprise Development LP | https://docs.consumption.support.hpe.com/Older_versions/Cloud_Cruiser_4/03Collecting%2C_transforming%2C_and_publishing/VMware_vCloud/Data_mapping | 2020-10-20T06:46:21 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.consumption.support.hpe.com |
Now you run the PowerBuilder application to see how it works, and you can test the retrieve, insert, delete, and update capabilities of the DataWindow.
Click the Run button (
) in the PowerBar1.
The database connection is established, and the MDI frame for the application appears.
Select Tutorial > Open from the menu bar.
The w_cusdata sheet window appears.
The DataWindow control shows all of the columns retrieved from the Customer table. And you are ready to retrieve, insert, delete, and/or update data for the DataWindow control.
Click the Retrieve button.
This retrieves data from the Customer table.
Click the Insert button.
This clears (resets) the DataWindow, allowing you to add information for a new row that you will insert into the data source. The cursor is in the Customer ID box in the DataWindow control.
Add a new customer row by entering information in the boxes in the DataWindow.
Typing information for a new customer
The Customer ID number must be unique. To avoid duplicate numbers, use a four-digit number for your new database entry, or scroll down the list of three-digit customer numbers in the DataWindow and select an ID number that does not appear in the list.
Enter values for the remaining fields.
The phone number and zip code use edit masks to display the information you type. You must enter numbers only for these data fields. To specify the state in which the customer resides, you must click the arrow next to the state column and select an entry from the drop-down list box.
Click the Delete button.
The customer is deleted from the DataWindow immediately but is not deleted from the database unless you select the Update option. In this particular situation, the Update operation may fail, because rows in other tables in the database may refer to the row that you are trying to delete.
You should be able to delete any row that you have added to the database.
Click the Update button.
This sends the new customer data to the database and displays a confirmation message, as coded in the script for the ue_update event.
Click OK in the message box.
Click the Close button on the top right corner of the window to exit the application.
The application terminates and you return to the Menu painter.
Close the Menu painter. | https://docs.appeon.com/2015/getting_started/ch03s06s03.html | 2020-10-20T05:51:55 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.appeon.com |
[feature explanation] What does the Defer Processing toggle do?
When we create automation rules we can set to have them defer their processing to our background cronjob worker, which processes every two minutes, or we can disable defer processing to have the job process immediately.
Some jobs we will want to disable defer processing. For example, if our automation rule has an action to create a WordPress user account from a generated lead and automatically sign them into WordPress, we will want to disable defer processing so the cookies can be set into the browser immediately.
Also if for some reason we are targeting trigger actions that occur during a backend AJAX call, we can opt to disable defer processing in order to lighten the load of our backend cron worker. If we are running rules that occur every page visit and we have a high amount of traffic then we will not want to put that processing load onto our background cron worker. We will want to disable defer processing and process our rules immediately as our trigger fires.
On the other hand there might be a scenario where our trigger fires during a form submission. Deferring the processing of the rule to our cronworker can same precious resources, allowing the page transition to occur faster.
The decision to enable or disable defer processing will be a technical one. For most cases you will be okay leaving this toggle set to enabled. | https://docs.inboundnow.com/guide/what-does-the-defer-processing-toggle-do/ | 2020-10-20T05:42:16 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.inboundnow.com |
Developer’s Guide (wazo-confd)¶
wazo-confd resources are organised through a plugin mechanism. There are 2 main plugin categories:
-or
PUT) and another for dissociating (
DELETE)
The following diagram outlines the most important parts of a plugin:
Plugin architecture of wazo-confd
- Resource
Class that receives and handles HTTP requests. Resources use flask-restful for handling requests.
There are 2 kinds of resources: ListResource for root URLs and ItemResource for URLs that have an ID. ListResource will handle creating a resource (, and. | https://wazo.readthedocs.io/en/wazo-20.01/system/wazo-confd/developer.html | 2020-10-20T06:08:10 | CC-MAIN-2020-45 | 1603107869933.16 | [array(['../../_images/wazo-confd-plugin-architecture.png',
'../../_images/wazo-confd-plugin-architecture.png'], dtype=object)] | wazo.readthedocs.io |
Changelog for package rosbridge_library
0.11.5 (2020-04-08)
Add script for dockerized development shell (
#479
) * Add script for dockerized development shell * Fix queue dropping test
Subscriber concurrency review (
#478
) * Lock access to SubscriberManager public methods Prevent subscribe during unsubscribe critical section. * Unsubscribing an unsubscribed topic is an error This branch must not be ignored. * Cleanup some redundant syntax in subscribers impl
Fix queue blocking (
#464
) * Unblock QueueMessageHandler.handle_message The thread was holding the lock while pushing to a potentially blocking function. Rewrite the logic and use a deque while we're at it. * Add test for subscription queue behavior Guarantee that the queue drops messages when blocked.
Python 3 updates/fixes (
#460
) * rosbridge_library, rosbridge_server: Update package format Add Python3 conditional dependencies where applicable. * rosbridge_library: Fix pngcompression for Python 3 * rosapi: Use catkin_install_python for scripts
Fixing wrong header/stamp in published ROS-messsages (
#472
) When publishing a message to ROS (i.e. incoming from rosbridge_server's perspective), timestamps in the Header attributes all point to the same Time object iff the message contains multiple Header attributes (typically the case if a ROS message contains other ROS messages, e.g. ...Array-types) and rosparam use_sim_time is true.
Contributors: Alexey Rogachevskiy, Matt Vollrath, danielmaier
0.11.4 (2020-02-20)
Concurrency review (
#458
) * Safer locking in PublisherConsistencyListener * Safer locking in ros_loader * Print QueueMessageHandler exceptions to stderr * Register before resuming outgoing valve * Don't pause a finished socket valve
Add cbor-raw compression (
#452
) The CBOR compression is already a huge win over JSON or PNG encoding, but it’s still suboptimal in some situations. This PR adds support for getting messages in their raw binary (ROS-serialized) format. This has benefits in the following cases: - Your application already knows how to parse messages in bag files (e.g. using [rosbag.js](
), which means that now you can use consistent code paths for both bags and live messages. - You want to parse messages as late as possible, or in parallel, e.g. only in the thread or WebWorker that cares about the message. Delaying the parsing of the message means that moving or copying the message to the thread is cheaper when its in binary form, since no serialization between threads is necessary. - You only care about part of the message, and don't need to parse the rest of it. - You really care about performance; no conversion between the ROS binary format and CBOR is done in the rosbridge_sever.
Fix typos in rosbridge_library's description (
#450
)
Python 3 fix for
dict::values
(
#446
) Under Python 3, values() returns a view-like object, and because that object is used outside the mutex, we were getting
RuntimeError: dictionary changed size during iteration
under some circumstances. This creates a copy of the values, restoring the Python 2 behaviour and fixing the problem.
Contributors: Jan Paul Posma, Matt Vollrath, Mike Purvis, miller-alex
0.11.3 (2019-08-07)
0.11.2 (2019-07-08)
0.11.1 (2019-05-08)
fixed logwarn msg formatting in publishers (
#398
)
Contributors: Gautham P Das
0.11.0 (2019-03-29)
BSON can send Nan and Inf (
#391
)
Contributors: akira_you
0.10.2 (2019-03-04)
Fix typo (
#379
)
Contributors: David Weis
0.10.1 (2018-12-16)
Inline cbor library (
#377
) Prefer system version with C speedups, but include pure Python implementation.
Contributors: Matt Vollrath
0.10.0 (2018-12-14)
CBOR encoding (
#364
) * Add CBOR encoding * Fix value extraction performance regression Extract message values once per message. * Fix typed array tags Was using big-endian tags and encoding little-endian. Always use little-endian for now since Intel is prevalent for desktop. Add some comments to this effect. * Update CBOR protocol documentation More information about draft typed arrays and when to use CBOR. * Fix 64-bit integer CBOR packing Use an actual 64-bit format.
use package format 2, remove unnecessary dependencies (
#348
)
removing has_key for python3, keeping backwards compatibility (
#337
) * removing has_key for python3, keeping backwards compatibility * py3 change for itervalues, keeping py2 compatibility
Contributors: Andreas Klintberg, Dirk Thomas, Matt Vollrath
0.9.0 (2018-04-09)
Fix typo in function call
Add missing argument to InvalidMessageException (
#323
) Add missing argument to InvalidMessageException constructor
Make unregister_timeout configurable (
#322
) Pull request
#247
introduces a 10 second delay to mitigate issue
#138
. This change makes this delay configurable by passing an argument either on the command line or when including a launch file. Usage example:
`xml <launch> <include
file="$(find
rosbridge_server)/launch/rosbridge_websocket.launch">
<arg
name="unregister_timeout"
value="5.0"/>
</include> </launch> `
Closes
#320
message_conversion: create stand-alone object inst (
#319
) Catching the ROSInitException allows to create object instances without an initialized ROS state
Fixes
#313
by fixing has_binary in protocol.py (
#315
) * Fixes
#313
by fixing has_binary in protocol.py Checks for lists that have binary content as well as dicts * Minor refactoring for protocol.py
fix fragment bug (
#316
) * fix bug that lost data while sending large packets * fixed travis ci failed by @T045T * fixed travis ci failed by @T045T * travis ci failed * fix rosbridge_library/test/experimental/fragmentation+srv+tcp test bug * sync .travis.yaml * fix the service_response message bug * fix the fragment paring error * fix indentation of "service" line
add graceful_shutdown() method to advertise_service capability This gives the service a bit of time to cancel any in-flight service requests (which should fix
#265
). This is important because we busy-wait for a rosbridge response for service calls and those threads do not get stopped otherwise. Also, rospy service clients do not currently support timeouts, so any clients would be stuck too. A new test case in test_service_capabilities.py verifies the fix works
Add rostest for service capabilities and fix bugs also fixed some typos
Fix Travis config (
#311
) * fix Travis config dist: kinetic is currently unsupported * fix rostests for some reason, rostest seems to hide the rosout node - changed tests to use other services
Contributors: Anwar, Johannes Rothe, Jørgen Borgesen, Nils Berg, Phil, WH-0501, elgarlepp
0.8.6 (2017-12-08)
Import StringIO from StringIO if python2 and from io if python3 fixes
#306
(
#307
)
Contributors: Jihoon Lee
0.8.5 (2017-11-23)
Raise if inappropriate bson module is installed (Appease
#198
) (
#270
) * Raise Exception if inappropriate bson module is installed (Related to
#198
)
Add Python3 compatibility (
#300
) * First pass at Python 3 compatibility * message_conversion: Only call encode on a Python2 str or bytes type * protocol.py: Changes for dict in Python3. Compatible with Python 2 too. * More Python 3 fixes, all tests pass * Move definition of string_types to rosbridge_library.util
Contributors: Junya Hayashi, Kartik Mohta
0.8.4 (2017-10-16)
0.8.3 (2017-09-11)
Type conversion convention correction, correcting issue
#240
Contributors: Alexis Paques
0.8.2 (2017-09-11)
0.8.1 (2017-08-30)
remove ujson from dependency to build in trusty (
#290
)
Contributors: Jihoon Lee
0.8.0 (2017-08-30)
Cleaning up travis configuration (
#283
) configure travis to use industial ci configuration. Now it uses xenial and kinetic
Merge pull request
#272
from ablakey/patch-1 Prevent a KeyError when bson_only_mode is unset.
Update protocol.py Prevent a KeyError when bson_only_mode is unset.
Merge pull request
#257
from Sanic/bson-only-mode Implemented a bson_only_mode flag for the TCP version of rosbridge
Merge pull request
#247
from v-lopez/develop Delay unregister to mitigate
#138
Change class constant to module constant
Reduce timeout for tests Tests will sleep for 10% extra of the timeout to prevent some situations were the test sleep ended right before the unregister timer fired
Fix test advertise errors after delayed unregister changes
Fix missing tests due to delayed unregistration
Move UNREGISTER_TIMEOUT to member class so it's accessible from outside
minor change in variable usage
Implemented a bson_only_mode flag for the TCP version of rosbridge; This allows you to switch to a full-duplex transmission of BSON messages and therefore eliminates the need for a base64 encoding of binary data; Use the new mode by starting:'roslaunch rosbridge_server rosbridge_tcp.launch bson_only_mode:=True' or passing '--bson_only_mode' to the rosbridge_tcp.py script
Delay unregister to mitigate !138
Contributors: Andrew Blakey, Jihoon Lee, Nils Berg, Patrick Mania, Victor Lopez
0.7.17 (2017-01-25)
adjust log level for security globs Normal operation (i.e. no globs or successful verification of requests) is now silent, with illegal requests producing a warning.
add missing import
correct default values for security globs also accept empty list as the default "do not check globs" value in addition to None. Finally, append rosapi service glob after processing command line input so it's not overwritten
Added services_glob to CallServices, added globs to rosbridge_tcp and rosbridge_udp, and other miscellanous fixes.
As per the suggestions of @T045T, fixed several typos, improved logging, and made some style fixes.
Added new parameters for topic and service security. Added 3 new parameters to rosapi and rosbridge_server which filter the topics, services, and parameters broadcast by the server to match an array of glob strings.
Contributors: Eric, Nils Berg
0.7.16 (2016-08-15)
Fixed deprecated code in pillow
Contributors: vladrotea
0.7.15 (2016-04-25)
changelog updated
Contributors: Russell Toris
0.7.14 (2016-02-11)
Another fix for code
Replaced += with ''.join() for python code
Default Protocol delay_between_messages = 0 This prevents performance problems when multiple clients are subscribing to high frequency topics. Fixes
#203
Contributors: Matt Vollrath, kiloreux.7.5 (2014-12-26)
0.7.4 (2014-12-16)
0.7.3 (2014-12-15)
0.7.2 (2014-12-15)
0.7.1
update changelog
Contributors: Jihoon Lee
0.7.1 (2014-12-09) | http://docs.ros.org/en/noetic/changelogs/rosbridge_library/changelog.html | 2020-10-20T05:55:02 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.ros.org |
Troubleshooting Issues with Amazon Sumerian Scenes
This topic lists common errors and issues that you might encounter when using the Sumerian editor and player. If you find an issue that is not listed here, you can use the Feedback button on this page to report it.
Issue: (Chrome) Can't enter virtual reality mode.
If using Oculus Rift or Oculus Rift S, you may need to set the following flags to use virtual reality mode in Chrome.
Oculus hardware support –
#oculus-vrto
Enabled
XR device sandboxing –
#xr-sandboxto
Disabled
If using OpenVR hardware such as HTC Vive or HTC Vive Pro, you may need to set the following flags to use virtual reality mode in Chrome.
OpenVR hardware support –
#openvrto
Enabled
XR device sandboxing –
#xr-sandboxto
Disabled
To access Chrome flags, type chrome://flags into your search bar.
Issue: Browser uses the wrong GPU for hardware acceleration.
If you have multiple graphics cards, you may need to configure your system to use the right GPU for browser applications. For example, the NVIDIA control panel has an option named target GPU that you can set for each application. | https://docs.aws.amazon.com/sumerian/latest/userguide/sumerian-troubleshooting.html | 2020-10-20T06:07:40 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.aws.amazon.com |
Filter: Everything Submit Search Colour Banding Node. In this example, a Colour Banding node has been connected to a fire created with particle effects. Refer to the following example to connect this effect: Properties The Colour Banding node requires you to configure it with four colours, and to assign each of these colours with a threshold. Colours with a high threshold will replace brighter areas of the input image, and colours with a low threshold will replace its darker areas. Hence, the order in which you configure these colours does not matter. TIP If you only want to use 1, 2 or 3 colours, you can exclude a colour by setting its threshold to 100. Parameter Description Name Allows you to enter a name for the node. Threshold The minimum value required in the source pixel for this colour to replace the source image colour. The threshold can be set to a value between 0 and 100. A high setting will make the colour replace brighter areas of the input image, whereas a low setting will make it replace darker areas. Colours are applied on the output image in order of their threshold. The colour with the lowest threshold is applied first, then the one with the second lowest threshold is applied over it, followed by the one with the second highest threshold and finally the one with the highest threshold. Setting this parameter to 100 will remove the colour from the effect, allowing you to use less than four colours if preferred. Colour The colour with which to fill the zone that exceeds the threshold: Red: The amount of red in the colour. Green: The amount of green in the colour. Blue: The amount of blue in the colour. Alpha: The amount of alpha in the colour. Colour Swatch Opens the Colour Picker dialog, in which you can visually select a colour for the zone. Blur The amount of blurriness to apply to the zone. Invert Matte Inverts the matte used to generate the effect. By default, the effect is applied to the opaque areas of the matte drawing. When this option is enabled, the effect is applied to the transparent areas of the matte instead. | https://docs.toonboom.com/help/harmony-16/premium/reference/node/filter/colour-banding-node.html | 2020-10-20T06:07:07 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.toonboom.com |
Crown Bay CRB¶
U-Boot support of Intel Crown Bay board relies on a binary blob called Firmware Support Package (FSP) to perform all the necessary initialization steps as documented in the BIOS Writer Guide, including initialization of the CPU, memory controller, chipset and certain bus interfaces.
Download the Intel FSP for Atom E6xx series and Platform Controller Hub EG20T, install it on your host and locate the FSP binary blob. Note this platform also requires a Chipset Micro Code (CMC) state machine binary to be present in the SPI flash where u-boot.rom resides, and this CMC binary blob can be found in this FSP package too.
- ./FSP/QUEENSBAY_FSP_GOLD_001_20-DECEMBER-2013.fd
- ./Microcode/C0_22211.BIN
Rename the first one to fsp.bin and second one to cmc.bin and put them in the board directory.
Note the FSP release version 001 has a bug which could cause random endless loop during the FspInit call. This bug was published by Intel although Intel did not describe any details. We need manually apply the patch to the FSP binary using any hex editor (eg: bvi). Go to the offset 0x1fcd8 of the FSP binary, change the following five bytes values from orginally E8 42 FF FF FF to B8 00 80 0B 00.
As for the video ROM, you need manually extract it from the Intel provided BIOS for Crown Bay here, using the AMI MMTool. Check PCI option ROM ID 8086:4108, extract and save it as vga.bin in the board directory.
Now you can build U-Boot and obtain u-boot.rom:
$ make crownbay_defconfig $ make all | https://u-boot.readthedocs.io/en/latest/board/intel/crownbay.html | 2020-10-20T06:19:23 | CC-MAIN-2020-45 | 1603107869933.16 | [] | u-boot.readthedocs.io |
Conference Itinerary
Conference Itinerary
The 2017-2018 Community Health Leadership Conference hosted by the Mitchell Wolfson Sr. Department of Community Service (DOCS) will be hosted at the University of Miami Miller School of Medicine from Friday, December 7th to Saturday, December 8th, 2018. Like last year, the Conference coincides with the Liberty City Health Fair, held in the Liberty City neighborhood of Miami. We hope to allow attendees a comprehensive exposure to DOCS and a holistic Conference experience (Keynote Speakers, Poster/Oral Presentations, Workshops, u0026amp; Awards Gala). Given the opportunity to gather with other student leaders from throughout the country, we truly hope that you will return home to your communities sharing goals of improving student involvement, health outreach and community service. For more information on what you’ll be able to attend, please see the tentative agenda below. We hope to see you here in Miami on December 7th!”
Tentative Agenda
Friday
- Breakfast
- Key Note Speaker: Hansel Tookes, MD
- Key Note Speaker: Erin Kobetz, PhD, MPH
- Oral Presentations
- Lunch
- Poster Presentations
- Awards Gala
Saturday
- Breakfast
- Liberty City Health Fair
- Lunch
- Small Group Workshops | https://umdocs.mededu.miami.edu/docs-student-leadership-retreat/retreat-itinerary/ | 2020-10-20T06:33:27 | CC-MAIN-2020-45 | 1603107869933.16 | [] | umdocs.mededu.miami.edu |
SimplEOS is a wallet created by the highly technical team at EOSRIO. It is a wallet created solely for the EOS ecosystem and fully integrated with all features available in the EOS.IO software.
SimplEOS is available for download on MacOS, Windows, or Linux.
Choose the account you want to connect - this will allow SimplEOS to interact with Bloks.
You should see your account username in the top right corner if you have successfully logged in. | https://docs.bloks.io/login/desktop-wallets/simpleos | 2020-10-20T05:43:32 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.bloks.io |
Creating a qualitative cohort analysis
Do.
What are qualitative cohorts, anyway? include:
- The set of all users that were acquired from an ad campaign
- The set of all users whose first purchase included a coupon (or didn’t)
- The set of all users who are of a certain age
How does that differ from the normal cohort builder?.
What information should I send to support to set up my analysis?
Creating a qualitative cohort report in the Report Builder involves our analyst team creating some advanced calculated columns on the necessary tables.
To build these, submit a support ticket (and reference this article!). Here’s what we’ll need to know:
The metric you want to perform your cohort analysis with and what table it uses (example: Revenue, built on the orders table).
The user segments you want to define and where that information lives in your database (example: different values of User’s referral source, native to the users table and relocated down to the orders).
The cohort date you want your analysis to use (example: the User’s first order date timestamp). This example would allow us to look at each segment and ask “How does a user’s revenue grow in the months following their first order date?”.
The time interval that you want to see the analysis over (example: weeks, months, or quarters after the User’s first order date).
Once our analyst team responds to the above, you will have a couple of new advanced calculated columns to build out your report! Then you’ll be able to follow the below directions to do this.
Creating the qualitative cohort analysis:
Set the time interval to None. This is because we’ll eventually group by the time interval as a dimension instead of using the usual time options.
Set the time range to the window of time you want the report to cover.:
Group by the dimension with the group by option
Select all values of the dimension in which you are interested
With the Show top/bottom option, select the “top” X months that you’re interested in, and sort by the Months between this order and customer’s first order date dimension. | https://docs.magento.com/mbi/data-analyst/dev-reports/create-qual-cohort-analysis.html | 2020-10-20T06:31:40 | CC-MAIN-2020-45 | 1603107869933.16 | [array(['/mbi/images/qualcohort1.gif', None], dtype=object)
array(['/mbi/images/qualcohort2.gif', None], dtype=object)] | docs.magento.com |
Adding a fax reception DID¶
If you want to receive faxes from Wazo, you need to add incoming calls definition with the
Application destination and the
This applies even if you want the action to be different from sending an email, like putting it on a FTP server. You’ll still need to enter an email address in these cases even though it won’t be used. wazo wazo wazo wazo-agid restart
Using the advanced features¶
The following features are only available via the
/etc/xivo/asterisk/xivo_fax.conf
configuration file. wazo/wazo:
wazo-provd-cli -c 'devices.using_plugin("xivo-cisco-spa3102-5.1.10").reconfigure()'
Then reboot the devices:
wazo-provd-cli -c 'devices.using_plugin("xivo-cisco-spa3102-5.1.10").synchronize()'
Most of this template can be copy/pasted for a SPA2102 or SPA8000. | https://wazo.readthedocs.io/en/wazo-20.01/administration/fax/fax.html | 2020-10-20T06:46:14 | CC-MAIN-2020-45 | 1603107869933.16 | [] | wazo.readthedocs.io |
Security Considerations¶
Deprecation notice
This page is now deprecated and serves as an archive only. For up-to-date information, please have a look at our security policy and published security advisories.
As a deployment tool, Argo CD needs to have production access which makes security a very important topic. The Argoproj team takes security very seriously and continuously working on improving it. Learn more about security related features in Security section.
Overview of past and current issues¶
The following table gives a general overview about past and present issues known to the Argo CD project. See in the Known Issues section if there is a work-around available if you cannot update or if there is no fix yet.
Known Issues And Workarounds¶
A recent security audit (thanks a lot to Matt Hamilton of ) has revealed several limitations in Argo CD which could compromise security. Most of the issues are related to the built-in user management implementation.
CVE-2020-1747, CVE-2020-14343 - PyYAML library susceptible to arbitrary code execution¶
Summary:
Details:
PyYAML library susceptible to arbitrary code execution when it processes untrusted YAML files.
We do not believe Argo CD is affected by this vulnerability, because the impact of CVE-2020-1747 and CVE-2020-14343 is limited to usage of awscli.
The
awscli only used for AWS IAM authentication, and the endpoint is the AWS API.
CVE-2020-5260 - Possible Git credential leak¶
Summary:
Details:
Argo CD relies on Git for many of its operations. The Git project released a
security advisory
on 2020-04-14, describing a serious vulnerability in Git which can lead to credential
leakage through credential helpers by feeding malicious URLs to the
git clone
operation.
We do not believe Argo CD is affected by this vulnerability, because ArgoCD does neither
make use of Git credential helpers nor does it use
git clone for repository operations.
However, we do not know whether our users might have configured Git credential helpers on
their own and chose to release new images which contain the bug fix for Git.
Mitigation and/or workaround:
We strongly recommend to upgrade your ArgoCD installation to either
v1.4.3 (if on v1.4
branch) or
v1.5.2 (if on v1.5 branch)
When you are running
v1.4.x, you can upgrade to
v1.4.3 by simply changing the image
tags for
argocd-server,
argocd-repo-server and
argocd-controller to
v1.4.3.
The
v1.4.3 release does not contain additional functional bug fixes.
Likewise, hen you are running
v1.5.x, you can upgrade to
v1.5.2 by simply changing
the image tags for
argocd-server,
argocd-repo-server and
argocd-controller to
v1.5.2.
The
v1.5.2 release does not contain additional functional bug fixes.
CVE-2020-11576 - User Enumeration¶
Summary:
Details:
Argo version v1.5.0 was vulnerable to a user-enumeration vulnerability which allowed attackers to determine the usernames of valid (non-SSO) accounts within Argo.
Mitigation and/or workaround:
Upgrade to ArgoCD v1.5.1 or higher. As a workaround, disable local users and use only SSO authentication.
CVE-2020-8828 - Insecure default administrative password¶
Summary:
Details:
Argo CD uses the
argocd-server pod name (ex:
argocd-server-55594fbdb9-ptsf5) as the default admin password.
Kubernetes users able to list pods in the argo namespace are able to retrieve the default password.
Additionally, In most installations, the Pod name contains a random "trail" of characters. These characters are generated using a time-seeded PRNG and not a CSPRNG. An attacker could use this information in an attempt to deduce the state of the internal PRNG, aiding bruteforce attacks.
Mitigation and/or workaround:
The recommended mitigation as described in the user documentation is to use SSO integration. The default admin password should only be used for initial configuration and then disabled or at least changed to a more secure password.
CVE-2020-8827 - Insufficient anti-automation/anti-brute force¶
Summary:
Details:
ArgoCD before v1.5.3 does not enforce rate-limiting or other anti-automation mechanisms which would mitigate admin password brute force.
Mitigation and/or workaround:
Rate-limiting and anti-automation mechanisms for local user accounts have been introduced with ArgoCD v1.5.3.
As a workaround for mitigation if you cannot upgrade ArgoCD to v1.5.3 yet, we recommend to disable local users and use SSO instead.
CVE-2020-8826 - Session-fixation¶
Summary:
Details:
The authentication tokens generated for built-in users have no expiry.
These issues might be acceptable in the controlled isolated environment but not acceptable if Argo CD user interface is exposed to the Internet.
Mitigation and/or workaround:
The recommended mitigation is to change the password periodically to invalidate the authentication tokens.
CVE-2018-21034 - Sensitive Information Disclosure¶
Summary:
Details:
In Argo versions prior to v1.5.0-rc1, it was possible for authenticated Argo users to submit API calls to retrieve secrets and other manifests which were stored within git.
Mitigation and/or workaround:
Upgrade to ArgoCD v1.5.0 or higher. No workaround available
Reporting Vulnerabilities¶
Please have a look at our security policy for more details on how to report security vulnerabilities for Argo CD. | https://argo-cd.readthedocs.io/en/latest/security_considerations/ | 2022-09-25T02:40:29 | CC-MAIN-2022-40 | 1664030334332.96 | [] | argo-cd.readthedocs.io |
Git Generator¶
The Git generator contains two subtypes: the Git directory generator, and Git file generator.
Warning
Git generators are often used to make it easier for (non-admin) developers to create Applications.
If the
project field in your ApplicationSet is templated, developers may be able to create Applications under Projects with excessive permissions.
For ApplicationSets with a templated
project field, the source of truth must be controlled by admins
- in the case of git generators, PRs must require admin approval.
Git Generator: Directories¶
The Git directory generator, one of two subtypes of the Git generator, generates parameters using the directory structure of a specified Git repository.
Suppose you have a Git repository with the following directory structure:
├── argo-workflows │ ├── kustomization.yaml │ └── namespace-install.yaml └── prometheus-operator ├── Chart.yaml ├── README.md ├── requirements.yaml └── values.yaml
This repository contains two directories, one for each of the workloads to deploy:
- an Argo Workflow controller kustomization YAML file
- a Prometheus Operator Helm chart
We can deploy both workloads, using this example:
apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-addons namespace: argocd spec: generators: - git: repoURL: revision: HEAD directories: - path: applicationset/examples/git-generator-directory/cluster-addons/* template: metadata: name: '{{path[0]}}' spec: project: "my-project" source: repoURL: targetRevision: HEAD path: '{{path}}' destination: server: namespace: '{{path.basename}}'
(The full example can be found here.)
The generator parameters are:
{{path}}: The directory paths within the Git repository that match the
pathwildcard.
{{path[n]}}: The directory paths within the Git repository that match the
pathwildcard, split into array elements (
n- array index)
{{path.basename}}: For any directory path within the Git repository that matches the
pathwildcard, the right-most path name is extracted (e.g.
/directory/directory2would produce
directory2).
{{path.basenameNormalized}}: This field is the same as
path.basenamewith unsupported characters replaced with
-(e.g. a
pathof
/directory/directory_2, and
path.basenameof
directory_2would produce
directory-2here).
Note: The right-most path name always becomes
{{path.basename}}. For example, for
- path: /one/two/three/four,
{{path.basename}} is
four.
Whenever a new Helm chart/Kustomize YAML/Application/plain subdirectory is added to the Git repository, the ApplicationSet controller will detect this change and automatically deploy the resulting manifests within new
Application resources.
As with other generators, clusters must already be defined within Argo CD, in order to generate Applications for them.
Exclude directories¶
The Git directory generator will automatically exclude directories that begin with
. (such as
.git).
The Git directory generator also supports an
exclude option in order to exclude directories in the repository from being scanned by the ApplicationSet controller:
apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-addons namespace: argocd spec: generators: - git: repoURL: revision: HEAD directories: - path: applicationset/examples/git-generator-directory/excludes/cluster-addons/* - path: applicationset/examples/git-generator-directory/excludes/cluster-addons/exclude-helm-guestbook exclude: true template: metadata: name: '{{path.basename}}' spec: project: "my-project" source: repoURL: targetRevision: HEAD path: '{{path}}' destination: server: namespace: '{{path.basename}}'
(The full example can be found here.)
This example excludes the
exclude-helm-guestbook directory from the list of directories scanned for this
ApplicationSet resource.
Exclude rules have higher priority than include rules
If a directory matches at least one
exclude pattern, it will be excluded. Or, said another way, exclude rules take precedence over include rules.
As a corollary, which directories are included/excluded is not affected by the order of
paths in the
directories field list (because, as above, exclude rules always take precedence over include rules).
For example, with these directories:
. └── d ├── e ├── f └── g
Say you want to include
/d/e, but exclude
/d/f and
/d/g. This will not work:
- path: /d/e exclude: false - path: /d/* exclude: true
Why? Because the exclude
/d/* exclude rule will take precedence over the
/d/e include rule. When the
/d/e path in the Git repository is processed by the ApplicationSet controller, the controller detects that at least one exclude rule is matched, and thus that directory should not be scanned.
You would instead need to do:
- path: /d/* - path: /d/f exclude: true - path: /d/g exclude: true
Or, a shorter way (using path.Match syntax) would be:
- path: /d/* - path: /d/[f|g] exclude: true
Root Of Git Repo¶
The Git directory generator can be configured to deploy from the root of the git repository by providing
'*' as the
path.
To exclude directories, you only need to put the name/path.Match of the directory you do not want to deploy.
apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: cluster-addons namespace: argocd spec: generators: - git: repoURL: revision: HEAD directories: - path: '*' - path: donotdeploy exclude: true template: metadata: name: '{{path.basename}}' spec: project: "my-project" source: repoURL: targetRevision: HEAD path: '{{path}}' destination: server: namespace: '{{path.basename}}'
Git Generator: Files¶
The Git file generator is the second subtype of the Git generator. The Git file generator generates parameters using the contents of JSON/YAML files found within a specified repository.
Suppose you have a Git repository with the following directory structure:
├── apps │ └── guestbook │ ├── guestbook-ui-deployment.yaml │ ├── guestbook-ui-svc.yaml │ └── kustomization.yaml ├── cluster-config │ └── engineering │ ├── dev │ │ └── config.json │ └── prod │ └── config.json └── git-generator-files.yaml
The directories are:
guestbookcontains the Kubernetes resources for a simple guestbook application
cluster-configcontains JSON/YAML files describing the individual engineering clusters: one for
devand one for
prod.
git-generator-files.yamlis the example
ApplicationSetresource that deploys
guestbookto the specified clusters.
The
config.json files contain information describing the cluster (along with extra sample data):
{ "aws_account": "123456", "asset_id": "11223344", "cluster": { "owner": "[email protected]", "name": "engineering-dev", "address": "" } }
Git commits containing changes to the
config.json files are automatically discovered by the Git generator, and the contents of those files are parsed and converted into template parameters. Here are the parameters generated for the above JSON:
aws_account: 123456 asset_id: 11223344 cluster.owner: [email protected] cluster.name: engineering-dev cluster.address:
And the generated parameters for all discovered
config.json files will be substituted into ApplicationSet template:
apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: guestbook namespace: argocd spec: generators: - git: repoURL: revision: HEAD files: - path: "applicationset/examples/git-generator-files-discovery/cluster-config/**/config.json" template: metadata: name: '{{cluster.name}}-guestbook' spec: project: default source: repoURL: targetRevision: HEAD path: "applicationset/examples/git-generator-files-discovery/apps/guestbook" destination: server: '{{cluster.address}}' namespace: guestbook
(The full example can be found here.)
Any
config.json files found under the
cluster-config directory will be parameterized based on the
path wildcard pattern specified. Within each file JSON fields are flattened into key/value pairs, with this ApplicationSet example using the
cluster.address as
cluster.name parameters in the template.
As with other generators, clusters must already be defined within Argo CD, in order to generate Applications for them.
In addition to the flattened key/value pairs from the configuration file, the following generator parameters are provided:
{{path}}: The path to the directory containing matching configuration file within the Git repository. Example:
/clusters/clusterA, if the config file was
/clusters/clusterA/config.json
{{path[n]}}: The path to the matching configuration file within the Git repository, split into array elements (
n- array index). Example:
path[0]: clusters,
path[1]: clusterA
{{path.basename}}: Basename of the path to the directory containing the configuration file (e.g.
clusterA, with the above example.)
{{path.basenameNormalized}}: This field is the same as
path.basenamewith unsupported characters replaced with
-(e.g. a
pathof
/directory/directory_2, and
path.basenameof
directory_2would produce
directory-2here).
{{path.filename}}: The matched filename. e.g.,
config.jsonin the above example.
{{path.filenameNormalized}}: The matched filename with unsupported characters replaced with
-.
Note: The right-most directory name always becomes
{{path.basename}}. For example, from
- path: /one/two/three/four/config.json,
{{path.basename}} will be
four.
The filename can always be accessed using
{{path.filename}}.
Webhook Configuration¶
When using a Git generator, ApplicationSet polls Git repositories every three minutes to detect changes. To eliminate this delay from polling, the ApplicationSet webhook server can be configured to receive webhook events. ApplicationSet supports Git webhook notifications from GitHub and GitLab. The following explains how to configure a Git webhook for GitHub, but the same process should be applicable to other providers.
Note
ApplicationSet exposes the webhook server as a service of type ClusterIP. An Ingress resource needs to be created to expose this service to the webhook source.
1. Create the webhook in the Git provider¶
In your Git provider, navigate to the settings page where webhooks can be configured. The payload
URL configured in the Git provider should use the
/api/webhook endpoint of your ApplicationSet instance
(e.g.). If you wish to use a shared secret, input an
arbitrary value in the secret. This value will be used when configuring the webhook in the next step.
Note
When creating the webhook in GitHub, the "Content type" needs to be set to "application/json". The default value "application/x-www-form-urlencoded" is not supported by the library used to handle the hooks
2. Configure ApplicationSet with the webhook secret (Optional)¶
Configuring a webhook shared secret is optional, since ApplicationSet will still refresh applications generated by Git generators, even with unauthenticated webhook events. This is safe to do since the contents of webhook payloads are considered untrusted, and will only result in a refresh of the application (a process which already occurs at three-minute intervals). If ApplicationSet is publicly accessible, then configuring a webhook secret is recommended to prevent a DDoS attack.
In the
argocd-secret kubernetes secret, include
After saving, please restart the ApplicationSet pod for the changes to take effect. | https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators-Git/ | 2022-09-25T01:55:40 | CC-MAIN-2022-40 | 1664030334332.96 | [array(['../../../assets/applicationset/webhook-config.png',
'Add Webhook Add Webhook'], dtype=object) ] | argo-cd.readthedocs.io |
Start-Citrixceipupload¶
Requests the Citrix Telemetry Service to upload ceip data to the Citrix Insight Services (CIS) or to copy the data to a folder (for manual upload to CIS).
Syntax¶
Start-CitrixCeipUpload -OutputPath <String> [<CommonParameters>]
Detailed Description¶
The Start-CitrixCeipUpload cmdlet request an upload of the collected anonymous data.
If the OutputPath parameter is specified, the upload is directed to the specified file. The data may then be uploaded manually using the CIS web site.
Related Commands¶
Parameters¶
Input Type¶
System.String¶
This cmdlet accepts a string as input that populates the OutputPath parameter. | https://developer-docs.citrix.com/projects/citrix-virtual-apps-desktops-sdk/en/1808/TelemetryModule/Start-CitrixCeipUpload/ | 2022-09-25T02:38:52 | CC-MAIN-2022-40 | 1664030334332.96 | [] | developer-docs.citrix.com |
Reboot or power-off Address Manager and managed DNS/DHCP Server appliances or VMs from the Main Session mode of the Administration Console.
Occasionally you may need to reboot or shut down the appliance or VM (for example, to reset the startup services).
To reboot:
- From Main Session mode, type reboot and press ENTER. The Administration Console executes the command and the server reboots. Once reboot is complete, you will be returned to the Login prompt.
To power-off:
- From Main Session mode, type poweroff and press ENTER. The Administration Console shuts down the server. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Rebooting-and-powering-off/9.2.0 | 2022-09-25T02:53:37 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.bluecatnetworks.com |
(MI_Factory_Base_01), the simple hierarchy looks like this.
/FactoryGame/MasterMaterials/MM_FactoryMaster Material
/FactoryGame/-Shared/Material/MI_Factory_Base_01Instance of MM_Factory
/FactoryGame/-Shared/Material/MI_Factory_Base_01_Emsiv_AOInstance of MI_Factory_Base_01, AO maps, and overlay density are created.
UV channel 0 is used for mapping the main surface, UV channel 1 is sometimes used for adding an AO map. Vertex colours are ignored.
How to use
Do you have a custom AO bake you wish to apply to your model?
No? Apply
MI_Factory_Base_01 directly as your base material and do not create any instances.
Yes? Create a material instance of
MI_Factory_Base_01_Emsiv_AO and apply your AO map to it and customize the AO settings to taste. If desired you may copy
/Utils/Materials/MI_MachineStarter into your mod folder and rename it, but you will need to set the custom AO map post Update 4.2 were provide by CSS (#PraiseBen) and are implemented in the material found in your Unreal starter project.
When marked as paintable the main Albedo texture is multiplied into the paint colours gated by the texture’s alpha. The alpha value is packed to allow for selection between several paint colours, these thresholds can be adjusted using the primary and secondary bias values, below are the defaults:
0.50 < x ⇐ 1.0 : Albedo
0.25 < x < 0.50 : Primary paint color
0.0 ⇐ x < 0.25 : Secondary paint color
The 'ReflectionMap' is a Linear Texture (sRGB off) which is channel packed surface property texture (MREO) with the following properties:
R:Metalness
G:Roughness
B:Emission Mask
A:Ambient Occlusion. Given the complexity of its use there are some examples.
Located in the
/Utils/Materials/ folder you will find several start materials you can copy into your mod and reconfigure at will:
MI_BakedMachineStarter:Common setup for factory machines like workbenches and power poles.
MI_BeltItemStarter:Root material as needed for Conveyor Rendering.
MI_BeltItemStarter-LOD1:Conveyor material for close inspection.
MI_BeltItemStarter-LOD2:Conveyor material for distant rendering.. | https://docs.ficsit.app/satisfactory-modding/v3.1.1/Development/Modeling/MainMaterials.html | 2022-09-25T02:50:59 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.ficsit.app |
Next: Processing Data in Cell Arrays, Previous: Indexing Cell Arrays, Up: Cell Arrays [Contents][Index]
One common use of cell arrays is to store multiple strings in the same
variable. It is also possible to store multiple strings in a
character matrix by letting each row be a string. This, however,
introduces the problem that all strings must be of equal length.
Therefore, it is recommended to use cell arrays to store multiple
strings. For cases, where the character matrix representation is required
for an operation, there are several functions that convert a cell
array of strings to a character array and back.
char and
strvcat convert cell arrays to a character array
(see Concatenating Strings), while the function
cellstr
converts a character array to a cell array of strings:
a = ["hello"; "world"]; c = cellstr (a) ⇒ c = { [1,1] = hello [2,1] = world }
Create a new cell array object from the elements of the string array strmat.
Each row of strmat becomes an element of cstr. Any trailing spaces in a row are deleted before conversion.
To convert back from a cellstr to a character array use
char.
See also: cell, char. to the string
argument:
c = {"hello", "world"}; strcmp ("hello", c) ⇒ ans = 1 0
The following string functions support cell arrays of strings:
char,
strvcat,
strcat (see Concatenating Strings),
strcmp,
strncmp,
strcmpi,
strncmpi (see Comparing Strings),
str2double,
deblank,
strtrim,
strtrunc,
strfind,
strmatch, ,
regexp,
regexpi
(see Manipulating Strings) and
str2double
(see String Conversions).
The function
iscellstr can be used to test if an object is a
cell array of strings.
Return true if every element of the cell array cell is a character string.
See also: ischar, isstring.
Next: Processing Data in Cell Arrays, Previous: Indexing Cell Arrays, Up: Cell Arrays [Contents][Index] | https://docs.octave.org/v4.4.1/Cell-Arrays-of-Strings.html | 2022-09-25T03:06:07 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.octave.org |
stkETH Node Operators¶
Overview¶
pSTAKE currently onboards validators for the ETH2 staking product based on a whitelisting mechanism. This process will soon be evolved to make the protocol more inclusive for all node operators, keeping the network’s decentralisation in mind.
Furthermore, to maximise security, pSTAKE asks whitelisted node operators to deposit 1 ETH of their own with our withdrawal credentials instead of asking them to submit a list of validator keys. When the validator key is picked up by the beacon chain we verify the validity of the key by checking the withdrawal credentials. This was implemented after the reported issue where if users have previously submitted the same key with different withdrawal credentials, they are in control of the funds staked on it when withdrawals get enabled.
Additional notes for node operators: * Node Operators can use any client for running the ETH1 chain node and the ETH2 beacon chain node. * We strongly recommend node operators to have alerting mechanisms in place to monitor their nodes and validators. Uptime will be monitored and non-performing validators will be removed from the whitelist. * pSTAKE will also monitor validators for the rewards generated and slashing events. * Read more about the onboarding process here. | https://docs.pstake.finance/stkETH_Node_Operators_Overview/ | 2022-09-25T02:27:44 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.pstake.finance |
Rehive consists of three parts: Platform, Extensions, and Applications. This overview assumes that you are familiar with these. We offer some extensions out of the box, but it is also possible to build your own custom extensions.
We’ll look at the typical structure for integrating with a banking partner. For each flow, we will highlight which Rehive endpoints to use.
The first place to start is by familiarizing yourself with the Get Started section. This gives a basic overview of how to start building an extension as well as the ground-level endpoints you’ll need to use. You should also familiarize yourself with Rehive’s Standard Configurations to ensure that the correct subtypes are used for specific transactions.
Support and enablement
The Account Manager expectations article outlines what support is available from Rehive when building a custom extension. Whether or not you are on a subscription plan that includes an account manager:
- You must verify that your chosen 3rd party provider has support for the common endpoints for the integration requirements.
- You must consider how data is stored and linked to users in your extension.
- You should familiarize yourself with the Rehive Help Center.
- Rehive may offer a call to go over the flows if documentation is not sufficient or to identify variances. The banking partner should be included as a specialist.
- Rehive does not provide codebase-level reviews for custom extensions.
- Support will be provided if any Rehive endpoints are not working - please take note of the guidelines for reporting technical issues. | https://docs.rehive.com/building/flow-guidelines/introduction/ | 2022-09-25T00:58:09 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.rehive.com |
This chapter contains the following information:
PL/R is a procedural language. With the Greenplum Database PL/R extension you can write database functions in the R programming language and use R packages that contain R functions and data sets.
For information about supported PL/R versions, see the Greenplum Database Release Notes..
Note: You can use the gpssh utility to run bash shell commands on multiple remote hosts.
The PL/R extension is available as a package. Download the package from VMware Tanzu Network and.
Before you install the PL/R extension, make sure that your Greenplum Database is running, you have sourced
greenplum_path.sh, and that the
$MASTER_DATA_DIRECTORY and
$GPHOME variables are set.
Download the PL/R extension package from VMware Tanzu Network.
Follow the instructions in Verifying the Greenplum Database Software Download to verify the integrity of the Greenplum Procedural Languages PL/R software.
Copy the PL/R package to the Greenplum Database master host.
Install the software extension package by running the
gppkg command. This example installs the PL/R extension on a Linux system:
$ gppkg -i plr-3.0.3-gp6-rhel7_x86_64.gppkg
Source the file
$GPHOME/greenplum_path.sh.
Restart Greenplum Database.
$ gpstop -r;'
PL/R is registered as an untrusted language.
When you remove PL/R language support from a database, the PL/R routines that you created in the database will no longer work.
For a database that no longer requires the PL/R language, remove support for PL/R with the SQL command
DROP
For Ubuntu systems, remove R from all Greenplum Database host systems. These commands remove R from an Ubuntu system.
$ sudo apt remove r-base $ sudo apt remove r-base-core
Removing
r-base does not uninstall the R executable. Removing
r-base-core uninstalls the R executable.
The following are simple PL/R examples.
This function generates an array of numbers with a normal distribution using the R function
rnorm().
CREATE OR REPLACE FUNCTION r_norm(n integer, mean float8, std_dev float8) RETURNS float8[ ] AS $$ x<-rnorm(n,mean,std_dev) return(x) $$ LANGUAGE 'plr';
The following
CREATE TABLE command uses the
r_norm() function to populate the table. The
r_norm() function creates an array of 10 numbers.
CREATE TABLE test_norm_var AS SELECT id, r_norm(10,0,1) as x FROM (SELECT generate_series(1,30:: bigint) AS ID) foo DISTRIBUTED BY (id);
Assuming your PL/R function returns an R
data.frame as its output, unless you want to use arrays of arrays, some work is required to see your
data.frame from PL/R as a simple SQL table:
Create a
TYPE in a Greenplum database with the same dimensions as your R
data.frame:
CREATE TYPE t1 AS ...
Use this
TYPE when defining your PL/R function
... RETURNS SET OF t1 AS ...
Sample SQL for this is given in the next example.
The SQL below defines a
TYPE and runs hierarchical regression using PL/R:
--Create TYPE to store model results DROP TYPE IF EXISTS wj_model_results CASCADE; CREATE TYPE wj_model_results AS ( cs text, coefext float, ci_95_lower float, ci_95_upper float, ci_90_lower float, ci_90_upper float, ci_80_lower float, ci_80_upper float); --Create PL/R function to run model in R DROP FUNCTION IF EXISTS wj_plr_RE(float [ ], text [ ]); CREATE FUNCTION wj_plr_RE(response float [ ], cs text [ ]) RETURNS SETOF wj_model_results AS $$ library(arm) y<- log(response) cs<- cs d_temp<- data.frame(y,cs) m0 <- lmer (y ~ 1 + (1 | cs), data=d_temp) cs_unique<- sort(unique(cs)) n_cs_unique<- length(cs_unique) temp_m0<- data.frame(matrix0,n_cs_unique, 7)) for (i in 1:n_cs_unique){temp_m0[i,]<- c(exp(coef(m0)$cs[i,1] + c(0,-1.96,1.96,-1.65,1.65, -1.28,1.28)*se.ranef(m0)$cs[i]))} names(temp_m0)<- c("Coefest", "CI_95_Lower", "CI_95_Upper", "CI_90_Lower", "CI_90_Upper", "CI_80_Lower", "CI_80_Upper") temp_m0_v2<- data.frames(cs_unique, temp_m0) return(temp_m0_v2) $$ LANGUAGE 'plr'; --Run modeling plr function and store model results in a --table DROP TABLE IF EXISTS wj_model_results_roi; CREATE TABLE wj_model_results_roi AS SELECT * FROM wj_plr_RE('{1,1,1}', '{"a", "b", "c"}');
R packages are modules that contain R functions and data sets. You can install R packages to extend R and PL/R functionality in Greenplum Database.
Greenplum Database provides a collection of data science-related R libraries that can be used with the Greenplum Database PL/R language. You can download these libraries in
.gppkg format from VMware Tanzu Network. For information about the libraries, see R Data Science Library Package.
Note: If you expand Greenplum Database and add segment hosts, you must install the R packages in the R installation of the new hosts.
For an R package, identify all dependent R packages and each package web URL. The information can be found by selecting the given package from the following navigation page:
As an example, the page for the R package arm indicates that the package requires the following R libraries: Matrix, lattice, lme4, R2WinBUGS, coda, abind, foreign, and MASS.
You can also try installing the package with
R CMD INSTALL command to determine the dependent packages.
For the R installation included with the Greenplum Database PL/R extension, the required R packages are installed with the PL/R extension. However, the Matrix package requires a newer version.
From the command line, use the
wget utility to download the
tar.gz files for the arm package to the Greenplum Database master host:
wget
wget
Use the gpscp utility and the
hosts_all file to copy the
tar.gz files to the same directory on all nodes of the Greenplum Database cluster. The
hosts_all file contains a list of all the Greenplum Database segment hosts. You might require root access to do this.
gpscp -f hosts_all Matrix_0.9996875-1.tar.gz =:/home/gpadmin
gpscp -f /hosts_all arm_1.5-03.tar.gz =:/home/gpadmin
Use the
gpssh utility in interactive mode to log into each Greenplum Database segment host (
gpssh -f all_hosts). Install the packages from the command prompt using the
R CMD INSTALL command. Note that this may require root access. For example, this R install command installs the packages for the arm package.
$R_HOME/bin/R CMD INSTALL Matrix_0.9996875-1.tar.gz arm_1.5-03.tar.gz
Ensure that the package is installed in the
$R_HOME/library directory on all the segments (the
gpssh can be used to install the package). For example, this
gpssh command list the contents of the R library directory.
gpssh -s -f all_hosts "ls $R_HOME/library"
The
gpssh option
-s sources the
greenplum_path.sh file before running commands on the remote hosts.
Test if the R package can be loaded.
This function performs a simple test to determine if an R package can be loaded:
CREATE OR REPLACE FUNCTION R_test_require(fname text) RETURNS boolean AS $BODY$ return(require(fname,character.only=T)) $BODY$ LANGUAGE 'plr';
This SQL command checks if the R package arm can be loaded:
SELECT R_test_require('arm');
You can use the R command line to display information about the installed libraries and functions on the Greenplum Database host. You can also add and remove libraries from the R installation. To start the R command line on the host, log into the host as the
gadmin user and run the script R from the directory
$GPHOME/ext/R-3.3.3/bin.
This R function lists the available R packages from the R command line:
> library()
Display the documentation for a particular R package
> library(help="<package_name>") > help(package="<package_name>")
Display the help file for an R function:
> help("<function_name>") > ?<function_name>
To see what packages are installed, use the R command
installed.packages(). This will return a matrix with a row for each package that has been installed. Below, we look at the first 5 rows of this matrix.
> installed.packages()
Any package that does not appear in the installed packages matrix must be installed and loaded before its functions can be used.
An R package can be installed with
install.packages():
> install.packages("<package_name>") > install.packages("mypkg", dependencies = TRUE, type="source")
Load a package from the R command line.
> library(" <package_name> ")
An R package can be removed with
remove.packages
> remove.packages("<package_name>")
You can use the R command
-e option to run functions from the command line. For example, this command displays help on the R package MASS.
$ R -e 'help("MASS")'. - The R Project home page. - The home page for PivotalR, a package that provides an R interface to operate on Greenplum Database tables and views that is similar to the R data.frame. PivotalR also supports using the machine learning package MADlib directly from R.
The following links highlight key topics from the R documentation. | https://docs.vmware.com/en/VMware-Tanzu-Greenplum/6/greenplum-database/GUID-analytics-pl_r.html | 2022-09-25T01:23:29 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.vmware.com |
17.1.4.14. ViewStateKind¶
- enum eprosima::fastdds::dds::ViewStateKind¶
Indicates whether or not an instance is new.
For each instance (identified by the key), the middleware internally maintains a view state relative to each DataReader. This view state can have the following values:
NEW_VIEW_STATE SampleInfo::disposed_generation_count and the SampleInfo::no_writers_generation_count.
NOT_NEW_VIEW_STATE indicates that the DataReader has already accessed samples of the same instance and that the instance has not been reborn since.
Once an instance has been detected as not having any “live” writers and all the samples associated with the instance are “taken” from the DDSDataReader, the middleware can reclaim all local resources regarding the instance. Future samples will be treated as “never seen.”
Values:
- enumerator NEW_VIEW_STATE¶
New instance.This latest generation of the instance has not previously been accessed. | https://fast-dds.docs.eprosima.com/en/latest/fastdds/api_reference/dds_pim/subscriber/viewstatekind.html | 2022-09-25T02:18:42 | CC-MAIN-2022-40 | 1664030334332.96 | [] | fast-dds.docs.eprosima.com |
Rexpy¶
Rexpy infers regular expressions on a line-by-line basis from text data examples.
To run the rexpy tool:
tdda rexpy [inputfile]
Command-line Tool¶
Usage:
rexpy [FLAGS] [input file [output file]]
If input file is provided, it should contain one string per line; otherwise lines will be read from standard input.
If output file is provided, regular expressions found will be written to that (one per line); otherwise they will be printed.
FLAGS are optional flags. Currently:
-h, --header Discard first line, as a header. -?, --help Print this usage information and exit (without error) -g, --group Generate capture groups for each variable fragment of each regular expression generated, i.e. surround variable components with parentheses e.g. '^([A-Z]+)\-([0-9]+)$' becomes '^[A-Z]+\-[0-9]+$' -q, --quote Display the resulting regular expressions as double-quoted, escaped strings, in a form broadly suitable for use in Unix shells, JSON, and string literals in many programming languages. e.g. ^[A-Z]+\-[0-9]+$ becomes "^[A-Z]+\-[0-9]+$" -u, --underscore Allow underscore to be treated as a letter. Mostly useful for matching identifiers Also allow -_. -d, --dot Allow dot to be treated as a letter. Mostly useful for matching identifiers. Also -. --period. -m, --minus Allow minus to be treated as a letter. Mostly useful for matching identifiers. Also --hyphen or --dash. -v, --version Print the version number. -V, --verbose Set verbosity level to 1 -VV, --Verbose Set verbosity level to 2 -vlf, --variable Use variable length fragments -flf, --fixed Use fixed length fragments
Python API¶
The
tdda.rexpy.rexpy module also or as a dictionary: if a dictionary, the values are assumed to be string frequencies._groups(pattern, examples)¶
Analyse the contents of each group (fragment) in pattern across the examples it matches.
- Return zip of
- the characters in each group
- the strings in each group
- the run-length encoded fine classes in each group
- the run-length encoded characters in each group
- the group itself
all indexed on the (zero-based) group number._non_matches()¶
Returns all example strings that do not match any of the regular expressions in results._groups(pattern, examples)¶
Refine the categories for variable run-length-encoded patterns provided by narrowing the characters in the groups.).
sample(nPerLength)¶
Sample strings for potentially faster induction. Only used if over a hundred million distinct strings are given. For now.
specialize(patterns)¶
Check all the catpure groups in each patterns and simplify any that are sufficiently low frequency.
vrle2re(vrles, tagged=False, as_re=True)¶
Convert variable run-length-encoded code string to regular expression
- class
tdda.rexpy.rexpy.
Fragment¶
Container for a fragment.
Attributes:
re: the regular expression for the fragment
group: True if it forms a capture group (i.e. is not constant)
tdda.rexpy.rexpy.
capture_group(s)¶
Places parentheses around s to form a capure group (a tagged piece of a regular expression), unless it is already a capture group., example_freqs, dedup=False)¶
Find patterns, in order of # of matches, and pull out freqs. Then set overlapping matches to zero and repeat. Returns ordered dict, sorted by incremental match rate, with number of (previously unaccounted for) strings matched.
tdda.rexpy.rexpy.
pdextract(cols)¶
Extract regular expression(s) from the Pandas column (
Series) object or list of Pandas columns given.
All columns provided should be string columns (i.e. of type np.dtype(‘O’), possibly including null values, which will be ignored.
Example use:
import pandas as pd from tdda.rexpy import pdextract df = pd.DataFrame({'a3': ["one", "two", pd, example_freqs,, example_freqs, dedup, example_freqs, dedup.
Examples¶
The
tdda.rexpy module includes a set of examples.
To copy these examples to your own rexpy-examples subdirectory (or to a location of your choice), run the command:
tdda examples rexpy [mydirectory] | https://tdda.readthedocs.io/en/tdda-1.0.24/rexpy.html | 2022-09-25T02:02:40 | CC-MAIN-2022-40 | 1664030334332.96 | [] | tdda.readthedocs.io |
Emergency Data Export
What is an Emergency Export?
If a user runs into synchronization problems, Anveo Mobile App provides an alternative way to send data from the mobile device to Microsoft Dynamics NAV 2018 via e-mail. Please select menu option “Emergency Data Export” in the main menu of Anveo Mobile App on your mobile device. Anveo Mobile App creates an encrypted file that you can send to your responsible person for Microsoft Dynamics NAV 2018.
Import Data
Select your corresponding Anveo User Device in Microsoft Dynamics NAV 2018 and import the file.
The file is encrypted with user-specific information that requires the corresponding Anveo User Device entry in Microsoft Dynamics NAV 2018. Do not delete this entry in Microsoft Dynamics NAV 2018, otherwise, you will not be able to decrypt and import the file any more.
This data export should be used for synchronization problems only, or if the user cannot synchronize very important data because of public wi-fi firewall rules / limitations. You can only transfer data from the mobile device to Microsoft Dynamics NAV 2018. An update of the mobile device is not possible this way.
The import merely transfers the data from the Anveo Mobile App to Microsoft Dynamics. Only new data is read in. Data that has already been received or processed is not read in again.
In order to be able to import emergency export data the affected device needs to have at least one entry in the ACF Anveo User Device table with the corresponding AnveoUserDeviceID. The affected device also requires a LastProcessedEntryNo of at least 1.
Processing Data
Import and processing of data are two separate steps. After importing the file, open list Receiving Tasks from your Anveo User Devices in Microsoft Dynamics NAV 2018. Mark all entries with status Data received and process them. Already processed entries cannot be processed again. This ensures data consistency if you import files twice or if you continue synchronization afterwards.
Continue after an Emergency Export
After an emergency export and manual processing has been performed, it is recommended to reinitialize the device. For this the user should reload the data in the main menu or in the start menu. All data is transferred cleanly and newly to the device.
Note: A retransmission may take a longer time and require a larger volume of data. Therefore, this procedure must be coordinated with the system administrator. | https://docs.anveogroup.com/en/manual/anveo-mobile-app/customize-the-user-interface/emergency-data-export/?product_name=anveo-mobile-app&product_version=Version+11&product_platform=Microsoft+Dynamics+NAV+2018 | 2022-09-25T01:16:45 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.anveogroup.com |
Rocket loader
Rocket Loader prioritizes your website's content (text, images, fonts etc) by deferring the loading of all of your JavaScript until after rendering.
This type of loading (known as asynchronous loading) leads to the earlier rendering of your page content. Rocket Loader handles both inline and external scripts while maintaining the order of execution.
If you've elements in the above fold that require JavaScript to render, it's better to disable Rocket Loader. | https://docs.flyingproxy.com/article/90-rocket-loader | 2022-09-25T02:34:21 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.flyingproxy.com |
Guidance for metric and dimension names 🔗
Splunk Observability Cloud has two main categories of data names: existing names and custom names.
Existing names 🔗
When you use an existing data collection integration such as the collectd agent or the AWS CloudWatch integration, the integration defines metric, dimension, and event names for you. To learn more, see Metric name standards.
To make it easier for you to find and work with metrics coming in from different sources, Splunk Infrastructure Monitoring pulls data from different sources, transforms them, and returns them in a unified format called virtual metrics. See Virtual metrics in Splunk Infrastructure Monitoring for more information.
Custom names 🔗
When you send custom metrics, dimensions, or events to Splunk Infrastructure Monitoring, you choose your own names. To learn more about custom event names, see Guidance for custom event names.
Modify naming schemes you sent to other metrics systems 🔗
Splunk Infrastructure Monitoring lets you associate arbitrary key-value pairs called dimensions with metrics. Dimensions let you represent multi-dimensional data without overloading your metric name with metadata.
If you send metrics that you previously sent to other metrics systems such as Graphite or New Relic, then modify the naming scheme to leverage the full feature set of Splunk Observability Cloud.
If you already have metrics with period-separated names, use Splunk OTel parse dimensions from metric names. To learn more, see Example: Custom metric name and dimensions.
Metric name standards 🔗
Metrics are distinct numeric measurements that change over time. Metrics are generated by system infrastructure, application instrumentation, or other hardware or software. The following are some examples of metrics:
Count of GET requests received
Percent of total memory in use
Network response time in milliseconds
If you apply a calculation to the metric before you send it, you can also use the calculation as part of the description. For example, if you calculate the ninety-fifth percentile of measurements and send the result in a metric, use p95 as part of the metric name. This table lists the type of information you can apply to a metric name:
On the other hand, some information is better to include in a dimension instead of a metric name, such as description of hardware or software being measured. For example, don’t use
production1 to indicate that the measurement is for a particular host. To learn more, see Dimension name and value standards.
Create metric names using a hierarchical left to right structure 🔗
Start at the highest level, then add more specific values as you proceed. In this example, all of these metrics have a dimension key called
hostname with values such as analytics-1, analytics-2, and so forth. These metrics also have a customer dimension key with values org-x, org-y, and so on. The dimensions provide an infrastructure-focused or a customer-focused view of the analytics service usage. For more information on gauge metrics, see Identify metric types.
Start with a domain or namespace that the metric belongs to, such as analytics or web.
Next, add the entity that the metric measures, such as jobs or http.
At your discretion, add intermediate names, such as errors.
Finish with a unit of measurement. For example, the SignalFlow analytics service reports the following metrics:
analytics.jobs.total: Gauge metric that periodically measures the current number of executing jobs
analytics.thrift.execute.count: Counter metric that’s incremented each time new job starts
analytics.thrift.execute.time: Gauge metric that measures the time needed to process a job execution request
analytics.jobs_by_state: Counter metric with a dimension key called state, incremented each time a job reaches a particular state.
Use different metric names to indicate metric types 🔗
It is necessary to use different metric names to indicate metric types. When you create metric names, follow these best practices:
Give each metric its own name
When you define your own metric, give each metric a name that includes a reference of the metric type.
Avoid assigning custom metric names that include dimensions. For example, if you have 100 server instances and you want to create a custom metric that tracks the number of disk writes for each one, differentiante between the instances with a dimension.
Metric types and rollups 🔗
In Infrastructure Monitoring, all metrics have a single metric type, with a specific default rollup. A rollup is a statistical function that takes all the data points for an MTS over a time period and outputs a single data point. Observability Cloud applies rollups after it retrieves the data points from storage but before it applies analytics functions. For more information on rollups, see Rollups in Data resolution and rollups in charts.
The following list shows the types and their default rollups:
Gauge metric: Average
Counter metric: Sum
Cumulative counter: Delta. This measures the change in the value of the metric from the previous data point.
To track a measurable value using two different metric types, use two metrics instead of one metric with two dimensions. For example, suppose you have a
network_latency measurement that you want to send as two different types:
Gauge metric: Average network latency in milliseconds
Counter metric: Total number of latency values sent in an interval
Send the measurement using two different metric names, such as
network_latency.average and
network_latency.count, instead of one metric name with two dimensions type:average and type:count.
Dimension name and value standards 🔗
Dimensions are arbitrary key-value pairs you associate with metrics. They can be numeric or non-numeric. Some dimensions, such as host name and value, come from a system you’re monitoring. You can also create your own dimensions. Metrics identify a measurement, whereas dimensions identify a specific aspect of the system that’s generating the measurement or characterizes the measurement.
Dimension names have the following requirements:
UTF-8 string, maximum length of 128 characters (512 bytes).
Must start with an uppercase or lowercase letter. The rest of the name can contain letters, numbers, underscores (_) and hyphens (-).
Must not start with the underscore character (_).
Must not start with the prefix
sf_, except for dimensions defined by Observability Cloud such as
sf_hires.
Must not start with the prefix
aws_,
gcp_, or
azure_.
Dimension values are UTF-8 strings with a maximum length of 256 UTF-8 characters (1024 bytes). Numbers are represented as numeric strings.
You can have up to 36 dimensions per MTS.
To ensure readability, keep names and values to 40 characters or less.
Length limits for metric name, dimension name, and dimension value 🔗
Metric and dimension length specifications:
Metric names up to 256 characters
Dimension names up to 128 characters
Dimension values up to 256 characters
Example: dimensions 🔗
The following are some examples of dimensions:
"hostname": "production1"
"region": "emea"
Benefits of dimensions 🔗
The following are some examples of benefits of dimensions:
Classification of different streams of data points for a metric.
Simplified filtering and aggregation. For example, SignalFlow lets you filter and aggregate data streams by one or more dimensions.
Types of information that are suitable for dimension values 🔗
The following are some examples of types of information that you can add to dimensions:
Categories rather than measurements: If doing an arithmetic operation on dimension values results in something meaningful, you don’t have a dimension.
Metadata for filtering, grouping, or aggregating
Name of entity being measured: For example
hostname,
production1
Metadata with large number of possible values: Use one dimension key for many different dimension values.
Non-numeric values: Numeric dimension values are usually labels rather than measurements.
Example: Custom metric name and dimensions 🔗
For example, consider the measurement of HTTP errors.
You want to track the following data:
Number of errors
HTTP response code for each error
Host that reported the error
Service (app) that returned the error
Suppose you identify your data with a long metric name instead of a metric name and a dimension. A metric name that represented the number of HTTP response code 500 errors reported by the host named myhost for the service checkout would have to be the following:
web.http.myhost.checkout.error.500.count.
As a result of using this metric name, you’d experience the following:
To visualize this data in a Splunk Infrastructure Monitoring chart, you would have to perform a wildcard query with the syntax
web.http.*.*.error.*.count.
To sum up the errors by host, service, or error type, you would have to change the query.
You couldn’t use filters or dashboard variables in your chart.
You would have to define a separate metric name to track HTTP 400 errors, or errors reported by other hosts, or errors reported by other services.
Leverage dimensions to track the same data you can do the following:
Define a metric name that describes the measurement you want, which is the number of HTTP errors:
web.http.error.count. The metric name includes the following:
web: Your name for a family of metrics for web measurements
http.error: Your name for the protocol you’re measuring (http) and an aspect of the protocol (error)
count: The unit of measure
Define dimensions that categorize the errors. The dimensions include the following:
host: The host that reported the error
service: The service that returned the error
error_type: The HTTP response code for the error
When you want to visualize the error data using a chart, you can search for “error count” to locate the metric by name. When you create the chart, you can filter and aggregate incoming metric time series by host, service, error_type, or all three. You can add a dashboard filter so that when you view the chart in a specific dashboard, you don’t have the chart itself.
Considerations for metrics and dimensions names for your organization 🔗
Keep this guidance in mind so that you can create a consitent naming proccess in your organization.
Use a single consistent delimiter in metric names. Using a single consistent delimiter in metric names helps you search with wildcards. Use periods or underscores as delimiters. Don’t use colons or slashes.
Avoid changing metric and dimension names. If you change a name, you have to update the charts and detectors that use the old name. Infrastructure Monitoring doesn’t do this automatically.
Remember that you’re not the only person using the metric or dimension. Use names that others in your organization can identify and understand. Follow established conventions. To find out the conventions in your organization, browse your metrics using the Metric Finder.
Guidelines for working with low and high cardinality data 🔗
Send low-cardinality data only in metric names or dimension key names. Low-cardinality data has a small number of distinct values. For example, the metric name
web.http.error.count for a gauge metric that reports the number of HTTP request errors has a single value. This name is also readable and self-explanatory. For more information on gauge metrics, see Identify metric types.
High-cardinality data has a large number of distinct values. For example, timestamps are high-cardinality data. Only send this kind of high-cardinality data in dimension values. If you send high-cardinality data in metric names, Infrastructure Monitoring might not ingest the data. Infrastructure Monitoring specifically rejects metrics with names that contain timestamps. High-cardinality data does have legitimate uses. For example, in containerized environments, container_id is usually a high-cardinality field. If you include container_id in a metric name such as
system.cpu.utilization.<container_id>, instead of having one MTS, you would have as many MTS as you have containers.
Guidance for custom event names 🔗
Custom events are collections of key-value pairs you can send to Infrastructure Monitoring to display in charts and view in event feeds. For example, you can send “release” events whenever you release new code and then correlate metric changes with releases by overlaying the release events on charts. The Metric and dimension key naming standards also apply to custom event naming. | https://docs.signalfx.com/en/latest/metrics-and-metadata/metric-names.html | 2022-09-25T02:11:23 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.signalfx.com |
Using Kong with TriggerMesh
The Kong Ingress Controller can be configured as the network layer for TriggerMesh, enabling it to perform internal and external routing.
The steps in this article guide you through the installation and configuration process, referring to external links when information beyond this scope is needed.
Pre-requisite
Knative Serving needs to be installed on a Kubernetes cluster, follow the instructions at the documentation to install it, skipping the network layer.
Knative networking layer
Kong is a networking layer option for Knative, you don't need to install any of the other choices at the project's documentation.
This guide was written using:
- Kubernetes
v1.22
- Knative
v1.0.0
- Kong Ingress Controller
2.3.1(includes Kong
2.8)
Install Kong Ingress Controller
Kong Ingress Controller can be installed using either the YAML manifest at their repository or helm charts.
When using YAML, apply the provided manifest:
kubectl apply -f
When using Helm follow their installation instructions.
Once Kong is installed take note of the IP address or public CNAME of the
kong-proxy service at the
kong namespace.
kubectl -n kong get svc kong-proxy NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kong-proxy LoadBalancer 10.98.223.191 35.141.22.45 80:30119/TCP,443:31585/TCP 1m
In the example above the external IP address
35.141.22.45 was provisioned.
Configure Kong Network Layer For Knative
Knative Ingress Class
We will configure Knative to use
kong as the Ingress class:
kubectl patch configmap/config-network \ --namespace knative-serving \ --type merge \ --patch '{"data":{"ingress.class":"kong"}}'
Setup Knative Domain
Use the Kong Ingress external IP or CNAME to configure your the domain name resolution as explained at Knative's documentation.
In this example we are not configuring a real DNS but using free wildcard domain tools like sslip.io or nip.io
kubectl patch configmap/config-domain \ --namespace knative-serving \ --type merge \ --patch '{"data":{"35.141.22.45.nip.io":""}}'
Once this is done, the setup is complete.
Try It
Test Connectivity
Send a request to the configured domain and make sure that a 404 response is returned:
curl -i HTTP/1.1 404 Not Found Date: Wed, 11 May 2022 12:01:21 GMT Content-Type: application/json; charset=utf-8 Connection: keep-alive Content-Length: 48 X-Kong-Response-Latency: 0 Server: kong/2.8.1 {"message":"no Route matched with those values"}
The 404 response is expected since we have not configured any services yet.
Test Service
Deploy a Knative Service:
kubectl apply -f - <<EOF apiVersion: serving.knative.dev/v1 kind: Service metadata: name: helloworld-go namespace: default spec: template: spec: containers: - image: gcr.io/knative-samples/helloworld-go env: - name: TARGET value: TriggerMesh HTTP/1.1 200 OK Content-Type: text/plain; charset=utf-8 Content-Length: 20 Connection: keep-alive Date: Wed, 11 May 2022 12:10:26 GMT X-Kong-Upstream-Latency: 16 X-Kong-Proxy-Latency: 1 Via: kong/2.8.1 Hello TriggerMesh!
By inspecting the returned headers for the request above we can tell that it was proxied by Kong, latency headers being added to the response.
Test Kong Plugins For Knative Services
Kong supports plugins to customize knative services requests and responses.
First, let's create a KongPlugin resource:
kubectl apply -f - <<EOF apiVersion: configuration.konghq.com/v1 kind: KongPlugin metadata: name: add-response-header config: add: headers: - 'demo: injected-by-kong' plugin: response-transformer EOF
Next, we will update the Knative service created before and add an annotation to the template:
kubectl apply -f - <<EOF apiVersion: serving.knative.dev/v1 kind: Service metadata: name: helloworld-go namespace: default spec: template: metadata: annotations: konghq.com/plugins: add-response-header spec: containers: - image: gcr.io/knative-samples/helloworld-go env: - name: TARGET value: TriggerMesh EOF
Kubernetes namespace
Note that the annotation
konghq.com/plugins is not added to the Service definition itself but to the
spec.template.metadata.annotations.
Let's make the request again:
curl -i HTTP/1.1 200 OK Content-Type: text/plain; charset=utf-8 Content-Length: 20 Connection: keep-alive Date: Wed, 11 May 2022 13:48:53 GMT demo: injected-by-kong X-Kong-Upstream-Latency: 3 X-Kong-Proxy-Latency: 0 Via: kong/2.8.1 Hello TriggerMesh!
As we can see, the response has the
demo header injected. | https://docs.triggermesh.io/guides/kong-ingress/ | 2022-09-25T01:21:37 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.triggermesh.io |
Each of the data types in the table can be read or written from the AI Engine as either scalars or in vector groups. However, there are certain restrictions on valid groupings based on the bus data width supported on the AI Engine to programmable logic interface ports or through the stream-switch network. The valid combinations for AI Engine kernels are vector bundles totaling up to 32-bits or 128-bits. The accumulator data types are only used to specify cascade-stream connections between adjacent AI Engines. Its valid groupings are based on the 384-bit wide cascade channel between two processors.
Note: To use these data types, it is necessary to use
#include <adf.h>in the kernel source file. | https://docs.xilinx.com/r/en-US/ug1076-ai-engine-environment/Stream-Data-Types | 2022-09-25T01:42:23 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.xilinx.com |
Configure
Configuring Greenlight 2.0
Greenlight is a highly configurable application. The various configuration options can be found below. When making changes to the
.env file, in order for them to take effect you must restart your Greenlight container. For information on how to do this, see Applying
.env Changes.
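On a typical Docker-based install (the setup assumed throughout this guide), restarting the container amounts to running the following from your Greenlight directory:
cd ~/greenlight
docker-compose down
docker-compose up -d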
Using a Different Relative Root
By default Greenlight is deployed to the
/b sub directory. If you are running Greenlight on a BigBlueButton server you must deploy Greenlight to a sub directory to avoid conflicts.
If you wish to use a relative root other than
/b, you can do the following:
- Change the
RELATIVE_URL_ROOT environment variable.
- Update the
/etc/bigbluebutton/nginx/greenlight.nginx file to reflect the new relative root.
- Restart Nginx and the Greenlight server.
If you are not deploying Greenlight on a BigBlueButton server and want the application to run at root, simply set the
RELATIVE_URL_ROOT environment variable to be blank.
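For example, the corresponding .env entry might look like one of the following (use only one):
# Default on a BigBlueButton server: serve Greenlight under /b
RELATIVE_URL_ROOT=/b
# Standalone deployment served at the web root: leave the value blank instead
# RELATIVE_URL_ROOT=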
Setting a Custom Branding Image
You can now set up branding for Greenlight through its Administrator Interface.
User Authentication
Greenlight supports four types of user authentication. You can configure any number of these, and Greenlight will dynamically adjust to which ones are configured.
In Application (Greenlight)
Greenlight has the ability to create accounts on its own. Users can sign up with their name, email and password and use Greenlight’s full functionality.
By default, the ability for anyone to create a Greenlight account is enabled. To disable this, set the
ALLOW_GREENLIGHT_ACCOUNTS option in your
.env file to false. This will not delete existing Greenlight accounts, but will prevent new ones from being created.
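For example, to turn off self-registration, add the following to your .env file:
ALLOW_GREENLIGHT_ACCOUNTS=false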
Google OAuth2
You can use your own Google account, but since Greenlight will use this account for sending out emails, you may want to create a Google account related to the hostname of your BigBlueButton server. For example, if your BigBlueButton server is called
example.myserver.com, you may want to create a Google account called
greenlight_notifications_myserver.
You need a Google account to create an OAuth 2
CLIENT_ID and
SECRET. These will enable users of Greenlight to authenticate with their own Google account (not yours).
Login to your Google account and open the Google Developers Console.
If you want to see the documentation behind OAuth2 at Google, refer to Google's official OAuth2 documentation.
First, create a Project by clicking the “CREATE” link.
In the menu on the left, click “Credentials”.
Next, click the “OAuth consent screen” tab below the “Credentials” page title.
From here take the following steps:
- Choose any application name e.g “Greenlight”
- Set “Authorized domains” to your hostname eg “hostname” where hostname is your hostname
- Set “Application Homepage link” to your hostname e.g “” where hostname is your hostname
- Set “Application Privacy Policy link” to your hostname e.g “” where hostname is your hostname
- Click “Save”
- Click “Create credentials”
- Select “OAuth client ID”
- Select “Web application”
- Choose any name e.g “bbb-endpoint”
- Under “Authorized redirect URIs” enter “” where hostname is your hostname
- Click “Create”
A window should open with your OAuth credentials. In this window, copy client ID and client secret to the
.env file so it resembles the following (your credentials will be different).
GOOGLE_OAUTH2_ID=1093993040802-jjs03khpdl4dfasffq7hj6ansct5.apps.googleusercontent.com
GOOGLE_OAUTH2_SECRET=KLlBNy_b9pvBGasf7d5Wrcq
The
GOOGLE_OAUTH2_HD environment variable is optional and can be used to restrict Google authentication to a specific Google Apps hosted domain.
GOOGLE_OAUTH2_HD=example.com
Office365 OAuth2
You will need an Office365 account to create an OAuth 2 key and secret. This will allow Greenlight users to authenticate with their own Office365 accounts.
To begin, head over to the Microsoft Azure portal and sign in to your Office365 account.
In the menu on the left, click “Azure Active Directory”.
Under the “Manage” tab, click “App registrations”.
From here take the following steps:
- Click “New Registration”
- Choose any application name e.g “bbb-endpoint”
- Set the Redirect URI to your url (must be https): “”
- Click “Register”
Once your application has been created, under the “Overview” tab, copy your “Application (client) ID” into the
OFFICE365_KEY environment variable in your
.env file.
Finally, click the “Certificates & secrets” under the “Manage” tab
From here take the following steps:
- Click “New client secret”
- Choose the “Never” option in the “Expires” option list
- Copy the value of your password into the
OFFICE365_SECRET environment variable in your
.env file.
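At this point the relevant .env entries should look something like the following (both values are placeholders, not real credentials):
OFFICE365_KEY=00000000-0000-0000-0000-000000000000
OFFICE365_SECRET=your-client-secret-value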
LDAP Auth
Greenlight can authenticate users against an external LDAP server. To connect Greenlight to an LDAP server, you will have to provide values for the LDAP environment variables in your .env file (the Example Setup below shows a complete set).
LDAP_AUTH is the preferred authentication method. (See below)
LDAP_PASSWORD is the password for the account to perform user lookup.
LDAP_ROLE_FIELD is the name of the attribute that contains the user role. (Optional)
LDAP_FILTER is the filter which can be used to only allow a specific subset of users to authenticate. (Optional)
LDAP_ATTRIBUTE_MAPPING allows you to specify which attributes in your LDAP server match which attributes in Greenlight (Optional - See below)
LDAP_AUTH
When setting the authentication method, there are currently 3 options:
"simple": Uses the account set in
LDAP_BIND_DN to look up users
"user": Uses the user’s own credentials to search for his data, enabling authenticated login to LDAP without the need for a user with global read privileges.
"anonymous": Enables an anonymous bind to the LDAP with no password being used.
LDAP_ROLE_FIELD
Greenlight can automatically assign a matching role to a user based on their role in the LDAP Server. To do that:
- Create a role in Greenlight with the exact same name as the LDAP role
- Set the role permissions for the newly created role
- Repeat for all possible roles
- Set
LDAP_ROLE_FIELD equal to the name of the attribute that stores the role
- Restart Greenlight
Once you have signed in with that user, they will automatically be given the Greenlight role that matches their LDAP role.
LDAP_ATTRIBUTE_MAPPING
When an LDAP user signs into Greenlight, the LDAP gem looks up the LDAP user and stores some information that is passed back to Greenlight.
You can find a list of the defaults in the table below. For rows with multiple attributes, the gem will use the first available attribute starting with the leftmost attribute in the row.
To make changes to the attribute that the gem uses, you can set the
LDAP_ATTRIBUTE_MAPPING variable in your
.env using the following format:
LDAP_ATTRIBUTE_MAPPING=variablename1=ldapattribute1;variablename2=ldapattribute2;variablename3=ldapattribute3;
For any variable that is not set, the default from above will be used.
IMPORTANT NOTE: variablename refers to the variable name in the leftmost column above, NOT the Greenlight attribute name
For example, if you would like to match the Greenlight user's name to
displayName in your LDAP server and the Greenlight username to
cn, then you would use the following string:
LDAP_ATTRIBUTE_MAPPING=name=displayName;nickname=cn;
Example Setup
Here are some example settings using an OpenLDAP server.
LDAP_SERVER=host
LDAP_PORT=389
LDAP_METHOD=plain
LDAP_UID=uid
LDAP_BASE=dc=example,dc=org
LDAP_AUTH=simple
LDAP_BIND_DN=cn=admin,dc=example,dc=org
LDAP_PASSWORD=password
LDAP_ROLE_FIELD=userRole
LDAP_FILTER=(&(attr1=value1)(attr2=value2))
If your server is still running you will need to recreate the container for changes to take effect.
See Applying
.env Changes section to enable your new configuration.
If you are using an ActiveDirectory LDAP server, you must determine the name of your user id parameter to set
LDAP_UID. It is commonly ‘sAMAccountName’ or ‘UserPrincipalName’.
LDAP authentication takes precedence over all other providers. This means that if you have other providers configured with LDAP, clicking the login button will take you to the LDAP sign in page as opposed to presenting the general login modal.
Twitter OAuth2
Twitter Authentication is deprecated and will be phased out in a future release.
Setting up File Storage
In order to use Preupload Presentation, you must first make some choices regarding your deployments. If you are upgrading from a version earlier than
2.7, there are some extra changes needed in order to get it up and running. If you first installed Greenlight at version
2.7 or later, you can skip directly to Choosing Storage Location.
Updating From Version Prior to 2.7
If you are updating from a version prior to
2.7, you must make the following changes in order for Preupload Presentation to work.
Update docker-compose.yml
Using your preferred text editor (examples below will use
nano), edit the following file:
nano ~/greenlight/docker-compose.yml
Find the following line (Line 19)
- ./log:/usr/src/app/log
Add the following line BELOW the above line (making sure to keep the same spacing as the line above)
- ./storage:/usr/src/app/storage
Once completed, your
docker-compose.yml should look like this (note the last 2 lines):
version: '3'
services:
  app:
    entrypoint: [bin/start]
    image: bigbluebutton/greenlight:v2
    container_name: greenlight-v2
    env_file: .env
    restart: unless-stopped
    ports:
      - 127.0.0.1:5000:80
    volumes:
      - ./log:/usr/src/app/log
      - ./storage:/usr/src/app/storage
Update NGINX
By default, only files that are < 1 MB are allowed to be uploaded due to some NGINX rules. To get around that, you must add a specific rule to allow larger files.
Using your preferred text editor (examples below will use
nano), edit the following file:
nano /etc/bigbluebutton/nginx/greenlight.nginx
At the very bottom, add the following lines (again making sure to keep consistent spacing):
# Allow larger body size for uploading presentations
location ~ /preupload_presentation$ {
  client_max_body_size 30m;

  proxy_pass http://127.0.0.1:5000;
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_http_version 1.1;
}
Finally, reload NGINX
sudo systemctl restart nginx
PLEASE NOTE: If your Greenlight deployment is deployed without the
/b (or any other relative root), you can skip the remainder of this step
Using your preferred text editor (examples below will use
nano), edit the following file:
nano /etc/bigbluebutton/nginx/greenlight.nginx
At the very bottom, add the following lines (again making sure to keep consistent spacing):
location /rails/active_storage {
  return 301 /b$request_uri;
}
Finally, reload NGINX
sudo systemctl restart nginx
Choosing Storage Location
When using Preupload Presentation, Greenlight needs a location to store the presentations uploaded by the room owners. At the moment, there are 3 places that you can choose from:
- Local (Default)
- Amazon S3
- Google Cloud Services Cloud Storage
Local
By default, local storage is set up to work automatically and will store all files in
~/greenlight/storage
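If you want to confirm that uploads are being written there, listing that directory after a room owner uploads a presentation is one quick check:
ls ~/greenlight/storage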
Amazon S3
In order to store files in S3, you must set the following values in your
.env file. A good guide to creating these credentials can be found in the AWS documentation.
AWS_ACCESS_KEY_ID is your AWS Account Access Key (see the guide above)
AWS_SECRET_ACCESS_KEY is your AWS Account Secret Access Key (see the guide above)
AWS_REGION is the region that your S3 bucket is in
AWS_BUCKET is the name of the S3 bucket
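As a sketch, the .env entries might look like the following (all values are placeholders; the key and secret shown are AWS's own documentation examples):
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_REGION=us-east-2
AWS_BUCKET=my-greenlight-presentations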
Google Cloud Services Cloud Storage
In order to store files in Cloud Storage, you must set the following values in your
.env file.
GCS_PROJECT_ID is the id of the project in which your storage is currently in
GCS_PRIVATE_KEY_ID can be found in the credentials.json file
GCS_PRIVATE_KEY can be found in the credentials.json file
GCS_CLIENT_EMAIL can be found in the credentials.json file
GCS_CLIENT_ID can be found in the credentials.json file
GCS_CLIENT_CERT can be found in the credentials.json file
GCS_PROJECT is the name of the project in which your storage is currently in
GCS_BUCKET is the name of the bucket
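As a sketch, the .env entries might look like the following; every value is a placeholder standing in for the corresponding field of your service account's credentials.json:
GCS_PROJECT_ID=my-project-id
GCS_PRIVATE_KEY_ID=private-key-id-from-credentials.json
GCS_PRIVATE_KEY=private-key-from-credentials.json
GCS_CLIENT_EMAIL=service-account@my-project-id.iam.gserviceaccount.com
GCS_CLIENT_ID=client-id-from-credentials.json
GCS_CLIENT_CERT=client-cert-url-from-credentials.json
GCS_PROJECT=my-project
GCS_BUCKET=my-greenlight-presentations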
Use PostgreSQL instead of SQLite
Greenlight can be set to use either a local in-memory SQLite database or a production-ready PostgreSQL database.
For any new installs, Greenlight is configured to use PostgreSQL by default.
If you installed Greenlight before v2.5 was released, your deployment is configured to use SQLite by default. If you are using SQLite, we highly recommend that you make the change to PostgreSQL.
Converting SQLite database to PostgreSQL without losing data
It is possible to convert an existing SQLite database to PostgreSQL without losing any of your data.
You’ll need to generate a random password that will be used in 3 different instances. Generate one by running
openssl rand -base64 24
For the remainder of these instructions, replace RANDOM_PASSWORD_REPLACE_ME with the password that was generated with the command from above
First, ensure you are in your Greenlight directory and that Greenlight is not running
cd ~/greenlight
docker-compose down
Second, replace your
docker-compose.yml with the new
docker-compose.yml to include the PostgreSQL container
docker run --rm bigbluebutton/greenlight:v2 cat ./docker-compose.yml > docker-compose.yml
Next, edit your
docker-compose.yml to include your SQLite container (You can use vi, vim, nano or any text editor)
vim docker-compose.yml
There are 3 lines that need to be changed. When making the changes, make sure the spacing remains consistent.
- The first change is removing the
#before
# - ./db/production:/usr/src/app/db/production
The second change is replacing
- ./db/production:/var/lib/postgresql/data
With
- ./db/production-postgres:/var/lib/postgresql/data
- The third change is replacing RANDOM_PASSWORD_REPLACE_ME with the password you generated in the earlier step
NOTE: If you cloned the repository and are building your own image, make sure you also make the change to point to your image instead of the default one. If you installed using the basic Install instructions, you can skip this step.
services:
  app:
    entrypoint: [bin/start]
    image: <image name>:release-v2
The next step is configuring the
.env file so that it connects to the PostgreSQL database. Edit your
.env file
vim .env
and add the following lines to any part of the
.env file (Making sure to replace the RANDOM_PASSWORD_REPLACE_ME)
DB_ADAPTER=postgresql
DB_HOST=db
DB_NAME=greenlight_production
DB_USERNAME=postgres
DB_PASSWORD=RANDOM_PASSWORD_REPLACE_ME
Next, test your current configuration by running
docker-compose up -d
If you see the following error, it is due to the spacing of your
docker-compose.yml file. For reference, the indentation should match the docker-compose.yml sample shown earlier in this document (two spaces per nesting level).
ERROR: yaml.parser.ParserError: while parsing a block mapping
  in "./docker-compose.yml", line 4, column 3
expected <block end>, but found '<block mapping start>'
  in "./docker-compose.yml", line 23, column 4
If no errors appear, continue to the next step. Once the containers have spun up, we need to create a new database in PostgreSQL to store our data in (Making sure to replace the RANDOM_PASSWORD_REPLACE_ME)
docker exec greenlight-v2 psql "postgresql://postgres:RANDOM_PASSWORD_REPLACE_ME@db:5432" -c 'CREATE DATABASE greenlight_production_new'
Assuming that worked successfully, the console should output:
CREATE DATABASE
Finally, copy the SQLite database and convert it to a PostgreSQL database. (Making sure to replace the RANDOM_PASSWORD_REPLACE_ME)
docker exec greenlight-v2 bundle exec sequel -C sqlite:///usr/src/app/db/production/production.sqlite3 postgres://postgres:RANDOM_PASSWORD_REPLACE_ME@db:5432/greenlight_production_new
Assuming that worked successfully, the console should output:
Databases connections successful
Migrations dumped successfully
Tables created
Begin copying data
.
.
.
Database copy finished in 2.741942643 seconds
Finally, edit your
.env file to point at the new database by replacing the line that we added in the earlier step
DB_NAME=greenlight_production
With:
DB_NAME=greenlight_production_new
Now, restart Greenlight and you should be good to go.
You can verify that everything went smoothly if you are able to sign into the accounts you had made before starting this process.
Errors after migration
If you encounter any errors after the migration, you can very easily switch back to your previous setup by removing the
.env variables that were added during this switch.
Just remove these lines and restart Greenlight
DB_ADAPTER=postgresql
DB_HOST=db
DB_NAME=greenlight_production
DB_USERNAME=postgres
DB_PASSWORD=RANDOM_PASSWORD_REPLACE_ME
Upgrading PostgreSQL versions
Before you begin, please note that this process may take some time for large databases. We recommend you schedule maintenance windows and avoid attempting to do this upgrade quickly.
Create a dump of your database
cd ~/greenlight
docker exec greenlight_db_1 /usr/bin/pg_dumpall -U postgres -f /var/lib/postgresql/data/dump.sql
docker-compose down
Create a backup of your database
sudo cp -a db db.bak
sudo mv db/production/dump.sql .
sudo rm -r db/
Switch PostgreSQL versions
Edit your
docker-compose.yml file
nano docker-compose.yml
Replace:
image: postgres:9.5
With:
image: postgres:13-alpine
Import database dump into new database
Start Greenlight
docker-compose up -d
Wait a couple of seconds and then run:
docker exec greenlight_db_1 /usr/local/bin/psql -U postgres -c "DROP DATABASE greenlight_production;"
If you get an error stating that “greenlight_production does not exist”, wait a few more seconds then try again. (Repeat until successful)
Finally:
sudo mv dump.sql db/production/
docker exec greenlight_db_1 /usr/local/bin/psql -U postgres -f /var/lib/postgresql/data/dump.sql
sudo rm db/production/dump.sql
Sign in and confirm that all users, rooms and other settings are present and correct.
Errors
If you run into any issues, you can always replace your new database with the previous information. To do so, take down Greenlight and edit your
docker-compose.yml file
cd ~/greenlight
docker-compose down
nano docker-compose.yml
Replace:
image: postgres:13-alpine
With:
image: postgres:9.5
Then, replace your current database folder with the backup you made during the upgrade process:
sudo cp -a db db-new.bak
sudo cp -a db.bak db
Start Greenlight and confirm that all users, rooms and other settings are present and correct.
Improving Greenlight’s Performance Under Load
Under heavy load, a single Greenlight server with stock settings might have trouble keeping up with the incoming requests. To improve Greenlight's performance, you can increase the number of workers used by the underlying PUMA server. Keep in mind that the more workers you run, the more memory and CPU Greenlight will use.
To set the number of workers, add
WEB_CONCURRENCY=1 to your
.env file.
It is recommended to increase the value by 1 at a time and then monitor your server to confirm there is enough memory and CPU headroom before increasing it further. Unless you are running on a very powerful server, it is recommended to keep the value at 3 or below.
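As an illustration only — the right number depends entirely on your hardware — after one round of tuning the entry in your .env file might end up looking like this:

WEB_CONCURRENCY=2

Remember to restart Greenlight afterwards so the change is picked up, as described in the "Applying .env Changes" section below.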
Adding Terms and Conditions
Greenlight allows you to add terms and conditions to the application. By adding a
terms.md file to
app/config/ you will enable terms and conditions. This will display a terms and conditions page whenever a user signs up (or logs on without accepting yet). They are required to accept before they can continue to use Greenlight.
The
terms.md file is a Markdown file, so you can style your terms and conditions as you wish.
To add terms and conditions to your docker container, create a
terms.md file in the
~/greenlight directory. Then, add the following volume to your
docker-compose.yml file.
- ./terms.md:/usr/src/app/config/terms.md
Applying
.env Changes
After you edit the
.env file, you are required to restart Greenlight in order for it to pick up the changes. Ensure you are in the Greenlight directory when restarting Greenlight. To do this, enter the following commands:
If you installed using the “Install” Instructions
docker-compose down
docker-compose up -d
If you installed using the “Customize” Instructions
docker-compose down
./scripts/image_build.sh <image name> release-v2
docker-compose up -d
See also | https://docs.bigbluebutton.org/greenlight/gl-config.html | 2022-09-25T00:59:37 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.bigbluebutton.org |
OKD supports Microsoft Azure Disk volumes. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OKD cluster. Persistent volume claims are specific to a project or namespace and can be requested by users.
Storage classes are used to dynamically provision Azure Disk volumes in OKD.
In OKD, you can use unformatted Azure volumes as persistent volumes, because OKD formats them before the first use.
importing.
Advantages:
Disadvantages:
Unity can import proprietary files from the following 3D modeling software:
Warning: Unity converts proprietary files to .fbx files as part of the import process. However, it is recommended that you export to FBX instead of saving proprietary application files directly in the Project.
Advantages:
Disadvantages:
15.8. Unique network flows¶
This section explains which APIs should be used on Fast DDS in order to have unique network flows on specific topics.
15.8.1. Background¶
IP networking is the predominant inter-networking technology used nowadays. Ethernet, WiFi, and 4G/5G telecommunication all rely on IP networking.
Streams of IP packets from a given source to destination are called packet flows, or simply flows. The network QoS of a flow can be configured when using certain networking equipment (routers, switches). Such pieces of equipment typically support 3GPP/5QI protocols to assign certain Network QoS parameters to specific flows. Requesting a specific Network QoS is usually done on the endpoint sending the data, as it is the one that usually has complete information about the network flow.
Applications may need to use specific Network QoS parameters on different topics.
This means an application should be able to:
Identify the flows being used in the communications, so they can correctly configure the networking equipment.
Use specific flows on selected topics.
15.8.2. Identifying a flow¶
The 5-tuple is a traditional unique identifier for flows on 3GPP enabled equipment. The 5-tuple consists of five parameters: source IP address, source port, destination IP address, destination port, and the transport protocol (for example, TCP or UDP).
15.8.2.1. Definitions¶
Network flow: A tuple of networking resources selected by the middleware for transmission of messages from a DataWriter to a DataReader, namely:
- Transport protocol: UDP or TCP
- Transport port
- Internet protocol: IPv4 or IPv6
- IP address
Network Flow Endpoint (NFE): The portion of a network flow specific to the DataWriter or the DataReader. In other words, each network flow has two NFEs; one for the DataWriter, and the other for the DataReader.
15.8.2.2. APIs¶
Fast DDS provides the APIs needed to get the list of NFEs used by a given DataWriter or a DataReader.
On the DataWriter, get_sending_locators() allows the application to obtain the list of locators from which the writer may send data.
On the DataReader, get_listening_locators() allows the application to obtain the list of locators on which the reader is listening.
15.8.3. Requesting unique flows¶
A unique flow can be created by ensuring that at least one of the two NFEs is unique. On Fast DDS, there are two ways to select unique listening locators on the DataReader:
The application can specify on which locators the DataReader should be listening. This is done using RTPSEndpointQos on the DataReaderQos (a sketch of this option is shown after this list). In this case it is the responsibility of the application to ensure the uniqueness of the locators used.
The application can request the reader to be created with unique listening locators. This is done using a PropertyPolicyQos including the property
"fastdds.unique_network_flows". In this case, the reader will listen on a unique port outside the range of ports typically used by RTPS.
15.8.4. Example¶
The following snippet demonstrates all the APIs described on this page:
// Create the DataWriter
DataWriter* writer = publisher->create_datawriter(topic, DATAWRITER_QOS_DEFAULT);
if (nullptr == writer)
{
    // Error
    return;
}

// Create DataReader with unique flows
DataReaderQos drqos = DATAREADER_QOS_DEFAULT;
drqos.properties().properties().emplace_back("fastdds.unique_network_flows", "");
DataReader* reader = subscriber->create_datareader(topic, drqos);

// Print locators information
eprosima::fastdds::rtps::LocatorList locators;
writer->get_sending_locators(locators);
std::cout << "Writer is sending from the following locators:" << std::endl;
for (const auto& locator : locators)
{
    std::cout << "  " << locator << std::endl;
}

reader->get_listening_locators(locators);
std::cout << "Reader is listening on the following locators:" << std::endl;
for (const Locator_t& locator : locators)
{
    std::cout << "  " << locator << std::endl;
}
Trusted Applications¶
This document tells how to implement a Trusted Application for OP-TEE, using OP-TEE's so called TA-devkit to both build and sign the Trusted Application binary. In this document, a Trusted Application running in the OP-TEE OS is referred to as a TA. Note that in the default setup a private test key distributed along with the optee_os source is used for signing Trusted Applications. See TASign for more details, including offline signing of TAs.
TA Mandatory files¶
The Makefile for a Trusted Application must be written to rely on OP-TEE TA-devkit resources in order to successfully build the target application. TA-devkit is built when one builds optee_os.
To build a TA, one must provide:
- Makefile, a make file that should set some configuration variables and include the TA-devkit make file.
- sub.mk, a make file that lists the sources to build (local source files, subdirectories to parse, source file specific build directives).
- user_ta_header_defines.h, a specific ANSI-C header file to define most of the TA properties.
- An implementation of at least the TA entry points, as extern functions: TA_CreateEntryPoint(), TA_DestroyEntryPoint(), TA_OpenSessionEntryPoint(), TA_CloseSessionEntryPoint(), TA_InvokeCommandEntryPoint()
TA file layout example¶
As an example, hello_world looks like this:
hello_world/
├── ...
└── ta
    ├── Makefile                    BINARY=<uuid>
    ├── Android.mk                  Android way to invoke the Makefile
    ├── sub.mk                      srcs-y += hello_world_ta.c
    ├── include
    │   └── hello_world_ta.h        Header exported to non-secure: TA commands API
    ├── hello_world_ta.c            Implementation of TA entry points
    └── user_ta_header_defines.h    TA_UUID, TA_FLAGS, TA_DATA/STACK_SIZE, ...
TA Makefile Basics¶
Required variables¶
The main TA-devkit make file is located in optee_os at
ta/mk/ta_dev_kit.mk. The make file supports make targets such as
all and
clean to build a TA or a library and clean the built objects.
The make file expects a couple of configuration variables:
- TA_DEV_KIT_DIR
Base directory of the TA-devkit. Used by the TA-devkit itself to locate its tools.
- BINARY and LIBNAME
These are exclusive, meaning that you cannot use both at the same time. If building a TA,
BINARY shall provide the TA filename used to load the TA. The built and signed TA binary file will be named
${BINARY}.ta. In native OP-TEE, it is the TA UUID, used by tee-supplicant to identify TAs. If one is building a static library (that will be later linked by a TA), then
LIBNAME shall provide the name of the library. The generated library binary file will be named
lib${LIBNAME}.a
- CROSS_COMPILE and CROSS_COMPILE32
Cross compiler for the TA or the library source files.
CROSS_COMPILE32 is optional. It allows targeting AArch32 builds on AArch64 capable systems. On AArch32 systems,
CROSS_COMPILE32 defaults to
CROSS_COMPILE.
Optional variables¶
Some optional configuration variables can be supported, for example:
- O
Base directory for build objects filetree. If not set, TA-devkit defaults to ./out from the TA source tree base directory.
Example Makefile¶
A typical Makefile for a TA looks something like this
# Append specific configuration to the C source build (here log=info)
# The UUID for the Trusted Application
BINARY=8aaaf200-2450-11e4-abe2-0002a5d5c51b

# Source the TA-devkit make file
include $(TA_DEV_KIT_DIR)/mk/ta_dev_kit.mk
sub.mk directives¶
The make file expects that the current directory contains a file
sub.mk that is
the entry point for listing the source files to build and other specific build
directives. Here are a couple of examples of directives one can implement in a
sub.mk make file:
# Adds /hello_world_ta.c from current directory to the list of the source
# file to build and link.
srcs-y += hello_world_ta.c

# Includes path **./include/** from the current directory to the include
# path.
global-incdirs-y += include/

# Adds directive -Wno-strict-prototypes only to the file hello_world_ta.c
cflags-hello_world_ta.c-y += -Wno-strict-prototypes

# Removes directive -Wno-strict-prototypes from the build directives for
# hello_world_ta.c only.
cflags-remove-hello_world_ta.c-y += -Wno-strict-prototypes

# Adds the static library foo to the list of the linker directive -lfoo.
libnames += foo

# Adds the directory path to the libraries pathes list. Archive file
# libfoo.a is expected in this directory.
libdirs += path/to/libfoo/install/directory

# Adds the static library binary to the TA build dependencies.
libdeps += path/to/greatlib/libgreatlib.a
Android Build Environment¶
OP-TEE’s TA-devkit supports building in an Android build environment. One can
write an
Android.mk file for the TA (stored side by side with the Makefile).
Android’s build system will parse the
Android.mk file for the TA which in
turn will parse a TA-devkit Android make file to locate TA build resources. Then
the Android build will execute a
make command to build the TA through its
generic Makefile file.
A typical
Android.mk file for a TA looks like this (
Android.mk for
hello_world is used as an example here).
# Define base path for the TA sources filetree
LOCAL_PATH := $(call my-dir)

# Define the module name as the signed TA binary filename.
local_module := 8aaaf200-2450-11e4-abe2-0002a5d5c51b.ta

# Include the devkit Android make script
include $(OPTEE_OS_DIR)/mk/aosp_optee.mk
TA Mandatory Entry Points¶
A TA must implement a couple of mandatory entry points, these are:
TEE_Result TA_CreateEntryPoint(void)
{
    /* Allocate some resources, init something, ... */
    ...

    /* Return with a status */
    return TEE_SUCCESS;
}

void TA_DestroyEntryPoint(void)
{
    /* Release resources if required before TA destruction */
    ...
}

TEE_Result TA_OpenSessionEntryPoint(uint32_t ptype,
                                    TEE_Param param[4],
                                    void **session_id_ptr)
{
    /* Check client identity, and alloc/init some session resources if any */
    ...

    /* Return with a status */
    return TEE_SUCCESS;
}

void TA_CloseSessionEntryPoint(void *sess_ptr)
{
    /* check client and handle session resource release, if any */
    ...
}

TEE_Result TA_InvokeCommandEntryPoint(void *session_id,
                                      uint32_t command_id,
                                      uint32_t parameters_type,
                                      TEE_Param parameters[4])
{
    /* Decode the command and process execution of the target service */
    ...

    /* Return with a status */
    return TEE_SUCCESS;
}
TA Properties¶
Trusted Application properties shall be defined in a header file named
user_ta_header_defines.h, which should contain:
- TA_UUID defines the TA UUID value
- TA_FLAGS defines some of the TA properties
- TA_STACK_SIZE defines the RAM size to be reserved for the TA stack
- TA_DATA_SIZE defines the RAM size to be reserved for the TA heap (TEE_Malloc() pool)
Refer to TA Properties to understand how to configure these macros.
Hint
UUIDs can be generated using python
python -c 'import uuid; print(uuid.uuid4())'
or in most Linux systems using either
cat /proc/sys/kernel/random/uuid    # Linux only
uuidgen                             # available from the util-linux package in most distributions
Example of a property header file¶
#ifndef USER_TA_HEADER_DEFINES_H
#define USER_TA_HEADER_DEFINES_H

#define TA_UUID { 0x8aaaf200, 0x2450, 0x11e4, \
        { 0xab, 0xe2, 0x00, 0x02, 0xa5, 0xd5, 0xc5, 0x1b} }

#define TA_FLAGS (TA_FLAG_EXEC_DDR | \
                  TA_FLAG_SINGLE_INSTANCE | \
                  TA_FLAG_MULTI_SESSION)

#define TA_STACK_SIZE (2 * 1024)
#define TA_DATA_SIZE  (32 * 1024)

#define TA_CURRENT_TA_EXT_PROPERTIES \
    { "gp.ta.description", USER_TA_PROP_TYPE_STRING, "Foo TA for some purpose." }, \
    { "gp.ta.version", USER_TA_PROP_TYPE_U32, &(const uint32_t){ 0x0100 } }

#endif /* USER_TA_HEADER_DEFINES_H */
Note
It is recommended to use the
TA_CURRENT_TA_EXT_PROPERTIES as above to
define extra properties of the TA.
Note
Generating a fresh UUID with suitable formatting for the header file can be done using:
python -c "import uuid; u=uuid.uuid4(); print(u); \ n = [', 0x'] * 11; \ n[::2] = ['{:12x}'.format(u.node)[i:i + 2] for i in range(0, 12, 2)]; \ print('\n' + '#define TA_UUID\n\t{ ' + \ '0x{:08x}'.format(u.time_low) + ', ' + \ '0x{:04x}'.format(u.time_mid) + ', ' + \ '0x{:04x}'.format(u.time_hi_version) + ', \\ \n\n\t\t{ ' + \ '0x{:02x}'.format(u.clock_seq_hi_variant) + ', ' + \ '0x{:02x}'.format(u.clock_seq_low) + ', ' + \ '0x' + ''.join(n) + '} }')"
Checking TA parameters¶
GlobalPlatform's TEE Client APIs
TEEC_InvokeCommand() and
TEEC_OpenSession() allow clients to invoke a TA with some invocation
parameters: values or references to memory buffers. It is mandatory that TAs
verify the parameter types before using the parameters themselves. For this a
TA can rely on the macro
TEE_PARAM_TYPE_GET(param_type, param_index) to get
the type of a parameter and check its value according to the expected parameter.
For example, if a TA expects that command ID 0 comes with params[0] being an input value, params[1] being an output value, and params[2] being an in/out memory reference (buffer), then the TA should implement the following sequence:
TEE_Result handle_command_0(void *session, uint32_t cmd_id,
                            uint32_t param_types, TEE_Param params[4])
{
    if ((TEE_PARAM_TYPE_GET(param_types, 0) != TEE_PARAM_TYPE_VALUE_IN) ||
        (TEE_PARAM_TYPE_GET(param_types, 1) != TEE_PARAM_TYPE_VALUE_OUT) ||
        (TEE_PARAM_TYPE_GET(param_types, 2) != TEE_PARAM_TYPE_MEMREF_INOUT) ||
        (TEE_PARAM_TYPE_GET(param_types, 3) != TEE_PARAM_TYPE_NONE)) {
        return TEE_ERROR_BAD_PARAMETERS;
    }

    /* process command */
    ...
}

TEE_Result TA_InvokeCommandEntryPoint(void *session, uint32_t command_id,
                                      uint32_t param_types, TEE_Param params[4])
{
    switch (command_id) {
    case 0:
        return handle_command_0(session, command_id, param_types, params);
    default:
        return TEE_ERROR_NOT_SUPPORTED;
    }
}
All REE Filesystem Trusted Applications need to be signed. The
signature is verified by optee_os upon loading of the TA. Within the
optee_os source is a directory
keys. The public part of
keys/default_ta.pem will be compiled into the optee_os binary and the
signature of each TA will be verified against this key upon loading. Currently
keys/default_ta.pem must contain an RSA key.
Warning
optee_os comes with a default private key in its source to facilitate easy development, testing, debugging and QA. Never deploy an optee_os binary with this key in production. Instead replace this key as soon as possible with a public key and keep the private part of the key offline, preferably on an HSM.
TAs are signed using the
sign_encrypt.py script referenced from
ta/mk/ta_dev_kit.mk in optee_os. Its default behaviour is to sign a
compiled TA binary and attach the signature to form a complete TA for
deployment. For offline signing, a three-step process is required: In a
first step a digest of the compiled binary has to be generated, in the second
step this digest is signed offline using the private key and finally in the
third step the binary and its signature are stitched together into the full TA.
Offline Signing of TAs¶
There are two types of TAs that can be signed offline. The in-tree TAs, which come with the OP-TEE
OS (for example the
pkcs11 TA) and are generated during the compilation of the TA DEV KIT. The
second type are any external TAs coming from the user. In both cases however, the signing process
is the same.
Offline signing is done with the following sequence of steps:
0. (Preparation) Generate a 2048 or 4096 bit RSA key for signing in a secure environment and extract the public key. For example
openssl genrsa -out rsa2048.pem 2048
openssl rsa -in rsa2048.pem -pubout -out rsa2048_pub.pem
1. Build the OP-TEE OS with the variable
TA_PUBLIC_KEY set to the public
key generated above
TA_PUBLIC_KEY=/path/to/public_key.pem make all
The build script will do two things:
- It will embed the TA_PUBLIC_KEY key into the OP-TEE core image, which will be used to authenticate the TAs.
- It will generate .stripped.elf files of the in-tree TAs and sign them with the dummy key pointed to by TA_SIGN_KEY, thus creating .ta files. Note that the generated .ta files are not to be used as they are not compatible with the public key embedded into the OP-TEE core image.
2. Build any external TA. Same as with the in-tree TAs, the building procedure can use the dummy key
pointed to by
TA_SIGN_KEY, however they are not to be used due to the incompatibility reasons
mentioned in the paragraph above.
There are now two ways to generate the final .ta files. Either re-sign the .ta files with a customized sign_encrypt.py script (left to the user to implement) or stitch the .stripped.elf files and their signatures together (explained in steps 3-5). In both cases however, note that the private key used must be the one generated in step 0.
Manually generate a digest of the generated .stripped.elf files using
sign_encrypt.py digest --key $(TA_SIGN_KEY) --uuid $(user-ta-uuid)
Sign this digest offline, for example with OpenSSL
base64 --decode digestfile | \
openssl pkeyutl -sign -inkey $TA_SIGN_KEY \
    -pkeyopt digest:sha256 -pkeyopt rsa_padding_mode:pss \
    -pkeyopt rsa_pss_saltlen:digest -pkeyopt rsa_mgf1_md:sha256 | \
base64 > sigfile
or with pkcs11-tool using a Nitrokey HSM
echo "0000: 3031300D 06096086 48016503 04020105 000420" | \ xxd -c 19 -r > /tmp/sighdr cat /tmp/sighdr $(base64 --decode digestfile) > /tmp/hashtosign pkcs11-tool --id $key_id -s --login -m RSA-PKCS-PSS --hash-algorithm SHA256 --mgf MGF1-SHA256 \ --input-file /tmp/hashtosign | \ base64 > sigfile
Manually stitch the TA and signature together
sign_encrypt.py stitch --key $(TA_SIGN_KEY) --uuid $(user-ta-uuid)
By default, the UUID is taken as the base file name for all files. Different file
names and paths can be set through additional options to
sign_encrypt.py. Consult
sign_encrypt.py --help for a full list of options and parameters. | https://optee.readthedocs.io/en/latest/building/trusted_applications.html | 2022-09-25T02:36:59 | CC-MAIN-2022-40 | 1664030334332.96 | [] | optee.readthedocs.io |
Introduction to ApplicationSet controller¶
Introduction¶
The ApplicationSet controller is a Kubernetes controller that adds support for an
ApplicationSet CustomResourceDefinition (CRD). This controller/CRD enables both automation and greater flexibility managing Argo CD Applications across a large number of clusters and within monorepos, plus it makes self-service usage possible on multitenant Kubernetes clusters.
The ApplicationSet controller works alongside an existing Argo CD installation. Argo CD is a declarative, GitOps continuous delivery tool, which allows developers to define and control deployment of Kubernetes application resources from within their existing Git workflow.
Starting with Argo CD v2.3, the ApplicationSet controller is bundled with Argo CD.
The ApplicationSet controller supplements Argo CD by adding additional features in support of cluster-administrator-focused scenarios. The
ApplicationSet controller provides:
- The ability to use a single Kubernetes manifest to target multiple Kubernetes clusters with Argo CD
- The ability to use a single Kubernetes manifest to deploy multiple applications from one or multiple Git repositories with Argo CD
- Improved support for monorepos: in the context of Argo CD, a monorepo is multiple Argo CD Application resources defined within a single Git repository
- Within multitenant clusters, improves the ability of individual cluster tenants to deploy applications using Argo CD (without needing to involve privileged cluster administrators in enabling the destination clusters/namespaces)
Note
Be aware of the security implications of ApplicationSets before using them.
The ApplicationSet resource¶
This example defines a new
guestbook resource of kind
ApplicationSet:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - list:
      elements:
      - cluster: engineering-dev
        url: https://1.2.3.4
      - cluster: engineering-prod
        url: https://2.4.6.8
      - cluster: finance-preprod
        url: https://9.8.7.6
  template:
    metadata:
      name: '{{cluster}}-guestbook'
    spec:
      project: my-project
      source:
        repoURL: https://github.com/infra-team/cluster-deployments.git
        targetRevision: HEAD
        path: guestbook/{{cluster}}
      destination:
        server: '{{url}}'
        namespace: guestbook
In this example, we want to deploy our
guestbook application (with the Kubernetes resources for this application coming from Git, since this is GitOps) to a list of Kubernetes clusters (with the list of target clusters defined in the List items element of the
ApplicationSet resource).
While there are multiple types of generators that are available to use with the
ApplicationSet resource, this example uses the List generator, which simply contains a fixed, literal list of clusters to target. This list of clusters will be the clusters upon which Argo CD deploys the
guestbook application resources, once the ApplicationSet controller has processed the
ApplicationSet resource.
Generators, such as the List generator, are responsible for generating parameters. Parameters are key-value pairs that are substituted into the
template: section of the ApplicationSet resource during template rendering.
There are multiple generators currently supported by the ApplicationSet controller:
- List generator: Generates parameters based on a fixed list of cluster name/URL values, as seen in the example above.
- Cluster generator: Rather than a literal list of clusters (as with the list generator), the cluster generator automatically generates cluster parameters based on the clusters that are defined within Argo CD.
- Git generator: The Git generator generates parameters based on files or folders that are contained within the Git repository defined within the generator resource.
- Files containing JSON values will be parsed and converted into template parameters.
- Individual directory paths within the Git repository may be used as parameter values, as well.
- Matrix generator: The Matrix generator combines the generated parameters of two other generators.
See the generator section for more information about individual generators, and the other generators not listed above.
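To give a flavour of a generator other than List, the sketch below uses the Cluster generator to target every cluster registered in Argo CD that carries a particular label. The label, project, and repository values are placeholders, and {{name}} and {{server}} are the parameters the Cluster generator produces for each matching cluster:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - clusters:
      selector:
        matchLabels:
          env: staging          # placeholder label; omit the selector to match every registered cluster
  template:
    metadata:
      name: '{{name}}-guestbook'
    spec:
      project: my-project
      source:
        repoURL: https://github.com/infra-team/cluster-deployments.git
        targetRevision: HEAD
        path: guestbook/{{name}}
      destination:
        server: '{{server}}'
        namespace: guestbook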
Parameter substitution into templates¶
Independent of which generator is used, parameters generated by a generator are substituted into
{{parameter name}} values within the
template: section of the
ApplicationSet resource. In this example, the List generator defines
cluster and
url parameters, which are then substituted into the template's
{{cluster}} and
{{url}} values, respectively.
After substitution, this
guestbook
ApplicationSet resource is applied to the Kubernetes cluster:
- The ApplicationSet controller processes the generator entries, producing a set of template parameters.
- These parameters are substituted into the template, once for each set of parameters.
- Each rendered template is converted into an Argo CD
Application resource, which is then created (or updated) within the Argo CD namespace.
- Finally, the Argo CD controller is notified of these
Application resources and is responsible for handling them.
With the three different clusters defined in our example --
engineering-dev,
engineering-prod, and
finance-preprod -- this will produce three new Argo CD
Application resources: one for each cluster.
Here is an example of one of the
Application resources that would be created, for the
engineering-dev cluster at
1.2.3.4:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: engineering-dev-guestbook
spec:
  source:
    repoURL: https://github.com/infra-team/cluster-deployments.git
    targetRevision: HEAD
    path: guestbook/engineering-dev
  destination:
    server: https://1.2.3.4
    namespace: guestbook
We can see that the generated values have been substituted into the
server and
path fields of the template, and the template has been rendered into a fully-fleshed out Argo CD Application.
The Applications are now also visible from within the Argo CD UI:
The ApplicationSet controller will ensure that any changes, updates, or deletions made to
ApplicationSet resources are automatically applied to the corresponding
Application(s).
For instance, if a new cluster/URL list entry was added to the List generator, a new Argo CD
Application resource would be accordingly created for this new cluster. Any edits made to the
guestbook
ApplicationSet resource will affect all the Argo CD Applications that were instantiated by that resource, including the new Application.
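A sketch of such an addition — the cluster name and URL below are placeholders — is simply a fourth element appended to the generator's list:

      # appended under the existing list generator's elements
      - cluster: engineering-qa
        url: https://5.6.7.8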
While the List generator's literal list of clusters is fairly simplistic, much more sophisticated scenarios are supported by the other available generators in the ApplicationSet controller. | https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/ | 2022-09-25T00:52:41 | CC-MAIN-2022-40 | 1664030334332.96 | [array(['../../assets/applicationset/Introduction/List-Example-In-Argo-CD-Web-UI.png',
'List generator example in Argo CD Web UI'], dtype=object) ] | argo-cd.readthedocs.io |
Getting started with Aista Magic Cloud in 4 minutes
Aista Magic Cloud is a low-code CRUD generator allowing you to generate CRUD apps in one second by clicking a button. It works by wrapping your database into Hyperlambda HTTP API endpoints. Below is the CRUD generator that automatically creates your web API.
Generate CRUD endpoints for your database
First you need a database. You can connect to an existing database in the Management/Config menu item. Click
the button that says “Add connection string” and make sure you use
{database} as your database selector such
that Magic can dynamically connect to all databases in your database server. If you don’t have a database yourself,
and only want to play around with Magic, you can find example databases in the Management/Plugins section.
Choose any plugin that starts with “SQLite” and ends with “DB”, and click “Install”.
Then go to Tools/CRUD Generator. Choose your database and click “Crudify all tables”. You can also select individual tables and configure these as you wish. This allows you to for instance apply authorisation requirements for your individual endpoints, add reCAPTCHA requirements for invoking endpoints, publish web socket messages as endpoints are invoked, log invocations to endpoints, etc.
When you are done generating your CRUD API, you can go to Analytics/Endpoints to play with your endpoints. This component is similar to Swagger, and allows you to see which arguments your endpoints can handle, to easily implement some kind of frontend. Below is a video demonstrating the entire process.
Generate SQL endpoints
Magic allows you to create HTTP endpoints with SQL only. This allows you to compose some SQL statement, and rapidly wrap it inside an HTTP endpoint. You can find this component in the Tools/CRUD Generator/SQL section of your dashboard. Choose your database, provide some SQL, add arguments that you reference in your SQL, and click the “Generate” button. Below is a screenshot of the process.
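Besides the screenshot, a concrete sketch may help. The table and column names below are purely illustrative, and the @-style reference is an assumption about the argument syntax — use whatever argument names you declared when generating the endpoint:

-- 'city' is declared as an endpoint argument in the SQL endpoint generator
select name, email
from customers
where city = @city
limit 25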
SQL Studio
In addition to the above, Magic contains a web based SQL Studio, allowing you to execute SQL towards your database of choice. This component works transparently towards SQL Server, MySQL, SQLite, and PostgreSQL, and allows you to save frequently used SQL snippets, and do basic administration of your databases. The SQL component in Magic supports syntax highlighting on your tables, autocomplete, and most other features you would expect from an SQL Workbench type of component. To trigger the autocompletion feature use CTRL+SPACE on Windows or FN+CONTROL+SPACE on your mac. When you have created some SQL statement and saved it, you can load this SQL from the SQL endpoint CRUD Generator to wrap it inside an HTTP SQL endpoint.
Hyper IDE
Magic also contains its own IDE or integrated development environment, a fully fledged web based IDE. Hyper IDE provides syntax highlighting for most popular programming languages, in addition to autocomplete for Hyperlambda. With Hyper IDE you can edit your code, save it, and immediately see the result of your modifications by executing your endpoint using the menu dropdown button.
Use Magic from anywhere!
Notice, although it’s obviously more convenient to use a desktop computer as your primary development machine, you can use all components in Magic from your phone if required. Below is a video where we demonstrate the Crudifier and Hyper IDE from an iPhone.
Below you can find more information about Magic, such as tutorials and its reference documentation. | https://docs.aista.com/ | 2022-09-25T01:07:19 | CC-MAIN-2022-40 | 1664030334332.96 | [array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/backend-crud.jpg',
'CRUD API generator'], dtype=object)
array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/sql-web-api.jpg',
'Creating a Web API using SQL'], dtype=object)
array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/sql-autocomplete.jpg',
"Magic's web based SQL Workbench"], dtype=object)
array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/hyper-ide-actions.jpg',
"Magic's Hyper IDE"], dtype=object) ] | docs.aista.com |
Installing plugins for playing movies and music
As a Fedora user and system administrator, you can use these steps to install additional multimedia plugins that enable you to play various video and audio types.
Prerequisites
Procedure
Use the
dnfutility to install packages that provide multimedia libraries:
sudo dnf install gstreamer1-plugins-{bad-\*,good-\*,base} gstreamer1-plugin-openh264 gstreamer1-libav --exclude=gstreamer1-plugins-bad-free-devel
sudo dnf install lame\* --exclude=lame-devel
sudo dnf group upgrade --with-optional Multimedia
Why do my co-workers and I have too many/few banked hours?
There are several reasons why you might have too many or too few banked hours. It could be absence categories that have not been set up correctly, or even wrong co-worker schedules.
Absences that have not been set up correctly.
If you discover that a co-worker or you yourself have too many or too few banked hours, the first thing you should check is whether you have used an absence type that is not properly set up. In general this could mean that you have used an absence type that added to your banked hours when it should not have. For instance, sick leave or "time off in-lieu" where the option to subtract from or add to banked hours has not been checked might lead to significant discrepancies in banked hours.
Once you have made the necessary adjustments to the absence types, these will have a retroactive effect on the hours balance, automatically adjusting it to the correct amount.
Schedules that do not cover the proper period of employment.
A common mistake behind big discrepancies in the banked hours balance is that the schedule has not been set up properly. For instance, if a new employee starts logging hours but no schedule has been assigned, Moment will add all hours worked as banked hours.
In order to set this up properly, you have to go to co-workers > choose the co-worker > settings and then click "schedule".
Set up a schedule that properly covers the co-worker's employment period. Once this has been defined, it will retroactively affect the hours balance, adjusting and reducing the amount of banked hours that the employee has.
Why not just adjust the hours balance at the end of the month or year?
Even though it is fully feasible to adjust the hours balance at the end of the month or at the end of the year, here is the main reason why this is not recommended:
It doesn't fix the underlying problems:
The problems occurred in the first place because you didn't have proper routines for setting up schedules, didn't know how to do it, or simply forgot. Adjusting the hours is just a stop-gap measure; it does not remove the need for proper routines for how hours should be registered and which types of absences should be used. If you don't set up schedules properly, the same discrepancies will simply keep appearing.
By adjusting schedules and absence types, you limit the possibility of the banked hours being wrong again.
User management functionality is provided by default in all WSO2 Carbon-based products and is configured in the
<PRODUCT_HOME>/repository/conf/user-mgt.xml file. This documentation explains how to set up a repository for storing authorization information (role-based permissions) and how to change the relevant configurations. See the related topics for information on how user stores are configured.
The repository that stores Permissions should always be an RDBMS. The Authorization Manager configuration in the user-mgt.xml file (stored in the
<PRODUCT_HOME>/repository/conf/ directory) connects the system to this RDBMS.
Follow the instructions given below to set up and configure the Authorization Manager. Note in particular the GetAllRolesOfUserEnabled property, which appears in this configuration as <Property name="GetAllRolesOfUserEnabled">false</Property>.
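For orientation, the Authorization Manager block in user-mgt.xml typically looks something like the sketch below; treat the class name and the exact property set as version-dependent and verify them against the user-mgt.xml shipped with your product:

<AuthorizationManager class="org.wso2.carbon.user.core.authorization.JDBCAuthorizationManager">
    <Property name="AdminRoleManagementPermissions">/permission</Property>
    <Property name="AuthorizationCacheEnabled">true</Property>
    <Property name="GetAllRolesOfUserEnabled">false</Property>
</AuthorizationManager>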
Related Topics
- Configuring User Stores: This topic explains how the repositories for storing information about Users and Roles are configured.
- Setting up the Physical Database: This section explains how you can set up a new RDBMS and configure it for your system. | https://docs.wso2.com/display/IS570/Configuring+the+Authorization+Manager | 2022-09-25T03:00:33 | CC-MAIN-2022-40 | 1664030334332.96 | [] | docs.wso2.com |
Create an Atlas User¶
Note
Groups and projects are synonymous terms. Your
{GROUP-ID} is the
same as your project ID. For existing groups, your group/project ID
remains the same. The resource and corresponding endpoints use the
term
groups.
The following resource creates a new Atlas user and assigns it to one or more of your Atlas projects and organizations:
Important
Atlas limits Atlas user membership to a maximum of 250 Atlas users per team.
Atlas limits Atlas user membership to 500 Atlas users per project and 500 Atlas users per organization. Once an organization has reached this limit, you cannot add new Atlas users to it without first removing existing Atlas users from the organization membership.
Resource¶
Create a new user. | https://docs.atlas.mongodb.com/reference/api/user-create/ | 2020-01-18T01:55:47 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.atlas.mongodb.com |
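The resource is the users endpoint of the Atlas Administration API v1.0. The request-body sketch below is abridged and partly from memory — the field values are placeholders and the authoritative list of required fields is the request-body table in the API reference:

POST /api/atlas/v1.0/users

{
  "username": "[email protected]",
  "password": "changeMe123!",
  "emailAddress": "[email protected]",
  "firstName": "Jane",
  "lastName": "Doe",
  "country": "US",
  "roles": [{
    "orgId": "{ORG-ID}",
    "roleName": "ORG_MEMBER"
  }]
}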
Memory¶
This section of the library includes components for manipulating values that change over time. The components in this section are implemented by using simple Boolean or switching logic together with a One Frame Delay component to obtain the value that a variable had on the previous frame of execution.
Is First Frame¶
The Is First Frame component outputs
True on the first frame of execution and
False on every subsequent
frame. The example implementation simply feeds a literal Boolean value into a One Frame Delay.
Held and Transitioned¶
These components are similar to the standard Transition primitive in CertSAFE, except that they do not have a specific
literal value that they are looking for. The Held component outputs
True if the input value was the same on the
previous frame as on this frame. The Transitioned component outputs
True if the input value was different on the
previous frame than on this frame. Internally, these components use the Is First Frame component so that they
never output
True on the first frame of execution, regardless of the input value.
Hysteresis¶
This component takes a numeric input and compares it against lower and upper bound values. The component has a Boolean
state output, which starts out initialized to
False. If the input goes above the on limit (upper bound), the
output changes to
True. If the input drops below the off limit (lower bound), the output changes to
False.
Internally, this component uses an E Latch so that, if the input is simultaneously above the on
limit and below the off limit, the output retains its previous value. (This can only happen if the off limit is
greater than or equal to the on limit.) | https://docs.certsafe.com/example-components/memory.html | 2020-01-18T01:42:00 | CC-MAIN-2020-05 | 1579250591431.4 | [array(['../_images/is-first-frame.png', '../_images/is-first-frame.png'],
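The frame-by-frame behaviour described in this section is easy to state in ordinary code. The sketch below is not CertSAFE syntax — it is a plain C++ rendering of the hysteresis semantics, with arbitrary limit values, and with strict comparisons chosen as an assumption (the text only says "goes above" and "drops below"):

#include <initializer_list>
#include <iostream>

// Boolean state starts false, goes true above the "on" limit, goes false
// below the "off" limit, and otherwise keeps the value from the previous frame.
struct Hysteresis {
    double off_limit;
    double on_limit;
    bool state = false;  // output starts initialized to false

    // Called once per frame of execution with the numeric input.
    bool step(double input) {
        const bool above_on  = input > on_limit;
        const bool below_off = input < off_limit;
        if (above_on && below_off) {
            // Both at once (only possible when off_limit >= on_limit):
            // retain the previous value, mirroring the E Latch behaviour.
        } else if (above_on) {
            state = true;
        } else if (below_off) {
            state = false;
        }
        // Between the limits: retain the previous value.
        return state;
    }
};

int main() {
    Hysteresis h{/*off_limit=*/2.0, /*on_limit=*/5.0};
    for (double x : {0.0, 6.0, 4.0, 1.0, 3.0}) {
        std::cout << x << " -> " << std::boolalpha << h.step(x) << '\n';
    }
    // Prints: false, true, true, false, false
}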
dtype=object)
array(['../_images/held-transitioned.png',
'../_images/held-transitioned.png'], dtype=object)
array(['../_images/hysteresis.png', '../_images/hysteresis.png'],
dtype=object) ] | docs.certsafe.com |