| content | url | timestamp | dump | segment | image_urls | netloc |
|---|---|---|---|---|---|---|
| string (0-557k chars) | string (16-1.78k chars) | timestamp[ms] | string (9-15 chars) | string (13-17 chars) | string (2-55.5k chars) | string (7-77 chars) |
GenericI2c (community library)
Summary
Library providing class for controlling and reading i2c devices
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
About
This library provides a class called i2cDevices to manage generic control of any i2c slaves connected to the master.
- Particle Photon (not tested with Electron but should work)
- Basic understanding of developing on the Particle platform
How to use this library
The library must be imported into your application. This can be done through the Particle WEB IDE:
- Click Libraries
- Select GenericI2c (type into textbox to filter libraries)
- Click "Include in App" button
- Select the App you want to include the library in
- Click Add to this app
For more information see Particle's documentation
Example use
Once the library is included in your application you should see an include statement at the top like this:
//This #include statement was automatically added by the Particle IDE.
#include "GenericI2c/GenericI2c.h"
Now you need to instantiate an object of the library for use in your application like this:
GenericI2c i2cController;
Here is an example use case for the class, triggering relays based on a temperature using the following products:
-
-
// This #include statement was automatically added by the Particle IDE.
#include "GenericI2c/GenericI2c.h"

//Set variables for Particle cloud
String temperature = "";
String relayStatus = "";

//Instantiate I2C class
i2cDevices devices;

//Set addresses for connected devices
int relayAddr = 32;
int tempAddr = 72;

void processTemp(int vals[]);
void processRelays(int val[]);

void setup() {
    Particle.variable("relayStatus", relayStatus);
    Particle.variable("Temperature", temperature);
    //Initialize I2C communication
    devices.init();
    //Add devices for use
    devices.addDevice(relayAddr, "0,252|6,252");
    devices.addDevice(tempAddr, "1,96");
    //initialize devices
    devices.initDevices();
}

void loop() {
    if (millis() % 1000 == 0) {
        //Once a second check the temperature and relay status
        devices.readI2cCommand(tempAddr, "0,2", processTemp);
        devices.readI2cCommand(relayAddr, "10,1", processRelays);
    }
}

//Callback function for reading temperature
void processTemp(int vals[]) {
    //Process temperature to get a real world value
    float temp = vals[0];
    temp += (float)vals[1] / 256.00;
    float tempF = temp * 1.8 + 32;
    //Set the temperature variable
    temperature = String(tempF, 2);
    if (tempF > 73) {
        //If the temperature in Fahrenheit is greater than 73, turn on both relays
        devices.sendCommand(relayAddr, "10,3");
    } else {
        //Otherwise turn them both off
        devices.sendCommand(relayAddr, "10,0");
    }
}

//Callback function for checking relay status
void processRelays(int val[]) {
    relayStatus = val[0];
    relayStatus += " - Relay 1 is ";
    if (val[0] == 3 || val[0] == 1) {
        relayStatus += "on";
    } else {
        relayStatus += "off";
    }
    relayStatus += ", Relay 2 is ";
    if (val[0] > 1) {
        relayStatus += "on";
    } else {
        relayStatus += "off";
    }
}
Public accessible methods
void init();
This method simply initializes I2C communication; it MUST be called before any read or write commands.
String scan();
Scan the whole range of I2C busses and place the available devices into a Log.
bool addDevice(int address); bool addDevice(int address, String initCmds);
This method adds a device to the registry of the class. If no commands are sent a placeholder of 0 is set for the initialization routine.
int initDevices();
Initialize all devices in registry with their appropriate initialization commands.
bool deviceExists(int address);
Checks if a device exists in the registry. This command DOES NOT validate that the device is connected.
String getDevice(int address); String getDevice(int index, int &address);
Fetches the device from the registry, returns the initialization commands and, in the second case, sets the address. (index refers to the index in the registry)
int sendCommand(String command); int sendCommand(int addr, String command);
Send command to a device. If the address is sent first, it must be omitted from the command string. The command string should be a comma-delimited list of integers and may be of a variable length. This method will retry 3 times if the command fails.
int sendCommands(String command); int sendCommands(int addr, String command);
Sends multiple commands to a device, exactly like sendCommand, except it expects a pipe (|) delimited list of commands. This method will retry the entire list of commands 3 times if the final command fails
int processCommand(String command); int processCommand(int addr, String command);
Processes a command, similar to sendCommand except it will never retry
int readI2cCommand(String command); int readI2cCommand(int addr, String command); int readI2cCommand(String command, void (*fptr)(int*)); int readI2cCommand(int addr, String command, void (*fptr)(int*));
Send read commands to device, if address is sent it must be omitted from the commands. The final part of the comma delimited read command should be the number of bytes expected. A function may be sent in as the last argument and will be used as a callback, sending the bytes received to it.
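For illustration, a minimal hedged sketch that combines these calls; the address 72 and the command strings are example values borrowed from the temperature example above, not values required by the library:
i2cDevices devices;

//Callback that receives the bytes returned by a read command
void onRead(int vals[]) {
    //vals[0] and vals[1] hold the two bytes requested below
}

void setup() {
    devices.init();                            //MUST be called before any read/write
    devices.addDevice(72, "1,96");             //register a device at address 72
    devices.initDevices();                     //send the initialization commands
    devices.sendCommand(72, "1,96");           //write value 96 to register 1
    devices.readI2cCommand(72, "0,2", onRead); //read 2 bytes starting at register 0
}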
int read(int addr, int registerAddress, int bytes, int* buff);
Not terribly useful by itself. This is the function that does the work for readI2cCommand
Browse Library Files
Source: https://docs.particle.io/cards/libraries/g/GenericI2c/
Returns: the tag selected by the user.
Makes a tag selection field.
Tag field in an Editor window.
using UnityEngine;
using UnityEditor;

// Change the Tag and/or the layer of the selected GameObjects.
class EditorGUITagLayerField : EditorWindow
{
    string selectedTag = "";
    int selectedLayer = 0;

    [MenuItem("Examples/Tag - Layer for Selection")]
    static void Init()
    {
        EditorWindow window = GetWindow<EditorGUITagLayerField>();
        window.position = new Rect(0, 0, 350, 70);
        window.Show();
    }

    void OnGUI()
    {
        selectedTag = EditorGUI.TagField(
            new Rect(3, 3, position.width / 2 - 6, 20),
            "New Tag:", selectedTag);
        selectedLayer = EditorGUI.LayerField(
            new Rect(position.width / 2 + 3, 3, position.width / 2 - 6, 20),
            "New Layer:", selectedLayer);

        if (Selection.activeGameObject)
        {
            if (GUI.Button(new Rect(3, 25, 90, 17), "Change Tags"))
            {
                foreach (GameObject go in Selection.gameObjects)
                    go.tag = selectedTag;
            }

            if (GUI.Button(new Rect(position.width - 96, 25, 90, 17), "Change Layers"))
            {
                foreach (GameObject go in Selection.gameObjects)
                    go.layer = selectedLayer;
            }
        }
    }

    void OnInspectorUpdate()
    {
        Repaint();
    }
}
Shared Module Block
Use this block when you want to reference a Shared Module in an application.
Shared Modules are useful for reusing code from multiple applications, as well as for splitting larger applications into smaller and more manageable chunks. Once you have created a Shared Module, you can use the Shared Module block to invoke the module into your application.
If you change a Shared Module, you also change all of the applications that use that module. If an application uses the Latest version of a module, and the application is published, it starts using the new state of the Shared Module. If an application uses a specific version of the Shared Module (not the Latest), it does not receive the latest changes in the Shared Module, even if the application is published again.
For more information about how to create and manage shared modules, see Shared Module.
Module tab
Select a Shared Module or Template.
All Shared Modules that have at least one version are listed. Once a Shared Module is selected, all published versions of the module are shown. Usually the latest version should be selected, unless there is an incompatibility with the latest version.
Templates are used only with the Callback block. They are read-only and cannot be edited or deleted. | https://all.docs.genesys.com/PEC-ROU/Current/Designer/SharedMod | 2021-05-06T01:50:01 | CC-MAIN-2021-21 | 1620243988724.75 | [] | all.docs.genesys.com |
Setting up Hive metastore for Atlas
As Administrator, you might plan to recommend Atlas for Hive metadata management and data governance. You have to check that the Hive metastore for Atlas is set up so that users can build catalogs of data assets, classify, and govern the assets. If Atlas is not set up, you learn how to do so below.
- In Cloudera Manager, click .
- Choose a method based on the results of your search:
- If Cloudera Manager finds the Atlas Service, check the checkbox to enable the Hive Metastore hook in your Cloudera Manager instance.
- If Cloudera Manager does not find the Atlas Service, in Hive Service Advanced Configuration Snippet (Safety Valve) for atlas-application properties, enter an XML snippet in the value element that provides the name of your Atlas service,
myatlasservice in the example below.
<property>
  <name>atlas_service</name>
  <value>myatlasservice</value>
</property>
- Save changes.
- Restart the Hive metastore service. | https://docs.cloudera.com/cdp-private-cloud/latest/migrate-hdp-hive-workloads/topics/amb-enable-hms-cm.html | 2021-05-06T02:03:59 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.cloudera.com |
Hydra employs PoS (Proof of Stake) consensus mechanism, which is different from Bitcoin's PoW (Proof of Work). The mining process in PoS system is called staking.
Basic requirements for staking:
Run a Hydra fullnode, and keep online (Since Hydra is using PoS, we don't need any mining machine, just PC or even Raspberry Pi can run a fullnode);
Have some HYDRA in the wallet (full node). Any amount of HYDRA can be used for staking; more HYDRA means a higher chance to stake.
If you have no HYDRA yet, please get some from the market before doing the following staking setup.
Currently, the Hydra Core wallet is the only wallet that supports Hydra PoS staking. Note that other wallets like the mobile wallet and Hydra Electrum are not able to stake for the time being.
Two ways to stake:
Method 1: Staking with hydrad, using the command line; suitable for Linux/OSX/Windows/Raspberry Pi users who are familiar with command line tools.
Method 2: Staking with the hydra-qt wallet, with GUI; suitable for common users.
Either way works in the same way for staking, so you can choose either method you like.
To run hydrad, please refer to "How to deploy Hydra node".
Follow the guidance to run hydrad:
./hydrad -daemon
Staking is default on for hydrad, so no need for other options if you only want to stake.
First you can generate a new address with:
./hydra-cli getnewaddress
This will generate a new address with Prefix 'H'. You can send some HYDRA to this new generated address for staking. You can generate as many addresses as you like, and send arbitrary HYDRA as you like for staking.
Note: The coin should wait for 500 blocks before being able to stake, i.e. about 17 hours to MATURE.
After the Hydra node syncs to the latest block, you can check the current balance with ./hydra-cli getbalance or get the UTXO list with ./hydra-cli listunspent.
Please do following steps after your coin is mature.
Check current staking info with:
./hydra-cli getstakinginfo
weight stands for the amount of HYDRA that is staking right now, with unit 10^-8 HYDRA; in this example, 0.532 HYDRA is staking.
expectedtime stands for the expected time until you get a reward; the unit is seconds.
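For orientation, an illustrative, trimmed example of what the staking info output might look like; the field values here are examples only (using the 0.532 HYDRA figure above), and the exact field set may vary by wallet version:
{
  "enabled": true,
  "staking": true,
  "weight": 53200000,
  "expectedtime": 86400
}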
If your wallet is not encrypted, you can skip this section. However, for security, we recommend you encrypt your wallet. (How to encrypt?)
The Hydra wallet can be encrypted with encryptwallet. However, staking will be stopped while the wallet is encrypted. To continue staking after encryption, unlock the wallet for staking:
./hydra-cli walletpassphrase "" 99999999 true
The meaning of the arguments can be found in the documents "How to encrypt?".
After unlocking, you can double-check getstakinginfo; it should look the same as the previous unlocked result, and staking becomes true.
For how to use the Hydra-qt wallet, please refer to the Hydra qt wallet tutorial. Currently supported platforms: Mac/Linux/Windows.
Launch the wallet.
If you already have some Hydra in your wallet, you might skip this step.
If not, please send some Hydra to your wallet first. (How to receive?).
Note: The coin should wait for 500 blocks before being able to stake, i.e. about 17 hours to MATURE.
Make sure to activate staking in the "stake" tab of your wallet.
The flash sign at the bottom of wallet shows staking info :
Solid black flash means it is staking now. For more information, you can put your mouse on the flash, e.g.:
Staking: if it is staking;
Your weight is: How many Hydra are able to be used for staking, unit is Hydra;
Network weight is: How many Hydra are staking in the network, unit is Hydra;
Expected time: expected time to get reward, unit is Day.
Hollow flash means it is not staking
Possible reasons for not staking:
1. There are no coins or no mature coins (mature coins need more than 500 confirmations (blocks)) -- Solution: send some Hydra to the wallet and wait for 500 blocks (about 17 hours);
2.Wallet is locked/encrypted -- Solution: unlock the wallet for staking. (How to unlock?)
No flash sign means staking is disabled
3.Staking is disabled -- Solution: enable staking in the hydra.conf (-staking=true)(How to set hydra.conf?)
HYDRA's block rewards are distributed in an incremental inflationary model.
Once a stake is successful, you will get the reward immediately, e.g. 15.08 HYDRA.
The staked coins (UTXO) will be locked for 500 blocks, during this period, it cannot be spent nor be used to stake.
Staking is enabled by default for the Hydra wallet. If you need to disable staking for some reason (for example, exchanges are always recommended to disable staking), you can follow any one of the 3 ways below:
1 Add -staking=false when running the Hydra node:
./hydrad -staking=false -daemon
For the qt wallet, it is like:
./hydra-qt -staking=false
2 Add the config staking=false in hydra.conf; (How to set hydra.conf?)
3 Encrypt the wallet, since an encrypted wallet will automatically stop staking. (How to unlock?)
When entering a project, you will see the following startup view.
There are three main components to be aware of when navigating in Oliasoft WellDesign:
Global Save Button: used to save new information. This button has to be clicked in order to save new information, i.e. navigating away from the Oliasoft WellDesign webpage without clicking the save button will cause your most recently entered data to be lost.
Module Menu Button: used to expand the module menu, which displays the modules and sections to be found in Oliasoft WellDesign.
Settings Menu Button: used to enter user and company settings. These are not linked to a certain project, but are global settings for you as a user.
The Module Menu button expands a menu displaying all the modules and sections to be found in Oliasoft WellDesign. As seen in the figure, the modules are set up chronologically according to industry workflows and indicate typical data results required in subsequent sections. For example, a trajectory design has to be entered prior to adding a casing design.
When entering a section in the application, you will be informed if some data has to be defined prior to performing any work in that section. For example, you will be told to define a well schematic design prior to performing temperature simulations.
The Settings Menu button expands a menu displaying the following options:
A Switch Company option will be visible for users with access to multiple organizations and allows the user to switch easily between the companies. | https://docs.oliasoft.com/user-guides-1/how-to-navigate-in-the-application | 2021-05-06T00:33:58 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.oliasoft.com |
Glossary Permissions
As an administrator user of an organization, you have the option to define which user roles have permission to modify a TX glossary of a specific project in Transifex.
In order to do that:
Visit your organization settings page and click on Glossary
Click on the gear icon of the specific glossary you want to define the permissions for and then click 'User Permissions'
In the pop-up window, define which user roles you want to have permission to add/edit/delete glossary entries
Once you apply the changes, the user roles which haven't been checked will no longer be authorized to apply changes to your project's glossary. An appropriate message will appear to them in the Transifex Web Editor:
Custom Post Types
This section will detail the different post types that require explanation. The custom post types in use on the website are as follows.
- News
- People
- Videos
- Publications
- Events
- Courses
- Partners
- Philanthropist Resources
- CLCL Nonprofits
- Summit Blog Posts | https://pacs.fairnorth-docs.com/custom-post-types/ | 2021-05-05T23:49:44 | CC-MAIN-2021-21 | 1620243988724.75 | [] | pacs.fairnorth-docs.com |
Migrating Advanced Content Modules
Originally, the advanced layouts were handled through the use of custom functionality we called Advanced Content.
This section provides an explanation of how to reproduce the functionality from Advanced Content using the blocks editor and custom blocks. The following is a screenshot of the legacy Advanced Content Modules. Each will be addressed individually with an explanation of how to recreate them using standard or custom blocks.
WYSIWYG
Use the native block functionality for most WYSIWYG formats. Use the Styled Panel Container to add a custom background behind your content.
Multiple Columns
Use the native Gutenberg columns block functionality. Use the Styled Panel Container to add a custom background behind your content.
Alternating Images
Use the Alternating Image Row block to achieve the same layouts as the Alternating Images advanced content module.
Accordion
There isn’t an out-of-the-box accordion block that comes with the Gutenberg editor. We installed a plugin that adds this functionality. Use the Accordion Item block supplied by that plugin.
Box Grid
Use the regular columns block, and add a Grid Box block within each column.
Photo Link Grid
Use the regular columns block, and add a Photo Link Box block within each column.
Download Boxes
Use the regular columns block, and add a Download Box block within each column.
People
Use the People Grid block.
Upcoming Events
Use the Event Grid block.
Videos
Use the Video Grid block.
Publications
Use the Publication Grid block.
Post Grid
Use the Post Grid block.
PDF Embed
The PDF embedder plugin comes with it’s own custom block called PDF Embedder.
Social Links
Use the out-of-the-box Social Icons block.
Use the out-of-the-box Twitter block to insert a Twitter feed. | https://pacs.fairnorth-docs.com/gutenberg-block-editor-migration/migrating-advanced-content-modules/ | 2021-05-06T01:38:19 | CC-MAIN-2021-21 | 1620243988724.75 | [] | pacs.fairnorth-docs.com |
Suggested for Advanced users
This Docker image starts up a node using the latest wallet binary.
It can be pulled using docker pull locktrip/hydra-node and started with:
docker run -d -P --name Hydra -v /src/hydra:/hydra -e TZ=Europe/Sofia -i -t -p 3338:3338 locktrip/hydra-node
Make sure to set your time zone according to your locale. Startup flags can be set as well, such as -staking=false to disable staking.
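To interact with the running node, you can execute wallet commands inside the container; a minimal sketch, assuming the container is named Hydra as in the run command above and that hydra-cli is on the container's PATH:
docker exec -it Hydra hydra-cli getstakinginfo   # query staking status inside the container
docker logs -f Hydra                             # follow the node's log output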
Special credit to the following HYDRA community members who have contributed to the expansion of the community tools section: | https://docs.hydrachain.org/community-tools/docker-image-for-hydra-node | 2021-05-06T01:14:14 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.hydrachain.org |
Add a virtual host to the collection
Select the suited HttpDriver instance, filtered by address and port pair
Select a virtual host match for the specified request according to RFC 7230 criteria
Retrieve the group's default host
Retrieve an array of unique socket addresses on which hosts should listen
Retrieve stream encryption settings by bind address | https://docs.kelunik.com/amphp/aerys/v0.4.3/classes/aerys/vhostcontainer | 2021-05-06T01:05:01 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.kelunik.com |
The following tips can help you with searching and filtering.
- Select a panel and drag it to its new position.
Create an inline panel for a dashboard
When you create an inline panel, you select a visualization and specify a search for the panel.
- Select Edit to open the dashboard editor.
If this is your first time working with Simple XML, see Editing simple XML. See also the Simple XML Reference for more information on panel configurations.
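For orientation, a minimal Simple XML sketch of a dashboard with a single inline chart panel; the label, title, and search string are placeholders rather than required values:
<dashboard>
  <label>Example dashboard</label>
  <row>
    <panel>
      <chart>
        <title>Events over time</title>
        <search>
          <query>index=_internal | timechart count</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>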
Source: https://docs.splunk.com/Documentation/Splunk/latest/Viz/AddPanels
MSD Module¶
Overview
Details
The freud.msd module provides functions for computing the mean-squared-displacement (MSD) of particles in periodic systems.
- class freud.msd.MSD¶
Bases: freud.util._Compute [CPC+11].
- Parameters
box (freud.box.Box, optional) – If not provided, the class will assume that all positions provided in calls to compute() are already unwrapped. (Default value = None).
mode (str, optional) – Mode of calculation. Options are 'window' and 'direct'. (Default value = 'window').
compute(self, positions, images=None, reset=True)¶
Calculate the MSD for the positions provided.
Note
Unlike most methods in freud, accumulation for the MSD is split over points rather than frames of a simulation. The reason for this choice is that efficient computation of the MSD requires using the entire trajectory for a given particle. As a result, when setting reset=False, you must provide the positions of each point over the full length of the trajectory, but you may call compute multiple times with different subsets of the points to calculate the MSD over the full set of positions. The primary use-case is when the trajectory is so large that computing an MSD on all particles at once is prohibitively expensive.
- Parameters
positions ((\(N_{frames}\), \(N_{particles}\), 3)
numpy.ndarray) – The particle positions over a trajectory. If neither box nor images are provided, the positions are assumed to be unwrapped already.
images ((\(N_{frames}\), \(N_{particles}\), 3)
numpy.ndarray, optional) – The particle images to unwrap with if provided. Must be provided along with a simulation box (in the constructor) if particle positions need to be unwrapped. If neither are provided, positions are assumed to be unwrapped already. (Default value =
None).
reset (bool) – Whether to erase the previously computed values before adding the new computation; if False, will accumulate data (Default value: True).
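For example, a minimal usage sketch based on the API above; the trajectory here is random data generated purely for demonstration, and the positions are assumed to be already unwrapped:
import numpy as np
import freud

# Fabricated, already-unwrapped trajectory: 100 frames, 50 particles, 3 dimensions
positions = np.cumsum(np.random.rand(100, 50, 3) - 0.5, axis=0)

msd = freud.msd.MSD(mode='window')   # no box needed when positions are unwrapped
msd.compute(positions)
print(msd.msd.shape)                 # (100,): mean squared displacement per frame
print(msd.particle_msd.shape)        # (100, 50): per-particle MSD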
- property
msd¶
The mean squared displacement.
- Type
\(\left(N_{frames}, \right)\)
numpy.ndarray
- property
particle_msd¶
The per particle based mean squared displacement.
- Type
\(\left(N_{frames}, N_{particles} \right)\)
numpy.ndarray
plot(self, ax=None)¶
Plot MSD.
- Parameters
ax (
matplotlib.axes.Axes, optional) – Axis to plot on. If
None, make a new figure and axis. (Default value =
None)
- Returns
Axis with the plot.
- Return type
matplotlib.axes.Axes
Loop (community library)
Summary
Advanced Looping library with timeouts and stuff
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
Loop
Advanced looping class for arduino[-like] devices with timeouts and callbacks
Browse Library Files | https://docs.particle.io/cards/libraries/l/Loop/ | 2021-05-05T23:53:58 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.particle.io |
nokia-5110-lcd (community library)
Summary
An Implementation of Sparkfun's Nokia 5110 Library for the Spark Core
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
SparkCore-Nokia5110LCD
A library for manipulating Nokia's 5110 LCD for the Spark Core. Implementation based on Sparkfun's Nokia 5110 LCD Library.
Components Required
- A Nokia 5110 LCD (get at sparkfun.com or eBay or Amazon)
Example Usage
See this flashable, extensive example for details, or, in a nutshell:
Nokia5110LCD::Display nokiaLCD(D0, D1, D2, A0); // SCE pin, RESET pin, Data/Command, and backlight pins

void setup() {
    nokiaLCD.begin();
    nokiaLCD.updateDisplay(); // with displayMap untouched, SFE logo
}

void loop() {
    // send various chars, strings, shapes to the buffer and write them to the display...
}
Nuances
The first three parameters in the constructor can be any output pin
The Backlight pin should be able to implement PWM (aka analog pin)
The Serial data and Serial clock pins should stay at the default A3 and A5, respectively.
Building locally
If you are building locally, place the files here:
..\core-firmware\inc\nokia-5110-lcd.h
..\core-firmware\src\application.cpp (renamed from example.cpp)
..\core-firmware\src\nokia-5110-lcd.cpp
Browse Library Files | https://docs.particle.io/cards/libraries/n/nokia-5110-lcd/ | 2021-05-06T00:54:27 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.particle.io |
Table Definitions and Relationships
The database diagram is very similar to the class diagram of the RadScheduleView interfaces:
Table Definitions
We have table definitions in the database for the following types in the RadScheduleView:
Relationships
Here are some explanations about the keys and the relationships in the data tables:
- There is no table definition for the IRecurrenceRule type because we don’t need it. Storing the RecurrencePattern is enough to generate the recurrence rules at run-time.
- We cannot save the Brush type into the database directly, so we save a string that represents the color and convert the string back to a SolidColorBrush object when the TimeMarkers & Categories are loaded (see the sketch below).
- The SqlAppointmentResource and SqlExceptionResources are cross-tables between:
- SqlAppointments & SqlResources
- SqlExceptionAppointments & SqlResources
Source: https://docs.telerik.com/devtools/silverlight/controls/radscheduleview/populating-with-data/binding-to-database/binding-to-db-datatier
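To illustrate the color handling mentioned above, a minimal C# sketch of converting between a stored color string and a brush. The helper class is ours, not part of RadScheduleView, and it uses the WPF ColorConverter; in Silverlight you may need to parse the hex string manually:
using System.Windows.Media;

public static class BrushStorage
{
    // Persist the brush as a color string such as "#FF3399FF".
    public static string ToColorString(SolidColorBrush brush)
    {
        return brush.Color.ToString();
    }

    // Rebuild the brush when the TimeMarkers & Categories are loaded.
    public static SolidColorBrush FromColorString(string colorString)
    {
        var color = (Color)ColorConverter.ConvertFromString(colorString);
        return new SolidColorBrush(color);
    }
}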
19.7 DAS Acquisition Object Details
The DAS Acquisition object (see Figure 19.6-1).
Note that the DAS Acquisition object is derived from AbstractObject, which also defines a set of mandatory and optional attributes. The “uuid” (defined in AbstractObject) and the “AcquisitionId” attributes (defined in DasAcquisition) have different purposes and they must NOT have the same value. The “uuid” identifies the entire dataset package, which may include raw and/or FBE and/or spectra data (see Section 19.5.3 HDF5 File Array Configuration Options). The “AcquisitionId” identifies the DAS acquisition job. One example is that a raw-only package is created for an acquisition. Then a separate FBE-only package is derived from this raw dataset. Because these two packages are separate (there are two EPC files), they have different values for the “uuid” under DasAcquisition, but they share the same value for “AcquisitionId” because they refer to the same acquisition job. | http://docs.energistics.org/PRODML/PRODML_TOPICS/PRO-DAS-000-036-0-C-sv2000.html | 2021-05-06T01:09:22 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.energistics.org |
You see summary information showing you the overall translation progress of all the resources in the project. Specifically, you'll see the percentage of untranslated strings, translated strings, reviewed strings (1st review), and proofread strings (2nd review, if enabled).
The color code is the same as in the Editor:
- Grey indicates untranslated strings
- Green indicates translated strings
- Blue indicates reviewed strings
- Purple indicates proofread strings
- Join requests – If you're crowdsourcing translations for your project (only public projects can receive join team requests), you can see how many people have requested to join the team that's assigned to the selected project. Click on the number to review the requests. The languages section of the page shows you the translation progress for each of your project's languages.
Otherwise, you'll see how many strings still need to be translated.
If you're interested in seeing progress for a specific resource file, go to Resources and you'll see the percentage of completion for that particular resource.
Timestamp
The timestamps in the dashboard show the user's local time.
Note
Join requests, project language requests, and open issues are not available in the All projects view.
Source: https://docs.transifex.com/tracking/dashboard
Choose one of the following import options if you want to auto-import records:
Merge on Best Match - If more than one match is found in the catalog for a given record, Evergreen will attempt to perform the merge/overlay action with the best match as defined by the match score and quality metric.
Quality ratio affects only the Merge on Single Match and Merge on Best Match options.
Under Copy Import Actions, choose Auto-overlay In-process Acquisitions Copies if you want to overlay temporary copies that were created by the Acquisitions module. The system will attempt to overlay copies that:
Browse to find the appropriate file, and click Upload. The file will be uploaded to a queue. The file can be in either MARC or MARCXML format.
The screen will display records that have been uploaded to your queue. Above the table there are three sections:
Queue Filters provides options for limiting which records display in the table.
If Evergreen indicates that matching records exist, then click the Matches link to view the matching records. Check the box adjacent to the existing record that you want to merge with the incoming record.
A pop up window will offer you the same import choices that were present on the Import Records screen. You can choose one of the import options, or click Import.
The screen will refresh. The Queue Summary indicates that the record was imported. The Import Time column records the date that the record was imported. Also, the Imported As column should now display the database ID (also known as the bib record number) for the imported record.
You can confirm that the record was imported by using the value of the Imported As column by selecting the menu Cataloging → Retrieve title by database ID and using the supplied Imported As number. Alternatively, you can search the catalog to confirm that the record was imported. | http://docs.evergreen-ils.org/reorg/3.0/cataloging/_import_records.html | 2018-07-15T23:19:46 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.evergreen-ils.org |
ebay stationary bike home and interior astounding of exercise cycling cardio workout from exquisite brisbane.
Related Post
Shoreline Marine, Hummingbird Gps Fish Finders, Raider Helmets, Adidas Ladies Golf Shoes, Mobile Laser Printer, Yellow Sleeping Bag, No Refunds Sign, Used Trampoline, Sleep Mat For Adults, Hot Tub Clearance, Backyard Discovery Thunder Ridge, Sunny Fitness, Remote Control Toy For Girls, Fine Ballpoint Pens, Pop Up Tent Kid
Source: http://top-docs.co/ebay-stationary-bike/ebay-stationary-bike-home-and-interior-astounding-of-exercise-cycling-cardio-workout-from-exquisite-brisbane/
The JWST MIRI coronagraphic imaging mode requires target acquisition procedures.
Introduction
Parent pages: MIRI Operations → MIRI Target Acquisitions
See also: MIRI Coronagraphic Recommended Strategies
MIRI coronagraphic imaging observations require precise and accurate positioning of a bright source at the location of maximum attenuation by the Lyot spot mask or 4QPMs: for the 4QPM, this is the apex between the four quadrants; for the Lyot, it is at the center of the occulting spot.
For the 4QPM, the required absolute accuracy of placing a star at the apex is 10 mas (1-σ per axis), but the ultimate positioning of the object on the mask requires a repeatable precision of 5 mas (1-σ per axis).
For the Lyot coronagraph, the pointing accuracy and precision are less stringent due to the 3 λ/D spot size. This relaxes the requirements to 22.5 mas. The neutral density filter requirements for the target acquisition have ensured that Vega can be observed in the coronagraph's subarray mode.
Two effects make the TA process complex:
- For the 4QPM coronagraphs, the phase mask can distort the image of a star close to its center and undermine the accuracy of the centroid determination.
- The detector arrays have latent images that could mimic planets or other exciting astronomical phenomena if the centroiding process leaves multiple acquisitions to acquire additional information about the pointing (see MIRI Coronagraphic Recommended Strategies). None of the strategies quite reaches the desired centering performance for the 4QPM coronagraphs (the Lyot is much more relaxed in this area), so further optimization is expected during commissioning.
There are four acquisition filters available for MIRI TA: F560W, F1000W, F1500W and a neutral density filter FND 1 (which is needed for TA in the case of very bright sources, especially in the case of the 4QPMs; see MIRI Coronagraphic Recommended Strategies).
Due to the fact that spacecraft roll orientations are very restricted, the observer is allowed to select in which of the four locations within the coronagraphic subarray to perform the target acquisition (TA). They will also have the option to repeat the entire observation, but with the TA performed within a region of the subarray that is diagonally opposed to the original TA (i.e., a SECOND EXPOSURE). This ability ensures that the observer can mitigate against the effect of latency due to the acquisition of successive images.
Software processing requirements for the target acquisition image include a flat field of the 64 × 64 pixel region of interest (ROI) surrounding the coronagraph sweet spots of which there will be 16 in the baseline strategy. A centroiding algorithm for the targets in the sweet spots is outlined in Lajoie et al. (2014a). These exposures will be normally short; therefore, cosmic rays should not be an issue.
Lyot coronagraph target acquisition
Main article: MIRI Coronagraph Masks
See also: JWST High-Contrast Imaging Optics
For Lyot coronagraphy, the point source will be placed in one of the four target acquisition ROIs in the Lyot coronagraphic field of view ( MASKLYOT a.k.a. LYOT, 304 × 320 pixels). The readout times for each subarray in FAST mode is 0.240 s. Given the brightness of some sources, it is possible that the target acquisition will leave a latent image in the TA region of interest, which will persist in the science image.
To mitigate confusing the latent image with a nearby faint source, it will be optimal to take two coronagraphic observations: one with target acquisition using the 1st ROI and one with target acquisition using a 2nd ROI that is diagonally opposed to the first one. Any latent images will be different between the two coronagraphic observations, allowing for discrimination of faint sources and these latent images. Discrimination is possible since the observations taken with the 1st target acquisition region will not have latent images in the 2nd target acquisition region because the latent images are variable in time; that is, the latent images in the 1st ROI will have decayed by the time the 2nd ROI target acquisition observations are performed.
4QPM target acquisition
Main article: MIRI Coronagraph Masks
See also: JWST High-Contrast Imaging Optics
There are several possible strategies for 4QPM TA, which are discussed in detail by Lajoie et al. (2012, 2013, 2014a, 2014b). The baseline approach is described as follows on the assumption that offset slew accuracy is consistent with NASA’s pre-launch estimates.
First, the target is initially placed at a fiducial location within one of the four quadrants. An exposure is obtained, a centroid is found for the target, and the offset necessary to move the target to the optimal location at the center of the coronagraph is calculated. The observatory then makes a small angle maneuver (SAM) to place the target at the center of the apex of the 4QPM. For 4QPM coronagraphy, there are specific readout subarrays defined for each mask (MASK1550, MASK114, and MASK1065).
In scenarios where the potential contribution of latent images in the science observations poses serious concerns for the coronagraph science goals, a SECOND EXPOSURE can be performed. Here, the target acquisition (followed by a science exposure) will be repeated in the quadrant diagonally opposed to the quadrant in which the initial TA was performed. This allows for discrimination between latent images and faint sources because the latents are variable in time: the 1st observation will not not have latents present in the ROI of the 2nd TA and latents in the 1st TA ROI will have decayed by the time the 2nd TA observation is completed.
Related links
Instrument Related Links
JWST User Documentation Home
Mid-Infrared Instrument, MIRI
MIRI Overview
MIRI Coronagraphic Imaging
MIRI Optics and Focal Plane
MIRI Filters and Dispersers
APT Related Articles
JWST Astronomers Proposal Tool, APT
JWST APT website
ETC Related Articles
JWST Exposure Time Calculator
JWST ETC website
MIRI Performance Related Links
MIRI Sensitivity
MIRI Bright Source Limits
MIRI Detector Related Articles
MIRI Detector Overview
MIRI Detector Readout Overview
MIRI Detector Subarrays
MIRI Detector Readout Fast
MIRI Detector Readout Slow
References
Lajoie, C.-P., Soummer, R., Hines, D., 2012, JWST-STScI-003065,
Simulations of Target Acquisition with MIRI Four-Quadrant Phase Mask Coronagraph (II).
Lajoie, C.-P., Hines, D., Soummer, R., and the Coronagraphs Working Group.
This resource address creates a CPF configuration for the database.
Upon success, MarkLogic Server returns status code 201 (Created). If the payload is malformed or the database doesn't exist, a status code of 400 (Bad Request) is returned. A status code of 401 (Unauthorized) is returned if the user does not have the necessary privileges.
manage-user role, or the following privilege:
The conversion-enabled boolean logically maps onto the conversion-enabled setting contained within the domain configuration.
Note: The properties described here are for XML payloads. In general they are the same for JSON, with the exception that, in JSON, permissions is expressed in singular form. For example, in JSON, permissions is instead permission and the format is as shown in the example below.
domain-name
restart-user-name
eval-module
eval-root
conversion-enabled
permissions
This is a complex structure with the following children:
permission
This is a complex structure with the following children:
role-name
capability
# load default pipelines (Status Change Handling & Flexible Replication)
# on the Triggers database.
curl -X POST --anyauth --user admin:admin --header \
  "Content-Type:application/json" -d'{"operation":"load-default-cpf-pipelines"}' \

# create a CPF domain for Flexible Replication on the Triggers database.
cat domain_payload.json
==>
{
  "domain-name": "myDomain",
  "description": "mydesc",
  "scope": "directory",
  "uri": "/",
  "depth": "infinity",
  "eval-module": "Modules",
  "eval-root": "/",
  "pipeline": ["Status Change Handling", "Flexible Replication"]
}

curl -X POST --anyauth --user admin:admin --header \
  "Content-Type:application/json" -d@domain_payload.json \

# Install and configure CPF on the Triggers database.
cat setup-cpf.json
==>
{
  "domain-name": "myDomain",
  "restart-user-name": "admin",
  "eval-module": "Modules",
  "eval-root": "/",
  "conversion-enabled": true,
  "permission": [{
    "role-name": "app-user",
    "capability": "read"
  }]
}

curl -X POST --anyauth --user admin:admin --header "Content-Type:application/json" \
  [email protected]
adjective
The indexer cluster state in which the cluster has both:
- the replication factor number of copies of each bucket, and
- the search factor number of searchable copies of each bucket.
In the case of a multisite indexer cluster, the number of bucket copies must also fulfill the site-specific requirements for the replication and search factors.
A complete cluster meets the designated requirements for disaster tolerance.
A complete cluster is also a valid cluster.
In Managing Indexers and Clusters of Indexers: | http://docs.splunk.com/Splexicon:Complete | 2018-07-15T23:04:09 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.splunk.com |
What does this integration do?
The Pipedrive connector makes your MadKudu predictive analytics available to your sales team in Pipedrive.
Deals, persons and organizations attributes are synced with MadKudu.
MadKudu will enrich Deals with your MadKudu predictive analytics (see here for details) based on the primary contact.
You can now find out which Deals to prioritize, when it is a right time to reach out, and what are the right things to say to close more customers.
How to set it up?
1. API integration
Log in Pipedrive (make sure you have admin privileges) and go to Settings/API.
Add the API key to your MadKudu account in the Pipedrive settings.
2. Field creation and configuration
Creation of the fields
Once the API integration is set up, MadKudu will create and update the MadKudu standard fields in Pipedrive.
Configuration of the fields
Contact your Account Manager if you want to make modifications to your MadKudu fields.
What Pipedrive permissions do I need?
The Pipedrive user who generates the API key is required to have the admin permissions.
To find out whether or not your user has those permissions, go to Settings/Users & Permissions to check
What data will this integration obtain from Pipedrive?
We limit ourselves to the minimum information needed to run our predictions.
By default, MadKudu pulls the following objects along with their standard attributes:
- Persons
- Organizations
- Deals
What data will this integration write/edit in Pipedrive
We limit ourselves to writing/editing data only in the fields created by the integration.
Additional data can be updated if you are on the enterprise plan. Contact us at [email protected] for more information.
Source: https://docs.madkudu.com/integrations/pipedrive/
Resilience in Amazon EC2 Auto Scaling
The Amazon global infrastructure is built around Amazon Regions and Availability Zones.
For more information about Amazon Regions and Availability Zones, see Amazon global infrastructure.
Related topics
Resilience in Amazon EC2 in the Amazon EC2 User Guide for Linux Instances | https://docs.amazonaws.cn/en_us/autoscaling/ec2/userguide/disaster-recovery-resiliency.html | 2022-09-25T05:11:02 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.amazonaws.cn |
Write data with no-code third-party technologies
A number of third-party technologies can be configured to send line protocol directly to InfluxDB.
If you’re using any of the following technologies, check out the handy links below to configure these technologies to write data to InfluxDB (no additional software to download or install).
Many third-party integrations are community contributions. If there’s an integration missing from the list below, please open a docs issue to let us know.
(Write metrics and log events only) Vector 0.9 or later
Apache JMeter 5.2 or later
Configure Vector
- View the Vector documentation:
- For write metrics, InfluxDB Metrics Sink
- For log events, InfluxDB Logs Sink
- Under Configuration, click v2 to view configuration settings.
- Scroll down to How It Works for more detail:
Configure Apache NiFi
See the InfluxDB Processors for Apache NiFi Readme for details.
Configure OpenHAB
See the InfluxDB Persistence Readme for details.
Configure Apache JMeter
To configure Apache JMeter, complete the following steps in InfluxDB and JMeter.
In InfluxDB
- Find the name of your organization (needed to create a bucket and token).
- Create a bucket using the influx CLI and name it jmeter (see the sketch after this list).
- Create a token.
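For illustration, a hedged sketch of these steps with the influx CLI; my-org is a placeholder, and flag spellings may vary slightly between CLI versions:
influx org list                                   # find your organization name
influx bucket create --name jmeter --org my-org   # create the jmeter bucket
influx auth create --org my-org --all-access      # create an API token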
In JMeter
- Create a Backend Listener using the InfluxDBBackendListenerClient implementation.
- In the Backend Listener implementation field, enter:
org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient
- Under Parameters, specify the following:
- influxdbMetricsSender:
org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
- influxdbUrl: (include the bucket and org you created in InfluxDB)
- application:
InfluxDB2
- influxdbToken: your InfluxDB API token
- Include additional parameters as needed.
- Click Add to add the InfluxDBBackendListenerClient implementation.
Configure Apache Pulsar
See InfluxDB sink connector for details.
Configure FluentD
See the influxdb-plugin-fluent Readme for details.
Permission error
You do not have permission to create this discussion page, for the following reasons:
- Editing of this page is limited to registered doc users in the group: Email Confirmed.
- You must confirm your email address before editing pages. Please set and validate your email address through your user preferences.
- Joomla! Documentation has restricted the ability to create new pages. You can go back and edit an existing page, or log in or create an account. | https://docs.joomla.org/index.php?title=Archived_talk:Working_with_Mootools_1.3&action=edit§ion=new&preloadtitle=Edit%20Requests%2017%20Aug%202022%2023:26 | 2022-09-25T05:25:03 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.joomla.org |
- database_name_1
- user_name_1
- Containing database or user for view_name_1 if something other than the current database or user.
- view_name_1
- Name of the recursive view to be created or replaced.
- If view_name_1 is not fully qualified, the default database is used.
- For information about naming database objects, see Teradata Vantage™ - SQL Fundamentals, B035-1141.
- column_name
- Mandatory name of a view column or column set. If more than one column is specified, list their names in the order in which each is to be displayed for the view. | https://docs.teradata.com/r/Teradata-VantageTM-SQL-Data-Definition-Language-Syntax-and-Examples/September-2020/View-Statements/CREATE-RECURSIVE-VIEW-and-REPLACE-RECURSIVE-VIEW/CREATE-RECURSIVE-VIEW-and-REPLACE-RECURSIVE-VIEW-Syntax-Elements | 2022-09-25T05:58:51 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.teradata.com |
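To make these syntax elements concrete, a hedged example of a recursive view; the view, table, and column names are invented purely for illustration, and the depth limit is included only to guard against unbounded recursion:
CREATE RECURSIVE VIEW reachable_cities (source, destination, depth) AS (
  SELECT source, destination, 0 AS depth
  FROM flights
UNION ALL
  SELECT r.source, f.destination, r.depth + 1
  FROM reachable_cities r
  JOIN flights f ON r.destination = f.source
  WHERE r.depth < 10
);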
Ansible Plugins
A common usage pattern is the use of a third-party credential management system for managing passwords and keys for accessing hosts and services. In the past, this sort of environment was tricky to manage with Ansible AWX. Now, with the combination of Ansible AWX Custom Credentials, it has become simple.
Akeyless plugins are available in three flavors:
Phase I - The Rise of the Divine OX
Alpha and Omega would not be without the support of the Alpha and Omega community. Rather than giving early access to investors or selected elite groups, we want everyone in the Alpha and Omega community to have a fair chance of participation.
Introducing Liquidity Bootstrap Event (LBE) for OX, the governance token for the Alpha DAO.
Our LBE will be open for all. Similar to the Olympus IDO, the Alpha LBE will be the main source to provide initial liquidity and the treasury’s initial backing for OX.
50,000 OX tokens will be minted for the event. The initial price of the OX token will be 10 BUSD/OX for whitelisted addresses.
This will result in $500,000 worth of BUSD in total sales from the LBE.
The first $300K BUSD will be used to bootstrap the liquidity and provide initial treasury backing for the OX token.
$200,000 BUSD will be paired with 10,000 OX tokens on PancakeSwap, establishing the initial extrinsic price of $20 BUSD per OX with initial liquidity of $400,000.
$100,000 BUSD will be deposited into the treasury, backing the initial OX supply.
This will be done within 24 hours after the LBE event.
We will announce the exact time when the initial liquidity will be added, providing total transparency to our community.
What about the LP token?
The $400,000 worth of LP tokens will be deposited to the treasury via a prolonged and controlled bonding process, with 1/100 a time across 100 epochs.
The balance of $200,000 BUSD will be used to support the Alpha and Omega marketing and development work. We believe this is essential for promoting and maintaining the health of the Alpha and Omega protocols. Only the funds exceeding the initial $300,000 will be used for the development. That means that if the total funds raised from the LBE are only $350,000, $300,000 will be used for initial liquidity and treasury backing, and $50,000 will be used to support development work.
Why do we believe this is important?
We are going to be 100% transparent with the Alpha and Omega community. We are proposing a fixed rate round only to ensure the Alpha and Omega DAO can be sustained. Future development work can be funded via the Alpha and Omega DAO through a DAO voting procedure.
We plan to launch the bonding function within 48 hours after the LBE event. We will target a 10% discount positive ROI for bonders across the board for both the OX-BUSD LP pool and the BUSD pool.
If OX tokens are not sold out during the LBE, any remaining OX tokens will be sent back to the Alpha DAO. The exact launch time of the LBE will be announced in our Discord.
WorkSpace bundles and images
A WorkSpace bundle is a combination of an operating system, and storage, compute, and software resources. When you launch a WorkSpace, you select the bundle that meets your needs. The default bundles available for WorkSpaces are called public bundles. For more information about the various public bundles available for WorkSpaces, see Amazon WorkSpaces Bundles.
If you've launched a Windows or Amazon Linux WorkSpace and have customized it, you can create a custom image from that WorkSpace. A custom bundle is a combination of that custom WorkSpace image and the underlying compute and storage configuration that you select. You can then specify this custom bundle when you launch new WorkSpaces to ensure that the new WorkSpaces have the same consistent configuration (hardware and software).
If you need to perform software updates or to install additional software on your WorkSpaces, you can update your custom bundle and use it to rebuild your WorkSpaces.
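As a hedged illustration, a sketch of creating a custom bundle from a custom image with the AWS CLI; the image ID and names are placeholders, and the parameter shapes should be checked against the current CLI reference:
aws workspaces create-workspace-bundle \
  --bundle-name "my-custom-bundle" \
  --bundle-description "Bundle built from our customized WorkSpace image" \
  --image-id wsi-0123456789abcdef0 \
  --compute-type Name=STANDARD \
  --user-storage Capacity=50 \
  --root-storage Capacity=80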
Open Channel
Open Channel enables thousands of users to chat publicly.
An open channel is for public conversation. Any user may join and participate in it without any access restriction. It is a perfect fit when you want to have a fixed number of channels for your application.
Creating an Open Channel
An open channel can be created with the help of
create() function of
OpenChannel class.
cc.OpenChannel.create(CHANNEL_NAME, function(error, openChannel) {
    if (!error) {
        console.log("Open Channel successfully created", openChannel);
    }
});
You can also retrieve a list of your open channels:
let openChannelListQuery = cc.OpenChannel.createOpenChannelListQuery();
openChannelListQuery.get(function(error, openChannelList) {
    if (!error) {
        console.log("My Open Channels List Retrieved", openChannelList);
    }
});
Getting instance of Open Channel based on ID
It is possible to retrieve an instance of an Open Channel if you have the channel ID by using
get() function of
OpenChannel class.
cc.OpenChannel.get(CHANNEL_ID, function(error, openChannel) {
    if (!error) {
        console.log("Open Channel Successfully retrieved", openChannel);
    }
});
CHANNEL_ID is the ID of your open channel.
Joining an Open Channel
To be able to participate in an open channel, you first need to join it. You may use
join() function of
OpenChannel class for this.
cc.OpenChannel.get(CHANNEL_ID, function(error, openChannel) {
    openChannel.join(function(error) {
        if (!error) {
            console.log("Open Channel Successfully Joined");
        }
    });
});
Leaving an Open Channel
To stop receiving messages and notifications from an Open Channel, you would need to leave the open channel. You may use
leave() function of
OpenChannel class for this.
cc.OpenChannel.get(CHANNEL_ID, function(error, openChannel) {
    openChannel.leave(function(error) {
        if (!error) {
            console.log("Open Channel Successfully Left");
        }
    });
});
Sending Messages in Open Channel
A participant of an open channel can send two types of messages in it:
TEXT: a text message
ATTACHMENT: a binary attachment like image, document etc.
Sending Text Message
openChannel.sendMessage(MESSAGE, function(error, message) {
    if (!error) {
        console.log("Message sent successfully", message);
    }
});
Please note openChannel is the instance of class OpenChannel.
Sending Attachment
openChannel.sendAttachment(ATTACHMENT, function(progress, total) {
    console.log("Tracking progress of upload", progress, total);
}, function(error, message) {
    console.log("Attachment sent successfully", message);
});
Receiving Messages
In ChatCamp,
ChannelListener is used to handle information from the server. This information can be a new chat message, a new open channel created by some other user etc.
To receive a message in an Open Channel,
onOpenChannelMessageReceived ChannelListener is used.
let channelListener = new cc.ChannelListener();
channelListener.onOpenChannelMessageReceived = function(openChannel, message) {
    console.log("New Message received in open channel");
};
cc.addChannelListener(CHANNEL_LISTENER_ID, channelListener);
Retrieving Previous Messages
Previous messages of an open channel can be retrieved. Only participants of the open channel can retrieve them.
let previousMessageListQuery = openChannel.createPreviousMessageListQuery(); // method name assumed from the SDK's query pattern - check the ChatCamp reference
previousMessageListQuery.load(30, function(error, messages){ if(!error){ console.log("Previous messages retrieved", messages) } })
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
The
must-gather tool enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the OpenShift Logging components.
For prompt support, supply diagnostic information for both OKD and OpenShift Logging.
The
oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues.
For your OpenShift Logging environment,
must-gather collects the following information:
Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level
Cluster-level resources, including nodes, roles, and role bindings at the cluster level
OpenShift Logging resources in the
openshift-logging and
openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer
To collect OpenShift Logging information with
must-gather:
Navigate to the directory where you want to store the
must-gather information.
Run the
oc adm must-gather command against the OpenShift Logging image:
$ oc adm must-gather --image=quay.io/openshift/origin-cluster-logging-operator
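When the command completes, you would typically package the collected data and attach it to your support case. A minimal sketch, assuming the default output location (the timestamped directory name below is only an example):
$ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210399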
New property pxCommitDateTime
Valid from Pega Version 7.1.8
A new property, pxCommitDateTime, records the time when a record or updated rule was committed to the database. This property also allows for incremental extracts when running BIX extracts.
Import and export Intelligent Virtual Assistant or Email Bot training data
Valid from Pega Version 8.3
You can now copy Pega Intelligent Virtual Assistant™ (IVA) or Pega Email Bot™ training data between Pega Platform™ application environments by performing an export and import action. Importing and exporting training data between Pega Platform application environments results in greater accuracy of entity detection. Entities are detected by the system in a chat conversation, including attachments, to help respond to the user correctly, and consist of proper nouns that fall into a commonly understood category such as a person, organization, or a ZIP code. The ability to copy training data also makes it easier to maintain IVA or Email Bot, and build its artificial intelligence.
For more information, see Copying training data to another environment and Copy training data and model to another IVA or Email Bot environment. | https://docs.pega.com/platform/release-notes-archive?f%5B0%5D=%3A9031&f%5B1%5D=%3A31541&f%5B2%5D=releases_capability%3A9031&f%5B3%5D=releases_capability%3A28506&f%5B4%5D=releases_note_type%3A983&f%5B5%5D=releases_version%3A7101&f%5B6%5D=releases_version%3A7111&f%5B7%5D=releases_version%3A7146&f%5B8%5D=releases_version%3A29101 | 2022-09-25T04:11:44 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.pega.com |
Automating Processes with
cron and
crontab¶
Prerequisites¶
- A machine running Rocky Linux
- Some comfort with modifying configuration files from the command-line using your favorite editor (
vi is used here)
Assumptions¶
- Basic knowledge of bash, python, or other scripting/programming tools, and the desire to have a script run automatically
- That you are either running as the root user or have switched to root with
sudo -s
(You can run certain scripts in your own directories as your own user. In this case, switching to root is not necessary.)
- We assume that you're pretty cool.
Introduction¶
Linux provides the cron system, a time-based job scheduler, for automating processes. It's simple and yet quite powerful. Want a script or program to run every day at 5 PM? This is where you set that up.
The crontab is essentially a list where users add their own automated tasks and jobs, and it has a number of options that can simplify things even further. This document will explore some of these. It's a good refresher for those with some experience, and new users can add the
cron system to their repertoire.
anacron is discussed briefly here in reference to the
cron "dot" directories.
anacron is run by
cron, and is advantageous for machines that are not up all the time, such as workstations and laptops. The reason for this is that while
cron runs jobs on a schedule, if the machine is off when the job is scheduled, the job does not run. With
anacron the job is picked up and run when the machine is on again, even if the scheduled run was in the past.
anacron though, uses a more randomized approach to running tasks where the timing is not exact. This makes sense for workstations and laptops, but not so much for servers. This can be a problem for things like server backups, for instance, that need to run at a specific time. That's where
cron continues to provide the best solution for server administrators. All that being said, server administrators and workstation or laptop users can gain something from both approaches. You can easily mix and match based on your needs. For more information on
anacron see anacron - Automating commands.
Starting Easy - The
cron Dot Directories¶
Built into every Linux system for many versions now, the
cron "dot" directories help to automate processes quickly. These show up as directories that the
cron system calls based on their naming conventions. They are called differently, however, based on which process is assigned to call them,
anacron or
cron. The default behavior is to use
anacron, but this can be changed by a server, workstation or laptop administrator.
For Servers¶
As stated in the introduction,
cron normally runs
anacron these days to execute scripts in these "dot" directories. You may, though, want to use these "dot" directories on servers as well, and if that is the case, then there are two steps that you can take to make sure that these "dot" directories are run on a strict schedule. To do so, we need to install a package and remove another one:
dnf install cronie-noanacron
and
dnf remove cronie-anacron
As you might expect, this removes
anacron from the server and reverts to running tasks within the "dot" directories on a strict schedule. This is defined by this file:
/etc/cron.d/dailyjobs, which has the following contents:
# Run the daily, weekly, and monthly jobs if cronie-anacron is not installed SHELL=/bin/bash PATH=/sbin:/bin:/usr/sbin:/usr/bin MAILTO=root # run-parts 02 4 * * * root [ ! -f /etc/cron.hourly/0anacron ] && run-parts /etc/cron.daily 22 4 * * 0 root [ ! -f /etc/cron.hourly/0anacron ] && run-parts /etc/cron.weekly 42 4 1 * * root [ ! -f /etc/cron.hourly/0anacron ] && run-parts /etc/cron.monthly
This translates to the following:
- run scripts in cron.daily at 04:02:00 every day.
- run scripts in cron.weekly at 04:22:00 on Sunday every week.
- run scripts in cron.monthly at 04:42:00 on the first day of every month.
For Workstations¶
If you want to run scripts on a workstation or laptop in the
cron "dot" directories, then there is nothing special that you need to do. Simply copy your script file into the directory in question, and make sure it is executable. Here are the directories:
/etc/cron.hourly - Scripts placed here will run one minute past the hour every hour. (this is run by cron regardless of whether anacron is installed or not)
/etc/cron.daily - Scripts placed here will run every day. anacron adjusts the timing of these. (see tip)
/etc/cron.weekly - Scripts placed here will run every 7 days, based on the calendar day of the last run time. (see tip)
/etc/cron.monthly - Scripts placed here will run monthly based on the calendar day of the last run time. (see tip)
Tip
These are likely to be run at similar (but not exactly the same) times every day, week, and month. For more exact running times, see the @options below.
So provided you're alright with just letting the system auto-run your scripts, and allowing them to run sometime during the specified period, then it makes it very easy to automate tasks.
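For example, assuming you have an executable script named cleanup.sh (a made-up name) that you want to run once a day, copying it into the daily directory is all that is required:
cp cleanup.sh /etc/cron.daily/cleanup
chmod +x /etc/cron.daily/cleanup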
Note
There is no rule that says a server administrator cannot use the randomized run times which
anacron uses to run scripts in the "dot" directories. The use case for this would be for a script that is not time sensitive.
Create Your Own
cron¶
Of course, if the automated, randomized times described in For Workstations above don't work for you, and neither do the scheduled times described in For Servers, then you can create your own. In this example, we are assuming you are doing this as root (see Assumptions). To do this, type the following:
crontab -e
This will pull up root user's
crontab as it exists at this moment in your chosen editor, and may look something like this. Go ahead and read through this commented version, as it contains descriptions of each field that we will be using next:
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating when the task will be run and what command to run.
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow), or use '*' in these fields (for 'any').
#
# m h  dom mon dow   command
Notice that this particular
crontab file has some of its own documentation built-in. That isn't always the case. When modifying a
crontab on a container or minimalist operating system, the
crontab will be an empty file unless an entry has already been placed in it.
Let's assume that we have a backup script that we want to run at 10 PM at night. The
crontab uses a 24 hour clock, so this would be 22:00. Let's assume that the backup script is called "backup" and that it is currently in the /usr/local/sbin directory.
Note
Remember that this script needs to also be executable (
chmod +x) in order for the
cron to run it.
To add the job, we would:
crontab -e
crontab stands for "cron table" and the format of the file is, in fact, a loose table layout. Now that we are in the
crontab, go to the bottom of the file and add your new entry. If you are using
vi as your default system editor, then this is done with the following keys:
Shift : $
Now that you are at the bottom of the file, insert a line and type a brief comment to describe what is going on with your entry. This is done by adding a "#" to the beginning of the line:
# Backing up the system every night at 10PM
Now hit enter. You should still be in the insert mode, so the next step is to add your entry. As shown in our empty commented
crontab (above) this is m for minutes, h for hours, dom for day of month, mon for month, and dow for day of week.
To run our backup script every day at 10:00, the entry would look like this:
00 22 * * * /usr/local/sbin/backup
This says run the script at 10 PM, every day of the month, every month, and every day of the week. Obviously, this is a pretty simple example and things can get quite complicated when you need specifics.
The @options for
crontab¶
Another way to run jobs at a strictly scheduled time (i.e., day, week, month, year, etc.) is to use the @options, which offer the ability to use more natural timing. The @options consist of:
@hourly runs the script every hour of every day at 0 minutes past the hour. (this is exactly the result of placing your script in /etc/cron.hourly too)
@daily runs the script every day at midnight.
@weekly runs the script every week at midnight on Sunday.
@monthly runs the script every month at midnight on the first day of the month.
@yearly runs the script every year at midnight on the first day of January.
@reboot runs the script on system startup only.
Note
Using these
crontab entries bypasses the
anacron system and reverts to the
crond.service whether
anacron is installed or not.
For our backup script example, if we use the @daily option to run the backup script at midnight, the entry would look like this:
@daily /usr/local/sbin/backup
More Complex Options¶
So far, everything we have talked about has had pretty simple options, but what about the more complex task timings? Let's say that you want to run your backup script every 10 minutes during the day (probably not a very practical thing to do, but hey, this is an example!). To do this you would write:
*/10 * * * * /usr/local/sbin/backup
What if you wanted to run the backup every 10 minutes, but only on Monday, Wednesday and Friday?:
*/10 * * * 1,3,5 /usr/local/sbin/backup
What about every 10 minutes every day except Saturday and Sunday?:
*/10 * * * 1-5 /usr/local/sbin/backup
In a crontab entry, commas let you specify individual values within a field, while a dash lets you specify a range of values within a field. This can happen in any of the fields, and in multiple fields at the same time. As you can see, things can get pretty complicated.
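As an illustration of combining lists, ranges, and step values, this hypothetical entry runs the backup at the top of every second hour from 08:00 through 18:00, Monday through Friday only:
0 8-18/2 * * 1-5 /usr/local/sbin/backup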
When determining when to run a script, you need to take time and plan it out, particularly if the criteria are complex.
Conclusions¶
The cron/crontab system is a very powerful tool for the Rocky Linux systems administrator or desktop user. It can allow you to automate tasks and scripts so that you don't have to remember to run them manually. There are more examples provided here:
- For machines that are not on 24 hours a day, explore anacron - Automating commands.
- For a concise description of
cron processes, check out cronie - Timed Tasks
While the basics are pretty easy, you can get a lot more complex. For more information on
crontab head up to the crontab manual page. On most systems, you can also enter
man crontab for additional command details. You can also simply do a web search for "crontab" which will give you a wealth of results to help you fine-tune your
crontab skills.
Author: Steven Spencer
Contributors: Ezequiel Bruni | https://docs.rockylinux.org/guides/automation/cron_jobs_howto/ | 2022-09-25T05:39:48 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.rockylinux.org |
Subscribing to the Talent Recruiter APIs
Navigate to and click Sign up. You need to have an HRID account in order to sign up. You already have an HRID account if you have a user in Talent Recruiter or Talent Manager.
When you are logged in, navigate to the “Products” section and subscribe to the “Auth” product and the “Talent Recruiter” product.
Subscriptions need to be approved by Talentech, and you will get an email after the subscriptions have been approved.
When your subscriptions have been approved, you can find the subscription keys on your "Profile" page as shown in the screenshot below.
Is Gravity Flow suitable for an intranet?
Yes. Gravity Flow was designed specifically for intranet and extranet scenarios where all the users have an account. So, by default, the inbox and status pages will not be accessible to users who have not logged in.
It's possible to open the inbox up to non-registered users either by assigning steps to an email field or by allowing anonymous access in the shortcode attributes. | https://docs.gravityflow.io/article/65-is-gravity-flow-suitable-for-an-intranet | 2022-09-25T04:16:24 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.gravityflow.io |
New in version 2014.7.0.
This state is useful for firing messages during state runs, using the SMTP protocol
server-warning-message: smtp.send_msg: - name: 'This is a server warning message' - profile: my-smtp-account - recipient: [email protected]
salt.states.smtp.
send_msg(name, recipient, subject, sender=None, profile=None, use_ssl='True', attachments=None)¶
Send a message via SMTP
server-warning-message: smtp.send_msg: - name: 'This is a server warning message' - profile: my-smtp-account - subject: 'Message from Salt' - recipient: [email protected] - sender: [email protected] - use_ssl: True - attachments: - /var/log/syslog - /var/log/messages
The message to send via SMTP | https://docs.saltproject.io/en/3004/ref/states/all/salt.states.smtp.html | 2022-09-25T05:05:17 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.saltproject.io |
Preparing the installation
It is recommended to think about the following elements prior to the installation of TrilioVault for Openstack.
Tenant Quotas
AWS S3 eventual consistency
AWS S3 object consistency model includes:
1. Read-after-write
2. Read-after-update
3. Read-after-delete
TrilioVault Cluster
Translation depends on the target language, and formatting usually depends on
the target country. This information is provided by browsers in the
Accept-Language header. However, the time zone isn’t readily available.
The words “internationalization” and “localization” often cause confusion; here’s a simplified definition: internationalization is preparing the software for localization (usually done by developers), while localization is writing the translations and local formats (usually done by translators).
locale name: A locale name, either a language specification of the form ll or a combined language and country specification of the form ll_CC. Examples: it, de_AT, es, pt_BR.
language code: Represents the name of a language. Browsers send the names of the languages they accept in the Accept-Language HTTP header using this format. Examples: it, de-at, es, pt-br. Language codes are generally represented in lowercase, but the HTTP Accept-Language header is case-insensitive. The separator is a dash.
message file: A message file is a plain-text file, representing a single language, that contains all available translation strings and how they should be represented in the given language. Message files have a .po file extension.
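For illustration (the strings below are made up), a message file is a catalog of msgid/msgstr pairs plus comments; each msgid is a translation string from the code or templates and msgstr is the translation supplied for that language:
#: templates/welcome.html:3
msgid "Welcome to my site."
msgstr "Bienvenido a mi sitio."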
Go-live Checklist
Use this checklist to ensure a smooth transition from the TEST to the LIVE environment.
Add production webhook endpoint
Webhooks are server to server communications that Cashfree uses to communicate securely with you. To get more information on webhooks and how to add them go here.
Do not go-live without signature verification if you are using webhooks
To ensure you are processing only Cashfree's webhook requests, verify the signature which you receive along with the data.
Incident Service Webhook
Incident Service
Cashfree Payments will notify you whenever we create an incident at our end. An incident implies that the issuing bank is facing high failure rates or has scheduled a maintenance activity during that time. The former could be due to many reasons and until the failure rates go down, we would recommend customers to use alternative payment instruments.
There are two channels through which we notify. You can subscribe to either of the channels by adding your email address and webhook endpoint in the merchant dashboard.
Email - Cashfree Payments will send an email alert when an issuer is facing downtime or a scheduled incident.
Webhooks - Cashfree Payments will invoke a server to server call whenever an incident is created. You can use this webhook and update your payment page accordingly.
Webhook Schema
Sample Payload for Creation of Incident
{ "data": { "incident": { "end_time": null, "id": "INCIDENT_MEDIUM_KarurVysyaBank_a7259c79-25a8-4b86-bcab-71562a85c386", "impact": "MEDIUM", "message": "We are facing issues with KVB bank UPI payments. ", "start_time": "2021-04-16T14:00:00+05:30", "status": "OPEN", "type": "UNSCHEDULED" }, "instruments": { "upi": { "issuers": [ "Karur Vysya Bank" ] } } }, "event_time": "2021-04-16T14:10:36+05:30", "type": "HEALTH_ALERT", "version": 1 }
Sample Payload When Incident is Resolved
{ "data": { "incident": { "end_time": "2021-04-16T18:20:24+05:30", "id": "INCIDENT_MEDIUM_KarurVysyaBank_a7259c79-25a8-4b86-bcab-71562a85c386", "impact": "MEDIUM", "message": "Payment mode up", "start_time": "2021-04-16T14:00:00+05:30", "status": "RESOLVED", "type": "UNSCHEDULED" }, "instruments": { "upi": { "issuers": [ "Karur Vysya Bank" ] } } }, "event_time": "2021-04-16T18:20:24+05:30", "type": "HEALTH_ALERT", "version": 1 }
Payload
- At least one of the children needs to be present
Wallet
Net Banking
UPI
Card
Payload Headers
Cashfree Payments will send two custom headers for every webhook being invoked by our system.
Signature Generation
The signature must be used to verify that the request has not been tampered with. To verify the signature at your end, you will need your Cashfree Payment Gateway secret key along with the payload.
//The payload here refers to the raw request sent by Cashfree to your endpoint. No modifications need to be done to this payload. payload := {"data":{"bank_name":"Test Bank", "card_type":"Visa","health":"DEGRADED", "incident_end_time":"2021-04-07T00:20:30", "incident_id":"INCIDENT_HIGH_Test Bank_954b95zz-f11a-test-abcd-0eb0e8608847", "incident_impact":"High", "incident_start_time":"2021-04-06T00:20", "incident_type":"Scheduled", "is_resolved":false, "issuers":[], "message":"We are facing high failure issues in Payment gateway at the moment and will keep you updated about the issue.", "payment_gateway":null,"payment_mode":"DEBIT_CARD", "scope":"PaymentMode"}, "type":"HEALTH_ALERT","version":1} # timestamp is present in the header x-webhook-timestamp timestamp := 1617695238078 signedPayload := $timestamp.$payload expectedSignature := Base64Encode(HMACSHA256($signedPayload, $merchantSecretKey))
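As an unofficial sketch of the same computation, the expected signature can be reproduced on the command line with OpenSSL. Here TIMESTAMP stands for the value of the x-webhook-timestamp header, PAYLOAD for the raw request body, and SECRET_KEY for your secret key (all three are placeholders):
expectedSignature=$(echo -n "${TIMESTAMP}${PAYLOAD}" | openssl dgst -sha256 -hmac "${SECRET_KEY}" -binary | base64)
If the computed value does not match the signature header sent with the webhook, discard the request as tampered.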
Bank Names
- Bank Name
- Axis Bank
- Bank of Baroda - Retail Banking
- Bank of India
- Bank of Maharashtra
- Canara Bank
- Catholic Syrian Bank
- Central Bank of India
- City Union Bank
- Deutsche Bank
- DBS Bank Ltd
- DCB Bank - Personal
- Dhanlakshmi Bank
- Federal Bank
- HDFC Bank
- ICICI Bank
- IDBI Bank
- IDFC Bank
- Indian Bank
- Indian Overseas Bank
- IndusInd Bank
- Jammu and Kashmir Bank
- Karnataka Bank Ltd
- Karur Vysya Bank
- Kotak Mahindra Bank
- Laxmi Vilas Bank - Retail Net Banking
- Punjab & Sind Bank
- Punjab National Bank - Retail Net Banking
- RBL Bank
- Saraswat Bank
- South Indian Bank
- Standard Chartered Bank
- State Bank Of India
- Tamilnad Mercantile Bank Ltd
- UCO Bank
- Union Bank of India
- Yes Bank Ltd
- Bank of Baroda - Corporate
- Bank of India - Corporate
- DCB Bank - Corporate
- Lakshmi Vilas Bank - Corporate
- Punjab National Bank - Corporate
- State Bank of India - Corporate
- Union Bank of India - Corporate
- Axis Bank Corporate
- Dhanlaxmi Bank Corporate
- ICICI Corporate Netbanking
- Ratnakar Corporate Banking
- Shamrao Vithal Bank Corporate
- Equitas Small Finance Bank
- Yes Bank Corporate
- Bandhan Bank- Corporate banking
- Barclays Corporate- Corporate Banking - Corporate
- Indian Overseas Bank Corporate
- City Union Bank of Corporate
- HDFC Corporate
- Shivalik Bank
- AU Small Finance
- Bandhan Bank - Retail Net Banking
- Shamrao Vitthal Co-operative Bank
- Tamil Nadu State Co-operative Bank
All Banks (Note: This parameter will be sent when all banks are having a downtime)
Wallet Names
- Name of the wallet
- FreeCharge
- MobiKwik
- Ola Money
- Reliance Jio Money
- Airtel Money
- Paytm
- Amazon Pay
- PhonePe
domain value in your Chartbeat tag set to your staging site ID (e.g. staging.mysite.com) when running on staging, and your production site ID (e.g. mysite.com) when in production.
config.domain variable in our snippet.
config.useCanonical = true, or it should match the value assigned to
config.path if your site makes use of our custom path variable. Sometimes this value is just the path portion of the URL, and sometimes it includes your domain name as well.
config.useCanonicalDomain is set to
true in your Chartbeat code, this key should carry the domain set in your canonical URL. If that variable is not set to true, we will access the domain from
document.location.
config.uid in our snippet.
config.sections variable in our snippet.
config.authors variable in our snippet.
In the "Dispatches" tab you will find all of the forms which have been Dispatched to your device. These are usually forms which have pre-populated answers that need to be completed.
They can be dispatched to your device from the Device Magic Dashboard, a different device, or 3rd party software you have connected to Device Magic via our API.
Once the form is Dispatched, it shows up on the device partially completed. Answers are already populated in fields in the form, and you can either update those answers or simply complete the form. This is handy for things like inspection approvals or parts requests, among others.
Viewing Dispatch Locations on the Mobile App:
Note: To learn how to add locations to your dispatches, view this article.
If you would like this feature enabled on your account, please contact [email protected].
Dispatched forms that include Locations will have a map icon in the Dispatches tab.
To view the Dispatch locations on a map, click the Map icon at the top of the screen.
You will then be directed to a map that shows the Dispatch locations represented by numbered pins.
Click on a pin or a dispatch in the list below the map to select the dispatch. Then click "Open" to open the dispatched form you have selected.
Swipe left and right across the tab to toggle between entries. By clicking "Get Directions", the Maps app will open and the directions to the dropped pin will populate.
To navigate away from the map and back to your list of Dispatches, click the List icon at the top.
To learn more about how Dispatch works, click here.
Other Useful Articles:
If you have any questions or comments feel free to send us a message at [email protected]. | https://docs.devicemagic.com/en/articles/709786-dispatches-tab | 2022-09-25T05:19:27 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.devicemagic.com |
Your Child's Android tablet or phone needs permission to run the Connect App without limits from the battery power saver.
Applies to
Insights and Premium Members
Child's Device
- Android 8 / 8.1 Oreo (2017)
- Android 9 Pie (2018)
Parent Notifications
- To do in Connect App
- Connect App pop-up notification
Excludes: Devices in the home only using the filtered Family WiFi on the Family Zone Box in Australia and New Zealand
Your next steps
Older versions of Android hardware would turn off the tools used by the Connect App to provide cyber safe web content at all times on your Child's phone or tablet.
Your Child has an Android 10 or newer Device
Clear the Red Alert from the Connect App on a Parent's phone or tablet. No changes are needed for the Battery Management in Android 10 or newer.
You have your Child's Android 8 or Android 9 Device
First, you will need to update your Child's Android Operating System to ensure the battery management software is current.
As Google has updated their Android Operating System, they have stopped supporting older versions of Android. The Connect App is updated to work with the versions of Android currently supported by Android manufacturers.
To find the version of Android on your Child's phone and if an update is available see:
Next, reboot the phone or tablet and complete the following steps.
- Open the Connect App
- In the red banner at the top of the screen, Tap to fix
The Android Settings will open
- Tap on Battery Optimization
- On the drop-down menu, select All Apps
- Tap on Connect App
- Select Don’t optimize or Not optimized
Tap Done
- Tap the left arrow to return to the Connect App
- Restart your Child's Android 8 Device one last time to refresh the battery manager
If you have any issues with battery optimization settings in Android 9 Pie (2018), see Google Android:
Your Child's Android is more than 5 years old
The Alert may be appearing because the Android phone or tablet is running an unsupported, older version of the Android Operating System. Check to see if the version of Android your Child is using is still supported.
To find the version of Android on your Child's phone and if an update is available see:
As Google has updated their Android Operating System, they have stopped supporting older versions of Android. The Connect App is updated to work with the current versions of Android.
Find the versions of Android we support here:
We will do our best to support older Android devices where possible.
Due Date
Available as of Gravity Flow v2.5.
The display format for the due date is based on the WordPress date settings.
Additional display options for due date details
- Notifications
via modifiers of the {current_step} merge tag
{current_step:due_date} will show the month, day and year
{current_step:due_status} will show whether the entry is 'Pending' or 'Overdue' which is very useful for reminder notifications
- Inbox shortcode column
Add the due date to your inbox shortcodes: [gravityflow page="inbox" due_date="true"]
- Status shortcode column
Add the due date to your status shortcodes: [gravityflow page="status" due_date="true"] | https://docs.gravityflow.io/article/261-due-date | 2022-09-25T05:00:00 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e4a9be4b027e1978e1d69/images/61f189a168cd260cc2d34c15/file-AlrqYeKYJE.png',
Alerts report displays a list of alerts triggered during a specified time interval.
- Select a client from the All Clients list.
- Select Reports > Add > Alert Report.
- To fetch the list of reports relevant to the filtered attributes, select the attributes and click Generate Now.
- Click Apply Schedule to generate the alerts report at a specific time. | https://docs.opsramp.com/platform-features/feature-guides/alerts-templates/create-alert-reports/ | 2022-09-25T04:32:24 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.opsramp.com |
Documentation
Documentation
The Conference Documentation
Quick Links:
Translate and Get a Premium theme for FREE
Getting Started
Introduction
WordPress Requirements & Checklists
Theme Installation & Activation
How to Install & Activate The Conference WordPress Theme?
Recommended Plugins
What are the Recommended Image Sizes?
How to Import Demo Content?
Configure Header & Footer Section
How to configure Site Logo/ Name & Tagline to your website?
How to Create & Edit Navigation Menu?
How to Add Footer Widgets?
How to Add Custom Link in Header Section?
Appearance Settings
How to Change Theme Colors?
How to change website Background?
How to Configure Layout Settings?
Homepage Settings
How to Set up the Front/Landing/Home Page and Blog Page?
How to Configure Banner Section?
How to Configure About Section?
How to Configure Stat Counter Section?
How to Configure Recent Conferences Section?
How to Configure Speakers Section?
How to Configure Testimonial Section?
How to Configure Call To Action Section?
How to Configure Blog Section?
How to Configure Contact Section?
How to Configure Google Map Section?
Page Settings
How to Configure Post and Pages Settings?
General Settings
How to Setup Newsletter on your Website?
SEO & Performance Settings
How to Add Google Analytics to your website?
How to Configure SEO Settings?
FAQ's
How to edit footer copyright information?
How to add Additional CSS Codes?
How to change the Logo Size of your website?
Why is the Customizer not showing up?
How to Update Theme & Plugins? | https://docs.rarathemes.com/docs/the-conference/ | 2022-09-25T04:23:11 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.rarathemes.com |
Script task
Script task activity executes processes that modeler or implementer defines a script in a language that the engine can interpret.
You can use the Script format as shown in the examples.Groovy JavaScript
You can use the Script as shown in the example.
var today = new Date();
var a = new Date(today.getFullYear(), today.getMonth()+1, 0); // day 0 of the next month is the last day of the current month
var b = a.toString();
execution.setVariable("tarih", b); // store the result in the process variable "tarih"
You can use the Execution listeners as shown in the examples: Start, End, Take.
You can use the Multi-instance type as shown in the examples.
None: the default; only one instance is created.
Parallel: activities are created in parallel. This is a good practice for the User task activity.
Sequential: activities are created sequentially. This is a good practice for the Service task activity.
You can use the Cardinality (Multi-instance) as shown in the examples: ${number}, 2.
You can use the Collection (Multi-instance) as shown in the example: 2.
You can use the Element variable (Multi-instance) as shown in the example: elementvar.
You can use the Completion Condition (Multi-instance) as shown in the example: true.
DEPRECATION WARNING
This documentation is not using the current rendering mechanism and is probably outdated. The extension maintainer should switch to the new system. Details on how to use the rendering mechanism can be found here.
What does it do?¶
- This extension implements the master template based on TypoScript and HTML, provided by the NRW user group on the 8th July 2007. See the news list typo3.ug.nrw. Their German and English manual has been published here by Bernd Wilke: Newslist announcement
- The constants and setup of the “Master Template” must be included manually using include directives in the format <INCLUDE_TYPOSCRIPT: source=”…” >. | https://docs.typo3.org/typo3cms/extensions/mastertemplate/stable/ExtMasterTemplate/Introduction/WhatDoesItDo/Index.html | 2022-09-25T05:59:00 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.typo3.org |
The SQLGetInfo function returns information about the driver and data source. To find out whether a specific function is supported in the driver, call SQLGetFunctions.
For more information about the ODBC interface, see the ODBC Programmer's Reference.
ODBC Driver for WordPress supports all deprecated functions for backward compatibility.
The following table lists the currently supported ODBC functions. | https://docs.devart.com/odbc/wordpress/supported_odbc_api_functions.htm | 2022-09-25T04:09:37 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.devart.com |
Applies to: Premium Members
My child’s Age Profile didn’t automatically update
Age Profiles and the associated internet rules should update on the night of your child’s birthday assuming the settings haven’t been changed and their date of birth is correct.
- Check and update your child’s date of birth
Change a Child's Details
- Put your Child in a younger or older age group
Change Your Child's Age Profile
- Reset your Child's settings to allow the Age Profile to update on their birthday
Troubleshooting Automatic Updates to Age Profiles
I can’t see any settings in the Internet Filtering Rules
When you access the Internet Filtering Rules you should be able to see the default Online Safety Expert Profile that has been applied; the red time periods are when that category is blocked, and the blue ones are allowed.
- Log into your account and check the Internet Filtering Rules in another browser or the Family Zone app on your parent device to see if they show.
Customize Filtering Rules
- Make sure that the internet rules haven’t been changed by showing overrides
Reset Internet Filtering Rules
The Internet Filtering Rules aren’t applying to my child’s device
If your child is accessing something that they shouldn’t, or they can’t access something that they should, it could indicate an issue with the device or your Internet Filtering Rules.
- Check the website to be sure that it is blocked or allowed
Check a Website Rating
- On your Child's device, open Safari, Chrome or a web browser and go to the diagnostic page at
- Check the device is assigned to your child
Check Device Restrictions
- Check that the right Age Profile has been applied to your child
Change a Child's Details
- Check that Family Zone filtering hasn’t been turned off
Temporarily Turn Off Filtering a Child's Phone or Tablet
- Check that they are not on a Safe Network
Check Safe Networks on a Child's Phone or Tablet
- Reset the Internet Filtering Rules
Reset Internet Filtering Rules
- Create a new user and move the devices onto the new user
Add a Child
Then remove the old Child profile
Remove a Child or Parent | https://docs.familyzone.com/help/troubleshooting-expert-profiles | 2022-09-25T05:20:10 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.familyzone.com |
$ sysctl -a |grep commit
In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. Overcommitment might be desirable in development environments where a trade-off of guaranteed performance for capacity is acceptable. Administrators can override the ratio between request and limit set on developer containers; in conjunction with a per-project LimitRange object specifying limits and defaults, this adjusts the container limit and request to achieve the desired level of overcommit.
After these overrides, the container limits and requests must still be validated by any LimitRange objects in the project. Configure this capability and LimitRange objects with caution.
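As a rough illustration, you can check the kernel overcommit settings this behavior relies on directly on a node. The values shown below (vm.overcommit_memory = 1 and vm.panic_on_oom = 0) are the ones typically expected on an overcommit-enabled node, but treat them as indicative rather than authoritative:
$ sysctl vm.overcommit_memory vm.panic_on_oom
vm.overcommit_memory = 1
vm.panic_on_oom = 0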
apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators labels: openshift.io/cluster-monitoring: "true"
Hardware accelerator cards from Intel accelerate 4G/LTE and 5G Virtualized Radio Access Networks (vRAN) workloads. This in turn increases the overall compute capacity of a commercial, off-the-shelf platform.
The vRAN Dedicated Accelerator ACC100, based on Intel eASIC technology is designed to offload and accelerate the computing-intensive process of forward error correction (FEC) for 4G/LTE and 5G technology, freeing up processing power. Intel eASIC devices are structured ASICs, an intermediate technology between FPGAs and standard application-specific integrated circuits (ASICs).
Intel vRAN Dedicated Accelerator ACC100 support on OpenShift Container Platform uses one Operator:
OpenNESS Operator for Wireless FEC Accelerators
The role of the OpenNESS Operator for Intel Wireless forward error correction (FEC) Accelerator is to orchestrate and manage the devices exposed by a range of Intel vRAN FEC acceleration hardware within the OpenShift Container Platform cluster.
One of the most compute-intensive 4G/LTE and 5G workloads is RAN layer 1 (L1) FEC. FEC resolves data transmission errors over unreliable or noisy communication channels. FEC technology detects and corrects a limited number of errors in 4G/LTE or 5G data without the need for retransmission.
The FEC device provided by the Intel vRAN Dedicated Accelerator ACC100 supports the vRAN use case.
The OpenNESS SR-IOV Operator for Wireless FEC Accelerators provides functionality to create virtual functions (VFs) for the FEC device, binds them to appropriate drivers, and configures the VFs queues for functionality in 4G/LTE or 5G deployment.
As a cluster administrator, you can install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the OpenShift Container Platform CLI or the web console.
As a cluster administrator, you can install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the CLI.
A cluster installed on bare-metal hardware.
Install the OpenShift CLI (
oc).
Log in as a user with
cluster-admin privileges.
Create a namespace for the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by completing the following actions:
Define the
vran-acceleration-operators namespace by creating a file named
sriov-namespace.yaml as shown in the following example:
apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators labels: openshift.io/cluster-monitoring: "true"
Create the namespace by running the following command:
$ oc create -f sriov-namespace.yaml
Install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators in the namespace you created in the previous step by creating the following objects:
Create the following
OperatorGroup custom resource (CR) and save the YAML in the
sriov-operatorgroup.yaml file:
apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators spec: targetNamespaces: - vran-acceleration-operators
Create the
OperatorGroup CR by running the following command:
$ oc create -f sriov-operatorgroup.yaml
Run the following command to get the
channel value required for the next step.
$ oc get packagemanifest sriov-fec -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'
stable
Create the following Subscription CR and save the YAML in the
sriov-sub.yaml file:
apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators spec: channel: "<channel>" (1) name: sriov-fec source: certified-operators (2) sourceNamespace: openshift-marketplace
Create the
Subscription CR by running the following command:
$ oc create -f sriov-sub.yaml
Verify that the Operator is installed:
$ oc get csv -n vran-acceleration-operators -o custom-columns=Name:.metadata.name,Phase:.status.phase
Name Phase sriov-fec.v1.1.0 Succeeded
As a cluster administrator, you can install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the web console.
Install the OpenNESS SR-IOV Operator for Wireless FEC Accelerators by using the OpenShift Container Platform web console:
In the OpenShift Container Platform web console, click Operators → OperatorHub.
Choose OpenNESS SR-IOV Operator for Wireless FEC Accelerators from the list of available Operators, and then click Install.
On the Install Operator page, select All namespaces on the cluster. Then, click Install.
Optional: Verify that the SRIOV-FEC Operator is installed successfully:
Switch to the Operators → Installed Operators page.
Ensure that OpenNESS SR-IOV Operator for Wireless FEC Accelerators is listed in the vran-acceleration-operators project with a Status of InstallSucceeded.
If the console does not indicate that the Operator is installed, perform the following troubleshooting steps:
Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
Go to the Workloads → Pods page and check the logs for pods in the
vran-acceleration-operators project.
Programming the Intel vRAN Dedicated Accelerator ACC100 exposes the Single Root I/O Virtualization (SRIOV) virtual function (VF) devices that are then used to accelerate the forward error correction (FEC) in the vRAN workload. The Intel vRAN Dedicated Accelerator ACC100 accelerates 4G and 5G Virtualized Radio Access Networks (vRAN) workloads. This in turn increases the overall compute capacity of a commercial, off-the-shelf platform. This device is also known as Mount Bryce.
The SR-IOV-FEC Operator handles the management of the FEC devices that are used to accelerate the FEC process in vRAN L1 applications.
Configuring the SR-IOV-FEC Operator involves:
Creating the virtual functions (VFs) for the FEC device
Binding the VFs to the appropriate drivers
Configuring the VF queues for desired functionality in a 4G or 5G deployment
The role of forward error correction (FEC) is to correct transmission errors, where certain bits in a message can be lost or garbled. Messages can be lost or garbled due to noise in the transmission media, interference, or low signal strength. Without FEC, a garbled message would have to be resent, adding to the network load and impacting throughput and latency.
Intel FPGA ACC100 5G/4G card.
Node or nodes installed with the OpenNESS Operator for Wireless FEC Accelerators.
Enable global SR-IOV and VT-d settings in the BIOS for the node.
RT kernel configured with Performance Addon Operator.
Log in as a user with
cluster-admin privileges.
Change to the
vran-acceleration-operators project:
$ oc project vran-acceleration-operators
Verify that the SR-IOV-FEC Operator is installed:
$ oc get csv -o custom-columns=Name:.metadata.name,Phase:.status.phase
Name Phase sriov-fec.v1.1.0 Succeeded
Verify that the
sriov-fec pods are running:
$ oc get pods
NAME READY STATUS RESTARTS AGE sriov-device-plugin-j5jlv 1/1 Running 1 15d sriov-fec-controller-manager-85b6b8f4d4-gd2qg 1/1 Running 1 15d sriov-fec-daemonset-kqqs6 1/1 Running 1 15d
sriov-device-plugin expose the FEC virtual functions as resources under the node
sriov-fec-controller-manager applies CR to the node and maintains the operands containers
sriov-fec-daemonset is responsible for:
Discovering the SRIOV NICs on each node.
Syncing the status of the custom resource (CR) defined in step 6.
Taking the spec of the CR as input and configuring the discovered NICs.
Retrieve all the nodes containing one of the supported vRAN FEC accelerator devices:
$ oc get sriovfecnodeconfig
NAME CONFIGURED node1 Succeeded
Find the physical function (PF) of the SR-IOV FEC accelerator device to configure:
$ oc get sriovfecnodeconfig node1 -o yaml
status: conditions: - lastTransitionTime: "2021-03-19T17:19:37Z" message: Configured successfully observedGeneration: 1 reason: ConfigurationSucceeded status: "True" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c driver: "" maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 (1) vendorID: "8086" virtualFunctions: [] (2)
Configure the number of virtual functions and queue groups on the FEC device:
Create the following custom resource (CR) and save the YAML in the
sriovfec_acc100cr.yaml file:
apiVersion: sriovfec.intel.com/v1 kind: SriovFecClusterConfig metadata: name: config (1) spec: nodes: - nodeName: node1 (2) physicalFunctions: - pciAddress: 0000:af:00.0 (3) pfDriver: "pci-pf-stub" vfDriver: "vfio-pci" vfAmount: 16 (4) bbDevConfig: acc100: # Programming mode: 0 = VF Programming, 1 = PF Programming pfMode: false numVfBundles: 16 maxQueueSize: 1024 uplink4G: numQueueGroups: 0 numAqsPerGroups: 16 aqDepthLog2: 4 downlink4G: numQueueGroups: 0 numAqsPerGroups: 16 aqDepthLog2: 4 uplink5G: numQueueGroups: 4 numAqsPerGroups: 16 aqDepthLog2: 4 downlink5G: numQueueGroups: 4 numAqsPerGroups: 16 aqDepthLog2: 4
Apply the CR:
$ oc apply -f sriovfec_acc100cr.yaml
After applying the CR, the SR-IOV FEC daemon starts configuring the FEC device.
Check the status:
$ oc get sriovfecclusterconfig config -o yaml
status: conditions: - lastTransitionTime: "2021-03-19T11:46:22Z" message: Configured successfully observedGeneration: 1 reason: Succeeded status: "True" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c driver: pci-pf-stub maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 vendorID: "8086" virtualFunctions: - deviceID: 0d5d
Check the logs:
Determine the pod name of the SR-IOV daemon:
$ oc get po -o wide | grep sriov-fec-daemonset | grep node1
sriov-fec-daemonset-kqqs6 1/1 Running 0 19h
View the logs:
$ oc logs sriov-fec-daemonset-kqqs6
{"level":"Level(-2)","ts":1616794345.4786215,"logger":"daemon.drainhelper.cordonAndDrain()","msg":"node drained"} {"level":"Level(-4)","ts":1616794345.4786265,"logger":"daemon.drainhelper.Run()","msg":"worker function - start"} {"level":"Level(-4)","ts":1616794345.5762916,"logger":"daemon.NodeConfigurator.applyConfig","msg":"current node status","inventory":{"sriovAccelerat ors":[{"vendorID":"8086","deviceID":"0b32","pciAddress":"0000:20:00.0","driver":"","maxVirtualFunctions":1,"virtualFunctions":[]},{"vendorID":"8086" ,"deviceID":"0d5c","pciAddress":"0000:af:00.0","driver":"","maxVirtualFunctions":16,"virtualFunctions":[]}]}} {"level":"Level(-4)","ts":1616794345.5763638,"logger":"daemon.NodeConfigurator.applyConfig","msg":"configuring PF","requestedConfig":{"pciAddress":" 0000:af:00.0","pfDriver":"pci-pf-stub","vfDriver":"vfio-pci","vfAmount":2,"bbDevConfig":{"acc100":{"pfMode":false,"numVfBundles":16,"maxQueueSize":1 024,"uplink4G":{"numQueueGroups":4,"numAqsPerGroups":16,"aqDepthLog2":4},"downlink4G":{"numQueueGroups":4,"numAqsPerGroups":16,"aqDepthLog2":4},"uplink5G":{"numQueueGroups":0,"numAqsPerGroups":16,"aqDepthLog2":4},"downlink5G":{"numQueueGroups":0,"numAqsPerGroups":16,"aqDepthLog2":4}}}}} {"level":"Level(-4)","ts":1616794345.5774765,"logger":"daemon.NodeConfigurator.loadModule","msg":"executing command","cmd":"/usr/sbin/chroot /host/ modprobe pci-pf-stub"} {"level":"Level(-4)","ts":1616794345.5842702,"logger":"daemon.NodeConfigurator.loadModule","msg":"commands output","output":""} {"level":"Level(-4)","ts":1616794345.5843055,"logger":"daemon.NodeConfigurator.loadModule","msg":"executing command","cmd":"/usr/sbin/chroot /host/ modprobe vfio-pci"} {"level":"Level(-4)","ts":1616794345.6090655,"logger":"daemon.NodeConfigurator.loadModule","msg":"commands output","output":""} {"level":"Level(-2)","ts":1616794345.6091156,"logger":"daemon.NodeConfigurator","msg":"device's driver_override path","path":"/sys/bus/pci/devices/0000:af:00.0/driver_override"} {"level":"Level(-2)","ts":1616794345.6091807,"logger":"daemon.NodeConfigurator","msg":"driver bind path","path":"/sys/bus/pci/drivers/pci-pf-stub/bind"} {"level":"Level(-2)","ts":1616794345.7488534,"logger":"daemon.NodeConfigurator","msg":"device's driver_override path","path":"/sys/bus/pci/devices/0000:b0:00.0/driver_override"} {"level":"Level(-2)","ts":1616794345.748938,"logger":"daemon.NodeConfigurator","msg":"driver bind path","path":"/sys/bus/pci/drivers/vfio-pci/bind"} {"level":"Level(-2)","ts":1616794345.7492096,"logger":"daemon.NodeConfigurator","msg":"device's driver_override path","path":"/sys/bus/pci/devices/0000:b0:00.1/driver_override"} {"level":"Level(-2)","ts":1616794345.7492566,"logger":"daemon.NodeConfigurator","msg":"driver bind path","path":"/sys/bus/pci/drivers/vfio-pci/bind"} {"level":"Level(-4)","ts":1616794345.74968,"logger":"daemon.NodeConfigurator.applyConfig","msg":"executing command","cmd":"/sriov_workdir/pf_bb_config ACC100 -c /sriov_artifacts/0000:af:00.0.ini -p 0000:af:00.0"} {"level":"Level(-4)","ts":1616794346.5203931,"logger":"daemon.NodeConfigurator.applyConfig","msg":"commands output","output":"Queue Groups: 0 5GUL, 0 5GDL, 4 4GUL, 4 4GDL\nNumber of 5GUL engines 8\nConfiguration in VF mode\nPF ACC100 configuration complete\nACC100 PF [0000:af:00.0] configuration complete!\n\n"} {"level":"Level(-4)","ts":1616794346.520459,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"executing command","cmd":"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND"} 
{"level":"Level(-4)","ts":1616794346.5458736,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"commands output","output":"0000:af:00.0 @04 = 0142\n"} {"level":"Level(-4)","ts":1616794346.5459251,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"executing command","cmd":"/usr/sbin/chroot /host/ setpci -v -s 0000:af:00.0 COMMAND=0146"} {"level":"Level(-4)","ts":1616794346.5795262,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"commands output","output":"0000:af:00.0 @04 0146\n"} {"level":"Level(-2)","ts":1616794346.5795407,"logger":"daemon.NodeConfigurator.enableMasterBus","msg":"MasterBus set","pci":"0000:af:00.0","output":"0000:af:00.0 @04 0146\n"} {"level":"Level(-4)","ts":1616794346.6867144,"logger":"daemon.drainhelper.Run()","msg":"worker function - end","performUncordon":true} {"level":"Level(-4)","ts":1616794346.6867719,"logger":"daemon.drainhelper.Run()","msg":"uncordoning node"} {"level":"Level(-4)","ts":1616794346.6896322,"logger":"daemon.drainhelper.uncordon()","msg":"starting uncordon attempts"} {"level":"Level(-2)","ts":1616794346.69735,"logger":"daemon.drainhelper.uncordon()","msg":"node uncordoned"} {"level":"Level(-4)","ts":1616794346.6973662,"logger":"daemon.drainhelper.Run()","msg":"cancelling the context to finish the leadership"} {"level":"Level(-4)","ts":1616794346.7029872,"logger":"daemon.drainhelper.Run()","msg":"stopped leading"} {"level":"Level(-4)","ts":1616794346.7030034,"logger":"daemon.drainhelper","msg":"releasing the lock (bug mitigation)"} {"level":"Level(-4)","ts":1616794346.8040674,"logger":"daemon.updateInventory","msg":"obtained inventory","inv":{"sriovAccelerators":[{"vendorID":"8086","deviceID":"0b32","pciAddress":"0000:20:00.0","driver":"","maxVirtualFunctions":1,"virtualFunctions":[]},{"vendorID":"8086","deviceID":"0d5c","pciAddress":"0000:af:00.0","driver":"pci-pf-stub","maxVirtualFunctions":16,"virtualFunctions":[{"pciAddress":"0000:b0:00.0","driver":"vfio-pci","deviceID":"0d5d"},{"pciAddress":"0000:b0:00.1","driver":"vfio-pci","deviceID":"0d5d"}]}]}} {"level":"Level(-4)","ts":1616794346.9058325,"logger":"daemon","msg":"Update ignored, generation unchanged"} {"level":"Level(-2)","ts":1616794346.9065044,"logger":"daemon.Reconcile","msg":"Reconciled","namespace":"vran-acceleration-operators","name":"pg-itengdvs02r.altera.com"}
Check the FEC configuration of the card:
$ oc get sriovfecnodeconfig node1 -o yaml
status: conditions: - lastTransitionTime: "2021-03-19T11:46:22Z" message: Configured successfully observedGeneration: 1 reason: Succeeded status: "True" type: Configured inventory: sriovAccelerators: - deviceID: 0d5c (1) driver: pci-pf-stub maxVirtualFunctions: 16 pciAddress: 0000:af:00.0 vendorID: "8086" virtualFunctions: - deviceID: 0d5d (2)
OpenNESS is an edge computing software toolkit that you can use to onboard and manage applications and network functions on any type of network.
To verify all OpenNESS features are working together, including SR-IOV binding, the device plugin, Wireless Base Band Device (bbdev) configuration, and SR-IOV (FEC) VF functionality inside a non-root pod, you can build an image and run a simple validation application for the device.
For more information, go to openess.org.
Node or nodes installed with the OpenNESS SR-IOV Operator for Wireless FEC Accelerators.
Real-Time kernel and huge pages configured with the Performance Addon Operator.
Create a namespace for the test by completing the following actions:
Define the
test-bbdev namespace by creating a file named
test-bbdev-namespace.yaml file as shown in the following example:
apiVersion: v1 kind: Namespace metadata: name: test-bbdev labels: openshift.io/run-level: "1"
Create the namespace by running the following command:
$ oc create -f test-bbdev-namespace.yaml
Create the following
Pod specification, and then save the YAML in the
pod-test.yaml file:
apiVersion: v1 kind: Pod metadata: name: pod-bbdev-sample-app namespace: test-bbdev (1) spec: containers: - securityContext: privileged: false capabilities: add: - IPC_LOCK - SYS_NICE name: bbdev-sample-app image: bbdev-sample-app:1.0 (2) command: [ "sudo", "/bin/bash", "-c", "--" ] runAsUser: 0 (3) resources: requests: hugepages-1Gi: 4Gi (4) memory: 1Gi cpu: "4" (5) intel.com/intel_fec_acc100: '1' (6) limits: memory: 4Gi cpu: "4" hugepages-1Gi: 4Gi intel.com/intel_fec_acc100: '1'
Create the pod:
$ oc apply -f pod-test.yaml
Check that the pod is created:
$ oc get pods -n test-bbdev
NAME READY STATUS RESTARTS AGE pod-bbdev-sample-app 1/1 Running 0 80s
Use a remote shell to log in to the
pod-bbdev-sample-app:
$ oc rsh pod-bbdev-sample-app
sh-4.4#
Print the VF allocated to the pod:
sh-4.4# printenv | grep INTEL_FEC
PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100=0.0.0.0:1d.00.0 (1)
Change to the
test-bbdev directory.
sh-4.4# cd test/test-bbdev/
Check the CPUs that are assigned to the pod:
sh-4.4# export CPU=$(cat /sys/fs/cgroup/cpuset/cpuset.cpus) sh-4.4# echo ${CPU}
This prints out the CPUs that are assigned to the
fec.pod.
24,25,64,65
Run the
test-bbdev application to test the device:
sh-4.4# ./test-bbdev.py -e="-l ${CPU} -a ${PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100}" -c validation \ -n 64 -b 32 -l 1 -v ./test_vectors/*"
Executing: ../../build/app/dpdk-test-bbdev -l 24-25,64-65 0000:1d.00.0 -- -n 64 -l 1 -c validation -v ./test_vectors/bbdev_null.data -b 32 EAL: Detected 80 lcore(s) EAL: Detected 2 NUMA nodes Option -w, --pci-whitelist is deprecated, use -a, --allow option instead EAL: Multi-process socket /var/run/dpdk/rte/mp_socket EAL: Selected IOVA mode 'VA' EAL: Probing VFIO support... EAL: VFIO support initialized EAL: using IOMMU type 1 (Type 1) EAL: Probe PCI driver: intel_fpga_5ngr_fec_vf (8086:d90) device: 0000:1d.00.0 (socket 1) EAL: No legacy callbacks, legacy socket not created =========================================================== Starting Test Suite : BBdev Validation Tests Test vector file = ldpc_dec_v7813.data Device 0 queue 16 setup failed Allocated all queues (id=16) at prio0 on dev0 Device 0 queue 32 setup failed Allocated all queues (id=32) at prio1 on dev0 Device 0 queue 48 setup failed Allocated all queues (id=48) at prio2 on dev0 Device 0 queue 64 setup failed Allocated all queues (id=64) at prio3 on dev0 Device 0 queue 64 setup failed All queues on dev 0 allocated: 64 + ------------------------------------------------------- + == test: validation dev:0000:b0:00.0, burst size: 1, num ops: 1, op type: RTE_BBDEV_OP_LDPC_DEC Operation latency: avg: 23092 cycles, 10.0838 us min: 23092 cycles, 10.0838 us max: 23092 cycles, 10.0838 us TestCase [ 0] : validation_tc passed + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + Test Suite Summary : BBdev Validation Tests + Tests Total : 1 + Tests Skipped : 0 + Tests Passed : 1 (1) + Tests Failed : 0 + Tests Lasted : 177.67 ms + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + | https://docs.openshift.com/container-platform/4.7/scalability_and_performance/cnf-optimize-data-performance-acc100.html | 2022-09-25T05:13:14 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.openshift.com |
Telerik UI for Xamarin is a
professional grade UI component library for building modern and feature-rich applications. To try it out sign up for a free 30-day trial.
New to Telerik UI for Xamarin?
Telerik UI for Xamarin is a professional grade UI component library for building modern and feature-rich applications. To try it out sign up for a free 30-day trial.
Xamarin.iOS Wrappers
Telerik UI for Xamarin suite includes Xamarin.iOS wrappers built on top of truly native iOS components, allowing you to build unique and visually stunning iOS applications. Our controls give you great customization flexibility to accommodate as many app scenarios as possible.
To learn more please visit the Telerik UI for Xamarin product overview page.
Controls Overview
Our suite features the following controls for development with Xamarin.iOS:
Alert: Highly customizable alert view component that offers different predefined animations, easy to use Block API, many customization options.
AutoCompleteTextView: An input control that provides users with suggestions, based on the text or characters they’ve already typed into the search bar. Once the autocomplete shows a list of suggestions, the user can select one or more of them. The control provides means for easy customization and data management. To make working with data easier for developers,
TKAutoCompleteTextViewworks seamlessly with the Telerik DataSource control which serves as a mediator between the raw suggestions data and the UI component which serves as suggestion view.
Calendar: A calendar control that features week, month and year views as well as multiple dates selection and flexible API for customization.
Chart: A versatile charting component that offers full customization, great performance and intuitive object model. Its API allows creating complex charts with stunning animations and appearance.
DataForm:.
DataSource: It is a non-visual component that consumes data from various sources. It supports data shaping operations like sorting, filtering and grouping. It adopts the most used data enabled UI controls in iOS: UITableView and UICollectionView to automate the presentation of its data. TKDataSource works perfectly with TKListView, TKChart and TKCalendar too.
Gauges: A highly customizable component that allows you to show the current status of a value within a range of upper and lower bounds, illustrate progress towards a goal or a summary of a fluctuating metric..
SideDrawer: Helps you add extra space to your application. It extends the popular slide-out design pattern which is mainly associated with navigational purposes. The control is highly customizable and allows developers to embed any type of content inside the sliding panel.
Trial Version and Commercial License
Telerik UI for Xamarin Xamarin License Agreement to get acquainted with the full terms of use.
Support Options
For any issues you might encounter while working with Telerik UI for Xamarin, use any of the available support channels:
- UI for Xamarin license holders and active trialists can take advantage of the outstanding customer support delivered by the developers building the library. To submit a support ticket, use the UI for Xamarin dedicated support system.
- UI for Xamarin forums are part of the free support you can get from the community and from the UI for Xamarin team on all kinds of general issues.
- UI for Xamarin feedback portal provides information on the features in discussion and also the planned ones for release.
- You may still need a tailor-made solution for your project. In such cases, go straight to Progress Services. | https://docs.telerik.com/devtools/xamarin/nativecontrols/ios/introduction | 2022-09-25T04:43:18 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.telerik.com |
Introduction¶
What does it do?¶
With the TYPO3 Beaconizer you can harvest links from authority files (BEACON files), automatically enrich your website content with related links (“see also” links) and open your data to external applications with dynamically generated BEACON files.
BEACON is a simple text based file format to exchange hyperlinks. A BEACON file contains a 1-to-1 (or 1-to-n) mapping from normdata identifiers to links. Each link consists of an URL with an optional annotation (read more about BEACON files and their purpose in the English Wikipedia).
A BEACON file connects your TYPO3 pages and data to the outside world via harvestable links. At the same time you can provide your users with context related links in your detail views. Have a look at this poster which explains the benefits of using BEACON.
Features¶
The TYPO3 Beaconizer consists of three components:
- a scheduler job for harvesting links from BEACON files
- a BEACON generator for your data (you can map any TYPO3 table)
- a seeAlso plugin that generates context related links for authority identifiers
Screenshots¶
Credits¶
This extension is developed by the Digital Academy of the Academy of Sciences and Literature | Mainz for our Digital Humanities Projects. | https://docs.typo3.org/p/digicademy/beaconizer/main/en-us/Introduction/Index.html | 2022-09-25T04:25:41 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.typo3.org |
What was decided upon? (e.g. what has been updated or changed?) For the July 20 launch, add Text Me to this area and work later to get it in the Send To Tool Bar.
Why was this decided? (e.g. explain why this decision was reached. It may help to explain the way a procedure used to be handled pre-Alma) This should be available but the link will be closer to the call number and not in the send to area at go live. The send to area is controlled differently and we cannot add the feature without using the email format which is too long.
Who decided this? (e.g. what unit/group) User Interface
When was this decided?
Additional information or notes. | https://docs.library.vanderbilt.edu/2018/10/15/call-number-to-text/ | 2022-09-25T05:27:11 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.library.vanderbilt.edu |
Sample Event Subscription Layout
This command subscribes to the different events of checkout. As shown below, it is possible to register for more than one event hook at a time. The request body attributes are the same for every event subscription request. The attributes are described in the table below the sample.
Sample Request Body:
{ command: 'registerCallback', version: 1.0, method: 'POST', data:[ { callBackCommand : 'Checkout', hookName: 'checkoutStarted' }, { callBackCommand : 'Checkout', hookName: 'checkoutComplete' }, { callBackCommand : 'Checkout', hookName: 'checkoutSyncComplete' } ] }
Sample Response Body:
{ command: 'registerCallback', version: 1.0, method: 'POST', responseType: 'registered', hooks: //This is an array which shows which events the app //has attempted to register to. [ { //When the event has been successfully //registered, the response is as shown //below. event : 'Checkout', hookName: 'checkoutStarted', status: 200, message: 'Event registered successfully', error: null }, { event : 'Checkout', hookName: 'checkoutComplete', status: 200, message: 'Event registered successfully', error: null }, { //When the event registration is //unsuccessful, the response is as //shown below. event : 'Checkout', hookName: 'checkoutSyncComplete', status: 402, message: null, error: 'Error occurred when registering call back events, please check your request format.' } ] }
Sample Triggered Callback Response Body:
{ command: 'registerCallback', version: 1.0, method: 'POST', responseType: 'triggered', callbackCommand: 'customerAdded', hookName: 'customerAddedToCart', data: { email: '[email protected]' } }
Attributes Description:
This table details the properties of each attribute in an event subscription request.
Updated 11 months ago
Did this page help you? | https://docs.oliverpos.com/docs/sample-event-subscription-layout | 2022-09-25T05:52:15 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.oliverpos.com |
Multiple notifications are delivered for the same Pulse post
Valid from Pega Version 7.3
When a user posts a Pulse comment, the users who subscribe to the activity feed might receive more than one notification if the posted comment triggers several notification rules.
For example, when a user posts a comment on a case that he or she follows, any response to the comment can result in the following notifications:
- Notification when a new comment is posted on the followed case (triggers the pyAddPulsePost notification definition)
- Notification when a comment is a response to the user’s original post (triggers the pyAddPulsePost notification definition)
- Notification when the user is mentioned in the comment (triggers the pyAddUserMentionedPost notification definition) | https://docs.pega.com/platform/release-notes-archive?f%5B0%5D=%3A9031&f%5B1%5D=%3A31541&f%5B2%5D=releases_capability%3A9026&f%5B3%5D=releases_capability%3A9051&f%5B4%5D=releases_capability%3A9061&f%5B5%5D=releases_capability%3A9096&f%5B6%5D=releases_note_type%3A985&f%5B7%5D=releases_version%3A7081&f%5B8%5D=releases_version%3A7106&f%5B9%5D=releases_version%3A7111&f%5B10%5D=releases_version%3A7126 | 2022-09-25T06:18:20 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['https://docs.pega.com/sites/default/files/images/release-notes/138771_warning.png',
'Warning message dialog'], dtype=object)
array(['https://docs.pega.com/sites/default/files/images/PRPC/articles/157331/cellproperties.png',
'Cell Properties dialog'], dtype=object) ] | docs.pega.com |
Manage metrics data that is received from a UDP data input. If your system uses multiple pipeline sets, use a TCP or HTTP Event Collector data input for metrics data. For more about metrics, see the Metrics manual..! | https://docs.splunk.com/Documentation/Splunk/7.2.0/Indexer/Pipelinesets | 2022-09-25T05:40:18 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
How to Write Email for Requesting Something
When writing an email to request something, there are many things to keep in mind. You’ll need to be direct and friendly in the subject line, but still have a professional tone. If you’re sending an official request, you may want to use Dear, while a casual email may be best written as “Hey.” Make sure to include the sender’s full name, position, or company name in the subject line. Otherwise, the project could be delayed or halted if the sender’s name or title are not clear.
Writing a subject line
Writing a subject line for an email is essential to attracting the attention of your audience. People sign up for marketing emails expecting information that will help them make the right decision. For this purpose, you can entertain them with something fun or interesting or warn them of the dangers of Lyme disease and ticks. For a creative approach, you can use alliteration, a technique used to draw attention.
The subject line should communicate what the email is about, what the recipient should do once they open the message. A subject line that is vague or unclear will irritate people and decrease the chances of them opening the email. For example, instead of using “Please respond,” write “Thoughts on X topic are required” or “FYI.”
Providing proof of need
When writing an email, providing proof of need can be very helpful in persuading the reader to comply with your request. This is especially true for requests for donations or volunteer work. The reader will be more likely to comply with your request if they feel that they will benefit from the action. As a result, a clear description of how the request will benefit the reader is an essential part of the email.
Providing supporting documents
When writing an email for requesting something, provide supplemental documents to support your request. Supplemental documents can help you prove your point and prove that you have the proper information to comply. Including them with your request will add credibility to your request and provide the reader with all the information they need to comply. Here are some examples of how to provide supporting documents when writing email for requesting something. You may want to consider incorporating your supporting documents in the body of your email.
Before you start writing your letter, organize your thoughts and write out your request. The types of documents to include will depend on the type of request. Make a list of all the benefits of the requested item and the steps you need to take to achieve your goal. In addition, gather any supporting documents needed. For example, if you are requesting money from a company, you can include the annual property tax statement.
Keeping a professional attitude
Keeping a professional attitude when writing email is important. You may be requesting information, but you should remember that your email will be a permanent record of your interaction with the recipient. Keeping a professional attitude can go a long way in establishing a positive relationship. By using basic manners and displaying gratitude, you will be building a foundation for future business relations. It can even help you get what you want, such as a job or a relationship.
In the body of your email, state clearly what you are requesting. Make sure to show that you admire their work or appreciate their efforts. Introduce yourself and include your full name and job title. Then, state your request and provide any other details the recipient may need. If there are any documents or files attached to your email, make sure to include them as well. In your final paragraph, summarize the request in an easy-to-read manner. | https://authentic-docs.com/how-to-write-email-for-requesting-something/ | 2022-09-25T05:52:46 | CC-MAIN-2022-40 | 1664030334514.38 | [] | authentic-docs.com |
Missing-value Fisher-z test
Perform a testwise-deletion Fisher-z independence test to data sets with missing values. With testwise-deletion, the test makes use of all data points that do not have missing values for the variables involved in the test.
Usage
from causallearn.utils.cit import CIT mv_fisherz_obj = CIT(data_with_missingness, "mv_fisherz") # construct a CIT instance with data and method name pValue = mv mv_fisherz cg = pc(data_with_missingness, 0.05, mv_fisherz)
explicitly calculating the p-value of a test: then you need to declare the
mv_fisherz_objand then call it as above, instead of using
mv_fisherz(data, X, Y, condition_set)as before. Note that now
causallearn.utils.cit.mv_fisherzis a string
"mv, “mv_fisherz”.
kwargs: e.g.,
cache_path. See Advanced Usages.
Returns
p: the p-value of the test. | https://causal-learn.readthedocs.io/en/latest/independence_tests_index/mvfisherz.html | 2022-09-25T04:00:44 | CC-MAIN-2022-40 | 1664030334514.38 | [] | causal-learn.readthedocs.io |
In this article we'll be covering how to move an assigned device back to "waiting for approval". In order to do this, all you have to do is go to your "Devices" page which you can access from your Management Console by clicking "Devices".
Once you are there, you will see a gear icon for each Device. Click it and select Edit:
You will see that the Status has been updated to Pending Assignment. You can re-approve the Device by clicking the gear icon and selecting Approve Device.
You can approve and un-approve whenever you wish, but keep in mind that each time a device is assigned to the account, charges are going to be incurred.
This concludes our overview of moving a device back to "Pending" status. If you have any questions or comments feel free to send us a message at [email protected]. | https://docs.devicemagic.com/en/articles/392975-move-device-back-to-waiting-approval | 2022-09-25T04:44:01 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['https://downloads.intercomcdn.com/i/o/279963086/616c3a2a4db804508a5a6146/Screen+Shot+2020-12-21+at+2.49.48+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/279963847/14f1289539ab0f0f41aab54a/Screen+Shot+2020-12-21+at+2.51.06+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/210894947/35d3292975bb49aadd39cdde/Screen+Shot+2020-05-20+at+4.10.50+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/279964106/83394fdc43f0b5ca80084b42/Screen+Shot+2020-12-21+at+2.53.19+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/279964366/51a4bfc3bdf7f7f27cee3b3a/Screen+Shot+2020-12-21+at+2.53.53+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/279964927/124ed93a7b6d5cd709b93f5a/Screen+Shot+2020-12-21+at+2.54.27+PM.png',
None], dtype=object) ] | docs.devicemagic.com |
Depending on the provider's settings, two types of payments are available in your client area:
- advance payment — you top up the balance of the Personal account in any convenient way, and then pay for services with it. You can also enable auto-renewal from your Personal account and thus configure auto-payment for your services;
- payment for the service by the selected payment method — you pay for the service directly by any payment method without using your personal account. For example, by bank card or transfer. You can also save your bank card or e-wallet details in Payment methods. You can then use this card or wallet to pay for services and subscribe to services. Read more about this in Payment methods.
Advance payment
You can issue an advance payment invoice if you want to top up your personal account balance and use the money to buy new services or renew existing ones.
If you create an advance payment without linking to a specific service, the invoice will indicate "Advance payment".
Account topup
To top up the balance of your personal account:
- Open the settings of your client area in the upper right corner, click Add funds.
- Specify the amount, currency and select the payment method. Press the Pay button.
- Select an existing payer or create a new one. Specify information about the payer. Press Continue.
- Press the Pay button to confirm the payment. After that, you will be redirected to the website of the selected payment system to make the payment.
The funds will be credited to your personal account.You will find the paid invoice under Finance and documents → Invoices tab. When the payment is received, the status of the invoice will change from "Payment in progress" to "Sent".
Auto-payment
Auto-payment is an automatic topup of your personal account when your balance is low. Making a payment will not require your participation, and the balance will be topped up to an amount sufficient for a month. At the moment, auto-payment is performed through the following systems:
- Stripe;
- PayMaster;
- SimplePay;
- PayOnline.
Note:
To set up auto-payment:
- Open the settings of your client area in the upper right corner, click Auto-payment .
On the settings form, specify a limit in the form of the maximum amount to top up the account by in a month and press the Unlimited button.
- Select the payment method and press Confirm.
- Select an existing payer or create a new one. Press Continue.
- Enter the information about the payer and press Continue.
- Press Continue to confirm.
- The page of the payment system will open. Enter the details of your payment method and make a test payment — the system will check that your card works, and the money will be credited back. The payment system will now be able to make payments for your provider's services automatically.
Payment by selected method directly
- Go to Billing → Auto payment → Configure.
In the setup form, select a payment method and the maximum amount you can top up your balance in a month. Click Continue.
Payment by selected method
Select a payer or create a new one. Click Next.
Specify the payer information and click Next.
Click Confirm. This will allow the payment system to automatically make payments for your provider's services.
How to pay by Wire transfer
When paying by wire transfer, a bank account is automatically generated to pay for services. Invoice formation algorithm:
Step 1. Choose Wire transfer and click Pay.
Step 2. Select a payer. You can select an existing payer or add a new one.
Step 3. Fill out the form and click Pay. The system will generate an invoice with banking details of both parties. You can print the invoice in the Accounts section.
Checking the balance and spent funds
The balance of the Personal account is displayed in the upper right corner.
You can see information about funds withdrawn from the account for ordering goods and services under Billing → Expenses.
Expense history
You can set your expense display period or select one of the available default options: month, quarter, 6 months, year.
- Date — the date when the funds were withdrawn from your account;
- Operation description — the product or service for which your account has been charged;
- Amount — the amount withdrawn for a good or service;
- Tax — the tax paid for a good or service;
- Not paid — unpaid amount for a product or service;
- Transaction number;
- Payment number — the payment from which the funds were taken to pay for the specified service. It can be either an advance payment or a payment for a certain service.
To view information about funds debited from the account for ordering goods and services, go to Billing → Expenses.
- Name — the product or service for which your account was debited;
- Date — the date on which the funds were withdrawn from your account;
- Amount — the amount written off for the product or service;
- Payments — payment from which funds were taken to pay for the specified service. It can be both advance payment and payment for a certain service.
Payment for a particular service
If you have chosen a service, you can pay for it immediately. To do this, click Pay or Add to cart. Enter the amount and choose the payment method. When paying for a particular service you will be redirected to the payment system’s web page. After your payment has been processed, the service will be activated
Note:
Services that are charged daily cannot be renewed. Charges for them are deducted from the personal account. | https://docs.ispsystem.com/b6c/client-area/payment-in-client-area | 2022-09-25T04:11:25 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.ispsystem.com |
security key-manager external azure update-client-secret
Contributors
Update Client Secret for Azure Key Vault
Availability: This command is available to cluster and Vserver administrators at the advanced privilege level.
Description
This command provides a way to update the client secret that is used for the Azure Key Vault (AKV) configured for the given Vserver. The command is initially set by running the security key-manager external azure enable command. This command is not supported if AKV has not been enabled for the Vserver.
Parameters
-vserver <Vserver Name>- Vserver
Use this parameter to specify the Vserver for which the AKV client secret is to be updated.
Examples
The following example updates the AKV client secret for the data Vserver v1.
cluster-1::> security key-manager external azure update-client-secret -vserver v1 Enter new client secret: Re-enter new client secret: | https://docs.netapp.com/us-en/ontap-cli-98/security-key-manager-external-azure-update-client-secret.html | 2022-09-25T04:36:10 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.netapp.com |
$ ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> (1)
In OKD version 4: (2)) pullSecret: '{"auths": ...}' (1) sshKey: ssh-ed25519 AAAA... (6).. specify the advanced network configuration for your cluster,. | https://docs.okd.io/4.7/installing/installing_gcp/installing-gcp-network-customizations.html | 2022-09-25T06:09:37 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.okd.io |
Build, deploy and manage your applications across cloud- and on-premise infrastructure
Single-tenant, high-availability Kubernetes clusters in the public cloud
The fastest way for developers to build, host and scale applications in the public cloud
Guidelines
Updated the commands in the Support Arbitrary User IDs section to enable root group access in the Dockerfile.
In the Support Arbitrary User IDs section, updated export LD_PRELOAD=libnss_wrapper.so to be export LD_PRELOAD=/usr/lib64/libnss_wrapper.so to prevent errors from the library not being found.
export LD_PRELOAD=libnss_wrapper.so
export LD_PRELOAD=/usr/lib64/libnss_wrapper.so
OpenShift Container Platform 3.3 initial release. | https://docs.openshift.com/container-platform/3.3/creating_images/revhistory_creating_images.html | 2022-09-25T05:43:49 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.openshift.com |
If you enable this option small images will be replaced with an inline base64 encoded version, so you can reduce the number of HTTP requests.
Is it useful for HTTP2?
Yes. Higher number of requests on HTTP2 is not that big issue as it was on HTTP 1, but still can improve the speed and save some server resources, because the server have to deal less files on the server to serve the request. | https://docs.swiftperformance.io/knowledgebase/inline-small-images/ | 2022-09-25T04:46:16 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.swiftperformance.io |
Choose your operating system:
Windows
macOS
Linux
The VR Template is designed to be a starting point for all of your virtual reality (VR) projects in Unreal Engine. It includes encapsulated logic for teleport locomotion, an example VR spectator Blueprint and common input actions, such as grabbing and attaching items to your hand.
This page shows you how to get started with the VR Template and how to use it to create your own VR experiences.
Supported Devices
The VR Template is compatible with the following devices:
Oculus Mobile
Quest 1
Quest 2
Oculus PC
Rift S
Quest with Oculus Link
Steam VR
Valve Index
HTC Vive
Windows Mixed Reality
Before you can use the VR Template, your device will need to be set up for development in Unreal Engine. See the documentation in XR Development for your device to ensure it's set up correctly.
OpenXR
The VR Template uses the OpenXR framework, the multi-company standard for VR and AR development. With the OpenXR plugin in Unreal Engine, the template's logic works on multiple platforms and devices without any platform-specific checks or calls.
The OpenXR plugin in Unreal Engine supports extension plugins. You can find extension plugins in the Marketplace or create your own to add functionality to OpenXR that isn't currently in the engine by default.
Pawn, Game Mode, and Player Start
The following objects determine the rules of the VR Template experience and how it's set up.
Input
Input in the VRTemplate is based on the Action and Axis Mappings Input System in Unreal Engine. See Input with OpenXR for more information on how to set up input with OpenXR.
Movement
Movement in VR experiences is referred to as locomotion. In the VR Template, there are two types of locomotion: Teleport and Snap Turn. In the Blueprint Editor, open the VRPawn to see how both are implemented.
Teleport
To teleport to different locations in the Level:
Move your right Motion Controller's thumbstick or trackpad in the direction you want to move. The Teleport visualizer appears in the level to indicate where you will move to.
Release the thumbstick or trackpad to teleport to the selected location.
Snap Turn
To rotate your virtual character without moving your head, move your left Motion Controller's thumbstick or trackpad in the direction you want to turn.
Setting Allowed Areas for Movement
The Level uses a Navigation Mesh to mark where the user is allowed to move. See the NavModifierVolume asset NavModifier_NoTeleport as an example of how this can be implemented. The asset's Area Class parameter is set to NavArea_Null, which prevents the user from moving to that volume.
The teleport visualizer disappears when it's in the NavModifier_NoTeleport volume.
Press the P key on your keyboard to toggle the visualization of navigable areas.
Grab
The VR Template shows a couple different ways to pick up objects and attach them to your hands.
To grab an object in the level:
Reach out to a grabbable object, then press and hold the Grip button on your Motion Controller.
This creates a Sphere Trace around the position of your Motion Controller's GripLocation. If there's an Actor with a GrabComponent in that sphere, it becomes attached to the hand that pressed the Grip button.
Release the Grip button on the same Motion Controller to detach the object from your hand.
To enable grabbing on an actor, add the GrabComponent blueprint to the actor and select the Grab Type in the Details window. On BeginPlay, the collision profile of the component's parent is set to PhysicsActor, which is the trace channel used for the Sphere Trace in VRPawn.
You can open the GrabComponent Blueprint class to see how the grab system is implemented in the VR template.
GrabType
You can set the type of grab in an object's GrabComponent to define how the object attaches to your hand.
The following grab types are defined in the GrabType Enum asset:
Free: The Actor stays in the position and orientation relative to where the Motion Controller grabbed it. See the small cubes as an example.
Snap: The Actor has a specific position and orientation relative to the Motion Controller. See the pistols as an example.
Custom: With this option, you can add your own logic for the grab action, using the OnGrabbed and OnDropped events. You can also utilize other exposed variables, such as the bIsHeld boolean variable, which is a flag that specifies whether an object is currently held by the user. Examples of other types of custom grab actions that can be created include two-handed grabs, dials, levers, and other complex behaviors, such as Actors that don't overlap geometry when held.
You can define a haptic effect that occurs when grabbing an object by setting the OnGrabHaptic Effect variable in the GrabComponent for the object. An example of a haptic effect includes Motion Controller vibrations.
Important functions used in GrabComponent are abstracted into a common interface using the Blueprint Interface asset VRInteraction BPI. With this Blueprint Interface asset, the VRPawn can simplify its logic and avoid using multiple checks for each kind of object. For more on Blueprint Interface and their benefits, see Blueprint Interface.
Pressing the menu button on your motion controller opens the VR Template's menu. The menu is built with Unreal Motion Graphics (UMG). See UMG UI Designer for more information on how to use this tool.
In the Content Browser, Content > VRTemplate > Blueprints > WidgetMenu is the UMG asset for designing and creating logic for the menu, and the Menu Blueprint defines how the controller interacts with the Widget.
VR Spectator
With Spectator Screen Mode, others can view the VR experience while someone uses the head-mounted display (HMD). The VR Template implements an example of Spectator Screen Mode using the VR Spectator Blueprint.
There are two ways to enable Spectator Mode in the VR Template:
Press Tab to toggle Spectator Mode during the session.
Set VRSpectator bSpectatorEnabled to true.
VR Spectator is not compatible with Mobile VR devices, such as Oculus Quest..
You can control the VR Spectator to view the user with the HMD in the virtual world, with your mouse and keyboard or a gamepad using the following inputs:
You can use the VR Spectator as a starting point for implementing multiplayer experiences on the same PC. | https://docs.unrealengine.com/5.0/en-US/vr-template-in-unreal-engine/ | 2022-09-25T04:27:55 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['./../../Images/sharing-and-releasing-projects/xr-development/getting-started-with-xr-development/vr-template/image_2.jpg',
'User stacking cubes in VR with their motion controllers'],
dtype=object)
array(['./../../Images/sharing-and-releasing-projects/xr-development/getting-started-with-xr-development/vr-template/image_3.jpg',
'Input action and axis mappings'], dtype=object)
array(['./../../Images/sharing-and-releasing-projects/xr-development/getting-started-with-xr-development/vr-template/image_4.jpg',
'User moving the teleport indicator with their motion controller'],
dtype=object)
array(['./../../Images/sharing-and-releasing-projects/xr-development/getting-started-with-xr-development/vr-template/image_5.jpg',
'User performing snap turns to rotate their view without moving'],
dtype=object)
array(['./../../Images/sharing-and-releasing-projects/xr-development/getting-started-with-xr-development/vr-template/image_6.jpg',
'The teleport indicator disappears when in the No Teleport Zone'],
dtype=object)
array(['./../../Images/sharing-and-releasing-projects/xr-development/getting-started-with-xr-development/vr-template/image_7.jpg',
'Visualization of the navigable areas in the level'], dtype=object)
array(['./../../Images/sharing-and-releasing-projects/xr-development/getting-started-with-xr-development/vr-template/image_10.jpg',
'User interacting with the VR menu with two options: restart and real life'],
dtype=object) ] | docs.unrealengine.com |
.triggersfile
.secretsfile
.wayscriptdirectories and nested files
cmd+sor
ctrl-sto save edits to your Lair's files. You will see a
homedirectory to sync WayScript files between remote and your local device. Our tool will then create a new directory with the names of your workspaces within the specified directory and download files present on remote.
gitor other VCS on the workspace directory or for individual Lair directories to track changes or push to other remote locations. You can also clone repos directly into your workspace or Lair directories.
homedirectory on your local machine is initialized with a strict directory structure and helper files to ensure data integrity in the WayScript file system. Please exercise caution when completing the following local file operations:
node_modules) to your Lair directory may result in an error while syncing. We recommend building your tool outside the Lair filesystem and then copying your project folder into your Lair directory.
myworkspace > myfile.py) may result in an invalid workspace state and require you to reconfigure your
homedirectory.
.wayscriptdirectories and nested files stored under your workspace root directory and Lair directories may result in an unresponsive app state. These files are used for configuration or identification purposes. | https://docs.wayscript.com/building-tools/file-system | 2022-09-25T06:16:11 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.wayscript.com |
Creating vertices and edges in DataStax Studio
Steps to add Gremlin code to a notebook to create a simple two vertex, one edge graph.
Prerequisites
- DSE 5.1 installed, configured, and running.
- DataStax Studio installed and running.
- A connection from Studio to a DSE cluster.
- An existing notebook.
Add Gremlin code to a notebook to create a simple two vertex, one edge graph.
Procedure
- If necessary, start DataStax Studio and open the notebook you previously created.
When you create a notebook, an empty graph instance is created and named after the value in the graph name field in the notebook's connection. A local variable,
g, is defined automatically and bound to that graph.
- Add a cell to the notebook.
- Ensure that Gremlin is selected in the menu for the notebook cell editor mode:
- Add the code to the cell to create some vertices and edges for the graph.
schema.config().option('graph.schema_mode').set('Development') Vertex firstVertex = graph.addVertex(label, 'user', 'id', 1, 'name', 'Jo Dowe', 'role', 'customer') Vertex secondVertex = graph.addVertex(label,'product', 'id', 2, 'name', 'fountain pen') firstVertex.addEdge('bought', secondVertex) g.V()
These lines of code, put DSE Graph in development mode, create two vertices, and connect them with a single edge.
- Select Run Cell to execute the code.
- Select Graph in Display toolbar. (By default, the Table view is displayed.)
- Hover your mouse over a vertex to display its properties.
See the DSE Graph documentation. | https://docs.datastax.com/en/dse/5.1/dse-dev/datastax_enterprise/studio/gs/createVerticesEdgesStudio.html | 2021-10-15T23:19:02 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.datastax.com |
Run
Configuration Class
Represents configuration for experiment runs targeting different compute targets in Azure Machine Learning.
The RunConfiguration object encapsulates the information necessary to submit a training run in an experiment. Typically, you will not create a RunConfiguration object directly but get one from a method that returns it, such as the submit method of the Experiment class.
RunConfiguration is a base environment configuration that is also used in other types of configuration steps that depend on what kind of run you are triggering. For example, when setting up a PythonScriptStep, you can access the step's RunConfiguration object and configure Conda dependencies or access the environment properties for the run.
For examples of run configurations, see Select and use a compute target to train your model.
- Inheritance
- azureml._base_sdk_common.abstract_run_config_element._AbstractRunConfigElementRunConfiguration
Constructor
RunConfiguration(script=None, arguments=None, framework=None, communicator=None, conda_dependencies=None, _history_enabled=None, _path=None, _name=None, command=None)
Parameters
The relative path to the Python script file. The file path is relative to the source directory passed to submit.
The targeted framework used in the run. Supported frameworks are Python, PySpark, TensorFlow, and PyTorch.
The communicator used in the run. The supported communicators are None, ParameterServer, OpenMpi, and IntelMpi. Keep in mind that OpenMpi requires a custom image with OpenMpi installed. Use ParameterServer or OpenMpi for AmlCompute clusters. Use IntelMpi for distributed training jobs.
- conda_dependencies
- CondaDependencies
When left at the default value of False, the system creates a Python environment,
which includes the packages specified in
conda_dependencies.
When set true, an existing Python environment can be specified with the python_interpreter setting.
The command to be submitted for the run. The command property can also be used instead of script/arguments. Both command and script/argument properties cannot be used together to submit a run. To submit a script file using the command property - ['python', 'train.py', '--arg1', arg1_val] To run an actual command - ['ls']
Remarks
We build machine learning systems typically to solve a specific problem. For example, we might be interested in finding the best model that ranks web pages that might be served as search results corresponding to a query. Our search for the best machine learning model may require us try out different algorithms, or consider different parameter settings, etc.
In the Azure Machine Learning SDK, we use the concept of an experiment to capture the notion that different training runs are related by the problem that they're trying to solve. An Experiment then acts as a logical container for these training runs, making it easier to track progress across training runs, compare two training runs directly, etc.
The RunConfiguration encapsulates execution environment settings necessary to submit a training run in an experiment. It captures both the shared structure of training runs that are designed to solve the same machine learning problem, as well as the differences in the configuration parameters (e.g., learning rate, loss function, etc.) that distinguish distinct training runs from each other.
In typical training scenarios, RunConfiguration is used by creating a ScriptRunConfig object that packages together a RunConfiguration object and an execution script for training.
The configuration of RunConfiguration includes:
Bundling the experiment source directory including the submitted script.
Setting the Command line arguments for the submitted script.
Configuring the path for the Python interpreter.
Obtain Conda configuration for to manage the application dependencies. The job submission process can use the configuration to provision a temp Conda environment and launch the application within. The temp environments are cached and reused in subsequent runs.
Optional usage of Docker and custom base images.
Optional choice of submitting the experiment to multiple types of Azure compute.
Optional choice of configuring how to materialize inputs and upload outputs.
Advanced runtime settings for common runtimes like spark and tensorflow.
The following example shows how to submit a training script on your local machine.
from azureml.core import ScriptRunConfig, RunConfiguration, Experiment # create or load an experiment experiment = Experiment(workspace, "MyExperiment") # run a trial from the train.py code in your current directory config = ScriptRunConfig(source_directory='.', script='train.py', run_config=RunConfiguration()) run = experiment.submit(config)
The following example shows how to submit a training script on your cluster using the command property instead of script and arguments.=['python', 'train.py', '--arg1', arg1_val], compute_target=cluster, environment=env) script_run = experiment.submit(config)
The following sample shows how to run a command on your cluster.=['ls', '-l'], compute_target=cluster, environment=env) script_run = experiment.submit(config)
Variables
- environment
- Environment
The environment definition. This field configures the Python environment. It can be configured to use an existing Python environment or configure to setup a temp environment for the experiment. The definition is also responsible for setting the required application dependencies.
The maximum time allowed for the run. The system will attempt to automatically cancel the run if it took longer than this value.
- history
- HistoryConfiguration
The configuration section used to disable and enable experiment history logging features.
- spark
- SparkConfiguration
When the platform is set to PySpark, the Spark configuration section is used to set the default SparkConf for the submitted job.
- hdi
- HdiConfiguration
The HDI configuration section takes effect only when the target is set to an Azure HDI compute. The HDI Configuration is used to set the YARN deployment mode. The default deployment mode is cluster.
- docker
- DockerConfiguration
The Docker configuration section is used to set variables for the Docker environment.
- tensorflow
- TensorflowConfiguration
The configuration section used to configure distributed TensorFlow parameters.
This parameter takes effect only when the
framework is set to TensorFlow, and the
communicator to ParameterServer. AmlCompute is the only
supported compute for this configuration.
- mpi
- MpiConfiguration
The configuration section used to configure distributed MPI job parameters.
This parameter takes effect only when the
framework is set to Python, and the
communicator to OpenMpi or IntelMpi. AmlCompute is
the only supported compute type for this configuration.
- pytorch
- PyTorchConfiguration
The configuration section used to configure distributed PyTorch job parameters.
This parameter takes effect only when the
framework is set to PyTorch, and the
communicator to Nccl or Gloo. AmlCompute is
the only supported compute type for this configuration.
- paralleltask
- ParallelTaskConfiguration
The configuration section used to configure distributed paralleltask job parameters.
This parameter takes effect only when the
framework is set to Python, and the
communicator to ParallelTask. AmlCompute is
the only supported compute type for this configuration.
- data_references
- dict[str, DataReferenceConfiguration]
All the data sources are available to the run during execution based on each configuration. For each item of the dictionary, the key is a name given to the data source and the value is a DataReferenceConfiguration.
- output_data
- OutputData
All the outputs that should be uploaded and tracked for this run.
- amlcompute
- AmlComputeConfiguration
The details of the compute target to be created during experiment. The configuration only takes effect when the compute target is AmlCompute.
- services
- dict[str, ApplicationEndpointConfiguration]
Endpoints to interactive with the compute resource. Allowed endpoints are Jupyter, JupyterLab, VS Code, Tensorboard, SSH, and Custom ports.
Methods
delete
Delete a run configuration file.
Raises a UserErrorException if the configuration file is not found.
delete(path, name)
Parameters
A user selected root directory for run configurations. Typically this is the Git Repository or the Python project root directory. The configuration is deleted from a sub directory named .azureml.
load
Load a previously saved run configuration file from an on-disk file.
If
path points to a file, the RunConfiguration is loaded from that file.
If
path points to a directory, which should be a project directory, then the RunConfiguration is loaded
from <path>/.azureml/<name> or <path>/aml_config/<name>.
load(path, name=None)
Parameters
A user selected root directory for run configurations. Typically this is the Git Repository or the Python project root directory. For backward compatibility, the configuration will also be loaded from .azureml or aml_config sub directory. If the file is not in those directories, the file is loaded from the specified path.
Returns
The run configuration object.
Return type
save
Save the RunConfiguration to a file on disk.
A UserErrorException is raised when:
The RunConfiguration can't be saved with the name specified.
No
nameparameter was specified.
The
pathparameter is invalid.
If
path is of the format <dir_path>/<file_name>, where <dir_path> is a valid directory, then the
RunConfiguration is saved at <dir_path>/<file_name>.
If
path points to a directory, which should be a project directory, then the RunConfiguration is saved
at <path>/.azureml/<name> or <path>/aml_config/<name>.
This method is useful when editing the configuration manually or when sharing the configuration with the CLI.
save(path=None, name=None, separate_environment_yaml=False)
Parameters
Indicates whether to save the Conda environment configuration. If True, the Conda environment configuration is saved to a YAML file named 'environment.yml'.
A user selected root directory for run configurations. Typically this is the Git Repository or the Python project root directory. The configuration is saved to a sub directory named .azureml.
Return type
Attributes
auto_prepare_environment
Get the
auto_prepare_environment parameter. This is a deprecated and unused setting.
target
Get compute target where the job is scheduled for execution.
The default target is "local" referring to the local machine. Available cloud compute targets can be found using the function compute_targets.
Returns
The target name | https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.runconfiguration?view=azure-ml-py | 2021-10-16T01:20:40 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.microsoft.com |
Active Directory LDAP
The Active Directory LDAP plugin for InsightConnect generally supports:
- User account creation, password reset, enablement and disablement
- Object modification and deletion
- LDAP queries
Complete list of Active Directory LDAP Plugin Actions
To see all available actions with the Active Directory LDAP plugin, see the Documentation tab of the Extension Library.
Active Directory LDAP Connection Configuration
The Active Directory LDAP plugin requires:
- A domain username and password
- The fully qualified hostname or IP address of an Active Directory Domain Controller
- The port (389 or 636) for LDAP or LDAPS (default is LDAP/port 389)
User Account Privileges and Logging
Remember your InsightConnect connection to Active Directory LDAP will inherit all privileges of the domain user account configured in the connection. Use of the least privilege model is recommended. All actions taken by this account will be logged according to your logging configuration in Active Directory.
There are several ways to create a new Active Directory LDAP connection:
- From InsightConnect's home page, navigate to
Settings>
Plugins & Tools>
Connections, click Add Connection, and select the Rapid7 InsightVM Cloud plugin from the
Pluginslist
- From the workflow builder, add an action step, select the plugin, select an action, and click Add a New Connection in the
Choose a Connectionstep
- From the workflow import wizard, click Add a New Connection in the
Configure Detailsstep for the plugin
Once you've reached the connection configuration screen:
- Name the connection (eg,
AD <username>)
- Choose an Orchestrator that can communicate with your AD domain controller
- Create a new credential, name the credential, and enter your domain user or service account username and password (alternatively, select an existing credential)
- Enter the fully qualified hostname or IP address for your AD domain controller (if you have multiple domain controllers, any one that your Orchestrator can communicate with will do)
- The default port is 389 for AD LDAP connections. If you have enabled LDAPS on your domain controller, then the port should be changed to 636 (if you're unsure, try 389 first and be sure to check your connection test after saving!)
- Set SSL to
falsefor LDAP connections over port 389. Set SSL to
truefor LDAPS connections over port 636.
- In most cases, Chase Referrals should be set to
trueto ensure LDAP requests are completed. An LDAP Referral provides a reference to an alternate location in which an LDAP Request may be processed. In a partitioned directory, by definition, the entire directory is not always available on any one Directory Service Agent. Setting this value to
trueensures your request will be routed to an Active Directory server that can process it.
- Click Save and check your connection to confirm it succeeds
Troubleshooting
Issues with Active Directory LDAP connections are typically related to either networking issues, where your Orchestrator cannot communicate with the domain controller, or credential issues, where the provided username and password fail to authenticate to the specified domain controller. Some common error messages and associated troubleshooting recommendations are below.
Invalid Server Address
You might receive the below error message when your Orchestrator cannot communicate with the specified host. Confirm you have the correct hostname or IP address in the connection configuration. If the issue persists, it is likely your Orchestrator cannot contact the specified domain controller host due to a networking issue. Consult your network administrator for help.
1The service this plugin is designed for is currently unavailable. Try again later. If the issue persists, please contact support. Response was: invalid server address
Connection Reset by Peer
You might receive the below error message when the port and SSL settings in your connection are misaligned. Be sure to use port 389 (LDAP) with SSL set to
false and port 636 (LDAPS) with SSL set to
true.
1The service this plugin is designed for is currently unavailable. Try again later. If the issue persists, please contact support. Response was: socket ssl wrapping error: [Errno 104] Connection reset by peer
Invalid Credentials
You might receive the below error message when your Orchestrator cannot authenticate to the domain controller with the provided username and password. Edit the credential and update the username and/or password, then retry your connection test.
1Invalid username or password provided. Verify your username and password are correct. Response was: automatic bind not successful - invalidCredentials | https://docs.rapid7.com/insightconnect/ad-ldap/ | 2021-10-15T22:57:14 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.rapid7.com |
each
The Each tag, given a string containing values separated by a known separator, splits it up along the separator and returns each value.
For example, assume we have a variable 'msg' -
<cms:set
Passing variable 'msg' as parameter to each like this -
<cms:each msg <cms:show item /><br> </cms:each>
will make available each of the '|' separated words as variable named 'item' (which is then being displayed using show tag).
hello world how do you do
The above example could have been written without specifying 'sep', because the default separator is '|'.
One real world scenario for using each is while handling submission of forms containing multiple checkboxes. If more than one checkbox is selected, the checkbox variable contains a '|' separated string with values for each selected checkbox.
Parameters
- var
- as
- sep
var
The string to split. If passed as the first parameter, the name 'var' can be omitted and only the value passed. e.g.
<cms:each var=msg >..</cms:each>
<cms:each msg >..</cms:each>
both of the above are same.
as
Name of the variable as which each of the values obtained after splitting the string will be made available.
By default, the variable is named 'item'.
If you wish to use some other name, it can be specified thus -
<cms:each msg <cms:show my_var/><br> </cms:each>
sep
The separator along which the provided string is split.
By default, the pipe character '|' is assumed as the separator.
If any other character is being used in the string, it can be specified this way -
<cms:each msg..</cms:each>
Variables
- item
item
This is the default variable that contains the value obtained.
If any other variable is specified using the as parameter, as explained in parameters, then this variable will not be set.
The specified variable will be set instead. | https://docs.couchcms.com/tags-reference/each.html | 2021-10-16T00:46:03 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.couchcms.com |
modo¶
Job Submission¶
You can submit jobs from within modo by using the integrated submitter (7xx and up), running SubmitModoToDeadline.pl script, or you can submit them from the Monitor.
To run the integrated submitter within modo 7xx or later, after it’s been installed:
Render -> Submit To Deadline
To run the integrated submitter within modo 6xx or earlier, after it’s been installed:
Under the system menu, choose Run Script
Choose the DeadlineModoClient.pl script from [Repository]\submission\Modo\Client
Setting Up A Scene For Rendering¶
To set up a Modo scene for rendering, you must have at least one render output that is enabled and has an output filename set. The render outputs can be found within the shader tree (Shading tab in the default Modo Layout).
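If you want to verify this before submitting, the check can be scripted from modo's Script Editor. The snippet below is only a minimal sketch for modo's embedded Python: it assumes the standard sceneservice query interface and the usual renderOutput channel names (enable, filename), so confirm those names in your modo version's command history before relying on it.

```python
# Minimal sketch (modo embedded Python): list render outputs that are both
# enabled and have an output filename set. The channel names 'enable' and
# 'filename' are assumed; verify them in your modo version before relying on this.
import lx

ready = []
item_count = lx.eval('query sceneservice item.N ? all')
for index in range(item_count):
    if lx.eval('query sceneservice item.type ? %s' % index) != 'renderOutput':
        continue
    item_id = lx.eval('query sceneservice item.id ? %s' % index)
    lx.eval('select.item %s set' % item_id)          # select the output, then query its channels
    enabled = lx.eval('item.channel enable ?')
    filename = lx.eval('item.channel filename ?')
    if enabled and filename:
        ready.append(item_id)

if ready:
    lx.out('Render outputs ready for submission: %s' % ', '.join(ready))
else:
    lx.out('No enabled render output with a filename set - the scene is not ready for Deadline.')
```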
Submission Options¶
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options are explained in the Draft and Integration documentation. The modo-specific options are:
Job Options
These are the general modo options:
Modo Project Folder: The project folder set for the scene. This is used to map path’s that are relative to the project directory.
Render With V-Ray: Enable this option to use V-Ray’s renderer instead of modo’s renderer. This requires the V-Ray for modo plugin to be installed on your render nodes.
Pass Group: The pass group to render, or blank to not render a pass group.
Submit Each Pass Group As A Separate Job: If enabled, a separate job will be submitted for each Pass Group in the scene.
Override Output
You have the option to override where the rendered images will be saved. If this is disabled, Deadline will respect the output paths in the modo Output items in your scene file. If this is enabled, be sure to set the Output Pattern appropriately if your scene has multiple passes, output items, or left and right eye views.
Override Render Output: Enable to override where the rendered images are saved.
Output Folder: The folder where the rendered images will be saved.
Output File Prefix: The prefix for the image file names (extension is not required).
Output Pattern: The pattern for the image file names.
Output Format: The format of the rendered images. Note that you can choose the layered PSD or EXR formats here, and that Tile Rendering supports the layered EXR format.
Tile Rendering Options
Enable Tile Rendering to split up a single frame into multiple tiles.
Enable Tile Rendering: If enabled, the frame will be split into multiple tiles that are rendered individually and can be assembled after.
Frame To Tile Render: The frame that will be split up.
Tiles In X: Number of horizontal tiles.
Tiles In Y: Number of vertical tiles..
Use Jigsaw: Enable to use Jigsaw for tile rendering.
Open Jigsaw Panel: Opens the Jigsaw UI
Reset Jigsaw Background: Resets the background of the jigsaw regions
Save Jigsaw Regions: Saves the Jigsaw Regions to the scene File
Load Jigsaw Regions: Loads the save Jigsaw Regions and sends them to the open panel.
Interactive Distributed Rendering¶
You can submit interactive modo Distributed Rendering jobs to Deadline. The instructions for installing the integrated submission script can be found further down this page. The interactive submitter will submit a special modo server job to reserve render nodes.
Note that this feature is only supported in modo 7xx and later.
Submission Options¶
The general Deadline options are explained in the Job Submission documentation. The modo Distributed Rendering specific options are:
Maximum Servers: The maximum number of modo Servers to reserve for distributed rendering.
Use Server IP Address instead of Host Name: If checked, the Active Servers list will show the server IP addresses instead of host names.
- Use V-Ray To Render: If checked, V-Ray’s distributed rendering will be used instead of Modo’s.
Note that this option requires V-Ray for Modo to be installed, and is only supported by Modo 701 SP5, Modo 801 SP2 or higher, and Modo 901 or higher.
Rendering¶
After you’ve configured your submission options, press the Reserve Servers button to submit the modo Server job. After the job has been submitted, you can press the Update Servers button to update the job’s ID and Status in the submitter. As nodes pick up the job, pressing the Update Servers button will also show them in the Active Servers list. Once you are happy with the server list, press Start Render to start distributed rendering.
Note that the modo Server process can sometimes take a little while to initialize. This means that a server in the Active Server list could have started the modo Server, but it’s not fully initialized yet. If this is the case, it’s probably best to wait a minute or so after the last server has shown up before pressing Start Render.
After the render is finished, you can press Release Servers or close the submitter to mark the modo Server job as complete so that the render nodes can move on to another job.
Network Rendering Considerations¶
This Article provides some useful information for setting up modo for network rendering.
Cross-Platform Rendering Considerations¶
In order to perform cross-platform rendering with mod.
Note that Deadline supports path mapping for any texture paths within the modo scene file (see the Path Mapping setting in the modo Plug-in Configuration section below). However, the modo scene file stores its paths a bit differently, for example:
In Deadline 7.2.1 and later, you can enable the Massage modo Paths setting in the modo Plugin Configuration and Deadline will automatically convert the internal scene path into a normal looking path before feeding it into the path mapping system.
In Deadline 7.2 or earlier, you will have to add special Mapped Paths entries in the Repository Options for these paths to get replaced properly. For example, you might already have a Mapped Path entry like this to handle paths from Mac OS X to Windows:
Replace Path: /Volumes/share/ Windows Path: \\server\share\ Linux Path: Mac Path:
However, the modo scene file will probably be storing texture paths as “Volumes:share/” instead of “/Volumes/share/”. This means you’ll need another Mapped Path entry that looks like this:
Replace Path: Volumes:share/ Windows Path: \\server\share\ Linux Path: Mac Path:
To go from Windows to Mac OS X, you’ll need a Mapped Path entry that looks like this. Note that “server:share/” represents the UNC path “//server/share/”:
Replace Path: server:share/ Windows Path: Linux Path: Mac Path: /Volumes/share
Finally, If you wish to disable the Path Mapping setting in the modo Plug-in Configuration, but still wish to perform cross-platform rendering with modo, you must ensure that your modo scene file is on a network shared location, and that any footage or assets that the project uses is in the same folder. Then when you submit the job to Deadline, you must make sure that the option to submit the scene file with the job is disabled. If you leave it enabled, the scene file will be copied to and loaded from the Worker’s local machine, and thus won’t be able to find the footage.
Plug-in Configuration¶
You can configure the modo plug-in settings from the Monitor. While in super user mode, select Tools -> Plugins Configuration and select the modo Worker on a particular platform will be used from the exe list.
Render Executables
modo Executable: The path to the modo executable file used for rendering. Enter alternative paths on separate lines. Different executable paths can be configured for each version installed on your render nodes.
Geometry Cache
Auto Set Geometry Cache: Enable this option to have Deadline automatically set the modo geometry cache before rendering (based on the geometry cache buffer below).
Geometry Cache Buffer (MB): When auto-setting the geometry cache, Deadline subtracts this buffer amount from the system’s total memory to calculate what the geometry cache should be set to.
Path Mapping (For Mixed Farms)
Enable Path Mapping: If enabled, a temporary modo file will be created locally on the Worker for rendering because Deadline does the path mapping directly in the modo file. This feature can be turned off if there are no Path Mapping entries defined in the Repository Options.
Resolve Path Aliases: If enabled, paths in the modo file will have their path aliases replaced with the path as defined on the submitting machine before pathmapping is applied.
Massage modo Paths: If enabled, paths in the modo file will be massaged to look like regular paths before being fed into Deadline’s path mapping system. For example, a path that starts with “server:share/” within the modo file will be massaged to start with “//server/share/” before path mapping is applied.
CPU Affinity
Limit Threads To CPU Affinity: If enabled, the number of render threads will be limited to the CPU affinity of the rendering Worker.
Integrated Submission Script Setup¶
This section describes how to install the integrated submission scripts for modo. The integrated submission scripts and the following installation procedures have been tested with modo 7xx and later.
You can either run the Submitter installer or manually install the submission script.
Submitter Installer¶
Run the Submitter Installer located at
<Repository>/submission/Modo/Installers.
Manual Installation¶
7xx or later:
Open modo, and select System -> Open User Scripts Folder.
Copy the DeadlineModo folder from
[Repository]\submission\Modo\Clientto this User Scripts folder.
Restart modo, and you should find the Submit To Deadline menu item in your Render menu.
6xx or earlier:
Under the system menu, choose Run Script.
Choose the DeadlineModoClient.pl script from
[Repository]\submission\Modo\Client
Alternatively, you can also copy this script to your local machine and run it from there. You should do this if the path to your Deadline repository is a UNC path and you are running modo on Windows OS.
Custom Sanity Check¶
A CustomSanityChecks.py file can be created in
[Repository]\submission\Modo\Main, and will be executed if it exists when the user clicks the Submit button in the integrated submitter. This script will let you override any of the properties in the submission script prior to submitting the job. You can also use it to run your own checks and display errors or warnings to the user. Finally, if the RunSanityCheck method returns False, the submission will be cancelled.
Here is a very simple example of what this script could look like:
import lx import lxu import lxu.command import lxifc def errordialog(title, message): lx.eval('dialog.setup error') lx.eval('dialog.title {%s}' % title) lx.eval('dialog.msg {%s}' % message) try: lx.eval('+dialog.open') except: pass def RunSanityCheck(): lx.eval( 'user.value deadlineDepartment {The Best Department!}' ) lx.eval( 'user.value deadlinePriority 33' ) lx.eval( 'user.value deadlineConcurrentTasks 2' ) errordialog( 'Error', 'This is a custom sanity check!' ) return True
You can open the LoadDeadlineModoUI.py file from
[Repository]\submission\Modo\Client\DeadlineModo\pyscripts to see the available Deadline modo values.
Distributed Rendering Script Setup¶
You can either run the DR Submitter installer or manually install the DR submission script.
Submitter Installer¶
Run the Submitter Installer located at
<Repository>/submission/ModoDBR/Installers.
Manual Installation¶
7xx or later only:
Open modo, and select System -> Open User Scripts Folder.
Copy the DeadlineModoDBR folder from
[Repository]\submission\ModoDBR\Clientto this User Scripts folder.
Which versions of modo are supported?
Modo 3xx and later are supported.
Which versions of modo can I use for interactive distributed rendering?
Modo 7xx and later are supported.
When rendering with modo on Windows, it hangs after printing out “@start modo_cl [48460] Luxology LLC”.
We’re not sure of the cause, but a known fix is to copy the ‘perl58.dll’ from the ‘extra’ folder into the main modo install directory (“C:Program FilesLuxologymodo601").
When rendering with modo on Mac OS X, the Worker icon in the Dock changes to the modo icon, and the render gets stuck.
This is a known problem that can occur when the Worker application is launched by double-clicking it in Finder. There are a few known workarounds:
-
Start the Launcher application, and launch the Worker from the Launcher’s Launch menu.
-
Launch the Worker from the terminal by simply running ‘DEADLINE_BIN/deadlineworker’ or ‘DEADLINE_BIN/deadlinelauncher -slave’, where DEADLINE_BIN is the Deadline bin folder.
-
Use ‘modo’ as the render executable instead of ‘modo_cl’.
When tile rendering, each tile is rendered, but there is image data in the “unrendered” region of each tile.
This happens when there is a cached image in the modo frame buffer. Open up modo on the offending render node(s) and delete all cached images to fix the problem.
When rendering my Workers are ignoring the CPU Affinity.
Some renderers modify the CPU affinity when a render starts. In these cases you can enable the Plugin Configuration option “Limit Threads To CPU Affinity” which will limit the number of threads the renderer can use.
Plugin Error Messages¶
This is a collection of known mod. | https://docs.thinkboxsoftware.com/products/deadline/10.1/1_User%20Manual/manual/app-modo.html | 2021-10-16T00:27:19 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['../_images/modo_dbr.png', '../_images/modo_dbr.png'], dtype=object)
array(['../_images/cp_modo.png', '../_images/cp_modo.png'], dtype=object)] | docs.thinkboxsoftware.com |
You uninstall vSphere Replication by unregistering the appliance from vCenter Server and removing it from your environment.
Prerequisites
- Verify that the vSphere Replication appliance is powered on.
- Stop all existing outgoing or incoming
Note:. | https://docs.vmware.com/en/vSphere-Replication/8.2/com.vmware.vsphere.replication-admin.doc/GUID-EA052863-034D-47EE-B513-9D1F514A6A2D.html | 2021-10-16T00:34:02 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.vmware.com |
Source code for f5.bigip.tm
# coding=utf-8 # """Classes and functions for configuring BIG-IP""" # f5.bigip.resource import OrganizingCollection from f5.bigip.tm.analytics import Analytics from f5.bigip.tm.asm import Asm from f5.bigip.tm.auth import Auth from f5.bigip.tm.cm import Cm from f5.bigip.tm.gtm import Gtm from f5.bigip.tm.ltm import Ltm from f5.bigip.tm.net import Net from f5.bigip.tm.security import Security from f5.bigip.tm.shared import Shared from f5.bigip.tm.sys import Sys from f5.bigip.tm.transaction import Transactions from f5.bigip.tm.util import Util from f5.bigip.tm.vcmp import Vcmp | https://f5-sdk.readthedocs.io/en/latest/_modules/f5/bigip/tm.html | 2021-10-15T23:03:12 | CC-MAIN-2021-43 | 1634323583087.95 | [] | f5-sdk.readthedocs.io |
What are Modded OS?
Our team modifies Modded OS to achieve maximum performance. We ship it with all the essential software pre-installed. Themes and icons are selected to match every element of the system.
When should you choose Modded OS?When should you choose Modded OS?
Every time you don't want to keep yourself busy optimizing things and fixing those theming issues, finding the correct PPA for installing the working version of the software. All in all, it allows you to enjoy Linux without making you do everything necessary.
Why should you choose Modded OS?Why should you choose Modded OS?
- Pre-installed themes and customized UI for the ready-to-go experience. If you want to use it for your work, you will surely have an easy and smooth workflow on Modded OS customized UI.
- Daily use software such as Browser, GIMP, File Managers, Visual Studio Code, Music Player, Sticky Notes, Libre Office, pre-configured repositories and updated systems are delivered to you
- Optimised for a performance increase when compared to regular distro
- Separate channel for Modded OS users on our Discord server
How do they look like?How do they look like?
You can take a look at our Modded OS Gallery.
How to install a Modded OS?How to install a Modded OS?
You can choose between 4 Modded distributions that Andronix offers.
These are the following things that need to be done before you're ready to use the distro-
- Download the Modded OS file.
- Set up the user, locale, and other required things.
- Start the distro with ./andro*{distro}*.sh
- Start the VNC server and connect to it using a VNC-viewer.
For a more verbose guide, please refer to this section of the docs
What OS are modded by Andronix?What OS are modded by Andronix?
You can select amongst the following distros-
- Ubuntu XFCE 20.04
- Ubuntu KDE 18.04
- Manjaro XFCE 19
- Debian XFCE 10 | http://docs.andronix.app/modded-os/modded-os/ | 2021-10-15T22:36:04 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.andronix.app |
SVM peering enables you to establish a peer relationship between two storage virtual machines (SVMs) for data protection.
You must have created a peer relationship between the clusters in which the SVMs that you plan to peer reside.
You can view the intercluster LIFs, cluster peer relationship, and SVM peer relationship in the Summary window.
When you use System Manager to create the peer relationship, the encryption status is
Enabled by default. | http://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-960/GUID-BBC8D32F-F768-448E-A6E6-6A4397ED6E66.html | 2021-10-15T22:57:11 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.netapp.com |
What is AWS IoT SiteWise?
AWS IoT SiteWise is a managed service that lets you collect, model, analyze, and visualize data from industrial equipment at scale. With AWS IoT SiteWise Monitor, you can quickly create web applications for non-technical users to view and analyze your industrial data in real time. You can gain insights about your industrial operations by configuring and monitoring metrics such as mean time between failures and overall equipment effectiveness (OEE). With AWS IoT SiteWise Edge, you can view and process your data on your local devices.
The following diagram shows the basic architecture of AWS IoT SiteWise.
How AWS IoT SiteWise works
AWS IoT SiteWise provides an asset modeling framework that you can use to build representations
of
your industrial devices, processes, and facilities. With asset models, you define
what raw
data to consume and how to process your raw data into complex metrics. You can build
and
visualize assets and models for your industrial operation in the AWS IoT SiteWise console
You can upload industrial data to AWS IoT SiteWise in the following ways:
Use AWS IoT SiteWise gateway software that runs on any platform that supports AWS IoT Greengrass, such as common industrial gateways or virtual servers. This software can read data directly from on-site servers over protocols such as the OPC-UA protocol. You can connect up to 100 OPC-UA servers to a single AWS IoT SiteWise gateway. You can also read data over the Modbus TCP and Ethernet/IP (EIP) protocol. For more information, see Ingesting data using a gateway.
Note
You can add packs to your gateway to enable edge capability. With Sitewise Edge, you can read and process data directly on-site and send it to the AWS Cloud using a AWS IoT Greengrass stream. For more information, see Enabling edge data processing.
Use AWS IoT Core rules. If you have devices connected to AWS IoT Core sending MQTT messages, you can use the AWS IoT Core rules engine to route those messages to AWS IoT SiteWise. For more information, see Ingesting data using AWS IoT Core rules.
Use AWS IoT Events actions. You can configure the IoT SiteWise action in AWS IoT Events to send data to AWS IoT SiteWise when events occur. For more information, see Ingesting data from AWS IoT Events.
Use AWS IoT Greengrass stream manager. You can configure solutions on the edge that send high-volume IoT data to AWS IoT SiteWise. For more information, see Ingesting data using AWS IoT Greengrass stream manager.
Use the AWS IoT SiteWise API. Your applications at the edge or in the cloud can directly send data to AWS IoT SiteWise. For more information, see Ingesting data using the AWS IoT SiteWise API.
You can set up SiteWise Monitor to create web applications for your non-technical employees to visualize your operations. With AWS SSO or IAM, you can configure unique logins and permissions for each employee to view specific subsets of an entire industrial operation. AWS IoT SiteWise provides an application guide for these employees to learn how to use SiteWise Monitor.
Why use AWS IoT SiteWise? SiteWise Monitor, on-premise environments, reducing duplication, effort, and development costs. You can choose where to use and store your data across multiple locations such as keeping data on-premises for data residency requirements or for use by local edge applications. You can also shop floor. Sitewise Edge continues to operate even when connectivity to the cloud is not available, for on-premises scenarios.
Use cases
- Manufacturing
With AWS IoT SiteWise, you can easily collect and use data from your equipment to identify and reduce inefficiencies and improve industrial operations. AWS IoT SiteWise helps you collect data from manufacturing lines and assembly robots, transfer it to the AWS Cloud, and structure performance metrics for your specific equipment and processes. You can use these metrics to understand the overall effectiveness of your operations and identify opportunities for innovation and improvement. You can also view.
Are you new to AWS IoT SiteWise?
If you're a first-time user of AWS IoT SiteWise, we recommend that you read about the components and concepts of AWS IoT SiteWise and set up the AWS IoT SiteWise demo.
You can complete the following tutorials to explore certain features of AWS IoT SiteWise:
See the following topics to learn more about AWS IoT SiteWise:
Related services
AWS IoT SiteWise integrates with the following AWS services so that you can develop a complete AWS IoT solution in the AWS Cloud:
AWS IoT Core – Register and control AWS IoT devices that upload sensor data to AWS IoT SiteWise. You can configure AWS IoT SiteWise to publish notifications to the AWS IoT message broker, which lets you send AWS IoT SiteWise data to other AWS services. For more information, see the following topics:
Ingesting data using AWS IoT Core
Interacting with other AWS services
What is AWS IoT? in the AWS IoT Developer Guide
AWS IoT Greengrass – Deploy edge devices that have AWS Cloud capabilities and can communicate with local AWS IoT devices. AWS IoT SiteWise gateways run on AWS IoT Greengrass to collect data from local servers and publish data to the AWS Cloud. For more information, see the following topics:
Ingesting data using a gateway
What is AWS IoT Greengrass? in the AWS IoT Greengrass Version 1 Developer Guide
AWS IoT Events – Monitor your IoT data for process failures or changes in operation, and trigger actions when such events occur. For more information, see the following topics:
Monitoring data with alarms
Monitoring with alarms in the AWS IoT SiteWise Monitor Application Guide
What is AWS IoT Events? in the AWS IoT Events Developer Guide
AWS Single Sign-On (AWS SSO) and AWS Identity and Access Management (IAM) – Create and manage user identities and permissions. SiteWise Monitor users sign in to web portals with AWS SSO or IAM credentials, and you can define which users have access to which assets' data. For more information, see the following topics:
Monitoring data with AWS IoT SiteWise Monitor
What is SiteWise Monitor? in the AWS IoT SiteWise Monitor Application Guide
What is AWS SSO? in the AWS Single Sign-On User Guide
What is IAM? in the IAM User Guide
We want to hear from you
We welcome your feedback. To contact us, visit the AWS IoT SiteWise Discussion Forums
Provide feedback at the bottom of the page.
Feedback at the top right of the page. | https://docs.aws.amazon.com/iot-sitewise/latest/userguide/what-is-sitewise.html | 2021-10-16T00:57:56 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['images/how-sw-works-with-edge.png',
'AWS IoT Greengrass "How AWS IoT SiteWise works" page screenshot.'],
dtype=object) ] | docs.aws.amazon.com |
Jenkins allows some operations to be invoked through the CLI, including operations that are useful for configuring client controllers.
You can apply identical configurations to multiple client controllers by gathering all connected controllers from the command line and performing the operations on each one.
The
list-masters CLI command on operations center provides information about all connected controllers in JSON format, allowing you to use that information to invoke commands on each client controller:
{ "version": "1", "data": { "masters": [ { "fullName": "my master", (1) "url": "", (2) "status": "ONLINE" (3) } ] } } | https://docs.cloudbees.com/docs/admin-resources/latest/cli-guide/from-cjp-masters | 2021-10-16T00:30:27 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.cloudbees.com |
content_type
The content_type tag can be used to make the web server send back the contents with the desired Content-Type in the HTTP header.
By default every web page is send back as 'text/html'.
As an example, the RSS feed requires it content type to be set as 'text/xml' for the browsers to properly recognize the feed. The following snippet does the job -
<cms:content_type 'text/xml' />
Please see Core Concepts - RSS Feeds for an example of the usage of this tag.
Parameters
- value
value
The desired content type. Some example values are 'text/xml', 'text/plain', 'text/css', 'image/gif', 'application/pdf' and 'application/zip'.
Variables
This tag is self-closing and does not set any variables of its own. | https://docs.couchcms.com/tags-reference/content_type.html | 2021-10-15T23:03:09 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.couchcms.com |
nl2br
The nl2br tag stands for 'newline to BR' and can be used to convert all newline characters in a string to HTML <BR> tags.
It comes in handy in situations where you have an editable region of textarea type and while displaying the data contained within it, you wish to replace all the newlines entered by the user with <BR> tags.
<cms:nl2br><cms:show some_variable /></cms:nl2br>
Parameters
This tag takes no parameters. All content enclosed within its opening and closing tags serve as its input.
Variables
This tag does not set any variables of its own. | https://docs.couchcms.com/tags-reference/nl2br.html | 2021-10-15T23:31:29 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.couchcms.com |
Code Metrics – Maintainability Index…
Digging Deeper.”
Code Analysis. | https://docs.microsoft.com/en-us/archive/blogs/zainnab/code-metrics-maintainability-index | 2021-10-15T23:28:39 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/82/17/metablogapi/8507.image_49851F7F.png',
'image image'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/82/17/metablogapi/1565.image_2E6C5071.png',
'image image'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/82/17/metablogapi/1050.image_747D0D84.png',
'image image'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/82/17/metablogapi/3704.image_2191FA53.png',
'image image'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/82/17/metablogapi/7026.image_04A8D57E.png',
'image image'], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/82/17/metablogapi/0488.image_63B562D6.png',
'image image'], dtype=object) ] | docs.microsoft.com |
Merging Similar Datasets
If data sources reside in different locations, you can select them using the advanced browser.
You can add datasets of the same format and with the same schema (data model) to any source dataset you have already defined in your workspace. These datasets will be merged together when you run the translation. For formats that support coverages, you can also add folders and subfolders.
Double-click the dataset name in the Navigator.
In the Edit User Parameter dialog, you can manually enter a path to a file, or multiple files (for example, *.tab). You can also add multiple paths, separated by a comma.
Alternatively, click Select Multiple Folders/Files from the drop-down:
This opens the Select File dialog:
.
Click OK to close the dialog, then OK to close Edit Parameter dialog and merge the selected files/folders with the original dataset. To see these results reflected in the Navigator view, float the cursor over the dataset name. | https://docs.safe.com/fme/2017.1/html/FME_Desktop_Documentation/FME_Workbench/Workbench/Merging_Similar_Datasets.htm | 2021-10-16T00:03:04 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['../Resources/Images_WB/merging_datasets.png', None], dtype=object)
array(['../Resources/Images_WB/merging_datasets1.png', None], dtype=object)] | docs.safe.com |
PointCloudSimplifier
Reduces the number of points in a point cloud by selectively keeping points based on the shape of the point cloud. The simplified and removed points are output as two discrete point clouds.<![CDATA[ ]]>
Typical Uses
- Reducing the data volume of a point cloud feature to meet processing or storage requirements, when honoring the overall shape of the original dataset is desirable.
How does it work?
The PointCloudSimplifier receives point cloud features and outputs them with fewer points than the original. The points to keep are identified by an algorithm that determines the overall shape of the point cloud, and then selectively discards points.
Areas with high rates of change (such as steep slopes) will have more points kept, and areas with low rates of change (generally flat areas) will be thinned more aggressively. This is based on the Medial Axis Transform (MAT) of the original, a representation like a skeleton of the entire point cloud feature. Individual points are considered against the MAT and evaluated for inclusion or exclusion.
The manner of generation of the MAT, method of sampling, and desired level of simplification can be optionally adjusted through parameters.
Both the simplified point cloud and a new point cloud feature containing all removed points are output.
Note: Medial Axis Transform (MAT) Simplification. This method estimates normals for each point and uses a ball-shrinking algorithm to approximate the MAT. Then it samples the point cloud based on each point’s Local Feature Size (LFS), defined by the shortest distance from the point to the medial axis. Points in areas of high curvature (low LFS) are sampled at a higher density than redundant low curvature (high LFS) points.
Examples
In this example, we will reduce the number of points in a point cloud, while honoring the original shape of it.
Note that the source dataset has over 6 million points in it. Viewed here in the FME Data Inspector at an oblique angle, we can see that it contains a number of tall structures in a fairly flat area.
The point cloud feature is routed into a PointCloudSimplifier.
In the parameters dialog, the default settings are kept.
The output point cloud has been reduced to just over 1.6 million points. Note that areas with rapid elevation change - the structures - remain largely intact, whereas large flat areas with little change in elevation have been thinned much more aggressively.
Compare the point cloud before and after simplification:
Usage Notes
- This transformer is very processing intensive. The PointCloudThinner also reduces the number of points in a point cloud, using either regular sampling intervals or first/last x points. It is much faster than the PointCloudSimplifier, but does not consider the shape of the feature.
Choosing a Point Cloud Transformer
FME has a selection of transformers for working specifically with point cloud data.
For information on point cloud geometry and properties, see Point Clouds (IFMEPointCloud).
Configuration
Input Ports
Output Ports
Point cloud features with a simplified subset of the original points.
Non-point cloud features will be routed to the <Rejected> port, as well as invalid point clouds.
Rejected features will have an fme_rejection_code attribute with one of the following values:
INVALID_GEOMETRY_TYPE
INVALID_PARAMETER_<KEYWORD>
NO_RESULTSimplifier on the FME Community.
Examples may contain information licensed under the Open Government Licence – Vancouver and/or the Open Government Licence – Canada.
Keywords: point "point cloud" cloud PointCloud LiDAR sonar | https://docs.safe.com/fme/html/FME_Desktop_Documentation/FME_Transformers/Transformers/pointcloudsimplifier.htm | 2021-10-15T23:22:40 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/PointCloudSimplifierExample01.png', None],
dtype=object)
array(['../Resources/Images/PointCloudSimplifierExample02.png', None],
dtype=object)
array(['../Resources/Images/PointCloudSimplifierExample03.png', None],
dtype=object)
array(['../Resources/Images/PointCloudSimplifierExample04.png', None],
dtype=object)
array(['../Resources/Images/PointCloudSimplifierExample05.png', None],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.safe.com |
Returns an ST_Geometry value that represents the point set union of two ST_Geometry values.
Valid Data Types
All ST_Geometry types except geometry collections.
The type that the ST_Geometry return type represents is one from the possible set of types in the following table, depending on the parameter types.
Where:
Vantage converts GeoSequence types that are involved in the ST_Union method to ST_LineString values. Therefore, ST_Union never returns a GeoSequence type. | https://docs.teradata.com/r/1drvVJp2FpjyrT5V3xs4dA/2soGKjGpMlrlJas16x0GGg | 2021-10-16T01:06:42 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.teradata.com |
By default, the upgrade process upgrades the VxRail in all clusters in a domain in parallel. If you have multiple clusters in the management domain or in a VI workload domain, you can select which clusters to upgrade. You can also choose to update the clusters in parallel or sequentially.
Prerequisites
- Ensure that the domain/clusters for which you want to perform upgrade do not have any VxRail Managers or hosts in an error state. Resolve the error state before proceeding.
- Download the VxRail update bundle.
Procedure
- Navigate to the Updates/Patches tab of the appropriate domain.
- Run the upgrade precheck.If the clusters in your workload domain have different hardware, you can run a precheck at the cluster level using the precheck API. For information on this API, select Developer Center in the left panel on the SDDC Manager Dashboard and then search for precheck in the Overview tab.
- Click View Details in the Available Updates section to display the bundle that you downloaded before starting the upgrade.The Resource Changes section displays the VxRail cluster in the workload domain that needs to be upgraded.
- Click Exit Details.
- Click Update Now or Schedule Update and select the date and time for the bundle to be applied.
- Select Enable Cluster-level selection if you want to upgrade VxRail by cluster and then select the clusters that you want to upgrade.
- Click Next.
- Select the upgrade options and click Next.By default, all clusters are upgraded in parallel. To upgrade clusters sequentially, select Enable sequential cluster upgrade.
- On the Review page, click Finish.
- Monitor the upgrade. | https://docs.vmware.com/en/VMware-Cloud-Foundation/3.10/com.vmware.vcf.vxrail.admin.doc/GUID-4C3A1F4F-9375-4EAC-97DF-186EDEEB8233.html | 2021-10-16T00:45:35 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.vmware.com |
Remote Content -- AJAX Type Introduction
Remote Content AJAX Functionality Example
Setup on the Front End
First, set up whatever it is you'll be targeting on the front end, whether that is an image, text, header, etc.
Target Your Front End Element(s)
As always with Remote Content, you have to target each element on the front end that you set up so that, when clicked, the element(s) will trigger the popup. Do this using the 'Extra CSS Selectors' field in the Click Trigger as explained in our Setting Up a Remote Content Popup document.
JavaScript
Now that we're set up on the front end, we need to add extra data to the request that will be used inside your AJAX response function. This is done via JavaScript by modifying the jQuery.fn.popmake.rc_user_args array variable. Each popup has a set of values stored in the array using the popup ID as the key.
The following JavaScript shows how to add parameters just before the AJAX request for popup 36287. We recommend using a plugin like Simple Custom CSS/JS to install the code into the footer area of your WordPress site.
/*-36287').on('popmakeRcBeforeAjax', function () { $.fn.popmake.rc_user_args[36287] = { custom: 123, name: 'Daniel' }; }); }); }(jQuery));
View the source on GitHub.
PHP Function
The custom and name parameters can then be passed along in the AJAX function shown below to customize the content.
Copy, paste, and edit the following code in your child theme's
functions.php file, or use the My Custom Functions plugin to install the code for you.
<?php /** * Customize content via the Popup Maker custom AJAX function. * * @since 1.0.0 * * @return void */ function popmake_remote_content_ajax() { echo 'Hello ' . $_REQUEST['name'] . ', you chose option ' . $_REQUEST['custom']; }
View the source on GitHub.
Adding the Function Name to Remote Content
Now all that's left is to enter 'popmake_remote_content_ajax' as the function name in the Remote Content Area box (shortcode option settings).
With these settings and code in place, your popup's content will be:
Hello Daniel, you chose option 123
As you can see, this functionality allows you ultimate flexibility because you're not actually inputting content into the WYSIWIG Popup Content Editor, but dynamically generating content using code. You can pass parameters based on the button/link they clicked, user details, or any other possibilities you can come up with. | https://docs.wppopupmaker.com/article/33-remote-content-ajax-type-introduction | 2021-10-15T23:46:59 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559c68b9e4b03e788eda1692/images/5b3d56462c7d3a099f2e2fc2/file-eAgfBvuhEr.png',
None], dtype=object) ] | docs.wppopupmaker.com |
Using the LTE Command Line
(Available in the LTE workspace only) This example shows how to use the Command Line to create a polyline.
- You could activate Polyline by clicking its icon or selecting it from the main menu, and still use the Command Line for entering data. But this example will use the Command Line for everything. In the Command Line, a tool is referenced by its "alias."
- To see the list of aliases, right-click in the command history area (above the Command field) and select Edit Command Aliases.
- This opens the Aliases page of the Customize window. The commands are listed according to their appearance in the menu bar. Polyline is in the Draw menu, so choose the Draw category.
Note: You can also open this window by selecting Tools / Workspace / Customize.
- Highlight Polyline, and its default alias, "pline",. appears on the right side.
- To make it easier to type, you can name the alias to something easy like "pl." Then click Assign, then close the window.
- To access the Command Line, you can place your cursor there, or press Tab to enter the Command field. Enter "pl" and press Enter.
Note: If dynamic fields are displayed, pressing Tab will place the cursor in the first dynamic field. You can type an alias in any of these fields, or press Tab until the cursor moves to the Command Line.
- The default action is to "Define the start point of the line."
- You can click a point on-screen or use the dynamic input fields (see "Using Dynamic Input" on page 87). To define the start point in the Command Line, type the coordinates using the format "0,0" then press Enter.
- For the end point of the first segment, type "20,0."
This creates a horizontal segment 20 units long.
- To use the local options, you can open the local menu by right-clicking. To activate a local option in the Command Line, type the first letter of the option name. So to define the next segment by "Length" type "L" and press Enter.
- Then type "15" and press Enter to set the length.
- The prompt now mentions that you can lock this value by pressing Ctrl+L, so press Ctrl+L.
- To complete the line segment, enter "A" then type "90." Press Enter twice.
This completes the second line segment. If you move the mouse around, the preview line for the third segment is locked at 15 units.
- For the next segment, enter "A" again, then type 180 and press Enter twice.
- This finishes the third segment, which is also 15 units long.
- To make the next segment an arc segment, look in the Command History for option names. When an option has two words, such as "Arc Segment," you enter the first letter of each word. So type "AS" and press Enter.
- The last step is to complete the polyline by closing it. But if you enter "C," that will invoke the Circumference option which is listed first. To invoke Close, type the entire word: "close."
- The polyline is now closed with an arc segment.
| http://docs.imsidesign.com/projects/Turbocad-2018-User-Guide-Publication/TurboCAD-2018-User-Guide-Publication/User-Interface/LTE-Command-Line/Using-the-LTE-Command-Line/ | 2021-10-15T22:36:21 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0001.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0002.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0003.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0004.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0005.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0006.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0007.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0008.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0009.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0010.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0011.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0012.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0013.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0014.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0015.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0016.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0017.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0018.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/using-the-lte-command-line-img0019.png',
'img'], dtype=object) ] | docs.imsidesign.com |
MySQL database log files Working with Amazon Aurora database log files.
Topics | https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.MySQL.html | 2021-10-16T01:23:38 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.aws.amazon.com |
Automated investigation and response in Microsoft 365 Defender
Important
The improved Microsoft 365 security center is now available. This new experience brings Defender for Endpoint, Defender for Office 365, Microsoft 365 Defender, and more into the Microsoft 365 security center. Learn what's new.
Applies to:
- Microsoft 365 Defender
If your organization is using Microsoft 365 Defender, your security operations team receives an alert within the Microsoft 365 Defender portal whenever a malicious or suspicious activity or artifact is detected. Given the seemingly never-ending flow of threats that can come in, security teams often face the challenge of addressing the high volume of alerts. Fortunately, Microsoft 365 Defender includes automated investigation and response (AIR) capabilities that can help your security operations team address threats more efficiently and effectively.
This article provides an overview of AIR and includes links to next steps and additional resources.
Tip
Want to experience Microsoft 365 Defender? You can evaluate it in a lab environment or run your pilot project in production.
How automated investigation and self-healing works
As security alerts are triggered, it's up to your security operations team to look into those alerts and take steps to protect your organization. Prioritizing and investigating alerts can be very time consuming, especially when new alerts keep coming in while an investigation is going on. Security operations teams can feel overwhelmed by the sheer volume of threats they must monitor and protect against. Automated investigation and response capabilities, with self-healing, in Microsoft 365 Defender can help.
Watch the following video to see how self-healing works:
In Microsoft 365 Defender, automated investigation and response with self-healing capabilities works across your devices, email & content, and identities.
Tip
This article describes how automated investigation and response works. To configure these capabilities, see Configure automated investigation and response capabilities in Microsoft 365 Defender.
Your own virtual analyst
Imagine having a virtual analyst in your Tier 1 or Tier 2 security operations team. The virtual analyst mimics the ideal steps that security operations would take to investigate and remediate threats. The virtual analyst could work 24x7, with unlimited capacity, and take on a significant load of investigations and threat remediation. Such a virtual analyst could significantly reduce the time to respond, freeing up your security operations team for other important threats or strategic projects. If this scenario sounds like science fiction, it's not! Such a virtual analyst is part of your Microsoft 365 Defender suite, and its name is automated investigation and response.
Automated investigation and response capabilities enable your security operations team to dramatically increase your organization's capacity to deal with security alerts and incidents. With automated investigation and response, you can reduce the cost of dealing with investigation and response activities and get the most out of your threat protection suite. Automated investigation and response capabilities help your security operations team by:
- Determining whether a threat requires action.
- Taking (or recommending) any necessary remediation actions.
- Determining whether and what other investigations should occur.
- Repeating the process as necessary for other alerts.
The automated investigation process
An alert creates an incident, which can start an automated investigation. The automated investigation results in a verdict for each piece of evidence. Verdicts can be:
- Malicious
- Suspicious
- No threats found
Remediation actions for malicious or suspicious entities are identified. Examples of remediation actions include:
- Sending a file to quarantine
- Stopping a process
- Isolating a device
- Blocking a URL
- Other actions
For more information, see See Remediation actions in Microsoft 365 Defender.
Depending on how automated investigation and response capabilities are configured for your organization, remediation actions are taken automatically or only upon approval by your security operations team. All actions, whether pending or completed, are listed in the Action center.
While an investigation is running, any other related alerts that arise are added to the investigation until it completes. If an affected entity is seen elsewhere, the automated investigation expands its scope to include that entity, and the investigation process repeats.
In Microsoft 365 Defender, each automated investigation correlates signals across Microsoft Defender for Identity, Microsoft Defender for Endpoint, and Microsoft Defender for Office 365, as summarized in the following table:
Note
Not every alert triggers an automated investigation, and not every investigation results in automated remediation actions. It depends on how automated investigation and response is configured for your organization. See Configure automated investigation and response capabilities.
Viewing a list of investigations
To view investigations, go to the Incidents page. Select an incident, and then select the Investigations tab. To learn more, see Details and results of an automated investigation.
Training for security analysts
Use this learning module from Microsoft Learn to understand how Microsoft 365 Defender uses automated self-healing for incident investigation and response. | https://docs.microsoft.com/en-us/microsoft-365/security/defender/m365d-autoir?view=o365-worldwide | 2021-10-16T01:18:49 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.microsoft.com |
Rollbar
How to integrate Rollbar with OpenReplay and see backend errors alongside session replays.
1. Create Project Access Tokens
- Login to your Rollbar account.
- Select your project from the dropdown (top left).
- Go to Settings > Project Access Tokens.
- Click on Create new access tokens.
- In the Scope select
read; in name put
openreplay; and leave the Rate Limit to default.
- Copy your new token.
2. Enable Rollbar in OpenReplay
Paste your
Access Token in OpenReplay dashboard under 'Preferences > Integration'.
3. Propagate openReplaySessionToken
To link a Rollbar Rollbar log entry with the recorded user session, a unique token has to be propagated as an
extra_data to each backend error you wish to track.
Below is an example in Rollbar's Python API.
rollbar.report_message("A LOG ENTRY", level='error', extra_data={"openReplaySessionToken": OPENREPLAY_SESSION_TOKEN})# or if you are catching the exceptionsrollbar.report_exc_info(sys.exc_info(), level='error', extra_data={"openReplaySessionToken": str(OPENREPLAY_SESSION_TOKEN)})
The name of the tag
openReplaySessionToken is case sensitive;
Troubleshooting
If you encounter any issues, connect to our Slack and get help from our community. | https://docs.openreplay.com/integrations/rollbar/ | 2021-10-15T23:28:27 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.openreplay.com |
Alert Triggers
You can configure an Alert Trigger when you want InsightIDR to initiate a workflow in response to a security incident instead of requiring that you manually take action. The automated and immediate nature of Alert Triggers helps you mitigate and resolve issues in a very small window of time.
To get started with Alert Triggers:
- Install and Activate an Orchestrator
- Configure Alert Types
- Configure Alert Triggers
- Manage Alert Triggers
Install and Activate an Orchestrator
If you haven’t already, make sure that you have installed and activated an orchestrator before starting the Alert Trigger configuration process. Alert Triggers are designed to initiate automation workflows, so an active orchestrator must be present at the time of configuration.
If you already have installed and activated an orchestrator in your environment, proceed to the next section.
Configure Alert Types
Alert Triggers rely on investigations that InsightIDR creates in response to user actions on your network. InsightIDR determines whether or not to create these investigations for each action according to the options specified in your alert settings. Before you configure an Alert Trigger, verify that all desired alert types for which you want to run a workflow are set to Alert to ensure that InsightIDR creates an investigation when they occur.
NOTE
Alert types that are set to Notable Behavior or Disabled will not generate an investigation if InsightIDR detects them.
To verify that your alert types are set to Alert:
- On the left menu, select Settings > Alert Settings > User behavior analytics.
- Browse through the user action categories listed in the “Alert Type” column. Make sure that all desired alert types are set to Alert in the “Type” column.
- If you made any changes, click Save when finished.
Configure Alert Triggers
Now that you’ve verified that your alert types are set to generate investigations, you’re ready to create your first Alert Trigger.
To create an Alert Trigger:
- Click the Automation tab on your left menu. The “Automation” screen displays.
- Click the Alert Triggers tab. Your “Automation” screen will switch to the Alert Trigger view.
- Click Create Alert Trigger in the upper right corner. The “Create Alert Trigger” panel appears.
- Select an action category from the dropdown list. This allows you to focus your list of selectable workflows according to the kind of action that you want to take. Click Continue.
- If you want to select from all available workflows, select All Workflows.
- Select a workflow from the dropdown list. A color-coded tag on the right side of the workflow name indicates the object that the workflow accepts as input.
- Workflows that accept multiple objects appear with gray-colored tags. Hover over this tag to see what objects this workflow accepts as input.
- If you are configuring a new workflow, make sure you select the template version of the workflow you want to run. Templates are indicated by a small Rapid7 logo next to the workflow name.
- After selecting your workflow, InsightIDR will detail the steps that the workflow will move through as it executes. Click Continue when ready.
- Select one or more alert types from the dropdown list. Your workflow will trigger when InsightIDR creates an investigation based on any of these selected alert types.
- If you are configuring a new workflow from a template, configure any required connections as necessary.
- If you selected an existing workflow from the dropdown list (one that does not have a Rapid7 logo next to its name), then InsightIDR will automatically use the orchestrators and connections that were specified in the previously existing workflow.
- Verify that your configuration options are correct. Click Add Alert Trigger to save your new Alert Trigger.
Your new Alert Trigger will be enabled by default and will now appear in the Alert Triggers table.
Manage Alert Triggers
After you configure one or more Alert Triggers, you can manage them on the Alert Triggers tab of the “Automation” page.
Alert Trigger Status
By default, all newly created Alert Triggers are enabled and active as soon as you save them. If you want to disable an Alert Trigger for any reason, toggle the workflow switch to the Off position in the “Status” column. To enable a disabled Alert Trigger, toggle the switch again to the On position.
Alert Triggers with the “N/A” Status
Short for “Not Applicable”, an Alert Trigger will assume this status if a change to one of its dependencies prevents the workflow from running. Alert Triggers with the “N/A” status will not run and cannot be enabled until the underlying issue is addressed. Reasons for an Alert Trigger assuming the “N/A” status include:
- Deletion of the workflow that the Alert Trigger is configured to run
- This is often the result of a custom workflow being deleted from InsightConnect. Alert Triggers with this condition cannot be enabled again, so you must create a new Alert Trigger after you recreate the workflow in InsightConnect.
- Modification of the Alert Trigger’s alert type to a value other than Alert
- Alert Triggers can only kick off their attached workflows based on investigations that InsightIDR creates in response to user actions. An Alert Trigger will assume the “N/A” status if the alert type value is changed from Alert to something else. To enable an “N/A” workflow of this type again, verify that your alert type is set properly. | https://docs.rapid7.com/insightidr/alert-triggers/ | 2021-10-16T00:09:04 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['/api/docs/file/product-documentation__master/insightidr/images/Screen Shot 2019-04-02 at 1.14.03 PM.png',
Usage Notes
Vantage implements transform functionality that, by default, allows importing and exporting an ST_Geometry type to and from the server as a CLOB in WKT format. This means that a client application can use a CLOB to insert a value into an ST_Geometry column, provided the CLOB uses the WKT format of one of the geospatial subtypes that ST_Geometry can represent. Similarly, when a client application submits a query that selects data from an ST_Geometry column, by default, Vantage exports the type as a CLOB using the WKT format of the geometry that the ST_Geometry column represents.
Teradata also provides other transforms for the ST_Geometry type that allow for the import and export of geospatial data as other types and formats. For more information, see ST_Geometry Type Transforms.
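As a rough illustration of the default WKT transform, the sketch below creates a hypothetical table with an ST_Geometry column, inserts a WKT literal, and selects it back. The table and column names are invented for this example and are not part of the Vantage documentation; treat it as a sketch of the behavior described above rather than a definitive reference.

```sql
-- Hypothetical table: "city_parks" and "boundary" are made-up names for illustration
CREATE TABLE city_parks (
    park_id  INTEGER,
    boundary ST_GEOMETRY
);

-- With the default transform, a WKT value supplied by the client (for example as a CLOB)
-- can be inserted into the ST_Geometry column
INSERT INTO city_parks (park_id, boundary)
VALUES (1, 'POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))');

-- Selecting the column exports the geometry back to the client as WKT
SELECT park_id, boundary
FROM city_parks;
```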
Cheque Payments
This article explains how to setup Cheque Payments as a payment gateway on your GetPaid website.
In order to access these options, you need to have the Cheque Payments extension installed and activated.
Overview
Generally, you would use Cheque Payments as a checkout option if you deliver goods and services to a specific region where cheques are still fairly common, rather than to a global clientele for whom cheque payments might not be feasible.
If your business sells only tangible goods, or you do not have the resources required to verify the authenticity of cheque-based payments, Cheque Payments should not be enabled.
Settings Overview
- Activate - Tick the checkbox to activate cheque payments on your site.
- Checkout Title - Provide a title for the checkout field.
- Description - Add a description for the payment gateway checkout field.
- Priority - Specify the priority of the payment gateway on the checkout page.
- Instructions - Type any instructions that you might wish to convey to your customers (such as cheque payment conditions, time-frame, etc).
Usage
1. Go to GetPaid > Settings > Payment Gateways > Cheque Payments.
2. Tick the checkbox next to Activate.
3. Customize your settings as necessary. It is a good idea to add detailed Instructions here, including your refund or cheque rejection policy.
4. Save your settings.
Security¶
OS Authentication¶
Before you Begin
Before you begin using the OS Authentication mode in your Application, do nothing. Absolutely nothing. Do not create a user with your Windows username in Forms Authentication mode. Do not give that user a password. This is not how OS Authentication works, and you will end up troubleshooting your application for hours before you give up and realize that you did not have to add a new user. Just follow the steps described in the next sections and you’ll be fine.
Selecting OS Authentication¶
Navigate to your Application’s Configuration and choose the Authentication option under Security
Set your Authentication type to OS Authentication
Selecting the User Class¶
Any zAppDev Application has a pre-defined ApplicationUser class that models the user running and using the generated Application. You cannot alter that class (add, remove or change its Properties), since it is a system class. However, you can derive your own classes from it (setting their Base Class to ApplicationUser or any of its inheritors) and add any other properties you might want.
If you have your own User classes inheriting from ApplicationUser, you might want to use them in the OS Authentication process. For example, imagine that you have a class named DomainUser, extending the ApplicationUser class, that keeps information such as the Department the user works in, his/her Manager and more. You want to use this class in your Application either to differentiate it from other user types (e.g. FacebookUser, ApplicationUser) or simply to use your own class in any Form you might have.
To make the Application take into account your own user class, when a user is logging into your Application using OS Authentication, all you have to do is to select the class to be used within the process.
For example, the screenshot presented above:
- Sets the Authentication Type to OS
- Makes the Generated Application use the DomainUser class as its user class (instead of the default ApplicationUser) whenever a user logs into the Application or is added
Warning
The default class used in OS Authentication is the standard ApplicationUser. So, if you select a wrong class or leave the field blank, ApplicationUser will be used.
Warning
If the Authentication process fails in any way during the execution of the Generated Application, the ApplicationUser class will be used as the "current user" and the exception will be logged. So, if you cannot see all your users in an operation that, for example, fetches all the DomainUsers, check your log file: maybe an exception was thrown silently and the system defaulted to the ApplicationUser.
Enabling the First Administrator Setup¶
It is probably obvious that with OS Authentication you will no longer be able to use the "CreateAdmin" form to create an administrator account. You will want, however, to have an Administrator that grants roles and permissions to other users. Here’s how to do that.
Find the FirstAdminSetup form and make sure that you have its latest version, so that your “Render” controller action matches the latest template.
Now, after re-building your Application you will be able to use the FirstAdminSetup/Render? page to set your Application User as the Administrator.
Running the Application¶
In order to apply all changes, make sure to rebuild your Application. Remember: just building it is not enough.
After the rebuild is finished, you will be able to run the app and see that it identifies you by your Windows Username (domain-name\username)
If you followed the Enabling the First Administrator Setup steps, you will be able to navigate to the /FirstAdminSetup/Render? page and set yourself as the Administrator.
Remember: after granting yourself the “Administrator” role, you will no longer be able to use the FirstAdminSetup form.
Troubleshooting¶
IIS¶
Sometimes, even though the re-build finishes successfully and the web.config of your Application shows the correct mode of Authentication, you might see that your Application is still in its previous mode. Thus, even though zAppDev did everything perfectly, IIS decided to have a tantrum and completely ignore your wishes.
If this is the case, here are some things you can do.
Re-Build¶
Try to re-build your Application. If once is not enough, do it again. Twice, three times, as many times as you need to either make IIS respect you or just give up.
Manual Setup¶
If during the Re-Build step you opted for the “give up” option, you can open IIS Manager on your server (or ask someone to do that for you, if you do not have access) and open the Authentication panel of your Application.
The Authentication settings will show the available modes to choose from. If what you need is Windows Authentication, then:
- Enable the “Windows Authentication” setting
- Disable the “Anonymous Authentication” setting
After opening your Application again, you should be seeing the correct Authentication Mode.
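If you would rather express the same change in configuration, the snippet below is a generic ASP.NET/IIS sketch of what the relevant web.config sections typically look like. It is not zAppDev-specific guidance, and the `<system.webServer>` authentication elements are often locked at the server level, in which case you still need the IIS Manager route described above.

```xml
<configuration>
  <system.web>
    <!-- ASP.NET authentication mode used for OS (Windows) Authentication -->
    <authentication mode="Windows" />
  </system.web>
  <system.webServer>
    <security>
      <authentication>
        <!-- These elements are frequently locked in applicationHost.config;
             if so, configure them through IIS Manager instead -->
        <windowsAuthentication enabled="true" />
        <anonymousAuthentication enabled="false" />
      </authentication>
    </security>
  </system.webServer>
</configuration>
```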
Authentication¶
If all went well and your Application runs in OS Authentication mode, but no matter what you do it will not authenticate you, here are some possible reasons and their solutions.
Invalid User¶
You may have an invalid user if you ignored the “do not add a user” warning and did it anyway.
The only way for OS Authentication to work with a user that was already created via Forms Authentication is if the username and password are exactly the same as the ones you use to log in to your domain. So, if you log in to your computer with credentials like the following:
- Username: DEV-GR\katia
- Password: MyP4ssword1@
you will have to use the exact same username and password (lowercase, uppercase, everything). Nothing else will work.
At this point, if you did add a user before changing to OS Authentication and tried to make him look like a domain user, I would suggest you just delete him and start over.
Wrong Domain¶
In some cases, the OS Authentication will not authenticate you if your user’s domain and the application’s domain are different.
Example:
- You are a Windows user with a name: DEV-GR\katia
- You try to login to an application that runs in a domain named CLMS
- The CLMS domain does not trust the DEV-GR domain
In this case you will have to speak with your Administrator, who will either change the cross-domain security settings to trust your DEV-GR domain, or create a new user for you in the CLMS domain (e.g. CLMS\katia).
First Admin Setup¶
This section provides information regarding anything that could go wrong with the First Admin Setup page.
Deployed Application¶
If you have deployed your Application using OS Authentication for the very first time, your First Admin Setup will throw an exception if you have not enabled the “Seed Security Tables” option.
Thus, change the “Seed Security Tables” to “true” in your web.config and re-run your Application.
Changed from Forms-Authentication¶
If your Application was running with Forms Authentication and you changed it to OS Authentication, then the whole “roles and permissions” logic might not work until you have thoroughly cleared your browser data.
Thus, if the First Admin Setup was successful but you do not see yourself as an Administrator, try to delete everything from your browser history: cookies, session data, etc.
Extended Domain.ApplicationUser¶
zAppDev does not allow associations between the Domain.ApplicationUser and other objects. Thus, many Applications extend the Domain.ApplicationUser class and use that extension in associations with other objects.
Now, the thing with the OS Authentication is that it will create a new Instance of the Domain.ApplicationUser, without knowing about its inherited classes. However, when creating your First Administrator, you want your inherited user class to be granted the role (e.g. Domain.ActualUser) - not the base one.
In order to work around this restriction, Vasilis Vasilopoulos has suggested a hack:
Warning
This is not the best practice. Please, use it only if absolutely necessary and try not to spread it through your applications
Danger
Do not proceed if you do not want ALL your Domain.ApplicationUser objects to be extended
Danger
Do not try to create an instance of the extended Domain.ApplicationUser class using his “username” via Mamba Code (as you would normally do)
- Open your UsersList controller and navigate to its Entry Point
- Add the following code to the Entry Point action of step 1
    var sqlQuery = "insert into YOUR_EXTENDED_TABLE (UserName) select a.UserName from security.ApplicationUsers a with(nolock) left join YOUR_EXTENDED_TABLE b with(nolock) on a.UserName = b.UserName where b.UserName is null";
    CommonLib.DataContext.RunSqlQuery(sqlQuery, null);
- Run your Application
- Visit the FirstAdminSetup/Render? Page
- Mark yourself as the Administrator
- Navigate to the UsersList page
Let’s be clear about what will happen.
The UsersList page will extend (if needed) all your Domain.ApplicationUser objects. So, if you open it right after becoming an Administrator, you will be extended as well, and your Domain.ApplicationUser and its Domain.ActualApplicationUser (which inherits from it) will both be assigned the Administrator role.
Look and Feel¶
Tweak the Look and Feel parameters exposed here to create a unique visual style that is applied consistently to the User Interface as a whole.
Here is a description of the Look and Feel parameters:
Warning
Shadows parameter must be set to a valid CSS box-shadow value (for example, 0 2px 4px rgba(0, 0, 0, 0.2)).
It is suggested that you read the CSS Box Shadow reference from W3 Schools here.
Warning
Transitions parameter must be set to a valid CSS transition value (for example, all 0.3s ease-in-out).
It is suggested that you read the CSS Transitions reference from W3 Schools here.
Adjusting a Look and Feel variable¶
The LESS variable that corresponds to the Look and Feel parameter is displayed on the left part of the parameter box, below its name (e.g. @input-height).
Note
The LESS variable name is hidden when the Theme Editor is in Simple Mode.
To set a value for a parameter, type a numeric value into the input box on the right of the parameter box and select a Unit from the dropdown.