Running a Utility
To run a utility, you must open your operating system's terminal or command prompt, then type the file path to the utility followed by its arguments.
The command line utilities for Harmony are located in the bin directory of its installation folder. Based on the default installation path, it should be in the following location:
- Windows: C:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 15.0 Advanced\win64\bin
- macOS (Stand Alone): /Applications/Toon Boom Harmony 15.0 Advanced/tba/macosx/bin
- macOS (Server): /Applications/Toon Boom Harmony 15.0 Advanced Network/tba/macosx/bin
- GNU/Linux: /usr/local/ToonBoomAnimation/harmonyAdvanced_15.0/lnx86_64/bin
- Open a terminal or command prompt by doing one of the following:
- Windows: Open the Start menu and, in the programs list, select Windows System > Command Prompt.
- macOS: Open LaunchPad, then select Other > Terminal.
- GNU/Linux: Open the Applications menu, then select Utilities > Terminal.
- In the command line, type cd followed by the path to the folder containing the file(s) you want to process, then press ENTER.
- You can type the first few characters of the name of a file or folder in the current directory, then press TAB to make the command line auto-complete the name. If several files or folders match the characters you typed, you might have to press TAB repeatedly to cycle through matching files and folders until the right one is selected.
- Command lines use spaces as a way to separate the parameters sent to a command. If a path contains spaces, you must enter the path between quotes, or add a backslash \ before each space, to indicate that the spaces are part of the path.
- Windows uses backslashes \ to separate the name of directories in a path. macOS and GNU/Linux use forward slashes /.
- Absolute file paths on Windows begin with a drive letter followed by a colon. By default, the drive on which Windows and your documents are located is C. Hence, most paths in Windows will start with C:\. On macOS
and GNU/Linux, absolute file paths begin with a forward slash /.
- You can type file paths that are relative to the current directory by starting the path with the name of the first folder name to navigate to. For example, if you're in the Documents folder and want to go to its MyScene subfolder, simply typing cd MyScene will work. You can also navigate to folders that are parent of the current folder by referring to a folder named with two consecutive periods ... For example, to navigate from the MyScene folder back to its Documents parent folder, typing cd .. will work.
- Type the name of the utility you wish to use, followed by its parameters. Each parameter will start with a dash -. Some parameters take extra information, such as file names or colour values. The following example launches utransform to convert the file Bg_Camp-Camp.tvg into a Targa (.tga) bitmap image:
utransform -outformat TGA Bg_Camp-Camp.tvg
The command will launch. If no errors are encountered, it will simply display its version information, then exit.
- Check if the file you expected the command to output is present. On Windows, you can use the dir command followed by a file name or pattern to query which files are in the current directory. On macOS
or GNU/Linux, you can do that with the ls command:
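For example, to confirm that the Targa file from the previous step was created (assuming the output keeps the source file name with a .tga extension), on Windows you could type:
dir Bg_Camp-Camp.tga
and on macOS or GNU/Linux:
ls Bg_Camp-Camp.tga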
- At any time, you can obtain information on how to use a utility by launching it followed by the -help parameter.
utransform -help
Optimizing Projects
T-SBFND-002-008
You can optimize projects by flattening all drawings in your project, removing unused files, and reducing the texture size.
- Select File > Optimize Project.
The Optimize project dialog box opens.
- Select one or more of the following options:
- Remove unused elements from the project: As you create a storyboard you will delete panels or layers, update drawings, unlink sounds, and so on. Some of these files are kept for backup purposes, but they take up space and increase the size of your project on your hard drive. This option removes these unwanted elements.
- Flatten drawings in the project: Flattens all the brush or pencil line strokes of all the vector drawings in your project. This means that all overlapping strokes will no longer be editable as single strokes, but only as whole, drawn objects.
NOTE: Strokes drawn with different colours will not be flattened together.
- Reduce texture resolution of all drawings in the project: Reduces the resolution of bitmap textures in drawings that have an unnecessarily high pixel density.
NOTE: This cannot be reversed once you have reduced the resolution. These operations cannot be undone and will empty the undo list.
SublimeREPL¶
SublimeREPL is a plugin for Sublime Text 2 that lets you run interactive interpreters of several languages within a normal editor tab. It also allows connecting to a running remote interpreter (e.g. Clojure/Lein) though a telnet port.
SublimeREPL has a built-in support for command history and transferring code from open buffers to the interpreters for evaluation, enabling interactive programming.
Note
This documentation is work in progress. Details on language integrations are sorely missing. Please contribute!
Installation¶
Download Package Control, select Install Package and pick SublimeREPL from the list of available packages. You should have Package Control anyway.
Quick Start¶
SublimeREPL adds itself as a submenu in Tools. You can choose any one of the preconfigured REPLs and if it’s available in your SYSTEM PATH [1], it will be launched immediately.
Second and more user friendly way to launch any given REPL is through Command Palette. Bring up Command Palette and type “repl”. You will be presented with all preconfigured REPLs. Running REPL through Command Palette has exactly the same result as picking it from Tools > SublimeREPL menu.
You may now use a source buffer to either evaluate text from the buffer in the REPL or copy text over to the REPL without evaluation. For this to work, ensure that the language syntax definition for your source buffer matches the REPL.
Keyboard shortcuts¶
The default shortcuts shipped with SublimeREPL are listed below. If you are accustomed to another REPL keymap, or if you intend to work in REPL a lot (lispers pay attention!) you may want to rebind the keys more to your liking.
REPL keys¶
Note
The list below omits the trivial text editing keybindings (e.g. left, right etc). They are nevertheless configurable in keymap files.
Source buffer keys¶
Important
The keybindings here use Ctrl+, as a prefix (C-, in emacs notation), meaning press Ctrl, press comma, release both. Pressing the prefix combination and then the letter will immediately send the target text into the REPL and evaluate it as if you pressed enter. If you want to prevent evaluation and send the text for editing in the REPL, press Shift with the prefix combination.
Note
Default source buffer keys are identical on all platforms.
Language specific information¶
SublimeREPL’s integration with a specific language includes language-specific main menu and palette options for REPL startup, keymaps, and special REPL extensions unique to the target language. An integration may contain several different REPL modes which are based on different underlying classes.
Clojure¶
The Clojure integration supports Leiningen projects. You must install Leiningen to use Clojure integration.
If your Leiningen installation is not system-global, you may need to tweak SublimeREPL configuration (via Preferences > Package Settings > SublimeREPL > Settings - User) so that we can find your lein binary:
"default_extend_env": {"PATH": "{PATH}:/home/myusername/bin"}
To start a REPL subprocess with Leiningen project environment, open your project.clj and, while it is the current file, use the menu or the command palette to start the REPL.
- In subprocess REPL mode, the REPL is launched as a subprocess of the editor. This is the mode you should use right now.
- The telnet mode no longer works because of the changes in Leiningen and nrepl.
The source buffer “send block” command (Ctrl+, b) deserves a special mention. Performing this command while the cursor is within the body of a definition will select this (current, top-level) definition and send it to the REPL for evaluation. This means that the latest version of the function you’re currently working on will be installed in the live environment so that you can immediately start playing with it in the REPL. This is similar to [slime -]eval-defun in emacs.
Additional keybindings are available for Clojure:
Python¶
Both stock Python and Execnet integrations support virtualenv. Various ways to work with Python, including PDB and IPython, are supported.
For virtualenv created environments to be discoverable by SublimeREPL they should be created or symlinked in one of the following:
- ~/.virtualenvs default for virtualenvwrapper
- ~/.venvs default for venv
Alternatively, more paths can be added to “python_virtualenv_paths” in the SublimeREPL configuration file.
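For example, in Preferences > Package Settings > SublimeREPL > Settings - User (the path below is only an illustration):
{
    "python_virtualenv_paths": ["~/my_virtualenvs"]
}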
Documentation contributions from a Python specialist are welcome.
Configuration¶
The default SublimeREPL configuration documents all available configuration settings.
Frequently Asked Questions¶
SublimeREPL can’t launch the REPL process - OSError(2, ‘No such file or directory’), how do I fix that?
Sublime is unable to locate the binary that is needed to launch your REPL in the search paths available to it. This is because the subprocess REPLs are launched, as, well, subprocesses of the Sublime environment, which may be different from your interactive environment, especially if your REPL is installed in a directory that is not in a system-wide path (e.g. /usr/local/bin or /home/myusername on Linux, My Documents on Windows, etc.)
If the binary is not in your system path and you can't or won't change that, tweak SublimeREPL configuration:
{
    ...
    "default_extend_env": {"PATH": "{PATH}:/home/myusername/bin"}
    ...
}
I’d like an interactive REPL for Foo and it is not supported, what do?
Chances are, you only need a minimal amount of work to add an integration, and necessary steps are described here briefly.
If you already have an interactive shell for Foo, you can use the subprocess REPL. For an example, see PHP or Lua integration in config/PHP.
If Foo provides an interactive environment over TCP, you can use the telnet REPL. For an example, see MozRepl integration
Supported languages¶
SublimeREPL currently ships with support for the following languages:
- Clisp
- Clojure
- CoffeeScript
- Elixir
- Execnet Python
- Erlang
- F#
- Groovy
- Haskell
- Lua
- Matlab
- MozRepl
- NodeJS
- OCaml
- Octave
- Perl
- PHP interactive mode
- PowerShell
- Python
- R
- Racket
- Ruby
- Scala
- Scheme
- Shell (Windows, Linux and OS X)
- SML
- Sublime internal REPL (?)
- Tower (CoffeeScript)
Structure of SublimeREPL¶
Note
If this is your first time dealing with Sublime plugins, you may find it a bit too magical. Basically, Sublime automatically scans plugin directories and loads configuration files and plugin code without manual intervention, and reloads them dynamically as soon as they change. The entry points to a plugin's code are its commands, which are Python objects that extend Sublime's standard command class. Sublime calls them when needed. There is no initialization entry point or a "plugin loaded" callback or somesuch.
Basics of language integration: configuration and launch commands¶
A language integration in SublimeREPL consists of configuration files and, where needed, Python code. The configuration consists of:
- Menu configuration files which specify the actual REPL object configuration
- Command palette configuration files
- Optional keybinding configuration files
REPLs are started by the SublimeREPL command repl_open. The command and its arguments are usually specified in the menu configuration file, and the other places refer to that configuration item by file name and ID using the run_existing_window_command command.
Simple language integrations use an existing REPL class (see below) without modification. For these integrations, no additional Python code is needed. They use one of the standard REPL classes as the base, as documented below. In most cases, this will be the subprocess based REPL class. An example of such an integration is Lua.
The menu configuration file config/Lua/Menu.sublime-menu contains:
[
    {
        "id": "tools",
        "children": [{
            "caption": "SublimeREPL",
            "mnemonic": "R",
            "id": "SublimeREPL",
            "children": [
                {
                    "command": "repl_open",
                    "caption": "Lua",
                    "id": "repl_lua",
                    "mnemonic": "L",
                    "args": {
                        "type": "subprocess",
                        "encoding": "utf8",
                        "cmd": ["lua", "-i"],
                        "cwd": "$file_path",
                        "external_id": "lua",
                        "syntax": "Packages/Lua/Lua.tmLanguage"
                    }
                }
            ]
        }]
    }
]
This adds a “Lua” menu item to “Tools > SublimeREPL” which creates a Lua REPL via SublimeREPL command repl_open. The important part to take note of here is the id attribute (repl_lua). This is the ID by which the command palette configuration file refers to Lua REPL configuration.
As you can see, the main way to launch a new REPL is the SublimeREPL command
repl_open (class
ReplOpenCommand). The menu configuration file (see
above) specifies the arguments for the command that are used to locate the
desired REPL class and the settings for it so that it can be spawned.
The command configuration file config/Lua/Default.sublime-commands looks like this:
[
    {
        "caption": "SublimeREPL: Lua",
        "command": "run_existing_window_command",
        "args": {
            /* Note that both these arguments are used to identify the file
               above and load the REPL configuration from it */
            "id": "repl_lua",
            "file": "config/Lua/Main.sublime-menu"
        }
    }
]
It is obvious that the REPL configuration is concentrated in the menu files,
and the palette configuration only refers to those by ID and file name. The
latter is achieved by the command run_existing_window_command (class
RunExistingWindowCommandCommand)
This command is a wrapper that is used in the command palette configuration. Its function is to execute another command. It takes an ID of a configuration item and the name of a file where the configuration is stored, and scans the available Sublime configuration folders for the file and within the file for the configuration item until one is found. This allows the command palette configuration to specify a reference to the REPL configuration command instead of replicating it. For this reason, actual REPL configuration is concentrated in the menu files.
REPL classes¶
All REPL instances are descendants of
Repl. New integrations can
either provide their own class, or use one of the base classes that ship with
SublimeREPL:
- Class
SubprocessRepl for subprocess-based REPLs. The process running in the REPL is a subprocess of the editor. The input and output of the process are connected to the output and the input of the REPL
- Class
TelnetRepl. The process runs outside of the editor, presumably having been spawned externally, and the REPL connects to it over TCP via Python telnetlib.
There are three integrations that provide their own classes:
- Class
PowershellRepl. This is only used by PowerShell integration.
- Class
ExecnetRepl. This is only used by Execnet Python integration
- Class
SublimePythonRepl. A REPL over SublimeText’s internal Python interpreter.
All these can be found in the plugin’s repl/ subdirectory.
A REPL class is expected to provide a standard interface for SublimeREPL integration:
read_bytes()¶
Read and return some bytes from REPL’s incoming stream, blocking as necessary.
ReplManager will set up a separate thread with a
ReplReader pump that keeps polling this method.
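As a rough illustration of that interface (a sketch only — apart from read_bytes, the method names, the base-class constructor signature, and the import path are assumptions, not taken from this guide), a subprocess-backed Repl subclass might look like this:
# Minimal sketch of a custom Repl subclass; names other than read_bytes are illustrative.
import subprocess

from repls import Repl  # assumed import path inside the SublimeREPL package


class EchoRepl(Repl):
    TYPE = "echo"

    def __init__(self, encoding, cmd):
        super(EchoRepl, self).__init__(encoding)
        # Spawn the external interpreter as a subprocess of the editor.
        self._proc = subprocess.Popen(
            cmd,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT)

    def read_bytes(self):
        # Block until the interpreter produces output; ReplReader polls this
        # from its own thread, so blocking here is fine.
        return self._proc.stdout.read(1)

    def write_bytes(self, data):
        self._proc.stdin.write(data)
        self._proc.stdin.flush()

    def is_alive(self):
        return self._proc.poll() is None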
REPL initialization sequence¶
- User interaction causes the execution of repl_open command. Its arguments are usually taken from a menu configuration file.
- The open() method of ReplManager is called, where a Repl instance and a ReplView instance get created
- Within the ReplView constructor, the read and write loops get started. The REPL is now alive.
REPL manager¶
Class
ReplManager is responsible for managing REPL instances
(subclasses of
Repl). It initializes new REPLs by:
- Creating REPL instances
- Providing an instance of the Sublime view associated with the REPL by reusing an existing one, or creating a new one
- Creating and remembering a named
ReplView instance that couples between the two.
REPL views¶
A
ReplView instance is a coupling between a REPL instance and a
Sublime view. Its main responsibility is to create Sublime views and maintain
the loops that read from, and write to, the REPL.
- The incoming data from the REPL is read in a separate thread using
ReplReader, because read operations are assumed to be blocking
- The outgoing data is written into the REPL by ReplView's update_view_loop method. This method is called by ReplView's constructor at the very end and, as long as the associated REPL object is alive, will reschedule itself with Sublime's set_timeout.
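Conceptually, the reader side amounts to something like the following (a simplification for illustration, not the actual ReplReader source):
import threading
import Queue  # Python 2, as used by Sublime Text 2 plugins


class ReplReader(threading.Thread):
    def __init__(self, repl):
        super(ReplReader, self).__init__()
        self.repl = repl
        self.queue = Queue.Queue()
        self.daemon = True

    def run(self):
        # Pull bytes from the blocking REPL stream and hand them to the
        # UI side through a queue; an empty read means the REPL has died.
        while True:
            data = self.repl.read_bytes()
            self.queue.put(data)
            if not data:
                break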
- Creating Simple and Compound Expressions
- Adding Custom Expressions
- Using Operators and Operands in Policy Expressions
Simple expressions check for a single condition. An example of a simple expression is:
REQ.HTTP.URL ==
Compound expressions check for multiple conditions. You create compound expressions by connecting two or more expressions using the logical operators && and ||. You can use parentheses to group expressions and control the order of evaluation.
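For illustration only (this particular compound expression is hypothetical and not taken from the product documentation), two simple expressions can be combined like this:
REQ.HTTP.URL == /*.htm && REQ.HTTP.METHOD == GET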
Compound expressions can be categorized as:
While many of these features can also be useful on a diskless system where the disk is actually on the network, using them decreases cache effectiveness and thereby increases network utilization. In an environment that.
If you disable the recycle bin, files are deleted immediately. Consequently, the file system reuses the respective disk sectors and cache entries sooner. To configure the recycle bin:
Disabling offline folders is strongly recommended to prevent Windows from caching network files on its local disk – a feature with no benefit to a diskless system. Configure this feature from the target device or using Windows Group Policy.
To configure using the Windows Group Policy:
Reduce the maximum size of the Application, Security, and System logs. Configure this feature using the target device or Windows Group Policy.
To configure using the Windows Group Policy:
On the domain controller, use the Microsoft Management Console with the Group Policy snap-in to configure the domain policies for the following object:
If you have the Windows automatic updates service running on your target device, Windows periodically checks a Microsoft web site and looks for security patches and system updates. If it finds updates that have not been installed, it attempts to download them and install them automatically. Normally, this is a useful feature for keeping your system up-to-date. However, in a Provisioning Services implementation using Standard Image mode, this feature can decrease performance, or even cause more severe problems. This is because the Windows automatic updates service downloads programs that fill the write cache. When using the target device’s RAM cache, filling the write cache can cause your target devices to stop responding.
Re-booting the target device clears both the target device and Provisioning Services write cache. Doing this after an auto-update means that the automatic update changes are lost, which defeats the purpose of running automatic updates. (To make Windows updates permanent, you must apply them to a vDisk while it is in Private Image mode, as described below).
To prevent filling your write cache, disable the Windows Automatic Updates service for the target device used to build the vDisk.
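For example, on the master target device you could stop and disable the service from an elevated command prompt (a sketch — many environments manage this through Group Policy instead):
net stop wuauserv
sc config wuauserv start= disabled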
- Configure the SharePoint 2013 service
- To install the SharePoint 2013 web service
- Configure certificates for SharePoint 2013 sites
- Migrate customers to SharePoint 2013
- Provision the SharePoint 2013 service
Additionally, if you want to provision DNS for customers' SharePoint 2013 sites, ensure the DNS service is enabled and configured. By default, DNS is enabled in the SharePoint 2013 service settings.
Configuring the SharePoint 2013 service includes importing the service package file to the control panel. To import service packages, you must have the Service Schema or All Services Schema security role.
When you create a SharePoint farm through the control panel, the farm has Foundation licensing by default. You can change the license when you configure the farm. However, after the farm is provisioned to a customer, you cannot modify the license. For more information about SharePoint 2013 licensing, see SharePoint 2013 licensing and Web Apps.
A SharePoint feature pack is a collection of SharePoint features. Services Manager displays the feature packs configured on a SharePoint farm and enables you to create new feature packs from a list of the features installed on the SharePoint server.
Define global lists
VSTS (Hosted XML) | TFS 2018 | TFS 2017 | TFS 2015 | TFS 2013
Important
This topic applies to team project customization for Hosted XML and On-premises XML process models. Hosted XML customization supports adding and updating global lists with a process update. To learn more, see Differences between VSTS and TFS process template customizations.
The Inheritance process model doesn't support global lists. For an overview of process models, see Customize your work tracking experience.
By using global lists, you can minimize the work that is required to update a list that appears in the definitions of several work item types (WITs). Global lists are pick lists that you can include within one or more fields and WIT definitions.
You can share list items among several WITs for a collection by including the list items in one or more
GLOBALLIST elements.
As you define WITs, you might find that some fields share the same values. Frequently, you can share across several WITs and even across several team projects. Some of these values, such as the build number of nightly builds, change frequently, which requires an administrator to frequently update these lists in many locations. Global lists can be especially useful when a list must be derived from an external system. For example, suppose a company maintains a separate customer database. When you file a bug that a customer discovered, the customer's name is entered into a custom
Found By Customer field.
You can manage global lists for a collection as an XML file that you can list, import, export, and delete. The name of each global list can have up to 254 Unicode characters and must be unique within a collection.
Note
There are no system-defined nor predefined global lists specified in the default processes or process templates. To add or modify a global list, use the witadmin command-line tool to import and export the definition for global lists. See Manage global lists. To use a global list, add it to the
FIELD definition within a work item type. See All FIELD elements.
Add and manage global lists
A global list is a set of
LISTITEM elements that is stored and used globally by all team projects in a collection. Global lists are useful for fields that are defined within several types of work items, such as Operating System, Found in Build, and Fixed in Build.
You can define one or more global lists and their items in the following ways, based on the process model you use:
- Within a WIT XML definition that you add to a team project or process template (Hosted XML and On-premises XML)
- Within a global list XML definition file that you import to a team project collection (On-premises XML)
- Within a global workflow XML definition file that you import to a team project collection (On-premises XML).
Note
For the Hosted XML process model, the following limits are placed on global list import:
- Total of 64 global lists
- Total of 512 items per list
- Approximately 10K items can be defined total within all global lists specified across all WITs.
Syntax structure
The following table describes the GLOBALLIST and LISTITEM elements. You can use these elements to enumerate a list of values that is presented to the user as a pick list or drop-down menu of items.
Sample global list
By adding the following syntax, you can define a global list within an XML definition file for a type of work item or a global workflow:
<GLOBALLISTS>
   <GLOBALLIST name="name of global list">
      <LISTITEM value="List item 1" />
      <LISTITEM value="List item 2" />
      <LISTITEM value="List item 3" />
      <LISTITEM value="List item 4" />
      . . .
      <LISTITEM value="List item n" />
   </GLOBALLIST>
</GLOBALLISTS>
By using the following syntax, you can reference a global list within an XML definition file for a type of work item:
<GLOBALLISTS>
   <GLOBALLIST name="name of global list 1" />
   <GLOBALLIST name="name of global list 2" />
   . . .
   <GLOBALLIST name="name of global list n" />
</GLOBALLISTS>
For information about the structure and location of definition files for types of work items or global workflow, see All WITD elements or GLOBALWORKFLOW, respectively.
Sample global list maintained for a project collection
To add a global list to a project collection, you can import the following syntax by using the witadmin importgloballist command:
<gl:GLOBALLISTS xmlns:
   <GLOBALLIST name="NameOfGlobalList">
      <LISTITEM value="ListItem1" />
      <LISTITEM value="ListItem2" />
      <LISTITEM value="ListItem3" />
      <LISTITEM value="ListItem4" />
      . . .
      <LISTITEM value="ListItemN" />
   </GLOBALLIST>
</gl:GLOBALLISTS>
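To import such a file into a collection (the collection URL and file name below are placeholders for illustration), you would run something like:
witadmin importgloballist /collection:http://FabrikamPrime:8080/tfs/DefaultCollection /f:GlobalLists.xml
witadmin exportgloballist /collection:http://FabrikamPrime:8080/tfs/DefaultCollection /f:GlobalLists.xml
The second command exports the current global lists so that you can edit and re-import them.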
A global list cannot be empty. Each
GLOBALLIST element must have at least one
LISTITEM element defined.
Related articles
Are any global lists auto-populated with data?
Yes for on-premises TFS. The global list named Builds - TeamProjectName gets appended each time a build is run. Over time, the list can become very long. Best practice is to routinely remove unused items from the list.
To learn more about using this list, see Query based on build and test integration fields.
SSL Configuration: CSR Attributes and Certificate Extensions
Included in Puppet Enterprise 3.8. A newer version is available; see the version menu above for details.
Summary
When Puppet agent nodes request their certificates, the certificate signing request (CSR) usually just contains their certname and the necessary cryptographic information. However, agents can also embed more data in their CSR. This extra data can be useful for policy-based autosigning and for adding new trusted facts.
Status as of Early 2014
In this version of Puppet, embedding additional data into CSRs is mostly useful in deployments where:
- Large numbers of nodes are regularly created and destroyed as part of an elastic scaling system.
- You are willing to build custom tooling to make certificate autosigning more secure and useful.
It may also be useful in deployments where:
- Puppet is used to deploy private keys or other sensitive information, and you want extra control over which nodes receive this data.
If your deployment doesn’t match one of these descriptions, you may not need this feature.
Version Note
CSR attributes and certificate extensions are only available in Puppet 3.4 and newer. Access to extensions in the
$trusted hash is available in 3.5 and newer.
Timing: When Data Can be Added to CSRs / Certificates
When Puppet agent starts the process of requesting a catalog, it first checks whether it has a valid signed certificate. If it does not, it will generate a key pair, craft a CSR, and submit it to the certificate authority (CA) Puppet master. The steps are covered in more detail in the reference page about agent/master HTTPS traffic.
Once a certificate is signed, it is, for all practical purposes, locked and immutable; any extra data must therefore be embedded while the CSR is being generated. Custom attributes are pieces of data embedded into the CSR. They can be used by the CA when deciding whether or not to sign the certificate, but they are discarded after that and will not be transferred to the final certificate.
Default Behavior
By default, Puppet’s CA tools don’t do anything with custom attributes. The
puppet cert list command will not display custom attributes for any pending CSRs, and basic autosigning (autosign.conf) will not check them before signing.
Configurable Behavior
If you are using policy-based autosigning, your policy executable receives the complete CSR in PEM format. The executable can extract and inspect the custom attributes, and it can use them when deciding whether or not to sign the certificate.
You can also use the Puppet-specific OIDs referenced below in the section on extension requests.
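For example, a bare-bones policy executable could refuse to sign unless the CSR carries a pre-shared challenge password (a sketch; the password value and the decision logic are placeholders):
#!/bin/sh
# Policy autosign executables are called with the certname as the first
# argument and receive the PEM-encoded CSR on stdin; exit 0 signs, non-zero refuses.
csr=$(cat)
if echo "$csr" | openssl req -noout -text | grep -q 'mySuperAwesomePassword'; then
    exit 0
else
    exit 1
fi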
Extension Requests (Permanent Certificate Data)
Extension requests are pieces of data that will be transferred to the final certificate (as extensions) when the CA signs the CSR. They will persist as trusted, immutable data, which cannot be altered after the certificate has been signed.
They may also be used by the CA when deciding whether or not to sign the certificate.
Default Behavior
When signing a certificate, Puppet’s CA tools will transfer any extension requests into the final certificate.
If trusted facts are enabled, any cert extensions can be accessed in manifests as
$trusted[extensions][<EXTENSION OID>]. Any OIDs in the ppRegCertExt range (see below) will appear using their short names, and other OIDs will appear as plain dotted numbers. See the page on facts and special variables for more information about
$trusted.
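For example, a manifest could branch on one of these extensions (the class name below is hypothetical):
if $trusted['extensions']['pp_image_name'] == 'my_ami_image' {
  include profile::aws_node
}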
Visibility of extensions is somewhat limited:
- The
puppet cert list command will not display custom attributes for any pending CSRs, and basic autosigning (autosign.conf) will not check them before signing. You'll need to either use policy-based autosigning or inspect CSRs manually with the
openssl command (see below).
- The
puppet cert print command will display any extensions in a signed certificate, under the "X509v3 extensions" section.
Puppet’s authorization system (auth.conf) does not use certificate extensions for anything.
Configurable Behavior
If you are using policy-based autosigning, your policy executable receives the complete CSR in PEM format. The executable can extract and inspect the extension requests, and it can use them when deciding whether or not to sign the certificate. When you inspect a CSR with OpenSSL, any requested extensions appear as OID/value pairs, for example:
1.3.6.1.4.1.34380.1.1.4: 342thbjkt82094y0uthhor289jnqthpc2290
1.3.6.1.4.1.34380.1.1.3: my_ami_image
1.3.6.1.4.1.34380.1.1.1: ED803750-E3C7-44F5-BB08-41A04433FE2E
Any Puppet-specific OIDs (see below) will appear as numeric strings when using OpenSSL.
You can check for extensions in a signed certificate by running
puppet cert print <name>. In the output, look for the "X509v3 extensions" section. Any of the Puppet-specific registered OIDs (see below) will appear using their short names rather than their numeric IDs.
AWS Attributes and Extensions Population Example
Generally, you will want to use an automated script (possibly using cloud-init or a similar tool) to populate the
csr_attributes.yaml file at the time a node is provisioned.
As an example, you could enter the following script into the “Configure Instance Details —> Advanced Details” section when provisioning a new node from the AWS EC2 dashboard:
#!/bin/sh
if [ ! -d /etc/puppet ]; then
  mkdir /etc/puppet
fi
erb > /etc/puppet/csr_attributes.yaml <<END
custom_attributes:
  1.2.840.113549.1.9.7: mySuperAwesomePassword
extension_requests:
  pp_instance_id: <%= %x{/opt/aws/bin/ec2-metadata -i}.sub(/instance-id: (.*)/,'\1').chomp %>
  pp_image_name: <%= %x{/opt/aws/bin/ec2-metadata -a}.sub(/ami-id: (.*)/,'\1').chomp %>
END
Assuming your image had the
erb binary available, this would populate node(s). (This is not really a problem once your provisioning system is changed to populate the data, but it can happen pretty easily when doing things manually.)
In order to start over, you'll need to.
Installation¶
The examples below are for a typical Ubuntu/Debian system.
Installation of oZone libraries and examples¶
oZone is a portable solution with a very easy installation process. This example assumes you want all the ozone libraries (including dependencies) to be installed at ~/ozoneroot. This is a great way to isolate your install from other libraries you may already have installed.
There are two parts, a one time process and then building just the ozone library repeatedly (if you are making changes to the examples or core code)
One time setup:
# -------------------install dependencies------------------------
sudo apt-get update
sudo apt-get install git cmake nasm libjpeg-dev libssl-dev
sudo apt-get install libatlas-base-dev libfontconfig1-dev libv4l-dev
# ---------------------clone codebase----------------------------
git clone
cd ozonebase
git submodule update --init --recursive
# --------------- build & install --------------------------------
export INSTALLDIR=~/ozoneroot/  # change this to whatever you want
./ozone-build.sh
Note
if you face compilation issues with ffmpeg not finding fontconfig or other package files, you need to search for the libv4l2.pc and fontconfig.pc files and copy them to the lib/pkgconfig directory of your INSTALL_DIR path
Once the one-time setup is done, you don't need to keep doing it (building external dependencies takes a long time). For subsequent changes, you can keep doing these steps:
# ---- Optional: For ad-hoc in-source re-building----------------
cd server
cmake -DCMAKE_INSTALL_PREFIX=$INSTALLDIR -DOZ_EXAMPLES=ON -DCMAKE_INCLUDE_PATH=$INSTALLDIR/include
make
make install
# ----- Optional: build nvrcli - a starter NVR example ----------
cd server
# edit src/examples/CMakeLists.txt and uncomment lines 14 and 27
# (add_executable for nvrcli and target_link_libraries for nvrcli)
make
That’s all!
Dlib optimizations¶
If your processor supports AVX instructions,
(cat /proc/cpuinfo | grep avx) then add
-mavx in
server/CMakeLists.txt to
CMAKE_C_FLAGS_RELEASE and
CMAKE_CXX_FLAGS_RELEASE and rebuild. Note, please check before you add it, otherwise your code may core dump.
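For example, the relevant lines in server/CMakeLists.txt might be extended like this (a sketch; check the exact variable definitions in your checkout first):
set(CMAKE_C_FLAGS_RELEASE   "${CMAKE_C_FLAGS_RELEASE} -mavx")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -mavx")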
Building Documentation¶
oZone documentation has two parts:
- The API document that uses Doxygen
- The User Guide which is developed using Sphinx
API docs¶
To build the APIs all you need is
Doxygen and simply run
doxygen inside
ozonebase/server.
This will generate HTML documents. oZone uses
dot to generate API class and relationship graphs, so you should also install dot, which is typically part of the
graphviz package.
User docs¶
You need sphinx and dependencies for generating your own user guide. The user guide source files are located in
ozonebase/docs/server/guide
# Install dependencies
sudo apt-get install python-sphinx
sudo apt-get install python-pip
pip install sphinx_rtd_theme
And then all you need to do is
make html inside
ozonebase/docs/server/guide and it generates beautiful documents inside the
build directory.
Criteria:
- The first character is a digit in the range 1 to 9.
- The second, third and fourth characters are digits in the range 0 to 9.
- The last two characters are letters, as expressed by the last two subexpressions [A-Za-z], which indicate that the last two characters should be in the range A-Z or the range a-z.
- Between the digits and the letters there can be a space, as expressed by the subexpression which consists of a space and a question mark. The question mark indicates that the space occurs never or once.
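Taken together, these criteria correspond to a pattern along the lines of the following (shown here for illustration; the optional space is expressed by the question mark):
[1-9][0-9]{3} ?[A-Za-z]{2}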
Subexpressions
A regular expression consists of a sequence of subexpressions. A string matches a regular expression if all parts of the string match these subexpressions in the same order.
A regular expression can contain the following types of subexpressions:
[ ] – a bracket expression matches a single character that is indicated within the brackets.
For example:
[abc] matches "a", "b", or "c"
[a-z] specifies a range which matches any lowercase letter from "a" to "z"
- These forms can be mixed: [abcx-z] matches “a”, “b”, “c”, “x”, “y”, or “z”, and is equivalent to
[a-cx-z]
- The “-” character is treated as a literal character if it is the last or the first character within the brackets, or if it is escaped with a backslash
\
- In the zip code example above the four bracket expressions
[0-9] indicates that the first 4 characters should be digits, and the two bracket expressions
[A-Za-z] indicate that the last two characters should be letters
[^ ] – a negated bracket expression matches a single character that is not one of the characters indicated within the brackets. For example, [^abc] matches any character other than "a", "b", or "c".
{m,n} – matches the preceding element at least m and not more than n times.
For example, a{3,5} matches only “aaa”, “aaaa”, and “aaaaa”.
{n} – matches the preceding element exactly n times. For example, a{3} matches only "aaa".
Continuous Delivery for a single container Docker application
A single container application could be a web application, API endpoint, a microservice, or any application component that is packaged as a single Docker image. This document describes how you can use the Shippable assembly lines platform to deploy such a single container application to a Container Orchestration Service like Amazon ECS, Kubernetes, GKE, or Azure.
Assumptions
We assume that the application is already packaged as a Docker image, and this document walks through getting Continuous Deployment of your application up and running with Shippable.
- Creating a pointer to the Docker image of your application
- Specifying application options for the container
- Specifying application runtime environment variables
- Creating a Service definition
- inputs are specified in Shippable configuration files.
These are the key components of the assembly line diagram -
Resources (grey boxes)
app_image is a required image resource that represents the docker image of the application.
op_cluster is a required cluster resource that represents the orchestration platform where the application is deployed to.
app_options is an optional dockerOptions resource that represents the options of the application container.
app_environment is an optional params resource that stores key-value pairs that are set as environment variables for consumption by the application.
app_replicas is an optional replicas resource that specifies the number of instances of the container to deploy.
- Description:
app_image is an image resource that represents the Docker image of the application.
- Description:
app_options is a dockerOptions resource that represents the options of the application container. Here we demonstrate setting container options such as setting the memory to 1024MB and exposing port 80.
  type: dockerOptions
  version:
    memory: 1024
    portMappings:
      - 80:80
3. Define
app_environment.
- Description:
app_environmentis_environment - IN: app_options - IN: app_environment
5. Define
app_replicas.
- Description:
app_replicas is a replicas resource that specifies the number of instances of the container you want to deploy. Here we demonstrate running two instances of the container.
- every time the
app_image changes, i.e. each time you have a new Docker image.
Functions¶
Functions are additional operations that can be employed when calculating values for YSLD parameters. In most cases, a value for a parameter can be the output (result) of a function.
Functions can be used in most places in a style document.
Syntax¶
Functions aren’t a parameter to itself, but instead are used as a part of the values of a parameter, or indeed in any expression. So the syntax is very general, for example:
<parameter>: ${<function>}
Functions are evaluated at rendering time, so the output is passed as the parameter value and then rendered accordingly.
List of functions¶
A reference list of functions can be found in the GeoServer User Manual and is also available in raw form in the GeoTools User Manual.
The functions can be broken up loosely into categories such as geometric, math, and string functions.
Theming functions¶
There are three important functions that are often easier to use for theming than using rules, and can vastly simplify your style documents: Recode, Categorize, and Interpolate.
Recode: Attribute values are directly mapped to styling properties:
recode(attribute,value1,result1,value2,result2,value3,result3,...)
This is equivalent to creating multiple rules with similar filters:
rules: - ... filter: ${attribute = value1} - ... <property>: result1 - ... filter: ${attribute = value2} - ... <property>: result2 - ... filter: ${attribute = value3} - ... <property>: result3
Categorize: Categories are defined using minimum and maximum ranges, and attribute values are sorted into the appropriate category:
categorize(attribute,category0,value1,category1,value2,category2,...,belongsTo)
This would create a situation where the attribute value, if less than value1 will be given the result of category0; if between value1 and value2, will be given the result of category1; if between value2 and value3, will be given the result of category2, etc. Values must be in ascending order.
The belongsTo argument is optional, and can be either succeeding or preceding. It defines which interval to use when the lookup value equals the attribute value. If the attribute value is equal to value1 and suceeding is used, then the result will be category1. If preceding is used then the result will be category0. The default is succeeding.
This is equivalent to creating the following multiple rules:
rules: - ... filter: ${attribute < value1} - ... <property>: category0 - ... filter: ${attribute >= value1 AND attribute < value2} - ... <property>: category1 - ... filter: ${attribute >= value2} - ... <property>: category2
Interpolate: Used to smoothly theme quantitative data by calculating a styling property based on an attribute value. This is similar to Categorize, except that the values are continuous and not discrete:
interpolate(attribute,value1,entry1,value2,entry2,...,mode,method)
This would create a situation where the attribute value, if equal to value1 will be given the result of entry1; if halfway between value1 and value2 will be given a result of halfway in between entry1 and entry2; if three-quarters between value1 and value2 will be given a result of three-quarters in between entry1 and entry2, etc.
The mode argument is optional, and can be either linear, cosine, or cubic. It defines the interpolation algorithm to use, and defaults to linear.
The method argument is optional, and can be either numeric or color. It determines whether entry1, entry2, ... are numbers or colors, and defaults to numeric.
There is no equivalent to this function in vector styling. The closest to this in raster styling is the color ramp.
The three theming functions can be summarized as follows: Recode maps discrete attribute values directly to styling values; Categorize maps continuous attribute values into discrete styling values based on ranges; Interpolate maps continuous attribute values to continuous styling values.
Examples¶
Display rotated arrows at line endpoints¶
The startPoint(geom) and endPoint(geom) functions take a geometry as an argument and returns the start and end points of the geometry respectively. The startAngle(geom) and endAngle(geom) functions take a geometry as an argument and return the angle of the line terminating at the start and end points of the geometry respectively. These functions can be used to display an arrow at the end of a line geometry, and rotate it to match the direction of the line:
feature-styles:
- rules:
  - symbolizers:
    - line:
        stroke-width: 1
    - point:
        geometry: ${endPoint(geom)}
        rotation: ${endAngle(geom)}
        size: 24
        symbols:
        - mark:
            shape: 'shape://carrow'
            fill-color: '#000000'
Endpoint arrows
Drop shadow¶
The offset(geom, x, y) function takes a geometry and two values, and displaces the geometry by those values in the x and y directions. This can be used to create a drop-shadow effect:
feature-styles:
- name: shadow
  rules:
  - symbolizers:
    - polygon:
        stroke-width: 0.0
        fill-color: '#000000'
        fill-opacity: 0.75
        geometry: ${offset(geom, 0.0001, -0.0001)}
- name: fill
  rules:
  - symbolizers:
    - polygon:
        stroke-width: 0.0
        fill-color: '#00FFFF'
Drop shadow
Different-colored outline¶
The buffer(geom, buffer) function takes a geometry and a value as arguments, and returns a polygon geometry with a boundary equal to the original geometry plus the value. This can be used to generate an extended outline filled with a different color, for example to style a shoreline:
feature-styles:
- name: shoreline
  rules:
  - polygon:
      fill-color: '#00BBFF'
      geometry: ${buffer(geom, 0.00025)}
- name: land
  rules:
  - polygon:
      fill-color: '#00DD00'
Buffered outline
See also:
Display vertices of a line¶
The vertices(geom) function takes a geometry and returns a collection of points representing the vertices of the geometry. This can be used to convert a polygon or line geometry into a point geometry:
point: geometry: vertices(geom)
Endpoint arrows
See also:
Angle between two points¶
The atan2(x, y) function calculates the arctangent of (y/x) and so is able to determine the angle (in radians) between two points. This function uses the signs of the x and y values to determine the computed angle, so it is preferable over atan(). The getX(point_geom) and getY(point_geom) extracts the x and y ordinates from a geometry respectively, while toDegrees(value) converts from radians to degrees:
point:
  symbols:
  - mark:
      shape: triangle
      rotation: ${toDegrees(atan2(
        getX(startPoint(the_geom))-getX(endPoint(the_geom)),
        getY(startPoint(the_geom))-getY(endPoint(the_geom))))}
See also:
Scale objects based on a large range of values¶
The log(value) function returns the natural logarithm of the provided value. Use log(value)/log(base) to specify a different base.
For example, specifying log(population)/log(2) will make the output increase by 1 when the value of population doubles. This allows one to display relative sizes on a consistent scale while still being able to represent very small and very large populations:
point:
  symbols:
  - mark:
      shape: circle
      size: ${log(population)/log(2)}
See also:
Combine several strings into one¶
The Concatenate(string1, string2, ...) function takes any number of strings and combines them to form a single string. This can be used to display more than one attribute within a single label:
text: label: ${Concatenate(name, ', ', population)}
Capitalize words¶
The strCapitalize(string) function takes a single string and capitalizes the first letter of each word in the string. This could be used to capitalize labels created from lower case text:
text: label: ${strCapitalize(name)}
See also:
Color based on discrete values¶
In certain cases, theming functions can be used in place of filters to produce similar output much more simply. For example, the Recode function can take an attribute and output a different value based on an attribute value. So instead of various filters, the entire constructions can be done in a single line. For example, this could be used to color different types of buildings:
feature-styles:
- name: name
  rules:
  - symbolizers:
    - polygon:
        fill-color: ${recode(zone,
          'I-L', '#FF7700',
          'I-H', '#BB6600',
          'C-H', '#0077BB',
          'C-R', '#00BBDD',
          'C-C', '#00DDFF',
          '', '#777777')}
In the above example, the attribute is zone, and then each subsequent pair consists of an attribute value followed by a color.
Recode Function
Color based on categories¶
The Categorize function returns a different value depending on which range (category) an attribute value matches. This can also make a style much more simple by reducing the number of filters. This example uses categorize to color based on certain values of the YEARBLT attribute:
feature-styles:
- name: name
  rules:
  - symbolizers:
    - polygon:
        stroke-color: '#000000'
        stroke-width: 0.5
        fill-color: ${categorize(YEARBLT,
          '#DD4400', 1950,
          '#AA4400', 1960,
          '#886600', 1970,
          '#668800', 1980,
          '#44BB00', 1990,
          '#22DD00', 2000,
          '#00FF00')}
Categorize Function
Choropleth map¶
The interpolate function can be used to create a continuous set of values by interpolating between attribute values. This can be used to create a choropleth map which shows different colors for regions based on some continuous attribute such as area or population:
feature-styles:
- name: name
  rules:
  - title: fill-graphic
    symbolizers:
    - polygon:
        stroke-width: 1
        fill-color: ${interpolate(PERSONS, 0.0, '#00FF00', 1e7, '#FF0000', 'color')}
Choropleth Map
Administration Guide
Assign an IT policy to a group
- In the BlackBerry® Administration Service, on the BlackBerry solution management menu, expand Group.
- Click Manage groups.
- In the Manage groups section, click the group that you want to assign an IT policy to.
- On the Policies tab, click Edit group.
- In the drop-down list, click an IT policy.
- Click Save all.
Basic Layout¶
The starter files generated by the
pyramid_zodb scaffold are basic, but
they provide a good orientation for the high-level patterns common to most
traversal -based Pyramid (and ZODB based) projects.
The source code for this tutorial stage can be browsed via.
App Startup
The pyramid_zodb scaffold puts the classes that implement our resource objects, each of which also happens to be a domain model object, in models.py. Here is the source for models.py:
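A minimal sketch of what the scaffold-generated models.py typically looks like is shown below; it assumes the stock pyramid_zodb scaffold, and the class name MyModel and the 'app_root' key are the scaffold defaults, which may differ slightly between Pyramid releases.

    from persistent.mapping import PersistentMapping


    class MyModel(PersistentMapping):
        # The root resource: __parent__ and __name__ are None because this
        # object sits at the top of the traversal tree.
        __parent__ = __name__ = None


    def appmaker(zodb_root):
        # Bootstrap the application root inside the ZODB root object if it
        # does not already exist, then return it for use on each request.
        if 'app_root' not in zodb_root:
            app_root = MyModel()
            zodb_root['app_root'] = app_root
            import transaction
            transaction.commit()
        return zodb_root['app_root']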
The appmaker function is used to return the application root object. It is called on every request to the Pyramid application, and it also performs bootstrapping by creating an application root (inside the ZODB root object) if one does not already exist. The pyramid.view.view_config() configuration decorator is used to perform a view configuration registration; this registration is activated when the application is started, by virtue of it being found as the result of a scan.
Logging configuration:

    [loggers]
    keys = root

    [handlers]
    keys = console

    [formatters]
    keys = generic

    [logger_root]
    level = INFO
    handlers = console

    [handler_console]
    class = StreamHandler
    args = (sys.stderr,)
    level = NOTSET
    formatter = generic

    [formatter_generic]
    format = %(asctime)s %(levelname)-5.5s [%(name)s] %(message)s
In Focus
Industry Issues
- Industry Focus on Balance of Plant (BOP) Air-Operated Valves
- NMAC to Begin Project to Improve Member Utilization of Maintenance and Equipment Reliability Products
- Work Under Way on Comprehensive Pump Maintenance Guidance
- A Preventive Maintenance Program Designed to Manage Equipment Degradation Rather Than Just Prevent Equipment Failures
- Preventive Maintenance: A “Living” Program
- Update to NMAC Freeze Sealing Guide
- Update to NMAC On-Line Leak Sealing Guide
- NMAC Continuing Efforts with Diesel Generator Maintenance
- EPRI Revising Condition-Based Maintenance Guidance
- Development of Copper Book Continues
- Dry-Type Transformers and Reactors
- Dry Cask Handling and Storage
- Switchyard Equipment
NMAC Meetings
- EPRI MOV PPM Users Group Update
- Additional Intensive Four-Day Terry Turbine Training and TTUG Summer Meeting Scheduled
- 2010 Circuit Breaker Users Groups Meeting
- PRDIG Holds Annual Meeting
- LEMUG 2010 Activities
- 2010 Joint Pump Users Group and Large Electric Motor Users Group Meeting
- Combined Condition-Based Maintenance Meeting Planned for July 2010
- MRUG Meets New NRC MR Staff at 2009 Summer Meeting in Baltimore, Maryland
- RCRC Working to Improve System Reliability
- 2010 Transformer and Switchyard Users Group Meeting
- HRCUG Rolls Out New Collaborative Website and Announces Annual Summer Meeting
- Work Planning Managers/Supervisors Effecting Improvement in Work Package Quality
- Japanese Utilities Pursue Condition-Based Maintenance
- Japanese Valve Users Group Update
- 2010 NMAC Meetings
NMAC Members and Personnel
In Focus
Preventive Maintenance Basis Database (PMBD) Updates and Activities
PMBD 2.1 is Available
The Preventive Maintenance Basis Database (PMBD) 2.0 has been revised to be compatible with the Windows Vista® and the Windows® 7 operating systems. While in the process of revising PMBD 2.0, other identified bugs were addressed. The duplication of template updates and vulnerability calculation errors were corrected. You can now order PMBD 2.1 (Product ID 1018758) at.
New PMBD Web Service Link for PMBD Template Updates
In an effort to heighten cyber security, EPRI’s Information Technology (IT) Group has moved all Web Service links behind their mainframe firewall at the EPRI Palo Alto office. Previously, the PMBD Web Service link was located at EPRI Charlotte.
Users of PMBD Versions 1.5 and 2.0 should be advised that the Component Updates from the “Download from EPRI” link will no longer function as before.
You will be required to get Component Updates via “Import from Component Update file.” For Component Update assistance, contact the EPRI Customer Assistance Center at 800.313.3774 or send an email to [email protected].
PMBD Component Export Plugin, Version 1.0
The PMBD Component Export Plugin, Version 1.0, (1018396) has been available to the membership since the first quarter of 2009. It can be used only with PMBD 2.0 or higher. It is not designed to work with PMBD 1.5 or older versions of the PMBD.
The PMBD Component Export Plugin module provides the PMBD user with the ability to export PMBD template information and data in an XML file format. This gives the PMBD user the ability to import the information into their plant’s computerized maintenance management system (CMMS) or PM Basis analysis software, as long as it is used only by the member and they take full responsibility for maintaining the data’s integrity.
The PMBD Component Export Plugin, Version 1.0, (1018396) is available at and can be either ordered or downloaded.
PMBD Template Reviews
At present, the PMBD contains 152 component templates. To ensure that the information and data contained within each template are accurate and up to date, NMAC and industry subject matter experts perform periodic reviews and updates. To help facilitate the review process, guidance documents have been developed along with a template review schedule. There are three review process guidance documents:
- PMBD Maintenance and Control Process Procedure
The purpose of this guidance procedure is to establish the basic expectations, along with the roles and responsibilities, for everyone involved with the reviews, updates, and additions to the PMBD.
- PMBD Content Review Initiation
The purpose of this guidance document is to provide a way to initiate the review of individual PMBD templates, assign responsible EPRI subject matter experts (SMEs), and track the review progress to completion.
- PMBD Content Review Process
The purpose of this guidance document is to provide everyone who is involved with a PMBD template review with guidance on their information review and a way to document their findings.
As previously stated, to date there are 152 component templates contained in the PMBD. To ensure that each template is up to date and accurate, a review cycle of at least every five years is recommended; however, some templates may require being reviewed more frequently because of their critical nature. Based on a five-year review cycle, this means that approximately 30 templates per year must undergo the review process.
To establish an initial review schedule, two questions were asked: (1) When was the template first added to the PMBD? (2) When was the template last reviewed? Based on this information, an aggressive initial review schedule has been developed for each template over the next four years. Once the initial review is complete, a routine review schedule can be developed. The number of templates to be reviewed each year is as follows:
- 2010 – 64 templates
- 2011 – 39 templates
- 2012 – 29 templates
- 2013 – 17 templates
This leaves the templates that were either reviewed or added to the PMBD in 2009, and they are on the schedule to be reviewed in 2014.
At present, during the first quarter of 2010, 28 templates are being reviewed. These templates are for the following component types: transformers, relays, switchgears, and valves. These templates were selected first based on information from the membership and their operating experience (OE) history.
A list of the PMBD component templates to be reviewed in 2010 is included here with the year that they were added to the PMBD and their last known review date. A complete listing of the EPRI PMBD templates is available on either the PMBD UG website at or the EPRI collaborative website at.
PMBD Template Data and Information Access for EPRI Members
Several PMBD users (Nuclear and Generation) and members have expressed an interest in having the ability to be more involved with the PMBD component template review process.
As previously stated, there are 152 component templates in the PMBD. To ensure consistent reviews, we ask that the individuals participating in the PMBD template reviews be familiar with and adhere to the guidance provided in the PMBD Maintenance and Control Process Procedure and Content Review Process guidance document. These guidance documents are both available with the PMBD template data and information.
The PMBD component data tables, component definitions, and component task information are presently available for each of the 152 PMBD component templates, along with the PMBD Maintenance and Control Process and Content Review Process guidance documents. This information is available to PMBD users and members on the following websites: the PMBD UG website, located at
Select “Templates & Information,” and then expand the component type to the desired component.
Each component template contains the same data and information. Each will have the component’s data table (FMEA), definitions, template, and task descriptions.
The EPRI Collaboration Website located at Collaboration/Preventive Maintenance (PM) Nuclear Center/Documents/EPRI PMBD Template Library
This is the EPRI Collaboration, Preventive Maintenance (PM) Nuclear Center website.
The EPRI Collaboration website will be displayed.
Scroll down the page, and select “Preventive Maintenance (PM) Nuclear Center.”
You will be directed to a Terms of Use screen. Select “Accept,” and continue to the Preventive Maintenance (PM) Nuclear Center website.
Select “Documents,” which opens the documents library. Select “EPRI PMBD Template Library,” which opens the library to display the component types. Select the desired component type to display the individual components. Select the desired component to display the component template data and information.
Again, each component template contains the same data and information. Each will have the component’s data table (failure modes and effects analysis [FMEA]), definitions, template, and task descriptions.
PMBD Suggestions and Recommendations from EPRI Members
Several PMBD users (Nuclear and Generation) and members have expressed an interest in having the ability to provide suggestions and feedback for existing PMBD component templates, development of new PMBD templates, and features of the PMBD. You now can provide suggestions in two locations. The first location is at the Preventive Maintenance Basis Database User Group website at
Post your suggestions to be considered for upcoming updates and enhancements.
The second location is at the EPRI Collaboration Website at.
Log in to the EPRI website as you normally would, and select “EPRI Collaboration” from the tool bar across the top of the page. After the collaboration site has opened, scroll down the page to locate and select “Preventive Maintenance (PM) Nuclear Center.” A Terms of Use page will appear requiring you to accept the terms and conditions to proceed. When the Preventive Maintenance (PM) Nuclear Center site opens, you will see five buttons across the top of the page. Select the “Discussions” button.
Post your suggestions to be considered for upcoming updates and enhancements.
PMBD User’s Guide Update
As previously discussed, PMBD 2.0 has been revised to be Windows Vista® and Windows® 7 compatible. While in the process of revising PMBD 2.0, other identified bugs were addressed. The duplication of template updates and vulnerability calculation errors have been corrected. In 2008, the PMBD 2.0 User’s Guide was developed as an aid for PMBD users. The information contained in the guide is still applicable for PMBD 2.1; however, during the first quarter of 2010, enhancements and revisions for the PMBD User’s Guide will be developed and should be available to the membership around the end of the second quarter of 2010.
PMBD Upcoming Events:
The PMBD has been developed, through membership funding, into a multipurpose, multifunctional toolbox that is intended to be a living repository and to aid its users with the analysis, development, and documentation of the maintenance strategy for plant equipment. At present, approximately 80% of the membership use the PMBD in some capacity for developing the maintenance strategy of their plant equipment. However, most users are not familiar with or have never been shown the many tools and uses that are available to them with the PMBD. In an effort to reduce this gap, informational and training web casts and training workshops are being developed to inform and train PMBD users on the multiple uses of the PMBD.
Annual PMBD UG Meeting
The next PMBD UG meeting will be held in Charlotte, North Carolina on August 10, 2010. The PMBD UG Meeting is open to all EPRI members. However, there will be a charge to cover meeting expenses for those participants who are not contributing members of the Supplemental NMAC PMBD Program. Invitations and registration information will be sent out in advance. All participants wishing to attend will be required to register for the event.
PMBD Informational and/or Training Web Casts
Below is a tentative list of upcoming web casts that you, as a user of the PMBD, should plan to sit in on. It is expected that the amount of time for these web casts will be from 1 to 2 hours, maximum. As the time for each event approaches, notifications will be sent out for your response and registration.
- PMBD 2.1 Overview
- PMBD Tool Orientation
- Introduction to Utilizing the PMBD Vulnerability Tool (three web casts)
- Determining Maintenance Program Effectiveness
- The Attributes of a “Living PM Program”
If there are other subjects that you would like to see covered, contact Jim McKee.
PMBD Training Workshops
Below is a tentative list of upcoming PMBD training workshops that you, as a user of the PMBD, should plan to attend. It is expected that the duration for each workshop will be two days. Each workshop will be limited to 10 participants minimum and 25 participants maximum. An interest survey will be sent out, at least two months in advance of each workshop, to determine the amount of interest. If enough interest is shown, additional workshops can be scheduled to accommodate the number of participants. Otherwise, the workshops will be on a first come, first served basis.
There will be a charge to cover meeting expenses. For those participants who are contributing members of the Supplemental NMAC PMBD Program, the participation fees will be reduced. Invitations and registration information will be sent out in advance. All participants wishing to attend will be required to register for the event.
- Utilizing the PMBD to Analyze and Document the Maintenance Strategy for Plant Equipment
- Utilizing the PMBD Vulnerability Tool to Maximize Limited Plant Resources
- Moving from Just Preventing Equipment Failures to Monitoring Equipment Degradation
If there are other subjects that you would like to see covered, contact Jim McKee.
If you have any questions, suggestions, and/or recommendations, or if you or anyone within your organization has the expertise for a particular component or component type and would like to participate as an SME in the review process, contact Jim McKee, 256.548.0329, [email protected].
Transformer Paper Sample
Kraft paper is used as the wire insulation for oil-immersed transformers. In Europe, transformers are typically designed to International Electrotechnical Commission (IEC) standards and use standard kraft paper for bulk insulation. Most transformers specified in the United States use thermally upgraded kraft paper as the wire insulation.
In order to determine the degradation of paper, furanic compounds in oil and the degree of polymerization of paper are used. When paper decomposes thermally, some oil-soluble chemical compounds are released in addition to carbon monoxide and carbon dioxide gases. These are known as furanic compounds, the principal one being 2-furfuraldehyde. There is growing interest in developing diagnostic techniques based on the presence of these compounds to identify when an abnormal rate of thermal decomposition of insulation exists because the test can be made on an oil sample taken while the transformer is in service.
As paper ages, it becomes brittle. A measure of the brittleness can be determined by a test called degree of polymerization (DP). The cellulose molecules in paper consist of long chains of glucose rings. As the paper ages, these chains shorten, which causes brittleness of the insulating paper. The DP test measures the average number of glucose rings per cellulose chain in a paper sample. A decrease in the DP is an indicator of insulating paper aging.
Although these techniques hold promise for providing condition monitoring techniques for operating transformers, the manner in which transformers are operated does not provide a general basis from which to make comparisons. Network transformers are not typically consistently loaded, and when there is a problem, it is usually attributable to another problem, not insulation aging. However, generator step-up (GSU) transformers at base-loaded power plants provide a better test bed to obtain baseline information. Taking actual paper samples from the hot spot of failed, aged, and fully loaded transformers (that is, GSUs) will provide valuable data. This is possibly a unique opportunity for the industry to develop baseline data for the DP of kraft paper and thermally upgraded paper and to compare these data to transformer loading.
The suggested approach to this project would be:
- Obtain paper samples from GSU transformers.
- Compare paper samples taken from inaccessible areas (transformer hot spot) and also paper from accessible areas (leads) of transformers undergoing rebuild or post-failure evaluation.
- Compare DP data from the hot spot and other paper samples to furan information.
Once the project is complete, a comparison of the performance of thermally upgraded and nonthermally upgraded paper will provide insight into actual paper aging processes and transformer loading. Another outcome would be a correlation to the IEEE and IEC loading guides and a determination of the actual age of the paper(s). Because transformers that have been fully loaded and are being scheduled for replacement will be sought for this project, aged paper can be obtained and used to determine a direct correlation and validation of measured DP to end-of-paper-life industry guidance.
In order for this project to be successful, it will require the participation of plants that are retiring GSU transformers. When the transformer has been selected for scrapping, paper samples from accessible lead areas and the transformer hot spot will need to be obtained. After the paper samples are acquired, the paper DP will be measured and correlated to transformer loading and other data. Comparison between thermally upgraded and nonthermally upgraded paper could also be made.
This project will require considerable logistical efforts, and a final technical report is anticipated to be completed by 2012.
To provide paper samples for this project or for more information, contact Wayne Johnson, 704.595.2551, [email protected].
Industry Issues
Industry Focus on Balance of Plant (BOP) Air-Operated Valves
Recently, the Institute for Nuclear Power Operations (INPO) released their second round of issues from the newly formed Important Issues List. The purpose of this list is to raise awareness on select trends that may have emerged or reemerged in the industry. One of the issues included balance of plant (BOP) air-operated valve (AOV) failures. BOP AOVs have accounted for ~7 million MWeHrs in lost generation over the last five years, which contributed to the industry not achieving the 1.0 forced loss rate (FLR) goal. The majority of events have occurred within the feedwater system, where EPIX data indicates that positioners and controls are causes of more than half of the failures.
Lost Generation Per Year Due to AOVs
NMAC has been working with the AOV User Group (AUG) to determine a course of action for resolving some of these issues. The following projects have been proposed for 2010:
- Revision to Valve Positioner Principles and Maintenance Guide (1003091)
Published in 2002, the Valve Positioner Principles and Maintenance Guide covers the control loop description, positioner design, calibration, and maintenance practices needed for reliable operation. A gap analysis suggests two missing pieces—more focus on delivering the control signal and discussion on digital controls, including application guidance. As plants continue to perform component upgrades, more are focusing on digital positioners; however, at the time of publication, very few plants had experience with digital positioners.
- Development of a Diagnostic Guide
The purpose of this document would be to establish guidance for successful and efficient valve diagnostic tests. The document would cover discussion on which valves to test, test methods and descriptions, proper test setup, test duration, and trace analysis. The end result will be a product that helps balance staff workload while ensuring equipment reliability. Such a guide would be beneficial to both domestic and international NMAC members.
- Preventive Maintenance (PM) Basis Template Update
PM templates are designed to provide plants with a baseline approach to maintenance tasks and frequencies based on industry operating experience and failure mechanisms. Plants start with the template and incorporate their own unique operating experience, environmental conditions, and economic considerations to adjust the PM frequencies. Over time, failure modes can change or new condition monitoring tools become available, which require the templates to be revised. NMAC will be revisiting both AOV PM templates to ensure that the information is the most current.
For more information about industry initiatives to drive down BOP AOV failures, contact Nick Camilli, 704.595.2594, [email protected].
NMAC to Begin Project to Improve Member Utilization of Maintenance and Equipment Reliability Products
Over the years, EPRI has produced numerous products that both address maintenance issues on power plant equipment and focus on improving its reliability. However, the nuclear power industry does not always effectively track, assess, and ultimately implement these products into the day-to-day operations of their power plants. The net effect of this is that it reduces the value of EPRI membership and the effectiveness of technology transfer. In some ways, it keeps EPRI members from fully realizing improvements in plant operations that they could derive from their membership.
Background
In-depth reviews of events over the last two years have shown that some of the leading causes of these plant events and issues have been worker errors, worker practices, and maintenance practices. These types of errors would imply that there is a significant gap between the information that is available to maintenance personnel and the information that is needed to properly conduct basic maintenance tasks.
A review of the information available from EPRI technical reports, users groups, and other organizations show that there is a large body of detailed information that should have precluded many of the events. The question then becomes whether the current body of information contains the necessary information to prevent these events. A review of the existing information indicates that there is sufficient applicable information to avoid the events.
Having acknowledged that the information exists and is applicable, it is necessary to look for other issues that can account for the failure to use the information. One of the primary problems may actually be the large volume of information available. Additionally, staff reductions and the increased use of supplemental personnel have made the study of the many volumes of EPRI information impractical.
Project Plan
The focus of this project will be on identifying information that is specifically directed at maintenance tasks that are performed as part of preventive or corrective maintenance. Where necessary, this project will identify new or improved delivery tools for information so that the existing information can be packaged for more effective use by the members.
In order to manage this project in reasonable pieces, the information will be evaluated on a system-by-system basis. The first pilot system will be key components in the feedwater system that have historically provided the greatest challenges to members. The components that have been chosen are main feedwater pumps (including motors and turbine drives) and significant air-operated valves (including main feedwater regulating valves). There are currently several NMAC maintenance guides that are directly related to these important feedwater system components and a number of additional maintenance guides that are applicable to these feedwater system components.
The approach that will be used is to collect several representative work packages for the identified feedwater system components from members. These procedures will be evaluated step by step to identify the requisite skills for successful completion of these steps.
These skills will then be associated with existing skill verification modules in the EPRI Plant Support Engineering (PSE) Standardized Task Evaluation (STE) program. Once this step is completed, a gap analysis will be performed to identify the research gaps and technical information gaps that may exist in the existing NMAC maintenance guides. A review of more effective methods to accomplish the knowledge transfer of the existing products will be conducted, and recommendations identified.
Developing Valve Maintenance Skills
Value
This project will focus on developing methods for improving the delivery of specific information to members so that they can more effectively locate and utilize the large amounts of information that are available to them. The results will be intended for use by the maintenance technician, maintenance planner, procedure writer, and maintenance training organization. Our intent for this project is to leverage the knowledge and skills of existing EPRI and NMAC staff, as well as industry personnel with expertise in the selected areas who will serve as participants in the technical advisory group (TAG). This will enhance the value to the membership by making use of personnel who are most familiar with the issues and the problems experienced by plant operators.
If you would like to learn more about this project or to participate in this effort as a member of the TAG, contact Mike Pugh, 919.812.5162, [email protected].
Work Under Way on Comprehensive Pump Maintenance Guidance
As detailed in the November NMAC Memo, NMAC has begun a project to develop a comprehensive pump resource for use by members in managing this important power plant asset.
The project is well under way, and one of the highlights will be a web-based application that will enable members to navigate to the various NMAC products that exist but also to other resources such as pump program description and self-assessment aids, an industry pump database, and pump repair specifications.
Navigation to these resources is intended to include drop-down menus, search features, and other common methods to help members quickly locate what they are looking for. Access will continue to be through established EPRI membership controls.
Member Value
Anticipated value to members is that the final product will serve as a road map to member personnel on existing NMAC products and also resources that will enable them to better maintain and troubleshoot problems with power plant pumps. The report will help to educate and quickly familiarize new and existing plant personnel on the information that is available for maintaining their pumps.
The approach to the project is to leverage the knowledge and skills of the Pump Users Group as well as industry personnel who have been asked to serve as participants in the technical advisory group. This will enhance the value to the membership by making use of personnel who are most familiar with pumps and the problems experienced by plant operators.
If you would like to learn more about this project or to participate in the technical advisory group, contact Mike Pugh, 919.812.5162, [email protected].
A Preventive Maintenance Program Designed to Manage Equipment Degradation Rather Than Just Prevent Equipment Failures
Over the years, EPRI has provided the industry with many technical documents that have aided in the development and implementation of preventive maintenance (PM), predictive maintenance (PdM), and condition monitoring programs, all of which have been aimed at and effective in preventing equipment failures. In past years, with an abundance of labor resources, this was an effective approach to maintaining equipment reliability and availability while minimizing maintenance cost and maximizing generation capacities. However, in recent years, for plants to remain competitive and minimize power production costs, the size of the work force has been reduced. As a result, many PM, PdM, and condition monitoring tasks and activities have had to be deferred or done away with, leaving equipment susceptible to failure and unreliable.
To counteract this problem, PM programs that examine the different equipment failure locations and their degradation mechanisms need to be developed and implemented, with a primary focus on the PM strategies that are highly effective in identifying the majority of the equipment’s potential degradation mechanisms. With this type of PM strategy in place, plants will be able to better focus their available resources on the areas that provide the most information about the overall operating condition of their equipment, thus reducing maintenance costs while maintaining equipment reliability and availability.
To aid with the development of a PM program that has the capabilities to manage equipment degradation, several areas need to be identified and documented:
- Identify the major industry equipment types and subsets of each equipment type.
- Establish equipment analysis boundaries.
- Identify each failure location, degradation mechanism, degradation influence, and discovery method or prevention opportunity for each equipment type and/or its subsets.
- Identify the most highly effective tasks for each equipment type and its subsets.
- Using the EPRI Preventive Maintenance Basis Database (PMBD) as the repository for this information, ensure that all the major industry equipment types and their subsets are included in the PMBD.
- Develop a degradation modeling tool to work in conjunction with the PMBD Vulnerability Tool to aid plant management with maintenance decisions based on risk.
This project will provide the membership with an effective way to identify, track, and monitor potential equipment degradation mechanisms so that corrective measures can be implemented in a timely manner without jeopardizing equipment availability and/or reliability. The project will have these benefits:
- It will provide the membership with a single point of reference.
- The information can be applied to all member plants.
- The application of this product will also have the ability to help eliminate ineffective or inefficient maintenance (PM, PdM, and condition monitoring) activities, resulting in a reduction of costs while maximizing available resources.
- The application of this product will identify the most effective activities that should be in place to ensure equipment reliability and availability, thus reducing equipment failures that typically result in immediate and widespread costly repairs.
As the project progresses, industry subject matter experts (SMEs) for various equipment types will be contacted. They will be asked to review and comment on the identified equipment information as it is developed and to participate in web cast discussions, if necessary.
If you or someone within your organization is interested in serving as an equipment type SME, contact Jim McKee, 256.548.0329, [email protected].
Preventive Maintenance: A “Living” Program
Over the years, EPRI has developed many technical documents to aid in implementing and maintaining PM programs; even so, some programs have been left ineffective and vulnerable to failure. A PM program is designed to be a “living” program, ever changing, ever evolving. If the program is rigid and/or becomes stagnant, it will fail, leaving equipment vulnerable.
Like plants, the industry’s engineering and maintenance work force is aging and being replaced with young, inexperienced personnel. Their knowledge of EPRI technical information offerings about implementing and maintaining a living PM program needs to be addressed. To make all this information readily available to the membership, the information needs to be developed and compiled into one reference source that contains all the attributes of a living PM program.
To aid with the development of a technical document that incorporates all the facets of a living PM program, many areas within a PM program will need to be identified, addressed, and documented:
- Identify all the attributes of a living PM program and how they relate.
- Identify and review existing EPRI technical documents, making revisions as necessary.
- Compile the living PM program information into one document, referencing existing information as necessary.
- Develop a living PM program flowchart.
This project will provide an effective way for the membership to assess their existing PM program and ensure that they have an effective living PM program implemented that does the following:
- It will provide the end user with a single point of reference.
- The information can be applied to all member plants.
- The application of this product will identify the most effective activities that should be in place to ensure continued equipment reliability, thus reducing equipment failures, which typically result in immediate and widespread costly repairs and a loss of generation.
As the project progresses, industry subject matter experts (SMEs) will be contacted and asked for their assistance with the development of the living PM program document. Serving as a technical advisory group (TAG) member, they will be asked to provide their knowledge and expertise in the development of the document. They will also be expected to provide document oversight and participate with document reviews and comments, ensuring that the final product meets or exceeds industry standards.
In an effort to minimize expenses and time away from the TAG members’ work, it is expected that the project will require at least one face-to-face meeting to organize the document, and the remaining meetings can be held via web cast. The total time of participation should be approximately 40–60 hours.
If you or someone within your organization is interested in serving as a technical advisor for this endeavor, contact Jim McKee, 256.548.0329, [email protected].
Update to NMAC Freeze Sealing Guide
Freeze sealing refers to the process of applying an external refrigerant to a point in a process piping system in order to cause the formation of a solid internal plug from the frozen process fluid contained in the pipe. Because the process fluid in question is commonly water or some mixture thereof, this process is also sometimes referred to as ice plugging. Freeze sealing is most often used to isolate a section of a piping system when no other ready means of isolation, such as valves, are available. In many cases, freeze sealing is performed specifically to perform maintenance or repairs to a system isolation valve. Freeze sealing can also be used to isolate a section of a piping system for hydrostatic testing. Three routine applications for freeze sealing at nuclear power plants are to:
- Enhance maintenance capabilities during shutdown or outage periods.
- Provide blocking in lines where valves or stops are not placed.
- Eliminate the necessity of taking a full system out of service during plant operation, which might cause a limiting condition for operation.
The current guide, Freeze Sealing (Ice Plugging) of Piping, Rev. 1 (TR-016384-R1), was issued in November of 1997. The update to this guide will be used to accomplish the following objectives:
- Update the existing technical guidance contained in EPRI TR-016384-R1.
- Address technical issues that are of concern for some utilities, such as metallurgical issues caused by the extremely low temperatures encountered when using liquid nitrogen (-320°F/-195°C).
- Provide increased guidance on the actual steps for safe and successful planning and execution of freeze seals on process piping in plant applications, including contingency planning if problems arise.
- Identify and summarize industry operating experience with freeze seals since 1997.
In addition to the update of this guide, NMAC will consider developing and conducting a hands-on technical workshop covering the planning, supervision, and performance of freeze seals in accordance with this new guideline, based on direction from the technical advisory group (TAG). The TAG will consist of metallurgical professionals, a freeze sealing company, and utility personnel. A few slots are available on the TAG, which offers participants an opportunity to provide collaborative input to the guide. The update to the guide is anticipated in late 2010.
For more information on this project, contact Gary Boles, 423.870.5979, [email protected].
Typical Liquid Nitrogen Freeze Jacket
Update to NMAC On-Line Leak Sealing Guide
On-line leak sealing is used extensively in industries including nuclear power plants to address leaks that should not be tolerated, but would require outages to permanently repair. The process may involve several different techniques, such as peening the leaking joint, direct injection of a leak sealant into a component, or encapsulation of the component and injection of a leak sealant into the device. Companies that specialize in such services are usually utilized to perform the work because of its hazardous nature and specialization.
The current guide, On-Line Leak Sealing - A Guide for Nuclear Power Plant Maintenance Personnel (NP-6523-D), was issued in 1989. The update to this guide will be used to accomplish the following objectives:
- Update the existing technical guidance contained in EPRI NP-6523-D.
- Identify and summarize industry operating experience with on-line leak sealing since the guide was issued in 1989.
- Ensure that the current guidance addresses issues such as foreign material exclusion (FME) and over-pressurization and provides adequate caution when working on components that are already degraded due to the effects of leaks, such as steam/water cutting of pressure-retaining parts and metallurgical degradation of the wetted parts.
- Ensure that contingency planning is adequately addressed should problems arise during the leak sealing process.
- Provide updated guidance on the actual steps for successful planning and execution of on-line leak sealing in plant applications in a safe manner.
In addition to the update of this guide, NMAC will consider developing and conducting a hands-on technical workshop covering planning, supervising, and performing on-line leak sealing in accordance with this new guideline, based on direction from the technical advisory group (TAG). The TAG will consist of metallurgical professionals, on-line leak sealing companies, and utility personnel. A few slots are still available for the TAG, which offers participants an opportunity to provide collaborative input to the guide. The update to the guide is anticipated in late 2010.
For more information on this project, contact Gary Boles, 423.870.5979, [email protected].
Typical On-Line Leak Sealing Configuration Using Insert Wire, Drilled into a Pipe Flange with a Leak Seal Injector
NMAC Continuing Efforts with Diesel Generator Maintenance
Last year NMAC worked with U.S.-based emergency diesel generator (EDG) owners groups and published an interim report titled Generator Maintenance Guide for Emergency Diesel Generators (1019146). The primary intent of this document was to provide guidance on generator maintenance and maintenance programs. NMAC is continuing this effort in 2010 and will publish revised guidance later this year.
Well-known EPRI generator consultants Isidor Kerszenbaum (of Southern California Edison) and Geoff Klempner (of AMEC NSS) will be working to produce a revised draft that will be reviewed by utility personnel and utility members of the industry’s emergency diesel generator owners groups.
Although generators have been highly reliable over the past 40 years, some plants have identified issues through inspection and testing which demonstrate that more detailed maintenance of these generators should be considered. In addition, with plants moving to life extension, some sites are investigating the maintenance that should be done to ensure a high degree of reliability throughout a 60-year plant life. Consequently, industry personnel suggested that EPRI investigate diesel generator maintenance.
EPRI 1019146 provides an overview of industry challenges regarding EDG generator maintenance, a discussion of generator issues identified through industry experience, a list of nonintrusive generator maintenance activities and their limitations, a discussion on additional prudent generator maintenance activities, and a discussion on practical strategies to address EDG generator maintenance.
For additional information on this effort or to be added to a notification list for this effort, contact Jim Sharkey, 704.595.2557, [email protected].
GE Generator (Nordberg EDG) drive end view
EPRI Revising Condition-Based Maintenance Guidance
Since the 1980s, EPRI has published numerous documents relating to predictive or condition-based maintenance technologies and programs. Many of these documents are now more than 10 years old. As a result, NMAC is performing a comprehensive review of all predictive and condition-based maintenance publications to compare and contrast existing documents with current best practices, programs, and technologies. The goal is to identify gaps, areas in need of revision, and areas where more focused effort and clarity would prove beneficial. This effort will utilize a utility technical advisory group (TAG) that was formed during previous Predictive Maintenance Users Group meetings.
EPRI’s intent is to develop a principal document on predictive and condition-based maintenance, which will include program implementation, development, metrics, etc. This document will consolidate elements of various existing EPRI documents, including:
- Predictive Maintenance Guidelines, Volumes 1-4, TR-103374
- Predictive Maintenance Program: Development and Implementation, TR-108936
- Predictive Maintenance Program Implementation Experience, TR-111915
- Performance Metrics for Condition-Based Maintenance Technology Application Programs, 1003682
- Predictive Maintenance Assessment Guidelines, TR-109241
- Predictive Maintenance Self-Assessment Guidelines for Nuclear Power Plants, 1001032
A draft of this document will be developed by EPRI personnel and consultants and reviewed by a utility TAG. The purpose of the TAG will be to review drafts and provide input and direction to the overall effort, primarily via e-mails and conference calls.
For more information on this effort or to be added to a list of interested utility personnel, contact Jim Sharkey, 704.595.2557, [email protected].
Development of Copper Book Continues
EPRI is developing a comprehensive document that reviews the current industry guidance for the application and maintenance of power transformers. The report has been given the nickname “the Copper Book.” This name represents the latest in a series of reports that have been developed by the EPRI Power Delivery and Utilization Group, with each major equipment area being assigned a different color.
The development of the Copper Book is sponsored by multiple programs within EPRI. The Nuclear, Power Delivery, and Generation sectors are combining efforts in both funding and expertise in the development of this project. The members of each sector use transformers in conducting their activities; however, each sector uses transformers in a slightly different manner. For instance, base-loaded stations typically run their transformers fully loaded for the life of the units, whereas load-following or peaking generating stations operate based on load requirements, and power delivery or networked transformers are rarely fully loaded. With the varying requirements, EPRI has set out to write a report that will address the various ways in which transformers are applied and maintained to meet the needs of their users.
The project is a multiyear effort that is focused on collecting, combining, and highlighting the current guidance for the application and maintenance of power transformers. The transformers covered in this report are primarily liquid-filled and forced-cooled transformers used for the production and transmission of electric power.
The Copper Book work for 2009 consisted of completing an update to EPRI Transformer Guidebook Development (1017734) that added fundamentals (Section 1) and maintenance (Section 9). The sections that had no detailed information were enhanced by adding references to other EPRI guides that can provide information for that specific section.
Copper Book work for 2010 will include updates to previous sections, if necessary, and will add two sections: Operations (Section 7) and Monitoring & Diagnostics (Section 8). The Power Transformer Working Group will review the content of the report along with advisors from other groups.
If you are interested in participating in the development of the Copper Book, contact Wayne Johnson, 704.595.2551, [email protected].
Dry-Type Transformers and Reactors
Whether at a power generating station, industrial facility, or large office building, dry-type transformers are typically used to provide electric power. The dry-type transformer is most adaptable to indoor service, provided sufficient cooling air is available.
As many nuclear power plants approach 30 or more years of operating life, there is a concern for the continued operation of many safety-related components, including dry-type transformers. Since the performance and availability of existing plants have been greatly improved, many plants have been slated for life extension. Some plants have performed power uprates that have either consumed much, if not all, of the original design margin that existed when the plant was designed or required more power and thus larger transformers. With these multiple drivers along with a keen focus on equipment reliability, NMAC has been tasked to look at the performance and maintenance practices for dry-type transformers used in nuclear power plants.
The project team will collect and review dry-type transformer performance information from industry sources. The information will be consolidated to provide a clear picture of equipment operating history. The focus of the project will be the transformers that power unit substations and key safety buses. A review of maintenance practices and condition monitoring guidance used to ensure reliable dry-type transformer performance will be part of the outcome of this project.
The plan is to deliver a technical maintenance guide for dry transformers and reactors by the end of 2010.
For more information, contact Wayne Johnson, 704.595.2551, [email protected].
Dry Cask Handling and Storage
Multipurpose Canister Storage Overpack, Figure 4-7 from EPRI Multipurpose Canister System Design Synopsis Report, TR-106962
As a nuclear plant continues to operate, it becomes necessary to remove used fuel bundles and other reactor material and place them in storage pools. This material is stored there for a period of time until other storage decisions are made. However, pool space is limited, and as plants continue to operate, spent fuel pools at nuclear power plants fill up. It becomes necessary to open up more space in these pools for recently removed fuel.
When nuclear power plants were originally designed and licensed, the plan was to send spent fuel to reprocessing facilities and eventually to a geological repository. Until reprocessing or a geological repository is available, plants have to take other measures to create space in their fuel storage pools. One approach that has been used is on-site dry cask storage. Dry casks are an interim storage solution until a new repository is developed, now that Yucca Mountain has been abandoned.
The approach to developing this guide is to collect industry knowledge and lessons learned from operating experience related to cask handling. An industry forum conducted by the Nuclear Energy Institute (NEI) brings together the various entities that are concerned with the storage of used nuclear fuel. EPRI has done work in the dry cask area and will provide considerable input to this guide.
The guide will address the following issues:
- Handling used fuel casks
- Surveillance and monitoring of fuel storage casks
- Issues related to long-term storage of used fuel in dry casks
- Lessons learned from industry monitoring, surveillance, and inspection practices for fuel storage casks
- Possible retrieval methods, if necessary
The guide will provide an expanded knowledge base for the use of dry cask storage, cask handling issues, on-site cask transportation issues, and cask monitoring during storage. The goal is to complete the guide in 2010.
For more information, contact Wayne Johnson, 704.595.2551, [email protected].
Switchyard Equipment
Due to various issues in the industry, power plants have become responsible for the reliability of switchyard equipment. Deregulation in the power industry has caused a separation between the companies that generate power and those that deliver power to the customer. Along with deregulation, 10CFR50.65 (Maintenance Rule) and INPO AP913 have required nuclear power plants to focus on equipment that will maintain plant and component reliability. As a result of the 2005 Energy Policy Act, the North American Electric Reliability Corporation was given the power to be the electric reliability organization (ERO). This organization has developed a standard that requires plants to develop a document that defines the interface between and responsibilities of the generator operator and the associated transmission entities.
When companies were vertically integrated, the generating station could rely on another company group or division for maintaining switchyard equipment. However, with regulatory and other requirements taking effect, the plants have become responsible for maintenance of some switchyard equipment. These and other changes are driving generating station personnel to develop the skills necessary to maintain this equipment. In conjunction with equipment reliability requirements, other issues, such as power plant uprates and equipment aging, are converging that affect current switchyard equipment.
In order to fully understand the extent of switchyard equipment that plant personnel should be concerned with, a review of typical power plant switchyard designs should be done in order to focus on major equipment and/or components. The following equipment can be found in one form or another in most power plant switchyards:
Power transformers
- GSUs
- Auxiliary
- Reserves or startups
- Instrument transformer
– Voltage
– Current
Circuit breakers
- Oil
- Air
- SF6
Disconnects
- Manual
- Motor operated
Surge and lightning arresters
Bus work
Control house
- Relays
- Metering
- Communications
- DC control power
– Battery charger
– Stationary battery
In developing this guide, the project will leverage existing knowledge of equipment, such as stationary batteries, power transformers, and other components, that is available in existing EPRI reports. The Transformer and Switchyard Users Group (TSUG) can serve as a technical resource for the project.
For more information, contact Wayne Johnson, 704.595.2551, [email protected].
NMAC Meetings
EPRI MOV PPM Users Group Update
The EPRI Motor-Operated Valve (MOV) Performance Prediction Methodology (PPM) Users Group met via web cast on February 4, 2010. The group reviewed progress that was made in 2009 and discussed activities that should be pursued in 2010.
The 2009 activities that were reviewed included:
- PPM Version 3.4 was issued in December 2008. Plans called for submission of Version 3.4 for a Safety Evaluation (SE) in late 2009.
- The NRC issued a final SE in February 2009 on PPM Versions 3.1–3.3, and associated Addenda 5, 6, and 7 to the PPM Topical Report as well as Addenda 3 and 4 that document an improved gate valve unwedging method and a method for using stem nut friction coefficients measured during valve closures for determining unwedging stem nut friction coefficients. The reports associated with these Addenda to the Topical Report are being revised to include the SE, all responses to requests for additional information (RAIs) received during the NRC review, and any changes to the Addenda resulting from the review.
- A meeting with the NRC was conducted in early March 2009 to review the MCC-Based Motor Torque Periodic Verification Methodology (reported in EPRI report 1015069) in preparation for submittal to the NRC for an SE. Unfortunately, the NRC rejected our request to waive the substantial review costs for this document; accordingly, EPRI will not pursue an SE on the methodology. Any technical issues associated with this report will have to be addressed on a plant-specific basis.
- Work required to modify the PPM to allow SI units and its use with the Windows Vista® operating system will be evaluated.
- PPM training was conducted July 15–17, 2009, at EPRI Charlotte.
Potential 2010 activities include:
- PPM training will be conducted July 14–16, 2010, at EPRI Charlotte.
- Publish dash A (NRC Approved) versions of PPM Topical Report Addenda 3, 4, 5, 6, and 7 by May 2010.
- PPM Version 3.4 will be submitted to the NRC for an SE.
- Initiate a two-year project to develop PPM Version 3.5. The new version will incorporate corrections based on recent error reports and the NRC’s review of PPM Versions 3.1–3.3, as well as compatibility with the Windows Vista® and Windows® 7 operating systems and inclusion of SI unit capability.
For additional information, contact John Hosler, 704.595.2720, [email protected].
Additional Intensive Four-Day Terry Turbine Training and TTUG Summer Meeting Scheduled
First Intensive Four-Day Terry Turbine Hands-On Training at EPRI Charlotte
In 2008, an international EPRI member company asked NMAC to provide an intensive Terry Turbine training session for up to 15 participants (see photo above) using the training turbine that EPRI had on loan from the Tennessee Valley Authority. EPRI put together a training plan with both of its instructors, Jim Kelso and Ken Wheeler. In the four full days of the training, approximately 28 of the 32 hours were spent in hands-on training, and the balance was in the classroom, as requested. A detailed agenda is available from the Terry Turbine Users Group (TTUG) website at the NMAC subscriber site.
For 2010, NMAC is again offering to our members the intensive four-day Terry Turbine Training Workshop to be held May 17–20, 2010, at EPRI Charlotte. The session is limited to 20 participants; however, we polled our TTUG members to determine the interest in such a session and how many participants each member would send. The response indicated well over 20. Consequently, the slots were distributed equitably to all members who requested them.
A second session may be offered later this year, and a list of interested participants has already been started. If interested, contact Dave Dobbins to be considered.
The training fee is $1500 per participant for the four full days of training, which is limited to NMAC members. This methodical, detailed training is not available anywhere else in the world.
The Summer 2010 Terry Turbine User Group Meeting is scheduled for July 13–15 (Tuesday–Thursday), 2010, in the Denver Tech Center at the Embassy Hotel in Centennial, Colorado. The GE Boiling Water Reactor Owners Group (BWROG) will meet with the TTUG on Monday and on Thursday afternoon. Key areas to cover include recent operating experience, service issues on Woodward Governors beginning in September 2010, and updates from all on their planned plant Terry Turbine controls replacement and the timing.
In odd-numbered years (for example, 2009 and 2011), NMAC holds refresher Terry Turbine training over a two-day period that covers:
- Turbine disassembly and reassembly with Jim Kelso and Ken Wheeler.
- Trip and throttle valve disassembly and reassembly with Jim Nixon of C-W Gimbel.
- Electric and hydraulic governors with ESI instructors Paul Feltz and Chris Payne. (Also, in 2009, Paul Feltz put together a special class on the Woodward 505 governor.)
- Overspeed trip types with Chan Patel of Exelon Clinton and Bill Stuart of ILD Power.
This Terry Turbine training is a four-hour refresher course—not an intensive four-day training session—that provides a quick overview of most of the Terry Turbine and its systems. Here again, class sizes are limited to ensure that everyone has an opportunity to participate, not merely observe. Normally, in odd-numbered years, NMAC offers two two-day workshops—A (Monday and Tuesday) and B (Thursday and Friday)—with a meeting all day on Wednesday.
For 2011, the plan is to transition to having only one workshop again on Monday and Tuesday with a meeting following that lasts one-and-one-half days or two full days. Therefore, there will be a limit on the TTUG Workshop in 2011 of no more than 80 attendees. This process of change is being worked out with the past and present TTUG officers and NMAC management.
To register for the TTUG meeting this summer, click on this link. You do not need an EPRI ID and password to register and pay for this meeting.
For more information, contact Dave Dobbins, 704.595.2560, [email protected].
2010 Circuit Breaker Users Groups Meeting
Preparations for the 2010 circuit breaker users groups meeting are well under way. The groups will meet in Columbia, South Carolina, during the week of June 14–18. A tentative meeting agenda is posted on the EPRI Events Calendar on epri.com.
The groups are developing a list of prioritized issues that they are addressing through working groups and EPRI initiatives. This list will be discussed during the June meeting and subsequently published on the EPRI website.
The 2010 meeting agenda will focus on working group meetings to resolve specific issues, presentations on significant industry events, training, workshops, and facility tours. More specifically, this year’s meeting will feature a tour of the ABB circuit breaker facility in Florence, South Carolina, working groups for ABB circuit breakers, motor control center maintenance, guidance for training new engineers, bus maintenance and monitoring, and guidance for managing a circuit breaker program.
Specific presentations that have already been identified include:
- Columbia’s nonsegregated bus failure presentation
- Reports from each respective working group
- Overview of new EPRI documents on (1) grounding and (2) clearance and tagging
- Industry event on the topic of energizing a main transformer with grounds still attached
- Industry analysis of operating experience over the last year
- Users group organizational update
Potential presentations include:
- Industry generator breaker failures
- Restrike voltage determination
- Primary stab degradation mechanisms and failures
- New equipment presentation by Megger
- Circuit breaker program assessments
- Circuit breaker program overview presentations by specific plants or utilities
Workshops will be provided by Westinghouse and ABB. In addition to a tour of their facility, ABB will provide workshops on operation and maintenance of their K-Line and HK circuit breakers. The Westinghouse workshop will include both DB and DS breakers and will cover mechanical inspections, adjustments, and lubrication.
For more information about any of the circuit breaker users groups, contact Jim Sharkey, 704.595.2557, [email protected].
PRDIG Holds Annual Meeting
On January 20, 2010, the 13th annual Pressure Relief Device Interest Group (PRDIG) meeting was held. Despite the present economic situation, there was no reduction in the number of meeting attendees (65).
The presentations were well organized and stimulated some excellent questions and discussions. Three of the presentations (two domestic and one international) provided a comparison of how various utilities have organized their relief valve monitoring and maintenance programs.
Updates were provided on actions pending with the ASME Code Committee for Operation & Maintenance of Nuclear Power Plants, issues on testing frequencies experienced by some plants, and information on computer monitoring programs for plant safety relief valves. A demonstration was provided on how to properly determine the “set pressure” of a liquid service relief valve.
The date for the next PRDIG meeting has been tentatively established as January 17–19, 2011, in Orlando, Florida. Details will be announced later, but we already have some topical presentations scheduled for next year.
Safety and Relief Valve Testing and Maintenance Guide (TR 105872)
NMAC is planning to update this document, which was last published in 1996. Many changes have occurred since then. Among the subjects that will be addressed are:
- Material and design changes
- Performance issues
- Changes and clarifications to the OM section of the ASME Code
- Maintenance issues
- Lessons learned
In order to cover all of the variations in valve designs and applications, particularly in the area of main steam safety valves in the BWRs, we anticipate a technical advisory group of approximately 12 individuals.
For more information, contact Bob O’Neill, 508.539.3301, [email protected].
LEMUG 2010 Activities
Picture courtesy of TVA
The EPRI Large Electric Motor Users Group (LEMUG) plans two meetings for 2010. The winter meeting is scheduled for February 1–4, 2010, in Houston, Texas. A workshop on electric motor testing will be conducted at this meeting on February 1. To reinforce the workshop material, a tour of the Toshiba Motor Manufacturing Facility has been planned.
The actual user group meeting will cover topics related to experience with on-line testing, vibration of high-rpm motors, and recommended factory testing of motors. A presentation on the application of a dynamic balance absorber to an operating motor will be given. A tutorial will be presented related to motor protection and the philosophy of relay selection.
The August 2010 LEMUG meeting potentially will be a joint motor and pump users group meeting. Pittsburgh, Pennsylvania, is the location being considered for this meeting. The workshop topics are still under consideration, but topics related to both pumps and motors, such as lubricants, rotor dynamics, and alignment, will be covered. Plant tours are planned for the Westinghouse and Curtiss-Wright facilities in conjunction with the August meeting.
Along with the user group workshops, case studies, and operating experience presentations, the three LEMUG working groups will be working on several reports. The working groups within LEMUG are:
- Application Working Group (Clarence Bell – Reliance, chair)
- Information Working Group (Henry Johnson – Arizona Public Service, chair)
- Maintenance Working Group (Clifford Both – Public Service Electric and Gas, chair)
The Application Working Group will be working on Replacement Criteria for NEMA Frame Motors. This document will provide guidance for replacing NEMA frame motors, which are now supplied as premium-efficiency designs, a change that can affect the motor protection circuit.
The Information Working Group is working on a document that will help personnel improve their electric motor skills. This is not a qualification document, but a document to familiarize personnel with the information they will need to work effectively on electric motors.
The Maintenance Working Group is working on expanding the material in the existing EPRI report, Troubleshooting of Electric Motors (1000968). This document will provide guidance on diagnosing problems and gathering failure data.
Utility personnel are encouraged to participate in the development of the documents by contacting any of the working group chairs or Wayne Johnson.
Harry R. Smith (Exelon) is the current chair for LEMUG, and the co-chair is Camilo Rodriguez (First Energy). Richard Locke (TVA) is the coordinator and scribe for the users group.
If you are interested in participating with LEMUG or any of its activities, contact Wayne Johnson, 704.595.2551, [email protected].
2010 Joint Pump Users Group and Large Electric Motor Users Group Meeting
Alignment lecture by John Piotrowski at the 2009 PUG meeting
The NMAC Pump Users Group (PUG) continues its strong focus on training not only for young new hires, but also for more experienced workers in need of refresher training in pump fundamentals or pump operation. To that end, at the 2009 PUG meeting four separate workshop sessions were held in two tracks (fundamentals and advanced): Centrifugal Pump Basics, Vibration, Advanced Diagnostics, and Balancing Reactor Coolant Pumps/Reactor Recirculation Pumps. Added training included a half-day alignment workshop offered to all participants. New officers were also elected at the meeting, in part to help the PUG balance BWR content. Congratulations to these new officers.
At the PUG 2009 business session, voting members selected the location for the 2010 meeting. The candidates were Charlotte, North Carolina; Chattanooga, Tennessee; Chicago, Illinois; and Pittsburgh, Pennsylvania; the vote was unanimous for Pittsburgh. PUG members also voted to have a joint meeting with the Large Electric Motor Users Group (LEMUG) in the future. Coincidentally, LEMUG also voted to hold their summer 2010 meeting in Pittsburgh.
The joint meeting between PUG and LEMUG is being developed for August 16–20, 2010, at the Sheraton Station Square Hotel in Pittsburgh. The planning for this meeting is not complete, but details will be available soon. You can visit epri.com and search events in Nuclear for August 2010. (You do not need an EPRI ID or password to access the Events calendar or to register.)
The training subjects at the joint meeting of PUG and LEMUG will be chosen to be of interest to both nuclear and fossil plant personnel and to both pump and motor personnel. In 2010, there will be two days of morning training workshops, now planned with three separate tracks to allow attendees to choose their own training for August 16 and 17. Plans are being developed for joint sessions covering topics of interest to all.
The PUG will continue its emphasis on training for new and for experienced members. NMAC will hold two days of workshops on Monday and Tuesday mornings, and afternoon tours to Westinghouse Waltz Mill on Monday and to Curtiss-Wright EMD will be held on Tuesday. The preliminary training subjects in three separate tracks for both mornings will be Rotor Dynamics, Lubricants, and more on Alignment.
The two workshop/tour days will be followed by two days of NMAC PUG technical meetings, discussing current and on-going pump issues that include, but are not limited to, these OE considerations: foreign material exclusion, gas intrusion, seals, testing, repair specifications, performance and effect on service life, etc. Also, on Thursday, the PUG will have its regular business meeting for members only.
Friday morning has been set aside as time to begin work on updates to NMAC guides and to form working groups for each area to be worked including Feedwater Update, Aging and Obsolescence Issues for Pumps, Foreign Material Exclusion (FME) Guide specific for pumps, etc. Tim Buyer from Dominion North Anna will lead this effort along with an interested PUG member who volunteered to work on developing these guides. Also, a session is planned on reviewing the Preventive Maintenance Basis Database (PMBD) data tables for all pumps contained in PMBD with Jim McKee of NMAC. For the PMBD, the data tables for pumps require reviewing by experts every five years.
Draft Agenda for the Joint PUG-LEMUG Meeting, 08/16–20, 2010, in Pittsburgh
NOTE: To help keep the PUG Workshop and Meeting well organized, an Early Bird Registration discount will be available through mid-July. Because the EPRI Legal Department performs import-export screening of every attendee, unregistered attendees may be denied access to the meeting. Register online and early.
For more information, contact Jill Lucas, 704.595.2574, [email protected] or Dave Dobbins, 704.595.2560, [email protected].
Combined Condition-Based Maintenance Meeting Planned for July 2010
The Combined Condition-Based Maintenance (CBM) meeting to be held July 12–16, 2010, will be composed of the four separate groups that are meeting to discuss CBM Programs and Predictive Maintenance (PdM) Technologies solutions to equipment issues affecting our industry.
Background
The Predictive Maintenance Users Group (PdMUG) will meet for two-and-one-half days on July 12–14, 2010; the Vibration Technology Forum (VTF), will meet for two-and-one-half days on July 14–16, 2010; and the Infrared Users Group (IRUG) and Lubrication and Bearing Workshop will meet concurrently for three days on July 13–15, 2010, all in Orlando, Florida.
The PdMUG, VTF, IRUG, and Lubrication and Bearing Workshop provide a forum for EPRI members to exchange information, ideas, successes, problems, and experiences regarding diagnostic monitoring and condition assessment technologies used in predictive maintenance programs. The goal of these forums is to achieve maximum benefit from predictive maintenance programs. Each technology meeting focuses specifically on data collection, analysis, and problem resolution. Each meeting will provide open discussions and presentations in a relaxed environment along with utility problems and solutions, programmatic issues, and technical and application information. Training sessions will be provided, and new technologies and innovations will be highlighted.
Utility Roundtable
Portions of each meeting will feature utility roundtable sessions. These sessions will provide opportunities for utility personnel to share tips, techniques, and information; pose questions to their peers; discuss and attempt to resolve any technical or programmatic issues which their plant may have; and discuss their respective maintenance programs.
Case Studies
These meetings frequently contain case studies or plant experience reports that are informally presented and/or discussed by utility personnel. Case studies typically describe current or past equipment problems; completed, planned, or potential corrective actions; cost benefit information; pictures; data; and any other pertinent information that may assist attendees.
Participants are encouraged to submit and present case studies. EPRI requests that presentation material be submitted by May 3, 2010. The guidelines for case study content are available from EPRI. For each of the groups, contact the following:
PdMUG or VTF - Tom Turek, 484.631.5863, [email protected]
IRUG – Gary Noce, 484.432.9251, [email protected]
Lubrication & Bearings – Nick Camilli, 704.595.2594, [email protected]
For more information or to register for this event, go to epri.com and select the calendar tab at the top of the menu. Select the month, and locate this event by date. Click on the link, and follow the registration instructions. If you experience difficulties or have questions, please contact the meeting planner: Judy Brown, 704.595.2694, [email protected].
MRUG Meets New NRC MR Staff at 2009 Summer Meeting in Baltimore, Maryland
The 2009 Maintenance Rule Users Group (MRUG) summer meeting was held in Baltimore, Maryland, on August 4–5, 2009. The two-day meeting with 42 participants consisted of presentations, roundtable discussions, and group breakout sessions. Recent organizational changes in the NRC Office of Nuclear Reactor Regulation resulted in new NRC staff, Steve Vaughn and Paul Bonnet, as primary contacts for the Maintenance Rule. They attended both days of the meeting, and Tim Kobetz, their Branch Chief, attended the first day. This gave members an opportunity to introduce themselves—and MRUG—to the industry’s new NRC contacts. It also gave all an excellent forum to discuss the issues that MRUG and the industry are dealing with.
Constellation Energy was the corporate sponsor for the meeting, and Jim Spina, Site VP for Calvert Cliffs, was the keynote speaker. Jim focused on equipment reliability and the role of the Maintenance Rule in achieving higher levels of reliability. He referred to the Maintenance Rule Coordinator (MRC) as the “lynch pin” for other organizations on site, particularly in the Maintenance Rule’s integration with AP-913 to improve reliability. Glen Masters (INPO), Biff Bradley and Victoria Anderson (NEI), and Steve Vaughn (NRC) gave presentations in their respective areas.
On Day 2, Denise Boyle, Frank Zurvalec, and Vicky Harte shared plant experiences from Salem, Davis-Besse, and Kewaunee, respectively. A new working group was formed to review requirements related to (a) (3) assessments and to determine whether a reduction in the assessment effort was justified. Currently, utilities engage in extensive reviews involving teams and often several person-weeks of effort. It is felt that current regulation and guidance might allow a much simpler approach that takes credit for the many programs and processes that have been started since the implementation of the Maintenance Rule.
The following working groups will meet at the winter 2010 meeting:
- Working Group D – Developing Maintenance Rule Performance Metrics
- Working Group L – NUMARC 93-01 Revisions
- Working Group M (New) – Optimize (a) (3) Periodic Assessment
The winter meeting will be held in New Orleans, Louisiana, February 2–4, 2010. The 2010 summer meeting will be in Chicago, Illinois, August 3–4, 2010.
RCRC Working to Improve System Reliability
Two Rod Control Reliability Committee (RCRC) subcommittees were formed during the last RCRC meeting (summer of 2009) to improve rod control system reliability.
New Rod Control System Cabinets
Double Gripper Modification - Subcommittee
The current rod control system design has several single-point vulnerabilities. The proposed double gripper design would eliminate single-point vulnerability for most failures. Several RCRC members volunteered to participate in the development and testing activities for the double gripper design modification.
Testing of Newly Designed Circuit Boards - Subcommittee
Obsolescence and end-of-life circuit card failures are being addressed by replacement with newly designed and manufactured cards. However, since a failure of these new circuit cards could result in shutting down the reactor, extra measures to ensure compatibility, functionality, and initial reliability must be taken. This subcommittee is working to verify circuit card performance using full-scale rod control training cabinets and equipment.
Rod Control Reliability Committee
The purpose of the RCRC is to:
- Improve the long-term reliability of rod control systems by exchanging technical information, operating experience, and best maintenance practices among its members
- Collaborate to resolve technical issues, provide resources to aid in problem-solving situations, and provide the operation and maintenance perspective for component/system upgrades or design modifications
This committee addresses both Westinghouse- and Combustion Engineering- (CE-) designed systems. Power plant rod control specialists and engineers who are responsible for the maintenance, operation, and upgrading of Westinghouse and CE rod control systems are encouraged to participate.
A few of the topics currently planned for the next RCRC meeting are:
- Project and equipment performance update from the Ringhals plant, which is installing the new Westinghouse-designed digital logic cabinet and solid state power cabinets
- A detailed review of the EPRI Full-Length Rod Control System - Life Cycle Management Planning Sourcebook (1011881)
- Rod control outage maintenance lessons learned at the Robinson Plant
- Results of a logic cabinet card strategy study completed by Duke for Catawba
- Lessons learned from the phase card replacement project performed by Prairie Island
- Subcommittee updates for the beta testing of the new Westinghouse rod drive circuit boards and double gripper modifications
- CE rod control design specific topics and breakout
- Numerous other plant rod control operating experience events
The next RCRC meeting will be held at EPRI Charlotte the week of June 14, 2010. A meeting announcement will be issued later this month so attendee registration can begin.
For more information about the RCRC, contact Lee Rogers, 772.288.4369, [email protected]; RCRC chair, Richard Schreiner, 757.365.2552, [email protected]; or RCRC vice-chair, Joe Constant, 803.701.5132, [email protected].
2010 Transformer and Switchyard Users Group Meeting
The activities being pursued by each working group are provided here:
Power Transformer Working Group
- Technical resource for reviewing the EPRI “Copper Book”
- Identification and mitigation of single-point vulnerabilities in the transformer control
- Life cycle management and critical component spare readiness plan
- Specific acceptance criteria for transformer performance monitoring, preventive maintenance (PM), and testing
- Vendor oversight of new and refurbished transformers
- Oversight of corrective maintenance (CM) and PM tasks to mitigate vulnerability to operational events
- New Significant Operating Experience Report (SOER)
Switchyard Equipment Working Group
- High-voltage breakers (for example, SF6 breakers)
- Digital relays
- Switchyard walkdown checklist
Grid Reliability Working Group
- NUC-001 implementation
- NERC/Regional Reliability Organizations (RRO) lessons learned
- NERC compliance guide
The 2010 Transformer and Switchyard Users Group Meeting will build on the equipment-type workshops that have been provided at previous meetings. In 2009, a workshop and tour were held at the ABB Alamo facility. The workshop covered high-voltage bushing design and construction.
The 2010 meeting is planned for late July. The location and date for the meeting are currently being negotiated. A workshop on high-voltage circuit breakers is being considered with an associated tour of a circuit breaker factory.
For more information, contact Wayne Johnson, 704.595.2551, [email protected].
HRCUG Rolls Out New Collaborative Website and Announces Annual Summer Meeting
The Hoisting, Rigging, and Crane Users Group (HRCUG) has been active in addressing a variety of issues dealing with cranes, rigging, and materials handling issues. In the fall of 2009, the new HRCUG collaborative website was officially rolled out to users group members. Easily accessible from EPRI.com, the HRCUG collaborative web site provides a number of services to its members. In addition to announcing upcoming users group activities, there is also a discussion board where members can post questions and replies, a calendar where relevant events and activities can be posted, and a “resource shelf” where members can upload documents and other files to share with the group members.
Every summer, the HRCUG holds its annual meeting. Last year's meeting in Knoxville, Tennessee, was an outstanding success, hosting a wide array of presenters, including members, industry vendors, and the Institute of Nuclear Power Operations (INPO). The 2010 annual meeting will be held in Shreveport, Louisiana, at the Holiday Inn, Downtown, June 22–24. Presentations are planned on issues such as crane obsolescence, materials handling, upcoming regulations, crane inspection issues, and INPO operating experiences. Additionally, we will offer a workshop on crane operations provided by Whiting Services and a workshop and factory tour on rigging hardware and devices provided by Crosby Manufacturing.
Beginning in 2010, the management of the HRCUG changed hands. Sharon Parker, who has done an outstanding job of overseeing the user group for the past six years, retired at the end of 2009. She will be greatly missed, as her contributions to the industry have been significant. EPRI Senior Project Manager Merrill Quintrell has now taken over the management and oversight of the HRCUG. Please feel free to contact him at 704.595.2530, [email protected].
Work Planning Managers/Supervisors Effecting Improvement in Work Package Quality
Southern California Edison (SCE) San Onofre Nuclear Generating Station (SONGS) hosted the Work Planning Users Group (WPUG) winter meeting January 26–28, 2010. The meeting was attended by 50 planning professionals.
A portion of the meeting was reserved for each attendee to express their site’s work planning challenges and accomplishments. This forum allowed for the identification of common issues for the group to focus on, as well as good practices that can be shared among utilities.
Work Planning Managers and Supervisors Share Issues and Strengths
The focus of the WPUG is to:
- Develop and/or share proven industry methods and processes regarding work package preparation, execution, and feedback.
- Provide consistent strategic and tactical standards by capturing industry best practices and operating experience and by further defining and improving the process for work package preparation, execution, and feedback at existing and new generation plants.
- Improve maintenance effectiveness and equipment reliability by continuously improving the work planning departmental performance.
EPRI products developed by members of the WPUG include:
- Maintenance Work Package Planning Guidance (1011903)
- Maintenance Work Package Training (and Certification) – Student Handbook (1014533)
- Work Planning Assessment Guidelines (1015253)
- Guidelines for Addressing Contingency Spare Parts (1013472)
- EPRI WPUG Collaboration Site – Work Package Nuclear Center
Work Package Quality – Top Priority
One of the biggest challenges to work package quality is the alignment of craft training and expectations to work package content. This alignment is difficult to attain even in normal circumstances, but with today’s changing work force and high ratio of first-time supplemental workers, there seems to be pressure to include more worker knowledge and expectations in the standard work package. This situation creates what has been called work package “bloat” or “swell.”
A WPUG subcommittee is working to clearly define work package bloat/swell and develop a strategy to address it. Once the subcommittee develops a draft strategy, it will be distributed to the entire WPUG for review and comment. The draft strategy is expected to be distributed to WPUG members for review and comment by early April 2010.
The next WPUG meeting is planned for July 2010. After meeting dates and location have been confirmed, a meeting notice will be sent out to WPUG members and EPRI site contacts.
If you would like to learn more about the WPUG and its activities, contact Lee Rogers, 704.595.2751, [email protected]; WPUG chair John Hodgkinson (SONGS), [email protected]; or WPUG vice-chair Steve Johnson (INL), [email protected].
Japanese Utilities Pursue Condition-Based Maintenance
As a result of regulatory changes, Japanese nuclear utilities are collectively moving toward condition-based maintenance programs. To assist in this effort, EPRI has sponsored the Japanese Reliability-Centered Maintenance–Condition-Based Maintenance (RCM–CBM) Users Group, which is similar to the U.S.-based Condition-Based Maintenance Users Group. Japanese utility personnel spend a significant amount of their meetings reviewing plant case histories to expand their knowledge base and highlight lessons learned. Their meetings contain breakout sessions on vibration analysis, oil analysis, and infrared thermography.
In 2009, the group’s chair, Mr. Hiroshi Yamashita (Tokyo Electric Power Company) outlined several issues faced by Japanese nuclear utilities with respect to the implementation of condition-based maintenance (CBM). Some of these issues included:
- The lack of case histories for each technology. Relatively speaking, since the Japanese have only recently implemented CBM, the number of data anomalies and case histories is limited.
- Establishment of threshold values and actionable processes when threshold values are reached. These processes are being defined, but further work is necessary.
- The process to move from time-based maintenance to condition-based maintenance is still being considered and addressed.
- Management awareness and sponsorship of CBM programs is a challenge. This challenge seems universal and is certainly shared by CBM programs in the United States and most likely around the world.
- Establishment of qualification and training programs. The Japanese have made considerable progress in qualifying personnel. Every year, the number of trained staff members continues to increase.
Other issues mentioned include expanding the scope of components subject to CBM, the use of process data for CBM, organization of CBM teams, program standardization, and standardization of an as-found data process.
The group has had considerable success on a number of fronts, including the development of a database (survey) of programs and practices, in which the information is updated annually. The group has initiated supplemental working group meetings centered around the three primary CBM technologies: infrared thermography, vibration, and oil analysis. In addition, other technologies are being embraced through training, including diesel engine analysis. (Refer to Diesel Engine Analysis Guide (TR-107135).) Group members believe that the RCM–CBM group has contributed to the acceleration and implementation of CBM at all nuclear sites in Japan.
The RCM–CBM group meets twice each year—in May and October. Group oversight is provided by a four-person committee made up of rotating members of the Japanese nuclear utilities.
Plant personnel from U.S. and international utilities attend these meetings to provide an international exchange forum and share their programs, experience, methods, and case histories with the Japanese. In conjunction with these meetings, benchmarking trips to Japanese plant sites are attended by predictive maintenance (PdM) program and technology experts from the United States. Utility personnel interested in sharing successful programs or unique case studies should contact Jim Sharkey.
Utilities that have supported the Japanese in their efforts include Exelon’s corporate office and Peach Bottom Nuclear Power Plant, Pacific Gas and Electric’s Diablo Canyon nuclear power plant, Southern California Edison’s corporate office and San Onofre Nuclear Generating Station, and Duke Power’s Catawba Nuclear Plant. Future support is also planned from Arizona Public Service’s Palo Verde Nuclear Generating Station and Ontario Power Generation.
For information or questions regarding this group and these EPRI initiatives, contact Jim Sharkey, 704.595.2557, [email protected].
Japanese Valve Users Group Update
Mark your calendars to attend the second annual Japanese Valve Users Group conference in October 2010. The exact date and location will be provided at a later date. The feedback collected from last year’s meeting will be used in compiling the agenda for 2010. Look for continued discussion on existing valve programs and practices along with an in-depth look at valve diagnostics.
What are the benefits of attending such a meeting? Here is some of the feedback collected from last year’s participants:
- Good information exchange, such as other utilities’ valve maintenance approaches and issues.
- Great source of networking among the various sites and organizations (engineering, maintenance, I&C).
- It was beneficial to understand U.S. valve practices, progress, and approaches.
- There was a good balance in the discussion of mechanical valves, instrumentation valves, and electromagnetic valves.
- We were able to learn about problems, concerns, and approaches taken by other utilities and plants with similar conditions, directly from the source.
- It was extremely beneficial to see the difficulties faced by maintenance in case histories across the utilities.
Japanese utilities are invited and encouraged to participate in this conference.
For more information, contact Nick Camilli, 704.595.2094, [email protected].
2010 (Canada)
- JAPCO (Japan)
- Kansai (Japan)
- KHNP (Korea)
- KRSKO (Slovenia)
- Kyushu (Japan)
- Leibstadt (Switzerland)
- New Brunswick (Canada)
- Ontario Power (Canada)
- Santa Maria De Garona (Spain)
- Shikoku (Japan)
- Slovenske Electrarne (Slovakia)
- Taiwan Power (Taiwan)
- TEPCO (Japan)
- Tohoku (Japan)
- Trillo (Spain)
- Vandellos (Spain) | http://mydocs.epri.com/docs/CorporateDocuments/Newsletters/NMAC/1020842.html | 2015-06-30T03:29:15 | CC-MAIN-2015-27 | 1435375091587.3 | [array(['figures/1020842/InFocus_01_01.jpg', None], dtype=object)
array(['figures/1020842/InFocus_01_02.jpg', None], dtype=object)
array(['figures/1020842/InFocus_01_03.jpg', None], dtype=object)
array(['figures/1020842/InFocus_01_04.jpg', None], dtype=object)
array(['figures/1020842/InFocus_01_05.jpg', None], dtype=object)
array(['figures/1020842/InFocus_01_06.jpg', None], dtype=object)
array(['figures/1020842/InFocus_01_07.jpg', None], dtype=object)
array(['figures/1020842/InFocus_01_08.jpg', None], dtype=object)
array(['figures/1020842/InFocus_01_09.jpg', None], dtype=object)
array(['figures/1020842/Industry_01_01.jpg', None], dtype=object)
array(['figures/1020842/Industry_02_01.jpg', None], dtype=object)
array(['figures/1020842/Industry_02_02.jpg', None], dtype=object)
array(['figures/1020842/Industry_03_01.jpg', None], dtype=object)
array(['figures/1020842/Industry_03_02.jpg', None], dtype=object)
array(['figures/1020842/Industry_06_01.jpg', None], dtype=object)
array(['figures/1020842/Industry_07_01.jpg', None], dtype=object)
array(['figures/1020842/Industry_08_01.jpg', None], dtype=object)
array(['figures/1020842/Industry_10_01.jpg', None], dtype=object)
array(['figures/1020842/Industry_12_01.jpg', None], dtype=object)
array(['figures/1020842/meeting_02_01.jpg', None], dtype=object)
array(['figures/1020842/meeting_05_01.jpg', None], dtype=object)
array(['figures/1020842/meeting_06_01.jpg', None], dtype=object)
array(['figures/1020842/meeting_09_01.jpg', None], dtype=object)
array(['figures/1020842/meeting_12_01.jpg', None], dtype=object)] | mydocs.epri.com |
Information for "Framework" Basic information Display titleCategory:Framework Default sort keyFramework Page length (in bytes)252 Page ID435 Page content languageEnglish (en) Page content modelwikitext Indexing by robotsAllowed Number of redirects to this page0 Category information Number of pages66 Number of subcategories13 Number of files0 Page protection EditAllow all users MoveAllow all users Edit history Page creatorCirTap (Talk | contribs) Date of page creation07:44, 20 January 2008 Latest editorMATsxm (Talk | contribs) Date of latest edit18:54, 16 December 2014 Total number of edits9 Total number of distinct authors4 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Magic word (1)__NOTOC__ Transcluded template (1)Template used on this page: Template:CatAZ (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Category:Framework&action=info | 2015-06-30T04:55:17 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
java.lang.Object
org.apache.sling.installer.api.InstallableResource
public class InstallableResource
A piece of data that can be installed by the OsgiInstaller. Currently the OSGi installer supports bundles and configurations,
but it can be extended by additional task factories supporting
other formats.
The installable resource contains as much information as the client can provide. An input stream or dictionary is mandatory; everything else is optional. The OSGi installer will try to evaluate all optional values. If such evaluation fails, the resource will be ignored during installation.
If the client provides a configuration, it should use the resource type TYPE_PROPERTIES. Otherwise, the resource type TYPE_FILE should be used. These two generic types are transformed by resource transformer services to the appropriate resource type, like bundle or configuration, etc. This frees the client from having any knowledge about the provided data. However, if the client has knowledge about the data, it can provide a specific resource type.
The provider should provide a digest for files (input streams). The installer will calculate a digest for dictionaries, regardless of whether the provider provided a digest.
public static final String TYPE_PROPERTIES
getDictionary() should contain a dictionary, or getInputStream() should point to a property or configuration file.
public static final String TYPE_FILE
getInputStream() must return an input stream to the data. getDictionary() might return additional information.
public static final String TYPE_BUNDLE
getInputStream() must return an input stream to the bundle. getDictionary() might return additional information. This type should only be used if the client really knows that the provided data is a bundle.
public static final String TYPE_CONFIG
getDictionary() must return a dictionary with the configuration. This type should only be used if the client really knows that the provided data is an OSGi configuration.
public static final String BUNDLE_START_LEVEL
public static final String INSTALLATION_HINT
public static final String RESOURCE_URI_HINT
Used when the resource type is TYPE_FILE and a digest for the resource is delivered. The value of this property is a string. This property might also be set for an UpdateHandler in order to give a hint for the (file) name the resource or dictionary should have.
public static final int DEFAULT_PRIORITY
public InstallableResource(String id, InputStream is, Dictionary<String,Object> dict, String digest, String type, Integer priority)
Parameters:
id - Unique id for the resource. For auto detection of the resource type, the id should contain an extension like .jar, .cfg, etc.
is - The input stream to the data, or null if a dictionary is provided instead (an input stream or dictionary is mandatory)
dict - A dictionary with data
digest - A digest of the data; providers should make sure to set a digest. Calculating a digest by the installer can be very expensive for input streams
type - The resource type if known, otherwise TYPE_PROPERTIES or TYPE_FILE
priority - Optional priority; if not specified, DEFAULT_PRIORITY is used
Throws:
IllegalArgumentException - if something is wrong
public String getId()
The id is opaque for the OsgiInstaller but should uniquely identify the resource within the namespace of the used installation mechanism.
public String getType()
Returns null if the type is unknown for the client.
public InputStream getInputStream()
public Dictionary<String,Object> getDictionary()
public String getDigest()
public int getPriority()
public String toString()
Overrides: toString in class
Object | http://docs.adobe.com/docs/en/cq/5-6-1/javadoc/org/apache/sling/installer/api/InstallableResource.html | 2015-06-30T03:30:01 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.adobe.com |
Managing Performance and Scaling for Amazon Aurora MySQL
Scaling Aurora MySQL DB Instances
You can scale Aurora MySQL DB instances in two ways: instance scaling and read scaling. For more information about read scaling, see Read Scaling.
You can scale your Aurora MySQL DB cluster by modifying the DB instance class for each DB instance in the DB cluster. Aurora MySQL supports several DB instance classes optimized for Aurora. The following table describes the specifications of the DB instance classes supported by Aurora MySQL.
Maximum Connections to an Aurora MySQL DB Instance
The maximum number of connections allowed to an Aurora MySQL DB instance is determined by the max_connections parameter in the instance-level parameter group for the DB instance.
The following table lists the resulting default value of max_connections for each DB instance class available to Aurora MySQL. You can increase the maximum number of connections to your Aurora MySQL DB instance by scaling the instance up to a DB instance class with more memory, or by setting a larger value for the max_connections parameter, up to 16,000.
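If you manage the parameter group programmatically, the sketch below shows one way to raise max_connections with boto3; the parameter group name and the chosen value are illustrative assumptions, and pending-reboot is used as the apply method so the change also works for parameters that require a restart.

import boto3

rds = boto3.client("rds")

# Raise max_connections in a custom instance-level (DB) parameter group.
# The group name and value below are placeholders for illustration only.
rds.modify_db_parameter_group(
    DBParameterGroupName="my-aurora-mysql-params",
    Parameters=[
        {
            "ParameterName": "max_connections",
            "ParameterValue": "5000",  # must stay at or below the 16,000 ceiling noted above
            "ApplyMethod": "pending-reboot",
        }
    ],
)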
If you create a new parameter group to customize your own default for the connection limit, you'll see that the default connection limit is derived using a formula based on the DBInstanceClassMemory value. As shown in the preceding table, the formula produces connection limits that increase by 1000 as the memory doubles between progressively larger R3 and R4 instances, and by 45 for different memory sizes of T2 instances. The much lower connectivity limits for T2 instances are because T2 instances are intended only for development and test scenarios, not for production workloads. The default connection limits are tuned for systems that use the default values for other major memory consumers, such as the buffer pool and query cache. If you change those other settings for your cluster, consider adjusting the connection limit to account for the increase or decrease in available memory on the DB instances. | https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Performance.html | 2018-10-15T13:21:19 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.aws.amazon.com
Introduction
The data received from Sources might not be suitable to be stored in your destination as is. Hevo allows you to transform your data before it is replicated in your destination through Transformations.
These are the common use cases for transforming your data:
-. e.g. IPs.
How does it work?
Hevo runs your Transformations Code over each event that is received through the pipelines. To perform any transformations on your events, you can change properties of the event object received in the transform method as a parameter.
def transform(event):
    return event
The event object has various convenience methods to help you write your transformations. Please refer to API Reference - Event Object in Transformations for more details. The properties of event object can be modified to your liking - fields can be added, modified, or removed.
The environment has all the standard python libraries enabled for you to write your transformations.
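As a small illustration that uses only the standard library, the sketch below masks an IP address field with a SHA-256 digest before the event reaches the destination; the field name ip_address is an assumption for this example, not a field Hevo guarantees to exist.

import hashlib

def transform(event):
    properties = event.getProperties()
    # Replace the raw IP (assumed field name) with a SHA-256 digest
    if properties.get('ip_address'):
        properties['ip_address'] = hashlib.sha256(properties['ip_address'].encode('utf-8')).hexdigest()
    return event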
Below is an example of a more complex transformation:
from ua_parser import user_agent_parser
import datetime

def transform(event):
    event_name = event.getEventName()
    properties = event.getProperties()

    # Concatenate First Name and Last Name of the User to create Full Name
    if event_name == 'users':
        properties['full_name'] = properties['first_name'] + ' ' + properties['last_name']

    # Parse the User Agent string
    if properties.get('user_agent'):
        user_agent = properties.get('user_agent')
        parsed_string = user_agent_parser.ParseUserAgent(user_agent)
        properties['user_agent'] = parsed_string.get('family')

    # Add a timestamp at which the event was streamed
    properties['streamed_ts'] = datetime.datetime.now()

    return event
Testing your code
Once you write your Transformations code or make changes to existing code you can test it against real sample events received by Hevo from your sources. You can also test your code against events from a specific pipeline or a specific event type. You can even modify the sample events to test your code for any edge case scenarios.
Deploying your code
Once you are satisfied with your code and have tested it, press Deploy. Once you Deploy your code it will apply to all incoming events and replayed events. We would advise you to monitor the Replay queue for any failures due to the newly deployed code.
| https://docs.hevodata.com/hc/en-us/articles/360000938073-Introduction-to-Transformations | 2018-10-15T14:13:42 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.hevodata.com
ip (string) - Queried IP address
asn (integer) - Autonomous System Number of the queried IP
isp (string) - ISP name of the IP address
countryCode (string) - ISO 3166 Alpha-2 Code
countryName (string) - Country name in English
hostname (string) - Reverse DNS of the IP address (temporarily disabled)
block (integer)
As some Autonomous Systems resell IP prefixes to third party organisations, we provide the block parameter so you can determine your own ‘risk-level.’
"block":0 - Residential / Unclassified IP address
"block":1 - Non-residential IP address (server, VPN, proxy, etc..)
"block":2 - Non-residential & Residential (Warning: may flag innocent people)
We generally recommend people to only block
if block == 1. It provides the best balance between stopping malicious users and avoiding false positives. | https://docs.iphub.info/documentation/json-keys/ | 2018-10-15T14:07:50 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.iphub.info |
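A minimal Python sketch of that check is shown below; the endpoint URL, the X-Key header name, and the response handling are assumptions based on typical usage of this API, so verify them against the provider's current documentation.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def is_non_residential(ip):
    # Endpoint and header name are assumptions; adjust to the documented API
    response = requests.get(
        "http://v2.api.iphub.info/ip/" + ip,
        headers={"X-Key": API_KEY},
        timeout=5,
    )
    response.raise_for_status()
    data = response.json()
    # Per the recommendation above, treat only block == 1 as worth blocking
    return data.get("block") == 1

if is_non_residential("203.0.113.10"):
    print("Likely a server, VPN, or proxy; consider blocking")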
17.07.2 Release Notes
Release Date: Sep 8, 2017
See Licenses to view Legato AF Licensing Information.
The following provides a summary of changes that have been made in the 17.07 Release.
Patches
The following are the list of patches that have been applied to Legato AF 17.07:
Version 17.07.1 resolved an issue where running
make with the tarball distribution of Legato AF produces an error (the build is still successful, even with the error):
Example of error message:
Version (-v) not set make: *** [stage_mkavmodel] Error 1
Resolution: download the new tarball (legato-17.07.1.tar.bz2) and run make clean and then make for your target.
Example:
$ make clean
$ make wp85    # make <target>
Version 17.07.2 resolved the following issues:
- Asset Data could not be aggregated so the Time Series data was never populated and could never be pushed to AirVantage.
- avcCompat app was missing when building for the WPx5xx or when using the wifi.sdef.
New Features
The following are the list of new features that have been added into the 17.07 Release:
AirVantage Improvements
The 17.07 release contains AirVantage Connector Service fixes and improvements to documentation:
- AirVantage AssetData and Exchange Data Tutorial have been added to AirVantage to walk users through collecting data from sensors and using the data with the AirVantage Server.
- Push Function (le_avdata_Push()) has been added to the
avdataAPI to manage sending data to the AirVantage Server.
- Credential Status Function (le_avc_GetCredentialStatus()) has been added to query the status of AirVantage Server credentials.
- Data routing configuration has been added to customize the configuration of the default route from the Data Connection Service.
GNSS Service
platformConstraintsGnss_SettingConfiguration information has been added to the GNSS API.
Flash Management
Flash API has been added to provide bad image detection for certain platforms. Please check with your module vendor to see if your module supports bad image detection for flash drives.
Fixed Issues
All development work is tagged in GitHub as "17.07.0".
Known Issues
The following known issues have been found in 17.07.0 and are currently under investigation:
Legato AF
- Apps that need to only run for a short time (e.g.; development, factory test, short running monitoring) may run into an issue where reboot detection may prevent the App from being able to restart or shutdown properly.
- Firmware over-the-air updates may not suspend or resume properly when the target is power cycled in the middle of a download. | https://docs.legato.io/17_07/releaseNotes17072.html | 2018-10-15T12:56:41 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['LegatoReleaseVSModuleVendorRelease.png',
'LegatoReleaseVSModuleVendorRelease.png'], dtype=object)] | docs.legato.io |
Add an App Volumes administrator group who can log in to the App Volumes Manager and manage the users and groups.
About this task
You can create multiple administrator groups for a single Active Directory domain.
Note:
You cannot configure a single user as an administrator; you can only add a group as an administrator.
Prerequisites
Ensure that you have already added the group to the Active Directory database.
Procedure
- From the App Volumes Manager console, click .
- Select a domain from the drop-down list; select All to search in all domains or select a specific domain.
- Search Groups; you can filter the search query by Contains, Begins, Ends, or Equals.
- (Optional) Check the Search all domains in the Active Directory forest checkbox if you want to search in all domains.
- Click Search.
- Select the Active Directory group from the drop-down list and click Assign.
Results
All users within the group are granted administrator privileges.
What to do next
After you have added the administrators, you can configure the Machine Managers and App Volumes storage. See Configuring a Machine Manager and Configure Storage For AppStacks. | https://docs.vmware.com/en/VMware-App-Volumes/2.13.3/com.vmware.appvolumes.admin.doc/GUID-47B32890-23C2-4558-B449-471F1C37DEC3.html | 2018-10-15T13:37:47 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.vmware.com |
Perform the tasks below to define the tenant network.
If the tenant has backhaul, work with the tenant to identify an internal subnet that is not in use in their infrastructure to be used for the virtual desktops. Otherwise assign an appropriate subnet to the tenant network.
Add VLAN(s), VXLAN(s), or a Distributed Virtual Port Group (DVPG) to the tenant. At least one of these must be the Tenant Network.
Assign at least one of the added network(s) to the Desktop Manager via the Service Grid in the Service Center. These networks will be used to ensure desktop isolation and may be shared across multiple Desktop Managers.
Note:
DVPG must be configured to use ephemeral port binding.
Important: | https://docs.vmware.com/en/VMware-Horizon-DaaS/services/horizondaas.spmanual800/GUID-A518ADD2-B7C1-461F-9CC8-5DAE59749F58.html | 2018-10-15T13:03:21 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.vmware.com |
If you are using multiple domains for enrollment, use Server Name Indication to simplify installation and reduce server overhead.
Multiple domains normally require multiple servers, SSL certificates, and IP addresses. To reduce issues and overhead, use Server Name Indication (SNI) to use only one server supported by multiple SSL certificates, and CNAME/ANAME records pointing to this SNI supported server.
For SNI, you need Windows Server 2012 R2 with IIS 8.0+.
To configure SNI:
Open the Internet Information Services (IIS) Manager.
Right click Sites, then select Add Website.
Fill in your site name, physical path, and host name. Change the type to https, check Require Server Name Indication, and select the matching SSL certificate.
Right-click your Web site and select Add Application.
If you have not already, you must run the WADS installer. Change the Web site from Default Web Site to the site name provided in the previous steps. Once completed, the first domain has been successfully configured. To add more domains, repeat steps 2–4 and continue to step 6.
Fill in Alias as EnrollmentServer and update the Application Pool to point to EnrollmentServer. Click the contextual dots (…) to browse for the EnterpriseEnrollment directory created after the WADS setup was run. | https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/9.6/vmware-airwatch-guides-96/GUID-AW96-OP_SNI_WADS.html | 2018-10-15T12:25:55 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.vmware.com |
Workspace ONE UEM supports SCEP (Simple Certificate Enrollment Protocol) for iOS and macOS devices. If you’re looking to leverage certificates as part of your mobile deployment, SCEP allows you to securely deploy certificate enrollment requests to iOS devices, even when Workspace ONE UEM does not natively support your PKI infrastructure of choice. | https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/9.6/vmware-airwatch-guides-96/GUID-AW96-SCEP_Intro.html | 2018-10-15T12:33:18 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.vmware.com |
Hex
Returns as a String the hexadecimal version of the number passed.
Syntax
result=Hex(value)
Notes
If the value is not a whole number, the decimal value will be truncated.
You can specify binary, hex, or octal numbers by preceding the number with the & symbol and the letter that indicates the number base. The letter b indicates binary, h indicates hex, and o indicates octal.
VB Compatibility Note: VB rounds the value to the nearest whole number so the Hex function will probably be changed in a future release to do this as well.
Examples
Below are examples of various numbers converted to hex:
hexVersion = Hex(5) // returns "5"
hexVersion = Hex(75) // returns "4B"
hexVersion = Hex(256) // returns "100"
Formatting a hexadecimal number
It is often useful to keep the same number of digits to represent a hexadecimal number, e.g. 8 characters for a 32 bits hexadecimal number. You can achieve such a task with the following code:
Const prefix = "00000000" // A 32-bits value is 8 digits long at max
Return Right(prefix + Hex(value), 8) // We always return 8 characters
End Function
For an Int64 integer, you can adapt it as:
Const prefix="0000000000000000" // A 64-bits value is 16 digits long at max
Return Right(prefix + Hex(value), 16) // We always return 16 characters
End Function
Converting a hexadecimal string to a number
To go the other way, one common approach is to prefix the string with &h and pass it to Val; for example, Val("&h4B") returns 75.
See Also
&b, &c, &h, &o, &u literals; Bin, DecodeHex, EncodeHex, Oct functions. | http://docs.xojo.com/index.php/Hex | 2018-10-15T13:46:07 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.xojo.com |
Access Kerberos-enabled HBase cluster using a Java client
You can access Kerberos-enabled HBase cluster using a Java client.
- HDP cluster with Kerberos enabled.
- You are working in a Java 8, Maven 3 and Eclipse development environment.
- You have administrator access to Kerberos KDC.
Perform the following tasks to connect to HBase using a Java client and perform a simple Put operation to a table.
- “Download configurations”
- “Set up client account”
- “Access Kerberos-Enabled HBase cluster using a Java client” | https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/authentication-with-kerberos/content/access_kerberos_enabled_hbase_cluster_using_a_java_client.html | 2018-10-15T13:53:42 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.hortonworks.com |
Contents Now Platform Custom Business Applications Previous Topic Next Topic Service Creator ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Service Creator Service creator enables a department to offer custom services through the service catalog, such as the HR department offering tuition reimbursement for further education. Each published service has an associated record producer catalog item. Users designated as managers and editors create and design these catalog items. End users can request services by ordering the catalog item. All services belong to a published service category, which has an associated application and modules. When a user orders the catalog item for a service, the ServiceNow system creates a new task record within the application for that service category. Users designated as service fulfillers for the department complete these tasks to fulfill the service request. Service creator processThe service creator process involves requesting and publishing a service category, designating editors and service fulfillers, creating and publishing services, and submitting and fulfilling service requests.Activate Service CreatorAn administrator can activate the Service Creator plugin to access the application.Manage a serviceUsing the Service Creator, department managers can request a new service category, designate editors and service fulfillers for that category, and create and publish services. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-application-development/page/build/service-creator/concept/c_ServiceCreator.html | 2018-10-15T13:23:40 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.servicenow.com |
Add Your Connection IP Address to IP Access List
On this page
Estimated completion time: 2 minutes
An IP address is a unique numeric identifier for a device connecting to a network. In Atlas, you can only connect to a cluster from a trusted IP address. Within Atlas, you can create a list of trusted IP addresses, referred to as an IP access list, that can be used to connect to your cluster and access your data.
Procedure
You must add your IP address to the IP access list before you can connect to your cluster. To add your IP address to the IP access list:
1
2
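If you prefer to script this step instead of using the steps above in the Atlas UI, a rough sketch against the Atlas Administration API is shown below; the API path, the payload field names, and the use of HTTP digest authentication with programmatic API keys are assumptions to verify against the current Atlas API reference.

import requests
from requests.auth import HTTPDigestAuth

# Placeholder credentials and project id
PUBLIC_KEY = "your-public-key"
PRIVATE_KEY = "your-private-key"
PROJECT_ID = "your-project-id"

# Endpoint path and body fields are assumptions; check the Atlas API reference
url = "https://cloud.mongodb.com/api/atlas/v1.0/groups/" + PROJECT_ID + "/accessList"
payload = [{"ipAddress": "203.0.113.10", "comment": "workstation"}]

response = requests.post(url, json=payload, auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY), timeout=10)
response.raise_for_status()
print(response.json())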
Next Steps
Now that you added your connection IP address to the IP access list, proceed to Create a Database User for Your Cluster. | https://docs.atlas.mongodb.com/security/add-ip-address-to-list/ | 2022-01-16T18:29:18 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)] | docs.atlas.mongodb.com |
Getting Started
Installation, licensing, system requirements, and basic environment setup information
Studio
Use Eggplant Performance Studio to create and define load and performance tests
Using Studio
Find detailed information on using Eggplant Performance Studio to develop your tests
Test Controller
Use Eggplant Performance Test Controller to run and monitor the tests you create
Analyzer
Use Eggplant Performance Analyzer to analyze results and report on your test runs
API Reference Manuals
Eggplant Performance includes API references for a range of virtual user types
Getting Started with JMeter
Integrate Eggplant Performance with JMeter for testing
Eggplant Training Videos
Training and tutorial videos for Eggplant products
See what other Eggplant users are talking about
Training and Certifications
Learn to use products in the Eggplant Digital Automation Intelligence Suite | https://docs.eggplantsoftware.com/epp/eggplant-performance-documentation-home.htm | 2022-01-16T19:41:11 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.eggplantsoftware.com |
General activity analytics
Watch the video below for guidance on this dashboard!
User events
We track a number of events that your users take as they interact with the mobile learning application. These events include page views, details of user course progress, individual question scores, and much more. This number represents the total number of analytics events we've captured across all your users.
Active users
This is the total number of active users using the mobile application. If the user has logged at least one analytics event on the platform we count them as 'active'. This means, for example, that if a user has looked at a page within the timeframe you selected we'll count them as active.
New user registrations
This number represents the total number of new registrations to your mobile app.
User activity
This graph shows you the event volume per day for users on your mobile application. You can use this to see which days people have been more or less active on.
User event breakdown
Here we break down the type of analytics event we captured. These events include raw pageviews, question answers, and much more. You can use this to get a sense of the kind of events being logged by your users, and how these make up the activity volumes seen in the user activity chart.
Activity by user tag
This chart shows you the total event volume for all users in a given user group. Note that events are counted multiple times for users who belong to multiple user groups. This shows you the base level of activity for users in a given user group.
| https://docs.learn.ink/using-analytics/general-activity/ | 2022-01-16T20:08:43 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.learn.ink
Microsoft Defender for Cloud Apps overview
For information about Office 365 Cloud App Security, see Get started with Office 365 Cloud App Security.
Microsoft Defender for Cloud Apps is a Cloud Access Security Broker (CASB) that supports various deployment modes including log collection, API connectors, and reverse proxy. It provides rich visibility, control over data travel, and sophisticated analytics to identify and combat cyberthreats across all your Microsoft and third-party cloud services.
Microsoft Defender for Cloud Apps natively integrates with leading Microsoft solutions and is designed with security professionals in mind. It provides simple deployment, centralized management, and innovative automation capabilities.
For information about licensing, see the Microsoft Defender for Cloud Apps licensing datasheet.
What is a CASB?
Moving to the cloud increases flexibility for employees and IT teams. However, it also introduces new challenges and complexities for keeping your organization secure. To get the full benefit of cloud apps and services, an IT team must find the right balance of supporting access while protecting critical data.
This is where a Cloud Access Security Broker steps in to address the balance, adding safeguards to your organization's use of cloud services by enforcing your enterprise security policies. As the name suggests, CASBs act as a gatekeeper to broker access in real time between your enterprise users and cloud resources they use, wherever your users are located and regardless of the device they are using.
CASBs do this by discovering and providing visibility into Shadow IT and app use, monitoring user activities for anomalous behaviors, controlling access to your resources, classifying and preventing sensitive information leaks, protecting against malicious actors, and assessing the compliance of cloud services.
Why do I need a CASB?
You need a CASB to better understand your overall cloud posture across SaaS apps and cloud services and, as such, Shadow IT discovery and app governance are key use cases. Additionally, an organization is responsible for managing and securing its cloud platform including IAM, VMs and their compute resources, data and storage, network resources, and more. So if you're an organization that uses, or is considering using, cloud apps to your portfolio of network services, you most likely need a CASB to address the additional, unique challenges of regulating and securing your environment. For example, there are many ways for malicious actors to leverage cloud apps to get into your enterprise network and exfiltrate sensitive business data.
As an organization, you need to protect your users and confidential data from the different methods employed by malicious actors. In general, CASBs should help you do this by providing a wide array of capabilities that protect your environment across the following pillars:
- Visibility: detect all cloud services; assign each a risk ranking; identify all users and third-party apps able to log in
- Data security: identify and control sensitive information (DLP); respond to sensitivity labels on content
- Threat protection: offer adaptive access control (AAC); provide user and entity behavior analysis (UEBA); mitigate malware
- Compliance: supply reports and dashboards to demonstrate cloud governance; assist efforts to conform to data residency and regulatory compliance requirements
The Defender for Cloud Apps framework
Discover and control the use of Shadow IT: Identify the cloud apps, IaaS, and PaaS services used by your organization. Investigate usage patterns, and assess the risk levels and business readiness of more than 25,000 SaaS apps against more than 80 risk factors.
For more information about Defender for Cloud Apps data retention and compliance, see Microsoft Defender for Cloud Apps data security and privacy.
Cloud Discovery
Cloud Discovery uses your traffic logs to dynamically discover and analyze the cloud apps that your organization is using. To create a snapshot report of your organization's cloud use, you can manually upload log files from your firewalls or proxies for analysis. To set up continuous reports, use Defender for Cloud Apps log collectors to periodically forward your logs.
For more information about Cloud Discovery, see Set up Cloud Discovery.
Sanctioning and unsanctioning an app
You can use Defender for Cloud Apps to sanction or unsanction apps in your organization by using the cloud app catalog. The Microsoft team of analysts has an extensive and continuously growing catalog of over 25,000 cloud apps that are ranked and scored based on industry standards. You can use the cloud app catalog to rate the risk for your cloud apps based on regulatory certifications, industry standards, and best practices. Then, customize the scores and weights of various parameters to your organization's needs. Based on these scores, Defender for Cloud Apps lets you know how risky an app is. Scoring is based on over 80 risk factors that might affect your environment.
App connectors
App connectors use APIs from cloud app providers to integrate the Defender for Cloud Apps cloud with other cloud apps. App connectors extend control and protection. They also give you access to information directly from cloud apps, for Defender for Cloud Apps analysis.
To connect an app and extend protection, the app administrator authorizes Defender for Cloud Apps to access the app. Then, Defender for Cloud Apps queries the app for activity logs, and it scans data, accounts, and cloud content. Defender for Cloud Apps can enforce policies, detects threats, and provides governance actions for resolving issues.
Defender for Cloud Apps uses the APIs provided by the cloud provider. Each app has its own framework and API limitations. Defender for Cloud Apps works with app providers on optimizing the use of APIs to ensure the best performance. Considering the various limitations that apps impose on APIs (such as throttling, API limits, and dynamic time-shifting API windows), the Defender for Cloud Apps engines utilize the allowed capacity. Some operations, like scanning all files in the tenant, require a large number of APIs, so they're spread over a longer period. Expect some policies to run for several hours or several days.
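As a concrete illustration of this API-based access (the tenant portal URL and token below are placeholders, and the exact filter fields depend on the API version in use), recent activities can be listed with a direct REST call against the Defender for Cloud Apps API:

# List recent activities; the tenant portal URL and API token are placeholders.
curl --request POST "https://<tenant>.portal.cloudappsecurity.com/api/v1/activities/" \
  --header "Authorization: Token <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --data '{ "filters": {}, "limit": 10 }'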
Conditional Access App Control protection
Microsoft Defender for Cloud Apps Conditional Access App Control uses a reverse proxy architecture to give you real-time visibility and control over access to, and activities performed within, your cloud apps.
Next steps
- Read about the basics in Getting started with Defender for Cloud Apps.
If you run into any problems, we're here to help. To get assistance or support for your product issue, please open a support ticket.
| https://docs.microsoft.com/en-NZ/defender-cloud-apps/what-is-defender-for-cloud-apps | 2022-01-16T18:56:12 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.microsoft.com
Creating actions
Various details about the action can be specified on the Details tab of the action rule. These details can be used in Strategies for making decisions. They can also be included in treatments.
Action details are classified into the following groups:
- Basic Action Attributes
Basic action attributes are available for every action and are categorized into the following sections:
- Bundle Attributes
Bundles attributes are used to specify the bundling settings for the action. Refer to the Action Bundles chapter for more details.
- Custom Attributes
The Custom Attributes section lists other proposition fields defined for use by implementers of the application. The following sets of attributes display in this section:
- Creating a New Attribute
In order to specify a new action detail, the backing proposition attribute must first be created.
- Defining action flows
The Flow contains the exact sequence of actions to be performed in the lifecycle of a particular action. A Flow consists of a variety of shapes and the connectors which connect these shapes.
- Importing and exporting actions
In addition to creating actions manually, you can also import actions created outside of Pega Customer Decision Hub by uploading a .csv file with the action data. | https://docs.pega.com/pega-customer-decision-hub-user-guide/86/creating-actions | 2022-01-16T19:23:32 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
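Purely as an illustration of the import format (these column names and values are not authoritative; export one or two existing actions first to obtain a valid template for your application), a minimal .csv file might look like this:

pyName,pyLabel,pyIssue,pyGroup
PremiumCard,Premium Credit Card,Sales,CreditCards
GoldCard,Gold Credit Card,Sales,CreditCards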
Links may not function; however, this content may be relevant to outdated versions of the product.
What's new in PegaRULES Process Commander 6.1 SP2: Case Management
6.1 SP2 introduces several Case Management features for application designers, caseworkers and case managers:
New features for application designers
Explicit support for case design
The Case Type Definitions gadget (under the Process & Rules > Case Management landing page) is used to configure the following case type and work processing configurations. Using the gadget, you can:
- Construct covering relationships
- Build new case types using a standard tree gadget
- Add entirely new case types
- Use standard starting flows for the new case type
- Reuse existing case and work types
- Manage various aspects of work processing, including:
- Service levels
- Attachments Categories (and automatic attachments when work objects are created)
- Automatic and conditional instantiation of covered items when a new case is created
- Mapping roles to object access for your various case and work types
See Introduction to Case Type design.
Define case and case relationships
Using the Case Definitions gadget, you can define templates for every case and all of its subcases and tasks, including all of the necessary elements (content, subjects, roles, events, etc.) and the relationships between them.
Add pre-defined ad hoc work, or new ad hoc work to cases
Ad hoc work added to cases can be saved to the case type rule, as an update to the case, a new version of that case, or as an entirely new case type. This allows for capture of ad hoc work into the process in a passive, iterative manner.
Create case and subcase templates for existing applications
For existing applications, you can use the new Case Type Definition landing page to create formal Case and Task hierarchies from existing work types.
See How to create case types from existing applications.
Case type rule additions
The Case Type rule enables you to nest cover objects within other cover objects in a parent-child hierarchy. This capability helps centralize flow processing and monitoring among multiple work types in large applications.
See About Case Type rules.
Goals and deadlines
The Case Type definition gadget displays your application's cover hierarchy and allows you to configure various aspects of working processing for each case type. Goals and deadlines allows you to easily view and change the overall service level settings for the selected case type.
New features for caseworkers and case managers
The Case Manager portal
The portal provides a standard desktop for viewing and working on cases, displaying a case and all its nested subcases and tasks, and well as the associated content, users, roles, and subjects. A user can add pre-defined ad hoc work, or can add new ad hoc work to cases.
The portal allows for contextual processing of the case based on type, product, geography, subject, and time frame. Collaborative case management allows for caseworkers to be informed of the status of work that coworkers are doing.
Elements of the portal include:
- Cases — provides a comprehensive view of a caseworker's caseload.
- Tasks — shows caseworkers their most pressing tasks across all cases.
- Detailed Case View — provides a detailed view of a caseworker's cases, including attachments.
- Case Subjects — provides a view of profile information, other existing and related cases, prior interactions, related subjects, related entities.
| https://docs.pega.com/whats-new-pegarules-process-commander-61-sp2-case-management | 2022-01-16T20:38:37 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com
What is the Business Steward Portal?
The Business Steward Portal is a tool for reviewing, modifying, and approving records that failed automated processing or that were not processed with a sufficient level of confidence. Use the Business Steward Portal to manually enter correct or additional data in a record. For example, if a customer record fails an address validation process, you could use the search tools to conduct research and determine the customer's address, then modify the record so that it contains the correct address. The modified record could then be approved and reprocessed by Spectrum™ Technology Platform, sent to another data validation or enrichment process, or written to a database, depending on your configuration. You could also use the Portal to add information that was not in the original record.
The Business Steward Portal also provides summary charts that provide insight into the kinds of data that are triggering exception processing, including the data domain (name, addresses, spatial, and so on) as well as the data quality metric that the data is failing (completeness, accuracy, consistency, and so on).
In addition, the Business Steward Portal Manage Exception page enables you to review and manage exception record activity, including reassigning records from one user to another. Finally, the Business Steward Portal Data Quality page provides information regarding trends across dataflows and stages.
For more information on exception processing, see Business Steward Module. | https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/DataQualityGuide/BusinessSteward/source/ExceptionEditor/whatis.html | 2022-01-16T18:32:06 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.precisely.com |
# Requests Examples
# Individual Resources
Individual resources can be retrieved by including their ID directly in the URL path, after the URL of the collection they belong to. This is shown in the examples below:
Top level collections:
One or more JSON properties can be included or excluded specifically with the select query parameter. The ! character is used to exclude fields:
# Resource Collections
Resource collections are accessed using plural in the URL path.
For example, to retrieve the entire collection of systems, datastreams, observations and features, you would use respectively:
# Nested Collections
Nested collections can be accessed in the same way, provided the parent resource is identified:
# Filtering Items
Collections can be filtered using query parameters:
# Paging
Paging is done by providing offset and limit parameters. The offset is omitted for the first fetch, and then included when continuing to page through results. The next offset is always provided along with the last page fetched:
Note that the offset parameter should be considered an opaque identifier, not necessarily the sequential index of the next item. For certain data stores, it could be convenient to implement it as the row number at the start of the page (equivalent to the skip parameter in SQL), but it is not a requirement, and is not the most efficient way to implement paging in large datastores.
| http://docs.opensensorhub.org/v2/web/sensorweb-api/examples.html | 2022-01-16T19:49:03 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.opensensorhub.org
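The page's inline request examples did not survive extraction; the following illustrative requests (host, resource IDs, and filter values are placeholders) show the general pattern described above:

# Individual resource by ID
GET /api/systems/abc123

# Include or exclude JSON properties with select (! excludes)
GET /api/systems/abc123?select=name,description
GET /api/systems/abc123?select=!outputs

# Top-level and nested collections
GET /api/systems
GET /api/datastreams
GET /api/systems/abc123/datastreams

# Filtering and paging (offset is the opaque value returned with the previous page)
GET /api/observations?limit=100
GET /api/observations?limit=100&offset=<value-from-previous-page>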
Tickets
DiamanteDesk offers a great way to improve customer experience by easily receiving, reassigning and taking care of any Client requests presented as tickets.
Tickets in DiamanteDesk can be created through one of the following channels:
- through the web portal;
- via embedded forms on websites, online stores, blogs, etc.;
- via processing email requests sent to the support email address;
- by the admin users via the admin panel.
Tickets in the Admin panel can be viewed and created from:
Ticket Filters
All tickets in the system can be filtered according to each of the parameters described in the table below. To learn more about filtering in DiamanteDesk, follow this link.
Create a New Ticket
To create a new ticket, complete the following steps:
Click Create Ticket at the top right corner of the screen. The Create Ticket screen opens.
Provide all the relevant information necessary to create a new ticket:
Provide the detailed description of a ticket in the Description field. If relevant, describe the steps to reproduce the bug/error.
Click Save or Save and Close at the right top corner of the screen for a corresponding action.
Ticket Comments
Once the ticket is created and successfully saved in the system, customers can comment on it from the portal, and Admin users can reply from the Admin panel either publicly or privately.
Whenever a DiamanteDesk customer needs to add more information about the issue or make a request about a previously created ticket, a comment can be added to the ticket on the portal.
This comment automatically becomes available in the admin panel, keeping admin users up to date with any new information, requests, or issues. Admins can comment on tickets directly from the Admin panel, as described below.
Private Comments on the Admin Panel
To add a comment to a ticket from the Admin panel, open the required ticket, scroll down to the very bottom of the page and click Comment. The Add Comment screen opens.
Notifications
Email Notifications is a DiamanteDesk feature that sends automatic emails to the ticket creator and Assignee when the ticket is created or its status changes. This way, a customer is notified that their request is being processed.
When the ticket is created, a Reporter and Assignee get the following emails, informing them on ticket details (such as Branch, Subject, Priority, Status and Source):
When the status of a ticket changes, the reporter and assignee get the respective email as well.
Configuring Notifications
This functionality can be configured according to the customer needs at System > Configuration:
On the System Configuration pane click DiamanteDesk to expose the available options and choose Notifications.
To use the default settings, select the Use Default check box in the Email Notifications field. To edit configurations, clear the the Use Default check box and set the Enable Email Notifications field to Yes or No option.
To save the changes made, click Save Settings at the top right corner of the screen.
Server Setup
The mailer settings for emails and notifications are usually configured during DiamanteDesk installation. To learn more about the process of installation, navigate to the Installation Guide section of the documentation.
To make sure that server parameters are configured correctly or you need to change them, define required configuration details in the app/config/parameters.yml file. If you need more information on how to perform configurations in the app/config/parameters.yml file, follow this link.
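As an illustration only (these are the conventional Symfony-style mailer keys, not an authoritative DiamanteDesk reference; your installation may use different names or additional keys), the relevant block of app/config/parameters.yml typically looks like this:

# app/config/parameters.yml (illustrative values)
parameters:
    mailer_transport: smtp
    mailer_host: smtp.example.com
    mailer_port: 587
    mailer_encryption: tls
    mailer_user: support@example.com
    mailer_password: change-me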
Watchers
Each of the tickets can be assigned to a specific user to make sure that a given issue is solved quickly and efficiently. But in order to have more control over certain high priority issues, a new Watchers feature has been added.
A Watcher is a person who gets an email notification every time a status, priority or other ticket information changes. When the ticket is created, a Reporter (the person who created a ticket) and an Assignee (the person to whom the ticket is assigned) automatically become Watchers. If the ticket is created by the admin user, he automatically becomes a ticket watcher as well.
Adding Watchers to the Ticket
Watchers can be added to the ticket in one of the following ways:
- By Customers via Email
In case the ticket is created from the email through email processing, the email sender and all the users from CC (if any) become watchers and receive email notifications each time the ticket content, status or priority is changed.
- By Customers via Portal
To add watchers to the ticket via the portal, complete the following steps:
- Log into the Portal.
- Create a new ticket. To get the detailed instructions on how to create ticket via the portal, please refer to this section.
- Once the ticket is submitted, the screen with ticket description and editing options is opened. At the right side of this screen in the Watchers section click Add Watcher to add a person or several people who will be able to control the ticket workflow.
- The Add Watcher pop-up opens. Specify the email of a person you would like to make a watcher for this ticket and click Add. If this email has been previously registered in DiamanteDesk, the system will recognize this user and add person’s first an last name to the list of watchers. If no matching account has been found, a new customer is created.
- An email confirmation is sent to a person whose email was provided at the Add Watcher screen, notifying him on the new role granted.
- By Admins via Admin Panel
Admin user can add or remove certain users from the list of ticket watchers if he has respective edit permissions. To follow the changes made in a certain ticket or add other customers/administrators to the list of Watchers, Admin should perform the following steps:
- Log into the DiamanteDesk Admin panel.
- Open the required ticket.
- To be aware of the changes made to the ticket, click Watch on the option panel at the top of the screen.
- To add another person to the Watchers list, click Add Watcher on the option panel. The Add Watcher screen opens. Enter the name of a required user or click a list button and select a user from the list of available users.
| https://docs.diamantedesk.com/en/latest/user-guide/tickets.html | 2022-01-16T19:22:59 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.diamantedesk.com
Details provided in background processing logs
Pega Platform provides log categories specific to background processing. With these categories, you can diagnose and troubleshoot issues related to job schedulers, queue processors, and agents.
Background processing log categories provide detailed logs for the following areas:
- Job schedulers — For managing recurring tasks in your Pega application. For more information, refer to the Job scheduler FAQ and Job Scheduler rules.
- Queue processors — For queue management and asynchronous messenger processing. For more information, refer to the Queue processor FAQ and Queue Processor rules.
- Agents — This older generation of background processing capabilities which was replaced with job schedulers and queue processors. For more information, refer to Agents rules
The requestor_lifecycle logger
You can enable the com.pega.TRACE.requestor_lifecycle logger to see information that is related to the requestor that is used by a queue processor during an activity run. For more information, refer to Setting log levels for individual loggers.
| https://docs.pega.com/system-administration/85/details-provided-background-processing-logs | 2022-01-16T20:46:13 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com
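A minimal sketch of how this logger could be raised in the prlog4j2.xml configuration is shown below; the appender name is a placeholder and must match an appender defined in your own file, and the same change can usually be made at run time from the logging level settings tool instead.

<!-- prlog4j2.xml: raise the requestor lifecycle logger for queue processor runs -->
<Loggers>
  <Logger name="com.pega.TRACE.requestor_lifecycle" level="DEBUG" additivity="false">
    <!-- Reference an appender that already exists in your configuration; the name varies by installation. -->
    <AppenderRef ref="PEGA"/>
  </Logger>
</Loggers>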
WP Smart TV Articles
- Step 1. Install WP Smart TV
- Step 2. Adding Video Content
- Step 3. How to setup Roku Direct Publisher
- Step 4. Setup of Recipes system
- Step 5. Shortcodes and Elementor Pro bonus
- Step 6. Using the Series management system
| https://docs.rovidx.com/docs/wp-smart-tv/ | 2022-01-16T18:11:52 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.rovidx.com
Index:
- How to add an employee
- How to enter an employee's details
- How to change an employee's role from employee to administrator
- How to delete an employee
- Confirmation that employee has logged into the Mobile App and completed their Induction
- How to send a message to an Employee
- How to dequeue a message
- How to edit a message if still in draft format
- How to delete a message if still in draft format
Please watch the following video on how to create and manage your employees via the SiteConnect Web portal:
You can import a CSV file to add multiple employees' details. Please refer to this link for instructions on how to do this How to import and export your Employees details
If you need further assistance then please read on...
How to add an employee
To add a new Employee click on the Employees tab on the left-hand side of the main menu.
Once you have clicked on the Employees tab a new screen will appear titled Employees which will list all your employees. To add a new employee choose and click on the orange button titled +Add Employee.
How to enter an employee's details
Once you have clicked on +Add Employee a new dialog box will appear. In this dialog box you enter the details of the employee. The first details to enter, which are compulsory are:
- Contact person - enter the name of the employee
- Email address - enter the email address of the employee
- Mobile number - enter the contact number of the employee
- Job Title- enter the job title of the employee
- Address- Address of employer or employee if appropriate
- Suburb- relates to address
- City
- Country
The system will automatically generate the user name for log in purposes based on the email address of the employee. However, if that email address is already been used it will nominate the employee's name as the username, i.e. joebloggs
Entering the employee's address is an optional field
Once you have entered the above details click on Save and this will save and load this employee into your profile taking you back to the main Employees list and will display those employee's details as per below image.
When set up as an Employee you will only be able to see the Dashboard on the Web portal, QR Codes and any Pre-Qualification Items. For full functionality you will need to change the role of the employee to administrator.
How to change an employee's role from Employee to Administrator
Once an employee is created in the system they will automatically be set up as an Employee giving them limited access to the system. To give an employee full access to the system you will need to change their role to Administrator. To do this you need to click on the Edit option to the right-hand side of the screen.
This will bring up the employee's details. On the right-hand side of the screen you will see a blue box which lists the main details for that employee. Inside that box you will see a statement which says either Switch role to Administrator or Switch role to Employee. To switch this employee to an Administrator click on this statement and it will automatically change their role to Administrator. Vice versa if you want to change an employee from an Administrator back to an Employee.
Once you have clicked on this statement it will bring up a dialog box asking you to confirm if you want to make this change. Click on Yes and this will change the employee's role from Employee to Administrator giving them full access to the system.
How to Delete an Employee
Click on delete to the right-hand side of their name
You will then be asked to confirm – click on Yes.
When an employee is generated in the system they will be sent an email with their username requesting them to log in to the system and create their password.
Once they have done this they can log in to both the Web Portal and Mobile App but remember anyone set up as an Employee can only view the Dashboard, QR Codes and Pre-Qualification items in the Web Portal.
Confirmation that Employee has logged into the Mobile App & completed their Induction
In the status icon column you will potentially see three Icons.
- Induction Status
- Mobile App Status
- Health Declaration
Induction Status - A red X, orange tick or green tick indicates whether or not the inductions assigned to that employee have been completed.
- A red X = induction(s) have not been started
- An orange tick = induction(s) have been partially completed
- A green tick = all induction(s) are complete
Mobile App Status
Once they have logged into the Mobile App a Mobile Icon will appear in this field which confirms that the Mobile App is set up and ready to use.
If you hover over the Mobile Icon this will reveal the version of the App being used. Our latest version is 2.4.7. Any version less than this needs to be updated through the App store.
Health Declaration
When a new user logs into either the Web Portal or the Mobile App they must complete a COVID-19 Health Declaration. Once completed a Band-Aid Icon will appear in this field which confirms that they have completed and passed the Health Declaration.
How to send a message to an employee
You can send a message to an employee by clicking on the edit button on the right-hand side of the employee's details.
This will open a new screen with the employee's details. On the right-hand side of the screen in the blue box click on Send Message.
A new dialog box will appear called Compose Message. In this dialog box you can compose a message:
- Select the message type - Email, Push notification or SMS (SMS needs to be enabled for your account - if not enabled contact Support)
- Select the message priority - low, normal, high or critical
- Schedule to send later - click on the calendar icon to schedule for a later date if required
- Subject - enter your subject title
- Main body - enter the content of your email
- Attach files - you can attach a file(s) to the message
Once you have completed your message click on Save and then Send Message. If you don't want to send the message at that time do NOT click on Send Message and the message will be saved in draft format for you to review at a later date.
Please NOTE: You will not be able to Save and send a message until either a User, Contractor or Site is selected.
How to dequeue the message
Once you click on send message you have 1 minute to dequeue the message and retrieve it back for further editing or deleting.
To retrieve the message go to the messages menu on the left-hand side of your screen.
Locate the message and click on Dequeue on the right-hand side of your screen.
Once you click on Dequeue you will be asked to confirm if you want to dequeue this message. Once you click on Yes the message will be deleted.
How to edit the message if still in draft format
If you have not clicked on send message the message will stay in draft format. If the message is still in draft format you can edit the message by clicking on Edit in the right-hand corner of your screen and edit the message.
How to delete the message if still in draft format
Whilst still in draft format you can also chose to Delete the message by clicking on Delete on the right-hand side of your screen.
Once you have clicked on Delete you will be asked to confirm 'that you want to remove this message'. Once you click on Yes the message will be deleted.
If you need any further help or have any questions please contact the support team by email [email protected] or Ph: 0800 748 763
| https://docs.sitesoft.com/how-to-add-and-manage-your-employees | 2022-01-16T20:06:29 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.sitesoft.com
This document describes the backup system used by Taler wallets. This is the second, simplified iteration of the proposal, which leaves out multi-device synchronization.
Backup must work both with and without Anastasis.
Arbitrary number of backup providers must be supported.
Minimize information leaks / timing side channels.
Minimize potential to lose money or important information.
Since real-time sync is not supported yet, wallets should have a feature where their whole content is “emptied” to another wallet, and the wallet is reset.
Note
CG: This boils down to the existing ‘reset’ button (developer mode). Very dangerous. Could be OK if we had some way to notice the number of wallets using the same backup and then allow this ‘reset’ as longa as # wallets > 1. Still, doing so will require a handshake with the other wallets to ensure that the user doesn’t accidentally reset on both wallets at the same time, each believing the other wallet is still sync’ed. So we would need like a 2-phase commit “planning to remove”, “acknowledged” (by other wallet), “remove”. Very bad UX without real-time sync.
Even without real-time sync, the backup data must support merging with old, existing wallet state, as the device that the wallet runs on may be restored from backup or be offline for a long time.
Each wallet has a 64 (CG: 32 should be enough, AND better for URLs/QR codes/printing/writing down) byte wallet root secret, which is used to derive all other secrets used during backup, which are currently:
If the user chooses to use Anastasis, the following information is backed up in Anastasis (as the core secret in Anastasis terminology):
TBD. Considerations from Design Doc 005: Wallet Backup and Sync still apply, especially regarding the CRDT.
The user will be asked to set up backup&sync (by selecting a provider) after the first withdrawal operation has been confirmed. After selecting the backup&sync providers, the user will be presented with a “checklist” that contains an option to (1) show/print the recovery secret and (2) set up Anastasis.
The wallet will initially only withdraw enough money to pay the backup&sync/anastasis providers. Only after successful backup of the wallet’s signed planchets, the full withdrawal will be completed.
Should the exchange tell the wallet about available sync/Anastasis providers? Otherwise, what do we do if the wallet does not know any providers for the currency of the user?
Should the wallet root secret and wallet database be locally encrypted and protected via a passphrase?
What happens if the same Anastasis user has multiple wallets? Can Anastasis somehow support multiple “instances” per application?
Note
CG would definitively solve this using a more complex format for the master secret, basically serializing multiple root secret values with meta data (which wallet/device/name). | https://docs.taler.net/design-documents/009-backup.html | 2022-01-16T18:42:27 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.taler.net |
P-DARTS¶
Examples¶
# In case NNI code is not cloned. If the code is cloned already, ignore this line and enter the code folder.
git clone https://github.com/microsoft/nni.git
# search the best architecture
cd examples/nas/pdarts
python3 search.py
# train the best architecture, it's the same process as DARTS.
cd ../darts
python3 retrain.py --arc-checkpoint ../pdarts/checkpoints/epoch_2.json
| https://nni.readthedocs.io/en/v1.6/NAS/PDARTS.html | 2022-01-16T18:17:09 | CC-MAIN-2022-05 | 1642320300010.26 | [] | nni.readthedocs.io
Merging Bank Statement Lines
When importing bank account statements with Statement Intelligence, bank reconciliation lines can automatically be merged into one line based on the defined merge rules. Reconciliation lines, that have not automatically been merged during import, can manually be merged in the bank account reconciliation.
Merging of bank account entries is typically used where several credit card transfers are merged into one total per day because that is the amount posted in the general ledger.
To merge lines by merge rules
If you have defined merge rules needed to run automatic merge of bank account statement lines, the lines that comply with the merge rules will be merged during the import of the bank account statement.
If, after import, you have been working with lines that should then be merged, you can manually run the merge rules from the reconciliation journal.
- Use the search icon and search for Bank Account Reconciliations, then select the related link. This opens the list of bank account reconciliations.
- On the required bank account reconciliation, go to the action bar and select Matching > Merge Lines by Rule.
Statement Intelligence will merge lines that can be merged according to the defined merge rules.
To merge lines manually
If merge rules have not been set up, or if the bank account statement lines cannot be merged using the defined merge rules, bank statement lines can be merged by manual selection.
- Use the search icon and search for Bank Account Reconciliations, then select the related link. This opens the list of bank account reconciliations.
- On the required bank account reconciliation, mark the bank statement lines that should be merged by holding Ctrl on your keyboard while selecting them.
- Navigate to the action bar and choose Process > Merge Lines Manually.
The lines chosen for manual merge are now merged into one line.
Information about merged lines
To get an overview of which lines have been merged into one line, navigate to the merged bank statement line and select Additional Information. This action opens a list with an overview of a description and a notification for all the lines included in the merge.
Undo merged lines
It is not possible to unmerge lines that have been merged in the reconciliation journal. If you need the lines not to be merged, you must delete the merged line, and reimport it into the reconciliation journal. If the line was merged by a merge rule, you must disable the merge rule before importing the account statement lines again.
See also
Account statement merge rules. | https://docs.continia.com/en-us/continia-payment-management/business-functionality/reconciling-bank-accounts/merging-bank-statement-lines | 2022-01-16T18:36:40 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.continia.com |
Sections - Completing the Create or Save form
To create a section, click.
> Key parts:
A section has two key parts:
Additional creation options
- Adding.
- Converting sections to full section editor
Sections based on design templates, as well as sections created in App Studio, can leverage the full set of editing capabilities available in Dev Studio if you convert them to use the full section editor.
- Harness and section forms - Adding a section
Harness and Section forms - Adding a section
-. | https://docs.pega.com/user-experience-theme-ui-kit/86/sections-completing-create-or-save-form | 2022-01-16T20:43:40 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
Returns the skewness of the distribution of value_expression.
This function returns the REAL data type.
To invoke the time series version of this function, use the GROUP BY TIME clause. For more information, see Teradata Vantage™ - Time Series Tables and Operations, B035-1208.
Skewness
Skewness is the third moment of a distribution. It is a measure of the asymmetry of the distribution about its mean compared with the normal, Gaussian, distribution.
The normal distribution has a skewness of 0.
Positive skewness indicates a distribution having an asymmetric tail extending toward more positive values, while negative skewness indicates an asymmetric tail extending toward more negative values.
Computation
The equation for computing SKEW is defined as follows:
SKEW = (n / ((n - 1)(n - 2))) * SUM( ((x_i - x_bar) / s)^3 )
where:
- n is the number of non-null data points in value_expression
- x_i is each non-null value of value_expression
- x_bar is the arithmetic mean (AVG) of value_expression
- s is the sample standard deviation (STDDEV_SAMP) of value_expression
Conditions That Produce a Null Result
- Fewer than three non-null data points in the data used for the computation
- STDDEV_SAMP(x) = 0
- Division by zero | https://docs.teradata.com/r/ITFo5Vgf23G87xplzLhWTA/01A1Pc0CjkF9h8TdYxmsdw | 2022-01-16T19:41:10 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.teradata.com |
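A minimal usage sketch of the SKEW function described above (table and column names are illustrative):

-- Skewness of the salary distribution for each department
SELECT dept_no, SKEW(salary) AS salary_skew
FROM employee
GROUP BY dept_no;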
Limitations
Read only.
Gets the bounding rectangle of a text field. The bounding rectangle is the smallest rectangle which contains all characters of the field's text. The rectangle is given in twips with an origin at the upper left corner of the document.
Introduced: 15.0.
public Rectangle Bounds { get; }
Public ReadOnly Property Bounds() As Rectangle
Read only. | https://docs.textcontrol.com/textcontrol/asp-dotnet/ref.txtextcontrol.textfield.bounds.property.htm | 2022-01-16T18:35:58 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.textcontrol.com |
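A short C# usage sketch for the Bounds property documented above (the control instance and the way the field is obtained are illustrative):

// Get the text field at the current input position and read its bounding rectangle (in twips).
TXTextControl.TextField field = textControl1.TextFields.GetItem();
if (field != null)
{
    System.Drawing.Rectangle bounds = field.Bounds;
    Console.WriteLine($"x={bounds.X}, y={bounds.Y}, w={bounds.Width}, h={bounds.Height}");
}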
Problems with links¶
There are several ways to write links. Here we assume you are using external links with the following syntax: `Anchor text <https://example.org>`__
Common mistake #1: Missing space¶
Make sure there is a space between the anchor text and the opening <.
Tip
To test this yourself right now, click on “Edit on GitHub” in the top right corner of this page, fix the errors, then use “Preview changes” to view the changes (you need a GitHub account for this, if you do not have an account, go to).
Wrong syntax¶
`T3O<>`__
How this looks:
`T3O<>`__
Common mistake #2: Missing underscore (_)¶
Missing
_ or
__ at the end: | https://docs.typo3.org/m/typo3/docs-how-to-document/main/en-us/WritingReST/CommonPitfalls/Hyperlinks.html | 2022-01-16T20:08:04 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.typo3.org |
Writing redirects directly into a site’s theme code
VIP helper functions for broader redirects:
- vip_regex_redirects() — Advanced 301 redirects using regex to match and redirect URLs.
Caution: Regex is expensive and using this helper function will cause regex operations to be run on every uncached page load.
- vip_substr_redirects() — Wildcard redirects based on the beginning of the request path. This is an alternative to vip_regex_redirects() for when only a redirect from /foo/bar/* to somewhere else is needed.
| https://docs.wpvip.com/technical-references/redirects/writing-redirects-directly-into-your-theme-code/ | 2022-01-16T19:34:46 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.wpvip.com
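Purely as an illustration (the exact signature and array shape should be checked against the helper's documentation in the VIP plugins repository), a substring redirect map might be registered from theme code like this:

<?php
// Map request-path prefixes to their redirect targets (illustrative values).
vip_substr_redirects( array(
    '/foo/bar/' => 'https://example.com/new-section/',
) );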
Developer guidance
Learn how to use Intune developer resources, such as Intune App SDK and the App Wrapping Tool to protect LOB apps, Intune Data Warehouse reporting and API to monitor Intune success, and Microsoft Graph to access Intune functionality using Azure AD.
Protect apps with Intune App SDK
Overview
Concept
How-To Guide
Use the Intune Data Warehouse
Overview
- Data warehouse overview
- Change log for the Data Warehouse API
- Intune Data Warehouse API overview
- Intune Data Warehouse data model overview | https://docs.microsoft.com/en-IN/mem/intune/developer/ | 2022-01-16T19:02:41 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.microsoft.com |
Setting up a relying party in AD FS
Use Microsoft Active Directory Federation Services (AD FS) with your Pega Robot Manager and Pega Robot Runtime implementations to implement a claims-based authorization model for providing security and implementing a federated identity. With these features, you can implement single sign-on access to systems and applications across organizational boundaries.
These instructions explain how to set up a relying party in AD FS. The following diagram provides an overview of the process. The activity you set up in this topic is highlighted by the red arrow.
To set up a relying party, you will need one of the following certificates.
- A CA-signed certificate for token-signing
- A certificate Pega Robotic Automation Support has manually approved (issued by devqa.openspan.com)
- Setting up the relying party
These instructions explain how to connect to the server that hosts AD FS version 2.0.
- Ensuring that the connection is trusted
For your configuration to function correctly, the IIS server's SSL root certificate must be trusted to establish secure communications between the server and the Pega Robot Runtime/Pega Robot Studio computer.
| https://docs.pega.com/pega-robot-studio-v21-preview/robotic-process-automation/setting-relying-party-ad-fs | 2022-01-16T20:50:58 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com
Implementing the security model
Security planning involves defining authentication and authorization strategies for your application:
- Authentication
- Validates your identity.
- Authorization
- Determines the work objects you can access and the application functions you can perform.
For information about defining authentication and authorization strategies for your application, see the following topics:
Authentication schemes
The Pega Platform offers the following authentication types:
- PRBasic
- Based on passwords in the Operator ID data instances and the login form. This is defined by the HTML @baseclass.Web-Login rule, which your application can override.
- PRSecuredBasic
- Similar to PRBasic, but passes credentials by using Secure Sockets Layer (SSL) with Basic HTTP authentication. The login form is defined by the HTML @baseclass.Web-Login-SecuredBasic rule, which your application can override.
- PRCustom
- Supports access to an external LDAP directory or a custom authentication scheme.
- PRExtAssign
- Supports external assignments (Directed Web Access).
- J2EEContext
- Specifies that the application server in which the Pega Platform is deployed uses JAAS to authenticate users.
Implementing your authentication scheme
Your site can use a centralized, automated means of maintaining operator data instead of maintaining it manually in your application.
- Discuss the authentication schemes with your site's security and application server teams.
- Determine the appropriate authentication type.
For more information on authentication scheme planning, see Authentication.
Authorization model
The security authorization model determines user access privileges and work object permissions for the Pega Sales Automation application. Your security authorization model is based on the operator ID privileges and territory permissions structure that you define for your sales team. Access to portals and work objects in the application is determined by operator ID privileges. The ability to read, update, and create specific work objects is determined by the territory to which the work objects belong.
For more information about configuring territories and operators, see Set up your sales team structure.
Work object permissions
The application access privileges and territory permissions that you assign to operators in Pega Sales Automation determine how a user can interact with the work objects in the application.
- Operator privileges (role-based) give the user access to particular types of work objects in the application.
- Read, update, and create permissions for work objects are controlled by the territory that owns the work object.
For example, an operator with a Sales Representative role has access to opportunity work objects; however, to update an opportunity in the Northwest territory, you must grant the operator permission to update opportunity work objects in that territory.
- You can grant different levels of access to work objects within the same territory. For example, you can give a new operator read, update, and create access for lead and opportunity work objects in the Northwest territory, but only read access to organization objects in the same territory.
- A primary territory is defined and used as the default when new work objects are created. The owner of a work object has full access to the work object, regardless of territory access.
For more information, see Setting up persona-based access rights to the User portal navigation pane and Configuring permission access templates.
Attribute Based Control (ABAC)
Attribute Based Access Control (ABAC) controls row-level or column-level security through security policy rules available as part of the Pega Platform's ABAC feature.
For more information, see Attribute-based access control and Upgrading Pega Sales Automation to use attribute-based access control (ABAC).
Client-based access control (CBAC)
Implementing client-based access control (CBAC) helps you satisfy the data privacy requirements of the European Union (EU) General Data Protection Regulation (GDPR) and similar regulations.
For more information about GDPR and CBAC, see Supporting EU GDPR data privacy rights in Pega Infinity with client-based access control.
For more information about configuring CBAC in Pega Sales Automation, see CBAC section in the Pega Sales Automation Release Notes on the Pega Sales Automation product page. | https://docs.pega.com/pega-sales-automation-implementation-guide/86/implementing-security-model | 2022-01-16T18:36:18 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
Creating Customized Brand for SSA
From SSA 12.0 onwards, a new file, brand.css, has been introduced through which you can customize the banner and branding for SSA.
HTML Banner
This refers to the section that appears at the top of the application. You can use this space to add your organization's logo and apply colors in accordance with your brand guidelines.
For reference, refer to:
Customerconfigurations > analyst > theme > banner > default
Duplicate the default folder containing Pitney Bowes banner and edit the index.html file to suit your brand identity. You can create your own HTML banner as well. The HTML banner that you create would be referred in the brand.css file.
Brand.css
We have introduced a .css file, brand.css, to manage all branding-related aspects. Through this file you are given the ability to:
You can find the brand.css file in the default brand’s folder:
Customerconfigurations > analyst > theme > branding > default > brand.css
Or you can also download it from here.
Creating a New Brand
Creating a new brand Using brand.css to customize your brand
Changing the locator marker for the map in your custom brand
| https://docs.precisely.com/docs/sftw/analyst/2018.2/admin_guide/en/concepts/branding_banner.html?hl=brand | 2022-01-16T18:41:12 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['../images/pointer_locator.png', None], dtype=object)] | docs.precisely.com |
Pull Secret for GitLab
Secret, is a credential for proper permissions on repositories or registries.
Add a image pull-secretAdd a image pull-secret
This quickstart describes how we have/add a secret dedicated to pull an image from GitLab on PrimeHub.
PrimeHub supports to pull an
image from a Docker registry. But what if the registry is private only? Then we need a
secret to have proper privileges to do it.
Generate a deploy token on GitLabGenerate a deploy token on GitLab
We need a deploy token generated from GitLab as a secret on PrimeHub for accessing a private registry on GitLab. Luckily, GitLab has a nice guide instructing how to Creating a Deploy Token of a specific repo.
Following the guide to have a deploy token and keeping the pair of
Username and
Token safely since they are only displayed once otherwise we have to re-generate a token again.
Add a token as a secret on PrimeHubAdd a token as a secret on PrimeHub
Log in as an administrator and switch to Admin Portal.
Enter the
Secretsmanagement and click
+ Addfor adding a secret.
Fill
Name,
Display Name(Optional) and select
Type
Image Pull.
Fill
Registry Hostwith
registry.gitlab.com.
Fill
Usernameand
Passwordwith the token generated from GitLab.
Click
Confirmto save the secret.
Alright, we have added a pull secret for our private GitLab registry, then we are able to use it to pull images. Once we add an image description which is located on our private GitLab registry on PrimeHub, we have to select this secret as an
Image Pull Secret.
MISC.MISC.
We, of course, can add secrets not only from GitLab, but also secrets from other sources, here are some references. | https://docs.primehub.io/docs/quickstart/secret-pull-image | 2022-01-16T19:59:10 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.primehub.io |
Uber1. Log in to Uber
2. Settings2. Settings
Click on your profile and then Settings
3. Security3. Security
Click on Security
4. Two-step verification4. Two-step verification
Scroll down and select 2-step verification
5. Options to receive codes5. Options to receive codes
You will be show 2 options for receiving the code. Select Security App
6. How to access QR codes6. How to access QR codes
On the next screen, it will ask for a 6-digit code to enter. Instead, click on I need help
Then, click on the Set up with another device
7. Scan QR code7. Scan QR code
A QR code will be shown to submit. It should now show that Two-factor authentication is needed for future logins.
The next time you use Uber and are prompted for a One-time passcode code, you can use the Trusona app to log in.
| https://docs.trusona.com/totp/uber/ | 2022-01-16T18:57:15 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://docs.trusona.com/images/totp-integration-images/uber/step-2.png',
'Profile settings'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/uber/step-3.png',
'Click on Security'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/uber/step-4.png',
'Select 2-step verification'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/uber/step-5.png',
'Security App option'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/uber/step-6.png',
'Select I need help'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/uber/step-6-2.png',
'Set up with another device'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/uber/scan.png',
'Scanning the code'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/uber/step-7.png',
'Finalize'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/uber/step-8.png',
'2FA setup complete'], dtype=object) ] | docs.trusona.com |
UAB Galaxy RNA Seq Step by Step Tutorial
Latest revision as of 08:55, 16 September 2011
[edit] Transcriptome analysis via RNA-Seq
There are several types of RNA-Seq: transcriptome, splice-variant/TSS/UTR analysis, microRNA-Seq, etc. This tutorial will focus on doing a 2 condition, 1 replicate transcriptome analysis in mouse.
[edit] Background
[edit]
[edit]
[edit].
[edit] Check quality of READ data
At this step, we check the quality of sequencing. This was already covered in the DNA-Seq tutorial, under Galaxy_DNA-Seq_Tutorial#Assessing_the_quality_of_the_data.
[edit] Align using TopHat
For the moment, TopHat is the standard NGS aligner for transcript data, as it handles splicing. In addition, it can be set to detect indels relative to the reference genome.
We'll run TopHat once for each sample (twice, in this case), providing it with 2 FASTQ files each time (forward and reverse reads).
Menu: NGS: RNA Analysis > Tophat for Illumina
[edit] Tophat Inputs
-)
[edit] TopHat output
- accepted_hits (BAM, BAI)
- Two binary files: .BAM (data) and .BAI (index)
- These are the actual paired reads mapped to their position on the genome, and split across exon junctions. This can be visualized in IGV, IGB or UCSC,)
[edit] Alignment/Mapping QC
Next we check the quality of the.
[edit].
[edit].
[edit] Cufflinks outputs
-.
[edit] Cufflinks QC
- Check that all the FPKM values in these files are not zero.
- Check that the gene symbols show up where you want them. If all the gene_id's are "CUFF.#.#", it will be hard to figure things out.
- Visualize assembled_transcripts
[edit].
[edit].
[edit] cuffcompare outputs
- combined_transcripts.gtf
- For this exercise, we only use this one output, which will be the augmented genome annotation we'll use in our final step, cuffdiff.
[edit] Fold Change Between Conditions: CuffDiff
Cuffdiff does the real work determining differences between experimental conditions. In our case, we have two conditions (control, drugged), with 1 replicate each. However, cuffdiff can handle many conditions, each with several replicates, in one go.
[edit].
[edit].
[edit]. | https://docs.uabgrid.uab.edu/w/index.php?title=UAB_Galaxy_RNA_Seq_Step_by_Step_Tutorial&diff=cur&oldid=3279 | 2022-01-16T20:01:33 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.uabgrid.uab.edu |
Making a Record Unique
To change a record from a duplicate to a unique:
- In the MatchRecordType field, enter "Unique".
- When you are done modifying records, check the Approved box. This signals that the record is ready to be re-processed by Spectrum™ Technology Platform.
- To save your changes, click the Save button. | https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/DataQualityGuide/BusinessSteward/source/ExceptionEditor/DupeResolution_MakingRecordUnique.html | 2022-01-16T19:31:01 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.precisely.com |
Search tokenization
Tokenization is a process where input text is converted to tokens. Tokens are then stored in ElasticSearch and used to find matches against search terms. This makes it possible to adjust which documents are returned for a given search.
Prerequisites
VIP-CLI is installed and has been updated to the most current version.
Enterprise Search is installed.
Conversion to tokens occurs:
- When a document (e.g. post) is stored or updated in ElasticSearch.
- During a search, the search term is broken down into tokens.
Search term tokens are then compared against the document tokens to find a match. The tokenization process applies a more sophisticated set of rules during a search, which eliminate the need for full-text search and also returns more relevant document results.
Mapping
Mapping is a method for defining how ElasticSearch should understand and process individual fields within a document. Both dynamic mapping and explicit mapping are used to define data for ElasticSearch, and more in-depth explanations for both can be found in the ElasticSearch Guide.
This snippet is an example of a small part of a possible mapping section for a post:
{ ... "post_content": { "type": "text" }, "post_content_filtered": { "type": "text" }, "post_date": { "type": "date", "format": "yyyy-MM-dd HH:mm:ss" }, ... }
Analyzer
An analyzer is the main tool used to produce tokens and is a collection of rules that govern how tokenization will be executed. Enterprise Search defines a default custom analyzer that will be used if a field in mappings does not have an explicit analyzer.
More detailed information about the use of analyzers can also be found in the official ElasticSearch documentation.
How to retrieve settings
The
wp vip-search get-index-settings CLI command retrieves the current settings of an index:
vip @<app_id>.<environment> -- wp vip-search get-index-settings <indexable_type>
The default analyzer will be included in the results returned by the command.
In this example, the
get-index-settings command is run against the application and environment
@2891.develop, and is returning a specific section of the settings that sets the default analyzer:
$ vip @2891.develop -- wp vip-search get-index-settings post --format=json | jq .[][].settings.index.analysis.analyzer.default { "filter": [ "ep_synonyms_filter", "ewp_word_delimiter", "lowercase", "stop", "ewp_snowball" ], "char_filter": [ "html_strip" ], "language": "english", "tokenizer": "standard" }
Unless the predefined behavior has been modified, the configuration of a site’s index default analyzer will be the same as the example above.
How to retrieve settings for a specific filter
Settings for individual filters can be queried with the
get-index-settings command. In this example, settings for the
ep_synonyms_filter are queried:
$ vip @2891.develop -- wp vip-search get-index-settings post --format=json | jq .[][].settings.index.analysis.filter.ep_synonyms_filter
The settings returned for the example command above:
{ "type": "synonym_graph", "lenient": "true", "synonyms": [ "sneakers, tennis shoes, trainers, runners", "shoes => sneaker, sandal, boots, high heels" ] }
The example output above shows that the built-in synonym_graph is serving as a base for
ep_synonyms_filter. Also shown in the returned output are examples of defined synonyms.
This example demonstrates two important things:
- Collection of tokens is not a list, but rather a graph that can branch out and connect back in.
- Filters are not limited to one-to-one conversations. For example, two tokens
"tennis" "shoes"can become one –
"sneakers"and vice-versa.
Filters
Filters—specifically token filters—are the most useful tool for increasing search relevancy.
Token filters parse the tokens that were generated by the previous filter, and produce new tokens based on their settings. Every filter will receive the result of the previous filter. Because of this, the order of filters will have an impact on results in most cases.
In an earlier example, the filter settings returned by
get-index-settings were:
"filter": [ "ep_synonyms_filter", "ewp_word_delimiter", "lowercase", "stop", "ewp_snowball" ],
The
filter keyword sets the filters in the settings, and the filters are listed in the order which they are applied by the analyzer.
The filters listed in the example above and the filters defined below do not represent the complete list of all available filters. ElasticSearch has several additional built-in filters that can be configured to define a custom filter.
ep_synonyms_filter
The
ep_synonyms_filter custom filter allows the analyzer to handle synonyms, including multi-word synonyms.
{ "type": "synonym_graph", "lenient": "true", "synonyms": [ "sneakers, tennis shoes, trainers, runners", "shoes => sneaker, sandal, boots, high heels" ] }
In the above example, the built-in synonym_graph is set as a base for
ep_synonyms_filter. The tokens
"green" "tennis" "shoes" transform into
"green" ("sneakers"|"tennis" "shoes"|"trainers"|"runners").
The above settings allow a search query for
blue sneakers to return the same results as a query for
blue tennis shoes.
ewp_word_delimiter
The
ewp_word_delimiter custom filter uses the base word_delimeter filter serves as a tool to break down composed terms or words to tokens based on several rules..
{ "type": "word_delimiter", "preserve_original": "true" }
As an example,
"WordPress" will be broken down into two terms
"Word" "Press". This allows the search term
"Word" to match a
"WordPress" document.
lowercase
Lowercase is a built-in filter that converts all letters to lowercase, enabling search to become case insensitive.
As an example of the importance of the order of filters, if the
lowercase filter was applied before
ewp_word_delimiter, the term
"WordPress" would not be split into
"Word" "Press". The
lowercase filter would convert
"WordPress" to
"wordpress" before it was passed to the
ewp_word_delimiter filter, so the rule to split tokens at letter case transitions would not apply.
stop
The
stop filter removes a predefined list of stop word lists for several languages when applied. For the English language, stop words include
a or
the, for example. Removing these words from the token collection helps documents that are more relevant to a search term to score higher than documents that have large numbers of stop words in them.
ewp_snowball
The base snowball filter stems words into their basic form. For example,
"jumping" "fox" will be converted to
"jump" "fox".
{ "type": "snowball", "language": "english" }
The result of this filter if applied, is that if one document contains
"jumping bear", and another document contains
"high jump" in its content. They will both score the same for the search term
"jumping".
Customizing the analyzer
Several ElasticPress filters are available for customizing the way the default analyzer operates. Added customizations are only applied when the index is created, which requires the
--setup flag.
Note
Recreating the index with the
--setup option will completely drop and recreate the index. Therefore either partial results, or no results, will be returned during the time an index is in progress.
wp vip-search index --setup
Changing default filters
The
ep_default_analyzer_filters WordPress filter returns an array of filters that will be applied to the default analyzer.
This filter can be used to modify the list of token filters that will be applied to the default analyzer.
For example, to make the search case sensitive remove the
lowercase filter:
add_filter( 'ep_default_analyzer_filters', function ( $filters ) { if ( ( $key = array_search( 'lowercase', $filters ) ) !== false ) { unset( $filters[ $key ] ); } return $filters; } );
Be aware that ElasticSearch adds the
ep_synonyms_filter token filter as the first item in the array of filters returned by
ep_default_analyzer_filters.
So for example the following code:
add_filter( 'ep_default_analyzer_filters', function ( $filters ) { return array('lowercase', 'stop'); } );
will actually return the array:
array( 'ep_synonyms_filter', 'lowercase', 'stop')
Changing language
Many of the filters for customization are language-dependent. By default, the
ep_analyzer_language filter is used to change the stemming Snowball token filter, and a limited number of accepted languages can be applied.
For example, to switch language to French:
add_filter( 'ep_analyzer_language', function( $language, $context ) { return 'french'; }, 10, 2 );
Define synonyms
A list of synonyms can be defined by implementing
ep_synonyms filter.
Note that the group of synonyms is each a single string separated with commas
,.
add_filter( 'ep_synonyms', function() { return array( 'vip,automattic', 'red,green,blue' ); } );
To disable synonyms return an empty array. The
ep_synonyms_filter will then not be created nor used in the default analyzer.
add_filter( 'ep_synonyms', function() { return array(); } );
Define custom filters
Custom filters can be added by extending an existing filter.
The
ep_<indexable>_mapping WordPress filter allows mappings to be modified after its generation, but before publishing.
This is a kind of catch-all filter that can be used to make any adjustments that do not already have a custom WordPress filter defined.
In the following example, the
ep_post_mapping filter is used to define the custom token filter
my_custom_word_delimiter. This variant of the filter will not split on case change, nor on
_ or
-. Adding this to
ep_default_analyzer_filters will ensure that the filter is added to the default analyzer and is applied in the correct order of the list of token filters.
add_filter( 'ep_post_mapping', function ( $mapping ) { $mapping['settings']['analysis']['filter']['my_custom_word_delimiter'] = [ 'type' => 'word_delimiter_graph', 'preserve_original' => true, 'split_on_case_change' => false, 'type_table' => array( '_ => ALPHA', '- => ALPHA' ), ]; return $mapping; } ); add_filter( 'ep_default_analyzer_filters', function() { return array( 'my_custom_word_delimiter', 'lowercase', 'stop', 'kstem' ); } ); | https://docs.wpvip.com/how-tos/vip-search/search-tokenization/ | 2022-01-16T18:55:25 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.wpvip.com |
Service Overview¶
The Basics¶
Genesis Storage is an object storage account in our Genesis Public Cloud. Once provisioned, we will provide S3 credentials and bill for storage on-demand at current Genesis Public Cloud object storage rates.
Alternatively, you may sign-up for a Genesis Public Cloud account and provision the S3 credentials which is describe in the S3 compatibility section of Using OpenStack. | https://docs.genesishosting.com/genesis-storage/service-overview.html | 2022-01-16T18:13:08 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.genesishosting.com |
Loss of signal occurs when New Relic stops receiving data for a while; technically, we detect loss of signal after a significant amount of time has elapsed since data was last received in a time series. Loss of signal can be used to trigger or resolve a violation, which you can use to set up alerts.
Gap filling can help you solve issues caused by lost data points. When gaps are detected between valid data points, we automatically fill those gaps with replacement values, such as the last known values or a static value. Gap filling can prevent alerts from triggering or resolving when they shouldn't.
Tip
The alerts system fills gaps in actively reported signals. This signal history is dropped after 2 hours of inactivity. For gap filling, data points received after this period of inactivity are treated as new signals.
To learn more about signal loss, gap filling, and how to request access to these features, see this Explorers Hub post.
You can customize loss of signal detection and gap filling using NerdGraph. For example, you can configure how long to wait before considering the signal lost, or what value should be used for filling gaps in the time series. Here are some queries and examples you can use in our NerdGraph API explorer.
In this guide we cover the following:
Customize your loss of signal detection
Loss of signal detection opens or closes violations if no data is received after a certain amount of time. For example, if you set the duration of the expiration period to 60 seconds and an integration doesn't seem to send data for more than a minute, a loss of signal violation would be triggered.
You can configure the duration of the signal loss and whether to open a violation or close it by using these three fields in NerdGraph:
expiration.expirationDuration: How long to wait, in seconds, after the last data point is received by our platform before considering the signal as lost. This is based on the time when data arrives at our platform and not on data timestamps. The default is to leave this null, and therefore this wouldn't enable Loss of Signal Detection.
expiration.openViolationOnExpiration: If
true, a new violation is opened when a signal is lost. Default is
false. To use this field, a duration must be specified.
expiration.closeViolationsOnExpiration: If
true, open violations related to the signal are closed on expiration. Default is
false. To use this field, a duration must be specified.
View loss of signal settings for an existing condition
Existing NRQL conditions may have their loss of signal settings already configured. To view the existing condition settings, select the fields under
nrqlCondition >
expiration:
{ actor { account(id: YOUR_ACCOUNT_ID) { alerts { nrqlCondition(id: NRQL_CONDITION_ID) { ... on AlertsNrqlStaticCondition { id name nrql { query } expiration { closeViolationsOnExpiration expirationDuration openViolationOnExpiration } } } } } } }
You should see a result like this:
{ "data": { "actor": { "account": { "alerts": { "nrqlCondition": { "expiration": { "closeViolationsOnExpiration": false, "expirationDuration": 300, "openViolationOnExpiration": true }, "id": "YOUR_ACCOUNT_ID", "name": "Any less than - Extrapolation", "nrql": { "query": "SELECT average(value) FROM AlertsSmokeTestSignals WHERE wave_type IN ('min-max', 'single-gap') FACET wave_type" } } } } } }, ...
Create a new condition with loss of signal settings
Let's say that you want to create a new create a NRQL static condition that triggers a loss of signal violation after no data is received for two minutes. You would set
expirationDuration to 120 seconds and set
openViolationOnExpiration to
true, like in the example below.
mutation { alertsNrqlConditionStaticCreate( accountId: YOUR_ACCOUNT_ID policyId: YOUR_POLICY_ID condition: { name: "Low Host Count - Catastrophic" enabled: true nrql: { query: "SELECT uniqueCount(host) from Transaction where appName='my-app-name'" } signal { aggregationWindow: 60 aggregationMethod: EVENT_FLOW aggregationDelay: 120 } terms: [{ threshold: 2 thresholdOccurrences: AT_LEAST_ONCE thresholdDuration: 600 operator: BELOW priority: CRITICAL }] valueFunction: SINGLE_VALUE violationTimeLimitSeconds: 86400 expiration: { expirationDuration: 120 openViolationOnExpiration: true } } ) { id name } }
Update the loss of signal settings of a condition
What if you want to update loss of signal parameters for an alert condition? The following mutation allows you to update a NRQL static condition with new
expiration values.
mutation { alertsNrqlConditionStaticUpdate( accountId: YOUR_ACCOUNT_ID id: YOUR_STATIC_CONDITION_ID condition: { expiration: { closeViolationsOnExpiration: BOOLEAN expirationDuration: DURATION_IN_SECONDS openViolationOnExpiration: BOOLEAN } } ) { id expiration { closeViolationsOnExpiration expirationDuration openViolationOnExpiration } } }
Customize gap filling
Gap filling replaces gap values in a time series with either the last value found or a static, arbitrary value of your choice. We fill gaps only after another data point has been received after the gaps in signal (after data reception has been restored).
You can configure both the type of filling and the value, if the type is set to static:
signal.fillOption: Type of replacement value for lost data points. Values can be:
NONE: Gap filling is disabled.
LAST_VALUE: The last value seen in the time series.
STATIC: An arbitrary value, defined in
fillValue.
signal.fillValue: Value to use for replacing lost data points when
fillOptionis set to
STATIC.
Important
Gap filling is also affected by
expiration.expirationDuration. When a gap is longer than the expiration duration, the signal is considered expired and the gap will no longer be filled.
For example, here's how to create a static NRQL condition with gap filling configured:
mutation { alertsNrqlConditionStaticCreate( accountId: YOUR_ACCOUNT_ID policyId: YOUR_POLICY_ID condition: { enabled: true name: "Example Gap Filling Condition" nrql: { query: "select count(*) from Transaction" } terms: { operator: ABOVE priority: CRITICAL threshold: 1000 thresholdDuration: 300 thresholdOccurrences: ALL } valueFunction: SINGLE_VALUE violationTimeLimitSeconds: 28800 signal: { aggregationWindow: 60, aggregationMethod: EVENT_FLOW, aggregationDelay: 120, fillOption: STATIC, fillValue: 1 } } ) { id } } | https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/advanced-alerts/alerts-nerdgraph/nerdgraph-api-loss-signal-gap-filling | 2022-01-16T20:05:17 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.newrelic.com |
This doc explains the difference between Titles and Short titles. It also delves into how these are affected and affect Paths and Redirects.
Titles
Titles are the title rendered at the top of each doc. The title of this doc is Titles vs Short Titles vs Path vs Redirects.
The title of a document is define in the front matter. This doc's title is defined below:
---title: Titles vs Short Titles vs Path vs Redirects---
Short titles
Short titles are displayed in the left nav. These are used in cases where the title is a bit too long to be easily scanned. The Short title of this doc is Titles vs Short titles.
The Short title is defined in the corresponding navigation file. This doc's short title is define in
src/content/nav/style-guide.yml:
- title: Tablespath: /docs/style-guide/quick-reference/tables
Read more about editing the left nav.
Paths and Redirects
The path and file name of a specific doc has no bearing on the title of that doc. The path for this doc is
/docs/style-guide/quick-reference/tables. Notice that although the file is named
tables, neither the title or the short title is called
Titles.
Redirects fully ignore the title, short title, and path of a doc. It will redirect you to wherever the redirect points. Read more about redirects. | https://docs.newrelic.com/docs/style-guide/formatting/titles | 2022-01-16T18:54:21 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.newrelic.com |
Context Dictionary and context switching
The Context Dictionary identifies the business entities that you wish to make a decision on as well as the associated data that is used in support of these decisions. Defining a multi-level context, allows users to define Engagement policies and Actions at the appropriate level based on their customer data structure. The Context Dictionary manages many of the runtime artifacts and rules that retrieve customer profile data for inbound and outbound channels. Strategy Canvas supports switching contexts, simplifying strategy configuration.
| https://docs.pega.com/pega-customer-decision-hub-product-overview/86/context-dictionary-and-context-switching | 2022-01-16T19:36:38 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://docs.pega.com/sites/default/files/drupal/dita/out/1811/strategic-apps-product-overview/Content/PegaMarketing/Resources/context-dictionary.png',
None], dtype=object) ] | docs.pega.com |
Value Finder requirements for data sets
If your organization is currently still on a Pega Customer Decision Hub version earlier than 8.6 that does not include Value Finder, you can import a data set with strategy results and customer attributes to an 8.6 environment and analyze the data there by running a Value Finder simulation test.
Ensure that your data set meets the import requirements so that Value Finder can analyze the data. For example, the data set must contain all strategy results and appropriate customer attributes that Value Finder can use to describe under-served customers. The data set must also contain the data for at least the eligibility and arbitration stages, as well as the propensities from adaptive models.
Since the structure of a strategy framework can vary and propensity values are not always available at the eligibility stage, you might have to create a new strategy to obtain the necessary eligibility and arbitration stage results from your specific strategy framework, and to attach the necessary propensities.
Best practices
When creating a data set for Value Finder analysis, follow these guidelines:
- Run a simulation and save the strategy results for all customers. Then, join these strategy results with customer data. Use a data flow to merge these sources and to ensure that there is at least one entry for every customer in the original audience.
- Do not mix inbound and outbound results as propensity ranges can vary.
- If applicable, include multiple records for the same customer ID. There may be multiple records for the same customer ID because:
- The Next-Best-Action Designer strategy consists of different stages: eligibility, applicability, suitability, and arbitration. The data set must include a pyStage column that contains at least the Eligibility and Arbitration values. The other stages are optional. If a stage is empty, this indicates that all actions were filtered out because there are no strategy results for that customer after eligibility.
- A customer can have one or more strategy results (actions or treatments) at each of these stages.
- Include a pyPropensity column that indicates the propensity for each of the strategy results obtained from the adaptive model.
Required fields
Include the following fields in the data set:
- CustomerID
- pyStage – In this column, include at least two values: Eligibility and Arbitration. You can also leave it empty when there are no actions for a customer after eligibility.
- pyPropensity – In this column, include the propensities as obtained from the adaptive model.
- Customer attributes:
- Include any fields that are useful in describing groups of under-served customers, for example, age, gender, or current product holdings.
- You can exclude customer attributes such as names, addresses, or other uniquely identifiable data. A Value Finder analysis does not require this data.
Optional fields
If you use the following fields in the prioritization, you can include them in the data set. Value Finder does not use these fields for analysis, but they can be useful for further insights:
- pyChannel
- pyDirection
- pyGroup
- pyName
- pyTreatment
- pyValue
Tabular data example
The following table shows a sample data set which Value Finder can analyze to identify gaps in customer engagement. A data set must include all customers, so that Value Finder knows how large the original population is and can identify how many customers are left without actions after eligibility.
If the decision strategy returns a result for a customer at every stage of the decision strategy funnel, include all the results in the table. If a customer does not pass the eligibility stage, you can leave the result for the stage empty, so that Value Finder knows how many customers are left without actions after eligibility.
There are multiple records for some customer IDs. For example, there are three records for customer 1, which represent three strategy results. This customer is eligible for two actions (OfferX and OfferY), one of which was selected for the customer at the arbitration stage (OfferX). | https://docs.pega.com/pega-customer-decision-hub-user-guide/86/value-finder-requirements-data-sets | 2022-01-16T20:10:40 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
Build notes: Pega Synchronization Server 3.1
Build notes describe the changes that are included in each build that is created for the Pega Synchronization Server. Before you install one of these builds, familiarize yourself with the new features, resolved issues, and other changes. The Pega Synchronization Server stores builds for the following products:
The Pega Synchronization Server is included with every Pega Robot Runtime build and its version number is incremented to match, regardless of whether functional changes were made. This article notes the builds that include functional changes, and any specific Pega Synchronization Engine requirements. For more information, see Pega Synchronization Sever and Package Server User Guide.
To download a Pega Robot Runtime build, see Download Pega PRA software. For information about system requirements, review the installation instructions.
Summary of changes
The following table lists the changes that are included in the various Pega Synchronization Server builds: | https://docs.pega.com/pega-rpa/build-notes-pega-synchronization-server-31 | 2022-01-16T20:27:19 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
Identifies the column or set of columns by which the dataset is sorted.
-.
This page has no comments. | https://docs.trifacta.com/display/r082/order_sort+Parameter | 2022-01-16T19:28:08 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.trifacta.com |
JetBrains1. Log in to JetBrains
2. Licenses2. Licenses
Locate the Licenses tab to the far left of the navigation bar and click on it.
3. Turning on Two-factor authentication3. Turning on Two-factor authentication
You should have a notification on the screen informing you that Two-factor authentication is available. Click on the hyperlink turn on two-factor auth
4. Enable Two-factor authentication & Password4. Enable Two-factor authentication & Password
Click on the Enable 2FA button
You will then be asked to re-enter your account password. Enter it and then click on Next
>>IMAGE Two-factor authentication is switched on
Setup complete! The next time you log in to JetBrains and are prompted for a One-time passcode, you can use the Trusona app to log in.
| https://docs.trusona.com/totp/jetbrains/ | 2022-01-16T19:52:23 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://docs.trusona.com/images/totp-integration-images/jetbrains/step-2.png',
'Licenses'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/jetbrains/step-3.png',
'Turn on two-factor auth'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/jetbrains/step-4.png',
'Enable 2FA'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/jetbrains/step-4-2.png',
'Re-enter password'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/jetbrains/step-5.png',
'Scanning the code'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/jetbrains/step-6.png',
'Finalize'], dtype=object) ] | docs.trusona.com |
The Zendesk Support source supports both Full Refresh and Incremental syncs. You can choose if this connector will copy only the new or updated data, or all rows in the tables and columns you set up for replication, every time a sync is run.
This source can sync data for the Zendesk Support API.
This Source Connector is based on a Singer Tap.
This Source is capable of syncing the following core Streams:
The connector is restricted by normal Zendesk requests limitation.
The Zendesk connector should not run into Zendesk API limitations under normal usage. Please create an issue if you see any rate limit issues that are not automatically retried successfully.
Zendesk API Token
Zendesk Email
Zendesk Subdomain
Generate a API access token using the Zendesk support
We recommend creating a restricted, read-only key specifically for Airbyte access. This will allow you to control which resources Airbyte should be able to access. | https://docs.airbyte.io/integrations/sources/zendesk-support | 2021-04-11T01:03:35 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.airbyte.io |
Basic monitoring
Show graphs
For a more visual approach when monitoring status, you can activate a set of autogenerated graphs to better analyze the performance markers associated with each machine. You just need to click the Show graphs toggle of the corresponding row. You can also click the Show all graphs toggle at the top if you want to show all machines' graphs.
The graphs you can see here represent each machine main parameters (load, CPU, memory and disk) during the last 24 hours.
Create Alerts
To enhance the monitoring process and help you identify performance issues, this application offers you the possibility to create alerts to be triggered whenever certain thresholds are exceeded. The procedure is as follows:
- Show graphs as shown in the section above.
- Click the New Alert button that appears in the machine row upon graphs activation.
- Specify all the required information and click Add Alert.
Add tags
If you have several machines to monitor, you can add tags to better identify them.
Be aware that tags are domain-dependent. This means that every user in the domain with access to the application will be able to see and use the tags you apply.
- Show graphs as shown in the section above.
- Click inside the Add Tag field, type the name you want for your tag and press ENTER.
- The tag is created and applied to the current machine. All tags applied will appear in the Filter Tags area next to the Add Tag field.
- If you want to remove a tag, you just need to click the cross of the corresponding tag.
Show filters
Once you have added your tags, you can filter the machines by tag to quickly spot them.
- Click the Show Filter toggle at the top. The Filter Tags area shows the tags that are applied to at least one machine.
- Click one of the tags to show only those machines with that tag. The tag will turn green to indicate it is being used as a filter.
If you click several tags, only those machines containing both tags will be shown.
- Click a tag again to stop using it as a filter or click Clear Filter at the top to stop using all of them as filters. | https://docs.devo.com/confluence/ndt/applications/systems-monitoring/basic-monitoring | 2021-04-11T01:36:15 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.devo.com |
In the Walkthrough Video create screen, there are two video options you can create - walking video or slideshow video. Walking creates a video that walks you through connected panoramas and slideshow creates a video that rotates you 360 degrees on each selected panorama.
In this tutorial, you will learn how to create a slideshow video that spins 360 degree on each of your panoramas. Please follow the following steps.
Step 1: Get started
1. Select a tour and click the Tools button.
2. In the Tools screen, click on the Walkthrough Video.
3. It will bring you to the Walkthrough Video create screen where you can start working on your slideshow video.
4. In the Walkthrough Video create screen, there are two video options you can create - walking video or slideshow video. In this case you can select slideshow.
Step 2: Start creating the slideshow video
1. Start the Slideshow video by clicking on the map the first panorama you want the video to start with. Continue the video by clicking on any dot. slideshow video. GoThru offers a variety of Royalty Free Music, but you could also upload your custom music.
7. Select the duration for one panorama to rotate between 6 and 60 seconds and then choose your preferred video size.
8. It also allows you to add your custom front and back video slide using an image, at the specified size 1920 x 1080 pixels.
9. After completing all of the video setups, click Create Video for the slideshow in the video tutorial below. | https://docs.gothru.co/how-to-create-a-slideshow-video-that-spins-360-degree-on-each-of-your-panoramas/ | 2021-04-11T01:55:41 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.gothru.co |
Templates
Templates allow you to define the generic document structure (such as a welcome email message or a quote PDF), which is then converted into an actual document based on the provided data.
If you wish to learn more about a specific topic, refer to the subsections under.
Creating a template
Templates are created and managed in the Corteza Admin.
Navigate to theand click on the New button in the top-right corner.
Enter the basic information; short name, handle, description, and template type. The template type determines the template format and implicitly implies what document types the template can render to.
A partial template is used as part of another template (for example, a generic header or a footer) and can not be rendered independently. You can convert the template from and to a partial template.
Click on the Submit button to prepare your template.
After you submit the base parameters, three new sections will appear.
The content
Let’s start by copying the default HTML sample from the toolbox.
<!DOCTYPE html> <html> <head> <meta charset='utf-8'> <meta http- <title>Title</title> <meta name='viewport' content='width=device-width, initial-scale=1'> </head> <body> <h1>Hello, world!</h1> </body> </html>
If you don’t need any dynamic content (different name for different contacts), you can stop. The above template is valid and can already be used.
If you need dynamic content, we need to cover a few more topics.
Value interpolation
Value interpolation allows you to define some placeholder that is then replaced with an actual value when the template is rendered.
In our case, this placeholder looks like this:
{{.name}}
The value will replace the above placeholder under the
name property from the provided
value object.
Let’s look at some examples.
Each example firstly defines the
value object and then the placeholder.
{ "name": "Jane" } {{.name}}
{ "contact": { "details": { "name": "Jane" } } } {{.contact.details.name}}
A complete example would look like this:
{ "contact": { "details": { "firstName": "Jane", "lastName": "Doe" } } }
<!DOCTYPE html> <html> <head> <meta charset='utf-8'> <meta http- <title>Title</title> <meta name='viewport' content='width=device-width, initial-scale=1'> </head> <body> <h1>Hello, {{.contact.details.firstName}} {{.contact.details.lastName}}!</h1> </body> </html>
Conditional rendering
Conditional rendering allows you to show or hide sections of the rendered document based on the input parameters.
ifstatement:
{{if condition}} The condition was true. {{end}}
if-elsestatement:
{{if condition}} The condition was true. {{else}} The condition was false. {{end}}
if-else if-elsestatement:
{{if condition}} The condition was true. {{else if condition}} The other condition was true. {{else}} Neither conditions were true. {{end}}
The
condition part is an expression that returns a single boolean value.
An example of an expression:
{{if .lead.cost > 1000}} The lead {{.lead.name}} was expensive! {{end}}
Handling lists
Our templates make it quite easy to work with lists of items. For example, you would like to generate a quote with lots of line items.
The syntax for iterating over a list looks like this:
{{range .listOfItems}} {{.itemName}}; {{.itemCost}}$ {{end}}
If you prefer to specify what variable the current item is stored into, use this syntax:
{{range $index, $item := .ListOfItems}} {{$item.itemName}}; {{$item.itemCost}}$ {{end}}
Using functions
Sometimes you will need to process the data further before it is rendered to the document.
Some lighter processing can be handled directly by the template engine. More complex processing should be handled by the code that is requesting to render the template.
{{functionName arg1 arg2 ... argN}}
The passed argument can be a constant or a property from the provided data.
You can also chain functions. When two functions are chained, the left function’s output is passed into the argument of the right function.
{{funcA | funcB | ... | funcN}}
Using partials
Partials allow you to keep your documents consistent by using common headers and footers. Partials can also be useful when displaying Cortezas' resources, such as displaying a record in a table.
{{template "partial_handle"}}
The
partial_handle is the handle you used when you defined the partial.
For example:
{{template "email_general_header"}}
If your partial needs to access some data that you provided to the current template (the one that uses the partial), you need to provide the second argument to the partial inclusion process.
{{template "email_general_header" .property.to.pass}}
For example, if our data looks like this:
{ "contact": { "values": {...} }, "account": { "values": {...} } }
If you want to pass the
contact to the partial, you would include your partial like so:
{{template "partial_handle" .contact}}
If you wanted to pass all of the data to your partial, you would include your partial like so:
{{template "partial_handle" .}}
You would access the
contact in your partial like so:
{{/* In case of the first example */}} Hello {{.values.FirstName}} {{/* In case of the second example */}} Hello {{.contact.values.LastName}} of the {{.account.values.Name}}
The preview
The preview section at the bottom of the page allows you to check how your documents will look like once the template is rendered.
The input box should contain a valid JSON object (render payload) with two root properties;
variables and
options:
{ "variables": {},(1) "options": {}(2) }
{ "variables": { "param1": "value1", "param2": { "nestedParam1": "value2" } }, "options": { "documentSize": "A4", "contentScale": "1", "orientation": "portrait", "margin": "0.3" } } | https://docs.cortezaproject.org/corteza-docs/2021.3/integrator-guide/templates/index.html | 2021-04-11T00:12:23 | CC-MAIN-2021-17 | 1618038060603.10 | [] | docs.cortezaproject.org |
- Reference >
- Database Commands >
- Role Management Commands >
- updateRole
updateRole¶
On this page
Definition¶
updateRole¶
Updates a user-defined role. The
updateRolecommand must run on the role’s database.
An update to a field completely replaces the previous field’s values. To grant or remove roles or privileges without replacing all values, use one or more of the following commands:
Warning
An update to the
privilegesor
rolesarray completely replaces the previous array’s values.
The
updateRolecommand uses the following syntax. To update a role, you must provide the
privilegesarray,
rolesarray, or both:
The
updateRolecommand has the following fields:
Roles¶
In the
roles field, you can specify both
built-in roles and user-defined
roles.
To specify a role that exists in the same database where is an example of the
updateRole command that
updates the
myClusterwideAdmin role on the
admin database.
While the
privileges and the
roles arrays are both optional, at least
one of the two is required:
To view a role’s privileges, use the
rolesInfo command. | https://docs.mongodb.com/master/reference/command/updateRole/ | 2019-08-17T13:56:34 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.mongodb.com |
Use etcd data as a Pillar source
New in version 2014.7.0.
python-etcd
In order to use an etcd server, a profile must be created in the master configuration file:
my_etcd_config: etcd.host: 127.0.0.1 etcd.port: 4001
After the profile is created, configure the external pillar system to use it. Optionally, a root may be specified.
ext_pillar: - etcd: my_etcd_config ext_pillar: - etcd: my_etcd_config root=/salt
Using these configuration profiles, multiple etcd sources may also be used:
ext_pillar: - etcd: my_etcd_config - etcd: my_other_etcd_config
The
minion_id may be used in the
root path to expose minion-specific
information stored in etcd.
ext_pillar: - etcd: my_etcd_config root=/salt/%(minion_id)s
Minion-specific values may override shared values when the minion-specific root appears after the shared root:
ext_pillar: - etcd: my_etcd_config root=/salt-shared - etcd: my_other_etcd_config root=/salt-private/%(minion_id)s
Using the configuration above, the following commands could be used to share a key with all minions but override its value for a specific minion:
etcdctl set /salt-shared/mykey my_value etcdctl set /salt-private/special_minion_id/mykey my_other_value
salt.pillar.etcd_pillar.
ext_pillar(minion_id, pillar, conf)¶
Check etcd for all data | https://docs.saltstack.com/en/develop/ref/pillar/all/salt.pillar.etcd_pillar.html | 2019-08-17T12:37:26 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.saltstack.com |
Installation – Fatal Error: Unable to Determine Previous Install Path
Error/Message
Fatal Error: Unable to determine previous install path. DataKeeper cannot be uninstalled or reinstalled.
Description
When performing a “Repair” or “Uninstall” of DataKeeper, the “ExtMirrBase” value is missing in the installation path of DataKeeper in the registry under HKLM\System\CurrentControlSet\Control\Session Manager\Environment.
Suggested Action
Perform one of the following:
• Under the Environment key, create “ExtMirrBase” as a REG_SZ and set the value to the DataKeeper installation path (i.e. C:\Program Files(x86)\SIOS\DataKeeper).
• To force InstallShield to perform a new install of DataKeeper, delete the following registry key:
HKLM\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\
{B00365F8-E4E0-11D5-8323-0050DA240D61}.
This should be the installation key created by InstallShield for the DataKeeper product.
このトピックへフィードバック | http://docs.us.sios.com/sps/8.6.3/ja/topic/unable-to-determine-previous-install-path | 2019-08-17T12:40:14 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.us.sios.com |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
Envoy Image
AWS App Mesh is a service mesh based on the Envoy proxy.
After you create your service mesh, virtual nodes, virtual routers, routes, and virtual
services, you add the following App Mesh Envoy container image to the ECS task or
Kubernetes pod
represented by your App Mesh virtual nodes. You can replace the
Region with any Region that App Mesh is supported in. For a list of supported regions,
see AWS Service Endpoints.
840364872350.dkr.ecr.
us-west-2.amazonaws.com/aws-appmesh-envoy:v1.11.2.0-prod
You must use the App Mesh Envoy container image until the Envoy project team merges changes that support App Mesh. For additional details, see the GitHub roadmap issue.
Envoy Configuration Variables
The following environment variables enable you to configure the Envoy containers for your App Mesh virtual node task groups.
Required Variables
The following environment variable is required for all App Mesh Envoy containers.
APPMESH_VIRTUAL_NODE_NAME
When you add the Envoy container to a task group, set this environment variable to the name of the virtual node that the task group represents: for example,
mesh/.
meshName/virtualNode/
virtualNodeName
Optional Variables
The following environment variable is optional for App Mesh Envoy containers.
ENVOY_LOG_LEVEL
Specifies the log level for the Envoy container.
Valid values:
trace,
debug,
info,
warning,
error,
critical,
off
Default:
info
AWS X-Ray Variables
The following environment variables help you to configure App Mesh with AWS X-Ray. For more information, see the AWS X-Ray Developer Guide.
ENABLE_ENVOY_XRAY_TRACING
Enables X-Ray tracing using
127.0.0.1:2000as the default daemon endpoint.
XRAY_DAEMON_PORT
Specify a port value to override the default X-Ray daemon port.
DogStatsD Variables
The following environment variables help you to configure App Mesh with DogStatsD. For more information, see the DogStatsD documentation.
ENABLE_ENVOY_DOG_STATSD
Enables DogStatsD stats using
127.0.0.1:8125as the default daemon endpoint.
STATSD_PORT
Specify a port value to override the default DogStatsD daemon port.
ENVOY_STATS_SINKS_CFG_FILE
Specify a file path in the Envoy container file system to override the default DogStatsD configuration with your own. For more information, see config.metrics.v2.DogStatsdSink in the Envoy documentation.
Envoy Stats Variables
The following environment variables help you to configure App Mesh with Envoy Stats. For more information, see the Envoy Stats documentation.
ENABLE_ENVOY_STATS_TAGS
Enables the use of App Mesh defined tags
appmesh.meshand
appmesh.virtual_node. For more information, see config.metrics.v2.TagSpecifier in the Envoy documentation.
ENVOY_STATS_CONFIG_FILE
Specify a file path in the Envoy container file system to override the default Stats tags configuration file with your own.
Access Logs
When you create your virtual nodes, you have the option to configure Envoy access logs. In the console, this is in the Advanced configuration section of the virtual node create or update workflows.
The above image shows a logging path of
/dev/stdout for Envoy access
logs. The code block below shows the JSON representation that you could use in the
AWS CLI.
"logging": { "accessLog": { "file": { "path": "
/dev/stdout" } } }
When you send Envoy access logs to
/dev/stdout, they are mixed in
with the Envoy container logs, so you can export them to a log storage and processing
service
like CloudWatch Logs using standard Docker log drivers (such as
awslogs).
To export only the Envoy access logs (and ignore the other Envoy container logs),
you can set
the
ENVOY_LOG_LEVEL to
off. For more information, see Access
logging in the Envoy documentation. | https://docs.aws.amazon.com/ja_jp/app-mesh/latest/userguide/envoy.html | 2019-10-14T03:57:35 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['images/proxy.png', None], dtype=object)
array(['images/logging.png', None], dtype=object)] | docs.aws.amazon.com |
In this section you may explore and five followup to all your contacts that have been imported into your FeelBack account.
You may add contacts to your people module by the following methods:
By Import
When you import your contacts using our importer section you may upload multiple contacts at once. Use this if you need to add contacts for sending them FeelBack campaigns.
By FeelWeb
When you integrate your website or webapp to FeelBack using our widget you may add contact information thru our FeelWeb Advanced Settings.
What's Next
Explore this module thru this links: | https://docs.feelback.io/docs/introduction-to-people | 2019-10-14T03:33:03 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.feelback.io |
Contents Performance Analytics and Reporting Previous Topic Next Topic Use data from fields in related tables in a report Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Use data from fields in related tables in a report Watch video to learn how to use dot walking, dynamic filters, and database views to access data on related tables. How to use dot-walking and database views to include data from related tables in reports. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-performance-analytics-and-reporting/page/use/reporting/concept/c_HowToAccessRelatedTables.html | 2019-10-14T04:19:53 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.servicenow.com |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
DeleteMessage
Deletes the specified message from the specified queue. To select the message to
delete, use the
ReceiptHandle of the message (not the
MessageId which.
Note
The
ReceiptHandle is associated with a specific
instance of receiving a message. If you receive a message more than
once, the
ReceiptHandle is different each time you receive a message.
When you use the
DeleteMessage action, you must provide the most
recently received
ReceiptHandle for.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- QueueUrl
The URL of the Amazon SQS queue from which messages are deleted.
Queue URLs and names are case-sensitive.
Type: String
Required: Yes
- ReceiptHandle
The receipt handle associated with the message to delete.
Type: String
Required: Yes
Errors
For information about the errors that are common to all actions, see Common Errors.
- InvalidIdFormat
The specified receipt handle isn't valid for the current version.
HTTP Status Code: 400
- ReceiptHandleIsInvalid
The specified receipt handle isn't valid.
HTTP Status Code: 400
Example
The following example query request deletes a message from the queue named
MyQueue. The structure of
AUTHPARAMS depends on the signature of the API request.
For more information, see
Examples of Signed Signature Version 4 Requests in the Amazon Web Services General Reference.
Sample Request ?Action=DeleteMessage &ReceiptHandle=MbZj6wDWli%2BJvwwJaBV%2B3dcjk2YW2vA3%2BSTFFljT M8tJJg6HRG6PYSasuWXPJB%2BCwLj1FjgXUv1uSj1gUPAWV66FU/WeR4mq2OKpEGY WbnLmpRCJVAyeMjeU5ZBdtcQ%2BQEauMZc8ZRv37sIW2iJKq3M9MFx1YvV11A2x/K SbkJ0= &Expires=2020-04-18T22%3A52%3A43PST &Version=2012-11-05 &AUTHPARAMS
Sample Response
<DeleteMessageResponse> <ResponseMetadata> <RequestId>b5293cb5-d306-4a17-9048-b263635abe42</RequestId> </ResponseMetadata> </DeleteMessageResponse>
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_DeleteMessage.html | 2019-10-14T03:53:58 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.aws.amazon.com |
Simple Trade¶
How it Works¶
In the Simple Trade strategy, Hummingbot executes orders after a certain time delay, as specified by the user. For limit orders, the strategy also provides the user with the additional option to cancel them after a certain time period.
Warning
The strategy is only supposed to provide the user with a basic template for developing custom strategies. Please set variables responsibly.
Prerequisites: Inventory¶
- You will need to hold inventory of currencies on the exchange on which you are trading.
- You will also need some Ethereum to pay gas for transactions on a DEX (if applicable).
Configuration Walkthrough¶
The following walks through all the steps when running
config for the first time.
Tip: Autocomplete Inputs during Configuration
When going through the command line config process, pressing
<TAB> at a prompt will display valid available inputs.
Limit Order Configuration¶
Limit orders allow you to specify the price at which you want to place the order and specify how long to wait before cancelling the order.
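Conceptually, the limit-order behavior is a place-after-delay, cancel-after-timeout loop. The following asyncio outline is purely illustrative; ExchangeClient and its methods are hypothetical placeholders, not the actual Hummingbot connector API.

import asyncio

async def delayed_limit_order(client, pair, side, price, amount,
                              place_delay_s, cancel_after_s):
    """Place a limit order after a delay, then cancel it if it is still open."""
    await asyncio.sleep(place_delay_s)           # wait before placing the order
    order_id = await client.place_limit_order(pair, side, price, amount)

    await asyncio.sleep(cancel_after_s)          # give the order time to fill
    if await client.is_open(order_id):           # cancel only if still unfilled
        await client.cancel_order(order_id)

# Usage (client is a hypothetical exchange wrapper):
# asyncio.run(delayed_limit_order(client, "ETH-USDT", "buy", 180.0, 1.0, 60, 300))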
Configuration parameters¶
The following parameters are fields in Hummingbot configuration files (located in the
/conf folder, e.g.
conf/conf_simple_trade_strategy_[#].yml). | https://docs.hummingbot.io/strategies/simple-trade/ | 2019-10-14T05:07:14 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.hummingbot.io |
Augmented Portal
Difficulty Level: Easy
Coding Knowledge Required: Absolute Beginner
Time to Complete: 30 minutes
Masking is a technique that hides parts of a scene as if they were behind an invisible object. It can be used to create environments that extend behind, and are framed by, a target image.
This super simple experience is one of the most popular AR interactions with users. Following along with this tutorial will provide you with the basics of creating your own masked scenes in ZapWorks Studio.
1. Create a zapcode
Create a new zapcode from your ZapWorks dashboard. You will publish the finished experience to this code later.
2. Create a new project
Create a new ZapWorks Studio project, name it “Masked Scene” and click ‘Create’.
3. Download the asset pack
Click here to download a zip file containing all the assets we'll need for this experience.
4. Import image files into the Media Library
Unzip the file you downloaded in step 3 and drag the files from the 'Image Asset' folder into the Media Library import box, located in the top right of the window. You can select multiple files and import them at the same time.
5. Train the target image
Import the tracking image, “Space Page.png”, from the folder ‘Tracking Image’.
The image will then be trained. During this process, which may take up to a minute, the image is analysed from multiple angles and a target file is produced. This file is added to your Media Library, with the file extension ZPT.
6. Drag your target into the Hierarchy.
7. Create a group to keep your Hierarchy organised
In order to keep the items in your Hierarchy organised, we are going to create a group. Right click on the target and select 'New > Group'. Name this new group "Scene Ornaments".
Groups are incredibly useful for organizing your content. When you create more complex experiences it is good practice to use groups where appropriate.
8. Add the wall asset into the scene ornaments group
Click and drag the wall.png from the Media Library and into the Scene Ornaments group.
Once completed, you will be able to rotate the target in the 3d view and see that the wall is tracked to the target image.
9. Position and scale the wall
Click on the Wall in the Hierarchy. Using the Properties panel on the left hand side below the Hierarchy (image attached), scroll to find the ‘Transform’ section.
Set the position and scale the wall as follows:
Position (0, 0, -5) Scale ( 10, 10, 10) Rotation ( 0, 0, 0)
10. Make the wall invisible
You are now going to make the wall invisible. This will allow you to easily position the rest of the items correctly in the scene. You will make it visible again once you have everything where you want it.
Making sure that the wall is still selected, scroll in the Properties panel to the ‘Appearance’ section and untick
visible.
11. Positioning the rest of the content
Click on the 'Set View' option in the top right of the 3D view (it should look like an eye icon) and select 'Front'. This allows us to view the target image straight on, useful for lining up our content.
You will now set up the rest of the scene. Drag the remaining files into the scene ornaments group beneath the previous object in turn. Follow the coordinates below to position each object accordingly.
It is important to make sure that each new asset sits underneath the previous one in the Hierarchy. This is because the order in the Hierarchy determines the order of rendering on screen.
11.1) Position the telescope
Place the telescope below the wall within the Hierarchy.
Position (1.07, -0.95, -3.00) Scale ( 0.5, 0.5, 0.5) Rotation ( 0, 0, 0)
11.2) Position the cushion
Place the cushion below the telescope within the Hierarchy.
Position (-0.70, -1.05, -4.00) Scale ( 0.63, 0.63, 0.63) Rotation ( 0, 0, 0)
11.3) Position the books
Place the books below the cushion within the Hierarchy.
Position (0.25, -0.99, -2) Scale ( 0.4, 0.4, 0.4) Rotation ( 0, 0, 0)
11.4) Position the poster
Place the poster below the books within the Hierarchy.
Position (-0.72, 0.72, -4.90) Scale ( 0.9, 0.9, 0.9) Rotation ( 0, 0, 0)
11.5) Position the shelves
Place the shelves below the poster within the Hierarchy.
Position (1.1, 0.75, -4.90) Scale ( 0.6, 1.2, 1.2) Rotation ( 0, 0, 0)
12. Creating a floor
Drag the Plane object from the Media Library into the Hierarchy. Place the Plane beneath wall.png and above telescope.png. Right click on the plane node and rename it as ‘Floor’.
12.1) Position the floor
Position (0.0, -1.5, -2.5) Scale ( 5, 2.5, 1) Rotation ( -90, 0, 0)
We have rotated the floor so that it is coming towards us from the target image. This will be important in giving the impression of depth.
12.2) Set the floor color
Remaining in the Properties section, head to the 'Appearance' section and then click the color box, and set the HEX value to
#ff7ebb93.
13. Make the wall visible
Click on the wall in the Hierarchy, head to the Appearance section of the Properties panel and check the
visible box.
14. Set scene ornaments to test_3d
You are now going to set the
layerMode of the objects in our Scene Ornaments group to
test_3d. Start from the wall and go through the Scene Ornaments group.
Click the object node, then within the Properties panel, scroll to the ‘Appearance’ section and change the layerMode from overlay to “test_3d”.
Do this individually for each item in the Scene Ornaments group.
Changing the layerMode to
test_3dmeans that these objects will have to be closer to the camera than other
test_3dor
full_3dobjects in order to be visible.
15. Create a square mask in the Media Library
Click the (+) icon in the Media Library and select the 'Square Mask'. This will add a Square Mask into your Media Library.
16. Drag Square Mask into the Hierarchy
Drag your Square Mask from the Media Library into the target file. Make sure that the mask is above the Scene Ornaments group in the Hierarchy.
17. Set the Square Mask layerMode to full_3d
Click on the Square Mask in your Hierarchy. Go into the Properties panel and in the Appearance section, set the
layerMode to
full_3d.
The mask object will still hide the other objects but will be drawn to the screen first.
18. Set the transparency of the Square Mask to 0%
Click on your Square Mask within the Hierarchy. Head to the 'Appearance' section of the Properties panel and click the color box.
This will open a dialogue box. Set the transparency to 0% and click ‘OK’ to close the box.
The mask object now will not be visible but will still hide the objects behind it.
19. Preview your experience
Hit ‘Preview’ in the top left corner and scan the zapcode with the Zappar app to check the scene before publishing.
20. Publishing your experience to your zapcode
Hit ‘Publish’ in the top left corner to publish it to your previously created zapcode.
Choose the zapcode that you created in Step 1.
The code must be a Studio code in order to be published to.
Congratulations
That’s it. You have completed all the steps in this tutorial and created your own masked scene. You are well on your way to learning ZapWorks Studio. Nice!
Further Reading
To learn more about the concepts covered in this tutorial, please see the following pages:
- Masking Objects - More information on the masking process.
- Tracking Images - How tracking images work.
- Using layerMode - Further details on all layerModes.
Suggested Next Steps
- Add a 3D model to the scene to give a mixture of 2D and 3D elements.
- Make your own target image and add your zapcode to it.
- Create a cool opening animation to progressively reveal the mask. | https://docs.zap.works/studio/tutorials/augmented-portal/ | 2019-10-14T04:00:54 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['/static/img/zappar-studio/sbs/augmented_portal/target-image.png',
None], dtype=object)
array(['/static/img/zappar-studio/sbs/augmented_portal/target-image.png',
None], dtype=object) ] | docs.zap.works |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
Other Use Cases
In this section, you can find information about use cases that are not common to most users.
Topics
Transferring Files in Opposite Directions
Transferring data in opposite directions allows for workflows where the active application moves between locations. AWS DataSync doesn't support workflows where multiple active applications write to both locations at the same time. Use the steps in the following procedure to configure DataSync to transfer data in opposite directions.
To configure DataSync to transfer data in opposite directions
Create a location and name it Location A.
Create a second location and name it Location B.
Create a task, name it Task A-B, and then configure Location A as the source location and Location B as the destination location.
Create a second task, name it Task B-A, and then configure Location B as the source location and Location A as the destination location.
To update Location B with data from Location A, run Task A-B.
To update Location A with data from Location B, run Task B-A.
Don't run these two tasks concurrently. DataSync can transfer files in opposite directions periodically. However, it doesn't support workflows where multiple active applications write to both Location A and Location B simultaneously.
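If you script this setup, the two tasks can be created with swapped source and destination locations. The following sketch uses Python and boto3; the location ARNs are placeholders.

import boto3

datasync = boto3.client("datasync")

# Placeholder ARNs for Location A and Location B.
LOCATION_A = "arn:aws:datasync:us-east-1:123456789012:location/loc-aaaaaaaaaaaaaaaaa"
LOCATION_B = "arn:aws:datasync:us-east-1:123456789012:location/loc-bbbbbbbbbbbbbbbbb"

# Task A-B updates Location B with data from Location A.
task_ab = datasync.create_task(
    SourceLocationArn=LOCATION_A,
    DestinationLocationArn=LOCATION_B,
    Name="Task A-B",
)

# Task B-A updates Location A with data from Location B.
task_ba = datasync.create_task(
    SourceLocationArn=LOCATION_B,
    DestinationLocationArn=LOCATION_A,
    Name="Task B-A",
)

# Run only one of the tasks at a time.
datasync.start_task_execution(TaskArn=task_ab["TaskArn"])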
Using Multiple Tasks to Write to the Same Amazon S3 Bucket
In certain use cases, you might want different tasks to write to the same Amazon S3 bucket. In this case, you create a different folder in the S3 bucket for each task. This approach prevents file name conflicts between the tasks, and also means that you can set different permissions for each of the folders.
For example, you might have three tasks (task1, task2, and task3) that write to an S3 bucket named MyBucket.
You create three folders in the bucket:
s3://MyBucket/task1
s3://MyBucket/task2
s3://MyBucket/task3
For each task, you choose the folder in
MyBucket that corresponds to the
task as the destination, and set different permissions for each of the three folders.
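One way to script this is to create a separate S3 location per task, each scoped to its own folder in the bucket. A minimal boto3 sketch; the bucket ARN and access role are placeholders.

import boto3

datasync = boto3.client("datasync")

BUCKET_ARN = "arn:aws:s3:::MyBucket"
ACCESS_ROLE = "arn:aws:iam::123456789012:role/datasync-s3-access"  # placeholder role

# One S3 location per task, each pointing at its own folder in the bucket.
locations = {}
for task_name in ("task1", "task2", "task3"):
    resp = datasync.create_location_s3(
        S3BucketArn=BUCKET_ARN,
        Subdirectory=f"/{task_name}",
        S3Config={"BucketAccessRoleArn": ACCESS_ROLE},
    )
    locations[task_name] = resp["LocationArn"]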
Allowing Amazon S3 Access From a Private VPC Endpoint
In certain cases, you might want to only allow Amazon S3 access from a private endpoint. In that case, you create an IAM policy that allows that access and attach it to the S3 bucket. If you need a policy that restricts your S3 bucket's access to DataSync VPC endpoints, contact AWS DataSync Support to get the DataSync VPC endpoint for your AWS Region.
The following is a sample policy that only allows Amazon S3 access from a private endpoint.
{ "Version": "2012-10-17", "Id": "Policy1415115909152", "Statement": [ { "Sid": "Access-to-specific-VPCE-only", "Principal": "", "Action": "s3:", "Effect": "Deny", "Resource": ["arn:aws:s3:::examplebucket", "arn:aws:s3:::examplebucket/*"], "Condition": { "StringNotEquals": { "aws:sourceVpce": "vpce-
your vpc enpoint", "aws:sourceVpce": "vpce-
DataSync vpc endpoint for your region" } } } ] }
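To apply a policy like this programmatically, you can attach it as a bucket policy with boto3. A minimal sketch; the bucket name and endpoint IDs are placeholders.

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPCE-only",
            "Principal": "*",
            "Action": "s3:*",
            "Effect": "Deny",
            "Resource": [
                "arn:aws:s3:::examplebucket",
                "arn:aws:s3:::examplebucket/*",
            ],
            "Condition": {
                "StringNotEquals": {
                    # Placeholder endpoint IDs -- use your own VPC endpoint and
                    # the DataSync VPC endpoint for your Region.
                    "aws:sourceVpce": ["vpce-1111aaaa", "vpce-2222bbbb"]
                }
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="examplebucket", Policy=json.dumps(policy))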
For more information, see Example Bucket Policies for VPC Endpoints for Amazon S3 in the Amazon Simple Storage Service Developer Guide. | https://docs.aws.amazon.com/datasync/latest/userguide/other-use-cases.html | 2019-10-14T04:00:55 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.aws.amazon.com |
If you’ve been working with Beaver Builder, you might have observed that whenever you add a module or make a slight change on a page, you need to save and publish it in order to see it live.
But, in the latest version of UABB, i.e. the Ultimate Addons for Beaver Builder version 1.2.0, we’ve introduced the Live Preview button using which you will be able to see how your changes look without having to publish the entire page.
You can enable / disable this live preview option by following the steps mentioned below.
Click on “Page Builder” under Settings, then go to UABB – General Settings.
You’ll find an option that says Enable UABB Live Preview. You can enable or disable this by clicking on the checkbox.
Note: This option is enabled by default. You can disable it using same steps. | https://docs.brainstormforce.com/how-to-enable-disable-live-preview-feature-of-uabb/ | 2019-10-14T03:04:01 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['https://docs.brainstormforce.com/wp-content/uploads/2016/08/Enable-or-Disable-UABB-Live-Preview.jpg',
'Enable or Disable UABB Live Preview'], dtype=object) ] | docs.brainstormforce.com |
This Refactoring is used to switch between the for loop and foreach loop. The foreach loop is more readable and compact. On the other hand, if your algorithm uses the iteration number for calculations, the for loop is more applicable.
In rare cases, the ForEach to For refactoring may change your code's external behavior if the IEnumerator implementation causes significant side effects, or if any of the IEnumerator members behave in a non-standard manner.
Available when the caret is on the for or foreach keyword.
Place the caret on a foreach keyword.
The blinking cursor shows the caret's position at which the Refactoring is available.
After execution, the Refactoring converts the foreach loop to a for loop or vice versa.
string result = String.Empty;
for (int i = 0; i < strings.Count; i++)
    if (strings[i].Length > 2)
        result += $"{strings[i]} ";
Building a worker executable
Building a worker executable involves the following steps:
1. Download dependencies
A SpatialOS SDK release consists of a collection of artifacts (libraries, tools, etc.). In this build step, you download the artifacts necessary for building a worker. For language-specific setup instructions, see:
- C++: Setting up a C++ worker
- C#: (Under construction)
- Java: (Under construction)
- C: Setting up a worker using the C API. | https://docs.improbable.io/reference/13.6/shared/flexible-project-layout/build-process/worker-build-process | 2019-10-14T02:57:43 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.improbable.io |
The Electro theme provides ten navigation menu locations. They are:
- Off Canvas menu
- Top Bar Left
- Top Bar Right
- Primary Nav
- Navbar Primary
- Secondary nav
- Departments Menu
- All Departments Menu
- Blog menu
- Mobile Handheld Department
Enabling CSS Classes Field, Static Content Blocks & Product Category Menu options
By default, the CSS Classes field, Static Content Blocks, and Product Categories menu items may not be available. They will have to be enabled. To enable them, click on Screen Options at the top right of the Appearance > Menus page. In the pull-down menu that appears, check CSS Classes under Show advanced menu properties. Please also check Static Content Blocks and Product Categories under Boxes. We’ve explained it in the screenshot below. | https://docs.madrasthemes.com/electro/topics/navigation/general-guidelines/ | 2019-10-14T04:44:04 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.madrasthemes.com
Options for this widget type include: map, heat map, and ticker.
The map option is currently limited to AWS integrations. Map alerts can be grouped via element or policies. See the Capacity Monitoring documentation for a full explanation of events and alerts, and their relationship to policies.
Set the scope by inputting an element name or choosing an element tag or attribute.
Group your alerts by element or policy.
Name your widget before selecting save. | https://docs.metricly.com/dashboards/widgets/alerts-widget/ | 2019-10-14T03:26:22 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.metricly.com |
curl --data "action=report&user=bob&project=demosthenes&x=category&y=state"
This request is asking for a top-level (no drilldown specified) summary of issues in the project demosthenes.
The x-axis of this summary will contain the issue categories, while the y-axis will reflect the issue state (such as new or existing). The response is shown here:
{ "rows":[{"id":1,"name":"C and C++"}], "columns":[{"id":-1,"name":"Existing"},{"id":-1,"name":"New"}], "data":[[14,2]], "warnings":[] }
To drill down into this information, or to follow the category tree one level down, we add a drilldown specific to the ID of the item into which we want to drill, on either the x or y axis:
curl --data "action=report&user=bob&project=demosthenes&x=category&y=state&xDrilldown=1"
This request is asking for the next level of the "C and C++" taxonomy to be shown on the x-axis, while the y-axis continues to reflect the issue state:
{ "rows":[{"id":2, "name":"Attempt to use memory after free"}, {"id":3, "name":"Buffer Overflow"}, ...], "columns":[{"id":4, "name":"Existing"}, {"id":3, "name":"Fixed"}], "data":[...], "warnings":[] }
If no such drilldown is possible (for example, asking to drill into state makes no sense), then the top-level summary of that axis will be shown instead. | https://docs.roguewave.com/en/klocwork/current/specifyingdrilldownforthereportaction | 2019-10-14T03:36:21 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.roguewave.com |
WebLogic Discovery on Linux servers
Discovery can collect data about WebLogic application servers running on Linux systems.
Requirements: The Linux - Weblogic - Find config.xml probe requires the use of these Bourne shell commands: find, cat, and dirname. The SSH credential must also have read permissions on the config.xml file. WebLogic administration server instances started via NodeManager must have the -Dweblogic.RootDirectory=<path> parameter defined and visible through the Linux ps process stat command (for each AdminServer) for the rest of the Linux WebLogic application server and web application information to be populated in the CMDB.
Related topics: WebLogic probes and sensors for Linux servers (Discovery identifies a Linux WebLogic application server using probes and sensors); Add sudo access for the Weblogic - Find config.xml probe (for a WebLogic application server on Linux, this probe requires sudo privileges); Data collected for WebLogic on Linux servers (Discovery collects data from WebLogic application servers running on a Linux system); Relationships created for WebLogic on Linux servers (Discovery detects specific relationships when it discovers WebLogic on a Linux server). | https://docs.servicenow.com/bundle/geneva-it-operations-management/page/product/discovery/reference/r_WeblogicOnLinuxServers.html | 2019-10-14T03:51:33 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.servicenow.com
[Quickstart] Setup¶
This guide walks you through how to install Hummingbot and run your first trading bot.
Below, we list what you'll need before you install and run Hummingbot.
System Requirements¶
Why this is needed: If you are installing Hummingbot either locally or on a cloud server, here are the recommended minimum system requirements:
- (Linux) Ubuntu 16.04 or later
- (Mac) macOS 10.12.6 (Sierra) or later
- (Windows) Windows 10 or later
For more information, view minimum system requirements.
Crypto Inventory¶
Why this is needed: In order to run a trading bot, you'll need some inventory of crypto assets available on the exchange, or in your Ethereum wallet (for Ethereum-based decentralized exchanges).
Remember that you need inventory of both the base asset (the asset that you are buying and selling) and the quote asset (the asset that you exchange for it). For example, if you are making a market in a
BTC/USDT trading pair, you'll need some
BTC and
USDT.
In addition, be aware of the minimum order size requirements on different exchanges. For more information, please see Connectors.
API Keys¶
Why this is needed: In order to run a bot on a centralized exchange like Binance, you will need to enter the exchange API keys during the Hummingbot configuration process.
For more information on how to get the API keys for each exchange, please see API Keys.
Ethereum Wallet¶
Why this is needed: In order to earn rewards from Liquidity Bounties, you need an Ethereum wallet. In addition, you'll need to import an Ethereum wallet when you run a trading bot on an Ethereum-based decentralized exchange.
For more information on creating or importing an Ethereum wallet, see Ethereum wallet.
Ethereum Node (DEX only)¶
Why this is needed: When you run a trading bot on an Ethereum-based decentralized exchange, your wallet sends signed transactions to the blockchain via an Ethereum node.
For more information, see Ethereum node. To get a free node, see Get a Free Ethereum Node.
Cloud Server (Optional)¶
We recommend that users run trading bots in the cloud, since bots require a stable network connection and can run 24/7.
Follow the guide Set up a cloud server to set up a cloud server on your preferred cloud platform. Hummingbot is not resource-intensive so the lowest/free tiers should work.
Tip
Don't know which cloud platform to use? Read our blog post that compares and contrasts the different providers.
If you just want to test out Hummingbot, you can skip this and install locally. | https://docs.hummingbot.io/quickstart/ | 2019-10-14T05:07:43 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.hummingbot.io |
fxc /T ps_5_1 /Fo PixelShader1.fxc PixelShader1.hlsl
In this example:
- ps_5_1 is the target profile.
- PixelShader1.fxc is the output object file containing the compiled shader.
- PixelShader1.hlsl is the source.
fxc /Od /Zi /T ps_5_1 /Fo PixelShader1.fxc PixelShader1.hlsl
In this example, /Od disables optimizations and /Zi enables debugging information, which is useful when you plan to debug the shader.
For additional information on spawning a process see the reference page for CreateProcess.
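If you compile shaders as part of an automated build rather than spawning fxc from C++ with CreateProcess, a build script can invoke the same command line. A minimal Python sketch; the file names and profile are the ones used in the examples above:

import subprocess

def compile_shader(source="PixelShader1.hlsl", output="PixelShader1.fxc",
                   profile="ps_5_1", debug=False):
    """Invoke fxc.exe to compile an HLSL shader into a compiled shader object file."""
    cmd = ["fxc", "/T", profile, "/Fo", output, source]
    if debug:
        # /Od disables optimizations, /Zi emits debugging information.
        cmd[1:1] = ["/Od", "/Zi"]
    subprocess.run(cmd, check=True)

compile_shader()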
Related topics | https://docs.microsoft.com/en-us/windows/win32/direct3dtools/dx-graphics-tools-fxc-using?redirectedfrom=MSDN | 2019-10-14T03:17:07 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.microsoft.com |
This guide includes the following sections:
About this API:
- Control over tile upscaling
- Ability to hide low resolution satellite background imagery
- Ability to request JPG or PNG tiles
- Ability to request relative dates (e.g. show imagery that is at least one year old)
- Richer metadata
- Explicit survey tile requests
Introduction:
- Retrieve tiles for a specified location
- Retrieve tiles of a specified survey for a specified location
You can use. | https://docs.nearmap.com/pages/viewpage.action?pageId=16581379 | 2019-10-14T03:52:58 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.nearmap.com |
Document Type
Thesis
Abstract
The current study examined the predictive validity of the SAVRY in African American and White recently adjudicated juvenile offenders in Louisiana. The sample consists of 267 community-based, male juvenile offenders, who were tracked for an average follow-up period of 18 months. Receiver operating characteristic (ROC) analyses on the overall sample and African Americans, specifically, showed the numeric score predicted recidivism. Chi-square analyses found the SRR did not have a significant relationship with reoffending for general recidivism petitions. However, it was significant for all other forms of recidivism in the overall sample and African Americans. Hierarchical Cox regressions identified significant differences in time to reoffenses (all forms and contexts) and SAVRY scores. The study concludes that the SAVRY shows promise as a culturally invariant risk assessment tool that can predict general and violent recidivism for African American juveniles, although additional research is required to more confidently support such a claim.
Recommended Citation
Woods, Scarlet Paria, "Predictive Validity of the SAVRY within a Diverse Population of Juvenile Offenders" (2013). Psychology Theses. 22. | https://docs.rwu.edu/psych_thesis/22/ | 2019-10-14T04:22:36 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.rwu.edu |
Voice messages are normally stored in a compressed format. Although this reduces the storage requirements, it can cause problems if Unified Messaging is used in an environment that also uses audio compression, notably some Voice over IP infrastructures. Such use of multiple compression stages can result in the audio signal degrading, so that poor quality is experienced when listening to recorded messages.
Blueworx Voice Response and Unified Messaging Platform support the storage of voice messages in an uncompressed format. This facility can be used to maximize the quality of voice messages when they are used in an environment that employs its own compression mechanisms, for example a Voice over IP network using G.729A. The ability to use uncompressed messages is controlled using a Blueworx Voice Response system parameter called Voice Message Compression Type, which is in the Application Server Interface group . It can be set to compressed (default) or uncompressed. | http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.probdet.doc/pooraudio.html | 2019-10-14T04:31:57 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.blueworx.com |
On the dashboard, we have following sections.
- Register your copy of WHMPress
Registering entitles you to get help using our ticketing system and to keep getting the latest news & updates.
- Reset WHMPress
If you want to reset WHMPress like it was never installed before, hit this button to remove all cached data from WHMCS.
- WHMCS/ Plugins Info
The right column shows WHMCS information, the date when last synced and the number of products cached. | http://docs.whmpress.com/docs/whmpress/configuration-admin-settings/dashboard/ | 2019-10-14T04:48:13 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.whmpress.com |
As a store administrator there are times where you often find yourself having to manage orders and payments on behalf of customers. Some customers might call in to modify their saved credit cards, or you might need to refund or capture payments for orders. With Drupal Commerce, you get a nice interface that let's you manage order payments and authorizations with ease.
Let's assume a scenario where the customer calls in and requests to make a change to their order. Let's say they wanted to modify the quantity ordered for one of the products. Instead of 1 quantity, they now want 2 of this item. As an admin you can go ahead and edit the item and enter "2" for the quantity. However, you now have a changed order total. You need to request more payment from the customer.
Capturing payments for an order is done from the Payments tab above.
Notice, the payments that have already been captured for our example order, is displayed in the page. For this order, the customer has already paid $118.08, now a difference of $30.74 needs to be paid. Click on the
Add payment button.
Now, select the payment type and continue with the prompt.
Once the payment is successful, you'll be notified and the new payment will be added to the list.
In order to cancel an authorization we 'Void' it. For example, on travel sites, normally when a customer adds a trip request, a payment authorization is added to the order. It is only when the trip is confirmed that the authorization becomes a charge.
Similarly, in our case, let's say we had added a payment authorization for an order. However, upon processing the order, we notice that the item is out of stock. We now need to 'Void' the payment. Voiding payments is quite easy. Just as before, you first need to locate the order. Then, as you did before, click on the
Payments tab and locate the authorized payment.
Click the
Void link and confirm that you want to void the payment.
Once you confirm, the payment page should look like this, with a "(Voided)" added next to the payment that you just voided.
Payments can be refunded when they've been authorized and captured. There maybe times when you've already taken payment but need to refund an order, either due to lack of stock, damaged product, cancelled order, or some other reason. Similar to voiding payments, refunding payments follow the same process. You locate the order, click the
Payments tab, find the captured payment and hit the
Refund link. You'll be taken to a confirmation page.
Once you confirm the refund, the payment will be refunded to the customer and the refunded payment would look like this.
| https://docs.drupalcommerce.org/commerce2/user-guide/payments/managing-order-payments | 2019-10-14T03:08:46 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.drupalcommerce.org
About upgrading to 6.2 - READ THIS FIRST
This topic contains important information and tips about upgrading to version 6.2 from an earlier version. Read it before attempting to upgrade your Splunk environment.
Important: Not all Splunk apps and add-ons are compatible with Splunk Enterprise 6.2. If you are considering an upgrade to this release, visit Splunkbase to confirm that your apps are compatible with Splunk Enterprise 6.2. You can upgrade to version 6.2 from the following versions of the software:
- From version 5.0 or later to 6.2 on full Splunk Enterprise.
- From version 5.0 or later to 6.2 on Splunk universal forwarders.
If you run version 4.3 of Splunk Enterprise, upgrade to 6.0 first before attempting an upgrade to 6.2. Read "About upgrading to 6.0 - READ THIS FIRST" for specifics.
If you run a version of Splunk Enterprise prior to 4.3, upgrade to 5.0 first, then upgrade to 6.2. Read "About upgrading to 5.0 - READ THIS FIRST" for tips on migrating your instance to version 5.0.
You want to know this stuff
Upgrading to 6.2 from 5.0 and later is trivial, but here are a few things you should be aware of when installing the new version:
The splunkweb service has been incorporated into the splunkd service
The
splunkweb service, which handled all Splunk Web operations and sent requests to the
splunkd service, has been disabled. The
splunkd service now handles all Splunk Enterprise services in normal operation. On Windows, the
splunkweb service installs, but does not run. See "The Splunk Web service installs but does not run" in the "Windows-specific changes" section of this topic.
If needed, you can configure Splunk Enterprise to run in "legacy mode", where
splunkweb runs as a separate service. See "Start and stop Splunk Enterprise".
New installed services open additional network ports
Splunk Enterprise installs and runs two new services: KV Store and App Server. This opens two network ports by default on the local machine: 8191 (for KV Store) and 8065 (for App Server). Make sure any firewall you run on the machine does not block these ports. The KV Store service also starts an additional process, mongod. If needed, you can disable KV Store by setting disabled = true in the [kvstore] stanza of server.conf, or change where it stores its data by setting the dbPath attribute to a valid path on a file system that the Splunk Enterprise instance can reach. See "About the app key value store" in the Admin manual.
Data block signing has been removed
Data block signing has been removed from Splunk Enterprise version 6.2. The feature had been deprecated for some time. Separately, login page customization has changed in 6.2: after an upgrade, you can only modify the footer of the login page.
Windows-specific changes
New installation and upgrade procedures
The Windows version of Splunk Enterprise now.
No support for search head clustering in Windows
The search head clustering feature is only available on Splunk Enterprise running on *nix hosts at this time. To use search head clustering, you must install *nix instances of Splunk Enterprise and configure search head clustering on those instances. (GUID) enable this attribute will no longer perform this translation.
To learn about any additional upgrade issues for Splunk Enterprise, see the "Known Issues - Upgrade Issues" page in the Release Notes.
This documentation applies to the following versions of Splunk® Enterprise: 6.2.0, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8
| https://docs.splunk.com/Documentation/Splunk/6.2.1/Installation/Aboutupgradingto6.2READTHISFIRST | 2019-10-14T03:38:30 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com
Distance Shadowmask is a version of the Shadowmask lighting mode. It is shared by all Mixed Lights in a Scene.
To set Mixed lighting to Distance Shadowmask, set the Lighting Mode of your Mixed Lights to Shadowmask in the Lighting window, then set Shadowmask Mode to Distance Shadowmask in the Quality settings (menu: Edit > Project Settings > Quality).
See documentation on Mixed lighting to learn more about this lighting mode, and see documentation on Light modes to learn more about the other available modes.
Within the Shadow Distance (menu: Edit > Project Settings > Quality > Shadows), Unity renders real-time shadows, including shadows cast onto dynamic GameObjects; beyond that distance, shadows come from the baked shadowmask.
| https://docs.unity3d.com/2018.2/Documentation/Manual/LightMode-Mixed-DistanceShadowmask.html | 2019-10-14T03:15:25 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.unity3d.com
Z.CullFaces
enum CullFaces { "inherit", "back", "front", "both", "none" }
3D rendering hardware can optionally skip drawing (cull) each face in a 3D object depending on the order in which the vertices of that face are specified.
The Z.CullFaces enumeration provides a set of values that are used throughout the platform to indicate culling behavior. | https://docs.zap.works/studio/scripting/reference/cullfaces/ | 2019-10-14T04:36:49 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.zap.works |
Comparing and Analyzing experiments¶
Once you have more than 2 versions of a notebook, you will be able to use
Compare Versions present in the
Version dropdown on the top right corner.
Here you can observe all types of information about all of your versions.
Title
Time of Creation
Author
All the parameters logged under dataset.
All the parameters logged under hyperparameters.
All the parameters logged under metrics (a logging sketch follows this list).
Notes (for author and collaborators add extra notes)
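The dataset, hyperparameter, and metric values shown in the compare table are recorded from the notebook with jovian's logging helpers. A minimal sketch; the dictionary keys and values are illustrative only.

import jovian

# Values recorded here appear as columns in the Compare Versions table.
jovian.log_dataset({"name": "MNIST", "num_train": 60000, "num_val": 10000})
jovian.log_hyperparams({"arch": "cnn", "lr": 1e-3, "epochs": 10})
jovian.log_metrics({"accuracy": 0.97, "val_loss": 0.083})

# Commit the notebook to create a new version with these records attached.
jovian.commit(project="my-project")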
Sort¶
You can sort any column or sub-column (for example, accuracy or any other metric, date of creation, etc.) by clicking on the column header.
Show, Hide and Reorder columns¶
You can create a custom view to analyse & compare your choice of parameters.
Click on
Configure button and then tick on the checkboxes to create a customized view. Click and drag the elements to reorder them based on your preference.
Add notes¶
You can add notes to summarize the experiment for reference or for collaborators to refer to.
View Diff between specific versions¶
Select any of the 2 versions by ticking the checkbox next to each version-row of the compare table which can be seen when you hover over any row. Click on
View Diff button to view the additions and deletion made.
Archive/Delete versions¶
Select version/versions by ticking the checkbox of the row/rows. This enables both
Archive and
Delete ready for the respective actions. | https://jovian-py.readthedocs.io/en/latest/user-guide/07-compare.html | 2019-10-14T03:35:05 | CC-MAIN-2019-43 | 1570986649035.4 | [] | jovian-py.readthedocs.io |
Display all products being offered on a single page, grouped in tabs by product groups. Products are shown with details and order button. Helps users to navigate all the products from a single page, a must have for companies with a lot of products.
[whmpress_store]
Parameters
- currency_id: Used with multi-currency, set the Currency in which price is displayed, if not mentioned session currency is used (which user have selected), if no session is found, currency set as default in WHMCS is used.
- gids: Comma separated list of WHMCS Product Groups (all groups will be shown if not provided or empty)
- pids: Comma Separated list of Product IDs (all products will be shown if not provided or empty)
- domain_products: Set this to ‘yes’ (lowercase) if you want to show only the products which require a domain or have show domain options set in WHMCS. | http://docs.whmpress.com/docs/wcop-whmcs-cart-order-pages/shortcodes/store/ | 2019-10-14T04:50:02 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.whmpress.com |
Tutorial: Using AWS Lambda with the Mobile SDK for Android
In this tutorial, you create a simple Android mobile application that uses Amazon Cognito to get credentials and invokes a Lambda function.
The mobile application retrieves AWS credentials from an Amazon Cognito identity pool and uses them to invoke a Lambda function with an event that contains request data. The function processes the request and returns a response to the front-end.
Prerequisites
This tutorial assumes that you have some knowledge of basic Lambda operations and the Lambda console. To run the AWS CLI commands that follow, you need a command line terminal or shell. On Windows 10, you can install the Windows Subsystem for Linux to get a Windows-integrated version of Ubuntu and Bash.
Create the Execution Role
Create an execution role that gives the function permission to access AWS resources. In the IAM console, create a role with AWS Lambda as the trusted entity, attach the AWSLambdaBasicExecutionRole permissions policy, and name the role lambda-android-role.
The AWSLambdaBasicExecutionRole policy has the permissions that the function needs to write logs to CloudWatch Logs.
Create the Function
The following example uses data to generate a string response.
Note
For sample code in other languages, see Sample Function Code.
Example index.js
exports.handler = function(event, context, callback) { console.log("Received event: ", event); var data = { "greetings": "Hello, " + event.firstName + " " + event.lastName + "." }; callback(null, data); }
To create the function
Copy the sample code into a file named
index.js.
Create a deployment package.
$
zip function.zip index.js
Create a Lambda function with the
create-functioncommand.
$
aws lambda create-function --function-name AndroidBackendLambdaFunction \ --zip-file fileb://function.zip --handler index.handler --runtime nodejs8.10 \ --role arn:aws:iam::
123456789012:role/lambda-android-role
Test the Lambda Function
Invoke the function manually using the sample event data.
To test the Lambda function (AWS CLI)
Save the following sample event JSON in a file,
input.txt.
{ "firstName": "first-name", "lastName": "last-name" }
Execute the following
invokecommand:
$ aws lambda invoke --function-name AndroidBackendLambdaFunction \ --payload file://
file-path/input.txt outputfile.txt
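You can run the same test from Python with boto3 instead of the AWS CLI. A minimal sketch; the region is an assumption, so use the one where you created the function.

import json
import boto3

# Assumes the function was created in us-east-1; adjust as needed.
client = boto3.client("lambda", region_name="us-east-1")

response = client.invoke(
    FunctionName="AndroidBackendLambdaFunction",
    Payload=json.dumps({"firstName": "first-name", "lastName": "last-name"}),
)

print(json.loads(response["Payload"].read()))  # {"greetings": "Hello, first-name last-name."}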
Create an Amazon Cognito Identity Pool
In this section, you create an Amazon Cognito identity pool that provides temporary AWS credentials to the mobile application. Open the Amazon Cognito console and create a new identity pool. The app invokes the Lambda function as an unauthenticated user, so make sure to select the Enable access to unauthenticated identities option.
Add the following statement to the permission policy associated with the unauthenticated identities.
{ "Effect": "Allow", "Action": [ "lambda:InvokeFunction" ], "Resource": [ "arn:aws:lambda:us-east-1:
123456789012" ] } ] }
For instructions about how to create an identity pool, log in to the Amazon Cognito console and follow the New Identity Pool wizard.
Note the identity pool ID. You specify this ID in the mobile application that you create in the next section. The app uses this ID when it sends requests to Amazon Cognito for temporary security credentials.
Create an Android Application
Create a simple Android mobile application that generates events and invokes Lambda functions by passing the event data as parameters.
The following instructions have been verified using Android studio.
Create a new Android project called
AndroidEventGeneratorusing the following configuration:
Select the Phone and Tablet platform.
Choose Blank Activity.
In the build.gradle (
Module:app) file, add the following in the
dependenciessection:
compile 'com.amazonaws:aws-android-sdk-core:2.2.+' compile 'com.amazonaws:aws-android-sdk-lambda:2.2.+'
Build the project so that the required dependencies are downloaded, as needed.
In the Android application manifest (
AndroidManifest.xml), add the following permissions so that your application can connect to the Internet. You can add them just before the
</manifest> end tag.
<uses-permission android:name="android.permission.INTERNET" /> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
In
MainActivity, add the following imports:
import com.amazonaws.mobileconnectors.lambdainvoker.*; import com.amazonaws.auth.CognitoCachingCredentialsProvider; import com.amazonaws.regions.Regions;
In the
packagesection, add the following two classes (
RequestClass and
ResponseClass). Note that the POJO is the same as the POJO you created in your Lambda function in the preceding section.
RequestClass. The instances of this class act as the POJO (Plain Old Java Object) for event data which consists of first and last name. If you are using Java example for your Lambda function you created in the preceding section, this POJO is same as the POJO you created in your Lambda function code.
package
com.example....lambdaeventgenerator; public class RequestClass { String firstName; String lastName; public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public RequestClass(String firstName, String lastName) { this.firstName = firstName; this.lastName = lastName; } public RequestClass() { } }
ResponseClass
package
com.example....lambdaeventgenerator; public class ResponseClass { String greetings; public String getGreetings() { return greetings; } public void setGreetings(String greetings) { this.greetings = greetings; } public ResponseClass(String greetings) { this.greetings = greetings; } public ResponseClass() { } }
In the same package, create an interface called MyInterface for invoking the AndroidBackendLambdaFunction Lambda function.

package com.example.....lambdaeventgenerator;

import com.amazonaws.mobileconnectors.lambdainvoker.LambdaFunction;

public interface MyInterface {

    /**
     * Invoke the Lambda function "AndroidBackendLambdaFunction".
     * The function name is the method name.
     */
    @LambdaFunction
    ResponseClass AndroidBackendLambdaFunction(RequestClass request);
}

The @LambdaFunction annotation in the code maps the specific client method to the same-name Lambda function.
To keep the application simple, we are going to add code to invoke the Lambda function in the
onCreate()event handler. In
MainActivity, add the following code toward the end of the
onCreate()code.
// Create an instance of CognitoCachingCredentialsProvider
CognitoCachingCredentialsProvider cognitoProvider = new CognitoCachingCredentialsProvider(
        this.getApplicationContext(), "identity-pool-id", Regions.US_WEST_2);

// Create LambdaInvokerFactory, to be used to instantiate the Lambda proxy.
LambdaInvokerFactory factory = new LambdaInvokerFactory(this.getApplicationContext(),
        Regions.US_WEST_2, cognitoProvider);

// Create the Lambda proxy object with a default Json data binder.
// You can provide your own data binder by implementing
// LambdaDataBinder.
final MyInterface myInterface = factory.build(MyInterface.class);

RequestClass request = new RequestClass("John", "Doe");
// The Lambda function invocation results in a network call.
// Make sure it is not called from the main thread.
new AsyncTask<RequestClass, Void, ResponseClass>() {
    @Override
    protected ResponseClass doInBackground(RequestClass... params) {
        // invoke "echo" method. In case it fails, it will throw a
        // LambdaFunctionException.
        try {
            return myInterface.AndroidBackendLambdaFunction(params[0]);
        } catch (LambdaFunctionException lfe) {
            Log.e("Tag", "Failed to invoke echo", lfe);
            return null;
        }
    }

    @Override
    protected void onPostExecute(ResponseClass result) {
        if (result == null) {
            return;
        }

        // Do a toast
        Toast.makeText(MainActivity.this, result.getGreetings(), Toast.LENGTH_LONG).show();
    }
}.execute(request);
Run the code and verify it as follows:
The
Toast.makeText() displays the response returned.
Verify that CloudWatch Logs shows the log created by the Lambda function. It should show the event data (first name and last name). You can also verify this in the AWS Lambda console. | https://docs.aws.amazon.com/lambda/latest/dg/with-android-example.html | 2019-10-14T03:46:52 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['images/lambda-android.png', None], dtype=object)] | docs.aws.amazon.com |