Reactor handles isotopes as well. If the reaction equation includes isotopes on the reactant side, the transformation will only occur if the molecule matches the isotope information as well. For traditional names like deuterium and tritium, the abbreviations D and T are also accepted.
A practical example for the usage is a deuterium labelling reaction:
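The reaction scheme itself is shown as an image in the original documentation and is not reproduced here. As a purely illustrative sketch (not taken from the ChemAxon page), isotopes can be written in SMILES-style notation by giving the mass number inside the atom brackets; the D and T abbreviations mentioned above may also be used:

```
[2H]O[2H]      heavy water (D2O); with the abbreviation, [D]O[D]
[13CH4]        methane containing a carbon-13 atom
[3H]CC(=O)O    acetic acid carrying a tritium label
```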
system_efm32zg.c File Reference
CMSIS Cortex-M0+ System Layer for EFM32ZG devices.
- Version
- 5.6.0
License
Silicon Laboratories, Inc. has no obligation to support this Software. Silicon Laboratories, Inc. is providing the Software "AS IS", with no express or implied warranties of any kind, including, but not limited to, any implied warranties of merchantability or fitness for any particular purpose or warranties against infringement of any proprietary rights of a third party.
Silicon Laboratories, Inc. will not be liable for any consequential, incidental, or special damages, or any other relief, or for any claim by any third party, arising from your use of this Software.
Definition in file system_efm32zg.c.
#include <stdint.h>
#include "
em_device.h"
Macro Definition Documentation
Maximum HFRCO frequency.
Definition at line 63 of file system_efm32zg.c.
Referenced by SystemMaxCoreClockGet().
HFXO frequency.
Definition at line 59 of file system_efm32zg.c.
Referenced by SystemMaxCoreClockGet().
LFRCO frequency, tuned to below frequency during manufacturing.
Definition at line 41 of file system_efm32zg.c.
Referenced by SystemHFClockGet(), and SystemLFRCOClockGet().
LFXO frequency.
Definition at line 75 of file system_efm32zg.c.
ULFRCO frequency.
Definition at line 43 of file system_efm32zg.c.
Referenced by SystemULFRCOClockGet().
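As a minimal usage sketch (not part of the original page), the functions referenced above can be called from application code to query the clock frequencies derived from these values; the exact header to include may differ between SDK versions:

```c
#include <stdint.h>
#include "em_device.h"   /* device header; pulls in the EFM32ZG system layer declarations */

/* Illustrative only: read back the clock frequencies reported by the system layer. */
void show_clocks(void)
{
    uint32_t max_core = SystemMaxCoreClockGet();  /* bounded by the maximum HFRCO/HFXO values above */
    uint32_t lfrco    = SystemLFRCOClockGet();    /* factory-tuned LFRCO frequency */
    uint32_t ulfrco   = SystemULFRCOClockGet();   /* ULFRCO frequency */

    (void)max_core;
    (void)lfrco;
    (void)ulfrco;
}
```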
SSH is a popular method to remotely access and manage Linux systems. This tutorial provides instructions and tips for users new to SSH. SSH is a command-line tool that can be a bit overwhelming at first, but it is very fast and powerful once you learn the basics.
For users with Windows computers, you need to install an SSH client program. Mac and Linux users have SSH built in, although an SSH client can still be useful.
SSH Clients
- Windows: Putty is a popular free SSH client that is easy to use. Download the .msi installer
- Mac: vSSH is a professional paid SSH client with many nice features
Connecting to SSH from Windows
Open Putty and enter in the IP address and port for SSH on your MangoGT
Host Name =
mango@<your MangoGT IP>
Port = 2222 (default ssh port for Linux is 22 but MangoGT is 2222)
Enter the unique password from the sticker inside the MangoGT box. Note that this password is not your password into Mango but the password into the Linux operating system of the MangoGT
After logging in you'll come to the welcome screen where you can proceed with your commands
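On Mac or Linux there is no need for Putty; the built-in OpenSSH client can be used from a terminal. A short example (the IP address and remote file path below are placeholders for your own values):

```sh
# Connect to the MangoGT from a macOS/Linux terminal (SSH listens on port 2222)
ssh -p 2222 mango@192.168.1.50

# Copy a file from the MangoGT to your computer with scp (scp uses a capital -P for the port)
scp -P 2222 mango@192.168.1.50:/opt/mango/some-file.txt .
```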
Useful SSH Commands & Included Short Cuts
List the contents of the current directory
ll
Display the present path location
pwd
Change directories (cd) to the Mango folder
cd /opt/mango
Stop and start the Mango Service
sudo service mango stop
sudo service mango start
Note: "sudo" is needed to preface a command to run the command as root
Check disk space
df -h
See the ethernet configuration
ifconfig
You can find lots of SSH commands, tricks and tutorials by searching online.
# Hooks
The hook subsystem provides a simple, passive interface to ApisCP without module extensions. Hooks come in two flavors, account and API. Account hooks are available for edit, create, delete, suspend, activate, import, and export operations. Unlike a module, a hook cannot interrupt flow.
# Account hooks
# Installation
Example hooks are provided in bin/hooks as part of the apnscp distribution. Active hooks are located in config/custom/hooks. All hooks follow a familiar interface, with the first argument the site identifier. Import and export hooks also include the target format as well as source/destination file respectively.
A hook is accepted if it has a ".php" or ".sh" extension. Any other matching hook is ignored.
cd /usr/local/apnscp
mkdir -p config/custom/hooks
cp bin/hooks/editDomain.php config/custom/hooks
# Create the domain
env DEBUG=1 VERBOSE=-1 AddDomain -c siteinfo,domain=hooktest.com -c siteinfo,admin_user=hooktest -c dns,enabled=0
env DEBUG=1 enables opportunistic debugging, which generates additional information at runtime.
VERBOSE=-1 is a shorthand flag to enable backtraces for all levels of error reporting. Backtraces help identify the pathway a problem bubbles up. Refer to sample code in each hook for context strategies.
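As a hypothetical sketch (the authoritative examples are the ones shipped in bin/hooks), a minimal account hook might look like the following; the only assumption made here is the documented one that the first argument is the site identifier:

```php
<?php
// config/custom/hooks/editDomain.php — illustrative only.
// Account hooks are passive: they can react to the event but cannot interrupt it.
$site = $argv[1] ?? null;
if ($site === null) {
    exit(1);
}

// Example reaction: append the edit event to a log file (hypothetical path).
file_put_contents(
    '/var/log/apnscp-site-edits.log',
    date('c') . " editDomain fired for {$site}\n",
    FILE_APPEND
);
```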
# API hooks
Programming is covered in detail in PROGRAMMING.md. Hooks are a simplified means of reacting to an API call in ApisCP. Unlike a surrogate, which extends a module's internals, an API hook is decoupled from implementation details and may only react to the return value (and arguments) of an API call. A hook cannot interrupt an API call, nor is it always called for each invocation. A hook is called within the module context and thus has access to all private and protected properties and methods of the module class to which it binds; therefore it must not be declared with the
static modifier.
TIP
Hooks are only called if the method is the first call of the module API. For tighter control, a surrogate is preferred, which is always called when the corresponding method is called.
API hooks are declared in
config/custom/boot.php as with other overrides. Hooks are intended to initialize early in the request lifecycle. Once a module is invoked, associated callbacks are frozen.
<?php
\a23r::registerCallback('common', 'whoami', function ($ret, $args) {
    info("whoami called with arguments: [%s] + %d permission level",
        implode(', ', $args),
        $this->permission_level
    );
});
Sample output
Running
cpcmd common:whoami would report the following,
INFO    : whoami called with arguments: [] + 8 permission level
----------------------------------------
MESSAGE SUMMARY
Reporter level: OK
INFO: whoami called with arguments: [] + 8 permission level
----------------------------------------
admin
Hooks cannot interrupt flow, but can enhance it. Consider installing WordPress and bundling additional plugins at install. This would fire after WordPress has successfully installed. The following example would add Yoast SEO + WP Smushit plugins and install Hello Elementor theme using the ApisCP API.
<?php
\a23r::registerCallback('wordpress', 'install', function ($ret, $args) {
    if (!$ret) {
        return;
    }
    // get arguments to wordpress:install
    [$hostname, $path, $opts] = $args;
    foreach (['wordpress-seo', 'wp-smushit'] as $plugin) {
        $this->wordpress_install_plugin($hostname, $path, $plugin);
    }
    // install and activate Hello Elementor theme
    $this->wordpress_install_theme($hostname, $path, 'hello-elementor');
});
Likewise consider the call graph for
wordpress:install:
Methods in green will be checked for callback functionality. Methods in red will not. Callbacks only work on the entry point of the module. A cross-module call (calling another method in another module) creates an entry point in a new module. An in-module call conversely does not leave the module and will not trigger a callback. Any method could be called independently and it would trigger a callback.
If greater fidelity is required, consider converting the callbacks into a surrogate. The above example may be rewritten in surrogate form as:
<?php
class Wordpress_Module_Surrogate extends Wordpress_Module {
    public function install(string $hostname, string $path = '', array $opts = array()): bool
    {
        if (!parent::install($hostname, $path, $opts)) {
            return false;
        }
        foreach (['wordpress-seo', 'wp-smushit'] as $plugin) {
            $this->install_plugin($hostname, $path, $plugin);
        }
        $this->install_theme($hostname, $path, 'hello-elementor');
        return true;
    }
}
DefaultReceiveTimeout Field
[This documentation is for preview only, and is subject to change in later releases. Blank topics are included as placeholders.]
Represents the default receive time-out.
Namespace: Ws.Services.Binding
Assembly: MFWsStack (in MFWsStack.dll)
Syntax
'Declaration
Public Shared ReadOnly DefaultReceiveTimeout As TimeSpan
public static readonly TimeSpan DefaultReceiveTimeout
public: static initonly TimeSpan DefaultReceiveTimeout
static val DefaultReceiveTimeout: TimeSpan
public static final var DefaultReceiveTimeout : TimeSpan
Remarks
The default receive time-out is 5 minutes.
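A minimal usage sketch (assuming the field is read from the CommunicationObject class listed under See Also — adjust the type name to the actual declaring class in your framework version):

```csharp
using System;
using Ws.Services.Binding;

class TimeoutExample
{
    static void Main()
    {
        // Read the default receive time-out (documented as 5 minutes).
        TimeSpan timeout = CommunicationObject.DefaultReceiveTimeout;
        Console.WriteLine(timeout);  // 00:05:00
    }
}
```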
.NET Framework Security
- Full trust for the immediate caller. This member cannot be used by partially trusted code. For more information, see Using Libraries from Partially Trusted Code.
See Also
Reference
CommunicationObject Class
Ws.Services.Binding Namespace
DOTS Address Validation International (AVI) can be used as a RESTful service or with SOAP. AVI is designed to take an international address, validate it and return a standardized international version of the address. Depending on the information available for a given address, AVI can return additional information about a given address. For example, it can return Delivery Point Validation (DPV) information for US addresses.
AVI can provide instant address verification and correction to websites or enhancement to contact lists. However, the output from AVI must be considered carefully before the existence or non-existence of an address is decided.
Operations
This section lists the DOTS Address Validation International operations and goes into the details behind the inputs and outputs.
Operations:
GetAddressInfo
Country Support Table
This section outlines all the countries supported by DOTS Address Validation International.
Status and ResolutionLevel Tables
This section outlines the Status and ResolutionLevel values returned by the service.
InformationComponents and Status Codes
This section outlines the possible InformationComponent and Status code values returned by the service.
Errors
This section reflects details on the error outputs that can happen with the service.
Code Snippets and Sample Code
Here you'll find code snippets for various programming languages and frameworks along with links to our sample code page on the web site.
Try The API
This is where you'll go to take the API for a spin. There you can test our recommended operation GetAddressInfo.
Integrating AVI
AVI is a public web service that supports SOAP, POST and GET operations, using RESTful paradigms or simple HTTP transport calls.
The host path, or physical location of the RESTful web service is here:
The host path, or physical location of the SOAP web service is here:
A test page for the recommended operation can be found here:
AVI - Try The API
See the service references and try the other operations here:
AVI - Service Reference
The location of the WSDL, or Web Service Definition Language document, is here (This is also accessible via the "Service Definition" link.):
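As a purely hypothetical illustration of the RESTful calling pattern (the real host, path, and parameter names are given on the service reference and "Try The API" pages referenced above):

```sh
# Hypothetical GET request shape for GetAddressInfo — substitute the real endpoint,
# parameter names, and your license key from the service reference.
curl "https://<avi-host>/<path>/GetAddressInfo?Address1=27+Old+Gloucester+St&Locality=London&PostalCode=WC1N+3AX&Country=United+Kingdom&LicenseKey=<your-key>"
```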
sys.fn_check_object_signatures (Transact-SQL)
Syntax
fn_check_object_signatures ( { '@class' } , { @thumbprint } )
Arguments
{ '@class' }
Identifies the type of thumbprint being provided:
'certificate'
'asymmetric key'
@class is sysname.
{ @thumbprint }
SHA-1 hash of the certificate with which the key is encrypted, or the GUID of the asymmetric key with which the key is encrypted. @thumbprint is varbinary(20).
Tables Returned
The following table lists the columns that fn_check_object_signatures returns.
Remarks
Use fn_check_object_signatures to confirm that malicious users have not tampered with objects.
Permissions
Requires VIEW DEFINITION on the certificate or asymmetric key.
Examples
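The following illustrative example (not from the original page) checks the signatures of objects signed with a certificate named SigningCert — a hypothetical name; the thumbprint is looked up from sys.certificates:

```sql
DECLARE @thumbprint varbinary(20);

-- Look up the SHA-1 thumbprint of the signing certificate (hypothetical name).
SELECT @thumbprint = thumbprint
FROM sys.certificates
WHERE name = 'SigningCert';

-- Return the signature-check result for every object signed with that certificate.
SELECT *
FROM sys.fn_check_object_signatures('certificate', @thumbprint);
```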
See Also
IS_OBJECTSIGNED (Transact-SQL)
Converting Deformation Animation to Drawings
At times, you may want to adjust your deformation animation using the drawing tools to perfect something or add detail. You may also want to change the timing and set it on double frame instead of single frame. Harmony offers you the option to convert your deformation animation to an actual drawing sequence.
There are two ways to edit a form. You can edit the properties of a form, such as title, categories and tags and you can edit the content of a form by adding layout elements and form widgets.
Editing the properties of a form
You can edit the properties of a form in the following way:
Caveats in Android
Below are all caveats in Android.
Know caveats? Feel free to add them!
Linker settings
When linking in release mode (or debug if you would like), the linker will remove all unused items from the final application assembly. Since the binding system in Catel uses reflection, it might break when the linker is too aggressive in optimizing the app. To prevent this optimization, create a dummy file that uses the members of each type so the linker will not exclude them. Note that this class will never be instantiated, nor will its methods be invoked. It is purely to let the static analysis be notified of the usage.
Note that this applies to both properties and bindings
Below is an example class to force the inclusion of members in Android. For each type and its members, a method is added. Then each used property is accessed and each used event is subscribed to.
public class LinkerInclude
{
    public void IncludeActivity(Activity activity)
    {
        activity.Title = activity.Title + string.Empty;
    }

    public void IncludeButton(Button button)
    {
        button.Text = button.Text + string.Empty;
        button.Click += (sender, e) => { };
    }

    public void IncludeEditText(EditText editText)
    {
        editText.Text = editText.Text + string.Empty;
        editText.TextChanged += (sender, e) => { };
    }

    public void IncludeCommand(ICatelCommand command)
    {
        command.CanExecuteChanged += (sender, e) => { };
    }
}
Have a question about Catel? Use StackOverflow with the Catel tag!
Gradient¶
Inherits: Resource < Reference < Object
Category: Core
Description¶
Given a set of colors, this node will interpolate them in order, meaning, that if you have color 1, color 2 and color 3, the ramp will interpolate (generate the colors between two colors) from color 1 to color 2 and from color 2 to color 3. Initially the ramp will have 2 colors (black and white), one (black) at ramp lower offset 0 and the other (white) at the ramp higher offset 1.
Property Descriptions¶
- PoolColorArray colors
Gradient's colors returned as a PoolColorArray.
- PoolRealArray offsets
Gradient's offsets returned as a PoolRealArray.
Method Descriptions¶
Adds the specified color to the end of the ramp, with the specified offset
Returns the color of the ramp color at index point
Returns the offset of the ramp color at index point
Returns the number of colors in the ramp
Returns the interpolated color specified by offset
Removes the color at the index offset
Sets the color of the ramp color at index point
Sets the offset for the ramp color at index point
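A short illustrative GDScript snippet (method names as used by the Godot 3.x Gradient class; treat them as assumptions if your version differs):

```gdscript
func _ready():
    var ramp = Gradient.new()            # starts with black at offset 0.0 and white at 1.0
    ramp.add_point(0.5, Color(1, 0, 0))  # insert red half-way along the ramp
    var sampled = ramp.interpolate(0.25) # color interpolated between the black and red points
    print(sampled)
    print(ramp.get_point_count())        # 3
```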
Example of defining a customized technology¶
Result
In this example, we define a customized technology by extending a predefined technology in ipkiss. We add a layer which is used to open the back-end oxide (top oxide) on top of the waveguides. This can be used to open a window for a sensing application, for instance to let a gas or fluid reach a ring resonator on top of which a window is opened.
Fig. 55 Ring resonator covered by a back-end opening layer
Illustrates
- how to create a customized technology file by importing a predefined file
- how to add a new process layer
- how to define import and export settings and visualization
- how to define a virtual fabrication process
- how to alter values in the predefined technology files
Files (see: samples/ipkiss3/samples/wrapped_disk)
There are two files contained in this step.
mytech/__init__.py: this is the file which defines the customized technology
execute.py: this is the file that is executed by python. It defines the covered ring resonator
How to run this example
To run the example, run ‘execute.py’.
Getting started¶
We start by creating an empty folder for the new technology (in this example ‘mytech’), and create a file __init__.py.
The __init__.py file starts by importing a predefined technology in ipkiss, and assigning a meaningful name to the technology:
from technologies import silicon_photonics TECH.name = "CUSTOMIZED TECHNOLOGY SAMPLE"
Adding a process layer¶
In order to add a new layer, we need to do two things:
- Add the ProcessLayer - this tells ipkiss about the existance of the layer related to a specific process module
- Add one or more ProcessPurposeLayers - this tells ipkiss which combinations of the new ProcessLayer and PatternPurposes exist. In this sample we'll add only one.
TECH.PROCESS.OXIDE_OPEN = ProcessLayer(name="OX_OPEN", extension="OXO") TECH.PPLAYER.OXIDE_OPEN = ProcessPurposeLayer(process=TECH.PROCESS.OXIDE_OPEN, purpose=TECH.PURPOSE.DF.TRENCH, name="OX_OPEN")
The TECH.PURPOSE.DF.TRENCH pattern purpose means that the geometries drawn will be etched trenches on a dark (unetched) field.
Defining rules¶
The technology tree can contain some design rules, defaults or drawing guidelines. In this case, we’ll add a few keys for the minimum width and spacing of patterns on our new OXIDE_OPEN layer:
TECH.OXIDE_OPEN = TechnologyTree() TECH.OXIDE_OPEN.MIN_WIDTH = 10.0 # minimum width of 10.0um TECH.OXIDE_OPEN.MIN_SPACING = 5.0 # minimum spacing of 5.0um
Visualization and import/export settings¶
In order to make the new layer export to GDSII (and import from GDSII), we need to tell ipkiss which layer number to use for the process layer. In the technology example shipped with Ipkiss, every process layer corresponds to a GDSII Layer number, and every pattern purpose corresponds to a GDSII datatype. This can be controlled in a more fine-grained way, but that is outside the scope of this sample.
In order to have Ipkiss visualize the layer using the .visualize() method on a Layout view, we need to set the display style for the ProcessPurposeLayer. In this case, we set it to a yellow color with a 50% transparance.
TECH.GDSII.PROCESS_LAYER_MAP[TECH.PROCESS.OXIDE_OPEN] = 10 DISPLAY_OXIDE_OPEN = DisplayStyle(color = color.COLOR_YELLOW, alpha = 0.5, edgewidth = 1.0) TECH.DISPLAY.DEFAULT_DISPLAY_STYLE_SET.append((TECH.PPLAYER.OXIDE_OPEN, DISPLAY_OXIDE_OPEN))
Overwriting technology keys¶
Overwriting technology settings which were already defined, is possible as well. We can for instance increase the default bend radius on the WG layer to 10 micrometer:
TECH.WG.overwrite_allowed.append("BEND_RADIUS") TECH.WG.BEND_RADIUS = 10.0
Example: ring resonator with back-end opening cover layer¶
In order to use our newly defined technology, we make sure to import it before anything else in the main script. Make sure it is imported before ipkiss3 and before any picazzo library components ! Make sure that your new technology folder (in this case ‘mytech’) is in your PYTHONPATH.
from mytech import TECH import ipkiss3.all as i3 from picazzo3.filters.ring import RingRect180DropFilter
We continue by defining a simple PCell which consists of a ring resonator and a cover layer:
class RingWithWindow(i3.PCell): ring_resonator = i3.ChildCellProperty() def _default_ring_resonator(self): return RingRect180DropFilter() class Layout(i3.LayoutView): def _generate_instances(self, insts): insts += i3.SRef(reference=self.ring_resonator, position=(0.0,0.0)) return insts def _generate_elements(self, elems): si = self.ring_resonator.size_info() size = (min(si.width, TECH.OXIDE_OPEN.MIN_WIDTH), min(si.height, TECH.OXIDE_OPEN.MIN_WIDTH)) # use rule in tech elems += i3.Rectangle(layer=TECH.PPLAYER.OXIDE_OPEN, center=(0.0,0.0), box_size=size) return elems
We can now instantiate the component, visualize its layout and export to GDSII:
R = RingWithWindow(name="ring_with_window") Rlayout = R.Layout() Rlayout.visualize() # show layout elements Rlayout.visualize_2d() # virtually fabricated Rlayout.write_gdsii("ring_with_window.gds")
Fig. 56 Ring resonator covered by a back-end opening layer: visualization
Fig. 57 Ring resonator covered by a back-end opening layer: GDSII
Fig. 58 Ring resonator covered by a back-end opening layer: virtually fabricated geometry
Testcase to check if Saldovortrag in Konten-Information for Revenue or Expense accounts from year end is not taken for the new year.
Run report Salobilanz for the last day of last year, e.g. 31.12.2015
Note the amounts for accts X and Y (numbers are examples):
Run report Salobilanz for the first day of this year, e.g. 01.01.2016, note the amounts for accts X and Y:
In Konten (Account Element), change the account type of account X to one != Expense or Revenue, check currency and select EUR
CreateLoginProfile
Creates a password for the specified user, giving the user the ability to access AWS services through the AWS Management Console. For more information about managing passwords, see Managing Passwords in the IAM User Guide.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- Password
The new password for the user.
- PasswordResetRequired
Specifies whether the user is required to set a new password on next sign-in.
Type: Boolean
Required: No
- UserName
The name of the IAM user to create a password for. The user must already exist.
This parameter allows (through its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
Type: LoginProfile object

Examples

Sample Request

https://iam.amazonaws.com/?Action=CreateLoginProfile
&UserName=Bob
&Password=h]6EszR}vJ*m
&Version=2010-05-08
&AUTHPARAMS
Sample Response
<CreateLoginProfileResponse xmlns="">
  <CreateLoginProfileResult>
    <LoginProfile>
      <PasswordResetRequired>false</PasswordResetRequired>
      <UserName>Bob</UserName>
      <CreateDate>2015-03-25T20:48:52.558Z</CreateDate>
    </LoginProfile>
  </CreateLoginProfileResult>
  <ResponseMetadata>
    <RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
  </ResponseMetadata>
</CreateLoginProfileResponse>
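For illustration, the equivalent call with the AWS CLI (not part of the original page) would look roughly like this:

```sh
aws iam create-login-profile \
    --user-name Bob \
    --password 'h]6EszR}vJ*m' \
    --no-password-reset-required
```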
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Yes, we have a partner program for consultants and coaches using Spidergap with their clients.
Our partners are coaches and consultants that provide additional services that are supported by our 360 Degree Feedback tool. For example:
- Project management
- Executive coaching
- Training for line managers, administrators, etc.
You can view our existing partners and the services they provide here.
Interested in becoming a partner? Read more about the partner benefits and how to join.
Download & Install Graylog¶
Graylog can be deployed in many different ways, You should download whatever works best for you. For those who would like to do an initial lab evaluation of Graylog, we recommend starting with the virtual machine appliances.
Virtual Appliances are definitely the fastest way to get started. However, since the virtual appliances are generally not suitable for use in production, they should be used strictly for proof of concept, evaluations or lab environments.
The virtual appliances are also completely unsecured. No hardening has been done and all default services are enabled.
For production deployments users should select and deploy one of the other, more flexible, installation methods.
Operating System Packages¶
Graylog may be installed on the following operating systems.
- Ubuntu
- Debian
- RHEL/CentOS
- SLES
Most customers use package tools like DEB or RPM to install the Graylog software. Details are included in the section, Operating System Packages.
Configuration Management¶
Customers who prefer to deploy graylog via configuration management tools may do so. Graylog currently supports Chef, Puppet, Ansible.
Containers¶
Graylog supports Docker for deployment of Graylog, MongoDB and Elasticsearch. Installation and configuration instructions may be found on the Docker installation page.
Virtual Appliances¶
Virtual appliances may be downloaded from the virtual appliance download page. If you are unsure what the latest stable version number is, take a look at our release page.
Supported Virtual Appliances
- OVA
- AWS-AMI
Deployment guide for Virtual Machine Appliances.
Deployment guide for Amazon Web Services.
Virtual Appliance Caveats¶
Virtual appliances are not suitable for production deployment out of the box. They do not have sufficient storage, nor do they offer capabilities like index replication that meet high availability requirements.
The virtual appliances are not hardened or otherwise secured. Use at your own risk and apply all security measures required by your organization.
Testcase to check if there are correct relations between PMM_PurchaseCandidates and their C_Orders.
Create a PMM_PurchaseCandidate for bpartner G000X, P0001, with the WebUI (set a qty so you can order sev. TUs!)
Open the PMM_PurchaseCandidate, search for bpartner G000X, P0001, and date of test
Set a qty for Quantity to order (TU), but not for the full qty of product
Create the purchase order
In the PMM_PurchaseCandidate, click Zoom Across
=> the purchase order you created is displayed
=> the PMM_PurchaseCandidate is displayed
Zoom back into the PMM_PurchaseCandidate
Create 2 more purchase orders, and then click Zoom Across again
Zoom into the purchase orders
Create another PMM_PurchaseCandidate for bpartner G000X, P0001, with the WebUI
Open this PMM_PurchaseCandidate, and change the packing instruction or price
Set a qty for Quantity to order (TU) in this PMM_PurchaseCandidate, and also for the older one
Create the purchase order, so that it includes both PMM_PurchaseCandidates
Zoom into the purchase order, and click Zoom Across there
=> the two PMM_PurchaseCandidate are displayed (=> Procurement candidates #2)
Support¶
Mailing list¶
If you have general questions about Wagtail, or you’re looking for help on how to do something that these documents don’t cover, join the mailing list at groups.google.com/d/forum/wagtail.
Issues¶
If you think you've found a bug in Wagtail, or you'd like to suggest a new feature, please check the current list at github.com/torchbox.
Basics 1 - Setup
Welcome to the Basics Tutorial!
In this lesson you’ll:
- set up your machine for development with SpatialOS and Unreal
- get the project code
- run the project locally
- use the Inspector to explore the world
- learn about entities, components and workers - the core concepts of SpatialOS
This first lesson is mostly background work you need to do before you can get going with development. Once your machine is set up and the project is running, we’ll use the project to explain the basic concepts of SpatialOS. So bear with us!
1. Set up your machine
To set up your machine for developing with SpatialOS, follow the setup guide for Windows, including the optional Unreal part. This sets up the
spatial command-line tool, which you use to build, run, and deploy your game. You’ll use it later in the lesson to build the game and run a local deployment.
At the moment, the SpatialOS Unreal SDK only supports development in Windows.
It's done when: You run spatial version in a terminal, and see 'spatial' command-line tool version: <number> (or 'spatial.exe' command-line tool version: <number>) printed in your console output.
2. Set up the project
In this tutorial you’ll develop on a variation of the Unreal Starter Project that we’ve prepared.
This variation contains a pawn for the Player (replacing the cube) and a “bot” pawn.
2.1. Download the source code
Download and unzip the source code.
2.2. Check your setup
Open a terminal and navigate to the directory you just unzipped (the directory that contains
spatialos.json).
Run
spatial diagnose. This checks that you installed all the software you needed correctly.
If spatial diagnose finds errors with your setup, fix them.
It's done when: You see 'spatial diagnose' succeeded printed in your console output.
2.3. Build the project
In same terminal window, run
spatial worker build --target=local.
This builds the game. It can take a while, so you might want to go and get a cup of tea.
If it doesn’t work first time, you can retry by running
spatial worker clean then
spatial worker build again.
It's done when: You see 'spatial build' succeeded printed in your console output.
2.4. Run the project locally
In the same terminal window as before, run a local deployment of the game by running
spatial local launch. This can take a minute to get up and running.
It’s done when: You see
SpatialOS ready. Access the inspector at printed in your console output. However, the configuration of the project also launches one managed worker and you’ll know it’s ready when you see
The worker UnrealWorker0 registered with SpatialOS successfully. from the Unreal Editor.
Locate the Unreal project file for your project. This can be found in
PROJECT_ROOT/workers/unreal/Game
Right-click on your project's .uproject file and select Switch Unreal Engine version.
Switch engine versions to the source-built version you built previously.
Double-click on StarterProject.uproject to open the project in Unreal. You'll see a scene containing just a flat surface that is the ground.
Click Play ▶ and when it connects, you’ll see some log messages. You’ll be in control of a pawn that you can move using
WASD, and you'll see a bot standing in front of you.
Anything that has some kind of persistence or has data which needs to be replicated in different clients in your game world should be an entity.
Entities are made of components, which describe an entity’s properties (like position, health, or waypoints).
The main area of the Inspector shows you a top-down, real-time view of the entities in your game world. This project contains three entities: your Player that was created when your client connected, a Spawner (you’ll learn about this later, but essentially it’s an entity that can create new entities like your player), and the standing bot.
The bottom-right list initially shows how many entities of each type there are in the current view of the Inspector. We’ll look at this area in more detail shortly.
Workers
The top-right list shows the workers connected to SpatialOS.
Essentially, workers are just programs that can read from and modify the entities in the world you see in the Inspector.
In this project, all the workers are instances of Unreal. There are two types:
Unreal running in headless mode. These workers handle the server-side logic. In the list, UnrealWorker0 is of this type.
Unreal running as a client. There'll be one of these in the list for every player connected. At the moment, there's just one: yours (UnrealClient followed by a list of random characters).
The number next to each worker represents its load: a measure of how much work it is doing. SpatialOS uses this number to start and stop server-side workers as necessary.
The project is configured to start with one
UnrealWorker; but if you expanded the game world, you’d need more. SpatialOS automatically allocates areas of the world to each
UnrealWorker, and seamlessly stitches the world together between them according to several patterns that you can configure.
Components
As mentioned before, entities are made up of components, and you can view these in the Inspector. To do so, select your player entity by left-clicking on it. Since there are two entities in the same position, you’ll see that you selected both:
This view lists all the entities of each type that you’ve selected. To see the individual ones, expand the list:
You’ll see there’s one player entity (in this case the entity with ID
3). Click the ID number to select the entity.
This view lists all the components of the entity, and which workers have authority over them (you’ll learn about authority in more detail later).
You can select a component to see the information it contains. Do so for the
Position one and you’ll see it has a set of coordinates
coords, which in this case are (0,0,0).
Components are really what define an entity. They can be anything you like, but common examples might be health, or stats. Every entity has a
Position component that defines its location in the SpatialOS world.
3.2. (optional) Stop a worker
To see worker management in action, you can stop the worker running the game logic, and see how SpatialOS starts a new one - without any disruption to players at all. To try this:
Click the name
Unreal.
4. Stop the game running
In Unreal, click the Stop button to stop your client.
In your terminal, stop the server side of the game by pressing Ctrl + C.
Lesson summary
In this lesson you’ve set up the SpatialOS SDK on your machine, and run the project for the first time. You’ve also learned about some fundamental SpatialOS concepts: entities, components and workers.
What’s next?
In the next lesson you'll add basic movement to the bot as a way to learn how to modify property values of a component.
All content with label batch+hibernate_search+hot_rod+import+infinispan+jboss_cache+listener+release+scala.
Related Labels:
expiration, publish, datagrid, coherence, server, replication,, index, events, configuration, hash_function, buddy_replication, loader, xa, write_through, cloud, jsr352, tutorial, notification, jbosscache3x, read_committed, xml, distribution, cachestore, data_grid, cacheloader, resteasy, cluster, br, permission, websocket, transaction, async, interactive, xaresource, build, searchable, demo, installation, cache_server, client, jberet, migration, non-blocking,
( - batch, - hibernate_search, - hot_rod, - import, - infinispan, - jboss_cache, - listener, - release, - scala )
All content with label client+gridfs+installation+jcache+jsr-107+repeatable_read.
Related Labels:
remoting, mvcc, datagrid, shell, tutorial, wcm, client_server, server, infinispan, read_committed, hotrod, webdav, s, started, getting, consistent_hash, data_grid, interface, clustering,
setup, lock_striping, concurrency, examples, gatein, locking, cache, hash_function, memcached, grid, demo, configuration, api, command-line, hot_rod, filesystem
( - client, - gridfs, - installation, - jcache, - jsr-107, - repeatable_read )
All content with label faq+gridfs+hotrod+infinispan+jta+mvcc+notification+setup.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, recovery,, events, hash_function, configuration, batch, buddy_replication, loader, xa, cloud, remoting, tutorial, murmurhash2, read_committed, xml, jbosscache3x, distribution, meeting,, 2lcache, as5, jsr-107, jgroups, locking, rest, hot_rod
( - faq, - gridfs, - hotrod, - infinispan, - jta, - mvcc, - notification, - setup )
DeregisterScalableTarget
Deregisters a scalable target.
Deregistering a scalable target deletes the scaling policies that are associated with it.
To create a scalable target or update an existing one, see RegisterScalableTarget.
Request Syntax
{ "ResourceId": "
string", "ScalableDimension": "
string", "ServiceNamespace": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- ResourceId
The identifier of the resource associated with the scalable target. More information is available in our GitHub repository.

Response Elements
If the action is successful, the service sends back an HTTP 200 response with an empty HTTP body.

Examples
The following example deregisters a scalable target for an Amazon ECS service called web-app that is running in the default cluster.
Sample Request
POST / HTTP/1.1
Host: autoscaling.us-west-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 117
X-Amz-Target: AnyScaleFrontendService.DeregisterScalableTarget
X-Amz-Date: 20160506T210150Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS

{
    "ResourceId": "service/default/web-app",
    "ServiceNamespace": "ecs",
    "ScalableDimension": "ecs:service:DesiredCount"
}
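For illustration, the same request issued with the AWS CLI (not part of the original page) would look roughly like this:

```sh
aws application-autoscaling deregister-scalable-target \
    --service-namespace ecs \
    --resource-id service/default/web-app \
    --scalable-dimension ecs:service:DesiredCount
```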
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
All content with label amazon+consistent_hash+import+infinispan+listener+snapshot.
Related Labels:
expiration, publish, datagrid, coherence, server, rehash, replication, transactionmanager, dist, release, query, deadlock, archetype, lock_striping, jbossas, nexus, guide, schema, state_transfer,,, scala, client, migration, rebalance, jpa, filesystem, tx, gui_demo, eventing, client_server, testng, murmurhash, infinispan_user_guide, standalone, hotrod, repeatable_read, docs, batching, jta, faq, 2lcache, as5, jsr-107, docbook, lucene, jgroups, locking, rest, hot_rod
( - amazon, - consistent_hash, - import, - infinispan, - listener, - snapshot )
All content with label faq+hot_rod+index+infinispan+jboss_cache+listener+publish+release+s3.
Related Labels:
expiration, datagrid, interceptor, server, transactionmanager, dist, partitioning, query, deadlock, archetype, jbossas, lock_striping, nexus, guide, schema, cache, amazon, grid, jcache,
test, api, xsd, ehcache, maven, documentation, write_behind, 缓存, ec, hibernate_search, resteasy, cluster, br, development, websocket, transaction, async, interactive, xaresource, build, searchable, demo, installation, cache_server, scala, client, non-blocking, migration, filesystem, jpa, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, snapshot, hotrod, webdav, docs, consistent_hash, batching, store, jta, 2lcache, as5, jsr-107, jgroups, lucene, locking
( - faq, - hot_rod, - index, - infinispan, - jboss_cache, - listener, - publish, - release, - s3 )
new (new slot in vtable) (C++ Component Extensions)
The latest version of this topic can be found at new (new slot in vtable) (C++ Component Extensions).
The
new keyword indicates that a virtual member will get a new slot in the vtable.
Note
The
new keyword has many uses and meanings. For more information, see the disambiguation topic new.
All Runtimes
(There are no remarks for this language feature that apply to all runtimes.)
Windows Runtime
Not supported in Windows Runtime.
Common Language Runtime
Remarks
In a /clr compilation,
new indicates that a virtual member will get a new slot in the vtable; that the function does not override a base class method.
new causes the newslot modifier to be added to the IL for the function. For more information about newslot, see:
Requirements
Compiler option: /clr
Examples
Example
The following sample shows the effect of
new.
// newslot.cpp
// compile with: /clr
ref class C {
public:
   virtual void f() {
      System::Console::WriteLine("C::f() called");
   }

   virtual void g() {
      System::Console::WriteLine("C::g() called");
   }
};

ref class D : public C {
public:
   virtual void f() new {
      System::Console::WriteLine("D::f() called");
   }

   virtual void g() override {
      System::Console::WriteLine("D::g() called");
   }
};

ref class E : public D {
public:
   virtual void f() override {
      System::Console::WriteLine("E::f() called");
   }
};

int main() {
   D^ d = gcnew D;
   C^ c = gcnew D;

   c->f();   // calls C::f
   d->f();   // calls D::f

   c->g();   // calls D::g
   d->g();   // calls D::g

   D ^ e = gcnew E;
   e->f();   // calls E::f
}
Output
C::f () called
D::f () called
D::g () called
D::g () called
E::f () called
See Also
Component Extensions for Runtime Platforms
Override Specifiers
GemFire Data Serialization (DataSerializable and DataSerializer)
GemFire’s
DataSerializable interface gives you quick serialization of your objects.
Data Serialization with the DataSerializable Interface.
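A minimal illustrative sketch of the interface (package name assumed to be org.apache.geode as in GemFire 9.x / Geode; adjust imports for your version):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.geode.DataSerializable;

// A DataSerializable class controls its own wire format via toData()/fromData()
// and must provide a public zero-argument constructor for deserialization.
public class Position implements DataSerializable {
    private int x;
    private int y;

    public Position() {}                          // required by the framework

    public Position(int x, int y) { this.x = x; this.y = y; }

    @Override
    public void toData(DataOutput out) throws IOException {
        out.writeInt(x);
        out.writeInt(y);
    }

    @Override
    public void fromData(DataInput in) throws IOException, ClassNotFoundException {
        x = in.readInt();
        y = in.readInt();
    }
}
```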
@XmlElement(value="ContrastEnhancement") public interface ContrastEnhancement
In the case of a color image, the relative grayscale brightness of a pixel color is used. 'Normalize' means to stretch the contrast so that the dimmest color is stretched to black and the brightest color is stretched to white, with all colors in between stretched out linearly. 'Histogram' means to stretch the contrast based on a histogram of how many colors are at each brightness level on input, with the goal of producing equal number of pixels in the image at each brightness level on output. This has the effect of revealing many subtle ground features. A 'GammaValue' tells how much to brighten (value greater than 1.0) or dim (value less than 1.0) an image. The default GammaValue is 1.0 (no change). If none of Normalize, Histogram, or GammaValue are selected in a ContrastEnhancement, then no enhancement is performed.
@XmlElement(value="Normalize,Histogram,Logarithmic,Exponential") ContrastMethod getMethod()
@XmlElement(value="GammaValue") Expression getGammaValue()
@Extension Object accept(StyleVisitor visitor, Object extraData)
visitor - the style visitor
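An illustrative GeoTools sketch (factory method names assumed from the GeoTools styling API; verify against your GeoTools version):

```java
import org.geotools.factory.CommonFactoryFinder;
import org.geotools.styling.StyleFactory;
import org.opengis.filter.FilterFactory;
import org.opengis.style.ContrastEnhancement;
import org.opengis.style.ContrastMethod;

public class ContrastExample {
    public static void main(String[] args) {
        StyleFactory sf = CommonFactoryFinder.getStyleFactory(null);
        FilterFactory ff = CommonFactoryFinder.getFilterFactory(null);

        // Normalize contrast and apply a gamma of 0.5 (values < 1.0 dim the image).
        ContrastEnhancement ce =
                sf.contrastEnhancement(ff.literal(0.5), ContrastMethod.NORMALIZE);

        System.out.println(ce.getGammaValue());
    }
}
```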
Setting up the editor environment in Wing IDE¶
As an alternative to PyCharm, you can also use Wing IDE. To install Wing IDE, please go to the Wing IDE website to download a copy. After installation, you get the fully functional product running on a time-limited license, with up to three 10-day trial periods.
Creating a new project and choosing the python executable¶
Note
If you want, you can skip the instructions below and directly open the sample project. The sample project can be accessed by clicking the sample button in Luceda Control Center. This opens a folder. In this folder, there is a wing project
samples.wpr. Opening this file will start Wing IDE with the correct Python interpreter, and the samples folder is added to the project. Once you create a new project, you have to go through the steps below.
We start by creating a new project and choosing the correct Python interpreter (see figure starting a new project). Choose . For the Python executable, choose custom, and select
envs\ipkiss3\python.exe from your installation.
Then we start a new project in Wing IDE. Make sure you choose the correct Python interpreter. The python interpreter (python.exe) is the main program that runs all the IPKISS scripts that you create. For each created environment (remember, by default, the environment ipkiss3 is created), a new Python executable is created (note: do not use pythonw.exe. pythonw.exe is used to start up windowed applications, in order to suppress the console).
After pressing OK, you are asked to save the project file now or later. We suggest to save it immediately, and store it on a working location where you wish to work with IPKISS, e.g.,
C:\users\username\Documents\luceda\ipkisstest.
You can now create a new file, hello.py, in the working location (this is shown in the next step, too).
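For example, hello.py could contain a few trivial lines to exercise both executing and debugging (illustrative only):

```python
# hello.py - a minimal script to try out executing and debugging in Wing IDE
message = "Hello from IPKISS + Wing IDE"
print(message)

total = sum(range(5))  # set a breakpoint on this line to inspect 'total' in the debugger
print(total)
```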
The editor environment¶
The Wing IDE environment looks like the figure below (depending on the exact version of the software, there might be minor differences). The most important tabs and windows are annotated:
When you open Wing IDE for the first time, it looks like this. Important windows have been annotated.
One important additional step is setting the
PATH variable. We need to add two paths here, one is
Scripts\, the other one is
Library\mingw-w64\bin. To do this, go to , and choose Add to inherited environment. Now you can set the
PATH. Please change
C:\luceda\ipkiss_313 to the location where you installed IPKISS. This is also shown in figure setting the PATH:
PATH=C:\luceda\ipkiss_313\python\envs\ipkiss3\Scripts;C:\luceda\ipkiss_313\python\envs\ipkiss3\Library\mingw-w64\bin;${PATH}
Before we start creating additional files, it is important to add a folder to the project. This is explained in the next step.
Adding a folder to the project¶
Currently, no file or folder is added to your project. Adding a folder ensures that you can easily browse through the project files by using the project tab on the right. To add a folder, right-click inside the project pane, and press Add Existing Directory…. Then, choose the folder you wish to add to the project, and choose which files are included in the project (e.g., all files, only Python files, …):
In the project tab, right-click and press Add Existing Directory. Add your project directory to the project.
After adding the directory to the project, new files and folders that are created will automatically show up in the Project tab.
Executing your first file¶
Note that by adding the folder to your project, the
hello.py should show up in the project pane (make sure it is saved in the correct location). Now there are two ways to run a file in Python: by executing it, or by debugging it.
- To execute a file, go to.
- To debug a file, go to (alternatively, press F5, or press the green play button).
Go ahead and execute the current file. If you execute the file, the output of the program will be written to the OS Commands pane:
Now we quickly demonstrate how to debug a file. To illustrate this, we add a few lines, and add a breakpoint in the code. We then start debugging. The program is paused at the breakpoint location:
From the breakpoint location, you can check the content of each variable, or run additional Python commands. Pressing the continue button will continue code execution.
Debugging a file goes slower than executing the file. However, when an error occurs in your program, the debugger logs the state of the program at the location where the error occurs. It allows the user to inspect variables and run additional commands to find out the source of the error. Hence, it is a very useful way to find errors in your code. In production, you are more likely to execute the file which is faster.
The.
The most common options to specify are:
-f,--configJsonThe json config file
-c,--certificateAuthorityHostnameThe hostname of the CA
-D,--DNThe DN for the CSR (and Certificate)
-t,--tokenThe token used to prevent man in the middle attacks (this should be a long, random value and needs to be the same one used to start the CA server)
-T,--keyStoreTypeThe type of keystore to create (leave default for NiFi nodes, specify PKCS12 to create client cert)
After running the client you will have the CA's certificate, a keystore, a truststore, and a config.json with information about them as well as their passwords.
For a client certificate that can be easily imported into the browser, specify:
-T PKCS12 | https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1.1/bk_administration/content/client.html | 2017-10-17T04:11:01 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.hortonworks.com |
Clearwater Configuration Options Reference¶
This document describes all the Clearwater configuration options that
can be set in
/etc/clearwater/shared_config,
/etc/clearwater/local_config or
/etc/clearwater/user_settings.
At a high level, these files contain the following types of
configuration options: *
shared_config - This file holds settings
that are common across the entire deployment. This file should be
identical on all nodes (and any changes can be easily synchronised
across the deployment as described in this
process). *
local_config -
This file holds settings that are specific to a single node and are not
applicable to any other nodes in the deployment. They are entered early
on in the node’s life and are not typically changed. *
user_settings - This file holds settings that may vary between
systems in the same deployment, such as log level (which may be
increased on certain nodes to track down specific issues) and
performance settings (which may vary if some nodes in your deployment
are more powerful than others)
Modifying Configuration¶
You should follow this process when changing settings in “Shared Config”. For settings in the “Local config” or “User settings” you should:
- Modify the configuration file
- Run sudo service clearwater-infrastructure restart to regenerate any dependent configuration files
- Restart the relevant Clearwater service(s) using the following commands, as appropriate for the node (a combined example is shown after this list).
- Sprout - sudo service sprout quiesce
- Bono - sudo service bono quiesce
- Dime - sudo service homestead stop && sudo service homestead-prov stop && sudo service ralf stop
- Homer - sudo service homer stop
- Ellis - sudo service ellis stop
- Memento - sudo service memento stop
- Vellum - sudo service astaire stop
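Putting the steps above together, a typical session after changing a user setting on a Sprout node might look like this (illustrative; use the restart command for your own node type from the list above):

```sh
sudo vi /etc/clearwater/user_settings            # 1. modify the configuration file
sudo service clearwater-infrastructure restart   # 2. regenerate dependent configuration
sudo service sprout quiesce                      # 3. restart the relevant service
```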
Local Config¶
This section describes settings that are specific to a single node and
are not applicable to any other nodes in the deployment. They are
entered early on in the node’s life and are not normally changed. These
options should be set in
/etc/clearwater/local_config. Once this
file has been created it is highly recommended that you do not change it
unless instructed to do so. If you find yourself needing to change these
settings, you should destroy and recreate the node instead.
local_ip- this should be set to an IP address which is configured on an interface on this system, and can communicate on an internal network with other Clearwater nodes and IMS core components like the HSS.
public_ip- this should be set to an IP address accessible to external clients (SIP UEs for Bono, web browsers for Ellis). It does not need to be configured on a local interface on the system - for example, in a cloud environment which puts instances behind a NAT.
public_hostname- this should be set to a hostname which resolves to
public_ip, and will communicate with only this node (i.e. not be round-robined to other nodes). It can be set to
public_ipif necessary.
node_idx- an index number used to distinguish this node from others of the same type in the cluster (for example, sprout-1 and sprout-2). Optional.
etcd_cluster- this is either blank or a comma separated list of IP addresses, for example
etcd_cluster=10.0.0.1,10.0.0.2. The setting depends on the node’s role:
- If this node is an etcd master, then it should be set in one of two ways:
- If the node is forming a new etcd cluster, it should contain the IP addresses of all the nodes that are forming the new cluster as etcd masters (including this node).
- If the node is joining an existing etcd cluster, it should contain the IP addresses of all the nodes that are currently etcd masters in the cluster.
- If this node is an etcd proxy, it should be left blank
etcd_proxy- this is either blank or a comma separated list of IP addresses, for example
etcd_proxy=10.0.0.1,10.0.0.2. The setting depends on the node’s role:
- If this node is an etcd master, this should be left blank
- If this node is an etcd proxy, it should contain the IP addresses of all the nodes that are currently etcd masters in the cluster.
etcd_cluster_key- this is the name of the etcd datastore clusters that this node should join. It defaults to the function of the node (e.g. a Vellum node defaults to using ‘vellum’ as its etcd datastore cluster name when it joins the Cassandra cluster). This must be set explicitly on nodes that colocate function.
remote_cassandra_seeds- this is used to connect the Cassandra cluster in your second site to the Cassandra cluster in your first site; this is only necessary in a geographically redundant deployment which is using at least one of Homestead-Prov, Homer or Memento. It should be set to an IP address of a Vellum node in your first site, and it should only be set on the first Vellum node in your second site.
scscf_node_uri- this can be optionally set, and only applies to nodes running an S-CSCF. If it is configured, it almost certainly needs configuring on each S-CSCF node in the deployment.
If set, this is used by the node to advertise the URI to which requests to this node should be routed. It should be formatted as a SIP URI.
This will need to be set if the local IP address of the node is not routable by all the application servers that the S-CSCF may invoke. In this case, it should be configured to contain an IP address or host which is routable by all of the application servers – e.g. by using a domain and port on which the sprout can be addressed -
scscf_node_uri=sip:sprout-4.example.net:5054.
The result will be included in the Route header on SIP messages sent to application servers invoked during a call.
If it is not set, the URI that this S-CSCF node will advertise itself as will be
sip:<local_ip>:<scscf_port>where
<local_ip>is documented above, and
<scscf_port>is the port on which the S-CSCF is running, which is 5054 by default.
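As an illustration of the settings above (the values are placeholders, not defaults), a local_config for a Vellum node acting as an etcd master in a three-node cluster might look like this:

# /etc/clearwater/local_config -- illustrative values only
local_ip=10.0.0.10
public_ip=203.0.113.10
public_hostname=vellum-1.example.com
node_idx=1
etcd_cluster=10.0.0.10,10.0.0.11,10.0.0.12
etcd_cluster_key=vellum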
User settings¶
This section describes settings that may vary between systems in the
same deployment, such as log level (which may be increased on certain
machines to track down specific issues) and performance settings (which
may vary if some servers in your deployment are more powerful than
others). These settings are set in
/etc/clearwater/user_settings (in
the format
name=value, e.g.
log_level=5).
log_level- determines how verbose Clearwater’s logging is, from 1 (error logs only) to 5 (debug-level logs). Defaults to 2.
log_directory- determines which folder the logs are created in. This folder must exist, and be owned by the service. Defaults to /var/log/ (this folder is created and has the correct permissions set for it by the install scripts of the service).
max_log_directory_size- determines the maximum size of each Clearwater process’s log_directory in bytes. Defaults to 1GB. If you are co-locating multiple Clearwater processes, you’ll need to reduce this value proportionally.
num_worker_threads- for Sprout and Bono nodes, determines how many worker threads should be started to do SIP/IMS processing. Defaults to 50 times the number of CPU cores on the system.
upstream_connections- determines the maximum number of TCP connections which Bono will open to the I-CSCF(s). Defaults to 50.
trusted_peers- For Bono IBCF nodes, determines the peers which Bono will accept connections to and from.
ibcf_domain- For Bono IBCF nodes, allows for a domain alias to be specified for the IBCF to allow for including IBCFs in routes as domains instead of IPs.
upstream_recycle_connections- the average number of seconds before Bono will destroy and re-create a connection to Sprout. A higher value means slightly less work, but means that DNS changes will not take effect as quickly (as new Sprout nodes added to DNS will only start to receive messages when Bono creates a new connection and does a fresh DNS lookup).
authentication- by default, Clearwater performs authentication challenges (SIP Digest or IMS AKA depending on HSS configuration). When this is set to ‘Y’, it simply accepts all REGISTERs - obviously this is very insecure and should not be used in production.
num_http_threads(homestead) - determines the number of HTTP worker threads that will be used to process requests. Defaults to 4 times the number of CPU cores on the system.
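For example, a user_settings file that raises the log level on one node and caps its log directory might contain the following (illustrative values only):

# /etc/clearwater/user_settings -- illustrative values only
log_level=4
log_directory=/var/log/
max_log_directory_size=1073741824
num_worker_threads=100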
DNS Config¶
This section describes the static DNS config which can be used to
override DNS results. This is set in
/etc/clearwater/dns.json.
Currently, the only supported record type is CNAME and the only
component which uses this is Chronos and the I-CSCF. The file has the
format:
{ "hostnames": [ { "name": "<hostname 1>", "records": [{"rrtype": "CNAME", "target": "<target for hostname 1>"}] }, { "name": "<hostname 2>", "records": [{"rrtype": "CNAME", "target": "<target for hostname 2>"}] } ] } | http://docs.projectclearwater.org/en/stable/Clearwater_Configuration_Options_Reference.html | 2017-10-17T03:45:41 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.projectclearwater.org |
Connect PostgreSQL to StackState in order to:
To get started with the PostgreSQL integration, create at least a read-only stackstate user with proper access to your PostgreSQL Server. Start psql on your PostgreSQL database and run:
create user stackstate with password '<PASSWORD>';
grant SELECT ON pg_stat_database to stackstate;
To verify the correct permissions run the following command:
psql -h localhost -U stackstate postgres -c \
    "select * from pg_stat_database LIMIT(1);" \
    && echo -e "\e[0;32mPostgres connection - OK\e[0m" \
    || echo -e "\e[0;31mCannot connect to Postgres\e[0m"
When it prompts for a password, enter the one used in the first command.
Configure the Agent to connect to PostgreSQL. Edit
conf.d/postgres.yaml:
init_config:

instances:
  - host: localhost
    port: 5432
Restart the agent.
username(Optional) - The user account used to collect metrics, set in the Installation section above
password(Optional) - The password for the user account.
dbname(Optional) - The name of the database you want to monitor.
ssl(Optional) - Defaults to False. Indicates whether to use an SSL connection.
use_psycopg2(Optional) - Defaults to False. Setting this option to
Truewill force the StackState Agent to collect PostgreSQL metrics using psycopg2 instead of pg8000. Note that pyscopg2 does not support SSL connections.
tags(Optional) - A list of tags applied to all metrics collected. Tags may be simple strings or key-value pairs.
relations (Optional) - By default, all schemas are included. Add specific schemas here to collect metrics for schema relations. Each relation will generate 10 metrics and an additional 10 metrics per index. Use the following structure to declare relations:
relations:
  - relation_name: my_relation
    schemas:
      - my_schema_1
      - my_schema_2
collect_function_metrics (Optional) - Collect metrics regarding PL/pgSQL functions from pg_stat_user_functions
collect_count_metrics (Optional) - Collect count metrics. The default value is
True for backward compatibility, but this might be slow. The recommended value is
False.
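Putting several of these options together, a fuller instance definition might look like the following sketch; the hostname, credentials and tags are placeholders, not recommended values:

init_config:

instances:
  - host: localhost
    port: 5432
    username: stackstate
    password: <PASSWORD>
    dbname: postgres
    ssl: False
    tags:
      - env:test
      - role:primary
    relations:
      - relation_name: my_relation
        schemas:
          - my_schema_1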
After you restart the agent, you should be able to run the
sudo /etc/init.d/stackstate-agent info command which will now include a section like this if the PostgreSQL integration is working:
Checks
======
[...]
  postgres
  --------
    - instance #0 [OK]
    - Collected 47 metrics & 0 events
The Agent generates PostgreSQL metrics from custom query results. For each custom query, four components are required:
descriptors,
metrics,
query, and
relation.
queryis where you’ll construct a base SELECT statement to generate your custom metrics. Each column name in your SELECT query should have a corresponding item in the
descriptorssection. Each item in
metricswill be substituted for the first
%sin the query.
metricsare key-value pairs where the key is the query column name or column function and the value is a tuple containing the custom metric name and metric type (
RATE,
GAUGE, or
MONOTONIC). In the example below, the results of the sum of the
idx_scancolumn will appear in StackState with the metric name
postgresql.idx_scan_count_by_table.
descriptorsis used to add tags to your custom metrics. It’s a list of lists each containing 2 strings. The first string is for documentation purposes and should be used to make clear what you are getting from the query. The second string will be the tag name. For multiple tags, include additional columns in your
querystring and a corresponding item in the
descriptors. The order of items in
descriptorsmust match the columns in
query.
relationindicates whether to include schema relations specified in the
relationsconfiguration option. If set to
true, the second
%sin
querywill be set to the list of schema names specified in the
relationsconfiguration option.
custom_metrics:
  # All index scans & reads
  - descriptors:
      - [relname, table]
      - [schemaname, schema]
    metrics:
      SUM(idx_scan) as idx_scan_count: [postgresql.idx_scan_count_by_table, RATE]
      SUM(idx_tup_read) as idx_read_count: [postgresql.idx_read_count_by_table, RATE]
    query: SELECT relname, schemaname, %s FROM pg_stat_all_indexes GROUP BY relname;
    relation: false
The example above will run two queries in PostgreSQL:
SELECT relname, SUM(idx_scan) as idx_scan_count FROM pg_stat_all_indexes GROUP BY relname;will generate a rate metric
postgresql.idx_scan_count_by_table.
SELECT relname, SUM(idx_tup_read) as idx_read_count FROM pg_stat_all_indexes GROUP BY relname;will generate a rate metric
postgresql.idx_read_count_by_table.
Both metrics will use the tags
table and
schema with values from the results in the
relname and
schemaname columns respectively. e.g.
table: <relname>
The
postgres.yaml.example file includes an example for the SkyTools 3 Londoniste replication tool:
custom_metrics:
  # Londiste 3 replication lag
  - descriptors:
      - [consumer_name, consumer_name]
    metrics:
      GREATEST(0, EXTRACT(EPOCH FROM lag)) as lag: [postgresql.londiste_lag, GAUGE]
      GREATEST(0, EXTRACT(EPOCH FROM lag)) as last_seen: [postgresql.londiste_last_seen, GAUGE]
      pending_events: [postgresql.londiste_pending_events, GAUGE]
    query: SELECT consumer_name, %s from pgq.get_consumer_info() where consumer_name !~ 'watermark$';
    relation: false
If your custom metric does not work after an Agent restart, running
sudo /etc/init.d/stackstate-agent info can provide more information. For example:
  postgres
  --------
    - instance #0 [ERROR]: 'Missing relation parameter in custom metric'
    - Collected 0 metrics, 0 events & 0 service checks
You should also check the
/var/log/stackstate/collector.log file for more information. | http://docs.stackstate.com/integrations/postgres/ | 2017-10-17T03:55:27 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.stackstate.com |
Active patterns enable you to define named partitions that subdivide input data, so that you can use these names in a pattern matching expression just as you would for a discriminated union. You can use active patterns to decompose data in a customized manner for each partition.
Syntax
// Complete active pattern definition.
let (|identifier1|identifier2|...|) [ arguments ] = expression

// Partial active pattern definition.
let (|identifier|_|) [ arguments ] = expression
Remarks
In the previous syntax, the identifiers are names for partitions of the input data that is represented by arguments, or, in other words, names for subsets of the set of all values of the arguments. There can be up to seven partitions in an active pattern definition. The expression describes the form into which to decompose the data. You can use an active pattern definition to define the rules for determining which of the named partitions the values given as arguments belong to. The (| and |) symbols are referred to as banana clips and the function created by this type of let binding is called an active recognizer.
As an example, consider the following active pattern with an argument.
let (|Even|Odd|) input = if input % 2 = 0 then Even else Odd
You can use the active pattern in a pattern matching expression, as in the following example.
let TestNumber input =
    match input with
    | Even -> printfn "%d is even" input
    | Odd -> printfn "%d is odd" input

TestNumber 7
TestNumber 11
TestNumber 32
The output of this program is as follows:
7 is odd
11 is odd
32 is even
Another use of active patterns is to decompose data types in multiple ways, such as when the same underlying data has various possible representations. For example, a
Color object could be decomposed into an RGB representation or an HSB representation.
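The code listing for this example did not survive extraction; a sketch consistent with the output shown below, using System.Drawing.Color and two complete active patterns, would be:

// Reconstructed sketch (not the original listing): decompose a
// System.Drawing.Color value in two different ways.
let (|RGB|) (col : System.Drawing.Color) =
    ( col.R, col.G, col.B )

let (|HSB|) (col : System.Drawing.Color) =
    ( col.GetHue(), col.GetSaturation(), col.GetBrightness() )

let printRGB (col: System.Drawing.Color) =
    match col with
    | RGB(r, g, b) -> printfn " Red: %d Green: %d Blue: %d" r g b

let printHSB (col: System.Drawing.Color) =
    match col with
    | HSB(h, s, b) -> printfn " Hue: %f Saturation: %f Brightness: %f" h s b

let printAll col colorString =
    printfn "%s" colorString
    printRGB col
    printHSB col

printAll System.Drawing.Color.Red "Red"
printAll System.Drawing.Color.Black "Black"
printAll System.Drawing.Color.White "White"
printAll System.Drawing.Color.Gray "Gray"
printAll System.Drawing.Color.BlanchedAlmond "BlanchedAlmond"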
The output of the above program is as follows:
Red
 Red: 255 Green: 0 Blue: 0
 Hue: 360.000000 Saturation: 1.000000 Brightness: 0.500000
Black
 Red: 0 Green: 0 Blue: 0
 Hue: 0.000000 Saturation: 0.000000 Brightness: 0.000000
White
 Red: 255 Green: 255 Blue: 255
 Hue: 0.000000 Saturation: 0.000000 Brightness: 1.000000
Gray
 Red: 128 Green: 128 Blue: 128
 Hue: 0.000000 Saturation: 0.000000 Brightness: 0.501961
BlanchedAlmond
 Red: 255 Green: 235 Blue: 205
 Hue: 36.000000 Saturation: 1.000000 Brightness: 0.901961
In combination, these two ways of using active patterns enable you to partition and decompose data into just the appropriate form and perform the appropriate computations on the appropriate data in the form most convenient for the computation.
The resulting pattern matching expressions enable data to be written in a convenient way that is very readable, greatly simplifying potentially complex branching and data analysis code.
Partial Active Patterns
Sometimes, you need to partition only part of the input space. In that case, you write a set of partial patterns each of which match some inputs but fail to match other inputs. Active patterns that do not always produce a value are called partial active patterns; they have a return value that is an option type. To define a partial active pattern, you use a wildcard character (_) at the end of the list of patterns inside the banana clips. The following code illustrates the use of a partial active pattern.
let (|Integer|_|) (str: string) =
    let mutable intvalue = 0
    if System.Int32.TryParse(str, &intvalue) then Some(intvalue)
    else None

let (|Float|_|) (str: string) =
    let mutable floatvalue = 0.0
    if System.Double.TryParse(str, &floatvalue) then Some(floatvalue)
    else None

let parseNumeric str =
    match str with
    | Integer i -> printfn "%d : Integer" i
    | Float f -> printfn "%f : Floating point" f
    | _ -> printfn "%s : Not matched." str

parseNumeric "1.1"
parseNumeric "0"
parseNumeric "0.0"
parseNumeric "10"
parseNumeric "Something else"
The output of the previous example is as follows:
1.100000 : Floating point
0 : Integer
0.000000 : Floating point
10 : Integer
Something else : Not matched.
When using partial active patterns, sometimes the individual choices can be disjoint or mutually exclusive, but they need not be. In the following example, the pattern Square and the pattern Cube are not disjoint, because some numbers are both squares and cubes, such as 64. The following program prints out all integers up to 1000000 that are both squares and cubes.
let err = 1.e-10

let isNearlyIntegral (x:float) = abs (x - round(x)) < err

let (|Square|_|) (x : int) =
    if isNearlyIntegral (sqrt (float x)) then Some(x) else None

let (|Cube|_|) (x : int) =
    if isNearlyIntegral ((float x) ** ( 1.0 / 3.0)) then Some(x) else None

let examineNumber x =
    match x with
    | Cube x -> printfn "%d is a cube" x
    | _ -> ()
    match x with
    | Square x -> printfn "%d is a square" x
    | _ -> ()

let findSquareCubes x =
    if (match x with
        | Cube x -> true
        | _ -> false
        &&
        match x with
        | Square x -> true
        | _ -> false
       )
    then printf "%d \n" x

[ 1 .. 1000000 ] |> List.iter (fun elem -> findSquareCubes elem)
The output is as follows:
1
64
729
4096
15625
46656
117649
262144
531441
1000000
Parameterized Active Patterns
Active patterns always take at least one argument for the item being matched, but they may take additional arguments as well, in which case the name parameterized active pattern applies. Additional arguments allow a general pattern to be specialized. For example, active patterns that use regular expressions to parse strings often include the regular expression as an extra parameter, as in the following code, which also uses the partial active pattern
Integer defined in the previous code example. In this example, strings that use regular expressions for various date formats are given to customize the general ParseRegex active pattern. The Integer active pattern is used to convert the matched strings into integers that can be passed to the DateTime constructor.
open System.Text.RegularExpressions

// ParseRegex parses a regular expression and returns a list of the strings that match each group in
// the regular expression.
// List.tail is called to eliminate the first element in the list, which is the full matched expression,
// since only the matches for each group are wanted.
let (|ParseRegex|_|) regex str =
    let m = Regex(regex).Match(str)
    if m.Success
    then Some (List.tail [ for x in m.Groups -> x.Value ])
    else None

// Three different date formats are demonstrated here. The first matches two-
// digit dates and the second matches full dates. This code assumes that if a two-digit
// date is provided, it is an abbreviation, not a year in the first century.
let parseDate str =
    match str with
    | ParseRegex "(\d{1,2})/(\d{1,2})/(\d{1,2})$" [Integer m; Integer d; Integer y]
        -> new System.DateTime(y + 2000, m, d)
    | ParseRegex "(\d{1,2})/(\d{1,2})/(\d{3,4})" [Integer m; Integer d; Integer y]
        -> new System.DateTime(y, m, d)
    | ParseRegex "(\d{1,4})-(\d{1,2})-(\d{1,2})" [Integer y; Integer m; Integer d]
        -> new System.DateTime(y, m, d)
    | _ -> new System.DateTime()

let dt1 = parseDate "12/22/08"
let dt2 = parseDate "1/1/2009"
let dt3 = parseDate "2008-1-15"
let dt4 = parseDate "1995-12-28"

printfn "%s %s %s %s" (dt1.ToString()) (dt2.ToString()) (dt3.ToString()) (dt4.ToString())
The output of the previous code is as follows:
12/22/2008 12:00:00 AM 1/1/2009 12:00:00 AM 1/15/2008 12:00:00 AM 12/28/1995 12:00:00 AM | https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/active-patterns | 2017-10-17T04:11:38 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.microsoft.com |
New-Cs
Dial Plan
Syntax
New-CsDialPlan [-Identity] <XdsIdentity> [-Description <String>] [-DialinConferencingRegion <String>] [-ExternalAccessPrefix <String>] [-NormalizationRules <PSListModifier>] [-OptimizeDeviceDialing <Boolean>] [-SimpleName <String>] [-State <String>] [-City <String>] [-CountryCode <String>] [-Force] [-InMemory] [-WhatIf] [-Confirm] [<CommonParameters>]
Description.
Examples
-------------------------- EXAMPLE 1 --------------------------

New-CsDialPlan -Identity RedmondDialPlan

-------------------------- EXAMPLE 2 --------------------------

New-CsDialPlan -Identity site:Redmond -SimpleName RedmondSiteDialPlan

New-CsVoiceNormalizationRule -Identity "site:Redmond/SeattlePrefix" -Pattern "^9(\d*){1,5}$" -Translation "+1206$1"

This example creates a new dial plan for the Redmond site and then uses the New-CsVoiceNormalizationRule cmdlet to create a named rule that is functional for your organization. That's exactly what line 2 of this example does: it calls the New-CsVoiceNormalizationRule cmdlet.

-------------------------- EXAMPLE 3 --------------------------

New-CsDialPlan -Identity RedmondDialPlan -Description "Dial plan for Redmond users"
Required Parameters
A unique identifier designating the scope and name (site), the service role and FQDN, or the name (per user) to identify the dial plan. For example, a site Identity would be entered in the format site:<sitename>, where sitename is the name of the site. A dial plan at the service scope must be a Registrar or PSTN gateway service, where the Identity value is formatted like this: Registrar:Redmond.litwareinc.com. A per-user Identity would be entered simply as a unique string value..
This parameter must match the regular expression [0-9]{1,4}. This means it must be a value 0 through 9, one to four digits in length.
Default: 9
Suppresses any confirmation prompts>.
A list of normalization rules that are applied to this dial plan.
While this list and these rules can be created directly with this cmdlet, we recommend that you create the normalization rules with the New-CsVoiceNormalizationRule cmdlet, which creates the rule and assigns it to the specified dial plan.
Each time a new dial plan is created, a new voice normalization rule with default settings is also created for that site, service, or per-user dial plan. By default the Identity of the new voice normalization rule is the dial plan Identity followed by a slash followed by the name Prefix All. For example, site:Redmond/Prefix All.
Default: {Description=;Pattern=^(\d11)$;Translation=+$1;Name=Prefix All;IsInternalExtension=False } Note: This default is only a placeholder. For the dial plan to be useful, you should either modify the normalization rule created by the dial plan or create a new rule for the site, service, or user. You can create a new normalization rule by calling the New-CsVoiceNormalizationRule cmdlet; modify a normalization rule by calling.
Default: False
A display name for the dial plan. This name must be unique among all dial plans within the Skype for Business Server deployment.
This string can be up to 256 characters long. Valid characters are alphabetic or numeric characters, hyphen (-), dot (.), plus (+), underscore (_), and parentheses (()).
This parameter must contain a value. However, if you don't provide a value in the call to the New-CsDialPlan cmdlet, a default value will be supplied. The default value for a Global dial plan is Prefix All. The default for a site-level dial plan is the name of the site. The default for a service is the name of the service (Registrar or PSTN gateway) followed by an underscore, followed by the service fully qualified domain name (FQDN). For example, Registrar_pool0.litwareinc.com. The default for a per-user dial plan is the Identity of the dial plan.
This parameter is not used with this cmdlet.
Describes what would happen if you executed the command without actually executing the command.
Inputs
None.
Outputs
This cmdlet creates an object of type Microsoft.Rtc.Management.WritableConfig.Policy.Voice.LocationProfile. | https://docs.microsoft.com/en-us/powershell/module/skype/New-CsDialPlan?view=skype-ps | 2017-10-17T04:23:36 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.microsoft.com |
Important : read our plugin support policy
The course elements are structuring primitives dedicated to contents.
In order to give a good unity of form, and to allow learners to clearly identify what is being asked of them, they allow fast implementation of contents, texts and messages. Some of them have additional functionality.
The teacher only needs to care about the local formatting of the message, without worrying about how it is visually integrated into the course.
Course Element Subtype: Contact point.
Note: student view of the course element.
There are 5 types of course elements with different functionalities:
Travis-ci Continuous integration
Note: A failing status is not necessarily a sign of a non-functioning module. It only means that the Moodle coding standards checks are not yet fully passing. We are continuously rewriting the plugin code to bring it into compliance.
We seriously plan to rewrite this module fundamentally in order to simplify the implementation. The major reason for the code complexity of this module was the inheritance of tricky constraints from the Moodle 1.9 architecture, needed to produce the course element content in the course view. This was not revisited because, in the early Moodle 2.x stages, the plugin had to keep full compatibility with Moodle 1.9 content.
The pressure to stay fully compatible with the old storage model is lower now, provided we can supply a good translator from the current component architecture to the new one.
I am sure that the component will gain in maintainability, and the risk of technological obsolescence will be lower in the future.
What would be great to achieve as a work plan:
The overall goal of those changes is to make every aspect configurable without needing access to the code.
As usual, such a radical change needs a lot of time, and probably budget. At the moment I lack both, so I do not expect a short-term resolution, but the road is traced.
The AWS Schema Conversion Tool Extension Pack and Python Libraries for Data Warehouses
When you convert your data warehouse schema, the AWS Schema Conversion Tool (AWS SCT) adds an additional schema to your target DB instance. This schema implements system functions of the source database that are required when writing your converted schema to your target DB instance. The schema is called the extension pack schema.
If you are converting a transactional database, instead see The AWS Schema Conversion Tool Extension Pack and AWS Services for Databases.
The extension pack schema is named according to your source database as follows:
Greenplum:
AWS_GREENPLUM_EXT
Microsoft SQL Server:
AWS_SQLSERVER_EXT
Netezza:
AWS_NETEZZA_EXT
Oracle:
AWS_ORACLE_EXT
Teradata:
AWS_TERADATA_EXT
Vertica:
AWS_VERTICA_EXT
In two cases, you might want to install the extension pack manually:
You accidentally delete the extension pack database schema from your target database.
You want to upload custom Python libraries to emulate database functionality.
Using AWS Services to Upload Custom Python Libraries
In some cases, source database features can't be converted to equivalent Amazon Redshift features. AWS SCT contains a custom Python library that emulates some source database functionality on Amazon Redshift.
The AWS SCT extension pack wizard helps you install the custom Python library.
Before You Begin
Almost all work you do with AWS SCT starts with the following three steps:
Create an AWS SCT project.
Connect to your source database.
Connect to your target database.
If you have not created an AWS SCT project yet, see Getting Started with the AWS Schema Conversion Tool.
Before you install the extension pack, you need to convert your database schema. For more information, see Converting Data Warehouse Schema to Amazon Redshift by Using the AWS Schema Conversion Tool.
Applying the Extension Pack
Use the following procedure to apply the extension pack.
To apply the extension pack
In the AWS Schema Conversion Tool, in the target database tree, open the context (right-click) menu, and choose Apply Extension Pack.
The extension pack wizard appears.
Read the Welcome page, and then choose Next.
On the AWS Services Settings page, do the following:
If you are reinstalling the extension pack database schema only, choose Skip this step for now, and then choose Next.
If you are uploading the Python library, provide the credentials to connect to your AWS account. You can use your AWS Command Line Interface (AWS CLI) credentials if you have the AWS CLI installed. You can also use credentials that you previously stored in a profile in the global application settings and associated with the project. If necessary, choose Navigate to Project Settings to associate a different profile with the project. If necessary, choose Global Settings to create a new profile. For more information, see Storing AWS Profiles in the AWS Schema Conversion Tool.
On the Python Library Upload page, do the following:
If you are reinstalling the extension pack database schema only, choose Skip this step for now, and then choose Next.
If you are uploading the Python library, provide the Amazon S3 path, and then choose Upload Library to S3. When you are done, choose Next.
On the Functions Emulation page, choose Create Extension Pack. Messages appear with the status of the extension pack operations. When you are done, choose Finish. | http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_SchemaConversionTool.ExtensionPack.DW.html | 2017-10-17T04:12:53 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.aws.amazon.com |
To connect an Integration Services package to a Microsoft Office Excel workbook requires an Excel connection manager.
You can create these connection managers from either the Connection Managers area in SSIS Designer or from the SQL Server Import and Export Wizard.
Connectivity components for Microsoft Excel and Access files
You may have to download the connectivity components for Microsoft Office files if they're not already installed. Download the latest version of the connectivity components for both Excel and Access files here: Microsoft Access Database Engine 2016 Redistributable.
The latest version of the components can open files created by earlier versions of Excel.
If the computer has a 32-bit version of Office, then you have to install the 32-bit version of the components, and you also have to ensure that you run the package in 32-bit mode.
If you have an Office 365 subscription, make sure that you download the Access Database Engine 2016 Redistributable and not the Microsoft Access 2016 Runtime. When you run the installer, you may see an error message that you can't install the download side-by-side with Office click-to-run components. To bypass this error message and install the components successfully, run the installation in quiet mode by opening a Command Prompt window and running the .EXE file that you downloaded with the
/quiet switch. For example:
C:\Users\<user name>\Downloads\AccessDatabaseEngine.exe /quiet
Create an Excel connection manager.
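For background on what this connection manager produces: it wraps an OLE DB connection that uses the Microsoft ACE provider described above. A typical connection string for an .xlsx workbook looks like the following; the file path is a placeholder, and the provider version (12.0 or 16.0) depends on which Access Database Engine redistributable is installed:

Provider=Microsoft.ACE.OLEDB.16.0;Data Source=C:\Data\MyWorkbook.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES";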
See Also
Connect to an Access Database | https://docs.microsoft.com/en-us/sql/integration-services/connection-manager/connect-to-an-excel-workbook | 2017-10-17T04:33:50 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.microsoft.com |
Cluster Configuration Guidelines
Use the guidance in this section to help you determine the instance types, purchasing options, and amount of storage to provision for each node type in an EMR cluster.
What Instance Type Should You Use?
There are several ways to add EC2 instances to your cluster. When a cluster uses the instance groups configuration, you can add instances to the core or task instance groups, or you can add task instance groups. You can add EC2 instances manually, or you can set up automatic scaling within Amazon EMR to add instances automatically based on the value of an Amazon CloudWatch metric that you specify. For more information, see Scaling Cluster Resources. When a cluster uses the instance fleets configuration, you can change the target capacity for On-Demand Instances or Spot Instances as appropriate. For more information, see Configure Instance Fleets.
One way to plan the instances of your cluster is to run a test cluster with a representative sample set of data and monitor the utilization of the nodes in the cluster. For more information, see View and Monitor a Cluster. Another way is to calculate the capacity of the instances you are considering and compare that value against the size of your data.
In general, the master node type, which assigns tasks, doesn't require an EC2 instance with much processing power; EC2 instances for the core node type, which process tasks and store data in HDFS, need both processing power and storage capacity; EC2 instances for the task node type, which don't store data, need only processing power. For guidelines about available EC2 instances and their configuration, see Plan and Configure EC2 Instances.
The following guidelines apply to most Amazon EMR clusters.
The master node does not have large computational requirements. For most clusters of 50 or fewer nodes, consider using an m3.xlarge instance. For clusters of more than 50 nodes, consider using an m3.2xlarge.
The computational needs of the core and task nodes depend on the type of processing your application performs. Many jobs can be run on m3.large instance types, which offer balanced performance in terms of CPU, disk space, and input/output. If your application has external dependencies that introduce delays (such as web crawling to collect data), you may be able to run the cluster on m3.xlarge instances to reduce costs while the instances are waiting for dependencies to finish. For improved performance, consider running the cluster using m3.2xlarge instances for the core and task nodes. If different phases of your cluster have different capacity needs, you can start with a small number of core nodes and increase or decrease the number of task nodes to meet your job flow's varying capacity requirements.
Most Amazon EMR clusters can run on standard EC2 instance types such as m3.xlarge and m3.2xlarge. Computation-intensive clusters may benefit from running on High CPU instances, which have proportionally more CPU than RAM. Database and memory-caching applications may benefit from running on High Memory instances. Network-intensive and CPU-intensive applications like parsing, NLP, and machine learning may benefit from running on Cluster Compute instances, which provide proportionally high CPU resources and increased network performance.
The amount of data you can process depends on the capacity of your core nodes and the size of your data as input, during processing, and as output. The input, intermediate, and output data sets all reside on the cluster during processing.
By default, the total number of EC2 instances you can run on a single AWS account is 20. This means that the total number of nodes you can have in a cluster is 20. For more information about how to request that this limit be increased for your account, see AWS Limits.
In Amazon EMR, m1.small and m1.medium instances are recommended only for testing purposes and m1.small is not supported on Hadoop 2 clusters.
When Should You Use Spot Instances?
There are several scenarios in which Spot Instances are useful for running an Amazon EMR cluster.
Long-Running Clusters and Data Warehouses
If you are running a persistent Amazon EMR cluster, such as a data warehouse, that has a predictable variation in computational capacity, you can handle peak demand at lower cost with Spot Instances. You can launch your master and core instance groups as On-Demand to handle the normal capacity and launch the task instance group as Spot Instances to handle your peak load requirements.
Cost-Driven Workloads
If you are running transient clusters for which lower cost is more important than the time to completion, and losing partial work is acceptable, you can run the entire cluster (master, core, and task instance groups) as Spot Instances to benefit from the largest cost savings.
Data-Critical Workloads.
Application Testing
When you are testing a new application in order to prepare it for launch in a production environment, you can run the entire cluster (master, core, and task instance groups) as Spot Instances to reduce your testing costs.
Choose What to Launch as Spot Instances
When you launch a cluster in Amazon EMR, you can choose to launch any or all of the instance groups (master, core, and task) as Spot Instances. Because each type of instance group plays a different role in the cluster, the implications of launching each instance group as Spot Instances vary.
When you launch an instance group either as on-demand or as Spot Instances, you cannot change its classification while the cluster is running. In order to change an On-Demand Instance group to Spot Instances or vice versa, you must terminate the cluster and launch a new one.
The following table shows launch configurations for using Spot Instances in various applications.
Master Node as Spot Instance
The master node controls and directs the cluster. When it terminates, the cluster ends, so you should only launch the master node as a Spot Instance if you are running a cluster where sudden termination is acceptable. This might be the case if you are testing a new application, have a cluster that periodically persists data to an external store such as Amazon S3, or are running a cluster where cost is more important than ensuring the cluster’s completion.
When you launch the master instance group as a Spot Instance, the cluster does not start until that Spot Instance request is fulfilled. This is something to take into consideration when selecting your bid price.
You can only add a Spot Instance master node when you launch the cluster. Master nodes cannot be added or removed from a running cluster.
Typically, you would only run the master node as a Spot Instance if you are running the entire cluster (all instance groups) as Spot Instances.
Core Instance Group as Spot Instances
Core nodes process data and store information using HDFS. You typically only run core nodes as Spot Instances if you are either not running task nodes or running task nodes as Spot Instances.
When you launch the core instance group as Spot Instances, Amazon EMR waits until it can provision all of the requested core instances before launching the instance group. This means that if you request a core instance group with six nodes, the instance group does not launch if there are only five nodes available at or below your bid price. In this case, Amazon EMR continues to wait until all six core nodes are available at your Spot price or until you terminate the cluster.
You can add Spot Instance core nodes either when you launch the cluster or later to add capacity to a running cluster. Instance Groups as Spot Instances
The task nodes process data but do not hold persistent data in HDFS. If they terminate because the Spot price has risen above your bid price, no data is lost and the effect on your cluster is minimal.
When you launch one or more task instance groups as Spot Instances, Amazon EMR provisions as many task nodes as it can at your bid price. This means that if you request a task instance group with six nodes, and only five Spot Instances are available at your bid price, Amazon EMR launches the instance group with five nodes, adding the sixth later if it can.
Launching task instance groups as Spot Instances is a strategic way to expand the capacity of your cluster while minimizing costs. If you launch your master and core instance groups as On-Demand Instances, their capacity is guaranteed for the run of the cluster and you can add task instances to your task instance groups as needed to handle peak traffic or to speed up data processing.
You can add or remove task nodes using the console, the AWS CLI or the API. You can also add additional task groups using the console, the AWS CLI or the API, but you cannot remove a task group once it is created.
Calculating the Required HDFS Capacity of a Cluster
The amount of HDFS storage available to your cluster depends on these factors:
The number of EC2 instances in the core instance group.
The storage capacity of the EC2 instances.
The number and size of EBS volumes attached to core nodes.
A replication factor, which accounts for how each data block is stored in HDFS for RAID-like redundancy. By default, the replication factor is three for a cluster of 10 or more core nodes, 2 for a cluster of 4-9 core nodes, and 1 for a cluster of 3 nodes or fewer.
To calculate the HDFS capacity of a cluster, add the capacity of instance store volumes the EC2 instance types you've selected to the total volume storage you have attached with EBS and multiply the result by the number of nodes in each instance group. Divide the total by the replication factor for the cluster. For example, a cluster with 10 core nodes of type m1.large would have 850 GB of space per-instance available to HDFS: ( 10 nodes x 850 GB per node ) / replication factor of 3. For more information on instance store volumes, see Amazon EC2 Instance Store in the Amazon EC2 User Guide for Linux Instances.
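Working through that example, the numbers come out as follows:

raw instance storage = 10 nodes x 850 GB per node = 8,500 GB
usable HDFS capacity = 8,500 GB / replication factor 3 = roughly 2,833 GB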
If the calculated HDFS capacity value is smaller than your data, you can increase the amount of HDFS storage in the following ways:
Creating a cluster with additional EBS volumes or adding instance groups with attached EBS volumes to an existing cluster
Adding more core nodes
Choosing an EC2 instance type with greater storage capacity
Using data compression
Changing the Hadoop configuration settings to reduce the replication factor
Reducing the replication factor should be used with caution as it reduces the redundancy of HDFS data and the ability of the cluster to recover from lost or corrupted HDFS blocks. | http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-instances-guidelines.html | 2017-10-17T04:23:35 | CC-MAIN-2017-43 | 1508187820700.4 | [] | docs.aws.amazon.com |
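If you do choose to lower the replication factor, one common way on release-label EMR clusters is to supply an hdfs-site configuration classification when the cluster is created. A sketch, with the value shown purely as an example:

[
  {
    "Classification": "hdfs-site",
    "Properties": {
      "dfs.replication": "2"
    }
  }
]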
Difference between revisions of "Add article title to read more link"
From Joomla! Documentation
Latest revision as of 06:22,.
How
- (joomla)/components/com_content/views/category/tmpl/blog_item.php
- (joomla)/components/com_content/views/frontpage/tmpl/default_item.php
- (joomla)/components/com_content/views/section/tmpl/blog_item.php
Copy the files to your template's html folders
- (yourtemplate)/html/com_content/category/blog_item.php
- (yourtemplate)/html/com_content/frontpage/default_item.php
- (yourtemplate)/html/com_content/section/blog_item.php
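In each copied override, the customisation appends the escaped article title to the "Read more" link. Schematically, the read-more block in the override ends up looking something like the sketch below; the property and constant names follow the Joomla 1.5 view templates and should be verified against your own copies of the files:

<?php if ($this->item->readmore_link != '') : ?>
    <a href="<?php echo $this->item->readmore_link; ?>" class="readon">
        <?php echo JText::_('Read more'); ?>: <?php echo $this->escape($this->item->title); ?>
    </a>
<?php endif; ?>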
Difference between revisions of "Potential backward compatibility issues in Joomla 3 and Joomla Platform 12.2"
From Joomla! Documentation
Revision as of 09:37, 22
Contents
- 1 Platform
- 1.1 General changes
- 1.2 Changes to extension installation
- 1.3_
- The module cacheing option "oldstatic" has been removed.
- Extensions need to set the registeredurlparams now, the fall back on the URL has been removed.
- In JLanguage::loadLanguage() the argument $overwrite has been removed. It was previously unused.
- JDate::toMysql() has been removed. Use JDate::toSql() instead.
-"
-. | https://docs.joomla.org/index.php?title=Potential_backward_compatibility_issues_in_Joomla_3.0_and_Joomla_Platform_12.1&diff=prev&oldid=65107 | 2015-11-25T07:19:52 | CC-MAIN-2015-48 | 1448398444974.3 | [] | docs.joomla.org |
Difference between revisions of "JForm::setFields"
From Joomla! Documentation
Revision as of::setFields
Description
Method to set some field XML elements to the form definition.
public function setFields ( &$elements, $group = null, $replace = true )
- Returns
- Defined on line 902 of libraries/joomla/form/form.php
See also
JForm::setFields source code on BitBucket
Class JForm
Subpackage Form
- Other versions of JForm::setFields
User contributed notes
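As an illustrative usage sketch (not part of the original page; the form definition and field names are made up, and the exact JForm::getInstance() arguments should be checked against your Joomla version):

<?php
// Build a form from an inline XML definition (illustrative only).
$form = JForm::getInstance('example', '<form><field name="title" type="text" label="Title" /></form>');

// Prepare two new field definitions as SimpleXMLElement objects.
$fields = array(
    new SimpleXMLElement('<field name="alias" type="text" label="Alias" />'),
    new SimpleXMLElement('<field name="note" type="textarea" label="Note" />'),
);

// Add the fields to the form definition at the top level ($group = null),
// replacing any existing fields with the same names ($replace = true).
$form->setFields($fields, null, true);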
The Manage window in Rainmeter is the primary means of controlling the application and skins. It consists of three main tabs:
- Skins: Displays a list of installed and loaded skins. This tab is used to view information about skins, manage skin settings, and control buttons to load/unload/refresh skins.
- Layouts: Used to save and load the current state of Rainmeter. Active and inactive skins, skin positions, and other Rainmeter settings.
- Settings: Controls high-level Rainmeter settings such as the interface language and logging.
Manage is accessed in several ways:
- Left-Click the Rainmeter icon in the Windows Notification Area on the taskbar.
- Right-Click the Rainmeter icon in the Windows Notification Area on the taskbar and select "Manage".
- Using the !Manage bang as an action in a skin or from the command line as a parameter to Rainmeter.exe.
- Right-Click any running skin on the desktop and select "Manage skin".
Buttons used to control Rainmeter:
- Refresh all: Refresh the entire Rainmeter application, including all currently active skins.
- Edit settings: Manually edit the settings in Rainmeter.ini with the text editor associated with .ini files.
- Open log: View the Log tab of the Rainmeter About window.
- Create .rmskin package: Package skins(s) for distribution in the .rmskin format.
The Skins tab
There are four main areas in this tab.
The skins list
List of currently installed skins. This contains all Skins found when Rainmeter is started or refreshed.
The list consists of the config folder for each skin, and the skin .ini files for each config.
- Clicking on a skin .ini file will make that skin active in the Manage tab.
- Double-clicking a skin .ini file will unload the skin if it is running, or load it if not.
- Right clicking on a config folder will allow opening the folder in Windows Explorer.
- Right clicking on a skin .ini file will allow loading, unloading or editing the skin.
The list is updated when Rainmeter is refreshed.
Active skins
This pull-down will contain a list of all currently loaded and active skins in Rainmeter.
Clicking on a skin will make that skin active in the Manage tab.
Metadata
Displays the information in the [Metadata] section of the selected skin.
This includes Name, Config, Author, Version, License and Information fields.
If a skin does not contain a [Metadata] section, the Add metadata link in this area will add an empty section with all fields.
Skin Settings
For a selected active skin, shows the current values of various settings. Changes will immediately effect the skin on the desktop.
- Coordinates: The window X and Y location of the skin on the screen in pixels.
- Position: The z-position of the skin on the desktop relative to other windows.
- Load order: The loading order of the skin on the desktop relative to other skins.
- Transparency: The transparency of the skin.
- On hover: The hide on mouse over behavior of the skin.
- Draggable: The draggable setting for the skin.
- Click through: The click through setting for the skin.
- Keep on screen: The keep on screen setting for the skin.
- Save position: The save position setting for the skin.
- Snap to edges: The snap to edges setting for the skin.
- Favorite: Adds or removes the current skin in a list of Favorites accessed with the Rainmeter context menu.
- Display monitor: Settings for the monitor on which the skin is displayed.
Use default: Primary monitor: Removes the
@Ndirective from WindowX/WindowY settings.
@0, @1, @2, ... , @32: Adds the specified monitor number to WindowX/WindowY settings.
@0represents "The Virtual Screen".
Auto-select based on window position: If checked, the WindowX/WindowY
@Nsettings are made automatically based on the position of the meter's window. This setting will be unchecked when a specific monitor is selected.
Buttons used to control skins:
- Unload / Load: Unload (make inactive) the selected skin if it is currently active, or load it if not.
- Refresh: Refresh the selected active skin.
- Edit: Edit the selected skin with the text editor associated with .ini files.
The Layouts tab
Layouts in Rainmeter are a way to save and load the current state of the Rainmeter settings. This saves the positions of currently active and inactive skins, as well as all other settings stored in the current Rainmeter.ini file. The layout can then be loaded to restore any saved state. Layouts are saved in the Rainmeter settings folder.
Note: The skin folders and files themselves are not saved with a layout.
There are two main areas in this tab.
Save new layout
Enter the desired Name: and click Save.
- Save as empty layout
Removes all [ConfigName] sections before saving.
- Exclude unloaded skins
Removes all inactive [ConfigName] sections before saving.
- Include current wallpaper
Saves the current Windows desktop wallpaper with the layout.
Note: If an existing layout is selected from the Saved layouts list or typed in, saving will replace the existing saved layout with the current state.
Saved layouts
Click on any layout name in the list.
- Load
Loads the selected layout. If a Windows desktop wallpaper was saved with the layout, it will be applied to the desktop.
-
Permanently deletes the saved layout.
- Edit
Edits the saved layout (Rainmeter.ini) file with the text editor associated with .ini files.
Global options under [Rainmeter] are not replaced when a layout is loaded, preserving local settings such as:
- ConfigEditor
- SkinPath
- DisableVersionCheck
- Language
When loading a layout, the current Rainmeter state will automatically be saved as a layout named @Backup.
Hint: A layout can be loaded from the Windows command line using the !LoadLayout bang.
"C:\Program Files\Rainmeter\Rainmeter.exe" !LoadLayout "My Saved Layout"
The current Rainmeter state will be replaced with the named layout. If Rainmeter is not running, it will be started.
The Settings tab
This tab has some high level settings for the Rainmeter application, as well as controls for Rainmeter's logging capability.
General
- Language:
Use the pull-down menu to select the desired language for all Rainmeter user interfaces. This does not have any effect on languages used in skins.
- Editor:
Enter or browse to the text editor that will be used when "Edit skin" or "Edit settings" is selected.
- Check for updates
If selected, Rainmeter will check online when started to see if the running version is the most recent release version, and will prompt the user to upgrade if not. This has no effect on beta versions of Rainmeter.
- Disable dragging
If selected, automatically sets the draggable state of all active skins to prevent dragging skins with the mouse.
- Show notification area icon
Shows or hides the Rainmeter icon in the Windows notification area (system tray).
- Use D2D rendering
Use Direct2D rendering in Rainmeter. This requires Windows 10, Windows 8.x, or Windows 7 with the Platform Update applied.
- Reset statistics
When clicked, clears all saved network and other statistics from the Rainmeter.stat file in the settings folder.
Logging
In addition to the real-time logging of errors, warnings and notices in the Log tab of the Rainmeter About window, Rainmeter can log activity to a Rainmeter.log text file, which will be created in the settings folder.
- Debug mode
If selected, a more detailed logging mode will be used in the About window, and when Log to file is active. This should only be used when debugging a problem, as leaving this level of detailed logging on can impact Rainmeter performance.
- Log to file
If selected, Rainmeter will append log entries to the Rainmeter.log file while running. Unchecking this item will turn off logging, but the Rainmeter.log file will be retained.
- Show log file
If clicked, the Rainmeter.log file will be loaded in the text editor associated with .log files.
- Delete log file
If clicked, the Rainmeter.log file will be deleted. If Log to file is currently active, it will automatically be turned off. | http://docs.rainmeter.net/manual/user-interface/manage/ | 2015-11-25T06:08:47 | CC-MAIN-2015-48 | 1448398444974.3 | [] | docs.rainmeter.net
Users Groups
From Joomla! Documentation
Contents
How to access
From the administrator area, select.
- Click a group's name to edit the group's properties.
Revision history of "JForm::getName"Form::getName (cleaning up content namespace and removing duplicated API references) | https://docs.joomla.org/index.php?title=JForm::getName&action=history | 2015-11-25T06:38:26 | CC-MAIN-2015-48 | 1448398444974.3 | [] | docs.joomla.org |
Creating a Qt Script
Using the Script Editor, you can create Qt scripts for Harmony.
For details about the scripting nodes, methods, and DbScript documentation, click the links below.
You can also find the help files in the Script Editor view. From the Script Editor menu, select Help > Scripting Interface Documentation.
- From the top menu, select Windows > Script Editor.
The Script Editor view opens listing all the available JavaScript files.
- Do one of the following:
The New Script dialog box appears.
- Enter a name and click on OK.
When working with Harmony Stand Alone, your custom scripts and default scripts are stored
When working with Harmony Server, your custom scripts are stored in:
- Global: [Server_Name] > USA_DB > scripts
- Environment: [Server_Name] > USA_DB > environments > [environment_name]
- Job: [Server_Name] > USA_DB > jobs > [job name]
- User: [Server_Name] > USA_DB > users > [user_name] > stage > 1200-scripts
The name of your script appears in the File column of the Script Editor view.
- Click in the right side of the Script Editor and start writing your script. Try typing in the following script:
function add3dPathCol()
{
column.add("ColumnName", "3DPATH");
}
For a tutorial on coding in JavaScript, refer to docs.oracle.com/javase/tutorial/java
For a detailed Harmony script interface documentation, refer to Harmony Script Interface Documentation
You can also find the help files in the Script Editor view. In the Script Editor menu, select Help > Scripting Interface Documentation.
- To check your syntax, click Verify.
A window opens with syntax information.
- To test your script, select the script you want to run from the File column.
- Do one of the following:
A window opens to ask you for the target function to run. In the window, select the function you want to execute.
To avoid selecting the function every time you want to run your script, you can set a target function. In the Script Editor toolbar, click the Set Target
button or select Play/Debug > Set Target from the Script Editor menu. In the Function column, select the function to target and press OK.
This usually occurs if you did not select the function you wanted to run from the File column before running the script. When this happens, click Save and run your script again.
This is because the software jumped to the <<Sandbox>> item in the Function column. The <<Sandbox>> is provided to test scripts without saving them. Simply click on your newly created script and run it again.
- In the Script Editor, in the File column, select the script to delete.
- Do one of the following:
- In the File list, select the script containing the function to debug.
- Do one of the following:
A window opens listing the scripts and functions available.
- In the Functions column, select the function to debug and click OK.
The Qt Script Debugger opens.
| https://docs.toonboom.com/help/harmony-12/advanced/Content/_CORE/Scripting/Scripting_Overview/002_H1_Creating_a_Qt_Script.html | 2018-02-17T23:17:08 | CC-MAIN-2018-09 | 1518891808539.63 | [array(['../../../Resources/Images/_ICONS/Home_Icon.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePremium.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageAdvanced.png',
'Toon Boom Harmony 12 Stage Advanced Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageEssentials.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/Activation.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/_ICONS/download.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Interface/HAR11/HAR11_ScriptEditor_View.png',
'Script Editor View Script Editor View'], dtype=object)
array(['../../../Resources/Images/HAR/Scripting/HAR11_NewScript.png',
'New Script Dialog Box New Script Dialog Box'], dtype=object)
array(['../../../Resources/Images/HAR/Scripting/HAR12/HAR12_SaveNetworkScript.png',
None], dtype=object)
array(['../../../Resources/Images/HAR/Scripting/HAR11_NewScript_List.png',
'New Script File New Script File'], dtype=object)
array(['../../../Resources/Images/HAR/Scripting/HAR11_SyntaxError.png',
'Syntax Error Syntax Error'], dtype=object)
array(['../../../Resources/Images/HAR/Scripting/HAR103_Set_Target_Window.png',
'Set Target Function Window Set Target Function Window'],
dtype=object)
array(['../../../Resources/Images/HAR/Scripting/HAR103_Script_Editor_Sandbox_Message.png',
'Script Editor Sandbox Message Script Editor Sandbox Message'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Scripting/HAR103_Function_List_Debug_Window.png',
'Script Debugger Function List Window Script Debugger Function List Window'],
dtype=object)
array(['../../../Resources/Images/HAR/Scripting/HAR11_Debugger.png',
'QT Script Debugger QT Script Debugger'], dtype=object) ] | docs.toonboom.com |
Installation and Configuration Guide
Local Navigation
Install the SNMP service for monitoring by the BlackBerry Monitoring Service
If you want to install the BlackBerry® Monitoring Service on a computer in the BlackBerry Domain, you must install the SNMP service on each computer that you want to install the BlackBerry® Enterprise Server on so that the BlackBerry Monitoring Service can monitor the BlackBerry Enterprise Server activity.
After you finish: To complete the SNMP service installation process, after you install the BlackBerry Enterprise Server or BlackBerry Enterprise Server
components, configure the SNMP service to monitor the activity of the BlackBerry Enterprise Server or BlackBerry Enterprise Server components.
- On the taskbar, click Start > Settings > Control Panel > Add/Remove Programs > Add/Remove Windows Components.
- Double-click Management and Monitoring Tools.
- Select the Simple Network Management Protocol check box.
- Click OK.
- When Windows Setup prompts you, install the files from the Windows installation media.
- Complete the installation wizard.
- In the Windows Services, verify that the SNMP service is running.
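If you prefer to confirm this from a command prompt, a generic Windows check (not specific to the BlackBerry documentation) is:

sc query SNMP

The output should report a STATE of RUNNING; if it does not, start the service from the Windows Services window before continuing.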
Difference between revisions of "JTableExtension::bind"
Description
Overloaded bind function.
Description: JTableExtension::bind
public function bind ( $array, $ignore = '' )
- Returns null|string: null if the operation was satisfactory, otherwise returns an error
- Defined on line 59 of libraries/joomla/database/table/extension.php
- Since
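A usage sketch (not part of the original page; the table type and field names are illustrative, and it follows the return contract stated above):

$table = JTable::getInstance('extension');
$data  = array('name' => 'Example Extension', 'enabled' => 1);

$result = $table->bind($data, 'params');
if (is_string($result))
{
    // Per the signature above, a string return value is an error message.
    echo $result;
}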
See also
JTableExtension::bind source code on BitBucket
Class JTableExtension
Subpackage Database
- Other versions of JTableExtension::bind
SeeAlso:JTableExtension::bind [Edit See Also]
User contributed notes
Components Weblinks Links
From Joomla! Documentation
Revision as of 18:27, 7 October 2010 by Denis mouraux (Talk | contribs)
Contents
- 1 Overview
- 2 How to Access
- 3 Description
- 4 Screenshot
- 5 Toolbar
- 6 "Web Links" / "Categories" Links
- 7 Options
- Icon. The Icon to be displayed to the left of the Web Links URL. Select an image file from the drop-down list box. The images are listed from the 'images/M_images' folder.
Category Tab
- # Web links. Hide or Show the number of Web Links in each category.
Categories Tab
- Top Level Category Description Hide or Show the description of the top level category or optionally override with the text from the description field found in menu item. If using Root as top level category, the description field has to be filled.
...
List Layouts Tab
Integration Tab. | https://docs.joomla.org/index.php?title=Help16:Components_Weblinks_Links&oldid=31372 | 2015-06-30T03:56:54 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
Information for "Secure coding guidelines" Basic information Display titleSecure coding guidelines Default sort keySecure coding guidelines Page length (in bytes)21,476 Page ID9908:06, 25 April 2010 Latest editorCarcajou (Talk | contribs) Date of latest edit16:00, 20 April 2015 Total number of edits14 Total number of distinct authors:Page (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Secure_coding_guidelines&action=info | 2015-06-30T03:36:07 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
Error handling is implemented using a hand-shaking protocol between PDO and the database driver code. The database driver code signals PDO that an error has occurred via a failure (0) return from any of the interface functions. If a zero is returned, set the field error_code in the control block appropriate to the context (either the pdo_dbh_t or pdo_stmt_t block). In practice, it is probably a good idea to set the field in both blocks to the same value to ensure the correct one is getting used.

The error_code field is a six-byte field containing a 5 character ASCIIZ SQLSTATE identifier code. This code drives the error message process. The SQLSTATE code is used to look up an error message in the internal PDO error message table (see pdo_sqlstate.c for a list of error codes and their messages). If the code is not known to PDO, a default Unknown Message value will be used.
In addition to the SQLSTATE code and error message, PDO will call the driver-specific fetch_err() routine to obtain supplemental data for the particular error condition. This routine is passed an array into which the driver may place additional information. This array has slot positions assigned to particular types of supplemental info:
A native error code. This will frequently be an error code obtained from the database API.
A descriptive string. This string can contain any additional information related to the failure. Database drivers typically include information such as an error message, code location of the failure, and any additional descriptive information the driver developer feels worthy of inclusion. It is generally a good idea to include all diagnostic information obtainable from the database interface at the time of the failure. For driver-detected errors (such as memory allocation problems), the driver developer can define whatever error information that seems appropriate. | http://docs.php.net/manual/pt_BR/internals2.pdo.error-handling.php | 2015-06-30T03:32:41 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.php.net |
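As a rough sketch of how a driver might populate those slots, the outline below shows a fetch_err handler. The pdo_example_db_handle structure and its last_err_* fields are placeholders for a real driver's own state, and the exact zend array-API signatures vary between PHP versions, so treat this only as an illustration of the slot layout described above.

/* Sketch: slot 1 = native error code, slot 2 = descriptive string. */
static int example_fetch_err(pdo_dbh_t *dbh, pdo_stmt_t *stmt, zval *info)
{
    pdo_example_db_handle *H = (pdo_example_db_handle *)dbh->driver_data;

    if (H->last_err_code) {
        add_next_index_long(info, H->last_err_code);   /* native error code */
        add_next_index_string(info, H->last_err_msg);  /* descriptive string */
    }
    return 1;
}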
Canvas: Designing Workflows.
Signatures are often nicknamed “subtasks” because they describe a task to be called within a task.
You can create a signature for the add task using the subtask method:

>>> add.subtask((2, 2), countdown=10)
tasks.add(2, 2)
There is also a shortcut using star arguments:
>>> add.s(2, 2)
tasks.add(2, 2)
Keyword arguments are also supported:
>>> add.s(2, 2, debug=True)
tasks.add(2, 2, debug=True)
From any signature instance you can inspect the different fields:
>>> s = add.subtask((2, 2), {'debug': True}, countdown=10)
>>> s.args
(2, 2)
>>> s.kwargs
{'debug': True}
>>> s.options
{'countdown': 10}
It supports the "Calling API", which means it supports delay and apply_async, or being called directly.

Calling apply_async on a signature is equivalent to calling it on the task itself:

>>> add.apply_async(args, kwargs, **options)
>>> add.subtask(args, kwargs, **options).apply_async()

>>> add.apply_async((2, 2), countdown=1)
>>> add.subtask((2, 2), countdown=1).apply_async()

Options set when the signature was created can be overridden when it is applied:

>>> s = add.subtask((2, 2), countdown=10)
>>> s.apply_async(countdown=1)  # countdown is now 1
You can also clone signatures to create derivates:
>>> s = add.s(2)
proj.tasks.add(2)

>>> s.clone(args=(4, ), kwargs={'debug': True})
proj.tasks.add(2, 4, debug=True)
Immutability¶
New in version 3.0.
Partials are meant to be used with callbacks, any tasks linked or chord callbacks will be applied with the result of the parent task. Sometimes you want to specify a callback that does not take additional arguments, and in that case you can set the signature to be immutable:
>>> add.apply_async((2, 2), link=reset_buffers.subtask(immutable=True))
There’s also an
.sishortcut for this:
>>> add.si(2, 2)
Now you can create a chain of independent tasks instead:
>>> res = (add.si(2, 2) | add.si(4, 4) | add.s(8, 8))()
>>> res.get()
16
>>> res.parent.get()
8
>>> res.parent.parent.get()
4
Simple group
You can easily create a group of tasks to execute in parallel:
>>> from celery import group
>>> res = group(add.s(i, i) for i in xrange(10))()
>>> res.get(timeout=1)
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
Simple chord
The chord primitive enables us to add callback to be called when all of the tasks in a group have finished executing, which is often required for algorithms that aren’t embarrassingly parallel:
>>> from celery import chord
>>> res = chord((add.s(i, i) for i in xrange(10)), xsum.s())()
>>> res.get()
90

If you don't want the results of the group passed on to the callback, make the callback immutable:

>>> chord((import_contact.s(c) for c in contacts),
...       notify_complete.si(import_id)).apply_async()
Note the use of .si above, which creates an immutable signature.
Blow your mind by combining
Chains can be partial too:
>>> c1 = (add.s(4) | mul.s(8))

# (16 + 4) * 8
>>> res = c1(16)
>>> res.get()
160
Callbacks can be added to any task using the link argument to apply_async, which in practice means adding a callback task:

>>> res = add.apply_async((2, 2), link=mul.s(16))
>>> res.get()
4
The linked task will be applied with the result of its parent
task as the first argument, which in the above case will result
in
mul(4, 16) since the result is 4.
You can also add error callbacks using the
link_error argument:
>>> add.apply_async((2, 2), link_error=log_error.s())

>>> add.subtask((2, 2), link_error=log_error.s())
Since exceptions can only be serialized when pickle is used the error callbacks take the id of the parent task as argument instead:
from __future__ import print_function

import os

from proj.celery import app

@app.task
def log_error(task_id):
    result = app.AsyncResult(task_id)
    result.get(propagate=False)  # make sure result written.
    with open(os.path.join('/var/errors', task_id), 'a') as fh:
        print('--\n\n{0} {1} {2}'.format(
            task_id, result.result, result.traceback), file=fh)
To make it even easier to link tasks together there is a special signature called chain that lets you chain tasks together.
Note
It’s not possible to synchronize on groups, so a group chained to another signature is automatically upgraded to a chord:
# will actually be a chord when finally evaluated res = (group(add.s(i, i) for i in range(10)) | xsum.s()).delay()
Trails¶
Tasks will keep track of what subtasks a task calls in the result backend (unless disabled using Task.trail), and this can be accessed from the result instance. The graph may not be fully formed (one of the tasks has not completed yet), but you can get an intermediate representation of the graph too:

>>> for result, value in res.collect(intermediate=True):
....

Groups¶

The group primitive is a signature that takes a list of tasks to be applied in parallel. If you call the group, the tasks will be applied one after one in the current process, and a GroupResult instance is returned which can be used to keep track of the results, or tell how many tasks are ready and so on:

>>> g = group(add.s(2, 2), add.s(4, 4))
>>> res = g()
>>> res.get()
[4, 8]
Group also supports iterators:
>>> group(add.s(i, i) for i in xrange(100))()
A group is a signature object, so it can be used in combination with other signatures.

The group also returns a special result that works on the group as a whole:

successful()

Return True if all of the subtasks finished successfully (e.g., did not raise an exception).
failed()
Return
True if any of the subtasks failed.
waiting()
Return
True if any of the subtasks is not ready yet.
ready()
Return
True if all of the subtasks are ready.
completed_count()
Return the number of completed subtasks.
revoke()
Revoke all of the subtasks.
join()
Gather the results for all of the subtasks and return a list with them ordered by the order of which they were called.
Chords¶
New in version 2.3.
Note
Tasks used within a chord must not ignore their results. If the result backend is disabled for any task (header or body) in your chord you should read "Important Notes".

A chord consists of a header group and a body (callback). For example, summing add(i, i) for i up to 100:

>>> callback = tsum.s()
>>> header = [add.s(i, i) for i in range(100)]
>>> result = chord(header)(callback)
>>> result.get()
9900
Remember, the callback can only be executed after all of the tasks in the header have returned.
Error handling¶
So what happens if one of the tasks raises an exception?
This was not documented for some time and before version 3.1 the exception value will be forwarded to the chord callback.
From 3.1 errors will propagate to the callback, so the callback will not be executed; instead the callback changes to the failure state, and the error is set to the ChordError exception.
If you’re running 3.0.14 or later you can enable the new behavior via
the
CELERY_CHORD_PROPAGATES setting:
CELERY_CHORD_PROPAGATES = True
While the traceback may be different depending on which result backend is being used, you can see that the error description includes the id of the task that failed and a string representation of the original exception. Note that the ChordError only shows the task that failed first (in time); it does not respect the ordering of the header group.
Important Notes¶
Tasks used within a chord must not ignore their results. In practice this
means that you must enable a
CELERY_RESULT_BACKEND in order to use
chords. Additionally, if
CELERY_IGNORE_RESULT is set to
True
in your configuration, be sure that the individual tasks to be used within
the chord are defined with
ignore_result=False. This applies to both
Task subclasses and decorated tasks.
Example Task subclass:
class MyTask(Task):
    abstract = True
    ignore_result = False
Map & Starmap¶
map and
starmap are built-in tasks
that call the task for every element in a sequence. Chunking lets you split an iterable of work into pieces, so that if you have one million objects you can create 10 tasks with a hundred thousand objects each.

Some may worry that chunking your tasks results in a degradation of parallelism, but this is rarely true for a busy cluster, and in practice, since you are avoiding the overhead of messaging, it may considerably increase performance. You can also convert a chunks signature to a group and skew the countdown of each task by increments of one,
which means that the first task will have a countdown of 1, the second a countdown of 2 and so on. | http://celery.readthedocs.org/en/latest/userguide/canvas.html | 2015-06-30T03:28:18 | CC-MAIN-2015-27 | 1435375091587.3 | [array(['../_images/result_graph.png', '../_images/result_graph.png'],
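A short sketch of what starmap and chunks look like in practice, modelled on the add task used throughout this page (output formatting may differ slightly between Celery versions):

>>> ~add.starmap(zip(range(10), range(10)))
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

>>> res = add.chunks(zip(range(100), range(100)), 10)()
>>> res.get()
[[0, 2, 4, 6, 8, 10, 12, 14, 16, 18],
 [20, 22, 24, 26, 28, 30, 32, 34, 36, 38],
 ...]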
dtype=object) ] | celery.readthedocs.org |
Joomla! 1.6
From Joomla! Documentation
Revision as of 17:16, 25 December 2011 by Chris Davenport (Talk | contribs)
Subcategories
This category has the following 2 subcategories, out of 2 total.
H
- [×] Help screen 1.6 (empty)
T
- [×] Tips and tricks 1.6 (4 P)
Pages in category ‘Joomla! 1.6’
The following 7 pages are in this category, out of 67 total.
Difference between revisions of "Combined Head"

From Joomla! Documentation

Revision as of 06:15, 15 November 2009 by Addacumen; latest revision as of 06:42, 15 November 2009 by Addacumen (table markup adjusted at line 9).

For Users / For Contributors

This is a special Briefing Header which is displayed on each new Chapter Head page. The Chapter Head page is constructed with a Chapter Head layout template. For the template you provide 5 parameters:

- The name of the Chapter Head, e.g. Introduction. Must be provided.
- y turns the Header on. In this case you must also provide Chunk:M16_Name_Header. Blank gets nothing.
- y turns the Footer on. In this case you must also provide Chunk:M16_Name_Footer. Blank gets nothing.
- y adds "- In Preparation" to the name. Blank gets nothing.
- y adds the Future notice. Blank gets nothing.

The initial template state is a name and four values set to y to display the whole page. The Header chunk of each new Chapter Head is pointed at an advice Chunk. Create your own unique Chunk or disconnect it. The parameter order is chosen to reflect my judgement of the likelihood of parameters being dropped. Earlier = less likely to be dropped.

SUGGESTION: Think about how this page should be organised. Set the parameters to match your design in edit mode. Save the file - all the names in red will change from Test * to your new value. Complete each of the Chunks shown in red on the finished page. As you complete a Chunk, think if any part of it will be reused. If it will, make it a little Chunk. Please read the Contributors Advice pages.
Difference between revisions of "Screen.installer.15"
From Joomla! Documentation
Revision as of 19:55, 23
The Installer can be opened from Extensions -> Install/Uninstall menu item.
Description.
Screenshot
Details.
Toolbar
Quick Tips.
Points to Watch.
Related Information. | https://docs.joomla.org/index.php?title=Help15:Screen.installer.15&diff=8403&oldid=8402 | 2015-06-30T03:37:18 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
My smartphone drops calls made using Wi-Fi calling
You can only make calls using Wi-Fi calling if your BlackBerry smartphone is connected to a Wi-Fi network. To avoid dropped calls, your smartphone plays a warning tone when you are making a call using Wi-Fi calling and your Wi-Fi connection is getting weak. To make another call using Wi-Fi calling, move to an area with a stronger Wi-Fi signal.
Welcome to Alfresco documentation. The goal of this documentation is to provide information on installing, configuring, using, administering, customizing, extending, and troubleshooting Alfresco.
If you are an Alfresco Enterprise customer you can find the Alfresco Support Handbooks here.
Looking for the online documentation for another release? You can find it here | http://docs.alfresco.com/4.0/concepts/welcome-infocenter.html | 2015-06-30T03:27:19 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.alfresco.com |
About
From Joomla! Documentation
Revision as of 21:57, 25 August 2013 by Tom Hutchison (Talk | contribs)
The Joomla! Documentation Wiki contains collaborative community-contributed documentation for the Joomla! project.
Wiki Policies and Guidelines
Wiki Help
JDOC Projects (Helping the wiki)
Joomla! Electronic Documentation License
- License and License FAQ | https://docs.joomla.org/index.php?title=JDOC:About&oldid=102512 | 2015-06-30T05:02:36 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
ElementSQLSrv:: construct/11.1 to API17:JDatabaseQueryElementSQLSrv:: construct without leaving a redirect (Robot: Moved page)
- 20:35, 27 April 2011 Doxiki2 (Talk | contribs) automatically marked revision 56402 of page JDatabaseQueryElementSQLSrv:: construct/11.1 patrolled | https://docs.joomla.org/index.php?title=Special:Log&page=JDatabaseQueryElementSQLSrv%3A%3A+construct%2F11.1 | 2015-06-30T03:31:42 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
- MongoDB Integration and Tools >
- MongoDB-Based Applications
MongoDB-Based Applications¶
Please list applications that leverage MongoDB here. If you’re using MongoDB for your application, you can list yourself here by editing this page. Also, check out our Contributor Hub Project for a list of the most popular projects supporting MongoDB.
See also
- Production Deployments Companies and Sites using MongoDB
- Hosting Center
Applications Using MongoDB¶
Application for creating dashboards with drilldown and collaboration capabilities.
Content-management using TurboGears and Mongo.
Content management system built using NodeJS and MongoDB.
Cube is an open-source system for visualizing time series data, built on MongoDB, Node and D3.
ErrorApp tracks errors from your apps. It reports them to you and gathers all information and make reports available to you.
A full-featured, developer centric open source e-commerce platform that makes custom code easy, with powerful templates & expressive syntax.
Graylog2 is an open source syslog server implementation that stores logs in MongoDB and provides a Rails frontend.
Harmony is a powerful web-based platform for creating and managing websites. It helps connect developers with content editors, for unprecedented flexibility and simplicity. For more information, view Steve Smith’s presentation on Harmony at MongoSF (April 2010).
Hummingbird is a real-time web traffic visualization tool developed by Gilt Groupe.
KeystoneJS is a Node.js content management system and web application platform built on Express.JS and MongoDB.
Locomotive is an open source CMS for Rails. It’s flexible and integrates with Heroku and Amazon S3.
Mogade offers free and simple to use leaderboard and achievement services for mobile game developers.
A flexible CMS that uses MongoDB and PHP.
A simple, web-based data browser for MongoDB.
Mongeez is an opensource solution allowing you to manage your mongo document changes in a manner that is easy to synchronize with your code changes. Check out mongeez.org.
NewsBlur is an open source visual feed reader that powers. NewsBlur is built with Django, MongoDB, Postgres and RabbitMQ.
Plugin for Quantum GIS that lets you plot geographical data stored in MongoDB.
Rubedo is a full featured open source Enterprise Content Management System, built on MongoDB and Elastic Search with Zend Framework, Sencha Ext JS and Boostrap. It offers a complete set of back-office tools to easily manage galaxies of responsive, flexible and performant applications or websites.
Open source image transcription tool.
Free and open source Q&A software, open source stackoverflow style app written in Ruby, Rails, MongoMapper and MongoDB.
Strider: Open Source Continuous Integration & Deployment Server.
Thundergrid is a simple framework written in PHP that allows you to store large files in your Mongo database in seconds.
Websko is a content management system designed for individual Web developers and cooperative teams. | http://docs.mongodb.org/ecosystem/tools/applications/ | 2015-06-30T03:29:34 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.mongodb.org |
Spring Security is written to execute within a standard Java 1.4 Runtime Environment. It also supports Java 5.0, although the Java types which are specific to this release are packaged in a separate package with the suffix "tiger" in their JAR filename.
This design offers maximum deployment time flexibility, as you can simply copy your target artifact (be it a JAR, WAR or EAR) from one system to another and it will immediately work.
Let's explore some of the most important shared components in Spring Security.

The SecurityContextHolder stores details of the present security context, including the principal currently using the application, and by default it uses a ThreadLocal. Some applications aren't entirely suitable for a ThreadLocal because of the way they work with threads - for example, when all threads in the JVM should share the same security context. For this situation you would use the SecurityContextHolder.MODE_GLOBAL strategy.

Whilst you won't normally need to create an Authentication object yourself, it is fairly common for users to query the Authentication object. You can use the following code block - from anywhere in your application - to obtain the name of the authenticated user, for example:
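The code block referred to here does not survive in this extract; the snippet below is the usual form of that example (imports omitted):

Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal();

if (principal instanceof UserDetails) {
    String username = ((UserDetails) principal).getUsername();
} else {
    String username = principal.toString();
}

Note that getAuthentication() returns null if nobody has authenticated yet, so production code should guard against that before calling getPrincipal().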
Another item to note from the above code fragment is that you can obtain a principal from the Authentication object. The principal is just an Object. Most of the time this can be cast into a UserDetails object.
UserDetails is a central interface in Spring Security. It represents a principal in an extensible, application-specific way, and most authentication providers that ship with Spring Security return an instance of UserDetails as the principal.

Spring Security can participate in many different authentication environments. Whilst we recommend people use Spring Security for authentication and not integrate with existing Container Managed Authentication, it is nevertheless supported - as is integrating with your own proprietary authentication system. Let's first explore authentication from the perspective of Spring Security managing web security entirely on its own, which is illustrative of the most complex and most common situation.

In a typical web application the main participants in the authentication process are the mechanism that collects credentials from the user agent (eg a BASIC authentication dialogue box, a cookie, a X509 certificate) - that collection step is called the "authentication mechanism" - and an AuthenticationProvider. After your browser decides to submit the authentication details and they are collected from the user agent, an "Authentication request" object is built and then presented to an AuthenticationProvider.
AuthenticationProvider.
The last player in the Spring Spring Spring Security doesn't mind how you put an
Authentication inside the
SecurityContextHolder. The only critical
requirement is that the
SecurityContextHolder
contains an
Authentication that represents a
principal before the
AbstractSecurityInterceptor
needs to authorize a request. such situations it's quite easy to get Spring Security to work, and still provide authorization capabilities. All you need to do is write a filter (or equivalent) that reads the third-party user information from a location, build a Spring Security-specific Authentication object, and put it onto the SecurityContextHolder. It's quite easy to do this, and it is a fully-supported integration approach.
Spring Security uses the term "secure object" to refer to any object that can have security (such as an authorization decision) applied to it. The most common examples are method invocations and web requests. In most applications the business logic lives in a services layer that operates on relatively anemic domain objects (for clarification, the author disapproves of this design and instead advocates properly encapsulated domain objects together with the DTO, assembly, facade and transparent persistence patterns, but as use of anemic domain objects is the present mainstream approach, we'll talk about it here). If you just need to secure method invocations to the services layer, Spring's standard AOP (otherwise known as AOP Alliance) will be adequate; a common choice is to perform some web request authorization, coupled with some Spring AOP method invocation authorization on the services layer.
Each secure object type supported by Spring Security has its own interceptor class, which is a subclass of AbstractSecurityInterceptor. AbstractSecurityInterceptor provides a consistent workflow for handling secure object requests, typically:

1. Look up the "configuration attributes" associated with the present request.
2. Submit the secure object, current Authentication and configuration attributes to the AccessDecisionManager for an authorization decision.
3. Optionally change the Authentication under which the invocation takes place.
4. Allow the secure object invocation to proceed (assuming access was granted).
5. Call the AfterInvocationManager if configured, once the invocation has returned.
A "configuration attribute" can be thought of as a String that has special meaning to the classes used by
AbstractSecurityInterceptor. They may be simple role names or have more complex meaning, depending on how sophisticated the AccessDecisionManager implementation is.
The
AbstractSecurityInterceptor is configured with an
ObjectDefinitionSource which
it uses to look up the attributes for a secure object. Usually this configuration will be hidden from the user. Configuration
attributes will be entered as annotations on secured methods, or as access attributes on secured URLs (using the
namespace
<intercept-url> syntax)..
AbstractSecurityInterceptor and its related objects are shown in Figure 5.1, "The key "secure object" model".
Difference between revisions of "Components Weblinks Categories Edit"
From Joomla! Documentation
Revision as of 11:26, 13 June 2010
Contents. TBD
- Parent. TBD
- State. TBD
- Access. TBD
- Language. TBD
- ID. TBD
Basic Options
- Alternate Layout. TBD
- Image. TBD
Category Access Rules
TBD
Metadata Options
TBD
Toolbar
At the top right you will see the toolbar:
- Save. Save it and return to editing the menu details.
- Save & Close. TBD
- Save & New. TBD
- Save as Copy. TBD
- | https://docs.joomla.org/index.php?title=Help16:Components_Weblinks_Categories_Edit&diff=28746&oldid=28268 | 2015-06-30T03:32:56 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
Information for "Manage articles using the Front-end of Joomla! 1.5" Basic information Display titleJ1.5:Manage articles using the Front-end of Joomla! 1.5 Default sort keyManage articles using the Front-end of Joomla! 1.5 Page length (in bytes)5,036 Page ID1103612:11, 29 December 2010 Latest editorWilsonge (Talk | contribs) Date of latest edit07:21, 6 June 2013 Total number of edits49’ | https://docs.joomla.org/index.php?title=J1.5:Manage_articles_using_the_Front-end_of_Joomla!_1.5&action=info | 2015-06-30T04:00:43 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
Changes related to "Talk:How to use the filesystem package"
← Talk:How to use the filesystem package
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&days=30&from=&target=Talk%3AHow_to_use_the_filesystem_package | 2015-06-30T04:27:00 | CC-MAIN-2015-27 | 1435375091587.3 | [] | docs.joomla.org |
Resources¶.
Quick Start¶
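The original quick-start code sample does not survive in this extract; a typical ModelResource definition in the same spirit (the Entry model and field names are illustrative) looks like this:

from tastypie.authorization import DjangoAuthorization
from tastypie.constants import ALL, ALL_WITH_RELATIONS
from tastypie.resources import ModelResource
from myapp.models import Entry


class EntryResource(ModelResource):
    class Meta:
        queryset = Entry.objects.all()
        list_allowed_methods = ['get', 'post']
        detail_allowed_methods = ['get', 'post', 'put', 'delete']
        resource_name = 'entry'
        authorization = DjangoAuthorization()
        filtering = {
            'slug': ALL,
            'user': ALL_WITH_RELATIONS,
            'created': ['exact', 'range', 'gt', 'gte', 'lt', 'lte'],
        }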
Why Class-Based?¶.
Why
Resource vs.
ModelResource?¶.
Flow Through The Request/Response Cycle¶are checked by Django’s url resolvers.
On a match for the list view,
Resource.wrap_view('dispatch_list')is called.
wrap_viewprovides basic error handling & allows for returning serialized errors.
Because
dispatch_listwas passed to
wrap_view,
Resource.dispatch_listis called next. This is a thin wrapper around
Resource.dispatch.
dispatchdoesactually calls the requested method (
get_list).
get_listdoes the actual work of the API. It does:
- A fetch of the available objects via
Resource.obj_get_list. In the case of
ModelResource, this builds the ORM filters to apply (
ModelResource.build_filters). It then gets the
QuerySetvia
ModelResource.get_object_list(which performs
Resource.apply_authorization_limitstoapplied to each of them, causing Tastypie to translate the raw object data into the fields the endpoint supports.
- Finally, it calls
Resource.create_response.
create_responsedoes is potentially store that a request occurred for future throttling (
Resource.log_throttled_access) then either returns the
HttpResponse.
Why Resource URIs?¶

Resource URIs play a heavy role in how Tastypie delivers data: every exposed object gets a URI of its own, and related data is referenced by those URIs rather than embedded inline.

The Dehydrate Cycle¶

Tastypie uses a "dehydrate" cycle to prepare data for presentation to the client. In their order of operations:
- Put the data model into a Bundle instance, which is then passed through the various methods.
- Run through all fields on the Resource, letting each field perform its own dehydrate method on the bundle.
- While processing each field, look for a dehydrate_<fieldname> method on the Resource. If it's present, call it with the bundle.
- Finally, after all fields are processed, if the dehydrate method is present on the Resource, it is called with the entire bundle.
dehydrate_FOO¶

The convention-based dehydrate_FOO methods let you put final touches on a single field, for example exposing a computed value (such as a total_score aggregate) that isn't stored directly on the model.
dehydrate¶

The dehydrate method runs at the very end of the cycle and receives the entire bundle, letting you add, alter, or remove data across any field.
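A sketch of both hooks (the model and field names are illustrative):

from django.contrib.auth.models import User
from tastypie.resources import ModelResource


class UserResource(ModelResource):
    class Meta:
        queryset = User.objects.all()
        excludes = ['email', 'password', 'is_staff', 'is_superuser']

    def dehydrate_full_name(self, bundle):
        # Per-field hook: expose a computed value that isn't a model field.
        return bundle.obj.get_full_name()

    def dehydrate(self, bundle):
        # Whole-bundle hook: runs last and can touch any field.
        bundle.data['custom_field'] = "Whatever you want"
        return bundle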
The Hydrate Cycle¶
Tastypie uses a "hydrate" cycle to take serialized data from the client and turn it into something the data model can use. This is the reverse process from the dehydrate cycle. In their order of operations:
- Put the data from the client into a Bundle instance, which is then passed through the various methods.
- If the hydrate method is present on the Resource, call it with the entire bundle.
- While processing each field, look for a hydrate_<fieldname> method on the Resource. If it's present, call it with the bundle.
- Finally, run through all fields on the Resource, letting each field perform its own hydrate method on the bundle.
hydrate_FOO¶

As with dehydrate_FOO, these convention-based methods let you clean up a single incoming field, for example:

def hydrate_title(self, bundle):
    bundle.data['title'] = bundle.data['title'].lower()
    return bundle
The return value is the
bundle.
Per-field hydrate¶

Each field also has its own hydrate method, which converts the incoming value into the proper Python data type for that field.
Reverse “Relationships”¶()
Resource Options (AKA
Meta)¶
The inner
Meta class allows for class-level configuration of how the
Resource should behave. The following options are available:
serializer¶
Controls which serializer class the
Resourceshould use. Default is
tastypie.serializers.Serializer().
authentication¶
Controls which authentication class the
Resourceshould use. Default is
tastypie.authentication.Authentication().
validation¶
Controls which validation class the
Resourceshould use. Default is
tastypie.validation.Validation().
paginator_class¶
Controls which paginator class the
Resourceshould use. Default is
tastypie.paginator.Paginator.
Note
This is different than the other options in that you supply a class rather than an instance. This is done because the Paginator has some per-request initialization options.
throttle¶
Controls which throttle class the
Resourceshould use. Default is
tastypie.throttle.BaseThrottle().
allowed_methods¶
Controls what list & detail REST methods the
Resourceshould respond to. Default is
None, which means delegate to the more specific
list_allowed_methods&
detail_allowed_methodsoptions.
You may specify a list like
['get', 'post', 'put', 'delete', 'patch']as a shortcut to prevent having to specify the other options.
list_allowed_methods¶
Controls what list REST methods the
Resourceshould respond to. Default is
['get', 'post', 'put', 'delete', 'patch']. Set it to an empty list (i.e. []) to disable all methods.
detail_allowed_methods¶
Controls what detail REST methods the
Resourceshould respond to. Default is
['get', 'post', 'put', 'delete', 'patch']. Set it to an empty list (i.e. []) to disable all methods.
limit¶
Controls how many results the
Resourcewill show at a time. Default is either the
API_LIMIT_PER_PAGEsetting (if provided) or
20if not specified.
max_limit¶
Controls the maximum number of results the
Resourcewill show at a time. If the user-specified
limitis higher than this, it will be capped to this limit. Set to
0or
Noneto allow unlimited results.
resource_name¶
An override for the
Resourceto use when generating resource URLs. Default is
None.
If not provided, the
Resourceor
ModelResourcewill attempt to name itself. This means a lowercase version of the classname preceding the word
Resourceif present (i.e.
SampleContentResourcewould become
samplecontent).
default_format¶
Specifies the default serialization format the
Resourceshould use if one is not requested (usually by the
Acceptheader or
formatGET parameter). Default is
application/json.
filtering¶
Provides a list of fields that the
Resourcewill accept client filtering on. Default is
{}.
Keys should be the fieldnames as strings while values should be a list of accepted filter types.
ordering¶
Specifies the what fields the
Resourceshould allow ordering on. Default is
[].
Values should be the fieldnames as strings. When provided to the
Resourceby the
order_byGET parameter, you can specify either the
fieldname(ascending order) or
-fieldname(descending order).
object_class¶
Provides the
Resourcewith the object that serves as the data source. Default is
None.
In the case of
ModelResource, this is automatically populated by the
querysetoption and is the model class.
queryset¶
Provides the
Resourcewith the set of Django models to respond with. Default is
None.
Unused by
Resourcebut
Resourceshould include. A whitelist of fields. Default is
[].
excludes¶
Controls what introspected fields the
Resourceshould NOT include. A blacklist of fields. Default is
[].
include_resource_uri¶
Specifies if the
Resourceshould include an extra field that displays the detail URL (within the api) for that resource. Default is
True.
include_absolute_url¶
Specifies if the
Resourceshould include an extra field that displays the
get_absolute_urlfor that object (on the site proper). Default is
False.
always_return_data¶
Specifies all HTTP methods (except
DELETE) should return a serialized form of the data. Default is
False.
If
False,
HttpNoContent(204) is returned on
PUTwith an empty body & a
Locationheader of where to request the full resource.
If
True,
HttpResponse(200) is returned on
POST/PUTwith a body containing all the data in a serialized form.
collection_name¶
Specifies the collection of objects returned in the
GETlist will be named. Default is
objects.
Basic Filtering¶
Advanced Filtering¶.
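The examples for these two sections are missing from this extract; a representative filtering setup and query (field names illustrative) would be:

from tastypie.constants import ALL, ALL_WITH_RELATIONS


class EntryResource(ModelResource):
    class Meta:
        queryset = Entry.objects.all()
        filtering = {
            'slug': ALL,                 # every basic filter type
            'user': ALL_WITH_RELATIONS,  # filter through the related resource
            'created': ['exact', 'gt', 'gte', 'lt', 'lte', 'range'],
        }

# Which permits requests such as:
#   /api/v1/entry/?slug__startswith=hello&created__gte=2012-01-01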
wrap_view¶. (200 OK) if
Meta.always_return_data = True.
put_detail¶ (200
OK).
post_list¶.ikey then the item is considered “new” and is handled like a
POSTto the resource list.
- If the dict has a
resource_urikey and the
resource_urirefers to an existing resource then the item is an update; it’s treated like a
PATCHto the corresponding resource detail.
- If the dict has a
resource_uribis all or nothing. If a single sub-operation fails, the entire request will fail and all resources will be rolled back.
- For
PATCHto work, you must have
patchin your detail_allowed_methods setting.
- To delete objects via
deleted_objectsin a
PATCHrequest you must have
delete,.
build_filters¶

Given a dictionary of filter expressions from the query string, build_filters translates them into ORM-level filters.

apply_sorting¶

Given a dictionary of sorting options (the order_by parameter), apply_sorting applies the requested ordering to the object list.
User Guide
Local Navigation
I can't connect to the mobile network
Try the following actions:
- If your BlackBerry®.
- If you have a Wi-Fi® enabled smartphone and your wireless service provider supports UMA, verify that your connection preference isn't set to Wi-Fi Only.
Remove-Hyphostingunitstorage¶
Removes storage from a hosting unit.
Syntax¶
Remove-HypHostingUnitStorage [-LiteralPath] <String> [-StoragePath <String>] [-StorageType <StorageType>] [-LoggingId <Guid>] [-BearerToken <String>] [-TraceParent <String>] [-TraceState <String>] [-VirtualSiteId <String>] [-AdminAddress <String>] [<CommonParameters>]
Detailed Description¶
Use this command to remove storage locations from a hosting unit. This does not remove the storage from the hypervisor, only the reference to the storage for the Host Service. After removal, the storage is no longer used to store hard disks (when creating new virtual machines with the Machine Creation Service). The hard disks located already on the storage remain in place and virtual machines that have been created already continue to use the storage until they are removed from the deployment. Do not use this command if the connection for the hosting unit is in maintenance mode. If the storage location to be removed no longer exists on the hypervisor for the hosting unit, you must supply a fully qualified path to the storage location.
Related Commands¶
Parameters¶
Input Type¶
System.String
You Can Pipe A String That Contains A Path To Remove-Hyphostingunitstorage (Storagepath Parameter).¶
Return Values¶
Citrix.Xdpowershell.Hostingunit¶
Remove¶
After storage is removed, it is the administrator's responsibility to maintain its contents. The Citrix XenDesktop Machine Creation Service does not attempt to clean up any data that is stored in the storage location.
If all storage is removed from the hosting unit, other features of the Machine Creation Service stops functioning until some storage is added again.
In the case of failure, the following errors can result.
Error Codes
-----------
HostingUnitsPathInvalid
The path provided is not to an item in a subdirectory of a hosting unit item.
HostingUnitStorageObjectToDeleteDoesNotExist
The storage path specified is not part of the hosting unit.
HypervisorInMaintenanceMode
The hypervisor for the connection is in maintenance mode.

Examples¶

Example 1¶

c:\PS>Remove-HypHostingUnitStorage -LiteralPath XDHyp:\HostingUnits\MyHostingUnit -StoragePath 'XDHyp:\HostingUnits\MyHostingUnits\newStorage.storage'
Description¶
The command removes the OS storage location called "newStorage.storage" from the hosting unit called "MyHostingUnit"
Example 2¶
c:\PS>Get-ChildItem XDHYP:\HostingUnits\Host\\*.storage | Remove-HypHostingUnitStorage XDHYP:\HostingUnits\Host1
Description¶
The command removes all OS storage from all hosting units called "Host1".
Example 3¶
c:\PS>Get-ChildItem XDHYP:\HostingUnits\Host\\*.storage | Remove-HypHostingUnitStorage -StorageType PersonalvDiskStorage
Description¶
The command removes all PersonalvDisk storage from all hosting units called "Host1". | https://developer-docs.citrix.com/projects/citrix-virtual-apps-desktops-service-sdk/en/latest/HostService/Remove-HypHostingUnitStorage/ | 2022-01-16T19:47:47 | CC-MAIN-2022-05 | 1642320300010.26 | [] | developer-docs.citrix.com |
Review and send the engagement letter
This topic applies to OnPoint PCR.
The draft version of your Engagement Letter is automatically generated based on your responses in the Engagement set up, Actions (
) | Guidance | Collapsed.
Once you're satisfied with the contents of the engagement letter, select Sign Off to perform the appropriate sign off. Once the content is reviewed, you can send a copy to your client. PCR engagement letter.
Client collaboration — sending queries:
To reduce the complexity of managing data, OnPoint PCR is equipped with the Query functionality to securely handle your client collaboration.
To obtain the client signature, you can send your draft directly to clients using query documents in OnPoint PCR. are also contained within the engagement file, ensuring that the engagement team and client always know what is completed or outstanding.
To send the engagement letter to your clients:
Open the 1-250.
As the client responds to the query, you have the opportunity to review their responses before adding them into your engagement file. If the provided information: Marking a query as complete will permanently lock the document and the query will no longer be able to reopen.
To learn more about the staff-contact collaboration workflow, see Staff-contact collaboration (Queries). | https://docs.caseware.com/2020/webapps/31/da/Explore/OnPoint-PCR/Review-and-send-the-engagement-letter.htm?region=us | 2022-01-16T19:27:55 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['/documentation_files/2020/webapps/31/Content/en/Resources//CaseWare_Logos/pcr-logo.png',
None], dtype=object)
array(['/documentation_files/2020/webapps/31/Content/en/Resources//Images/1-2252.png',
'Review engagement letter.'], dtype=object) ] | docs.caseware.com |
Release 0.9.14¶
Operator Changes¶
Docker command executor (
docker: {image: name}) supports
always_pull: trueoption (@alu++).
emr>operator supports region options and secrets. See documents for details. (@saulius++)
General Changes¶
Digdag validates _error task before workflow starts (with local mode) or when a workflow is uploaded (with server mode). It was validated only when _error task actually ran but now it’s easier for us to notice broken workflows.
Fixed uncaught exception in workflow executor loop when _error tasks had a problem. This is only for being more defensive because broken _error tasks shouldn’t exist thanks to above another fix. | https://docs.digdag.io/releases/release-0.9.14.html | 2022-01-16T18:25:44 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.digdag.io |
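For the Docker executor change listed under Operator Changes, a configuration sketch (image name and task body are illustrative):

+run_in_container:
  docker:
    image: ubuntu:20.04
    always_pull: true
  sh>: echo "image is re-pulled on every run"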
sys.dm_hadr_database_replica_cluster_states (Transact-SQL)
Applies to:
SQL Server (all supported versions)
Returns a row containing information intended to provide you with insight into the health of the availability databases in the Always On availability groups in each Always On availability group on the Windows Server Failover Clustering (WSFC) cluster.
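The column list is not reproduced in this extract; a typical query against the view, selecting a few of its documented columns, looks like:

SELECT database_name,
       is_failover_ready,
       is_pending_secondary_suspend,
       is_database_joined,
       recovery_lsn,
       truncation_lsn
FROM sys.dm_hadr_database_replica_cluster_states;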
Security
Permissions
Requires VIEW SERVER STATE permission on the server.
See Also
Always On Availability Groups Dynamic Management Views and Functions (Transact-SQL)
Always On Availability Groups Catalog Views (Transact-SQL)
Monitor Availability Groups (Transact-SQL)
Always On Availability Groups (SQL Server)
sys.dm_hadr_database_replica_states (Transact-SQL) | https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-hadr-database-replica-cluster-states-transact-sql?view=sql-server-ver15 | 2022-01-16T20:47:33 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.microsoft.com |
This document contains some technical rules and limits for alerts.
Permission levels
Permissions differ depending on whether you're on our original product-based pricing model or our New Relic One pricing model:
Limits
If your organization has a parent/child account structure, child accounts do not inherit a parent account's alert policies: You must create policies separately for all child accounts.
The following rules apply both to the New Relic One user interface and to the REST API (v2). | https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/rules-limits-alerts | 2022-01-16T19:37:41 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.newrelic.com |
This document explains how to install APM's .NET Framework agent and .NET Core agent on Azure Service Fabric. This is not the same as installing the Infrastructure integrations for Microsoft Azure.
Install .NET agent on Azure Service Fabric
Important
In most cases, installing the .NET agent in an Azure Service Fabric environment can be performed using the standard install procedures for either Windows or Linux. This document highlights some alternate ways you can install the agent.
You will need to ensure the agent gets installed on all nodes in your cluster. To monitor multiple nodes, you may want to integrate the install into your deployment process.
If you are using containers in your Service Fabric environment you should read Install for Docker.
You can also install the agent in a Service Fabric environment using NuGet. NuGet is often a good option for developers because the agent gets deployed along with your application. Though, using NuGet requires some manual installation procedures. See Install with NuGet.
Install using NuGet
To install the .NET agent using NuGet:
Follow the standard NuGet install procedures.
When using NuGet, you must set some environment variables. This can be done in your application's
ServiceManifest.xmlfile. See the relevant instructions below:
For the .NET Framework only: Edit your
app.configfile and add the
NewRelic.AgentEnabledapp setting:<appSettings>...<add key="NewRelic.AgentEnabled" value="true" />...</appSettings>
If your application is generating traffic, data should appear in your APM account in several minutes. If data does not appear, try these troubleshooting steps:
- Make sure that all files in the
newrelic directory at the root of your app were published to Azure.
- Make sure the environment variables are set correctly.
- See the general troubleshooting documentation to check for common errors. | https://docs.newrelic.com/docs/apm/agents/net-agent/azure-installation/install-net-agent-azure-service-fabric | 2022-01-16T18:43:47 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.newrelic.com |
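For the ServiceManifest.xml step mentioned above, a sketch of the environment-variable block follows. The variable names and values depend on whether you use the .NET Framework or .NET Core agent, so treat this as an outline and consult the standard NuGet install instructions for the full list:

<CodePackage Name="Code" Version="1.0.0">
  <EnvironmentVariables>
    <EnvironmentVariable Name="CORECLR_ENABLE_PROFILING" Value="1" />
    <EnvironmentVariable Name="CORECLR_NEWRELIC_HOME" Value="newrelic" />
    <EnvironmentVariable Name="NEW_RELIC_LICENSE_KEY" Value="your_license_key" />
  </EnvironmentVariables>
</CodePackage>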
Event code history
When multiple versions of an event code exist, the history of those event codes is displayed at the bottom of the event code information screen. This history section is only displayed when multiple versions exist. The history section enables access to the configuration of previous versions of the event code.
The history section displays the following columns: | https://docs.pega.com/pega-smart-claims-engine-user-guide/86/event-code-history | 2022-01-16T20:26:05 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
Resolving Duplicate Records
One approach is to group duplicate records together into collections. When you approve the records they can then be processed through a consolidation process to eliminate the duplicate records in each collection from your data.
Another approach is to edit the records so that they are more likely to be recognized as duplicates, for example correcting the spelling of a street name. When you approve the records, Spectrum™ Technology Platform reprocesses the records through a matching and consolidation process. If you corrected the records successfully, Spectrum™ Technology Platform will be able to identify the record as a duplicate.
Yet another approach to resolving duplicate records is to create a best of breed record. This combines the other two approaches by managing record collections and then editing one of the records in the collection to include fields from both the original and duplicate records. This record is then known as the best of breed record. | https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/DataQualityGuide/BusinessSteward/source/ExceptionEditor/ResolvingDuplicates.html | 2022-01-16T18:14:47 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.precisely.com |
A Resource Group is the logical container in which you can create resources such as network components, computer resources, storage, etc.
Refer to the Azure documentation for more information.
In this section, we will create a Resource Group for testing LifeKeeper (we will name it LK-QSG) as follows:
- Select “Resource groups” from the home screen to browse the list of existing resource groups, then click “Create” located at the top.
- Specify the name of Resource group as LK-QSG, then select the Region to create the resource group. Note that the region used to create the resource group must have 3 availability zones.
Once you make the selections, click “Review + Create”.
- The wizard evaluates these values and you can now create the resource if the validation passes.
Click “Create” to create the resource group.
- Now the resource group is created.
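The same resource group can also be created from the Azure CLI; a one-line equivalent (the region shown is only an example and must offer 3 availability zones) is:

az group create --name LK-QSG --location westus2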
Step 2: Set up the AWS CLI and AWS SDKs
The following steps show you how to install the AWS Command Line Interface (AWS CLI) and AWS SDKs that the examples in this documentation use. There are a number of different ways to authenticate AWS SDK calls. The examples in this guide assume that you're using a default credentials profile for calling AWS CLI commands and AWS SDK API operations.
For a list of available AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
Follow the steps to download and configure the AWS SDKs.
To set up the AWS CLI and the AWS SDKs
Download and install the AWS CLI and the AWS SDKs that you want to use. This guide provides examples for the AWS CLI, Java, Python, Ruby, Node.js, PHP, .NET, and JavaScript. For information about installing AWS SDKs, see Tools for Amazon Web Services
.
Create an access key for the user you created in Create an IAM user.
Sign in to the AWS Management Console and open the IAM console at
.
In the navigation pane, choose Users.
Choose the name of the user you created in Create an IAM user.
Choose the Security credentials tab.
Choose Create access key. Then choose Download .csv file to save the access key ID and secret access key to a CSV file on your computer.
If you have installed the AWS CLI, you can configure the credentials and region for most AWS SDKs by entering
aws configureat the command prompt. Otherwise, use the following instructions.
On your computer, navigate to your home directory, and create an
.awsdirectory. On Unix-based systems, such as Linux or macOS, this is in the following location:
~/.aws
On Windows, this is in the following location:
%HOMEPATH%\.aws
In the
.awsdirectory, create a new file named
credentials.
Open the credentials CSV file that you created in step 2 and copy its contents into the
credentialsfile using the following format:
[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
Substitute your access key ID and secret access key for your_access_key_id and your_secret_access_key.
Save the
Credentialsfile and delete the CSV file.
In the
.awsdirectory, create a new file named
config.
Open the
configfile and enter your region in the following format.
[default]
region = your_aws_region
Substitute your desired AWS Region (for example,
us-west-2) for your_aws_region.
Note
If you don't select a region, then us-east-1 will be used by default.
Save the
configfile.
Next step
Step 3: Getting started using the AWS CLI and AWS SDK API | https://docs.aws.amazon.com/rekognition/latest/dg/setup-awscli-sdk.html | 2022-01-16T18:12:06 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.aws.amazon.com |
Kubernetes' command line interface (CLI),
kubectl, can be used to run commands against a Kubernetes cluster. Because OpenShift Container Platform is a certified Kubernetes distribution, you can use the supported
kubectl binaries that ship with OpenShift Container Platform, or you can gain extended functionality by using the
oc binary.
The
oc binary offers the same capabilities as the
kubectl binary, but it extends to natively support additional OpenShift Container Platform features, including:
Full support for OpenShift Container Platform resources
Resources such as
DeploymentConfig,
BuildConfig,
Route,
ImageStream, and
ImageStreamTag objects are specific to OpenShift Container Platform distributions, and build upon standard Kubernetes primitives.
Authentication
The
oc binary offers a built-in
login command that allows authentication and enables you to work with OpenShift Container Platform projects, which map Kubernetes namespaces to authenticated users. See Understanding authentication for more information.
Additional commands
The additional command
oc new-app, for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command
oc new-project makes it easier to start a project that you can switch to as your default.
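A brief sketch of these commands side by side (the cluster URL and Git repository are placeholders):

# Log in and create a project (oc-specific conveniences)
oc login https://api.example.com:6443 --username=developer
oc new-project demo

# Build and deploy an application from source (oc extension)
oc new-app https://github.com/sclorg/django-ex.git

# Standard Kubernetes operations behave the same with either binary
oc get pods
kubectl get pods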
The
kubectl binary is provided as a means to support existing workflows and scripts for new OpenShift Container Platform users coming from a standard Kubernetes environment, or for those who prefer to use the
kubectl CLI. Existing users of
kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the OpenShift Container Platform cluster.
You can install the supported
kubectl binary by following the steps to Install the CLI. The
kubectl binary is included in the archive if you download the binary, or is installed when you install the CLI by using an RPM.
For more information, see the kubectl documentation. | https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_cli/usage-oc-kubectl.html | 2022-01-16T19:34:27 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.openshift.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Get-ECSAttributeList
  -Cluster <String>
  -AttributeName <String>
  -AttributeValue <String>
  -TargetType <TargetType>
  -MaxResult <Int32>
  -NextToken <String>
  -Select <String>
  -PassThru <SwitchParameter>
  -NoAutoIteration <SwitchParameter>
ListAttributesreturns a list of attribute objects, one for each attribute on each resource. You can filter the list of results to a single attribute name to only return results that have that name. You can also filter the results by attribute name and value. You can do this, for example, to see which container instances in a cluster are running a Linux AMI (
ecs.os-type=linux).
-MaxResult: The maximum number of results returned by ListAttributes in paginated output. If this parameter isn't used, then ListAttributes returns up to 100 results and a nextToken value if applicable.
-NextToken: The nextToken value returned from a previous paginated ListAttributes request.
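For example, the following hypothetical call lists the container instance attributes in the default cluster that match the Linux AMI filter described above (the cluster name is a placeholder):

Get-ECSAttributeList -Cluster "default" -TargetType container-instance -AttributeName "ecs.os-type" -AttributeValue "linux"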
AWS Tools for PowerShell: 2.x.y.z | https://docs.aws.amazon.com/powershell/latest/reference/items/Get-ECSAttributeList.html | 2022-01-16T20:41:49 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.aws.amazon.com |
9.0.041.07
Voice Platform CTI Connector Release Notes
What's New
This release contains the following new features and enhancements:
- Support for Red Hat Enterprise Linux 7 operating system. See the Genesys Voice Platform page in the Genesys Supported Operating Environment Reference Guide for more detailed information and a list of all supported operating systems. (GVP-42310)
Resolved Issues
This release contains no resolved issues.
Upgrade Notes
No special procedure is required to upgrade to release 9.0.041.07.
Azure Active Directory documentation
Azure Active Directory (Azure AD) is a multi-tenant, cloud-based identity and access management service.
Application and HR provisioning
Device management
Identity protection
Managed identities for Azure resources
Microsoft identity platform
Privileged identity management (PIM)
Reports and monitoring
Microsoft 365
Explore Microsoft 365, a complete solution that includes Azure AD.
Azure AD PowerShell
Learn how to install and use the Azure AD PowerShell module.
Azure CLI commands for Azure AD
Find the Azure AD commands in the CLI reference. | https://docs.microsoft.com/en-IN/azure/active-directory/ | 2022-01-16T18:27:10 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.microsoft.com |
The Editor Page
The Exception Editor provides a means for you to manually review, modify, and approve exception records. The goal of a manual review is to determine which data is incorrect or missing and then update and approve it, particularly if Spectrum™ Technology Platform was unable to correct it as part of an automated dataflow process. You can then revalidate exception records for approval or reprocessing.
You can also use the Exception Editor to resolve duplicate exception records and use search tools to look up information that assists you in editing, approving, and rerunning records. | https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/DataQualityGuide/BusinessSteward/source/ExceptionEditor/ViewingExceptionEditor.html | 2022-01-16T19:05:22 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.precisely.com |
Warning
We have decided not to follow through with the proposed solution in this
design doc. We care a lot about a nice upgrade path for when better
browser integration becomes available. Encouraging the
#taler:// fragment
based integration might lead merchant frontends to only support this type
of integration.
Instead, the following path will be taken:
The webRequest permission that allows "Taler: " header based browser integration will become opt-in.
"Taler: " header.
The taler:// URI described in this document will not be supported, as allowing presence detection might encourage merchants to treat mobile / detached wallets as 2nd class citizens.
A new and improved mechanism for the integration of GNU Taler wallets with web browsers is proposed. The mechanism is meant for browsers that support the WebExtension API, but do not have native support for GNU Taler.
The new approach allows the wallet extension to be installed without excessive, “scary” permissions, while being simpler and still flexible.
The current browser integration of the GNU Taler wallet relies heavily on being able to hook into the browser via the following mechanisms:
A webRequest handler that is run for every request the browser makes, and looks at the status code and the presence of a
Taler:” HTTP header.
This has multiple problems:
We first have to accept the fundamental limitation that a WebExtension is not able to read a page’s HTTP request headers without intrusive permissions. Instead, we need to rely on the content and/or URL of the fallback page that is being rendered by the merchant backend.
To be compatible with mobile wallets, merchants and banks must always render a fallback
page that includes the same
taler:// URI.
Using only the
activeTab permission, we can access a page’s content
while and only while the user is opening the popup (or a page action).
The extension should look at the DOM and search for
taler:// links.
If such a link as been found, the popup should display an appropriate
dialog to the user (e.g. “Pay with GNU Taler on the current page.”).
Using manual triggering is not the best user experience, but works on every Website
that displays a
taler:// link.
Note
Using additional permissions, we could also offer:
taler://pay links
taler:// link.
It’s not clear if this improves the user experience though.
This mechanism improves the user experience, but requires extra support from merchants
and broader permissions, namely the
tabs permission. This permission
is shown as “can read your history”, which sounds relatively intrusive.
We might decide to make this mechanism opt-in.
The extension uses the
tabs permission to listen to changes to the
URL displayed in the currently active tab. It then parses the fragment,
which can contain a
taler:// URI.
The fragment is processed the same way a “Taler: ” header is processed.
For example, a
taler://pay/... fragment navigates to an in-wallet page
and shows a payment request to the user.
To support fragment-based detection of the wallet, a special
taler://check-presence/${redir} URL can be used to cause a navigation to
${redir} if the wallet is installed. The redirect URL can be absolute or
relative to the current page and can contain a fragment.
For example: ${page_url}#taler://check-presence/${redir} -> ${redir} (when wallet installed)
To preserve correct browser history navigation, the wallet does not initiate the redirect if
the tab’s URL changes from
${redir} back to the page with the
check-presence fragment.
The fragment-based triggering does not work well on single-page apps: It interferes with the SPA’s routing, as it requires a change to the navigation location’s fragment.
The only way to communicate with a WebExtension is by knowing its extension ID. However, we want to allow users to build their own version of the WebExtension, and extensions are assigned different IDs in different browsers. We thus need a mechanism to obtain the wallet extension ID in order to asynchronously communicate with it.
To allow the Website to obtain this extension ID, we can extend the redirection URL
of the
taler://check-presence fragment to allow a placeholder for the extension ID.{extid} -> (when wallet installed)
Warning
This allows fingerprinting, and thus should be an opt-in feature. The wallet could also ask the user every time to allow a page to obtain the extension ID.
Note
To avoid navigating away from an SPA to find out the extension ID, the SPA can open a new tab/window and communicate the obtained extension ID back to the original SPA page.
Once the Website has obtained the extension ID, it can use the
runtime.connect() function
to establish a communication channel to the extension.
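A minimal sketch of what the page could do once it knows the extension ID; the message format is illustrative, and in Chrome this additionally requires the extension to list the page under externally_connectable:

// On the merchant page, once the wallet's extension ID is known:
const port = chrome.runtime.connect(extensionId);
port.onMessage.addListener((msg) => {
  console.log("Reply from wallet:", msg);
});
port.postMessage({ type: "get-wallet-version" });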
taler:// URIs :-)
taler://. For a better user experience, there should also be some way to check whether some particular URI scheme has a handler. | https://docs.taler.net/design-documents/001-new-browser-integration.html | 2022-01-16T18:38:14 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.taler.net |
Inventory¶
The inventory command is used to manage the outbit host inventory. The inventory is automatically discovered when you run an action that uses the ansible plugin. Changes are logged and can be checked using the “logs” command with the type=changes option.
outbit> inventory list
hostname1
outbit> inventory del name=hostname1
deleted inventory item hostname1
Exporting Populations
After you've gated populations, you can export events as FCS or TSV files, or create a new experiment containing your gated populations.
Howto
- Click export populations in the left sidebar.
- Select the populations to export in the Populations selector.
- Select the FCS files to export from in the FCS Files selector. One output file will be generated for each combination of population and FCS file.
- Select the output file format (FCS, TSV with header or TSV without header).
- Select the compensation to use.
- Select if you want the output data to be compensated or not. See Export Compensated Data below.
- To download a ZIP file, click download. To export the populations to a new experiment, click export to new experiment.
Tip
It's best to download large files with a stable internet connection, such as a fast ethernet connection or strong wifi signal.
Export Compensated Data
For TSV format, this setting writes the compensated values into the exported file.
The channel names in the header row will be bracketed to indicate that they are
compensated (e.g.
<PE-A (CD3)>).
For FCS format, this setting writes the selected compensation matrix into the exported
file's
$SPILLOVER header keyword without changing the numerical values in the
file body. That is, it makes the selected compensation the new "file-internal"
compensation.
Subsampling
Subsampling pseudo-randomly selects a subset of events from your files, either before or after gating.
Applying subsampling before gating lets you equalize the number of events in each file. While this can possibly enable you to make absolute numerical comparisons of the number of cells in populations between files, you must be certain that your samples are otherwise comparable. For example, if one sample contained 50 percent debris and another contained only 10 percent debris, then you cannot make absolute numerical comparisons of non-debris cells.
Applying subsampling after gating lets you look at a fixed number of a cell type of interest—for example, look at 500 dendritic cells.
You can either subsample up to an absolute number, or to a percentage.
Optionally, you can specify a seed for the random number generator. Using the same seed and subsampling parameters will yield the same set of subsampled events every time. Use this for deterministic (reproducible) subsampling.
Add Event Number
Selecting this option adds an "eventNumber" column to the FCS or TSV file(s), containing the index of each event as it appears in the original FCS file.
Filename Template
By default, exported TSV/FCS files are named with the source file's filename and
the population name. You can specify a custom template to use instead, comprised
of any combination of the source file's filename, ID and annotation values, and
the population's name and ID, along with any other fixed characters. (Characters
that are unsafe for use in filenames will be removed upon export.) To insert a
new template token, type the
{ character into the filename template field
and select the desired token from the dropdown list. | http://docs.cellengine.com/exportpops/ | 2022-01-16T19:21:02 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.cellengine.com |
For a less applied, more big-picture book, you might be interested in Alan Cooper's The Inmates are Running the Asylum, which is in part an extended (and fun to read) rant, but also contains some interesting case studies, examples, and practical suggestions.
Cooper's book is really aimed at non-programmers, and makes suggestions about development team organization, project management, etc. as well as technical suggestions.
Alan Cooper has a more practical, "how-to" style book as well, called About Face 2.0 - be sure to get the brand-new 2.0 edition, rather than the now-somewhat-dated original.
Everyone on earth recommends The Design of Everyday Things, by Donald Norman. This is another good book to check out. | http://docs.fedoraproject.org/developers-guide/s1-ui-get-religion.html | 2008-05-12T17:11:08 | crawl-001 | crawl-001-002 | [] | docs.fedoraproject.org |
Designing from Both Sides of the Screen by Ellen Isaacs and Alan Walendowski is a very practical "How To" book, similar in spirit to Joel's book but more detailed and not aimed exclusively at programmers.
A notable feature of this book is that it skips the colorful rants about bad design and misguided marketing/engineering departments that you'll find in Joel's book or Alan Cooper's book. This makes it a little less entertaining, but if you prefer to stick to the facts, you may well prefer it. It's still a well-written book, and by no means tedious.
The book has two parts; the first is a set of general UI design principles, and the second is a comprehensive walk through of creating an instant messaging application. The walk through demonstrates the process they used to design, implement, and iteratively improve the application. Ellen Isaacs was the UI designer for the IM app, and Alan Walendowski was the main programmer. They talk about give and take between the designer and programmer, and how to balance UI goals with time, resource, and engineering constraints.
Alan Cooper's About Face 2.0 is comparable to Designing from Both Sides of the Screen in that it's a practical "How To" and reference manual, so if you want more advice on concrete methodologies, have a look. | http://docs.fedoraproject.org/developers-guide/s1-ui-get-details.html | 2008-05-12T17:13:06 | crawl-001 | crawl-001-002 | [] | docs.fedoraproject.org |
Read the GNOME Human Interface Guidelines. These are pretty good guidelines developed by some interaction design professionals, and are the best guidelines to come from the open source community so far. Until we have official Fedora Project guidelines, following the GNOME guidelines will probably result in a substantially better UI than making up your own. And our tools will be consistent with each other and at least one desktop environment.
Of course you — and everyone else — will disagree with some of the guidelines. That's fine, but recognize that if everyone violates the guidelines in the 10 places where they would have done them differently, there's no value to having guidelines. Avoid silent civil disobedience, but feel free to file a bug report.
The GNOME guidelines will only apply to X11 applications, obviously, not to web applications.
Remember that UI guidelines are mostly about details - laying out your widgets properly, and such. Don't get too bogged down in this and neglect the big picture. You can be fully guideline-compliant and still have an application that doesn't make any sense. | http://docs.fedoraproject.org/developers-guide/s1-ui-gnome-guidelines.html | 2008-05-12T17:14:02 | crawl-001 | crawl-001-002 | [] | docs.fedoraproject.org |
-Supress. section of the patron record.
4) The lost item also adds to the count of Lost items in the patron summary on the left (or top) of the screen.
Lost Item Billing
If an item is returned after a lost bill has been paid and the library’s policy is to void the replacement fee for lost-then-returned items, there will be a negative balance in the bill. A refund needs to be made to close the bill and the circulation record. Once the outstanding amount has been refunded, the bill and circulation record will be closed and the item will disappear from the Items Out screen.
If you need to balance a bill with a negative amount, you need to add two dummy bills to the existing bills. The first one can be of any amount (e.g. $0.01), while the second should be of the absolute value of the negative amount. Then you need to void the first dummy bill. The reason for using a dummy bill is that Evergreen will check and close the circulation record only when payment is applied or bills are voided.
1 Introduction
The Mendix App Store is a vibrant marketplace containing complete sample apps that can be used right away as well as various add-ons (such as connectors, widgets, and modules) that can be used to build custom apps more quickly. In the App Store, you can browse all the content, get what you need, and share the content you have created.
This document describes all the different sections of the App Store.
2 App Store Home Page
2.1 Categories
The home page of the Mendix App Store presents various content categories:
To see a detail page for each category, click View more (or View full leaderboard).
2.2 Sidebar Menu
The App Store sidebar menu lists all the pages that you can browse for content.
For details on add-ons, see Add-On Details Page.
You can also access My App Store, the Modeler page, the Solutions page, and the Partners page.
3 Add-On Details Page
Clicking an App Store item in a menu will bring you to the content item’s details page. The details page presents the sections described below.
3.1 Add-On Details Section
The top of the information page for each add-on presents the following item details:
- The Name and Category of the item
- The review average (in stars) and the number of reviews
- The number of times the item has been downloaded
- A heart for favoriting the item (so it will appear in your list of favorites on the My App Store tab)
- The following buttons (depending on the type of item and what the developer added to share):
- Preview – click this to preview more information or a demo about the content
- This is only available if the developer has included a demo URL when sharing the content (for details on sharing content, see How to Share App Store Content)
- Open in Modeler – click this to open the content directly in the Modeler
- This is only available for Theme starting points (available on the Getting Started tab)
- Download – click this to download the content
- This is only available for content that has a file attached (meaning, all shared Modeler content, but not promotions; for details on sharing Modeler content, see How to Share App Store Content)
- For details on how to import downloaded App Store content into the Modeler, see How to Import and Export Objects
- Please note that the best practice is to download content from the App Store that is accessible in the Modeler, because it then downloads directly into the Modeler (for details, see How to Use App Store Content in the Modeler)
3.2 Add-On Details Tabs
The details page for each add-on and app presents the following item information tabs:
The Overview tab, with the following sections:
- Description – a description of the item
- Screenshots – screenshots of the item
- User Reviews – user reviews of the item
- To leave a review for the item, click Add Review — a section will open up where you can add text, rate the content, and submit the review (your reviews will be listed on the My App Store tab)
The Documentation tab, which can include sections such as Description, Typical usage scenario, Features and limitations, Dependencies, Installation, Configuration, Known bugs, and Frequently Asked Questions:
- Clicking Edit documentation will open up a text editor in which you can edit the App Store item’s documentation
The Statistics tab, which charts the downloads of the item over time:
The All versions tab, which lists all the versions (updates) of the item:
3.3 Additional Info Section
In the Additional Info section, you can see the following information (depending the type of content):
- The Latest version number of the item
- The Modeler version that the item Requires to work
- When the item was Published
- The type of License for the item
- The type of support Mendix offers for the item
- For details on support, see App Store Content Support (clicking the support type will also take you to this document)
- The URL for the item page that you can copy and share
- A View on GitHub link, which will take you to the GitHub source files of the content
- A link to the documentation on how to install App Store content
3.4 Developer Info Section
In the Developer Info section, you can see the following information:
- The name, job title, and Mendix level of the App Store content developer
- Clicking the developer name will bring you to his or her Community Profile
- The numbers for Added items, Updated items, and Reviews added in the Mendix App Store
- The company for which the developer works
- Clicking the company name will bring you to the company’s Partner Profile
3.5 Compatibility Section
In the Compatibility section, you can leave feedback on the compatibility of the item. To leave feedback, follow these steps:
- Select the Modeler version on which you are using the item from the drop-down menu.
- Select the Module, Widget, or Project item version you are using from the drop-down menu.
- Click It works! or Not working to describe how the item works on your system.
Based on the responses from multiple users, the following compatibility summaries are shown:
- Works! – this combination of versions works for this item!
- Not working – this combination of versions does not work for this item
- Insufficient information… – not enough people have responded yet for this combination of versions for this item
For further information on the content compatibility, you can see how many people say it works and how many people say it doesn’t work.
4 Other Pages
4.1 My App Store
The My App Store page presents all of your App Store activity:
- Your numbers for PUBLISHED CONTENT and SUBMITTED REVIEWS
- Notifications on content you favorited
- Your content Favorites
The MY APP STORE section of the sidebar menu contains the following options:
- Published – click this to see the content you have published as well as the content your company has published on the Published content page
- On the Published by me tab, you can see the last version of the content you published
- Click Manage to edit the current draft version, create a new draft version (for more information, see How to Share App Store Content), or unpublish content
- On the Published by my company tab, you can see all of the content published by your company
- Click Manage to edit content, create a new draft version (for more information, see How to Share App Store Content), or unpublish the content version you had published (if you are an organization administrator, you can unpublish any content)
- Favorites – click this to see the content you have favorited
- Stats – click this to see the content that has been downloaded the most in the previous month on the Downloads overview page
- Clicking specific App Store content on this page will show you a Downloads per month graph as well as User Reviews
The REVIEWS section of the sidebar menu contains the following options:
- Submitted – click this to see the reviews that you have submitted as well as the reviews your company has submitted
- Received – click this to see the reviews that your content has received as well as the reviews that your company’s content has received
The MY COMPANY section of the sidebar menu contains the following options:
- Profile – click this to see the profile of your company (the same profile that appears on the Partners Tab)
4.2 Modeler
On the Modeler page, you can download any version of the Modeler you need by clicking Download for the latest release or the download icon for a specific older release:
Clicking the Release notes icon will open the Modeler release notes for that particular version.
Clicking the Related downloads option will open a page with information relating to that Modeler version.
4.3 Solutions
The Solutions page lists off-the-shelf products that are available for reference:
Hovering your mouse pointer over a solution tile will bring up a summary of the solution:
Solutions are not available for download. However, they contain a details section, Overview and Documentation tabs, and an Additional info section. For more information on these sections, see Add-On Details Page.
4.4 Partners
The Partners page lists selected App Store partner companies:
Hovering your mouse pointer over a partner tile will bring up a summary of the company:
Clicking the partner name will bring you to the partner’s App Store details page:
On the Apps & Add-ons tab at the bottom of the partner details page, you can browse the apps and add-ons that the partner has contributed to the App Store.
On the More info tab, you can view documents that provide more information on what the partner company does:
ActiveState's Tcl Dev Kit is a powerful set of development tools and an extended Tcl platform for professional Tcl developers.
Tcl Dev Kit is a component of ActiveTcl Pro Studio, which also includes ActiveState's Komodo IDE enhanced with integrated Tcl debugging and syntax checking. For more information on ActiveTcl Pro Studio and its benefits, see ActiveTcl Pro Studio.
Tcl Dev Kit includes the following Tcl productivity tools:
Binaries for Windows, Mac OS X, Linux, Solaris, HP-UX and AIX are available on the ActiveState web site.
-merge is specified:
-startup is not required, do not complain when it is missing.
-output, not
-prefix.
-prefix file read-only, preventing bogus change of modified timestamp for a file which is not written to.
dict for when
-onepass is active.
-onepass is active and checked code read from
stdin.
-psnXXX for OS X bundles.
-metadata option has been added to support metadata key/value pairs used by TEA packages ('name', 'version', 'platform' etc.). An
-infoplist option has also been added for specifying Info.plist metadata for OS X applications
Tcl Dev Kit version 3.0 includes two new tools and numerous enhancements to existing tools. The following sections provide a brief overview of the new tools and enhancements to existing tools. Refer to the relevant chapter in the User Guide for a complete description.
Project files from previous versions of the Tcl Dev Kit are compatible with this version.
The Virtual Filesystem Explorer is used to mount and navigate system drives and volumes, to view the contents of archive files with .zip extensions, and to view the contents of starkits and starpacks. Refer to the Virtual Filesystem Explorer section in the User Guide for complete information.
Running the Virtual Filesystem Explorer:
Windows
tclvfse.tcl
Unix
tclvfse
The Cross Reference Tool is a new application that builds a database of program components, including packages, namespaces, variables, etc. Program component information can be extracted from programs and packages contained in TclApp (or Prowrap) projects, Tcl Dev Kit Package definitions (".tap files"), and from Komodo project files. The database can be explored via any of the program components. For example, the Cross Reference Tool can be used to view commands and variables within namespaces, or the source of variable definitions. Refer to the Cross Reference Tool section of the User Guide for complete information.
Running the Cross Reference Tool:
Windows
tclxref.tcl
Unix
tclxref
tcldebugger.tcl. From the Unix command line, enter
tcldebugger.
coverage switch; coverage functionality is accessible via the View | Code Coverage menu option, or via the Code Coverage button on the toolbar.
tclchecker.tcl. From the Unix command line, enter
tclchecker.
-use option allows a version specification, and overrides
package require statements. For example,
-use tk4.1 will scan against version 4.1 of Tk, even if the
package require statement specified a different Tk version.
tclcompiler.tcl. From the Unix command line, enter
tclcompiler.
tclapp.tcl. From the Unix command line, enter
tclapp. To use the command-line version, append options to the command.
-icon path command-line switch.
-log switch.
-prefix file, the prefix file's extension will be used as the default extension for the generated application (if no explicit output file name is specified). If the prefix has no extension, the name of the generated application will have no extension.
-pkg-accept switch is used (which will use the highest available version of the package). When using the graphical TclApp, a message will be displayed when opening the project file, and the highest available version of the specified package will be used.
tclpe.tcl. From the Unix command line, enter
tclpe.
tclsvc.tcl.
base-tclsvc-win32-ix86.exe") that can be used to build "portable" services. When using TclApp to generate the application, specify
base-tclsvc-win32-ix86.exe as the prefix file. This service can then be installed on any system (using the Tcl Dev Kit's Service Manager) that supports Windows NT-based services.
tclinspector.tcl. From the Unix command line, enter
tclinspector.
tbcload package in the output application.
bigtclsh and
bigwish interpreters are now the default. The non-lite versions are no longer supported.
Using source control
Source control lets you create checkpoints in your project, compare old revisions with newer ones and back things up to a remote host like GitHub or Bitbucket.
Superpowers projects can easily be kept under source control. While there is no built-in support at the moment, existing tools like Git and Mercurial work great.
Delay when making changes
In order to minimize its performance footprint, the Superpowers server doesn't write every change you make to a project to the disk immediately. Saving to disk might be delayed for up to 60 seconds.
When creating a new revision for your project, either wait 60s or stop your server altogether to make sure everything has been flushed out to the disk. (We'll probably have a button to flush changes in the app at some point).
What not to put under source control
There are a couple folders you probably don't want to commit to your repository:
- The
rooms/ folder contains your recent chat log history
- The
trashedAssets/ folder contains deleted assets
You can use a
.gitignore or
.hgignore file to blacklist those. | http://docs.superpowers-html5.com/en/getting-started/source-control | 2017-09-19T16:57:19 | CC-MAIN-2017-39 | 1505818685912.14 | [array(['/images/icon.png', None], dtype=object)] | docs.superpowers-html5.com |
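For example, a minimal .gitignore for a Superpowers project folder could contain just these two entries:

rooms/
trashedAssets/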
FreeSWITCH generating http-json CDRs
Scenario
- FreeSWITCH with vanilla configuration adding mod_json_cdr for CDR generation.
- Modified following users (with configs in etc/freeswitch/directory/default): 1001-prepaid, 1002-postpaid, 1003-pseudoprepaid, 1004-rated, 1006-prepaid, 1007-rated.
- Have added inside default dialplan CGR own extensions just before routing towards users (etc/freeswitch/dialplan/default.xml).
- FreeSWITCH configured to generate default http-json CDRs.
- CGRateS with following components:
- CGR-SM started as prepaid controller, with debits taking place at 5s intervals.
- CGR-CDRS component receiving raw CDRs from FreeSWITCH, storing them and attaching costs inside CGR StorDB.
- CGR-CDRE exporting processed CDRs from CGR StorDB (export path: /tmp).
- CGR-History component keeping the archive of the rates modifications (path browsable with git client at /tmp/cgr_history).
Starting FreeSWITCH with custom configuration
/usr/share/cgrates/tutorials/fs_evsock/freeswitch/etc/init.d/freeswitch start
To verify that FreeSWITCH is running we run the console command:
fs_cli -x status
Starting CGRateS with custom configuration
/usr/share/cgrates/tutorials/fs_evsock/cgrates/etc/init.d/cgrates start
Check that cgrates is running
cgr-console status
CDR processing
At the end of each call, FreeSWITCH will issue an HTTP POST with the CDR. This will reach CGRateS through the CDRS component (close to real-time). Once there, it will be instantly rated and is ready to be exported:
cgr-console 'cdrs_export CdrFormat="csv" ExportDir="/tmp"' | http://cgrates.readthedocs.io/en/latest/tut_freeswitch_json.html | 2017-09-19T16:51:09 | CC-MAIN-2017-39 | 1505818685912.14 | [] | cgrates.readthedocs.io |
Administration site is accessible only for users who are administrators of the server. The site helps to manage the service: create, delete, restore system objects, inquire and receive information about the service, read system logs.
Service administrator is a user who configures the service and manages it. This is the only user who can create billing plans. An administrator, like a manager, can create users, accounts, and units, but the main administrator's job is to create a source account with its billing plan and create users-managers.
To login to the administration site, use your login name and password. Put a check mark near Remember on this computer if needed, and press OK.
If you have forgotten the password, you can get a new one. To do this, add the variable WIALON_RESET_ADMIN_PASSWORD = 1 to the configuration file. After that, when you restart the service, a new password will be written to the log. After applying the new password, do not forget to delete the variable or set its value to 0.
To logout from the site, press logout item (the last item in the main menu). This action will guide you back to the login page.
The structure of the site is simple and intuitively clear. On the top of the page there is the main menu which is a set of links (17 items).
Click on these links to manage the corresponding elements of the service. Find details in the topics listed below: | http://docs.wialon.com/en/pro/1401/admin/admin/start | 2017-09-19T17:09:52 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.wialon.com |
Extending Superpowers
Superpowers is a highly extensible platform. While it is primarily used to make video games, it can be repurposed for any kind of creative work. Make sure you have the development version of Superpowers.
Systems and plugins
Superpowers currently ships with a single system,
Superpowers Game which lets you make 2D and 3D games with TypeScript.
You can build your own systems to provide entirely different project types.
The system itself provides the shared components (like a game engine for instance) upon which various plugins will build to provide editors and tools.
Setting up your own system
Systems are stored in the
systems folder, inside your Superpowers app.
You can create a new system by running:
cd superpowers node server init $SYSTEM_ID
Then open
systems/$SYSTEM_ID/package.json and customize things as you like.
You'll want to edit
systems/$SYSTEM_ID/public/locales/en/system.json to describe your system, too:
{ "title": "Dummy", "description": "A demo system that doesn't do much, but does it well." }
These strings will appear in the project creation popup.
Setting up a plugin
Plugins are stored in
systems/$SYSTEM_ID/plugins/$PLUGIN_AUTHOR/$PLUGIN_NAME.
You can create a plugin by running:
cd superpowers node server init $System_ID:$PLUGIN_AUTHOR/$PLUGIN_NAME
Plugin included by default with your system should be placed in the
plugins/default author folder.
We maintain a set of common plugins including the home chat,
the settings tool and the real-time collaborative text editor widget.
You can include them in your repository as a Git submodule in
plugins/common.
Anatomy of a plugin
At the root of your plugin, you'll find a
package.json file
(following npm's package.json file format).
Here's what it might look like:
{ "name": "superpowers-$SYSTEM_ID-$PLUGIN_AUTHOR-$PLUGIN_NAME-plugin", "description": "My plugin for Superpowers Dummy", "scripts": { "build": "gulp --gulpfile=../../../../../scripts/pluginGulpfile.js --cwd=." } }
Besides
package.json, you'll find the following files at the root of your plugin:
tsconfig.json is used by
pluginGulpfile.js to set TypeScript compiler options
index.d.ts is used by TypeScript to find the type definition files for Superpowers (
SupCore,
SupClient, etc.)
The plugin's
public folder contains all the files that will be exposed to the client (the user's browser or the Superpowers app).
Many of them will be generated by compiling source files (written in Jade, Stylus and TypeScript for instance)
into files that can be consumed by the browser directly (HTML, CSS and JavaScript).
A plugin can do 3 types of things:
- Define asset and/or resource classes in
data/
- Provide asset editors and/or tools in
editors/
- System-specific stuff like exposing scripting APIs, scene components for Superpowers Game, etc.
Building your plugin
Gulp is the build tool used throughout Superpowers,
and
pluginGulpfile.js
is a shared build file for your plugins that ships as part of Superpowers's main repository.
It assumes you're using Jade, Stylus and TypeScript to build your plugin. If you'd rather use something else,
you can write your own Gulpfile instead (or run whatever build command you want in your
package.json's
build script).
You can build just your plugin by typing
npm run build $PLUGIN_AUTHOR/$PLUGIN_NAME at the root of the Superpowers repository.
Or, if you have
gulp installed globally, you can directly run
npm run build inside your plugin's folder.
If you're using
pluginGulpfile.js, several things will happen when you run
npm run build in your plugin's folder.
If it exists,
data/index.ts will automatically be compiled to
data/index.js so it can be loaded by the server.
A browserified version of
data/index.js will then be generated at
public/bundles/data.js for
the client to load.
Any other folders at the root of your plugin containing an
index.ts file will have a browserified bundle generated
and placed in
public/bundles/$FOLDER_NAME.js so that it can be loaded from the client.
Editors and tools
Your plugin can expose zero or more editors in
editors/*/. Again, assuming you're using
pluginGulpfile.js:
editors/*/index.jadewill be transpiled to
public/editors/*/index.html
editors/*/index.stylwill be transpiled to
public/editors/*/index.css
editors/*/index.tswill be browserified to
public/editors/*/index.js
When an editor is associated with an asset type, it becomes an asset editor. Otherwise, it'll appear in the tool shelf in the bottom-left corner of the project window.
The query string passed to your editor is parsed and stored in
SupClient.query.
When you're in an asset editor (as opposed to a tool),
SupClient.query.asset will contain the ID of the asset that should be edited.
An SVG icon for each editor should be placed in
public/editors/*/icon.svg.
Asset classes
Superpowers projects are basically made of a tree of assets. Your plugin can define one or more asset types.
Each asset type is a class that inherits from
SupCore.Data.Base.Asset (example).
The
Asset base class itself inherits from
SupCore.Data.Base.Hash,
which stores a dictionary of data in its
.pub property, with a schema for validating data.
Properties can be marked as
mutable in the schema, allowing clients to edit them directly through
setProperty (more on editing below).
Asset classes must be registered with
SupCore.system.data.registerAssetClass in
data/index.ts.
The first parameter should be the name of the editor associated with the asset type (example).
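A bare-bones sketch of what this might look like; the asset type name, the schema, and the details omitted here (constructor, init, load) are assumptions rather than code from a real plugin:

// data/SoundAsset.ts - skeletal asset class (constructor/init details omitted).
class SoundAsset extends SupCore.Data.Base.Asset {
  // Data lives in this.pub and is validated against this schema;
  // properties marked mutable can be edited by clients via setProperty.
  static schema = {
    volume: { type: "number", mutable: true }
  };
}
export default SoundAsset;

// data/index.ts - register the asset class under the name of its editor.
import SoundAsset from "./SoundAsset";
SupCore.system.data.registerAssetClass("sound", SoundAsset);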
Resources
Resources are similar to assets, but for a particular resource class, there is always a single instance of it in every project and it doesn't appear in the asset tree. They are used to store project-wide settings or information. Tools will often subscribe to one or more resources that they allow editing. A tool might edit a settings resource that is then used in the associated asset editor for project-wide configuration.
A resource class must inherit from
SupCore.Data.Base.Resource and be registered under a unique name.
For instance, the startup scene of a Superpowers Game project is stored in the
gameSettings resource.
Subscriptions and editing
In order to allow editing the project's assets and resources, editors must subscribe to them.
First of all, your editor should open a connection to the server through
SupClient.connect (example).
Once connected, you'll probably want to create an instance of
SupClient.ProjectClient
(example).
It can be used to manage subscriptions to the project's assets tree (name of each asset, type, parent, order, etc.), or to particular assets and resources
and it's easier than sending raw socket.io messages and keeping track of things yourself.
To subscribe to an asset, use
projectClient.subAsset(assetId, assetType, subscriber); (example).
The callbacks on your subscriber object will be called when various events are received from the server.
subscriber.onAssetReceived will be called as soon as the asset has been received (example).
To edit an asset, you can use
projectClient.editAsset(assetId, command, args..., callback);.
The server, through
RemoteProjectClient
will call a method starting with
server_ followed by the command you specified.
If the command's callback doesn't return an error, the server will emit back the command to every client subscribed to the asset.
In turn, on the client-side,
ProjectClient will call the corresponding
client_ method on your asset, applying the changes provided by the server.
Then it will notify all subscribers of the change.
In
server_ methods, whenever the asset is edited, you should call
this.emit("change");
to let the project server know that the asset has changed and should be scheduled for a write to disk soon.
The server saves a particular asset to disk no more often than once every 60s.
TODO: We still need to design a way to stream large assets.
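Putting these pieces together, the client side of an asset editor might look roughly like this; the asset type name, the command, and the callback names other than onAssetReceived are assumptions:

// editors/sound/index.ts (sketch)
const socket = SupClient.connect(SupClient.query.project);
const projectClient = new SupClient.ProjectClient(socket);

const subscriber = {
  onAssetReceived(assetId: string, asset: any) {
    // The asset data is now available; build the editor UI from asset.pub here.
  },
  onAssetEdited(assetId: string, command: string, ...args: any[]) {
    // Called whenever any client's edit has been applied by the server.
  }
};

projectClient.subAsset(SupClient.query.asset, "sound", subscriber);

// Later, in response to user input, ask the server to apply an edit:
projectClient.editAsset(SupClient.query.asset, "setProperty", "volume", 0.5, (err: string) => {
  if (err != null) console.error(err);
});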
Internationalization
You can place JSON localization files in
public/locales/$LANGUAGE_CODE/$NAMESPACE.json (example).
They will be made available through the
t("namespace:path.to.key") function to your Jade templates.
You can also load them up at runtime with
SupClient.i18n.load
and use them with
SupClient.i18n.t("namespace:path.to.key").
(example)
public/locales/$LANGUAGE_CODE/plugin.json should contain an
editors.*.title key for each of your editors.
Generic plugin API
You can use
SupCore.system.registerPlugin to expose bits of code or data that can be reused by other plugins.
For instance, the Superpowers Game system uses this facility to let each plugin expose TypeScript APIs, as well as component configuration classes.
Each such plugin must be attached to a context (for instance
"typescriptAPI") and given a name (example).
You should also define an interface to get type checking (example).
On the server-side,
SupCore.system.requireForAllPlugins lets you require a module for all plugins.
This can be used to conveniently load all registered plugins for a particular context (examples) .
On the client-side, you'll need to load them by appending a
<script> tag for each plugin.
To get a list of all plugins, you can fetch
/systems/${SupCore.system.id}/plugins.json (TODO: Update once plugins can be disabled per project)
(example).
Once the scripts are all loaded, you can use
SupCore.system.getPlugins to access the plugins in a typed fashion (examples).
Client-only plugins
For client-only stuff, a similar registration and retrieval API is available as
SupClient.registerPlugin and
SupClient.getPlugins.
It is used by Superpowers Game to register component editor classes and documentation pages (examples).
Running a project
When a project using your system is launched (or published), the
public/index.html file from your system will be its entry point.
It's up to you to decide what steps your system's
public/index.html should take in order to run a project. Here are a few examples:
In Superpowers Game,
public/index.html loads
public/SupRuntime.js, which is generated
from
SupRuntime/src/index.ts.
- It first loads all APIs, components and runtime code exposed by the plugins
- Then it load all the project assets
- Finally, it starts the game engine's main loop
In contrast, Superpowers LÖVE's
public/index.html
loads
public/index.js, which is generated from
player/src/index.ts.
- It checks if the user has provided the path to the
love executable, and if not, asks for it
- It then downloads all the game's assets from the server into a temporary folder
- Finally, it launches the
love executable, pointing it at the temporary folder
As a final example, Superpowers Web has basically no runtime to speak of.
It simply redirects to the exported project's own
index.html.
Version control
By convention, system repositories should be called
superpowers-$SYSTEM_ID
while a repository for a single plugin should be called
superpowers-$SYSTEM_ID-$PLUGIN_NAME-plugin.
We recommend that you only version your source files, not the many generated HTML, CSS and JS files.
You can use the following
.gitignore file:
**/node_modules **/*.html **/*.css **/*.js | http://docs.superpowers-html5.com/en/development/extending-superpowers | 2017-09-19T16:57:57 | CC-MAIN-2017-39 | 1505818685912.14 | [array(['/images/icon.png', None], dtype=object)] | docs.superpowers-html5.com |
python_moztelemetry
A simple library to fetch and analyze data collected by the Mozilla Telemetry service.
Objects collected by Telemetry are called
pings.
A ping has a number of properties (aka
dimensions) and a payload.
A session of Telemetry data analysis/manipulation typically starts with a query that filters the objects by one or more dimensions.
This query can be expressed using either an ORM-like API, Dataset, or a simple
function, get_pings() (deprecated).
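For illustration, a Dataset-based query typically looks something like the following; the filter values and the Spark context sc are placeholders, and the exact keyword arguments depend on the data source:

from moztelemetry.dataset import Dataset

pings = (
    Dataset.from_source("telemetry")
           .where(docType="main",
                  submissionDate="20180101",
                  appName="Firefox")
           .records(sc, sample=0.01)
)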
Working with trees in Django forms
Contents
Fields
The following custom form fields are provided in the
mptt.forms
package.
TreeNodeChoiceField
This is the default formfield used by
TreeForeignKey
A subclass of ModelChoiceField which represents the tree level of each node when generating option labels.
For example, where a form which used a
ModelChoiceField:
category = ModelChoiceField(queryset=Category.objects.all())
...would result in a select with the following options:
--------- Root 1 Child 1.1 Child 1.1.1 Root 2 Child 2.1 Child 2.1.1
Using a
TreeNodeChoiceField instead:
category = TreeNodeChoiceField(queryset=Category.objects.all())
...would result in a select with the following options:
Root 1 --- Child 1.1 ------ Child 1.1.1 Root 2 --- Child 2.1 ------ Child 2.1.1
The text used to indicate a tree level can be customised by providing a
level_indicator argument:
category = TreeNodeChoiceField(queryset=Category.objects.all(), level_indicator=u'+--')
...which for this example would result in a select with the following options:
Root 1 +-- Child 1.1 +--+-- Child 1.1.1 Root 2 +-- Child 2.1 +--+-- Child 2.1.1
TreeNodeMultipleChoiceField
Just like
TreeNodeChoiceField, but accepts more than one value.
TreeNodePositionField
A subclass of ChoiceField whose choices default to the valid arguments for the move_to method.
Forms
The following custom form is provided in the
mptt.forms package.
MoveNodeForm
A form which allows the user to move a given node from one location in its tree to another, with optional restriction of the nodes which are valid target nodes for the move_to method.
Fields
The form contains the following fields:
target – a
TreeNodeChoiceField for selecting the target node for the node movement.
Target nodes will be displayed as a single
<select> with its
size attribute set, so the user can scroll through the target nodes without having to open the dropdown list first.
position – a
TreeNodePositionField for selecting the position of the node movement, related to the target node.
Construction
Required arguments:
node
- When constructing the form, the model instance representing the node to be moved must be passed as the first argument.
Optional arguments:
valid_targets - defaults to a QuerySet containing everything in the node's tree except itself and its descendants (to prevent invalid moves) and the root node (as a user could choose to make the node a sibling of the root node).
target_select_size
- If provided, this keyword argument will be used to set the size of the select used for the target node. Defaults to
10.
position_choices
- A tuple of allowed position choices and their descriptions.
level_indicator
- A string which will be used to represent a single tree level in the target options.
save() method
Example usage
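A typical usage pattern in a view looks roughly like this; the model, template, and URL names are illustrative:

from django.shortcuts import get_object_or_404, redirect, render
from mptt.forms import MoveNodeForm

from myapp.models import Category  # illustrative MPTT model

def move_category(request, pk):
    category = get_object_or_404(Category, pk=pk)
    if request.method == 'POST':
        form = MoveNodeForm(category, request.POST)
        if form.is_valid():
            form.save()  # moves the node and saves the changes
            return redirect('category_list')
    else:
        form = MoveNodeForm(category)
    return render(request, 'myapp/move_category.html', {'form': form})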
ATL Data Types Visualizers
Data Visualizers
Data visualizers present data during the debugging session in a human-friendly form. Microsoft Visual Studio allows users to write custom visualizers for C++ data. Adaptive Vision Library is shipped with a set of visualizers for the most frequently used ATL data types: atl::String, atl::Array, atl::Conditional and atl::Optional.
Visualizers are available for Visual Studio 2010, 2012, 2013 and 2015. Visualizers are automatically installed during installation of Adaptive Vision Library and are ready to use, but they are also available at atl_visualizers subdirectory of Adaptive Vision Library installation path.
For more information about visualizers, please refer to the MSDN.
Example ATL data visualization
Please see the example variables definition below and their visualization without and with visualizers.
atl::String str = L"Hello world";
atl::Conditional<int> nil = atl::NIL;
atl::Conditional<int> conditionalFive = 5;
atl::Array<int> array(3, 5);
Data preview without ATL visualizers installed:
The same data presented using AVL visualizers:
Image Watch extension
For Visual Studio 2012, 2013 and 2015 an extension Image Watch is available. Image Watch allows to display images during debugging sessions in window similar to "Locals" or "Watch". To make Image Watch work correctly with avl::Image type, Adaptive Vision Library installer provides avl::Image visualizer for Visual Studio 2012 and 2013 extension - Image Watch. If one have Image Watch extension and AVL installed, preview of images can be enabled by choosing "View->Other Windows->Image Watch" from Visual Studio menu.
avl::Image description for Image Watch extension is included in atl.natvis file, which is stored in atl_visualizers folder in Adaptive Vision Library installation directory. atl.natvis file is installed automatically during Adaptive Vision Library installation.
When program is paused during debug session, all variables of type avl::Image can be displayed in Image Watch window, as shown below:
Image displayed inside Image Watch can be zoomed. When the close-up is large enough, decimal values of pixels' channel will be displayed. Hexadecimal values can be displayed instead, if appropriate option from context menu is selected.
Image Watch is quite powerful tool - one can copy address of given pixel, ignore alpha channel and much more. All options are described in its documentation, which is accessible from the Image Watch site at:
- ImageWatch 2017 - for Microsoft Visual Studio 2017
- ImageWatch - for older versions of Microsoft Visual Studio | https://docs.adaptive-vision.com/current/avl/technical_issues/Visualizers.html | 2019-04-18T18:37:44 | CC-MAIN-2019-18 | 1555578526228.27 | [array(['../img/avl/dataVisualization1.png', None], dtype=object)
array(['../img/avl/dataVisualization2.png', None], dtype=object)
array(['../img/avl/ImageWatch.png', None], dtype=object)
array(['../img/avl/ImageWatch_ZoomDec.png', None], dtype=object)] | docs.adaptive-vision.com |
Fondy payment gateway provides tokenization technology within own Javascript SDK and Custom checkout product to help merchants to comply with PCI DSS when card form is located on merchant side.
The source code and documentation for the JavaScript SDK are available on GitHub, and Custom checkout documentation and examples are also available.
The JavaScript SDK is a set of methods that lets a developer create their own HTML card form on the merchant web site and submit sensitive credit card data to the secure Fondy API using JavaScript, without sending sensitive information to the merchant's server or network.
How it works: the card data entered in the merchant's form is sent directly from the user's browser to the Fondy API, which returns a token and a masked PAN to the merchant.
After the merchant receives the token and masked PAN, it can use the token for server-to-server recurring payments or one-click payments with its own card form within the JavaScript SDK.
Define site boundaries and boundary groups for System Center Configuration Manager
Applies to: System Center Configuration Manager (Current Branch)
Boundaries for System Center Configuration Manager define network locations on your intranet that can contain devices that you want to manage. Boundary groups are logical groups of boundaries that you configure.
A hierarchy can include any number of boundary groups, and each boundary group can contain any combination of the following boundary types:
- IP subnet,
- Active Directory site name
- IPv6 Prefix
- IP address range
Clients on the intranet evaluate their current network location and then use that information to identify boundary groups to which they belong.
Clients use boundary groups to:
- Find an assigned site: Boundary groups enable clients to find a primary site for client assignment (automatic site assignment).
- Find certain site system roles they can use: When you associate a boundary group with certain site system roles, the boundary group provides clients that list of site systems for use during content location and as preferred management points.
Clients that are on the Internet or configured as Internet-only clients do not use boundary information. These clients cannot use automatic site assignment and can always download content from any distribution point from their assigned site when the distribution point is configured to allow client connections from the Internet.
To get started:
- First, define network locations as boundaries.
- Then continue by configuring boundary groups to associate clients in those boundaries to the site system servers they can use.
Best practices for boundaries and boundary groups
Use a mix of the fewest boundaries that meet your needs:
In the past, we recommended the use of some boundary types over others. With changes to improve performance, we now recommend you use whichever boundary type or types you choose that work for your environment, and that let you use the fewest number of boundaries you can to simplify your management tasks.
Avoid overlapping boundaries for automatic site assignment:
Although each boundary group supports both site assignment configurations and those for content location, it is a best practice to create a separate set of boundary groups to use only for site assignment. Meaning: ensure that each boundary in a boundary group is not a member of another boundary group with a different site assignment. This is because:
A single boundary can be included in multiple boundary groups
Each boundary group can be associated with a different primary site for site assignment
A client on a boundary that is a member of two different boundary groups with different site assignments will randomly select a site to join, which might not be the site you intend the client to join. This configuration is called overlapping boundaries.
Overlapping boundaries is not a problem for content location, and instead is often a desired configuration that provides clients additional resources or content locations they can use.
WSO2 API Manager. Immutable, ephemeral Microgateway fits microservice architecture well. The Microgateway is capable of operating in lockdown environments such as IoT devices, since connectivity from the Microgateway to the API Management system is not mandatory.
The following topics explain further details about the API Microgateway, its architecture and capabilities, and how to configure and deploy it: | https://docs.wso2.com/pages/viewpage.action?pageId=92534189&navigatingVersions=true | 2019-04-18T18:21:38 | CC-MAIN-2019-18 | 1555578526228.27 | [array(['http://b.content.wso2.com/sites/all/images/zusybot-hide-img.png',
None], dtype=object) ] | docs.wso2.com |
IgDiscover
Contents
- Installation
- Manual installation
- Test data set
- User guide
- Overview
- Obtaining a V/D/J database
- Input data requirements
- Configuration
- Running IgDiscover
- The analysis directory
- Format of output files
- Subcommands
- Germline and pre-germline filtering
- Data from the Sequence Read Archive (SRA)
- Does random subsampling influence results?
- Logging the program’s output to a file
- Caching of IgBLAST results and of merged reads
- Terms
- Questions and Answers
- How many sequences are needed to discover germline V gene sequences?
- Can IgDiscover analyze IgG libraries?
- Can IgDiscover analyze a previously sequenced library?
- Do the positions of the PCR primers make a difference to the output?
- What are the advantages to 5’-RACE compared to multiplex PCR for IgDiscover analysis?
- What is meant by ‘starting database’?
- How can I use the IMGT database as a starting database?
- How do I change the parameters of the program?
- Where do I find the individualized database produced by IgDiscover?
- What does the _S1234 at the end of same gene names mean?
- Advanced topics
- Development
- Changes | http://docs.igdiscover.se/en/latest/ | 2019-04-18T18:24:40 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.igdiscover.se |