content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
UDN
Search public documentation:
Lightmassmass Static Global Illumination
UE3 Home > Level Designer > Lightmass Static Global Illumination
UE3 Home > Lighting Artist > Lightmass Static Global Illumination
UE3 Home > Level Designer > Lightmass Static Global Illumination
UE3 Home > Lighting Artist > Lightmass Static Global Illumination
Lightmass Static Global Illumination
Document Changelog: Created by Daniel Wright.
- Lightmass Static Global Illumination
- Overview
- Versions
- Legacy Support
- Lightmass Features
- Getting the best quality with Lightmass
- Getting the best lighting build times
- Lightmass settings
- Useful Tools
- For Programmers
Overview
Lightmass creates lightmaps with complex light interactions like area shadowing and diffuse interreflection. It is orthogonal to the rest of the rendering pipeline (dynamic lighting and shadowing), it just replaces the lightmaps and static shadow maps with higher quality ones. Communication between the UE3 editor and Lightmass is handled by Unreal Swarm, which manages the lighting build locally and can also distribute the lighting build to remote machines. The Swarm Agent also tracks lighting build progress and keeps you up to date with which machines are working for you, what they're working on, and how many threads each one is using. The bar near the bottom shows how much of the build is complete. The below is from a local build that used 8 threads for lighting.
Versions
Lightmass was first introduced in the QA_APPROVED_BUILD_JUN_2009 build.
Legacy Support
Getting old UE3 lighting in a new mapAs of QA_APPROVED_BUILD_JUNE_2009, new maps will use Lightmass and global illumination by default. This is controlled by
bUseGlobalIlluminationunder View->World Properties->Lightmass in UnrealEd. It is also accessible through the Lighting Build Options dialog in UnrealEd. To get old UE3 lighting, simply disable
bUseGlobalIllumination.
Converting an existing map to use LightmassMaps created before Lightmass was integrated will have
bUseGlobalIlluminationset to false, and will continue to use the old UE3 static lighting path. Here are the steps needed to convert an existing level:
- Set
bUseGlobalIlluminationto true or checked.
- Place one or more LightmassImportanceVolumes around the part of your level that needs detailed global illumination. The radius of these volumes combined will have a huge impact on your build time, so try to encompass the important parts of your level as tightly as possible.
- Disable or delete any fill lights in the level. Lightmass will calculate light bounces, so you only need to keep the dominant lights. Sky lights are much less useful with Lightmass and they just decrease your potential contrast, so it's best to disable these too if they were being used as an ambient term. Fill lights can of course be kept for more artistic control.
Lightmass Features
Area lights and shadowsWith old UE3 static lighting, lights did not have any surface area. They were treated as if all light was being emitted from a single point (or a single direction, for directional lights). Area shadows were approximated by filtering the shadow results in the lightmap texture, which means the shadow penumbra sharpness was dependent on the lightmap resolution, and the penumbra size was the same across all surfaces with the same lightmap resolution. With Lightmass, all lights are area lights by default. The shape used by Point and Spot light sources is a sphere, whose radius is set by LightSourceRadius under LightmassSettings. directional light with the old UE3.
Signed Distance Field shadowsLightmass can generate precomputed shadow maps in a different encoding, see the DistanceFieldShadows page for more info.
Diffuse InterreflectionDiffuse Interreflection is by far the most visually important global illumination lighting effect. Light bounces by default with Lightmass, and the Diffuse term of your material controls how much light (and what color) bounces in all directions. This effect is sometimes called color bleeding. It's important to remember that diffuse interreflection is incoming light reflecting equally in all directions, which means that it is not affected by the viewing direction or position. Here's a scene built by Lightmass with a single directional light and only direct lighting shown. Notice how the size of the shadow penumbras depends on the distance of the occluder. The areas that are not directly visible to the light are black. This is the result without global illumination.
Character lightingLightmass places samples in a uniform 3d grid inside the LightmassImportanceVolume.
Limitations
- Currently the light samples can use a lot of memory, their placement could be optimized by having more knowledge about where detailed indirect lighting is needed on characters (i.e. the play area).
- Dynamically lit objects not using light environments will only be affected by direct lighting.
- Light environments outside the LightmassImportanceVolume will have black indirect lighting.
Mesh Area Lights from EmissiveThe emissive input of any material you apply to a static object can be used to create mesh area lights. Mesh area lights are similar to point lights, but they can have arbitrary shape and intensity across the surface of the light. Each positive emissive texel emits light in the hemisphere around the texel's normal based on the intensity of that texel. Each neighboring group of emissive texels will be treated as one mesh area light that emits one color. The first image shows only the emissive of the underlying materials, and the second image shows the result of the 4 mesh area lights created from those emissive texels (four lights are created because there are four neighboring patches of positive emissive texels).
EmissiveLightFalloffExponentsetting allows you to control the falloff of the light emitted. It behaves just like a point light's falloff exponent, larger exponents result in light attenuating more quickly as it gets toward the influence radius of the light. The influence radius of a mesh area light is determined automatically based on the brightness of the light along with the size of the light's emissive surface area.
EmissiveBoostscales the intensity of all emissive texels, which also affects the influence radius. Note that mesh area lights add to your build time, they have about the same build time impact as a point light. Mesh area lights with a smaller influence radius will be faster to calculate lighting for.
Limitations
- Mesh area lights on meshes with a lot of triangles will increase build times. A warning is issued for meshes with > 3000 triangles, and meshes with > 5000 triangles will not be used as a mesh area light regardless of bUseEmissiveForStaticLighting.
Translucent shadowsLight passing through a translucent material that is applied to a static shadow casting mesh will lose some energy, resulting in a translucent shadow.
Translucent shadow colorThe amount of light passing through the material is called
Transmission(different from
TransmissionColorand
TransmissionMaskin the material editor, which operate on opaque materials), and ranges between 0 and 1 for each color channel. A value of 0 would be fully opaque, and a value of 1 would mean that the incident light passes through unaffected. There's no material input for Transmission, so currently it is derived from the other material inputs as follows:
- Lit materials
- BLEND_Translucent and BLEND_Additive: Transmission = Lerp(White, Diffuse, Opacity)
- BLEND_Modulate: Transmission = Diffuse
- Unlit materials
- BLEND_Translucent and BLEND_Additive: Transmission = Lerp(White, Emissive, Opacity)
- BLEND_Modulate: Transmission = Emissive
Translucent shadow sharpnessThere are several factors controlling translucent shadow sharpness. In the first image is a large light source was used (directional light with a LightSourceAngle of 5) and in the second, a small light source was used (LightSourceAngle of 0).
ExportResolutionScalein the material editor) to capture the sharp shadows..
Masked shadowsLightmass takes the opacity mask of BLEND_Masked materials into account when calculating shadows. The part of the material that gets clipped in the editor viewports also does not cause any shadowing, which allows much more detailed shadowing from trees and foliage.
Ambient OcclusionLight disabled by default, and can be enabled by checking next to
bUseAmbientOcclusionin View->World Properties->Lightmass in UnrealEd. In the first image is a scene with indirect lighting but no ambient occlusion. In the second image is the same scene with ambient occlusion applied to both the direct and indirect lighting, note the darkening where objects come together.
- bVisualizeAmbientOcclusion - Overrides lightmaps with just the occlusion factor when lighting is built. This is useful for seeing exactly what the occlusion factor is, and comparing the effects of different settings.
- MaxOcclusionDistance - Maximum distance for an object to cause occlusion on another object
- FullyOccludedSamplesFraction - Fraction of samples taken that must be occluded in order to reach full occlusion. Note that there is also a per-primitive FullyOccludedSamplesFraction, which allows controlling how much occlusion an object causes on other objects.
- OcclusionExponent - Higher exponents increase contrast
- First: Default AO settings (MaxOcclusionDistance of 200, FullyOccludedSamplesFraction of 1.0, OcclusionExponent of 1.0)
- Second: MaxOcclusionDistance of 5. Low frequency occlusion is removed, only occlusion in corners is left.
- Third: FullyOccludedSamplesFraction of 0.8. Occlusion is shifted darker across all ranges, any areas that were at 80% occluded or above saturate to black.
- Last: OcclusionExponent of 2. The occlusion transitions from midrange to saturated dark much more quickly, occlusion is pushed into corners.
Limitations
- Ambient Occlusion requires a fairly high lightmap resolution to look good, since it changes quickly in corners. Vertex lightmaps will give strange results, as large areas of meshes will be dark since vertices are often in the corners where ambient occlusion is the highest.
- Preview quality builds do not do a very good job at previewing ambient occlusion, since AO requires pretty dense lighting samples (just like indirect shadows).
Getting the best quality with Lightmass
Making lighting noticeable
Diffuse TexturesDuring rendering, lit pixel color is determined as Diffuse * Lighting, so diffuse color directly affects how visible the lighting will be. High contrast or dark diffuse textures make lighting difficult to notice, while low contrast, mid-range diffuse textures let the lighting details show through. Compare the lighting clarity between the scene in the first image, build with mid-range diffuse textures, to the scene in the second image also built with Lightmass, but with noisy, dark diffuse textures. Only the most high-frequency changes are noticeable in the scene in the second image, like the shadow transitions.
Lighting Setup
- Avoid skylights! Skylights add's important to check out the dark areas on the final target display.
Improving lighting quality
Lightmap resolutionUsing texture lightmaps with high resolution is the best way to get detailed, quality lighting. Vertex lightmaps are tessellation dependent, so they usually cannot represent lighting details like area shadows, and instead they just show the general color of the area the mesh is in. Using high lightmap resolution has the downsides of taking up more texture pool memory and increasing build times, so it's a tradeoff. Ideally, most of the lightmap resolution should be allocated around the high visual impact areas and in places where there are high frequency shadows.
Lightmass Solver qualityLightmass Solver settings are automatically set appropriately based on what quality of build is requested in the Lighting Build Options dialog. Production should give good enough quality that the artifacts are not clearly noticeable with a diffuse texture applied. There is one setting that has a big impact on quality however, and that is StaticLightingLevelScale. Lowering the scale will generate many more indirect lighting samples, which will increase the quality of indirect lighting and shadowing, at the expense of build time. In most cases this is only useful for increasing the lighting scale for large levels.
Getting the best lighting build times
There are several ways to improve your Lightmass build times:
- Only have high resolution lightmaps in areas that have high-frequency (quickly changing) lighting. Reduce the lightmap resolution for BSP and static meshes that are not in direct lighting or affected by sharp indirect shadows. This will give you high resolution shadows in the areas that are most noticable.
- Surfaces that are never visible to the player should be set to the lowest possible lightmap resolution.
- Use a LightmassImportanceVolume to contain the areas that are most important (just around the playable area).
- Optimize the lightmap resolutions across the map so that build time for meshes is more even. Use the LightmassTools#LightingTimings Dialog to find the slowest objects. The lighting build can never be faster than the slowest single object, regardless of how many machines are doing the distributed build.
Lightmass settings
LightmassImportanceVolumeManyImportanceVolume controls the area that Lightmass emits photons in, allowing you to concentrate it only on the area that needs detailed indirect lighting. Areas outside the importance volume get only one bounce of indirect lighting at a lower quality. An overhead wireframe view of MP_Jacinto is shown in the first image. The actual playable area that needs high quality lighting is the small green blob at the center. In the second image, a closeup of the playable area of MP_Jacinto is shown, with the correctly setup LightmassImportanceVolume selected. The LightmassImportanceVolume reduced the radius of the region to light from 80,000 units to 10,000 units, which is 64x less area to light.
Lighting Build Options dialog
- Build Quality - Controls how much time is spent building lighting and at what quality lighting is calculated. Preview is the fastest, lowest quality (but still representative) while Production is the slowest, highest quality that should be used for shipping levels. Each quality setting is about 3x slower than the one above it. Preview and Medium will show error coloring, but High and Production will not.
- Use Lightmass - Overrides the level's bUseGlobalIllumination for just this lighting build. Note: the level's bUseGlobalIllumination will not be permanently changed!
World Settings
- bUseGlobalIllumination - Whether the level should use Lightmass or old UE3 direct lighting. Starting with QA_APPROVED_BUILD_JUNE_2009, new maps will default to true and old maps will be set to false.
- StaticLightingLevelScale - Scale of the level relative to the scale of the game. This is used to decide how much detail to calculate in the lighting and smaller scales will greatly increase build times. The default is 1.0, meaning this level needs the same lighting scale rest of the game. A scale of 2 would mean the current level only needs to calculate indirect light interactions that are 2x bigger than the default, and lighting builds will be much faster. SP_Assault in Gears of War 2 is an example of when to use StaticLightingLevelScale. The level is huge (taking up 3/4ths of the grid in the editor) and most of the time you are driving through it, not walking, and detailed lighting is not noticeable. In this level a StaticLightingLevelScale of about 4.0 is appropriate, which greatly reduces build times.
- NumIndirectLightingBounces - Number of times light is allowed to bounce off surfaces, starting from the light source. 0 is direct lighting only, 1 is one bounce, etc. Bounce 1 takes the most time to calculate, followed by bounce 2. Successive bounces are nearly free, but also do not add very much light, as light attenuates at each bounce.
- EnvironmentColor - Color that rays which miss the scene will pick up. The environment can be visualized as a sphere surrounding the level, emitting this color of light in each direction.
- EnvironmentIntensity - Scales EnvironmentColor to allow an HDR environment color.
- EmissiveBoost - Scales the emissive contribution of all materials in the scene.
- DiffuseBoost - Scales the diffuse contribution of all materials in the scene. Increasing DiffuseBoost is an effective way to increase the intensity of the indirect lighting in a scene. The diffuse term is clamped to 1.0 in brightness after DiffuseBoost is applied, in order to keep the material energy conserving (meaning light must decrease on each bounce, not increase). If raising DiffuseBoost doesn't result in brighter indirect lighting, the diffuse term is being clamped and the Light's IndirectLightingScale should be used to increase indirect lighting instead.
- SpecularBoost - Scales the specular contribution of all materials in the scene. Currently unused.
- IndirectNormalInfluenceBoost - Lerp factor that controls the influence of normal maps with directional lightmaps on indirect lighting. A value of 0 gives a physically correct distribution of light, which may result in little normal influence in areas only lit by indirect lighting, but less lightmap compression artifacts. A value of .8 results in 80% of the lighting being redistributed in the dominant incident lighting direction, which effectively increases the per-pixel normal's influence, but causes more severe lightmap compression artifacts.
- bUseAmbientOcclusion - Enables static ambient occlusion to be calculated by Lightmass and built into your lightmaps
- DirectIlluminationOcclusionFraction - How much of the AO to apply to direct lighting
- IndirectIlluminationOcclusionFraction - How much of the AO to apply to indirect lighting
- MaxOcclusionDistance - Maximum distance for an object to cause occlusion on another object
- FullyOccludedSamplesFraction - Fraction of samples taken that must be occluded in order to reach full occlusion
- OcclusionExponent - Higher exponents increase contrast
- bVisualizeMaterialDiffuse - Override normal direct and indirect lighting with just the material diffuse term exported to Lightmass. This is useful when verifying that the exported material diffuse matches up with the actual diffuse.
- bVisualizeAmbientOcclusion - Override normal direct and indirect lighting with just the AO term. This is useful when tweaking ambient occlusion settings, as it isolates the occlusion term.
Light SettingsFor more information on setting up lighting in UE3 view the Lighting Reference and Shadowing Reference pages.
- IndirectLightingSaturation - 0 will result in indirect lighting being completely desaturated, 1 will be unchanged.
- IndirectLightScale - Scales the brightness of indirect lighting from this light. This is similar to DiffuseBoost in that it will change how much light gets bounced, but different in that it only changes how much light is emitted, not how much light gets attenuated at each bounce. Also, DiffuseBoost can only increase indirect lighting to a certain point, because Diffuse is clamped to keep its brightness below or equal to 1.0. IndirectLightScale can increase the indirect lighting intensity by any amount.
- LightSourceRadius - (Point and Spot lights only) The radius of the light's emissive sphere, NOT the light's influence, which is controlled by Radius. A larger LightSourceRadius will result in larger shadow penumbras.
- LightSourceAngle - (Directional lights only) The angle, in degrees, of the directional light's emissive disk from a receiver. Larger angles result in larger shadow penumbras. Note that this is the angle from the center of the directional light's disk to the edge of the disk, not from one edge to the other. The Sun's angle is about .25 degrees.
- ShadowExponent - Controls the falloff of shadow penumbras, or how fast areas change from fully lit to fully shadowed.
Primitive Component Settings
- DiffuseBoost - Scales the diffuse contribution of all materials applied to this object.
- EmissiveBoost - Scales the emissive contribution of all materials applied to this object.
- EmissiveLightExplicitInfluenceRadius - Direct lighting influence radius. The default is 0, which means the influence radius should be automatically generated based on the emissive light brightness. Values greater than 0 override the automatic method.
- EmissiveLightFalloffExponent - Direct lighting falloff exponent for mesh area lights created from emissive areas on this primitive.
- FullyOccludedSamplesFraction - Fraction of AO samples taken from this object that must be occluded in order to reach full occlusion on other objects. This allows controlling how much occlusion an object causes on other objects.
- bShadowIndirectOnly - If checked, this object will only shadow indirect lighting. This is useful for grass, since the geometry that is rendered is just a representation of the actual geometry and doesn't necessarily cast accurately shaped shadows. It's also useful for grass because the resulting shadows would be too high frequency to be stored in precomputed lightmaps.
- SpecularBoost - Scales the specular contribution of all materials applied to this object. Currently unused.
- bUseEmissiveForStaticLighting - Allows the mesh's material's emissive to be used to create Mesh Area Lights.
- bUseTwoSidedLighting - If checked, this object will be lit as if it receives light from both sides of its polygons.
Base Material SettingsFor more information on the material editor see the Material Editor User Guide.
- EmissiveBoost - Scales the emissive contribution of this material to static lighting.
- DiffuseBoost - Scales the diffuse contribution of this material to static lighting.
- SpecularBoost - Scales the specular contribution of this material to static lighting.
- ExportResolutionScale - Scales the resolution that this material's attributes are exported at. This is useful for increasing material resolution when details are needed.
Material Instance Contant SettingsFor more information on the material instance editor see the Material Instance Editor User Guide.
- EmissiveBoost - When checked, overrides the parent's EmissiveBoost.
- DiffuseBoost - When checked, overrides the parent's DiffuseBoost.
- SpecularBoost - When checked, overrides the parent's SpecularBoost.
- ExportResolutionScale - When checked, overrides the parent's ExportResolutionScale.
Useful Tools
For information on various tools for lightmass, including debugging strategies and troubleshooting tips, see the Lightmass Tools page.
For Programmers
Please see the Lightmass Technical Guide for information on programming and debugging Lightmass. | https://docs.unrealengine.com/udk/Three/Lightmass.html | 2017-08-16T21:36:44 | CC-MAIN-2017-34 | 1502886102663.36 | [array(['rsrc/Three/Lightmass/SwarmAgent.jpg', 'SwarmAgent.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/7OldUE3Shadows.jpg', '7OldUE3Shadows.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/7LightmassShadows.jpg',
'7LightmassShadows.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/AreaLight.jpg', 'AreaLight.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/3DirectOnly.jpg', '3DirectOnly.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/3FirstBounceOnly.jpg',
'3FirstBounceOnly.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/3SecondBounceOnly.jpg',
'3SecondBounceOnly.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/3FourBounces.jpg', '3FourBounces.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/02CharacterLighting.jpg',
'02CharacterLighting.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/02CharacterLightingLit.jpg',
'02CharacterLightingLit.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/01CharacterLighting.jpg',
'01CharacterLighting.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/SmallMeshAreaLightsEmissiveOnly.jpg',
'SmallMeshAreaLightsEmissiveOnly.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/SmallMeshAreaLights.jpg',
'SmallMeshAreaLights.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/5LargeLightSource.jpg',
'5LargeLightSource.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/5SmallLightSource.jpg',
'5SmallLightSource.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/5LowLightmapResolution.jpg',
'5LowLightmapResolution.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/5LowExportResolutionScale.jpg',
'5LowExportResolutionScale.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/TranslucentShadowIndirectLight.jpg',
'TranslucentShadowIndirectLight.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/9NoAO.jpg', '9NoAO.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/9WithAO.jpg', '9WithAO.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/92Defaults.jpg', '92Defaults.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/92MaxDist5.jpg', '92MaxDist5.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/92MaxPct80.jpg', '92MaxPct80.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/92Exponent2.jpg', '92Exponent2.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/94MidToneDiffuse.jpg',
'94MidToneDiffuse.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/94DarkDiffuse.jpg', '94DarkDiffuse.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/94MidToneDiffuseUnlit.jpg',
'94MidToneDiffuseUnlit.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/94DarkDiffuseUnlit.jpg',
'94DarkDiffuseUnlit.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/hist_Spo.jpg', 'hist_Spo.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/hist_UT.jpg', 'hist_UT.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/JacintoImportanceFar.jpg',
'JacintoImportanceFar.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/JacintoImportanceNear.jpg',
'JacintoImportanceNear.jpg'], dtype=object)
array(['rsrc/Three/Lightmass/VolumeMenu.jpg', 'VolumeMenu.jpg'],
dtype=object)
array(['rsrc/Three/Lightmass/LightingBuildOptions.jpg',
'LightingBuildOptions.jpg'], dtype=object) ] | docs.unrealengine.com |
Magento Open Source, 1.9.x
DHL
DHL offers integrated international services and tailored, customer-focused solutions for managing and transporting letters, goods and information.
To offer this shipping method to your customers, you must first open an account with DHL.
- Access ID
- Account Number
- Documents
- Non documents
- Fixed
- Percentage
- Per Order
- Per Package
- Pounds
- Kilograms
- Regular
- Specific
If you are using Specific, enter the Height, Depth, and Width of the package. Specify these numbers in centimeters.
To display the correct list of shipping methods, you must first specify the Country of Origin in Shipping Settings.
This is similar to the standard Free Shipping method, but appears in the DHL section so customers know which method is used for their order.
- All Allowed Countries
- Specific Countries
If shipping to specific countries, select each country from the Ship to Specific Countries list. | http://docs.magento.com/m1/ce/user_guide/shipping/dhl.html | 2017-08-16T21:30:20 | CC-MAIN-2017-34 | 1502886102663.36 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.magento.com |
FedEx
FedEx is one of the world’s largest shipping service companies, providing air, freight, and ground shipping services with several levels of priorities.
FedEx now uses dimensional weight to determine some shipping rates.
Complete the steps to register with FedEx and to open a business account. You will receive your FedEx account number after you enter your contact information, agree to the terms, and provide your credit card number.
- Read the documentation.
- Receive your testing credentials and test the integration.
- Complete the certification process.
- Receive your production credentials, and move to production.
Make sure to write down your credentials so you can refer to them as needed for the Magento configuration.
>.
- All Allowed Countries
- Specific Countries
If applicable, set Ship to Specific Countries to each country where your customers are allowed to ship by UPS. (Hold the Ctrl key down to select multiple options.) | http://docs.magento.com/m1/ce/user_guide/shipping/fedex.html | 2017-08-16T21:30:07 | CC-MAIN-2017-34 | 1502886102663.36 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.magento.com |
Managing Data
Except displaying sets of data, Telerik's RadGridView also allows you to manage them. You are able to execute the standard operations - Insert, Update and Delete. You can also validate the data. Besides using this functionality, you are able to control it via the several events raised at the most important key-points of the action.
More about managing data in RadGridView can be found here or you may check out the following topics: | http://docs.telerik.com/devtools/silverlight/controls/radgridview/features/overview-managing-data | 2017-08-16T21:35:54 | CC-MAIN-2017-34 | 1502886102663.36 | [] | docs.telerik.com |
Editing Control Templates
This article will show you two different approaches on how to extract and edit the default control templates of the UI for WPF controls.
- Extracting Control Templates Manually from the Theme XAML File
- Extracting Control Templates Using Visual Studio
Extracting Control Templates Manually from the Theme XAML File
Inside the installation folder of your version of UI for WPF you can find a folder named Themes.Implicit. It contains the XAML files of the different themes for all the controls.
You then have to navigate to the required theme and open the needed XAML file with your favorite editor. For example, if you are using the Office_Black theme and you need the control template of the RadListBox control, you need to go in the Themes.Implicit\OfficeBlack\Themes folder and find the Telerik.Windows.Controls.xaml file which usually corresponds to the name of the assembly the control is located in.
Figure 1: Navigating to the required XAML file
You need to extract the desired control template from the theme you are using in your application as there are differences between the templates in the different themes. Not using the correct template may lead to errors or cause undesired behavior.
When you have the file open in an editor you need to find the default style for the given control. The default styles usually follow the convention name of the control + Style, for example - RadListBoxStyle.
After you locate the style you have to navigate to the value of its Template property setter which will point you to the control template. Once you have copied the template you can easily modify it and apply it either to a single instance of the control or throughout your application by creating the appropriate style and setting its Template property.
For example, if you want to add a rounded red border around the RadListBox control you need to extract the respective control template and modify it as demonstrated in Example 1.
[XAML] Example 1: Adding a border around the RadListBox control
<Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="/Telerik.Windows.Themes.Office_Black;component/Themes/Telerik.Windows.Controls.xaml"/> </ResourceDictionary.MergedDictionaries> <Style BasedOn="{StaticResource RadListBoxStyle}" TargetType="telerik:RadListBox"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="telerik:RadListBox"> <Grid> <ScrollViewer x: <!-- Here is the additional Border --> <Border CornerRadius="10" BorderBrush="Red" BorderThickness="1"> <ItemsPresenter/> </Border> <ScrollViewer.InputBindings> <KeyBinding Command="telerikPrimitives:ListControl.SelectAllCommand" Key="A" Modifiers="Control"/> </ScrollViewer.InputBindings> </ScrollViewer> <ContentPresenter x: </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </ResourceDictionary> </Application.Resources>
Figure 2: RadListBox control with red border
Extracting Control Templates Using Visual Studio
The other way to extract a control template is through the Visual Studio designer or Expression Blend. You have to right click on the desired control and navigate through the context menu to the Edit Template option. Afterwards just click on the Edit a Copy option as shown in Figure 3.
Figure 3: Visual Studio designer context menu
The Create Style Resource dialog will appear, providing you with a few choices.
The first option is to extract the style with the default control template in a specified document with a resource key. You can choose this option if you need to apply it on a single instance of the control.
Figure 4: Generating a style with a resource key
The second option is to create an implicit style.
Figure 5: Generating an implicit style
Let's assume you just need to style one specific instance of the control and you have chosen to extract the style with a resource key in the current document. Example 2 shows the generated XAML code.
[XAML] Example 2: The generated XAML code
<Window> <Window.Resources> <Style x: <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type telerik:RadListBox}"> ... </ControlTemplate> </Setter.Value> </Setter> </Style> </Window.Resources> <Grid> <telerik:RadListBox </Grid> <Window> | http://docs.telerik.com/devtools/wpf/styling-and-appearance/styling-apperance-editing-control-templates | 2017-08-16T21:35:22 | CC-MAIN-2017-34 | 1502886102663.36 | [array(['images/styling-apperance-editing-control-templates_1.png',
'Navigating to the required XAML file'], dtype=object)
array(['images/styling-apperance-editing-control-templates_2.png',
'RadListBox control with red border'], dtype=object)
array(['images/styling-apperance-editing-control-templates_3.png',
'Visual Studio designer context menu'], dtype=object)
array(['images/styling-apperance-editing-control-templates_4.png',
'styling-apperance-editing-control-templates 4'], dtype=object)
array(['images/styling-apperance-editing-control-templates_5.png',
'Generating an implicit style'], dtype=object) ] | docs.telerik.com |
TOPICS×
Test this capability on your system
Download and import this package into AEM This package contains the sample workflow and the html page which allows you create the schema from the uploaded Acroform.
Configure Workflow
- Open the configuration properties of MergeAcroformData step.
- Click on the Process tab.
- Make sure the arguments you are passing is a valid folder on your server.
- Save the changes.
Create Adaptive Form
- Create an Adaptive Form using the schema created in the earlier step.
- Drag and drop a few schema elements on to the Adaptive Form.
- Configure submit action of the Adaptive Form to submit to AEM workflow (MergeAcroformData).
- Make sure you specify the Data file path as "Data.xml". This is very important as the sample code looks for a file called Data.xml in the workflow payload.
- Preview Adaptive Form, fill in the form and submit.
- You should see PDF with the data merged saved to the folder specified in step 4 under the configure workflow
The pdf generated by merging data with the acroform is saved as pdfdocument.pdf under the workflow's payload folder. This document can then be used for further processing as part of the workflow | https://docs.adobe.com/content/help/en/experience-manager-learn/forms/acroform-aem-forms-sign/part3.html | 2020-03-29T01:21:34 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.adobe.com |
Always review the complete molecule using the reference screen before making any moves.
Next, study the play field and plan your moves. Remember, once a piece is moved it may not be possible to return it into the starting position.
Think through your every move and try to visualize the trajectory piece will follow once a directional arrow is clicked.
When using the keyboard to move pieces make sure that the desired piece is selected. If a wrong atom is marked as selected, use the Tab key to switch between the pieces until you reach a desired one. | https://docs.kde.org/stable5/en/kdegames/katomic/tips.html | 2020-03-29T01:13:15 | CC-MAIN-2020-16 | 1585370493121.36 | [array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)] | docs.kde.org |
Event ID 4198 — TCP/IP Network Interface Configuration
Applies To: Windows Server 2008 R2. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd379838(v=ws.10)?redirectedfrom=MSDN | 2020-03-29T01:01:04 | CC-MAIN-2020-16 | 1585370493121.36 | [array(['images/ee406008.red%28ws.10%29.jpg', None], dtype=object)] | docs.microsoft.com |
This page explains how to configure your DNS Pod and customize the DNS resolution process. In Kubernetes version 1.11 and later, CoreDNS is at GA and is installed by default with kubeadm. See CoreDNS ConfigMap options and Using CoreDNS for Service Discovery.. Refer to the documentation provided by your installer to know which DNS server is installed by default.
The CoreDNS Deployment is exposed as a Kubernetes Service with a static IP.
Both the CoreDNS and kube-dns Service are named
kube-dns in the
metadata.name field. This is done so that there is greater interoperability with workloads that relied on the legacy
kube-dns Service name to resolve addresses internal to the cluster. It abstracts away the implementation detail of which DNS provider is running behind that common endpoint.
The kubelet passes DNS to each container with the
--cluster-dns=<dns-service-ip> flag.
DNS names also need domains. You configure the local domain in the kubelet
with the flag
--cluster-domain=<default-local-domain>.
The DNS server supports forward lookups (A is a general-purpose authoritative DNS server that can serve as cluster DNS, complying with the dns specifications.
CoreDNS is a DNS server that is modular and pluggable, and each plugin adds new functionality to CoreDNS. This can be configured by maintaining a Corefile, which is the CoreDNS configuration file. A cluster administrator can modify the ConfigMap for the CoreDNS Corefile to change how service discovery works.:
lameduckwill make the process unhealthy then wait for 5 seconds before the process is shut down..
You can modify the default CoreDNS behavior by modifying the ConfigMap.
CoreDNS has the ability to configure stubdomains and upstream nameservers using the forward plugin. }
In Kubernetes version 1.10 and later, kubeadm supports automatic translation of the CoreDNS ConfigMap from the kube-d.
Kube-dns is now available as an optional DNS server since CoreDNS is now the default. The running DNS Pod holds 3.
Cluster administrators can specify custom stub domains and upstream nameservers
by providing a ConfigMap for kube-dns (
kube-system:kube-dns).
For example, the following ConfigMap sets up a DNS configuration with a single stub domain and two upstream nameservers:
apiVersion: v1 kind: ConfigMap metadata: name: kube-dns namespace: kube-system data: stubDomains: | {"acme.local": ["1.2.3.4"]} upstreamNameservers: | ["8.8.8.8", "8.8.4.4"]
DNS requests with the “.acme.local” suffix are forwarded to a DNS listening at 1.2.3.4. Google Public DNS serves the upstream queries.
The table below describes how queries with certain domain names map to their destination DNS servers:
See ConfigMap options for details about the configuration option format.
Custom upstream nameservers and stub domains do not affect Pods with a
dnsPolicy set to “
Default” or “
None”.
If a Pod’s
dnsPolicy is set to “
ClusterFirst”, its name resolution is
handled differently, depending on whether stub-domain and upstream DNS servers
are configured.
Without custom configurations: Any query that does not match the configured cluster domain suffix, such as “”, is forwarded to the upstream nameserver inherited from the node.
With custom configurations: If stub domains and upstream DNS servers are configured, DNS queries are routed according to the following flow:
The query is first sent to the DNS caching layer in kube-dns.
From the caching layer, the suffix of the request is examined and then forwarded to the appropriate DNS, based on the following cases:
Names with the cluster suffix, for example “.cluster.local”: The request is sent to kube-dns.
Names with the stub domain suffix, for example “.acme.local”: The request is sent to the configured custom DNS resolver, listening for example at 1.2.3.4.
Names without a matching suffix, for example “widget.com”: The request is forwarded to the upstream DNS, for example Google public DNS servers at 8.8.8.8 and 8.8.4.4.
Options for the kube-dns
kube-system:kube-dns ConfigMap:
In this example, the user has a Consul DNS service discovery system they want to
integrate with kube-dns. The consul domain server is located at 10.150.0.1, and
all consul names have the suffix
.consul.local. To configure Kubernetes, the
cluster administrator creates the following ConfigMap:
apiVersion: v1 kind: ConfigMap metadata: name: kube-dns namespace: kube-system data: stubDomains: | {"consul.local": ["10.150.0.1"]}
Note that the cluster administrator does not want to override the node’s
upstream nameservers, so they did not specify the optional
upstreamNameservers field.
In this example the cluster administrator wants to explicitly force all
non-cluster DNS lookups to go through their own nameserver at 172.16.0.1.
In this case, they create a ConfigMap with the
upstreamNameservers field specifying the desired nameserver:
apiVersion: v1 kind: ConfigMap metadata: name: kube-dns namespace: kube-system data: upstreamNameservers: | ["172.16.0.1"].
This example ConfigMap for kubedns specifies federations, stubdomains and upstreamnameservers:
apiVersion: v1 data: federations: | {"foo" : "foo.feddomain.com"} stubDomains: | {"abc.com" : ["1.2.3.4"], "my.cluster.local" : ["2.3.4.5"]} upstreamNameservers: | ["8.8.8.8", "8.8.4.4"] kind: ConfigMap
The equivalent configuration in CoreDNS creates a Corefile:
For federations:
federation cluster.local { foo foo.feddomain.com } }
To migrate from kube-dns to CoreDNS, a detailed blog is available to help users adapt CoreDNS in place of kube-dns. A cluster administrator can also migrate using the deploy script.
Was this page helpful?
Thanks for the feedback. If you have a specific, answerable question about how to use Kubernetes, ask it on Stack Overflow. Open an issue in the GitHub repo if you want to report a problem or suggest an improvement. | https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ | 2020-03-29T01:14:52 | CC-MAIN-2020-16 | 1585370493121.36 | [] | v1-17.docs.kubernetes.io |
8.5.104.01
Genesys Co-browse Server Release Notes
Helpful Links
Releases Info
Product Documentation
Genesys Products
What's New
This release contains the following new features and enhancements:
- Jetty is now upgraded to version 9.4.7, which is an important security update.
Resolved Issues
This release contains the following resolved issues:
The Jetty 9.4.7 upgrade also resolves an issue with CSS synchronization, where in some cases outbound requests from the Co-browse Server to static resources on the customer's website (such as CSS) did not go through a forward proxy. (CB-4027, CB-4372)
Upgrade Notes
Because of the Jetty 9.4.7 upgrade, Co-browse Server no longer supports Oracle Java 7 Developer's Kit (JDK). You must now use Oracle Java 8 Developer's Kit (JDK) in order to run Co-browse Server.
This page was last edited on November 29, 2017, at 18:57.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/RN/latest/gcb-svr85rn/gcb-svr8510401 | 2020-03-29T00:48:48 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.genesys.com |
Code And Data Relocation¶
Overview¶
This feature will allow relocating .text, .rodata, .data, and .bss sections from required files and place them in the required memory region. The memory region and file are given to the scripts/gen_relocate_app.py script in the form of a string. This script is always invoked from inside cmake.
This script provides a robust way to re-order the memory contents without
actually having to modify the code. In simple terms this script will do the job
of
__attribute__((section("name"))) for a bunch of files together.
Details¶
The memory region and file are given to the scripts/gen_relocate_app.py script in the form of a string.
An example of such a string is:
SRAM2:/home/xyz/zephyr/samples/hello_world/src/main.c,SRAM1:/home/xyz/zephyr/samples/hello_world/src/main2.c
This script is invoked with the following parameters:
python3 gen_relocate_app.py -i input_string -o generated_linker -c generated_code
Kconfig
CONFIG_CODE_DATA_RELOCATION option, when enabled in
prj.conf, will invoke the script and do the required relocation.
This script also trigger the generation of
linker_relocate.ld and
code_relocation.c files. The
linker_relocate.ld file creates
appropriate sections and links the required functions or variables from all the
selected files.
Note
The text section is split into 2 parts in the main linker script. The first section will have some info regarding vector tables and other debug related info. The second section will have the complete text section. This is needed to force the required functions and data variables to the correct locations. This is due to the behavior of the linker. The linker will only link once and hence this text section had to be split to make room for the generated linker script.
The
code_relocation.c file has code that is needed for
initializing data sections, and a copy of the text sections (if XIP).
Also this contains code needed for bss zeroing and
for data copy operations from ROM to required memory type.
The procedure to invoke this feature is:
Enable
CONFIG_CODE_DATA_RELOCATIONin the
prj.conffile
Inside the
CMakeLists.txtfile in the project, mention all the files that need relocation.
zephyr_code_relocate(src/*.c SRAM2)
Where the first argument is the file/files and the second argument is the memory where it must be placed.
Note
The file argument supports limited regular expressions. function zephyr_code_relocate() can be called as many times as required. This step has to be performed before the inclusion of boilerplate.cmake.
Additional Configurations¶
This section shows additional configuration options that can be set in
CMakeLists.txt
if the memory is SRAM1, SRAM2, CCD, or AON, then place the full object in the sections for example:
zephyr_code_relocate(src/file1.c SRAM2) zephyr_code_relocate(src/file2.c.c SRAM)
if the memory type is appended with _DATA, _TEXT, _RODATA or _BSS, only the selected memory is placed in the required memory region. for example:
zephyr_code_relocate(src/file1.c SRAM2_DATA) zephyr_code_relocate(src/file2.c.c SRAM2_TEXT)
Multiple regions can also be appended together such as: SRAM2_DATA_BSS. This will place data and bss inside SRAM2.
Sample¶
A sample showcasing this feature is provided at
$ZEPHYR_BASE/samples/application_development/test_relocation/
This is an example of using the code relocation feature.
This example will place .text, .data, .bss from 3 files to various parts in the SRAM
using a custom linker file derived from
include/arch/arm/aarch32/cortex_m/scripts/linker.ld | https://docs.zephyrproject.org/latest/guides/code-relocation.html | 2020-03-29T00:46:43 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.zephyrproject.org |
ECNet Tools¶
Database creation¶
ECNet databases are comma-separated value (CSV) formatted files that provide information such as the ID of each data point, an optional explicit sort type, various strings and groups to identify data points, target values and input parameters. Row 1 is used to identify which columns are used for ID, explicit sorting assignment, various strings and groups, and target and input data, and row 2 contains the names of these strings/groups/targets/inputs. Additional rows are data points.
Our databases directory on GitHub contains databases for cetane number, cloud point, kinetic viscosity, pour point and yield sooting index, as well as a database template.
You can create an ECNet-formatted database with SMILES strings and (optionally) target values. Java JRE version 6 and above is required to create a database.
The database can then be constructed with:
from ecnet.tools.database import create_db smiles_strings = ['CCC', 'CCCC', 'CCCCC'] targets = [0, 1, 2] create_db(smiles_strings, 'my_database.csv', targets=targets)
Your database’s DATAID column (essentially Bates numbers for each molecule) will increment starting at 0001. If a prefix is desired for these values, specify it with:
from ecnet.tools.database import create_db smiles_strings = ['CCC', 'CCCC', 'CCCCC'] targets = [0, 1, 2] create_db(smiles_strings, 'my_database.csv', targets=targets, id_prefix='MOL')
You may specify additional STRING columns:
from ecnet.tools.database import create_db smiles_strings = ['CCC', 'CCCC', 'CCCCC'] targets = [0, 1, 2] extra_strings = { 'Compound Name': ['Propane', 'n-Butane', 'Pentane'], 'Literature Source': ['[1]', '[2]', '[3]'] } create_db( smiles_strings, 'my_database.csv', targets=targets, extra_strings=extra_strings )
ECNet .prj file usage¶
Once an ECNet project has been created, the resulting .prj file can be used to predict properties for new molecules. Just supply SMILES strings, a pre-existing ECNet .prj file, and optionally a path to save the results to:
from ecnet.tools.project import predict smiles = ['CCC', 'CCCC'] # obtain results, do not save to file results = predict(smiles, 'my_project.prj') # obtain results, save to file results = predict(smiles, 'my_project.prj', 'results.csv')
Java JRE 6.0+ is required for conversions.
Constructing parity plots¶
A common method for visualizing how well neural networks predict data is by utilizing a parity plot. A parity plot will show how much predictions deviate from experimental values by plotting them in conjunction with a 1:1 linear function (the closer a plot’s data points are to this line, the better they perform).
To create a parity plot, let’s import the ParityPlot object:
from ecnet.tools.plotting import ParityPlot
And initialize it:
my_plot = ParityPlot() # The plot's title defaults to `Parity Plot`; let's change that: my_plot = ParityPlot(title='Cetane Number Parity Plot') # The plot's axes default to `Experimental Value` (x-axis) and `Predicted Value` # (y-axis); we can change those too: my_plot = ParityPlot( title='Cetane Number Parity Plot', x_label='Experimental CN', y_label='Predicted CN' ) # The plot's font is Times New Roman by default; to use another font: my_plot = ParityPlot( title='Cetane Number Parity Plot', x_label='Experimental CN', y_label='Predicted CN', font='Calibri' )
Now that our plot is initialized, we can add data:
my_plot.add_series(x_vals, y_vals)
Say, for example, we obtained results from ECNet’s Server object using the “use” method; let’s plot predicted vs. experimental for the test set:
'''Let's assume you've trained your model using a Server, `sv`''' # Obtain predictions for data in the test set: predicted_data = sv.use(dset='test') experimental_data = sv._sets.test_y # Pass the test data's experimental values and its predicted values: my_plot.add_series(experimental_data, predicted_data) # We can also asign a name to the series, and change its color: my_plot.add_series( experimental_data, predicted_data, name='Test Set', color='red' )
Multiple data series can be added to your plot, allowing you to visualize different data sets together.
If we want to visualize how well given data points perform with respect to an error metric, we can add error bars to the plot. These error bars are placed on the positive and negative side of the 1:1 parity line:
'''Let's assume you've trained your model using a Server, `sv`''' # Obtain the test set's RMSE: errors = sv.errors('rmse', dset='test') # Add the error bars: my_plot.add_error_bars(errors['rmse']) # We can show the value of the error by supplying: my_plot.add_error_bars(errors['rmse'], label='Test Set RMSE')
Once the plot is complete, it can be saved:
# Save the plot to `my_plot.png`: my_plot.save('my_plot.png')
Or, we can view it without saving:
# View the plot in a pop-up window: my_plot.show()
Here is what plotting cetane number training and test data looks like:
| https://ecnet.readthedocs.io/en/latest/usage/tools.html | 2020-03-28T23:07:18 | CC-MAIN-2020-16 | 1585370493121.36 | [array(['../_images/cn_parity_plot.png', '../_images/cn_parity_plot.png'],
dtype=object) ] | ecnet.readthedocs.io |
Search Server Express Heroes - Tell Your Story!
Search Server 2008 Express has only been available for a month now, and we have already been hearing stories from customers about how it has helped them deliver value to their organization -- in many cases in unexpected ways that became a great hero opportunity for IT. We would really like to share your stories on our website to help others understand how search can help them...
Have you used Search Server Express to change the way your organization uses search to find information? Can your co-workers uncover the information they need on the company intranet faster than before? Do your customers now locate products and services from public facing websites quickly and easily? If so, tell us your story!
Visit here to briefly tell us how and why you implemented Search Server Express at your organization as well as the feedback you received from your co-workers and customers. Partners, you can submit your customer stories here as well, and we will include your name in the entries. We will post selected stories on the website this month!
Rhett Dillingham
Senior Product Manager
Microsoft Corp | https://docs.microsoft.com/en-us/archive/blogs/enterprisesearch/search-server-express-heroes-tell-your-story | 2020-03-29T01:37:05 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.microsoft.com |
6.1.1.2.2.1.2.1.2 Connection Object
An nTDSConnection object represents a path for replication from a source DC to a destination DC. This object is a child of the nTDSDSA object of the destination DC. See section 6.2 for more information about connection objects.
Each nTDSConnection object has the following attributes:
objectClass: nTDSConnection
enabledConnection: Indicates whether the connection can be used for replication.
fromServer: A reference to the nTDSDSA object of the source DC.
schedule: Contains a SCHEDULE structure specifying the time intervals when replication can be performed between the source and the destination DCs. In case of intrasite replication (source and destination DCs are in the same site), the value of this attribute is derived from the schedule attribute on the nTDSSiteSettings object of the site where the two DCs reside. In case of intersite replication (source and destination DCs are in different sites), the value is derived from the schedule attribute on the siteLink object that links the two sites.
systemFlags: {FLAG_CONFIG_ALLOW_RENAME | FLAG_CONFIG_ALLOW_MOVE}
options: One or more bits from the following diagram. The bits are presented in big-endian byte order.
X: Unused. Must be zero and ignored.
IG (NTDSCONN_OPT_IS_GENERATED, 0x00000001): The nTDSConnection object was generated by the KCC. See section 6.2 for more information.
TS (NTDSCONN_OPT_TWOWAY_SYNC, 0x00000002): Indicates that a replication cycle must be performed in the opposite direction at the end of a replication cycle that is using this connection.
OND (NTDSCONN_OPT_OVERRIDE_NOTIFY_DEFAULT, 0x00000004): Do not use defaults to determine notification.
UN (NTDSCONN_OPT_USE_NOTIFY, 0x00000008): The source DC notifies the destination DC regarding changes on the source DC.
DIC (NTDSCONN_OPT_DISABLE_INTERSITE_COMPRESSION, 0x00000010): For intersite replication, this indicates that the compression of replication data is disabled.
UOS (NTDSCONN_OPT_USER_OWNED_SCHEDULE, 0x00000020): For KCC-generated connections, indicates that the schedule attribute is owned by the user and must not be modified by the KCC. See section 6.2 for more information.
transportType: A reference to the interSiteTransport object for the transport used on this connection. For more information about physical transport types, see [MS-SRPL].
mS-DS-ReplicatesNCReason: For each NC that is replicated using this connection, this attribute contains an Object(DN-Binary) value, where the DN portion is the DN of the NC, and the binary value is a 32-bit–wide bit field. The binary portion contains extended information about a connection object that could be used by administrators. It consists of one or more bits from the following diagram. The bits are presented in big-endian byte order.
X: Unused. Must be zero and ignored.
GC (NTDSCONN_KCC_GC_TOPOLOGY, 0x00000001): Not used.
R (NTDSCONN_KCC_RING_TOPOLOGY, 0x00000002): The connection object is created to form a ring topology.
MH (NTDSCONN_KCC_MINIMIZE_HOPS_TOPOLOGY, 0x00000004): The connection object is created to minimize hops between replicating nodes.
SS (NTDSCONN_KCC_STALE_SERVERS_TOPOLOGY, 0x00000008): If the KCC finds that the destination server is not responding, then it sets this bit.
OC (NTDSCONN_KCC_OSCILLATING_CONNECTION_TOPOLOGY, 0x00000010): The KCC sets this bit if deletion of the connection object was prevented.
When the KCC considers deleting a connection object, it first checks if it previously deleted connection objects with the same source DC, destination DC, and options for an implementation-specific number of times T (default value is 3) over the last implementation-specific time period t (the default is 7 days) since the server has started. If it did, it will set the NTDSCONN_KCC_OSCILLATING_CONNECTION_TOPOLOGY bit on the connection object and will not delete it. Otherwise, it will delete the connection object.
ISG (NTDSCONN_KCC_INTERSITE_GC_TOPOLOGY, 0x00000020): This connection is to enable replication of partial NC replica between DCs in different sites.
IS (NTDSCONN_KCC_INTERSITE_TOPOLOGY, 0x00000040): This connection is to enable replication of a full NC replica between DCs in different sites.
SF (NTDSCONN_KCC_SERVER_FAILOVER_TOPOLOGY, 0x00000080): This connection is a redundant connection between DCs that is used for failover when other connections between DCs are not functioning.
SIF (NTDSCONN_KCC_SITE_FAILOVER_TOPOLOGY, 0x00000100): This connection is a redundant connection between bridgehead DCs in different DCs; it is used for failover when other connections between bridgehead DCs connecting two sites are not functioning.
RS (NTDSCONN_KCC_REDUNDANT_SERVER_TOPOLOGY, 0x00000200): Redundant connection object connecting bridgeheads in different sites.
The connection object is for server-to-server replication implementation only. Peer DCs MAY assign a meaning to it, but it is not required for interoperation with Windows clients.
See section 6.2 for more information about these options. | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/d6cca73b-9696-4700-9dab-7c4e54502960 | 2020-03-29T01:49:27 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.microsoft.com |
File:R2014a-install-1.jpg
From UABgrid Documentation
Revision as of 13:51, 12 January 2015 by [email protected] (Talk | contribs)
R2014a-install-1.jpg (708 × 431 pixels, file size: 112 KB, MIME type: image/jpeg)
MATLAB R2014a install 1
File history
Click on a date/time to view the file as it appeared at that time.
- Edit this file using an external application (See the setup instructions for more information)
File usage
The following page links to this file: | https://docs.uabgrid.uab.edu/w/index.php?title=File:R2014a-install-1.jpg&oldid=4939 | 2020-03-29T01:07:33 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.uabgrid.uab.edu |
server data update mode
webix.ui({ view:"list", save:{ url: "my.php", updateFromResponse:true } })
In this mode, DataProcessor will expect to receive a JSON object as a response for the update and insert commands. All data from such JSON object will be applied to the inserted|updated item.
In other words, server side can update the saved item on client side. | https://docs.webix.com/api__dataprocessor_updatefromresponse_config.html | 2020-03-28T22:55:32 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.webix.com |
Controls the properties
of xref-dependent layers.
Controls visibility,
color, linetype, lineweight, and plot styles.. | http://docs.autodesk.com/ACD/2010/ENU/AutoCAD%202010%20User%20Documentation/files/WS1a9193826455f5ffa23ce210c4a30acaf-4e05.htm | 2014-09-15T04:02:42 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.autodesk.com |
Document Type
Article
Abstract
Our laws reflect our values. What we value, we make laws to protect. In this article, Tricia Martland describes the child custody statute in North Dakota, which is the only state to use “clear and convincing” standard of evidence. This means that children will not be placed with parents with a history of domestic violence unless there is clear and convincing evidence of their rehabilitation. Other states deem the clear and convincing standard too stringent. Yet this standard is often used with regard to property title. Do our laws indicate that we value things over children? Changing policy to apply the same degree of protection to children that we do for property requires using the clear and convincing evidence standard.
Recommended Citation
Martland, Tricia P. Winter 2011. "The Case for Clear and Convincing Evidence: Do our Laws Value Property over Children?" Family and Intimate Partner Violence Quarterly 3 (3): 285-288
Included in
Criminology and Criminal Justice Commons
Published in: Family and Intimate Partner Violence Quarterly, Volume 03, Number 03, Winter 2011 , pp.285-288(4) | http://docs.rwu.edu/sjs_fp/22/ | 2014-09-15T04:03:18 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.rwu.edu |
You.
After you enable the section box, you can modify its extents using drag controls in the 3D view, or you can modify extents from other views, for example a plan or elevation view. Section box extents are not cropped by the view’s crop region.
To enable a section box:
The following image shows the section box selected with the blue arrow drag controls visible. The section box extents have been modified to cut into the stair tower.
To modify section box extents outside of the 3D view:
To control visibility of section box extents: | http://docs.autodesk.com/REVIT/2010/ENU/Revit%20Architecture%202010%20Users%20Guide/RAC/files/WS73099cc142f487559c61d310ffd2e0732-7ecc.htm | 2014-09-15T04:02:48 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.autodesk.com |
How To
- Phone
- Messages
- Passwords and security
- Media
- Maps and locations
- Applications and features
- Documents and files
- Settings and options
Home > Support > BlackBerry Manuals & Help > BlackBerry Manuals > BlackBerry Smartphones > BlackBerry Q10 > How To BlackBerry Q10 Smartphone - 10.1
Media
Related information
Next topic: Video: Taking pictures
Previous topic: Video: BBM Video chat with screen share
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/50635/stb1363012380641.jsp | 2014-09-15T04:25:45 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.blackberry.com |
JBoss.orgCommunity Documentation either '#' or '//'. The parser will ignore anything in the line after the comment symbol. Example:
rule "Testing Comments" when # this is a single line comment // this is also a single line comment eval( true ) # this is a comment in the same line of a pattern then // this is a comment inside a semantic code block # this is another comment in a semantic code block end 5.3.
1: package org.drools; 5.6.
1: package org.drools; 5.7.
1: package nesting; 2: dialect "mvel" 3: 4: import org.drools.Person 5: import org.drools atttribute .
Example 5.17. 5.18. 5.19. 5.20. 5.21. Rule Syntax Overview
rule "<name>" <attribute>* when <conditional element>* then <action>* end
Example 5.22. 5.23.. In other words, the first rule in an activation-group to fire will cancel the other rules' activations, suport both interval and cron based timers, which replace the now deprecated duration attribute.
Example 5.25. 5.26. A Cron Example
rule "Send SMS every 15 minutes"
timer (cron:* 0/15 * * * ?)
when
$a : Alarm( on == true )
then
channels[ "sms" ].insert( new Sms( $a.mobileNumber, "The alarm is still on" );
end
Calendars are used to control when rules can fire. The Calendar API is modelled on Quartz:
Example 5.27. 5.29. 5.30. 5.31. 5.32. (seperates occurance, and constrain to the same value of the bound field for sequence occurances.
Comperable
properties.
Matches a field against any valid Java Regular Expression. Typically that regexp is a string literal, but variables that resolve to a valid regexp are also allowed.
Like in Java, regular expressions written as string literals need to escape
'
\'.
Up to Drools 5.1 there was a configuration to support non-escaped regexps from Drools 4.0 for backwards compatilibity. From Drools 5.2 this is no longer supported.
Only applies on
String properties.
The operator returns true if the String does not match the
regular expression. The same rules apply as for the
matches operator. Example:
Only applies on
String properties.
The operator
contains is used to check
whether a field that is a Collection or array contains the specified
value.
Example 5.36. Contains with Collections
CheeseCounter( cheeses contains "stilton" ) // contains with a String literal CheeseCounter( cheeses contains $var ) // contains with a variable
Only applies on
Collection
properties.
The operator
not contains is used to check
whether a field that is a Collection or array does not
contain the specified value.
Example 5.37.
array.
Person( father memberOf parents )
The operator
not memberOf is used to check whether a field is not a member of a
collection or array.
Example 5.39. Literal Constraint with Collections
CheeseCounter( cheese not memberOf $matureCheeses )
Person( father not memberOf childern )
This operator is similar to
matches, but it
checks whether a word has almost the same sound (using English
pronunciation) as the given value. This is based on the Soundex
algorithm (see).
Example 5.40. 5.41..
This example will find all male-female pairs where the male is 2 years older than the female; the variable
age is auto-created in the second pattern by the autovivification process.
Example 5.42. Return Value operator
Person( girlAge : age, sex = "F" ) Person( eval( age == girlAge + 2 ), sex = 'M' ) // eval() is actually obselete posional consrain against that using unification; these are referred to as input arguments. If the binding does not yet exist, it will create the declaration binding it to the field represented by the position argument; these are referred to as output arguments. 5.43. 5.45. 5.47. 5.48. Single Pattern Forall
rule "All Buses are Red" when forall( Bus( color == 'red' ) ) then # all asserted Bus facts are red end
Another example shows multiple patterns inside the
forall:
Example 5.49. 5.50. extremely hard to maintain, and frequently leads to code duplication. Accumulate functions are easier to test and reuse.
The accumulate CE is a very powerful CE, but it gets real declarative and easy to use when using predefined functions that are known as Accumulate Functions. They work exactly like accumulate, but instead of explicitly writing custom code in every accumulate CE, the user can use predefined code for common operations.
For instance, a rule to apply a 10% discount on orders over $100 could be written in the following way, using Accumulate Functions:
rule "Apply 10% discount to orders over US$ 100,00" when $order : Order() $total : Number( doubleValue > 100 ) from accumulate( OrderItem( order == $order, $value : value ), sum( $value ) ) then # apply discount to $order end
In the above example, sum is an Accumulate Function and will sum the $value of all OrderItems and return the result.
Drools ships with the following built-in accumulate functions:
average
min
max
count
sum
collectList
collectSet
These common functions accept any expression as input. For instance, if someone wants to calculate the average profit on all items of an order, a rule could be written using the average function:
rule "Average profit" when $order : Order() $profit : Number() from accumulate( OrderItem( order == $order, $cost : cost, $price : price ) average( 1 - $cost / $price ) ) then # average profit for $order is $profit end
Accumulate Functions are all pluggable. That means that if
needed, custom, domain specific functions can easily be added to the
engine and rules can start to use them without any restrictions. To
implement a new Accumulate Functions all one needs to do is to
create a Java class that implements the
org.drools Accumulate.base.accumulators.AverageAccumulateFunction" is the fully qualified name of the class that implements the function behavior.
The use of accumulate with inline custom code is not a good practice for several reasons, including dificulties on maintaining and testing rules that use them, as well as the inability of reusing that code. Implementing your own accumulate functions is very simple and straightforward, they are easy to unit test and to use. This form of accumulate is supported for backward compatibility only. 5.51. A modify statement
rule "modify stilton" when $stilton : Cheese(type == "stilton") then modify( $stilton ){ setPrice( 20 ), setAge( "overripe" ) } end 5.52. Query People over the age of 30
query "people over the age of 30" person : Person( age > 30 ) end
Example 5 5.54. the declared type order in the type declaration matches the argument position. But it possible to override these using the @position annotation..
Queries can now call other queries, this combined with optional query arguments provides deriviation query style backward chaining. Positional and mixed positional/named type is supported. Literal expressions can passed as query arguments, but at this stage you cannot mix expressions with variables. Here s an example of a query that calls another query. Note that 'z' here will always be an 'out' variable. The '?' symbol means the query is pull only, once the results are returned you will not received.runtime.rule.Variable.v - note you must use 'v' and not an alternative instanceof suported: accomodating 5.56. 5.57. 5.58. attachements,. All the features are available with XML that are available to DRL. 5.59. A rule in XML
<?xml version="1.0" encoding="UTF-8"?>
<package name="com.sample"
xmlns=""
xmlns:xs=""
xs:schemaLocation=" drools 5.60.. | http://docs.jboss.org/drools/release/5.2.0.Final/drools-expert-docs/html/ch05.html | 2014-09-15T04:12:54 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.jboss.org |
Create a Custom Access Policy
You can create a custom access policy for a specific Schema Registry entity, specify an access type, and add a user or user-group to the policy.
- The schema registry entity that the user needs access to.
- Whether the user requires all objects in the entity or specific objects.
- Whether the user needs read, view, edit, or delete permissions to the entity.
- If there are any IP addresses to include or exclude from the user's access.
With a custom policy you can specify the Schema Registry entity and the type of access the user requires.
- Go to the Ranger List of Policies page.
- Click Add New Policy.
The Create Policy page appears.
- Enter a unique name for the policy.
- Optionally, enter a keyword in the Policy Label field to aid in searching for a policy.
- Select a Schema Registry entity. You can choose the Schema Registry service, schema group, or serde. Then, do one of the following tasks:
- If you want the user to access all the objects in the entity, enter
*.
- If you want to specify the objects in the entity that a user can access, enter the name of the object in the text field.
- Optionally, enter a description.
- In the Allow Conditions section, add the user or group to the respective Select User or Select Group field.
- Optionally, from the Policy Conditions field, enter the appropriate IP address.
- From the Permissions field, select the appropriate permission.
- Click Save. | https://docs.cloudera.com/runtime/7.2.8/schema-registry-security/topics/schemaregistry-security-create-custom-access-policy.html | 2021-04-10T11:44:20 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.cloudera.com |
Set the scale of the character in the stage vehicle.
Context
This command should be called between calls to AddStage and CloseStage.
Additionally, this should only be called on characters previously added to the car with AddStageVehicleCharacter.
Syntax
SetStageVehicleCharacterScale( car, character, scale );
Game.SetStageVehicleCharacterScale( car, character, scale )
- car: The name of the car the character is in.
- character: The name of the character.
- scale: Set the scale of the character.
- Defaults to the car's character scale set with SetCharacterScale.
Examples
AddStage(); AddStageVehicle("cVan","m1_cVan","NULL","cVan.con"); AddStageVehicleCharacter("cVan", "bart", "sl", 0); // Big Bort SetStageVehicleCharacterScale("cVan","bart", 3); AddObjective("dummy"); CloseObjective(); CloseStage();
Game.AddStage() Game.AddStageVehicle("cVan","m1_cVan","NULL","cVan.con") Game.AddStageVehicleCharacter("cVan", "bart", "sl", 0) -- Big Bort Game.SetStageVehicleCharacterScale("cVan","bart", 3) Game.AddObjective("dummy") Game.CloseObjective() Game.CloseStage()
Notes
No additional notes.
Version History
1.19
Added this command. | https://docs.donutteam.com/docs/lucasmodlauncher/hacks/asf/mfk-commands/setstagevehiclecharacterscale | 2021-04-10T11:11:16 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.donutteam.com |
SMS Inbox
3rd-party customer applications need to access to SMS messages at anytime without worrying how the resources are managed.
Legato provides an SMS Inbox service to allow apps to receive SMS messages through their own, private message box, without:
- having to manage SIM or modem memory;
- conflicting with other applications that also receive SMS messages;
- missing messages while being updated or restarted.
The SMS Inbox Service handles the resource arbitration for the user app: the message reception is always guaranteed, for example the user app doesn't have to worried about freeing space in the SIM (or device's storage area) when it is full.
In fact, at device's startup or when a SIM is inserted, the SIM content is copied into the "Inbox Message Storage" folder allocated in the root file system of the device. Then, the process frees automatically the SIM content. Moreover, every new received SMS message is automatically copied into the "Inbox Message Storage" folder and deleted from the SIM. This mechanism keeps the SIM always empty in order to guarantee the reception of SMS messages.
This process is the same when the SMS message storage is the device's storage area (ME - Mobile Equipment).
The message box is a persistent storage area. All files are saved into the file system in the directory /mnt/flash/smsInbox.
The creation of SMS inboxes is done based on the message box configuration settings (cf. Configuration tree section). This way, the message box contents will be kept up to date automatically by the SMS Inbox Service, even when the user app is slow to start, is stopped while it is being updated, or is being restarted to recover from a fault.
A message box works as a circular buffer, when the message box is filled, the older messages are deleted to free space for new messages. But, the application can also explicitly delete messages if it doesn't need them anymore.
IPC interfaces binding
All the functions of this API are provided by the smsInboxService application service.
Here's a code sample binding to SMS Inbox services:
bindings: { clientExe.clientComponent.le_smsInbox1 -> smsInboxService.le_smsInbox1 }
- Note
- By default, smsInboxService starts manually. To start it automatically, the user can remove the option from the smsInboxService.adef file.
A second message box (named le_smsInbox2) can be used by another application. These 2 message boxes are used independently. All functions of this second message box are prefixed by le_smsInbox2 (instead of le_msmInbox1). The user can implement other message boxes based on le_smsInbox1 and le_smsInbox2 model.
Initialise a message box
Use the API le_smsInbox1_Open() to open a message box for access.
Receiving a message
To receive messages, register a handler function to obtain incoming messages. Use le_smsInbox1_AddRxMessageHandler() to register that handler.
The handler must satisfy the following prototype:
typedef void (*le_smsInbox1_RxMessageHandlerFunc_t)(uint32_t msgId,void* contextPtr)
If a succession of messages is received, a new Message reference is created for each, and the handler is called for each new message.
Uninstall the handler function by calling le_smsInbox1_RemoveRxMessageHandler().
- Note
- le_smsInbox1_RemoveRxMessageHandler() function does not delete the Message Object. The caller has to delete it with le_smsInbox1_DeleteMsg().
Use the following APIs to retrieve message information and data from the message object:
- le_smsInbox1_GetImsi() - get the IMSI of the message receiver SIM if it applies.
- le_smsInbox1_GetFormat() - determine if it is a binary or a text message.
- le_smsInbox1_GetSenderTel() - get the sender's Telephone number.
- le_smsInbox1_GetTimeStamp() - get the timestamp sets by the Service Center.
- le_smsInbox1_GetMsgLen() - get the message content length.
- le_smsInbox1_GetText() - get the message text.
- le_smsInbox1_GetBinary() - get the message binary content.
- le_smsInbox1_GetPdu() - get the PDU message content.
- Note
- For incoming SMS Inbox, format returned by le_smsInbox1_GetFormat() is never LE_SMSINBOX1_FORMAT_PDU.
Getting received messages
Call le_smsInbox1_GetFirst() to get the first message from the inbox folder, and then call le_smsInbox1_GetNext() to get the next message.
Call le_smsInbox1_IsUnread() to know whether the message has been read or not. The message is marked as "read" when one of those APIs is called: le_smsInbox1_GetImsi(), le_smsInbox1_GetFormat(), le_smsInbox1_GetSenderTel(), le_smsInbox1_GetTimeStamp(), le_smsInbox1_GetMsgLen(), le_smsInbox1_GetText(), le_smsInbox1_GetBinary(), le_smsInbox1_GetPdu().
To finish, you can also modify the received status of a message with le_smsInbox1_MarkRead() and le_smsInbox1_MarkUnread().
- Note
- The message status is tied to the client app.
Deleting a message
le_smsInbox1_DeleteMsg() deletes the message from the folder. Message is identified with le_smsInbox1_MsgRef_t object. The function returns an error if the message is not found.
Close a message box
Use the API le_smsInbox1_Close() to close a message box (the message box is still exist and can be re-opened and retrieve later all the messages). | https://docs.legato.io/19_11_5/c_smsInbox.html | 2021-04-10T11:50:53 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.legato.io |
3.1.4.15 R_DnssrvComplexOperation3 (Opnum 14)
The R_DnssrvComplexOperation3 method is used to invoke a set of server functions specified by the caller. These functions generally return more complex structures than simple 32-bit integer values, unlike the operations accessible through R_DnssrvOperation (section 3.1.4.1). The DNS server SHOULD<289> implement R_DnssrvComplexOperation2 (section 3.1.4.8).
All parameters are as specified by the R_DnssrvComplexOperation method (section 3.1.4.3) with the following exceptions:
LONG R_DnssrvComplexOperation3( [in] DWORD dwClientVersion, [in] DWORD dwSettingFlags, [in, unique, string] LPCWSTR pwszServerName, [in, unique, string] LPCWSTR pwszVirtualizationInstanceID, [in, unique, string] LPCSTR pszZone, [in, unique, string] LPCSTR pszOperation, [in] DWORD dwTypeIn, [in, switch_is(dwTypeIn)] DNSSRV_RPC_UNION pDataIn, [out] PDWORD pdwTypeOut, [out, switch_is(*pdwTypeOut)] DNSSRV_RPC_UNION * ppDataOut );
dwClientVersion: The client version in DNS_RPC_CURRENT_CLIENT_VER (section 2.2.1.2.1) format.
dwSettingFlags: Reserved for future use only. This field MUST be set to zero by clients and ignored by servers.
pwszVirtualizationInstanceID: A pointer to a null-terminated Unicode string that contains the name of the virtualization instance configured in the DNS server. For operations specific to a virtualization instance, this field MUST contain the name of the virtualization instance. If the value is NULL, then the API is as specified in R_DnssrvComplexOperation2 (section 3.1.4.8). Apart from the EnumVirtualizationInstances operation (section 3.1.4.3), R_DnssrvComplexOperation3 changes the behavior of the following operations: EnumZoneScopes, EnumZones2, and EnumZones (section 3.1.4.3), if these operation are called with R_DnssrvComplexOperation3 and a non-NULL pwszVirtualizationInstanceID, they are performed under the given virtualization instance.
When processing this call, the server MUST perform the same actions as for the R_DnssrvComplexOperation2 method (section 3.1.4.8) with the following exceptions: for output structure types with multiple versions, the server MUST return the structure type selected by dwClientVersion. In the event the dwClientVersion is greater than the server version, the server MUST return the highest version number known. If unable to perform the operation, returns error EPT_S_CANT_PERFORM_OP (0x000006D8) ([MS-ERREF] section 2.2). | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-dnsp/4edb00e2-9dee-4e26-9584-8933f3299ebe | 2021-04-10T13:10:13 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.microsoft.com |
Using Custom Object Loaders
Last updated Mar 13th, 2019 | Page history | Improve this page | Report an issue
Change the Way xPDO Loads Data¶
You can provide any of the following static methods in your custom
xPDOObject derivative classes to override their behavior, including in driver-specific classes:
- _loadRows
- _loadInstance
- _loadCollectionInstance
- load
- loadCollection
- loadCollectionGraph
This is done with the help of the
xPDO::call() method.
Overriding these methods allows you to implement additional behavior or completely change the behavior of loading your table objects via the object and collection methods provided by xPDO and xPDOObject. For instance, it can be used to perform security checks or to add i18n processing before allowing a row to be loaded.
< 2.0¶
Prior to 2.0.0-pl, you can specify custom loader classes that extend or override the behavior of the default object loaders by specifying these classes in the xPDO options array when instantiating an xPDO instance.
$xpdo = new xPDO($dsn, $username, $password, array( xPDO::OPT_LOADER_CLASSES => array('myCustomLoaderClass') )); | https://docs.modx.org/3.x/en/extending-modx/xpdo/custom-object-loaders | 2021-04-10T11:50:27 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.modx.org |
Health Log Analytics release notes The business-impacting issue. Health Log Analytics is a new ServiceNow® ITOM Health application in the Quebec release. Health Log Analytics highlights for the Quebec release. Important: Health Log Analytics is available in the ServiceNow Store. For details, see the "Activation information" section of these release notes. Health Log Analytics features. Activation information Install Health Log Analytics. Related ServiceNow applications and features Event Management The Event Management application uses events generated by Health Log Analytics. When Health Log Analytics detects an anomaly in your log data, it sends an event to Event Management. Event Management receives the event and generates an alert based on event and alert management rules. MID Server Health Log Analytics uses the ServiceNow® MID Server product to stream logs to the ServiceNow instance. | https://docs.servicenow.com/bundle/quebec-release-notes/page/release-notes/it-operations-management/health-log-analytics-rn.html | 2021-04-10T12:23:11 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.servicenow.com |
Bridge¶
xBlog doesn’t have its own database. It prompts the content of fields of foreign tables and extensions to the frontend.
These fields must registered. A lot of default fields are registered by default.
There is something like a bridge, to connect fields of foreign databases with the frontend.
You are controlling registered fields by the Constant Editor. Please refer to
Look for the sections ‘fields’.
If there is a need for a new field with a new workflow, you should register it. Please refer to | https://docs.typo3.org/p/netzmacher/xblog/master/en-us/Developers/GoodToKnow/Bridge/Index.html | 2021-04-10T11:23:07 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.typo3.org |
Date: Sat, 10 Apr 2021 12:51:01 +0100 (BST) Message-ID: <1416439459.31477.1618055461005@OMGSVR86> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_31476_1079504068.1618055461005" ------=_Part_31476_1079504068.1618055461005 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This guide contains instructions for using Vicon Nexus. It explains conf= iguring your Vicon system within Nexus and the basic tasks that make up the= everyday Nexus workflow. It assumes you have already installed and license= d Nexus and set up your Vicon system hardware. If you need information abou= t these procedures, see Installing and licensing Vicon Nexus and/or the Vicon docum= entation that was supplied with your hardware, or for help with how to conn= ect up your Vicon system, see Vicon system setup information. You can also contact = Vicon Support.
Videos of many of the procedure=
s described in this guide, including additional tips and examples, are avai=
lable from the N=
exus 2 How To playlist and the Vicon Nexus 2 Tutorials playlist on YouTu=
be, beginning with system calibration. | https://docs.vicon.com/exportword?pageId=94470156 | 2021-04-10T11:51:01 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.vicon.com |
Types of recording
Time recording
Add the following line of code where you want to start a recording:
[[Megacool sharedMegacool] startRecording:self.view];
The recording will by default keep a buffer of the last 5 seconds. Old frames get overwritten by new frames. See Max Frames for editing the default setting.
To end a recording, add the following line of code:
[[Megacool sharedMegacool] sharedMegacool] registerScoreChange]; // without intensity[[Megacool sharedMegacool]!
[[Megacool sharedMegacool] startRecording:self.viewwithConfig:^(MCLRecordingConfig *config) {config.overflowStrategy = kMCLOverflowStrategyHighlight;}]; sharedMegacool] startRecording:view withConfig:^(MCLRecordingConfig *config) {config.overflowStrategy = kMCLOverflowStrategyHighlight;config.peakLocation = 0.5;}];
Frame-by-frame recording
Capture each frame of the recording, e.g. on each user tap:
[[Megacool sharedMegacool] captureFrame:self.view];
By default max 50 frames will be captured. If it exceeds 50 frames, old frames are overwritten
until
stopRecording gets called to reset it. Add the following line of code to stop
a recording:
[[Megacool sharedMegacool] sharedMegacool]captureFrame:self.viewwithConfig:^(MCLRecordingConfig *config) {config.overflowStrategy = kMCLOverflowStrategyTimelapse;}];
If you want to guarantee that a frame is added to the timelapse (like an end screen or final state
of the game), pass a
forceAdd:YES parameter to
captureFrame like this:
[[Megacool sharedMegacool]captureFrame:self.viewwithConfig:nilforceAdd:YES];
Important: Only pass this parameter once! Normally this function returns as soon as it has
spawned a thread to do the actual capture, but if you pass
forceAdd:YES sharedMegacool] shareScreenshot:self.view];
If you want to show a preview of the screenshot before sharing, you need to use a regular recording with max frames set to 1.
Rendering preview
This creates a rendered preview of the GIF that can be displayed to the user before sharing.
Call
startAnimating /
stopAnimating to play/pause the animation of the GIF. But you're free to change the size later. If you want to do something advanced with this preview, please review the UIImageView documentation.
MCLPreview *preview = [[Megacool sharedMegacool] getPreview];if (preview) {[self.view addSubview:preview];[preview startAnimating];} else {NSLog(@"Previewing GIF failed, not showing the preview");}
Customize position and size of the rendered GIF preview that can be showed before sharing:
MCLPreview *preview = [[Megacool sharedMegacool]getPreviewWithConfig:^(MCLPreviewConfig *config) {config.previewFrame = CGRectMake(100, 100, 200, 300);}];
Recording frame rate
Set numbers of frames to record per second. Recommended range: 1 - 10. The default is 10:
[[Megacool sharedMegacool] startRecording:view withConfig:^(MCLRecordingConfig *config) {config.frameRate = 8;}];
Note: This only applies to recordings made with
startRecording, not
captureFrame. sharedMegacool] startRecording:view withConfig:^(MCLRecordingConfig *config) {config.playbackFrameRate = 11;}];
Max frames
Max number of frames to record, default is 50 frames. Setting this to 1 will create a screenshot instead of a GIF. Change the maximum amount of captured frames by setting:
[[Megacool sharedMegacool] startRecording:view withConfig:^(MCLRecordingConfig *config) {config.maxFrames = 25;}];
Delay last frame
Set a delay (in milliseconds) for the last frame of the GIF. The default is 1 second to give a natural break before the GIF loops again.
[[Megacool sharedMegacool] startRecording:view withConfig:^(MCLRecordingConfig *config) {config.lastFrameDelay = 2000;}];
Last frame overlay
With last frame overlay you can add a transparent image on top of the last GIF frame.
[[Megacool sharedMegacool] startRecording:view withConfig:^(MCLRecordingConfig *config) {config.lastFrameOverlay = [UIImage imageNamed:@"overlay.png"];}];.
[Megacool sharedMegacool].gifColorTable = kMCLGIFColorTableAnalyzeFirst;
captureFrame or
startRecording. To save a recording you can set
[Megacool sharedMegacool].keepCompletedRecordings = YES;
To save multiple recordings you can set a
recordingId in the config
object at
startRecording:withConfig or
captureFrame:withConfig. The
recordingId can be anything you prefer, for instance a name of the current
level like
"level3".
A recording with a given
recordingId can be resumed until
stopRecording is called. After calling
stopRecording you can either share
the recording or start a new one from scratch with the same
recordingId.
[Megacool sharedMegacool].keepCompletedRecordings = YES;[[Megacool sharedMegacool]startRecording:self.viewwithConfig:^(MCLRecordingConfig *config) {config.recordingId = @"level3";}];
Pause recording
When you want to pause a recording that can be continued later, you call
pauseRecording.
[[Megacool sharedMegacool] pauseRecording];
Delete recording
deleteRecording will remove any frames of the recording in memory and on disk.
Both completed and incomplete recordings will take space on disk, thus particularly if you're using
[Megacool sharedMegacool].keepCompletedRecordings = YES; you might want to provide an interface to your users for
removing recordings they don't care about anymore to free up space for new recordings.
[[Megacool sharedMegacool] deleteRecording:@"level3"];
Show a GIF from a specific recording
To show a GIF from a specific recording:
MCLPreview *preview = [[Megacool sharedMegacool]getPreviewWithConfig:^(MCLPreviewConfig *config) {config.recordingId = @"level3";}];if (preview) {[self.view addSubview:preview];[preview startAnimating];}
Share a GIF from a specific recording
To share a GIF from a specific recording you pass the
RecordingId as a parameter to
presentShareWithConfig.
[[Megacool sharedMegacool] presentShareWithConfig:^(MCLShareConfig *config) {config.recordingId = @"level3";}];
Note:
presentShare calls
pauseRecording automatically, so
if you haven't called
stopRecording before next time you call
startRecording, the recording will continue from where you paused. | https://docs.megacool.co/learn/recording/ios?language=objc | 2021-04-10T11:32:55 | CC-MAIN-2021-17 | 1618038056869.3 | [array(['/static/img/framerate5.gif', None], dtype=object)
array(['/static/img/framerate10.gif', None], dtype=object)
array(['/static/img/framerate25.gif', None], dtype=object)
array(['/static/img/playback5.gif', None], dtype=object)
array(['/static/img/framerate10.gif', None], dtype=object)
array(['/static/img/playback25.gif', None], dtype=object)
array(['/static/img/maxframes25.gif', None], dtype=object)
array(['/static/img/framerate10.gif', None], dtype=object)
array(['/static/img/maxframes100.gif', None], dtype=object)
array(['/static/img/framedelay05.gif', None], dtype=object)
array(['/static/img/framerate10.gif', None], dtype=object)
array(['/static/img/framedelay2.gif', None], dtype=object)
array(['/static/img/screen.png', None], dtype=object)
array(['/static/img/penguinTransparent.png', None], dtype=object)
array(['/static/img/penguinOverlay.png', None], dtype=object)] | docs.megacool.co |
With Prebid Mobile, you’ll need to setup line items to tell your ad server how much money the “bidder” demand is worth to you. This process is done with an ad server key-value pair:
hb_pb, which stands for “header bidding price bucket”.
Example:
Prebid Mobile is going to call Prebid Server which calls your bidders for their price, then passes it into your ad server on the query-string. You want to target this bid price with a line item that earns you the same amount if it serves.
If you had 1-line item for every bid at a penny granularity of $0.01, $0.02, $0.03, …, 1.23, …, $4.56 you’d need 1,000 line items just to represent bids from $0-$10. We call this the “Exact” granularity option.
Creating 1,000 line items can be a hassle, so publishers typically use price buckets to represent price ranges that matter. For example, you could group bids into 10 cent increments, so bids of $1.06 or $1.02 would be rounded down into a single price bucket of $1.00.
The SDK itself doesn’t deal with granularity. Instead, these details are set up in Prebid Server and your ad server. The specific details about how to set up the price granularity will differ for each Prebid Mobile managed service company – check with your provider to get details for their system.
"low": $0.50 increments, capped at $5 CPM
"med": $0.10 increments, capped at $20 CPM (the default)
"high": $0.01 increments, capped at $20 CPM
"auto": Applies a sliding scale to determine granularity as shown in the Auto Granularity table below.
"dense": Like
"auto", but the bid price granularity uses smaller increments, especially at lower CPMs. For details, see the Dense Granularity table below.
"custom": If none of the above apply, your service provider should provide a way to establish arbitrary price buckets.
Notes:
Please contact your Prebid Mobile host company for details about how to implement granularity. | https://docs.prebid.org/prebid-mobile/adops-price-granularity.html | 2021-04-10T11:09:25 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.prebid.org |
The.Components installed with the email clientSeveral types of components are installed with activation of the Email Client (com.glide.email_client) and Email Client Template (com.glide.email_client_template) plugins, including tables and user roles.Email client interfaceThe instance's email client interface looks like a standard email interface, which contains a toolbar for text formatting and adding attachments. Email client configurationsUse email client configurations to manage the behavior of your email client. Each configuration consists of different email controls for setting allowable email recipients and email addresses.Customize the email clientThe email client has default properties and values that you can customize to suit your needs.Control access to the email clientYou can control access to the email client by changing an ACL rule.Create an email client templateYou can create a different template for each table that uses the email client.SMS delivery with the email clientA property is available that lets the user select an option to send a notification via SMS.Composing emails with quick messagesInsert predefined content into the message body of emails that you send from the email client.Email icon displayYou can use access control rules to hide or display the email icon on forms. | https://docs.servicenow.com/bundle/newyork-servicenow-platform/page/administer/notification/concept/c_EnableTheEmailClient.html | 2021-04-10T11:29:58 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.servicenow.com |
The Plug-in Gait biomechanical model calculates joint kinematics and kinetics from the XYZ marker positions and specific subject anthropometric measurements. As with all motion capture and analysis in Vicon Nexus, the information about the marker set as well as the generic relationship between the physical markers attached to a subject is contained in a labeling skeleton template (.vst) file. This template defines a generic model of the chosen marker set.
You create a subject in Nexus based on a specific template file and then you calibrate the generic marker set model defined in the template to your particular subject. The calibration process creates a labeling skeleton (.vsk) file which is strictly specific to your subject. Nexus then uses this subject-specific .vsk file to automatically label dynamic motion capture trials for that patient both in real time and in post-processing.
The labeling skeleton templates included in the supplied .vst files are used only to define the marker set and to enable Nexus to perform automatic labeling. They are not biomechanical models that will output valid joint angles or other kinematic/kinetic variables. To derive valid kinematics or kinetics, use Plug-in Gait or create your own biomechanical model using Vicon BodyBuilder, Python or MATLAB.
The following table lists the predefined Plug-in Gait labeling skeleton templates (.vst files) supplied with Nexus, identifying the portion of the body it applies to for gait analysis.
An extended version of Plug-in Gait that defines additional markers for foot modeling is available. For information about the Oxford Foot Model plug-in, contact Vicon Support.
These Plug-in Gait template files are installed under the Nexus ModelTemplates folder (by default, C:\Program Files (x86)\Vicon\Nexus2.#\ModelTemplates). If you create a template of your own, store it in this location, so that it will be immediately available for selection from the drop-down list when you create a subject node based on a predefined template file. (If you choose not to store it in this location, you can instead browse to the relevant location.) | https://docs.vicon.com/pages/viewpage.action?pageId=98963551 | 2021-04-10T12:03:58 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.vicon.com |
Go to in Microsoft Cognitive Services:.
Click on
Get API Key and subscribe on
Bing Autosuggest API. Follow the on-screen instructions to register for a subscription key.
3. Find your
Subscription key for Bing Autosuggest API.
You can set several keys, so the plugin will use a random key for each request. | https://ce-docs.keywordrush.com/modules/content/relatedkeywords | 2021-04-10T12:16:11 | CC-MAIN-2021-17 | 1618038056869.3 | [] | ce-docs.keywordrush.com |
- /nagios/pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$ }
You have to specify an additional Extended Info Definition for every service.
Since nagios 3.0 the action_url-directive has be moved to the host or service definition. The serviceextinfo and hostextinfo definitions are deprecated. This way the definition of URLs to the PNP-interface has been simplified.
First two nagios templates are defined. If you used the Nagios quickstart installation guides you can append these lines to templates.cfg:
define host { name host-pnp register 0 action_url /nagios/pnp/index.php?host=$HOSTNAME$ } define service { name srv-pnp register 0 action_url /nagios/pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$ }.
Starting with PNP 0.4.13 you can integrate PNP into Nagios in a way that you have current graphs without clicking any icons. This can be accomplished using the CGI Includes which allow us to include JavaScript code in the status detail view ( status.cgi ).
Prerequisites:
/usr/local/nagios/share/pnp/include/jscontains the files
prototype.jsund
overlib_mini.js. Depending on the distribution the share folder may be located elsewhere. If in doubt have a look at the alias definition in the Nagios configuration file of your web server.
ssifrom the contrib folder of the PNP package was copied to /usr/local/nagios/share. Please review the paths in the file status-header.ssi (both js-files and the ajax.php).
Definition:
define host { name host-pnp register 0 action_url /nagios/pnp/index.php?host=$HOSTNAME$' onmouseover="get_g('$HOSTNAME$','_HOST_')" onmouseout='clear_g() } define service { name srv-pnp register 0 action_url /nagios/pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$' onmouseover="get_g('$HOSTNAME$','$SERVICEDESC$')" onmouseout='clear_g() }
After a restart of Nagios (after modifying the definitions) the result might look like this: | https://docs.pnp4nagios.org/pnp-0.4/webfe | 2021-04-10T12:39:08 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.pnp4nagios.org |
Account Administration
- The Admin role has additional access to create new users, as well as to view the user file uploads and batch jobs.
- The User role allows you to view only your jobs and doesn't provide any user management-related options.
- The Super Admin role is the user who first signs up for the account and provides billing information. This is the user who has got all admin rights as well as access to billing & plans section. Admin and User role users can’t access billing & plans page. Super Admin is the main account administrator.
Viewing User Uploads and Batch Jobs
The MapMarker Dashboard provides an admin the same functionality as a user has access to as well as access to all the user file uploads and batch jobs.
To view the MapMarker Dashboard:
- On the MapMarker Dashboard header, click Dashboard.The MapMarker Dashboard page displays.
- See Your MapMarker Dashboard for more information on the functions that you can perform on this page.
Managing Accounts
The Users page allows you to view user information, as well as create and delete user accounts.
Viewing Users
To view users:
On this page, you can view the users, their email address, their last file upload and their role/account permission level.
This page also allows you to create and delete user accounts.
Creating a New User
To add a new user:
- On the Users page, click the Create New User button.The Create New User page displays.
- Enter the Account Information for the user: First Name, Last Name and Email.
- Use the Access drop-down menu and select either Admin or User permission.
- To complete, click Create User. To abort the process and return to the Users page, click Back or Cancel.
Deleting a User
To delete a user:
- In the table row with the user account, click
.
- On the Confirm Delete User pop-up window, click OK. To abort the process and return to the Users page, click Cancel.
Billing and Plans
This section describes the detailed information about the subscribed plan and billing invoice.
- Click the username on the top-right corner of the page header.
- Click Billing & Plans.
Plan Subscription
Located on the left of the page, this section displays information about the current plan. Information includes the name of the plan, total geocodes in the plan, and the remaining geocodes. User can modify their plan using the Upgrade Plan button.
Billing History
This section displays the billing history for last six months. Information displayed includes invoice number, bill date, paid date, bill amount, and paid amount. To view the invoice details, click View against the invoice entry.
Billing Info
The billing information describing the card details are displayed below the billing history. Edit the relevant information and click the Update Card button to save the changes. | https://docs.precisely.com/docs/sftw/mapmarker/main/en-us/webhelp/mmo/AccountManagement/Admin_Intro.html | 2021-04-10T12:12:57 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.precisely.com |
Analyzer user accounts and roles
Each user must be logged on through an account, and each account is identified by the configuration parameters described in this topic:
The account policies represent the system default configuration. Users with the Security role can change the default policies.
User names
A user name uniquely identifies a single account, according to the account type:
- Locally managed accounts — The user name must consist of 1 to 64 alphanumeric characters, but cannot contain the @ symbol.
- LDAP-managed accounts — The user name must follow the rules of the Lightweight Directory Access Protocol (LDAP) server.
Roles
The role associated with a user account defines the level of access that the user has to the features on this device. For example, Administrators can create accounts, but Observers cannot.
User roles and access permissionsThe levels of access provided by the roles are cumulative; that is, starting from the most restricted role, access at each successive access level has the preceding level of access, as shown in the following matrix. Throughout this documentation, whenever a product feature or capability is attributed to a role, the feature or capability is also available in the higher access levels.
Roles and access matrix
Passwords
Passwords are initially set by an Administrator and can be updated by the account owner. For security protection, users with the Security role can configure the device to force users to change their passwords the first time they log on. Users with the Security role can also configure the device to expire passwords after a specified period of account inactivity. For more information about configuring stronger access policies, see Configuring access management policies and settings on the Analyzer or Collector.
Note
Password expiration does not affect the Security account or access to the CLI.
By default, the system applies simple password validation rules. The system checks such passwords only for length (minimum 6 characters).
If your organization requires stronger passwords, the Security role can enable the strict password rule. When the strict password rule is enabled, the system prompts users who try to log on with simple passwords to change their password.
A strict password must have:
- Minimum of 10 characters
- Two noncontiguous nonalphabetic characters from the following set:
0 1 2 3 4 5 6 7 8 9 ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ' | { } ~ `
A password can also contain a "space" character. All passwords are case sensitive.
Related topics
Adding or deleting a local account on the Analyzer and Collector
Configuring access management policies and settings on the Analyzer or Collector
Using LDAP authentication and authorization for the Analyzer or Collector | https://docs.bmc.com/docs/applicationmanagement/113/analyzer-user-accounts-and-roles-772581915.html | 2021-04-10T12:17:15 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.bmc.com |
App support
To make GIFs playable on as many apps as possible when shared, Megacool tailors sharing to each app. Since capabilities in apps varies greatly, we've created this overview of what to expect when sharing. Let us know if your favorite app is missing or you find any data that's incorrect.
Click on any app to learn how it behaves when sharing with Megacool. If you can't find an app here it might still be possible to share to it, these are just the ones we have either tested ourselves or that we are otherwise aware of. | https://docs.megacool.co/apps | 2021-04-10T11:34:04 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.megacool.co |
Refactor with Rename when the existing name of a symbol is ambiguous, misleading, misspelled, or non-standard. Rename changes the definition, declaration in C/C++, and all references within a project or throughout a solution. Rename optionally changes "references" in comments and strings.
Rename is available for namespaces, classes, methods, fields, variables and method parameters; and can be invoked from a definition, reference, or declaration of a symbol.
Visual Studio 2015 and newer
Although Rename built into Visual Studio is often sufficient, the version in Visual Assist is typically faster.
Access
Place the caret on a symbol and use the default shortcut (Shift+Alt+R) for Rename, or select Rename from the Quick Action and Refactoring menu (Shift+Alt+Q).
Enter the new name for your symbol, and select the scope of the rename. You can preview references before committing to the refactoring.
If you manually rename a symbol at its definition, Visual Assist may recognize the edit and offer to rename the references immediately, with or without a preview in the Rename dialog.
Access to Rename is also available in several tool windows, including the Hovering Class Browser of the VA View, and the VA Outline.
Rename in Multiple Projects
You can rename throughout a solution via a checkbox in the Rename dialog. Multiple project nodes appear in the Rename list when references are found in multiple projects.
When not renaming in all projects, a rename is restricted to the project that contains the current file.
The Rename dialog opens with the scope of the previous Rename, Change Signature, or Find References.
Multiple References
Highlight References
References found can appear bold and highlighted in the Rename list. By default, references where a symbol is modified are highlighted in MistyRose, whereas references where a symbol is read are highlighted in LightCyan.
Rename in an Inheritance Chain
Visual Assist can rename inherited references from parent classes and overridden references in child classes. When using this setting, virtual methods are renamed up and down the class hierarchy
Rename Comment and String References
You can broaden a rename to include hits in relevant comments and strings. Visual Assist uses heuristics to determine which hits are relevant, e.g. proximity to code references.
Including and Excluding References
Use the context menu of the Rename dialog to include or exclude all references of a specific type.
Deselect entries in the Rename list to exclude specific references—at the scope of project, file, or call-site.
Set broader scope, e.g. through the inheritance chain or in comments, before excluding specific entries. The Rename list is refreshed when you change a setting that affects scope; you will lose your exclusions when you change a setting.
View References
Double-click entries in the results list to view references in your source. If necessary, Visual Assist opens the file containing the reference, but the file does not get focus as long as the Rename dialog is open.
Stop a Rename Search
You can prematurely terminate lengthy searches by clicking "Stop" in a Rename dialog, or by via the shortcut to cancel a build (Ctrl+Break).
Stop a search prematurely only to change the scope of a rename, or abort the rename entirely. Committing to a rename after a stopped search will likely break your solution.
Show Projects
Projects nodes appears in the results list if a rename searched all projects and found references in more than one project.
You can make project nodes appear for all renames via the options dialog for Visual Assist.
Rename Files
If you rename a symbol whose name is identical to that of a file, e.g. class foo in foo.cpp, Visual Assist prompts you to rename the like-named files after the rename of the symbol is complete. If directed, Visual Assist launches Rename Files, where you have the opportunity to preview the list of files to be renamed.
Shared Scope Setting
The setting to rename in all projects is shared with the Find References and Change Signature commands. If you restrict Rename to the current project, the scope of the next Find References or Change Signature is restricted to the current project as well. If you broaden the scope of Rename to all projects, the next invocation of Find Signature or Change Signature will also search all projects. The shared setting prevents you from inadvertently refactoring references inconsistent with those you review using Find References.
Rename of System Symbols
Rename is available for system symbols, both for symbols defined in system include files, and for solution symbols defined using system types.
Rename of a system symbol changes its references in a project or solution such that they refer to a different system symbol. Rename does not change the definition of the system symbol. Fox example, rename of MessageBoxA can change references such that MessageBoxW is used instead of MessageBoxA.
Rename of a system type used in a project or solution changes changes references to that of a different system symbol. For example, rename of std:vector can change references such that std:set is used instead of std:vector.
Undo
Rename can be undone (Ctrl+Z), even when a change affects multiple files in multiple projects.
Visual C++ 6.0
Multiple Undos are required to revert a single rename.
Rename Versus Change Signature
In most instances, use Change Signature in lieu of rename to modify names of methods and their parameters. Although Rename is acceptable, Change Signature reaches deeper into parameter lists of declarations and definitions, and into the bodies of method implementations. | https://docs.wholetomato.com/default.asp?W154 | 2021-04-10T11:28:05 | CC-MAIN-2021-17 | 1618038056869.3 | [array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31388&sFileName=renameQuickMenu.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31389&sFileName=renameOpen.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31402&sFileName=renameAtDefinition.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31390&sFileName=renameScopeSolution.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31391&sFileName=renameMultipleRefsOneLine.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31392&sFileName=renameHighlight.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=46084&sFileName=viewColoring.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=46083&sFileName=highlight.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31395&sFileName=renameInheritence.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31396&sFileName=renameComment.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31401&sFileName=renameContextMenu.png',
None], dtype=object)
array(['https://wholetomato.fogbugz.com/default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31400&sFileName=renameExclude.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31397&sFileName=renameStop.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31398&sFileName=renameCancel.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=48133&sFileName=renameProjectsInResults.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=48134&sFileName=renameProjectsShow.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=32000&sFileName=renameRenameFiles.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=31399&sFileName=renameSharedSetting.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=66572&sFileName=renameMessageBox.png',
None], dtype=object)
array(['default.asp?pg=pgDownload&pgType=pgWikiAttachment&ixAttachment=66573&sFileName=renameList.png',
None], dtype=object) ] | docs.wholetomato.com |
Go to your account on Shareasale, follow:
Tools →
API Reporting and set IP of your server, which you use for the site. You can know correct IP of the server from your hosting provider.
2. Find
Token and
Secret Key and save settings
You can find Affiliate ID in the top left corner of your account, near your login name.
I am getting this error message: Error: Invalid Account - Error Code 4002 - xx.xx.xx.xx
This means your IP address isn’t registered under
Tools ->
API Reporting your Shareasale dashboard. Be sure to register the IP you’re sending API requests from. | https://ce-docs.keywordrush.com/modules/affiliate/shareasale | 2021-04-10T11:24:39 | CC-MAIN-2021-17 | 1618038056869.3 | [] | ce-docs.keywordrush.com |
Viewing and Downloading Stacks Logs
Stacks are collected and logged to a compressed, rotated log file. You can view and download stacks logs.
- On the service page, click the Instances tab.
- Click the role in the Role Type column.
- Click the Stacks Logs tab.
- Click Stacks Log File to view the most recent stacks file. Click Download Stacks Logs to download a zipped bundle of the stacks logs. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/monitoring-and-diagnostics/topics/cm-viewing-and-downloading-stacks-logs.html | 2021-04-10T11:54:34 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.cloudera.com |
I want address a CIS control 3.4 Ensure that shared access signature tokens expire within an hour
Additional info the control
Control number 3.4.
I was hoping to address this recommendation to create a stored access policy on the blob container with dynamic values for date and time variables or i am open to any other ideas. Also realized when researching this that SAS tokens are not logged in the Azure Activity | https://docs.microsoft.com/en-us/answers/questions/31335/ensure-that-shared-access-signature-tokens-expire.html | 2021-04-10T12:42:22 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.microsoft.com |
Side-by-Side Execution in the .NET Framework a collection of assemblies that contain the API types. The runtime and the .NET Framework assemblies are versioned separately. For example, version 4.0 of the runtime is actually version 4.0.319,.
Benefits of Side-by-Side Execution Components for Side-by-Side Execution.
Version Compatibility.
Locating Runtime Version Information
Information on which runtime version an application or component was compiled with and which versions of the runtime the application requires to run are stored in two locations. When an application or component is compiled, information on the runtime version used to compile it is stored in the managed executable. Information on the runtime versions the application or component requires is stored in the application configuration file.
Runtime Version Information in the Managed Executable
The portable executable (PE) file header of each managed application and component contains information about the runtime version it was built with. The common language runtime uses this information to determine the most likely version of the runtime the application needs to run.
Runtime Version Information in the Application Configuration File
In addition to the information in the PE file header, an application can be deployed with an application configuration file that provides runtime version information. The application configuration file is an XML-based file that is created by the application developer and that ships with an application. The <requiredRuntime> Element of the <startup> section, if it is present in this file, specifies which versions of the runtime and which versions of a component the application supports. You can also use this file in testing to test an application's compatibility with different versions of the runtime.
Unmanaged code, including COM and COM+ applications, can have application configuration files that the runtime uses for interacting with managed code. The application configuration file affects any managed code that you activate through COM. The file can specify which runtime versions it supports, as well as assembly redirects. By default, COM interop applications calling to managed code use the latest version of the runtime installed on the computer.
For more information about the application configuration files, see Configuring Apps.
Determining.
Note
You can suppress the display of this message by using the NoGuiFromShim value under the registry key HKLM\Software\Microsoft\.NETFramework or using the environment variable COMPLUS_NoGuiFromShim. For example, you can suppress the message for applications that do not.
Partially Qualified Assembly Names.
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <qualifyAssembly partialName="myAssembly" fullName="myAssembly, version=1.0.0.0, publicKeyToken=..., culture=neutral"/> </assemblyBinding>").
Note
You can use the LoadWithPartialName method to bypass the common language runtime restriction that prohibits partially referenced assemblies from being loaded from the global assembly cache. This method should be used only in remoting scenarios as it can easily cause problems in side-by-side execution.
Related Topics
Reference
<supportedRuntime> Element | https://docs.microsoft.com/en-us/dotnet/framework/deployment/side-by-side-execution?WT.mc_id=DT-MVP-5001507 | 2021-04-10T13:21:52 | CC-MAIN-2021-17 | 1618038056869.3 | [array(['media/side-by-side-execution/side-by-side-runtime-execution.gif',
'Side-by-side execution of different runtime versions,'],
dtype=object)
array(['media/side-by-side-execution/side-by-side-component-execution.gif',
'Diagram that shows side-by-side execution of a component.'],
dtype=object) ] | docs.microsoft.com |
We assign one, comprehensive score to each user who visits your website and every order that they place. This score is a number between 0 and 1000. Higher scores are more trustworthy and lower scores are more suspicious. We track over 170 unique attributes to calculate a user's score, starting with their first interaction with your website. We follow each user through their browsing and checkout process. Positive user attributes raise their score, negative attributes and risk factors lower it.
This score also determines each order's NS8 Risk. It's less likely that an order with a high score is fraud. To customize your risk thresholds, go to Settings. An order's risk level can change over time, but the order's score stays the same throughout the order workflow.
We score all interactions with your website, including users who don't make a purchase. If a user does make a purchase, their user score and their order score is the same.
Updated 7 months ago | https://docs.ns8.com/v2.0/docs/the-eq8-score | 2021-04-10T11:08:46 | CC-MAIN-2021-17 | 1618038056869.3 | [] | docs.ns8.com |
The easiest way to translate strings in Content Egg templates is to use the
Content Egg > Frontend > Frontend texts setting.
If the original string contains placeholders
%s or
%d, the translated string must contain the same placeholders, too.
You can also use the
Frontend texts setting to customize the displayed text.
For a more detailed translation, use the method from POT files. | https://ce-docs.keywordrush.com/frontend/translation | 2021-04-10T11:25:11 | CC-MAIN-2021-17 | 1618038056869.3 | [] | ce-docs.keywordrush.com |
Rocky Series Release Notes¶
9.4.0¶
New Features¶
A key called “domain” is now available for interfaces. This allows the setting of a domain for an ifcfg configuration, which will aide DNS search.
Support for configuring policy-based routing has been added. A new top-level object “route_table” has been added, which allows the user to add tables to the system route table at /etc/iproute2/rt_tables. Routes have a new “table” property for specifying which table to apply the route. Interfaces now have a “rules” property that allows the user to add arbitrary rules for when the system should use a particular routing table, such as input interface or source IP address.
9.3.0¶
New Features¶
Some changes can now be made to interfaces without restarting. Changes to routes, IP addresses, netmask, or MTU will now be applied using iproute2 without restarting the interface, and the ifcfg file will be updated.
Bug Fixes¶
When the ivs interface (or nfvswitch) configuration changes, ivs (or nvfswitch) needs to be restarted in order to pick up the new configuration.
The ovs-appctl command may fail, particularly when setting an interface as a slave in a bond if the primary interface is not yet up. Retry the ovs-appctl command and log a failure if the command still fails.
Other Notes¶
The schema now allow the
routesoption to be an empty list. (Previously at least one route was was required.) Bug: 1792992 <>_.
Since this change uses iproute2 to make changes to live interfaces, it does not allow MTU on DPDK interfaces to be modified in place. DPDK requires that ovs-vsctl be run to modify MTU. For DPDK interfaces, MTU changes will result in an interface restart.
9.1.0¶
New Features¶
Adds support to use
destinationand
nexthopas keys in the
Routeobjects.
destinationmaps to
ip_netmaskand
nexthopmaps to
next_hop. Neutron Route objects use
destinationand
nexthop, supporting the same schema allow passing a neutron route directly to os-net-config.
Known Issues¶
Currently the member interface for a contrail vrouter interface can only be of type interface. Types vlan and linux_bond are needed. | https://docs.openstack.org/releasenotes/os-net-config/rocky.html | 2019-08-17T13:34:28 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.openstack.org |
[−][src]Module gotham::
handler
Defines types for handlers, the primary building block of a Gotham application.
A function can be used directly as a handler using one of the default implementations of
Handler, but the traits can also be implemented directly for greater control. See the
Handler trait for some examples of valid handlers. | https://docs.rs/gotham/0.4.0/gotham/handler/index.html | 2019-08-17T12:38:06 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.rs |
>> the Splunk documentation
- Providing
Hi, JB. Please email [email protected] for assistance with the Fundamentals Part 1 elearning. That team can help you find the training materials that you're looking for. | https://docs.splunk.com/Documentation/Splunk/7.1.1/SearchTutorial/WelcometotheSearchTutorial | 2019-08-17T13:13:53 | CC-MAIN-2019-35 | 1566027313259.30 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
CRUD¶
CRUD class offers a very usable extension to
Grid class, which automatically adds actions for deleting,
updating and adding records as well as linking them with corresponding Model actions.
Important
If you only wish to display a non-interractive table use
Table class. If you need to
display Data Grid with some custom actions (not update/delete/add) or if you want to use your own editing
mechanism (such as edit data on separate page, not inside a modal), use
Grid
Important
ATK Addon - MasterCRUD implements a higher-level multi-model management solution, that takes advantage of model relations and traversal to create multiple levels of CRUDs:
Using CRUD¶
The basic usage of CRUD is:
$app->add('CRUD')->setModel(new Country($app->db));
Users are now able to fully interract with the table. There are ways to restrict which “rows” and which “columns” user can access. First we can only allow user to read, manage and delete only countries that are part of European Union:
$eu_countries = new Country($app->db); $eu_countries->addCondition('is_eu', true); $app->add('CRUD')->setModel($eu_countries);
After that column is_eu will not be editable to the user anymore as it will be marked system by addCondition.
You can also specify which columns you would like to see on the grid:
$crud->setModel($eu_countries, ['name']);
This restriction will apply to both viewing and editing, but you can fine-tune that by specifying one of many parameters to CRUD.
Disabling Actions¶
By default CRUD allows all four operations - creating, reading, updating and deleting. CRUD cannot function without read operation, but the other operations can be explicitly disabled:
$app->add(['CRUD', 'canDelete'=>false]);
Specifying Fields¶
Through those properties you can specify which fields to use. setModel() second argument will set fieldsDefault but if it’s not passed, then you can inject fieldsDefault property during creation of setModel. Alternatively you can override which fields will be used for the corresponding mode by specifying the property:
$crud=$this->add([ 'CRUD', 'fieldsRead'=>['name'], // only field 'name' will be visible in table 'fieldsUpdate'=>['name', 'surname'] // fields 'name' and 'surname' will be accessible in edit form ]);
Custom Form¶
Form in Agile UI allows you to use many different things, such as custom layouts. With CRUD you can
specify your own form to use, which can be either an object or a seed:
class UserForm extends \atk4\ui\Form { function setModel($m, $fields = null) { parent::setModel($m, false); $gr = $this->addGroup('Name'); $gr->addField('first_name'); $gr->addField('middle_name'); $gr->addField('last_name'); $this->addField('email'); return $this->model; } } $crud=$this->add([ 'CRUD', 'formDefault'=>new UserForm(); ])->setModel($big_model);
Custom Page¶
You can also specify a custom class for your Page. Normally it’s a
VirtualPage but you
can extend it to introduce your own style or add more components than just a form:
class TwoPanels extends \atk4\ui\VirtualPage { function add($v, $p = null) { // is called with the form $col = parent::add('Columns'); $col_l = $col->addColumn(); $v = $col_l->add($v); $col_r = $col->addColumn(); $col_r->add('Table')->setModel($this->owner->model->ref('Invoices')); return $v; } } $crud=$this->add([ 'CRUD', 'pageDefault'=>new TwoPanels(); ])->setModel(new Client($app->db));
Notification¶
When data is saved, properties $notifyDefault can contain a custom notification action. By default it uses
jsNotify
which will display green strip on top of the page. You can either override it or add additional actions:
$crud=$this->add([ 'CRUD', 'notifyDefault'=>[ new \atk4\ui\jsNotify(['Custom Notification', 'color'=>'blue']), $otherview->jsReload(); // both actions will be executed ] ])->setModel(new Client($app->db)); | https://agile-ui.readthedocs.io/en/latest/crud.html | 2019-08-17T12:40:04 | CC-MAIN-2019-35 | 1566027313259.30 | [] | agile-ui.readthedocs.io |
Contents Geneva Release Notes Previous Topic Next Topic Geneva Patch 9 Hot Fix 2 Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Geneva Patch 9 Hot Fix 2 Geneva Patch 9 Hot Fix 2 provides fixes for the Geneva release. For Geneva Patch 9 Hot Fix 2: Build date: 12-08-2016_0944 Build tag: glide-geneva-08-25-2015__patch9-hotfix2-12-07 9 Hot Fix 2 Problem Short description Description Steps to reproduce ITIL User (do not impersonate). With Abel Tut. Fixes included with Geneva Patch 9 9 Hot Fix 1 | https://docs.servicenow.com/bundle/geneva-release-notes/page/release-notes/r_Geneva-Patch-9-HF-2-PO.html | 2019-08-17T13:58:02 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.servicenow.com |
Contents IT Service Management Previous Topic Next Topic UI actions to create task record from Incident Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share UI actions to create task record from Incident UI actions add links in the Incident form context menu to create problems, changes, or requests from an incident. Figure 1. UI actions to create task record The image depicts the UI actions that are used to promote incidents. Administrators and users with the ui_action_admin role can edit them to customize the behavior of the menu item. Create Problem UI actionThe Create Problem script copies these fields from the Incident form: short_description cmdb_ci priority company The syntax for copying a field from the Incident form to the Problem form is:prob.<fieldname> = current.<fieldname> Create Request UI action The Create Request script redirects the user to the service catalog. The service desk agent locates the catalog item and orders it. The Caller is copied to the Requested for user in the request.Note: This feature is available only for new instances, starting with the Jakarta release. Create <type of> Change UI actionThe Create Normal Change and Create Emergency Change scripts copy these fields from the Incident form: short_description description cmdb_ci priority company The syntax for copying a field from the Incident form to the Change form is:changeRequest.setValue("field_name", current.field_name); Other UI actions to create task record from incident If there is another process for which task record needs to be generated from an incident, such as a facilities request, create a UI action. Model it after the Create Normal Change and Create Problem UI actions to generate task record from the incident. Related tasksCreate task record from incident On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-it-service-management/page/product/incident-management/reference/r_IncidentPromotionUIActions.html | 2019-08-17T13:53:59 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.servicenow.com |
Receiving and Publishing Events in Custom CSV Format¶
Purpose¶
This application demonstrates how to configure WSO2 Streaming Integrator Tooling to publish and receive data events processed within Siddhi to files in CSV custom format.
Before you begin:
- Edit the sample Siddhi application as follows:
- In the source configuration, update the value for the
dir.uriparameter by replacing
{WSO2SIHome}with the absolute path of your WSO2 SI Tooling directory.
- In the sink configuration, update the value for the
file.uriparameter by replacing
{WSO2SIHome}with the absolute path of your WSO2 SI Tooling directory. If required, you can provide a different path to publish the output to a location of your choice.
- Save the sample Siddhi application in Streaming Integrator Tooling.
Executing and testing the Sample¶
To execute the sample open the saved Siddhi application in Streaming Integrator Tooling, and start it by clicking the Start button (shown below) or by clicking Run -> Run.
If the Siddhi application starts successfully, the Streaming Integrator logs the following messages in the console.
CSVCustomMapping.siddhi - Started Successfully!
Viewing the Results¶
The source gets the input from the
SI_HOME>/samples/artifacts/CSVMappingWithFile/new/example.csvfile and produces the event. This file has data in below format.
1,WSO2,23.5
2,IBM,2.5
The sink gets the input from the source output and publishes the output in the
outputOfCustom.csvfile. The data is published in this file in the following format.
WSO2,1,100.0
IBM,2,2.5
@App:name("CSVCustomMapping") @App:description('Publish and receive data events processed within Siddhi to files in CSV custom format.') @source(type='file', dir.uri='file://{WSO2SIHome}/samples/artifacts/CSVMappingWithFile/new', action.after.process='NONE', @map(type='csv', @attributes(id='0', name='1', amount='2'))) define stream IntputStream (name string, id int, amount double); @sink(type='file', file.uri='/{WSO2SIHome}/samples/artifacts/CSVMappingWithFile/new/outputOfCustom.csv' , @map(type='csv',@payload(id='1', name='0', amount='2'))) define stream OutputStream (name string, id int, amount double); from IntputStream select * insert into OutputStream; | https://apim.docs.wso2.com/en/latest/use-cases/examples/streaming-examples/csv-custom-mapping/ | 2021-09-17T05:05:32 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['https://apim.docs.wso2.com/en/4.0.0/assets/img/streaming/amazon-s3-sink-sample/start.png',
'Start button'], dtype=object) ] | apim.docs.wso2.com |
JChem for Excel is part of ChemAxon's JChem for Office package. This chemoinformatics solution in Microsoft Office products enables. | https://docs.chemaxon.com/display/lts-gallium/jchem-for-excel-user-s-guide.md | 2021-09-17T04:31:48 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.chemaxon.com |
To create the users who are going to use the application you must access the users part of the application. On this section of the application there is a form to fill in to create each and every user, giving them an ID, name and password. Permissions will be configured in a different part of the application.
To access the users section select the "users" option on the menu at the top of the page:
The image below shows the display that comes after clicking on users. The display shows an empty table that is supposed to list the users once they have been made. The actions column will show options that can be taken for user e.g. change password, delete user. To add the user the "add new" button shown below is clicked
After clicking on "add new" a form is unveiled where the user can add the "name", "login" and "password"
The name does not need to be unique and it can be the real name of the user depending on your organisations rules. It should be inserted in the field marked red below:
For this example a user named tom will be added to the system, given a login of "tom005" and the password will be generated by the application. In the "name" field (shown above) the word tom will be inserted.
Inside the "login username" field the value will that will be inserted will be Tom005. The user may notice that the field already has the word "tom" in small letters inserted when the "name" field was deselected. To demonstrate that names may not be unique but users have the option to add numbers after the name we will just add "005" after the name. The image below demonstrates the addition of the "tom005" in the "login username" field:
Note: the name and login username do not have to be the same
The next step is to click "save". The generated password will be copied in the next step to send to the user because after clicking save the user's information will be displayed on the screen in a pop up message.
The pop message discussed in the previous step is shown in the image below:
Note: these details must be copied and pasted into another document so that the list of users and passwords is documented
If the user clicks OK on the pop shown then it goes back to the table that shows a list of all the users and the actions that can be taken in the management of user:
If more users are required the same steps listed above should be followed. Should a user forget their password it can be reset by clicking on the "reset password" button under the "actions" column.
When clicked the button will instantly delete the old password and generate a new, and pop up a message with the new password
The next session discusses how to give permissions to your users that have been created in this section. | https://docs.disarm.io/app-docs/editor-v1/users | 2021-09-17T02:52:28 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.disarm.io |
settings.
In the Built-in Render PipelineA series of operations that take the contents of a Scene, and displays them on a screen. Unity lets you choose from pre-built render pipelines, or write your own. More info
See in Glossary, you can use Tier settings to change rendering and shader compilation settings for different types of hardware. For more information, see Graphics tiers.
Use these settings to specify which Shader to use for each of the listed built-in features.
For each of these features, you can choose which type of Shader to use:
When you choose Custom shader, a ShaderA program that runs on the GPU.A verion of a shader program that Unity generates according to a specific combination of shader keywords and their status. A Shader object can contain multiple.
The Unity Editor which shader variants your application uses when it runs. You can use this information to build shader variant collections. | https://docs.unity3d.com/2021.1/Documentation/Manual/class-GraphicsSettings.html | 2021-09-17T04:46:33 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.unity3d.com |
Endpoint Suspension¶
In API Manager, by default, the gateway suspends an API for 30 seconds when it cannot reach the endpoint. If another request is made to your API within those 30 seconds, it will not be sent to the backend. The following response appears when the endpoint is suspended.
>
What's Next?
For more information on endpoint timeout configurations, see - | https://apim.docs.wso2.com/en/latest/design/endpoints/resiliency/endpoint-suspension/ | 2021-09-17T04:30:08 | CC-MAIN-2021-39 | 1631780054023.35 | [] | apim.docs.wso2.com |
To configure List Search Simple, add the web part to a page and modify the properties.
Click the List Search Simple Web Part Settings button to display the Web Part Settings page.
IMPORTANT: You must disable pop-up blockers for the site to display the Web Part Settings.IMPORTANT: You must disable pop-up blockers for the site to display the Web Part Settings.
There are several areas to configure on this web part; click the links below for details about configuring all sections of List Search Simple.
- Search Criteria Configuration
- Search Results Configuration
- Customize Search Criteria Look and Feel
- Customize Search Results Look and Feel
- Language Settings
When you are finished configuring the web part, click the Save & Close button in the Web Part Settings page, and then click Apply and then OK in the Web Part tool pane as shown above. | https://docs.bamboosolutions.com/document/overview_of_configuration_tool_pane-simple/ | 2021-09-17T05:27:31 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.bamboosolutions.com |
Making HTTP requests¶
The HTTPRequest node is the easiest way to make HTTP requests in Godot. It is backed by the more low-level HTTPClient, for which a tutorial is available here.
For the sake of this example, we will create a simple UI with a button, that when pressed will start the HTTP request to the specified URL.
Preparing scene¶
Create a new empty scene, add a CanvasLayer as the root node and add a script to it. Then add two child nodes to it: a Button and an HTTPRequest node. You will need to connect the following signals to the CanvasLayer script:
Button.pressed: When the button is pressed, we will start the request.
HTTPRequest.request_completed: When the request is completed, we will get the requested data as an argument.
스크립팅(Scripting)¶
Below is all the code we need to make it work. The URL points to an online API mocker; it returns a pre-defined JSON string, which we will then parse to get access to the data.
extends CanvasLayer func _ready(): D.Print(json.Result); } }
With this, you should see
(hello:world) printed on the console; hello being a key, and world being a value, both of them strings.
For more information on parsing JSON, see the class references for JSON and JSONParseResult.
Note that you may want to check whether the
result equals
RESULT_SUCCESS and whether a JSON parsing error occurred, see the JSON class reference and HTTPRequest for more.
Of course, you can also set custom HTTP headers. These are given as a string array, with each string containing a header in the format
"header: value".
For example, to set a custom user agent (the HTTP
user-agent header) you could use the following:
$HTTPRequest.request("", ["user-agent: YourCustomUserAgent"])
HTTPRequest httpRequest = GetNode<HTTPRequest>("HTTPRequest"); httpRequest.Request("", new string[] { "user-agent: YourCustomUserAgent" });
Please note that, for SSL/TLS encryption and thus HTTPS URLs to work, you may need to take some steps as described here.
Also, when calling APIs using authorization, be aware that someone might analyse and decompile your released application and thus may gain access to any embedded authorization information like tokens, usernames or passwords. That means it is usually not a good idea to embed things such as database access credentials inside your game. Avoid providing information useful to an attacker whenever possible.
Sending data to server¶
Until now, we have limited ourselves to requesting data from a server. But what if you need to send data to the server? Here is a common way of doing it:
func _make_post_request(url, data_to_send, use_ssl): # Convert data to json string: var query = JSON.print(data_to_send) # Add 'Content-Type' header: var headers = ["Content-Type: application/json"] $HTTPRequest.request(url, headers, use_ssl, HTTPClient.METHOD_POST, query). | https://docs.godotengine.org/ko/stable/tutorials/networking/http_request_class.html | 2021-09-17T02:55:02 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['../../_images/rest_api_scene.png',
'../../_images/rest_api_scene.png'], dtype=object)] | docs.godotengine.org |
- Alerts and Monitoring >
- Improve Your Schema >
- Avoid Unbounded Arrays
Avoid Unbounded Arrays¶
On this page
Overview¶
One of the benefits of MongoDB’s rich schema model is the ability to store arrays as document field values. Storing arrays as field values allows you to model one-to-many or many-to-many relationships in a single document, instead of across separate collections as you might in a relational database.
However, you should exercise caution if you are consistently adding
elements to arrays in your documents. If you do not limit the number of
elements in an array, your documents may grow to an unpredictable size.
As an array continues to grow, reading and building indexes on that
array gradually decrease in performance. A large, growing array can
strain application resources and put your documents at risk of exceeding
the
BSON Document Size limit.
Instead, consider bounding your arrays to improve performance and keep your documents a manageable size.
Example¶
Consider the following schema for a
publishers collection:
In this scenario, the
books array is unbounded. Each new book
released by this publishing company adds a new sub-document to the
books array. As publishing companies continue to release books, the
documents will eventually grow very large and cause a disproportionate
amount of memory strain on the application.
To avoid mutable, unbounded arrays, separate the
publishers
collection into two collections, one for
publishers and one for
books. Instead of embedding the entire
book document in the
publishers document, include a
reference
to the publisher inside of the book document:
This updated schema removes the unbounded array in the
publishers
collection and places a reference to the publisher in each book document
using the
publisher_id field. This ensures that each document has a
manageable size, and there is no risk of a document field growing
abnormally large.
Document References May Require
$lookups¶
This approach works especially well if your application loads the book
and publisher information separately. If your application requires the
book and information together, it needs to perform a
$lookup
operation to join the data from the
publishers and
books
collections.
$lookup operations are not very performant, but
in this scenario may be worth the trade off to avoid unbounded arrays.
Learn More¶
- To learn more about Data Modeling in MongoDB and the flexible schema model, see Data Modeling Introduction.
- To learn how to model relationships with document references, see Model One-to-Many Relationships with Document References
- To learn how to query arrays in MongoDB, see Query an Array.
- MongoDB also offers a free MongoDB University Course on Data Modeling: M320: Data Modeling. | https://docs.opsmanager.mongodb.com/current/schema-advisor/avoid-unbounded-arrays/ | 2021-09-17T03:54:36 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.opsmanager.mongodb.com |
4.2 Release Notes
Scuba release 4.2 includes the ability to create an information banner, the ability for Support to set a default page for your users, improvements to advanced filters and chart options, and several other enhancements and bug fixes.
New features
Scuba release 4.2 includes the following new features:
An admin can now create an information banner at the top of every page in Scuba in their environment. This lets them communicate system information with their users.
An admin can now set a default starting page in Scuba for their users by contacting Scuba support. This can be any page within Scuba, for example the custom information page or a board or query.
Advanced filters have received several improvements. To access advanced filters, in the query builder, click "Split by" or "Filtered to", then type an equals sign and click the external window icon to open the Advanced Operations window. The improvements include:
Adding suggestions for values.
Adding some validation for expressions.
Removing nonsensical options from the expression matcher drop-downs.
Improved chart options in Explore as well as apps, including a new "clear all" button, several bug fixes, and internal logging.
Additional enhancements
Some items in the query builder have new labels for clarity:
"Compare" changed to "add measure".
"Split by options" had only one option, order by. Changed text to say "order by".
The sampled button has been restyled.
Small updates to the emailed dashboard dialog for consistency.
Fixed issues
Distribution view fixed issues
The following bug fixes in distribution view are included in Scuba 4.2:
Export now works with a query that includes a split by.
The "clear all" button now respects the dataset selection.
Absolute start date can now be more consistently changed to relative start date.
UI now displays a warning when filtering on an incorrectly scoped event property. See Scuba query concepts for BQL users for more information about scope.
Retention view fixed issues
The following bug fixes in retention view are included:
Fixed a crash that occurred when a user typed an invalid time unit and then tried to open a dropdown.
Splitting by an actor property with a trailing window now uses the correct time specification.
Initial fix to a bug where some "as fraction of" results were displaying as over 100%. In this situation, for now the highest result is pinned to 100%.
UI now displays a warning when filtering on an incorrectly scoped event property. See Scuba query concepts for BAQL users for more information about scope.
Additional fixed issues
Scuba release 4.2 includes the following fixed issues:
Fixed a bug that occurred when referencing other properties in a flow property.
Fixed a bug that prevented the column dropdown from appearing after aggregators were changed.
Fixed a bug where clicking on an item in a dropdown sometimes did not highlight the item.
Matcher fixed bugs:
matcher expression typed out in lowercase (not selected from dropdown) now provides suggestions.
using two properties as arguments for string match functions no longer fails
The ability to edit the title of a raw event property is now restricted to UX admins.
Chart legends:
Improved support for legend labels when using "Day of week" functions.
Legends now show human-readable time instead of epoch time.
Improved handling of/support for:
Using a calculate-method flow property in a definition for a label-method flow property.
Using a calculate-method actor property in a definition for a label-method actor property.
An identifier actor used in a filter or split by in a Sankey or a saved measure.
Editing a filter then changing the actor in a query palette property.
Using an actor pre-filter in Explore to analyze a flow. | https://docs.scuba.io/release-notes/4.2-Release-Notes.1310916643.html | 2021-09-17T04:58:57 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.scuba.io |
PendingFoldersChanged¶
New in version 1.14.0.
Emitted when pending folders were added / updated (offered by some
device, but not shared to them) or removed (folder ignored, dismissed
or added or no longer offered from the remote device). A removed
entry without a
deviceID attribute means that the folder is no
longer pending for any device.
{ "id": 101, "type": "PendingFoldersChanged", "time": "2020-12-22T22:36:55.66744317+01:00", "data": { "added": [ { "deviceID": "EJHMPAQ-OGCVORE-ISB4IS3-SYYVJXF-TKJGLTU-66DIQPF-GJ5D2GX-GQ3OWQK", "folderID": "GXWxf-3zgnU", "folderLabel": "My Pictures" "receiveEncrypted": "false" "remoteEncrypted": "false" } ], "removed": [ { "deviceID": "P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2", "folderID": "neyfh-sa2nu" }, { "folderID": "abcde-fghij" } ] } } | https://docs.syncthing.net/events/pendingfolderschanged.html | 2021-09-17T02:54:13 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.syncthing.net |
27.09.2021 09:00
Workshop by the Center for Doctoral Studies
01.09.2021
Registration is open from Mo 13.09.2021 09:00 to Mo 20.09.2021 09:00
The LinkedIn Group aims to connect current members and alumni of DoCS
Follow us on Twitter and receive the latest updates, news, and information on the Doctoral School Computer Science.
More events can be found in the »Event calendar of the Faculty«. | https://docs.univie.ac.at/news-events/event-calender-by-month/ | 2021-09-17T05:01:29 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.univie.ac.at |
Clean Up Amazon SageMaker Experiment Resources
To avoid incurring unnecessary charges, delete the Amazon SageMaker Experiment resources
you
no longer need. You can't delete Experiment resources through the SageMaker
Management Console or the Amazon SageMaker Studio UI. This topic shows you how to
clean up these
resources using Boto3 and the Experiments SDK. For more information about the Experiments
SDK, see
sagemaker-experiments
To delete the experiment, you must delete all trials in the experiment. To delete a trial, you must remove all trial components from the trial. To delete a trial component, you must remove the component from all trials.
Trial components can exist independent of trials and experiments. You do not have
to
delete them. If you want to reuse them, comment out
tc.delete() in the code.
Clean Up Using the Experiments SDK
To clean up using the Experiments SDK
import sys !{sys.executable} -m pip install sagemaker-experiments
import time from smexperiments.experiment import Experiment from smexperiments.trial import Trial from smexperiments.trial_component import TrialComponent
Define cleanup_sme_sdk
def cleanup_sme_sdk(experiment): for trial_summary in experiment.list_trials(): trial = Trial.load(trial_name=trial_summary.trial_name) for trial_component_summary in trial.list_trial_components(): tc = TrialComponent.load( trial_component_name=trial_component_summary.trial_component_name) trial.remove_trial_component(tc) try: # comment out to keep trial components tc.delete() except: # tc is associated with another trial continue # to prevent throttling time.sleep(.5) trial.delete() experiment_name = experiment.experiment_name experiment.delete() print(f"\nExperiment {experiment_name} deleted")
Call cleanup_sme_sdk
experiment_to_cleanup = Experiment.load( # Use experiment name not display name experiment_name="
experiment-name") cleanup_sme_sdk(experiment_to_cleanup)
Clean Up Using the Python SDK (Boto3)
To clean up using Boto 3
import boto3 sm = boto3.Session().client('sagemaker')
Define cleanup_boto3
def cleanup_boto3(experiment_name): trials = sm.list_trials(ExperimentName=experiment_name)['TrialSummaries'] print('TrialNames:') for trial in trials: trial_name = trial['TrialName'] print(f"\n{trial_name}") components_in_trial = sm.list_trial_components(TrialName=trial_name) print('\tTrialComponentNames:') for component in components_in_trial['TrialComponentSummaries']: component_name = component['TrialComponentName'] print(f"\t{component_name}") sm.disassociate_trial_component(TrialComponentName=component_name, TrialName=trial_name) try: # comment out to keep trial components sm.delete_trial_component(TrialComponentName=component_name) except: # component is associated with another trial continue # to prevent throttling time.sleep(.5) sm.delete_trial(TrialName=trial_name) sm.delete_experiment(ExperimentName=experiment_name) print(f"\nExperiment {experiment_name} deleted")
Call cleanup_boto3
# Use experiment name not display name experiment_name = "
experiment-name" cleanup_boto3(experiment_name) | https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-cleanup.html | 2021-09-17T04:15:48 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.aws.amazon.com |
Migrating to version 0.11 (from 0.10)¶
The version
0.11 improves match-making and scalability, and introduces breaking changes in both client-side and server-side.
Client-side¶
client.id has been removed!¶
If you're using
client.id in the client-side, you should replace it with
room.sessionId.
New match-making methods available in the client-side!¶
A few methods have been added to the client-side allowing to explicitly create rooms or join them.
joinOrCreate(roomName, options)- joins or creates a room by name (previously known as
join())
create(roomName, options)- only creates new rooms
join(roomName, options)- only joins existing rooms by name
joinById(roomId, options)- only joins existing rooms by id
reconnect(roomId, sessionId)- re-establish a previously lost connection (previously known as
rejoin())
Also, the
Room instance is not returned immediatelly in the client-side. A promise is returned instead, and it is fulfilled whenever the
onJoin() has finished successfully.
Replace your existing
client.join() calls with its new
client.joinOrCreate():
client.joinOrCreate("battle", {/* options */}).then(room => { console.log("joined successfully", room); }).catch(e => { console.error("join error", e); });
try { Room<YourStateClass> room = await client.JoinOrCreate<YourStateClass>("battle", /* Dictionary of options */); Debug.Log("joined successfully"); } catch (ex) { Debug.Log("join error"); Debug.Log(ex.Message); }
client:join_or_create("battle", {--[[options]]}, function(err, room) if (err ~= nil) then print("join error: " .. err) return end print("joined successfully") end)
client.joinOrCreate("battle", [/* options */], YourStateClass, function(err, room) { if (err != null) { trace("join error: " + err); return; } trace("joined successfully"); });
client->joinOrCreate<YourStateClass>("battle", {/* options */}, [=](std::string err, Room<State>* room) { if (err != "") { std::cout << "join error: " << err << std::endl; return; } std::cout << "joined successfully" << std::endl; });
Lua, Haxe and C++
In languages that doesn't provide an
async mechanism out of the box, a callback is expected as last argument for the matchmaking functions. The callback gets invoked whenever the
onJoin() has been finished successfully.
room.onJoin is not necessary anymore in the client-side¶
The
room.onJoin is now only used internally. When the promise (or callback) fulfils returning the
room instance, it has already been joined successfully.
The
reconnect() now expects the room id instead of room name.¶
Previously, the
rejoin() method accepted the room name and sessionId. Now, with
reconnect() you should pass the room id instead of the room name:
client.reconnect(roomId, sessionId).then(room => {/* ... */});
JavaScript/TypeScript: Signals API has changed slighly¶
The room signals are
onLeave,
onStateChange,
onMessage and
onError.
- Use
onStateChange(callback)instead of
onStateChange.add(callback)
- Use
onStateChange.once(callback)instead of
onStateChange.addOnce(callback)
C#/Unity¶
The
sender object has been removed from all Schema callbacks and events.
Schema callbacks API has changed slighly¶
- Use
players.OnAdd += (Player player, string key) => {}.
- Use
players.OnRemove += (Player player, string key) => {}.
- ... and so on!
Events API has changed slighly¶
The events are
onLeave,
onStateChange,
onMessage and
onError.
- No need to use
client.Connect(),
room.ReadyToConnect(),
room.Connect(), or
client.Recv()anymore.
- Use
onStateChange += (State state, bool isFirstState) => {}instead of
onStateChange += (sender, e) => {}
- Use
onMessage += (object message) => {}instead of
onMessage += (sender, e) => {}
- Use
onLeave += (int code) => {}instead of
onLeave += (sender, e) => {}
- Use
onError += (string message) => {}instead of
onError += (sender, e) => {}
arraySchema.GetItems() now returns a
Dictionary<int, MySchemaType> instead of a
List<MySchemaType>. Replace any cases of
(List<MySchemaType>) state.myArraySchema.GetItems() with
((Dictionary<int, MySchemaType>) state.myArraySchema.GetItems()).Values.ToList().
Server-side¶
Usage with
express¶
Before creating the
Colyseus.Server instance, you'll need to:
- use the
express.json()middleware
- use the
cors()middleware (if you're testing server/client from different port or domain)
- pass both
serverand
expressto the
Colyseus.Serverconstructor.
import http from "http"; import express from "express"; import cors from "cors"; import { Server } from "colyseus"; const app = express(); app.use(cors()); app.use(express.json()); const server = http.createServer(app); const gameServer = new Server({ server: server });
const http = require("http"); const express = require("express"); const cors = require("cors"); const colyseus = require("colyseus"); const app = express(); app.use(cors()); app.use(express.json()); const server = http.createServer(app); const gameServer = new colyseus.Server({ server: server });
gameServer.register has been renamed to
gameServer.define¶
onInit(options) has been renamed to
onCreate(options)¶
Replace your
onInit(options) method in your room with
onCreate(options).
onAuth(options) is now
onAuth(client, options)¶
Replace your
onAuth(options) method in your room with
onAuth(client, options).
client.id is now an alias to
client.sessionId¶
As the
client.id has been removed from the client-side, it is now just an alias to
client.sessionId (available in the client-side as
room.sessionId).
The
client.id was not a reliable source to identify unique users. If you need an efficient way to determine if the user is the same in multiple browser tabs, consider using some form of authentication. The anonymous authentication from @colyseus/social can serve this purpose very well.
The
requestJoin() method has been deprecated.¶
Instead of using
requestJoin() to determine wheter a player is allowed to join a room, you should use matchmaking filters for your defined rooms.
Take this example of using
requestJoin() from version
0.10, and how to translate it to
0.11:
// version 0.10 class MyRoom extends Room { onInit(options) { this.progress = options.progress; } requestJoin(options, isNew) { return this.progress === options.progress; } }
You can have the same behaviour by defining a
progress filter when defining your room. The
requestJoin() method should be removed.
// version 0.11 gameServer .define("dungeon", DungeonRoom) .filterBy(['progress']);
Avoid using
this.clients inside
onJoin() or
onAuth()¶
The
client instance will be automatically added to the
this.clients list only after
onJoin() has been completed.
If you have a piece of code like this:
onJoin(client, options) { if (this.clients.length === 2) { // do something! } }
It's encouraged to replace with something else, like this:
onJoin(client, options) { this.state.players[client.sessionId] = new Player(/*...*/); if (Object.keys(this.state.players).length === 2) { // do something! } } onLeave(client, options) { delete this.state.players[client.sessionId]; } | https://docs.colyseus.io/colyseus/migrating/0.11/ | 2021-09-17T03:28:52 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.colyseus.io |
This article will guide you in adding and removing your DS100.
The Door/Window Sensor can be included and operated in any Z-Wave network with other Z-Wave certified devices from other manufacturers and/or other applications. All non-battery operated nodes within the network will act as repeaters regardless of vendor to increase reliability of the network.
Inclusion (Use this procedure to add the HS-DS100+ to your Z-Wave network).
- Ensure AAA batteries are installed. Remove any plastic from battery compartment (if necessary)
- Put your home automation controller into ‘inclusion’ mode. Consult your system’s manual for details.
- HS-DS100+ may be included “securely” (option a) or “non-securely” (option b). If your automation controller does not support secure devices, or if you wish to improve battery life, add the sensor “non-securely”. Otherwise, include the sensor “securely”
- Secure inclusion: Press and hold the Z-Wave button inside the sensor body for 3 seconds. Wait for the process to finish.
- Non-Secure inclusion: Triple-click the Z-Wave button inside the sensor body. Wait for the process to finish.
- If successful, the sensor body LED will blink briefly and then stay on for 3 seconds. If unsuccessful, the LED with blink briefly and then turn off. Should this happen, repeat the inclusion process.
Note: If you want this Door/Window sensor to function as a security device using secure/encrypted Z-Wave communications, then a security enabled Z-Wave controller is required.
Exclusion (Use this procedure to remove the HS-DS100+ from your Z-Wave network).
- Put your home automation controller into ‘exclusion’ mode. Consult your system’s manual for details.
- Triple-click the Z-Wave button inside the sensor body. If successful, the LED will turn off within 1 second. If unsuccessful, the LED with blink for 5 seconds. Should this happen, repeat the exclusion process.
Reset (Use this procedure to reset the HS-DS100+ to factory default settings when the network primary controller is missing or otherwise inoperable).
- Press and hold the Z-Wave button inside the sensor body for 20 seconds. If successful, the LED will change from solid, to blinking, to solid again.
Z-Wave Association Information
HS-DS100+ supports Group 1 and Group 2 associations. Group 1 reports the sensor’s condition, battery level and tamper state. Group 2 sends the BASIC SET command.
Z-Wave Parameters
Use the parameters below to adjust DS100 configuration settings: | https://docs.homeseer.com/display/HSPRODKB/HS-DS100+Sensor+Z-Wave+Configuration | 2021-09-17T05:00:36 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.homeseer.com |
SQL samples
APPLIES TO:
SQL Server
Azure SQL Database
Azure Synapse Analytics Synapse Analytics
AdventureWorks sample database
AdventureWorks databases can be found on the installation page or directly within the SQL Server samples GitHub repository.. | https://docs.microsoft.com/en-us/sql/samples/sql-samples-where-are?view=sql-server-ver15 | 2021-09-17T05:40:32 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.microsoft.com |
Upgrade a Replica Set
All Members Version¶
All replica set members must be running version 4.2. To upgrade a replica set from an 4.0-series and earlier, first upgrade all members of the replica set to the latest 4.2-series release, and then follow the procedure to upgrade from MongoDB 4.2 to 4.4.
Confirm Clean Shutdown¶
Prior to upgrading a member of the replica set, confirm that the member was cleanly shut down.
Feature Compatibility Version¶
The 4.2 replica set must have
featureCompatibilityVersion set to
"4.2".
To ensure that all members of the replica set have
featureCompatibilityVersion set to
"4.2", connect to each
replica set member and check the
featureCompatibilityVersion:
All members should return a result that includes
"featureCompatibilityVersion" : { "version" : "4.2" }.
You can upgrade from MongoDB 4.2 to 4.4.
Ensure that no initial sync is in progress. Running
setFeatureCompatibilityVersion command while an initial
sync is in progress will cause the initial sync to restart.
On the primary, run the
setFeatureCompatibilityVersion command in the
admin database:
Setting featureCompatibilityVersion (fCV) : "4.4"
implicitly performs a
replSetReconfig to add the
term field to the configuration document and blocks
until the new configuration propagates to a majority of replica
set members. 4.4.
- To upgrade a sharded cluster, see Upgrade a Sharded Cluster to 4.4. | https://docs.mongodb.com/v5.0/release-notes/4.4-upgrade-replica-set/ | 2021-09-17T04:19:07 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.mongodb.com |
- Frequently Asked Questions >
- FAQ: Security
FAQ: Security¶
This addresses common questions about Ops Manager and its security features.
How are Ops Manager releases scheduled to address JDK security updates?¶
Maintenance releases of all supported Ops Manager versions generally occur within one month of each JDK security release. Ops Manager maintenance releases include Java Critical Patch updates released within the last month.
MongoDB evaluates Java security issues that arise between maintenance releases on a case-by-case basis to determine if the issue requires an accelerated maintenance release. | https://docs.opsmanager.mongodb.com/current/reference/faq/faq-security/ | 2021-09-17T04:38:11 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.opsmanager.mongodb.com |
Monitoring your data flow
Learn about the different monitoring options for your ADLS ingest data flow in CDP Public Cloud.
You can monitor your data flow for information about health, status, and details about the operation of processors and connections. NiFi records and indexes data provenance information, so you can conduct troubleshooting amount of time.
If you are worried about data being queued up, you can check how much data is currently queued. Process groups also conveniently show the totals for any queues within them. This can often indicate if there is a bottleneck in your flow somewhere, and how far the data has got through that pipeline.
Another option to check that data has fully passed through your flow is to check out data provenance to see the full history of your data. | https://docs.cloudera.com/cdf-datahub/7.2.11/nifi-azure-ingest/topics/cdf-datahub-fm-adls-ingest-monitorflow.html | 2021-09-17T04:51:07 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.cloudera.com |
Configuring CDP Services for HDFS Encryption
This page contains recommendations for setting up HDFS Transparent Encryption with various CD KMS ACLs (
Configuring
KMS Access Control Lists (ACLs)).
Steps
On a cluster without HBase currently installed, create the
/hbase directory and make that an encryption zone.
- Stop the HBase service.
- Move data from the
/hbasedirectory to
/hbase-tmp.
- Create an empty
/hbasedirectory and make it an encryption zone.
- Distcp all data from
/hbase-tmpto
/hbase, preserving user-group permissions and extended attributes.
- Start the HBase service and verify that it is working as expected.
- Remove the
/hbase-tmpdirectory.
KMS ACL Configuration for HBase
In the KMS ACL (
Configuring KMS Access Control Lists (ACLs)), grant the
hbase user and group
DECRYPT_EEK permission for the
HBase key:
<property> <name>key.acl.hbase-key.DECRYPT_EEK</name> <value>hbase hbase</value> </description> </property>s (
Configuring KMS Access Control Lists (ACLs)),and
ezTbl2.
- Create two new encryption zones,
/data/ezTbl1and
/data/ezTbl2.
- Load data to the tables in Hive using
LOADstatements..jointo
false, or encrypt the local Hive Scratch directory (
hive.exec.local.scratchdir) using Cloudera Navigator Encrypt.
DOWNLOADED_RESOURCES_DIR: JARs that are added to a user session and stored in HDFS are downloaded to
hive.downloaded.resources.diron-dirsYARNis 32 MB, which you can modify by changing the value for the
hive.exec.copyfile.maxsizeconfigurationstatement results in a copy. For this reason, Cloudera recommends landing data that you need to encrypt inside the destination encryption zone. You can use
distcpto speed up the copying process if your data is inside HDFS.
- If the data to be loaded is already inside a Hive table, you can create a new table with a
LOCATIONinside an encryption zone as follows:
The location specified in the
CREATE TABLE encrypted_table [STORED AS] LOCATION ... AS SELECT * FROM <unencrypted_table>
CREATE TABLEstatement must be inside an encryption zone. Creating a table pointing
LOCATIONto an unencrypted directory does not encrypt your source data. You must copy your data to an encryption zone, and then point
LOCATIONto that zone.
- Example 2: Loading encrypted data to an encrypted table - If the data is already encrypted, use the
CREATE TABLEstatement pointing
LOCATIONtoin each table or partition
- Previously, an
INSERT OVERWRITEs (
Configuring KMS Access Control
Lists (ACLs)) Security Configuration, KMS ACLs
(
Configuring KMS Access Control Lists (ACLs)).
Steps
On a cluster without Hue currently installed, create the
/user/hue directory and make it an encryption
zone.
On a cluster with Hue already installed:
- Create an empty
/user/hue-tmpdirectory.
- Make
/user/hue-tmpan encryption zone.
- DistCp all data from
/user/hueinto
/user/hue-tmp.
- Remove
/user/hueand rename
/user/hue-tmpto
/user/hue.
KMS ACL Configuration for Hue
In the KMS ACLs (
Configuring KMS Access Control Lists (ACLs)), grant the
hue and
oozie users and groups
DECRYPT_EEK permission for the Hue key:
<property> <name>key.acl.hue-key.DECRYPT_EEK</name> <value>oozie,hue oozie,hue</value> </property>
Impala
Recommendations
If HDFS encryption is enabled, configure Impala to encrypt data spilled to local disk.
In releases lower than Impala 2.2.0 / CDH 5.4.0, Impala does not support the
LOAD DATAstatement when the source and destination are in different encryption zones. If you are running an affected release and need to use
LOAD DATAw_dirstartup flag for the impalad daemon. / CDH 5.
MapReduce and YARN
MapReduce v1
Recommendations
MRv1 stores both history and logs on local disks by default. Even if you do configure history to be stored on HDFS, the files are not renamed. Hence, no special configuration is required. (
Configuring KMS Access Control Lists
(ACLs)).
Steps
On a cluster with MRv2 (YARN) installed, create the
/user/history directory and make that an
encryption zone.
If
/user/history already exists and is not
empty:
- Create an empty
/user/history-tmpdirectory.
- Make
/user/history-tmpan encryption zone.
- DistCp all data from
/user/historyinto
/user/history-tmp.
- Remove
/user/historyand rename
/user/history-tmpto
/user/history.
KMS ACL Configuration for MapReduce
In the KMS ACLs (
Configuring KMS Access Control Lists (ACLs)), grant
DECRYPT_EEK permission for the MapReduce key to the
mapred and
yarn users and the
hadoop group:
<property> <name>key.acl.mapred-key.DECRYPT_EEK</name> <value>mapred,yarn hadoop</value> </description> </property>
Search
Recommendations
Make
/solr an encryption zone. When you create the encryption zone, name
the key
solr-key to take advantage of auto-generated KMS ACLs
(
Configuring KMS Access Control Lists (ACLs)).
Steps
On a cluster without Solr currently installed, create the
/solr directory and make that an encryption
zone.
On a cluster with Solr already installed:
- Create an empty
/solr-tmpdirectory.
- Make
/solr-tmpan encryption zone.
- DistCp all data from
/solrinto
/solr-tmp.
- Remove
/solr, and rename
/solr-tmpto
/solr.
KMS ACL Configuration for Search
In the KMS ACLs (
Configuring KMS Access Control Lists (ACLs)), grant the
solr user and group
DECRYPT_EEK permission for the
Solr key:
<property> <name>key.acl.solr-key.DECRYPT_EEK</name> <value>solr solr</value> </description> </property>
In the KMS ACLs (
Configuring KMS Access Control Lists (ACLs)), grant
DECRYPT_EEK permission for the Spark key to the
spark
user and any groups that can submit Spark jobs:
<property> <name>key.acl.spark-key.DECRYPT_EEK</name> <value>spark spark-users</value> </property>
Sqoop
Recommendations
- For Hive support: Ensure that you are using Sqoop with the
--target-dirparameter set to a directory that is inside the Hive encryption zone. For more details, see Hive.
- For append/incremental support: Make sure that the
sqoop.test.import.rootDirproperty points to the same encryption zone as the
--target-dirargument.
- For HCatalog support: No special configuration is required. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/security-encrypting-data-at-rest/topics/cm-security-component-kms.html | 2021-09-17T05:04:54 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.cloudera.com |
DiscoveryUsing Saved Discovery Scan Settings
Saved Scan Settings Library (DISCOVER > Saved Scan Settings) enables you to manage and run saved discovery profiles.
The default view of the saved discovery profile table includes:
See Also
Discovery
Getting the Most from Your Scan
Initiating a Discovery Scan
Handling Shared Addresses (Merge Devices)
Exclude a List of IP Addresses
Adding Discovered Devices (to MY NETWORK) | https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/41399.htm | 2021-09-17T04:55:23 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.ipswitch.com |
How to replicate the ISE experience in Visual Studio Code
While the PowerShell extension for VS Code doesn't seek complete feature parity with the PowerShell ISE, there are features in place to make the VS Code experience more natural for users of the ISE.
This document tries to list settings you can configure in VS Code to make the user experience a bit more familiar compared to the ISE.
ISE Mode
Note
This feature is available in the PowerShell Preview extension since version 2019.12.0 and in the PowerShell extension since version 2020.3.0.
The easiest way to replicate the ISE experience in Visual Studio Code is by turning on "ISE Mode". To do this, open the command palette (F1 OR Ctrl+Shift+P OR Cmd+Shift+P on macOS) and type in "ISE Mode". Select "PowerShell: Enable ISE Mode" from the list.
This command automatically applies the settings described below The result looks like this:
ISE Mode configuration settings
ISE Mode makes the following changes to VS Code settings.
Key bindings
Note
You can configure your own key bindings in VS Code as well.
Simplified ISE-like UI
If you're looking to simplify the Visual Studio Code UI to look more closely to the UI of the ISE, apply these two settings:
"workbench.activityBar.visible": false, "debug.openDebug": "neverOpen",
These settings hide the "Activity Bar" and the "Debug Side Bar" sections shown inside the red box below:
The end result looks like this:
Tab completion
To enable more ISE-like tab completion, add this setting:
"editor.tabCompletion": "on",
No focus on console when executing
To keep the focus in the editor when you execute with F8:
"powershell.integratedConsole.focusConsoleOnExecute": false
The default is
truefor accessibility purposes.
Don't start integrated console on startup
To stop the integrated console on startup, set:
"powershell.integratedConsole.showOnStartup": false
Note
The background PowerShell process still starts to provide IntelliSense, script analysis, symbol navigation, etc., but the console won't be shown.
Assume files are PowerShell by default
To make new/untitled files, register as PowerShell by default:
"files.defaultLanguage": "powershell",
Color scheme
There are a number of ISE themes available for VS Code to make the editor look much more like the ISE.
In the Command Palette type
themeto get
Preferences: Color Themeand press Enter. In the drop-down list, select
PowerShell ISE.
You can set this theme in the settings with:
"workbench.colorTheme": "PowerShell ISE",
PowerShell Command Explorer
Thanks to the work of @corbob, the PowerShell extension has the beginnings of its own command explorer.
In the Command Palette, enter
PowerShell Command Explorerand press Enter.
Open in the ISE
If you want to open a file in the Windows PowerShell ISE anyway, open the Command Palette, search for "open in ise", then select PowerShell: Open Current File in PowerShell ISE.
Other resources
- 4sysops has a great article on configuring VS Code to be more like the ISE.
- Mike F Robbins has a great post on setting up VS Code.
VS Code Tips
Command Palette
The Command Palette is handy way of executing commands in VS Code. Open the command palette using F1 OR Ctrl+Shift+P OR Cmd+Shift+P on macOS.
For more information, see the VS Code documentation.
Disable the Debug Console
If you only plan on using VS Code for PowerShell scripting, you can hide the Debug Console since it is not used by the PowerShell extension. To do so, right click on Debug Console then click on the check mark to hide it.
More settings
If you know of more ways to make VS Code feel more familiar for ISE users, contribute to this doc. If there's a compatibility configuration you're looking for, but you can't find any way to enable it, open an issue and ask away!
We're always happy to accept PRs and contributions as well! | https://docs.microsoft.com/en-us/powershell/scripting/dev-cross-plat/vscode/how-to-replicate-the-ise-experience-in-vscode?view=powershell-7 | 2021-09-17T05:29:30 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['../../../docs-conceptual/dev-cross-plat/vscode/media/how-to-replicate-the-ise-experience-in-vscode/3-ise-mode.png?view=powershell-7',
'Visual Studio Code in ISE Mode'], dtype=object) ] | docs.microsoft.com |
Product restrictions can be really useful in a number of distribution or manufacturing scenarios to ensure you sell the right products to the right customers.
We have highlighted three different scenarios which would commonly require product restrictions within the CRM:
Scenario 1 - Authorised Dealers
Some businesses may sell different tiers of products and each may have a different level of quality. As a result, you may only sell the high-end/premium and potentially branded products to selected resellers, dealers or retailers. Therefore, particular products can only be sold to authorised dealers, requiring product restrictions within the CRM.
Scenario 2 - Trained & Qualified
Another scenario where product restrictions may be required is if a business sells products that require a particular skill. For example, if you sold specialist beauty products such as nails, lashes and hair extensions, you will only want to sell these to qualified practitioners. This could equally apply to products that require health and safety training.
Scenario 3 - Customer-branded Products
The final scenario where product restrictions would commonly be required is if a business sells custom versions of a product that's unique to the customer. For example, if you sold beer pumps, you may create a custom range of pumps specifically branded for a customer or the pumps may have specific dimensions that they require. As a result, you can only sell these pumps to that specific customer, so product restrictions would be helpful to avoid selling these products to other customers.
Hopefully this article has highlighted how product restrictions could help your business. For more information on how to setup product restrictions within the CRM, take a look at our article. | https://docs.prospect365.com/en/articles/2829232-3-reasons-why-you-should-use-product-restrictions | 2021-09-17T03:22:13 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.prospect365.com |
Installation¶
This chapter discusses GYRE installation in detail. If you just want to get up and running, have a look at the Quick Start chapter.
Pre-Requisites¶
To compile and run GYRE, you’ll need the following software components:
A modern (2003+) Fortran compiler
The BLAS linear algebra library
The LAPACK linear algebra library
The LAPACK95 Fortran 95 interface to LAPACK
The HDF5 data management library
The crlibm correctly rounded math library
The crmath Fortran 2003 interface to crlibm
An OpenMP-aware version of the ODEPACK differential equation library (optional)
On Linux and MacOS platforms, these components are bundled together in the MESA Software Development Kit (SDK), which can be downloaded from the MESA SDK homepage. Using this SDK is strongly recommended.
Building GYRE¶
Download¶
Download the GYRE source code, and unpack it from the command line using the tar utility:
tar xf gyre-6.0.1.tar.gz
Set the
GYRE_DIR environment variable with the path to the
newly created source directory; this can be achieved e.g. using the
dirname built-in command:
export GYRE_DIR=$(dirname gyre-6.0.1)
Compile¶
Compile GYRE using the make utility:
make -j -C $GYRE_DIR
(the -j flags tells make to use multiple cores, speeding up the build).
Test¶
To check that GYRE has compiled correctly and gives reasonable results, you can run the calculation test suite via the command
make -C $GYRE_DIR test
The initial output from the tests should look something like this:
TEST numerics (OpenMP)... ...succeeded TEST numerics (band matrix)... ...succeeded TEST numerics (*_DELTA frequency units)... ...succeeded TEST numerics (rotation, Doppler shift)... ...succeeded TEST numerics (rotation, traditional approximation)... ...succeeded
If things go awry, consult the Troubleshooting chapter.
Custom Builds¶
Custom builds of GYRE can be created by setting certain environment
variables, and/or variables in the file
$GYRE_DIR/src/build/Makefile, to the value
yes. The
following variables are currently supported:
- DEBUG
Enable debugging mode (default
no)
- OMP
Enable OpenMP parallelization (default
yes)
- MPI
Enable MPI parallelization (default
no)
- DOUBLE_PRECISION
Use double precision floating point arithmetic (default
yes)
- CRMATH
Use correctly rounded math functions (default
yes)
- IEEE
Use Fortran IEEE floating point features (default
no)
- FPE
Enable floating point exception checks (default
yes)
- HDF5
Include HDF5 support (default
yes)
- EXPERIMENTAL
Enable experimental features (default
no)
If a variable is not set, then its default value is assumed.
Git Access¶
Sometimes, you’ll want to try out new features in GYRE that haven’t yet made it into a formal release. In such cases, you can check out GYRE directly from the rhdtownsend/gyre git repository on GitHub:
git clone
However, a word of caution: GYRE is under constant development, and
features in the main (
master) branch can change without warning. | https://gyre.readthedocs.io/en/stable/ref-guide/installation.html | 2021-09-17T05:06:27 | CC-MAIN-2021-39 | 1631780054023.35 | [] | gyre.readthedocs.io |
Date: Fri, 18 Jan 2008 13:06:50 -0200 From: "Thiago Matias" <[email protected]> To: [email protected]. Subject: Partnership Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
Good Afternoon. I dsa TecnologiaBR school of Informatica in Brasilia, and would like to see the possibility to achieve a partnership with FreeBSD, because I have a very good demand of companies that training for the use of their products. I Grato for your attention -- Att, Thiago Matias Gerente de Treinamentos e Projetos TecnologiaBR [email protected] ou [email protected] Skype: tmatias23 QI 15 lote 1/3 - Taguatinga Norte SGAS 906 Conj F - Plano Piloto TEL 5561 3354-3162 ou 5561 8423-4142 The TecnologiaBR, focuses on the area of education and development of technology. Our trainings have excellent technical level and didactic. Our instructors are certified in the most important evidence of the market. With excellent infrastructure and teaching material.
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1153440+0+archive/2008/freebsd-questions/20080120.freebsd-questions | 2021-09-17T04:14:17 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.freebsd.org |
Date: Sat, 23 Jan 2010 11:08:27 -0600 From: John <[email protected]> To: Jerry McAllister <[email protected]> Cc: Erich Dollansky <[email protected]>, [email protected] Subject: Re: Migration planning - old system to new Message-ID: <[email protected]> In-Reply-To: <[email protected]>; from [email protected] on Sat, Jan 23, 2010 at 11:19:34AM -0500 References: <[email protected]> <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Sat, Jan 23, 2010 at 11:19:34AM -0500, Jerry McAllister wrote: > On Sat, Jan 23, 2010 at 10:15:19AM +0800, Erich Dollansky wrote: > > > Hi, > > > > On 23 January 2010 am 01:12:19 John wrote: > > > Now that I've actually gotten the new system to boot, I need to > > > figure out how I'm going to migrate everything - users, data, > > > MySQL, NAT, firewall, apache, DHCP, gateway services BIND, > > > Sendmail, etc., etc from > > > FreeBSD 4.3-RELEASE #0: Thu Jan 22 19:44:16 CST 2004 > > > to > > > FreeBSD 8.0-RELEASE #0: Sat Nov 21 15:48:17 UTC 2009 > > > > this is real jump. > > > > > > Bit of a challenge, eh? > > > > I have heard that somebody actually landed on the moon? Was it > > you? > > > > > > Not only that, but I'd like to update my UID scheme from a > > > pre-standard version (most of the UIDs are down in the 100s) to > > > the new convention so that I'm more in-line with the rest of > > > the world. > > > > Ok, I cannot imagine how you will do this with the access rights > > of the files? > > > > > > My rough idea: > > > > > > 1) Create a "migrate" account in Wheel with home as > > > /var/migrate so that I can do a dump/restore on "home" without > > > messing things up > > > > Are you sure? Use /usr to make sure you will have enough space. > > You are making the rash and probably incorrect assumption that /usr is > the largest partition/filesystem. Many people, including I, make /home > or another partition be the large one. The OP may also have done that. > > > > > > 2) Start putting together all the pieces - trying to find > > > update / conversion scripts whenever possible. > > > > I think, this would only help if you would go the long way 5.x, > > 6.x, 7x and finally 8. > > > > Setup the new machine, install the applications you need, > > configure them as close as possible to the original configuration > > and see what happens. > > > > > 4) Let people move in, try it out, see how things are > > > 5) Fix everything found in #4 > > > 6) Try a cut-over and make sure all the network services work > > > in the middle of the night sometime, then switch back > > > > Oh, it is a life system in use while you migrate. > > > > Are you able to set the new thing up in parallel? > > > > It might be easier for you to run both machines and move first the > > simple things over. > > > > > 7) Nuke /home and /var/mail and migrate them again to get the > > > latest version 8) Do the real switch > > Move/migrate them first. Don't make assumptions about what the OP has > on /home. > > But, I agree, if possible, use a second machine with V 8.0 installed > and migrate to it. > > Otherwise, make full backups, check them for readability. Then do a new > install of FreeBSD V8. Add a large disk and pull stuff out of your dump > to it and then migrate that stuff piece by piece back to the machine > main filesystems. > > ////jerry > > > > > 9) spend a couple of weeks fixing all the things that weren't > > > so disastrous that they got picked up in #4. > > > > I think, if you do it service by service, you have a better chance > > to avoid this. > > > > > > Ideas / scripts / project plans / outlines - whatever? Maybe I > > > should write a chapter for The Complete FreeBSD after surviving > > > this... > > > > Yes. It is a Le Must. > > > > Erich Sorry, gang - I should have been more clear! I am DEFINITELY doing this on a new machine! And I don't need any "migration" storage, because, well, gosh - it's tcp, people! ;) I just did the first transfer of home, and it went swell: On elwood (the new, 8.0 system): cd / umount /home newfs /dev/ad0s3e mount /home cd /home rsh dexter "dump 0uf - /home" | restore rvf - That preserves all the file modification times, too, even on directories. All you young'uns out there - don't forget dump/restore! tar and cpio are nice, but sometimes, you just gotta take it all... ("dexter" is the old machine - the FreeBSD 4.3 system) Oh, BTW, just for giggles: 10:56AM up 492 days, 13:57, 2 users, load averages: 0.02, 0.03, 0.00 That's right! Nearly 500 days! And it was well over a two hundred days before that, but we had a power outage that outlasted the UPS. Gotta love this stuff! Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll rl0 1500 <Link#1> 00:40:f4:8e:37:c7 384602213 1 585428419 0 0 rl0 1500 192.168.1 gateway 56160554 - 40918536 - - rl0 1500 dexter/32 dexter 246146 - 0 - - ed0 1500 <Link#2> 52:54:40:21:f8:a8 609350703 376495 380734152 0 15678720 ed0 1500 XXmaskedXX xxMASKEDx x 63482137 - 380606442 - - lo0 16384 <Link#3> 27069210 0 27069210 0 0 lo0 16384 127 localhost 27069192 - 27069192 - - 987522505 cpu context switches 2590296969 device interrupts 342786031 software interrupts 1096125243 traps 1217705867 system calls 4 kernel threads created 12006286 fork() calls 429899 vfork() calls 0 rfork() calls 954 swap pager pageins 1156 swap pager pages paged in 456 swap pager pageouts 774 swap pager pages paged out 7135 vnode pager pageins 32897 vnode pager pages paged in 1 vnode pager pageouts 1 vnode pager pages paged out 1502 page daemon wakeups 2926806 pages examined by the page daemon 7262 pages reactivated 507170456 copy-on-write faults 0 copy-on-write optimized faults 175434998 zero fill pages zeroed 127442529 zero fill pages prezeroed 1028 intransit blocking page faults 1020317711 total VM faults taken 0 pages affected by kernel thread creation 1300974057 pages affected by fork() 49625615 pages affected by vfork() 0 pages affected by rfork() 808730759 pages freed 4 pages freed by daemon 578473126 pages freed by exiting processes 10408 pages active 41102 pages inactive 2502 pages in VM cache 7573 pages wired down 2148 pages free 4096 bytes per page 637017826 total name lookups cache hits (69% pos + 4% neg) system 6% per-directory deletions 0%, falsehits 0%, toolong 0% -- John Lind [email protected]
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1501234+0+archive/2010/freebsd-questions/20100124.freebsd-questions | 2021-09-17T04:53:27 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.freebsd.org |
Intellicus Query Objects support two types of merging of data sets.
- Join – includes Equi and Outer joins
- Union – union of equal and unequal number of columns
Join
This step takes two inputs. When the data is passing through this step, the data of both inputs will be joined based on properties set for this step.
Figure 14: Join Step
The Join step properties are:
Union
Union step takes two or more inputs. The data passing through this step appends to one another and forms a single output.
Generally, the data inputs selected for Union are of same structure i.e. the columns are same. But this step supports exceptions and allows having extra columns in some inputs. You can decide whether to carry forward the fields coming from only some inputs to output.
During the union process you can decide to take out the sorted or you can prefer unsorted appending.
Figure 15: Union Step
The properties of Union step are: | https://docs.intellicus.com/documentation/advanced-configurations-as-and-when-needed-19-1/working-with-query-objects-19-1/join-and-union-step-19-1/ | 2021-09-17T04:09:18 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['https://docs.intellicus.com/wp-content/uploads/2019/14/join_queryobjects.png',
None], dtype=object)
array(['https://docs.intellicus.com/wp-content/uploads/2019/14/union_queryobjects.png',
None], dtype=object) ] | docs.intellicus.com |
Nurture Unsubscribe API
#Purpose
The Nurture Unsubscribe API allows advertisers and partners to immediately send unsubscribed customers back to the Rokt system for real-time removal of all advertiser communications. Additionally, the Nurture Unsubscribe API can be used by Rokt to send unsubscribes to advertisers.
#What are the benefits?
- Efficiency. Ensure your email list is updated in real time.
- Accuracy. Maintain an up-to-date list to ensure only customers who remain opted-in receive your email communications.
#Who is the Import Nurture Unsubscribe API suitable for?
- Advertisers hosting their own unsubscribe landing page (instead of a Rokt-powered landing page).
- Advertisers simultaneously sending out their own email communications alongside Rokt-powered email nurture series to Rokt-generated customers.
#Sounds great. What do I need to do?
Engage your engineering resources to put the API in place. Reach out to your account manager for documentation on setting up the API. For those who are handling large volumes and for whom nurture lists are a priority, this is worth the upfront investment of time.
#How does this work once the engineering work is done?
Once the Nurture Unsubscribe API has been set up, you're all set. Any time a customer unsubscribes from Rokt email nurtures or advertiser communications, both systems are updated in real time.
#How do I know it's working?
Unsubscribed customers can be viewed in the Rokt platform under Customer Data > Unsubscribes.
#Authorization
The API expects an authorization header as part of the request. This is to ensure that you have permission to access the Rokt platform. The authorization header value should be your account's unique API key, acting as your credentials to access the Rokt API.
#Getting an API key
- Go to to OP1 - Data for your account.
- Click Settings in the left navigation.
- Click Webhook API Key in the submenu under Settings.
- If your account already has an API key, you see it on the page. Otherwise, click Create API Key to generate one.
Security note
Since the point of this API key is to enable access to APIs that affect your Rokt account, you should treat this API key the way you would treat other credential (like a password).
#API endpoint
The API accepts a JSON payload which must be a map. As the payload is JSON, you must set the
Content-Type header to
application/json.
#Response handling
Since the expected response from the API is a JSON payload, your request should include an accept" header of either
/ or
application/json.
#Parameters
The following key must be present in the map:
#Command line examples
On a Linux or MacOS X system with the curl command installed, the following command will unsubscribe the email address
[email protected], assuming that the
API_KEY environment variable is set to your API key:
cURL
curl --header "Authorization: $API_KEY" --header "Content-Type: application/json" --data '{"email":"[email protected]"}'
#Testing the API
Note that the above examples point to our live production system. So if you test with email addresses like
[email protected], those addresses would no longer receive nurture emails. Do not test with any email addresses that you wouldn't want to be unsubscribed. | https://docs.rokt.com/docs/developers/api-reference/unsubscribe-import | 2021-09-17T04:02:08 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['/assets/images/unsubscribe-import-1-30b36ea3cfba3b0772696a9b99492f4d.png',
'Screenshot 1'], dtype=object) ] | docs.rokt.com |
Best practices for affiliate creatives and banners, and brand assets
AffiliateWP allows you to provide creatives and marketing material to your affiliates to make it easier for them to promote your website and products quickly. Below are some best practice guidelines that will help you provide attractive, effective creatives for your affiliates, as well as brand assets for them to use as they wish.
Creatives provided by site owner/admin
- Creative banners should be simple and effective. Communicate one key message per banner that you want to convey about your product or service, and feel free to use some imagery to make it stand out.
- Creative banners should be aligned with your website branding and design. Consistency is key - maintain your brand's look and feel in creatives for greater impact on potential customers.
- The smaller your creative banner file size, the more efficiently it will load on websites. As per point number 1 above, simple is best. Try to avoid overloading your banners with hi-resolution images.
- Your creative banners can be any dimension or size, however we strongly recommend following regular digital ad banner sizes for best practice - they’re more likely to be compatible across multiple websites and in e-newsletter templates. Check out the IAB's Display Advertising Guidelines for some suggestions.
Affiliate-made creatives
- Your logo and guidelines for how it should be used correctly
- Brand colors
- Fonts
- Any product shots or imagery you want them to use
- Specific copy or content that should be used.
Common banner specs
- 300x250
- 728x90
- 300x600
- 468x60
- 160x600
- 40kb is recommended for a faster load time
- .PNG (preferred)
- .JPG
- .GIF
Note: To share creatives to social networks, images can be saved to an affiliate's computer and uploaded to the appropriate social network. On some social networks (e.g. Facebook) the HTML code provided by creatives cannot be used directly inside the share box. In these instances, the affiliate can simply use the text and link where appropriate. For more, see. | https://docs.affiliatewp.com/article/1071-best-practices-for-affiliate-creatives-banners-and-brand-assets | 2021-09-17T04:02:05 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.affiliatewp.com |
Change a Document's Shard Key Value¶
Starting in MongoDB 4.2, you can update a document's shard key value
unless the shard key field is the immutable
_id field.
Important
When updating the shard key value
- You must be on a
mongos. Do not issue the operation directly on the shard.
- You must run either in a transaction or as a retryable write.
- You must include an equality condition on the full shard key in the query filter. For example, consider a
messagescollection that uses
{ activityid: 1, userid : 1 }as the shard key. To update the shard key value for a document, you must include
activityid: <value>, userid: <value>in the query filter. You can include additional fields in the query as appropriate.
See also the specific write command/methods for additional operation-specific requirements when run against a sharded collection.
To update a shard key value, use the following operations:
Warning
Starting in version 4.4, documents in sharded collections can be missing the shard key fields. Take precaution to avoid accidentally removing the shard key when changing a document's shard key value.
Example¶
Consider a
sales collection which is sharded on the
location
field. The collection contains a document with the
_id
12345 and the
location
"". To update the field value for
this document, you can run the following command:
Tip
See also:
Give Feedback | https://docs.mongodb.com/v5.0/core/sharding-change-shard-key-value/ | 2021-09-17T04:11:21 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.mongodb.com |
Before you begin¶
Have a copy of the HiChIP scripts on your machine:¶
Clone this repository:
git clone
And make the
enrichment_stats.sh script executable:
chmod +x ./HiChiP/enrichment_stats.sh
Dependencies¶
Make sure that the following dependencies are installed:
If you are facing any issues with the installation of any of the dependencies, please contact the supporter of the relevant package.
python3 and pip3 are required, if you don’t already have them installed, you will need sudo privileges.
Update and install python3 and pip3:
sudo apt-get update sudo apt-get install python3 python3-pip
To set python3 and pip3 as primary alternative:
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1 sudo update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1
If you are working on a new machine and don’t have the dependencies, you can use the
installDep.sh script in this repository for updating your instance and installing the dependencies and python3. This process will take approximately 10’ and requires sudo privileges. The script was tested on Ubuntu 18.04 with the latest version as of 04/11/2020
If you choose to run the provided installation script you will first need to set the permission to the file:
chmod +x ./HiChiP/installDep.sh
And then run the installation script:
./HiChiP/installDep.sh
Remember!
Once the installation is completed, sign off and then sign back to your instance to refresh the database of applications.
Input files¶
For this tutorial you will need:
fastq files R1 and R2, either fastq or fastq.gz are acceptable
reference in a fasta file format, e.g. hg38
peak calls from ChiP-seq experiment (e.g. your own experiment or ENCODE gold standard in bed or narrowpeak format, as explained here), more details and links to ENCODE files can be found here.
If you don’t already have your own input files or want to run a test on a small data set, you can download sample fastq files from the HiChIP Data Sets section. The 2M data set is suitable for a quick testing of the instructions in this tutorial.
The following files are suitable for testing, you can download them as follows:
wget wget wget
For zipped bed files, unzip them after download is completed (no need to unzip fastq.gz files)
Example:
gunzip ENCFF017XLW.bed.gz | https://hichip.readthedocs.io/en/latest/before_you_begin.html | 2021-09-17T04:02:26 | CC-MAIN-2021-39 | 1631780054023.35 | [] | hichip.readthedocs.io |
Running in GCE¶
After setting up Toil on Installation, Toil scripts can be run just by designating a job store location as shown in Running a basic workflow.
Note
Google Cloud Storage is available in Toil for experimental purposes. Only AWS is currently supported in Toil.
If you wish to use the Google Storage job store, install Toil with the
.boto with your
credentials and some configuration:
[Credentials] gs_access_key_id = KEY_ID gs_secret_access_key = SECRET_KEY [Boto] https_validate_certificates = True [GSUtil] content_language = en default_api_version = 2
gs_access_key_id and
gs_secret_access_key can be generated by navigating
to your Google Cloud Storage console and clicking on Settings. On
the Settings page, navigate to the Interoperability tab and click Enable
interoperability access. On this page you can now click Create a new key to
generate an access key and a matching secret. Insert these into their
respective places in the
.boto file and you will be able to use a Google
job store when invoking a Toil script, as in the following example:
$ python HelloWorld.py google:projectID:jobStore
The
projectID component of the job store argument above refers your Google
Cloud Project ID in the Google Cloud Console, and will be visible in the
console’s banner at the top of the screen. The
jobStore component is a name
of your choosing that you will use to refer to this job store. | https://toil.readthedocs.io/en/3.10.0/running/gce.html | 2021-09-17T04:17:37 | CC-MAIN-2021-39 | 1631780054023.35 | [] | toil.readthedocs.io |
Contributing¶
This page will go over the process for contributing to the TOM Toolkit.
Contributing Code/Documentation¶
If you’re interested in contributing code to the project, thank you! For those unfamiliar with the process of contributing to an open-source project, you may want to read through Github’s own short informational section on how to submit a contribution.
Identifying a starting point¶
The best place to begin contributing is by first looking at the Github issues page, to see what’s currently needed. Issues that don’t require much familiarity with the TOM Toolkit will be tagged appropriately.
Familiarizing yourself with Git¶
If you are not familiar with git, we encourage you to briefly look at the Git Basics page.
Git Workflow¶
The workflow for submitting a code change is, more or less, the following:
Fork the TOM Toolkit repository to your own Github account.
Clone the forked repository to your local working machine.
git clone [email protected]:<Your Username>/tom_base.git
Add the original “upstream” repository as a remote.
git remote add upstream
Ensure that you’re synchronizing your repository with the “upstream” one relatively frequently.
git fetch upstream git merge upstream/main
Create and checkout a branch for your changes (see Branch Naming).
git checkout -b <New Branch Name>
Commit frequently, and push your changes to Github. Be sure to merge main in before submitting your pull request.
git push origin <Branch Name>
When your code is complete and tested, create a pull request from the upstream TOM Toolkit repository.
Be sure to click “compare across forks” in order to see your branch!
We may ask for some updates to your pull request, so revise as necessary and push when revisions are complete. This will automatically update your pull request.
Branch Naming¶
Branch names should be prefixed with the purpose of the branch, be it a bugfix or an enhancement, along with a descriptive title for the branch.
bugfix/fix-typo-target-detail feature/reticulating-splines enhancement/refactor-planning-tool
Code Style¶
We recommend that you use a linter, as all pull requests must pass a
flake8 check. We also recommend configuring your editor to
automatically remove trailing whitespace, add newlines on save, and
other such helpful style corrections. You can check if your styling will
meet standards before submitting a pull request by doing a
pip install flake8 and running the same command our Github Actions
build does:
flake8 tom_* --exclude=*/migrations/* --max-line-length=120 | https://tom-toolkit.readthedocs.io/en/stable/introduction/contributing.html | 2021-09-17T04:20:13 | CC-MAIN-2021-39 | 1631780054023.35 | [] | tom-toolkit.readthedocs.io |
Unit testing¶
Introduction¶
Many aspects in the design flow of modern digital hardware design can be viewed as a special kind of software development. From that viewpoint, it is a natural question whether advances in software design techniques can not also be applied to hardware design.
One software design approach that deserves attention is Extreme Programming (XP). It is a fascinating set of techniques and guidelines that often seems to go against the conventional wisdom. On other occasions, XP just seems to emphasize the common sense, which doesn’t always coincide with common practice. For example, XP stresses the importance of normal workweeks, if we are to have the fresh mind needed for good software development.
It is not my intention nor qualification to present a tutorial on Extreme Programming. Instead, in this section I will highlight one XP concept which I think is very relevant to hardware design: the importance and methodology of unit testing.
The importance of unit tests¶
Unit testing is one of the corner stones of Extreme Programming. Other XP concepts, such as collective ownership of code and continuous refinement, are only possible by having unit tests. Moreover, XP emphasizes that writing unit tests should be automated, that they should test everything in every class, and that they should run perfectly all the time.
I believe that these concepts apply directly to hardware design. In addition, unit tests are a way to manage simulation time. For example, a state machine that runs very slowly on infrequent events may be impossible to verify at the system level, even on the fastest simulator. On the other hand, it may be easy to verify it exhaustively in a unit test, even on the slowest simulator.
It is clear that unit tests have compelling advantages. On the other hand, if we
need to test everything, we have to write lots of unit tests. So it should be
easy and pleasant to create, manage and run them. Therefore, XP emphasizes the
need for a unit test framework that supports these tasks. In this chapter, we
will explore the use of the
unittest module from the standard Python library
for creating unit tests for hardware designs.
Unit test development¶
In this section, we will informally explore the application of unit test techniques to hardware design. We will do so by a (small) example: testing a binary to Gray encoder as introduced in section Bit indexing.
Defining the requirements¶
We start by defining the requirements. For a Gray encoder, we want to the output
to comply with Gray code characteristics. Let’s define a code as a list
of codewords, where a codeword is a bit string. A code of order
n has
2**n codewords.
A well-known characteristic is the one that Gray codes are all about:
Consecutive codewords in a Gray code should differ in a single bit.
Is this sufficient? Not quite: suppose for example that an implementation returns the lsb of each binary input. This would comply with the requirement, but is obviously not what we want. Also, we don’t want the bit width of Gray codewords to exceed the bit width of the binary codewords.
Each codeword in a Gray code of order n must occur exactly once in the binary code of the same order.
With the requirements written down we can proceed.
Writing the test first¶
A fascinating guideline in the XP world is to write the unit test first. That is, before implementing something, first write the test that will verify it. This seems to go against our natural inclination, and certainly against common practices. Many engineers like to implement first and think about verification afterwards.
But if you think about it, it makes a lot of sense to deal with verification first. Verification is about the requirements only — so your thoughts are not yet cluttered with implementation details. The unit tests are an executable description of the requirements, so they will be better understood and it will be very clear what needs to be done. Consequently, the implementation should go smoother. Perhaps most importantly, the test is available when you are done implementing, and can be run anytime by anybody to verify changes.
Python has a standard
unittest module that facilitates writing, managing and
running unit tests. With
unittest, a test case is written by creating a
class that inherits from
unittest.TestCase. Individual tests are created by
methods of that class: all method names that start with
test are considered
to be tests of the test case.
We will define a test case for the Gray code properties, and then write a test for each of the requirements. The outline of the test case class is as follows:
import unittest class TestGrayCodeProperties(unittest.TestCase): def testSingleBitChange(self): """Check that only one bit changes in successive codewords.""" .... def testUniqueCodeWords(self): """Check that all codewords occur exactly once.""" ....
Each method will be a small test bench that tests a single requirement. To write the tests, we don’t need an implementation of the Gray encoder, but we do need the interface of the design. We can specify this by a dummy implementation, as follows:
from myhdl import block @block def bin2gray(B, G): # DUMMY PLACEHOLDER """ Gray encoder. B -- binary input G -- Gray encoded output """ pass
For the first requirement, we will test all consecutive input numbers, and
compare the current output with the previous one For each input, we check that
the difference is exactly a single bit. For the second requirement, we will test
all input numbers and put the result in a list. The requirement implies that if
we sort the result list, we should get a range of numbers. For both
requirements, we will test all Gray codes up to a certain order
MAX_WIDTH.
The test code looks as follows:
import unittest from myhdl import Simulation, Signal, delay, intbv, bin from bin2gray import bin2gray MAX_WIDTH = 11 class TestGrayCodeProperties(unittest.TestCase): def testSingleBitChange(self): """Check that only one bit changes in successive codewords.""" def test(B, G): w = len(B) G_Z = Signal(intbv(0)[w:]) B.next = intbv(0) yield delay(10) for i in range(1, 2**w): G_Z.next = G B.next = intbv(i) yield delay(10) diffcode = bin(G ^ G_Z) self.assertEqual(diffcode.count('1'), 1) self.runTests(test) def testUniqueCodeWords(self): """Check that all codewords occur exactly once.""" def test(B, G): w = len(B) actual = [] for i in range(2**w): B.next = intbv(i) yield delay(10) actual.append(int(G)) actual.sort() expected = list(range(2**w)) self.assertEqual(actual, expected) self.runTests(test) def runTests(self, test): """Helper method to run the actual tests.""" for w in range(1, MAX_WIDTH): B = Signal(intbv(0)[w:]) G = Signal(intbv(0)[w:]) dut = bin2gray(B, G) check = test(B, G) sim = Simulation(dut, check) sim.run(quiet=1) if __name__ == '__main__': unittest.main(verbosity=2)
Note how the actual check is performed by a
self.assertEqual method, defined
by the
unittest.TestCase class. Also, we have factored out running the
tests for all Gray codes in a separate method
runTests.
Test-driven implementation¶
With the test written, we begin with the implementation. For illustration purposes, we will intentionally write some incorrect implementations to see how the test behaves.
The easiest way to run tests defined with the
unittest framework, is to put
a call to its
main method at the end of the test module:
unittest.main()
Let’s run the test using the dummy Gray encoder shown earlier:
% python test_gray_properties.py testSingleBitChange (__main__.TestGrayCodeProperties) Check that only one bit changes in successive codewords. ... ERROR testUniqueCodeWords (__main__.TestGrayCodeProperties) Check that all codewords occur exactly once. ... ERROR
As expected, this fails completely. Let us try an incorrect implementation, that puts the lsb of in the input on the output:
from myhdl import block, always_comb @block def bin2gray(B, G): # INCORRECT IMPLEMENTATION """ Gray encoder. B -- binary input G -- Gray encoded output """ @always_comb def logic(): G.next = B[0] return logic
Running the test produces:. ... FAIL ====================================================================== FAIL: testUniqueCodeWords (__main__.TestGrayCodeProperties) Check that all codewords occur exactly once. ---------------------------------------------------------------------- Traceback (most recent call last): File "test_gray_properties.py", line 42, in testUniqueCodeWords self.runTests(test) File "test_gray_properties.py", line 53, in runTests sim.run(quiet=1) File "/home/jand/dev/myhdl/myhdl/_Simulation.py", line 154, in run waiter.next(waiters, actives, exc) File "/home/jand/dev/myhdl/myhdl/_Waiter.py", line 127, in next clause = next(self.generator) File "test_gray_properties.py", line 40, in test self.assertEqual(actual, expected) AssertionError: Lists differ: [0, 0, 1, 1] != [0, 1, 2, 3] First differing element 1: 0 1 - [0, 0, 1, 1] + [0, 1, 2, 3] ---------------------------------------------------------------------- Ran 2 tests in 0.083s FAILED (failures=1)
Now the test passes the first requirement, as expected, but fails the second one. After the test feedback, a full traceback is shown that can help to debug the test output.
Finally, we use a correct implementation:
from myhdl import block, always_comb @block def bin2gray(B, G): """ Gray encoder. B -- binary input G -- Gray encoded output """ @always_comb def logic(): G.next = (B>>1) ^ B return logic
Now the tests pass:
$. ... ok ---------------------------------------------------------------------- Ran 2 tests in 0.204s OK
Additional requirements¶
In the previous section, we concentrated on the general requirements of a Gray code. It is possible to specify these without specifying the actual code. It is easy to see that there are several codes that satisfy these requirements. In good XP style, we only tested the requirements and nothing more.
It may be that more control is needed. For example, the requirement may be for a particular code, instead of compliance with general properties. As an illustration, we will show how to test for the original Gray code, which is one specific instance that satisfies the requirements of the previous section. In this particular case, this test will actually be easier than the previous one.
We denote the original Gray code of order
n as
Ln. Some examples:
L1 = ['0', '1'] L2 = ['00', '01', '11', '10'] L3 = ['000', '001', '011', '010', '110', '111', '101', 100']
It is possible to specify these codes by a recursive algorithm, as follows:
- L1 = [‘0’, ‘1’]
- Ln+1 can be obtained from Ln as follows. Create a new code Ln0 by prefixing all codewords of Ln with ‘0’. Create another new code Ln1 by prefixing all codewords of Ln with ‘1’, and reversing their order. Ln+1 is the concatenation of Ln0 and Ln1.
Python is well-known for its elegant algorithmic descriptions, and this is a good example. We can write the algorithm in Python as follows:
def nextLn(Ln): """ Return Gray code Ln+1, given Ln. """ Ln0 = ['0' + codeword for codeword in Ln] Ln1 = ['1' + codeword for codeword in Ln] Ln1.reverse() return Ln0 + Ln1
The code
['0' + codeword for ...] is called a list comprehension. It
is a concise way to describe lists built by short computations in a for loop.
The requirement is now that the output code matches the expected code Ln. We use
the
nextLn function to compute the expected result. The new test case code
is as follows:
import unittest from myhdl import Simulation, Signal, delay, intbv, bin from bin2gray import bin2gray from next_gray_code import nextLn MAX_WIDTH = 11 class TestOriginalGrayCode(unittest.TestCase): def testOriginalGrayCode(self): """Check that the code is an original Gray code.""" Rn = [] def stimulus(B, G, n): for i in range(2**n): B.next = intbv(i) yield delay(10) Rn.append(bin(G, width=n)) Ln = ['0', '1'] # n == 1 for w in range(2, MAX_WIDTH): Ln = nextLn(Ln) del Rn[:] B = Signal(intbv(0)[w:]) G = Signal(intbv(0)[w:]) dut = bin2gray(B, G) stim = stimulus(B, G, w) sim = Simulation(dut, stim) sim.run(quiet=1) self.assertEqual(Ln, Rn) if __name__ == '__main__': unittest.main(verbosity=2)
As it happens, our implementation is apparently an original Gray code:
$ python test_gray_original.py testOriginalGrayCode (__main__.TestOriginalGrayCode) Check that the code is an original Gray code. ... ok ---------------------------------------------------------------------- Ran 1 test in 0.099s OK | http://docs.myhdl.org/en/stable/manual/unittest.html | 2021-09-17T04:19:08 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.myhdl.org |
Theme Options
Angelline supports the awesome Theme Customizer. You can configure all theme settings on your WordPress Administration Panels → Appearance → Customize
Header¶
In this section, you will find all options to change your header including: Header layout (5 types), edit Slogan, change Logo, set Sticky menu, set effects and backgound, etc
Footer¶
Go to Customize → Footer, you can change Copyright and Footer Logo
Home layout¶
Buzzo supports up to 9 beautiful layouts for Homepage, category page and Archive page. You just choose the layout you prefer: Standard, Standard with Related, List, List (1st Post Large), Grid, Grid (1st Post large), Mansory, Modern and Parallax
Note: This setting is generally applied. If you want more specific setting, you can go to Post > categories
Sidebar¶
Angelline allows changing sidebar to left, right or no sidebar in Customize → Sidebar. You can also override these settings in Advanced display settings when editing Post/Page
Note: If you want display a specific layout for specific post, enable setting Customize → Blog → Enable advanced layout. Then each post can be display with Advanced layout setting in Edit post page.
Feature¶
Here you can display feature content in homepage/categories page/archive page/single page and all pages with detail settings for layout.
Single Post¶
Here to display some options for single post like Social Sharing, Author Bio, etc
Social Network¶
Add you social accounts here. 13 most popular social networks are supported
Color Scheme¶
Change theme basic color here to best suit your needs | https://docs.awethemes.com/angelline/theme-options.html | 2021-09-17T04:52:47 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['images/header.png', None], dtype=object)
array(['images/footer.png', None], dtype=object)
array(['images/layout.png', None], dtype=object)
array(['images/feature0.png', None], dtype=object)
array(['images/feature.png', None], dtype=object)] | docs.awethemes.com |
Date: Wed, 16 Jan 2008 20:13:07 +0100 (CET) From: Wojciech Puchar <[email protected]> To: FreeBSD User <[email protected]> Cc: [email protected] Subject: Re: When is 7.0 being released? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
> Hello, > > Does anybody have an idea when 7.0 will be released? It looks like the > schedule hasn't been updated, and it was scheduled for January 14th. > > Where can I find additional information? when it will be ready, stable and tested. if you need to have "the latest" NOW, consider installing -current. you will help developers then, reporting any bugs that may happen
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=713292+0+archive/2008/freebsd-questions/20080120.freebsd-questions | 2021-09-17T04:36:23 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.freebsd.org |
Command asm
Asm, typically invoked as “go tool asm”, assembles the source file into an object file named for the basename of the argument source file with a .o suffix. The object file can then be combined with other objects into a package archive.
Command Line
Usage:
go tool asm [flags] file
The specified file must be a Go assembly file. The same assembler is used for all target operating systems and architectures. The GOOS and GOARCH environment variables set the desired target.
Flags:
-D value predefined symbol with optional simple value -D=identifier=value; can be set multiple times -I value include directory; can be set multiple times -S print assembly and machine code -debug dump instructions as they are parsed -dynlink support references to Go symbols defined in other shared libraries -o string output file; default foo.o for /a/b/c/foo.s -shared generate code that can be linked into a shared library -trimpath string remove prefix from recorded source file paths
Input language:
The assembler uses mostly the same syntax for all architectures, the main variation having to do with addressing modes. Input is run through a simplified C preprocessor that implements #include, #define, #ifdef/endif, but not #if or ##.
For more information, see. | http://docs.activestate.com/activego/1.8/cmd/asm/index.html | 2019-02-15T22:07:27 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.activestate.com |
NetworkAclEntry
Describes an entry in a network ACL.
Contents
- cidrBlock
The IPv4 network range to allow or deny, in CIDR notation.
Type: String
Required: No
- egress
Indicates whether the rule is an egress rule (applied to traffic leaving the subnet).
Type: Boolean
Required: No
- icmpTypeCode
ICMP protocol: The ICMP type and code.
Type: IcmpTypeCode object
Required: No
- ipv6CidrBlock
The IPv6 network range to allow or deny, in CIDR notation.
Type: String
Required: No
- portRange
TCP or UDP protocols: The range of ports the rule applies to.
Required: No
- protocol
The protocol number. A value of "-1" means all protocols.
Type: String
Required: No
- ruleAction
Indicates whether to allow or deny the traffic that matches the rule.
Type: String
Valid Values:
allow | deny
Required: No
- ruleNumber
The rule number for the entry. ACL entries are processed in ascending order by rule number.
Type: Integer
Required: No
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_NetworkAclEntry.html | 2019-02-15T21:26:53 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.aws.amazon.com |
Titanium SDK Animation Objective. // box is a Ti.UI.View, when clicked, it fades out of view in 2 seconds, then fades back into view box.addEventListener('click', function(){ box.animate({ opacity: 0, duration: 2000 }, function(){ box.animate({ opacity: 1, duration: 2000 }); }); }); Alternatively, you could use Animation objects to do the same thing: var a = Ti.UI.createAnimation({ opacity: 0, duration: 2000 }); var b = Ti.UI.createAnimation({ opacity: 1, duration: 2000 }); box.addEventListener('click', function(){ box.animate(a, function(){ box.animate(b); }); });. var matrix2d = Ti.UI.create2DMatrix(); matrix2d = matrix2d.rotate(20); // in degrees matrix2d = matrix2d.scale(1.5); // scale to 1.5 times original size var a = Ti.UI.createAnimation({ transform: matrix2d, duration: 2000, autoreverse: true, repeat: 3 }); box.animate(a); // set the animation in motion. var matrix3d = Ti.UI.create3DMatrix(); // In next statement, the first arg is in degrees and the next three define an xyz vector describing the transformation matrix3d = matrix3d.rotate(180, 1, 1, 0); matrix3d = matrix3d.scale(2, 2, 2); // scale factor in the xyz axes var a = Ti.UI.createAnimation({ transform: matrix3d, duration: 2000, autoreverse: true, repeat: 3 }); box.animate(a); // set the animation in motion. // container is a View to which box1 and box2 are added var selectedIndex = 0; container.addEventListener('click', function(){ if (selectedIndex%2 == 0) { container.animate({ view: box2, transition:Ti.UI.iPhone.AnimationStyle.FLIP_FROM_LEFT }); } else { container.animate({ view: box1, transition:Ti.UI.iPhone.AnimationStyle.FLIP_FROM_RIGHT }); } selectedIndex++; }); MatrixTransformations from Western Kentucky University (lots of math!) Rotation Matrices. Related Links | https://docs.axway.com/bundle/Titanium_SDK_allOS_en/page/animation.html | 2019-02-15T21:43:05 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.axway.com |
a Dask bag. Instead, use Dask Bag to load your data. This parallelizes the loading step and reduces inter-worker communication:
>>> b = db.from_sequence(['1.dat', '2.dat', ...]).map(load_from_filename)
db.read_text¶
Dask Bag can load data directly from text files. You can pass either a single file name, a list of file names, or a globstring. The resulting bag will have one item per line and the file name extension, or by using the
compression='gzip' keyword:
>>> b = db.read_text('myfile.*.txt.gz')
The resulting items in the bag are strings. If you have encoded data like line-delimited.read_avro¶
Dask Bag can read binary files in the Avro format if fastavro is installed. A bag can be made from one or more files, with optional chunking within files. The resulting bag will have one item per Avro record, which will be a dictionary of the form given by the Avro schema. There will be at least one partition per input file:
>>> b = db.read_avro('datafile.avro') >>> b = db.read_avro('data.*.avro')
By default, Dask will split data files into chunks of approximately
blocksize
bytes in size. The actual blocks you would get depend on the internal blocking
of the file.
For files that are compressed after creation (this is not the same as the internal “codec” used by Avro), no chunking should be used, and there will be exactly one partition per file:
> b = bd.read_avro('compressed.*.avro.gz', blocksize=None, compression='gzip')
db.from_delayed¶
You can construct a Dask bag from dask.delayed values using the
db.from_delayed function. For more information, see
documentation on using dask.delayed with collections.
Store Dask Bags¶
In Memory¶
You can convert a Dask bag to a list or Python iterable by calling
compute()
or by converting the object into a list:
>>> result = b.compute() or >>> result = list(b)
To Text Files¶
You can convert a Dask bag into a sequence of files on disk by calling the
.to_textfiles() method:
dask.bag.core.
to_textfiles(b,.
To Avro¶
Dask bags can be written directly to Avro binary format using fastavro. One file
will be written per bag partition. This requires the user to provide a fully-specified
schema dictionary (see the docstring of the
.to_avro() method).
dask.bag.avro.
to_avroFrames¶
You can convert a Dask bag into a Dask DataFrame and use those storage solutions.
Bag. Values¶
You can convert a Dask bag into a list of Dask delayed values and custom storage solutions from there. | https://docs.dask.org/en/latest/bag-creation.html | 2019-02-15T21:48:35 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.dask.org |
FAQs
Q: I have noticed a large number of files created in the folder “/Portals/0/Cache/MyTokens/Razor/”. What’s up with these?
A: Razor scripts are compiled into .NET assemblies dynamically and placed into this folder. My Tokens compiles a .NET assembly for every script. But notice that if you use tokens inside razor scripts, don’t use the square bracket syntax, e.g. [User:UserID]. This is merely string replacement that happens before the script is compiled. So what happens is that My Tokens will create an assembly for each User ID. That’s why you might end up with lots of assemblies. The solution is to switch to real variables, e.g. @User.UserID.
Q: I get a “A critical error has occurred” after I add the My Tokens module to a page
A: In case the URL contains the following “error=Page+Exists+in+portal”
Go to Admin - Page Management
Under the Admin Pages delete the “My Tokens” page.
Go to Admin - Recycle Bin
Under Pages - Empty Recycle Bin
Q: Can I create database tokens that connect to non-DNN databases?
A: While creating a database token you get the chance to specify a connection string (or the name of a connection string defined in web.config), making it possible to connect to other databases. If specified through web.config, My Tokens will also use the provider, so it’s possible to connect to non-SQL databases as well.
Q: How can I clear token cache programatically?
A: Using DNN API, you can clear the cache through the provider like this:
DotNetNuke.Services.Cache.CachingProvider.Instance().Remove("avt.MyTokens")
Or using My Tokens API:
avt.MyTokens.MyTokensController.ClearAllCache()
Q: How to delete file Razor code
A: Here is some C# code for MyTokens Razor scripts to delete a file by FileId.
int fileID = [TknParams:FileID]; DotNetNuke.Services.FileSystem.FileManager.Instance.DeleteFile(DotNetNuke.Services.FileSystem.FileManager.Instance.GetFile(fileID)); }
Q: Executable tokens with Razor script
A: Scenario: you need to display different paragraphs in your HTML module based on tokens that are “Execute type”; use these tokens in Razor script inside the HTML module to conditionally display content based on token values.
There are 2 solutions to this:
- Using Razor scripts (which can call your code from assembly directly)
- Implementing IPropertySource interface from DNN and placing a config file under
/DesktopModules/avt.MyTokens/Config.
The first option provides the most flexibility because you can reference your assembly and have real objects.
A video on how to extend MyTokens with new property sources is available here
If you’re using Razor scripts, you simply add a reference to your assembly and then write your C# code in the script body. Example
Q: What is the String Replace Token?
A: The syntax is
[String:Replace(Input=<text>, Match=<text>, Replacement=<text>)]. For example, when you use
[String:Replace(Input=1 1 1 8 3 1, Match=1, Replacement=3)] 1 will be replaced by 3 . The result will be 3 3 3 8 3 3.
Q: Implement localized tokens
A: Scenario: you would like to implement custom database tokens such that the value they render varies depending on the culture code of the page in a localized site. In other words, if we had a token
[mysite:greeting] on the English page, it would replace with “Hello” and on the Spanish version of that page, the same token would render as “Hola”. For example, if there was some way that the page could automatically submit the culture code in use as a parameter, etc.
Solution:
There are 2 options: Razor tokens or SQL tokens.
For Razor, you can do something like
@if (Request.QueryString["language"] == "en-us") { <text>show this</text> } else { <text>show that</text> }
There are many ways to get current language such as
System.Threading.Thread.CurrentThread.CurrentUICulture or using the DNN APIs.
For SQL, you’d build a translation table, then do something like
select TranslatedText from Table where Key="Hello" and Language="[QueryString:language]".
Q: Class for active tab
A: If you would like to style the active tab in CSS, do note that the current tab will get the “active” class.
Q: I’m trying to grab this complete URL from a page, and it only gets it up to the query string The part after # doesn’t get to the server side, that’s why My Tokens doesn’t see it. More info at en.wikipedia.org/wiki/Fragment_identifier.
A: “The fragment identifier functions differently than the rest of the URI: namely, its processing is exclusively client-side with no participation from the web server.”
Q: How to use tokens to display date
A: There are several tokens which you can use to display the date:
To extract only the day, you can use:
[String:Substring(Input="25 feb 2015",Start=0,Length=2)]
or this token:
[String:RegexMatch(Input="25 feb 2015",Match="\d{2}")]
To display something like: ”this is a two-day course and is held 1 feb 2015 - 25 feb 2015”, you can use:
[String:Replace(Input="25 feb 2015", Match="25", Replacement="01")] - 25 feb 2015
or
[String:RegexReplace(Input="25 feb 2015", Match="^\d{2}", Replacement="01")] - 25 feb 2015
Q: How to check for the latest Ventrian articles via token
A: If you want to send your users the latest news articles from your Ventrian News Article module you can easily do that with My-Tokens. Copy the xml below and import in myTokens. It will generate a token to select all approved news items of the last 7 days. And it will generate a razorscript to list that items.
<namespaces> <ns> <name>News</name> <desc /> <allPortals>false</allPortals> <tokens> <tkn> <name>ListLaatsteNews</name> <desc /> <src><src><type>razor</type><lang>csharp</lang><script>@foreach (var Newsbericht in News.selecteerafgelopenweek) { <h2> @Newsbericht.Title </h2> <text>geplaatst op:</text> @Newsbericht.StartDate <br><a href=" @Newsbericht.NewsURL ">link naar dit bericht</a></text> <hr> }</script><assemblies></assemblies></src></src> <parser /> <default /> <cacheTime>0</cacheTime> <cacheLayer>Global</cacheLayer> <server>-1</server> </tkn> <tkn> <name>selecteerafgelopenweek</name> <desc /> <src><src><type>db</type><syncTknId>-1</syncTknId><connStr></connStr><sql>SELECT TOP (1000) ArticleID, Title, IsApproved, StartDate, CONCAT('//YOURDOMAINNAMEHERE.COM/News?id=',ArticleID) AS NewsURL FROM dbo.DnnForge_NewsArticles_Article WHERE (IsApproved = 1) AND (StartDate > (GETDATE()-7))</sql><cols>ArticleID Title IsApproved StartDate NewsURL</cols><col></col></src></src> <parser><parser><type>string</type><replace>true</replace><decodeHtml>true</decodeHtml></parser></parser> <default /> <cacheTime>0</cacheTime> <cacheLayer>Global</cacheLayer> <server>-1</server> </tkn> </tokens> </ns> </namespaces> | https://docs.dnnsharp.com/my-tokens/overview/faq.html | 2019-02-15T21:12:53 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['../assets/faq 1.png', None], dtype=object)] | docs.dnnsharp.com |
Configuration file¶
kiskadee uses a configuration file to define several parameters used in execution time. Database configuration, and some fetchers metadata are defined in this configuration file. When a new fetcher is added to kiskadee, a new entry in the configuration file is needed. When the new fetcher inherits from kiskadee.fetchers.Fetcher, its configurations are available in its config attribute.
kiskadee looks for the configuration file under /etc/kiskadee.conf.
This example shows a simple entry to the example fetcher.
[example_fetcher] target = example description = SAMATE Juliet test suite analyzers = cppcheck flawfinder active = yes
The field active tells kiskadee if this fetcher must be loaded when kiskadee starts. Variables to database conection are also defined in the configuration file. The field analyzers defines which analyzers will be used to run the analysis on the packages monitored by this fetcher. Check the list of analyzers supported by kiskadee for more information. | https://docs.pagure.org/kiskadee/config_file.html | 2019-02-15T21:38:06 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.pagure.org |
Auto-search mode allows you to search column data.
To enable or disable the Auto-Search mode, do any of the following:
To search for the data, select a cell in a column, and type first letters of the required data. If you made a mistake when typing, press BACKSPACE and type the string again.
To navigate to the next entry, press CTRL+DOWN keys. To return to the previous entry, press CTRL+UP. The entries will be highlighted. | https://docs.devart.com/documenter-for-oracle/working-with-data-in-data-editor/auto-search-mode.html | 2019-02-15T20:45:28 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.devart.com |
Action Form
One of the main use cases of the Action Grid module is to manage data submitted via Action Form. This includes querying, editing and deleting data. The integration is seamless. Simply add an Action Grid module to a page and select one of the Action Form modules to connect to. If you are interested how this works, read more below.
This integration was straightforward because Action Form saves all data in an internal reports table. In the past, this used to be an action that was added just like the rest of the actions. But then other functionalities were built on top of it such as the auto save or multistage submits (for example the PayPal action). In recent versions of Action Form you have the option to choose which fields are saved in the reports table. Be aware that only the included fields will appear in the Action Grid module.
But the internal reports table is not optimized for querying such as free text search, sorting or filtering. Because of this, Action Grid implements a Lucene index where data is cached and optimized accordingly. This index is kept in sync with the Action Form internal reports table using an incremental algorithm. Note that this is an abstract class from which you can derive to implement additional data sources. Read the section on Implementing Custom Data Sources for more information.
Once an Action Form module is selected as data source, the admin screen will load all fields from that form. The fields can’t be removed, but they can be turned off – and later back on if needed. While off, the fields won’t appear in the grid, but they can still be used in the actions which run on events on the server side. The Action Form data source will try to best determine each field’s capabilities. For example, it will automatically make dropdowns filterable. Each field can be expanded to configure its capabilities in terms of sorting, searching and filtering, thus overriding the defaults.
The field names are initialized with their respective titles from the form. The names can be freely edited. Action Grid shows the original field name in the list of fields in the admin screen inside brackets next to each title. It’s labeled as “ref:”
The Action Form data source links each record back to the original entry from the Action Form internal reports table. When clicking the edit button on a grid row, the browser gets redirected to the page where the form lives, passing the id of the entry via query string. Action Form sees this parameter, and loads associated data back into the form. It also knows that on submit, it should update existing record and not create a new one. The Action Grid also passes a return URL parameter that points to the page where the grid module lives. After submit, Action Form uses this to return back to the grid page. Note that if the form issues other redirects on submit, these will be dropped in favor of the return URL.
A similar workflow is in place for adding new records. Action Grid knows where the form lives and so it provides a button that when clicked redirects users to that page to submit new entries.
When deleting entries, the Action Form data source removes it both from the Lucene index and from the Action Form reports table. Following the same style as Action Form, the Action Grid module also provides a mechanism to specify a list of actions in response to an event – in this case On Delete and On Bulk Delete. Therefore it’s possible to add more logic around deleting entries. For example, when a lead is deleted also make an HTTP request to remove it from Mailchimp lists as well. So this becomes perfectly balance with the Action Form list of action that will be executed on insert. | https://docs.dnnsharp.com/action-grid/data-sources/action-form.html | 2019-02-15T21:11:37 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['../assets/action form ds.png', None], dtype=object)] | docs.dnnsharp.com |
DBus responder¶
Related ticket(s):
- Provide an experimental DBus responder to retrieve custom attributes from SSSD cache
- Extend the LDAP backend to retrieve extended set of attributes
Problem Statement¶
The contemporary centralized user databases such as IPA or Active Directory store many attributes that describe the user. Apart from attributes that are related to a “computer user” entry such as user name, login shell or an ID, the databases often store data about the physical user represented by the entry, such as telephone number. Since the SSSD already has means of connecting to the remote directory, including advanced features like offline support or fail over, it would appear as a natural choice for retrieving these attributes. However, the only interface the SSSD provides towards the system at the moment is the standard POSIX interface and a couple of ad-hoc application specific responders (sudo, ssh, autofs). The purpose of this document is to describe a design of a new responder, that would listen on the system bus and allow third party consumers to retrieve custom attributes stored in a centralized database via a DBus call.
The DBus interface design¶
This section gathers feedback expressed in mailing lists, private e-mail conversations and IRC discussions and summarizes feature requests and areas that need improvement into a design proposal of both the DBus API and several required changes in the core SSSD daemon.
Cached objects¶
D-Bus Interface: Cached Objects
Object exposed on the bus¶
Instead of a single interface returning object attributes in an LDAP-like way, the interface would be built in an object-oriented fashion. Each object (i.e. a user or a group) would be identified with an object path and methods would be available to the interface user to make it possible to retrieve either a single object or a set of object.
The interface will support users, groups and domains.
Representing users and groups on the bus¶
D-Bus Interface: Users and Groups
Representing SSSD processes on the bus¶
- object path: /org/freedesktop/sssd/infopipe/Components/monitor
- object path: /org/freedesktop/sssd/infopipe/Components/Responders/$responder_name
- object path: /org/freedesktop/sssd/infopipe/Components/Backends/$sssd_domain_name
- method org.freedesktop.sssd.infopipe.ListComponents()
- returns: Array of object paths representing component objects
- method org.freedesktop.sssd.infopipe.ListResponders()
- returns: Array of object paths representing component objects
- method org.freedesktop.sssd.infopipe.ListBackends()
- returns: Array of object paths representing component objects
- method org.freedesktop.sssd.infopipe.FindMonitor()
- returns: Object path representing the monitor object
- method org.freedesktop.sssd.infopipe.FindResponderByName(String name)
- name: The name of the responder to retrieve
- returns: Object path representing the responder object
- method org.freedesktop.sssd.infopipe.FindBackendByName(String name)
- name: The name of the backend to retrieve
- returns: Object path representing the backend object
The name “Components” is chosen to not imply any particular implementation on SSSD side.
The component objects implements org.freedesktop.sssd.infopipe.Components interface, which is define as:
- method org.freedesktop.sssd.infopipe.Components.Enable()
- returns: nothing
- note: changes will be visible after SSSD is restarted
- method org.freedesktop.sssd.infopipe.Components.Disable()
- returns: nothing
- note: changes will be visible after SSSD is restarted
- method org.freedesktop.sssd.infopipe.Components.ChangeDebugLevel(Uint32 debug_level)
- debug_level: Debug level to set
- returns: nothing
- note: changes will be permanent but do not require restart of the daemon
- property String name
- The name of this service.
- property Uint32 debug_level
- The name of this service.
- property Boolean enabled
- Whether the service is enabled or not
- property string type
- Type of the component. One of “monitor”, “responder”, “backend”.
This approach will completely distinguish SSSD processes from services and domains, which are logical units that should not contain any information about SSSD architecture.
Representing service objects on the bus¶
This API should include methods to represent service object(s) and provide basic information and configuration abilities.
- object path: /org/freedesktop/sssd/infopipe/Services/$service
- method org.freedesktop.sssd.infopipe.ListServices()
- returns: Array of object paths representing Service objects
- method org.freedesktop.sssd.infopipe.FindServiceByName(String name)
- name: The name of the service to retrieve
- returns: Object path representing the service object
The service object will in the first iteration include several properties describing the domain. As this iteration doesn’t allow any modification, only properties and not methods are considered:
- property String name
- The name of this service.
- service dependent properties
Other properties might be added upon request.
Representing domain objects on the bus¶
For some consumers (such as realmd), it’s important to also know the properties of a domain. The API should include methods to retrieve a active domain object(s) and represent the domains as objects on the bus as well.
- object path: /org/freedesktop/sssd/infopipe/Domains/$domain
The synopsis of these calls would look like:
- method org.freedesktop.sssd.infopipe.ListDomains()
- returns: Array of object paths representing Domain objects
- method org.freedesktop.sssd.infopipe.ListSubdomainsByDomain(String name)
- returns: Array of object paths representing Domain objects associated with domain $name
- method org.freedesktop.sssd.infopipe.FindDomainByName(String name)
- name: The name of the domain to retrieve
- returns: Object path representing the domain object
The domain object will in the first iteration include several properties describing the domain. As this iteration doesn’t allow any modification, only properties and not methods are considered:
- property String name
- The name of this domain. Same as the domain stanza in the sssd.conf
- property String[] primary_servers
- Array of primary servers associated with this domain
- property String[] backup_servers
- Array of backup servers associated with this domain
- property Uint32 min_id
- Minimum UID and GID value for this domain
- property Uint32 max_id
- Maximum UID and GID value for this domain
- property String realm
- The Kerberos realm this domain is configured with
- property String forest
- The domain forest this domain belongs to
- property String login_format
- The login format this domain expects.
- property String fully_qualified_name_format
- The format of fully qualified names this domain uses
- property Boolean enumerable
- Whether this domain can be enumerated or not
- property Boolean use_fully_qualified_names
- Whether this domain requires fully qualified names
- property Boolean subdomain
- Whether the domain is an autodiscovered subdomain or a user-defined domain
- property ObjectPath parent_domain
- Object path of a parent domain or empty string if this is a root domain
Other properties such as provider type or case sensitivity might be added upon request. Right now, we need something other developers can experiment with.
Synchronous getter behaviour¶
Retrieving a property with a getter will always be synchronous and return the value currently cached. The getter might schedule an out-of-band update depending on the state of the cache object. The primary reason for the getter being synchronous is to be able to be composable, in other words being able to call N getters in a loop and construct a reply message containing N properties without resorting to asynchronous updates of the properties.
Callers that with to have an up-to-date view of the properties should
update the object by calling a special
update (not included ATM)
method or subscribe to the PropertiesChanged interface.
SSSD daemon features¶
Apart from features that will directly benefit the new interface, the SSSD itself must adapt to some requirements as well.
Access control¶
The DBus responder needs to limit who can request information at all and what attributes can be returned.
Limiting access to the responder¶
The DBus responder will re-use the same mechanism the PAC responder uses where UIDs of clients that can contact the responder will be enumerated in the “allowed_uids” parameter of the responder configuration.
In a future enhancement, we might add a “self” mechanism, where client will be allowed to read its own attributes. As limiting attribute access might be different for this use-case, the first iteration of the responder will not include the “self” mechanism.
Limiting access to attributes¶
The responder will have a whitelist of attributes that the client can
query. No other attributes will be returned. Requesting an attribute
that is not permitted will yield an empty response, same as if the
attribute didn’t exist. The whitelist will include the standard set of
POSIX attributes as returned by i.e.
getpwnam by default.
The administrator will be allowed to extend the whitelist in sssd.conf
using a configuration directive either in the
[ifp] section itself
or per-domain. The configuration directive shall allow either explicitly
adding attributes to the whitelist (using
+attrname) or explicitly
remove them using
-attrname.
The following example illustrates explicitly allowing the telephoneNumber attribute to be queried and removing the gecos attribute from the whitelist.
[ifp] user_attributes = +telephoneNumer, -gecos
Support for non-POSIX users and groups¶
Currently the SSSD supports looking up POSIX users and groups, mostly due to the fact that primary consumers are POSIX interfaces such as the Name Service Switch. For instance, the search filters in back ends require the presence of attributes ID.
In contrast, users and groups that consumers of this new interface require often lack the POSIX attributes. The SSSD must be extended so that even non-POSIX users and groups are handled well.
Do not require enumeration to be enabled to retrieve set of users¶
At the moment, the SSSD can either fetch a single user (using getpwnam for example) or all available users (using getpwent). As an effect, all proposed DBus calls require enumeration to be switched on in order to be able to retrieve sets of users. The SSSD needs to either grow a way to retrieve several entries at once without enumerating or needs to make enumeration much faster. | https://docs.pagure.org/SSSD.sssd/design_pages/dbus_responder.html | 2019-02-15T22:11:59 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.pagure.org |
Developer Services Portal v. 4.17.01.
For the portal running on Apache:
> SSL SSL. | https://docs.apigee.com/private-cloud/v4.17.01/configuring-portal-use-https | 2019-02-15T21:55:54 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.apigee.com |
Use this topic to find the documentation related to the reviews subcommand.
The reviews command returns information about reviews that meet specified criteria.
The Reviews command will return reviews that meet all of the next arguments:
and at least one of the following:
For example, the specified arguments are project, title, and revision. The logical condition in this case is as follows project AND (title OR revision).
To see the common information about the Review Assistant command line, please refer to the overview topic.
The example below demonstrates how to retrieve reviews that were modified before specified date. | https://docs.devart.com/review-assistant/command-line-client/reviews-subcommand.html | 2019-02-15T21:18:08 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.devart.com |
Query API reference
Interana’s API gives you, users can now run queries outside of the Interana front end.
The Interana external API is a REST API that allows integration with Interana outside of the standard interface. The API is deployed automatically as part of an Interana cluster installation. The first version of the API provides basic functionality for single measurement and time series queries.
Single Measurement Queries
Single measurement queries are queries that return a single result set. An example of a single measurement query is a Table view query that extracts a site’s top users based on the count of user events. Only one table of results is returned for the given time range.
Time series queries are queries that return a result set for every data point in the query’s time range. In Interana Explorer, time series queries are rendered using the Time view.
Endpoints
The API currently supports one endpoint: query. The query endpoint allows the user to make queries against Interana. The URL path to the query endpoint is:
https://<cluster hostname>/api/v1/query
The query endpoint must be accessed with the GET http method.
Authentication and authorization
All requests to the API must be over SSL (https protocol).
The API uses a token based authentication model. Tokens can be created or revoked by Interana support, or you can generate your own API tokens. Tokens must be passed in the Authorization header of each request to the API in the following format:
Authorization: Token <token>
Every user account is authorized to make requests to the API, as long as they use a valid request token.
Requests
Requests to the query endpoint must be sent with the GET HTTP method. The required query parameter defines the query to be executed on Interana. It is a JSON object that is URL-encoded and passed as a parameter to the request.
Object format
The
query object has the following format:
query
"Type" refers to the JSON type of the property. See for more information.
queries
You can use the advanced filter syntax with the API to reference per-actor metrics in queries. The API does not support ratio metrics.
measure
order_by
Responses
The API will return the http status code 200 for successful requests to the query endpoint, along with a JSON object containing the results of the query.
Object format
The query result object format is:
results
columns
rows
row properties
Data types
The type property of column objects describes the type of data that will appear in each row. The following table describes the JSON format of the possible type values.
time_series
time_series properties
Examples: single measurement and time series
Single measurement queries
In this single measurement example, the query is looking for the number of unique userids, grouped by artist, from April 25, 2016 to April 30, 2016.
The same query can be executed in the API with the following request and response calls. Note that start and end times are specified as milliseconds since epoch and
timezone_offset is relative to GMT.
Request: single measurement
{ "dataset": "Music", "start": 1461567600000, "end": 1461999600000, "timezone_offset": -25200000, "queries": [ { "type": "single_measurement", "measure": { "aggregator": "unique_count", "column": "userId" }, "filter": "(`artist` != \"*null*\")" } ], "sampled": true, "group_by": ["artist"], "max_groups": 5, "compute_all_others": false }
Response: single measurement
{ "rows": [ {"values": [ [ "3 Doors Down"], 31456] }, {"values": [ [ "Justin Bieber"], 31336] }, {"values": [ [ "OneRepublic"], 31772] }, {"values": [ [ "Taylor Swift"], 30136] }, {"values": [ [ "The White Stripes"], 27036] } ], "columns": [ {"type": "array", "label": ["artist"] }, {"type": "number", "label": "measure_value"} ] }
Time series queries
If the single measurement query example used above is issued as a time series query, each x-axis point in the query time range returns a count of each user’s events for that given time window. In other words, the query returns a separate result set for each point in the x-axis.
The time series query in the example below looks for the number of events between April 29, 2016 12:00 am and April 29, 2016 12:00 pm.
The same query can be executed using the API with the request call below. You can also view the corresponding response. Note that the response has been abbreviated given the large number of results.
In the request call, the query start and end times are specified in milliseconds since epoch, and
timezone_offset, also specified in milliseconds, is relative to GMT. Finally, the sampled flag indicates whether to use sampling when running the query.
Request: time series
{ “dataset”: “Music”, "start": 1461913200000, "end": 1461956400000, "timezone_offset": -25200000, "queries": [ { "type": "time_series", "measure": {"aggregator": "count_star"}, } ], "sampled": true, }
Response: time series
{ "rows": [ {"values": [ [ "All"], [ {“timestamp”: 1461913200000, “properties”: {“object_count”: 25243, "event_count": 25243}, "value": 8414.333333333332}, ... {"timestamp": 1461934800000, "properties": {"object_count": 20977, "event_count": 20977}, "value": 6992.333333333333}, ... {"timestamp": 1461956400000, "properties": {"object_count": 27581, "event_count": 27581}, "value": 9193.666666666666} ... ... ] ], "properties": {"window": 43200000, "resolution": 21600000, "rate": "minute"} } ], "columns": [ {"type": "array", "label": ["result"] }, {"type": "time_series", "label": "measure_value"} ] }
Errors and retry
The API returns appropriate HTTP status codes for error cases and JSON objects containing error information.
Status Codes
The possible error status codes for the query endpoint are:
Object format
The JSON error object format is:
Examples
400 status
{ "error": "Invalid parameter", "message": "End time must be after start time", }
429 status
{ "error": "Request limit exceeded", "message": "The request limit of 1000 queries has been reached. This token can be used for requests on 2016-03-18 00:00:00" }
Retry
Some queries that time out in the API may be cached on the server. Retrying the API request can sometimes result in retrieving results successfully. We recommend limiting retry policies to a small number of retries to avoid excessive load on the server.
Request limits and throttling
By default, tokens are authorized to make 1 query per second and 1000 requests per day. Once that limit has been exceeded, requests using that token will be rejected with HTTP error 429 until the next day. Contact Interana customer support to request a limit increase.
Versioning
This is version 1 of the External API, indicated by the string “v1” in the URL path. This API may be expanded in future releases.
For more information
- See the Query API implementation guide to learn how to create a language-specific implementation that can be used for your custom scenario.
- See the Query API troubleshooting topic if you need help troubleshooting issues with API calls. | https://docs.interana.com/Guides/Reference/Query_API_reference | 2019-02-15T20:54:45 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['https://docs.interana.com/@api/deki/files/318/single_measurement_UI.png?revision=1&size=bestfit&width=640&height=390',
None], dtype=object)
array(['https://docs.interana.com/@api/deki/files/319/time_series_UI.png?revision=1&size=bestfit&width=640&height=316',
None], dtype=object) ] | docs.interana.com |
Image Viewer¶
Displays images that come with a data set.
Signals¶
Inputs:
Data
A data set with images.
Outputs:
Data
Images that come with the data.
Selected images
Images selected in the widget.
Description¶
The Image Viewer widget can display images from a data set, which are stored locally or on the internet. The widget will look for an attribute with type=image in the third header row. It can be used for image comparison, while looking for similarities or discrepancies between selected data instances (e.g. bacterial growth or bitmap representations of handwriting).
- Information on the data set
- Select the column with image data (links).
- Select the column with image titles.
- Zoom in or out.
- Saves the visualization in a file.
- Tick the box on the left to commit changes automatically. Alternatively, click Send.
Examples¶
A very simple way to use this widget is to connect the File widget with Image Viewer and see all the images that come with your data set. You can also visualize images from Import Images.
Alternatively, you can visualize only selected instances, as shown in the example below.
| https://orange3-imageanalytics.readthedocs.io/en/latest/widgets/imageviewer.html | 2019-02-15T21:11:32 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['../_images/image-viewer.png', '../_images/image-viewer.png'],
dtype=object)
array(['../_images/ImageViewer-stamped.png',
'../_images/ImageViewer-stamped.png'], dtype=object)
array(['../_images/image-viewer-example1.png',
'../_images/image-viewer-example1.png'], dtype=object)
array(['../_images/image-viewer-example2.png',
'../_images/image-viewer-example2.png'], dtype=object)] | orange3-imageanalytics.readthedocs.io |
Import Images¶
Import images from a directory(s)
Description¶
Import Images walks through a directory and returs one row per located image. Columns include image name, path to image, width, height and image size. Column with image path is later used as an attribute for image visualization and embedding.
- Currently loaded folder.
- Select the folder to load.
- Click Reload to update imported images.
- Information on the input.
- Access help.
You can load a folder containing subfolders. In this case Orange will consider each folder as a class value. In the example above, Import Images loaded 26 images belonging to two categories. These two categories will be used as class values.
Example¶
Import Images is likely the first widget you will use in image analysis. It loads images and creates class values from folders. In this example we used Import Images to load 26 painting belonging to either Monet or Manet.
We can observe the result in a Data Table. See how Orange added an extra class attribute with values Monet and Manet?
Now we can proceed with standard machine learning methods. We will send images to Image Embedding, where we will use Painters embedder to retrieve image vectors.
Then we will use Test & Score and Logistic Regression, to build a model for predicting the author of a painting. We get a perfect score? How come? It turns out, these were the images the Painters embedder was trained on, so a high accuracy is expected. | https://orange3-imageanalytics.readthedocs.io/en/latest/widgets/importimages.html | 2019-02-15T20:45:12 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['../_images/import-images.png', '../_images/import-images.png'],
dtype=object) ] | orange3-imageanalytics.readthedocs.io |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.