Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars)
Difference between revisions of "Multiple PS Eye Cameras Calibration" Revision as of 12:06, 24 February 2015 Calibration Quality - 9 Step 7: Check Ground Plane - 10 Step 8:). _13<<). -. - For each ground point, click on it in 3D view and press Mark as ground button. - You can cancel marking point as ground point by pressing Unmark ground button. Step 8: Set Scene Scale Using Сamera_16<<
http://docs.ipisoft.com/index.php?title=Multiple_PS_Eye_Cameras_Calibration&diff=prev&oldid=677
2019-10-13T23:17:17
CC-MAIN-2019-43
1570986648343.8
[array(['/images/thumb/f/ff/Important.png/24px-Important.png', 'Important.png'], dtype=object) array(['/images/thumb/4/45/Tip.png/24px-Tip.png', 'Tip.png'], dtype=object) array(['/images/9/93/Maglite.jpg', 'Maglite.jpg'], dtype=object) array(['/images/4/45/Psmove.jpg', 'Psmove.jpg'], dtype=object) array(['/images/5/5a/Calibration-darkening.png', 'Calibration-darkening.png'], dtype=object) array(['/images/e/e3/Calibration-exposure.png', 'Calibration-exposure.png'], dtype=object) array(['/images/thumb/f/ff/Important.png/24px-Important.png', 'Important.png'], dtype=object) array(['/images/thumb/4/45/Tip.png/24px-Tip.png', 'Tip.png'], dtype=object) array(['/images/thumb/4/45/Tip.png/24px-Tip.png', 'Tip.png'], dtype=object) array(['/images/thumb/4/45/Tip.png/24px-Tip.png', 'Tip.png'], dtype=object) array(['/images/0/04/Calibration1.jpg', 'Calibration1.jpg'], dtype=object) array(['/images/b/bd/Calibration2.jpg', 'Calibration2.jpg'], dtype=object) array(['/images/a/a6/Calibration3.jpg', 'Calibration3.jpg'], dtype=object) array(['/images/thumb/4/45/Tip.png/24px-Tip.png', 'Tip.png'], dtype=object) array(['/images/thumb/f/ff/Important.png/24px-Important.png', 'Important.png'], dtype=object) array(['/images/thumb/f/ff/Important.png/24px-Important.png', 'Important.png'], dtype=object) array(['/images/9/9d/Depth_Project_Scene_Tab.png', 'Depth Project Scene Tab.png'], dtype=object)]
docs.ipisoft.com
Using an efficient back office tool is great, but using it collaboratively is even more powerful! This is why we've laid the foundations of what will grow to be an essential tool of your day-to-day work: Notes - Leave a note and communicate around a specific record. Activity logs - Check all that has been done on a record or by a user. In the next pages, we'll show you how to use these two features. These features require a Business plan or higher. However, they're free to try in non-production environments.
https://docs.forestadmin.com/documentation/reference-guide/collaboration
2019-10-13T22:23:06
CC-MAIN-2019-43
1570986648343.8
[]
docs.forestadmin.com
One of ipdata's most useful features is that if you use it client-side with JavaScript you don't have to pass in the user's IP address. We will automatically get the user's IP address, geolocate it and return the geolocation data to you client-side. An example request in jQuery: $.get("", function(response) { console.log(response.country_name); }, "jsonp"); A simple call like the one above, without passing in any parameters other than the API key, will give you the location of the user currently on the page where the code is embedded. Pure JavaScript: var request = new XMLHttpRequest(); request.open('GET', ''); request.setRequestHeader('Accept', 'application/json'); request.onreadystatechange = function () { if (this.readyState === 4) { console.log(this.responseText); } }; request.send();
https://docs.ipdata.co/use-cases/get-the-location-from-an-ip-address-in-javascript-jquery
2019-10-14T00:05:47
CC-MAIN-2019-43
1570986648343.8
[]
docs.ipdata.co
SKCanvas Class Definition Encapsulates all of the state about drawing into a device (bitmap or surface). public class SKCanvas : SkiaSharp.SKObject - Inheritance - SKCanvas - Derived - Examples var info = new SKImageInfo(640, 480); using (var surface = SKSurface.Create(info)) { SKCanvas canvas = surface.Canvas; canvas.Clear(SKColors.White); // set up drawing tools var paint = new SKPaint { IsAntialias = true, Color = new SKColor(0x2c, 0x3e, 0x50), StrokeCap = SKStrokeCap.Round }; // create the Xamagon path); } Remarks A canvas encapsulates all of the state about drawing into a device (bitmap or surface). This includes a reference to the device itself, and a stack of matrix/clip values. For any given draw call (e.g. DrawRect), the geometry of the object being drawn is transformed by the concatenation of all the matrices in the stack. The transformed geometry is clipped by the intersection of all of the clips in the stack. While the canvas holds the state of the drawing device, the state (style) of the object being drawn is held by the paint, which is provided as a parameter to each of the "Draw" methods. The paint holds attributes such as color, typeface, the text size, the stroke width, the shader (for example, gradients, patterns), etc. The canvas is returned when accessing the SKSurface.Canvas property of a surface. Construction SkiaSharp has multiple backends which receive SKCanvas drawing commands, including: - Raster Surface - GPU Surface - PDF Document - XPS Document (experimental) - SVG Canvas (experimental) - Picture - Null Canvas (for testing) Constructing a Raster Surface The raster backend draws to a block of memory. This memory can be managed by SkiaSharp or by the client. The recommended way of creating a canvas for the Raster and Ganesh backends is to use a SKSurface, which is an object that manages the memory into which the canvas commands are drawn. // define the surface properties var info = new SKImageInfo(256, 256); // construct a new surface var surface = SKSurface.Create(info); // get the canvas from the surface var canvas = surface.Canvas; // draw on the canvas ... Alternatively, we could have specified the memory for the surface explicitly, instead of asking SkiaSharp to manage it. // define the surface properties var info = new SKImageInfo(256, 256); // allocate memory var memory = Marshal.AllocCoTaskMem(info.BytesSize); // construct a surface around the existing memory var surface = SKSurface.Create(info, memory, info.RowBytes); // get the canvas from the surface var canvas = surface.Canvas; // draw on the canvas ... Constructing a GPU Surface GPU surfaces must have a GRContext object which manages the GPU context, and related caches for textures and fonts. GRContext objects are matched one to one with OpenGL contexts or Vulkan devices. That is, all SKSurface instances that will be rendered to using the same OpenGL context or Vulkan device should share a GRContext. SkiaSharp does not create an OpenGL context or a Vulkan device for you. In OpenGL mode it also assumes that the correct OpenGL context has been made current to the current thread when SkiaSharp calls are made. // an OpenGL context must be created and set as current // define the surface properties var info = new SKImageInfo(256, 256); // create the surface var context = GRContext.Create(GRBackend.OpenGL); var surface = SKSurface.Create(context, false, info); // get the canvas from the surface var canvas = surface.Canvas; // draw on the canvas ... 
Constructing a PDF Document The PDF backend uses SKDocument instead of SKSurface, since a document must include multiple pages. // create the document var stream = SKFileWStream.OpenStream("document.pdf"); var document = SKDocument.CreatePdf(stream); // get the canvas from the page var canvas = document.BeginPage(256, 256); // draw on the canvas ... // end the page and document document.EndPage(); document.Close(); Constructing an XPS Document (experimental) The XPS backend uses SKDocument instead of SKSurface, since a document must include multiple pages. // create the document var stream = SKFileWStream.OpenStream("document.xps"); var document = SKDocument.CreateXps(stream); // get the canvas from the page var canvas = document.BeginPage(256, 256); // draw on the canvas ... // end the page and document document.EndPage(); document.Close(); Constructing an SVG Canvas (experimental) The SVG backend uses SKSvgCanvas. // create the canvas var stream = SKFileWStream.OpenStream("image.svg"); var writer = new SKXmlStreamWriter(stream); var canvas = SKSvgCanvas.Create(SKRect.Create(256, 256), writer); // draw on the canvas ... Constructing a Picture The picture backend uses SKPictureRecorder instead of SKSurface. // create the picture recorder var recorder = new SKPictureRecorder(); // get the canvas from the recorder var canvas = recorder.BeginRecording(SKRect.Create(256, 256)); // draw on the canvas ... // finish recording var picture = recorder.EndRecording(); Constructing a Null Canvas (for testing) The null canvas is a canvas that ignores all drawing commands and does nothing. // create the dummy canvas var canvas = new SKNoDrawCanvas(256, 256); // draw on the canvas ... Transformations The canvas supports a number of 2D transformations. Unlike other 2D graphics systems like CoreGraphics or Cairo, SKCanvas extends the transformations to include perspective. You can use the Scale, Skew, Translate, RotateDegrees and RotateRadians methods to perform some of the most common 2D transformations. For more control, you can use SetMatrix to set an arbitrary transformation using an SKMatrix, and Concat to concatenate an SKMatrix transformation with the current matrix in use. ResetMatrix can be used to reset the state of the matrix. Drawing The drawing operations can take an SKPaint parameter to affect their drawing. You use SKPaint objects to cache the style and color information needed to draw geometries, text and bitmaps. Clipping and State It is possible to save the current transformations by calling the Save method, which preserves the current transformation matrix; you can then alter the matrix and restore the previous state by using the Restore or RestoreToCount methods. Additionally, it is possible to push a new state with SaveLayer, which makes an offscreen copy of a region; once the drawing is completed, calling the Restore() method copies the offscreen bitmap back into this canvas.
https://docs.microsoft.com/en-us/dotnet/api/skiasharp.skcanvas?view=skiasharp-1.60.3
2019-10-13T23:05:42
CC-MAIN-2019-43
1570986648343.8
[]
docs.microsoft.com
IUIAutomationElement::FindFirstBuildCache method Retrieves the first child or descendant element that matches the specified condition, prefetches the requested properties and control patterns, and stores the prefetched items in the cache. Syntax HRESULT FindFirstBuildCache( TreeScope scope, IUIAutomationCondition *condition, IUIAutomationCacheRequest *cacheRequest, IUIAutomationElement **found ); Parameters scope Type: TreeScope A combination of values specifying the scope of the search. condition Type: IUIAutomationCondition* A pointer to a condition that represents the criteria to match. cacheRequest Type: IUIAutomationCacheRequest* A pointer to a cache request that specifies the control patterns and properties to include in the cache. found Type: IUIAutomationElement** Receives a pointer to the matching element, or NULL if no matching element is found. Remarks When searching from the desktop element, avoid using TreeScope_Descendants: a search through the entire subtree of the desktop could iterate through thousands of items and lead to a stack overflow. If your client application might try to find elements in its own user interface, you must make all UI Automation calls on a separate thread. To search the raw tree, specify the appropriate TreeScope in the cacheRequest parameter. Requirements See Also Caching UI Automation Properties and Control Patterns Conceptual Obtaining UI Automation Elements Reference
https://docs.microsoft.com/en-us/windows/win32/api/uiautomationclient/nf-uiautomationclient-iuiautomationelement-findfirstbuildcache
2019-10-13T23:06:23
CC-MAIN-2019-43
1570986648343.8
[]
docs.microsoft.com
Stein Series Release Notes¶ 3.8.0¶ New Features¶ Added the new CLI command “execution-get-report” that prints information about the entire workflow execution tree, including its task executions, action executions and nested workflow executions. The command currently has the filters “--errors-only”, which finds only ERROR paths of the execution tree (enabled by default), “--no-errors-only”, which prints the whole tree regardless of the elements’ state, and “--max-depth”, which limits the depth of the tree that is printed. This command should be especially useful for debugging failure situations when it’s not easy to manually track down the root cause. Added a namespace parameter to workbook commands. The namespace parameter allows creating multiple workbooks with the same name under different namespaces. Critical Issues¶ The default behavior of the action-execution-list, execution-list and task-list commands has been changed. Instead of returning the oldest N records (default 100, or the --limit specified value) by default, they now return the most recent N records when no other sort_dir, sort_key or marker values are provided. If the user specifies --oldest or any of the --marker, --sort_key or --sort_dir options, the new behavior is disabled and the commands work according to the user-supplied options.
https://docs.openstack.org/releasenotes/python-mistralclient/stein.html
2019-10-13T23:40:55
CC-MAIN-2019-43
1570986648343.8
[]
docs.openstack.org
Feature: #83016 - Listing of page translations in list module¶ See Issue #83016 Description¶ Listing and editing translations of the current page have been re-introduced in the List module. This feature was previously available in v8 and below due to the concept of “pages_language_overlay” records, which resided on the current page itself. However, due to the removal of the “pages_language_overlay” database table, page translations were only accessible when visiting the list module of the parent page. The original behaviour has now been reintroduced, but with improved visibility and additional restrictions.
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.0/Feature-83016-ListingOfPageTranslationsInListModule.html
2019-10-14T00:07:38
CC-MAIN-2019-43
1570986648343.8
[]
docs.typo3.org
Doc’s HVAC is proud to install and service quality products from leading brands you can depend on. We pride ourselves on exceptional customer service. Call Doc’s HVAC to schedule a service appointment or a free, no-obligation consultation to determine the perfect system for your home or business. We look forward to hearing from you and adding you to our long list of satisfied customers. Doc’s HVAC has been providing the Chicagoland area with quality products and services from Amana and other top brands for more than 15 years. We are proud to sell and install quality heating and air conditioning products and also service all major brands of equipment. We can help you determine the perfect system for your home or business regardless of size.
https://docshvac.com/about-docs-hvac/
2019-10-13T22:29:48
CC-MAIN-2019-43
1570986648343.8
[]
docshvac.com
Grow Your Business and Simplify Sales Operations with Einstein Analytics for Manufacturing The Analytics for Manufacturing app lets account managers visualize all aspects of their business. Insights based on data from Manufacturing Cloud keep you on top of your sales agreements, orders, and contracts. Visualizations help you grow the business by identifying customers to call on for new or expanded deals. You can also quickly spot products that sell the most and the least and analyze the impact of volume on pricing and revenue. Who: To create an app from the Einstein Analytics for Manufacturing template, you must also have the Manufacturing Analytics Plus add-on license. Where: This change applies to Lightning Experience and all versions of the Salesforce app in Professional, Performance, and Unlimited editions where Health Cloud is enabled. Analytics for Manufacturing is only for Salesforce Manufacturing Cloud users. How: Go to Analytics Studio, click Create, choose App, and click Start from Template. Select Analytics for Manufacturing, and follow the instructions in the wizard to create your app.
https://releasenotes.docs.salesforce.com/en-us/winter20/release-notes/rn_mfg_einstein.htm?edition=&impact=
2019-10-13T23:18:52
CC-MAIN-2019-43
1570986648343.8
[]
releasenotes.docs.salesforce.com
Difference between revisions of "SO2R/Shift binds second radio" Revision as of 11:16, 13 March 2007. click add. When the program asks "type on the key to redefine" push [Shift] and number 1 together, when the program asks "type on the new key", push number 1. Do the same procedure for all the numbers above the letters in the keyboard. Pay attention to the / key that becomes redefined as 7. In this case the user should find an unused key to redefine as /, or use the / key in the number keypad.
http://docs.win-test.com/w/index.php?title=SO2R/Shift_binds_second_radio&diff=prev&oldid=2876
2019-10-13T23:00:32
CC-MAIN-2019-43
1570986648343.8
[]
docs.win-test.com
Salesforce Einstein: Case Classification, Reports for Insights and Contacts, and Pardot Behavior Scoring (Generally Available) Here’s how Einstein helps you work smarter. Sales Einstein Activity Capture: Go to the Next Level with Syncing Capabilities, Increased Sharing, and Better Metrics Sync contacts and events. Share activities with colleagues who don’t use Einstein Activity Capture. Get a better understanding of activities with activity metrics and an improved Activities dashboard. Plus, do more with Einstein Email Insights and Recommended Connections. Einstein Insights: Create Reports Based on Insights You can now create reports and dashboards related to account and opportunity insights. Get a better understanding of your customers by, for example, running reports that show all insights for specific accounts or opportunities. Einstein Automated Contacts: Create Reports Based on Suggestions You can create reports and dashboards related to contact suggestions and opportunity contact role suggestions. For example, use reports to get a quick view of which suggestions Einstein added to Salesforce and for which accounts. Give Opportunity Scores to More Users Now everyone can prioritize opportunities and close more deals. If you have at least one Sales Cloud Einstein license, we’ll turn on Einstein Opportunity Scoring for all Salesforce users at your company. Users who are assigned a Sales Cloud Einstein license see scores on all opportunities, while all other users see scores on a limited number of opportunities. See Opportunity Scores in More Places Opportunity scores are now in the opportunity Recently Viewed list view when it’s displayed as a table. So sales reps can work from a prioritized list of opportunities with fewer clicks. Service Save Agents Time and Improve Accuracy and Completion with Einstein Case Classification Use Einstein Case Classification to recommend or populate picklist and checkbox field values for new cases based on past data. Display those recommendations to your agents in the Service Console in Lightning Experience and Salesforce Classic. In Lightning Experience, your agents view recommendations in the related record component. Evaluate How Well Your Bots Understand Your Customers The Model Management page provides metrics on the quality of your bot’s intent model and a detailed look at individual intents. Use the information to fine-tune intents and improve how well your bot understands your customers. Make Your Bots Smarter with Customer Input Classification and Feedback Review how a bot classifies what your customers type in chat and then approve, ignore, or reclassify each input to the proper intent. This feedback process helps your bot learn and become smarter over time. Put Your Bots to Work on Your SMS Text Messaging Channel Build your bot once, and deploy it to the LiveMessage SMS text messaging channel in addition to the Live Agent chat channel. Set Up Einstein Bots More Easily with Channel Settings in the Bot Builder Are you confused when setting up your bot to work with LiveAgent? You’re not alone! We streamlined channel setup by moving it into the Bot Builder. Build Channel-Independent Bots Do you plan to put your bot to work on both Live Agent and SMS Text Messaging channels? If so, check out the new context variables and system variable that let your bot gather customer information regardless of channel. No need for pre-chat form rule actions or custom code. You can now filter by date and channel (chat, messaging) on the Performance Dashboard.
And you have more ways to understand your bot’s performance: Top Last Dialogs, Escalation Success, Einstein Intent Usage, and Interactions. Get to Know Our New Bots Terminology To support natural-language processing, we’re introducing the term utterance and distinguishing it from customer input. Analytics Learn More from Your Reports with Einstein Data Insights (Generally Available) Einstein Data Insights extends your reports with AI to help you discover new and useful insights about your data. Crunch More Data with Double the Row Limit Einstein Discovery can create stories from Einstein Analytics datasets with up to 20 million rows. Export Your Story for Additional Analysis Export your story to R so your data scientists can analyze and customize the model in their own language. Export your story to Quip to share with business users on your team. Easily Deploy Models to Salesforce Objects Map fields between a model and a Salesforce object, so you can get the recommendations and predictions you want without writing code. Encrypt Einstein Discovery Data (Pilot) Einstein Discovery users who have the Shield Platform encryption license can now opt in to encrypt Einstein Discovery data. Customization.
http://releasenotes.docs.salesforce.com/en-us/spring19/release-notes/rn_einstein.htm
2019-10-13T23:05:04
CC-MAIN-2019-43
1570986648343.8
[]
releasenotes.docs.salesforce.com
Most Popular Articles - How can I change the widget location? - How do manual recommendations work? - How do I cancel my Also Bought subscription? - Will multiple h2 tags hurt my SEO? - Can I configure the app to show only products from the same collection? - How do automatic recommendations work? - Can this app show recommendations for a new store with very few sales or no sales? - In which order are recommendations displayed? - Is my customer data protected? What about GDPR? - How to place the app widget in a different location in Fashionopolism theme
https://docs.codeblackbelt.com/collection/31-also-bought
2019-10-13T23:50:04
CC-MAIN-2019-43
1570986648343.8
[]
docs.codeblackbelt.com
The Expression string builder, the main dialog to build expressions, is available from many parts of QGIS and can in particular be accessed when: The Expression builder dialog offers access to the: The Expression tab provides the main interface to write expressions using functions, layer’s fields and values. It contains the following widgets: An expression editor area to type or paste expressions. Autocompletion is available to speed up expression writing: use the Up and Down arrows to browse the items and press Tab to insert into the expression, or simply click on the wished item. QGIS also checks the expression’s correctness and highlights all the errors. The Expression tab This group contains functions to create and manipulate arrays (also known as list data structures). The order of values within the array matters, unlike the ‘map’ data structure, where the order of key-value pairs is irrelevant and values are identified by their keys. This group contains functions for manipulating colors. This group contains functions to handle conditional checks in expressions. An example: Send back a value if the first condition is true, else another value: CASE WHEN "software" LIKE '%QGIS%' THEN 'QGIS' ELSE 'Other' END This group contains functions to convert one data type to another (e.g., string to integer, integer to string). This group contains functions created by the user. See Function Editor for more details. This group contains functions for handling date and time data (e.g., the to_format() function, or day() to get the interval expressed in days). This group contains functions for fuzzy comparisons between values. This group contains general assorted functions. This group contains functions that operate on geometry objects (e.g., length, area). Some examples: You can manipulate the current geometry with the variable $geometry to create a buffer or get the point on surface: buffer( $geometry, 10 ) point_on_surface( $geometry ) Return the X coordinate of the current feature’s centroid: x( $geometry ) Send back a value according to the feature’s area: CASE WHEN $area > 10 000 THEN 'Larger' ELSE 'Smaller' END This group contains functions to manipulate print layout item properties. An example: Get the scale of ‘Map 0’ in the current print layout: map_get( item_variables('Map 0'), 'map_scale') This group contains a list of the available layers in the current project. This offers a convenient way to write expressions referring to multiple layers, such as when performing aggregates, attribute or spatial queries. This group contains math functions (e.g., square root, sin and cos). This group contains operators (e.g., +, -, *). Note that for most of the mathematical functions below, if one of the inputs is NULL then the result is NULL. Note About fields concatenation You can concatenate strings using either || or +. The latter also acts as the arithmetic sum operator, so if you have an integer (field or numeric value) this can be error prone. In this case, you should use ||. If you concatenate two string values, you can use both.
Some examples: Join a string and a value from a column name: 'My feature''s id is: ' || "gid" 'My feature''s id is: ' + "gid" => triggers an error as gid is an integer "country_name" + '(' + "country_code" + ')' "country_name" || '(' || "country_code" || ')' Test if the “description” attribute field starts with the ‘Hello’ string in the value (note the position of the % character): "description" LIKE 'Hello%' This group contains functions to operate on raster layers. This group contains functions that operate on strings (e.g., that replace, convert to upper case). This group contains dynamic variables related to the application, the project file and other settings. This means that some functions may not be available depending on the context. With the Function Editor tab, you are able to write your own functions in the Python language. This provides a handy and comfortable way to address particular needs that would not be covered by the predefined functions. The Function Editor tab To create a new function: Press the New File button. Enter a name to use in the form that pops up and press OK. A new item with the name you provide is added in the left panel of the Function Editor tab; this is a Python .py file, based on the QGIS template file, stored in the /python/expressions folder. Custom Function added to the Expression tab Further information about creating Python code can be found in the PyQGIS Developer Cookbook.
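To illustrate the Function Editor workflow described above, here is a minimal sketch of a custom expression function. The function name, group and conversion factor are illustrative choices, not taken from the QGIS documentation; only the @qgsfunction decorator pattern comes from the standard template file.

```python
from qgis.core import qgsfunction

@qgsfunction(args='auto', group='Custom')
def feet_to_metres(feet, feature, parent):
    """Convert a length in feet to metres.

    Callable from an expression, e.g. feet_to_metres("height").
    """
    return feet * 0.3048
```

After the file is saved and loaded, the function appears under the chosen group in the Expression tab and can be combined with any built-in expression function.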
https://docs.qgis.org/3.4/en/docs/user_manual/working_with_vector/expression.html
2019-10-13T23:35:08
CC-MAIN-2019-43
1570986648343.8
[]
docs.qgis.org
- Management of knowledge objects through configuration files. Some aspects of knowledge object setup are best managed through configuration files. This manual will show Splunk Enterprise knowledge managers how to work with knowledge objects in this way. - See "Create and maintain search-time field extractions through index files" in this manual.
https://docs.splunk.com/Documentation/Splunk/6.4.0/Knowledge/WhymanageSplunkknowledge
2019-10-13T23:21:52
CC-MAIN-2019-43
1570986648343.8
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
As explained in the introduction, ZBrush mixes both 2D and 3D; as a result it has both 2D navigation and 3D navigation. On the top right of the default ZBrush user interface you will find the 2D navigation, which is close to what you will find in photo and other image editing software: - Scroll: Click and drag on this icon to pan across your document. - Zoom: Click and drag on this icon to zoom your document in and out interactively, like in other 2D editing packages. Note: some beginners use this tool to zoom in 3D, which is not its purpose. At high zoom the pixels of the document are very visible. - Actual: Click on this icon to return the canvas to its actual size, or 100% magnification. - AAHalf Mode: When this icon is pressed, it sets the zoom factor for the canvas to exactly 0.5, or 50%. ZBrush treats this scale factor in a special manner; when the zoom factor is exactly 50%, the canvas contents are antialiased, which reduces the “jagged” effect that can appear along edges in a computer-generated image. Many artists create their documents at twice the desired export size, then activate AAHalf before exporting the rendered image. ZBrush also has a Best Preview Render (BPR) mode that does not need AAHalf and produces spectacular effects like SubSurface Scattering and ambient occlusion. See the BPR page for more information on this powerful feature. Underneath these buttons and those for perspective and grid display, you will find the 3D navigation buttons. Some of them are just modes while others are direct actions. They will only be active when a model is in Edit mode. - XYZ Rotation mode (on by default): When set, rotation of the object is unconstrained so that it can be quickly spun on any axis. - Y or Z Rotation mode: When set, moving the mouse horizontally will cause rotation only around the model’s Y or Z axis. Moving the mouse vertically will cause the object to be rotated around the screen’s horizontal axis. This makes it easy to rotate around the model’s Y or Z axis, while still giving flexibility in positioning the model. - Frame: When this icon is pressed, ZBrush will scale the current Tool in order to make it fit the viewport space. The shortcut is F. - Move: Click and drag on this icon to move your 3D model inside the document. This operation is similar to a 3D pan in other 3D software. - Scale: Click and drag up or down on this icon to resize the model within the viewport. This allows you to show the whole model at once or to scale it up so that you can get a good view of fine details. This operation is similar to moving the camera closer to or farther from your object in other 3D software. - Rotate: Click and drag on this icon to rotate your 3D model inside the document. This operation is similar to orbiting the camera or point of view around an object in other 3D software. It won’t affect the real rotation value of the vertices of the model (the model’s local coordinate system). When doing a rotation, you can press the Shift key to constrain the rotation to 90° increments.
http://docs.pixologic.com/getting-started/basic-concepts/2d-and-3d-navigation/
2019-10-13T22:36:40
CC-MAIN-2019-43
1570986648343.8
[]
docs.pixologic.com
PolyGroups (which are groups of polygons identified by a specific color) are an essential part of the creation process with ZModeler. ZModeler has an extended toolset of functions to create and manipulate PolyGroups, such as using them as a Target so that an Action will affect all polygons belonging to the same PolyGroup, no matter where they appear in the mesh. PolyGroups can also be modified in the Tool >> Polygroups sub-palette. Propagation of PolyGroups The current PolyGroup remains the same until you decide to assign a new PolyGroup after an Action. Several Targets specifically use PolyGroups, while most Actions will either create or propagate PolyGroups. Depending on your needs, you can use the PolyGroup Action to create new PolyGroups before applying another Action. An example of this: Using the Extrusion Action will maintain the existing PolyGroup for the top part of the extrusion while creating a new PolyGroup for the sides. Continuing this Action elsewhere on the model will continue to produce identical PolyGroups unless you instruct ZBrush otherwise. Temporary PolyGroup When modeling there may be times when no specific Target fits the selection you are looking for. Or perhaps you may simply want to extend an existing Target with extra polygons from another location. For this purpose, ZModeler has an integrated Temporary PolyGroup which is always displayed as white. To apply the Temporary PolyGroup, you must be working with a polygon Action. If so, simply Alt+click the desired polygons. These polygons will turn white to indicate that they are part of the Temporary PolyGroup. You can also click and drag to paint this Temporary PolyGroup. Alt+clicking a white polygon will remove it from the Temporary PolyGroup selection. You are free to continue editing this Temporary PolyGroup until you execute an Action. The Temporary PolyGroup always adds to the current Target. As an example, if you select an Extrude Action with a Polyloop Target and create a Temporary PolyGroup out of polygons not belonging to the poly loop you are looking for, the Action will extrude both the poly loop itself and any polygons belonging to the Temporary PolyGroup. Changing of PolyGroups During an Action While editing your model, you may need a different PolyGroup from the one being created by the Action. While still applying the Action, simply tap the Alt key once to change the PolyGroup to another one. The actual color of a PolyGroup is irrelevant to any Actions or Targets but sometimes PolyGroup colors might be too similar for you to be able to easily tell the groups apart. If you don’t like the color that ZBrush gives you, tap Alt again and repeat until you find something that you’re satisfied with. Not all Actions permit you to use Alt to change the PolyGroup color. This is because they use the Alt key as a modifier. Note: Be careful not to tap the Alt key until after you have started executing the Action. Otherwise you could end up changing the Target instead or even adding polygons to the Temporary PolyGroup.
To do this, follow these steps: - Select the PolyGroup Action - Select the A Single Poly Target - Hover over a polygon belonging to the desired PolyGroup. - While clicking and holding on this polygon, press (or tap) the Shift key. ZBrush will copy the clicked polygon’s PolyGroup. Release the click. - Now click on another polygon to paste the PolyGroup. You can do this on multiple locations. Try it also with other Targets, like Polyloop to apply the same strips of PolyGroups on multiple polygons.
http://docs.pixologic.com/user-guide/3d-modeling/modeling-basics/creating-meshes/zmodeler/working-with-polygroups/
2019-10-13T22:37:06
CC-MAIN-2019-43
1570986648343.8
[]
docs.pixologic.com
API Gateway 7.5.3 Policy Developer Filter Reference Cache attribute Overview The Cache Attribute filter allows you to configure what part of the message to cache. Typically, response messages are cached, and so this filter is usually configured after the routing filters in a policy. In this case, the content.body attribute stores the response message body from the web service, and so this message attribute should be selected in the Attribute Name to Store field. For more information on how to configure this filter in a caching policy, see Global caches in the API Gateway Policy Developer Guide. Configuration Name: Enter a name for this filter to display in a policy. Select Cache to Use: Click the button on the right, and select the cache to store the attribute value. The list of currently configured caches is displayed in the tree. To add a cache, right-click the Caches tree node, and select Add Local Cache or Add Distributed Cache. Alternatively, you can configure caches under the Environment Configuration > Libraries node in the Policy Studio tree. For more details, see Global caches in the API Gateway Policy Developer Guide. Attribute Key: The value of the message attribute entered here acts as the key into the cache. In the context of a caching policy, it must be the same as the attribute specified in the Attribute containing key field on the Is Cached? filter. Attribute Name to Store: The value of the API Gateway message attribute entered here is cached in the cache specified in the Select Cache to Use field above.
https://docs.axway.com/bundle/APIGateway_753_PolicyDevFilterReference_allOS_en_HTML5/page/Content/PolicyDevTopics/cache_cache_attribute.htm
2019-10-14T00:05:56
CC-MAIN-2019-43
1570986648343.8
[]
docs.axway.com
dse.cluster - Clusters and Sessions class Cluster A Cluster extending cassandra.cluster.Cluster. The API is identical, except that it returns a dse.cluster.Session (see below). It also uses the new Execution Profile API, so legacy parameters are disallowed. class Session A session extension based on cassandra.cluster.Session with additional features: - Pre-registered DSE-specific types (geometric types) - Graph execution API Methods execute_graph(statement[, parameters][, trace][, execution_profile]) Executes a Gremlin query string or SimpleGraphStatement synchronously, and returns a ResultSet from this execution. parameters is a dict of named parameters to bind. The values must be JSON-serializable. execution_profile: Selects an execution profile for the request. execute_graph_async(statement[, parameters][, trace][, execution_profile]) Executes the graph query and returns a ResponseFuture object to which callbacks may be attached for asynchronous response delivery. You may also call ResponseFuture.result() to synchronously block for results at any time.
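A minimal end-to-end sketch of the API documented above, assuming the python-dse-driver is installed, a DSE node with graph enabled is reachable at 127.0.0.1, and a graph named 'test_graph' exists (the contact point and graph name are illustrative placeholders):

```python
from dse.cluster import Cluster, GraphExecutionProfile, EXEC_PROFILE_GRAPH_DEFAULT
from dse.graph import GraphOptions

# Point the default graph execution profile at an illustrative graph name.
graph_profile = GraphExecutionProfile(graph_options=GraphOptions(graph_name='test_graph'))
cluster = Cluster(['127.0.0.1'],
                  execution_profiles={EXEC_PROFILE_GRAPH_DEFAULT: graph_profile})
session = cluster.connect()

# Synchronous execution; parameters are bound by name and must be JSON-serializable.
rs = session.execute_graph("g.V().has('name', n).limit(5)", {'n': 'alice'})
for result in rs:
    print(result)

# Asynchronous execution; block on result(), or attach callbacks to the future instead.
future = session.execute_graph_async("g.V().count()")
for result in future.result():
    print(result)

cluster.shutdown()
```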
https://docs.datastax.com/en/developer/python-dse-driver/1.0/api/dse/cluster/
2019-10-13T23:00:55
CC-MAIN-2019-43
1570986648343.8
[]
docs.datastax.com
User profile property data types (SharePoint Server 2010) Applies to: SharePoint Server 2010. Note You cannot use a business data connectivity connection to map a binary property to a property that implements the Stream accessor method. See Also Concepts Default user profile properties (SharePoint Server 2010) Plan for profile synchronization (SharePoint Server 2010)
https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-server-2010/hh227257(v%3Doffice.14)
2019-10-14T00:19:49
CC-MAIN-2019-43
1570986648343.8
[]
docs.microsoft.com
Searchlight API¶ Searchlight’s API adds authentication and Role Based Access Control in front of Elasticsearch’s query API. Authentication¶ Searchlight, like other OpenStack APIs, depends on Keystone and the OpenStack Identity API to handle authentication. You must obtain an authentication token from Keystone and pass it to Searchlight in API requests with the X-Auth-Token header. See Keystone Authentication for more information on integrating with Keystone. Using v1¶ For the purposes of examples, assume a Searchlight server is running at the URL on HTTP port 80. All queries are assumed to include an X-Auth-Token header. Where request bodies are present, it is assumed that an appropriate Content-Type header is present (usually application/json). Searches use Elasticsearch’s query DSL. Elasticsearch stores each ‘document’ in an ‘index’, which has one or more ‘types’. Searchlight’s indexing service stores all resource types in their own document type, grouped by service into indices. For instance, the OS::Glance::Image and OS::Glance::Metadef types both reside in the searchlight index. type is unique to a resource type. Document access is defined by each document type, for instance for glance images: If the current user is the resource owner OR If the resource is marked public Some resources may have additional rules. Administrators have access to all resources, though by default searches are restricted to the current tenant unless all_projects is set in the search request body. Querying available plugins¶ Searchlight indexes OpenStack resources as defined by installed plugins. In general, a plugin maps directly to an OpenStack resource type. For instance, a plugin might index nova instances, or glance images. There may be multiple plugins related to a given OpenStack project (an example being glance images and metadefs). A given deployment may not necessarily expose all available plugins. Searchlight provides a REST endpoint to request a list of installed plugins. A GET request to might yield: { "plugins": [ { "type": "OS::Glance::Image", "alias-searching": "searchlight-search" "alias-indexing": "searchlight-listener" }, { "type": "OS::Glance::Metadef", "alias-searching": "searchlight-search" "alias-indexing": "searchlight-listener" } ] } This response shows the plugin information associated with the Glance image and metadef resources. type: the resource group, which is used as the document type in Elasticsearch. alias-searching: the Elasticsearch alias used for querying. alias-indexing: the Elasticsearch alias used for indexing. If desired, all indexed Glance images can be queried directly from Elasticsearch, rather than using Searchlight. Assuming an Elasticsearch server running on localhost, the following request can be made: curl Running a search¶ The simplest query is to ask for everything we have access to. We issue a POST request to with the following body: { "query": { "match_all": {} } } The data is returned as a JSON-encoded mapping from Elasticsearch: { "_shards": { "failed": 0, "successful": 2, "total": 2 }, "hits": { "hits": [ { "_id": "76580e9d-f83d-49d8-b428-1fb90c5d8e95", "_index": "searchlight", "_type": "OS::Glance::Image" "_score": 1.0, "_source": { "id": "76580e9d-f83d-49d8-b428-1fb90c5d8e95", "members": [], "name": "cirros-0.3.2-x86_64-uec", "owner": "d95b27da6e9f4acc9a8031918e443e04", "visibility": "public", ... 
} }, { "_id": "OS::Software::DBMS", "_index": "searchlight", "_type": "metadef", "_score": 1.0, "_source": { "description": "A database is an ...", "display_name": "Database Software", "namespace": "OS::Software::DBMS", "objects": [ { "description": "PostgreSQL, often simply 'Postgres' ...", "name": "PostgreSQL", "properties": [ { "default": "5432", "description": "Specifies the TCP/IP port...", "property": "sw_database_postgresql_listen_port", ... }, ... ] } ], "tags": [ { "name": "Database" }, ] } }, ... ], "max_score": 1.0, "total": 8 }, "timed_out": false, "took": 1 } Each hit is a document in Elasticsearch, representing an OpenStack resource. the fields in the root of each hit are: _id Uniquely identifies the resource within its OpenStack context (for instance, Glance images use their GUID). _index The service to which the resource belongs (e.g. searchlight). _type The document type within the service (e.g. image, metadef) _score Where applicable the relevancy of a given hit. By default, the field upon which results are sorted. _source The document originally indexed. The _sourceis a map, where each key is a fieldwhose value may be a scalar value, a list, a nested object or a list of nested objects. More example searches¶ Results are shown here only where it would help illustrate the example. The query parameter supports anything that Elasticsearch exposes via its query DSL. There are normally multiple ways to represent the same query, often with some subtle differences, but some common examples are shown here. Administrators - search all resources¶ By default, all users see search results restricted by access control; in practice, this is a combination of resources belonging to the user’s current tenant/project, and any fields that are restricted to administrators. Administrators also have the option to view all resources, by passing all_projects in the search request body. For instance, a POST to: { "query": { "match_all": {} }, "all_projects": true } Restricting document index or type¶ To restrict a query to Glance image and metadef information only (both index and type can be arrays or a single string): { "query": { "match_all": {} }, "type": ["OS::Glance::Image", "OS::Glance::Metadef"] } If index or type are not provided they will default to covering as wide a range of results as possible. Be aware that it is possible to specify combinations of index and type that can return no results. In general type is preferred since type is unique to a resource. Retrieving an item by id¶ To retrieve a resource by its OpenStack ID (e.g. a glance image), we can use Elasticsearch’s term query: { "index": "searchlight", "query": { "term": { "id": "79fa243d-e05d-4848-8a9e-27a01e83ceba" } } } Limiting and paging results¶ Elasticsearch (and Searchlight) support paging through the size and from parameters (Searchlight also accepts limit and offset respectively as synonyms). from is zero-indexed. If size is zero, no results will be returned. 
This can be useful for retrieving the total number of hits for a query without being interested in the results themselves, or for aggregations: { "query": {"match_all": {}}, "size": 0 } Gives: { "hits": { "hits": [], "max_score": 0.0, "total": 40 } } Limiting the fields returned¶ To restrict the source to include only certain fields using Elasticsearch’s source filtering: { "type": "OS::Glance::Image", "_source": ["name", "size"] } Gives: { "_shards": { "failed": 0, "successful": 1, "total": 1 }, "hits": { "hits": [ { "_id": "76580e9d-f83d-49d8-b428-1fb90c5d8e95", "_index": "searchlight", "_score": 1.0, "_source": { "name": "cirros-0.3.2-x86_64-uec", "size": 3723817 }, "_type": "OS::Glance::Image" }, ... ], "max_score": 1.0, "total": 4 }, "timed_out": false, "took": 1 } Versioning¶ Internally an always-incrementing value is stored with search results to ensure that out of order notifications don’t lead to inconsistencies with search results. Normally this value is not exposed in search results, but including a search parameter version: true in requests will result in a field named _version (note the underscore) being present in each result: { "index": "searchlight", "query": {"match_all": {}}, "version": true } { "hits": { "hits": [ { "_id": "76580e9d-f83d-49d8-b428-1fb90c5d8e95", "_index": "searchlight", "_version": 462198730000000000, .... }, .... ] }, ... } Sorting¶ Elasticsearch allows sorting by single or multiple fields. See Elasticsearch’s sort documentation for details of the allowed syntax. Sort fields can be included as a top level field in the request body. For instance: { "query": {"match_all": {}}, "sort": {"name": "desc"} } You will see in the search results a sort field for each result: ... { "_id": "7741fbcc-3fa9-4ace-adff-593304b6e629", "_index": "glance", "_score": null, "_source": { "name": "cirros-0.3.4-x86_64-uec", "size": 25165824 }, "_type": "image", "sort": [ "cirros-0.3.4-x86_64-uec", 25165824 ] }, ... Wildcards¶ Elasticsearch supports regular expression searches but often wildcards within query_string elements are sufficient, using * to represent one or more characters or ? to represent a single character. Note that starting a search term with a wildcard can lead to extremely slow queries: { "query": { "query_string": { "query": "name: ubun?u AND mysql_version: 5.*" } } } Highlighting¶ A common requirement is to highlight search terms in results: { "type": "OS::Glance::Metadef", "query": { "query_string": { "query": "database" } }, "_source": ["namespace", "description"], "highlight": { "fields": { "namespace": {}, "description": {} } } } Results: { "hits": { "hits": [ { "_id": "OS::Software::DBMS", "_index": "searchlight", "_type": "OS::Glance::Metadef", "_score": 0.56079304, "_source": { "description": "A database is an organized collection of data. The data is typically organized to model aspects of reality in a way that supports processes requiring information. Database management systems are computer software applications that interact with the user, other applications, and the database itself to capture and analyze data. ()" }, "highlight": { "description": [ "A <em>database</em> is an organized collection of data. The data is typically organized to model aspects of", " reality in a way that supports processes requiring information. <em>Database</em> management systems are", " computer software applications that interact with the user, other applications, and the <em>database</em> itself", " to capture and analyze data. 
(<em>Database</em>)" ], "display_name": [ "<em>Database</em> Software" ] } } ], "max_score": 0.56079304, "total": 1 }, "timed_out": false, "took": 3 } Faceting¶ Searchlight can provide a list of field names and values present for those fields for each registered resource type. Exactly which fields are returned and whether values are listed is up to each plugin. Some fields or values may only be listed for administrative users. For some string fields, ‘facet_field’ may be included in the result and can be used to do an exact term match against facet options. To list supported facets, issue a GET to: { "OS::Glance::Image": [ { "name": "status", "type": "string" }, { "name": "created_at", "type": "date" }, { "name": "virtual_size", "type": "long" }, { "name": "name", "type": "string", "facet_field": "name.raw" }, ... ], "OS::Glance::Metadef": [ { "name": "objects.description", "type": "string" }, { "name": "objects.properties.description", "type": "string", "nested": true }, ... ], "OS::Nova::Server": [ { "name": "status", "options": [ { "doc_count": 1, "key": "ACTIVE" } ], "type": "string" }, { "name": "OS-EXT-SRV-ATTR:host", "type": "string" }, { "name": "name", "type": "string", "facet_field": "name.raw" }, { "name": "image.id", "type": "string", "nested": false }, { "name": "OS-EXT-AZ:availability_zone", "options": [ { "doc_count": 1, "key": "nova" } ], "type": "string" } ... ] } Facet fields containing the ‘nested’ (boolean) attribute indicate that the field mapping type is either ‘nested’ or ‘object’. This can influence how a field should be queried. In general ‘object’ types are queried as any other field; ‘nested’ types require some additional complexity. It’s also possible to request facets for a particular type by adding a type query parameter. For instance, a GET to: { "OS::Nova::Server": [ { "name": "status", "options": [ { "doc_count": 1, "key": "ACTIVE" } ], "type": "string" }, ... ] } As with searches, administrators are able to request facet terms for all projects/tenants. By default, facet terms are limited to the currently scoped project; adding all_projects=true as a query parameter removes the restriction. It is possible to limit the number of options returned for fields that support facet terms. limit_terms restricts the number of terms (sorted in order of descending frequency). A value of 0 indicates no limit, and is the default. It is possible to not return any options for facets. By default all options are returned for fields that support facet terms. Adding exclude_options=true as a query parameter will return only the facet field and not any of the options. Using this option will avoid an aggregation query being performed on Elasticsearch, providing a performance boost. Aggregations¶ Faceting (above) is a more general form of Elasticsearch aggregation. Faceting is an example of ‘bucketing’; ‘metrics’ includes functions like min, max, percentiles. Aggregations will be based on the query provided as well as restrictions on resource type and any RBAC filters. To include aggregations in a query, include aggs or aggregations in a search request body. "size": 0 prevents Elasticsearch returning any results, just the aggregation, though it is valid to retrieve both search results and aggregations from a single query. 
For example: { "query": {"match_all": {}}, "size": 0, "aggregations": { "names": { "terms": {"field": "name"} }, "earliest": { "min": {"field": "created_at"} } } } Response: { "hits": {"total": 2, "max_score": 0.0, "hits": []}, "aggregations": { "names": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ {"key": "for_instance1", "doc_count": 2}, {"key": "instance1", "doc_count": 1} ] }, "earliest": { "value": 1459946898000.0, "value_as_string": "2016-04-06T12:48:18.000Z" } } } Note that for some aggregations value_as_string may be more useful than value - for example, the earliest aggregation in the example operates on a date field whose internal representation is a timestamp. The global aggregation type is not allowed because unlike other aggregation types it operates outside the search query scope. Freeform queries¶ Elasticsearch has a flexible query parser that can be used for many kinds of search terms: the query_string operator. Some things to bear in mind about using query_string (see the documentation for full options): A query term may be prefixed with a fieldname (as seen below). If it is not, by default the entire document will be searched for the term. The default operator between terms is OR By default, query terms are case insensitive For instance, the following will look for images with a restriction on name and a range query on size: { "query": { "query_string": { "query": "name: (Ubuntu OR Fedora) AND size: [3000000 TO 5000000]" } } } Within the query string query, you may perform a number of interesting queries. Below are some examples. Phrases¶ \"i love openstack\" By default, each word you type will be searched for individually. You may also try to search an exact phrase by using quotes (“my phrase”) to surround a phrase. The search service may allow a certain amount of phrase slop - meaning that if you have some words out of order in the phrase it may still match. Wildcards¶ python3.? 10.0.0.* 172.*.4.* By default, each word you type will match full words only. You may also use wildcards to match parts of words. Wildcard searches can be run on individual terms, using ? to replace a single character, and * to replace zero or more character. ‘demo’ will match the full word ‘demo’ only. However, ‘de*’ will match anything that starts with ‘de’, such as ‘demo_1’. ‘de*1’ will match anything that starts with ‘de’ and ends with ‘1’. Note Wildcard queries place a heavy burden on the search service and may perform poorly. Term Operators¶ +apache -apache web +(apache OR python) Add a ‘+’ or a ‘-‘ to indicate terms that must or must not appear. For example ‘+python -apache web’ would find everything that has ‘python’ does NOT have ‘apache’ and should have ‘web’. This may also be used with grouping. For example, ‘web -(apache AND python)’ would find anything with ‘web’, but does not have either ‘apache’ or ‘python’. Boolean Operators¶ python AND apache nginx OR apache web && !apache You can separate search terms and groups with AND, OR and NOT (also written &&, || and !). For example, ‘python OR javascript’ will find anything with either term (OR is used by default, so does not need to be specified). However, ‘python AND javascript’ will find things that only have both terms. You can do this with as many terms as you’d like (e.g. ‘django AND javascript AND !unholy’). It is important to use all caps or the alternate syntax (&&, ||), because ‘and’ will be treated as another search term, but ‘AND’ will be treated as a logical operator. 
Grouping¶ python AND (2.7 OR 3.4) web && (apache !python) Use parenthesis to group different aspects of your query to form sub-queries. For example, ‘web OR (python AND apache)’ will return anything that either has ‘web’ OR has both ‘python’ AND ‘apache’. Facets¶ name:cirros name:cirros && protected:false You may decide to only look in a certain field for a search term by setting a specific facet. This is accomplished by either selecting a facet from the drop down or by typing the facet manually. For example, if you are looking for an image, you may choose to only look at the name field by adding ‘name:foo’. You may group facets and use logical operators. Range Queries¶ size:[1 TO 1000] size:[1 TO *] size:>=1 size:<1000 Date, numeric or string fields can use range queries. Use square brackets [min TO max] for inclusive ranges and curly brackets {min TO max} for exclusive ranges. IP Addresses¶ 172.24.4.0/16 [10.0.0.1 TO 10.0.0.4] IPv4 addresses may be searched based on ranges and with CIDR notation. Boosting¶ web javascript^2 python^0.1 You can increase or decrease the relevance of a search term by boosting different terms, phrases, or groups. Boost one of these by adding ^n to the term, phrase, or group where n is a number greater than 1 to increase relevance and between 0 and 1 to decrease relevance. For example ‘web^4 python^0.1’ would find anything with both web and python, but would increase the relevance for anything with ‘web’ in the result and decrease the relevance for anything with ‘python’ in the result. Advanced Features¶ CORS - Accessing Searchlight from the browser¶ Searchlight can be configured to permit access directly from the browser. For details on this configuration, please refer to the OpenStack Cloud Admin Guide.
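As a Python counterpart to the raw HTTP examples above, the following sketch issues a simple search request with the requests library. The endpoint URL, port and token are illustrative placeholders rather than values taken from this page:

```python
import requests

# Illustrative values; substitute your deployment's Searchlight endpoint and a valid Keystone token.
SEARCHLIGHT_URL = "http://localhost:9393/v1"
TOKEN = "<keystone-auth-token>"

headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

# Match-all query restricted to Glance images, returning only the name and size fields.
body = {
    "type": ["OS::Glance::Image"],
    "query": {"match_all": {}},
    "_source": ["name", "size"],
    "limit": 10,
}

resp = requests.post(f"{SEARCHLIGHT_URL}/search", headers=headers, json=body)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_type"], hit["_id"], hit["_source"])
```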
https://docs.openstack.org/searchlight/latest/user/searchlightapi.html
2019-10-13T22:32:35
CC-MAIN-2019-43
1570986648343.8
[]
docs.openstack.org
Events
Events are a special kind of Member in a Class or Record that allow other parts of the code to subscribe to notifications about certain, well, events in the class. An event is very similar to a Block type Field, but rather than just storing a single block reference, events are "multi-cast". This means they maintain a list of subscribers, and when calling the event, all subscribers will get a callback.

Platform Considerations
Although events are most commonly used on .NET, and both Cocoa and Java have different paradigms to deal with similar concepts (such as regular Blocks, Delegate Classes (not to be confused with .NET's use of the term) and Anonymous Interfaces), events are supported on all platforms.

Event Declaration Syntax
A simple event declaration consists of the event keyword, followed by a name for the event and the type of Block that can be used to subscribe to the event. The block type can be a named Alias, or an explicit block declaration:

event ButtonClick: EventHandler; // EventHandler is a named block defined elsewhere
event Status: block(aMessage: String);

Just as with Stored Properties, with this short syntax the compiler will take care of creating all the infrastructure for the event, including a private variable to store subscribers, and add and remove methods.

Optionally, an add and remove clause can be provided to explicitly name the methods responsible for adding and removing handlers. These methods must then be declared and implemented separately, and they must take a single parameter of the same type as the event (this, too, is comparable to the read and write statements for a Property). It is then up to the implementation of these methods to handle the subscription/unsubscription logic.

private
  method AddCallback(v: EventHandler);
  method RemoveCallback(v: EventHandler);
public
  event Callback: EventHandler add AddCallback remove RemoveCallback;

Alternatively, on .NET only, a block field to be used for storage can be provided via the block (or legacy delegate) keyword:

private
  fCallback: EventHandler;
public
  event Callback: EventHandler block fCallback;

Subscribing to or Unsubscribing from Events
Externally, code can subscribe to or unsubscribe from an event by adding or removing handlers. This is done with the special += and -= operators, to emphasize that events are, by default, not a 1:1 mapping, but that each event can have an unlimited number of subscribers.

method ReactToSomething(aEventArgs: EventArgs);
//...
myObject.Callback += @ReactToSomething
//...
myObject.Callback -= @ReactToSomething

Of course, any compatible Block can be used to subscribe to an event – be it a method of the local type, as in the example above, or e.g. an Anonymous Method. Please refer to the Event Access Expression topic for more details.

Raising Events
An event can be raised by simply calling it like a Block or Method. Before doing so, one should ensure that at least one subscriber has been added, because firing an unassigned event, just as calling a nil block, will cause a NullReferenceException. The assigned() System Function or a comparison to nil can be used to check if an event is assigned.

if assigned(Callback) then
  Callback();

By default, only the type that defines the event can raise it, regardless of the visibility of the event itself. See more on this in the following section.

Visibility
The visibility of events is governed by the Visibility Section of the containing type the event is declared in, or the Visibility Modifiers applied to the event.
This visibility extends to the ability to add and remove subscribers, but not to the ability to raise (or fire off) the event, which can be controlled by the raise statement, described below.

Optionally, separate visibility levels can be provided for the add, remove and raise statements. These will override the general visibility of the event itself:

event Callback: EventHandler
  public add AddCallback
  private remove RemoveCallback;

Raise Statements
Optionally, a raise statement combined with an (also optional) visibility level can be specified, in order to extend the reach of who can raise (or fire off) the event. By default, the ability to raise an event is private, and limited to the class that declares it.

event Callback: EventHandler
  protected raise;

In the example above, raising the event (normally private) is propagated to a protected action, meaning it is now available to descendant classes.

Static/Class Events
Like most type members, events are by default defined on the instance – that means the event can be called on and will execute in the context of an instance of the class. An event can be marked as static by prefixing the event declaration with the class keyword, or by applying the static Member Modifier:

class event SomethingChanged: EventHandler;       // static event on the class itself
event SomethingElseChanged: EventHandler; static; // also a static event on the class itself

Virtuality
The Virtuality of events can be controlled by applying one of the Virtuality Member Modifiers.

event SomethingChanged; virtual;

Events can be marked as abstract if a descendant class must provide the implementation. Abstract events (and events in Interfaces) may not define an add, remove or raise statement.

event OnClick: EventHandler; abstract;

Other Modifiers
A number of other Member Modifiers can be applied to events:

deprecated
implements ISomeInterface.SomeMember
locked
locked on Expression
mapped to (see Mapped Members)
optional (Interface members only)
unsafe
https://docs.elementscompiler.com/Oxygene/Members/Events/
2018-12-09T22:43:51
CC-MAIN-2018-51
1544376823183.3
[]
docs.elementscompiler.com
VMware vSphere Host Group
To create a vSphere Host Group, you must first set up a VMware vSphere Cloud Provider. To securely connect Nirmata to a vSphere in your Private Cloud or Data Center, first set up a Private Cloud.
Setting up VMware consists of the following steps:
- Set up a Resource Pool in vCenter.
- Create a vSphere template.
- Create a VMware vSphere Host Group.
Set up a Resource Pool in vCenter
Refer to VMware vSphere documentation for instructions on setting up a Resource Pool.
Create a vSphere Template
Create a VMware vSphere Cloud Provider
Create a VMware vSphere provider by providing the vCenter SDK URL () and credentials:
After entering the credentials, validate access to your cloud provider before closing the wizard:
Create a VMware vSphere Host Group
https://docs.nirmata.io/hostgroups/vmware_vsphere_host_group/
2018-12-09T22:46:08
CC-MAIN-2018-51
1544376823183.3
[array(['/images/create-vsphere-provider-1.png', 'image'], dtype=object) array(['/images/create-vsphere-provider-2.png', 'image'], dtype=object) array(['/images/create-vsphere-hg-0.png', 'image'], dtype=object) array(['/images/create-vsphere-hg-1.png', 'image'], dtype=object)]
docs.nirmata.io
Azure Active Directory risk events The vast majority of security breaches take place when attackers gain access to an environment by stealing a user’s identity. Discovering compromised identities is no easy task. Azure Active Directory uses adaptive machine learning algorithms and heuristics to detect suspicious actions that are related to your user accounts. Each detected suspicious action is stored in a record called a risk event. There are two places where you review reported risk events: Azure AD reporting - Risk events are part of Azure AD's security reports. For more information, see the users at risk security report and the risky sign-ins security report. Azure AD Identity Protection - Risk events are also part of the reporting capabilities of Azure Active Directory Identity Protection. Currently, Azure Active Directory detects six types of risk events: - Users with leaked credentials - Impossible travel to atypical locations The insight you get for a detected risk event is tied to your Azure AD subscription. - With the Azure AD Premium P2 edition, you get the most detailed information about all underlying detections. - With the Azure AD Premium P1 edition, detections that are not covered by your license appear as the risk event Sign-in with additional risk detected. While the detection of risk events already represents an important aspect of protecting your identities, you also have the option to either manually address them or implement automated responses by configuring conditional access policies. For more information, see Azure Active Directory Identity Protection. Risk event types The risk event type property is an identifier for the suspicious action a risk event record has been created for. Microsoft's continuous investments into the detection process lead to: - Improvements to the detection accuracy of existing risk events - New risk event types that will be added in the future Leaked credentials When cybercriminals compromise valid passwords of legitimate users, they often share those credentials. This is usually done by posting them publicly on the dark web or paste sites or by trading or selling the credentials on the black market. The Microsoft leaked credentials service acquires username / password pairs by monitoring public and dark web sites and by working with: - Researchers - Law enforcement - Security teams at Microsoft - Other trusted sources When the service acquires username / password pairs, they are checked against AAD users' current valid credentials. When a match is found, it means that a user's password has been compromised, and a leaked credentials risk event is created. This risk event type identifies users who have successfully signed in from an IP address that has been identified as an anonymous proxy IP address. These proxies are used by people who want to hide their device’s IP address, and may be used for malicious intent. Impossible travel to atypical locations This risk event type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. Among several other factors, this machine learning algorithm takes into account the time between the two sign-ins and the time it would have taken for the user to travel from the first location to the second, indicating that a different user is using the same credentials. 
The algorithm ignores obvious "false positives" contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of 14 days during which it learns a new user’s sign-in behavior. This risk event type considers past sign-in locations (IP, Latitude / Longitude and ASN) to determine new / unfamiliar locations. The system stores information about previous locations used by a user, and considers these “familiar” locations. The risk event is triggered when the sign-in occurs from a location that's not already in the list of familiar locations. The system has an initial learning period of 30 days, during which it does not flag any new locations as unfamiliar locations. The system also ignores sign-ins from familiar devices, and locations that are geographically close to a familiar location. Identity Protection detects sign-ins from unfamiliar locations also for basic authentication / legacy protocols. Because these protocols do not have modern familiar features such as client id, there is not enough telemetry to reduce false positives. To reduce the number of detected risk events, you should move to modern authentication. This risk event type identifies sign-ins from devices infected with malware, that are known to actively communicate with a bot server. This is determined by correlating IP addresses of the user’s device against IP addresses that were in contact with a bot server.. Detection type The detection type property is an indicator (Real-time or Offline) for the detection timeframe of a risk event. Currently, most risk events are detected offline in a post-processing operation after the risk event has occurred. The following table lists the amount of time it takes for a detection type to show up in a related report: For the risk event types Azure Active Directory detects, the detection types are: Risk level The risk level property of a risk event is an indicator (High, Medium, or Low) for the severity and the confidence of a risk event. This property helps you to prioritize the actions you must take. The severity of the risk event represents the strength of the signal as a predictor of identity compromise. The confidence is an indicator for the possibility of false positives. For example,. Leaked credentials Leaked credentials risk events are classified as a High, because they provide a clear indication that the user name and password are available to an attacker. The risk level for this risk event type is Medium because an anonymous IP address is not a strong indication of an account compromise. We recommend that you immediately contact the user to verify if they were using anonymous IP addresses. Impossible travel to atypical locations give the appearance of. Tip You can reduce the amount of reported false-positives for this risk event type by configuring named locations. Unfamiliar locations can provide a strong indication that an attacker is able to use a stolen identity. False-positives may occur when a user is traveling, is trying out a new device, or is using a new VPN. As a result of these false positives, the risk level for this event type is Medium. This risk event identifies IP addresses, not user devices. If several devices are behind a single IP address, and only some are controlled by a bot network, sign-ins from other devices my trigger this event unnecessarily, which is why this risk event is classified as Low. 
We recommend that you contact the user and scan all the user's devices. It is also possible that a user's personal device is infected, or that someone else was using an infected device from the same IP address as the user. Infected devices are often infected by malware that has not yet been identified by anti-virus software, and may also point to bad user habits that caused the device to become infected in the first place. For more information about how to address malware infections, see the Malware Protection Center.

We recommend that you contact the user to verify whether they actually signed in from an IP address that was marked as suspicious. The risk level for this event type is "Medium" because several devices may be behind the same IP address, while only some may be responsible for the suspicious activity.
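The portal and Identity Protection reports above are the primary way to review risk events, but they can also be retrieved programmatically through the Microsoft Graph API. The following Python sketch assumes you have already obtained an OAuth access token with an Identity Protection read permission (for example IdentityRiskEvent.Read.All); the token value, the filter and the exact endpoint available to your tenant are assumptions rather than something specified in this article.

import requests

# Assumption: ACCESS_TOKEN was acquired separately (e.g. via a client credentials flow)
# and carries permission to read identity risk detections.
ACCESS_TOKEN = "replace-with-a-microsoft-graph-access-token"
GRAPH_URL = "https://graph.microsoft.com/v1.0/identityProtection/riskDetections"

response = requests.get(
    GRAPH_URL,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    params={"$top": "25", "$filter": "riskLevel eq 'high'"},
)
response.raise_for_status()

for detection in response.json().get("value", []):
    # Each detection carries the risk event type, its level and the affected user.
    print(
        detection.get("riskEventType"),
        detection.get("riskLevel"),
        detection.get("userPrincipalName"),
    )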
https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-risk-events
2018-12-09T21:44:37
CC-MAIN-2018-51
1544376823183.3
[array(['media/concept-risk-events/91.png', 'Risk event'], dtype=object) array(['media/concept-risk-events/01.png', 'Risk Level'], dtype=object)]
docs.microsoft.com
Testing ClassicPress ClassicPress is focused on quality.Link to this section Our Quality Assurance (QA) team focuses on testing and software quality, as well as making sure any bugs get in front of the right people so they can be fixed. Continuous QA is crucially important in software development because it helps to ensure that any change or update will keep the quality of the software and avoid any new defects or bugs. Software bugs and defects can have a huge negative impact on the businesses/people that use the software. How can I help the QA Team?Link to this section We will need all the help we can get to cover all the necessary testing scenarios. We’ve released ClassicPress 1.0.0-beta1 (the first official beta release) and we need help testing new installs of ClassicPress using this version. We’ve also released the migration plugin that will convert a WordPress installation into a ClassicPress installation, and we need help testing that too. To install ClassicPress using either of these methods, see the Installing ClassicPress page. For a suggested list of testing scenarios, see the Testing Scenarios page. To find out more, join our Slack group and introduce yourself in the #testing channel, or make a post on our support forum. Testing process and requirementsLink to this section You will need a computer to perform the testing, and of course some free time. Even a few minutes to perform a fresh install or a migration from WordPress to ClassicPress can help. You should test in new installs or existing site installs that aren’t in production and aren’t important in case something goes wrong. And of course, always make a backup of your site files and database first! Reporting bugsLink to this section Any bugs that are found can be reported on the GitHub project pages. You can report bugs in ClassicPress itself on the ClassicPress GitHub repository. You can report bugs related to the ClassicPress migration plugin on the GitHub repository for the plugin. We are happy to help you through the process of reporting a bug as well, just send us a note on Slack or email. Manual and automated testingLink to this section There are two main types of software testing that can be done. One is manual testing done by real people and the other is automated testing that is performed by computers. The idea is to have always a mix of both kinds of testing to increase and maintain ClassicPress software quality. ClassicPress core has an extensive PHP and JavaScript automated test suite, which is based on the WordPress automated test suite. This test suite runs against the ClassicPress code every time any change is made, and the tests are required to pass before the change can be merged. New code or logic introduced into ClassicPress or its plugins should also have automated tests. For more information see the WordPress automated testing handbook – most of the same information applies to ClassicPress.
https://docs.classicpress.net/testing-classicpress/
2018-12-09T22:56:08
CC-MAIN-2018-51
1544376823183.3
[]
docs.classicpress.net
This article explains how to run any standard report. For descriptions of specific reports, see the appropriate article following this one. To run a standard report The Account Level Selection field setting on the Administration > Configuration > Reporting tab determines which account level is used for the Start Account and End Account fields for filtering report results. You can ignore the Account Level Selection setting by typing an account name in the Start Account and End Account fields. For Monthly services, Charge data in reports is available on the last day of the period. If you want to see Charge data for the first few days of the current month, select an End Date of the last day of the month.
https://docs.consumption.support.hpe.com/CC3/04Viewing_financial_data/01Standard_reports/Running_standard_reports
2018-12-09T22:51:58
CC-MAIN-2018-51
1544376823183.3
[]
docs.consumption.support.hpe.com
RECONFIGURE (Transact-SQL)
Syntax
RECONFIGURE [ WITH OVERRIDE ]
Arguments
WITH OVERRIDE
Almost any configuration option can be reconfigured by using the WITH OVERRIDE option; however, some fatal errors are still prevented. For example, the min server memory configuration option cannot be configured with a value greater than the value specified in the max server memory configuration option.
Remarks
When reconfiguring the resource governor, see the RECONFIGURE option of ALTER RESOURCE GOVERNOR (Transact-SQL).
Permissions
RECONFIGURE permissions default to grantees of the ALTER SETTINGS permission. The sysadmin and serveradmin fixed server roles implicitly hold this permission.
Examples
The following example sets the upper limit for the recovery interval configuration option to 75.
EXEC sp_configure 'recovery interval', 75;
RECONFIGURE WITH OVERRIDE;
GO
See Also
Server Configuration Options (SQL Server)
sp_configure (Transact-SQL)
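The example above is meant to be run from a T-SQL session, but the same sequence can also be issued from application code. The following Python sketch uses pyodbc; the driver name, server and credentials in the connection string are placeholders, and the account used must hold the ALTER SETTINGS permission described above.

import pyodbc

# Placeholder connection details; the login needs the ALTER SETTINGS permission.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=master;UID=your-login;PWD=your-password",
    autocommit=True,  # configuration changes should not run inside a user transaction
)
cursor = conn.cursor()

# 'recovery interval' is an advanced option, so expose advanced options first.
cursor.execute("EXEC sp_configure 'show advanced options', 1;")
cursor.execute("RECONFIGURE;")

# Change the setting, then run RECONFIGURE so the new value takes effect.
cursor.execute("EXEC sp_configure 'recovery interval', 75;")
cursor.execute("RECONFIGURE WITH OVERRIDE;")

# Read back the configured and running values for the option.
cursor.execute("EXEC sp_configure 'recovery interval';")
print(cursor.fetchone())

conn.close()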
https://docs.microsoft.com/en-us/sql/t-sql/language-elements/reconfigure-transact-sql?view=sql-server-2017
2018-12-09T21:18:35
CC-MAIN-2018-51
1544376823183.3
[array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) array(['../../includes/media/no.png?view=sql-server-2017', 'no'], dtype=object) ]
docs.microsoft.com
Overview The RadBulletGraph control is a variation of linear gauge. It combines a number of indicators into one control making it light-weight, customizable, and straightforward to setup and use. The control is a great tool for dashboards as it is the optimal way to present a lot of information in relatively small size. Figure 1: RadBulletGraph Key Features Easy to use: Using Telerik RadBulletGraph is as easy as just dragging and dropping it from the toolbox. The Telerik RadBulletGraph control supports the full design specification: non zero-based scale, negative featured measures, projected values, having many comparative measures and quantitative ranges is not a problem. Data Binding Support: The control can be easily data bound to your business data, either by setting its properties directly, or by using a binding declaration.
https://docs.telerik.com/devtools/winforms/controls/gauges/bulletgraph/bulletgraph
2018-12-09T22:45:08
CC-MAIN-2018-51
1544376823183.3
[array(['images/bulletgraph-overview001.png', 'bulletgraph-overview 001'], dtype=object) ]
docs.telerik.com
Welcome to the VMware vRealize Configuration Manager (VCM) documentation page. Use the navigation on the left to browse through the documentation for your release of vRealize Configuration Manager. You can find information about Product such as What's New, Upgrades, Compatibility, Resolved and Known Issues, in the release notes. You can use the Guide for step-by-step instructions to install and configure product. About vRealize Configuration Manager vRealize Configuration Manager enables you to store asset, security, and configuration settings, manage and remediate asset compliance. It collects thousands of asset, security, and configuration data settings from each networked virtual environments system and virtual object, and from Windows, UNIX, and Linux server and workstation and stores them in a centralized Configuration Management Database (CMDB). By leveraging the information stored in the CMDB, IT administrators can ensure that company policies and the actions they perform are appropriate for the IT infrastructure that they support.
https://docs.vmware.com/en/VMware-vRealize-Configuration-Manager/index.html
2018-12-09T21:57:36
CC-MAIN-2018-51
1544376823183.3
[]
docs.vmware.com
Overview¶ PSAMM is an open source software that is designed for the curation and analysis of metabolic models. It supports model version tracking, model annotation, data integration, data parsing and formatting, consistency checking, automatic gap filling, and model simulations. PSAMM is developed as an open source project, coordinated through Github. The PSAMM software is being developed in the Zhang Laboratory at the University of Rhode Island. Citing PSAMM¶ If you use PSAMM in a publication, please cite: Steffensen JL, Dufault-Thompson K, Zhang Y. PSAMM: A Portable System for the Analysis of Metabolic Models. PLOS Comput Biol. Public Library of Science; 2016;12: e1004732. doi:10.1371/journal.pcbi.1004732. Software license¶ PS.
https://psamm.readthedocs.io/en/03-2017-tutorial-update/overview.html
2018-12-09T21:33:13
CC-MAIN-2018-51
1544376823183.3
[]
psamm.readthedocs.io
Introduces an interface for setting the visible state of a client. procedure SetVisible(Value: Boolean); virtual; virtual __fastcall SetVisible(Boolean Value); The associated action calls SetVisible when its Visible property changes so that the action link can propagate the new Visible value to the client object. Value specifies the new value of the Visible property. As implemented in TActionLink, SetVisible does nothing. Descendant classes override SetVisible to set the client property that corresponds to the action's Visible property if the IsVisibleLinked method returns true.
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/ActnList_TActionLink_SetVisible.html
2018-12-09T21:24:37
CC-MAIN-2018-51
1544376823183.3
[]
docs.embarcadero.com
Installation
Install the latest client library via pip:

pip install descarteslabs

To install with optional recommended dependencies (blosc, NumPy, and matplotlib):

pip install "descarteslabs[complete]"

The latest development version can always be found on GitHub. It can be installed via pip:

pip install -U git+

Windows Users
The client library requires shapely, which can be hard to install on Windows with pip. We recommend using Anaconda to first install shapely. If you're unfamiliar with conda, check out the document on development environments for recommendations.

conda install shapely
pip install descarteslabs

Assistance
For assistance setting up a development environment, go to Managing a Development Environment or contact us for Support.

Continue to Authentication
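After installing, a quick way to confirm that the client library is importable in your environment is a short check like the one below. This is just an illustrative sanity check; the package metadata lookup is a general Python technique, not something prescribed by this page.

# Minimal sanity check that the client library installed correctly.
import pkg_resources

try:
    import descarteslabs  # noqa: F401
except ImportError as exc:
    raise SystemExit("descarteslabs is not importable: {}".format(exc))

version = pkg_resources.get_distribution("descarteslabs").version
print("descarteslabs client library version {} is installed".format(version))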
https://docs.descarteslabs.com/installation.html
2018-12-09T21:43:19
CC-MAIN-2018-51
1544376823183.3
[]
docs.descarteslabs.com
Connect to API secured with Azure Active Directory When building SharePoint Framework solutions, you might need to connect to your custom API to retrieve some data or to communicate with line of business applications. Securing custom APIs with Microsoft Azure Active Directory (Azure AD) offers you many benefits and can be done in a number of ways. After you have built the API, there are several ways in which you can access it. These ways vary in complexity and each have their specific considerations. This article discusses the different approaches and describes the step-by-step process of securing and connecting to an API secured with Azure AD. Important When connecting to Azure AD-secured APIs, we recommend that you use the MSGraphClient and AadHttpClient classes, which are now generally available. For more information about the recommended models, see Connect to Azure AD-secured APIs in SharePoint Framework solutions and Use the MSGraphClient to connect to Microsoft Graph. Secure an API with Azure AD If you're using Office 365, securing custom APIs using Azure AD is an architectural option that you should definitely consider. First and foremost, it allows you to secure the access to the API using existing organizational credentials that are already managed through Office 365 and Azure AD. Users with an active account can seamlessly work with applications that leverage APIs secured with Azure AD. Azure AD administrators can centrally manage access to the API, the same way they manage access to all other applications registered with Azure AD. As the API developer, using Azure AD to secure your API frees you from managing a proprietary set of user credentials and implementing a custom security layer for your API. Additionally, Azure AD supports the OAuth protocol, which allows you to connect to the API from a range of application types varying from mobile apps to client-side solutions. When building custom APIs, there are two main ways in which you can secure your API with Azure AD. If you host the API in Microsoft Azure App Service, you can benefit from the App Service Authentication option. If you look for more hosting flexibility for your API, such as hosting it on your own infrastructure or in Docker containers, you need to secure it in code. In such cases, the implementation depends on your programming language and framework. In this article, when discussing this option, you will use C# and the ASP.NET Web API as the framework. Secure the API using Azure App Service Authentication When deploying custom APIs to Azure App Service, you can benefit from the App Service Authentication option to secure the API with Azure AD. The biggest benefit of using App Service Authentication is its simplicity: by following the configuration steps available in the Azure portal, you can have the wizard set up the authentication configuration for you. If you choose the basic setup, the wizard creates a new Azure AD application in Azure AD associated with the current subscription. In the advanced configuration, you can choose which Azure AD application should be used to secure the access to the App Service hosting the API.. When using Azure App Service Authentication, there is no additional configuration required in your application. Azure App Service Authentication is a feature available only in Azure App Service. While this capability significantly simplifies implementing authentication in your API, it ties it to running inside Azure App Service. 
If you want to host the API with another cloud provider or inside a Docker container, you need to implement the authentication layer first. Secure the API using ASP.NET authentication If you want to have the maximum flexibility with regards to where your API is hosted and how it is deployed, you should consider implementing the support for Azure AD authentication in ASP.NET. Visual Studio simplifies the implementation process significantly, and after completing the authentication setup wizard, your API requires users to sign in by using their organizational account. During the configuration process, Visual Studio adds all the necessary references and settings to your ASP.NET Web API project, including registering a new Azure AD application to secure your API. Access an API secured with Azure AD from SharePoint Framework solutions SharePoint Framework solutions are fully client-side and as such are incapable of securely storing secrets required to connect to secured APIs. To support secure communication with client-side solutions, Azure AD supports a number of mechanisms such as authentication cookies or the OAuth implicit flow. Typically, when building SharePoint Framework solutions, you would use the MSGraphClient to connect to the Microsoft Graph and the AadHttpClient to connect to an enterprise API secured with Azure AD. If you however work with a JavaScript framework that has its own service for executing web requests, such as AngularJS or jQuery or your solution is built on an older version of the SharePoint Framework, you might need to use other approaches to obtain an access token to APIs secured with Azure AD. Azure AD authorization flows Office 365 uses Azure Active Directory (Azure AD) to secure its APIs, which are accessed through named. Use the AadTokenProvider to retrieve access token When working with JavaScript libraries that have their own services for executing web requests, the recommended way to obtain an access token to an API secured with Azure AD is by using the AadTokenProvider available from SharePoint Framework v1.6.0. Compared to other solutions, the AadTokenProvider takes away all intricacies of implementing OAuth implicit flow in SharePoint Framework solutions, allowing you to focus on building your application. Note It's recommended to regularly update your SharePoint Framework solutions to the most recent version of the SharePoint Framework to benefit of the improvements and new capabilities added by Microsoft. Request permissions to API secured with Azure AD If your SharePoint Framework solution requires permissions to specific resources secured with Azure AD, such as enterprise API, you should specify these resources along with the necessary permissions in the configuration of your solution. In your SharePoint Framework project, open the config/package-solution.json file. To the solution property, add the webApiPermissionRequests property that lists all the resources and corresponding permissions that your solution needs. 
Following is an example of a SharePoint Framework solution requesting access to enterprise API: { "$schema": "", "solution": { "name": "spfx-client-side-solution", "id": "5d16587c-5e87-44d7-b658-1148988f212a", "version": "1.0.0.0", "includeClientSideAssets": true, "skipFeatureDeployment": true, "webApiPermissionRequests": [ { "resource": "Enterprise-API-Name", "scope": "user_impersonation" } ] }, "paths": { "zippedPackage": "solution/spfx-api.sppkg" } } Note For the value of the resource property, you can specify either the displayName or the objectId of the application to which you want to request permissions. Using the displayName not only is more readable but also allows you to build your solution once and reuse it across multiple tenants. While the objectId of an Azure AD application is different on each tenant, the displayName stays the same. When this solution is deployed to the SharePoint app catalog, it prompts the administrator to verify the requested permissions and either grant or deny them. Read Manage permission requests article to learn more about different ways of managing permission requests. Acquire an access token Following is how you would use the AadTokenProvider to retrieve an access token for an enterprise API secured with Azure AD and use it to perform a web request using jQuery: // ... import * as $ from 'jquery'; import { AadTokenProvider } from '@microsoft/sp-http'; export default class OrdersWebPart extends BaseClientSideWebPart<IOrdersWebPartProps> { // ... public render(): void { this.context.statusRenderer.displayLoadingIndicator(this.domElement, 'orders'); this.context.aadTokenProviderFactory .getTokenProvider() .then((tokenProvider: AadTokenProvider): Promise<string> => { // retrieve access token for the enterprise API secured with Azure AD return tokenProvider.getToken('09c4b84d-13c4-4451-9350-3baedf70aab4'); }) .then((accessToken: string): void => { // call the enterprise API using jQuery passing the access token $.get({ url: '', headers: { authorization: `Bearer ${accessToken}`, accept: 'application/json' } }) .done((orders: any): void => { this.context.statusRenderer.clearLoadingIndicator(this.domElement); this.domElement.innerHTML = ` <div class="${ styles.orders}"> <div class="${ styles.container}"> <div class="${ styles.row}"> <div class="${ styles.column}"> <span class="${ styles.title}">Orders</span> <p class="${ styles.description}"> <ul> ${orders.map(o => `<li>${o.Rep} $${o.Total}</li>`).join('')} </ul> </p> <a href="" class="${ styles.button}"> <span class="${ styles.label}">Learn more</span> </a> </div> </div> </div> </div>`; }); }); } // ... } You start, by retrieving an instance of the AadTokenProvider using the aadTokenProviderFactory. Next, you use the AadTokenProvider to retrieve the access token for your API secured with Azure AD. Once you have obtained the access token, you execute the AJAX request to the enterprise API including the access token in the request headers. Use ADAL JS to handle authorization and retrieve access token If you're using an older version of the SharePoint Framework that v1.6.0 and can't use the AadTokenProvider, you can use the ADAL JS library to handle the OAuth implicit flow and retrieve the access token for the specific API secured with Azure AD. For applications built using AngularJS, ADAL JS offers an HTTP request interceptor that automatically adds required access tokens to headers of outgoing web requests. 
By using this requestor, developers don't need to modify web requests to APIs secured with Azure AD and can focus on building the application instead. Limitations when using ADAL JS with stores its data either in the browser's local storage or in the session storage. Either way, the information stored by ADAL JS is accessible to any component on the same page. For example, if one web part retrieves an access token for Microsoft Graph, any other web part on the same page can reuse that token and call. Because, the first retrieved token is served by ADAL JS to all components. If different components require access to the same resource, users' calendars. The web part that creates new meetings, on the other hand, requires a Microsoft Graph access token with permissions to write to users' calendars. If the upcoming meetings web part loads first, ADAL JS retrieves its token with only the read permission. The same token is then served by ADAL JS to the web part creating new meetings. As you can see, when the web part attempts to create a new meeting, the web part fails. After to complete the authentication process, you need to leave the whole page, which is a poor user experience. After the authentication completes, Azure AD redirects you back to your application. In the URL hash, it includes the identity token that your application needs to process. Unfortunately, because the identity token doesn't contain any information about its origin, if you had multiple web parts on one page, all of them would try to process the identity token from the URL. Considerations when using ADAL JS to communicate with APIs secured with Azure AD Aside the limitations, there are some considerations that you should take into account before implementing ADAL JS in your SharePoint Framework solution yourself. ADAL JS is meant to be used for single-page applications ADAL JS has been designed to be used with single-page applications. As such, by default it doesn't work correctly when used with SharePoint Framework solutions. By applying a patch however, it can be successfully used in SharePoint Framework projects. Handle all possible authentication exceptions yourself When using ADAL JS and OAuth to access APIs secured with Azure AD, the authentication flow is facilitated by Azure. Any errors are handled by the Azure sign-in page. After the user has signed-in with her organizational account, the application tries to retrieve a valid access token. All errors that occur at this stage have to be explicitly handled by the developer of the application because retrieving access tokens is non-interactive and doesn't present any UI to the user. Register every client-side application in Azure AD Every client-side application that wants to use ADAL JS needs to be registered as an Azure AD application. A part of the registration information is the URL where the application is located. Because the application is fully client-side and is not capable of securely storing a secret, the URL is a part of the contract between the application and Azure AD to establish security. This requirement is problematic for SharePoint Framework solutions because developers cannot simply know upfront all URLs where a particular web part will be used. Additionally, at this moment, Azure AD supports specifying up to 10 reply URLs, which might not be sufficient in some scenarios. 
Implement authorization flow in each web part Before a client-side application can retrieve an access token to a specific resource, it needs to authenticate the user to obtain the ID token which can then be exchanged for an access token. Even though SharePoint Framework solutions are hosted in SharePoint, where users are already signed in using their organizational accounts, the authentication information for the current user isn't available to SharePoint Framework solutions. Instead, each solution must explicitly request the user to sign in. This can be done either by redirecting the user to the Azure sign-in page or by showing a pop-up window with the sign-in page. The latter is less intrusive in the case of a web part, which is one of the many elements on a page. If there are multiple SharePoint Framework client-side web parts on the page, each of them manages its state separately and requires the user to explicitly sign in to that particular web part. Configure Internet Explorer security zones Retrieving access tokens required to communicate with APIs secured with Azure AD is facilitated by hidden iframes that handle redirects to Azure AD endpoints. There is a known limitation in Microsoft Internet Explorer where obtaining access tokens in OAuth implicit flow fails, if the Azure AD sign-in endpoints and the SharePoint Online URL are not in the same security zone. If your organization is using Internet Explorer, ensure that the Azure AD endpoint and SharePoint Online URLs are configured in the same security zone. To maintain consistency, some organizations choose to push these settings to end-users using group policies. Add access token to all AJAX requests to APIs secured with Azure AD APIs secured with Azure AD cannot be accessed anonymously. Instead they require a valid credential to be presented by the application calling them. When using the OAuth implicit flow with client-side applications, this credential is the bearer access token obtained using ADAL JS. If you have built your SharePoint Framework solution using AngularJS, ADAL JS automatically ensures that you have a valid access token for the particular resource, and adds it to all outgoing requests executed by using the AngularJS $http service. When using other JavaScript libraries, you have to obtain a valid access token, and if necessary refresh it, and attach it to the outgoing web requests yourself. Use ADAL JS with ADAL JS: - All information is stored in a key specific to the web part (see the override of the _getItemand _saveItemfunctions). - Callbacks are processed only by web parts that initiated them (see the override of the handleWindowCallbackfunction). - When verifying data from callbacks, the instance of the AuthenticationContextclass of the specific web part is used instead of the globally registered singleton (see _renewToken, getRequestInfoand the empty registration of the window.AuthenticationContextfunction). Use the ADAL JS patch in SharePoint Framework web parts For ADAL JS to work correctly in SharePoint Framework web parts, you have to configure it in a specific way. Define a custom interface that extends the standard ADAL JS Configinterface to expose additional properties on the configuration object. export interface IAdalConfig extends adal.Config { popUp?: boolean; callback?: (error: any, token: string) => void; webPartId?: string; } The popUpproperty and the callbackfunction are both already implemented in ADAL JS but are not exposed in the TypeScript typings of the standard Configinterface. 
You need them in order to allow users to sign in to your web parts by using a pop-up window instead of redirecting them to the Azure AD sign-in page.class.. It then client-side web parts Before you build SharePoint Framework client-side web parts that communicate with resources secured by Azure AD, there are some constraints that you should consider. The following information helps, after Important Information in this section doesn't apply to you if you use the AadTokenProvider as it handles the OAuth implicit flow for you Client-side applications are incapable of storing a client secret without revealing it to users. fails, and the web part that use OAuth and the Azure AD sign-in page (located at) must be in the same security zone. Without such configuration, the authentication process fails, and web parts won't be able to communicate with Azure AD-secured resources. User must sign in regularly When using regular OAuth flow, web applications and client applications get a short-lived access token (60 minutes by default) and a refresh token that. After that token expires, the application must start a new OAuth flow, which could require the user to sign in again.
https://docs.microsoft.com/en-us/sharepoint/dev/spfx/web-parts/guidance/connect-to-api-secured-with-aad
2018-12-09T21:37:26
CC-MAIN-2018-51
1544376823183.3
[array(['../../../images/api-aad-azure-app-service-authentication.png', 'App Service Authentication settings displayed in the Azure portal'], dtype=object) array(['../../../images/api-aad-visual-studio-authentication-wizard.png', 'Visual Studio authentication setup wizard'], dtype=object) ]
docs.microsoft.com
Driving Distance calculation
Important: Only valid for pgRouting v1.x. For pgRouting v2.0 or higher, see the current pgRouting documentation.

Function:
The driving_distance function has the following signature:

CREATE OR REPLACE FUNCTION driving_distance(
        sql text,
        source_id integer,
        distance float8,
        directed boolean,
        has_reverse_cost boolean)
        RETURNS SETOF path_result

Arguments:
sql: a SQL query, which should return a set of rows with the following columns (as in the example below): id (int4), source (int4), target (int4), cost (float8), and optionally reverse_cost (float8) when has_reverse_cost is true.
source_id: int4 id of the start point
distance: float8 value in edge cost units (not in projection units - they might be different).
directed: true if the graph is directed
has_reverse_cost: if true, the reverse_cost column of the SQL generated set of rows will be used for the cost of the traversal of the edge in the opposite direction.

Returns a set of path_result rows:
- vertex_id: the identifier of the vertex
- edge_id: the identifier of the edge crossed
- cost: the cost associated with the current edge. It is 0 for the row after the last edge. Thus, the path total cost can be computed using a sum of all rows in the cost column.

Examples:
SELECT * FROM driving_distance('SELECT gid AS id,source,target, length::double precision AS cost FROM dourol',10549,0.01,false,false);

 vertex_id | edge_id |     cost
-----------+---------+---------------
      6190 |  120220 | 0.00967666852
      6205 |  118671 | 0.00961557335
      6225 |  119384 | 0.00965668162
      6320 |  119378 | 0.00959826176
       ... |     ... |           ...
     15144 |  122612 | 0.00973386526
     15285 |  120471 | 0.00912965866
     15349 |  122085 | 0.00944814966
     15417 |  120471 | 0.00942316736
     15483 |  121629 | 0.00972957546
(293 rows)
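The function is normally called directly from SQL, as in the example above, but the same query can be issued from application code. Here is a small Python sketch using psycopg2; the connection settings are placeholders, while the inner SQL, table name (dourol) and parameters are taken from the example.

import psycopg2

# Placeholder connection settings; point these at your own pgRouting database.
conn = psycopg2.connect(host="localhost", dbname="routing",
                        user="postgres", password="postgres")
cur = conn.cursor()

inner_sql = ("SELECT gid AS id, source, target, "
             "length::double precision AS cost FROM dourol")

# driving_distance(sql, source_id, distance, directed, has_reverse_cost)
cur.execute(
    "SELECT vertex_id, edge_id, cost FROM driving_distance(%s, %s, %s, %s, %s);",
    (inner_sql, 10549, 0.01, False, False),
)

total_cost = 0.0
for vertex_id, edge_id, cost in cur.fetchall():
    total_cost += cost
    print(vertex_id, edge_id, cost)

# As noted above, the total path cost is simply the sum of the cost column.
print("total cost:", total_cost)

cur.close()
conn.close()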
http://docs.pgrouting.org/1.x/en/dd.html
2017-04-23T05:30:04
CC-MAIN-2017-17
1492917118477.15
[]
docs.pgrouting.org
Harmony Draw Guide > Introduction
This guide is divided as follows:
http://docs.toonboom.com/help/harmony-11/draw-standalone/Content/_CORE/Draw/000_CT_Draw_Intro.html
2017-04-23T05:35:09
CC-MAIN-2017-17
1492917118477.15
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/HAR/_Skins/stage.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/draw.png', 'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/sketch.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/_Skins/stageXsheet.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Stage/Interface/HAR11_Module_Icons/HarmonyDraw.png', None], dtype=object) ]
docs.toonboom.com
- MongoDB Integration and Tools > - Operational Procedures using Red Hat Enterprise Linux Identity Management Operational Procedures using Red Hat Enterprise Linux Identity Management¶ On this page The Red Hat Enterprise Linux Identity Management solution, RHEL IdM integrates Kerberos authentication, directory services, certificate management, DNS and NTP in a single service. The following sections provide instructions for adding and removing users, and managing user permissions, as well as providing instructions for managing password policies within RHEL IdM. User Management¶ Adding a New User¶ To add a new user, you must first create that user within IdM and then map that user to a set of privileges within MongoDB. Note This procedure requires administrative privileges within IdM and MongoDB Authenticate as an administrator (e.g. [email protected]) in Kerberos kinit admin Add a new user by issuing a command similar to the following: ipa user-add johnsmith --password Follow the prompts to complete adding the new user. Connect to MongoDB and authenticate as user with at least userAdminor userAdminAnyDatabaseprivileges, mongo --authenticationMechanism=GSSAPI \ --authenticationDatabase='$external' \ --username [email protected] Add the new user to a database and provide the appropriate privileges. In the following example, [email protected] is granted “read” privileges on the “test” database. use test db.addUser( { "user": "[email protected]", "roles": [ "read" ], "userSource" : "$external" } ) See also system.users Privilege Documents and User Privileges in MongoDB. Revoke User Access¶ To revoke a user’s access, you must complete two steps: first, you must remove the specified user from the databases they have access to, and second, you must remove the user from the IdM infrastructure. Authenticate as an administrator (e.g. [email protected]) in Kerberos and connect to MongoDB: kinit admin mongo --authenticationMechanism=GSSAPI \ --authenticationDatabase='$external' \ --username [email protected] Remove the user from a database using db.removeUser. The following example removes user [email protected] the testdatabase: use test db.removeUser("[email protected]") Repeat these steps for any databases that the user has access to. You can now either disable or remove the user. Disabled users still exist within the IdM system, but no longer have access to any IdM services (e.g. Kerberos). It is still possible to reactive a disabled user by granting them access to a database. By contrast, removing a user deletes their information from IdM and they cannot be reactivated in the future. To disable a user (in this case, johnsmith), issue a command that resembles the following: kinit admin ipa user-disable johnsmith To remove the user, use the user-del instead, as in the following: kinit admin ipa user-del johnsmith Whether disabled or removed, the user in question will no longer have access to Kerberos and will be unable to authenticate to any IdM clients. Configuring Password Policies¶ By default, RHEL IdM provides a global password policy for all users and groups. To view the policy details, connect as an administrator, and execute the pwpolicy-show command, is the following: kinit admin ipa pwpolicy-show To view the policies applied to a particular user, you can add the --user=<username> flag, as in the following: kinit admin ipa pwpolicy-show --user=johnsmith You can edit password policies to update parameters such as lockout time or minimum length. 
The following command changes the global policy’s minimum length allowable for passwords, setting the minimum length to 10 characters. kinit admin ipa pwpolicy-mod --minlength=10 For more information, refer to Defining Password Policies within the IdM documentation.
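The steps above use the mongo shell; an application can make the same Kerberos-authenticated connection, for example with PyMongo. The sketch below assumes a ticket has already been obtained with kinit and that PyMongo was installed with its optional GSSAPI dependencies; the host name and principal are placeholders.

from urllib.parse import quote_plus
from pymongo import MongoClient

# Kerberos principal of the IdM user; '@' must be percent-escaped inside the URI.
principal = quote_plus("[email protected]")

# Assumes `kinit johnsmith` was already run and that PyMongo has GSSAPI support
# (for example: pip install "pymongo[gssapi]").
client = MongoClient(
    f"mongodb://{principal}@mongodb.example.com:27017/"
    "?authMechanism=GSSAPI&authSource=%24external"
)

# The privileges granted earlier ("read" on the test database) apply here as well.
db = client["test"]
print(db.list_collection_names())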
https://docs.mongodb.com/ecosystem/tutorial/manage-red-hat-enterprise-linux-identity-management/
2017-04-23T05:32:37
CC-MAIN-2017-17
1492917118477.15
[]
docs.mongodb.com
Class HTMLPurifier_StringHashParser

Parses string hash files. File format is as such:

DefaultKeyValue
KEY: Value
KEY2: Value2
--MULTILINE-KEY--
Multiline
value.

Which would output something similar to:

array(
    'ID' => 'DefaultKeyValue',
    'KEY' => 'Value',
    'KEY2' => 'Value2',
    'MULTILINE-KEY' => "Multiline\nvalue.\n",
)

We use this as an easy to use file-format for configuration schema files, but the class itself is usage agnostic. You can use ---- to forcibly terminate parsing of a single string-hash; this marker is used in multi string-hashes to delimit boundaries.
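The class itself is PHP, but the format is simple enough to illustrate with a short standalone Python sketch. This is only a toy parser for the rules described above (default key, KEY: value pairs, --MULTILINE-KEY-- sections), not a port of the HTMLPurifier implementation, and it ignores some edge cases the real class handles.

def parse_string_hash(lines):
    """Parse a single string-hash into a dict, following the format described above."""
    result = {}
    multiline_key = None
    buffer = []

    for raw in lines:
        line = raw.rstrip("\n")
        if line == "----":  # forcibly terminates parsing of a single string-hash
            break
        if line.startswith("--") and line.endswith("--") and len(line) > 4:
            if multiline_key is not None:
                result[multiline_key] = "\n".join(buffer) + "\n"
            multiline_key = line.strip("-")
            buffer = []
        elif multiline_key is not None:
            buffer.append(line)
        elif ":" in line:
            key, value = line.split(":", 1)
            result[key.strip()] = value.strip()
        elif line.strip():
            result["ID"] = line.strip()  # a bare line becomes the default 'ID' value
    if multiline_key is not None:
        result[multiline_key] = "\n".join(buffer) + "\n"
    return result


example = """DefaultKeyValue
KEY: Value
KEY2: Value2
--MULTILINE-KEY--
Multiline
value.
""".splitlines()

print(parse_string_hash(example))
# {'ID': 'DefaultKeyValue', 'KEY': 'Value', 'KEY2': 'Value2', 'MULTILINE-KEY': 'Multiline\nvalue.\n'}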
http://docs.phundament.com/4.0-ee/htmlpurifier_stringhashparser.html
2017-04-23T05:32:35
CC-MAIN-2017-17
1492917118477.15
[]
docs.phundament.com
Jun. 22, 2016 The Citrix Federated Authentication Service. The following diagram shows the Federated Authentication Service integrating with a Microsoft Certification Authority and providing support services to StoreFront and XenApp and XenDesktop Virtual Delivery Agents (VDAs). Trusted StoreFront servers contact the Federated Authentication Service (FAS) as users request access to the Citrix environment. The FAS grants a ticket that allows a single XenApp or XenDesktop session to authenticate with a certificate for that session. When a VDA needs to authenticate a user, it connects to the FAS and redeems the ticket. Only the FAS has access to the user certificate’s private key; the VDA must send each signing and decryption operation that it needs to perform with the certificate to the FAS. The Federated Authentication Service is supported on Windows servers (Windows Server 2008 R2 or later). In the XenApp or XenDesktop Site: When planning your deployment of this service, review the Security considerations section. References: For security, Citrix recommends that the FAS be installed on a dedicated server that is secured in a similar way to a domain controller or certificate authority. The FAS can be installed from the Federated Authentication Service button on the autorun splash screen when the ISO is inserted. This will install the following components: To enable Federated Authentication Service integration on a StoreFront Store, run the following PowerShell cmdlets as an Administrator account. If you have more than one store, or if the store has a different name, the path text below may differ. "" To use the Federated Authentication Service, configure the XenApp or XenDesktop Delivery Controller to trust the StoreFront servers that can connect to it: run the Set-BrokerSite -TrustRequestsSentToTheXmlServicePort $true PowerShell cmdlet. After you install the Federated Authentication Service, you must specify the full DNS addresses of the FAS servers in Group Policy using the Group Policy templates provided in the installation. Important: Ensure that the StoreFront servers requesting tickets and the VDAs redeeming tickets have identical configuration of DNS addresses, including the automatic server numbering applied by the Group Policy object. For simplicity, the following examples configure a single policy at the domain level that applies to all machines; however, that is not required. The the FAS, locate the C:\Program Files\Citrix\Federated Authentication Service\PolicyDefinitions\CitrixFederatedAuthenticationService.admx file. Step 5. Open the Federated Authentication Service policy and select Enabled. This allows you to select the Show button, where you configure the DNS addresses of your FAS servers. Step 6. Enter the DNS addresses of the servers hosting your Federated Authentication Service. Remember: If you enter multiple addresses,. The Group Policy template includes support for configuring the system for in-session certificates. This places certificates in the user’s personal certificate store after logon for application use. For example, if you require TLS authentication to web servers within the VDA session, the certificate can be used by Internet Explorer. By default, VDAs will not allow access to certificates after logon. The Federated Authentication Service administration console is installed as part of the Federated Authentication Service. An icon (Citrix Federated Authentication Service) is placed in the Start Menu. 
The console attempts to automatically locate the FAS servers in your environment using the Group Policy configuration. If this fails, see the Configure Group Policy section. If your user account is not a member of the Administrators group on the machine running the Federated Authentication Service, you will be prompted for credentials. The first time the administration console is used, it guides you through a three-step process that deploys certificate templates, sets up the certificate authority, and authorizes the Federated Authentication Service to use the certificate authority. Some of the steps can alternatively be completed manually using OS configuration tools. To avoid interoperability issues with other software, the Federated Authentication Service provides three Citrix certificate templates for its own use. the Federated Authentication Service.) The final setup step in the console initiates the authorization of the Federated Authentication Service. the Federated Authentication Service can continue. Note that the authorization request appears as a Pending Request from the FAS machine account. Right-click All Tasks and then select Issue or Deny for the certificate request. The Federated Authentication Service administration console automatically detects when this process completes. This can take a couple of minutes. the setup of the Federated Authentication Service, the administrator must define the default rule by switching to the User Rules tab of the FAS administration console, selecting a certificate authority to which the Citrix_SmartcardLogon template is published, and editing the list of StoreFront servers. The list of VDAs defaults to Domain Computers and the list of users defaults to Domain Users; these can be changed if the defaults are inappropriate. Fields: Certificate Authority and Certificate Template: The certificate template and certificate authority that will be used to issue user certificates. This should be the Citrix_SmartcardLogon template, or a modified copy of it, on one of the certificate authorities that the template is published to. The FAS supports adding multiple certificate authorities for failover and load balancing, using PowerShell commands. Similarly, more advanced certificate generation options can be configured using the command line and configuration files. See the PowerShell and Hardware security modules sections. In-Session Certificates: The Available after logon check box controls whether a certificate can also be used as an in-session certificate. If this check box is not selected, the certificate will be used only for logon or reconnection, and the user will not have access to the certificate after authenticating. List of StoreFront servers that can use this rule: The list of trusted StoreFront server machines that are authorized to request certificates for logon or reconnection of users. Note that this setting is security critical, and must be managed carefully. the Federated Authentication Service.. The Federated Authentication Service has a registration authority certificate that allows it to issue certificates autonomously on behalf of your domain users. As such, it is important to develop and implement a security policy to protect the the FAS servers, and to constrain their permissions. Delegated Enrollment Agents The Microsoft Certification Authority allows control of which templates the FAS server can use, as well as limiting which users the FAS server can issue certificates for. 
Citrix strongly recommends configuring these options so that the Federated Authentication Service can only issue certificates for the intended users. For example, it is good practice to prevent the Federated Authentication Service from issuing certificates to users in an Administration or Protected Users group. Access Control List configuration As described in the Configure user roles section, you must configure a list of StoreFront servers that are trusted to assert user identities to the Federated Authentication Service The Federated Authentication Service and the VDA write information to the Windows Event Log. This can be used for monitoring and auditing information. The Event logs section lists event log entries that may be generated. Hardware security modules All private keys, including those of user certificates issued by the Federated Authentication Service, are stored as non-exportable private keys by the Network Service account. The Federated Authentication Service. Although the Federated Authentication Service. You can also download a zip file containing all the FAS PowerShell cmdlet help files; see the PowerShell SDK article. The Federated Authentication Service includes a set of performance counters for load tracking purposes. The following table lists the available counters. Most counters are rolling averages over five minutes. The following tables list the event log entries generated by the Federated Authentication Service. Administration events [Event Source: Citrix.Authentication.FederatedAuthenticationService] These events are logged in response to a configuration change in the Federated Authentication Service server. Creating identity assertions [Federated Authentication Service] These events are logged at runtime on the Federated Authentication Service server when a trusted server asserts a user logon. Acting as a relying party [Federated Authentication Service] These events are logged at runtime on the Federated Authentication Service server when a VDA logs on a user. In-session certificate server [Federated Authentication Service] These events are logged on the Federated Authentication Service server when a user uses an in-session certificate. Log on [VDA] [Event Source: Citrix.Authentication.IdentityAssertion] These events are logged on the VDA during the logon stage. In-session certificates [VDA] These events are logged on the VDA when a user attempts to use an in-session certificate. Certificate request and generation codes [Federated Authentication Service] [Event Source: Citrix.TrustFabric] These low-level events are logged when the Federated Authentication Service server performs log-level cryptographic operations.
https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-13/secure/federated-authentication-service.html
2017-04-23T05:34:43
CC-MAIN-2017-17
1492917118477.15
[]
docs.citrix.com
Applies to: Azure Information Protection, Office 365 Before you deploy Azure Information Protection for your organization, make sure that the following are in place: User accounts and groups in the cloud that you create manually or that are automatically created and synchronized from Active Directory Domain Services (AD DS). When you synchronize your on-premises accounts and groups, not all attributes need to be synchronized. For a list of the attributes that must be synchronized for the Azure Rights Management service that is used by Azure Information Protection, see the Azure RMS section from the Azure Active Directory documentation. For ease of deployment, we recommend that you use Azure AD Connect to connect your on-premises directories with Azure Active Directory but you can use any directory synchronization method that achieves the same result. Mail-enabled groups in the cloud that you will use with Azure Information Protection. These can be built-in groups or manually created groups that contain users who will use protected documents and emails. If you have Exchange Online, you can create and use mail-enabled groups by using the Exchange admin center. If you have AD DS and are synchronizing to Azure AD, you can create and use mail-enabled groups that are either security groups or distribution groups. Group membership caching For performance reasons, group membership is cached by the Azure Rights Management service. This means that any changes to group membership can take up to 3 hours to take effect, and this time period is subject to change. Remember to factor this delay into any changes or testing that you do when you use groups in your configuration of the Azure Rights Management service, such as configuring custom templates or when you use a group for the super user feature. Considerations if email addresses change When you configure usage rights for users or groups and select them by their display name, your selection saves and uses that object's email address. If the email address is later changed, your selected users will not be successfully authorized. If email addresses are changed, we recommend you add the old email address as a proxy email address (also known as an alias or alternate email address) to the user or group, so that usage rights that were assigned previously are retained. If you cannot do that, you must remove the user or group from your configuration, and select it again to save the updated email address so that newly protected content uses the new email address. Custom Rights Management templates are an example of where you might select users or groups by the display name to assign usage rights. Users can also select users and groups by their display name when they configure custom permissions with the Azure Information Protection client. Activate the Rights Management service for data protection When you are ready to start protecting documents and emails, activate the Rights Management service to enable this technology. For more information, see Activating Azure Rights Management. Before commenting, we ask that you review our House rules.
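For reference, the activation step mentioned above can also be performed from PowerShell. The sketch below is a rough outline only: the AADRM module and cmdlet names reflect the tooling available at the time, and availability through the PowerShell Gallery is an assumption, so verify the exact steps against the Activating Azure Rights Management article.

# Install the Azure RMS administration module (assumes PowerShell Gallery access)
Install-Module -Name AADRM

# Sign in with a Global Administrator account and activate the service
Connect-AadrmService
Enable-Aadrm

# Confirm that the service is now activated
Get-Aadrm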
https://docs.microsoft.com/en-us/information-protection/plan-design/prepare
2017-04-23T05:34:12
CC-MAIN-2017-17
1492917118477.15
[]
docs.microsoft.com
Managing Solace JNDI Objects Java Messaging Service (JMS) provides a common way for an enterprise’s Java applications to create, send, receive, and read messages. The Solace Messaging Platform supports the JMS 1.1 standard. Java messaging applications can use the Solace implementation of the JMS Application Programming Interface (API) connect to a Solace router. The Solace router can provide JNDI service, which allows JMS clients to perform JNDI lookups and object binding. It also acts as a JMS broker, to access the messaging capabilities of the Solace Messaging Platform. The Solace router acts as the JMS broker for the JMS client. As such, it provides access control, message routing, selecting, and filtering. The Solace router provides an internal JNDI store for provisioned Connection Factory, Topic, and Queue objects that clients can access through JNDI lookups. The Solace Guaranteed Messaging feature provides a message spooling mechanism to support the persistent store notion of JMS queues (when using a point-to-point messaging model) and subscription names (when using a publish‑subscribe messaging model). - Physical queues can be provisioned on a router through the Solace CLI (refer to Configuring Queues). - Topic endpoints can be provisioned on a router through the Solace CLI (refer to Configuring Topic Endpoints). These topic endpoints must have the same name as the subscription name that they represent. Note: To use Guaranteed Messaging with JMS messaging, a Solace appliance must have an ADB installed with Guaranteed Messaging and message spooling enabled. For configuration information, refer to Managing Guaranteed Messaging. This section describes: - the tasks associated with configuring standard object properties and property lists in the JNDI store on a Solace router - the Command Line Interface (CLI) commands used to manage administered objects in the JNDI store on a Solace router Note: The Solace Messaging Platform also supports JNDI lookups of administered objects maintained in an LDAP-based JNDI store on a remote host. However, this section only provides information on how to work with the JNDI store on a Solace router. Solace Messaging API for JMS is JNDI‑compliant and accepts the standard JMS objects Connection Factory, Topic, and Queue. Standard JMS object properties and property lists can be configured in the JNDI store on the Solace router. Through the JMS API, JNDI provides a standard way of accessing naming and directory services on the Solace router that allows clients to discover and lookup data and objects through a common name. Starting/Stopping JNDI Access By default JNDI access is not enabled on Solace appliances, but it is enabled on VMRs. - To enable JNDI access for clients, enter the following CONFIG command: solace(configure/jndi)# no shutdown - To stop JNDI access for clients , enter the following CONFIG command: solace(configure/jndi)# shutdown
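Once JNDI access is enabled, a JMS client can look up the provisioned administered objects by name. The Java sketch below illustrates the pattern; the initial context factory class, provider URL, credentials, and object names are assumptions for illustration, so substitute the values provisioned on your router and the class name given in your Solace JMS API documentation.

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class SolaceJndiLookupExample {
    public static void main(String[] args) throws Exception {
        // JNDI environment; host, credentials, and factory class are placeholders
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.solacesystems.jndi.SolJNDIInitialContextFactory"); // assumed class name
        env.put(Context.PROVIDER_URL, "smf://router-host:55555");
        env.put(Context.SECURITY_PRINCIPAL, "username");
        env.put(Context.SECURITY_CREDENTIALS, "password");

        InitialContext ctx = new InitialContext(env);

        // Look up administered objects that were provisioned in the router's JNDI store
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/jndi/cf/default");
        Queue queue = (Queue) ctx.lookup("/jndi/queue/orders");

        Connection connection = cf.createConnection();
        connection.close();
        ctx.close();
    }
}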
http://docs.solace.com/Configuring-and-Managing-Routers/Managing-Solace-JNDI-Objects.htm
2017-04-23T05:23:59
CC-MAIN-2017-17
1492917118477.15
[]
docs.solace.com
Installation on OS X Python package Install a proper Python version (see issue #39 for a discussion regarding the required Python version on OS X): sudo port select python python27-apple Homebrew may be used here: brew install python Note In case powerline.sh is used as a client, socat and coreutils need to be installed. coreutils may be installed using brew install coreutils. Install Powerline using one of the following commands: pip install --user powerline-status will get the current release version and pip install --user git+git://github.com/powerline/powerline will get the latest development version. Warning When using brew install to install Python one must not supply the --user flag to pip. Note Due to the naming conflict with an unrelated project, powerline is named powerline-status in PyPI. Note Powerline developers should be aware that pip install --editable does not currently fully work. Installations performed this way are missing the powerline executable that needs to be symlinked. It will be located in scripts/powerline. Vim installation Any terminal vim version with Python 3.2+ or Python 2.6+ support should work, but MacVim users need to install it using the following command: brew install macvim --env-std --with-override-system-vim Fonts installation To install a patched font, double-click the font file in Finder, then click Install this font in the preview window. After installing the patched font, MacVim or the terminal emulator (whatever application powerline should work with) needs to be configured to use the patched font. The correct font usually ends with "for Powerline".
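Referring back to the developer note above: with an editable install the powerline executable has to be linked by hand. The shell sketch below assumes the repository was cloned to ~/powerline and that ~/.local/bin is on your PATH; adjust both paths for your setup.

# Editable install for development only
git clone https://github.com/powerline/powerline.git ~/powerline
pip install --user --editable ~/powerline

# The executable is not installed automatically; link it from scripts/
mkdir -p ~/.local/bin
ln -s ~/powerline/scripts/powerline ~/.local/bin/powerline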
http://powerline.readthedocs.io/en/latest/installation/osx.html
2017-04-23T05:31:28
CC-MAIN-2017-17
1492917118477.15
[]
powerline.readthedocs.io
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs. Overview of Edge Microgateway Edge Microgateway v. 2.1.x Audience This topic is a general introduction to Edge Microgateway intended for all audiences. What is Apigee Edge Microgateway? Operation and configuration reference for Edge Microgateway.
http://docs.apigee.com/microgateway/v21x/edge-microgateway-overview-v21x
2017-04-23T05:31:39
CC-MAIN-2017-17
1492917118477.15
[array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/em-admin-fig-1-new.png', None], dtype=object) array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/em-admin-fig-2-new.png', None], dtype=object) array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/em-admin-fig-3-new.png', None], dtype=object) array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/em-admin-fig-4-new.png', None], dtype=object) array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/em-overview-6.png', None], dtype=object) array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/em-admin-fig-6.png', None], dtype=object) ]
docs.apigee.com
Monetize Your Site Banner Ads Description and Use Cases Banner Ads are the most common ad format in mobile advertising. You often see Banner Ads at the top or bottom of your screen while reading articles and searching through content. We recommend using Banner Ads on both sites targeted at smartphones and tablets in cases where the user is reading and/or interacting with content on the same screen for a period of time. ###Basic Integration This section will discuss how to do basic integration of Banner Ads for the Javascript Ad Tag. 1. Add the ad container The Javascript Ad Tag requires an element for the ad to be placed in. This element should be placed wherever you would like the ad displayed on the site. The id of this element can be anything you choose, but the id will be required for the ad call. The ad container should NOT use padding. The same effects can be achieved using margins or other styling if desired. Please note the special requirements if the ad container is being placed within an iframe. An example element is shown below. <div id="adContainer"></div> 2. Add the Javascript Ad Library Add the Javascript Ad Library script to your site. <script src=""></script> 3. Add the Javascript Ad Tag Insert the Javascript Ad Tag to initialize the ad. The containerElementId must be the id of the ad container created in Step 1. In the example provided, note that <YOUR_APID> must be replaced with your actual APID. Also, “placementType” must be set to “inline.” The callBack parameter is optional. Please see our API section for further details on the placeAd call. <script> window.mmAPI.placeAd({ containerElementId: "adContainer", apid: "<YOUR_APID>", placementType: "inline", width: 320, height: 50, allowLocation: true }); </script> The same example with a callback function would look as follows. <script> function callBack(adFilled) { //adFilled will be true if the ad fills, else it will be false console.log("was the ad filled: " + adFilled); }; window.mmAPI.placeAd({ containerElementId: "adContainer", apid: "<YOUR_APID>", placementType: "inline", width: 320, height: 50, allowLocation: true }, callBack); </script>
http://docs.onemobilesdk.aol.com/mmadlib/JavascriptBannerAds.html
2017-04-23T05:28:30
CC-MAIN-2017-17
1492917118477.15
[]
docs.onemobilesdk.aol.com
Frequently Asked Questions About Live Transifex Live is a simple way of translating websites, documentation, and web apps without needing to internationalize code or use files. Just add a snippet of JavaScript to your site, translate the content in context, then publish translations with the click of a button. Whether you use Transifex Live or the traditional file-based approach, you still have access to our translation partners and other Transifex features such as Teams, Translation Memory, and Reports. The in-context feature of Transifex Live only works with Transifex Live projects. However, our Visual Context lets you translate file-based content using screenshots. As of right now, if you use Transifex Live, you can download your translations as a JSON Key-Value file. To do this, visit your resource details page and click on the target language of your choice. Inside the pop up window, click on "Download only reviewed translations". Unfortunately you cannot use Transifex for converting one file format to another. Transifex Live serves your translated website content through JavaScript. Historically, Google has only been able crawl and index content served through JavaScript in a limited fashion, however in the past couple of years, Google and other search engines have made significant strides, enabling their search bots to not only execute and index JavaScript, but to render entire pages, including dynamically generated content. Google has officially stated that their bots are able to crawl and index content served through JavaScript, and tests have verified this is true. Knowing that international SEO is a concern for many companies going global, we've also created the Transifex Live Translation Plugin for WordPress, the largest self-hosted blogging tool in the world, which allows you to assign unique URLs to the multilingual versions of your website and also adds hreflang tags where applicable to help increase indexability in global search engines. If you're still concerned about how Transifex Live will impact SEO, you can run a service on your server which pre-renders the content before serving it to search crawlers. Check out our Search Engine Optimization (SEO) article article for more info. Translations are served through our Content Delivery Network (CDN), ensuring high uptime and low latency. We only serve the translations, not your site. This means you don't give up control of your site to a third party, and sensitive information like credit card numbers don't pass through our servers. Your website's reliability and security are totally in your hands. Transifex uses Fastly's CDN. Knowing that application downtime often means the loss of customers and revenue, we carefully selected our CDN to provide our customers with a high level of performance that helps prevent applications from crashing. We built Transifex Live with speed and reliability in mind so you don't have to worry about end user performance being negatively impacted. When a user visits your website for the first time, translations are fetched from the Transifex CDN, and are then stored locally on the user's machine (saved to the cache on their computer). This way, when a user reloads or visits your website in the future, the translations don't have to be downloaded again. Translations will load faster and overall site speed will remain at optimal levels. 
When a user fetches translations from the Transifex CDN for the first time, they may see the page in your source language briefly before the translated content appears. To avoid this translation swapping on page load, you can implement what's suggested here. Our JavaScript snippet is: Small, weighing roughly 22kb to ensure optimal site speed. Static, meaning it's only cached after the first request and is not loaded again until changes or updates are made to your translated content. Delivered through our CDN (Fastly), allowing you to serve your website using your own servers. We do not handle Ajax content, we handle dynamic changes in the HTML. So if the outcome of an Ajax call is to display new content in the browser, Transifex Live will capture that content (e.g. a popup) and try to translate it. No, your translators do not have to use Transifex Live to finish translations and can still work in the Web Editor or download XLIFF files and translate offline. We do, however, recommend at least using Transifex Live to review translations in context. Only a little bit of technical knowledge is required to use Transifex Live. In fact, one of the driving forces behind creating Transifex Live was to minimize the technical effort spent on extracting content from code for translation. All you need to do is copy the JavaScript snippet and paste it in your HTML pages. It's similar to installing Google Analytics to your site. Power users who want to customize how Transifex Live behaves can use the JavaScript API. Installing the Transifex Live JavaScript snippet is similar to adding Google Analytics to your site. Once you get your unique JavaScript snippet, copy and paste it into the <head> element of your site's HTML. You only need to do this once, and it'll allow you to use Transifex Live and publish translations. Alternatively, if you have a tag manager such as Google Tag Manager installed, you can use it to add Transifex Live to your site without editing any code. We also have guides on adding your JavaScript snippet to a number of popular publishing platforms. Each domain you translate is its own resource in Transifex and has its own unique JavaScript snippet, so be sure you're adding the correct JavaScript snippet to your site. We suggest installing the JavaScript as soon as possible. There are few reasons for this: - You'll be able to use the Transifex Live sidebar and translate directly on your site. - Transifex Live will automatically detect content changes on your site when you have the JavaScript snippet installed. - Installing the JavaScript snippet lets you approve phrases from private pages. - To take translations live on your site, you'll need to have the JavaScript snippet installed. Yes, Transifex Live has been fully tested on Chrome, Safari, Firefox, and IE 9, 10, and 11. Note that with Internet Explorer, hexadecimal colors in inline CSS containing is converted to RGB, thus creating a different signature for the same source string. To fix this, you can: - Include styling in classes which works fine across browsers. - Add the notranslate class to elements containing inline css. - Define color styling using rgb and not hex values. By default, Transifex Live automatically handles static content, i.e. HTML that's initially loaded from the first HTTP request. However, you can still translate dynamic content with Transifex Live using one of the following methods: - Instruct the JavaScript library to monitor the page for dynamically injected content (recommended). 
- Mark dynamic content with a special class. - For more complex scenarios, use the JavaScript API. Refer to this article for details on using Transifex Live with dynamic content. Yes, you can. The Transifex Live sidebar works directly on your site, so you'll be able to navigate to pages that are behind a login. Transifex Live automatically detects the URL and feeds the right language. Related information about this can be found here. However, you may want to create a custom language picker to redirect your visitors accordingly. For example, when a user selects German from your home page using your site's language picker, they will be redirected to instead of seeing the German content on the same page. If you need more information about creating a custom language picker, check this article here. This is probably caused by the auto-collect feature of Live. When a user visits your website and you have auto-collect enabled in the Javascript snippet, untranslated strings will be sent to Transifex for translation. Sometimes, when you have dynamic content, such as user names, they will be collected as well. However you can disable collection and translation of those strings by adding the "notranslate" class on the encapsulated html tags. Also strings appearing in the Detected tab do not affect your billing quota. Those are just strings that are candidates for translations. Only strings in the Approved tab affect billing, which are strings that have been approved for translations by you. If you want to disable auto-collection entirely in your website you can do it by settings "autocollect: false" in the Javascript you have injected in the website. However in that case you will have to go through every page in Transifex Live Tool and collect the strings manually. Yes, to access your staging server through Live, first whitelist the following IP addresses: - 162.13.142.17 - 162.13.179.217 Next, go to the Detailed view of your Web project, select Resources, pick the resource, and hit Settings. From there, you can set the staging domain. Note that content from the staging domain will be saved to the same resource as the production domain. Be sure to install the JavaScript snippet on your staging site too! Use the same snippet as the one you used on your production site. In the Publish widget, you can choose to make translations live either on your production site, or your staging site. Publishing to a staging server is useful for testing and if you have content on your staging site that's not ready to go public yet. Nope, nothing changes in the API. Certain website elements might not load properly within the Transifex Live Preview. This can happen if you have absolute URLs in CSS or JavaScript files, or there is some other complex JavaScript functionality that misbehaves when running in the Transifex Live Preview. Most of the time, you can fix this by clicking the shield icon in the address bar of Chrome and Firefox, then choosing to load the script. If your website is performing AJAX requests to show certain functionality or content, those requests might fail because in the Transifex Live preview, your website is loaded under the live.transifex.com domain, yet you are requesting data from your.domain.com, which the browser will block for security reasons. You can fix this by enabling Cross-Origin Resource Sharing (CORS) for the live.transifex.com domain on your web server, like so: Access-Control-Allow-Origin: For more information on how to enable CORS on various web servers, take a look here. 
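As one illustration of that server-side change, the snippet below shows how the header could be added on nginx; nginx itself is an assumption (Apache and other servers have equivalent directives), the /api/ location is a placeholder for wherever your AJAX endpoints live, and the allowed origin is the live.transifex.com domain described above.

# nginx: allow the Transifex Live preview origin on the AJAX endpoints
location /api/ {
    add_header Access-Control-Allow-Origin "https://live.transifex.com";
    # ... existing proxy_pass / fastcgi_pass configuration ...
}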
POST requests are not supported as HTTP servers usually have security checks preventing this. However, you can work around this by by installing the JavaScript snippet and enabling "Automatically identify new strings when page content changes" in the settings. Yup. If you're running experiments in the source language, Transifex Live will not interfere at all. If you want to run experiments in translated languages, you should use Optimizely's custom JavaScript code feature to trigger the translation of the altered block. For example: $("h1").replaceWith("<h1>Localization doesn't<br> have to be hard</h1>"); window.Transifex && window.Transifex.live.translateNode($("h1").get(0)); For more details, check the JavaScript API documentation. When your site is being loaded through Transifex Live, it's actually loaded through one of our proxy servers. This requires whitelabeling, and unfortunately, there isn't any documentation regarding security implications, but we haven't had any reported issues. Transifex has no intention of accessing your code or making changes to your website. Our goal is simple: to provide you with the easiest way to localize content quickly and efficiently. You can prevent sensitive data from ever going to Transifex by doing the following: - Turn off auto detection of content by going to the Transifex Live settings and unchecking Identify new strings when page content changes. When auto detect is off, you'll have to visit a page directly while logged in to Transifex in order for content to be detected. - Transifex Live by default doesn't save any data present in input fields, such as signup forms. The only way sensitive data would be detected is if the content is displayed in the HTML itself. In that case, you can tell Transifex Live to ignore a block in your HTML containing sensitive data using a notranslate class. Transifex Live JavaScript loads an external library for error reporting that's licensed under a BSD license. Development tools, used during the localization process, load several external libraries, all of which are under an MIT license. So good news, there isn't any issue regarding your Open Source license with Transifex Live and live.js!
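Returning to the notranslate class mentioned above for sensitive data, a minimal example of excluding a block from detection and translation looks like this; the markup and the account number are purely illustrative.

<!-- Transifex Live skips detection and translation inside this element -->
<div class="notranslate">
  <p>Account number: 1234-5678</p>
</div>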
https://docs.transifex.com/live/faq
2017-04-23T05:27:31
CC-MAIN-2017-17
1492917118477.15
[]
docs.transifex.com
SolGeneos Directories After SolGeneos Agent is installed on a Solace router, the following directories are created: Netprobe Directories If Netprobe is deployed locally on the router it will be installed in the following directory: /usr/sw/geneos, the root directory for the Netprobe installation. Configuration Properties SolGeneos Agent uses the following types of configuration properties. These are stored in the solgeneos.properties file; they use a standard property name/value pair convention. - Global properties—Properties that are shared by all monitors. Some properties can be overwritten in monitor specific property files. To query global properties, call SolGeneosAgent.onlyInstance.getGlobalProperties(). - Monitor specific properties—Properties that are used by a specific monitor implementation. They can be global properties that must be overwritten or require exclusive monitor properties. For example, properties for AlarmsMonitorare stored in the AlarmsMonitor.propertiesfile. To query monitor specific properties, call SolGeneosAgent.onlyInstance.getMonitorConfig(monitorObject). - User properties—Properties that are shared by user developed monitors. The file name for user properties must start with “ user”. To query user properties, call SolGeneosAgent.onlyInstance.getUserPropertiesConfig. (userPropertiesFileName) After the property files are read, and the property name is understood by the agent, they are loaded into the intended service or monitor's property map when the service or the monitor is initialized. (Otherwise, it stays in the properties object to which it is initially loaded.) For properties whose values are out of range, the default values are used. Global Properties The table below lists the global properties SolGeneos Agent supports, and whether they can be overwritten at monitor level or at a data view level inside a monitor: Monitor Properties The following table lists the monitor-specific properties that SolGeneos Agent supports.
http://docs.solace.com/SolGeneos-Agent/SolGeneos-Directories.htm
2017-04-23T05:26:01
CC-MAIN-2017-17
1492917118477.15
[array(['Directory_Structure.png', None], dtype=object)]
docs.solace.com
The Groovy development team's just released Groovy 2.0.4, a bug fix for our Groovy 2.0 branch. It fixes some important issues we've had with generics with the stub generator, as well as several fixes related to the static type checking and static compilation features. We also worked on the recompilation issues with the GroovyScriptEngine. You can download Groovy 2.0.4 here: The artifacts are not yet on Maven Central, but will shortly. Also, the online JavaDoc and GroovyDoc are still being uploaded as I write this, but will hopefully be online in a few hours. You can have a look at the issues fixed in this release on JIRA: If you're curious why we've jumped from 2.0.2 to 2.0.4, skipping 2.0.3, it's because of some issues with the release of 2.0.3 which I had mistakenly built with JDK 6 instead of JDK 7, which ruled out all our invoke dynamic support. So we quickly moved to 2.0.4. We'll certainly also release a 1.8.9 soon, especially for the stub generator fixes which might be useful for those affected with the issues we've fixed.
http://docs.codehaus.org/pages/viewpage.action?pageId=229742519
2014-08-20T09:01:04
CC-MAIN-2014-35
1408500801235.4
[]
docs.codehaus.org
... This proposal would like to refine how we are handling unsupported modules: - The creation of a single page in the developers guide outlining all the requirements from creation through supporting a module (right now this content is split across several pages). We should clarify how a module can drop back to unsupported status and so forth. - Modify the unsupported/pom.xml so that profiles are available grouping like functionality; using a single environmental -Dall to turn on all functions - Deployment of all modules to maven (including those in unsupported) - Continuous testing of all modules (including those in unsupported) - Restrict unsupported modules from the default download The goals here are several fold: - Ensure that everything we (ie the PMC) make available for download has passed the Project QA checks and meets the requirements of our Developers Guide - Continue to foster new development - Ensure that GeoTools always builds in a reasonable amount of time Status We would like to close up this issue in a timely fashion for the 2.5.0 release: Discussion - - Tasks This section is used to make sure your proposal is complete (did you remember documentation?) and has enough paid or volunteer time lined up to be a success ... This proposal requires modification to the Developers Guide:
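For the pom.xml change mentioned above, a sketch of the profile arrangement might look like the following; the profile id and module name are placeholders, and only the -Dall activation mirrors the proposal. Each functional grouping would get its own profile, all of them activating on the same "all" property.

<!-- Sketch for unsupported/pom.xml: one profile per functional grouping -->
<profiles>
  <profile>
    <id>unsupported-process</id>          <!-- placeholder grouping name -->
    <activation>
      <property>
        <name>all</name>                  <!-- -Dall turns on every grouping -->
      </property>
    </activation>
    <modules>
      <module>example-module</module>     <!-- placeholder module -->
    </modules>
  </profile>
  <!-- further profiles for the other groupings follow the same pattern -->
</profiles>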
http://docs.codehaus.org/pages/diffpages.action?pageId=97452576&originalId=228172846
2014-08-20T09:00:17
CC-MAIN-2014-35
1408500801235.4
[]
docs.codehaus.org
We will see below how we can use Phing to create ZIP and "tar.gz" archives automatically. Line 2 tells Phing that the default action to take is called "copy_all" and that the base directory for this is the current directory. Lines 3 and ...
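A minimal build.xml consistent with that description might look like the following sketch; the project name, staging directory, and file patterns are assumptions, while line 2 matches the default target and base directory described above.

<?xml version="1.0" encoding="UTF-8"?>
<project name="extension-build" default="copy_all" basedir=".">
    <!-- the project element above (line 2) sets the default target and the base directory -->
    <target name="copy_all" description="Copy the extension files into a staging folder">
        <copy todir="build/staging">
            <fileset dir=".">
                <include name="**/*.php"/>
                <include name="**/*.xml"/>
            </fileset>
        </copy>
    </target>
</project>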
http://docs.joomla.org/index.php?title=Setting_up_your_workstation_for_extension_development_(build_with_Phing)&diff=prev&oldid=12885
2014-09-15T09:35:26
CC-MAIN-2014-41
1410657104131.95
[]
docs.joomla.org
methods
- call private methods from everywhere
- call method from inside the class only
- choose a private method based on the class we are currently in (call a private method of this class even if a subclass has overwritten that class)
- when is a method overwritten (with respect to argument types)
- when is a method overloaded (with respect to argument types)
- in which cases can MetaClass overwrite method implementations? (private? call from inside the class?)
- inherit private methods
- when is a method call interceptable? (from inside, from outside, never)
- any checks a compiler has to do
MOP
- is the get/setProperty or invokeMethod method used if there exists a private method/property/field in the superclass
- only call the mop methods when there is no (private?) method/field visible
private
- how to handle private when from outside/inside a class
variables
- allow redefinition/hiding in what cases
- allow redefinition/hiding of unbound variables
- when is a variable name a class
- allow the definition of a variable name of the same name as a class
- what should a compiler be able to check regarding variable names
closures/builder
- default MetaClass implementation for Closures
- nested builder
- calling class / outer builder method in nested builder
- resolving names in builder to classes/fields/variables/properties/dynamicProperties
- allow nested builders?
the meaning of 'this'
- in methods
- in closures (block/markup)
misc
- no assignment in loop headers
missing language features
- enum
- for loop from java
- native modifier
- mdim arrays
- interface rules
- inner classes
- generics/templates
syntax questions
- use with-syntax
http://docs.codehaus.org/display/GroovyJSR/Paris+meeting+discussion+points+for+name+resolution
2014-09-15T09:40:24
CC-MAIN-2014-41
1410657104131.95
[]
docs.codehaus.org
PUBLIC UTILITIES COMMISSION OF THE STATE OF CALIFORNIA Item #21 ID #11351 ENERGY DIVISION RESOLUTION E-4501 June 7, 2012 REDACTED RESOLUTION Resolution E-4501. Southern California Edison Company requests approval of a power purchase agreement with McCoy Solar, LLC a subsidiary of NextEra Energy Resources, LLC. PROPOSED OUTCOME: This Resolution approves cost recovery for a power purchase agreement between Southern California Edison Company and McCoy Solar, LLC. ESTIMATED COST: Actual costs are confidential at this time. By Advice Letter 2661-E filed on November 28, 2011. __________________________________________________________ Southern California Edison Company's renewable energy power purchase agreement with McCoy Solar, LLC. Southern California Edison Company (SCE) requests approval of a power purchase agreement (PPA) with McCoy Solar, LLC a subsidiary of NextEra Energy Resources, LLC. McCoy Solar, LLC is developing a new solar photovoltaic project in Riverside County, California with a total capacity of 250 megawatts (MW) and total annual expected generation of 611 gigawatt-hours (GWh). The McCoy Solar, LLC facility (McCoy Solar) is forecast to achieve commercial operation no later than November 30, 2016 coinciding with SCE's Renewables Portfolio Standard (RPS) portfolio needs in the second half of this decade. The McCoy Solar PPA is reasonably priced compared to other contracts offered to SCE at the time the McCoy Solar PPA was signed, including offers received in SCE's 2011 RPS solicitation. The McCoy Solar PPA was negotiated bilaterally prior to SCE's 2011 RPS solicitation; however, SCE executed the PPA only after seeing the results of the 2011 solicitation to ensure that the McCoy Solar PPA was of comparable or better value to other RPS market offers. This resolution approves SCE's PPA with McCoy Solar, LLC without modification. SCE's execution of the PPA is consistent with SCE's 2011 RPS Procurement Plan, which the Commission approved in Decision 11-04-030. The procurement costs incurred by SCE associated with the McCoy Solar, LLC PPA are fully recoverable in rates over the life of the PPA, subject to Commission review of SCE's administration of the PPA. The following table summarizes the project-specific features of the agreement:
http://docs.cpuc.ca.gov/PUBLISHED/AGENDA_RESOLUTION/167700.htm
2014-09-15T09:28:42
CC-MAIN-2014-41
1410657104131.95
[]
docs.cpuc.ca.gov
See: Description The classes that make up the DDL sequencer, which is capable of parsing DDL content. The sequencer is designed to behave as intelligently as possible with as little configuration as possible. Thus, the sequencer automatically determines the dialect used by a given DDL stream. This can be tricky, of course, since most dialects are very similar and the distinguishing features of a dialect may only be apparent in some of the statements. To get around this, the sequencer uses a "best fit" algorithm: run the DDL stream through the parser for each of the dialects, and determine which parser was able to successfully read the greatest number of statements and tokens. One very interesting capability of this sequencer is that, although only a subset of the (more common) DDL statements are supported, the sequencer is still extremely functional since it still adds all statements into the output graph, just without much detail other than the statement text and the position in the DDL file. Thus, if a DDL file contains statements the sequencer understands and statements the sequencer does not understand, the graph will still contain all statements, and those statements understood by the sequencer will have full detail. Since the underlying parsers are able to operate upon a single statement, it is possible to go back later (after the parsers have been enhanced to support additional DDL statements) and re-parse only those incomplete statements in the graph. At this time, the sequencer supports SQL-92 standard DDL as well as dialects from Oracle, Derby, and PostgreSQL. It supports:
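To picture the "best fit" selection described above, here is a small self-contained Java sketch; the DialectParser interface and the scoring are illustrative stand-ins rather than the ModeShape API.

import java.util.List;

public class BestFitExample {
    /** Illustrative stand-in for a dialect-specific parser. */
    interface DialectParser {
        String dialect();
        /** Score, e.g. the number of statements and tokens parsed successfully. */
        int score(String ddl);
    }

    /** Pick the dialect whose parser reads the most of the DDL stream. */
    static String bestFitDialect(String ddl, List<DialectParser> parsers) {
        String best = null;
        int bestScore = -1;
        for (DialectParser parser : parsers) {
            int score = parser.score(ddl);
            if (score > bestScore) {
                bestScore = score;
                best = parser.dialect();
            }
        }
        return best;
    }
}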
http://docs.jboss.org/modeshape/2.4.0.Final/api-full/org/modeshape/sequencer/ddl/package-summary.html
2014-09-15T10:44:06
CC-MAIN-2014-41
1410657104131.95
[]
docs.jboss.org
The AWS::IAM::UserToGroupAddition type adds AWS Identity and Access Management (IAM) users to a group. This type supports updates. For more information about updating stacks, see AWS CloudFormation Stacks Updates. { "Type": "AWS::IAM::UserToGroupAddition", "Properties": { "GroupName": String, "Users": [ User1, ...] } } The name of group to add users to. Required: Yes Type: String Update requires: No interruption Required: Yes Type: List of users Update requires: No interruption When the logical ID of this resource is provided to the Ref intrinsic function, it returns the resource name. For example: { "Ref": " MyUserToGroupAddition" } For the AWS::IAM::UserToGroupAddition with the logical ID "MyUserToGroupAddition", Ref will return the AWS resource name. For more information about using the Ref function, see Ref. To view AWS::IAM::UserToGroupAddition snippets, see Adding Users to a Group.
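As an illustrative fragment only, a resource using the two properties documented above could look like the following; the logical names AddUsersToAdmins, AdminsGroup, and AliceUser are placeholders, and the Refs assume an AWS::IAM::Group and an AWS::IAM::User declared elsewhere in the same template.

"AddUsersToAdmins" : {
  "Type" : "AWS::IAM::UserToGroupAddition",
  "Properties" : {
    "GroupName" : { "Ref" : "AdminsGroup" },
    "Users" : [ { "Ref" : "AliceUser" }, "existing-user-name" ]
  }
}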
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html
2014-09-15T09:24:52
CC-MAIN-2014-41
1410657104131.95
[]
docs.aws.amazon.com
Emitter Display tab: HUD quicktab Checking 'Show HUD' will show a very simple head-up display giving the emitter name, the number of live particles, and the number of remaining particles which are still available but haven't been generated yet. These two numbers will be updated each frame as particles are born and die. In addition a variety of data can be displayed next to each particle in the viewport. If you need to see multiple data items for each particle at the same time, consider using the X-Particles Console instead. Interface Parameters Show HUD If checked, shows the HUD in the editor. Only Show if Selected If checked, the HUD will only be shown if 'Show HUD' is checked AND the emitter is selected in the object manager. Show Particle ID If checked, each particle will have a small label showing its unique ID number. You can use this in Questions which test for the particle ID. Show Particle Data This drop-down menu lets you choose an item of particle data to be displayed alongside its ID. 'Show Particle ID' must be checked for this option to become available. The possible data items that can be displayed are: - Age (Frames) - Age (Seconds) - Radius - Speed - Mass - Temperature - Custom Data Custom Data ID, Custom Data Name If you choose 'Custom Data' from the 'Show Particle Data' drop-down, these two fields become available. You can then enter the ID and/or the name of a custom data item. For details about custom particle data and how these settings are used to select the custom data item to display, please see the Custom Data page. Text Color, Background Color, and Opacity These settings allow you to change the colours used in the HUD and its opacity. X Position, Y Position These settings give the location of the HUD on the screen, expressed as percentages of the screen width and height. You can change these to alter the HUD location. (Unlike the Cinema 4D HUD, you can't drag the HUD around the screen.)
http://docs.x-particles.net/html/emitter_display_hud.php
2021-07-23T22:52:47
CC-MAIN-2021-31
1627046150067.51
[array(['../images/emitter_display_tab_2.jpg', None], dtype=object)]
docs.x-particles.net
Standardizer is integrated as an optionally licensed module Molar mass and exact molar mass calculations are available for non-stored macromolecules. The new service calculates "average" or "exact" molar mass from HELM inputs. Exact molar mass calculation for macromolecules is added to the Biomolecule Toolkit. Average isotopic mass (molar mass) of a macromolecule can be calculated based on the atomic weights defined in the elements.txt Option for expanding all monomers is added to the HELM to MOL conversion. Option for expanding branching amino acids is added to the HELM to MOL conversion. Option for expanding unnatural amino acids is added to the HELM to MOL conversion. Registered monomer attributes are selectable in the monomer's additional data section when the remote library is selected in the Library Manager. Monomers without attachment points can be submitted from the Library Manager interface to the monomer library. Improved error message on supported monomer R attachment points in the Library Manager. User warnings are displayed in the Library Manager's Marvin JS pop-up when a monomer structure does not have an R-group and/or a defined R-group is removed. Parallel registration of macromolecules is enabled. Leaving group average isotopic mass (molar mass) is calculated based on the atomic weights defined in the elements.txt Molar mass calculation based on Mw monomer attribute can be applied on BLOB monomers only. Option for contracting natural amino acids is removed from the HELM to MOL conversion. {info} Oracle JDBC drivers are no longer shipping with Biomolecule Toolkit. It is the responsibility of the customer to obtain the Oracle JDBC driver and add it to the BMT classpath. New endpoints are introduced in the Biomolecule Toolkit's admin-controller: List all monomer grouping options Update monomer grouping options Options can be set for monomer grouping performed in BioEddie based on monomer properties (text, number) The recommended way to configure chemical structure format conversion is to leverage the newly added Biomolecule Toolkit API endpoint /api/molecules/convert instead of JChem Web Services /rest-v0/util/calculate/molExport endpoint. 
Error messages related to chemical structure conversion are more general instead of pointing at JChem Web Services as a potential root cause The Library Manager shows notifications upon monomer modifications if the monomer of interest has changed on server side Explicit hydrogens, that are belonging to unused attachment points where H is leaving group, can be removed during exporting to MOL format (default setting) Multiple errors in Monomer controller are collected and returned when monomer validation fails The Library Manager is integrated to the Biomolecule Toolkit HELM to MOL conversion produces better quality head-to-tail cyclic peptide structures with contracted groups Option for generating peptides with contracted, 1-letter coded natural amino acids during HELM to MOL conversion is added and it is set to default Registration of a monomer with unknown attribute produced undefined error message Monomer registration and update failed if only the Biomolecule Toolkit is upgraded to a newer version Improved error message upon attempting to convert wildcard 'X' in peptides, and wildcard 'N' in nucleotides to a molfile MOL to HELM conversion of oligonucleotides with a chemical modification on 3' prime broke the phosphate-sugar backbone HELM to MOL conversion of phosphate linkers was not handled correctly Single residue placeholder ' X' in peptide query sequences and single residue placeholder ' N' in nucleotide query sequences. Placeholder ' *' to define any number of unknown monomers (0, 1... n) in peptide and nucleotide query sequences Import of sequences containing unnatural monomers with separators HELM to MOL conversion failed for RNAs when 5'-linkers or sugars without nucleobases are present. MOL to HELM conversion of cyclic structures created invalid HELM strings by swapping R1 and R2 attachment points Multi-letter monomer abbreviations with '.' separators in query sequences Derivatives search filter is added to search filters 4th attachment point support is now supported See the changes in the Biomolecule Toolkit 19.8.0 improvements and bugfixes. Updating more than one BLOBs failed with duplication check error. Sequence converter was not working with cyclic flag. See the changes in the Biomolecule Toolkit 19.4.0 improvements and bugfixes. Error and result code improvements Distance based similarity filter is added to search filters MOL to HELM conversion reduces the number of generated CHEMs that are attached to a e.g. CHEM monomer if their heavy atom number is below four New parameter for sequence conversion for cyclic sequences Macromolecule by helm lookup (POST /rest/macromolecules/helm) failed when sending a HELM which did not match any registered macromolecules New attachment point creation (POST /rest/admin/attachmentPointTypes) failed in Oracle The entityType property of AttributeOption appeared in Swagger two times under different names: entityType and macromoleculeTypeName Multiple sugars differing only in stereo information were not recognised correctly Canonicalization of cyclic RNA sequences resulted in incorrect attachment point information HELM to MOL conversion failed for RNAs which had a connection at the terminal P linker New macromolecule type registration failed First release of Biomolecule Toolkit Docker Image. New Rest endpoint for health check: /health Data migration from older versions handled automatically See the detailed description in apichanges_18_24_0.html {primary} Comprehensive API changes will happen in the next releases “No structure” registration. 
New options for entity types to control the rules of structureless macromolecules. The entity type has MUST_CONTAIN, CAN_CONTAIN and MUST_NOT_CONTAIN options. Structureless rules validation on entity registration. Lot registration. Lot level registration without structure but with lot level attributes. Attribute data validation on lot level. LOT ID generation based on parent ID. (CID) New page for Lot registration in the Biomolecule registration client. Flexible ID generation for entity types. Default settings for generating corporate ID for macromolecules Admin service to create new ID generating rules by entity types. Data type validation for attributes. More flexible additional data definition on macromolecule and component level. Registering biomolecules with domains. Response code of get macromolecules by helm service (/rest/macromolecules/helm) was incorrect when the result set was empty. Macromolecule annotations were not appeared in the output HELM string. Retrieving macromolecules was slow for 100 monomers components. HELM2 format has become the primary format in Biomolecule Toolkit. Cyclic HELM conversion to sequence format failed. Search with entity type filter in biotoolkit demo page caused error. Synchronization problem was present in aromatic MonomerCount service. Two new endpoints for macromolecule property calculation: Pairwise SequenceAlignment calculation Aromatic MonomerCount calculation IsoelectricPoint web service input type has changed, it has fored to text/plain. HELM2 formatted macromolecule registration is available. Macromolecule resolvation with bridge connections failed previously. New default monomer library. Oracle property template file has been changed. Isoelectric point calculation API. Prepare macromolecule domain handling on database level Fixes security issues for bioreg web client. First release of Biomolecule Toolkit.
https://docs.chemaxon.com/display/lts-helium/biomolecule-toolkit-history-of-changes.md
2021-07-23T23:11:15
CC-MAIN-2021-31
1627046150067.51
[]
docs.chemaxon.com
Commerce v1 Developer Payment Gateways TransactionInterface The \modmore\Commerce\Gateways\Interfaces\TransactionInterface defines the status of a payment at a specific point in time. The transaction itself does not perform any actions, but your view, submit, returned or webhook gateway methods may perform actions before turning the status into a TransactionInterface instance. It's not to be confused with a comTransaction, which is the xPDO model name for a transaction. Where your TransactionInterface implementation is only in-memory, a current snapshot, the comTransaction is the persisted data. The interface Think of a TransactionInterface instance as a value object that defines a current payment attempt. Most methods are geared towards indicating status. <?php namespace modmore\Commerce\Gateways\Interfaces; interface TransactionInterface { /** * Indicate if the transaction was paid * * @return bool */ public function isPaid(); /** * Indicate if a transaction is waiting for confirmation/cancellation/failure. This is the case when a payment * is handled off-site, offline, or asynchronously in another why. * * When a transaction is marked as awaiting confirmation, a special page is shown when the customer returns * to the checkout. * * If the payment is a redirect (@see WebhookTransactionInterface), the payment pending page will offer the * customer to return to the redirectUrl. * * @return bool */ public function isAwaitingConfirmation(); /** * Indicate if the payment has failed. * * @return bool * @see TransactionInterface::getExtraInformation() */ public function isFailed(); /** * Indicate if the payment was cancelled by the user (or possibly merchant); which is a separate scenario * from a payment that failed. * * @return bool */ public function isCancelled(); /** * If an error happened, return the error message. * * @return string */ public function getErrorMessage(); /** * Return the (payment providers') reference for this order. Treated as a string. * * @return string */ public function getPaymentReference(); /** * Return a key => value array of transaction information that should be made available to merchant users * in the dashboard. * * @return array */ public function getExtraInformation(); /** * Return an array of all (raw) transaction data, for debugging purposes. * * @return array */ public function getData(); } Note that there is no constructor defined; take advantage of that to define a __construct method that takes in a client object or API response and have the remaining methods read from that. When done correctly, you can typically use a single gateway-specific transaction object per gateway implementation. (That's the same way the \modmore\Commerce\Gateways\Omnipay2\Transaction object can be used for most Omnipay-based integrations.) ManualTransaction Example Arguably the simplest possible example is a transaction that is always paid. Not very useful for production, but here you go. In this case the transaction reference is generated elsewhere (in the GatewayInterface implementation), and passed into the ManualTransaction to be returned in the getPaymentReference() method. 
<?php namespace modmore\Commerce\Gateways\Manual; use modmore\Commerce\Gateways\Interfaces\TransactionInterface; class ManualTransaction implements TransactionInterface { private $reference; public function __construct($reference) { $this->reference = $reference; } public function isPaid() { return true; } public function isAwaitingConfirmation() { return false; } public function isRedirect() { return false; } public function isFailed() { return false; } public function isCancelled() { return false; } public function getErrorMessage() { return ''; } public function getPaymentReference() { return $this->reference; } public function getExtraInformation() { return []; } public function getData() { return []; } }
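A hypothetical usage sketch follows: only the ManualTransaction API shown above is taken from the documentation, while the way Commerce invokes the gateway and the reference format are assumptions for illustration.

<?php
// Hypothetical usage: how a manual gateway might hand back a status snapshot.
use modmore\Commerce\Gateways\Manual\ManualTransaction;

$reference = uniqid('manual_', true); // generated elsewhere in the real gateway
$transaction = new ManualTransaction($reference);

if ($transaction->isPaid()) {
    // Commerce persists this snapshot into a comTransaction record
    echo 'Payment accepted, reference: ' . $transaction->getPaymentReference();
}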
https://docs.modmore.com/en/Commerce/v1/Developer/Payment_Gateways/TransactionInterface.html
2021-07-23T23:20:06
CC-MAIN-2021-31
1627046150067.51
[]
docs.modmore.com
Mesh networks View deployment scenarios for configuring access points as root, repeater, or bridge. An access point that is configured to use the mesh network turns into a repeater and scans for the mesh network if it fails to connect. If a mesh network is found, the access point joins it as a client. An access point can be configured as root, repeater (mesh), or bridge. The role of an access point is determined on the network. We recommend that you set the root access point to 5 GHz and the client to 2.4 GHz. The maximum throughput of a mesh client configured with 5 GHz is reduced by 50% per hop. This happens because data packets sent to the access point are forwarded to other access points, which adds to the airtime used. Deployment possibilities In mesh mode, you can configure multiple mesh (repeater) access points with one root access point. There can be multiple root access points. A mesh access point can broadcast the SSID from the root access point to cover a larger area without using cables. A mesh network can also be used to bridge Ethernet networks without laying cables. To run a wireless bridge, you have to plug your second Ethernet segment into the Ethernet interface of the mesh access point. The first Ethernet segment is the one on which the root access point connects to Sophos Central. Good to know There are some things you should keep in mind. - Avoid using dynamic channel selection as channels of access points may differ after a restart. - The mesh network may need up to five minutes to be available after configuration. - There is no automatic takeover of the root access point. The connection to a mesh occurs during a boot. - For APX access points, there is no need to specify the mesh role. If the mesh-enabled SSID is pushed to 2 APXs, the one with the existing Ethernet connection becomes the root AP. Once the mesh-enabled SSIDs are pushed to the APXs, we recommend that you reboot them. During the boot sequence, if the AP has Ethernet connectivity, then it becomes the root and the one without Ethernet becomes the mesh client. - Mesh networks can only be created between access points of the same series. For example, APX access points can only create a mesh network with other APX access points.
https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/Mesh.html
2021-07-23T22:37:53
CC-MAIN-2021-31
1627046150067.51
[array(['../images/Mesh-Network-Repeater.png', 'Network repeater diagram'], dtype=object) array(['../images/Mesh-Network-Bridge.png', 'Network bridge diagram'], dtype=object) ]
docs.sophos.com
LATEST NEWS End of Life announcement for i-DOCS Output Management versions 5 and 6 The major version numbers 5 and 6 of the i-DOCS Output Management platform are reaching the End of Life. For version 5 this is scheduled for 31/12/2020. For version 6, this is scheduled for 31/12/2021. Customers are encouraged to complete the upgrade of their production systems as soon as possible and in any case before the End of Support. Support services are given for one year after the end of life date. The existing functionality has been integrated into newer versions. Product notifications related to their lifecycle are provided in advance to existing customers and partners. Upgrade Options Customers using any 5.x or 6.x version are encouraged to upgrade to version 8.0 or later. Version 5.0 was released in 2010, version 6.0 in 2013 and version 7.0 in 2016. The latest production ready version is 8.0. It was released in 2020 in two editions: The enterprise edition aimed for organisations that adhere to the traditional deployment model and the cloud native edition which exploits the advantages of the cloud computing model. The latter is the recommended edition for organisations that operate a private cloud which supports orchestration of containerised applications or have transitioned at least to a hybrid cloud environment. Please contact your Account Manager or Technical Coordinator for more information about transitioning to a newer version. This End of Product life announcement does not impact any customers who use i-DOCS OM with major version number 7.. End of Life i-DOCS products have generous lifecycle policies that allow customers to enjoy many years of support. Customers whose platforms have reached End of Life and who are currently under maintenance contracts will be able to Get technical support by raising issues using the designated support system. Order change requests that require configuration only changes Receive security fixes End of Support For deployments on versions that have reached the End of Support date, i-DOCS will not provide technical support or any type of fixes or changes. In exceptional cases, support will be provided on a best-effort basis for 3 months after the end of support date. Why end i-DOCS OM version 5 support i-DOCS is committed to delivering customised deployments, change requests, improvements and bug fixes in a planned, timely and secure manner. We are also committed to providing premium support for all our platforms. The IT landscape, the software components and the frameworks that we rely on are constantly changing. New versions are released, old versions are deprecated and security weaknesses or vulnerabilities are found. The cost of supporting outdated platforms grows exponentially with time and makes it hard or in certain cases impossible to provide the level of support our customers expect from us. Platforms marked as end of support have known limitations and we strongly recommend to stop using such versions.
https://www.i-docs.com/i-docs-omv5-eol
2021-07-23T22:29:14
CC-MAIN-2021-31
1627046150067.51
[array(['https://static.wixstatic.com/media/b9abd5_3fa448bf865d49009eb6583c91832145~mv2.jpg/v1/fill/w_91,h_46,al_c,q_80,usm_0.66_1.00_0.01/i-docs.jpg', 'i-docs.jpg'], dtype=object) array(['https://static.wixstatic.com/media/ef9ce3e48be340a4b91f0c3e1644aa5b.jpg/v1/fill/w_362,h_241,al_c,q_80,usm_0.66_1.00_0.01/ef9ce3e48be340a4b91f0c3e1644aa5b.jpg', 'Data on a Touch Pad'], dtype=object) ]
www.i-docs.com
SimpleCart v2 Manager Administration The administer area of SimpleCart is used for the setting up the shop. This includes setting up custom order statuses, delivery and payment methods, as well as the email configuration and more. To access the administration area, click the cog icon in the top right of the management component, or go to it directly via Extras > SimpleCart > Administer. The cog icon is only visible if you have sufficient permissions.
https://docs.modmore.com/en/SimpleCart/v2.x/Manager/Administration/index.html
2021-07-23T22:47:18
CC-MAIN-2021-31
1627046150067.51
[]
docs.modmore.com
Load Utilities You cannot use FastLoad, MultiLoad, or the Teradata Parallel Transporter operators LOAD and UPDATE to load data into base tables that have hash or join indexes because those indexes are not maintained during the execution of these utilities (see Teradata FastLoad Reference, Teradata MultiLoad Reference, and Teradata Parallel Transporter Reference for details). If you attempt to load data into base tables with hash or join indexes using these utilities, an error message returns and the load does not continue. To load data into a hash- or join-indexed base table, you must drop all defined hash or join indexes before you can run FastLoad, MultiLoad, or the Teradata Parallel Transporter operators LOAD and UPDATE. Load utilities like BTEQ, Teradata Parallel Data Pump, and the Teradata Parallel Transporter operators INSERT and STREAM, which perform standard SQL row inserts and updates, are supported for hash- and join‑indexed tables (see Basic Teradata Query Reference, Teradata Parallel Data Pump Reference, and Teradata Parallel Transporter Reference for details). You cannot drop a hash or join index to enable batch data loading by utilities such as MultiLoad and FastLoad as long as queries are running that access that index. Each such query places a lock on the index while it is running, so it blocks the completion of any DROP JOIN INDEX or DROP HASH INDEX transactions until the lock is removed. Furthermore, as long as a DROP JOIN INDEX or DROP HASH INDEX transaction is running, batch data loading jobs against the underlying tables of the index cannot begin processing because of the EXCLUSIVE locks DROP JOIN INDEX and DROP HASH INDEX place on the base table set that defines them.
https://docs.teradata.com/r/ji8nYcbKBTVEaNYVwKF3QQ/QTMMqbqkbgQBHxfnOavMiQ
2021-07-23T22:33:57
CC-MAIN-2021-31
1627046150067.51
[]
docs.teradata.com
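The Teradata note above requires dropping hash and join indexes before FastLoad, MultiLoad, or the TPT LOAD/UPDATE operators can run against their base tables. A minimal sketch of that pre-load step in Python, assuming the teradatasql driver; the host, credentials, and index names are invented for illustration, and the batch load itself would still be run with the load utility.

import teradatasql

# Hypothetical host, credentials, and object names; replace with your own.
with teradatasql.connect(host="tdhost", user="dbc", password="dbc") as con:
    with con.cursor() as cur:
        # Hash and join indexes block FastLoad/MultiLoad, so drop them first.
        cur.execute("DROP JOIN INDEX sales_db.order_totals_ji")
        cur.execute("DROP HASH INDEX sales_db.order_customer_hi")

# ... run the FastLoad / TPT LOAD job against the base table here ...
# Recreate the hash and join indexes once the batch load has finished.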
The Key Manager handles all clients, security and access token-related operations. For more information, see Key Manager. To configure WSO2 Identity Server as the Key Manager of the API Manager, see Configuring WSO2 Identity Server as the Key Manager in WSO2 API Manager in the WSO2 Clustering documentation.
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=48269308&selectedPageVersions=2&selectedPageVersions=3
2021-07-23T22:33:04
CC-MAIN-2021-31
1627046150067.51
[]
docs.wso2.com
Run a tablet rebalancing tool on a rack-aware cluster It is possible to use the kudu cluster rebalance tool to establish the placement policy on a cluster. This might be necessary when the rack awareness feature is first configured or when re-replication violated the placement policy. - The rack-aware rebalancer tries to establish the placement policy. Use the ‑‑disable_policy_fixer flag to skip this phase. - The rebalancer tries to balance the tablet replica distribution within each location, as if the location were a cluster on its own. Use the ‑‑disable_intra_location_rebalancing flag to skip this phase. By using the ‑‑report_only flag, it’s also possible to check if all tablets in the cluster conform to the placement policy without attempting any replica movement.
https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/kudu-management/topics/kudu-running-tablet-rebalancing-tool-on-rack-aware-cluster.html
2021-07-23T22:50:30
CC-MAIN-2021-31
1627046150067.51
[]
docs.cloudera.com
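The flags described in the Kudu page above can be combined into a dry-run check before a real rebalance. A small sketch, assuming Python and the kudu CLI on the PATH; the master addresses are placeholders.

import subprocess

MASTERS = "master-1:7051,master-2:7051,master-3:7051"  # placeholder addresses

# Dry run: report on placement-policy conformance without moving any replicas.
subprocess.run(
    ["kudu", "cluster", "rebalance", MASTERS, "--report_only"],
    check=True,
)

# Real run that only fixes policy violations, skipping the intra-location phase.
subprocess.run(
    ["kudu", "cluster", "rebalance", MASTERS,
     "--disable_intra_location_rebalancing"],
    check=True,
)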
The drag-and-drop UI for checkout flows makes it seem easy to simply disable existing checkout panes provided by core and replace them with your own custom panes. However, disabling certain checkout panes can cause problems. Specifically, because the "Payment process" pane is dependent on the "Payment information" pane, if you disable the "Payment information" pane, the "Payment process" pane is automatically disabled. Which is probably not what you want. This documentation page describes how you can safely replace the "Payment information" pane (or any other existing pane) with your own. The process for creating the replacement checkout pane is essentially the same as for any other custom checkout pane. The one big exception is that you should not include a @CommerceCheckoutPane annotation for your new plugin. If other Commerce Core code is dependent on the pane you're replacing, it will reference the pane using the id set in the annotation. So if you create a new pane with a new id, references to the existing pane will still point to that existing pane. For example, let's suppose that we want to replace the Commerce Core "Payment information" pane with one of our own. We'll extend the existing pane class to avoid duplicating code unnecessarily, but if your custom pane is significantly different from the pane it's replacing, you could instead extend Drupal\commerce_checkout\Plugin\Commerce\CheckoutPane\CheckoutPaneBase. Here's our custom PaymentInformation pane: <?php namespace Drupal\my_checkout_pane\Plugin\Commerce\CheckoutPane; use Drupal\commerce_payment\Plugin\Commerce\CheckoutPane\PaymentInformation as BasePaymentInformation; use Drupal\Core\Form\FormStateInterface; /** * Provides a custom payment information pane. */ class PaymentInformation extends BasePaymentInformation { /** * {@inheritdoc} */ public function buildPaneForm(array $pane_form, FormStateInterface $form_state, array &$complete_form) { $pane_form = parent::buildPaneForm($pane_form, $form_state, $complete_form); // Do something custom with the pane form here. $pane_form['message'] = [ '#markup' => $this->t('This is my custom payment information pane.'), ]; return $pane_form; } } Note that this custom checkout pane won't show up on the Checkout Flow admin UI page since it doesn't have an annotation. And at this point, the "Payment information" pane is still generated by the Commerce Core code rather than your custom code. Standard Payment information pane, in the Order information step: To "fix" references to the existing pane so that they point to your custom checkout pane, you can use hook_commerce_checkout_pane_info_alter. For our "Payment information" checkout pane replacement example, that code looks like this: /** * Implements hook_commerce_checkout_pane_info_alter(). */ function my_checkout_pane_commerce_checkout_pane_info_alter(&$definitions) { if (isset($definitions['payment_information'])) { $definitions['payment_information']['class'] = \Drupal\my_checkout_pane\Plugin\Commerce\CheckoutPane\PaymentInformation::class; $definitions['payment_information']['provider'] = 'my_checkout_pane'; } } Add that code to your custom modules's .module file, rebuild caches, and reload your checkout page. It now looks like this: Found errors? Think you can improve this documentation? edit this page
https://docs.drupalcommerce.org/commerce2/developer-guide/checkout/replacing-existing-checkout-pane
2021-07-23T22:56:44
CC-MAIN-2021-31
1627046150067.51
[]
docs.drupalcommerce.org
The most common things that you will want to do, without going as far as hacking Venabili's code, are defining your own key layers and macros. Changing what the keyboard does is quick and simple; you won't even need to unplug it, and making small incremental tweaks every time you think of a way to make your typing a little bit faster and/or more comfortable is highly encouraged! The process goes like so: 1. Edit venabili.c with your own layers and macros. 2. Recompile the firmware. 3. Enter flash mode. 4. Flash the new firmware.
https://docs.venabili.sillybytes.net/customizing
2021-07-23T21:44:09
CC-MAIN-2021-31
1627046150067.51
[]
docs.venabili.sillybytes.net
JSRP - JCR Storage Resource Provider About JSRP Configuration Select JSRP: - Select JCR Storage Resource Provider (JSRP) - Select Submit Publishing the Configuration While JSRP is the default configuration, to ensure the identical configuration is set in the publish environment: - On author: - From global navigation: Tools > Deployment > Replication - Select Activate Tree - Start Path: - Browse to /conf/global/settings/community/srpc/ - Select Activate Managing User Data For information regarding users, user profiles and user groups, often entered in the publish environment, see the user management documentation. Troubleshooting UGC Not Visible in JCR Check /conf/global/settings/community - If the srpc node exists and contains the node defaultconfiguration, the defaultconfiguration's properties should define JSRP to be the default provider. UGC Not Visible on Publish Instance.
https://docs.adobe.com/content/help/en/experience-manager-64/communities/administer/jsrp.html
2020-10-23T22:47:47
CC-MAIN-2020-45
1603107865665.7
[array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-234.png', None], dtype=object) ]
docs.adobe.com
Merge Merge page allows you to see the differences between the app on your Builder and app on the base or “original” Builder. This is useful when you take a copy of an app and customize it, yet from time to time you still want to incorporate updates from the original project as it is being developed. You can choose whether to keep your version, choose the original from the base Builder or ignore the difference and show it as resolved in the future. The topic is well described here.
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427464225
2020-10-23T22:23:07
CC-MAIN-2020-45
1603107865665.7
[]
docs.codejig.com
Tutorial: Azure Active Directory single sign-on (SSO) integration with Datasite In this tutorial, you'll learn how to integrate Datasite with Azure Active Directory (Azure AD). When you integrate Datasite with Azure AD, you can: - Control in Azure AD who has access to Datasite. - Enable your users to be automatically signed-in to Datasite. - Datasite single sign-on (SSO) enabled subscription. Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment. Datasite supports SP initiated SSO Once you configure Datasite you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. Learn how to enforce session control with Microsoft Cloud App Security. Note Identifier of this application is a fixed string value so only one instance can be configured in one tenant. Adding Datasite from the gallery To configure the integration of Datasite into Azure AD, you need to add Datasite Datasite in the search box. - Select Datasite from results panel and then add the app. Wait a few seconds while the app is added to your tenant. Configure and test Azure AD SSO for Datasite Configure and test Azure AD SSO with Datasite using a test user called B.Simon. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Datasite. To configure and test Azure AD SSO with Datasite, Datasite SSO - to configure the single sign-on settings on application side. - Create Datasite test user - to have a counterpart of B.Simon in Datasite that is linked to the Azure AD representation of user. - Test SSO - to verify whether the configuration works. Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal. In the Azure portal, on the Datasite: In the Sign-on URL text box, type the URL: On the Set up single sign-on with SAML page, in the SAML Signing Certificate section, find Certificate (Base64) and select Download to download the certificate and save it on your computer. On the Set up Datasite Datasite. In the Azure portal, select Enterprise Applications, and then select All applications. In the applications list, select Datasite. Datasite SSO To configure single sign-on on Datasite side, you need to send the downloaded Certificate (Base64) and appropriate copied URLs from Azure portal to Datasite support team. They set this setting to have the SAML SSO connection set properly on both sides. Create Datasite test user In this section, you create a user called B.Simon in Datasite. Work with Datasite support team to add the users in the Datasite platform. Users must be created and activated before you use single sign-on. Test SSO In this section, you test your Azure AD single sign-on configuration using the Access Panel. When you click the Datasite tile in the Access Panel, you should be automatically signed in to the Datasite Datasite with Azure AD What is session control in Microsoft Cloud App Security? How to protect Datasite with advanced visibility and controls
https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/datasite-tutorial
2020-10-23T23:19:04
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
MSSQLSERVER_2534 Applies to: SQL Server (all supported versions) Details Explanation. User. Caution REPAIR will rebuild the index if an index exists. Caution Running REPAIR for the matching MSSQLSERVER_2533 error deallocates the page before the rebuild. Caution This repair may cause data loss.
https://docs.microsoft.com/en-us/sql/relational-databases/errors-events/mssqlserver-2534-database-engine-error?view=sql-server-ver15
2020-10-23T21:04:25
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
This documentation does not apply to the most recent version of Splunk. Click here for the latest version. audit.conf The following are the spec and example files for audit.conf. audit.conf.spec Version 7.3.2=[true|false] * Turn off sending audit events to the indexQueue -- tail the audit events instead. * If this is set to 'false', you MUST add an inputs.conf stanza to tail the audit log in order to have the events reach your index. * Defaults to true. audit.conf.example # Version 7.3.2 # # 17 September, 2019 This documentation applies to the following versions of Splunk® Enterprise: 7.3.2 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/Splunk/7.3.2/Admin/Auditconf
2020-10-23T23:01:05
CC-MAIN-2020-45
1603107865665.7
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Metrics Diffusion™ metrics provide information about the server, client sessions, topics and log events. Diffusion can provide metrics in three main ways: via the web console, via JMX-compatible MBeans and via Prometheus. Methods of accessing metrics There are multiple ways to access the metrics. As of Diffusion 6.3, the same information is available through each access method. - Web console metrics - The metrics are available through the Diffusion web console. This is the most convenient way to access metrics for development and testing purposes, but does not support aggregating metrics across multiple servers or recording and retrieving historical data. JMX or Prometheus access are more suitable for production systems. - MBeans for JMX - Diffusion registers MBeans with the Java Management Extensions (JMX) service. This enables monitoring of the metrics using the JMX tools that are available from a range of vendors. - Prometheus - Diffusion provides endpoints for the Prometheus monitoring system. To use Prometheus, your Diffusion server needs to have a Commercial with Scale & Availability license, or an evaluation license such as the Community Evaluation license. See License types for more information. Accessing metrics The metrics can be accessed in the following recommended ways: - As MBeans, using a JMX tool, such as VisualVM or JConsole. See the table below for MBean interfaces. For more information, see Using Java VisualVM or Using JConsole. - Using the Diffusion monitoring console. For more information, see Diffusion monitoring console. - As Prometheus endpoints at, provided you have a suitable license. If not accessing from the same machine as the Diffusion server, replace localhost with the IP address or hostname. Collecting custom metrics using metric collectors A metric collector is a way to collect metrics for a particular set of topics or sessions, configured by you. You can use the Diffusion web console or JMX to define metric collectors. See Configuring metrics for details. Collected metrics are published to the console, JMX and optionally via Prometheus. Counters and gauges Metrics are divided into counters and gauges. - Counter metric - A counter is a cumulative metric, which reports a value since the server was started. A counter metric will always go up over a server's lifetime. For example, the total number of bytes received by the server is a counter. - Gauge metric - A gauge is a metric which reports the current value of a metric. A gauge value can go up or down. For example, the number of connected sessions is a gauge. Built-in metrics This section describes the built-in metrics that are always available, aside from any metric collectors you may have created. Metrics are not persisted between server restarts. Restarting the server will set all counter metrics back to zero. The following is a list of all the top level statistics and their attributes. Delta compression ratio value_bytes and delta_bytes can be used to capture the theoretical delta compression ratio of the application data flowing through the topics. Both the console and the JMX MBean perform this calculation. The ratio is a value between 0 and 1. The closer the ratio is to 1, the more benefit the application data will obtain from delta streaming. If value_bytes is 0, there have been no updates, so the delta compression ratio is reported as zero. 
Otherwise it is calculated as: 1 - delta_bytes / value_bytes Delta streaming is enabled for subscriptions by default, but can be disabled on a per-topic basis using the PUBLISH_VALUES_ONLY topic property. If delta streaming is enabled, a stable set of subscribers remain connected, and no session has a significant backlog (so conflation is not applied), the following relationship should hold: subscriber_update_bytes ≅ delta_updates x subscribers Delta streaming can also be used to update topic values. If the delta compression ratio is high, but delta_updates is zero (or low, relative to value_updates), consider whether your application can use the stateful update stream API to take advantage of delta streaming. Log metrics Log metrics record information about server log events. Separate metrics are kept for each unique pair of log code and log severity level that has been logged. The log severity levels are: error, warn, info, debug, trace. A JMX MBean is created for each pair of log code and log severity that has been logged at least once. Here is an example MBean name: com.pushtechnology.diffusion:type=LogMetrics,server="server_name",level=warncode=PUSH-12345 Session metrics versus network metrics The network inbound_bytes and outbound_bytes metrics include bytes that are not counted by the equivalent session metrics. The session metrics include bytes from transport framing and all session traffic (including additional HTTP traffic from long polling). - TLS overhead - Web server traffic (for example, browsers downloading the web console pages) - Rejected connection attempts Metrics in the Publisher API Publisher metrics, client instance metrics, and topic instance metrics have all been removed. Consequently the PublisherStatistics, ClientStatistics and TopicStatistics interfaces provide no information. These interfaces are deprecated and will be removed in a future release. Limited server metrics are still available through the Publisher API using the ServerStatistics interface. For more information, see the Java API documentation. This page last modified: 2019/04/17
https://docs.pushtechnology.com/docs/6.3.2/manual/html/administratorguide/systemmanagement/r_statistics.html
2020-10-23T21:12:47
CC-MAIN-2020-45
1603107865665.7
[]
docs.pushtechnology.com
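The Diffusion page above gives the delta compression ratio formula and mentions a Prometheus endpoint; both can be exercised with a few lines of Python using the requests package. The endpoint URL below is an assumption (the exact URL is not preserved in the extract above), so substitute the one configured for your server, and treat the metric handling as illustrative only.

import requests

def delta_compression_ratio(value_bytes: int, delta_bytes: int) -> float:
    """1 - delta_bytes / value_bytes, reported as 0 when there have been no updates."""
    if value_bytes == 0:
        return 0.0
    return 1 - delta_bytes / value_bytes

print(delta_compression_ratio(10_000, 1_500))  # 0.85

# Assumed endpoint; use your Diffusion server's actual Prometheus URL.
text = requests.get("http://localhost:8080/metrics").text
for line in text.splitlines():
    if line and not line.startswith("#"):   # skip Prometheus HELP/TYPE comments
        print(line)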
You can now clone your Salesforce-NetSuite (IO) integration app setting values and mappings from one integration tile to another integration tile of different environments. If you have the Salesforce - NetSuite IO integration apps in the production and sandbox environments, the integration app provides feasibility to choose the environment. Important: This feature is only supported on the integrator.io current UI. The cloning feature is also helpful if you have to verify any features in the sandbox environment. In this scenario, you can directly clone the integration tile from the production environment to the sandbox environment. You can clone your integration app between the following environments: - Sandbox to sandbox - Production to production - Sandbox to production - Production to sandbox The below table explains the components that are cloned and not cloned from the base integration tile to the source integration tile: Information: The cloning capability will be available for the saved searches in future releases. Checklist after you clone - Check all the settings where you select the Salesforce or NetSuite specific components. - Click the refresh icon for all the settings wherever applicable, select the appropriate option, and save the settings. - Check all the required custom fields. Clone your integration app - Login to your integrator.io account. - Select the Salesforce - NetSuite (IO) integration tile you wished to clone. - On the top-right, click Clone integration. You are redirected to the cloning page. - On the cloning page, in the Tag field, enter the name of the cloning integration app.Notes - By default, in the Tag field, the existing integration app name is prefixed with “Clone.” If your existing integration app name is “Salesforce - NetSuite,” in the Tag field while cloning, you will see “Clone - Salesforce - NetSuite.” - You cannot have the same names for the source and destination tiles. - Choose the Environment to which you wish to clone your existing integration app. Notes - The Environment option is available only if you have the Salesforce - NetSuite IO integrator.io subscription in the production and sandbox environments. Once the integration app is cloned and if you try to enable any flow, the integration app license is validated. - You can read through the components that are cloned as part of this integration such as integration, flows, imports, exports, and connection details. - Click Clone integration. Configure and install your destination integration app Prerequisite: Before you begin to configure the destination integration app, be sure that you have a valid integration app license. However, you will be able to clone the integration app and when you try to enable any flow, the integration app license is validated. Note: If you wish to cancel your installation, you can click Uninstall on the top-right. Step 1: Configure your NetSuite connection You can authenticate your connection either using the basic, token, or automatic token options. We recommend you to use any of the token-based authentication methods. For token-based authentication, create an access token in NetSuite. After you configure, you won't be able to change the NetSuite environment and account. Step 2: Configure your Salesforce connection Prerequisite: The new Salesforce account users are recommended to install the packages and enable/disable the flows to back up the real-time SObjects. Lets you create a connection with Salesforce. 
You can authenticate your connection either using the Refresh Token or JWT Bearer Token option. Once you allow access with your Salesforce account credentials, you won't be able to change the Salesforce configuration. You can change the account or account type after you completely install the integration app. Note for step 3 to step 6: You have to manually verify the Salesforce integrator.io package, Salesforce integration app package, NetSuite integrator.io bundle, and NetSuite integration app SuiteApp. Step 3: Install the integrator.io package in Salesforce It is recommended to install using the Install for All Users option. After you install, an email is sent and you can find the installed package on the Salesforce > Installed Packages page. Verify your package after installation. Step 4: Install the NetSuite package in Salesforce It is recommended to install using the Install for All Users option. After you install, an email is sent and you can find the installed package on the Salesforce > Installed Packages page. Verify your package after installation. Step 5: Install the integrator.io bundle in NetSuite Lets you install the integrator.io bundle (20038) in NetSuite. It is a common bundle across all integration apps. Verify the bundle. If you already have a bundle installed, it is either updated or auto-verified. It is recommended to update and verify the bundle from NetSuite > Installed Bundles page. Step 6: Install the Salesforce SuiteApp in NetSuite Lets you install the Salesforce SuiteApp in NetSuite. You can install and verify the SuiteApp in NetSuite. Step 7: Copy resources now from template zip In this step, clone all the information related to the source tile such as mappings, advanced setting values, imports, exports, and connection details from the source tile to the destination tile. Information: It is expected that the “Copy resources now from template zip” step might take a long time as it is migrating the information from the source tile to the destination tile. Understand the destination tile Important: After you clone your integration tile, it is recommended that you verify all the mappings, settings, saved searches before you sync data between Salesforce and NetSuite. After you clone, when you try to enable the flow, the integration app license is validated. If you do not have a valid license, then the integration app displays an error message when you try to enable any flow. After cloning, the saved search value is set to the default value. It is recommended to verify all the saved searches and update the value accordingly. This is also applicable for the saved search filters if any. Go to - Configure Product > NetSuite to Salesforce > Salesforce Standard Price Book - for this setting you have to check and uncheck the Active checkbox in your Salesforce account to reflect the appropriate changes in your integration app. Internal ID that is populated in the Salesforce account URL might vary in the source and destination tiles. - Map NetSuite Price Level to Salesforce Price Book - If you refresh the Salesforce Price Book, then in this setting you might have to reconfigure your values. - For the NetSuite components, the internal ID might change. It is recommended that you refresh and reset the values again. - If the user is using any default fields that are provided by our package, then you might not have to re-configure those settings. All the values as per our package will be cloned. If you are using any other field, then you have to re-configure that setting again. 
Contract-renewals add-on If your source tile has the contract renewal add-on, it is cloned to the destination tile. After cloning, when you try to enable the add-on flow, the add-on license is validated. If you do not have a valid license, an error message is displayed. Note: If your source tile has the “contract renewal” add-on and in your destination tile, the NetSuite account you configured has the “contract renewal” feature disabled, cloning fails and the entire integration app is not cloned. Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/360047107372-Clone-your-Salesforce-NetSuite-IO-integration-app
2020-10-23T22:33:32
CC-MAIN-2020-45
1603107865665.7
[array(['/hc/article_attachments/360064195492/SFNSIO_Installation.jpg', 'SFNSIO_Installation.jpg'], dtype=object) ]
docs.celigo.com
MSSQLSERVER_905 Applies to: SQL Server (all supported versions) Details Explanation The database contains one or more partitioned tables or indexes. This edition of SQL Server cannot use partitioning. Therefore, the database cannot be started correctly. Partitioned tables and indexes are not available in every edition of Microsoft SQL Server. For a list of features that are supported by the editions of SQL Server, see Features Supported by the Editions of SQL Server 2016. User Action.
https://docs.microsoft.com/en-us/sql/relational-databases/errors-events/mssqlserver-905-database-engine-error?view=sql-server-ver15
2020-10-23T22:27:07
CC-MAIN-2020-45
1603107865665.7
[]
docs.microsoft.com
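A practical follow-up to error 905 is to find out which objects in the database are actually partitioned, by attaching the database on an edition that supports partitioning and querying sys.partitions. The sketch below is a hedged illustration in Python with pyodbc, not an official Microsoft script; the connection string and database name are placeholders.

import pyodbc

# Placeholder connection string; point it at an instance where the database is online.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=MyDatabase;Trusted_Connection=yes"
)

# Objects with more than one partition are using table/index partitioning.
rows = conn.execute("""
    SELECT DISTINCT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
                    OBJECT_NAME(object_id)        AS object_name
    FROM sys.partitions
    WHERE partition_number > 1
""").fetchall()

for schema_name, object_name in rows:
    print(f"{schema_name}.{object_name}")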
Overview of account linking for Apple Business Chat You can use an interaction to prompt the user to log into a third-party account. The interaction for account linking needs: - The Requires account linking option to be set (on its Advanced tab) - An Account Link button that initiates the login on the user's device. Authentication is handled by the client's OAuth server and Apple Business Chat. See Prerequisites for account linking on Apple Business Chat. Once the user has signed in successfully, Apple Business Chat will send an access token to the bot. See Using the returned authorization code. The following diagram provides an overview of the account linking process.
https://docs.converse.engageone.co/AccountLinking/account_linking_overview_apple_business_chat.html
2020-10-23T21:35:25
CC-MAIN-2020-45
1603107865665.7
[array(['../images/account_linking_apple_bchat.png', None], dtype=object)]
docs.converse.engageone.co
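The account-linking flow above ends with Apple Business Chat returning an authorization code or access token to the bot. What happens next depends entirely on the client's OAuth server, so the following Python sketch is only a generic illustration of exchanging an authorization code for a token with requests; the token URL, client credentials, and redirect URI are all hypothetical placeholders, not values from this documentation.

import requests

# All of these values are hypothetical; use your OAuth server's real settings.
TOKEN_URL = "https://auth.example.com/oauth/token"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "https://bot.example.com/oauth/callback"

def exchange_code_for_token(auth_code: str) -> dict:
    """Standard OAuth 2.0 authorization-code exchange."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # typically contains access_token, token_type, expires_in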
Purpose The stub task is used as a placeholder or "stand-in" type of task in a workflow. Potential Use Case Let's say you are building a workflow and you identify some gaps in your high-level design, but you don't want to stop to work out the details where those gaps exist. You could use the stub task as a placeholder and, once you've "stubbed" the placements that you need, you could go back and replace those stand-in tasks with a real task or a set of tasks that perform what your workflow is designed to do. Properties Input and output parameters are shown below. Example In this example, the type reference variable is set to "success". The job can also be set to "error", in which case an error is returned as the response instead of a clean and complete automation. The delay is set to "12", which shows in the automation as a delay of 12 seconds. The response to pass through is the word "awesome". The response that returns for this stub is the word "awesome", which completed after a 12-second delay. Of note, you can view the 12-second delay in Job Manager as the job is running.
https://docs.itential.io/user/Itential%20Automation%20Platform/Workflow%20Engine/Task%20References/stub/
2020-10-23T21:55:45
CC-MAIN-2020-45
1603107865665.7
[array(['image/ex01-stub.png', 'stub'], dtype=object) array(['image/ex02-stubResult.png', 'stubresult'], dtype=object)]
docs.itential.io
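The stub task's behaviour in the example above (type, delay, pass-through response) is simple enough to imitate outside the platform. The toy Python function below only illustrates those semantics; it is not Itential code, and the parameter the platform calls "type" is renamed task_type here.

import time

def stub(task_type="success", delay=0, response=None):
    """Mimic the stub task: wait for `delay` seconds, then return the response or raise."""
    time.sleep(delay)
    if task_type == "error":
        raise RuntimeError(response or "stub task configured to fail")
    return response

# Matches the documented example: a 12-second delay, then the word "awesome".
print(stub(task_type="success", delay=12, response="awesome"))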
Database audit logging Topics - Overview - Amazon Redshift logs - Enabling logging - Managing log files - Troubleshooting Amazon Redshift audit logging - Logging Amazon Redshift API calls with AWS CloudTrail Audit logs are delivered to Amazon S3 buckets. Log files are not as current as the base system log tables, STL_USERLOG and STL_CONNECTION_LOG. Records that are older than, but not including, the latest record are copied to log files. The log files contain data from the time audit logging is enabled to the present time. Each logging update is a continuation of the information that was already logged. By default, audit logging records log information for only the connection log and user log, but not for the user activity log. The enable_user_activity_logging parameter is not enabled (false) by default. You can set it to true to enable the user activity log. For more information, see Amazon Redshift parameter groups. Currently, you can only use Amazon S3-managed keys (SSE-S3) encryption (AES-256) for audit logging. Managing log files The number and size of Amazon Redshift log files in Amazon S3 depends on the activity in your cluster, and you incur charges for the storage that you use in Amazon S3. Before you configure logging, you should have a plan for how long you need to store the log files. As part of this, determine when the log files can be deleted. If you have Amazon Redshift create a new bucket for you as part of configuration, correct permissions are applied to the bucket. However, if you create your own bucket in Amazon S3 or use an existing bucket, you need to add a bucket policy that includes the bucket name. You also need the Amazon Redshift account ID that corresponds to your AWS Region from the following table. The bucket policy uses the following format, where BucketName and AccountId are placeholders for your own values. Logging Amazon Redshift API calls with AWS CloudTrail Amazon Redshift is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Amazon Redshift. CloudTrail captures all API calls for Amazon Redshift as events. These include calls from the Amazon Redshift console and from code calls to the Amazon Redshift API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Amazon Redshift. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in Event history. Using the information collected by CloudTrail, you can determine certain details. These include the request that was made to Amazon Redshift, the IP address it was made from, who made it, when it was made, and other information. You can use CloudTrail independently from or in addition to Amazon Redshift database audit logging. To learn more about CloudTrail, see the AWS CloudTrail User Guide. Amazon Redshift information in CloudTrail CloudTrail is enabled on your AWS account when you create the account. When activity occurs in Amazon Redshift, that activity is recorded in a CloudTrail event. Amazon Redshift actions are logged by CloudTrail and are documented in the Amazon Redshift API Reference. For example, calls to the CreateCluster, DeleteCluster, and DescribeCluster actions. For more information, see the CloudTrail userIdentity Element.
Understanding Amazon Redshift Create:51:56Z" } }, "invokedBy": "signin.amazonaws.com" }, "eventTime": "2017-03-03T16:56:09Z", "eventSource": "redshift.amazonaws.com", "eventName": "CreateCluster", "awsRegion": "us-east-2", "sourceIPAddress": "52.95.4.13", "userAgent": "signin.amazonaws.com", "requestParameters": { "clusterIdentifier": "my-dw-instance", "allowVersionUpgrade": true, "enhancedVpcRouting": false, "encrypted": false, "clusterVersion": "1.0", "masterUsername": "awsuser", "masterUserPassword": "****", "automatedSnapshotRetentionPeriod": 1, "port": 5439, "dBName": "mydbtest", "clusterType": "single-node", "nodeType": "dc1.large", "publiclyAccessible": true, "vpcSecurityGroupIds": [ "sg-95f606fc" ] }, "responseElements": { "nodeType": "dc1.large", "preferredMaintenanceWindow": "sat:05:30-sat:06:00", "clusterStatus": "creating", "vpcId": "vpc-84c22aed", "enhancedVpcRouting": false, "masterUsername": "awsuser", "clusterSecurityGroups": [], "pendingModifiedValues": { "masterUserPassword": "****" }, "dBName": "mydbtest", "clusterVersion": "1.0", "encrypted": false, "publiclyAccessible": true, "tags": [], "clusterParameterGroups": [ { "parameterGroupName": "default.redshift-1.0", "parameterApplyStatus": "in-sync" } ], "allowVersionUpgrade": true, "automatedSnapshotRetentionPeriod": 1, "numberOfNodes": 1, "vpcSecurityGroups": [ { "status": "active", "vpcSecurityGroupId": "sg-95f606fc" } ], "iamRoles": [], "clusterIdentifier": "my-dw-instance", "clusterSubnetGroupName": "default" }, "requestID": "4c506036-0032-11e7-b8bf-d7aa466e9920", "eventID": "13ba5550-56ac-405b-900a-8a42b0f43c45", "eventType": "AwsApiCall", "recipientAccountId": "123456789012" } The following example shows a CloudTrail log entry for a sample Delete:58:23Z" } }, "invokedBy": "signin.amazonaws.com" }, "eventTime": "2017-03-03T17:02:34Z", "eventSource": "redshift.amazonaws.com", "eventName": "DeleteCluster", "awsRegion": "us-east-2", "sourceIPAddress": "52.95.4.13", "userAgent": "signin.amazonaws.com", "requestParameters": { "clusterIdentifier": "my-dw-instance", "skipFinalClusterSnapshot": true }, "responseElements": null, "requestID": "324cb76a-0033-11e7-809b-1bbbef7710bf", "eventID": "59bcc3ce-e635-4cce-b47f-3419a36b3fa5", "eventType": "AwsApiCall", "recipientAccountId": "123456789012" } Amazon Redshift account IDs in AWS CloudTrail logs When Amazon Redshift calls another AWS service for you, the call is logged with an account ID that belongs to Amazon Redshift. It isn't logged with your account ID. For example, suppose that Amazon Redshift calls AWS Key Management Service (AWS KMS) actions such as CreateGrant, Decrypt, Encrypt, and RetireGrant to manage encryption on your cluster. In this case, the calls are logged by AWS CloudTrail using an Amazon Redshift account ID. Amazon Redshift uses the account IDs in the following table when calling other AWS services. The following example shows a CloudTrail log entry for the AWS KMS Decrypt operation that was called by Amazon Redshift. 
{ "eventVersion": "1.05", "userIdentity": { "type": "AssumedRole", "principalId": "AROAI5QPCMKLTL4VHFCYY:i-0f53e22dbe5df8a89", "arn": "arn:aws:sts::790247189693:assumed-role/prod-23264-role-wp/i-0f53e22dbe5df8a89", "accountId": "790247189693", "accessKeyId": "AKIAIOSFODNN7EXAMPLE", "sessionContext": { "attributes": { "mfaAuthenticated": "false", "creationDate": "2017-03-03T16:24:54Z" }, "sessionIssuer": { "type": "Role", "principalId": "AROAI5QPCMKLTL4VHFCYY", "arn": "arn:aws:iam::790247189693:role/prod-23264-role-wp", "accountId": "790247189693", "userName": "prod-23264-role-wp" } } }, "eventTime": "2017-03-03T17:16:51Z", "eventSource": "kms.amazonaws.com", "eventName": "Decrypt", "awsRegion": "us-east-2", "sourceIPAddress": "52.14.143.61", "userAgent": "aws-internal/3", "requestParameters": { "encryptionContext": { "aws:redshift:createtime": "20170303T1710Z", "aws:redshift:arn": "arn:aws:redshift:us-east-2:123456789012:cluster:my-dw-instance-2" } }, "responseElements": null, "requestID": "30d2fe51-0035-11e7-ab67-17595a8411c8", "eventID": "619bad54-1764-4de4-a786-8898b0a7f40c", "readOnly": true, "resources": [ { "ARN": "arn:aws:kms:us-east-2:123456789012:key/f8f4f94f-e588-4254-b7e8-078b99270be7", "accountId": "123456789012", "type": "AWS::KMS::Key" } ], "eventType": "AwsApiCall", "recipientAccountId": "123456789012", "sharedEventID": "c1daefea-a5c2-4fab-b6f4-d8eaa1e522dc" }
https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html
2020-10-23T21:38:38
CC-MAIN-2020-45
1603107865665.7
[]
docs.aws.amazon.com
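The audit logging described in the Redshift page above can also be switched on programmatically. A brief sketch using Python and boto3; the cluster identifier, bucket name, and prefix are placeholders, and the bucket must already carry the bucket policy discussed above.

import boto3

redshift = boto3.client("redshift", region_name="us-east-2")

# Enable database audit logging for an existing cluster (placeholder names).
redshift.enable_logging(
    ClusterIdentifier="my-dw-instance",
    BucketName="my-audit-log-bucket",
    S3KeyPrefix="redshift/audit/",
)

# Check the current logging status.
status = redshift.describe_logging_status(ClusterIdentifier="my-dw-instance")
print(status["LoggingEnabled"], status.get("BucketName"))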
FTP Access to Your Websites One of the most convenient ways to update your website content is to upload it through FTP. FTP (File Transfer Protocol) is a standard network protocol that allows transferring files between two hosts (for example, your computer and a Plesk server). Plesk acts as an FTP server, while users should use some FTP client to access the directories on the server. Plesk provides all main FTP features: - Authorized access to the server. Learn more in the section Changing FTP Access Credentials. - Multiple user accounts for collaborative work. Learn more in the section Adding FTP Accounts. - Anonymous FTP access: The access without authorization that may be used, for example, to share software updates. Learn more in the section Setting Up Anonymous FTP Access. Note: Information is transferred over FTP unencrypted. We encourage you to use the secure FTPS protocol (also known as FTP-SSL). FTPS is supported by most modern FTP clients. Refer to your FTP client documentation for instructions on how to enable FTPS.
https://docs.plesk.com/en-US/12.5/customer-guide/ftp-access-to-your-websites.69544/
2020-10-23T21:39:54
CC-MAIN-2020-45
1603107865665.7
[]
docs.plesk.com
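Since the Plesk page above recommends FTPS over plain FTP, here is a small Python sketch that uploads a file over FTPS using the standard library's ftplib. The host, credentials, and remote path (httpdocs is a common Plesk docroot, but yours may differ) are placeholders for your own FTP account.

from ftplib import FTP_TLS

HOST = "example.com"     # your Plesk server (placeholder)
USER = "ftpuser"         # FTP account user name (placeholder)
PASSWORD = "secret"      # placeholder

ftps = FTP_TLS(HOST)
ftps.login(USER, PASSWORD)
ftps.prot_p()            # switch the data connection to TLS as well

with open("index.html", "rb") as f:
    ftps.storbinary("STOR httpdocs/index.html", f)

ftps.quit()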
Filtering For a Web Gateway policy, you can configure allow and block rules, and trusted domains and IP addresses. To change the filtering settings, in the Web Gateway policy, click Settings to show the Filtering options. Category Filtering Use this to control which websites your users are allowed to visit. You can set options for security categories or productivity categories. For more information on how Sophos filters websites, see Sophos Web Security and Control Test Site. Security Categories Use this section to configure access to websites that are known to be high-risk. You can choose these options: - Block risky downloads: This will block all high-risk websites. - Block All: This blocks all traffic categorized as security. - Custom: Lets you choose which categories you want to Allow, Audit, Warn or Block. To see the effect of an option on various categories of websites and downloads, click View Details. Productivity Categoties To see the effect of an option on various categories of websites, click View Details. - Keep It Clean: Prevents users from accessing adult and other potentially inappropriate or controversial websites. - Audit Potential Risks: Allows administrators to flag events where users visited adult, controversial or data sharing websites that could be a potential risk. The user is not shown any type of warning. - Conserve Bandwidth: Blocks inappropriate browsing and site categories likely to consume high bandwidth. - Business Only: Only allows site categories that are generally business-related. - Block Data Sharing: Blocks any website associated with data sharing activities. This helps prevent data loss. - Custom: Lets you choose which category groups or individual categories of sites you want to Allow, Audit, Warn or Block. Web Filtering Use this to control access to websites that you have "tagged", that is, put into your own categories, in. - Select Web Filtering . - Click Add New (on the right). - Select your Website Tag and set the Action to one of the following options. - Allow allows access to the website. - Audit allows access to the website, but associates an Audit action with the website so that you can filter and report on these events. - Warn displays a warning to the user, but allows them to proceed to the website if they decide they want to. - Block denies access to the website and shows the user a block page (which you can customize). Data Filters Use this to specify keywords and regular expressions that should be identified and used for filtering web pages. To set up a filtering rule: - Select Data Filters . - Click Add New (on the right). The Add Data Filter dialog is displayed. - Enter a Name for the rule. - Choose whether to Allow, Audit, Warn or Block the content once a rule is matched. - Choose whether the filter applies to Download, Upload or Both. - Select the Type: - Manual. If you select this, enter a Keyword and a Count (number of occurrences). - Template. If you select this, choose a template from the drop-down list. The rule is applied when all the conditions of the filter are met. Web Safe Mode Use this to help restrict access to inappropriate images or videos. - Enable Google SafeSearch. This helps to block inappropriate or explicit images from Google search results. - Enable YouTube restricted mode. This hides videos that may contain inappropriate content (as flagged by users and other criteria). SSL Scanning Use this to configure whether web pages should be decrypted to identify potential malware or content that should be filtered. 
You can select SSL scanning for: - Risky websites. - Search engines and social media. - Let me specify. This lets you set options for each category of website. For each category, you can specify whether to scan all sites in the category, or select Let me specify again to select which subcategories to scan. Trusted Destination IPs & Domains Use this to specify IP addresses and domains for which traffic will not be routed through the Web Gateway. Instead that traffic will go directly to the internet. Trusted Source IPs Use this to specify source IP addresses and subnets where traffic will not be routed through the Web Gateway. When the Web Gateway agent is on the specified IP address or subnet, Web Gateway will not run. This setting is often used for known safe networks where network security is already in place.
https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/WGPolicyFiltering.html
2020-10-23T21:46:25
CC-MAIN-2020-45
1603107865665.7
[]
docs.sophos.com
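The Data Filters section above matches keywords or regular expressions against web content and applies an action once a count threshold is reached. The Python sketch below only illustrates that matching logic conceptually; it is not how the Web Gateway is implemented, and the keyword, count, and action values are invented.

import re

def evaluate_data_filter(content: str, pattern: str, count: int, action: str) -> str:
    """Return the configured action if the pattern occurs at least `count` times."""
    matches = re.findall(pattern, content, flags=re.IGNORECASE)
    return action if len(matches) >= count else "Allow"

page = "confidential draft ... confidential figures ... confidential roadmap"
print(evaluate_data_filter(page, r"confidential", count=3, action="Block"))  # Block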
. XG Firewall provides additional security features such as IPS, web filtering, web application firewall, VPN gateway, and Synchronized Security. What is Sophos Synchronized Security? When you deploy Sophos Intercept X advanced security agents and XG Firewall, you can guard against a compromised system becoming the entry for further malicious activity. XG Firewall prevents a compromised AWS EC2 instance with Intercept X Advanced from communicating with other AWS EC2 instances or sending traffic to the internet. For more information, see Sophos Synchronized Security. How is XG Firewall on AWS different than the XG Firewall that can be run on-premise or in local virtual environments? XG Firewall on AWS offers the same features and benefits as XG Firewall running on-premises, but you can easily install and run it in the AWS Cloud. Currently, XG Firewall on AWS doesn't support high availability and must be deployed as a standalone appliance. XG Firewall on AWS also supports additional purchasing options, as described below. XG Firewall on AWS licensing options XG Firewall on AWS is available via the AWS Marketplace and can be purchased from a Sophos reseller or directly from the AWS Marketplace. Software licenses purchased from a Sophos reseller and used in AWS are referred to as Bring your license (BYOL). When XG Firewall is purchased directly from the AWS Marketplace, it's referred to as Pay as you go (PAYG). BYOL You can purchase and use traditional term software licenses using the Sophos partner network. XG Firewall software licenses offer a variety of bundles, subscriptions, and support options. For more information, see XG licensing guide. If you bring your own XG Firewall license for use in AWS, you don't pay AWS Marketplace software charges, but you're still billed by AWS for the EC2 instance used to run the XG Firewall software. For more information, see Sophos XG Firewall Standalone (BYOL). XG Firewall software licenses are provided in various CPU and RAM combinations, which can then be mapped to a supported EC2 instance, as shown below. PAYG If you don't want to purchase a traditional term license or want to purchase directly from AWS, you can use the Pay as you go licensing option. This method provides all XG Firewall functionality (FullGuard) for an additional hourly software charge which is added together with the cost of the EC2 instance used to run XG Firewall. You'll see this additional charge on your monthly AWS bill. You can stop charges at any time by removing any XG Firewall instances from your AWS account. Sophos also supports the AWS Private offers program, which allows customers and partners to negotiate custom pricing and terms. Contact your Sophos sales representative for more information. Are XG Firewall free trials available for AWS? Both the PAYG and BYOL licensing options allow for XG Firewall free trials. PAYG trials are provided directly from AWS Marketplace and are available for 30 days. After the first month, AWS automatically starts charging for any XG PAYG usage incurred. If you have a BYOL license, you can start a trial during the initial configuration or get a trial license from the Sophos free trial link. Can I migrate my UTM license to XG Firewall? You can convert your UTM production license into an XG Firewall license. For more information, see How to convert an SG appliance to an XG appliance with SFOS. Can I use an existing XG Firewall license for a new XG Firewall on AWS? XG Firewall license transfers are only supported under certain circumstances. 
For more information, see License transfer. Are there any prerequisites to deploy XG Firewall on AWS? For both BYOL and PAYG XG on AWS deployments, you must first accept the AWS Marketplace software terms and subscribe to the software. You can do this from the XG Firewall on AWS listing pages.
https://docs.sophos.com/nsg/sophos-firewall/18.0/Help/en-us/webhelp/onlinehelp/nsg/sfos/concepts/AWSFAQ.html
2020-10-23T22:06:09
CC-MAIN-2020-45
1603107865665.7
[]
docs.sophos.com
The forward proxy server validates the request. - If a request is not valid, the proxy rejects the request and the client receives an error or is redirected. - If a request is valid, the forward proxy checks whether the requested information is cached. - If a cached copy is available, the forward proxy serves the cached information. - If the requested information is not cached, the request is sent to an actual content server which sends the information to the forward proxy. The forward proxy then relays the response to the client.
https://docs.splunk.com/Documentation/Splunk/7.3.2/Admin/Aboutserverproxysplunkd
2020-10-23T23:01:49
CC-MAIN-2020-45
1603107865665.7
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
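The request flow above (validate, check the cache, then serve or fetch) is generic forward-proxy behaviour rather than anything Splunk-specific, so it can be illustrated with a small Python sketch. The validation rule and in-memory cache are toy stand-ins and the code is not part of Splunk.

import requests

CACHE: dict[str, bytes] = {}
ALLOWED_HOSTS = {"docs.example.com"}   # toy validation rule

def forward_proxy(url: str) -> bytes:
    # 1. Validate the request; reject anything outside the allowed hosts.
    host = url.split("/")[2]
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"request to {host} rejected by proxy")
    # 2. Serve from cache when a copy is available.
    if url in CACHE:
        return CACHE[url]
    # 3. Otherwise fetch from the content server, cache it, and relay the response.
    body = requests.get(url, timeout=10).content
    CACHE[url] = body
    return body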
Feature: #54518 - Provide TSconfig to link checkers¶ See Issue #54518 Description¶ The active TSconfig of the linkvalidator is stored in the LinkAnalyzer and made publicly available to the link checkers. The TSconfig is read either from the currently active TSconfig in the Backend when the linkvalidator is used in the info module or from the configuration provided in the linkvalidator scheduler task. This allows passing configuration to the different link checkers. Usage: # The configuration in mod.linkvalidator can be read by the link checkers. mod.linkvalidator.mychecker.myvar = 1
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.0/Feature-54518-ProvideTsconfigToLinkCheckers.html
2020-10-23T22:32:08
CC-MAIN-2020-45
1603107865665.7
[]
docs.typo3.org
Installation ↑ Back to top WooSidebars is available for download directly through WooDojo. If you would prefer to install the plugin manually, please use the following directions: - Download the plugin to your computer - Login to your WordPress admin panel, and click Plugins > Add New - Using the “Upload” option, click “Choose File” and browse to where you downloaded the WooSidebars plugin. - Click “Install Now” and then activate the WooSidebars plugin to complete installation. Getting Started ↑ Back to top WooSidebars adds a “Widget Areas” menu item under the “Appearance” menu in your WordPress administration area. This is where all WooSidebars interaction takes place. Adding a Widget Area ↑ Back to top On the “Widget Areas” screen, a list of all currently stored widget areas is displayed. To get started, click the “Add New” link next to the “Widget Areas” title on the screen, to add your first widget area. Adding a widget area consists of several fields: a title, description, the sidebar to be replaced and the conditions for replacing the sidebar (we’ll discuss conditions in more detail below). Title ↑ Back to top The title is the name of the widget area, as it displays on the “Appearance > Widgets” screen. This should be kept short and relevant (for example, “About Page – Primary” for replacing the “Primary” sidebar on the “About” page). Description ↑ Back to top If necessary, this provides more details for where and when the sidebar is used. This text displays inside the sidebar, in its display on the “Appearance > Widgets” screen. Sidebar To Replace ↑ Back to top WooSidebars works by overriding a widget area from the active theme, with a custom widget area, if certain conditions are met. This field is where you would choose which of the theme’s sidebars is to be replaced by the sidebar you’re creating here. Conditions ↑ Back to top WooSidebars comes bundled with an advanced conditions management system, where by it is possible to choose under which conditions the sidebar is replaced with the custom sidebar (for example, on certain pages, categories, tags or internal WordPress screens). By default, only “Pages” is available. More advanced conditions can be made available by clicking the “Advanced” tab inside the “Conditions” box. Conditions ↑ Back to top By default, WooSidebars comes bundled with conditions for pages, which displays by default. Clicking the “Advanced” tab inside the “Conditions” box opens up a range of extra tabs, for additional conditions. Other conditions include: - Specific page templates (if the active theme contains page templates) - Post types (for the post type archives and to display on all posts of a specific post type) - Taxonomy archives (categories, tags, etc, for all registered taxonomies on the installation) - Taxonomy terms (specific categories, tags, etc, for all registered taxonomies on the installation) - WordPress Template Hierarchy - All pages - Search results - Default “Your Latest Posts” screen - Front page - Single entries - All archives - Author archives - Date archives - 404 error screens - WooCommerce - Shop page - Product categories - Product tags - Products - Cart page - Account pages Note: The ‘Posts’ tab in the image above will only be visible if you enable Custom Sidebars on individual posts, as described below. Conditions for specific entries ↑ Back to top With WooSidebars, it’s possible to create custom sidebars for specific entries in a post type. 
By default, the “post” post type supports this and, if WooCommerce is active, the “product” post type supports this as well. To create a widget area for a specific blog post, simply go to Posts > All Posts and click the check mark next to the desired blog post. To add support for this to other post types, please add the following to your theme’s “functions.php” file or to your custom plugin that you’re developing, inside PHP tags: add_post_type_support( 'post_type', 'woosidebars' ); In the above example, please replace “post_type” with the desired post type. Adding this code, adds checkmark-style buttons to the “List” screen for that post type in the WordPress admin. Here, you can click the button for each entry you’d like to be able to create a custom sidebar for. Non-active Sidebars ↑ Back to top WooSidebars detects which sidebars are active in the current theme, and notifies you of which of your custom sidebars don’t apply to the current theme. Those sidebars do, however, remain in the system, in case you are switching regularly between themes that use different registered sidebars. Video Overview ↑ Back to top Video courtesy of Jamie Marsland at PootlePress.
https://docs.woocommerce.com/document/woosidebars-2/
2020-10-23T22:06:51
CC-MAIN-2020-45
1603107865665.7
[]
docs.woocommerce.com
pandera.Hypothesis.__init__¶ Hypothesis. __init__(test, samples=None, groupby=None, relationship='equal', test_kwargs=None, relationship_kwargs=None, name=None, error=None, raise_warning=False)[source]¶ Perform a hypothesis test on a Series or DataFrame. - Parameters test ( Callable) – The hypothesis test function. It should take one or more arrays as positional arguments and return a test statistic and a p-value. The arrays passed into the test function are determined by the samplesargument. samples ( Union[ str, List[ str], None]) – for Column or SeriesSchema hypotheses, this refers to the group keys in the groupby column(s) used to group the Series into a dict of Series. The samples column(s) are passed into the test function as positional arguments. For DataFrame-level hypotheses, samples refers to a column or multiple columns to pass into the test function. The samples column(s) are passed into the test function as positional arguments. the hypothesis_check function. Specifying this argument changes the fn signature to: dict[str|tuple[str], Series] -> bool|pd.Series[bool] Where specific groups can be obtained from the input dict. relationship ( Union[ str, Callable]) – Represents what relationship conditions are imposed on the hypothesis test. A function or lambda function can be supplied. Available built-in relationships are: “greater_than”, “less_than”, “not_equal” or “equal”, where “equal” is the null hypothesis. If callable, the input function signature should have the signature (stat: float, pvalue: float, **kwargs)where stat is the hypothesis test statistic, pvalue assesses statistical significance, and **kwargs are other arguments supplied via the **relationship_kwargs argument. Default is “equal” for the null hypothesis. test_kwargs (dict) – Keyword arguments to be supplied to the test. relationship_kwargs (dict) – Keyword arguments to be supplied to the relationship function. e.g. alpha could be used to specify a threshold in a t-test. name ( Optional[ str]) – optional name of hypothesis test error ( Optional[ str]) – error message to show raise_warning ( bool) – if True, raise a UserWarning and do not throw exception instead of raising a SchemaError for a specific check. This option should be used carefully in cases where a failing check is informational and shouldn’t stop execution of the program. - Examples - Define a two-sample hypothesis test using scipy. >>> import pandas as pd >>> import pandera as pa >>> >>> from scipy import stats >>> >>> schema = pa.DataFrameSchema({ ... "height_in_feet": pa.Column(pa.Float, [ ... pa.Hypothesis( ... test=stats.ttest_ind, ... samples=["A", "B"], ... groupby="group", ... # assert that the mean height of group "A" is greater ... # than that of group "B" ... relationship=lambda stat, pvalue, alpha=0.1: ( ... stat > 0 and pvalue / 2 < alpha ... ), ... # set alpha criterion to 5% ... relationship_kwargs={"alpha": 0.05} ... ) ... ]), ... "group": pa.Column(pa.String), ... }) >>> See here for more usage details. - Return type None
https://pandera.readthedocs.io/en/stable/generated/methods/pandera.Hypothesis.__init__.html
2020-10-23T21:27:37
CC-MAIN-2020-45
1603107865665.7
[]
pandera.readthedocs.io
SPA Model Routing For single page applications in AEM, the app is responsible for the routing. This document describes the routing mechanism, the contract, and options available. The Single-Page Application (SPA) Editor feature requires AEM 6.4 service pack 2 or newer. The SPA Editor is the recommended solution for projects that require SPA framework based client-side rendering (e.g. React or Angular). Project Routing The app owns the routing, which is then implemented by the project front end developers. This document describes the routing specific to the model returned by the AEM server. The page model data structure exposes the URL of the underlying resource. The front end project can use any custom or third-party library providing routing functionalities. Once a route expects a fragment of model, a call to the PageModelManager.getData() function can be made. When a model route has changed, an event must be triggered to warn listening libraries such as the Page Editor. Architecture For a detailed description, please refer to the PageModelManager section of the SPA Blueprint document. ModelRouter The ModelRouter - when enabled - encapsulates the HTML5 History API functions pushState and replaceState to guarantee a given fragment of model is pre-fetched and accessible. It then notifies the registered front end component that the model has been modified. Manual vs Automatic Model Routing The ModelRouter automates the fetching of fragments of the model. But as with any automated tooling, it comes with limitations. When needed, the ModelRouter can be disabled or configured to ignore paths using meta properties (See the Meta Properties section of the SPA Page Component document). Front end developers can then implement their own model routing layer by requesting the PageModelManager to load any given fragment of model using the getData() function. Currently the We.Retail Journal sample React project illustrates the automated approach while the Angular project illustrates the manual one. A semi-automated approach would also be a valid use case. The current version of the ModelRouter only supports the use of URLs that point to the actual resource path of Sling Model entry points. It doesn't support the use of Vanity URLs or aliases. Routing Contract The current implementation is based on the assumption that the SPA project uses the HTML5 History API for routing to the different application pages. Configuration The ModelRouter supports the concept of model routing as it listens for pushState and replaceState calls to prefetch model fragments. Internally it triggers the PageModelManager to load the model that corresponds to a given URL and fires a cq-pagemodel-route-changed event that other modules can listen to. By default, this behavior is automatically enabled. To disable it, the SPA should render the following meta property: <meta property="cq:pagemodel_router" content="disable"/> Note that every route of the SPA should correspond to an accessible resource in AEM (e.g., "/content/mysite/mypage") since the PageModelManager will automatically try to load the corresponding page model once the route is selected. Though, if needed, the SPA can also define a "block list" of routes that should be ignored by the PageModelManager: <meta property="cq:pagemodel_route_filters" content="route/not/found,^(.*)(?:exclude/path)(.*)"/>
https://docs.adobe.com/content/help/en/experience-manager-64/developing/headless/spas/spa-routing.html
2020-10-23T23:03:32
CC-MAIN-2020-45
1603107865665.7
[]
docs.adobe.com
The Particle System (a component that simulates fluid entities such as liquids, clouds and flames by generating and animating large numbers of small 2D images in the scene) has a powerful set of properties that are organized into modules for ease of use. This section of the manual covers each of the modules in detail.
https://docs.unity3d.com/2020.1/Documentation/Manual/ParticleSystemModules.html
2020-10-23T22:55:32
CC-MAIN-2020-45
1603107865665.7
[]
docs.unity3d.com
Once the quick, initial set up is completed you’ll be greeted with the following dialogue box. From here, you can create projects, load an existing Mix, or create a new one. Projects A project is a folder in your Mixer directory that contains Mixes. - To create a new project, simply click the plus (+) icon next to Projects. - If you wish to delete a project, hover over a project’s name to reveal an x next to it. Click to delete. - Click on a project’s name to see the Mixes within the project. Mixes Inside the Sample Mixes project, there are some Mixes that come with the app. You can load these to immediately see the different kinds of surfaces that can be created using Mixer or quickly learn how they were created by diving into the details in their layer stack. - By hovering over an existing Mix you’ll see an X to the right of the Mix name. Click to delete the Mix. - To load an existing Mix, either double click the Mix or select it and click Open. Creating a New Mix To create a new Mix, click New Mix at the top-right corner. A New Mix dialogue will open up with the following settings: Working Resolution: Set between 256 and 8k pixels as per your project requirements. After clicking “Ok” you will be directed towards the Setup Tab, where you can select the type of surface or 3D mesh for texturing.
http://docs.quixel.com/mixer/1/en/topic/project-creation
2020-10-23T21:57:54
CC-MAIN-2020-45
1603107865665.7
[]
docs.quixel.com
The ability to back up and restore your system(s) has been integrated into the Tower setup playbook. Refer to Backup and Restore for Clustered Environments for additional considerations. Note When restoring, be sure to restore to the same version from which it was backed up. Also, backup and restore will only work on PostgreSQL versions supported by your current Ansible Tower version. For more information, see Requirements in the Ansible Tower Installation and Reference Guide. The Tower setup playbook is invoked as setup.sh from the path where you unpacked the Tower installer tarball. It uses the same inventory file used by the install playbook. The setup script takes the following arguments for backing up and restoring: -b Perform a database backup rather than an installation. -r Perform a database restore rather than an installation. As the root user, call setup.sh with the appropriate parameters and Tower is backed up or restored as configured. root@localhost:~# ./setup.sh -b root@localhost:~# ./setup.sh -r Backup files will be created in the same path where the setup.sh script exists. It can be changed by specifying the following EXTRA_VARS : root@localhost:~# ./setup.sh -e 'backup_dest=/path/to/backup_dir/' -b A default restore path is used unless EXTRA_VARS are provided with a non-default path, as shown in the example below: root@localhost:~# ./setup.sh -e 'restore_backup_file=/path/to/nondefault/backup.tar.gz' -r Optionally, you can override the inventory file used by passing it as an argument to the setup script: setup.sh -i <inventory file> In addition to the install.yml file included with your setup.sh setup playbook, there are also backup.yml and restore.yml files for your backup and restoration needs. These playbooks serve two functions–backup and restore. The overall backup will back up: the database, the SECRET_KEY file. The per-system backups include: custom user config files, job stdout, manual projects. The restore will restore the backed up files and data to a freshly installed and working second instance of Tower. When restoring your system, Tower checks to see that the backup file exists before beginning the restoration. If the backup file is not available, your restoration will fail. Note Ensure your Tower host(s) are properly set up with SSH keys or user/pass variables in the hosts file, and that the user has sudo access. Disk Space: Review your disk space requirements to ensure you have enough room to back up configuration files, keys, and other relevant files, plus the database of the Tower installation. System Credentials: Confirm you have the system credentials you need when working with a local database or a remote database. On local systems, you may need root or sudo access, depending on how credentials were set up. On remote systems, you may need different credentials to grant you access to the remote system you are trying to backup or restore. When using setup.sh to do a restore from the default restore file path, /var/lib/awx, -r is still required in order to do the restore, but it no longer accepts an argument. If a non-default restore file path is needed, the user must provide this as an extra var ( root@localhost:~# ./setup.sh -e 'restore_backup_file=/path/to/nondefault/backup.tar.gz' -r). If the backup file is placed in the same directory as the setup.sh installer, the restore playbook will automatically locate the restore files. In this case, you do not need to use the restore_backup_file extra var to specify the location of the backup file.
The procedure for backup and restore for a clustered environment is similar to a single install, except with some considerations described in this section. If restoring to a new cluster, make sure the old cluster is shut down before proceeding because they could conflict with each other when accessing the database. Per-node backups will only be restored to nodes bearing the same hostname as the backup. When restoring to an existing cluster, the restore contains: Dump of the PostgreSQL database UI artifacts (included in database dump) Tower configuration (retrieved from /etc/tower) Tower secret key Manual projects When restoring a backup to a separate instance or cluster, manual projects and custom settings under /etc/tower are retained. Job output and job events are stored in the database, and therefore, not affected. The restore process will not alter instance groups present before the restore (neither will it introduce any new instance groups). Restored Tower resources that were associated to instance groups will likely need to be reassigned to instance groups present on the new Tower cluster.
https://docs.ansible.com/ansible-tower/latest/html/administration/backup_restore.html
2020-10-23T22:29:35
CC-MAIN-2020-45
1603107865665.7
[]
docs.ansible.com
We are Planning to Upgrade our Magento 1 Store to a New Version. Are there any Impacts on the Magento 1 - NetSuite Connector v3? Supported Magento Editions & Versions can be found here. It is highly recommended that you contact Celigo Support prior to upgrading your Magento Store to make sure there are no issues.
https://docs.celigo.com/hc/en-us/articles/228179928-We-are-Planning-to-Upgrade-our-Magento-1-Store-to-a-New-Version-Are-there-any-Impacts-on-the-Magento-1-NetSuite-Connector-v3-
2020-10-23T21:49:58
CC-MAIN-2020-45
1603107865665.7
[]
docs.celigo.com
Settings for Routing Processes Summary EnsLib.MsgRouter.RoutingEngine has the following settings: The remaining settings are common to all business processes. See “Settings for All Business Processes” in Configuring Productions. Act On Transform Error If True, causes errors returned by a transformation to stop rule evaluation and the error to be handled by Reply Code Actions setting. Act On Validation Error If True, causes errors returned by validation to be handled by Reply Code Actions setting. Alert On Bad Message If True, any document that fails validation automatically triggers an alert. Bad Message Handler If the document fails validation, and if the routing process has a configured Bad Message Handler, it sends the bad document to this business operation instead of its usual target for documents that pass validation. See “Defining Bad Message Handlers,” earlier in this book. Business Rule Name The full name of the routing rule set for this routing process. Force Sync Send If True, make synchronous calls for all “send” actions from this routing process. If False, allow these calls to be made asynchronously. This setting is intended to ensure FIFO ordering in the following case: This routing process and its target business operations all have Pool Size set to 1, and ancillary business operations might be called asynchronously from within a data transformation or business operation called from this routing process. If Force Sync Send is True, this can cause deadlock if another business process is called by a target that is called synchronously from this routing process. Note that if there are multiple “send” targets, Force Sync Send means these targets will be called one after another in serial fashion, with the next being called after the previous call completes. Also note that synchronous calls are not subject to the Response Timeout setting. Reply Target Config Names Specifies a comma-separated list of configuration items within the production to which the business service should relay any reply documents that it receives. Usually the list contains one item, but it can be longer. The list can include business processes or business operations, or a combination of both. This setting takes effect only if the Response From setting has a value. Response From A comma-separated list of configured items within the production. This list identifies the targets from which a response may be forwarded back to the original caller, if the caller requested a response. If a Response From string is specified, the response returned to the caller is the first response that arrives back from any target in the list. If there are no responses, an empty “OK” response header is returned. The Response From string also allows special characters, as follows: The * character by itself matches any target in the production, so the first response from any target is returned. If there are no responses, an empty “OK” response header is returned. If the list of targets begins with a + character, the responses from all targets return together, as a list of document header IDs in the response header. If none of the targets responds, an empty OK response header is returned. If the list of targets begins with a - character, only error responses will be returned, as a list of document header IDs in the response header. If none of the targets responds with an error, an empty OK response header is returned. If this setting value is unspecified, nothing is returned. 
Response Timeout Maximum length of time to wait for asynchronous responses before returning a “timed-out error” response header. A value of -1 means to wait forever. Note that a value of 0 is not useful, because every response would time out. This setting takes effect only if the Response From field has a value. Rule Logging If logging is enabled controls the level of logging in rules. You can specify the following flags: e—Log errors only. All errors will be logged irrespective of other flags, so setting the value to 'e' or leaving the value empty will only log errors. r—Log return values. This is the default value for the setting, and is also automatic whenever the 'd' or 'c' flags are specified. d—Log user-defined debug actions in the rule. For details on the debug action, see “Adding Actions” in Developing Business Rules. This will also include 'r'. c—Log details of the conditions that are evaluated in the rule. This will also include 'r'. a—Log all available information. This is equivalent to 'rcd'. Validation For allowed values and basic information, see the Validation setting for business services. If the document fails validation, the routing process forwards the document to its bad message handler, as specified by the Bad Message Handler setting. If there is no bad message handler, the routing process does not route the document, but logs an error. Also see Alert On Bad Message.
https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=EEDI_settings_bp
2020-10-23T22:05:24
CC-MAIN-2020-45
1603107865665.7
[]
docs.intersystems.com
The width of the texture in pixels (Read Only) // Print texture size to the Console var texture : Texture; function Start () { print("Size is " + texture.width + " by " + texture.height); } using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public Texture texture; void Start() { print("Size is " + texture.width + " by " + texture.height); } } import UnityEngine import System.Collections public class ExampleClass(MonoBehaviour): public texture as Texture def Start() as void: print(((('Size is ' + texture.width) + ' by ') + texture.height))
https://docs.unity3d.com/jp/460/ScriptReference/Texture-width.html
2020-10-23T23:01:21
CC-MAIN-2020-45
1603107865665.7
[]
docs.unity3d.com
Everest Forms Documentation This addon will help you to edit the form that you created using the Everest Forms plugin. You can easily customize each and every component of your form with the help of this addon. It can help you to make your form very intuitive with an excellent design. To know more about this addon, let’s go through the following documentation. The Style Customizer is already configured on your form when you activate it. Once you finish installing and activating the addon, you can easily create your form from the dashboard. After you’ve created it, you can see the form designer icon on the bottom right of the screen of your dashboard. Click on it and you will be able to see your Everest Forms Style Customizer. You can endlessly modify your form with the help of it. Field Sublabels are the further subdivided labels of the fields of your forms. You can modify the following settings: You will have three different types of messages that you can customize. All three of them have similar customization options. For example, if you want to customize the Success Message then you have the following options. The Customization Options that you get to customize the messages are: You will find similar options to customize the Error Message and the Validation Message. As the name suggests, Additional CSS can be used to add your own CSS code to customize your form the way you want. You can see a CSS editor in the customizer where you can add all the CSS code you want.
https://docs.wpeverest.com/everest-forms/docs/style-customizer/
2020-10-23T21:31:10
CC-MAIN-2020-45
1603107865665.7
[]
docs.wpeverest.com
4. How to Deal With Strings¶ This section explains how strings are represented in Python 2.x, Python 3.x and GTK+ and discusses common errors that arise when working with strings. 4.1. Definitions¶ Conceptually, a string is a list of characters such as 'A', 'B', 'C' or 'É'. Characters are abstract representations and their meaning depends on the language and context they are used in. The Unicode standard describes how characters are represented by code points. For example the characters above are represented with the code points U+0041, U+0042, U+0043, and U+00C9, respectively. Basically, code points are numbers in the range from 0 to 0x10FFFF. As mentioned earlier, the representation of a string as a list of code points is abstract. In order to convert this abstract representation into a sequence of bytes the Unicode string must be encoded. The simplest form of encoding is ASCII and is performed as follows: - If the code point is less than 128, each byte is the same as the value of the code point. - If the code point is 128 or greater, the Unicode string can’t be represented in this encoding (Python raises a UnicodeEncodeError exception in this case.) Although ASCII encoding is simple to apply it can only encode for 128 different characters which is hardly enough. One of the most commonly used encodings that addresses this problem is UTF-8 (it can handle any Unicode code point). UTF stands for "Unicode Transformation Format", and the '8' means that 8-bit numbers are used in the encoding. 4.2. Python 2¶ 4.2.1. Python 2.x’s Unicode Support¶ Python 2 comes with two different kinds of objects that can be used to represent strings, str and unicode. Instances of the latter are used to express Unicode strings, whereas instances of the str type are byte representations (the encoded string). Under the hood, Python represents Unicode strings as either 16- or 32-bit integers, depending on how the Python interpreter was compiled. Unicode strings can be converted to 8-bit strings with unicode.encode(): >>> unicode_string = u"Fu\u00dfb\u00e4lle" >>> print unicode_string Fußbälle >>> type(unicode_string) <type 'unicode'> >>> unicode_string.encode("utf-8") 'Fu\xc3\x9fb\xc3\xa4lle' Python’s 8-bit strings have a str.decode() method that interprets the string using the given encoding: >>> utf8_string = unicode_string.encode("utf-8") >>> type(utf8_string) <type 'str'> >>> u2 = utf8_string.decode("utf-8") >>> unicode_string == u2 True Unfortunately, Python 2.x allows you to mix unicode and str if the 8-bit string happened to contain only 7-bit (ASCII) bytes, but would get UnicodeDecodeError if it contained non-ASCII values: >>> utf8_string = " sind rund" >>> unicode_string + utf8_string u'Fu\xdfb\xe4lle sind rund' >>> utf8_string = u" k\u00f6nnten rund sein".encode("utf-8") >>> print utf8_string könnten rund sein >>> unicode_string + utf8_string Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128) 4.2.2. Unicode in GTK+¶ GTK+ uses UTF-8 encoded strings for all text. This means that if you call a method that returns a string you will always obtain an instance of the str type. The same applies to methods that expect one or more strings as parameters: they must be UTF-8 encoded. However, for convenience PyGObject will automatically convert any unicode instance to str if supplied as argument: >>> from gi.repository import Gtk >>> label = Gtk.Label() >>> unicode_string = u"Fu\u00dfb\u00e4lle" >>> label.set_text(unicode_string) >>> txt = label.get_text() >>> type(txt), txt (<type 'str'>, 'Fu\xc3\x9fb\xc3\xa4lle') >>> txt == unicode_string __main__:1: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal False Note the warning at the end.
Although we called Gtk.Label.set_text() with a unicode instance as argument, Gtk.Label.get_text() will always return a str instance. Accordingly, txt and unicode_string are not equal. This is especially important if you want to internationalize your program using gettext. You have to make sure that gettext will return UTF-8 encoded 8-bit strings for all languages. In general it is recommended to not use unicode objects in GTK+ applications at all and only use UTF-8 encoded str objects since GTK+ does not fully integrate with unicode objects. Otherwise, you would have to decode the return values to Unicode strings each time you call a GTK+ method: >>> txt = label.get_text().decode("utf-8") >>> txt == unicode_string True 4.3. Python 3¶ 4.3.1. Python 3.x’s Unicode support¶ Since Python 3.0, all strings are stored as Unicode in an instance of the str type. Encoded strings on the other hand are represented as binary data in the form of instances of the bytes type. Conceptually, str refers to text, whereas bytes refers to data. Use str.encode() to go from str to bytes, and bytes.decode() to go from bytes to str. In addition, it is no longer possible to mix Unicode strings with encoded strings, because it will result in a TypeError: >>> text = "Fu\u00dfb\u00e4lle" >>> data = b" sind rund" >>> text + data Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Can't convert 'bytes' object to str implicitly >>> text + data.decode("utf-8") 'Fußbälle sind rund' >>> text.encode("utf-8") + data b'Fu\xc3\x9fb\xc3\xa4lle sind rund' 4.3.2. Unicode in GTK+¶ As a consequence, things are much cleaner and more consistent with Python 3.x, because PyGObject will automatically encode/decode to/from UTF-8 if you pass a string to a method or a method returns a string. Strings, or text, will always be represented as instances of str only: >>> from gi.repository import Gtk >>> label = Gtk.Label() >>> text = "Fu\u00dfb\u00e4lle" >>> label.set_text(text) >>> txt = label.get_text() >>> type(txt), txt (<class 'str'>, 'Fußbälle') >>> txt == text True 4.4. References¶ What’s new in Python 3.0 describes the new concepts that clearly distinguish between text and data. The Unicode HOWTO discusses Python 2.x’s support for Unicode, and explains various problems that people commonly encounter when trying to work with Unicode. The Unicode HOWTO for Python 3.x discusses Unicode support in Python 3.x. UTF-8 encoding table and Unicode characters contains a list of Unicode code points and their respective UTF-8 encoding.
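To make the Python 2 recommendation above a bit more concrete, here is a small sketch of a helper (the function name is ours, not part of PyGObject) that normalizes text to a UTF-8 encoded str before handing it to GTK+:

from gi.repository import Gtk

def ensure_utf8(text):
    # Return a UTF-8 encoded str, whether we were given unicode or str (Python 2 only).
    if isinstance(text, unicode):
        return text.encode("utf-8")
    return text

label = Gtk.Label()
label.set_text(ensure_utf8(u"Fu\u00dfb\u00e4lle"))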
http://python-gtk-3-tutorial.readthedocs.io/en/latest/unicode.html
2018-03-17T10:36:00
CC-MAIN-2018-13
1521257644877.27
[]
python-gtk-3-tutorial.readthedocs.io
Performance analytics with domain separation When using Performance Analytics with domain separation you can collect domain-specific scores, and use global or domain-specific configuration records such as indicators, breakdowns, and dashboards. Note: You must have the premium version of Performance Analytics to use Performance Analytics in any domain other than global. Global configuration By using configuration records in the global domain, you can present domain-appropriate data automatically. PADomainUtils The PADomainUtils API allows you to copy Performance Analytics configurations between different domains.
https://docs.servicenow.com/bundle/geneva-performance-analytics-and-reporting/page/use/performance_analytics/concept/c_PAWithDomainSeparation.html
2018-03-17T10:50:39
CC-MAIN-2018-13
1521257644877.27
[]
docs.servicenow.com
GRC authority documents and GRC citations - Legacy authority documents Authority documents are used to define policies, risks, controls, audits, and other processes ensuring adherence to the authoritative content. Each authority document is defined by a master record on the Authoritative Source [grc_authoritative_source] table, with a related list of records from the Authoritative Source Content [grc_authoritative_src_content] table. GRC citations Citation records contain the actual provisions of the authority document, which can be interrelated using configured relationships. In this way, the relationships between different sections of the authority documents can be mapped to better record how the authority document is meant to be implemented. The same relationship mechanism can be used to document relationships across authority documents. This is important because different sources address the same or similar controls and objectives. You can create citations or import them from UCF authority documents and then create any necessary relationships between the citations. See UCF authority document import process - Legacy. Create a GRC authority document manually - Legacy: Create an authority document. Create a GRC citation - Legacy: A citation can be created manually. Add a relationship between GRC citations - Legacy: You can define relationships between citations from within the citation form. Define a GRC citation relationship type - Legacy: Relationships can be defined at the GRC citation level. View a relationship between GRC citations - Legacy: It is useful to view whether or not organizational controls are in place to address citations.
https://docs.servicenow.com/bundle/helsinki-governance-risk-compliance/page/product/it-governance-risk-and-compliance/concept/c_AuthorityDocuments.html
2018-03-17T10:50:50
CC-MAIN-2018-13
1521257644877.27
[]
docs.servicenow.com
obspy.core.inventory¶ obspy.core.inventory - Classes for handling station metadata¶ This module provides a class hierarchy to consistently handle station metadata. This class hierarchy is closely modelled after the upcoming de-facto standard format FDSN StationXML which was developed as a human readable XML replacement for Dataless SEED. Note IRIS is maintaining a Java tool for converting dataless SEED into StationXML and vice versa at Reading¶ StationXML files can be read using the read_inventory() function that returns an Inventory object. >>> from obspy import read_inventory >>> inv = read_inventory("/path/to/BW_RJOB.xml") >>> inv <obspy.core.inventory.inventory.Inventory object at 0x...> >>> print(inv) Inventory created at 2013-12-07T18:00:42.878000Z Created by: fdsn-stationxml-converter/1.0.0 Sending institution: Erdbebendienst Bayern Contains: Networks (1): BW Stations (1): BW.RJOB (Jochberg, Bavaria, BW-Net) Channels (3): BW.RJOB..EHE, BW.RJOB..EHN, BW.RJOB..EHZ The file format in principle is autodetected. However, the autodetection uses the official StationXML XSD schema and unfortunately many real world files currently show minor deviations from the official StationXML definition causing the autodetection to fail. Thus, manually specifying the format is a good idea: >>> inv = read_inventory("/path/to/BW_RJOB.xml", format="STATIONXML") Class hierarchy¶ The Inventory class has a hierarchical structure, starting with a list of Networks, each containing a list of Stations which again each contain a list of Channels. The Responses are attached to the channels as an attribute. >>> net = inv[0] >>> net <obspy.core.inventory.network.Network object at 0x...> >>> print(net) Network BW (BayernNetz) Station Count: None/None (Selected/Total) None - Access: None Contains: Stations (1): BW.RJOB (Jochberg, Bavaria, BW-Net) Channels (3): BW.RJOB..EHZ, BW.RJOB..EHN, BW.RJOB..EHE >>> sta = net[0] >>> print(sta) Station RJOB (Jochberg, Bavaria, BW-Net) Station Code: RJOB Channel Count: None/None (Selected/Total) 2007-12-17T00:00:00.000000Z - Access: None Latitude: 47.74, Longitude: 12.80, Elevation: 860.0 m Available Channels: RJOB..EHZ, RJOB..EHN, RJOB..EHE >>> cha = sta[0] >>> print(cha) Channel 'EHZ', Location '' Timerange: 2007-12-17T00:00:00.000000Z - -- Latitude: 47.74, Longitude: 12.80, Elevation: 860.0 m, Local Depth: 0.0 m Azimuth: 0.00 degrees from north, clockwise Dip: -90.00 degrees down from horizontal Channel types: TRIGGERED, GEOPHYSICAL Sampling Rate: 200.00 Hz Sensor: Streckeisen STS-2/N seismometer Response information available >>> print(cha, gain: 1.67... Stage 3: FIRResponseStage from COUNTS to COUNTS, gain: 1 Stage 4: FIRResponseStage from COUNTS to COUNTS, gain: 1 Preview plots of station map and instrument response¶ For station metadata, preview plot routines for geographic location of stations as well as bode plots for channel instrument response information are available. The routines for station map plots are: For example: >>> from obspy import read_inventory >>> inv = read_inventory() >>> inv.plot() (Source code, png, hires.png) The routines for bode plots of channel instrument response are: For example: >>> from obspy import read_inventory >>> inv = read_inventory() >>> resp = inv[0][0][0].response >>> resp.plot(0.001, output="VEL") (Source code, png, hires.png) For more examples see the Obspy Gallery. Dealing with the Response information¶ The get_evalresp_response() method will call some functions within evalresp to generate the response. 
>>> response = cha.response >>> response, freqs = response.get_evalresp_response(0.1, 16384, output="VEL") >>> print(response) [ 0.00000000e+00 +0.00000000e+00j -1.36383361e+07 +1.42086194e+06j -5.36470300e+07 +1.13620679e+07j ..., 2.48907496e+09 -3.94151237e+08j 2.48906963e+09 -3.94200472e+08j 2.48906430e+09 -3.94249707e+08j] Some convenience methods to perform an instrument correction on Stream (and Trace) objects are available and most users will want to use those. The attach_response() method will attach matching responses to each trace if they are available within the inventory object. The remove_response() method deconvolves the instrument response in-place. As always see the corresponding docs pages for a full list of options and a more detailed explanation. >>> from obspy import read >>> st = read() >>> inv = read_inventory("/path/to/BW_RJOB.xml") >>> st.attach_response(inv) [] >>> st.remove_response(output="VEL", water_level=20) <obspy.core.stream.Stream object at 0x...>
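As a small additional sketch (the output file name is a placeholder), an Inventory can also be narrowed down to the channels of interest and written back out as StationXML:

>>> from obspy import read_inventory
>>> inv = read_inventory("/path/to/BW_RJOB.xml")
>>> # Keep only the vertical-component channel of station RJOB.
>>> selected = inv.select(network="BW", station="RJOB", channel="EHZ")
>>> selected.write("BW_RJOB_EHZ.xml", format="STATIONXML")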
http://docs.obspy.org/packages/autogen/obspy.core.inventory.html
2018-03-17T10:32:13
CC-MAIN-2018-13
1521257644877.27
[array(['../../_images/Inventory.png', '../../_images/Inventory.png'], dtype=object) array(['../../_images/obspy-core-inventory-1.png', '../../_images/obspy-core-inventory-1.png'], dtype=object) array(['../../_images/obspy-core-inventory-2.png', '../../_images/obspy-core-inventory-2.png'], dtype=object)]
docs.obspy.org
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the ListResourceDelegates operation. Lists the delegates associated with a resource. Users and groups can be resource delegates and answer requests on behalf of the resource. Namespace: Amazon.WorkMail.Model Assembly: AWSSDK.WorkMail.dll Version: 3.x.y.z The ListResourceDelegates
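For comparison only (this page documents the .NET SDK), the same WorkMail operation is also exposed in other AWS SDKs. A rough sketch with the Python SDK (boto3) might look like the following; the organization and resource IDs are placeholders, and the parameter and response field names should be checked against the boto3 documentation:

import boto3

workmail = boto3.client("workmail")
response = workmail.list_resource_delegates(
    OrganizationId="m-exampleorganizationid",  # placeholder
    ResourceId="r-exampleresourceid",          # placeholder
)
# Each delegate entry identifies a user or group that can answer on behalf of the resource.
for delegate in response.get("Delegates", []):
    print(delegate["Id"], delegate["Type"])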
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/WorkMail/TListResourceDelegatesRequest.html
2018-03-17T11:03:57
CC-MAIN-2018-13
1521257644877.27
[]
docs.aws.amazon.com
Filtering Nodes RadTreeView supports filtering of its nodes according to their Text property. In order to apply a filter, you should set the Filter property to the desired text value. For example, if we have this RadTreeView instance: and we set the Filter property as shown below: this.radTreeView1.Filter = "new"; Me.RadTreeView1.Filter = "new" we will get this look of RadTreeView at the end:
https://docs.telerik.com/devtools/winforms/treeview/working-with-nodes/filtering-nodes
2018-03-17T10:44:58
CC-MAIN-2018-13
1521257644877.27
[array(['images/treeview-working-with-nodes-filtering001.png', 'treeview-working-with-nodes-filtering 001'], dtype=object) array(['images/treeview-working-with-nodes-filtering002.png', 'treeview-working-with-nodes-filtering 002'], dtype=object)]
docs.telerik.com
relevancy Description Calculates how well the event matches the query based on how well the event's _raw field matches the keywords of the 'search'. Saves the result into a field named "relevancy". Useful for retrieving the best matching events/documents, rather than the default time-based ordering. Events score a higher relevancy if they have more rare search keywords, more frequently, in fewer terms. For example a search for disk error will favor a short event/document that has 'disk' (a rare term) several times and 'error' once, over a very large event that has 'disk' once and 'error' several times. Note: The relevancy command does not currently work. See SPL-93039 on the Known issues page here: Syntax relevancy Examples Example 1: Calculate the relevancy of the search and sort the results in descending order. disk error | relevancy | sort -relevancy See also abstract, highlight, sort Answers Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has using the relevancy command. This documentation applies to the following versions of Splunk Cloud™: 6.5.0, 6.5.1, 6.6.0, 6.5.1612, 6.6.1, 6.6.3, 7.0.0
http://docs.splunk.com/Documentation/SplunkCloud/6.6.3/SearchReference/Relevancy
2018-03-17T10:32:16
CC-MAIN-2018-13
1521257644877.27
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Knowi supports connectivity to internal and external REST API's, with the ability to manipulate, store and join with other datasources. Highlights - Connect to any REST API. - Query and transform the data. - Perform advanced analytics and queries on top of it using Cloud9QL, including multi-structured data returned. - Join and blend it against other datasources - Store and incrementally track query results where needed Connecting - From the New Datasource menu (Datasources icon on the left-hand menu --> 'New Datasource'), click on the REST icon. - Enter a name and the REST API the base URL that your requests/end points will be driven off from. After creation, click on the 'Configure Queries here' link. ` Querying Note that the data returned must be in JSON form. - Enter any HTTP headers, if applicable. Multiple headers must be separated one per line. - Specify the REST end point for the request - Add any optional URL parameters that you'd like to add. Parameters will be encoded automatically. - Cloud9QL: Optional post processing of the data returned by the REST API. This can be used to transform the data as well as unwind and interact with nested JSON structures. - Query strategy: You can either have this query execute real-time when the direct Query checkbox is selected, or, store and incrementally track the results into the datastore we provide (default option). Example The integration is pre-configured to a live Parse API end-point with dummy data to easily follow along. - Create the REST API datasource with the default URL. - Enter user id and password, if required (If present, the base64 encoded hash will be automatically passed into all requests) - In the Headers box, copy and paste the sample headers from the help icon, use the default End Point. Click on Preview to see the results. The result in this cases is a nested object, which can be unwound using the following Cloud9QL select expand(results); REST API pagination Sometimes, your API may require you to iterate through a number of pages for a given request. Typically, it'll also contain a bookmark or a page number field to indicate how to load next set. To do this, set up the "Paging - Field Name" field with bookmark field name of response. If the API requires a next page bookmark or page number as a url argument, use the "Paging REST URL parameter name" field. Examples: All examples include an clean call, and call containing an user-defined payload parameters which are not related to pagination. First call First call is without any bookmark or page, like this two examples: parameter-less: with parameters: Response contains "bookmark" as "nextPageWillBe" field in this example: { someData: ["data1", "data2"], nextPageWillBe: "bmABCD" } Fill "Pagination - Response Field Name" in query settings with "nextPageWillBe". Bookmark inside url path subsequent requests: Without parameters: With parameters: Bookmark inside url parameters If the API requires "nextPageWillBe" within url parameters, use the "Pagination - URL parameter name" field with required parameter name. Example: subsequent requests: Without parameters: With parameters: (where nextPageAttrLink is the parameter name for the url). If the next page is a number within the request on the payload: { someData: ["data1", "data2"], nextPage: 2 } Subsequent request if the URL parameter is "page": Without parameters: With parameters: Example when "has more" flag used Some APIs may contain a "hasMore" entry in the response (like Hubspot), along with a bookmark to the next page. 
Example: { someData: ["data1", "data2"], nextPageWillBe: "bmABCD" hasMore: true } In this case, specify the field name for the bookmark ("nextPageWillBe"), as well as the "Pagination - boolean flag for more pages" field ("hasMore"). Pagination with response arrays Some APIs may return an array of objects, each of which could contain some sort of an identifier, where the API expects the max or min of that identifier as a parameter for the next batch (for example, the Trello API). For this, please use the "Pagination Response type" combobox and select the appropriate sorting kind. Example: [ { someData: { } id: 70 }, { someData: { } id: 60 }, { someData: { } id: 50 } ] In the example above, the API expects the last element's "id" from the result to be passed in as a "before" parameter for the next call in the URL. For this, set the "Pagination Response type" to "Response is Array and get last element", the "Response Field Name" to "id", and "URL parameter name" to "before". The next call will be: where "50" is the id of the last element from the previous response that we'll inject automatically. NOTE: If the REST API does not use a next_page token but a url instead, then configure the setting as follows: "Pagination - Response Field Name" use "next_page" value "Pagination Response type" use "Response contains full url to next page"
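To make the bookmark-style paging above concrete, here is a rough Python sketch of the loop a client would run against an API that returns a "nextPageWillBe" bookmark and a "hasMore" flag (the endpoint URL and payload parameters are placeholders, not a real service, and the field names simply mirror the hypothetical examples above):

import requests

def fetch_all_pages(base_url, params=None):
    # Follow "nextPageWillBe" bookmarks until "hasMore" is false or no bookmark is returned.
    results, bookmark = [], None
    while True:
        query = dict(params or {})
        if bookmark:
            query["nextPageAttrLink"] = bookmark   # the "Pagination - URL parameter name"
        page = requests.get(base_url, params=query).json()
        results.extend(page.get("someData", []))
        bookmark = page.get("nextPageWillBe")      # the "Pagination - Response Field Name"
        if not page.get("hasMore") or not bookmark:
            return results

data = fetch_all_pages("https://api.example.com/items", {"param1": "value1"})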
https://docs.knowi.com/hc/en-us/articles/115006386708-REST-API
2018-03-17T10:44:20
CC-MAIN-2018-13
1521257644877.27
[]
docs.knowi.com
Groovy provides a number of helper methods for working with I/O. See Streams, Readers, and Writers for more on Input and Output. Using processes Groovy provides a simple way to execute command line processes: write the command line as a string and call its execute() method. The expression returns a java.lang.Process instance which can have the in/out/err streams processed along with the exit value inspected etc.
http://docs.codehaus.org/pages/viewpage.action?pageId=9765278
2014-04-16T08:26:27
CC-MAIN-2014-15
1397609521558.37
[]
docs.codehaus.org
JBoss DNA is designed around extensions: sequencers, connectors, MIME type detectors, and class loader factories. The core part of JBoss DNA is relatively small and has few dependencies, while all of the "interesting" components are extensions that plug into and are used by different parts of the core. The core doesn't really care what the extensions do or what external libraries they require, as long as the extension fulfills its end of the extension contract. This means that you only need the core modules of JBoss DNA on your application classpath; because the extensions are loaded through separate class loaders, your application is isolated from the extensions and their dependencies. Of course, you can put all the JARs on the application classpath, too. (This is what the examples in the Getting Started document do.) This design also allows you to select only those extensions that are interesting and useful for your application. Not every application needs all of the JBoss DNA functionality. Some applications may only need JBoss DNA sequencing, and specifically just a few types of sequencers. Other applications may not need sequencing but do want to use JBoss DNA federation capabilities. Finally, the use of these formal extensions also makes it easier for you to write your own customized extensions. You may have proprietary file formats that you want to sequence. Or, you may have a non-JCR repository system that you want to access via JCR and maybe even federate with information from other sources. Since extensions do only one thing (e.g., be a sequencer, or a connector, etc.), it's easier to develop those customizations. JBoss DNA loads all of the extension classes using class loaders returned by a class loader factory. Each time JBoss DNA wants to load a class, it needs the name of the class and an optional "class loader name". The meaning of the names is dependent upon the implementation of the class loader factory. For example, the Maven class loader factory expects the names to be Maven coordinates. Either way, the class loader factory implementation uses the name to create and return a ClassLoader instance that can be used to load the class. Of course, if no name is provided, then a JBoss DNA service just uses its class loader to load the class. (This is why putting all the extension jars on the classpath works.) The class loader factory interface is pretty simple: public interface ClassLoaderFactory { /** * Get a class loader given the supplied classpath. The meaning of the classpath is implementation-dependent. * @param classpath the classpath to use * @return the class loader; may not be null */ ClassLoader getClassLoader( String... classpath ); } In the next chapter we'll describe an ExecutionContext interface that is supplied to each of the JBoss DNA core services. This context interface actually extends the ClassLoaderFactory interface, so setting up an ExecutionContext implicitly sets up the class loader factory. JBoss DNA includes and uses as a default a standard class loader factory that just loads the classes using the Thread's current context class loader (if there is one), or a delegate class loader that defaults to the class loader that loaded the StandardClassLoaderFactory class. This standard factory ignores any class loader names that are supplied. The Maven class loader factory is also able to use a JCR repository that contains the equivalent contents of a Maven repository. However, JBoss DNA doesn't currently have any tooling to help populate that repository, so this component may be of limited use right now.
In this chapter, we described the framework used by JBoss DNA to load extension classes, like implementations of repositories, sequencers, MIME type detectors, and other components. Next, we cover how JBoss security works and how the various components of JBoss DNA can access this security information as well as information about the environment in which the component is running.
http://docs.jboss.org/jbossdna/0.3/manuals/reference/html/classloaders.html
2014-04-16T08:36:36
CC-MAIN-2014-15
1397609521558.37
[]
docs.jboss.org
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=38482&selectedPageVersions=157&selectedPageVersions=156
2014-04-16T08:28:52
CC-MAIN-2014-15
1397609521558.37
[]
docs.codehaus.org
Container support Here are the configurations that currently support DataSource or Resource configuration: Notes: - Datasource support for OW2 JOnAS container has been introduced in Cargo version 1.1.1. - Datasource support for GlassFish 3.x container has been introduced in Cargo version 1.1.3. - means cargo.resource.
http://docs.codehaus.org/pages/viewpage.action?pageId=227050101
2014-04-16T07:55:27
CC-MAIN-2014-15
1397609521558.37
[]
docs.codehaus.org