content | url | timestamp | dump | segment | image_urls | netloc |
---|---|---|---|---|---|---|
Microsoft Ignite 2017: Common Questions About Microsoft Teams
I work in the One Commercial Partner (OCP) organization at Microsoft, where I help many customers and Microsoft Partners answer deep technical questions, review technical designs, and ensure successful implementations of Office 365 products. As part of this role, I recently worked in the Expo area for Microsoft Teams and Skype for Business at the Microsoft Ignite 2017 conference held in Orlando, Florida, September 25-29, 2017.
I was asked what seemed like hundreds of great questions during the week and enjoyed everything from answering the smallest concern to diagramming potential solutions for complex customer scenarios. Although a wide variety of questions were asked, I identified several common ones. I have listed those questions below with additional information for all of our customers to review. If you have additional questions, please add them to the comments below and I will provide additional information.
What is Microsoft Teams?
Microsoft Teams is a modern chat-based workspace in Office 365 offering threaded and persistent chat for anyone working together. Instead of quoting all of the great features of Microsoft Teams, please review the overview of Microsoft Teams here along with the embedded video. This is truly an evolution in collaboration tools, demonstrated by its popularity: Microsoft Teams is being used by 125,000 organizations less than six months after its launch.
What is the plan for Skype for Business and Microsoft Teams?
Microsoft is taking the capabilities of Skype for Business and combining them with Microsoft Teams for a fully collaborative, single-client experience (reference). As stated on the Microsoft Teams FAQ Journey site, "Microsoft has no current plans to schedule upgrades for enterprise customers. Customers can choose to move to Microsoft Teams as the capabilities meet their business needs." For customers who wish to continue using Skype for Business Server and a hybrid model, a refreshed server version is planned for release in the second half of 2018. For a demonstration of the many new features coming to Microsoft Teams, as well as information on how Skype for Business and Microsoft Teams will work together, the upcoming administration center, the rollout of Microsoft Teams features to Skype for Business users, and more, please review the entire announcement video from Microsoft Ignite 2017. Additionally, a new site was launched at Microsoft Ignite that contains a wealth of information on this topic.
When will a user in Microsoft Teams and a user in Microsoft Skype for Business be able to communicate within the same Office 365 tenant?
Also announced at Microsoft Ignite, universal presence, messaging, and calling between the two clients is being introduced. More details are available here.
When will the Microsoft Government Community Cloud (GCC) offer Microsoft Teams?
Microsoft Teams is on the roadmap to be offered in the Government Community Cloud and will be available once the required compliance has been achieved. For the latest information on the Office 365 roadmap, review this link; the information in this post reflects the site as of September 27, 2017.
Although most of my time at Microsoft Ignite was spent working with our many fantastic customers and partners, I was able to attend several interesting sessions about Microsoft Teams and Skype for Business, which I recommend below. These videos, along with the links above, are especially informative for readers looking for more information about the Microsoft Teams and Skype for Business announcements.
Microsoft 365: Transform your communications with Microsoft Teams and Skype for Business
Session Description:
View: Microsoft 365: Transform your communications with Microsoft Teams and Skype for Business
Skype for Business and the vision for Unified Communications
Session Description:
View: Teams On Air: Ep 52 Skype for Business & the vision for unified communications
Note: I posted a blog about several Skype for Business futures videos released a few years ago in this link.
Migrating from your Avaya PBX to Skype for Business Voice
Session Description:
View: Migrating from your Avaya PBX to Skype for Business Voice
Please add to this list of recommended sessions in the comment area below!
Below is a picture of me assisting our many great customers and partners at Microsoft Ignite 2017 using an 84" Surface Hub! | https://docs.microsoft.com/en-us/archive/blogs/cloudready/microsoft-ignite-2017-common-questions-about-microsoft-teams | 2020-02-17T11:00:19 | CC-MAIN-2020-10 | 1581875141806.26 | [https://msdnshared.blob.core.windows.net/media/2017/10/100117_1642_MicrosoftIg1.png] | docs.microsoft.com |
Open the Windows Store
In the search box, search for Company Portal
Find the Company Portal app and install it
Once installed, launch the Company Portal app
You’ll be prompted to enter your domain credentials
As we’re using ADFS, the logon process will redirect you to your on-prem ADFS server
Now enter your password
Once you’ve logged in, you’ll be reminded to enrol your device prior to applications being made available
Now browse back to the Desktop, and open the Change PC Settings modern settings menu
Browse into the Network subset
And select Workplace
Enter your User ID and press the Turn on button to start the enrolment process
The device enrolment will look up your enrolment services based on the username you've entered
And will prompt for your domain credentials. You can see here it’s utilizing our ADFS server for logon.
Agree to the terms of service, and press Turn on
Once this has been completed, the Turn on button will change to a Turn off button.
Now launch the Company Portal from your Apps menu
And we can now see that we’re enrolled and my current devices are listed!
Finally, we can see the client appear in the ConfigMgr console | https://docs.microsoft.com/en-us/archive/blogs/configmgrdogs/the-ultimate-intune-setup-guide-stage-5-enrol-your-devices-windows-8-1 | 2020-02-17T11:05:39 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
In a New Relic Alerts policy, a condition identifies what triggers an alert. You can use the New Relic REST API to disable and enable Alerts conditions. You can also disable and enable Alerts conditions in the New Relic UI.
Alerts policies cannot be enabled or disabled, whether via the API or the UI. Policies can only be created, deleted, or have their conditions changed.
Requirements
Modifying any attribute in an Alerts condition using the API requires:
- An Admin User's API Key
- The condition's id (available from API Explorer: Alerts Conditions > GET > List)
- If your account hosts data in the EU data center, ensure you are using the proper API endpoints for EU region accounts.
Enable and disable an Alerts condition
The process for disabling or enabling an Alerts condition is the same general process used for changing any attribute in an Alerts condition.
- Details on searching for condition ID
If you don't know the category of the condition you want to change, you must search for it by making API calls using the four condition categories. Here are the different API call formats for the various condition categories.
- APM, Browser, and Mobile
Conditions available: apm_app_metric, apm_kt_metric, browser_metric, and mobile_metric
API Explorer link Get>List
- External services
Conditions available: apm_external_service, mobile_external_service
API Explorer link Get>List
- Synthetics
API Explorer link Get>List
- Plugins
API Explorer link Get>List
Make an Update API request for the condition you want to change. Different New Relic products require different API requests.
- Details on Update API requests
Use the Update API request that corresponds to the product in question:
- Conditions for APM, Browser, and Mobile
Conditions available: apm_app_metric, apm_kt_metric, browser_metric, and mobile_metric
API Explorer PUT>Update link
- Conditions for external services
Conditions available: apm_external_service, mobile_external_service
API Explorer PUT>Update
- Conditions for Synthetics
API Explorer PUT>Update
- Conditions for Plugins
API Explorer PUT>Update
An Update API request can only change one condition at a time; it cannot update a vector of objects. For example, to change three conditions, you will have to make three separate requests.
Example: Disable an APM condition
The following example shows how to disable a New Relic Alerts condition for an apm_app_metric condition. With the exception of the types of API calls required, the process is similar to the process for changing other condition types.
Obtain the policy_id of the Alerts policy you want to update. For an imaginary policy named Logjam Alert, the command would be:
curl -X GET '' \ -H 'X-Api-Key:{YOUR_API_KEY}' -i \ -G --data-urlencode 'filter[name]= Logjam Alert' <---<<< {policy_name}
The output for this request is a JSON document describing the matching policy. Note the policy's id value in that output; you need it to list the policy's conditions and then to update the specific condition you want to disable or enable.
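To make the full flow concrete, here is a minimal Python sketch of the same procedure: look up the policy, list its conditions, then send an Update request with enabled set to false. It is an illustrative sketch only; the api.newrelic.com v2 endpoints and the exact payload shape are assumptions to verify in the API Explorer, and EU region accounts use different endpoints as noted above.

import requests

API_KEY = "YOUR_ADMIN_USER_API_KEY"             # Admin User's API key
BASE = "https://api.newrelic.com/v2"            # assumed US endpoint; EU accounts differ
HEADERS = {"X-Api-Key": API_KEY, "Content-Type": "application/json"}

# 1. Find the policy id by name (the Python equivalent of the curl call above).
policies = requests.get(BASE + "/alerts_policies.json", headers=HEADERS,
                        params={"filter[name]": "Logjam Alert"}).json()
policy_id = policies["policies"][0]["id"]

# 2. List the conditions attached to that policy (APM/Browser/Mobile category shown).
conditions = requests.get(BASE + "/alerts_conditions.json", headers=HEADERS,
                          params={"policy_id": policy_id}).json()
condition = conditions["conditions"][0]         # pick the condition you want to change
condition_id = condition["id"]

# 3. Send an Update request with the condition disabled.
condition["enabled"] = False
requests.put(BASE + "/alerts_conditions/%s.json" % condition_id,
             headers=HEADERS, json={"condition": condition})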
For more help
Additional documentation resources include:
- API calls for New Relic Alerts (list of all API calls available for Alerts)
- Using the API Explorer (using the API Explorer's user interface to get data in and data out of New Relic)
- Parts of the API Explorer (a quick reference for how to use each section of the API Explorer) | https://docs.newrelic.com/docs/alerts/rest-api-alerts/new-relic-alerts-rest-api/disable-enable-alerts-conditions-using-api | 2020-02-17T11:32:20 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.newrelic.com |
UDN
UnrealEd User Guide
- Unreal Editor User Guide
- Overview
- Opening the Editor
- Creating a Level
- Level Editor Overview
- Working with Levels
- Placing Actors
- Selecting Actors
- Transforming Actors
- Detail Mode
- Editor Performance
Overview
UnrealEd is a suite of tools for working with content in the Unreal Engine. At the core, it is used for level design; but contained within are editors and browsers for importing and manipulating content for your game project. This document focuses on the level editing capabilities of UnrealEd, for information on other aspects see Browsers and Editor Tools.
Opening the Editor
The general method used to open Unreal Editor is to locate the appropriate executable (.exe) for your game project in the Binaries directory, and run it with the editor command line argument. As an example, the UDK project has a compiled executable called UDK.exe in the Binaries directory. To open the editor for UDK from a command prompt, use the following command:
UDK editor
Creating a shortcut to the game's executable and adding the editor command line argument is a great way to provide easy access to the editor if you prefer not to go through the UnrealFrontend application or use a command prompt.
Creating a Level
Unreal Editor is comprised of a great deal of tools, some more complex than others. It might benefit you to walk through the process of creating a simple level before trying to wrap your head around every single menu, button, tool, and command available. For an introduction on creating a simple level, see the Creating Levels page.
Level Editor Overview
The Unreal Level Editor is the core editing tool in UnrealEd. It is in this editor that worlds or levels are created - levels that are built from BSP and Static Mesh geometry, as well as Terrain. They contain Lights and other Actors, AI Path Nodes, and Scripted Sequences.
Editor Interface
The level editor window is divided into several parts of its own:
Menu Bar
The menu bar in the level editor should be familiar to anyone who has used Windows applications previously. It provides access to a great deal of tools and commands that are needed when progressing through the process of creating levels with UnrealEd. For more information about the Menu Bar, see the Editor Menu Bar page.
Tool Bar
The toolbar, like in most applications, is a group of commands providing quick access to commonly used tools and commands. Many of the items found in the menus of the level editor can also be found as buttons in the toolbar. For more information about the Tool Bar, see the Editor Tool Bar page.
Toolbox
The toolbox is a set of tools used to control the mode the level editor is currently in, reshape the builder brush, create new BSP geometry and volumes, and control visibility and selection of actors within the viewports. For more information about the Toolbox, see the Editor Toolbox page.
Viewports
The viewports of the level editor are your windows into the worlds you create in UnrealEd. Offering multiple orthographic views (Top, Side, Front) and a perspective view, you have complete control over what you see as well as how you see it. For more information about the viewport toolbar in UnrealEd, see the Viewport Tool Bar page. For a guide to the different View Modes in UnrealEd, see the View Modes page. For a guide to the different Show Flags in UnrealEd, see the Show Flags page.
Controls
Knowing the viewport navigation controls as well as the keyboard controls and hotkeys can help to speed up your workflow and save time in the long run.
Mouse Controls
For a list of mouse controls see the Editor Buttons page.
Keyboard Controls
For a list of keyboard controls see the Editor Buttons page.
Hot Keys
For a summary of how to bind editor hotkeys and create new editor hotkey commands, see the Editor Hot Keys page.
Working with Levels
The level creation process can be boiled down to a few integral tasks: placing actors, selecting actors, transforming actors, modifying actors. In other words, to create a level, actors will be placed into a map, moved around to create an environment, and their properties will be modified to cause them to look or behave appropriately.
Placing Actors
Each map begins as a blank slate. To build the desired environment or populate the world, actors must be placed in the map. There are a few different ways this can be done, but they all result in a new instance of a certain class being created which can then be moved around or have its properties modified. The different methods of placing new actors into a map are detailed below.
From Content Browser
Certain types of assets can be selected in the Content Browser and then assigned to a new instance of an appropriate type of actor by right-clicking in one of the viewports and choosing "Add Actor >". This will display a flyout menu with a list of possible actor types to add using the selected asset.
- Static Meshes
- Skeletal Meshes
- Physics Assets
- Fractured Static Meshes
- Particle System
- Speed Trees
- Sound Cue
- Sound Wave
- Lens Flare
Drag and Drop
In addition to being able to add specific types of actors from the Content Browser to a map through the viewport context menu, these can also be added simply by dragging an asset from the Content Browser and dropping it onto one of the viewports in the location you would like the actor to be placed. When doing so, the cursor will change so that you know that type of asset can be dropped onto a viewport.
- Static Meshes - places a StaticMeshActor
- Skeletal Meshes - Places a SkeletalMeshActor
- Physics Assets - Places a KAsset
- Fractured Static Meshes - Places a FracturedStaticMeshActor
- Particle System - Places an Emitter
- Speed Trees - Places a SpeedTreeActor
- Sound Cue - Places an AmbientSound
- Sound Wave - Places an AmbientSoundSimple
- Lens Flare - Places a LensFlareSource
From Actor Browser
While drag and drop is extremely efficient and easy to use, it only works for a specific subset of types of actors. Any and all types of placeable actors (shown in bold in the Actor Browser) can be added to a map by selecting a class of actor in the Actor Browser, right-clicking in one of the viewports, and choosing "Add New [ActorClass] Here…"
Selecting Actors
Selecting actors, while simple in nature, is a very important part of the level editing process. If you can't quickly select the correct group of actors, the process gets slowed down and productivity decreases. There are many different ways to select actors, or groups of actors. Each of these is detailed below.
Simple Selection
The most basic method of selecting actors is simply to left-click on them in the viewport. Each click on an actor will deselect the currently selected actor and select the new one. If the Ctrl key is held down while clicking on a new actor, the new actor is added to the selection.
Marquee Selection
A marquee selection is a quick way to select or deselect a group of actors within a certain area. This type of selection involves holding down a combination of keys, clicking one of the mouse buttons, and dragging the mouse cursor to create a box. All the actors within the box will be selected or deselected depending on the combination of keys and the mouse button that is clicked.
- Ctrl + Alt + LMB - Replaces the current selection with the actors contained in the box.
- Ctrl + Alt + Shift + LMB - Adds the actors contained in the box to the current selection.
- Ctrl + Alt + Shift + RMB - Removes any select actors in the box from the current selection.
Select by Class / Select by Asset / Select by Property
These types of selection allow you to select a group of actors depending on the class of actor, the use of a specific asset, or having a specific value of a certain property. These will require an actor to already be selected (in the case of selecting by class or asset) or a property and value to have been made active. These selections can be made through the viewport context menu.
Transforming Actors
Transforming actors refers to the moving, rotating, or scaling of the actors. It shouldn't be any surprise that this is a huge part of the level editing process. There are two basic ways to transform actors in the level editor.
Manual Transformation
The first method involves changing the values manually in the Property Window. You have access to all of the important transform properties of any actor in the Property Window under the Movement section (Location and Rotation) and the Display section (Draw Scale and Draw Scale 3D). While this certainly gives you the most fine-grained control over the placement, orientation, and size of an actor, it is far from intuitive and often leads to a great deal of trial and error while changing the values over and over.
Interactive Transformation
The second method involves the use of a visual tool displayed in the viewport that allows you to use the mouse to move, rotate, and scale the actor interactively directly in the viewport. This has the exact opposite pros and cons of the manual method. While it is extremely intuitive to use this method, it can be far from precise, which is necessary in some circumstances. The drag grid, rotation grid, and scale grid can help in this aspect. The ability to snap to known values or in known increments allows for more precise control. This method also allows you to choose the reference coordinate system you wish to use when doing the transformation. This means you can transform the actor in world space, along the world axes, or you can transform the actor in its own local space, along its local axes. This provides a lot more flexibility and would be close to impossible to do simply by setting the values manually; at least without doing a great deal of complex calculations first.
Translation Widget
The translation widget consists of a set of color-coded arrows pointing down the positive direction of each axis in the world. Each of these arrows is essentially a handle that can be grabbed (by left-clicking the mouse on it) and dragged to move the selected actor(s) along that particular axis. When the mouse is over one of the handles, the cursor will change and the handle will turn yellow, signifying that clicking and dragging will move the object along that axis.
Rotation Widget
The rotation widget is a set of three color-coded circles, each associated with one of the axes, that when grabbed and dragged will cause the selected actor(s) to rotate around the associated axis. In the case of the rotation widget, the axis affected by any one of the circles involved is the one perpendicular to the circle itself. This means that the circle aligned to the XY plane will actually rotate the actor around the Z-axis. As with the translation widget, when the mouse hovers over a particular circle, the cursor will change and that circle will turn yellow.
Uniform Scale Widget
The uniform scale widget is a set of three handles that when grabbed and dragged will scale the selected actor in all three axes at the same time uniformly. It is similar in appearance to the translation widget except that the ends of each handle are small cubes instead of arrows. Also, since grabbing any of the handles affects all three axes, all the handles are simply colored red instead of being colored according to a particular axis.
Non-Uniform Scale Widget
The non-uniform scale widget is almost identical to the uniform scale widget except that the individual handles, when grabbed and dragged, will cause the selected actor to be scaled only along the associated axis. As such, the handles are color-coded in a similar fashion to the translation and rotation widgets. Also, this widget has the ability to scale in two axes at the same time in much the same manner that the translation widget can move along a plane.
Detail Mode
Unreal Engine 3 provides the ability to use a Detail Mode setting to assign actors within a map to only be displayed when the Detail Mode of the engine is at or below that level. This is extremely useful for making levels which have the ability to run well on various hardware configurations. Actors which are solely used to increase detail and are not necessary at all to the gameplay of the level might be set to only appear at the highest Detail Mode. Actors which are absolutely essential to gameplay would always be set to the lowest Detail Mode, ensuring they are always visible no matter what the specifications of the user's system may be. Setting the Detail Mode of an actor can be performed in one of two ways. The first is to select the actor in the viewport, right-click the selected actor to open the viewport context menu, select LOD Operations > Set Detail Mode, and choose the Detail Mode (low, medium or high).
Editor Performance
The editor can be extremely slow in large levels, especially when there are a lot of Actors on the screen. Here are some settings you can tweak to improve editor framerate. For Single Player levels, the single best option is Level Streaming Volume Previs.
- Distance to far clipping plane - self-explanatory; a useful quick-fix. It's the only slider bar.
- Turn off realtime update - self explanatory.
- G mode - hides all editor debug information. Results may vary.
- Have lighting built - not always possible, but unbuilt lighting will make the editor much slower.
- Unlit movement - self explanatory.
- Level Streaming Volume Previs - see the note above; the single best option for Single Player levels. | https://docs.unrealengine.com/udk/Three/UnrealEdUserGuide.html | 2017-08-16T21:40:19 | CC-MAIN-2017-34 | 1502886102663.36 | [figures: editor layout, Content Browser and Actor Browser placement, drag and drop, marquee selection, select by class, transform widgets (translate, rotate, uniform and non-uniform scale, world/local space), detail mode, level streaming volume previs, unlit movement] | docs.unrealengine.com |
API documentation
Class based access
Exceptions
Function based access
Additionally the top-level functions are available, if you do not wish to use the classes.
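A minimal usage sketch, assuming the top-level validate/convert helpers and the Isbn class that this API reference documents (check the generated signatures above for the exact names and arguments):

import pyisbn

# Function-based access: validate an ISBN and convert between ISBN-10 and ISBN-13.
print(pyisbn.validate("9780521871723"))    # True for a well-formed ISBN-13
print(pyisbn.convert("0521871727"))        # ISBN-10 converted to its ISBN-13 form

# Class-based access: wrap the number once and reuse it.
book = pyisbn.Isbn("9780521871723")
print(book.validate())
print(book.convert())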
Note
While the layout of this module is a result of it moving from a strictly function-based layout to a class-based layout, these functions will not be removed. Backwards compatibility is important, and will be maintained. | http://pyisbn.readthedocs.io/en/latest/api/index.html | 2017-02-19T23:12:50 | CC-MAIN-2017-09 | 1487501170286.6 | [] | pyisbn.readthedocs.io |
Sikuli / SikuliX Documentation for version 1.1+ (2014 and later)
This document is being maintained by Raimund Hocke aka RaiMan.
If you have any questions or ideas about this document, you are welcome to contact him directly via the above link and the email behind.
QuickStart information and other possibilities to get in contact: sikulix.com.
For questions regarding the functions and features of Sikuli itself, that are not answered sufficiently in this documentation, please use the Sikuli Questions and Answers Board.
For hints and links of how to get more information and help, please see the sidebar.
Documentation for previous versions
Might not be available any longer without notice or might not work properly. Feel free to post a bug in case.
The documentation for the versions up to SikuliX-1.0rc3 is still available here.
How to use this document
SikuliX at the top supports scripting via SikuliX IDE (a basic script editor to load, edit, save and run scripts including the creation/organization of the needed images for your visual workflow).
Supported scripting languages:
- Python (language level 2.7) supported by the Jython interpreter.
- Ruby (language level 1.9/2.0) supported by the JRuby interpreter.
- JavaScript supported by the Java builtin scripting engine (Java 7: Rhino, Java 8: Nashorn).
If you are new to programming, you can still enjoy using SikuliX to automate simple repetitive tasks without learning one of the supported scripting languages using the SikuliX IDE.
A good start might be to have a look at the tutorials.
If you plan to write more complex scripts, which might even be structured in classes and modules, you have to dive into the Python Language, the Ruby Language or JavaScript.
NOTE: Since Jython and JRuby are based on Java, the modules available for Python or Ruby might not be available in the SikuliX environment. So before trying to use any non-standard modules or extension packages, you have to check whether they are supported in this SikuliX environment.
NOTE on Java usage The features in SikuliX at the bottom line are implemented with Java. So you might as well use SikuliX at this Java API level in your Java projects or other Java aware environments (see how to). Though this documentation is targeted at the scripting people it contains basic information about the Java level API as well at places, where there are major differences between the two API level.
Additionally you might look through the JavaDocs (temporary location).
Each chapter in this documentation briefly describes a class or a group of methods regarding their basic features. General usage information is provided, along with hints that apply to all methods in that chapter. We recommend reading carefully in this sequence: Region, then Match, and finally Screen.
After that, you can go to any places of interest using the table of contents or use the Index to browse all classes, methods and functions in alphabetical order.
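As a quick taste before diving into the Region, Match and Screen chapters, here is a minimal Jython-style SikuliX script sketch. The image file names are placeholder assumptions; capture your own screenshots with the IDE.

# Runs inside the SikuliX IDE / Jython environment, where the Screen-level
# functions (wait, click, type, exists, popup) are available as globals.
wait("login_button.png", 10)                 # wait up to 10 seconds for the button to appear
click("login_button.png")                    # click the best match found on the screen
type("myuser" + Key.TAB + "mypassword")      # type text, jumping between fields with TAB
if exists("error_popup.png"):
    popup("Login failed - check the credentials")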
SikuliX - how does it work and system specific information
SikuliX scripting and usage in programming scenarios (preferably Java based)
- SikuliX - general aspects of scripting
- Using JavaScript
- Using Python
- Using Ruby
- Using SikuliX in Java programming
- Using SikuliX in non-Java programming scenarios
- Using RobotFramework
SikuliX IDE create/run scripts and organize your images
SikuliX API for scripting and Java programming
- General features regarding scripting and image handling
- Controlling Sikuli Scripts and their Behavior
- Writing and redirecting log and debug messages
- File and Path handling - convenience functions
- Image Search Path - where SikuliX looks for image files
- Importing other Sikuli Scripts (reuse code and images)
- Running scripts and snippets from within other scripts and run scripts one after the other
- Interacting with the User and other Applications
- General Settings and Access to Environment Information
- Region (rectangular pixel area on a screen)
- Create a Region, Set and Get Attributes
- Get evenly sized parts of a Region (as rows, columns and cells based on a raster)
- Extend Regions and create new Regions based on existing Regions
- Finding inside a Region and Waiting for a Visual Event
- Observing Visual Events in a Region
- Acting on a Region
- Extracting Text from a Region
- Low-level Mouse and Keyboard Actions
- Exception FindFailed
- Grouping Method Calls ( with Region: )
- Location
- Screen
- Pattern
- Match
- Finder
- Key Constants
- The Application Class (App)
Miscellaneous
- Can I do X or Y or Z in SikuliX?
- How to run SikuliX from Command Line
- Command Line Options (generally, debug output related)
- Command Line Options (special)
- Command Line Options (intention: IDE should open)
- Command Line Options (intention: run a script without opening the IDE)
- Command Line Options (intention: run the experimental scriptrun server)
- Command Line Options (intention: provide user parameters for running scripts)
- How to use SikuliX API in your JAVA programs or Java aware scripting
- Extensions | http://sikulix-2014.readthedocs.io/en/latest/ | 2017-02-19T23:14:58 | CC-MAIN-2017-09 | 1487501170286.6 | [] | sikulix-2014.readthedocs.io |
This guide is intended for any person or group who serves as the active administrator(s) for a RightScale account or set of accounts within an organization. It is recommended that you carefully read each section of this guide to ensure that you are following best practices for properly setting up a RightScale account(s) for use within your organization.
Prerequisites: 'admin' user role privileges are required. Only another user with 'admin' privileges can grant another user 'admin' access. If you do not have 'admin' access to the RightScale account, request access from the owner of the account (i.e., the person who created the RightScale account). | http://docs.rightscale.com/cm/administrators_guide/index.html | 2017-02-19T23:16:16 | CC-MAIN-2017-09 | 1487501170286.6 | [] | docs.rightscale.com |
Internals » The 'Model'
The way Bolt handles its ContentTypes is defined in the
contenttypes.yml file,
which in turn determines the data-structure of the website.
Basically, whatever is defined in the ContentType gets added as columns to the
database that's configured in
config.yml.
Whenever the 'dashboard' is displayed, Bolt checks if the definitions in
contenttypes.yml match the database columns, and if they don't, it urges
the user to go to the 'repair database' screen.
Even though Bolt strives to be as simple as possible, it makes sense to think of Bolt as an MVC application. Silex provides the Controller part, the Twig templates are the View and the ContentTypes define the Model part.
All access to the content and the ContentTypes is done through the Storage class.
Records of content have a Content class. Browse the files
src/Storage.php
and
src/Content.php for details.
| https://docs.bolt.cm/3.0/internals/the-model | 2017-02-19T23:21:55 | CC-MAIN-2017-09 | 1487501170286.6 | [] | docs.bolt.cm |
SoftLayer provides cloud, dedicated, and managed hosting through integrated computing environments, with data centers in Dallas, Houston, San Jose, Seattle, Washington D.C., Toronto, Asia-Pacific (via Singapore, Hong Kong, Melbourne), and Europe (via Amsterdam, Paris, London, and Milan), and network Points of Presence worldwide.
RightScale Support for SoftLayer
Supported Features
Currently, you can manage your SoftLayer assets by stopping and starting your instances.
- Instances (Virtual Server in SoftLayer parlance)
- Images
- Instance Types
- Datacenters / Zones
- Managed SSH
- SSH Keys
- Public and Private VLANs
Supported RightScale Objects
Published ServerTemplates
Machine Images
RightScale supports launching instances using the stock images supplied by SoftLayer. One can also bundle RightLink agent (RL v6.3.3 or RL v10.1.x) to create their own version of RightImages that can then be used with RightScale Server Templates.
Ubuntu
- Ubuntu 10.04, 12.04, 14.04
CentOS
- CentOS 5, 6, 7
RedHat
- RHEL 5, 6, 7
Windows
- 2008 R2, 2012 R2
Instance Types
SoftLayer does not provide pre-configured typical instance types per se; instead, it offers the ability to select CPU/core count and memory capacity when creating hourly compute instances. RightScale provides a few pre-created instance types that can be used with your SoftLayer account. These instance types can be seen under the Clouds > SoftLayer > Instance Types menu in the RightScale Dashboard, and their resources are explained in the following table:
Note: Extra storage can be added to the instance. One can also rebundle images with extra storage.
Private and Public VLANs
RightScale supports launching of servers and instances into VLANs configured in the customer account in various datacenters. Servers and instances can be launched in either private-only VLANs (by specifying only the private VLAN at the time of server launch) or with multiple private and public interfaces. Instead of specifying a public and private VLAN, one can also choose to use cloud-default as a selection. If cloud-default is selected, SoftLayer will launch the instance in a public and private VLAN based on the VLANs configured for the account.
Please see Clouds->SoftLayer->Instances->New and the drop-down box for Subnets, on the RightScale Dashboard.
Known Issues and Limitations
Enable Private Network Spanning
Important
- With SoftLayer accounts, the Enable Private Network Spanning setting must be enabled. Please make this change in the SoftLayer portal:
- Click the private network link, where you will see a link called Enable Private Network Spanning. This will allow you to enable your VLAN spanning. If this is not enabled, you will see inconsistent behavior depending on what subnet the VM launches with.
- For more information see SoftLayer's Cross connects between two or more servers.
Defect Support
To report any bugs related to RightScale SoftLayer support, please raise a support ticket from the Dashboard or email [email protected]. | http://docs.rightscale.com/clouds/softlayer/softlayer_about.html | 2017-02-19T23:16:56 | CC-MAIN-2017-09 | 1487501170286.6 | [] | docs.rightscale.com |
I get a “Permission matching query does not exist” exception
Sometimes Django decides not to install the default permissions for a model and thus the change_profile permission goes missing. To fix this, run the check_permissions management command (python manage.py check_permissions) described in Commands. This checks all permissions and adds those that are missing.
I get a “Site matching query does not exist.” exception
This means that your settings.SITE_ID value is incorrect. See the instructions on SITE_ID in the Installation section.
<ProfileModel> is already registered exception
Userena already registered your profile model for you. If you want to customize the profile model, you can do so by registering your profile as follows:
# Unregister userena's
admin.site.unregister(YOUR_PROFILE_MODEL)

# Register your own admin class and attach it to the model
admin.site.register(YOUR_PROFILE_MODEL, YOUR_PROFILE_ADMIN)
Can I still add users manually?
Yes, but Userena requires there to be a UserenaSignup object for every registered user. If it’s not there, you could receive the following error:
Exception Type: DoesNotExist at /accounts/mynewuser/email/
So, whenever you are manually creating a user (outside of Userena), don’t forget to also create a UserenaSignup object.
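For example, a minimal sketch of creating a user outside the normal signup flow. The helper shown, UserenaSignup.objects.create_user, is the same manager method userena's own signup form calls (see the save() override further below); treat the exact keyword arguments as assumptions and check them against your installed version.

from userena.models import UserenaSignup

# Creates the Django User, the UserenaSignup record and the profile in one go,
# so account views such as /accounts/<user>/email/ keep working.
user = UserenaSignup.objects.create_user(
    username='mynewuser',
    email='[email protected]',
    password='s3cret',
    active=True,       # assumption: skip the activation step
    send_email=False,  # assumption: don't send the signup email
)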
How can I have multiple profiles per user?
One way to do this is by overriding the save method on SignupForm with your own form, extending userena's form, and supplying this form to the signup view. For example:
def save(self):
    """ My extra profile """
    # Let userena do its thing
    user = super(SignupForm, self).save()

    # You do all the logic needed for your own extra profile
    custom_profile = ExtraProfile()
    custom_profile.extra_field = self.cleaned_data['field']
    custom_profile.save()

    # Always return the new user
    return user
Important to note here is that you should always return the newly created User object. This is something that userena expects. Userena will take care of creating the user and the “standard” profile.
Don’t forget to supply your own form to the signup view by overriding the URL in your urls.py:
(r'^accounts/signup/$', 'userena.views.signup', {'signup_form': SignupExtraProfileForm}),
How do I add extra fields to forms?
This is done by overriding the default form. A demo tells more than a thousand words. So here's how you add the first and last name to the signup form. First you override the signup form and add the fields.
from django import forms
from django.utils.translation import ugettext_lazy as _
from userena.forms import SignupForm

class SignupFormExtra(SignupForm):
    """
    A form to demonstrate how to add extra fields to the signup form, in this
    case adding the first and last name.
    """
    first_name = forms.CharField(label=_(u'First name'),
                                 max_length=30,
                                 required=False)
    last_name = forms.CharField(label=_(u'Last name'),
                                max_length=30,
                                required=False)

    def __init__(self, *args, **kw):
        """
        A bit of hackery to get the first name and last name at the top of the
        form instead at the end.
        """
        super(SignupFormExtra, self).__init__(*args, **kw)
        # Put the first and last name at the top
        new_order = self.fields.keyOrder[:-2]
        new_order.insert(0, 'first_name')
        new_order.insert(1, 'last_name')
        self.fields.keyOrder = new_order

    def save(self):
        """
        Override the save method to save the first and last name to the user
        field.
        """
        # First save the parent form and get the user.
        new_user = super(SignupFormExtra, self).save()

        # Get the profile, the `save` method above creates a profile for each
        # user because it calls the manager method `create_user`.
        # See:
        user_profile = new_user.get_profile()

        user_profile.first_name = self.cleaned_data['first_name']
        user_profile.last_name = self.cleaned_data['last_name']
        user_profile.save()

        # Userena expects to get the new user from this form, so return the
        # new user.
        return new_user
Finally, to use this form instead of our own, override the default URI by placing a new URI above it.
(r'^accounts/signup/$', 'userena.views.signup', {'signup_form': SignupFormExtra}),
That’s all there is to it! | http://django-userena.readthedocs.io/en/latest/faq.html | 2017-02-19T23:28:52 | CC-MAIN-2017-09 | 1487501170286.6 | [] | django-userena.readthedocs.io |
Overview
RightScale will notify customers when features, assets or clouds are nearing End of Support or End of Life stages. Listed below is the schedule of the lifecycle of these items. It will be updated when dates are determined and in accordance with longstanding RightScale EOL policy.
Definitions
End of Support (EOS)
End of Support (EOS) stage marks the official withdrawal of technical support for specific versions of clouds, ServerTemplates, RightImages or other RightScale product features. RightScale Support will provide best-effort support up to the time the feature is removed from the product (End of Life). No new enhancements / bug fixes will be added after EOS date.
End of Life (EOL)
End of Life (EOL) stage marks when the feature or support for specific version is removed at the discretion of engineering. Customers will lose access to the discontinued feature on a specified date.
Schedule
The following timetable shows various clouds/product features currently supported by RightScale and EOL timelines. Contact RightScale support ([email protected]) if you have questions or concerns. | http://docs.rightscale.com/faq/end_of_life_end_of_service.html | 2017-02-19T23:16:24 | CC-MAIN-2017-09 | 1487501170286.6 | [] | docs.rightscale.com |
Creating Dynamic Animations
One way you can improve the realism in your character's movement is to provide dynamic animations for items that they may be carrying or wearing. With AnimDynamics those pieces that you would expect to move around as you move in real life (hair, necklaces, bracelets, swords, pouches, etc.) will bounce around and move while your character moves.
In this How-to we will apply AnimDynamics to a character to achieve the effect seen below:
Above, the character has AnimDynamics applied to the harness and furnace that is being carried around the character's neck. As the character moves, the harness shifts slightly while the furnace has a bit more movement forwards/backwards. The amount of movement can be adjusted via the Details panel of the AnimDynamics node to achieve the effect you are looking for. Additional constraints can be added as well to manipulate just how the bones move.
If you already have a character with an AnimBlueprint and bones ready to drive with AnimDynamics, you can proceed to step 2. | https://docs.unrealengine.com/latest/INT/Engine/Animation/AnimHowTo/AnimDynamics/index.html | 2017-02-19T23:18:06 | CC-MAIN-2017-09 | 1487501170286.6 | [] | docs.unrealengine.com |
BlackBerry Curve 9350/9360/9370 Smartphones User Guide - BlackBerry Curve Series - 7.1
About hearing aid mode
In hearing aid mode, or telecoil mode, the magnetic signal of your BlackBerry smartphone is modified to an appropriate level and frequency response to be picked up by hearing aids that are equipped with telecoils.
| http://docs.blackberry.com/en/smartphone_users/deliverables/38106/1585853.jsp | 2014-04-16T08:59:49 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.blackberry.com |
Mission
Cargo is a thin wrapper that allows you to manipulate Java EE containers in a standard way.
Tools
Cargo provides the following Tools and APIs:
- A Java API to start/stop/configure Java EE containers
- Introduction to the Cargo Daemon
Development Status
Current Versions
You can click on the version number to access the Downloads page. | http://docs.codehaus.org/pages/viewpage.action?pageId=231080417 | 2014-04-16T08:23:30 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.codehaus.org |
Roadmap
- History
- RHQ 4.2 (October 31st, 2011)
- RHQ 4.3 (originally planned for late January 2012, released on March 8th, 2012)
- RHQ 4.4 (around Mid April 2012)
- RHQ 4.5 (planned for around mid of August 2012, released on September 27th, 2012)
- RHQ 4.5.1 (released on October 4th 2012)
- Releases in the Works
- Wishlist & Ideas
- Server & Database
- Inventory & Resources
- Alerting & Resource Monitoring
- Provisioning & Content
- Resource Config
- UI
- CLI & Server Scripts
- Resource Scripts
- Performance
- Agent
- Logging & Reports
- Translations for the UI & docs
- Builds
- Backburner
History
All of the previous releases for RHQ are listed with information about new features and bug fixes.
RHQ 4.2 (October 31st, 2011)
- Resource Configuration
- better drift support
- Server Enhancements
- enhanced config sync
- Support for PostgreSQL 9.1
- performance enhancements
- UI Enhancements
- start supporting a REST API, providing an initial set of endpoints to experiment with.
- Resource Support
- enhanced JBoss AS 7 plugin
- bug fixes
RHQ 4.3 (originally planned for late January 2012, released on March 8th, 2012)
- UI Enhancements
- update of GWT / SmartGWT to support more modern browsers
- Bug fixes
- Better REST API support
- Option to dump the system state to the server log in order for diagnosing
RHQ 4.4 (around Mid April 2012)
- Improvements in availability handling
- better as7 support
- increased test coverage
- exporting of reports
- Performance enhancements
RHQ 4.5 (planned for around mid of August 2012, released on September 27th, 2012)
- Plugin I18N
- more REST work
- Changes in charting (replace the old Struts/JSP code with a GWT solution; GSoC work)
- Build with maven 3
- Running on JDK 7
RHQ 4.5.1 (released on October 4th 2012)
- Two bugfixes for RHQ 4.5.0
Releases in the Works
RHQ 4.6 (planned)
- Removal of web services
- Removal of components that have not been built for a long time
Future Releases
These features are almost certainly going into a future release of RHQ. We like 'em.
- TBD
Wishlist & Ideas
These are ideas and things that could be nifty to include but that don't have a plan yet.
Server & Database
- Adding support for MS SQL as a backend: Design - MS SQL Server DB backend
- Add auditing framework for RHQ operations
- Remote server install
- consistent write/read perms for all subsystems, i.e. CONTROL becomes OPERATION_READ and OPERATION_WRITE, just like CONFIGURE became CONFIGURE_READ and CONFIGURE_WRITE
- authorization fixes
- conditionally render tabs or disable/lock tabs? conditionally render buttons or disable/lock buttons?
Inventory & Resources
- Change tree so that relationship other than parent-child can be defined
- Design-Resource Extra Info
- A visual inventory browser
- More managed resource plugins
- what resources could be added?
Alerting & Resource Monitoring
- Add mod_jk monitoring
- What kinds of metrics could be collected?
- Allow for different availability check intervals depending on resource or resource type to e.g. check "important" resources more often.
- A syslog alert sender plugin that can talk to remote syslogd to deliver alert notifications
- Alerts based on total group measurements
- Design - Alerting Improvements
- true group alerting – e.g. "3 of 5 resources are down" or "the average heap usage is <yadda>"
- Combining data from different resources and types to trigger alerts on this combined data
- Allow to create derived metrics in a sense that the metric is the result of some computation like e.g. the difference of created vs. destroyed session bean instances. Or full servlet execution time minus some SLSB method time and so on.
- Design - CallTime 2
- Design - System Events
- Send alerts based on trends
- Track downtime to help monitor SLAs
- Availability alerting – not exactly or not all of what users expect (i.e. down and has been down for 10 minutes)
- Alert sending schedules (bucketing)
- instead of getting alert notification storms, allow the user to configure when he/she should get notifications; so the system will queue up notifications and then send, in a single email, all queued notifications every 10 minutes for that user
- composite alerting
- instead of an alert definition being tied to a single resource or a single group or a single type (alert template), allow each alert condition to be related to a resource / group. this way you can say "if average web server group metric is <foo>" or "if average app server service metric is <bar>" or "if database metric is "yadda" then alert
- complex alerting
- more flexible alert condition processing - today we only support 'AND' vs 'OR' - let's support complex conditions such as (a AND (b OR c))
- in-line call-time instructions
- instead of forcing users to go to our documentation to figure out how to instrument their WARs for call-time data collection, make the web.xml snippet and instructions available in the monitor>response sub-tab itself. this way users go there, realize they haven't instrumented their app, and can easily copy/paste into their application's web.xml
- in-line call-time enablement
- instead of forcing the users to go monitor>schedules to enable call-time data on the server-side for a resource (or group) enable them to click a quick link within the monitor>response tab which will enable the appropriate schedule automatically...or, at the very least, provide a link that takes them to monitor>schedules with instructions on what to do (this means that we should always keep the monitor>response sub-tab enables for resources that support it)
- in-line call-time setup
- instead of forcing users to go to inventory>connection, allow viewing and alteration of the response time properties (log file, url excludes, url transforms) in line with the tabular results
- in-line event source setup
- instead of forcing users to go to inventory>connection, move the creation to a new sub-tab called event>sources (or even in-line the viewing/editing of event sources at the top of the audit trail / table
Provisioning & Content
- A standalone (Swing?) bundle creation tool. Bundles are the way to provision complex software installations onto managed machines. There is some support for creating bundles already (e.g. bundleGen and some cli scripts), but those could become more comfortable for the user to use.
- Finish group-wise content functionality
- Push this EAR out to that cluster; update (in rolling fashion) this clustered WAR
Resource Config
- Comparing different resource configs and copying changes
UI
- Incorporating third-party GUIs into the RHQ GUI
- Better subtab nav / better alert definition subtab view
jshaughn: the default alerts subtab is definitions. Would be nice if this list view gave some indication of alerts generated for the listed defs. today you have to click again and go to the history subtab to see if anything actually happened. maybe not a big deal given that alert notification is probably the primary mechanism for knowing if something happened
joseph: would be nice if users could hover over the alerts main-tab, and the sub-tabs would be auto-visible options to choose from before clicking
ips: jshaughn: +1. in addition to having an Alerts (Count) colun, we could potentially also add a Last Fired column with a dateTime
jshaughn: both of those would be nice
jshaughn: i like the subtab hover a lot, i hate having that multi-step process to get where I'm going
jshaughn: would be very useful for Inventory tab as well
joseph: yeah, preview functionality is not only bad ass, but often times actually useful too ; )
- resource "actions"
- either right-hand side panel (collapsable) vs. pop-up menu somewhere on the content area vs. quick action icon (next to avail check and favorite badge)
- ajax feedback mechanism
- add an "in progress" spinner to all pages when gwt is doing something async - this might be done at the global level for all pages if there are any async things happening, or perhaps it should be relevant to the content area that is doing the async work.
- event>source sub-tab
- move event sources out of plugin config into event>configure sub-tab to sit alongside event>history sub-tab - this will also allow event source reconfiguration without having to restart the agent-side resource component, and allow for a more directed/concise UI for group-wise updates of just the event logging stuff
- search trees
- searchable resource / group trees - a tree showing search results might be a flattened but disambiguated result list, or it might be greyed inner tree nodes
- parameterized web context
- make coregui web context a configurable parameter as part of the build (<address>:7080/<context>/) so that we can have, for example, "/rhq" or "/jon"
CLI & Server Scripts
Resource Scripts
Performance
- Improve the availability reporting lag
- precompute resource disambiguation
- pre-compute resource disambiguation / lineage / hierarchy - the read/write profile is heavily weighted towards the read-access since nearly all fragments of all pages across the site need to be disambiguated. this would also enable us to easily expose lineage data via the remote API, instead of disambiguation being a UI-only thing
Agent
- Agent signaling from the outside
- Design-Agentless Management
- Design-Alternate Agent Models
- Add more flexibility for the remote agent install: Design-RemoteAgentInstall
- Separate agent heartbeat from availability reporting
Logging & Reports
Translations for the UI & docs
- Provide translations of e.g. Alert messages or Installer messages
Builds
- maven build forking
- fix maven build to fork javac for most (all?) modules, this will lower the maximum memory requirement (Xmx) when building from scratch, as well as lower the needed memory when building individual modules (the common case) | https://docs.jboss.org/author/display/RHQ/Roadmap | 2014-04-16T07:36:45 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.jboss.org |
FAQs
This section provides frequently asked questions on the following NetScaler MAS features. Click a feature name below to view the list of FAQs for that feature.
Analytics
How do I enable NetScaler MAS to monitor web-application and virtual-desktop traffic?
Navigate to Infrastructure > Instances, and select the NetScaler.
Note: For NetScaler instances of release 11.0, build 65.30 and above, there is no option on NetScaler MAS to enable Security Insight explicitly. Make sure that you configure the AppFlow parameters on the NetScaler instances, so that NetScaler MAS starts receiving the Security Insight traffic along with the Web Insight traffic. For more information on how to set the AppFlow parameters on NetScaler instances, see To set the AppFlow parameters by using the configuration utility.
After I add the NetScaler instances, does NetScaler MAS automatically start collecting analytical information?
No. You must first enable analytics on the virtual servers hosted in NetScaler instances that are managed by NetScaler MAS. For more details, see How to Enable Analytics on Instances.
Should I access the individual NetScaler appliance for enabling analytics?
No. All configuration is done from the NetScaler MAS user interface, which lists the virtual servers hosted on the specific NetScaler instance. For more details, see How to Enable Analytics on Instances.
What are the types of virtual servers that can be listed on a NetScaler instance to enable analytics?
Currently, the NetScaler MAS user interface lists the following virtual servers for enabling analytics:
- Load balancing virtual server
- Content switching virtual server
- VPN virtual server
- Cache redirection virtual server
How do I attach an additional disk to NetScaler MAS?
To attach an additional disk to NetScaler MAS:
Shut down the NetScaler MAS virtual machine.
In the hypervisor, attach an additional disk of the required disk size to NetScaler MAS virtual machine.
For example, for a NetScaler MAS virtual machine of 120 GB, if you want to increase its disk space to 200 GB, you need to attach a disk of 200 GB instead of 80 GB. The newly attached 200 GB of disk space will be used to store database data and NetScaler MAS log files. The existing 120 GB of disk space will be used to store core files, operating system log files, and so on.
Start the NetScaler MAS virtual machine.
AuthenticationAuthentication
What is load balancing of authentication requests?
The authentication-server load balancing feature enables NetScaler MAS NetScaler MAS.
Do I have an alternative when external authentication fails?
There could be a situation when external authentication completely fails, even when you have cascaded a number of servers. For example, the external servers could become unreachable, or a new user’s credentials might not have been entered in any of, NetScaler MAS accesses the local user database to authenticate your users.
In NetScaler MAS, NetScaler MAS. You have to import user groups once and provide a group permission to a user group rather than importing individual users and giving them individual permissions. You do not have to recreate the users on NetScaler MAS.
Why do we need to assign group permissions?
When you are using the load balancing feature of NetScaler, you can integrate NetScaler MAS with external authentication servers, and import user group information from the authentication servers. Log in to NetScaler MAS and manually create same group information in NetScaler MAS and assign permission to those groups. The user and user group permission is managed in NetScaler MAS and not in the external server. The users have different role-based access permissions on the external servers. Configure the same permissions for the users in NetScaler MAS also. Instead of configuring permissions individually for each user, you can configure a group-level permission so that the user-group members can access specific services on the load balanced virtual servers. The typical permissions that you can assign are permissions to manage NetScaler instances, NetScaler NetScaler MAS.
Configuration ManagementConfiguration Management
Can I perform configuration across multiple NetScaler instances simultaneously using NetScaler MAS?
Yes, you can use configuration jobs to perform configuration across multiple NetScaler instances.
What are configuration jobs on NetScaler MAS?
A job is a set of configuration commands that you can create and run on one or more managed instances. You can create jobs to make configuration changes across instances, replicate configurations on multiple instances on your network, and record-and-play configuration tasks using the NetScaler MAS GUI. You can also convert the recorded tasks into CLI commands.
You can use the Configuration Jobs feature of NetScaler MAS to create a configuration job, send email notifications, and check execution logs of the jobs created.
Can I schedule jobs using built-in templates in NetScaler MAS?
Yes, you can schedule a job by using the built-in template option. A job is a set of configuration commands that you can run on one or more managed instances. For example, you can use the built-in template option to schedule a job to configure syslog servers. You can NetScaler MAS lead to deletion of certificates from NetScaler instances?
No
Event ManagementEvent Management
How can I keep track of all the events that have been generated on my managed NetScaler instances using NetScaler MAS?
As a network administrator, you can view details such as configuration changes, log on conditions, hardware failures, threshold violations, and entity state changes on your NetScaler instances, along with events and their severity on specific instances. You can use the NetScaler MAS events dashboard to view reports generated for critical event severity details on all your NetScaler instances.
What are event rules?
Using NetScaler MAS, you can configure rules to monitor specific events. Event Rules make it easier to monitor a large number of events generated across your NetScaler, NetScaler instances, category, and failure objects. The actions you can assign to the events are sending an email notifications, forwarding SNMP traps from managed NetScaler instances to the NetScaler MAS, and sending an SMS notification.
Instance ManagementInstance Management
What are data centers in NetScaler MAS?
A NetScaler MAS data center is a logical grouping of the NetScaler instances in a specific geographical location. Each server can monitor and manage several NetScaler instances within a data center. You can use the NetScaler MAS server to manage data such as syslog, application traffic flow, and SNMP traps from the managed instances. For more details on configuring data centers, see How to Configure Data Centers for Geomaps in NetScaler MAS.
What are the different Citrix Appliances that are supported by NetScaler MAS?
Instances are the Citrix appliances or virtual appliances that you want to discover, manage, and monitor from NetScaler MAS. You must add these instances to the NetScaler MAS server. at a later time.
What is an instance profile?
An instance profile is used by NetScaler MAS to access a particular instance.
An instance profile contains the user name and password for access to one or more instances. A default profile is available for each instance type. For example, the ns-root-profile is the default profile for NetScaler instances. It contains the default NetScaler administrator credentials. When you change the credentials required for access to instances, you can define custom instance profiles for those instances.
Can we add unlimited SD-WAN instances in NetScaler MAS? Can NetScaler MAS handle all scalar and vector counters for SD-WAN?
Currently, there is no license limit on SD-WAN instances that can be added to NetScaler MAS. NetScaler MAS has a set of built-in reports that internally polls both scalar and vector counters.
Can I rediscover multiple NetScaler VPX instances in NetScaler MAS?
Yes, you can rediscover multiple NetScaler VPX instances in NetScaler MAS NetScaler MAS be installed on NetScaler SDX?
No
StylebooksStylebooks
Can stylebooks be used to configure different NetScaler instances running on different versions of the NetScaler software?
Yes, you can use stylebooks to configure different NetScaler instances running on different versions if there is no discrepancy between the commands across different versions.
When a stylebook is used to configure multiple NetScaler instances at the same time, and configuration of one NetScaler instance fails, what happens?
If applying the configuration to a NetScaler instance fails, the configuration is not applied not any more instances, and already-applied configurations are rolled back.
Do NetScaler backups made through NetScaler MAS include configurations applied through Stylebooks?
Yes
System ManagementSystem Management
Can I assign a host name to my NetScaler MAS server?
Yes, you can assign a host name to identify your NetScaler MAS server. To assign a host name, navigate to System> System Administration > System Settings, and click Change Hostname.
The host name is displayed on the Universal license for NetScaler MAS. For more information, see How to Assign a Host Name to a NetScaler MAS Server.
Can I back up and restore my NetScaler MAS configuration?
Yes, you can back up configuration files (NTP files and SSL certificates), system data, infrastructure and application data, and all your SNMP settings. If your NetScaler MAS ever becomes unstable, you can use the backed up files to restore your NetScaler MAS to a stable state.
To back up and restore your NetScaler MAS’s configuration, navigate to System > Advanced Settings > Backup Files, and click Back Up or Restore as the case may be. For more information, see How to Back Up and Restore Configuration on NetScaler MAS.
Citrix recommends that you use this feature before performing an upgrade or for precautionary reasons.
What are Thresholds and Alerts on NetScaler MAS?
You can set thresholds and alerts to monitor the state of a NetScaler instance and monitor entities on managed instances.
When the value of a counter exceeds the threshold, NetScaler MAS generates an alert to signify a performance-related issue. When the counter value returns to the clear value specified in the threshold, the event is cleared.
Can I generate a technical support file for NetScaler MAS?
Yes. Citrix recommends that you generate an archive of NetScaler MAS NetScaler MAS database.
To configure and send a technical support file, navigate to System > Diagnostics > Technical Support, and then, click Generate Technical Support File. For more information, see How to Generate a Tech Support File for NetScaler MAS. NetScaler Gateway data will be deleted from NetScaler MAS.
Can I configure NTP server on NetScaler MAS?
You can configure a Network Time Protocol (NTP) server in NetScaler MAS to synchronize the NetScaler MAS clock with the NTP server. Configuring an NTP server ensures that the NetScaler MAS clock has the same date and time settings as the other servers on the network.
To configure an NTP server, navigate to System > NTP Servers, and then click Add. For more information, see How to Configure NTP Server on NetScaler MAS.
From which version is the NetScaler MAS active-passive HA deployment supported?
The NetScaler MAS active-passive HA deployment mode is supported from NetScaler MAS version 12.0 build 51.24.
I had a NetScaler MAS active-active HA setup and had configured a NetScaler appliance with load balancing virtual server on it for unified GUI access. How do I update this configuration?
After you upgrade the NetScaler MAS HA pair to active-passive mode, you have to run the following command on the NetScaler appliance to update the load balancing configuration:
add lb monitor MAS_Monitor TCP-ECV -send “GET /mas_health HTTP/1.1\r\nAccept-Encoding: identity\r\nUser-Agent: NetScaler-Monitor\r\nConnection: close\r\n\r\n\”” -recv “{\“statuscode\“:0, \“is_passive\“:0}” -LRTM DISABLED
Can I configure load balancing of the NetScaler MAS HA pair on a Netscaler Instance using port 443?
No, you cannot configure load balancing of the NetScaler MAS HA pair on a NetScaler Instance using port 443.
When you configure the http-ecv and https-ecv monitors on NetScaler, it does not monitor the NetScaler MAS HA nodes correctly.
Can a NetScaler MAS server backup file be used to restore the configuration of another NetScaler MAS server?
Yes
After NetScaler MAS backs up a NetScaler instance, can that backup file be used to restore the configuration of another NetScaler instance through NetScaler MAS?
Yes. Download the NetScaler MAS backup file, upload it into another NetScaler instance’s backup repository, and restore that instance. Make sure that the network information and authentication information do not conflict. For example, check for IP-address or port conflicts, mismatched password profiles. Also make sure that the restored VPX instance has the same NSIP address and NetScaler NetScaler license.
Can we force NetScaler MAS to use a SNIP address to communicate with the NetScaler instances, instead of using the NSIP address of the NetScaler MAS server?
Yes, you can add a SNIP address (with management enabled) in NetScaler MAS for communication with NetScaler instances.
When I back up NetScaler Instances in NetScaler MAS, is the result a full back-up or a basic back-up?
Backups of NetScaler instances by NetScaler MAS are full backups.
Is there a troubleshooting guide for NetScaler MAS?
Yes. See.
How are NetScaler instances managed when a NetScaler MAS HA failover occurs?
If the heartbeat and SSH based check fails, the primary node is considered to be down and the secondary node takes over as the primary node. All the NetScaler instances are updated with the latest primary node details as their SNMP trap destination by default.
The new primary (active) NetScaler MAS NetScaler MAS HA node that went down comes back up?
After returning to service, the NetScaler MAS node remains passive unless the active node fails over
How are NetScaler instances distributed across NetScaler MAS HA nodes?
All the NetScaler instances are managed by the primary NetScaler MAS node.
How are virtual server licenses managed in case of NetScaler MAS HA failover?
If the NetScaler MAS NetScaler MAS HA setup?
No, but if there is no load balancer, NetScaler MAS nodes must be accessed through their own IP addresses. The passive node is marked with the tag “Passive,” and Citrix recommends not to create any configurations on the passive node.
Does NetScaler MAS support an external database?
No
Can a NetScaler instance that is being managed by NetScaler MAS be used as a Load balancer for NetScaler MAS HA?
Yes
What data is synchronized between NetScaler MAS HA nodes?
Complete NetScaler MAS/ | https://docs.citrix.com/en-us/netscaler-mas/12-1/faq.html | 2018-08-14T09:16:54 | CC-MAIN-2018-34 | 1534221208750.9 | [array(['/en-us/netscaler-mas/12-1/media/netscaler-mas-faq.png',
'localized image'], dtype=object) ] | docs.citrix.com |
Hardware Notifications
Windows provides an infrastructure for the hardware-agnostic support of notification components such as LEDs and vibration mechanisms. This support is delivered through the introduction of a Kernel-Mode Driver Framework (KMDF) class extension specifically for hardware notification components that allows for the rapid development of client drivers. A KMDF class extension is essentially a KMDF driver that provides a defined set of functionality for a given class of devices, similar to a port driver in the Windows Driver Model (WDM). This section provides an overview of the architecture of the hardware notification class extension.
For additional information about the KMDF, see Using WDF to Develop a Driver.
To provide support for hardware notifications, you need: | https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/_gpiobtn/ | 2018-08-14T09:06:15 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.microsoft.com |
django-force-logout¶
Framework to be able to forcibly log users out of Django projects¶
This project provides the ability to log specific users out of your Django project.
Whilst you can easily log all users out by clearing your session table or similar blunt technique, due to the difficulty in enumerating specific user’s actual session entries from a user ID, logging-out troublemakers etc. is not really feasible “out of the box”.
You can iterate over all possible sessions and match them against a user, but with a heavy-traffic site with long-lasting sessions this can be prohibitively expensive.
Installation¶
Add django_force_logout.middleware.ForceLogoutMiddleware to MIDDLEWARE_CLASSES.
Configure FORCE_LOGOUT_CALLBACK in your settings to point to a method which, given a User instance, will return a nullable timestamp for that user. This would typically be stored on custom User, profile or some other field depending on your setup.
For example:
def force_logout_callback(user): return user.some_profile_model.force_logout FORCE_LOGOUT_CALLBACK = 'path.to.force_logout_callback'
Alternatively, you can just specify a lambda directly:
FORCE_LOGOUT_CALLBACK = lambda x: x.some_profile_model.force_logout
Important
This callback is executed on every request by a logged-in user. Therefore, it is advisable that you have some sort of caching preventing additional database queries. Remember to ensure that you clear the cached value when wish to log a user out, otherwise you will have to wait for the cache entry to expire before the user will actually be logged out.
You are not restricted to returning a field from a SQL database (Redis may suit your needs better and avoid the caching requirement), but you must return a nullable timestamp.
Usage¶
To forcibly log that user out, simply set your timestamp field to the current time. For example:
user.some_profile_model.force_logout = datetime.datetime.utcnow() user.some_profile_model.save()
That’s it. The middleware will then log this user out on their next request.
Configuration¶
Links¶
- Homepage/documentation:
-
- View/download code
-
- File a bug
- | https://django-force-logout.readthedocs.io/en/latest/ | 2018-08-14T08:54:34 | CC-MAIN-2018-34 | 1534221208750.9 | [array(['_images/thread.png', '_images/thread.png'], dtype=object)] | django-force-logout.readthedocs.io |
>(op => { var update = op.Update.Get();
When processing an
OpList, callbacks
for most operations are invoked in the order they were registered with the:
AddEntityOpand
AddComponentOpcallbacks are invoked in the order they were registered with the
Dispatcher.
RemoveEntityOpand
RemoveComponentOpcallbacks are invoked in reverse order.
AuthorityChangeOpcallbacks are invoked in the order they were registered when authority is granted, but in reverse order when authority is revoked (or authority loss is imminent).
CriticalSectionOpcallbacks
View.
Using the View
Improbable.Worker.View is a subclass of
Improbable.Worker.Dispatcher, which
automatically maintains
Connection.GetWorkerFlag()
method:. | https://docs.improbable.io/reference/13.1/csharpsdk/using/receiving-data | 2018-08-14T08:57:10 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.improbable.io |
Understanding Language Resource Components
Language resources consist of word breakers and stemmers that extend index building and querying capabilities to new languages and locales. Word breakers are used during both index creation and querying. Stemmers are used only for querying. Windows Search uses language resource DLLs to bind to IWordBreaker and IStemmer implementations for a specific language locale.
This topic is organized as follows:
About Language Resources
Windows Search uses a filter (an implementation of the IFilter interface) and ILoadFilter to access a document in its native format. The IFilter component extracts text content, properties, and formatting from the document. The IFilter identifies the locale of the document that it is filtering. The indexing component invokes the appropriate word breaker for that locale. If none is available, the indexing component invokes the neutral word breaker. The word breaker receives, from an IFilter, an input stream of Unicode characters that the word breaker parses to produce individual words and phrases. The word breaker also normalizes date and time formats. The indexer normalizes the words produced by the word breaker by converting the words to all uppercase letters. The indexer saves the uppercase words to the full-text index, with the exception of noise words identified for that locale.
The following table lists the actions and corresponding results for the sentence "Figure 1 illustrates the role of language resources for Windows Search during the index creation process."
Word breakers and stemmers are used to expand FREETEXT queries at query time. The locale of the query is the default locale unless a language code identifier (LCID) is passed as a query parameter. The query component invokes the appropriate word breaker on the query terms listed in the WHERE clause of the query. For example, if the WHERE clause of the query contains "FREETEXT (apples, oranges, and pears)," the word breaker receives the text, "apples, oranges, and pears." If the query WHERE clause uses the CONTAINS full-text predicate, the text output from the word breaker is normalized. Otherwise, the query component passes each word identified by the word breaker to the appropriate stemmer for that language and locale. The stemmer generates a list of alternative, or inflected, forms for that word. The query component normalizes the expanded list of query terms and removes noise words.
The following table lists the actions and corresponding results for the query "apples, oranges, and pears."
The expanded query terms increase the likelihood that the query will find documents that match the intent of the original query. Text that the word breaker or stemmer generates at query time is not stored on disk.
Word Breaking
Word breaking is the separation of text into individual text tokens, or words. Many languages, especially those with Roman alphabets, have an array of word separators (such as white space) and punctuation that are used to discern words, phrases, and sentences. Word breakers must rely on accurate language heuristics to provide reliable and accurate results. Word breaking is more complex for character-based systems of writing or script-based alphabets, where the meaning of individual characters is determined from context. For more information about linguistic considerations that may affect your word breaker implementation, see Linguistic and Unicode Considerations.
Stemming
Windows Search applies stemmers exclusively at query time to generate additional word forms for terms in FREETEXT and property queries. Stemmers perform morphological analysis and apply grammatical rules to generate a list of alternative, or inflected, forms for words. Alternative forms often have the same stem or base form. By generating the inflected forms for a word, Indexing Service returns query results that are statistically more relevant to a query. For example, a full-text query for "swim meet" matches documents that contain "swim, swim's, swims, swims', swimming, swam, swum" or "meet, meet's, meets, meets', meeting, met" and combinations of these terms.
Some languages require that inflected terms be generated at both index time and query time for both standard and variant inflections. In this case, stemming happens in the word breaker component, with minimal stemming work in the actual stemmer. For example, the Japanese word breaker performs stemming during both index creation and querying to enable a query to find different inflected forms of the search terms.
Normalization
Documents of all languages are stored in a single index. Although words and linguistic rules differ dramatically, there are some considerations, such as numbers, dates, and times, that are handled consistently across all word breakers. For more information about normalization considerations that may affect your word breaker implementation, see Surface Form Normalization.
Noise Words. You can configure Windows Search to use noise word lists for specific languages. These lists are used when a word breaker is invoked for that language.."
Related topics
Extending Language Resources
Implementing a Word Breaker and Stemmer
Linguistic and Unicode Considerations
Troubleshooting Language Resources and Best Practices | https://docs.microsoft.com/en-us/windows/desktop/search/understanding-language-resource-components | 2018-08-14T09:09:23 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.microsoft.com |
This article will explain:
Why NowSecure Workstation users should update iOS testing devices with caution because a jailbreak might not yet be available
Why users should keep devices running major iOS versions that are not yet jailbroken on reserve in anticipation of NowSecure Workstation support for those versions.
To account for all possible scenarios, mobile apps should undergo security testing in a worst-case environment such as on a jailbroken device. For this reason, NowSecure NowSecure Workstation and the agent to support testing of apps on that jailbroken version of iOS. Until that NowSecure Workstation update is released, however, a device running a non-jailbroken version of iOS will not function with NowSecure Workstation.
Therefore, we recommend that whenever Apple releases a new major version of iOS (e.g., iOS 8, 9, 10, etc.), NowSecure Workstation users do the following:
Update another device not currently used for testing to the new major iOS version
Keep that device on reserve
Refrain from performing any further updates on that device
This will ensure that as soon as NowSecure Workstation supports a new major version of iOS, you have a device on hand you can use to test apps on that jailbroken version of iOS.
iOS Device Update Instructions (iOS 11.3.1)
If you currently have iDevices that are running iOS 11.0 - 11.1.2, do not update them. We expect to support iOS 11.1.2 jailbreak in the near future.
Put the device in DFU mode for device restore.
Restore the new iOS version onto the device.
Settings -> General -> Abouton the device
Please contact [email protected] with any questions regarding this issue. | https://docs.nowsecure.com/workstation/ios-device-update/updating-ios-test-devices/ | 2018-08-14T08:55:08 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.nowsecure.com |
If you are unable to see the “Lab-WiFi” network, or if the provided Android/iOS device is unable to connect, please try the following steps to troubleshoot:
Completely close Workstation (if open).
Physically disconnect the Wi-Fi dongle from the host machine, wait a few seconds, and plug it back in.
Ensure that the adapter is passed into the VM (if applicable).
Re-open NowSecure Workstation and reboot the VM to make sure the settings are reset.
From there, open NowSecure Workstation from the icon on the desktop and create/open a project to launch the WiFi network.
Due to an improper sleep setting in earlier iterations of the Workstation VM, occasionaly VMWare Fusion would lock up, making the Workstation VM unrepsonsive. The only reliable workaround for this was to force quit VMWare Fusion, and reboot the Macbook host.
The fix for this is to disable the sleep settings of the VM, allowing it to follow the settings of the Macbook host. From a terminal window within the Workstation VM, issue this command:
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
The VM will still lock after a timeout, requiring the password to log back in. If you downloaded the VM after 05/08/2018 these commands are not required. If you still experience issues with the VM freezing up, please reach out to NowSecure Support..
Workstation tries to detect the specific process associated with your application but in some rare cases (usually when a long timeout occurred, or if the application was launched several times) it might be associated to several processes.
In order to get an accurate memory dump, Workstation if you are not going to use it for a long period of time.
Close Workstation.
Start Workstation
You can now re-test your network connectivity. If you are still experiencing issues, try these last steps:
Close Workstation
Reboot your device
Restart your operating system
Start Workstation Workstation. Ubuntu.
Please note that in some cases, a reboot of your operating system is needed to save and apply changes made to the DNS settings.
Please make sure the correct phone (iPhone or Android) is connected and detected by Workstation, and then re-open your project.
Make sure the device is connected with provided USB cable to your host system. In case you are running Workstation, Workstation Workstation.
Some of the results are only populated if they are relevant to your assessment and the information provided. For example, the “Search Results” output, will only be displayed if sensitive information was provided to Workstation Workstation over Ethernet. If your device is using WiFi/USB connections, please refer to the steps above.
If the steps described on this page do not resolve your issues, or if you are experiencing problems not listed here, please contact NowSecure directly so we can better assist you:
phone: +1 (312)-878-1100 | https://docs.nowsecure.com/workstation/troubleshooting/ | 2018-08-14T08:56:25 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.nowsecure.com |
Working with Amazon VPC
Amazon QuickSight is fully integrated with the Amazon Virtual Private Cloud (Amazon VPC) service. Use this section to find how to configure Amazon QuickSight to access data in your VPC.
In Amazon QuickSight Enterprise edition, you can create connections to your VPCs from your AWS account's Amazon QuickSight subscription. Each connection creates an elastic network interface in your VPC for Amazon QuickSight to send traffic to instances in your VPC. When creating a data set, Amazon QuickSight accesses a VPC connection using only private IP addresses to connect to an instance that is not reachable from the public internet. You can access VPCs that are located in the same AWS Region where you are using Amazon QuickSight to create analyses.
Create a Private Connection to Amazon VPC Using Amazon QuickSight
Use the following procedure to create a connection to a VPC. Before you begin, you should understand your deployment of Amazon VPC in the AWS Region you are using: its subnets, and security groups, in relation to the destinations (databases) you want to reach from Amazon QuickSight.
In Amazon QuickSight, choose your profile icon at the top right of the screen, then choose Manage QuickSight. From the menu at left, choose Manage VPC connections.
The Account Settings page appears. Any existing private connections to VPCs display on this page.
Choose Add VPC connection to add a new VPC connection.
On this page, you can also delete a VPC connection by using the delete icon. You can change a VPC connection on this page by creating a new VPC connection and deleting the old one.
For VPC connection name, type a unique descriptive name. This name doesn't need to be an actual VPC ID or name.
Type the subnet ID for Subnet ID, and type the group ID for Security group ID. Make sure that the subnet and the security group are in the same VPC. Also, make sure you are accessing a VPC that is in the same AWS Region where you are creating Amazon QuickSight analyses. You can't use Amazon QuickSight in one AWS Region to connect to a subnet and security group that are in a different AWS Region. More detailed requirements are provided in the following steps, and in How Amazon QuickSight Connects to Your VPC.
If you need to locate information about the subnet and security group, do the following:
On the Amazon VPC console, find the VPC ID that you want to use.
On the Amazon VPC subnet console page, see which subnets are in that VPC by locating the VPC ID. Choose a subnet, and copy its Subnet ID value. The subnet you choose is the one where you plan to create an elastic network interface. It must be possible to route from this subnet to any destinations you want to reach. For more information, see VPCs and Subnets.
On the Adding VPC connection screen, enter the Subnet ID value that you copied in the previous step for Subnet ID.
On the Amazon VPC security group console page, see which security groups are in that VPC by locating the VPC ID. Choose a group, and copy its Group ID value.
Create a new security group for use only with the elastic network interface created by Amazon QuickSight.The group must allow inbound traffic on all ports from the security groups of the destinations you want to reach.
The group must also allow outbound traffic to the database on the port that the database is listening on.
Additionally, you must update your database's security group to allow inbound traffic from your new security group.
For more information, see Security Group Rules for Amazon QuickSight's Elastic Network Interface.
Note
The database server's security group must allow inbound traffic from the security group you choose.
On the Adding VPC connection screen, enter the Group ID value that you copied in the previous step for Security group ID.
Important
You can't change the settings for a VPC connection.
Review your choices, then choose Create.
Note
Creating a VPC connection requires permission for the
quicksight:CreateVPCConnection
and
ec2:CreateNetworkInterface actions.
For best practices when using Amazon VPC, see the following:
AWS Single VPC Design on the AWS website
Recommended Network ACL Rules for Your VPC in the Amazon VPC User Guide
VPC Scenarios and Examples in the Amazon VPC User Guide
What Is Amazon VPC?. For more information, see the Amazon VPC User Guide.
How Amazon QuickSight Connects to Your VPC
When you create a VPC connection from Amazon QuickSight to your VPC, Amazon QuickSight creates an elastic network interface in the subnet that you choose. It must be possible to route from this subnet to any destinations you want to reach.
Network traffic from Amazon QuickSight then originates from this network interface when Amazon QuickSight connects to a database or other instance within your VPC using a VPC connection. Because this network interface exists inside your VPC, traffic originating from it can reach destinations within your VPC using their private IP addresses.
Controlling the Resources That Amazon QuickSight Can Reach in Your VPC
Network traffic sent from Amazon QuickSight to an instance within your VPC through a VPC connection is subject to all of the standard security controls, just as other traffic in your VPC is. Route tables, network ACLs, and security groups all apply to network traffic from Amazon QuickSight in the same way they apply to traffic between other instances n your VPC.
Configuring Security Group Rules for Use with Amazon QuickSight
For Amazon QuickSight to successfully connect to an instance in your VPC, you must configure your security group rules to allow traffic between the Amazon QuickSight network interface and your instance.
Security Group Rules for the Instance in Your VPC
The security group attached to your data source's instance must allow inbound traffic from Amazon QuickSight on the port that Amazon QuickSight is connecting to.
You can do this by adding a rule to your security group that allows traffic from the security group ID that is associated with the Amazon QuickSight (recommended). Alternatively, you can use a rule that allows traffic from the private IP address assigned to Amazon QuickSight.
For more information, see Security Groups for Your VPC and VPCs and Subnets.
Security Group Rules for Amazon QuickSight's Elastic Network Interface
When using a VPC Connection, traffic comes from the elastic network interface that we create in your VPC. Each elastic network inteface gets its own private IP address that’s chosen from the subnet you configure. The private IP address is unique for each AWS account, unlike the public IP range.
The security group attached to the Amazon QuickSight elastic network interface should have outbound rules allowing traffic to all of the data source instances in your VPC that you want Amazon QuickSight to connect to. If you don't want to restrict which instances Amazon QuickSight can connect to, you can configure your security group with an outbound rule to allow traffic to 0.0.0.0/0 on all ports. If you want to restrict Amazon QuickSight to connect only to certain instances, you can specify the security group ID (recommended) or private IP address of the instances you want to allow. You specify these, along with the appropriate port numbers for your instances, in your outbound security group rule.
The security group attached to the Amazon QuickSight elastic network interface behaves differently than most security groups. Security groups are usually stateful, meaning that when an outbound connection is established the return traffic from the destination host is automatically allowed. However, the security group attached to the Amazon QuickSight network interface isn't stateful. This means that your return traffic from the destination host isn't automatically allowed. In this case, adding an egress rule to the network interface security group doesn't work. Therefore, you must add inbound rules to your security group to explicitly authorize it.
Because the destination port number of any inbound return packets is set to a randomly allocated port number, the inbound rule in your security group must allow traffic on all ports (0–65535). If you don't want to restrict which instances Amazon QuickSight can connect to, then you can configure this security group with an inbound rule to allow traffic on 0.0.0.0/0 on all ports. If you want to restrict Amazon QuickSight to connect only to certain instances, you can specify the security group ID (recommended). Alternatively, you can specify the private IP address of the instances you want to allow in your inbound security group rule. In this case, your inbound security group rule still needs to allow traffic on all ports.
Limitations on Data Sources Using a VPC Connection
The following data source types can use a VPC connection:
Amazon Redshift
Amazon RDS
Amazon Aurora
PostgreSQL
MySQL
MariaDB
Microsoft SQL Server
The instance you are connecting to must either reside within your VPC or be reachable by using an AWS Direct Connect gateway. Amazon QuickSight can't send traffic through a VPC connection to instances that are only reachable by a VPN gateway, NAT gateway, or VPC peering connection.
Amazon QuickSight can't connect to a network load balancer by using a VPC connection.
Other Requirements for Data Sources Using a VPC Connection
The DNS name of the database or instance you are connecting to through a VPC connection must be resolvable from outside of your VPC. Also, the connection must return the private IP address of your instance. Databases hosted by Amazon Redshift, Amazon RDS, and Aurora automatically meet this requirement. | https://docs.aws.amazon.com/quicksight/latest/user/working-with-aws-vpc.html | 2018-08-14T08:59:41 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Moves the file from the local file system to S3 in this directory. If the file already exists in S3 and overwrite is set to false than an ArgumentException is thrown.
Namespace: Amazon.S3.IO
Assembly: AWSSDK.S3.dll
Version: 3.x.y.z
The local file system path where the files are to be moved.
Determines whether the file can be overwritten.
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/S3/MS3FileInfoMoveFromLocalStringBoolean.html | 2018-08-14T09:08:33 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.aws.amazon.com |
DZMagnifyingView Documentation Center Reference
DZMagnifierView
I was creating a color picker when I decided that I needed a magnifying glass similar to the one created when one taps and holds text on iOS. I present
DZMagnifierView the class for all of your magnifying needs!
Documentation
Tutorial
You initialize this view just like you would with any other UIView:
DZMagnifierView *magnifier = [[DZMagnifierView alloc] initWithFrame:CGRectMake(0,0,100,100)];
Note that the width and height should be the same, not doing so will result in undefined behavior. The actual radius of the glass will be half the width (and likewise half the height if you're doing this properly). From here you need to specify the view you want to magnify:
magnifier.targetView = aview;
Finally you specify the origin, or center of the magnifier view in the coordinates of the target view's window, and you specify the closeupCenter (the portion of the target view you want magnified) in the target view's coordinates:
CGPoint touchPointInTargetView; magnifier.center = CGPointMake(touchPointInTargetView.x, touchPointInTargetView.y - 50); magnifier.closeupCenter = touchPointInTargetView;
The API is designed so that you can place the magnifying glass wherever you want on screen and magnify a totally different part of the screen. Why you ask is this? Flexibility is always nice; also, let's say you were going to use this magnifying glass when someone touches a view, then you wouldn't want the magnifying glass center to line up with the touch itself, you would want the magnifying glass to float above the touch (similar to the example above) so that you can see what you're touching (note here that it is assumed you are trying to magnify what you are touching).
Pull requests are welcome, hopefully people enjoy! | http://docs.danzimm.com/DZMagnifyingView/ | 2018-08-14T08:49:54 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.danzimm.com |
Keyboard shortcuts to manage Wiki pages
VSTS | TFS 2018.2
Note
Keyboard shortcuts to manage Wiki pages are supported on TFS 2018.2 or later versions. To download TFS 2018.2, see Team Foundation Server 2018 Update 2 Release Notes.
You can use the following keyboard shortcuts when managing or editing Wiki pages. To view the valid shortcuts, enter Shift+? from the Wiki hub or when editing a wiki page.
Note
Feature availability: The following shortcuts are available from the web portal for VSTS and TFS 2018.2 and later versions. | https://docs.microsoft.com/en-us/vsts/project/wiki/wiki-keyboard-shortcuts?view=vsts | 2018-08-14T09:03:30 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.microsoft.com |
Administration Guide.
-
- Reconciliation rules for conflicting IT policies
- Resolving IT policy assignments for user accounts and groups
- Deactivating BlackBerry devices that do not have IT policies applied
- Creating new IT policy rules to control third-party applications
- Export all IT policy data to a data file
- Delete an IT policy
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/20839/Using_IT_policy_to_manage_BESolution_security_810036_11.jsp | 2014-03-07T11:17:50 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.blackberry.com |
User Guide
-
- SIM card
- Security
- Service books and diagnostic reports
- Accessibility options
- BrickBreaker
- Word Mole game
- Glossary
- Legal notice
Home > Support > BlackBerry Manuals & Help > BlackBerry Manuals > BlackBerry Smartphones > BlackBerry Curve > User Guide BlackBerry Curve 8330 Smartphone - 5.0. | http://docs.blackberry.com/en/smartphone_users/deliverables/19603/Format_the_device_memory_or_media_card_50_764550_11.jsp | 2014-03-07T11:17:41 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.blackberry.com |
You are viewing an old version of this page. View the current version.
Compare with Current
View Page History
Version 2
Current ».. | http://docs.codehaus.org/pages/viewpage.action?pageId=231081129 | 2014-03-07T11:19:46 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.codehaus.org |
.
- Contextual selectors can obviously have a negative effect on performance e.g. evaluating a match for a selector like "a/b/c/d/e" will obviously require more processing than that of a selector like "d/e". Obviously there will be situations where your data model will require deep selectors, but where it does not, you should try to. | http://docs.codehaus.org/pages/diffpages.action?pageId=111182014&originalId=228173727 | 2014-03-07T11:21:51 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.codehaus.org |
FOSS4G, held 20-23 October 2009 in Sydney, Australia, is the international "gathering of tribes" for open source geospatial communities. The theme for the FOSS4G 2009 conference will be "User Driven". Users and developers are encouraged:
Submission instructions and templates are available at.
The deadline for workshop / tutorial submissions is March 9, 2009 | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=117899808 | 2014-03-07T11:21:21 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.codehaus.org |
. group's name and press the Search to find matching names. Press Reset to clear the search field and restore the list of groups. | http://docs.joomla.org/index.php?title=Help16:Users_Groups&diff=35639&oldid=35544 | 2014-03-07T11:24:38 | CC-MAIN-2014-10 | 1393999642201 | [] | docs.joomla.org |
. For example, you can connect to a database, Google Drive and sheets, PaperVision - Image Silo document management systems and more. There are several connectors available to help with this integration. The connectors are listed below.
The Database Connector makes it very easy to connect Live Forms to most databases. You can save form submissions in your database or initialize forms from a database. Note that this connector uses XML schema.
See Database Connector for help installing and using this connector.
Connecting your Live Forms to Google sheets and drive is very easy to do with the frevvo Google Connector. See the Google Connector chapter for help installing and using this connector.
The Live Forms Add-on for Confluence is available as an add-on to either Live Forms' Online service or In-house installations. You need to install it into Confluence before you can add forms and submissions pages to Confluence. You will also need to download and install Live Forms for Confluence.
See Live Forms ™ for Confluence documentation for help installing and using this plugin.
Live Forms supports form submissions sent directly into Digitech Systems PaperVision® and ImageSilo® document management products.
See Connecting to PaperVision® / ImageSilo® for help installing and using this connector.
The Filesystem Connector saves Live Forms submissions to a local or remote filesystem or an Enterprise Content Management system (ECM). See Filesystem Connector for easy installation and configuration information.
Store documents and information to a secure Microsoft SharePoint website. Configure the frevvo SharePoint Connector for your Live Forms tenant then use the SharePoint wizard to connect your forms/workflows to the SharePoint website for document storage. . Refer to the SharePoint Connector topic for the details. | https://docs.frevvo.com/d/display/frevvo92/Connectors | 2020-05-25T07:38:52 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.frevvo.com |
Privacy, security, and transparency
Note
The information in this article applies to worldwide versions of Office 365. If you are using a national cloud instance of Office 365, including Office 365 U.S. Government, Office 365 Germany, and Office 365 operated by 21Vianet, see Microsoft National Clouds.
Microsoft values the partnerships we have with our customers and places great emphasis on protecting the privacy and security of customer data. For more information, see the Microsoft Trust Center.
Privacy Microsoft 365 Apps for enterprise, see To which online services does the Trust Center apply?
Security
To learn how Microsoft delivers Office 365 services securely and reliably, see Security.
Transparency
As a customer, you can find out where your data resides, who at Microsoft can access it, and what we do with that information internally. For more information, see Transparency.
Advanced eDiscovery
Electronic discovery, or eDiscovery, is the process of identifying and delivering electronic information that can be used as evidence in legal cases. Advanced eDiscovery.
Customer Lockbox
As a Microsoft.
Advanced Threat Protection
Office 365 Advanced Threat Protection helps protect your organization against malware and viruses. ATP includes Safe Links, Safe Attachments, Anti-phishing, and Spoof intelligence features. Safe Links proactively protects your users from malicious hyperlinks in a message, providing protection every time the link is selected. Safe Attachments protects against unknown malware and viruses, routing all messages and attachments that don't have a known virus/malware signature to a special environment where ATP can detect malicious intent. For more information about ATP, see Office 365 Advanced Threat Protection service description.
Feature availability
To view feature availability across plans, see Microsoft 365 and Office 365 platform service description. | https://docs.microsoft.com/en-us/office365/servicedescriptions/office-365-platform-service-description/privacy-security-and-transparency | 2020-05-25T08:30:46 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.microsoft.com |
How to Export a Project
It is possible to export your project as a PDF or spreadsheet, in order to keep an offline record of your project.
Export your Project as a PDF
- Create a Survey, Form, or Quiz
- Enter the ‘Paper Form’ tab in the sidebar
- You will be presented with a pop up, containing the options to ‘Open’, ‘Save’, or ‘Cancel’
- Open: Opens your project as a PDF in a new tab
- Save: Stores the PDF in your personal documents.
- Cancel: This option will void your current action, and close the pop up.
If you wish, you can then print your project from your personal system. | https://docs.shout.com/article/99-how-to-export-a-project | 2020-05-25T08:39:19 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.shout.com |
haven't configured this property at the time of starting up the server for the first time, you will get errors at the start up.
Configure the following set of parameters in the user store configuration, depending on the type of user store you are connected to (LDAP/Active Directory/ JDBC).
With these configuration users can log in.
Let's get started!. | https://docs.wso2.com/display/IS570/Logging+in+to+Salesforce+using+the+Identity+Server | 2020-05-25T07:23:46 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.wso2.com |
Query Performance
A newer version of this documentation is available. Use the version menu above to view the most up-to-date release of the Greenplum. | https://gpdb.docs.pivotal.io/5260/admin_guide/query/topics/query-performance.html | 2020-05-25T08:56:33 | CC-MAIN-2020-24 | 1590347388012.14 | [] | gpdb.docs.pivotal.io |
Alerting with Portworx
This guide shows you how to configure prometheus to monitor your Portworx node and visualize your cluster status and activities in Grafana. We will also configure AlertManager to send email alerts.
Configure Prometheus
Prometheus requires the following two files: config file, alert rules file. These files need to be bind mounted into Prometheus container.
# This can be any directory on the host. PROMETHEUS_CONF=/etc/prometheus
Prometheus config file
Modify the below configuration to include your Portworx nodes’ IP addresses, and save it as ${PROMETHEUS_CONF}/prometheus.yml.
global: scrape_interval: 1m scrape_timeout: 10s evaluation_interval: 1m rule_files: - px.rules scrape_configs: - job_name: 'PX' scrape_interval: 5s static_configs: - targets: ['px-node-01-IP:9001','px-node-02-IP:9001','px-node-03-IP:9001'] alerting: alertmanagers: - scheme: http static_configs: - targets: - "alert-manager-ip:9093"
This file can be downloaded from prometheus.yml
Note: ‘alert-manager-ip’ is the IP address of the node where AlertManager is running. It is configured in the later steps.
Prometheus alerts rules file
Copy px.rules file, and save it as ${PROMETHEUS_CONF}/px.rules. For Prometheus v2.0.0 and above, rules file is available here.
Run Prometheus
In this example prometheus is running as docker container. Make sure to map the directory where your rules and config file is stored to ‘/etc/prometheus’.
docker run --restart=always --name prometheus -d -p 9090:9090 \ -v ${PROMETHEUS_CONF}:/etc/prometheus \ prom/prometheus
Prometheus UI is available at
Configure AlertManager
The Alertmanager handles alerts sent by Prometheus server. It can be configured to send them to the correct receiver integrations such as email, PagerDuty, Slack etc. This example shows how it can be configured to send email notifications using gmail as SMTP server.
AlertManager requires a config file, which needs to be bind mounted into AlertManager container.
# This can be any directory on the host. ALERTMANAGER_CONF=/etc/alertmanager
AlertManager config file
Modify the below config file to use Google’s SMTP server for your account.
Save it as
${ALERTMANAGER_CONF}/config.yml.
global: # The smarthost and SMTP sender used for mail notifications. smtp_smarthost: 'smtp.gmail.com:587' smtp_from: '<sender-email-address>' smtp_auth_username: "<sender-email-address>" smtp_auth_password: '<sender-email-password>' route: group_by: [Alertname] # Send all notifications to me. receiver: email-me receivers: - name: email-me email_configs: - to: <receiver-email-address> from: <sender-email-address> smarthost: smtp.gmail.com:587 auth_username: "<sender-email-address>" auth_identity: "<sender-email-address>" auth_password: "<sender-email-password>"
This file can be downloaded from config.yml
Run AlertManager
In this example AlertManager is running as docker container. Make sure to map the directory where your config file is stored to ‘/etc/alertmanager’.
docker run -d -p 9093:9093 --restart=always --name alertmgr \ -v ${ALERTMANAGER_CONF}:/etc/alertmanager \ prom/alertmanager | https://2.3.docs.portworx.com/install-with-other/operate-and-maintain/monitoring/alerting/ | 2020-05-25T07:54:54 | CC-MAIN-2020-24 | 1590347388012.14 | [] | 2.3.docs.portworx.com |
ARGetListEntryWithFields
Note
You can continue to use C APIs to customize your application, but C APIs are not enhanced to support new capabilities provided by Java APIs and REST APIs.
Description
Retrieves a list of form entries from the specified server. Data from each entry is returned as field/value pairs for all fields. You can limit the list to entries that match particular conditions by specifying the
qualifier parameter.
ARGetListEntry also returns a qualified list of entries, but as an unformatted string with a maximum length of 128 bytes for each entry containing the concatenated values of selected fields.
Privileges
The system returns information based on the access privileges of the user that you specify for the
control parameter. All lists, therefore, are limited to entries the user can access (users must have permission for the entryId field to access and retrieve entries).
Synopsis
#include "ar.h" #include "arerrno.h" #include "arextern.h" #include "arstruct.h" int ARGetListEntryWithFields( ARControlStruct *control, ARNameType schema, ARQualifierStruct *qualifier, AREntryListFieldList *getListFields, ARSortList *sortList, unsigned int firstRetrieve, unsigned int maxRetrieve, ARBoolean useLocale, AREntryListFieldValueList *entryList, unsigned int *numMatches, retrieve entries for.
qualifier
A query that determines the set of entries to retrieve..
getListFields
A list of zero or more fields to be retrieved with each entry. The system checks the permissions for each specified field and returns only those fields for which you have read access.
Note
You should not add the Status History field to this list as the AR System server cannot retrieve the Status History field using ARGetListEntryWithFields .
sortList
A list of zero or more fields that identifies the entry sort order. The system generates an error if you do not have read access on all specified fields. Specify
NULL for this parameter (or zero fields) to use the default sort order for the form (see ARCreateSchema). The system sorts the entries in ascending order by
entryId if the form has no default sort order.
firstRetrieve
The first entry to retrieve. A value of
0 (
AR_START_WITH_FIRST_ENTRY) represents the first entry. A value of
1 will skip the first entry.
useLocale
A flag that indicates whether to search for entries based on the locale. If you specify
1 (
TRUE) and the Localize Server option is selected, entries are searched using the locale specified in
AR_RESERV_LOCALE_LOCALIZED_SCHEMA. If no matches are found for the specified locale, the search becomes less restrictive until a match is found. If you specify
0 (
FALSE) or the Localize Server option is cleared, all entries are searched. For more information, see Setting the Localize Server option.
maxRetrieve
The maximum number of entries to retrieve. Use this parameter to limit the amount of data returned if the qualification does not sufficiently narrow the list. Specify
0(
AR_NO_MAX_LIST_RETRIEVE) to assign no maximum.
Return values
entryList
A list of zero or more (accessible) entries that match the criteria defined by the
qualifier parameter. The system returns a list with zero items if no entries match the specified criteria.
numMatches
The total number of (accessible) entries that match the qualification criteria. This value does not represent the number of entries returned unless the number of matching entries is less than or equal to the
maxRetrieve value. Specify
NULL for this parameter if you do not want to retrieve this value.
Note
Performing this count requires additional search time if the number of matching entries is more than the
maxRetrieve value. In this case, the cost of completing the search diminishes the performance benefits of retrieving fewer entries.
status
A list of zero or more notes, warnings, or errors generated from a call to this function. For a description of all possible values, see Error checking.
See also
ARGetListEntry, ARCreateEntry, ARDeleteEntry, ARGetEntry, ARGetEntryStatistics, ARLoadARQualifierStruct, ARMergeEntry, ARSetEntry. See FreeAR for:
FreeAREntryListFieldList,
FreeAREntryListList,
FreeARQualifierStruct,
FreeARSortList,
FreeARStatusList. | https://docs.bmc.com/docs/ars91/en/argetlistentrywithfields-609070970.html | 2020-05-25T09:15:08 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.bmc.com |
Thank you for downloading the GiftPress Plugin.
This guide will assist you with the installation of the plugin.
- Please go to the My Downloads section of our website.
- Download the GiftPress PRO version from there and grab the license key as well.
- Make sure the zip file name is: GiftPress PRO - WooCommerce Gift Cards. GiftPress plugin. :
Still Unclear ?
Please submit a support request. We are always happy to assist you. | https://docs.flycart.org/en/articles/3831168-installation | 2020-05-25T08:12:12 | CC-MAIN-2020-24 | 1590347388012.14 | [array(['https://downloads.intercomcdn.com/i/o/73786696/28af0a82f5226a01ca903967/add_new.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/73786884/2ad3bc1d22cf0a8c073e7789/upload.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/73786963/9afeda7dfdc5795350d4144f/upload_file.png',
None], dtype=object) ] | docs.flycart.org |
Adding and Coaching Clients
Throughout our documentation, you will come across many references to the term client. This article goes over what the term means within Goalify Professional, how you can invite new clients and how to manage your client list.
Understanding Clients
Within Goalify Professional, we use client as a general term for an actual client, a patient or an employee - really anyone who has accepted a coaching invitation.
As a coach, you can assign and manage goals for a client, analyze a client's progress and keep in touch via the integrated chat feature. Please keep in mind that you only gain access to goals that you have created for your client. You won't be able to look at goals that the client created independently.
As part of every pricing plan, a client will gain access to the Goalify Unlimited Edition for as long as they remain your clients. Additionally, with our generous limits on the number of clients you can coach, you won't have to worry about the cost of adding a new client to Goalify Professional.
Ownership of Data
As a coach, you will be able to manage goals for your clients. When you remove a client from your client list, you can choose if your client should retain his or her access to the goals that you created and the connected data, or if access should be removed altogether. Regardless which option you chose, you as a coach will lose access to a client's data.
Use the archive client option to remove access for a client but retain access to historic data for coaches.
The Goalify mobile app is the ideal way for your clients to view their goals, record and analyze their progress and to connect with you. To get started with the Goalify mobile app, please read the following help documents:
- Within the Goalify app: Open the menu and switch to help.
- Online: Read our full documentation at.
- Video channel: Watch our many 30-second how-to videos at Goalify User Edition
Inviting a new Client
Quick Setup Tutorial
- Go to your team's Clients menu
- Choose the Add Client option using the Action button
- Enter your client's details
To invite a new client, switch to the Client list of your team. From the main Action button select the Add a Client option. This will open the invitation widget so you can add some information about your client. We recommend that you enter at least a first name, last name and email address for your client. This will make it easier to tell your clients apart in your client list.
You can add additional fields from the Add field button to enter contact and personal information if needed. When you're done entering your client's details, create the invitation using the Save button.
If you enter an email address for a new client, Goalify Professional will offer you the option to send an invitation email on your behalf. Only use this option, if you have obtained the necessary consent of the recipient to receive communications from you.
If your client is already registered with the same email address, we will also send a push notification to his or her mobile device.
Accepting an Invitation
To accept an invitation, your client needs to create a Goalify Account first. Our invitation emails include easy to follow step-by-step instructions.
Open invitations will be added to the Invited Clients section at the bottom of the Client List. As soon as your client has accepted the invitation, he or she will show up as a client at the top of the list.
Batch Inviting New Clients
Quick Setup Tutorial
- Go to your team's Clients menu
- Choose the Import Clients option using the Action button
- Enter your clients' details
In case you want to invite many clients at the same time, it is much faster to use our batch import option. Using this feature you can either upload a list of your clients or paste a list directly into the browser.
To batch invite new clients, switch to the Client list of your team. From the main Action button select the Import Clients option. This will open the import screen. You can either choose to upload an existing list as a text or .csv file or paste the information into the designated text field.
Your data must have the following structure:
- first name
- last name (optional)
- email address
The following delimiters are applicable: colon, semi-colon or tab. Use our auto-detect feature if you are not sure or when pasting data from spreadsheet software like Excel or Numbers. You need to provide some input for all three columns, however you can choose to not enter a last name.
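For example, a pasted list using semi-colons as the delimiter might look like this (the names and addresses below are made up for illustration; the second row leaves the optional last name empty):

    Jane;Doe;[email protected]
    John;;[email protected]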
Data rows that return an error will be marked in red. We will also display a warning below the data.
You can make adjustments to the data to be imported. When you are ready, click the Import button below the shown data. This button will also tell you how many invitations will be created.
Each client will be emailed an invitation email and added to the Invited Clients section of your team's client list. Clients who have accepted your invitation will be moved to the regular client list.
Adding a Goal to a New Client
Quick Setup Tutorial
- Go to your team's Clients menu
- Select your client
- Choose the Add Goal option using the Action button
- Define the goal
After you created a client's invitation, you can start adding goals to his/her dashboard. Please read our full documentation on Understanding Goalify Goals to familiarize yourself with the different options you have. You will find additional help in our Managing Goals for a Client support guide.
Using Reminder
Quick Setup Tutorial
- Go to your team's Clients menu
- Select your client
- Choose the Add Reminder option using the Action button
- Define the reminder
Reminders are push notifications that you can send to a client according to a predefined schedule. Reminders can be useful in many different situations:
- Remind a client of a task
- Send positive reinforcements
- Remind clients of a regular event
Shared Reminder
Use shared reminders whenever you need to assign the same reminder to several clients.
Add a new Reminder
To create a new reminder for one of your clients, switch to the Client list of your team. Select the client you wish to create the reminder for. With the main Action button, select the Add Reminder option.
This will open the create reminder window. In addition to the name of a reminder, a reminder has a few more properties that can be modified:
- Name
Name of the reminder.
- Notification
The actual message that is sent with each notification to a client.
- Time
Time entries define at what time and interval a reminder should be sent to the client. A reminder can have as many timing entries as needed.
You need to set a name, a notification text and at least one notification to create a new reminder. To add a notification to the schedule of a reminder use the Add button within the notifications section.
Notifications are defined by
- Type
You can choose between daily, on certain weekdays or certain days of the month.
- Style
You can choose to send the reminder at a certain time on a specific day or according to a schedule (e.g. every 60 minutes between 9am and 5pm). Choose the Time Span option to set an interval.
Click the Add button to add the notifications to the notification list of the reminder. Once you have set up all your notifications of the reminder you can save and activate the reminder by clicking the Save button on the top right.
Editing a Reminder
Click on a reminder to edit its name, message and notification schedule.
Pausing a Reminder
Click on the activation toggle of a reminder to choose between active and paused. You can pause and unpause a reminder at any time.
Deleting a Reminder
Click on a reminder and choose the Delete Reminder option from the actions button.
Using Shared Reminder
Quick Setup Tutorial
- Go to your team's Reminder menu
- Choose the Add Reminder option using the Action button
- Define the reminder
- Add clients to the reminder
Shared Reminders work just like reminders created for one specific client, but with the ability to add several clients to the recipient list.
After you have set up your reminder, add clients by using the Add button within the clients' section. Use the Edit button to remove clients from the recipient list of the reminder.
Shared reminders are managed from the team's Reminder menu. However, each recipient will also see the shared reminder within his/her reminder section, where individual and shared reminders are organized in two separate lists.
Organizing Clients with Tags
Depending on the way you use Goalify Professional for your professional coaching needs, your client list can grow and become quite long. To keep your list manageable and organized, we have introduced the tag feature. By using tags you can make your client list easier to use.
Here are some examples of how to use tags to structure your client list:
- By coaching agreement
- By location
- By corporate mandate
- Any other grouping system that meets your needs
You can even add multiple tags to every client. Since you can search using tags in almost every search field we provide, tags are a real productivity booster.
Teams vs. Tags
Tags are a powerful feature used to organize a team's client list when using the search bar. You cannot provide access to clients for different coaches by using tags. If you need separate, independent client lists with different coaches accessing different client lists, please familiarize yourself with our team feature.
Adding a Tag
Quick Setup Tutorial
- Go to your team's Clients menu
- Click the purple Tools button right to the client list
- Open the Tags section
- Enter the name of the tag
- Hit the Enter key
Applying Tags to a Client
Quick Setup Tutorial
- Go to your team's Clients menu
- Click the purple Tools button right to the client list
- Open the Tags section
- Drag and drop the tag onto the name of the client
Applying Tags to Multiple Clients
Quick Setup Tutorial
- Go to your team's Clients menu
- Click the purple Tools button right to the client list
- Open the Tags section
- Click the Select Clients button on the top of the widget
- Click the tag you want to apply
- From your client list, select the clients you want the tag applied to
- Click the Tag Selected button
Using Tags in Search
Quick Setup Tutorial
- Enter # directly followed by the tag's name in the search bar.
Removing a Tag from a Client
Quick Setup Tutorial
- Go to your team's Clients menu
- Click the blue Tag button on the top of the client list
- Click the Edit button on the top right of the tag widget
- Click the tag you want to remove from a client
- From your client list select the clients you want the tag removed from
- Click the Untag Selected button
Removing a Tag
Quick Setup Tutorial
- Go to your team's Clients menu
- Click the blue Tag button on the top of the client list
- Click the Edit button on the top right of the tag widget
- Select the tag you want to remove
- Click the Delete This Tag button
Communicating with a Client
We believe that communication is crucial for every coaching process. This is why we thoroughly integrated our chat feature into Goalify Professional.
Using the Goalify Professional chat feature, you can do the following:
- Engage in private chat conversations
- Engage in group chat conversations
You can learn all about our chat feature by reading our private chat and group chat guide.
Archiving a Client
This action CANNOT be undone!
Archiving a client from your client list is a one-time action and it CANNOT be undone. Please be extremely careful when archiving clients from your team's client list.
Quick Setup Tutorial
- Go to your team's Clients menu
- Select your client
- Choose the Archive client option using the Action button
Once you archive a client, a few things will happen:
- Coaches will keep access to all historic data connected to goals, workflows and reminders of this client
- The client will lose access to all goals, workflows, reminders and connected data and will be removed from the team
- Your client will be removed from your shared goals and public challenges - shared goals will be retained as individual goals on the archived client's dashboard.
Removing a Client
This action CANNOT be undone!
Removing a client from your client list is a one-time action and it CANNOT be undone. Please be extremely careful when deleting clients from your team's client list.
Quick Setup Tutorial
- Go to your team's Clients menu
- Select the More button on the top of the client list
- Choose the Delete Clients option
- Select the clients you want to remove
- Confirm your selection and click the red Delete Clients button.
Once you delete a client from your client list, a few things will happen:
- You will lose access to all goals and connected data for that client
- You can decide whether or not your client should be able to retain access to his or her goals, i.e. the ones you created
- Your client will be removed from your shared goals and public challenges | https://docs.goalifypro.com/adding_and_coaching_clients.en.html | 2020-05-25T08:34:43 | CC-MAIN-2020-24 | 1590347388012.14 | [array(['images/docu-batch-import-bg.png', 'Batch Import new Clients'],
dtype=object) ] | docs.goalifypro.com |
Metrics in this document have been deprecated and are no longer supported by Microsoft Azure. Refer to the Cosmos DB metrics currently supported by Microsoft. For current users, both the deprecated and currently supported events, metrics, and accompanying metadata are included.
Use this page as a reference to migrate your alert conditions and custom dashboards from the deprecated metrics to their supported counterparts, as the deprecated metrics may stop working without notice should Microsoft stop publishing them. | https://docs.newrelic.com/docs/azure-cosmos-db-document-db-monitoring-integration-deprecated | 2020-05-25T09:33:28 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.newrelic.com |
- Read and Send configuration groups.
- Read and Send configuration list.
- This entity defines the test for sending mail. Attributes are used to derive values of JavaMail properties, or they can be specified directly as key value pairs. Attributes are easier to read, but there isn't an attribute for every JavaMail property possible (some are fairly obscure).
- Define the To, From, Subject, and body of a message. If not defined, one will be defined for your benefit (or confusion ;-).
- Basically attributes that help set up the JavaMailer's confusing set of properties.
- Configuration for a sendmail host.
- Configuration container for all settings for reading email.
- Define the host and port of a service for reading email. Basically any attributes that help set up the JavaMailer's confusing set of properties.
- Configure user based authentication.
- Don't allow poorly configured read protocols. These are case sensitive.
- Use these name value pairs to configure free-form properties from the JavaMail class.
Most Popular Articles
- Sharing Your Project
- JavaScript API
- How to Enable Tracked Responses
- How to Export a Project
- Sharing to Social Media (And Other Platforms)
- Receive Email Notifications for Responses
- Prevent Multiple Responses from the Same Device
- How to Print a Survey
- QR Code
- Use Your Own SMTP Servers (For Email Distribution) | https://docs.shout.com/collection/75-send | 2020-05-25T06:41:24 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.shout.com |
List of speakers in Visual Brain Core Seminar Series
Previous talks from newest to oldest for the Brain Core Seminar Series from the Civitan International Neuroimaging Labs. Meetings are at noon on the second Friday of the month in Civitan International Research Center room 120.
Feb 9, 2017: "TBA" Jonathan Power, MD, PhD, Weill Cornell Medical School Cl.
Link to list of upcoming speakers for combined Brain Core seminar series and Neuroimaging Journal club. Together these meet every Friday at noon in CIRC 120. Usually, we have pizza. | https://docs.uabgrid.uab.edu/w/index.php?title=List_of_speakers_in_Visual_Brain_Core_Seminar_Series&diff=prev&oldid=5710 | 2020-05-25T09:12:17 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.uabgrid.uab.edu |
iSAM is an optimization library for sparse nonlinear problems as encountered in simultaneous localization and mapping (SLAM). It was originally developed by Michael Kaess ([email protected]) and Frank Dellaert ([email protected]) at Georgia Tech.
Michael Kaess ([email protected]), Hordur Johannsson ([email protected]), David Rosen ([email protected]) and John Leonard ([email protected])
iSAM is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.
iSAM is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with iSAM. If not, see <>.
The latest version of iSAM is available at
The source code is available from our public subversion repository:
svn co
This software was tested under Ubuntu 9.04-11.04, and Mac OS X 10.5-10.6. iSAM depends on the following software:
To install all required packages in Ubuntu 9.10 and later, simply type:
sudo apt-get install cmake libsuitesparse-dev libeigen3-dev libsdl1.2-dev doxygen graphviz
Note that CHOLMOD is contained in SuiteSparse. On Mac OS X, SuiteSparse and SDL are available through MacPorts:
sudo port install suitesparse libsdl
Compile with:
make
Directory structure:
isamlib/contains the source code for the iSAM library
include/contains the header files for the iSAM library
isam/contains the source code for the iSAM executable
examples/contains examples for using different components of the iSAM library
doc/contains the documentation (after calling "make doc")
misc/contains code referenced from publications
data/contains example data files
lib/contains the actual library "libisam.a"
bin/contains the main executable "isam"
Usage example:
bin/isam -G data/sphere400.txt
For more usage information:
bin/isam -h
Install the library in your system with:
make install
Note that make just provides a convenient wrapper for running cmake in a separate "build" directory. Compile options can be changed with "ccmake build". In particular, support for the 3D viewer can be disabled by setting USE_GUI to OFF. Library and include paths can be modified manually in case SuiteSparse/CHOLMOD was installed in a local directory and cannot be found automatically.
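For reference, the manual steps behind the make wrapper would look roughly like this; this is a sketch based on standard CMake usage rather than taken from the iSAM sources, and options such as USE_GUI can then be adjusted with ccmake as described above:

    mkdir -p build
    cd build
    cmake ..    # configure the build
    make        # build the library and executable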
To generate detailed documentation for the source code, type:
make doc
and open
doc/html/index.html in your browser.
Details of the algorithms used in this software are provided in these publications (the latex bibliography file
isam.bib is included for convenience).
A full list of iSAM-related references in BibTeX format is available here: Bibliography.
Newer publications will be available from my web page at
Many thanks to Richard Roberts for his help with this software. Thanks also to John McDonald, Ayoung Kim, Ryan Eustice, Aisha Walcott, Been Kim and Abe Bachrach for their feedback and patience with earlier versions. | http://docs.ros.org/fuerte/api/demo_rgbd/html/index.html | 2020-05-25T09:34:34 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.ros.org |
API Gateway 7.6.2 Policy Developer Guide

Configure an FTP poller

Overview

The FTP poller enables you to query and retrieve files to be processed by polling a remote file server. When the files are retrieved, they can be passed to the API Gateway core message pipeline for processing. For example, this is useful where an external application drops files on to a remote file server, which can then be validated, modified, and potentially routed on over HTTP or JMS by the API Gateway. This kind of protocol mediation is useful when integrating with Business-to-Business (B2B) partner destinations or with legacy systems. For example, instead of making drastic changes to either system, the API Gateway can download the files from a remote file server and route them over HTTP to another back-end system. The added benefit is that messages are exposed to the full complement of API Gateway message processing filters. This ensures that only properly validated messages are routed on to the target system.

The FTP poller supports the following file transfer protocols:
- FTP: File Transfer Protocol
- FTPS: FTP over Secure Sockets Layer (SSL)
- SFTP: Secure Shell (SSH) File Transfer Protocol

To add a new FTP poller, in the Policy Studio tree, under the Environment Configuration > Listeners node, right-click the instance name (for example, API Gateway), and select FTP Poller > Add. This topic describes how to configure the fields on the FTP Poller Settings dialog.

Tip: For details on how to configure the API Gateway to act as a file transfer service that listens on a port for remote clients, see Configure a file transfer service.

General settings

This filter includes the following general settings:
- Name: Enter a descriptive name for this FTP poller.
- Enable Poller: Select whether this FTP poller is enabled. This is selected by default.
- Host: Enter the host name of the file transfer server to connect to.
- Port: Enter the port on which to connect to the file transfer server. Defaults to 21.
- User name: Enter the user name to connect to the file transfer server.
- Password: Specify the password for this user.

Scan settings

The fields configured in the Scan details tab determine when to scan, where to scan, and what files to scan:
- Poll every (ms): Specifies how often in milliseconds the API Gateway scans the specified directory for new files. Defaults to 60000. To optimize performance, it is good practice to poll often to prevent the number of files building up.
- Look in directory on FTP server: Enter the path of the target directory on the FTP server to scan for new files. For example, outfiles.
- For files that match the pattern: Specifies to scan only for files based on a pattern in a regular expression. For example, to scan only for files with a particular file extension (for example, .xml), enter an appropriate regular expression. Defaults to: ([^\s]+(\.(?i)(xml|xhtml|soap|wsdl|asmx))$)
- Establish new session for each file found: Select whether to establish a new file transfer session for each file found. This is selected by default.
- Limit the number of files to be processed: Select this option to limit the number of files that the FTP poller will process on each poll of the FTP server. This option is not selected by default.
- Specify the max number of files to be processed: Enter the maximum number of files to be processed on each poll of the FTP server. The default is 100.
- Process file with following policy: Click the browse button to select the policy to process each file with. For example, this policy may perform tasks such as validation, threat detection, content filtering, or routing over HTTP or JMS. You can select what action to take after the policy processes the file in the On Policy Success and On Policy Failure fields.
- On Policy Success: This field enables you to choose the behavior if the policy passes. For example, if Look in directory is outfiles and Move to directory is processed, then files are moved to outfiles/processed on the FTP server. The Move to directory is created if it does not exist.
- On Policy Failure: This field enables you to choose the behavior if the policy fails.

Connection type settings

The fields configured in the Connection Type tab determine the type of file transfer connection. Select the connection type from the list:
- FTP — File Transfer Protocol
- FTPS — FTP over SSL
- SFTP — SSH File Transfer Protocol

FTP and FTPS connections

The following general settings apply to FTP and FTPS connections:
- Passive transfer mode: Select this option to prevent problems caused by opening outgoing ports in the firewall relative to the file transfer server (for example, when using active FTP connections). This is selected by default. Note: To use passive transfer mode, you must perform the steps described in Configure passive transfer mode.
- File Type: Select ASCII mode for sending text-based data or Binary mode for sending binary data over the connection. Defaults to ASCII mode.

FTPS connections

The following security settings apply to FTPS connections only:
- SSL Protocol: Enter the SSL protocol used (for example, SSL or TLS). Defaults to SSL.
- Implicit: When this option is selected, security is automatically enabled as soon as the FTP poller client makes a connection to the remote file transfer service. No clear text is passed between the client and server at any time. In this case, the client defines a specific port for the remote file transfer service to use for secure connections (990). This option is not selected by default.
- Explicit: When this option is selected, the remote file transfer service must explicitly request security from the FTP poller client, and negotiate the required security. If the file transfer service does not request security, the client can allow the file transfer service to continue insecure, or refuse and/or limit the connection. This option is selected by default.
- Trusted Certificates: To connect to a remote file server over SSL, you must trust that server's SSL certificate. When you have imported this certificate into the Certificate Store, you can select it on the Trusted Certificates tab.
- Client Certificates: If the remote file server requires the FTP poller client to present an SSL certificate to it during the SSL handshake for mutual authentication, you must select this certificate from the list on the Client Certificates tab. This certificate must have a private key associated with it that is also stored in the Certificate Store.

SFTP connections

The following security settings apply to SFTP connections only:
- Present following key for authentication: Click the button on the right, and select a previously configured key to be used for authentication from the tree. To add a key, right-click the Key Pairs node, and select Add. Alternatively, you can import key pairs under the Environment Configuration > Certificates and Keys node in the Policy Studio tree. For more details, see Manage X.509 certificates and keys.
- SFTP host must present key with the following finger print: Enter the fingerprint of the public key that the SFTP host must present (for example, 43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8).
In the dashboards index you can view all the dashboards associated with your New Relic account. This includes the dashboards you have created within the New Relic One platform as well as the dashboards built in Insights.
You can also quickly access the core New Relic One features such as quick find or the chart builder that are available across the platform.
See your dashboard's basic information at a glance
You can access dashboards using the launcher on the New Relic One home page.
For each dashboard, the index displays the following information:
- Favorite status, indicated by a star
- Name: The name of the dashboard
- Account: The account the dashboard belongs to
- Created by: The user who created the dashboard
- Last edited: When the dashboard was last modified
- Created on: When the dashboard was created
Create a new dashboard
You can easily create a dashboard in New Relic One from the dashboards index:
- Select the + Create a dashboard button located at the top right corner of the dashboards index.
- Name your dashboard. Names are searchable, so we recommend giving it a meaningful name (your service or application, for instance) using words that will help you locate your dashboard easily.
- Select the account the dashboard belongs to. Choose carefully, because the account selection cannot be modified later.
- Press Create to continue, or Cancel to return to the index.
By default a dashboard is created with Anyone can edit permissions. You can edit them from the settings menu once you access the dashboard.
Alternatively, New Relic One gives you the ability to create a new dashboard:
- By cloning an existing dashboard.
- From any chart: Copy any chart from any dashboard to a new or an existing dashboard.
- In the chart builder: Add any chart you create in the chart builder to a new or an existing dashboard.
- From the entity explorer: Take any custom view from the entity manager over to dashboards.
Create dashboards with multiple pages
To create a dashboards that have multiple pages, see Add pages to a dashboard.
Dashboard permissions
Dashboards have two types of permissions:
- Read only: Only you will have full rights to work with the dashboard. Other users will be able to access the dashboard but will not be able to edit or delete it, although they will be able to clone it.
- Anyone can edit: All users will have full rights to the dashboard.
When you create a dashboard using the Create a dashboard button or by cloning another dashboard, it will have Anyone can edit rights by default. Access the new dashboard to change this setting.
Clone a dashboard
You can clone any dashboard by clicking on the Clone dashboard button which appears when you hover over any dashboard row in the index.
You can clone any dashboard regardless of your permission levels (Read only or Anyone can edit).
The dashboard will be automatically copied and the clone will be added to the index. You can access the new dashboard by clicking on the message that will pop up on your screen.
The cloned dashboard will have the same name as the original dashboard, followed by the word "copy". For example, if you clone a dashboard named this is my dashboard, the clone will be created as this is my dashboard copy. The clone will have Anyone can edit permissions.
You can edit the name and other properties of the dashboard, like the permissions, at any time.
The index displays dashboards according to sorting. To quickly find your cloned dashboard sort the dashboards by creation date, the new dashboard will appear on top.
Delete a dashboard
To delete a dashboard, hover over the dashboard row at the index until the Delete button appears. You can only delete a dashboard if you created it, or if it has Anyone can edit permissions. For more information, see the permissions information.
Mark a dashboard as favorite
You can favorite any dashboard by selecting the star icon in the dashboard index. When checked, the star will turn yellow.
Favoriting dashboards helps you:
- Find dashboards faster by sorting the index by favorites.
- Access dashboards quickly from the New Relic One home page.
To remove a dashboard from your favorites, select the star icon again.
New Relic One doesn’t retrieve favorited dashboards from Insights.
Sort your dashboards
The dashboard index is structured in two sections: favorited dashboards always show up at the top of the index, followed by the remaining dashboards you have access to.
By default, dashboards you edited recently are at the top of the index in both sections. To change this order, you can sort both sections by any of the following attributes of the dashboard:
- Dashboard name
- Account name
- Created by
- Last edited
- Created on
New Relic One remembers the sorting you set in your last session.
Search by dashboard name and author using the search box above the index, matching dashboards will be automatically marked in bold.
Filter your dashboards
You can filter your dashboards by tags, which you can use to identify users, accounts, locations, etc.
Click on the tag filter to see the available tags, you can easily select one or more tags from the list to narrow down the dashboards in the index.
You can add tags using our tagging API. See our tagging docs for more information. | https://docs.newrelic.com/docs/dashboards/explore-dashboards-index/explore-dashboards-index | 2020-05-25T09:35:49 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.newrelic.com |
Create and use search macros
Search macros are chunks of a search that you can reuse in multiple places, including saved and ad hoc searches. Search macros can be any part of a search, such as an eval statement or search term, and do not need to be a complete command. You can also specify whether or not the macro field takes any arguments.
Create search macros in Splunk Web
In Settings > Advanced Search > Search macros, click "New" to create a new search macro.
Define the search macro and its arguments
Your search macro can be any chunk of your search string or search command pipeline that you want to re-use as part of another search.
Destination app is the name of the app you want to restrict your search macro to; by default, your search macros are restricted to the Search app.
Name is the name of your search macro, such as
mymacro. If your search macro takes an argument, you need to indicate this by appending the number of arguments to the name; for example, if
mymacro required two arguments, it should be named
mymacro(2). You can create multiple search macros that have the same name but require different numbers of arguments:
foo, foo(1), foo(2), etc.
Definition is the string that your search macro expands to when referenced in another search. If the search macro takes arguments, represent each argument in the definition by surrounding its name with dollar signs, for example $arg1$.
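For example, assuming a macro named mymacro(1) whose single argument is named code, the definition and a search that invokes it might look like this (the sourcetype and field names are placeholders):

    Definition: sourcetype=access_* status=$code$

    Search: `mymacro(404)` | stats count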
Validate your argument values
You can verify that the argument values used to invoke the search macro are acceptable. How to invoke search macros is discussed in the following section, "Apply macros to saved and ad hoc searches".
- Validation Expression is a string that is an 'eval' expression that evaluates to a boolean or a string.
- If the validation expression is a boolean expression, validation succeeds when it returns true. If it returns false or is null, validation fails, and the Validation Error Message is returned.
- If the validation expression is not a boolean expression, it is expected to return a string or NULL. If it returns null, validation is considered a success. Otherwise, the string returned is rendered as the error string.
Apply macros to saved and ad hoc searches
To include a search macro in your saved or ad hoc searches, use the left quote (also known as a grave accent) character; on most English-language keyboards, this character is located on the same key as the tilde (~). You can also reference a search macro within other search macros using this same syntax.
Note: Do NOT use the straight quote character that appears in the same key as the double quote (").
Example - Combine search macros and transactions
Transactions and macro searches are a powerful combination that you can use to simplify your transaction searches and reports. This example demonstrates how you can use search macros to build reports based on a defined transaction.
Here, a search macro, named "makesessions", defines a transaction session from events that share the same clientip value that occurred within 30 minutes of each other:
transaction clientip maxpause=30m
This search takes web traffic events and breaks them into sessions, using the "makesessions" search macro:
sourcetype=access_* | `makesessions`
This search returns a report of the number of pageviews per session for each day:
sourcetype=access_* | `makesessions` | timechart span=1d sum(eventcount) as pageviews count as sessions
If you wanted to build the same report, but with varying span lengths, just save it as a search macro with an argument for the span length. Let's call this search macro, "pageviews_per_second(1)":
sourcetype=access_* | `makesessions` | timechart $spanarg$ sum(eventcount) as pageviews count as sessions
Now, you can specify a span length when you run this search from the Search app or add it to a saved search:
`pageviews_per_second(span=1h)`
Grant Proposal Language
Quotations from previous proposals using CINL resources are provided below. Please email [email protected] to discuss exact budget amounts.
Also, if you have a good write up about this, please add it to the wiki.
Quality Assurance for MRI
Periodic QA assessments are performed on the UAB Prisma 3T MRI system. A standard spherical agar phantom is scanned weekly using the fMRI quality assurance methodology described by Friedman & Glover (1). Four quantitative measures of SNR, signal fluctuation, and signal drift are calculated and graphed on a website available to all facility users. Values which differ from measured historical means by more than 2 standard deviations are flagged and investigated by lab personnel and referred to MRI system manufacturer service engineers as needed. An American College of Radiology (ACR) large MRI geometry phantom is scanned every two weeks. Seven quantitative parameters are measured: geometric accuracy, high-contrast spatial resolution, slice thickness accuracy, slice position accuracy, image intensity uniformity, percent-signal ghosting, and low-contrast object detectability. These values are given on a website that is available to all facility users. Values which differ from measured historical means by more than 2 standard deviations are flagged and investigated by lab personnel and referred to MRI system manufacturer service engineers as needed. When QA procedures are flagged by lab personnel, users will be notified so that they can make note of anomalies that may affect their data.
Friedman, L. and Glover, G. H. (2006), Report on a multicenter fMRI quality assurance protocol. J. Magn. Reson. Imaging, 23: 827–839. doi:10.1002/jmri.20583

20TB of traditional SAN storage is available via a 1G connection.
back to VBC page
back to Visual Brain Core page | https://docs.uabgrid.uab.edu/w/index.php?title=Grant_Proposal_Language&oldid=5588 | 2020-05-25T09:10:29 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.uabgrid.uab.edu |
...
- Take a backup of the running database.
Set up the database dump in a test environment and test it for any issues.
Depending on your database, select the appropriate token cleanup script from here and run it on the database dump. This takes a backup of the necessary tables, turns off SQL updates and cleans the database of unused tokens.
Once the cleanup is over, start WSO2 Identity Server pointing to the cleaned-up database dump and test throughly for any issues.
You can also schedule a cleanup task that will be automatically run after a given period of time.
Creating a simple Django app for Fedora Commons repository content
This is a tutorial to walk you through using EULfedora with Django to build a simple interface to the Fedora-Commons repository for uploading files, viewing uploaded files in the repository, editing Dublin Core metadata, and searching content in the repository.
This tutorial assumes that you have an installation of the Fedora Commons repository available to interact with. You should have some familiarity with Python and Django (at the very least, you should have worked through the Django Tutorial). You should also have some familiarity with the Fedora Commons Repository and a basic understanding of objects and content models in Fedora.
We will use pip to install EULfedora and its dependencies; on some platforms (most notably, in Windows), you may need to install some of the python dependencies manually.
Create a new Django project and set up eulfedora
Use
pip to install the
eulfedora library and its
dependencies. For this tutorial, we’ll use the latest released
version:
$ pip install eulfedora
This command should install EULfedora and its Python dependencies.
We’re going to make use of a few items in
eulcommon, so let’s
install that now too:
$ pip install eulcommon
We’ll use Django, a popular web framework, for the web components of this tutorial:
$ pip install django==1.8.3
Note
You are free to use the latest version of Django, but this tutorial was updated using Django 1.8.3.
Now, let’s go ahead and create a new Django project. We’ll call it simplerepo:
$ django-admin.py startproject simplerepo
Go ahead and do some minimal configuration in your django settings. For simplicity, you can use a sqlite database for this tutorial (in fact, we won’t make much use of this database).
In addition to the standard Django settings, add
eulfedora to
your
INSTALLED_APPS and add Fedora connection configurations to
your
settings.py so that the
eulfedora
Repository object can automatically connect
to your configured Fedora repository:
# Fedora Repository settings
FEDORA_ROOT = ''
FEDORA_USER = 'fedoraAdmin'
FEDORA_PASSWORD = 'fedoraAdmin'
FEDORA_PIDSPACE = 'simplerepo'
Since we’re planning to upload content into Fedora, make sure you are using a fedora user account that has permission to upload, ingest, and modify content.
Create a model for your Fedora object
Before we can upload any content, we need to create an object to represent how we want to store that data in Fedora. Let’s create a new Django app where we will create this model and associated views:
$ python manage.py startapp repo
In
repo/models.py, create a class that extends
DigitalObject:
from eulfedora.models import DigitalObject, FileDatastream

class FileObject(DigitalObject):
    FILE_CONTENT_MODEL = 'info:fedora/genrepo:File-1.0'
    CONTENT_MODELS = [ FILE_CONTENT_MODEL ]

    file = FileDatastream("FILE", "Binary datastream", defaults={
        'versionable': True,
    })
What we’re doing here extending the default
DigitalObject, which gives us Dublin Core
and RELS-EXT datastream mappings for free, since those are part of
every Fedora object. In addition, we’re defining a custom datastream
that we will use to store the binary files that we’re going to upload
for ingest into Fedora. This configures a versionable
FileDatastream with a datastream id of
FILE and a default datastream label of
Binary datastream. We
could also set a default mimetype here, if we wanted.
Let’s inspect our new model object in the Django console for a moment:
$ python manage.py shell
The easiest way to initialize a new object is to use the Repository object
get_object method, which can also be used
to access existing Fedora objects. Using the Repository object allows us to seamlessly pass along the Fedora
connection configuration that the Repository object picks up from your django
settings.py:
>>> from eulfedora.server import Repository
>>> from simplerepo.repo.models import FileObject
# initialize a connection to the configured Fedora repository instance
>>> repo = Repository()
# create a new FileObject instance
>>> obj = repo.get_object(type=FileObject)
# this is an uningested object; it will get the default type of generated pid when we save it
>>> obj
<FileObject (generated pid; uningested)>
# every DigitalObject has Dublin Core
>>> obj.dc
<eulfedora.models.XmlDatastreamObject object at 0xa56f4ec>
# dc.content is where you access and update the actual content of the datastream
>>> obj.dc.content
<eulxml.xmlmap.dc.DublinCore object at 0xa5681ec>
# print out the content of the DC datastream - nothing there (yet)
>>> print obj.dc.content.serialize(pretty=True)
<oai_dc:dc xmlns:
# every DigitalObject also gets rels_ext for free
>>> obj.rels_ext
<eulfedora.models.RdfDatastreamObject object at 0xa56866c>
# this is an RDF datastream, so the content uses rdflib instead of eulxml.xmlmap
>>> obj.rels_ext.content
<Graph identifier=omYiNhtw0 (<class 'rdflib.graph.Graph'>)>
# print out the content of the rels_ext datastream
# notice that it has a content-model relation defined based on our class definition
>>> print obj.rels_ext.content.serialize(pretty=True)
<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:
  <rdf:Description rdf:
    <fedora-model:hasModel rdf:
  </rdf:Description>
</rdf:RDF>
# our FileObject also has a custom file datastream, but there's no content yet
>>> obj.file
<eulfedora.models.FileDatastreamObject object at 0xa56ffac>
# save the object to Fedora
>>> obj.save()
# our object now has a pid that was automatically generated by Fedora
>>> obj.pid
'simplerepo:1'
# the object also has information about when it was created, modified, etc
>>> obj.created
datetime.datetime(2011, 3, 16, 19, 22, 46, 317000, tzinfo=tzutc())
>>> print obj.created
2011-03-16 19:22:46.317000+00:00
# datastreams have this kind of information as well
>>> print obj.dc.mimetype
text/xml
>>> print obj.dc.created
2011-03-16 19:22:46.384000+00:00
# we can modify the content and save the changes
>>> obj.dc.content.
>>> obj.save()
We’ve defined a FileObject with a custom content model, but we haven’t created the content model object in Fedora yet. For simple content models, we can do this with a custom django manage.py command. Run it in verbose mode so you can more details about what it is doing:
$ python manage.py syncrepo -v 2
You should see some output indicating that content models were generated for the class you just defined.
This command is analogous to the Django
syncdb command. It
looks through your models for classes that extend DigitalObject, and
when it finds content models defined that it can generate, which don’t
already exist in the configured repository, it will generate them and
ingest them into Fedora. It can also be used to load initial objects
by way of simple XML filters.
Create a view to upload content
So, we have a custom
DigitalObject defined.
Let’s do something with it now.
Display an upload form
We haven’t defined any url patterns yet, so let’s create a
urls.py
for our repo app and hook that into the main project urls. Create
repo/urls.py with this content:
from django.conf.urls import patterns, url

from simplerepo.repo import views

urlpatterns = patterns('',
    url(r'^upload/$', views.upload, name='upload'),
)
Then include that in your project
urls.py:
url(r'^', include('repo.urls')),
Now, let’s define a simple upload form and a view method to correspond
to that url. First, for the form, create a file named
repo/forms.py and add the following:
from django import forms

class UploadForm(forms.Form):
    label = forms.CharField(max_length=255,  # fedora label maxes out at 255 characters
        help_text='Preliminary title for the new object. 255 characters max.')
    file = forms.FileField()
The minimum we need to create a new FileObject in Fedora is a file to ingest and a label for the object in Fedora. We’re could actually make the label optional here, because we could use the file name as a preliminary label, but for simplicity let’s require it.
Now, define an upload view to use this form. For now, we’re just
going to display the form on GET; we’ll add the form processing in a
moment. Edit
repo/views.py and add:
from django.shortcuts import render

from simplerepo.repo.forms import UploadForm

def upload(request):
    if request.method == 'GET':
        form = UploadForm()
    return render(request, 'repo/upload.html', {'form': form})
But we still need a template to display our form. Create a template
directory and add it to your
TEMPLATES configuration in
settings.py:
TEMPLATES = [
    {
        ...
        'DIRS': [os.path.join(BASE_DIR, 'templates')],  # for example
    }
Create a
repo directory inside your template directory, and then
create
upload.html inside that directory and give it this content:
<form method="post" enctype="multipart/form-data">{% csrf_token %} {{ form.as_p }} <input type="submit" value="Submit"/> </form>
Let’s start the django server and make sure everything is working so far. Start the server:
$ python manage.py runserver
Then load in your Web browser. You should see a simple upload form with the two fields defined.
Process the upload
Ok, but our view doesn’t do anything yet when you submit the web form.
Let’s add some logic to process the form. We need to import the
Repository and FileObject classes and use the posted form data to
initialize and save a new object, rather like what we did earlier when
we were investigating FileObject in the console. Modify your
repo/views.py so it looks like this:
from django.shortcuts import render

from eulfedora.server import Repository

from simplerepo.repo.forms import UploadForm
from simplerepo.repo.models import FileObject

def upload(request):
    obj = None
    if request.method == 'POST':
        form = UploadForm(request.POST, request.FILES)
        if form.is_valid():
            # initialize a connection to the repository and create a new FileObject
            repo = Repository()
            obj = repo.get_object(type=FileObject)
            # set the file datastream content to use the django UploadedFile object
            obj.file.content = request.FILES['file']
            # use the browser-supplied mimetype for now, even though we know this is unreliable
            obj.file.mimetype = request.FILES['file'].content_type
            # let's store the original file name as the datastream label
            obj.file.label = request.FILES['file'].name
            # set the initial object label from the form as the object label and the dc:title
            obj.label = form.cleaned_data['label']
            obj.dc.content.title = form.cleaned_data['label']
            obj.save()
            # re-init an empty upload form for additional uploads
            form = UploadForm()
    elif request.method == 'GET':
        form = UploadForm()
    return render(request, 'repo/upload.html', {'form': form, 'obj': obj})
When content is posted to this view, we’re binding our form to the
request data and, when the form is valid, creating a new FileObject
and initializing it with the label and file that were posted, and
saving it. The view is now passing that object to the template, so if
it is defined that should mean we’ve successfully ingested content
into Fedora. Let’s update our template to show something if that is
defined. Add this to
repo/upload.html before the form is
displayed:
{% if obj %}
  <p>Successfully ingested <b>{{ obj.label }}</b> as {{ obj.pid }}.</p>
  <hr/>
  {# re-display the form to allow additional uploads #}
  <p>Upload another file?</p>
{% endif %}
Go back to the upload page in your web browser. Go ahead and enter a label, select a file, and submit the form. If all goes well, you should see a the message we added to the template for successful ingest, along with the pid of the object you just created.
Display uploaded content
Now we have a way to get content in Fedora, but we don’t have any way to get it back out. Let’s build a display method that will allow us to view the object and its metadata.
Object display view
Add a new url for a single-object view to your urlpatterns in
repo/urls.py:
url(r'^objects/(?P<pid>[^/]+)/$', views.display, name='display'),
Then define a simple view method that takes a pid in
repo/views.py:
def display(request, pid):
    repo = Repository()
    obj = repo.get_object(pid, type=FileObject)
    return render(request, 'repo/display.html', {'obj': obj})
For now, we’re going to assume the object is the type of object we expect and that we have permission to access it in Fedora; we can add error handling for those cases a bit later.
We still need a template to display something. Create a new file
called
repo/display.html in your templates directory, and then add
some code to output some information from the object:
<h1>{{ obj.label }}</h1>
<table>
  <tr><th>pid:</th><td>{{ obj.pid }}</td></tr>
  {% with obj.dc.content as dc %}
  <tr><th>title:</th><td>{{ dc.title }}</td></tr>
  <tr><th>creator:</th><td>{{ dc.creator }}</td></tr>
  <tr><th>date:</th><td>{{ dc.date }}</td></tr>
  {% endwith %}
</table>
We’re just using a simple table layout for now, but of course you can display this object information anyway you like. We’re just starting with a few of the Dublin Core fields for now, since most of them don’t have any content yet.
Go ahead and take a look at the object you created before using the
upload form. If you used the
simplerepo PIDSPACE configured
above, then the first item you uploaded should now be viewable at.
You might notice that we’re displaying the text ‘None’ for creator and
date. This is because those fields aren’t present at all yet in our
object Dublin Core, and
eulxml.xmlmap fields distinguish
between an empty XML field and one that is not-present at all by using
the empty string and None respectively. Still, that doesn’t look
great, so let’s adjust our template a little bit:
<tr><th>creator:</th><td>{{ dc.creator|default:'' }}</td></tr>
<tr><th>date:</th><td>{{ dc.date|default:'' }}</td></tr>
We actually have more information about this object than we’re currently displaying, so let’s add a few more things to our object display template. The object has information about when it was created and when it was last modified, so let’s add a line after the object label:
<p>Uploaded at {{ obj.created }}; last modified {{ obj.modified }}.</p>
These fields are actually Python datetime objects, so we can use Django template filters to display then a bit more nicely. Try modifying the line we just added:
<p>Uploaded at {{ obj.created }}; last modified {{ obj.modified }} ({{ obj.modified|timesince }} ago).</p>
It’s pretty easy to display the Dublin Core datastream content as XML
too. This may not be something you’d want to expose to regular users,
but it may be helpful as we develop the site. Add a few more lines at
the end of your
repo/display.html template:
<hr/>
<pre>{{ obj.dc.content.serialize }}</pre>
You could do this with the RELS-EXT just as easily (or basically any XML or RDF datastream), although it may not be as valuable for now, since we’re not going to be modifying the RELS-EXST just yet.
So far, we’ve got information about the object and the Dublin Core displaying, but nothing about the file that we uploaded to create this object. Let’s add a bit more to our template:
<p>{{ obj.file.label }} ({{ obj.file.info.size|filesizeformat }}, {{ obj.file.mimetype }})</p>
Remember that in our
upload view method we set the file datastream
label and mimetype based on the file that was uploaded from the web
form. Those are stored in Fedora as part of the datastream
information, along with some other things that Fedora calculates for
us, like the size of the content.
Download File datastream
Now we’re displaying information about the file, but we don’t actually have a way to get the file back out of Fedora yet. Let’s add another view.
Add another line to your url patterns in
repo/urls.py:
url(r'^objects/(?P<pid>[^/]+)/file/$', views.file, name='download'),
And then update
repo/views.py to define the new view method.
First, we need to add a new import:
from eulfedora.views import raw_datastream
eulfedora.views.raw_datastream() is a generic view method that
can be used for displaying datastream content from fedora objects. In
some cases you may be able to use
raw_datastream() directly (e.g., it might be
useful for displaying XML datastreams), but in this case we want to
add an extra header to indicate that the content should be downloaded.
Add this method to
repo/views.py:
def file(request, pid):
    dsid = 'FILE'
    extra_headers = {
        'Content-Disposition': "attachment; filename=%s.pdf" % pid,
    }
    return raw_datastream(request, pid, dsid, type=FileObject,
                          headers=extra_headers)
We’ve defined a content disposition header so the user will be
prompted to save the response with a filename based on the pid of the
object in fedora. The
raw_datastream() method
will add a few additional response headers based on the datastream
information from Fedora. Let’s link this in from our object display
page so we can try it out. Edit your
repo/display.html template
and turn the original filename into a link:
<a href="{% url 'download' obj.pid %}">{{ obj.file.label }}</a>
Now, try it out! You should be able to download the file you originally uploaded.
But, hang on– you may have noticed, there are a couple of details hard-coded in our download view that really shouldn’t be. What if the file you uploaded wasn’t a PDF? What if we decide we want to use a different datastream ID? Let’s revise our view method a bit:
def file(request, pid):
    dsid = FileObject.file.id
    repo = Repository()
    obj = repo.get_object(pid, type=FileObject)
    extra_headers = {
        'Content-Disposition': "attachment; filename=%s" % obj.file.label,
    }
    return raw_datastream(request, pid, dsid, type=FileObject,
                          headers=extra_headers)
We can get the ID for the file datastream directly from the
FileDatastream object on our
FileObject class. And in our upload view we set the original file
name as our datastream label, so we’ll go ahead and use that as the
download name.
Edit Fedora content
So far, we can get content into Fedora and we can get it back out. Now, how do we modify it? Let’s build an edit form & a view that we can use to update the Dublin Core metadata.
XmlObjectForm for Dublin Core
We’re going to create an
eulxml.forms.XmlObjectForm instance
for editing
eulxml.xmlmap.dc.DublinCore.
XmlObjectForm is roughly analogous to Django’s
ModelForm, except in place of a Django Model we
have an
XmlObject that we want to make
editable.
First, add some new imports to
repo/forms.py:
from eulxml.xmlmap.dc import DublinCore
from eulxml.forms import XmlObjectForm
Then we can define our new edit form:
class DublinCoreEditForm(XmlObjectForm):
    class Meta:
        model = DublinCore
        fields = ['title', 'creator', 'date']
We’ll start simple, with just the three fields we’re currently displaying on
our object display page. This code creates a custom XmlObjectForm with a model of DublinCore (which for us is an instance of XmlObject).
XmlObjectForm
knows how to look at the model object and figure out how to generate form
fields that correspond to the xml fields. By adding a list of fields, we
tell XmlObjectForm to only build form fields for these attributes of our
model.
Now we need a view and a template to display our new form. Add
another url to
repo/urls.py:
url(r'^objects/(?P<pid>[^/]+)/edit/$', views.edit, name='edit'),
And then define the corresponding method in
repo/views.py. We
need to import our new form:
from repo.forms import DublinCoreEditForm
Then, use it in a view method. For now, we’ll just instantiate the form, bind it to our content, and pass it to a template:
def edit(request, pid):
    repo = Repository()
    obj = repo.get_object(pid, type=FileObject)
    form = DublinCoreEditForm(instance=obj.dc.content)
    return render(request, 'repo/edit.html', {'form': form, 'obj': obj})
We have to instantiate our object, and then pass in the content of
the DC datastream as the instance to our model. Our XmlObjectForm is
using
DublinCore as its model, and
obj.dc.content is an instance of DublinCore with data loaded from
Fedora.
Create a new file called
repo/edit.html in your templates
directory and add a little bit of code to display the form:
<h1>Edit {{ obj.label }}</h1>
<form method="post">{% csrf_token %}
  <table>{{ form.as_table }}</table>
  <input type="submit" value="Save"/>
</form>
Load the edit page for that first item you uploaded. You should see a form with the three fields that we listed. Let’s modify our view method so it will do something when we submit the form:
def edit(request, pid):
    repo = Repository()
    obj = repo.get_object(pid, type=FileObject)
    if request.method == 'POST':
        form = DublinCoreEditForm(request.POST, instance=obj.dc.content)
        if form.is_valid():
            form.update_instance()
            obj.save()
    elif request.method == 'GET':
        form = DublinCoreEditForm(instance=obj.dc.content)
    return render(request, 'repo/edit.html', {'form': form, 'obj': obj})
When the data is posted to this view, we’re binding our form to the posted
data and the XmlObject instance. If it’s valid, then we can call the
update_instance() method, which actually
updates the
XmlObject that is attached to our DC
datastream object based on the form data that was posted to the view. When
we save the object, the
DigitalObject class
detects that the
dc.content has been modified and will make the
necessary API calls to update that content in Fedora.
Note
It may not matter too much in this case, since we are working with simple
Dublin Core XML, but it’s probably worth noting that the form
is_valid() check actually includes XML
Schema validation on
XmlObject instances that have a schema defined.
In most cases, it should be difficult (if not impossible) to generate
invalid XML via an
XmlObjectForm; but if you edit
the XML manually and introduce something that is not schema-valid, you’ll
see the validation error when you attempt to update that content with
XmlObjectForm.
Try entering some text in your form and submitting the data. It should update your object in Fedora with the changes you made. However, our interface isn’t very user friendly right now. Let’s adjust the edit view to redirect the user to the object display after changes are saved.
We’ll need some additional imports:
from django.core.urlresolvers import reverse
from eulcommon.djangoextras.http import HttpResponseSeeOtherRedirect
Note
HttpResponseSeeOtherRedirect is a
custom subclass of
django.http.HttpResponse analogous to
HttpResponseRedirect or
HttpResponsePermanentRedirect, but it returns a
See Other
redirect (HTTP status code 303).
After the
object.save() call in the edit view method, add this:
return HttpResponseSeeOtherRedirect(reverse('display', args=[obj.pid]))
Now when you make changes to the Dublin Core fields and submit the form, it should redirect you to the object display page and show the changes you just made.
Right now our edit form only has three fields. Let’s customize it a bit more. First, let’s add all of the Dublin Core fields. Replace the original list of fields in DublinCoreEditForm with this:
fields = ['title', 'creator', 'contributor', 'date', 'subject', 'description', 'relation', 'coverage', 'source', 'publisher', 'rights', 'language', 'type', 'format', 'identifier']
Right now all of those are getting displayed as text inputs, but we might want to treat some of them a bit differently. Let’s customize some of the widgets:
widgets = {
    'description': forms.Textarea,
    'date': SelectDateWidget,
}
You’ll also need to add another import line so you can use
SelectDateWidget:
from django.forms.extras.widgets import SelectDateWidget
Reload the object edit page in your browser. You should see all of the Dublin Core fields we added, and the custom widgets for description and date. Go ahead and fill in some more fields and save your changes.
While we’re adding fields, let’s change our display template so that
we can see any Dublin Core fields that are present, not just those
first three we started with. Replace the title, creator, and date
lines in your
repo/display.html template with this:
{% for el in dc.elements %}
  <tr><th>{{ el.name }}:</th><td>{{ el }}</td></tr>
{% endfor %}
And then add an extra parameter ‘dc’ to the render_to_response call in the display function:
def display(request, pid):
    repo = Repository()
    obj = repo.get_object(pid, type=FileObject)
    return render_to_response('display.html',
                              {'obj': obj, 'pid': pid, 'dc': obj.dc.content})
Now when you load the object page in your browser, you should see all of the fields that you entered data for on the edit page.
Search Fedora content
So far, we’ve just been working with the objects we uploaded, where we know the PID of the object we want to view or edit. But how do we come back and find that again later? Or find other content that someone else created? Let’s build a simple search to find objects in Fedora.
Note
For this tutorial, we’ll use the Fedora findObjects API method. This search is quite limited, and for a production system, you’ll probably want to use something more powerful, such as Solr, but findObjects is enough to get you started.
The built-in fedora search can either do a keyword search across all
indexed fields or a fielded search. For the purposes of this
tutorial, a simple keyword search will accomplish what we need. Let’s
create a simple form with one input for keyword search terms. Add the
following to
repo/forms.py:
class SearchForm(forms.Form):
    keyword = forms.CharField()
Add a search url to
repo/urls.py:
url(r'^search/$', views.search, name='search'),
Then import the new form into
repo/views.py and define the view
that will actually do the searching:
from repo.forms import SearchForm

def search(request):
    objects = None
    if request.method == 'POST':
        form = SearchForm(request.POST)
        if form.is_valid():
            repo = Repository()
            objects = list(repo.find_objects(form.cleaned_data['keyword'],
                                             type=FileObject))
    elif request.method == 'GET':
        form = SearchForm()
    return render(request, 'repo/search.html',
                  {'form': form, 'objects': objects})
As before, on a GET request we simply pass the form to the template for
display. When the request is a POST with valid search data, we’re going to
instantiate our
Repository object and call the
find_objects() method. Since we’re just
doing a term search, we can just pass in the keywords from the form. If you
wanted to do a fielded search, you could build a keyword-argument style list
of fields and search terms instead. We’re telling
find_objects() to return everything it
finds as an instance of our
FileObject class for now, even though that
is an over-simplification and in searching across all content in the Fedora
repository we may well find other kinds of content.
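As a rough sketch of that fielded variant (the field names and values here are purely illustrative; they are passed as keyword arguments that map onto the fields findObjects supports):
repo = Repository()
objects = list(repo.find_objects(title='dissertation', creator='smith',
                                 type=FileObject))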
Let’s create a search template to display the search form and search
results. Create
repo/search.html in your templates directory and
add this:
<h1>Search for objects</h1>
<form method="post">{% csrf_token %}
  {{ form.as_p }}
  <input type="submit" value="Submit"/>
</form>
{% if objects %}
<hr/>
{% for obj in objects %}
  <p><a href="{% url 'display' obj.pid %}">{{ obj.label }}</a></p>
{% endfor %}
{% endif %}
This template will always display the search form, and if any objects were
found, it will list them. Let’s take it for a whirl! Go to your new search page and enter a search term. Try searching
for the object labels, any of the values you entered into the Dublin Core
fields that you edited, or if you’re using
simplerepo for your
configured
PIDSPACE, search on
simplerepo:* to find the objects
you’ve uploaded.
When you are searching across disparate content in the Fedora repository,
depending on how you have access configured for that repository, there is a
possibility that the search could return an object that the current user
doesn’t actually have permission to view. For efficiency reasons, the
DigitalObject postpones any Fedora API calls
until the last possible moment, which means that in our search results, any
connection errors will happen in the template instead of in the view method.
Fortunately,
eulfedora.templatetags has a template tag to help with
that! Let’s rewrite the search template to use it:
{% load fedora %}
<h1>Search for objects</h1>
<form method="post">{% csrf_token %}
  {{ form.as_p }}
  <input type="submit" value="Submit"/>
</form>
{% if objects %}
<hr/>
{% for obj in objects %}
  {% fedora_access %}
    <p><a href="{% url 'display' obj.pid %}">{{ obj.label }}</a></p>
  {% permission_denied %}
    <p>You don't have permission to view this object.</p>
  {% fedora_failed %}
    <p>There was an error accessing fedora.</p>
  {% end_fedora_access %}
{% endfor %}
{% endif %}
What we’re doing here is loading the
fedora template tag library, and
then using fedora_access for each object that
we want to display. That way we can catch any permission or connection
errors and display some kind of message to the user, and still display all
the content they have permission to view.
For this template tag to work correctly, you’re also going to have to disable template debugging (otherwise, the Django template debugging
will catch the error first). Edit your
settings.py and change
TEMPLATE_DEBUG to False. | https://eulfedora.readthedocs.io/en/1.5.2/tutorials/fedora.html | 2020-05-25T08:09:50 | CC-MAIN-2020-24 | 1590347388012.14 | [] | eulfedora.readthedocs.io |
Response Time Observability with Open Telemetry
RTO - Observability using the Open Telemetry API.
OpenTelemetry is an Observability Framework, created by the merging of OpenCensus and OpenTracing — the latter was used in the previous version of the SDK. OpenTelemetry is currently in testing, and will be released later in 2020, at which point it will be made available in the Couchbase SDKs. | https://docs.couchbase.com/go-sdk/2.0/concept-docs/response-time-observability.html | 2020-05-25T09:16:37 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.couchbase.com |
This page describes the supported platforms for Live Forms.
The information on this page applies to the initial version of Live Forms v9.2 deployed to the frevvo Cloud in April 2020.
Version 9.2 is a Cloud-only release.
On This Page:
frevvo supports major versions for two years after the first GA release date. Please see our complete list of End of Life dates on the Software Downloads Directory page. For some releases, the End of Life Date may be extended. Documentation for Live Forms versions that are no longer supported is available on our frevvo Documentation Directory. | https://docs.frevvo.com/d/display/frevvo92/Supported+Platforms | 2020-05-25T09:08:06 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.frevvo.com |
How to read this document
For those wishing to get an introduction to the ADMX format, the conceptual topics in this reference guide provide a starting point. It is recommended you read the following topics:
- .admx and .adml File Structure
- Best Practices for Authoring ADMX Files
- Referencing the Windows Base ADMX File
- Creating a Custom Base ADMX File
- Associating .admx and .adml Parameter Information
- Comparing ADM and ADMX Syntax
Note
This document will use the following rules when referring to the different components of ADMX files. When referring to the general set of administrative template files for Windows Vista registry settings, we will refer to "ADMX files." This document will refer to ".admx files" when referring to the language-neutral administrative template files and ".adml files" when referring to the language-dependent administrative template files. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753471(v=ws.10)?redirectedfrom=MSDN | 2020-05-25T07:04:23 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.microsoft.com |
Key features:
• Based on LSA-PLUS® connection technology
• Fits on all LSA-PLUS® back mount frames
• Suitable for use in all Telecom circuits
• Flexible indoor and outdoor use
• Robust long term environmental stability
• Supports LSA-PLUS® series 2 accessories
• Overvoltage protection using 10 pair protection magazine only | https://docs.msp-ict.com/Connection-module-termination-terminal-telecom-krone-en.html | 2020-05-25T07:46:22 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.msp-ict.com |
New Relic offers a Dropwizard 1.3 extension for logs in context. Before configuring it, ensure your configuration meets the following requirements:
- Java agent 5.6.0 or higher: Install or update
- JVM argument -javaagent enabled on the Java agent.
- Dropwizard 1.3 package installed and working on the application, with the original Dropwizard appenders and logging factory.
Configure logs in context with New Relic Logs
To configure New Relic logs in context with Dropwizard:
- Enable New Relic Logs with a compatible log forwarding plugin.
- Install or update the Java agent.
- Configure the Dropwizard extension.
Configure the Dropwizard extension
To configure logs in context with the Dropwizard 1.3 extension, complete the following steps:
Update your project's dependencies to include the Dropwizard 1.3 extension as applicable:
To update with Gradle, add the following to your build.gradle file:
dependencies {
    compile("com.newrelic.logging:dropwizard:1.1")
}
To update with Maven, add the following to your pom.xml file:
<dependencies>
  <dependency>
    <groupId>com.newrelic.logging</groupId>
    <artifactId>dropwizard</artifactId>
    <version>1.1</version>
  </dependency>
</dependencies>
- Update your Dropwizard .yaml configuration file with a newrelic-json layout, replacing the currently used type: console or type: file with either type: newrelic-console or type: newrelic-file as appropriate. For example:
logging:
  appenders:
    - type: newrelic-console
      # Add the two lines below if you don't have a layout specified on the appender.
      # If you have a layout, remove all parameters to the layout and set the type.
      layout:
        type: newrelic-json
Alternatively, the New Relic Dropwizard extension also supports a
log-format layout type that uses the standard Dropwizard logging. For testing purposes, you can change the type of the layout with a one-line change:
logging:
  appenders:
    - type: newrelic-file
      # This format will be ignored by the newrelic-json layout,
      # but used by the log-format layout.
      logFormat: "%date{ISO8601} %c %-5p: %m trace.id=%mdc{trace.id} span.id=%mdc{span.id}%n"
      layout:
        # type: newrelic-json
        type: log-format
| https://docs.newrelic.com/docs/logs/enable-logs/logs-context-java/configure-logs-context-dropwizard | 2020-05-25T09:10:57 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.newrelic.com
Create a Schedule
Our scheduler can save you a lot of time. It allows you to build flexible schedules to automatically create checklists on a recurring basis.
You can set your schedules to repeat in the following ways:
- Daily
- Weekly
- Monthly
- Yearly
- Every X Days
- Daily - Specific Days of the Week
- Every X Weeks
- Every X Weeks - Specific Days of the Week
- Every X Months
- Every X Years
How to Create a Schedule
Click on the calendar icon in the navigation toolbar on the left side of the screen.
This will take you to the scheduling page.
This page allows you to view your schedules in the following ways:
- View your schedules in a grid
- View the checklists that your schedules have created, or will be creating in the future:
- In a calendar
- In a list
You can also create a new schedule from here. Click the 'Create New Schedule' button located in the top left of the page.
This will take you to the schedule page. From here you can enter the following details for your schedule:
- Schedule Name - A name to identify your schedule.
- Template - The template that the scheduler will use to create the checklists from.
- Checklist Name - The name the scheduler will set for each checklist it creates. You can use auto generated values to get the scheduler to insert certain values such as the start date or start time of the checklist.
- Active - Whether or not the schedule is currently active.
- Repeats - The repeating interval, which can be one of the following values:
- Never
- Daily
- Weekly
- Monthly
- Yearly
| https://docs.checkflow.io/docs/scheduling/create-a-schedule | 2020-05-25T06:44:46 | CC-MAIN-2020-24 | 1590347388012.14 | [array(['/img/nav-bar-scheduling.png', 'Navigation Bar Scheduling Button'],
dtype=object)
array(['/img/create-new-schedule-button.png',
'Navigation Bar Scheduling Button'], dtype=object)
array(['/img/create-new-schedule.gif', 'Create New Schedule'],
dtype=object) ] | docs.checkflow.io |
Interview with a Microsoft Partner, SharePoint MVP, and Forum Ninja - Trevor Seward
Welcome to another edition of Interview with a Forum Ninja! This interview is a long time coming! Oh, and a correction to the blog title... Trevor is a former SharePoint MVP. Now they're called an "Office Services and Servers MVP"! Let's learn a bit more about...
Here are some of his profile statistics:
- 129K Recognition Points
- Currently Ranked #41 this year (of 10K+)
- 6 Gold Medals, 8 Silver, 19 Bronze
- 7,383 Forum Answers!
- 18,687 Forum Replies!
Recent questions he answered:
- Licensing Office Web Apps for editing
- Site Collection Back up by PowerShell Command
- Shutting Down SharePoint Farm for extended periods
- Best Practice for SharePoint 2013 DMZ setup for SharePoint
- SharePoint authentication using ADFS
- SharePoint 2010 - sword.dll - latest version issues
- Access Services Setup SharePoint 2016, and SQL 2016
That's enough! Let's get to the questions...
===========
Who are you, where are you, and what do you do? What are your specialty technologies?
My name is Trevor Seward and I reside in Washington State with my wife and two kids. I work on SharePoint… a lot. I am a four-time Office Services and Servers MVP (formerly known as SharePoint MVP) and work for a consulting company, ZAACT, based out of Draper, Utah. I am part of the DevOps team, working on a variety of solutions for our customers in the Microsoft stack, both on-premises and Azure/Office 365. In my spare time, either I’m working on open source projects () or playing games including Guild Wars 2 (lots of World vs. World), Elite: Dangerous, The Witcher 3, and Divinity: Original Sin.
What are your big projects right now?
I’m currently working on a book! It is on the topic of SharePoint Server 2016 administration and architecture. Writing a book is a larger project than I thought it would be, but rewarding once you hand in that final chapter and can clear your head for a few months while the book undergoes the various reviews. Being part of the overall SharePoint beta, including working as a vendor for a period of time supporting SharePoint 2016 in “production” has proved to be valuable when providing input on other people’s questions in the new SharePoint Server 2016 forum. There’s a lot to learn about SharePoint Server 2016, including some of the significant changes regarding MinRole and Profile Synchronization options, which we’re seeing a lot of questions on.
What Microsoft Forums do you participate in? What different roles do you play?
I primarily participate in five forums, of which I also moderate. These are the SharePoint 2010 and SharePoint 2013 forums named ‘General Discussions and Questions’, ‘Setup, Upgrade, Administration and Operations’ as well as the simply titled forum, ‘SharePoint Server 2016’.
In what other sites and communities do you contribute your technical knowledge?
I also participate and am a moderator of reddit.com/r/sharepoint where we currently have about 4,600 subscribers. In addition to reddit, I help out on SharePoint Stack Exchange, and where possible, the Twitter hashtag #SPHelp. The SharePoint community is incredible with many very talented people ready to help; always feel free to reach out!
What are your favorite forum threads you’ve contributed on?
The ones where we come to a good solution that the original poster is happy with! It is amazing to have an individual come back to answer more and more questions because you helped them once. It instills a certain amount of confidence in the support they can receive from the forums.
What could we do differently on MSDN and TechNet Forums?
I’m always hoping for a certain amount of improvement in the forums, to bring them into that ‘modern era’, where mobile is taken into consideration and modern features like notification of new posts or replies, similar to how StackExchange functions. Cross-browser compatibility with the input editor is something I’m hoping improves, for example, showing formatted text within Chrome when reading posts.
Do you have any comments for product groups about the MSDN and TechNet Forums that they moderate?
As a moderator, along with all other moderators, I have the ability to mark (or unmark!) any post as an answer to a question, but this is something I rarely do. As others can do, if I feel a particular post has answered the question, I will propose it as such and allow that original poster to mark it as a final answer. This is where I think the Microsoft moderators could improve – not every thread has an answer, but I often see that a post has been marked as an answer, even when it wasn’t an answer. This has been one of the heavier criticisms of the MSDN and TechNet forums, especially those used to the StackExchange format where only the asker can mark any one particular response as an answer.
Do you have any tips for people asking questions on MSDN/TechNet Forums?
Always be very thorough with your question. Provide as much detail as humanly possible, otherwise the response to your question will be more questions! As forum participants are essentially providing support “in the blind”, the more information at their fingertips, the faster your question can be resolved. If you need to post logs, make sure to clean them up of any sensitive information, which could include server names, usernames, or public IP addresses and host them outside of the forum, for example on OneDrive, gist.github.com, or PasteBin.com. These external services provide an interface and formatting that is much easier to read and review than the built-in forum software.
=================
Please join me in thanking Trevor for his epic contributions to the Microsoft forums!
May the Forums be with you! (Don't be a rogue one!)
- Ninja Ed | https://docs.microsoft.com/en-us/archive/blogs/forumninjas/interview-with-a-microsoft-partner-sharepoint-mvp-and-forum-ninja-trevor-seward | 2020-05-25T08:06:29 | CC-MAIN-2020-24 | 1590347388012.14 | [array(['https://msdnshared.blob.core.windows.net/media/2016/08/Forum-Ninjas-Blog.jpg',
'Forum Ninjas Blog'], dtype=object) ] | docs.microsoft.com |
Fall 2016 Research Computing Day
Date: September 14, 2016
Venue: Hill Student Center Ballroom C & D
Open to all UAB faculty, staff, and students. Registration is free but please register here to attend.
Agenda
Frank Skidmore
Kristina Visscher
Ryoichi Kawai
Yuhua Song
Dell HPC Strategy and Technologies
Brian Marquis, HPC Solutions Architect, Dell
David Crossman
Hemant Tiwari | https://docs.uabgrid.uab.edu/w/index.php?title=RCDay2016&direction=next&oldid=5192 | 2020-05-25T09:20:53 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.uabgrid.uab.edu |
Overview
This document will guide you through the installation of ThreatSTOP’s IP Defense iptables linux integration on an Azure provisioned Ubuntu virtual machine.
- Step 1. Create and login to ThreatSTOP account to copy License / API Key
- Step 2. Return to Azure and create new “ThreatSTOP IP Firewall” device from the marketplace offerings
- Fill out Azure VM provisioning form providing machine type, storage, network, license, etc. Wait for device to provision
- Step 3. Secure SSH & log in to the deployed Azure ThreatSTOP IP Firewall
In the Azure Marketplace, search for and select ThreatSTOP IP Firewall in the search results.
Click Create. You will be led to a form to create your new IP Firewall.
Provisioning an IP Firewall
Basics
- Name your ThreatSTOP IP Firewall
- Provide an admin username
- Select an authentication type: Password or SSH public key
- Provide a password or SSH public key
- Select a subscription model
- Create a new resource group and provide a name
- Select a datacenter location for your virtual environment
- Click [OK]
Network and Storage Settings
- Select your virtual machine size
- Create a virtual network, or select one that exists in the datacenter to which the network is being deployed.
- Create a subnet. A new sub-net will need to be created prior to deployment for the IP firewall if one does not already exist. For testing purposes you may wish to deploy a new virtual net alongside an existing vnet, and then roll the servers into the existing vnet if they meet approval.
- Create a new public address
- Provide a DNS prefix / hostname.
- Configure a storage account.
Firewall Configuration
- Enter the License/API key you copied in ThreatSTOP Account Setup above.
- Click OK.
Summary
- Verify that everything looks correct and click OK.
Buy
- Review the Terms of Use and Privacy Policy then click Purchase. This step will bill your Azure account per your agreed terms just like any other VM. ThreatSTOP will not bill through Azure as we setup terms directly with you after the trial period is over.
This will begin deploying the ThreatSTOP IP Firewall into your Azure instance, including creating a Resource Group, firewall VM, and adding your new device to your ThreatSTOP account. You can verify that the deployment to your account was successful by logging in to the ThreatSTOP Admin Portal and looking for a device named azure_ip_X (where X is a 12-digit number) with its Manufacturer / Model set to Microsoft / Azure IP Firewall.
If this appears then the deployment was successful and you can move on to Testing the IP Firewall or Final Subnet Configuration.
Step 3. Securing network access & verifying functionality
Connecting to the Firewall via SSH
After your resource group has been successfully deployed, you will be able to login to your IP Firewall using SSH. To find the IP address to login:
- Open the resource group by clicking on its icon in the Azure Dashboard.
- Click on the icon for your Virtual Network. This will open the Virtual network blade. The connected devices will list the IP address for your IP Firewall device. This can be used with your favorite SSH program to access the server. The username and password provided during Azure Marketplace offering creation will allow login to the server.
Testing the IP Firewall functionality
Before deploying the IP Firewall into your live environment, it is advisable to test that the firewall is performing as expected. One way of doing this is to deploy a temporary VM into the Clients subnet, connect to both it and the firewall, and monitor for traffic flowing across the firewall.
- In Azure deploy a new Ubuntu VM into the same resource group as the IP firewall. For our example we are going to use the existing TSProtectedVnet but make sure the client VM is assigned to the Clients subnet.
- The default settings are OK with two exceptions:
- Choose none for Public IP address as this will be a private subnet.
- Choose none for Network Security Group (NSG) for simplicity.
- Click on OK to bring up the Summary of the VM device.
- Click on OK again begin deploying the test VM.
- While the client VM is being deployed, it’s safe to add it into the routing table. To do this:
- Open the Resource Group you just created.
- Click on the Route Table created by the solution template
- Click on Subnets.
- Click + Associate.
- Choose the Virtual Network.
- Select TSProtectedVNet.
- Choose Subnet.
- Associate it with the Clients subnet.
- Click OK.
- To test, open up two SSH sessions to the firewall’s public IP. In one window ssh into the private IP of the client vm (the firewall is a jump box to the client):
- Run the following command on the firewall:
sudo tcpdump -i eth0 proto ICMP -n
# If you don't have tcpdump installed, install it by running:
sudo apt-get install -y tcpdump
- On the client vm, ping bing.com or similar and watch for the packets passing through the firewall. If you get a response on the client and also see packets flowing, the setup is complete. The examples below show a test ping for reference of what you should see:
Test One
Here we will test to make sure traffic is flowing through the firewall. We will first start a tcpdump session and examine packets being sent to a known good site (bing.com in this example).
Server
admin@TSIPFirewall1:~$ sudo tcpdump -i eth0 proto ICMP -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
00:25:04.569136 IP 10.7.0.4 > 204.79.197.200: ICMP echo request, id 6065, seq 1, length 64
00:25:04.569762 IP 204.79.197.200 > 10.7.0.4: ICMP echo reply, id 6065, seq 1, length 64
00:25:05.571144 IP 10.7.0.4 > 204.79.197.200: ICMP echo request, id 6065, seq 2, length 64
00:25:05.572268 IP 204.79.197.200 > 10.7.0.4: ICMP echo reply, id 6065, seq 2, length 64
Client
```bash
admin@TSIPFirewall1:~$ ping bing.com
PING bing.com (204.79.197.200) 56(84) bytes of data.
64 bytes from a-0001.a-msedge.net (204.79.197.200): icmp_seq=1 ttl=121 time=0.637 ms
64 bytes from a-0001.a-msedge.net (204.79.197.200): icmp_seq=2 ttl=121 time=1.14 ms
64 bytes from a-0001.a-msedge.net (204.79.197.200): icmp_seq=3 ttl=121 time=0.645 ms
64 bytes from a-0001.a-msedge.net (204.79.197.200): icmp_seq=4 ttl=121 time=0.839 ms
^C
--- bing.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 11002ms
rtt min/avg/max/mdev = 0.591/0.780/1.596/0.289 ms
```
Test Two
Here we will test to make sure traffic is being dropped at the firewall. We will first start a tcpdump session and examine packets being sent to a known blocked site (bad.threatstop.com in this example, a safe-to-test blocked IOC). We’ll also take the opportunity to verify the logs reflect what we just blocked.
Server
admin@TSIPFirewall1:~$ sudo tcpdump -i eth0 proto ICMP -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
00:26:02.636465 IP 10.7.0.4 > 64.87.3.133: ICMP echo request, id 6072, seq 1, length 64
00:26:02.648962 IP 64.87.3.133 > 10.7.0.4: ICMP echo reply, id 6072, seq 1, length 64
00:26:03.637502 IP 10.7.0.4 > 64.87.3.133: ICMP echo request, id 6072, seq 2, length 64
00:26:03.652238 IP 64.87.3.133 > 10.7.0.4: ICMP echo reply, id 6072, seq 2, length 64
Client
```
tsadmin@acmeipfw1:~$ ping bad.threatstop.com
PING bad.threatstop.com (64.87.3.133) 56(84) bytes of data.
ping: sendmsg: Operation not permitted
ping: sendmsg: Operation not permitted
^C
--- bad.threatstop.com ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1031ms

# testing the log was written ok
tsadmin@acmeipfw1:~$ ls -l /var/log/threatstop/threatstop.log
-rw-r--r-- 1 root adm 392 Aug 28 01:28 /var/log/threatstop/threatstop.log
tsadmin@acmeipfw1:~$ sudo cat /var/log/threatstop/threatstop.log
Aug 28 01:28:46 acmeipfw1 kernel: [ 1806.463962] ThreatSTOP-TSblock IN= OUT=eth0 SRC=10.0.0.4 DST=64.87.3.133 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=32743 DF PROTO=ICMP TYPE=8 CODE=0 ID=19868 SEQ=1
Aug 28 01:28:47 acmeipfw1 kernel: [ 1807.495827] ThreatSTOP-TSblock IN= OUT=eth0 SRC=10.0.0.4 DST=64.87.3.133 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=32784 DF PROTO=ICMP TYPE=8 CODE=0 ID=19868 SEQ=2
```
Final Subnet Configuration
If testing proved successful, you may associate your clients subnet with the route table created during deployment.
Caution: These instructions are a simulation of the steps you will perform to go live in a production environment.
- Click on the Route Table created by the solution template and associate it with your production subnet.
- Click OK to finalize deployment of the IP Firewall.
Once the IP Firewall is deployed, you can head over to the Admin Portal’s Devices list to edit the configuration of the newly provisioned virtual machine(s).
Troubleshooting
admin@TSIPFirewall1:~$ retry_tsadmin_add.sh
| https://docs.threatstop.com/iptables_azure.html | 2020-05-25T07:26:58 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.threatstop.com
Getting Started with Mobile Services.
Mobile Apps in OpenShift allow you to perform Cloud Native Mobile App Development using OpenShift as the back-end for your mobile apps. A Mobile App is a representation of the mobile app that you are developing locally. Mobile Apps provide a configuration file, mobile-services.json, that links the Mobile App in OpenShift and your local app in development. This configuration file is used to initialize the AeroGear SDK in your mobile app and to connect to the back-end Mobile Services you have provisioned on OpenShift.
Mobile Developer Console is part of AeroGear Mobile Services and allows you to:
create a representation of your mobile application in OpenShift
bind Mobile App with mobile services
build your mobile app (requires Mobile CI/CD service)
get the
mobile-services.json configuration file for use in your local development environment
This guide shows you how to:
Set up AeroGear Mobile Services on OpenShift
Create a Mobile App and a Mobile Service (Identity Management)
Set up a local development environment
Configure the AeroGear showcase app for your mobile platform (Android, iOS, Cordova or Xamarin)
Run the showcase app and make calls to the Identity Management service | http://docs.aerogear.org/aerogear/latest/getting-started.html | 2020-05-25T09:01:59 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.aerogear.org |
AuthBackend
GitHub accounts, with
GitHubAuthBackend
Microsoft Azure Active Directory, with
AzureADAuthBackend
Each of these requires one to a handful of lines of configuration in
settings.py, as well as a secret in
zulip-secrets.conf. Details
are documented in your
settings.py.
SAML¶
Zulip 2.1 and later supports SAML authentication, used by Okta, OneLogin, and many other IdPs (identity providers). You can configure it as follows:
These instructions assume you have an installed Zulip server. You can have created an organization already using EmailAuthBackend, or plan to create the organization using SAML authentication.
Tell your IdP how to find your Zulip server:
SP Entity ID:.
SSO URL:. This is the “SAML ACS url” in SAML terminology.
The
Entity ID should match the value of SOCIAL_AUTH_SAML_SP_ENTITY_ID computed in the Zulip settings. You can run /home/zulip/deployments/current/scripts/setup/get-django-setting SOCIAL_AUTH_SAML_SP_ENTITY_ID on your Zulip server to get the computed value.
Tell Zulip how to connect to your SAML provider(s) by filling out the section of
/etc/zulip/settings.py on your Zulip server with the heading “SAML Authentication”.
You will need to update
SOCIAL_AUTH_SAML_ORG_INFO with your organization name (displayname may appear in the IdP’s authentication flow; name won’t be displayed to humans).
Fill out
SOCIAL_AUTH_SAML_ENABLED_IDPS with data provided by your identity provider. You may find the python-social-auth SAML docs helpful. You’ll need to obtain several values from your IdP’s metadata and enter them on the right-hand side of this Python dictionary:
Set the outer
idp_name key to be an identifier for your IdP, e.g. testshib or okta. This field appears in URLs for parts of your Zulip server’s SAML authentication flow.
The IdP should provide the
url and entity_id values.
Save the
x509cert value to a file; you’ll use it in the instructions below.
The values needed in the
attr_fields are often configurable in your IdP’s interface when setting up SAML authentication (referred to as “Attribute Statements” with Okta, or “Attribute Mapping” with GSuite). You’ll want to connect these so that Zulip gets the email address (used as a unique user ID) and name for the user.
The
display_name and display_icon fields are used to display the login/registration buttons for the IdP.
Install the certificate(s) required for SAML authentication. You will definitely need the public certificate of your IdP. Some IdP providers also support the Zulip server (Service Provider) having a certificate used for encryption and signing. We detail these steps as optional below, because they aren’t required for basic setup, and some IdPs like Okta don’t fully support Service Provider certificates. You should install them as follows:
On your Zulip server,
mkdir -p /etc/zulip/saml/idps/
Put the IDP public certificate in
/etc/zulip/saml/idps/{idp_name}.crt
(Optional) Put the Zulip server public certificate in
/etc/zulip/saml/zulip-cert.crt
(Optional) Put the Zulip server private key in
/etc/zulip/saml/zulip-private-key.key
Set the proper permissions on these files and directories:
chown -R zulip.zulip /etc/zulip/saml/ find /etc/zulip/saml/ -type f -exec chmod 644 -- {} + chmod 640 /etc/zulip/saml/zulip-private-key.key
(Optional) If you configured the optional public and private server certificates above, you can enable the additional setting
"authnRequestsSigned": Truein
SOCIAL_AUTH_SAML_SECURITY_CONFIGto have the SAMLRequests the server will be issuing to the IdP signed using those certificates. Additionally, if the IdP supports it, you can upload the public certificate to enable encryption of assertions in the SAMLResponses the IdP will send about authenticated users.
Enable the
zproject.backends.SAMLAuthBackend auth backend, in AUTHENTICATION_BACKENDS in /etc/zulip/settings.py.
Restart the Zulip server to ensure your settings changes take effect. The Zulip login page should now have a button for SAML authentication that you can use to login or create an account (including when creating a new organization).
If the configuration was successful, the server’s metadata can be found at. You can use this for verifying your configuration or provide it to your IdP.
LDAP (including Active Directory)
Zulip supports retrieving information about users via LDAP, and optionally using LDAP as an authentication mechanism.
In either configuration, you will need to do the following:
These instructions assume you have an installed Zulip server and are logged into a shell there. You can have created an organization already using EmailAuthBackend, or plan to create the organization using LDAP authentication.
In configurations (A) and (C), you need to tell Zulip how to look up a user’s LDAP data given their user’s email address:
Set
AUTH_LDAP_REVERSE_EMAIL_SEARCH to a query that will find an LDAP user given their email address. Generally, this will be AUTH_LDAP_USER_SEARCH in configuration (A) or a search by LDAP_EMAIL_ATTR in configuration (C).
Set
AUTH_LDAP_USERNAME_ATTR to the name of the LDAP attribute for the user’s LDAP username in that search result.
You can quickly test whether your configuration works by running:
/home/zulip/deployments/current/manage.py query_ldap username
from the root of your Zulip installation. If your configuration is working, that will output the full name for your user (and that user’s email address, if it isn’t the same as the “Zulip username”).
Active Directory: For Active Directory, one typically sets
AUTH_LDAP_USER_SEARCH to one of:
To access by Active Directory username:
AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com", ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)")
To access by Active Directory email address:
AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com", ldap.SCOPE_SUBTREE, "(mail=%(user)s)").
Synchronizing data
Zulip can automatically synchronize data declared in
AUTH_LDAP_USER_ATTR_MAP from LDAP into Zulip, via the following
management command:
/home/zulip/deployments/current/manage.py sync_ldap_user_data
This will sync the fields declared in
AUTH_LDAP_USER_ATTR_MAP for
all of your users.
We recommend running this command in a regular cron job, to pick up changes made on your LDAP server.
All of these data synchronization options have the same model:
New users will be populated automatically with the name/avatar/etc. from LDAP (as configured) on account creation.
The
manage.py sync_ldap_user_data cron job will automatically update existing users with any changes that were made in LDAP.
You can easily test your configuration using
manage.py query_ldap. Once you’re happy with the configuration, remember to restart the Zulip server with
/home/zulip/deployments/current/scripts/restart-server so that your configuration changes take effect.
When using this feature, you may also want to
prevent users from changing their display name in the Zulip UI,
since any such changes would be automatically overwritten on the sync
run of
manage.py sync_ldap_user_data.
Synchronizing avatars
Starting with Zulip 2.0, Zulip supports syncing LDAP / Active
Directory profile pictures (usually available in the
thumbnailPhoto
or
jpegPhoto attribute in LDAP) by configuring the
avatar key in
AUTH_LDAP_USER_ATTR_MAP.
Synchronizing custom profile fields
Starting with Zulip 2.0, Zulip supports syncing
custom profile fields from LDAP / Active
Directory. To configure this, you first need to
configure some custom profile fields for your
Zulip organization. Then, define a mapping from the fields you’d like
to sync from LDAP to the corresponding LDAP attributes. For example,
if you have a custom profile field
LinkedIn Profile and the
corresponding LDAP attribute is
linkedinProfile then you just need
to add
'custom_profile_field__linkedin_profile': 'linkedinProfile'
to the
AUTH_LDAP_USER_ATTR_MAP.
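Putting the pieces from this section together, the map in /etc/zulip/settings.py might look something like the sketch below; the LDAP attribute names on the right are examples and depend entirely on your directory schema.
AUTH_LDAP_USER_ATTR_MAP = {
    "full_name": "cn",
    "avatar": "thumbnailPhoto",
    "custom_profile_field__linkedin_profile": "linkedinProfile",
}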
Automatically deactivating users with Active Directory
Starting with Zulip 2.0, Zulip supports synchronizing the
disabled/deactivated status of users from Active Directory. You can
configure this by uncommenting the sample line
"userAccountControl": "userAccountControl", in
AUTH_LDAP_USER_ATTR_MAP (and restarting
the Zulip server). Zulip will then treat users that are disabled via
the “Disable Account” feature in Active Directory as deactivated in
Zulip.
Users disabled in active directory will be immediately unable to login
to Zulip, since Zulip queries the LDAP/Active Directory server on
every login attempt. The user will be fully deactivated the next time
your
manage.py sync_ldap_user_data cron job runs (at which point
they will be forcefully logged out from all active browser sessions,
appear as deactivated in the Zulip UI, etc.).
This feature works by checking for the
ACCOUNTDISABLE flag on the
userAccountControl field in Active Directory. See
this handy resource
for details on the various
userAccountControl flags.
Deactivating non-matching users
Starting with Zulip 2.0, Zulip supports automatically deactivating
users if they are not found by the
AUTH_LDAP_USER_SEARCH query
(either because the user is no longer in LDAP/Active Directory, or
because the user no longer matches the query). This feature is
enabled by default if LDAP is the only authentication backend
configured on the Zulip server. Otherwise, you can enable this
feature by setting
LDAP_DEACTIVATE_NON_MATCHING_USERS to
True in
/etc/zulip/settings.py. Nonmatching users will be fully deactivated
the next time your
manage.py sync_ldap_user_data cron job runs.
Other fields
Other fields you may want to sync from LDAP include:
Boolean flags;
is_realm_admin (the organization’s administrator permission) is the main one. You can use the AUTH_LDAP_USER_FLAGS_BY_GROUP feature of django-auth-ldap to configure a group to get this permission; a sketch of that setting appears after this list. (We don’t recommend using this flags feature for managing is_active because deactivating a user this way would not disable any active sessions the user might have; see the above discussion of automatic deactivation for how to do that properly).
String fields like
default_language (e.g. en) or timezone, if you have that data in the right format in your LDAP database.
Coming soon: Support for syncing custom profile fields from your LDAP database.
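Here is a sketch of the group-based flag mentioned above; the group DN is made up, and this also assumes you have AUTH_LDAP_GROUP_SEARCH and AUTH_LDAP_GROUP_TYPE configured as described in the django-auth-ldap documentation.
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
    "is_realm_admin": "cn=zulip-admins,ou=groups,dc=example,dc=com",
}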
You can look at the full list of fields in the Zulip user
model; search for
class UserProfile, but the above should cover all
the fields that would be useful to sync from your LDAP databases.)
Restricting access to an LDAP group
You can restrict access to your Zulip server to a set of LDAP groups
using the
AUTH_LDAP_REQUIRE_GROUP and
AUTH_LDAP_DENY_GROUP
settings in
/etc/zulip/settings.py. See the
upstream django-auth-ldap documentation for
details.
Set HOME_NOT_LOGGED_IN to /accounts/login/sso/. This makes / (a.k.a. the homepage for the main Zulip Django app running behind nginx) redirect to /accounts/login/sso/ for users who are not logged in.
Adding more authentication backends
Adding support for any of the more than 100 authentication providers supported by python-social-auth (e.g., Facebook, Twitter, etc.) is easy to do if you’re willing to write a bit of code, and pull requests to add new backends are welcome.
For example, the Azure Active Directory integration was about 30 lines of code, plus some documentation and an automatically generated migration. We also have helpful developer documentation on testing auth backends. | https://zulip.readthedocs.io/en/latest/production/authentication-methods.html | 2019-11-12T05:22:23 | CC-MAIN-2019-47 | 1573496664752.70 | [] | zulip.readthedocs.io |
The Physics Profiler displays statistics about physics that have been processed by the physics engine.
See also Physics Debug Visualization.
Notes:
The numbers might not correspond to the exact number of GameObjects with physics components in your Scene. This is because some physics components are processed at a different rate depending on which other components affect it (for example, an attached Joint component). To calculate the exact number of GameObjects with specific physics components attached, write a custom script using the FindObjectsOfType function (see the example script after these notes).
The Physics Profiler does not show the number of sleeping Rigidbody components. These are components which are not engaging with the physics engine, and are therefore not processed by the Physics Profiler. Refer to Rigidbody overview: Sleeping for more information on sleeping Rigidbody components.
The physics simulation runs on a separate fixed frequency update cycle from the main logic’s update loop, and can only advance time via a Time.fixedDeltaTime per call. This is similar to the difference between Update and FixedUpdate (see documentation on the Time window, then select the Time category).
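For the first note above, a minimal counting script is sketched below. It is only an illustration: FindObjectsOfType only returns active objects, and the totals include sleeping Rigidbody components even though the Physics Profiler does not process them.
using UnityEngine;

public class PhysicsObjectCounter : MonoBehaviour
{
    void Start()
    {
        // Count the physics components currently active in the Scene.
        Rigidbody[] bodies = FindObjectsOfType<Rigidbody>();
        Collider[] colliders = FindObjectsOfType<Collider>();
        Debug.Log("Rigidbodies: " + bodies.Length + ", Colliders: " + colliders.Length);
    }
}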
| https://docs.unity3d.com/2018.4/Documentation/Manual/ProfilerPhysics.html | 2020-05-25T06:45:36 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.unity3d.com
HDX
Oct 05, 2016
Citrix HDX includes a broad set of technologies that provide a high-definition user experience.
To experience HDX capabilities from your virtual desktop:
- See how HDX delivers rich video content to virtual desktops: View a video on a web site containing high definition videos, such as.
- See how Flash Redirection accelerates delivery of Flash multimedia content:
- Download Adobe Flash player () and install it on both the virtual desktop and the user device.
- On the Desktop Viewer toolbar, click Preferences. In the Desktop Viewer Preferences dialog box, click the Flash tab and select Optimize content.
- To experience how Flash Redirection accelerates the delivery of Flash multimedia content to virtual desktops, view a video on your desktop from a web site containing Flash videos, such as YouTube. Flash Redirection is designed to be seamless so that users do not know when it is running. You can check to see whether Flash Redirection is being used by looking for a block of color that appears momentarily before the Flash player starts.
- See how HDX delivers high definition audio:
- Configure your Citrix client for maximum audio quality; see the Receiver documentation for details.
- Play music files with a digital audio player (such as iTunes) on your desktop.
HDX provides a superior graphics and video experience for most users by default, with no configuration required. Citrix policy settings that provide the best out-of-the-box experience for the majority of use cases are enabled by default.
- HDX automatically selects the best delivery method based on the client, platform, application, and network bandwidth, and then self-tunes based on changing conditions.
- HDX optimizes the performance of 2D and 3D graphics and video.
- HDX delivers a Windows Aero experience to virtual desktop users on any client.
- HDX enables user devices to stream multimedia files directly from the source provider on the Internet or Intranet, rather than through the host server. If the requirements for this client-side content fetching are not met, media delivery falls back to Windows Media redirection to play media run-time files on user devices rather than the host server. In most cases, no adjustments to the Windows Media feature policies are needed.
Good to know:
- For support and requirements information for HDX features, see System requirements for XenApp and XenDesktop 7.6 LTSR. Except where otherwise noted, HDX features are available for supported Windows Server OS and Windows Desktop OS machines, plus Remote PC Access desktops.
- This content describes how to further optimize the user experience, improve server scalability, or reduce bandwidth requirements. For information about working with Citrix policies and policy settings, see the Citrix policies documents for this release.
- For instructions that include working with.
Reduce the bandwidth needed for Windows desktops
By default, HDX delivers a highly responsive Windows Aero or Windows 8 desktop experience to virtual desktops accessed from supported Windows user devices. To do that, HDX leverages the graphics processing unit (GPU) or integrated graphics processor (IGP) on the user devices for local DirectX graphics rendering. This feature, named desktop composition redirection, maintains high scalability on the server. For details, see What to do with all these choices in.
To reduce the bandwidth required in user sessions, consider adjusting the following Citrix policy settings. Keep in mind that changing these settings can reduce the quality of the user experience.
- Desktop Composition Redirection. Applies only to Windows Desktop OS machines accessed from Windows user devices and applies only to the composition of the Windows desktop. Application windows are rendered on the server unless the Citrix policy setting Allow local app access (LTSR: not supported) is Allowed.
- Desktop Composition Redirection graphics quality. Uses high-quality graphics for desktop composition unless seamless applications or Local App Access (LTSR: not supported) are enabled. To reduce bandwidth requirements, lower the graphics quality.
- Dynamic windows preview. Controls the display of seamless windows in Flip, Flip 3D, taskbar preview, and peek window preview modes. To reduce bandwidth requirements, disable this policy setting.
- Target frame rate. Specifies the maximum number of frames per second that are sent from the virtual desktop to the user device (default = 30). In many circumstances, you can improve the user experience by specifying a higher value. For devices with slower CPUs, specifying a lower value can improve the user experience.
- Display memory limit. Specifies the maximum video buffer size for the session in kilobytes (default = 65536 KB). For connections requiring more color depth and higher resolution, increase the limit. You can calculate the maximum memory required. Color depths other than 32-bit are available only if the Legacy graphics mode policy setting is enabled.
Improve video conference performance
HDX webcam video compression improves bandwidth efficiency and latency tolerance for webcams during video conferencing in a session. This technology streams webcam traffic over a dedicated multimedia virtual channel; this uses significantly less bandwidth compared to the isochronous HDX Plug-n-Play support, and works well over WAN connections.
Receiver users can override the default behavior by choosing the Desktop Viewer Mic & Webcam setting Don’t use my microphone or webcam. To prevent users from switching from HDX webcam video compression, disable USB device redirection with the policy settings under ICA policy settings > USB Devices policy settings.
HDX webcam video compression is enabled by default on Receiver for Windows but must be configured on Receiver for Linux. For more information, refer to the Receiver documentation. uses additional bandwidth and is not suitable for a low bandwidth network. To force software compression over low bandwidth networks, add the following DWORD key value to the registry key: HKCU\Software\Citrix\HdxRealTime: DeepCompress_ForceSWEncode=1.
Choose server scalability over user experience
For deployments where server scalability is of greater concern than user experience, you can use the legacy graphics system by adding the Legacy graphics mode policy setting and configuring the individual legacy graphics policy settings. Use of the legacy graphics system affects the user experience over WAN and mobile connections. | https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-6-long-term-service-release/xad-hdx-landing.html | 2019-11-12T07:08:07 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.citrix.com |
AssemblyLoadContext.ResolvingUnmanagedDll Event
Definition
Occurs when the resolution of a native library fails.
public: event Func<System::Reflection::Assembly ^, System::String ^, IntPtr> ^ ResolvingUnmanagedDll;
public event Func<System.Reflection.Assembly,string,IntPtr> ResolvingUnmanagedDll;
member this.ResolvingUnmanagedDll : Func<System.Reflection.Assembly, string, nativeint>
Public Custom Event ResolvingUnmanagedDll As Func(Of Assembly, String, IntPtr)
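A minimal C# sketch of wiring up this event is shown below; the library name and fallback path are hypothetical, and the handler returns IntPtr.Zero to leave resolution to other handlers or the default behavior.
using System;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Runtime.Loader;

class Program
{
    static void Main()
    {
        AssemblyLoadContext.Default.ResolvingUnmanagedDll += (Assembly assembly, string libraryName) =>
        {
            if (libraryName == "mynative")
            {
                // Fall back to an explicit path when default resolution fails.
                return NativeLibrary.Load("/opt/myapp/libmynative.so");
            }
            return IntPtr.Zero;
        };
    }
}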
Remarks
This event is raised if the native library cannot be resolved by the default resolution logic (including LoadUnmanagedDll). | https://docs.microsoft.com/en-us/dotnet/api/system.runtime.loader.assemblyloadcontext.resolvingunmanageddll?view=netcore-3.0&viewFallbackFrom=netcore-2.1 | 2019-11-12T06:02:12 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.microsoft.com |
You can save yourself hours, even days of work by being able to import your data from your other systems and spreadsheets with Vinsight’s import feature (sorry, we can’t import Post-it notes). Importing is available in every area of the app where you may need to bring in information in bulk.
In this document:
Let’s look at an example of importing Contacts:
From the Contacts list, click on the ‘Import’ button next to ‘New’ at the top of the page:
Most likely you will want to import data from a spreadsheet. To do this, you can click on the “Excel” import icon on the data source page:
This will take you to the next step. Here you can download the template that you will need to fill in:
Open the template up from your downloads area:
Fill it in with the data you want to import:
Here’s the cool bit, once you have filled in the spreadsheet with all the data you want, highlight the whole table (including the headers) and then copy it (CTRL + C). Once you have done that, you can go back to Vinsight and paste (CTRL + V) the data straight in to the importer:
Your data will be validated, and then you will be presented with a screen that looks like this:
If everything looks good, you can press “Import to Vinsight” which will do the rest for you. If everything goes smoothly, you will get to see this page:
Now you can press “Continue” and go check out your new data:
Happy importing!
Almost every table supports data being pasted directly in to it from a spreadsheet program such as Microsoft Excel.:
This is a much faster way of entering a lot of data, and you won’t be at risk of mistyping something in to the system which could have repercussions later on.
As well as supporting pasting data, every table has a “copy” button. If you look at the top right of any table, you will see this small icon. Clicking on it will open up a menu, from which you can choose “Copy table”:
This will open up a box with the table data sitting in it. You can copy this straight from the box in to an Excel table:
Remember that you are using a web app now so you can have many tabs open. Use ctrl + T to open a new tab or even better ctrl + click a link or button that you want to open in a new tab. This is great if you are halfway through entering something and realise you have to create something else first. For example: Ctrl Click the Contacts menu item, a new tab opens, create your contact, then close that tab to return to where you were to use the new contact. | https://docs.vinsight.net/getting-going/importing-data/ | 2017-09-19T13:38:44 | CC-MAIN-2017-39 | 1505818685698.18 | [] | docs.vinsight.net |
Bugsnag aims to group instances of the same event together to give you a clear view of which issues are having the biggest impact on your users whilst minimizing unnecessary noise.
In most cases the default grouping achieves this but if you would like to change the grouping behavior, Bugsnag provides a range of options as described below.
The reason an error is grouped is displayed on the error view.
Grouped with other events sharing the same error class, file and line number of the top in-project stackframe of the innermost exception.
On most platforms the Bugsnag notifier will automatically detect in-project stackframes, but sometimes additional configuration is required to set the root directory of your project. See the docs for your platform.
Grouped with other events originating from the same error class of the innermost exception. The error classes can be configured in Project Settings > Group by error class.
This can be used to group certain errors that may occur from many different parts of your application. For example a database outage may generate
DatabaseConnectionErrors from every location that you access the database, but they are all caused by the same issue.
Grouped with other events sharing the same context that occur in any of the named error classes configured for the project. The error classes can be configured in Project Settings > Group by error context.
This can be used to group errors with the same cause that originate from the same part of your application. For example you may know that
TimeoutErrors originating from one part of your application (one context) will all have the same cause.
Grouped with other events sharing the same grouping hash. The grouping hash is set by configuring the Bugsnag notifier in your code. For specific instructions for configuring the grouping hash please see the documentation for your platform.
Using a custom grouping hash is for advanced users only and can be useful if you want fine-grained control over how errors are grouped.
Grouped with other events originating from the same point in the code.
Grouped with other events originating from the same method, file and line number.
Grouped with other events caused from the same page location.
Grouped with other events caused from the same script tag. This is used when no information is available about where the error occurred within the script.
Grouped with other events with the same error class when the script was evaluated using eval().
Grouped with other events sharing the same line number when the error is generated from a web worker. | https://docs.bugsnag.com/product/error-grouping/ | 2017-09-19T13:27:14 | CC-MAIN-2017-39 | 1505818685698.18 | [] | docs.bugsnag.com |
eXo Platform provides CMIS support using the xCMIS project and the Content Storage provider.
The CMIS standard aims at defining a common content management web services interface that can be applied in content repositories and bring about the interoperability across repositories. The formal specification of CMIS standard is approved by the Organization for the Advancement of Structured Information Standards (OASIS) technical committee, who drives the development, convergence and adoption of global information society. With CMIS, enterprises now can deploy systems independently, and create specialized applications running over a variety of content management systems.
To see the advantages of content interoperability and the significance of CMIS as a whole, it is necessary to learn about mutual targets which caused the appearance of specification first.
Content integration: With CMIS, integrating content among various repositories, even those created by different vendors in a single application, becomes faster, simpler and more effective. CMIS makes it possible for customers to integrate content management systems into their key business processes across business departments and vendor implementations.
Access unification: CMIS enables different applications and manufacturers to be connected to a CMIS-enabled content repository simply. With CMIS, a business application's developer can focus on the application's business logic, rather than issues related to the compatibility or content migration.
The xCMIS project, which is initially contributed to the Open Source community by eXo Platform, is an Open Source implementation of the Content Management Interoperability Services (CMIS) specification. xCMIS supports all the features stated in the CMIS core definition and both REST AtomPub and Web Services (SOAP/WSDL) protocol bindings.
To learn more about xCMIS, visit:
eXo CMIS is built on the top of xCMIS embedded in eXo Platform to expose the Content drives as the CMIS repositories. The CMIS features are implemented as a set of components deployed on the eXo Container using XML files to describe the service configuration.
SOAP protocol binding is not implemented in eXo CMIS.
Figure: How eXo CMIS works. | https://docs.exoplatform.org/public/topic/PLF40/PLFRefGuide.Introduction.CMIS.Overview.html | 2017-09-19T13:27:16 | CC-MAIN-2017-39 | 1505818685698.18 | [] | docs.exoplatform.org |
Based on the Stack chosen during Select Stack, you are presented with the choice of Services to install into the cluster. Your Stack comprises many services. You may choose to install any other available services now, or to add services later. The install wizard selects all available services for installation by default.
SmartSense deployment is mandatory. You cannot clear the option to install SmartSense using the Cluster Install wizard.
Steps
Choose none to clear all selections, or choose all to select all listed services.
Choose or clear individual checkboxes to define a set of services to install now.
After selecting the services to install now, choose Next.
Next Step
More Information | https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-installation-ppc/content/choose_services.html | 2017-09-19T13:41:47 | CC-MAIN-2017-39 | 1505818685698.18 | [array(['figures/1/figures/SmartSense_Install_mandatory.png', None],
dtype=object) ] | docs.hortonworks.com |
The algorithm and its implementations¶
To help keep things straight, I will distinguish between the dbOTU algorithm and its three implementations dbOTU1, dbOTU2, and dbOTU3.
I will also define the word provenience (from the archaeological term referring to the physical location where an artifact was found) to mean the information about how many times each unique DNA sequence appeared in each sample in the sequencing library. (QIIME calls this the “OTU mapping”, which I find confusing because “mapping” refers to many things.)
dbOTU algorithm¶
Motivation¶
The algorithm aims to separate genetically-similar sequences that appear to be ecologically distinct (or, conversely, to join less-genetically-similar sequences that appear to be ecologically identical). For example, if two sequences differ by only one nucleotide and you had no provenience data, you would probably put those sequences into the same OTU. However, if the two sequences never appeared together in the same sample, you would probably conclude that that one nucleotide difference corresponds to two distinct groups of organisms, one which lives in one group of samples, the other living in the other.
Conversely, if two sequences had a few nucleotides different, you might, without provenience data, place them into different OTUs. However, if the two sequences appeared in the same ratio in all samples (e.g., sequence 2 was always almost exactly ten times less abundant than sequence 1), you would probably conclude that the second sequence was either sequencing error or a member of the same ecological population as the first sequence.
As a cheesy example, consider my last name, Olesen. You might think this is just an Ellis Island mispelling of the more common name Olsen. However, the name Olsen is present among the abundant Norwegians and the rarer Danes, while Olesen is only present among the Danes. This provenience data would lead you to (correctly) conclude that Olesen is a distinctly Danish name that is the the result of differences between the Norwegian and Danish languages and orthography. (The Swedish equivalent, Olsson, is so “genetically” different that you probably do not need provenience data to know that it has a different “ecology”.)
Mechanics¶
The original pipeline, described in Preheim et al. [1] was:
- Process 16S data up to dereplicated, provenienced sequences.
- Align those reads. Using the alignment, make a phylogenetic tree and a “distance matrix” showing the genetic distance between sequences.
- Feed the distance matrix and the table of sequence counts into the algorithm proper, which groups the sequences into OTUs.
In outline, step 3 meant:
- Make the most abundant sequence an OTU.
- For each sequence (in order of decreasing abundance), find the set of OTUs that meet “abundance” and “genetic” cutoffs. The abundance cutoff requires that the candidate sequence be some fold smaller than the OTU (e.g., so that it can be considered sequencing error). The genetic cutoff requires that the candidate sequence be sufficiently similar to the OTU.
- If no OTUs meet these two criteria, make this sequence into an OTU.
- If OTUs do meet these criteria, then, starting with the most closely-genetically-related OTU, check if this sequence is distributed differently among the samples than that OTU. If the distributions are sufficiently similar, merge this sequence into that OTU and go on to the next sequence.
- If this candidate sequence does not have a distribution across sample sufficiently similar to an existing OTU, then make this sequence a new OTU.
- Move on to the next candidate sequence.
Previous implementations¶
The implementations vary in terms of:
- The exact input files they required
- How they evaluated the genetic (i.e., sequence similarity) criterion
- How they evaluated the distribution (i.e., ecological similarity) criterion
- The details of the software
dbOTU1¶
The original implementation (dbOTU1), coded in Perl and shell scripts, took a genetic distance matrix (a Jukes-Cantor distance computed using FastTree) as input. using that distance matrix.
In this implementation, the genetic criterion was evaluated using the minimum of the aligned and unaligned Jukes-Cantor distances. This was a weird hack: sometimes the alignment, made using NAST [2] (actually the PyNAST implementation), led to a greater distance between two sequences than would be computed by just comparing the unaligned sequences.
In this implementation the distribution criterion was evaluated using
the
chisq.test function in R,
called in a separate process from a Perl script.
Many of the comparisons involved
sequences with small numbers of counts, for which the asymptotic (i.e., commonly-used)
calculation of the \(p\)-value of a \(\chi^2\) test is not accurate. This implementation
therefore used a simulated \(p\)-value, available through the R
commands
simulate.p.value option. This empirical calculation
required many simulated contingency tables, which was expensive.
dbOTU2¶
The second implementation (dbOTU2), coded in Python 2 and interfaced with R using r2py, took a set of aligned sequences as input and computed the Hamming distance between these sequences as necessary. This reduced the memory required (since it was no longer an entire matrix of all pairwise distances).
Like the first implementation, this one used the minimum of the aligned and unaligned sequences.
Like the first implementation, this one used R’s
chisq.test, but this time
called via
r2py from the Python script. This removed the need for hacky
temporary files, but it was still slow and required R and Python to talk nicely
to one another.
This implementation also allowed for the distribution criterion to be articulated in terms of the Jensen-Shannon divergence (JSD). The JSD had some advantages over the \(\chi^2\) test but suffered some of the same weaknesses, as will be reviewed in The distribution criterion.
This implementation¶
This implementation, dbOTU3, aims to improve speed and ease of use. It is written in pure Python 3 and aims to do one thing, namely to turn sequence and provenience data into OTUs.
Rather than requiring aligned sequences, this implementation uses a Levenshtein edit distance as an approximation for the aligned sequence dissimilarity. The merit of this choice is discussed in The genetic criterion.
Rather than using an empirical \(\chi^2\) test, this implementation uses a likelihood ratio test. The merit of this choice is discussed in The distribution criterion.
A more thorough comparison of the implementations and an evaluation of the accuracy and speed of this new implementation is in a separate technical publication [3] (although note the Caveat about the publication’s genetic criterion). | http://dbotu3.readthedocs.io/en/latest/dbotu.html | 2017-09-19T13:16:40 | CC-MAIN-2017-39 | 1505818685698.18 | [] | dbotu3.readthedocs.io |
Rules¶
Overview¶
Rules are the cornerstone of the processing pipelines. They contain the logic about how to change, enrich, route, and drop messages.
To avoid the complexities of a complete programming language, Graylog supports a small rule language to express the processing logic. The rule language is limited on purpose to allow for easier understanding, better runtime optimization and fast learning.
The real work of rules is done in functions which are completely pluggable. Graylog already ships with a great number of built-in functions
that range from converting data types over string processing, like
substring,
regex etc, to JSON parsing.
We expect that special purpose functions will be written and shared by the community, allow for faster innovation and problem solving than previously possible.
Rule structure¶
Picking up from, their structure follows a simple when, then pattern. In the when clause we specify a boolean expression which is evaluated in the context of the current message in the pipeline. These are the conditions that are being used by the pipeline processor to determine whether to run a rule and collectively whether to continue in a pipeline.
Note that we are already calling the built-in function
has_field with a field name. In the rule has firewall fields
we make sure the message contains both
src_ip as well as
dst_ip as we want to use them in a later stage.
The rule has no actions to run, because we are only interested in using it as a condition at this point.
The second rule uses another built-in function cidr_match. That functions takes a CIDR pattern
and an IP address. In this case we reference a field from the currently processed message using the message reference syntax
The field
gl2_remote_ip is always set by Graylog upon receiving a messages, so we do not check whether that field exists, otherwise
we would have used another
has_field function call to make sure it is there.
However, note the call to
to_ip around the field reference. This is necessary because the field is stored as a string internally.
In order to successfully match the CIDR pattern, we need to convert it to an IP address.
This is an important feature of Graylog’s rule language, it enforces type safety to ensure that you end up with the data in the correct format. All too often everything is treated as a string, which wastes enormous amounts of cycles to convert data all the time as well as preventing to do proper analysis over the data.
Again we have no actions to immediately run, so the then block is empty..
Conventionally
(
<,
<=,
>,
>=,
==,
!=).
Additionally any function that returns a value can be called (e.g.
route_to_stream does not return a value) but the resulting
expression must eventually be a boolean.
The condition must not be empty, but can instead simply use the boolean literal
true in case you always want to execute the
actions inside the rule.
If a condition calls a function which is not present, e.g. due to a missing plugin, the call evaluates to false instead.
Actions¶
A rule’s then clause contains a list of actions which are evaluated in the order they appear.
There are two different types of actions:
# Function calls # Variable assignments
Function calls look exactly like they do in conditions and all of the functions in the system can be used, including the functions that do not return a value.
Variable assignments have the following form:
let name = value;
They are useful to avoid recomputing expensive parsing of data, holding on to temporary values or making rules more readable.
Variables need to be defined before they can used and can be accessed using the
name.field notation in any place where
a value is required.
The list of actions can also be empty, turning the rule into a pure condition which can be useful in combination with stages to guide the processing flow. | http://graylog2-docs.readthedocs.io/en/2.2/pages/pipelines/rules.html | 2017-09-19T13:18:54 | CC-MAIN-2017-39 | 1505818685698.18 | [] | graylog2-docs.readthedocs.io |
This document contains information for an outdated version and may not be maintained any more. If some of your projects still use this version, consider upgrading as soon as possible.
Files and Images
Files as database records
TODO Explain relationship of files to database
Management through "Files & Images"
TODO Screenshot of admin interface
Upload
TODO Link to Upload and FileIframeField classes
Image Resizing
If you've changed the resize functions of your image uploaders you can run this again - and all the images will be resampled to the new arguments for the GD functions. This also, in some cases, repairs broken image links that can happen from time to time. | https://docs.silverstripe.org/en/2.4/topics/files/ | 2017-09-19T13:39:00 | CC-MAIN-2017-39 | 1505818685698.18 | [] | docs.silverstripe.org |
Notification customization
Important
Some of the functionality described in this release plan has not been released. Delivery timelines may change and projected functionality may not be released (see Microsoft policy). Learn more: What's new and planned
Feature details
Notifications alert the agents when a record is assigned to them or when there are incoming requests from users who need assistance. These notifications include additional context about important customer attributes like name and location of the user.
Notification customization allows users to customize the notification pop-ups to include relevant information based on their business needs, like user entitlements and relationship type. This helps the agent get a quick glimpse of the user information prior to accepting an incoming request. | https://docs.microsoft.com/en-us/dynamics365-release-plan/2019wave2/dynamics365-customer-service/notification-customization | 2019-10-14T01:24:29 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.microsoft.com |
The URL can be used to link to this page
Your browser does not support the video tag.
03-01-10 Town Council Minutes
SNOWMASS VILLAGE TOWN COUNCIL REGULAR MEETING MINUTES MONDAY, MARCH 1, 2010 CALL TO ORDER AT 5:00 P.M. Mayor Boineau called to order the Regular Meeting of the Snowmass Village Town Council on Monday, March 1, 2010 at 5:01 p.m. Item No. 1 ROLL CALL COUNCIL MEMBERS PRESENT: Mayor Bill Boineau, John Wilkinson, Reed Lewis, Arnold Mordkin, and Markey Butler COUNCIL MEMBER; Susan Hamley, Marketing Director; Kristi Bradley, Group Sales Director; Rhonda B. Coxon, Town Clerk PUBLIC PRESENT: Madeleine Osberger, Mel Blumenthal, John Quigley, Robert Sinko, Peter Moore, Jenny Smith, Bob Purvis, David Perry, Bob Sirkus, and other members of the public interested in items on today's agenda. Item No. 2 PUBLIC NON AGENDA ITEMS There were no Public Non Agenda Items. Item No. 3 COUNCIL UPDATES Out of Bound Skiers Council Member Wilkinson asked that everyone ski smart and safe, he noted some skiers were skiing out of bounds and put rescue people in danger. Item No. 4 YEAR END FINANCIAL UPDATE FOR TOSV 03- 01 -10tc Minutes Page 2 of 9 Finance Director Marianne Rakowski provided Council with a 2009 year end report. The savings amounted to $654,894. She broke down the savings by the areas and reviewed the funds as listed below. GENERAL FUND (unaudited) The General Fund 2009 preliminary year -end numbers are much better than anticipated. The year end fund balance shows an increase of $654,894 over the 2009 revised budget. Most of the increase is due to continued cost cutting and service adjustments by staff in expenditures, which amounted to $644,500. Personnel services were down by $237,703; purchased services were down $232,906 (Utilities, Contract Service, Dump Fees); Operating and Maintenance was down by $174,713 (Vehicle Gas Oil, Vehicle Parts and Supplies). Revenues came in under budget by $76,000. County sales taxes exceeded budget by $100,000 and sales taxes exceeded budget by $54,000. The current 2010 budget for sales taxes is roughly 4.5% below the 2009 actual numbers. Based on the increase in the 2009 year end fund balance that will carry over to 2010, staff is recommending that these funds are set aside to offset any further decline in revenues in 2010 and thereby avoiding any further service or personnel cuts. Staff will continue to do their best to control costs throughout 2010. Since the audit is currently underway, staff will provide Council with the final year end numbers at the same time that we give you an update on the winter season in May. RETT FUND (unaudited) Although Real Estate Transfer Taxes came in $356,700 higher than budget, this increase is partially offset by a decrease in the Base Village Real Estate Transfer Taxes, which were down from budget by $177,400, this resulted in a net increase in taxes of $179,300. A few items to note: The budgeted purchase of buses and the revenues from the Federal Grant did not occur in 2009 due to the delay in the production of the buses. Therefore, the expenditure (decreased by the Federal Grant) will be set aside in a reserve fund to pay for the buses upon their arrival. Staff also expensed the $350,000 in the RETT Fund, which was recorded as a receivable for the Entryway from the Snowmass Land Company since negotiations are still on -going and we did not receive the cash. Year end carryover increased by $854,000, of which $820,000 is the reserve for the buses leaving an increase of $34,000 in Funds Available. 
MARKETING and SPECIAL EVENTS FUND (unaudited) The Marketing and Special Events Fund also ended on a positive note with an additional $228,000 in year end fund balance. 03- 01 -10tc Minutes Page 3 of 9 Sales taxes came in $135,000 over budget and event revenues along with donations were up by $55,000. Expenditures came in under budget by $37,000. GROUP SALES FUND (unaudited) The Group Sales Fund year end carryover is up over budget as well. This fund ended the year with an additional $167,500 in fund balance. Lodging taxes were up by $17,000. Expenditures came in under budget by $148,000, which is comprised of savings in Personnel Costs and Operating Maintenance costs (supplies, postage, advertising, etc.). Rakowski also noted that January sales tax is up 1/2 percent over last year which is positive news as we budgeted flat. She also noted staff will be back to update Council again in May on the final audited year -end numbers as well as the financial health of the winter season. Town Council agreed with staff recommendation of setting aside the additional year -end carryover in the General fund to offset any economic impacts that may occur in 2010 including decreases in sales tax revenues or planning and building revenues. Town Council took a break at this time. Item No. 5 JOINT MEETING WITH MARKETING GROUP SALES AND SPECIAL EVENTS BOARD Mayor Boineau noted that Council had asked for some time with the Marketing, Group Sales and Special Event Board to discuss philosophy and to review summer and winter marketing and group sales numbers. Chairman of the Board Robert Sinko stated that the four areas the Board is prepared to discuss with consist of metrics, funding philosophy, summer 2010 events and communication. Sinko reviewed a PowerPoint presentation reflecting business in summer and winter since the inception of the Marketing Department in 2003. The Marketing, Group Sales and Special Events Board discussed night skiing, new events, additional summer concerts and the current focus on March. The Boards also reviewed the amount of money spent on events and group sales summer versus winter. The Board encouraged future questions and open communication with Council. Mel Blumenthal a part time resident of Snowmass Village stated this was a very good meeting and encourages this type of communication continue in the future. Town Council took a break at this time. 03- 01 -10tc Minutes Page 4 of 9 Item No. 6 PRESENTATION ON POSSIBLE OBSERVATORY IN SNOWMASS VILLAGE David Aguilar from Aspen Skies LLC presented a PowerPoint presentation and outline for the development of the "Snowmass Village Observatory." Aguilar has an extensive back ground in astronomy and is excited about the prospect of developing a proposal to detail a plan for a dramatically new destination idea for Snowmass Village: A state of the art public observatory for viewing the planets, stars and galaxies from Snowmass Village's world class mountaintop location. He noted that clear dark skies, unobstructed views and the ultimate in fresh mountain air all combine to provide a spectacular utopia in Snowmass that is ideally suited for children, teens, adults and the young at heart to enrich their minds with the wonder of the universe. Aguilar believes the Snowmass Village Observatory would provide a high caliber, immersive experience for visitors and area residents to enjoy, explore, and study the stars and planets amidst this beautiful alpine environment. 
Guided observing sessions, astronomy themed workshops and guest speaker presentations are just some of the programs he envisions for this venue. This could signal Snowmass Village as a place with a forward looking edge among the world's destination resorts. Council was very interested in this project and Council Member Butler offered to help with fundraising. The estimated cost would be $82,000 and a location to build it on. Markey Butler made the motion to approve authorizing staff to work with David, look at potential site in Snowmass Village and consider funding options. Mayor Bill Boineau. 7 DISCUSSION AND FIRST READING ORDINANCE NO. 7 SERIES OF 2010 DEVELOPMENT ON SLOPES GREATER THAN THIRTY PERCENT (30 WITHIN LOT 1 RODEO PLACE SUBDIVISION Planning Director Chris Conrad stated the Applicant is requesting approval to permit the construction of two (2) duplex buildings on Lot 1, Rodeo Place Subdivision involving development within areas containing thirty percent (30 slopes. The Applicant has proposed two (2) duplex units within Lot 1 that will require a maximum twenty -one (21) foot cut into the slope and will utilize either the placement of a soil nailed retaining wall (as used for the duplex on Lot 2) or over -dig the slope with the placement of a more substantial foundation system to retain the hillside. The first option is likely more 03- 01 -10tc Minutes Page 5 of 9 expensive but the second method (if authorized by the geotechnical engineer) may prove to be less expensive however; it will disturb more of the natural vegetation. He stated the Town Council approved Resolution No. 39, Series of 2006 "Resolution 39), authorizing the Rodeo Place Subdivision. Said approval allowed Lot 1 to contain a triplex, Council directed that the Applicant proceed with planning and designing Phase 2 of the Rodeo Place development. Applications have been submitted to amend Resolution 39 which includes a proposal to amend the resolution to permit the construction of two (2) duplex units within Lot 1 instead of the originally proposed triplex unit. The development proposal for Lot 1 concerning this development involving thirty percent (30 slopes has been submitted for consideration at this time. The Minor Subdivision Amendment will be considered during the March 15 meeting. Arnold Mordkin made the motion to approve first reading of Ordinance 7, Series of 2010 development on slopes greater that 30% within Lot 1, Rodeo Place subdivision. John Wilkinson. 8 EXECUTIVE SESSION Town Council will now meet in Executive Session pursuant to C.R.S. 24 -6- 402(4) and Snowmass Village Municipal Code Section 2- 45(c), to specifically discuss two). The Town Council did not go into Executive Session at this time. 03- 01 -10tc Minutes Page 6 of 9 Item No. 9 RESOLUTION NO. 13 SERIES OF 2010 -APPROVING EOTC BUDGET Town Manager Russ Forrest stated staff requests Council's approval of Resolution 11, Series of 2010 as amended to appropriate funds for the projects in the 2009 and 2010 Elected Officials Transportation Committee (EOTC) budgets for the Pitkin County Y2 Cent Sales and Use Tax. The text of the Resolution has been edited to remove the last paragraph concerning reservation of the net bondable revenue. EOTC Budget Summa!y See attached 2009 -10 Budget and Multi -year Plan for details. 
Total 2009 Revenues $4,149,000 Total 2009 Expenditures $4,335,374 Annual Surplus (Deficit) (186,374) Cumulative Surplus (Deficit) $8,778,047 Total 2010 Revenues $4,139,800 Total 2010 Expenditures $3,246,602 Annual Surplus (Deficit) 893,198 Cumulative Surplus (Deficit) $9,671,245 The most significant changes to the 2009 budget are a reduction in the sales tax revenue estimate and the extension of no -fare Aspen Snowmass bus service through year -end. The major projects included in the 2010 budget are as follows: 1. $50,000 contribution for transportation associated with X -Games 2. RFTA contribution (81.04% of Y2 Cent Sales Tax) 3. No -fare Aspen Snowmass and Woody Creek bus service through April 11, 2010 This resolution does not include the designation of at least two thirds of each year's EOTC net bondable revenue to fund the Entrance -to -Aspen capital project. "Net bondable revenue" is the sum of the annual proceeds from the Y2% transit sales and use tax minus the 81.04% of the Y2% sales tax that is contributed to RFTA.) This annual dedication to the Entrance -to -Aspen was discussed at the August 6 th meeting. The EOTC budget includes funding for the Fare Subsidized Snowmass Village to Aspen bus service until April 11, 2010. There is no funding for the service beyond that point. Council will have to discuss with the EOTC at their next meeting on March 18, 2010 whether to continue the Fare Subsidized service between Aspen and Snowmass Village. A committee is meeting to discuss possible options to continue Aspen Snowmass bus service beyond the winter season. 03- 01 -10tc Minutes Page 7 of 9 Transit Manager noted the approval by all three jurisdictions that comprise the EOTC is necessary for appropriation of the Pitkin County Y/2 Cent Sales and Use Tax. John Wilkinson made the motion to approve Resolution No 13, Series of 2010 the EOTC budget. Arnold Mordkin. 10 RELATED WESTPAC PROPOSAL ON INTERIM BUILDING 7 Town Manager Russ Forrest provided an update on this item. Related Westpac submitted a proposal to staff last week which consisted of entering into a development agreement with Related Westpac which included Related to commence the work on Interim Building 7 and to release the Performance Bond from Westchester. Today staff received an email from Related and a phone call from Jeffrey Blaugh from New York that they submitted an agreement with Westchester Fire that Related will assist Surety with construction management but Surety will move forward with their responsibility. Related withdrew the proposal submitted to the Town included in today's packet. Council Member Mordkin stated that each Council Member met with Dwayne Romero in regards to the proposal for Interim Building 7 in and feels it was a waste of their time to have the proposal withdrawn. Mel Blumenthal a part time resident of Snowmass Village asked are we sure the agreement allows Related permission to go onto this property. In response staff noted that Related is only the construction manager for the project. Madeleine Osberger editor of the Snowmass Sun questioned these discussions falling under Ex -Parte conservations with a developer. Council noted there was no development application before Council. Item No. 11 SECOND READING -ORDINANCE NO. 6 SERIES OF 2010 EXTENSION OF THE EXCISE TAX Markey Butler made the motion to approve second reading of Ordinance No. 6, Series of 2010 extending the Excise Tax. A roll call vote was taken. Reed Lewis seconded the motion. The motion was approved by a vote of 5 in favor to 0 opposed. 
Voting Aye: Mayor Bill Boineau, John Wilkinson, Reed Lewis, Arnold Mordkin, and 03- 01 -10tc Minutes Page 8 of 9 Markey Butler. Voting Nay: None. Item No. 12 SECOND READING ORDINANCE NO. 4 SEREIS OF 2010 NAME CHANGE FOR ARTS ADVISORY BOARD Reed Lewis made the motion to approve second reading of Ordinance No. 4, Series of 2010 allowing a name change for the Arts Advisory Board. A roll call vote was taken. Markey Butler seconded the motion. The motion was approved by a vote of 5 in favor to 0 opposed. Voting Aye: Mayor Bill Boineau, John Wilkinson, Reed Lewis, Arnold Mordkin, and Markey Butler. Voting Nay: None. Item No. 13 MANAGER'S REPORT Town Manager Russ Forrest stated that now that the Comprehensive Plan has been approved, staff will bring forward Land Use changes and housing policy changes. EOTC Town Manager Russ Forrest reminded Council of the EOTC meeting being held on Thursday, March 18, 2010 at 4:00 p.m. in Aspen. CAST Town Manager Russ Forrest spoke to the Legislative Update being offered if anyone is interested in attending in Denver. Item No. 14 AGENDA FOR NEXT TOWN COUNCIL MEETING Town Manager Russ Forrest stated the EOTC discussion will be added to this agenda and the FAB recommendations and the Rec Center subsidy will be discussed. Town Council and staff discussed the Rodeo Place Minor Subdivision Amendment. Staffed noted this is a cleanup item for the entire Rodeo Place subdivision. Item No. 15 APPROVAL OF MEETING MINUTES FOR: December 7, 2009 Reed Lewis made the motion to approve as amended the minutes for December 7, 2009. Markey Butler seconded the motion. The motion was approved by a vote of 5 in favor to 0 opposed. 03- 01 -10tc Minutes Page 9 of 9 Voting Aye: Arnold Mordkin, John Wilkinson, Reed Lewis, Markey Butler, and Mayor Bill Boineau. Voting Nay: None. Item No. 16 COUNCIL COMMENTS /COMMITTEE REPORTS /CALENDARS Council Member Wilkinson would like a discussion on an agenda in the near future regarding alcohol at the Summer Concert Series. RTFA Board Meeting March 11, 2010 Item No. 17 ADJOURNMENT AT 8: 22 p.m. Arnold Mordkin made the motion to adjourn the Regular Meeting of the Snowmass Village Town Council on Monday, March 1, 2010. Reed Lewis | https://docs.tosv.com/WebLink/DocView.aspx?id=3365&dbid=0&repo=TOSV | 2019-10-14T01:47:36 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.tosv.com |
As an API provider, managing the companies and developers participating in your monetized API ecosystem could be time consuming if you had to do it all yourself. you begin in Build your portal using Drupal 7.
In addition to reviewing and performing core developer portal environment setup, also review and perform the following monetization-specific topics:
About company and developer self-service in your developer portal
With monetization enabled in your organization, in addition to any developer portal configuration you've already performed, your portal may be ready for self-service. Self-service tasks that developers can perform include:
Example of self-service interactions in developer portal
Below are examples of self-service interactions in the developer portal:
- When developers register on the portal (clicking the Register link), they are automatically logged in (unless you want to manually approve them first) and can create their own company. They automatically become a monetization administrator of the company, and they their.
Enabling company management in the portal
By default, the ability to manage companies is available in the portal if you are using the Apigee Responsive theme. The Company drop-down menu may be obscured if you are using the Fixed Top Navbar position configuration setting. To ensure that the Manage Companies drop-down is displayed, switch to use the Static Top Navbar position setting, as follows:
- Log into the developer portal as an administrator.
- Select Appearance > Settings > Apigee Responsive in the admin bar.
- Select Components under Bootstrap Settings.
- Click Navbar.
- Select Static Top in the Navbar Position drop-down.
- Click Save configuration to save the changes.
If you are using your own custom theme, add the Switch Company block to your theme by selecting Structure > Blocks in the admin bar and dragging the Switch Company block to the desired region of the page. Click Save blocks to save the configuration. | https://docs.apigee.com/api-platform/monetization/companies-developers-self-service | 2019-10-14T01:45:53 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.apigee.com |
SQL.
For the latest downloads, articles, samples, and videos from Microsoft as well as the community, visit the Reporting Services page on MSDN and the Report Builder page on TechNet MSDN.
For information about other SQL Server components, tools, and resources, see SQL Server Books Online.
Product Evaluation
Getting Started
Planning and Architecture
Development
Deployment
Operations
Security and Protection
Troubleshooting
Technical Reference
Information Worker
Analyst
Administrator
Developer | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms159106(v=sql.105)?redirectedfrom=MSDN | 2019-10-14T01:27:30 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.microsoft.com |
Write a custom plugin for Device Portal
Learn how to write a UWP app that uses th Windows Device Portal to host a web page and provide diagnostic information.
Starting with the Creators Update, you can use Device Portal to host your app's diagnostic interfaces. This article covers the three pieces needed to create a DevicePortalProvider for your app – the appxmanifest changes, setting up your app’s connection to the Device Portal service, and handling an incoming request. A sample app is also provided to get started (Coming soon) .
Create a new UWP app project
In this guide, we'll create everything in one solution for simplicity.
In Microsoft Visual Studio 2019, create a new UWP app project. Go to File > New > Project and select Blank App (Windows Universal) for C#, and then click Next. In the Configure your new project dialog box. Name the project "DevicePortalProvider" and then click Create. This will be the app that contains the app service. Ensure that you choose "Windows 10 Creators Update (10.0; Build 15063)" to support. You may need to update Visual Studio or install the new SDK - see here for details.
Add the devicePortalProvider extension to your package.appxmanifest file
You will need to add some code to your package.appxmanifest file in order to make your app functional as a Device Portal plugin. First, add the following namespace definitions at the top of the file. Also add them to the
IgnorableNamespaces attribute.
<Package ... xmlns: ...
In order to declare that your app is a Device Portal Provider, you need to create an app service and a new Device Portal Provider extension that uses it. Add both the windows.appService extension and the windows.devicePortalProvider extension in the
Extensions element under
Application. Make sure the
AppServiceName attributes match in each extension. This indicates to the Device Portal service that this app service can be launched to handle requests on the handler namespace.
... <Application Id="App" Executable="$targetnametoken$.exe" EntryPoint="DevicePortalProvider.App"> ... <Extensions> <uap:Extension <uap:AppService </uap:Extension> <uap4:Extension <uap4:DevicePortalProvider </uap4:Extension> </Extensions> </Application> ...
The
HandlerRoute attribute references the REST namespace claimed by your app. Any HTTP requests on that namespace (implicitly followed by a wildcard) received by the Device Portal service will be sent to your app to be handled. In this case, any successfully authenticated HTTP request to
<ip_address>/MyNamespace/api/* will be sent to your app. Conflicts between handler routes are settled via a "longest wins" check: whichever route matches more of the requests is selected, meaning that a request to "/MyNamespace/api/foo" will match against a provider with "/MyNamespace/api" rather than one with "/MyNamespace".
Two new capabilities are required for this functionality. they must also be added to your package.appxmanifest file.
... <Capabilities> ... <Capability Name="privateNetworkClientServer" /> <rescap:Capability </Capabilities> ...
Note
The capability "devicePortalProvider" is restricted ("rescap"), which means you must get prior approval from the Store before your app can be published there. However, this does not prevent you from testing your app locally through sideloading. For more information about restricted capabilities, see App capability declarations.
Set up your background task and WinRT Component
In order to set up the Device Portal connection, your app must hook up an app service connection from the Device Portal service with the instance of Device Portal running within your app. To do this, add a new WinRT Component to your application with a class that implements IBackgroundTask.
namespace MySampleProvider { // Implementing a DevicePortalConnection in a background task public sealed class SampleProvider : IBackgroundTask { //... }
Make sure that its name matches the namespace and class name set up by the AppService EntryPoint ("MySampleProvider.SampleProvider"). When you make your first request to your Device Portal provider, Device Portal will stash the request, launch your app's background task, call its Run method, and pass in an IBackgroundTaskInstance. Your app then uses it to set up a DevicePortalConnection instance.
// Implement background task handler with a DevicePortalConnection public void Run(IBackgroundTaskInstance taskInstance) { // Take a deferral to allow the background task to continue executing this.taskDeferral = taskInstance.GetDeferral(); taskInstance.Canceled += TaskInstance_Canceled; // Create a DevicePortal client from an AppServiceConnection var details = taskInstance.TriggerDetails as AppServiceTriggerDetails; var appServiceConnection = details.AppServiceConnection; this.devicePortalConnection = DevicePortalConnection.GetForAppServiceConnection(appServiceConnection); // Add Closed, RequestReceived handlers devicePortalConnection.Closed += DevicePortalConnection_Closed; devicePortalConnection.RequestReceived += DevicePortalConnection_RequestReceived; }
There are two events that must be handled by the app to complete the request handling loop: Closed, for whenever the Device Portal service shuts down, and RequestReceived, which surfaces incoming HTTP requests and provides the main functionality of the Device Portal provider.
Handle the RequestReceived event
The RequestReceived event will be raised once for every HTTP request that is made on your plugin's specified Handler Route. The request handling loop for Device Portal providers is similar to that in NodeJS Express: the request and response objects are provided together with the event, and the handler responds by filling in the response object. In Device Portal providers, the RequestReceived event and its handlers use Windows.Web.Http.HttpRequestMessage and HttpResponseMessage objects.
// Sample RequestReceived echo handler: respond with an HTML page including the query and some additional process information. private void DevicePortalConnection_RequestReceived(DevicePortalConnection sender, DevicePortalConnectionRequestReceivedEventArgs args) { var req = args.RequestMessage; var res = args.ResponseMessage; if (req.RequestUri.AbsolutePath.EndsWith("/echo")) { // construct an html response message string con = "<h1>" + req.RequestUri.AbsoluteUri + "</h1><br/>"; var proc = Windows.System.Diagnostics.ProcessDiagnosticInfo.GetForCurrentProcess(); con += String.Format("This process is consuming {0} bytes (Working Set)<br/>", proc.MemoryUsage.GetReport().WorkingSetSizeInBytes); con += String.Format("The process PID is {0}<br/>", proc.ProcessId); con += String.Format("The executable filename is {0}", proc.ExecutableFileName); res.Content = new HttpStringContent(con); res.Content.Headers.ContentType = new HttpMediaTypeHeaderValue("text/html"); res.StatusCode = HttpStatusCode.Ok; } //... }
In this sample request handler, we first pull the request and response objects out of the args parameter, then create a string with the request URL and some additional HTML formatting. This is added into the Response object as an HttpStringContent instance. Other IHttpContent classes, such as those for "String" and "Buffer," are also allowed.
The response is then set as an HTTP response and given a 200 (OK) status code. It should render as expected in the browser that made the original call. Note that when the RequestReceived event handler returns, the response message is automatically returned to the user agent: no additional "send" method is needed.
Providing static content
Static content can be served directly from a folder within your package, making it very easy to add a UI to your provider. The easiest way to serve static content is to create a content folder in your project that can map to a URL.
Then, add a route handler in your RequestReceived event handler that detects static content routes and maps a request appropriately.
if (req.RequestUri.LocalPath.ToLower().Contains("/www/")) { var filePath = req.RequestUri.AbsolutePath.Replace('/', '\\').ToLower(); filePath = filePath.Replace("\\backgroundprovider", "") try { var fileStream = Windows.ApplicationModel.Package.Current.InstalledLocation.OpenStreamForReadAsync(filePath).GetAwaiter().GetResult(); res.StatusCode = HttpStatusCode.Ok; res.Content = new HttpStreamContent(fileStream.AsInputStream()); res.Content.Headers.ContentType = new HttpMediaTypeHeaderValue("text/html"); } catch(FileNotFoundException e) { string con = String.Format("<h1>{0} - not found</h1>\r\n", filePath); con += "Exception: " + e.ToString(); res.Content = new HttpStringContent(con); res.StatusCode = HttpStatusCode.NotFound; res.Content.Headers.ContentType = new HttpMediaTypeHeaderValue("text/html"); } }
Make sure that all files inside of the content folder are marked as "Content" and set to "Copy if newer" or "Copy always" in Visual Studio’s Properties menu. This ensures that the files will be inside your AppX Package when you deploy it.
Using existing Device Portal resources and APIs
Static content served by a Device Portal provider is served on the same port as the core Device Portal service. This means that you can reference the existing JS and CSS included with Device Portal with simple
<link> and
<script> tags in your HTML. In general, we suggest the use of rest.js, which wraps all the core Device Portal REST APIs in a convenient webbRest object, and the common.css file, which will allow you to style your content to fit with the rest of Device Portal's UI. You can see an example of this in the index.html page included in the sample. It uses rest.js to retrieve the device name and running processes from Device Portal.
Importantly, use of the HttpPost/DeleteExpect200 methods on webbRest will automatically do the CSRF handling for you, which allows your webpage to call state-changing REST APIs.
Note
The static content included with Device Portal does not come with a guarantee against breaking changes. While the APIs are not expected to change often, they may, especially in the common.js and controls.js files, which your provider should not use.
Debugging the Device Portal connection
In order to debug your background task, you must change the way Visual Studio runs your code. Follow the steps below for debugging an app service connection to inspect how your provider is handling the HTTP requests:
- From the Debug menu, select DevicePortalProvider Properties.
- Under the Debugging tab, in the Start action section, select “Do not launch, but debug my code when it starts”.
- Set a breakpoint in your RequestReceived handler function.
Note
Make sure the build architecture matches the architecture of the target exactly. If you are using a 64-bit PC, you must deploy using an AMD64 build. 4. Press F5 to deploy your app 5. Turn Device Portal off, then turn it back on so that it finds your app (only needed when you change your app manifest – the rest of the time you can simply re-deploy and skip this step). 6. In your browser, access the provider's namespace, and the breakpoint should be hit.
Related topics
Feedback | https://docs.microsoft.com/en-us/windows/uwp/debug-test-perf/device-portal-plugin | 2019-10-14T00:51:10 | CC-MAIN-2019-43 | 1570986648481.7 | [array(['images/device-portal/plugin-response-message.png',
'device portal response message'], dtype=object)
array(['images/device-portal/plugin-static-content.png',
'device portal static content folder'], dtype=object)
array(['images/device-portal/plugin-file-copying.png',
'configure static content file copying'], dtype=object)
array(['images/device-portal/plugin-output.png',
'device portal plugin output'], dtype=object)] | docs.microsoft.com |
Vendor-Provided Device Installation Components
This topic describes device installation components that are provided by an IHV or OEM.
Driver Package
A driver package consists of all the software components that you must supply for your device to be supported under Windows. These components include the following:
An INF file, which provides information about the devices and drivers to be installed. For more information, see Creating an INF File.
A catalog file, which contains the digital signature of the driver package. For more information, see Digital Signatures.
The driver for the device.
Drivers
A driver allows the system to interact with the hardware device. Windows copies the driver's binary file (.sys) to the %SystemRoot%\system32\drivers directory when the device is installed. Drivers are required for most devices.
For more information, see Choosing a Driver Model.
For more information about drivers for Windows, see Getting Started with Windows Drivers.
Feedback | https://docs.microsoft.com/en-us/windows-hardware/drivers/install/vendor-provided-device-installation-components | 2019-10-14T02:24:57 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.microsoft.com |
Within Azure Red Hat OpenShift,..
A node provides the runtime environments for containers. Each node in a Kubernetes cluster has the required services to be managed by the master. Nodes also have the required services to run pods, including the container runtime, a kubelet, and a service proxy.
Azure Red Hat OpenShift. | https://docs.openshift.com/aro/architecture/infrastructure_components/kubernetes_infrastructure.html | 2019-10-14T01:22:57 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.openshift.com |
Displaying post type descriptions
Ever since post type creation became "public" in WordPress 3.0.0 in June 2010, users have been able to provide a description for their post type. However, for whatever reasons that the WordPress core team has, they have never provided an official way to retrieve that description value. Thankfully, the "get_post_type_object()" function provides that value for a provided post type. This allows us to create our own custom function that can be used to display the description value.
function pluginize_display_post_type_description( $post_type_slug = '' ) { $mytype = get_post_type_object( $post_type_slug ); if ( ! empty( $mytype ) ) { echo $the_post_type->description; } }
Using this function, which you can rename however you prefer, will return the description of whatever post type slug you provide it. This would be good to potentially use in your theme and specifically archives for any given post type you may want to provide more details for. | https://docs.pluginize.com/article/82-displaying-post-type-descriptions | 2019-10-14T00:43:47 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.pluginize.com |
This file format contains polygon mesh objects that consist of triangular polygon faces only.
Raw import supports ASCII format only.
From the File menu, click Open, Insert, Import, or Worksession > Attach.
Raw Triangle Import Options
Displays the units in the current file. This applies only to the Import and Insert commands. The Open command displays Rhino units as None.
If the Rhino file and the file being imported have different units, the imported model geometry will be scaled accordingly.
Saves the current settings and turns off the dialog display.
See also: ResetMessageBoxes command.
Rhinoceros 6 © 2010-2019 Robert McNeel & Associates. 16-May-2019 | http://docs.mcneel.com/rhino/6/help/en-us/fileio/raw_triangles_raw_import_export.htm | 2019-06-16T07:37:13 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.mcneel.com |
can edit the guidelines.
Annotatable sections
Here you select which sections you want to manually or automatically annotate in scientific papers.
Each time you press the button
Confirm in the annotation editor, in the background, a machine learning model is being trained with all the project documents confirmed. Next time you upload a new document, this model can predict new annotations based on this model. You can remove or add new annotations to continue training the model and get more accurate results.
If activated, machine learning will start annotating automatically from the first document confirmed. No deployments or complex configurations are required, just by annotating you can train a use a machine learning model.
If you don't want to use machine learning, deactivate this option.
More information on how Machine Learning works in tagtog. you can invite and organize other users in your project, so they can collaborate in the annotation tasks. See for more info about roles and collaborative annotation.
Invite other users to your project
To add a new member simply write the tagtog username in the text box, choose the role, and click on
Add Member. Once added, they will receive an email notification.
Task distribution
With this setting you can automatically distribute the project's documents among your annotators. For example, if you choose 1 annotator, every uploaded document is randomly assigned to one project member to annotate. If you choose 2 annotators, every uploaded document is randomly assigned to two project members; that is, every document has to be annotated by at least two annotators. You can choose between different flows to annotate documents as a group; the available options are described here.
This overlap is recommended to increase the overall quality of your annotation project. For more information about quality management at tagtog, go here.
In this section you can decide whether the project's owner (the person who created the project) should be assigned documents to annotate or not.
Task Distribution
You can distribute documents to annotate automatically among your members. More info on annotation workflows here.
By default documents are not distributed, and therefore members annotate directly on the
master version. Once the task distribution is activated (number of annotators per document is 1 or more), members annotate on their own independent version.
When task distribution is activated, project members see by default (in Documents) the special search view
filter:TODO. This view lists the documents that the annotator still has to annotate or review, if any.
Admin
Remove a project
To remove a project, go to its Settings > Admin. Click on the Delete Project button. Please note that removing a project will remove all the documents within the project.
When the helloworld.py script shown below is executed with a Python interpreter, an HTTP server is started on TCP port 6543.
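A minimal sketch of such a helloworld.py, assuming a recent Pyramid release (the canonical example from the Pyramid documentation follows this shape):

from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response


def hello_world(request):
    # A view callable: accepts a request, returns a response.
    return Response('Hello World!')


if __name__ == '__main__':
    with Configurator() as config:
        config.add_route('hello', '/')
        config.add_view(hello_world, route_name='hello')
        app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 6543, app)
    server.serve_forever()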
On Unix:
$VENV/bin/python helloworld.py
On Windows:
%VENV%\Scripts\python helloworld.py
This command will not return and nothing will be printed to the console. When port 6543 is visited by a browser on the URL /, the server will simply serve up the text "Hello World!". Press Ctrl-C (or Ctrl-Break on Windows) to stop the application.
Now that we have a rudimentary understanding of what the application does, let's examine it piece by piece.
Imports
The above helloworld.py script uses the set of import statements shown at the top of the listing. The script imports the Configurator class from the pyramid.config module. An instance of the Configurator class is later used to configure your Pyramid application.
View Callable Declarations
The hello_world function defined in the script is known as a view callable. A view callable accepts a single argument, request. It is expected to return a response object. A view callable doesn't need to be a function; it can be represented via another type of object, like a class or an instance, but for our purposes here, a function serves us well.
A view callable is always called with a request object. A request object is a representation of an HTTP request sent to Pyramid via the active WSGI server.
A view callable is required to return a response object because a response object has all the information necessary to formulate an actual HTTP response; this object is then converted to text by the WSGI server which called Pyramid and sent back to the requesting browser.
The if __name__ == '__main__': block near the bottom of the script is a common Python idiom: the code within the if block should only be run during a direct script execution.
The with Configurator() as config: line above creates an instance of the Configurator class using a context manager. The resulting config object represents an API which the script uses to configure this particular Pyramid application. Methods called on the Configurator will cause registrations to be made in an application registry associated with the application.
Adding Configuration
The first line above calls the pyramid.config.Configurator.add_route() method, which registers a route to the root (/) URL path.
The second line registers the hello_world function as a view callable and makes sure that it will be called when the hello route is matched.
WSGI Application Creation
After configuring views and ending configuration, the script creates a WSGI application via the pyramid.config.Configurator.make_wsgi_app() method.
A call to make_wsgi_app implies that all configuration is finished (meaning all method calls to the configurator, which sets up views and various other configuration settings, have been performed). The make_wsgi_app method returns a WSGI application object that can be used by any WSGI server to present an application to a requestor. WSGI is a protocol that allows servers to talk to Python applications. We don't discuss WSGI in any depth within this book, but you can learn more about it by reading its documentation.
The Pyramid application object, in particular, is an instance of a class representing a Pyramid router. It has a reference to the application registry which resulted from method calls to the configurator used to configure it. The router consults the registry to obey the policy choices made by a single application. These policy choices were informed by method calls to the Configurator made earlier; in our case, the only policy choices made were implied by the calls to add_route() and add_view(). The finished WSGI application is then handed to a server listening on TCP port 6543, which serves requests until the process is stopped.
Contents Security Operations Previous Topic Next Topic Create a case from IoCs or observables Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Create a case from IoCs or observables In Threat Intelligence, you can create a case from artifacts (IoCs or observables). After the IoCs or observables have been used to create a case, you can use Security Case Management to analyze the data. Before you beginThe Threat Intelligence plugin must be activated to use Security Case Management.Role required: sn_ti.case_user_write Procedure Navigate to the artifacts (IoCs or observables) you want to use to create a case. To create a case from IoCs, navigate to Threat Intelligence > IoC Repository > Indicators. To create a case from observables, navigate to Threat Intelligence > IoC Repository > Observables. In the list, select the artifacts you want added to a new case. Note: If you select multiple IoCs or observables, they are all added to the case. From the Actions on selected items drop-down list, select Add to Security Case. The Add to Security Case dialog box opens. If you already have cases assigned to you, they display in the list. Click Create New Case. Fill in the fields. Field Description Case Name Enter a name for this case. Description Enter a description that would be of value to the case analyst. Click Submit. A message at the top of the list indicates that a new case has been created, along with a link to the case in Security Case Management. Click the link to view the new case. Related TasksAdd IoCs and observables to an existing caseCreate an observable from a caseRun a sightings search on observables in a case On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-security-management/page/product/threat-intelligence-case-management/task/create-cases-threat.html | 2019-06-16T07:02:23 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.servicenow.com |
Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_95939_2113023921.1560667918297" ------=_Part_95939_2113023921.1560667918297 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
In V-Ray Next, you=E2=80=99ll no longer see V-Ray RT engine. Instead you= 'll find the V-Ray GPU Next engine.
The reason is that V-Ray has two separate render engines:
As mentioned above, these are two separate render engines and users shal= l not swap between them as they will produce different results. Our recomme= ndation ma= terials that are compatible. If you see grayed-out options in V-Ray Setting= s windows, it simply means that currently they are not supported.
When working with the V-Ray Next GPU you can use IPR mode for look devel= opment and production mode for final frame rendering.
Another big improvement of V-Ray GPU Next is the new kernel, which speed= s up the rendering by up to two times.=20 | https://docs.chaosgroup.com/exportword?pageId=41783053 | 2019-06-16T06:51:58 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.chaosgroup.com |
Why should I create an activity map?
With an activity map, you can view the connections between devices across your network in real-time or for a specific time interval. Instead of a static visualization of how your network is organized, an activity map provides a dynamic view of protocol activity on your network as it occurs. An activity map can help answer the following questions:
- Is a server that should be disconnected or decommissioned still sending or receiving traffic from other devices?
- Which services are interacting with my slow application server? Is one of these services sending an overwhelming volume of traffic that might be affecting application performance?
- Are databases or authentication servers making unauthorized connections with other devices?
What kind of devices can I see in an activity map?
Any device can appear in an activity map, except devices in Discovery Mode and devices without any protocol activity during the selected time interval. For more information about Discovery Mode, see Analysis levels.
Can I view my map in 3D?
Yes. In the lower right corner of the activity map, click 3D. Maps displayed in the 3D layout automatically rotate until you pan or zoom.
BitmapIcon Class
Definition
public : class BitmapIcon : IconElement
struct winrt::Windows::UI::Xaml::Controls::BitmapIcon : IconElement
public class BitmapIcon : IconElement
Public Class BitmapIcon Inherits IconElement
<BitmapIcon .../>
- Inheritance: IconElement → BitmapIcon
Examples
This example shows an AppBarButton with a BitmapIcon. The UriSource specifies an image that's included in the app package.
<AppBarButton Label="BitmapIcon" Click="AppBarButton_Click">
    <AppBarButton.Icon>
        <BitmapIcon UriSource="ms-appx:///Assets/globe.png"/>
    </AppBarButton.Icon>
</AppBarButton>
Remarks
Note
BitmapIcon is typically used to provide the icon for an AppBarButton, and the remarks in this section assume this usage. However, it can be used anywhere a UIElement can be used. The remarks apply to all usages.
To use a BitmapIcon as the Icon for an AppBarButton, you specify the URI of an image file.
The file that you use should be a solid image on a transparent background. The bitmap image as retrieved from the UriSource location is expected to be a true bitmap that has transparent pixels and non-transparent pixels. The recommended format is PNG. Other file-format image sources will load apparently without error but result in a solid block of the foreground color inside the AppBarButton.
All color info is stripped from the bitmap when the BitmapIcon is rendered. The remaining non-transparent colors are combined to produce an image that's entirely the foreground color as set by the Foreground property (this typically comes from styles or templates, such as the default template resolving to a theme resource).
Note
You can set the Foreground property on the AppBarButton or on the BitmapIcon. If you set the Foreground on the AppBarButton, it's applied only to the default visual state. It's not applied to the other visual states defined in the AppBarButton template, like
MouseOver. If you set the Foreground on the BitmapIcon, the color is applied to all visual states.
The default font size for an AppBarButton Icon is 20px.
You typically specify a UriSource value that references a bitmap that you've included as part of the app, as a resource or otherwise within the app package. For more info on the ms-appx: scheme and other URI schemes that you can use to reference resources in your app, see Uri schemes.
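BitmapIcon can also be used on its own wherever a UIElement is allowed. A minimal sketch of standalone usage (the asset path and brush resource are illustrative, not taken from a specific sample):

<!-- Standalone BitmapIcon with an explicit Foreground; values are illustrative. -->
<BitmapIcon UriSource="ms-appx:///Assets/globe.png"
            Foreground="{ThemeResource SystemControlForegroundAccentBrush}"
            Width="20" Height="20"/>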
VSPackages
Note
This article applies to Visual Studio 2015. If you're looking for the latest Visual Studio documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2019. Download it here
VSPackages are software modules that extend the Visual Studio integrated development environment (IDE) by providing UI elements, services, projects, editors, and designers.
In This Section
Specifying VSPackage File Location to the VS Shell
Explains how to specify the VSPackage location to the Visual Studio shell.
Resources in VSPackages
Explains how to manage resources in VSPackages.
Best Practices for Security in VSPackages
Helps you to create more secure products by understanding security vulnerabilities. | https://docs.microsoft.com/en-us/visualstudio/extensibility/internals/vspackages?view=vs-2015 | 2019-06-16T08:03:41 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.microsoft.com |
Regression
A regression example is explained for the SALSA package by the
sinc(x) = sin(x)./x function.
This package provides a function
salsa and explanation on
SALSAModel for the regression case. This use case is supported by the Fixed-Size approach [FS2010] and Nyström approximation with the specific
LEAST_SQUARES() loss function and cross-validation criterion
mse() (mean-squared error).
using SALSA, Base.Test
srand(1234)
sinc(x) = sin(x)./x
X = linspace(0.1,20,100)''
Xtest = linspace(0.11,19.9,100)''
y = sinc(X)
model = SALSAModel(NONLINEAR, SIMPLE_SGD(), LEAST_SQUARES,
                   validation_criterion=MSE(), process_labels=false)
model = salsa(X, y, model, Xtest)
@test_approx_eq_eps mse(sinc(Xtest), model.output.Ytest) 0.05 0.01
By taking a look at the code snippet above we can notice a major difference with the Classification example. The model is equipped with the
NONLINEAR mode,
LEAST_SQUARES loss function while the cross-validation criterion is given by
MSE. Another important model-related parameter is
process_labels which should be set to
false in order to switch into regression mode. These four essential components unambiguously define a regression problem solved stochastically by the
SALSA package. | https://salsajl.readthedocs.io/en/latest/regression.html | 2019-06-16T06:54:26 | CC-MAIN-2019-26 | 1560627997801.20 | [] | salsajl.readthedocs.io |
In order to retrieve any data from the CDP4 WebServices a GET request must be performed. Starting with the first TopContainer SiteDirectory, a GET request on the following URI must be performed.
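Based on the URI patterns shown further down in this document, such a request looks like the following (the host name is illustrative):

GET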
This query will return a shallow copy of the SiteDirectory object in a JSON format. The CDP4 Web API always returns an array of objects, even if only one object was requested. En example of a GET request on a SiteDirectory is shown below:
[ { "revisionNumber": 1, "classKind": "SiteDirectory", "organization": [], "person": [ "77791b12-4c2c-4499-93fa-869df3692d22" ], "participantRole": [ "ee3ae5ff-ac5e-4957-bab1-7698fba2a267" ], "defaultParticipantRole": "ee3ae5ff-ac5e-4957-bab1-7698fba2a267", "siteReferenceDataLibrary": [ "c454c687-ba3e-44c4-86bc-44544b2c7880" ], "model": [ "116f6253-89bb-47d4-aa24-d11d197e43c9" ], "personRole": [ "2428f4d9-f26d-4112-9d56-1c940748df69" ], "defaultPersonRole": "2428f4d9-f26d-4112-9d56-1c940748df69", "logEntry": [], "domainGroup": [], "domain": [ "0e92edde-fdff-41db-9b1d-f2e484f12535" ], "naturalLanguage": [], "createdOn": "2016-09-01T08:14:45.461Z", "name": "Test Site Directory", "shortName": "TEST-SiteDir", "lastModifiedOn": "2015-04-17T07:48:14.56Z", "iid": "f13de6f8-b03a-46e7-a492-53b2f260f294" } ]
All the classes in the CDP4 UML model derive from the Thing class. The Thing class is an abstract super class that has 3 properties:
iid: the unique identifier of the instance of Thing
classKind: an enumeration that specifies the type of the Thing.
revisionNumber: the revision number of the Thing. This number is updated with the incremented revision number of its TopContainer every time an update is made to the Thing.
When inspecting the sample JSON object we notice scalar properties as well as compound properties. The compound properties always contain the unique ids of the objects that are contained (through a composite aggregation), or referenced. The unique ids are representative of pointers or references to these objects. Scalar properties can either be a single unique id pointer to another object; or a string, number, true, false or null. See json.org for more information on JSON. The
iid property of the Thing class is of type UUID. The string representation of UUID must be in accordance with the following regular expression:
[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[a-b8-9][a-f0-9]{3}-[a-f0-9]{12}
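As a quick check, the SiteDirectory iid from the example above satisfies this pattern (Python is used here purely for illustration):

import re

UUID_PATTERN = re.compile(r"[a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[a-b8-9][a-f0-9]{3}-[a-f0-9]{12}")

# iid of the Test Site Directory returned in the earlier GET example
assert UUID_PATTERN.fullmatch("f13de6f8-b03a-46e7-a492-53b2f260f294")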
The CDP4 Web API supports a query parameter called
extent that has as possible values
shallow (the default value) and
deep. When the query parameter is not used, a shallow copy of the requested resource is returned. When the query parameter is added to the URI of the resource, and its value is set to
deep, the CDP4 Web API returns all the objects that are contained by the requested object, all the way down the containment hierarchy. The JSON objects that are returned are returned as a JSON array of
shallow copies, the CDP4 Web API does not return nested objects.
Even though the unique id of the SiteDirectory was not provided, the object will still be returned. This is due to the fact that there is only one instance. From the sample JSON object listed above we can find the unique id of the
SiteDirectory : "f13de6f8-b03a-46e7-a492-53b2f260f294". When we add this unique id to the URI the same object will be returned from the CDP4 Web API
The CDP4 Web API is case sensitive. Only the TopContainers in the URI shall start with an upper case character. To query any of the objects that are contained by the SiteDirectory the property name must be included in the URI. The following query returns all the Person objects that are contained by the SiteDirectory:
The response is as follows:
[ { "revisionNumber": 2, "classKind": "Person", "organization": null, "givenName": "John", "surname": "Doe", "organizationalUnit": "", "emailAddress": [], "telephoneNumber": [], "defaultDomain": "0e92edde-fdff-41db-9b1d-f2e484f12535", "isActive": true, "role": "2428f4d9-f26d-4112-9d56-1c940748df69", "password": "01fbc7972dacd0cf2c9d8f798a73af76186e9b42f865f03539806a156a57c52b", "defaultEmailAddress": null, "defaultTelephoneNumber": null, "userPreference": [], "shortName": "admin", "isDeprecated": false, "iid": "77791b12-4c2c-4499-93fa-869df3692d22" } ]
In the example only one Person object is contained by the SiteDirectory, if more Person objects would have been contained, of course more objects would have been returned. Again, a response from the CDP4 WebServices is always in the form of a JSON array.
To query a specific instance, its unique id needs to be added to the URI. Following the Person example above, the query is as follows:
More deeply contained objects can of course be returned as well; A Person object can contain zero or more EmailAddress instances, to query the email address of the Person object the following query can be performed:
When an object along the containment hierarchy is queried, see the Person and EmailAddress sample queries, the containers of these objects are not returned. Looking at the response of the Person query, only the Person object was returned, not the SiteDirectory object. The CDP4 Web API can return all the shallow copies of objects up the containment tree. This can be achieved by using the
includeAllContainers query parameter. It has as values true and false, the default value is false.
The following query will return the Person object as well as its container(s), which in this case is also the SiteDirectory.
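For example (host name illustrative, using the ids from the earlier responses):

GET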
The data that is accessible through the CDP4 WebServices is versioned. To minimize the required traffic it is possible to perform a GET request that only returns those objects that have a revision number that is higher than a specified revision number. The
revisionNumber query parameter is used for this purpose. The
revisionNumber query parameter is an integer that must be greater than or equal to zero. Executing a GET request and specifying a value of zero for the
revisionNumber query parameter returns the same result as the GET without the
revisionNumber query parameter.
The following query will return all the objects that are in the containment tree of the SiteDirectory that have a revision number that is greater than 2:
The
revisionNumber query parameter may not be used in combination with any of the other query parameters. It may however be used in combination with a POST request. The result of a POST request including the
revisionNumber query parameter is an array of all the objects that have a revision number that is higher than the value of the
revisionNumber query parameter.
Every change to any object is versioned; therefore the state of an object at any point in time, and its complete containment tree, can be retrieved from the CDP4 WebServices. The
revisionHistory parameter has not yet been implemented.
The CDP4 data model contains concepts that represent a filesystem containing folders and files. The FileRevision class represents a persisted revision of a File. The CDP4 Web API supports HTTP multipart requests to send and retrieve the content of the files. In order to retrieve files from the CDP4 Web API the
includeFileData query parameter must be used. The
includeFileData query parameter has as possible value true and false, the default value is equal to false. When the
includeFileData query parameter is included and its value is set to true, the response will include JSON data as well as the file content in the form of a multipart message.
The following query will return all the data contained by an Iteration including the content of the FileRevision objects:{iid}/Iteration/{iid}?content=deep&includeFileData=true
The CDP4 Web API exposes version information regarding versions of the software and the CDP4 Web API protocol. The following HTTP headers are used to return this information with every response:
application/json; ecss-e-10-25; version=1.0.0
The EngineeringModel class is a TopContainer. It contains all the concepts to model both the requirements of a system-of-interest and the solution. An EngineeringModel contains 1 or more Iterations. An Iteration is a version of the EngineeringModel that represents one complete and coherent step in the development of the EngineeringModel. This means that the actual engieering data of an EngineeringModel is contained in an Iteration. To retrieve the data from an Iteration the following GET request must be performed:{iid}/iteration/{iid}
This will return the shallow copy of the Iteration. To include the contained items the following GET query must be executed:{iid}/iteration/{iid}?extent=deep
2.1 GPU Updated UI
In V-Ray Next, you’ll no longer see V-Ray RT engine. Instead you'll find the V-Ray GPU Next engine.
The reason is that V-Ray has two separate render engines:
- CPU render engine, which is called V-Ray Next
- GPU render engine, which is respectively called V-Ray Next GPU
As mentioned above, these are two separate render engines and users shall not swap between them as they will produce different results. Our recommendation is to choose one engine at the start of a project and stick to features and materials that are compatible with it. If you see grayed-out options in V-Ray Settings windows, it simply means that currently they are not supported.
When working with the V-Ray Next GPU you can use IPR mode for look development and production mode for final frame rendering.
Another big improvement of V-Ray GPU Next is the new kernel, which speeds up the rendering by up to two times. | https://docs.chaosgroup.com/pages/viewpage.action?pageId=41783053&spaceKey=CWVRAYMAX | 2019-06-16T06:53:29 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.chaosgroup.com |
Program Name Properties: Environment Tab
Applies To: System Center Configuration Manager 2007, System Center Configuration Manager 2007 R2, System Center Configuration Manager 2007 R3, System Center Configuration Manager 2007 SP1, System Center Configuration Manager 2007 SP2
Use the Environment tab of the Configuration Manager 2007 Software Distribution <Program Name> Properties dialog box to specify any conditions, such as user interaction or necessary rights, that are required for a program to run on client computers.
The Environment tab contains the following elements:
Program can run
Specifies the logon conditions necessary for the program to run:
Only when a user is logged on: Prevents the program from running if no user is logged on to the client computer. This is the default setting.
Whether or not a user is logged on: Enables the program to run with or without a user logged on the client computer.
Only when no user is logged on: Prevents the program from running until the user logs off the computer.
Note
If a user logs on while the installation is running, the program continues to run.
Run Mode
Specifies the credentials required to run the program on the client computer. Two options are available:
Run with user's rights: Specifies whether the program runs with user credentials. This option is selected by default.
Run with administrative rights: Specifies whether the program runs with administrator credentials. If selected, this option forces the program to run under the Local System account on the client computer.
If this option is selected, the following additional option is available:
Allow users to interact with this program: Specifies whether to allow users to interact with the program. This check box is available only when the Program can run option is configured for Only when a user is logged on or Whether or not a user is logged on.
Select this option only for programs that must run in an administrative context and that require the user to interact with the program. If you select this option, the user interface for the program is visible to the logged-on user, and that user can interact with the program.
Leave this option clear for all programs that do not display any user interface or that display a user interface but do not require the user to interact with the program.
Note
If you advertise a program that is set to Run with administrative rights and you do not select Allow users to interact with this program, the program can fail if it displays a user interface that requires a user to make a selection or click a button. The program waits for user interaction until the program's configured Maximum allowed run time is exceeded, and then stops if no user interaction is received. If no Maximum allowed run time is specified, the program's process ends after 12 hours.
- Runs with UNC name
Specifies that the program runs with a Universal Naming Convention (UNC) name. This is the default setting.
- Requires drive letter
Indicates that the program requires a drive letter to fully qualify its location, but Configuration Manager 2007 can use any available drive letter on the client.
Requires specific drive letter
Indicates that the program requires a specific drive letter that you specify to fully qualify its location.
If the specified drive letter is already used on a given client, the program does not run.
- Reconnect to distribution point at logon
Specifies that the client computer reconnects to the distribution point when the user logs on. By default, this check box is cleared.
- OK
Saves any changes and exits the dialog box.
- Cancel
Exits the dialog box without saving any changes.
- Apply
Saves any changes and remains in the dialog box.
Opens the help topic for this tab of the dialog box.
See Also
Tasks
Reference
Other Resources
For additional information, see Configuration Manager 2007 Information and Support.
To contact the documentation team, email [email protected]. | https://docs.microsoft.com/en-us/previous-versions/system-center/configuration-manager-2007/bb633096(v=technet.10) | 2019-06-16T07:01:57 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.microsoft.com |
In a Concurrent Design activity, design parameters are exchanged between the active domains that are together developing an engineering model. These parameters are often (physical) parameters that describe or use quantities that have scales and units associated with them. The parameters are created based on a definition of a Parameter Type. The concepts of Measurement Units, Measurement Scales and Parameter Types, as well as additional concepts, are managed in Reference Data Libraries (RDLs).
To be able to create the definition of various Parameter Types, the measurement scales and measurement units that are needed to describe a variety of parameters have to be created first if not yet available. The CDP™ implementation contains an extensive set of definitions for Measurement Units, Measurement Scales and (QantityKind) Parameter Types. These definitions are given by the ECSS-E-TM-10-25 guideline that uses as a basis the International System of Quantities (ISQ) and the International System of Units as defined in ISO/IEC 80000. In addition measurement scales are explicitly defined, so that a fully self-described system is captured. Where needed this is extended with non-SI quantities, scales and units with explicit conversion relationships.1
These provide a large set of descriptions for SI base and SI derived units and quantities. They can be created and managed at the level of the CDP™. These items can also be created for the level of the activity, but these are then transferred to the CDP™ level. All new items that are created in the RDL can thus be viewed and used in other activities.
RDLs in a connected data source can be managed at different levels, typically the following 3 levels 1:
A top-level Site RDL
Provides generally used concepts such as generic ParameterTypes, SI QuantityKinds, MeasurementScales and MeasurementUnits. These are concepts that are used in almost any concurrent design EngineeringModel independent of the application domain.
A second-level Site RDL
Holds concepts like ParameterTypes, MeasurementScales, MeasurementUnits and Glossaries for a specific application domain or "family of studies", for example: Space Mission Development, System of Systems Development, Early Concept Development, Payload Instrument Development, Launch Vehicle Development.
A third-level Model RDL
Contains concepts similar to the second level, but now those that are particular to a specific EngineeringModel.
The top-level Site RDL of a data source is available from the installation of e.g. an OCDT PDS and OCDT WSP or a CDP™ Back-End and CDP™ Web Services as ECSS-E-TM-10-25 compliant data sources. All other RDLs, both second-level Site RDLs as well as Model RDLs have a relationship to this top-level RDL. Subsequent changes to this Site RDL can be made, e.g. creating additional items. The details are described in the separate topics of units, scales, and parameter types. These changes will be updated to and available in all the engineering models on that connected data source.
It is also possible to create any number of additional second-level Site RDLs, e.g. to create RDLs for specific types of projects. These are created essentially as branches from the top-level RDL; see below for a description of dependencies between RDLs and their use in engineering models.
Open the Site RDL Browser, navigate to the Directory ribbon tab and select the Reference Data Libraries
icon to manage Site RDLs. To create a Site Directory RDL, select the Create a Site RDL
icon or in the context menu select Create a Site RDL.
Provide a Name, Short Name, and choose a Required RDL for the Site Reference Data Library, and click Ok.
To edit a Site Directory RDL, select the Edit Site Reference Data Library
icon or in the context menu select Edit.
Edit the Name and/or Short Name of the Site Reference Data Library, and click Ok.
To inspect a Site RDL, select the Inspect Site Reference Data Library
icon or in the context menu select Inspect.
In the Inspect modal dialog, all the details can be seen on the Basic tab.
Additionally the status of the Site RDL is given by the check box for Deprecated, see the description of Delete below.
The Advanced tab provides information that may be useful mostly to CDP™ database administrators. Given are the UniqueID and the Revision Number.
To delete a Site RDL, select the Delete Site Reference Data Library
icon or in the context menu select Deprecate.
Items from the CDP™ are never completely deleted, but they are marked as Deprecated.
This Deprecated status is an indication to users that it should not be used anymore.
To export a Site RDL, select the Export Site Reference Data Library
icon or in the context menu select Export.
Next to the Site RDLs, and following ECSS-E-TM-10-25, a Model RDL (the third level given above) is created for each CDP™ Activity, i.e. for each engineering model. Adaptations and extensions can be made to the RDL of the specific engineering model without the risk of compromising any other engineering models; see the section below, however, for dependencies between RDLs and engineering models.
The Reference Data of Measurement Units, Measurement Scales and Parameter Types in the RDL are defined at the level of the CDP™ or at the level of a specific engineering model. All new items that are created in a Site RDL can be viewed and used in other activities. This means that the RDL should be carefully managed. In managing the RDL, care has to be taken to create combinations of measurement units, measurement scales and parameter types that lead to meaningful definitions for parameters that are needed in the design work in the engineering models, and trying to avoid incorrect, ambiguous or duplicate definitions.
Editing entries in the RDL can have a large impact on not only the current activity, but also on all the other activities using it.
When a new Site RDL is created, a link is kept between the parent Site RDL(s) and the newly created Site RDL. When an engineering model is being created, an applicable Reference Data Library has to be selected for it from the Site RDLs.
It is advisable to avoid negatively affecting other CD activities and engineering model setups inadvertently by careful management of the measurement units, measurement scales and parameter types by a restricted group of users.
Draft pull requests and new work item text editor - Sprint 143 Update
In the Sprint 143 Update of Azure DevOps, we are introducing a new work item text editor that is much more powerful and easier to use. This is part of our effort to modernize and improve the experience across the product. In Azure Repos, draft pull requests allow you to create a pull request that you are not yet ready to complete, so they can't be completed accidentally. We are also releasing some new features in Azure Artifacts, including the ability to exclude files in artifact uploads and get provenance information on packages.
Features
General:
Azure Boards:
Azure Repos:
Azure Pipelines:
- Trigger YAML pipelines with tags
- Setting to auto cancel an existing pipeline when a pull request is updated
- Declare container resources inline
- Changes to default permissions for new projects
- Deploy to failed targets in a Deployment Group
- Support for Infrastructure as Code
Azure Artifacts:
- Exclude files in artifact uploads
- Provenance information on packages
- Azure Artifacts REST API documentation updates
General
REST API version 5.0
Every API request should include an api-version. However, if you are making a REST request to a previously released endpoint without an api-version, the default version of that request will switch from 4.1 to 5.0 with this deployment. For more information on REST and api-versions, please see Azure DevOps Services REST API Reference.
Azure Boards
New work item text editor
We're excited to announce the general availability of the new text editor on the work item form. Our text editor has been outdated for a while, and this new experience will be a huge improvement. The new editor is more modern and powerful, bringing in new capabilities including resizing of images, code snippets, keyboard shortcuts for both Mac and Windows, and a full emoji library.
You can use this control in any text field on the work item form, including in your discussions. Here is the new experience that you can expect to see:
Below, you can see the code snippet experience. With this addition, you can easily and clearly discuss code in the work item form.
We really want to start making the work item a more social experience. Our first step in that journey is bringing emoji support to your text fields and discussions on the work item. Using emojis, you will be able to bring your descriptions and comments to life and give them a bit more personality!
The work done for this editor is open source, so please feel free to check out the roosterjs repo on GitHub at.
Azure Repos
Improved branch picker
Most of the experiences in Azure Repos require you to select a repo and then a branch in that repo. To improve this experience for organizations with a large number of branches, we are rolling out a new branch picker. The picker now allows you to select your favorite branches or quickly search for a branch.
Azure Pipelines
Setting to auto cancel an existing pipeline when a pull request is updated
By default, an in-progress run for a pull request is canceled when that pull request is updated. If you want the existing run to keep going, you can opt out by setting autoCancel to false in the pr trigger of your YAML pipeline, as sketched below.
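A minimal sketch of such a pr trigger (the exact schema may vary by Azure Pipelines version):

pr:
  branches:
    include:
    - master
    - releases/*
  autoCancel: false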
Changes to default permissions for new projects
Up until now, project contributors could not create pipelines unless they were explicitly given the Create build definition permission. Now, for new projects, all team members can readily create and update pipelines. This change will reduce the friction for new customers that are onboarding to Azure Pipelines. You can always update the default permissions on the Contributors group and restrict their access.
Support for Infrastructure as Code
We are adding support of Infrastructure as Code (IaC) to our Azure DevOps projects. IaC is a process of managing and provisioning computing infrastructure with some declarative approach, while setting their configuration using definition files instead of traditional interactive configuration tools. This will enable you to work with the resources in your solution as a group. You can deploy, update, or delete all the resources for your solution using a template for deployment. This template can be used for different environments such as testing, staging, and production.
Azure Artifacts
Exclude files in artifact uploads
Previously, in order to exclude files from published artifacts, you would have to copy the files to a staging directory, remove the files to be excluded, and then upload. Now, both Universal Packages and Pipeline Artifacts will look for a file called .artifactignore in the directory being uploaded to and automatically exclude those files, removing the need for a staging directory.
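The .artifactignore file uses .gitignore-style patterns. A small illustrative example (the patterns below are assumptions, not taken from a specific project):

# .artifactignore - placed in the directory being uploaded
**/*.pdb
**/*.map
temp/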
Provenance information on packages
With this update, we've made it a bit easier to understand the provenance of your packages, including who or what published them and what source code commit they came from. This information is populated automatically for all packages published using the npm, NuGet and .NET Core, Twine Authenticate (for Python), and Universal Packages tasks.
Azure Artifacts REST API documentation updates
With this sprint's update, we're rolling out substantial updates to the documentation of the Azure Artifacts REST APIs, which should make it easier to develop against them in your own applications.
Next steps
Note
These features will be rolling out over the next two to three weeks.
Read about the new features below and head over to Azure DevOps to try them for yourself.
Feedback
We would love to hear what you think about these features. Use the feedback menu to report a problem or provide a suggestion.
You can also get advice and your questions answered by the community on Stack Overflow.
Thanks,
Jeremy Epling | https://docs.microsoft.com/en-us/azure/devops/release-notes/2018/sprint-143-update | 2019-06-16T07:34:13 | CC-MAIN-2019-26 | 1560627997801.20 | [array(['_img/143_05.png', 'Text editor'], dtype=object)
Manage registered servers with Azure File Sync
Azure File Sync allows you to centralize your organization's file shares in Azure Files without giving up the flexibility, performance, and compatibility of an on-premises file server. It does this by transforming your Windows Servers into a quick cache of your Azure file share. You can use any protocol available on Windows Server to access your data locally (including SMB, NFS, and FTPS) and you can have as many caches as you need across the world.
The following article illustrates how to register and manage a server with a Storage Sync Service. See How to deploy Azure File Sync for information on how to deploy Azure File Sync end-to.
Register/unregister a server with Storage Sync Service
Registering a server with Azure File Sync establishes a trust relationship between Windows Server and Azure. This relationship can then be used to create server endpoints on the server, which represent specific folders that should be synced with an Azure file share (also known as a cloud endpoint).
Prerequisites
To register a server with a Storage Sync Service, you must first prepare your server with the necessary prerequisites:
Your server must be running a supported version of Windows Server. For more information, see Azure File Sync system requirements and interoperability.
Ensure that a Storage Sync Service has been deployed. For more information on how to deploy a Storage Sync Service, see How to deploy Azure File Sync.
Ensure that the server is connected to the internet and that Azure is accessible.
Disable the IE Enhanced Security Configuration for administrators with the Server Manager UI.
Ensure that the Azure PowerShell module is installed on your server. If your server is a member of a Failover Cluster, every node in the cluster will require the Az module. More details on how to install the Az module can be found on the Install and configure Azure PowerShell.
Note
We recommend using the newest version of the Az PowerShell module to register/unregister a server. If the Az package has been previously installed on this server (and the PowerShell version on this server is 5.* or greater), you can use the
Update-Modulecmdlet to update this package.
If you utilize a network proxy server in your environment, configure proxy settings on your server for the sync agent to utilize.
- Determine your proxy IP address and port number
- Edit these two files:
- C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config
- C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\machine.config
- Add the lines in figure 1 (beneath this section) under /System.ServiceModel in the above two files changing 127.0.0.1:8888 to the correct IP address (replace 127.0.0.1) and correct port number (replace 8888):
- Set the WinHTTP proxy settings via command line:
- Show the proxy: netsh winhttp show proxy
- Set the proxy: netsh winhttp set proxy 127.0.0.1:8888
- Reset the proxy: netsh winhttp reset proxy
- If this is set up after the agent is installed, restart the sync agent: net stop filesyncsvc
Figure 1:
<system.net>
    <defaultProxy enabled="true" useDefaultCredentials="true">
        <proxy autoDetect="false" bypassonlocal="false" proxyaddress="http://127.0.0.1:8888" usesystemdefault="false" />
    </defaultProxy>
</system.net>
Register a server with Storage Sync Service
Before a server can be used as a server endpoint in an Azure File Sync sync group, it must be registered with a Storage Sync Service. A server can only be registered with one Storage Sync Service at a time.
Install the Azure File Sync agent
Download the Azure File Sync agent.
Start the Azure File Sync agent installer.
Be sure to enable updates to the Azure File Sync agent using Microsoft Update. It is important because critical security fixes and feature enhancements to the server package are shipped via Microsoft Update.
If the server has not been previously registered, the server registration UI will pop up immediately after completing the installation.
Important
If the server is a member of a Failover Cluster, the Azure File Sync agent needs to be installed on every node in the cluster.
Register the server using the server registration UI
Important
Cloud Solution Provider (CSP) subscriptions cannot use the server registration UI. Instead, use PowerShell (below this section).
If the server registration UI did not start immediately after completing the installation of the Azure File Sync agent, it can be started manually by executing
C:\Program Files\Azure\StorageSyncAgent\ServerRegistration.exe.
Click Sign-in to access your Azure subscription.
Pick the correct subscription, resource group, and Storage Sync Service from the dialog.
In preview, one more sign-in is required to complete the process.
Important
If the server is a member of a Failover Cluster, each server needs to run the Server Registration. When you view the registered servers in the Azure Portal, Azure File Sync automatically recognizes each node as a member of the same Failover Cluster, and groups them together appropriately.
Register the server with PowerShell
You can also perform server registration via PowerShell. This is the only supported way of server registration for Cloud Solution Provider (CSP) subscriptions:
Register-AzStorageSyncServer -ResourceGroupName "<your-resource-group-name>" -StorageSyncServiceName "<your-storage-sync-service-name>"
Unregister the server with Storage Sync Service
There are several steps that are required to unregister a server with a Storage Sync Service. Let's take a look at how to properly unregister a server.
Warning
Do not attempt to troubleshoot issues with sync, cloud tiering, or any other aspect of Azure File Sync by unregistering and registering a server, or removing and recreating the server endpoints unless explicitly instructed to by a Microsoft engineer. Unregistering a server and removing server endpoints is a destructive operation, and tiered files on the volumes with server endpoints will not be "reconnected" to their locations on the Azure file share after the registered server and server endpoints are recreated, which will result in sync errors. Also note, tiered files that exist outside of a server endpoint namespace may be permanently lost. Tiered files may exist within server endpoints even if cloud tiering was never enabled.
(Optional) Recall all tiered data
If you would like files that are currently tiered to be available after removing Azure File Sync (i.e. this is a production, not a test, environment), recall all files on each volume containing server endpoints. Disable cloud tiering for all server endpoints, and then run the following PowerShell cmdlet:
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
Invoke-StorageSyncFileRecall -Path <a-volume-with-server-endpoints-on-it>
Warning
If the local volume hosting the server endpoint does not have enough free space to recall all the tiered data, the
Invoke-StorageSyncFileRecall cmdlet will fail.
Remove the server from all sync groups
Before unregistering the server on the Storage Sync Service, all server endpoints on that server must be removed. This can be done via the Azure portal:
Navigate to the Storage Sync Service where your server is registered.
Remove all server endpoints for this server in each sync group in the Storage Sync Service. This can be accomplished by right-clicking the relevant server endpoint in the sync group pane.
This can also be accomplished with a simple PowerShell script:
Connect-AzAccount
$storageSyncServiceName = "<your-storage-sync-service>"
$resourceGroup = "<your-resource-group>"
Get-AzStorageSyncGroup -ResourceGroupName $resourceGroup -StorageSyncServiceName $storageSyncServiceName | ForEach-Object {
    $syncGroup = $_;
    Get-AzStorageSyncServerEndpoint -ParentObject $syncGroup |
    Where-Object { $_.ServerEndpointName -eq $env:ComputerName } |
    ForEach-Object { Remove-AzStorageSyncServerEndpoint -InputObject $_ }
}
Unregister the server
Now that all data has been recalled and the server has been removed from all sync groups, the server can be unregistered.
In the Azure portal, navigate to the Registered servers section of the Storage Sync Service.
Right-click on the server you want to unregister and click "Unregister Server".
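If you prefer PowerShell over the portal, the Az.StorageSync module also exposes an unregister cmdlet. A sketch, assuming the same resource names used for registration (confirm the parameter set for your module version before relying on it):

Unregister-AzStorageSyncServer -ResourceGroupName "<your-resource-group-name>" -StorageSyncServiceName "<your-storage-sync-service-name>" -ServerId "<registered-server-guid>"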
Ensuring Azure File Sync is a good neighbor in your datacenter
Since Azure File Sync will rarely be the only service running in your datacenter, you may want to limit the network and storage usage of Azure File Sync.
Important
Setting limits too low will impact the performance of Azure File Sync synchronization and recall.
Set Azure File Sync network limits
You can throttle the network utilization of Azure File Sync by using the
StorageSyncNetworkLimit cmdlets.
Note
Network limits do not apply when a tiered file is accessed or the Invoke-StorageSyncFileRecall cmdlet is used.
For example, you can create a new throttle limit to ensure that Azure File Sync does not use more than 10 Mbps between 9 am and 5 pm (17:00h) during the work week:
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
New-StorageSyncNetworkLimit -Day Monday, Tuesday, Wednesday, Thursday, Friday -StartHour 9 -EndHour 17 -LimitKbps 10000
You can see your limit by using the following cmdlet:
Get-StorageSyncNetworkLimit # assumes StorageSync.Management.ServerCmdlets.dll is imported
To remove network limits, use
Remove-StorageSyncNetworkLimit. For example, the following command removes all network limits:
Get-StorageSyncNetworkLimit | ForEach-Object { Remove-StorageSyncNetworkLimit -Id $_.Id } # assumes StorageSync.Management.ServerCmdlets.dll is imported
Use Windows Server storage QoS
When Azure File Sync is hosted in a virtual machine running on a Windows Server virtualization host, you can use Storage QoS (storage quality of service) to regulate storage IO consumption. The Storage QoS policy can be set either as a maximum (or limit, like how StorageSyncNetwork limit is enforced above) or as a minimum (or reservation). Setting a minimum instead of a maximum allows Azure File Sync to burst to use available storage bandwidth if other workloads are not using it. For more information, see Storage Quality of Service.
Multilingual app toolkit 4.0 Editor
Use the standalone Multilingual app toolkit 4.0 Editor with Visual Studio to streamline your localization workflow during app development.
Each of the following downloads contains an .msi installer for the Multilingual App Toolkit 4.0 Editor (also known as the Multilingual Editor).
- To start the installation immediately, click Run.
- To save the download to your computer for installation at a later time, click Save.
Important
If using Visual Studio 2017 or later, you should also download and install the Multilingual App Toolkit 4.0 Extension.
If using Visual Studio 2015 or Visual Studio 2013, the .msi installers listed here include the Multilingual App Toolkit 4.0 extension.
Overview
The Multilingual App Toolkit integrates with Visual Studio to provide a streamlined localization workflow during app development, and the standalone 4.0 Editor lets you work with translation files outside of Visual Studio.
Important
Ensure you have the latest service pack and critical updates for your installed versions of Windows and Visual Studio.
- Supported operating systems: Windows 10 or later (x86 and x64)
- Required software: Visual Studio 2013 or later
- Disk space requirements: 0 MB (x86 and x64)
Additional info
You must have an active Internet connection to use the Microsoft Language Portal and Microsoft Translator services. | https://docs.microsoft.com/en-us/windows/apps/design/globalizing/multilingual-app-toolkit-editor-downloads | 2021-11-27T04:31:12 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.microsoft.com |
ClickHouse Profile
Community plugin
Some core functionality may be limited. If you're interested in contributing, check out the source code for each repository listed below.
Overview of dbt-clickhouse
Maintained by: Community
Author: Dmitriy Sokolov
Source:
Core version: v0.19.0 and newer
dbt Cloud: Not Supported
The easiest way to install it is to use pip:
pip install dbt-clickhouse
Connecting to ClickHouse with dbt-clickhouse
User / password authentication
Configure your dbt profile for using ClickHouse:
ClickHouse connection information
profiles.yml
dbt-clickhouse:
  target: dev
  outputs:
    dev:
      type: clickhouse
      schema: [database name]
      host: [db.clickhouse.com]
      port: 9000
      user: [user]
      password: [abc123]
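Once the profile is in place, you can verify the connection with dbt's standard connectivity check (plain dbt CLI, not specific to this adapter):

dbt debug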
Switch back aggregates in a two-node MetroCluster configuration - AFF fas8300 and FAS8700.
This task only applies to two-node MetroCluster configurations.
Verify that all nodes are in the
enabled state:
metrocluster node show
cluster_B::> metrocluster node show DR Configuration DR Group Cluster Node State Mirroring Mode ----- ------- -------------- -------------- --------- -------------------- 1 cluster_A controller_A_1 configured enabled heal roots completed cluster_B controller_B_1 configured enabled waiting for switchback recovery 2 entries were displayed.
Verify that resynchronization is complete on all SVMs:
metrocluster vserver show
Verify that any automatic LIF migrations being performed by the healing operations were completed successfully:
metrocluster check lif show
Perform the switchback by using the
metrocluster switchback command from any node in the surviving cluster.
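For example, from the surviving cluster (the prompt shown is illustrative):

cluster_B::> metrocluster switchback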
Verify that the switchback operation has completed:
metrocluster show
The switchback operation is still running when a cluster is in the
waiting-for-switchback state:
cluster_B::> metrocluster show Cluster Configuration State Mode -------------------- ------------------- --------- Local: cluster_B configured switchover Remote: cluster_A configured waiting-for-switchback
The switchback operation is complete when the clusters are in the
normal state:
cluster_B::> metrocluster show Cluster Configuration State Mode -------------------- ------------------- --------- Local: cluster_B configured normal Remote: cluster_A configured normal
If a switchback is taking a long time to finish, you can check on the status of in-progress baselines by using the
metrocluster config-replication resync-status show command.
Reestablish any SnapMirror or SnapVault configurations. | https://docs.netapp.com/us-en/ontap-systems/fas8300/bootmedia-2n-mcc-switchback.html | 2021-11-27T02:02:15 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.netapp.com |
Serverless APM Business Transaction Correlation
Business Transaction Correlation Architecture
The Serverless Tracer correlates business transactions with upstream and downstream components. Business transactions can be started or continued within application components instrumented with APM agents or Lambda tracers providing end to end transaction visibility in AppDynamics.
This diagram illustrates how the tracer correlates business transactions through a serverless function to APM and EUM agents:
Business Transaction Correlation Process
Business transaction correlation occurs through an opaque correlation string that the application passes across the wire. First, the tracer generates the correlation string at an exit call, and then the application passes the correlation string to the downstream components. The downstream component retrieves the correlation string to continue the business transaction. This process enables the Serverless Tracer to correlate business transactions.
Correlation String Transportation
Your application needs to pass the correlation string, generated by the tracer, to correlate a business transaction between multiple services. The correlation string must be serialized and communicated alongside the application payload.
Inbound HTTP Calls
If you pass the correlation string through an HTTP call, the string is passed as an HTTP header. The tracer searches for the key that holds the correlation header string in your function's invocation object. Correlation occurs when the Serverless Tracer finds the correlation header key. If the key cannot be found, the tracer creates a new business transaction.
Other Protocols
For protocols other than HTTP, you need to define protocol-specific transportation of the correlation string to enable business transaction correlation.
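For example, for a message-based protocol you might serialize the correlation string next to the application payload. The function and field names below are assumptions for illustration only, not the actual tracer API:

import json

def publish_with_correlation(sns_client, topic_arn, payload, correlation_string):
    """Carry the tracer-generated correlation string alongside the application payload."""
    message = {
        "body": payload,
        # The downstream function reads this field and hands the string back to its tracer.
        "appd_correlation": correlation_string,
    }
    sns_client.publish(TopicArn=topic_arn, Message=json.dumps(message))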
The tracer creates a new business transaction if:
- You have not arranged for the correlation string to be passed, or
- The tracer cannot find the correlation string.
To get started, see Set Up the Serverless APM Environment.