An Authentication that assumes an interactive logon. A user inputs logon parameters (e.g. user name and password) manually via the logon dialog.
Returns an authenticated user.
Clears the logon parameters' password.
Returns a list of persistent types that are used within the current authentication.
Checks if the specified member is used by the security system.
Checks if the specified member is a member of the IAuthenticationStandardUser interface.
Re-creates the Logon Parameters object.
Initializes the Logon Parameters.
Gets or sets the focused object in a WinColumnsListEditor.
Namespace: DevExpress.ExpressApp.Win.Editors
Assembly: DevExpress.ExpressApp.Win.v19.1.dll
public override object FocusedObject { get; set; }
Public Overrides Property FocusedObject As Object
Use this property to access the focused object. To perform the required actions when this property's value is changed, handle the following events:
Creating Objects
What is an Object?
An "Object" in xPDO is simply an abstract, class-based representation of a row in a table in a database. In other words, if you had a table named 'cars', your xPDO model would define the 'cars' table, and you would then grab Collections of Objects, one Object for each car (row).
xPDO defines these Objects using the xPDOObject class.
Creating an Object
Creating objects in xPDO utilizes the "newObject" xPDO method.
Let's say we have an object defined in our model of class "Box". We want to create a new object of it:
$myBox = $xpdo->newObject('Box');
It's that simple. We can also create the Box object with some pre-filled field values:
$myBox = $xpdo->newObject('Box', array(
    'width'  => 5,
    'height' => 12,
    'color'  => 'red',
));
You cannot set primary key values when using the second parameter of newObject(). Set the primary key values using fromArray() after creating the instance with newObject(), and make sure you set the setPrimaryKeys parameter to true.
This will give us an xPDOObject-based Box object that can be manipulated and saved. Note that this Object is not yet persistent until you save it using xPDOObject.save.
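Putting the pieces together, here is a minimal end-to-end sketch. The width/height/color fields and the id primary key are illustrative assumptions about the Box schema rather than part of the xPDO API:

// Assumes a 'Box' class with width/height/color fields and an 'id' primary key.
$myBox = $xpdo->newObject('Box');
$myBox->fromArray(array(
    'id'     => 42,        // primary key values can only be set via fromArray()
    'width'  => 5,
    'height' => 12,
    'color'  => 'red',
), '', true);              // third argument: setPrimaryKeys = true
if ($myBox->save()) {
    echo 'Box saved with ID ' . $myBox->get('id');
} else {
    echo 'Could not save the Box.';
}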
In versions prior to xPDO 2.1.1-pl, if your SQL table does not exist for the object you've created, and the object class has a defined table for that class, xPDO will automatically create the table in the database for you. In 2.1.1-pl and later versions, you must set xPDO::OPT_AUTO_CREATE_TABLES to true to have tables created automatically. It is recommended that you create the tables for your model explicitly in a setup script rather than depending on the auto table creation features that were not optional in earlier releases of xPDO. See xPDOManager.createObjectContainer for information on explicitly creating tables from the model.
modMobile
What is modMobile?
modMobile is a plugin that changes your template when a mobile device visits your site.
History
Requirements
- MODX Revolution
- Mobile template
Development & Bug reporting
modMobile is currently being developed on Github. That is also the place to report bugs, file feature requests and improvements.
Upgrading
You may have to set the value of the System Setting modmobile.mobile_template, as this has changed from just mobile_template. If you still have the original setting mobile_template, remove it.
Installation
Install through Package Management
Troubleshooting
Since this is an early beta, a lot of things might go wrong after installing this package. Just disable the plugin if you run into any problems and you should be fine. Don't forget to report bugs on our github page!
Usage
Example1
Using one template for mobile and full site
- Go to System -> System Settings
- Set the USE Placeholder to Yes
- Let's assume that the only difference between your standard version and the mobile version is the CSS file. In that case, in your template do something like this:
[[If? &subject=`[[+modxSiteTemplate]]` &operand=`mobile`
    &then=`<link rel="stylesheet" type="text/css" media="all" href="/assets/templates/css/mobileLayout.css" />`
    &else=`<link rel="stylesheet" type="text/css" media="all" href="/assets/templates/css/commonLayout.css" />
    <!--[if IE 6]>
    <link rel="stylesheet" type="text/css" media="all" href="/assets/templates/css/ie6.css" />
    <![endif]-->`
]]
Note: modxSiteTemplate is the value of modmobile.get_var and is the same value that you will need to send in the URL to switch templates. You must also install the If extra for this example to work!
- Now just put a link in your template to the mobile version and then to the full version:
<!-- Mobile Link -->
<a href="[[~[[*id]]]]?modxSiteTemplate=mobile">Mobile</a>
<!-- Back to Full site link -->
<a href="[[~[[*id]]]]?modxSiteTemplate=full">Full Site View</a>
Note this is optional but highly recommended.
Example2
Using a separate mobile template
- Go to System -> System Settings
1.1. Select modmobile (see image)
1.2. Enter your mobile template ID
- Just visit your site on a mobile device like an iPhone or iPad. Your mobile theme should show up.
- Now just put a link in your templates to the mobile version and then to the full version, like so (note: this is optional but highly recommended):
<!-- Mobile Link -->
<a href="[[~[[*id]]]]?modxSiteTemplate=mobile">Mobile</a>
<!-- Back to Full site link -->
<a href="[[~[[*id]]]]?modxSiteTemplate=full">Full Site View</a>
This is a very common issue that gets reported all the time. Here is how to resolve it and how to debug to find what is causing it on your site.
NOTE: EventON should work 100% out of the box, as we test it multiple times before each release. But when it is placed inside badly made themes and plugins, conflicts are unavoidable.
Preliminary Check for Javascript issues
In Chrome, press F12 or Ctrl+Shift+I to open the web developer console.
You should see the screen below in your browser. Pay attention to the top right corner.
If you see a RED cross with a number, that number is the count of JavaScript errors on this page of your website.
Press Esc or switch over to the Console tab.
This should show all the JavaScript errors on the page, explained in a little more detail.
Cause #2
If none of the above items match your case, click on the Network tab in your browser's developer tools.
If you see the error below (marked in red), or a similar issue involving admin-ajax.php that comes up when you click on eventON calendar elements/buttons, please follow the directions below to solve it.
This could mean there are PHP code errors stopping eventON from executing AJAX commands.
SOLUTION: In this case we ask you to do a debug. Perform a debug on the PHP code.
Solving Other Common Javascript Issues
Every JavaScript error is specific to a website, so we cannot give a universal solution for your particular issue. But any JavaScript error you see in the console may conflict with the eventON script and stop it from running. Hence, events cannot slideDown (expand) and you cannot switch months.
Below are common solutions. Please go through them and try the ones that may apply in your case.
Solution #1
The right side of the console shows the URL and sometimes the line number the error is coming from. You can look into that to see if you can figure out what's wrong and fix it.
Solution #2
If those JavaScript errors are not coming from any of the eventON JS files, and you can identify which theme or plugin they are coming from, we recommend you contact the theme/plugin developer of those items for a solution.
Solution #3
Right-click on the web page and click View Page Source. Then press Ctrl+F and search the source for "eventon_script.js". This is the main eventON JavaScript file required for interactive functionality. If you cannot find this in the page source, try this:
Cause #3 – shortcode
If you are using incorrect shortcode values, that may also lead to a non-responsive calendar. For example, if you use ux_val='x' in the shortcode, clicking on events will not open the event card or do anything.
SOLUTION:
Start removing shortcode parameters one by one until the calendar starts working, and then work your way back to find what was wrong in the shortcode parameters you used. (Shortcode parameters are pieces like ux_val='2' or fixed_month='4'.)
Other causes for month switch not working
My site loads inside pages via AJAX
If your website loads inner pages – including the events page with the eventON shortcode – via AJAX (i.e. the browser does not appear to reload a new page, but a new page shows in place of the current one), the eventON script may not run on the newly loaded content.
Still can not solve my issue
If the above solutions do not apply to your issue or have not helped you solve it, please send us a ticket in the support forum and mention your findings here to help us track your issue faster and solve it quickly for you.
Jobs for auditing and compliance
The following topics describe jobs that are related to audit and compliance: Creating and modifying Audit Jobs; Creating and modifying Compliance Jobs; Creating and modifying SCAP Compliance Jobs; Creating and modifying Snapshot Jobs.
setfields
Description
Sets the field values for all results to a common value.
Sets the value of the given fields to the specified values for each event in the result set. Delimit multiple definitions with commas. Missing fields are added, present fields are overwritten.
Whenever you need to change or define field values, you can use the more general purpose eval command. See usage of an eval expression to set the value of a field in Example 1.
Syntax
setfields <setfields-arg>, ...
Required arguments
- <setfields-arg>
- Syntax: string="<string>", ...
- Description: A key-value pair, with the value quoted. If you specify multiple key-value pairs, separate each pair with a comma. Standard key cleaning is performed. This means all non-alphanumeric characters are replaced with '_' and leading '_' are removed.
Examples
Example 1:
Specify a value for the ip and foo fields.
... | setfields ip="10.10.10.10", foo="foo bar"
To do this with the eval command:
... | eval ip="10.10.10.10" | eval foo="foo bar"
See also
Answers
Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has about the setfields command.
Domain separation in guided tours
Digitizing stratigraphic diagrams
The package is very new and there are many features that will be included in the future. Therefore we would be very pleased to get feedback! To do so, you can contact us or raise an issue on GitHub.
Documentation
How to cite straditize
When using straditize, you should at least cite the publication in the Journal of Open Source Software:
Sommer, Philipp, Dilan Rech, Manuel Chevalier, and Basil A. S. Davis. Straditize: Digitizing Stratigraphic Diagrams. Journal of Open Source Software, vol. 4, no. 34, The Open Journal, Feb. 2019, p. 1216, doi:10.21105/joss.01216.
Furthermore, each release of straditize is associated with a DOI using zenodo.org. If you want to cite a specific version or plugin, please refer to the releases page of straditize.
Often the same strings appear at different places of your website or app. Lokalise has two ways to identify duplicates.
The first is when you (or any other team member) submit a new translation in the editor: the duplicate icon pops up at the right side of the translation if a duplicate is found.
Another option is to select the "Show duplicates" menu item by clicking the small triangle at the right of the language name in the projects view.
Upgrade the universal forwarder for *nix systems
This topic describes the procedure for upgrading your Splunk universal forwarder from version 4.2.x or 4.3.x to 5.0.
Important:..
Configure auto load balancing for clustering
If you plan to do load-balanced forwarding to a Splunk cluster, you must configure your existing forwarders to use auto-load balancing. To learn how to do this, read "Set up load balancing" in this manual. You will need to modify the sample script to meet the needs of an upgrade.
3. Run the script on representative target machines to verify that it works with all required shells.
4. Execute the script against the desired set of hosts.
5. Use the deployment monitor app to verify that the universal forwarders are functioning.
Take Advantage of Chat Conferencing for Snap-Ins
Use Snap-ins for your Live Agent chat conferences in Salesforce Classic. With chat conferencing, support agents can work together to solve your customers’ trickiest problems.
Where: This change applies to Lightning Experience and Salesforce Classic. Live Agent is available in Performance and Developer edition orgs that were created after June 14, 2012, and in Unlimited and Enterprise edition orgs with the Service Cloud.
Draft supplementary planning guidance
LAND FOR INDUSTRY AND TRANSPORT
DRAFT SUPPLEMENTARY PLANNING GUIDANCE
FEBRUARY 2012
PUBLISHED FOR PUBLIC CONSULTATION
LONDON PLAN, 2011
IMPLEMENTATION FRAMEWORK
Greater London Authority
February 2012
Published by
Greater London Authority
City Hall
The Queen’s Walk
More London
London SE1 2AA
enquiries
020 7983 4100
minicom
020 7983 4458
ISBN
Contents
Summary
Introduction
PART A: Industrial Capacity
Background and policy context
The plan, monitor and manage approach for industrial capacity
Strategic Industrial Locations and other industrial provision
Logistics, warehousing and rail freight
Waterway facilities (wharves and boatyards)
Waste management and recycling
Transport functions occupying industrial land
Utilities (energy and water management)
Wholesale markets
Industrial capacity and mixed-use development
Quality of industrial capacity
Variety of industrial capacity and provision for small and medium sized industrial enterprises
PART B: Land for Transport
Background and policy context
Rail: National Rail, Crossrail, London Underground, Docklands Light Railway (DLR), Tramlink, new and improved stations and interchanges
River Thames Crossings
Aviation
Buses and Coaches
Taxis and private hire
Walking and Cycling
Tackling Congestion
Parking and Park and Ride
Electric Vehicles
Blue Ribbon Network
Annexes:
Annex 1 Indicative industrial land release benchmarks 2011-2031
Annex 2 Indicative land demand for waste management and recycling
Annex 3 Industrial land qualitative assessment checklist
Annex 4 Principal property market areas for industry and warehousing
Annex 5 Electric vehicle charging infrastructure at new London developments – A Guide for Developers
Annex 6 List of abbreviations
Index of Supplementary Guidance Implementation Points:
For convenience, the SPG implementation point references relate to the Section numbers starting at SPG3 – accordingly there are no SPG1, 2 or 14.
Part A: Land for Industry:
SPG 3 Industrial capacity and the plan, monitor and manage approach
SPG 4 Strategic Industrial Locations and other industrial provision
SPG 5 Logistics, warehousing and rail freight
SPG 6 Waterway facilities (wharves and boatyards)
SPG 7 Waste management and recycling
SPG 8 Transport functions occupying industrial land
SPG 9 Utilities (energy and water management)
SPG10 Wholesale markets
SPG 11 Industrial capacity and mixed-use development
SPG 12 Quality of industrial capacity
SPG 13 Variety of industrial capacity and provision for small and medium sized industrial enterprises
Part B: Land for Transport:
SPG 15 Rail: National Rail, Crossrail, London Underground, Docklands Light Railway (DLR), Tramlink, new and improved stations and interchanges
SPG 16 River Thames Crossings
SPG 17 Aviation
SPG 18 Buses and Coaches
SPG 19 Taxis and private hire
SPG 20 Walking and Cycling
SPG 21 Tackling Congestion
SPG 22 Parking and Park and Ride
SPG 23 Electric Vehicles
SPG 24 Blue Ribbon Network
Summary
This Supplementary Planning Guidance (SPG) provides guidance on the implementation of policies relating to land for industrial type activities and transport in the Mayor's London Plan [1] published in July 2011 (hereafter referred to as the 'London Plan'). It is focussed on the implementation of London Plan Policies 2.17 Strategic Industrial Locations, and 4.4 Managing Industrial Land and Premises; and 6.2 Providing Public Transport Capacity and Safeguarding Land for Transport.
The SPG provides guidance to:
ensure an adequate stock of industrial capacity to meet the future needs and functional requirements of different types of industrial and related uses in different parts of London, including that for good quality and affordable space (London Plan Policy 4.4Aa);
plan, monitor and manage the release of surplus industrial land so that it can better contribute to strategic and local planning objectives, especially those to provide more housing (including affordable housing) and, in appropriate locations, to provide social infrastructure and to contribute to town centre renewal (Policy 4.4Ab);
Ensure the provision of sufficient land, suitably located, for the development of an expanded transport system to serve London’s needs (Policy 6.2C).
LAND FOR INDUSTRY
Structural change in the London economy over recent decades has led to a shift in employment away from traditional manufacturing industries and into the service sector. Over the past three decades, London’s employment in manufacturing has declined from over 1 million in 1971 to just 224,000 in 2007 and accounts for under 5 per cent of London’s total employment.
However, over the plan period for the London Plan (2011-2031) there will be increasing demand for industrial land from a range of other important industrial type functions. These include an efficient and sustainable land supply for logistics, waste management, recycling, environmental industries including renewable energy generation, transport functions, utilities, wholesale markets and some creative industries. In the highly competitive London land market, making provision for these requires positive planning to achieve outcomes that can meet the economic objectives as outlined in the London Plan and the Mayor’s Economic Development Strategy in a sustainable manner.
London Plan Policies 2.17 and 4.4 set out a plan-led approach to promoting and managing industrial capacity through three types of location:
• Strategic Industrial Locations (SILs) – a resource that must be sustained as London's main reservoir of industrial capacity but nevertheless must itself be subject to periodic review to reconcile demand and supply.
• Locally Significant Industrial Sites (LSIS) – protection of which needs to be justified in assessments of supply and demand for industrial land and identified in Development Plan Documents (DPD); and
• Other smaller industrial sites that historically have been particularly susceptible to change. In some circumstances these sites can better meet the London Plan's objectives in new uses, but in others will have a continuing local and strategic role for industry. This sub-category is likely to continue to be the area of greatest change.
In 2010, London had an estimated 7,433 hectares of industrial land, including 4,900 hectares of ‘core uses’ (industry and warehousing) and 2,500 hectares in wider industrial related uses such as waste, utilities, land for transport and wholesale markets. The 2010 total stock represents a reduction of 400 hectares since 2006 and 839 hectares since 2001. Approximately 4,175 hectares or 56 per cent of the total 2010 stock lies within allocated Strategic Industrial Locations. More than two-thirds of land in SILs is comprised of Preferred Industrial Locations (PILs) to meet the needs of industries, which to be competitive, do not place a high premium on an attractive environment, though they may require infrastructure and other qualitative improvements. The remaining third of land in SILs is comprised of Industrial Business Parks offering a higher quality environment.
In planning for industrial land, boroughs are urged to provide for sufficient land and premises in industrial and related uses, including waste management, logistics, utilities and transport functions to meet future demand in London in good quality, flexible and affordable space. Having regard to the net reduction in land demand and the careful management of vacancy rates, the London Plan indicates that there is scope to release 41 hectares per annum between 2006-2026. In accordance with London Plan paragraph 4.22, this SPG has reviewed and updated this monitoring benchmark to 2031 based upon more up to date evidence of the demand for, and supply of industrial land. The revised benchmark for planning and monitoring industrial land release in London for the period 2011-2031 is set in this SPG at 732 hectares in total, or 37 hectares per annum.
There are wide geographical variations in the demand and supply balance in different parts of London both at sub-regional and more local levels including within boroughs. Due to constraints on the quality, availability and nature of the current supply, there may be local shortfalls in quality modern floorspace and readily available development land, particularly in parts of North, West, South and Central London. Supply is less constrained in the East sub-region. The distribution of release must take full account of other land use priorities and be managed carefully to ensure that a balance is struck between retaining sufficient industrial land in appropriate locations and releasing land to other uses.
Based upon new research, this SPG reaffirms the borough groupings contained in Map 4.1 of the London Plan for the transfer of industrial land to other uses, with the exception of the borough of Hounslow, which is recommended to move into the ‘limited’ grouping. The SPG also provides guidance on local policy criteria and borough level monitoring benchmarks for industrial land transfer to manage the release of sites both within and outside SILs. These are to be refined by boroughs in Development Plan Documents in the light of local and strategic assessments of demand and supply.
The spatial expression of this guidance indicates that:
• industrial land in Strategic Industrial Locations and Locally Significant Industrial Sites (where justified) should in general be protected, subject to guidance elsewhere in this SPG. In parts of East and North London there is scope for strategically coordinated release from some SILs to be managed through the London Plan, Opportunity Area Planning Frameworks and DPDs;
• release of industrial land through development management should generally be focussed on smaller sites outside of the SIL framework.
In outer London, the full potential of the Strategic Outer London Development Centres (SOLDCs) with economic functions of greater than sub-regional importance in logistics, industry and green enterprise should be realised along with the need to manage and improve the overall stock of industrial capacity to meet both strategic and local needs, including those of small and medium sized enterprises (SMEs), start-ups and businesses requiring more affordable workspace. There is a need for partnership working to see that adequate provision in inner London is sustained, and where necessary enhanced, to meet the distinct demands of the Central Activities Zone and Canary Wharf for locally accessible, industrial type activities.
Integrated action by the GLA, TfL, boroughs and other relevant agencies in the sub-regions is essential to bring forward the most attractive sites at a time when the planning process must also manage selective release of strategically surplus capacity to other uses. Where consolidation of industrial land affects SILs, the GLA group will coordinate this process through the London Plan and Opportunity Area Planning Frameworks. The Mayor will continue to work with boroughs and other partners to develop more detailed frameworks to manage the appropriate release of land in SILs to inform detailed revisions in DPDs.
Land released as a result of such consolidation exercises must be re-used to meet strategic as well as local priorities. Housing (including affordable housing) and appropriate mixed development will be the key priority. Release of surplus industrial land, in appropriate locations, can also provide capacity for social infrastructure such as education, health, emergency services, prisons, places of worship and other community facilities, and contribute to town centre renewal.
In line with transport policy set out in the London Plan, Mayor’s Transport Strategy and the London Freight Plan, this SPG encourages movement of goods by rail or water, including the use of inter-modal facilities and supports the sustainable movement of waste, and products arising from resource recovery, and the use of modes other than road transport when practicable.
The use and re-activation of safeguarded wharves for waterborne freight transport should be promoted in line with the implementation actions proposed for each safeguarded wharf as part of the individual site assessments in the safeguarded wharves review – upon final publication expected in late summer 2012. The development of an additional boatyard facility on industrial land to address an identified shortfall should be promoted.
Utilities (energy and water management) also represent established uses of industrial land. It is important that industrial land is available to ensure that related infrastructure required to accommodate growth can be provided. Future demand is difficult to quantify, although this is being explored as part of the emerging London Plan Implementation Plan. Boroughs should assess their potential local requirements in co-operation with their utility companies and not to release industrial land in DPDs prior to such an assessment.
Mixed uses and intensification can present urban design challenges. Redevelopment for higher density, mixed uses through the plan-led consolidation of a SIL or LSIS must not compromise their offer as the main strategic and local reservoirs of industrial capacity and as competitive locations for logistics, transport, utilities or waste management. Where land is released for housing or mixed-use development it must fulfil London Plan design policies and secure a complementary mix of activities.
The quality and fitness for purpose of industrial sites is an important concern of the London Plan and this SPG. Qualitative improvements in industrial locations can contribute towards the wider objectives of the London Plan to make London an exemplary city in terms of mitigating and adapting to climate change and urban design, public realm and architecture. The SPG contains design guidance for industrial development and areas. The effective management of industrial capacity can also play a key role in promoting social inclusion, access to employment and regeneration. Improving the quality of industrial sites including provision for Small and Medium-sized Enterprises (SMEs), will require coordinated planning, regeneration and transport actions, with cooperation between boroughs, the GLA group and other partners.
LAND FOR TRANSPORT
Reflecting London Plan policy 6.2c, Part B (chapters 14-22) of this SPG seeks to ensure that there is a sufficient supply of land for (predominantly passenger) transport uses in London. The needs of the freight and servicing sector are considered within the chapter on Logistics, Warehousing and Rail Freight (chapter 5). There is recognition within part A of Transport functions occupying industrial land (principally bus and rail-related functions).
It is recognised in the Mayor’s London Plan that transport plays an essential part in keeping the city prosperous economically and socially. Ensuring that land is available for transport functions close to the market it serves helps reduce the cost of provision, improve reliability and reduce transport’s energy consumption. It may also help ensure operational staff can access their place of work more easily.
Rail: London Underground, Crossrail Docklands Light Railway (DLR) and Tramlink, New and improved stations and interchanges
The Government and TfL are making considerable investments in the rail and Underground network in London and the South East. Beyond that there are a number of proposals in the medium to long term. Land may be needed for line of route and stations. The alignment of Crossrail 2 is currently safeguarded. Stakeholders are encouraged to consult with TfL to find out the latest developments. Land for depots and other ancillary facilities should not be released without widespread consultation. Boroughs should, in their DPDs, safeguard land identified and required by TfL for the expansion and enhancement of the London Underground, DLR and London Overground networks. Additional trams are being leased by TfL to provide extra capacity to meet demand growth in the short to medium term.
Improvements to stations, interchange improvements and new stations should, where appropriate, be supported in DPDs and land requirements identified and safeguarded, in consultation with the relevant authorities.
Roads, River Thames crossings, bus priority, congestion
TfL is developing a package of river crossing improvements in east London, including a cable car, due to open in summer 2012. Statutory safeguarding remains for fixed link river crossings between Thamesmead and Beckton, and between North Greenwich and Silvertown. TfL is committed to reviewing the extent of safeguarding to ensure that it remains appropriate and does not unduly hinder the development of land no longer required.
Bus priority schemes are under continuous development across London and in general these take place within highway limits. Some schemes may require small amounts of additional land and Boroughs should reflect this in their approach to DPDs, LIPs, development briefs and consideration of planning applications.
The Mayor wishes to see DPDs and Local Implementation Plans (LIPs) take a co-ordinated approach to smoothing traffic flow and tackling congestion and developing an integrated package of measures across a range of modes of transport. Conversely, any scheme that may have the impact of reducing road capacity, must take into account the impact on buses and wider road user journey time reliability.
Aviation
DPDs should identify and protect any land required to support improvements of the facilities for passengers at Heathrow and other London airports and to ensure the availability of viable and attractive public transport options to access them.
Buses: Garages, stations, passenger infrastructure, Coaches
Protection of existing, and provision of additional, bus garaging to provide the capacity for efficient and sustainable operation of network will continue to be needed. The loss of any bus garage through redevelopment should be resisted unless a suitable alternative site that results in no overall loss of garage capacity can be found in the immediately adjacent area, or TfL agree formally that the particular garage is no longer required. DPDs should, following consultation with TfL, include policies on protection of bus garages and identify existing garages and future sites to meet any appropriate expansion needs.
Land for new bus stations or improved passenger interchange facilities should be identified in DPDs, Opportunity Area planning frameworks (OAPFs) and masterplans and supported by specific policies. Appropriate provision of facilities to serve their schemes should be made by developers, in consultation with TfL. The loss of any existing facility, or access thereto and from, should be resisted unless a suitable alternative arrangement is agreed with TfL.
DPDs and development briefs should identify sites or locations where new, improved or expanded stopping and/or stand facilities are required, both within new developments as well as elsewhere. Opportunities should be taken to improve or provide on-street facilities and off-highway space when sites are redeveloped. Provision of bus stopping, standing and other such facilities should be subject to planning obligations and/or financial contribution from the developer, where appropriate.
Additional / alternate site(s) may be required to accommodate scheduled coach services in order to cater for growing demand at coach termini in the longer term; Westminster City Council should plan for the continued use and upgrade of Victoria Coach Station, in consultation with TfL.
Reflecting a limited supply of dedicated coach parking, DPDs should identify suitable additional locations for on-street coach bays (short term) and coach parking provision (mid to long term), particularly in Central London and in close proximity to key tourist destinations. Allowing temporary use of land for coach parking should also be considered. Promoting the shared use of existing off-street parking areas may sometimes be a possible alternative to on-street parking. TfL will work with coach operators and the private owners and tenants of suitable sites to investigate any such opportunities which arise. The loss of any existing facility for coaches or minibuses used for scheduled services and/or private hire including stations, should be resisted where possible, unless a suitable alternative arrangement is agreed with TfL.
Taxis and private hire
The loss of any existing taxi and private hire facility, including ranks, parking, driver facilities, pick/up and drop off areas and accesses, through a change of use or redevelopment, should be resisted unless a suitable alternative arrangement is agreed with TfL. Where appropriate, provision for taxis and private hire will be required to serve new development in accordance with details to be agreed with TfL. DPDs should support this additional provision and should protect existing provision. Furthermore DPDs should, in consultation with TfL, support provision for Dial a Ride and hospital and local authority transport services.
Walking and cycling
New development should provide high quality, well connected provision for cyclists. Borough LIPs and DPDs should therefore provide support and, where required, safeguarding, to allow this. Consultation with TfL is recommended to determine the current status of Barclays Cycle Superhighways and any Cycle Hire scheme.
Borough LIPs, DPD policies and development briefs should encourage development proposals that include high quality public realm and safe, convenient and direct and accessible walking routes, supported by adequate space for the introduction of Legible London wayfinding. DPDs should also contain policies and safeguarding where necessary to allow the retention and improvement of the strategic walking network and its extension where appropriate. Consultation with TfL is recommended for further information about Legible London, the Strategic Walk London Network and other walking programmes.
Tools such as Pedestrian Environment Review System (PERS) and Pedestrian Comfort Guidance (PCG) can help assess the quality and capacity of pedestrian links and access to public transport stops and facilities in discussions with developers.
Parking, Electric Vehicles, park-and-ride
Parking standards in DPDs and parking provision in development should reflect the standards set out in the London Plan, including those for Blue Badge holders. There may be the opportunity to release under-used, sub-standard or poorly located car parks for more valuable or sustainable land uses or to develop the air space above. Disposal of surplus parking land on specific sites should be identified through DPDs.
A ‘Guide for Developers’ on the provision of EV charging infrastructure is included within this SPG. DPDs, masterplans and site development briefs should reflect this guidance.
Blue Ribbon Network
The London Plan contains a number of policies that seek to encourage use of the Blue Ribbon Network for passenger and freight transport. The latter, including policy related to safeguarded wharves and boatyards, is addressed in section 6.
Passenger facilities including piers, jetties, moorings, slipways and other infrastructure should be protected and DPDs should identify locations for new and any opportunities for enhancing or extending existing facilities, especially within Opportunity Areas.
The provision of such facilities as part of waterside redevelopment, or near to major transport hubs close to the Thames and other navigable waterways, is key to extending water passenger transport. As with all transport interchanges, good access is required. Boroughs should within their DPDs identify, and safeguard where appropriate, land that would be suitable for passenger, tourist or cruise liner facilities.
The loss of any existing facilities and accesses should be resisted unless a suitable alternative arrangement is agreed with TfL. Where appropriate, provision for river buses, ferries, river/canal cruises will be required to serve new riverside development in accordance with details to be agreed with TfL. DPDs should therefore include policies to encourage improved facilities and access to support this.
Facilities for recreational use of the Blue Ribbon Network should also be promoted.
1 Introduction
Purpose of the SPG
1.1 This Supplementary Planning Guidance (SPG) provides guidelines on the implementation of policies relating to industrial capacity and land for transport in the London Plan published in July 2011 (referred to hereafter as the ‘London Plan’). It focuses on the implementation of London Plan Policies 2.17 and 4.4 to plan and manage the protection, release or enhancement of industrial land in the Strategic Industrial Locations (SIL), Locally Significant Industrial Sites (LSIS) and other smaller industrial sites not categorised as SIL or LSIS. The SPG also provides guidance on the implementation of London Plan policy 6.2 related to safeguarding land for transport. The approaches to the management of land for industry and transport set out in this SPG are designed to address the plan’s broader concerns including those to ensure that London is a city that meets the challenges of economic and population growth; secures easy, safe and convenient access for everyone to access jobs, opportunities and facilities; improves the environment and leads the world in tackling climate change (London Plan Objectives 1, 5 and 6).
Status of the SPG
As SPG, this document does not set new policy, but rather explains how policies in the London Plan should be carried through into action. It will assist boroughs when preparing Development Plan Documents (DPDs) and will also be a material planning consideration when determining planning applications. It will also be of interest to landowners, developers, planning professionals and others concerned with the use and enhancement of land and premises in industrial and other related uses.
Objectives and Structure of the SPG
Part A of the SPG provides guidance on London Plan policy 2.17 Strategic Industrial Locations and policy 4.4 Managing Industrial Land and Premises to:
(a) adopt a rigorous approach to industrial land management to ensure a sufficient stock of land and premises to meet the future needs of different types of industrial and related uses in different parts of London, including for good quality and affordable space;
(b) plan, monitor and manage the release of surplus industrial land where this is compatible with (a) above, so that it can contribute to strategic and local planning objectives, especially those to provide more housing (including affordable housing) and, in appropriate locations, to provide social infrastructure and to contribute to town centre renewal.
1.4 The background and policy context for industrial land is set out in section 2. The plan, monitor and manage approach to industrial capacity is set out in Section 3 which is intended to reconcile the relationship between demand and supply of industrial land over the period 2011-2031. It provides a geographical framework for the boroughs and other partners to identify and promote the supply of sites of appropriate quality needed by different occupiers, as well as guiding the release of surplus land for other uses through realistic and balanced land-use policies.
1.5 Section 4 sets out the Strategic Industrial Locations Framework and highlights the importance of Locally Significant Industrial Sites and other smaller industrial sites. Sections 5 to 10 of the SPG provide guidance on a range of industrial related land uses and activities that play a major role in the efficient functioning of the London-wide, sub-regional and local economies and how these contribute to wider sustainability objectives. These uses include logistics, warehousing and rail freight (Section 5), waterways facilities including wharves and boatyards (Section 6), waste management and recycling (Section 7), land for transport functions (Section 8 and see also Part B of the SPG), utilities including energy and water management (Section 9), and wholesale markets (Section 10).
1.6 Section 11 applies national and London-wide policy principles to encourage more sustainable use of industrial land by fostering intensification through higher densities and, where appropriate, a wider mix of uses where these are mutually compatible and can produce a good quality environment and sustain or enhance provision for business.
Section 12 provides guidance on enhancing the quality of London’s industrial capacity including the contribution that it can make to mitigating and adapting to climate change. This section also sets out how the management of industrial capacity can contribute towards social inclusion and regeneration. Section 13 provides advice on promoting a range of provision and responding to the needs of small and medium-sized industrial enterprises.
In Part B, the SPG provides guidance on London Plan policy 6.2 to identify and safeguard land for the full range of transport functions in addition to those occupying industrial land.
Land requirements to enable the development of transport route alignments, passenger facilities and supporting facilities are covered in sections 14 – 24. The background and policy context on land requirements for transport is set out in Section 14. The safeguarding of land required to support existing and new rail schemes on the National Rail network (including London Overground and Crossrail); London Underground; Docklands Light Railway; Tramlink; new stations and interchange projects and upgrades is covered in Section 15. The safeguarding of alignments for proposed river crossings in east London is covered in Section 16 and requirements associated with aviation in Section 17.
Section 18 sets out a range of matters relating to buses (including garages, stations and interchanges, stops and stands and priority schemes) and issues for coaches. The needs of Taxis and private hire vehicles are highlighted in Section 19. Guidance is provided in Section 20 on the need for new development to provide high quality, well connected provision for cyclists. Section 20 also highlights the role of appropriate land designation for providing a high quality public realm and safe, convenient and direct and accessible walking routes. Matters relating to tackling congestion, parking and electric vehicles are set out in Sections 21, 22 and 23 respectively. The need for supporting infrastructure to encourage use of the Blue Ribbon Network for passenger transport and to improve access for recreation to the BRN is in Section 24, complementing the coverage of Waterway facilities in Section 6. Furthermore, Transport functions occupying industrial land (principally bus, rail freight and aviation-related activities) are discussed in Section 8.
Definitions used in this SPG
Industrial Capacity is a general term referring to land, premises and other infrastructure (whether occupied or vacant) in industrial and related uses. For the purposes of this SPG, the expressions of ‘industry and related uses’ and ‘industrial land’ are broken down into the following categories:
(i) Light industry
(ii) General industry
(iii) Logistics, warehousing and storage
(iv) Waste management and recycling
(v) Utilities including energy and water management
(vi) Land for public transport functions
(vii) Wholesale markets
(viii) Some creative industries
(ix) Other industrial related uses not in categories (i) to (viii) above.
In broad terms, light industry and general industry comprise the types of activities defined in the Use Classes Order as B1(b)/(c) and B2 respectively. Logistics, warehousing and storage typically include those uses defined under Use Class B8. Together, the categories (i) to (iii) above, plus vacant industrial land comprise the ‘core’ definition for estimates of the supply of industrial land. However these Use Classes do not necessarily include all the potential users of industrial land including waste management, utilities, land for transport functions, wholesale markets and other industrial related uses, some of which, depending on the specific use, may be sui generis uses.
Conversely, some of these Use Classes can accommodate what are essentially office based rather than production activities. Definitions of industrial land are further complicated as traditional distinctions between production, assembly, distribution and office-based activities in the manufacturing sector are breaking down. Flexibility in the Use Classes and General Permitted Development Orders has in some areas led to changes from low value industrial to high value office uses. In London, the SIL framework seeks to manage this balance and accommodate industries of different types (outlined in Section 4), recognizing that they will have different spatial and environmental requirements.
Recent research studies [2] on the demand for industrial land and the use of business space have investigated the relationship between industrial employment as defined in the Standard Industrial Classification (SIC) and industrial land use. These studies note that some SIC manufacturing and wholesale distribution categories exclude activities that occupy industrial land and, conversely, include others which are highly unlikely to occupy such land, for example publishing and large manufacturing firms in central London. The latter are classified by the SIC as manufacturing but are most likely to be headquarters offices. The consensus among these research studies suggests that a refined method of selecting specific SICs for analysis is the most reliable approach when considering the demand for industrial land. Except where stated, this SPG adopts the 'wider' definition of industrial land comprising the categories (i) to (ix) in paragraph 1.11. The SICs used in assessing industrial employment are set out in recent research for the GLA [3].
C26441 NO_UNNAMED_GUARDS
"Guard objects must be named."
C++ Core Guidelines: CP.44: Remember to name your lock_guards and unique_locks
The standard library provides a few useful classes which help to control concurrent access to resources. Objects of such types lock exclusive access for the duration of their lifetime. This implies that every lock object must be named, i.e. have a clearly defined lifetime which spans the period in which access operations are executed. So, failing to assign a lock object to a variable is a mistake which effectively disables the locking mechanism (because temporary variables are transient). This rule tries to catch simple cases of such unintended behavior.
Remarks
- Only standard lock types are tracked: std::scoped_lock, std::unique_lock, and std::lock_guard.
- Only simple calls to constructors are analyzed. More complex initializer expressions may lead to inaccurate results, but this is rather an unusual scenario.
- Locks passed as arguments to function calls or returned as results of function calls are ignored.
- Locks created as temporaries but assigned to named references to extend their lifetime are ignored.
Example
missing scoped variable
void print_diagnostic(gsl::string_span<> text)
{
    auto stream = get_diagnostic_stream();
    if (stream)
    {
        std::lock_guard<std::mutex>{ diagnostic_mutex_ }; // C26441
        write_line(stream, text);
        // ...
    }
}
missing scoped variable - corrected
void print_diagnostic(gsl::string_span<> text)
{
    auto stream = get_diagnostic_stream();
    if (stream)
    {
        std::lock_guard<std::mutex> lock{ diagnostic_mutex_ };
        write_line(stream, text);
        // ...
    }
}
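If your toolset supports C++17 class template argument deduction, the same fix can be written with std::scoped_lock, one of the tracked lock types listed in the Remarks above (diagnostic_mutex_ is the mutex from the example):

std::scoped_lock lock{ diagnostic_mutex_ }; // named guard, so C26441 is not raised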
tns test init
Description
Configures your project for unit testing with a selected framework. This operation installs the nativescript-unit-test-runner npm module and its dependencies and creates a tests folder in the app directory.
Commands
Options
--framework <Framework> - Sets the unit testing framework to install. The following frameworks are available:
Command Limitations
- You can configure only one unit testing framework per project. | https://docs.nativescript.org/tooling/docs-cli/project/testing/test-init | 2019-08-17T23:23:09 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.nativescript.org |
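For example, a typical run might look like the following (mocha is used here purely as an assumed framework name — substitute any framework supported by your CLI version):

$ tns test init --framework mocha

After the command completes, place your test files in the app/tests folder that was created and run them with $ tns test ios or $ tns test android.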
Help and support
Priority support is included with the purchase of an Anaconda subscription. Visit the support section of our website for documentation and contact information for support.
Training and consulting
Training and consulting are available for the Anaconda platform and all Anaconda platform components. For more information, contact [email protected]. | http://docs.continuum.io/anaconda-adam/help-support/ | 2019-08-17T22:52:09 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.continuum.io |
tns test ios
Description
Runs the tests in your project on connected iOS devices or the iOS Simulator. Your project must already be configured for unit testing by running $ tns test init.
WARNING: You can run this command only on macOS systems. To view the complete help for this command, run $ tns help test ios
Commands
Options
--watch - If set, when you save changes to the project, changes are automatically synchronized to the connected device and tests are re-run.
--device - Specifies the serial number or the index of the connected device on which you want to run tests. To list all connected devices, grouped by platform, run $ tns device. You cannot set --device and --emulator simultaneously. <Device ID> is the device index or identifier as listed by the $ tns device command.
--emulator - Runs tests on the iOS Simulator. You cannot set --device and --emulator simultaneously.
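For instance, assuming the project has already been configured with $ tns test init, the following illustrative invocations show common ways to run the tests (the device identifier is a placeholder — use a value reported by $ tns device):

# Run the tests on the iOS Simulator and re-run them whenever you save changes
$ tns test ios --emulator --watch

# Run the tests on a specific connected device
$ tns test ios --device <Device ID>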
Converts and copies a source texture to a destination texture with a different format or dimensions.
This function provides an efficient way to convert between textures of different formats and dimensions. The destination texture format must be uncompressed and correspond to a RenderTextureFormat supported on the current device. You can use 2D and cubemap textures as the source and 2D, cubemap, 2D array and cubemap array textures as the destination.
Note that due to API limitations, this function is not supported on DX9 or Mac+OpenGL. | https://docs.unity3d.com/es/2018.2/ScriptReference/Rendering.CommandBuffer.ConvertTexture.html | 2019-08-17T22:50:44 | CC-MAIN-2019-35 | 1566027313501.0 | [] | docs.unity3d.com |
Add a device compliance policy for macOS devices with Intune
An Intune macOS device compliance policy determines the rules and settings that macOS devices must meet to be considered compliant. Choose Settings > Configure, and enter the Device Health, Device Properties, and System Security settings. When you're done, select OK, and then Create.
Device Health
- Require a system integrity protection: Require your macOS devices to have System Integrity Protection enabled.
Device properties
- Minimum OS version: When a device doesn't meet the minimum OS version requirement, it's reported as noncompliant. A link with information on how to upgrade appears. The end user can choose to upgrade their device, and then get access to company resources.
- Maximum OS version: When a device is using an OS version later than the version specified in the rule, access to company resources is blocked. The user is asked to contact their IT admin. Until there is a rule change to allow the OS version, this device can't access company resources.
System security settings
Require a password to unlock mobile devices: Require users to enter a password before they can access their device..
Password type: Choose if a password should have only Numeric characters, or if there should be a mix of numbers and other characters (Alphanumeric).
Number of non-alphanumeric characters in password: Enter the minimum number of special characters (&, #, %, !, and so on) that must be.: Choose Require to encrypt data storage on your devices..
Tip
By default, devices check for compliance every eight hours. But users can force this process through the Intune Company Portal app.
You have applied the policy to users. The devices used by the users who are targeted by the policy are evaluated for compliance.
Next steps
Automate email and add actions for noncompliant devices
Monitor Intune Device compliance policies | https://docs.microsoft.com/en-us/intune/compliance-policy-create-mac-os | 2018-07-15T20:57:35 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.microsoft.com |
Create a table index Constructing an effective index requires specialized knowledge in database architecture. If you do not have this expertise, you should consult someone who does. Before you beginRole required: admin Procedure Navigate to System Definition > Tables. In the list, find the table you want and click its label. Navigate to the Database Indexes related list. Click New. Use the slushbucket to select the fields you want included in the index. The order in which you select the fields affects how the index works. If you do not have expertise in database design, you should consult someone who does. To create a unique index, check the Unique Index box. Click Create Index. The Table Name field is there for your reference only. Overriding the default has no effect. | https://docs.servicenow.com/bundle/kingston-platform-administration/page/administer/table-administration/task/t_CreateCustomIndex.html | 2018-07-15T21:29:53 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.servicenow.com |
Setup - Astrid
Please note: if you have chosen the Automatic import method from the previous tutorial, you don't have to do anything that is described in the following video in order to set up your site. However, you should still watch it to familiarize yourself with how the theme works.
In this video you can see how you can import the demo content and how you can quickly build content and create your homepage. Note that although we're showing the creation of a single page, the process described in the video can be used for as many pages as you need. | https://docs.athemes.com/article/139-setup-astrid | 2019-06-16T02:40:30 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.athemes.com |
The New Relic Ruby agent automatically instruments the Redis gem (gem version 3.0.0 or higher). After you install the agent and generate traffic for your app, you can view Redis operations on the APM Overview page, on the Databases page, and in transaction traces. For example, the main chart on the APM Overview page will show color-coded Redis information.
Redis instrumentation requires Ruby agent version 3.13.0 or higher.
Interaction with newrelic-redis
The third-party
newrelic-redis gem provides Redis instrumentation support as an add-on to New Relic's Ruby agent. If the Ruby agent detects
newrelic-redis, it will not install the built-in Redis instrumentation and will record a log message like this at startup:
INFO : Not installing New Relic supported Redis instrumentation because the third party newrelic-redis gem is present
To use New Relic's built-in Redis instrumentation and view Redis information in the UI, remove the
newrelic-redis gem.
Removing the
newrelic-redis gem in favor of the built-in instrumentation will change your transaction names. To preserve your existing transaction names, ignore the log message and do not uninstall the gem.
Capture Redis command arguments
By default, the Ruby agent only captures Redis command names. To capture Redis command arguments, use this configuration:
transaction_tracer: record_redis_arguments: true
The agent limits the number of characters and arguments collected from each transaction trace node. The agent truncates items that exceed these limits. | https://docs.newrelic.com/docs/agents/ruby-agent/instrumented-gems/redis-instrumentation | 2019-06-16T03:40:57 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.newrelic.com |
This topic provides information about errors and events for the SQL Server Database Engine.
In This Section
Understanding Database Engine Errors
Describes the format of Database Engine error messages and explains how to view error messages and return error messages to applications.
Cause and Resolution of Database Engine Errors
Provides an explanation of system error messages, possible causes, and any actions you can take. | https://docs.microsoft.com/en-us/sql/relational-databases/errors-events/database-engine-events-and-errors | 2017-06-22T21:16:43 | CC-MAIN-2017-26 | 1498128319902.52 | [] | docs.microsoft.com |
Persisting PDX Metadata to Disk
Persisting PDX Metadata to Disk
GemFire allows you to persist PDX metadata to disk and specify the disk store to use.
Prerequisites
- Understand generally how to configure the GemFire cache. See Basic Configuration and Programming.
- Understand how GemFire disk stores work. See Disk Storage.
Procedure
- Set the <pdx> attribute persistent to true in your cache configuration. This is required for caches that use PDX with persistent regions and with regions that use a gateway sender to distribute events across a WAN. Otherwise, it is optional.
- (Optional) If you want to use a disk store that is not the GemFire default disk store, set the <pdx> attribute disk-store-name to the name of your non-default disk store.Note: If you are using PDX serialized objects as region entry keys and you are using persistent regions, then you must configure your PDX disk store to be a different one than the disk store used by the persistent regions.
- (Optional) If you later want to rename the PDX types that are persisted to disk, you can do so on your offline disk-stores by executing the pdx rename command. See pdx rename.
Example cache.xml:
This example cache.xml enables PDX persistence and sets a non-default disk store in a server cache configuration:
<pdx read- <pdx-serializer> <class-name>pdxSerialization.defaultSerializer</class-name> </pdx-serializer> </pdx> <region ... | http://gemfire.docs.gopivotal.com/docs-gemfire/developing/data_serialization/persist_pdx_metadata_to_disk.html | 2017-06-22T20:41:13 | CC-MAIN-2017-26 | 1498128319902.52 | [] | gemfire.docs.gopivotal.com |
If you are a user and not a developer, please consider using one of the prebuilt packages of KiCad which can be found at the download page on the KiCad website. Building KiCad from source is not for the faint of heart and is not recommended unless you have reasonable software development experience. This document contains the instructions on how to build KiCad from source on the supported platforms. It is not intended as a guide for installing or building library dependencies. Please consult your platforms documentation for installing packages or the source code when building the library dependencies. Currently the supported platforms are Windows Versions 7-10, just about any version of Linux, and macOS 10.9-10.12. You may be able to build KiCad on other platforms but it is not supported. On Windows and Linux the GNU GCC is the only supported compiler and on macOS Clang is the only supported compiler.
Before you begin building KiCad, there are a few tools required in addition to your compiler. Some of these tools are required to build from source and some are optional.
CMake is the build configuration and makefile generation tool used by KiCad. It is required.
The official source code repository is hosted on Launchpad and requires git to get the latest source. If you prefer to use GitHub there is a read only mirror of the official KiCad repository. Do not submit pull requests to GitHub. Changes should be sent to the KiCad developer's mailing list using
git format-patch and attaching the patch with [PATCH] at the beginning of the subject or using
git send-email to send your commit directly to the developer's mailing list.
The KiCad source code is documented using Doxygen which parses the KiCad source code files and builds a dependency tree along with the source documentation into HTML. Doxygen is only required if you are going to build the KiCad documentation.
SWIG is used to generate the Python scripting language extensions for KiCad. SWIG is not required if you are not going to build the KiCad scripting extension.
This section includes a list of library dependencies required to build KiCad. It does not include any dependencies of the libraries. Please consult the library's documentation for any additional dependencies. Some of these libraries are optional depending on you build configuration. This is not a guide on how to install the library dependencies using you systems package management tools or how to build the library from source. Consult the appropriate documentation to perform these tasks.
wxWidgets is the graphical user interface (GUI) library used by KiCad. The current minimum version is 3.0.0. However, 3.0.2 should be used whenever possible as there are some known bugs in prior versions that can cause problems on some platforms. Please note that there are also some platform specific patches that must be applied before building wxWidgets from source. These patches can be found in the patches folder in the KiCad source. These patches are named by the wxWidgets version and platform name they should be applied against. wxWidgets must be built with the –with-opengl option. If you installed the packaged version of wxWidgets on your system, verify that it was built with this option.
The Boost C++ library is required only if you intend to build KiCad with the system installed version of Boost instead of the default internally built version. If you use the system installed version of Boost, version 1.56 or greater is required. Please note there are some platform specific patches required to build a working Boost library. These patches can be found in the patches folder in the KiCad source. These patches are named by the platform name they should be applied against.
The OpenGL Extension Wrangler is an OpenGL helper library used by the KiCad graphics abstraction library [GAL] and is always required to build KiCad.
The OpenGL Mathematics Library is an OpenGL helper library used by the KiCad graphics abstraction library [GAL] and is always required to build KiCad.
The OpenGL Utility Toolkit is an OpenGL helper library used by the KiCad graphics abstraction library [GAL] and is always required to build KiCad.
The Cairo 2D graphics library is used as a fallback rendering canvas when OpenGL is not available and is always required to build KiCad.
The Python programming language is used to provide scripting support to KiCad. It only needs to be install if the KiCad scripting build configuration option is enabled.
The wxPython library is used to provide a scripting console for Pcbnew. It only needs to be installed if the wxPython scripting build configuration option is enabled. When building KiCad with wxPython support, make sure the version of the wxWidgets library and the version of wxPython installed on your system are the same. Mismatched versions have been known to cause runtime issues.
The Curl Multi-Protocol File Transfer Library is used to provide secure internet file transfer access for the GitHub plug in. This library only needs to be installed if the GitHub plug build option is enabled.
KiCad has many build options that can be configured to build different options depending on the availability of support for each option on a given platform. This section documents these options and their default values.
The USE_WX_GRAPHICS_CONTEXT option replaces wxDC with wxGraphicsContext for graphics rendering. This option is disabled by default. Warning: the is experimental and has not been maintained so use at your own risk.
The USE_WX_OVERLAY option is used to enable the optional wxOverlay class for graphics rendering on macOS. This is enabled on macOS by default and disabled on all other platforms.
The KICAD_SCRIPTING option is used to enable building the Python scripting support into Pcbnew. This options is disabled by default.
The KICAD_SCRIPTING_MODULES option is used to enable building and installing the Python modules supplied by KiCad. This option is disabled by default.
The KICAD_SCRIPTING_WXPYTHON option is used to enable building the wxPython interface into Pcbnew including the wxPython console. This option is disabled by default.
The BUILD_GITHUB_PLUGIN option is used to control if the GitHub plug in is built. This option is enabled by default.
The KICAD_SPICE option is used to control if the Spice simulator interface for Eeschema is built. When this option is enabled, it requires ngspice to be available as a shared library. This option is disabled by default.
The KICAD_USE_OCE is used for the 3D viewer plugin to support STEP and IGES 3D models. Build tools and plugins related to OpenCascade Community Edition (OCE) are enabled with this option. When enabled it requires OCE to be available, and the location of the installed OCE library to be passed via the OCE_DIR flag. This option is disabled by default.
The KiCad source code includes some demos and examples to showcase the program. You can choose whether install them or not with the KICAD_INSTALL_DEMOS option. You can also select where to install them with the KICAD_DEMOS variable. On Linux the demos are installed in $PREFIX/share/kicad/demos by default.
The KICAD_SCRIPTING_ACTION_MENU option allows Python scripts to be added directly to the Pcbnew menu. This option is disabled by default. Please note that this option is highly experimental and can cause Pcbnew to crash if Python scripts create an invalid object state within Pcbnew.
The KiCad version string is defined by the three CMake variables KICAD_VERSION, KICAD_BRANCH_NAME, and KICAD_VERSION_EXTRA. Variables KICAD_BRANCH_NAME and KICAD_VERSION_EXTRA are defined as empty strings and can be set at configuration. Unless the source branch is a stable release archive, KICAD_VERSION is set to "no-vcs-found". If an optional variable is not define, it is not appended to the full version string. If an optional variable is defined it is appended along with a leading '-' to the full version string as follows:
KICAD_VERSION[-KICAD_BRANCH_NAME][-KICAD_VERSION_EXTRA]
When the version string is set to "no-vcs-found", the build script automatically creates the version string information from the git repository information as follows:
(2016-08-26 revision 67230ac)-master | | | | | branch name, "HEAD" if not on a branch, | | or "unknown" if no .git present | | | abbreviated commit hash, or no-git if no .git | present | date of commit, or date of build if no .git present
There are several ways to get the KiCad source. If you want to build the stable version you can down load the source archive from the KiCad Launchpad developers page. Use tar or some other archive program to extract the source on your system. If you are using tar, use the following command:
tar -xzf kicad_src_archive.tar.gz
If you are contributing directly to the KiCad project on Launchpad, you can create a local copy on your machine by using the following command:
git clone -b master
Here is a list of source links:
Stable release archive:
Development branch:
GitHub mirror:
To perform a full build on Linux, run the following commands:
cd <your kicad source mirror> mkdir -p build/release mkdir build/debug # Optional for debug build. cd build/release cmake -DCMAKE_BUILD_TYPE=Release \ -DKICAD_SCRIPTING=ON \ -DKICAD_SCRIPTING_MODULES=ON \ -DKICAD_SCRIPTING_WXPYTHON=ON \ ../../ make sudo make install
If the CMake configuration fails, determine the missing dependencies and install them on your system. By default, CMake sets the install path on Linux to /usr/local. Use the CMAKE_INSTALL_PREFIX option to specify a different install path.
The preferred Windows build environment is MSYS2. The MinGW build environment is still supported but it is not recommended because the developer is responsible for building all of the dependencies from source which is a huge and frustrating undertaking. The MSYS2 project provides packages for all of the require dependencies to build KiCad. To setup the MSYS2 build environment, depending on your system download and run either the MSYS2 32-bit Installer or the MSYS2 64-bit Installer. After the installer is finished, update to the latest package versions by running the
msys2_shell.bat file located in the MSYS2 install path and running the command
pacman -Syu. If the msys2-runtime package is updated, close the shell and run
msys2_shell.bat.
The easiest way to build KiCad using the MSYS2 build environment is to use the KiCad PKGBUILD provided by the MSYS2 project to build package using the head of the KiCad development branch. To build the KiCad package, run the
msys2_shell.bat file located in the MSYS2 install path and run the following commands:
pacman -S base-devel git mkdir src cd src git clone cd MinGW-packages/mingw-w64-kicad-git makepkg-mingw -is
This will download and install all of the build dependencies, clone the KiCad source mirror from GitHub, create both 32-bit and 64-bit KiCad packages depending on your MSYS setup, and install the newly built KiCad packages. Please note that this build process takes a very long time to build even on a fast system.
If you do not want to create KiCad packages and prefer the traditional
make && make install method of building KiCad, your task is significantly more involved. For 64 bit builds run the
mingw64_shell.bat file located in the MSYS2 install path. At the command prompt run the the following commands:
pacman -S base-devel \ mingw-w64-x86_64-cmake \ mingw-w64-x86_64-doxygen \ mingw-w64-x86_64-gcc \ mingw-w64-x86_64-python2 \ mingw-w64-x86_64-pkg-config \ mingw-w64-x86_64-swig \ mingw-w64-x86_64-boost \ mingw-w64-x86_64-cairo \ mingw-w64-x86_64-glew \ mingw-w64-x86_64-curl \ mingw-w64-x86_64-wxPython \ mingw-w64-x86_64-wxWidgets \ mingw-w64-x86_64-toolchain \ mingw-w64-x86_64-glm cd kicad-source mkdir -p build/release mkdir build/debug # Optional for debug build. cd build/release cmake -DCMAKE_BUILD_TYPE=Release \ -G "MSYS Makefiles" \ -DCMAKE_PREFIX_PATH=/mingw64 \ -DCMAKE_INSTALL_PREFIX=/mingw64 \ -DDEFAULT_INSTALL_PATH=/mingw64 \ -DKICAD_SCRIPTING=ON \ -DKICAD_SCRIPTING_MODULES=ON \ -DKICAD_SCRIPTING_WXPYTHON=ON \ ../../ make install
For 32-bit builds, run
mingw32_shell.bat and change
x86_64 to
i686 in the package names and change the paths in the cmake configuration from
/mingw64 to
/mingw32.
There are some known issues that are specific to MSYS2. This section provides a list of the currently known issues when building KiCad using MSYS2.
The context library of the x86_64 package of Boost version 1.59 is broken and will cause KiCad to crash. You must downgrade to version 1.57 by running the command:
pacman -U /var/cache/pacman/pkg/mingw-w64-x86_64-boost-1.57.0-4-any.pkg.tar.xz
If the file mingw-w64-x86_64-boost-1.57.0-4-any.pkg.tar.xz is no longer in your pacman cache, you will have to down load it from the MSYS2 64-bit SourceForge repo. You should also configure pacman to prevent upgrading the 64-bit Boost package by adding:
IgnorePkg = mingw-w64-x86_64-boost
to your /etc/pacman.conf file.
Building on macOS is challenging at best. It typically requires building dependency libraries that require patching in order to work correctly. For more information on the complexities of building and packaging KiCad on macOS, see the [macOS bundle build scripts][].
In the following set of commands, replace the macOS version number (i.e. 10.9) with the desired minimum version. It may be easiest to build for the same version you are running.
Download the wxPython source and build using the following commands:
cd path-to-wxwidgets-src patch -p0 < path-to-kicad-src/patches/wxwidgets-3.0.0_macosx.patch patch -p0 < path-to-kicad-src/patches/wxwidgets-3.0.0_macosx_bug_15908.patch patch -p0 < path-to-kicad-src/patches/wxwidgets-3.0.0_macosx_soname.patch patch -p0 < path-to-kicad-src/patches/wxwidgets-3.0.2_macosx_yosemite.patch patch -p0 < path-to-kicad-src/patches/wxwidgets-3.0.0_macosx_scrolledwindow.patch patch -p0 < path-to-kicad-src/patches/wxwidgets-3.0.2_macosx_sierra.patch patch -p0 < path-to-kicad-src/patches/wxwidgets-3.0.2_macosx_unicode_pasteboard.patch mkdir build cd build export MAC_OS_X_VERSION_MIN_REQUIRED=10.9 ../configure \ --prefix=`pwd`/../wx-bin \ --with-opengl \ --enable-aui \ --enable-utf8 \ --enable-html \ --enable-stl \ --with-libjpeg=builtin \ --with-libpng=builtin \ --with-regex=builtin \ --with-libtiff=builtin \ --with-zlib=builtin \ --with-expat=builtin \ --without-liblzma \ --with-macosx-version-min=10.9 \ --enable-universal-binary=i386,x86_64 \ CC=clang \ CXX=clang++
Build KiCad using the following commands:
cd kicad-source mkdir -p build/release mkdir build/debug # Optional for debug build. cd build/release cmake -DCMAKE_C_COMPILER=clang \ -DCMAKE_CXX_COMPILER=clang++ \ -DCMAKE_OSX_DEPLOYMENT_TARGET=10.9 \ -DwxWidgets_CONFIG_EXECUTABLE=path-to-wx-install/bin/wx-config \ -DKICAD_SCRIPTING=ON \ -DKICAD_SCRIPTING_MODULES=ON \ -DKICAD_SCRIPTING_WXPYTHON=ON \ -DPYTHON_EXECUTABLE=path-to-python-exe/python \ -DPYTHON_SITE_PACKAGE_PATH=wx/wx-bin/lib/python2.7/site-packages \ -DCMAKE_INSTALL_PREFIX=../bin \ -DCMAKE_BUILD_TYPE=Release \ ../../ make make install
There are some known issues that effect all platforms. This section provides a list of the currently known issues when building KiCad on any platform.
As of version 5 of GNU GCC, using the default configuration of downloading, patching, and building of Boost 1.54 will cause the KiCad build to fail. Therefore a newer version of Boost must be used to build KiCad. If your system has Boost 1.56 or greater installed, you job is straight forward. If your system does not have Boost 1.56 or greater installed, you will have to download and build Boost from source. If you are building Boost on windows using MinGW you will have to apply the Boost patches in the KiCad source patches folder. | http://docs.kicad-pcb.org/doxygen/md_Documentation_development_compiling.html | 2017-06-22T20:21:51 | CC-MAIN-2017-26 | 1498128319902.52 | [] | docs.kicad-pcb.org |
Working with variables
Help Scout variables are placeholders for generated customer, User, mailbox, and conversation data. Variables are defined using specific text between curly brackets and % signs. Each type of variable has its own set of properties. For example, User variables pull data from the Help Scout User profile.
In this article
Variable usage
When editing saved replies, workflow emails, the mailbox signature, or your auto-reply, you'll find a dropdown list of valid variables in the top-right corner of the editor. To insert a variable, just open the menu and pick one from the list.
Let's say you wanted to automatically insert the customer's first name in a saved reply. Using the {%customer.firstName%} variable will automatically populate the customer's first name when the reply is sent; that way, you don't have to type it out manually when the editor is open. Remember to place all variables between a curly bracket and a % sign.
Variable fallback attribute
Use the fallback attribute with variables that might not populate with the correct or appropriate data in every use case. For example, some customer profiles do not contain a first and/or last name, only an email address. A variable with a fallback looks like this: {%customer.firstName,fallback=there%}!
In our example above, if the customer has a first name populated on their profile, let's say "John" in this case, the variable output would read: Hi John! If no first name was present, the variable output would read: Hi there!
Common use cases
Here are a few common scenarios where variables are super handy:
- Automatically adding the responding User name and job title to the mailbox signature.
- Using the customer's first or full name in the auto-reply greeting.
- Reference a specific conversation number in a follow-up workflow.
- Add the mailbox email address to the signature alongside other options to get help.
Complete variable list
Certain variables can only be used in certain areas of Help Scout. Follow the tables below to see where specific variables can be used. | http://docs.helpscout.net/article/468-variables | 2017-06-22T20:23:25 | CC-MAIN-2017-26 | 1498128319902.52 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/57ccfff8903360649f6e51d0/file-E7We3WwMxK.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/57cd009fc69791083999ff02/file-qRX17oN9z2.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/57cd00ad903360649f6e51d1/file-wceaAEIlrF.png',
None], dtype=object) ] | docs.helpscout.net |
Development Guide
Local Navigation
Push request response
The PPG sends the push request response to the Push Indicator to acknowledge that the push message arrived. It is not a notification of the result of the push message.
Next topic: Example XML: Reading a push response
Previous topic: Code sample: Creating a push request message
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/25167/Push_request_response_604570_11.jsp | 2015-03-27T03:40:15 | CC-MAIN-2015-14 | 1427131294307.1 | [] | docs.blackberry.com |
Django documentation¶
Everything you need to know about Django.
- Model instances: Instance methods | Accessing related objects
- Advanced: Managers | Raw SQL | Transactions | Aggregation | Custom fields | Multiple databases
- Other: Supported databases | Legacy databases | Providing initial data | Optimize database access
The view layer¶
Django has the concept of “views” to encapsulate the logic responsible for processing a user’s request and for returning the response. Find all you need to know about views via the links below:
- The basics: URLconfs | View functions | Shortcuts | Decorators
- Reference: Built-in Views | Request/response objects | TemplateResponse objects
- File uploads: Overview | File objects | Storage API | Managing files | Custom storage
- Class-based views: Overview | Built-in display views | Built-in editing views | Using mixins | API reference | Flattened index
- Advanced: Generating CSV | Generating PDF
- Middleware: Overview | Built-in middleware classes
The template layer¶
The template layer provides a designer-friendly syntax for rendering the information to be presented to the user. Learn how this syntax can be used by designers and how it can be extended by programmers:
-
- Exceptions: Overview
- django-admin.py and manage.py: Overview | Adding custom commands
- Testing: Introduction | Writing and running tests | Advanced topics
- Deployment: Overview | WSGI servers | FastCGI/SCGI/AJP | Deploying static files | Tracking code errors by email
The admin¶
Find all you need to know about the automated admin interface, one of Django’s most popular features:
Security¶
Security is a topic of paramount importance in the development of Web applications and Django provides multiple protection tools and mechanisms:
Internationalization and localization¶
Django offers a robust internationalization and localization framework to assist you in the development of applications for multiple languages and world regions:
Python compatibility¶
Django aims to be compatible with multiple different flavors and versions of Python:
Geographic framework¶
GeoDjango intends to be a world-class geographic Web framework. Its goal is to make it as easy as possible to build GIS Web applications and harness the power of spatially enabled data.
Common Web application tools¶
Django offers multiple tools commonly needed in the development of Web applications:
Other core functionalities¶
The Django open-source project¶
Learn about the development process for the Django project itself and about how you can contribute:
- Community: How to get involved | The release process | Team of committers | The Django source code repository | Security policies | Mailing lists
- Design philosophies: Overview
- Documentation: About this documentation
- Third-party distributions: Overview
- Django over time: API stability | Release notes and upgrading instructions | Deprecation Timeline | https://docs.djangoproject.com/en/1.6/ | 2015-03-27T03:36:59 | CC-MAIN-2015-14 | 1427131294307.1 | [] | docs.djangoproject.com |
Building Groovy Eclipse in Eclipse
Below).
Install Groovy-Eclipse.
Import Groovy-Eclipse source code into your workspace (the simple way).
Import Groovy-Eclipse source code into your workspace (the complicated way)::
Now, look for some interesting bugs to work on
See the list in jira. All the issues labelled help requested are a good place to start. Also, you can contact the eclipse plugin dev mailing list and we can help you out. | http://docs.codehaus.org/pages/diffpages.action?originalId=231735813&pageId=233053141 | 2015-03-27T03:38:53 | CC-MAIN-2015-14 | 1427131294307.1 | [array(['/download/attachments/152207389/project_import.png?version=1&modificationDate=1336150056486&api=v2',
'project_import.png'], dtype=object) ] | docs.codehaus.org |
Build System
The major options for modular build systems are:
- gradle
- mvn (+gmaven)
A decision has been made to go with Gradle as the first attempt. A Gradle replacement for the current Ant build will be created over the next few weeks. This can then be evolved into a multi-project build once the module structure is decided.
Modules
- compiler
- runtime
- grape
- numberUtils?
- stringUtils?
- collectionUtils? maps
- googleCollections?
- file, streams
- jsr203/nio
- process, threads
- templates
- regex?
- encoding?
- cli?
- xml
- dateUtils (inc time/calendar)
- groovysh (org.codehaus.groovy.tools.shell)
- groovyconsole (groovy.ui + friends)
- groovydoc
- swing
- jmx
- mock
- sql
- ant (org.codehaus.groovy.ant)
- javax.script
- bsf
- servlet
- inspect
- test/junit
- combinators
- Scala compiler hooks/Scalify?
Basically, anything not required for the the basic groovy language to execute, should be in a module. So
groovy.GroovyShell (legacy),
groovy.Grab*,
groovy.util.GroovyTestCase are not required for the runtime to function. While useful, they should be in optional modules organized by function. | http://docs.codehaus.org/pages/viewpage.action?pageId=136675450 | 2015-03-27T03:45:24 | CC-MAIN-2015-14 | 1427131294307.1 | [] | docs.codehaus.org |
and include-algol implementation departs from the Algol 60 specification in the following minor ways:
Strings are not permitted to contain nested quotes.
Identifiers cannot contain whitespace.
Argument separators are constrained to be identifiers (i.e., they cannot be keywords, and they cannot consist of multiple identifiers separated by whitespace.)
Numbers containing exponents (using the “10” subscript) are not supported.
Identifiers and keywords are case-sensitive. The boldface/underlined keywords of the report are represented by the obvious character sequence, as are most operators. A few operators do not fit into ASCII, and they are mapped as follows:
In addition to the standard functions, the following output functions are supported:
A prompt in DrRacket’s interactions area accepts whole programs only for the Algol 60 language. | http://docs.racket-lang.org/algol60/index.html | 2015-03-27T03:35:44 | CC-MAIN-2015-14 | 1427131294307.1 | [] | docs.racket-lang.org |
Annogen is a framework which helps you work with JSR175 Annotations. Annogen enables you to
Override JSR175 Annotation values
...with data from XML or arbitrary plugin code that you write.
Migrate your JDK1.4 code to JSR175
...by translating javadoc tags into Annotations
Work with popular introspection APIs
...including Reflection, Doclet, QDox, Mirror, and JAM | http://docs.codehaus.org/pages/viewpage.action?pageId=15665 | 2015-03-27T03:51:31 | CC-MAIN-2015-14 | 1427131294307.1 | [array(['http://annogen.codehaus.org/images/with-annogen.png', None],
dtype=object) ] | docs.codehaus.org |
You are viewing an old version of this page. View the current version.
Compare with Current
View Page History
Version 1
Draws an arbitrary geometric path using the Batik library. Paths are described by a series of pathOperations:
Operation
Description
moveTo[x,y]
adds a point to the path by moving to the specified coordinates (x,y)
lineTo[x,y]
adds a point to the path by drawing a straight line from the current coordinates to the new specified coordinates (x,y)
curve
quad
hline[x]
adds a point to the path by drawing an horizontal line to the specified coordinates (x,current.y)
vline[y]
adds a point to the path by drawing a vertical line to the specified coordinates (current.x,y)
shapeTo[shape,connect]
appends the geometry of the specified Shape, shape operation or outline operation to the path, possibly connecting the new geometry to the existing path segments with a line segment
closes the current subpath by drawing a straight line back to the coordinates of the last moveTo
Powered by a free Atlassian Confluence Open Source Project License granted to Codehaus. Evaluate Confluence today. | http://docs.codehaus.org/pages/viewpage.action?pageId=35422289 | 2015-03-27T03:44:26 | CC-MAIN-2015-14 | 1427131294307.1 | [] | docs.codehaus.org |
Currently, the character encoding for source files needs to be configured individually for each and every plugin that processes source files. a property, thus plugins could immediately.)
Affected Codehaus plugins:
- modello-maven-plugin/modello-core (java source generation)
- plexus-maven-plugin (javadoc extraction)
- taglist-maven-plugin (javadoc extraction)
references
Please see [0] for the related thread from the mailing list, and [1] for some further descriptions.
[0]
[1] | http://docs.codehaus.org/pages/viewpage.action?pageId=77333044 | 2015-03-27T03:49:36 | CC-MAIN-2015-14 | 1427131294307.1 | [] | docs.codehaus.org |
Exporting Amazon EC2 Instances
If you have previously imported an instance into Amazon EC2, you can use the command line tools to export that instance to Citrix Xen, Microsoft Hyper-V, or VMware vSphere. Exporting an instance that you previously imported is useful when you want to deploy a copy of your EC2 instance in your on-site virtualization environment.
If you're using VMware vSphere, you can also use the AWS Connector for vCenter to export a VM from Amazon EC2. For more information, see Exporting a Migrated Amazon EC2 Instance in the AWS Management Portal for vCenter User Guide.
VM Export Prerequisites
Before you begin the process of exporting a VM from Amazon EC2, you must be aware of the operating systems and image formats that AWS supports, and understand the limitations on exporting instances and volumes.
To import or export a VM from Amazon EC2, you must also install the CLI tools:
For more information about installing the Amazon EC2 CLI, see the Amazon EC2 Command Line Reference.
For more information about installing the AWS CLI, see the AWS Command Line Interface User Guide. For more information about the Amazon EC2 commands in the AWS CLI, see ec2 in the AWS Command Line Interface Reference.
Known Limitations for Exporting a VM from Amazon EC2
Exporting instances and volumes is subject to the following limitations:
You can have up to five export tasks per region in progress at the same time.
You cannot export Amazon Elastic Block Store (Amazon EBS) data volumes.
You cannot export an instance or AMI that has more than one virtual disk.
You cannot export an instance or AMI that uses a fixed Virtual Hard Disk (VHD).
You cannot export an instance or AMI that has more than one network interface.
You cannot export an instance or AMI from Amazon EC2 unless you previously imported it into Amazon EC2 from another virtualization environment.
You cannot export an instance or AMI from Amazon EC2 if you've shared it from another AWS account.
Exporting Image Formats from Amazon EC2
AWS supports the following image formats for exporting both volumes and instances from Amazon EC2. Make sure that you convert your output file to the format that your VM environment supports:
Open Virtual Appliance (OVA) image format, which is compatible with VMware vSphere versions 4 and 5.
Virtual Hard Disk (VHD) image format, which is compatible with Citrix Xen and Microsoft Hyper-V virtualization products.
Stream-optimized ESX Virtual Machine Disk (VMDK) image format, which is compatible with VMware ESX and VMware vSphere versions 4 and 5 virtualization products.
Export an Instance
You can use the Amazon EC2 CLI to export an instance. If you haven't installed the CLI already, see Setting Up the Amazon EC2 Tools.
The ec2-create-instance-export-task command gathers all of the information necessary (e.g., instance ID; name of the Amazon S3 bucket that will hold the exported image; name of the exported image; VMDK, OVA, or VHD format) to properly export the instance to the selected virtualization format. The exported file is saved in the Amazon S3 bucket that you designate. VM Export supports the export of Dynamic Virtual Hard Disks (VHDs). Fixed VHDs are not supported.
Note
When you export an instance, you are charged the standard Amazon S3 rates for the bucket where the exported VM is stored. In addition, a small charge reflecting temporary use of an Amazon EBS snapshot might appear on your bill. For more information about Amazon S3 pricing, see Amazon Simple Storage Service (S3) Pricing.
To export an instance
Create an Amazon S3 bucket for storing the exported instances. The Amazon S3 bucket must grant Upload/Delete and View Permissions access to the [email protected] account. For more information, see Creating a Bucket and Editing Bucket Permissions in the Amazon Simple Storage Service Console User Guide.
Note
Instead of the [email protected] account, you can use region-specific canonical IDs. The Amazon S3 bucket for the destination image must exist and must have WRITE and READ_ACP permissions granted to the following region-specific accounts using their canonical) : af913ca13efe7a94b88392711f6cfc8aa07c9d1454d4f190a624b126733a5602
For more information, see Amazon Elastic Compute Cloud (Amazon EC2) in the AWS GovCloud (US) User Guide.
All other regions: c4d8eabf8db69dbe46bfe0e517100c554f01200b104d59cd408e777ba442a322
At a command prompt, type the following command:
ec2-create-instance-export-task
instance_id-e
target_environment-f
disk_image_format-c
container_format-b
s3_bucket
instance_id
The ID of the instance you want to export.
target_environment
VMware, Citrix, or Microsoft.
disk_image_format
VMDK for VMware or VHD for Microsoft Hyper-V and Citrix Xen.
Note
VM Export only supports Dynamic Virtual Hard Disks (VHDs). Fixed VHDs are not supported.
container_format
Optionally set to OVA when exporting to VMware.
s3_bucket
The name of the Amazon S3 bucket to which you want to export the instance.
To monitor the export of your instance, at the command prompt, type the following command, where
task_idis the ID of the export task:
ec2-describe-export-tasks
task_id
Cancel or Stop the Export of an Instance
You can use the Amazon EC2 CLI to cancel or stop the export of an instance up to the point of completion. The ec2-cancel-export-task command removes all artifacts of the export, including any partially created Amazon S3 objects. If the export task is complete or is in the process of transferring the final disk image, the command fails and returns an error.
To cancel or stop the export of an instance
At the command prompt, type the following command, where
task_id is the ID of the export task:
ec2-cancel-export-task
task_id | http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExportingEC2Instances.html | 2016-07-23T11:09:44 | CC-MAIN-2016-30 | 1469257822172.7 | [] | docs.aws.amazon.com |
A common machine learning task is supervised learning. In supervised learning, the goal is to learn the functional relationship
between the input
and the output
. Predicting the qualitative output is called classification, while predicting the quantitative output is called regression.
Boosting is a powerful learning concept that provides a solution to the supervised classification learning task. It combines the performance of many “weak” classifiers to produce a powerful committee [HTF01]. A weak classifier is only required to be better than chance, and thus can be very simple and computationally inexpensive. However, many of them smartly combine results to a strong classifier that often outperforms most “monolithic” strong classifiers such as SVMs and Neural Networks.
Decision trees are the most popular weak classifiers used in boosting schemes. Often the simplest decision trees with only a single split node per tree (called stumps ) are sufficient.
The boosted model is based on
training examples
with
and
.
is a
-component vector. Each component encodes a feature relevant to the learning task at hand. The desired two-class output is encoded as -1 and +1.
Different variants of boosting are known as Discrete Adaboost, Real AdaBoost, LogitBoost, and Gentle AdaBoost [FHT98]. All of them are very similar in their overall structure. Therefore, this chapter focuses only on the standard two-class Discrete AdaBoost algorithm, outlined below. Initially the same weight is assigned to each sample (step 2). Then, a weak classifier
is trained on the weighted training data (step 3a). Its weighted training error and scaling factor
is computed (step 3b). The weights are increased for training samples that have been misclassified (step 3c). All weights are then normalized, and the process of finding the next weak classifier continues for another
-1 times. The final classifier
is the sign of the weighted sum over the individual weak classifiers (step 4).
Two-class Discrete AdaBoost Algorithm
Set
examples
with
.
Assign weights as
.
Repeat for
:
3.1. Fit the classifier
, using weights
on the training data.
3.2. Compute
.
3.3. Set
and renormalize so that
.
Classify new samples x using the formula:
.
Note
Similar to the classical boosting methods, the current implementation supports two-class classifiers only. For M > 2 classes, there is the AdaBoost.MH algorithm (described in [FHT98]) that reduces the problem to the two-class problem, yet with a much larger training set.
To reduce computation time for boosted models without substantially losing accuracy, the influence trimming technique can be employed. As the training algorithm proceeds and the number of trees in the ensemble is increased, a larger number of the training samples are classified correctly and with increasing confidence, thereby those samples receive smaller weights on the subsequent iterations. Examples with a very low relative weight have a small impact on the weak classifier training. Thus, such examples may be excluded during the weak classifier training without having much effect on the induced classifier. This process is controlled with the weight_trim_rate parameter. Only examples with the summary fraction weight_trim_rate of the total weight mass are used in the weak classifier training. Note that the weights for all training examples are recomputed at each training iteration. Examples deleted at a particular iteration may be used again for learning some of the weak classifiers further [FHT98].
Boosting training parameters.
There is one structure member that you can set directly:.
Boosted tree classifier derived from CvStatModel.
Default and training constructors.
The constructors follow conventions of CvStatModel::CvStatModel(). See CvStatModel::train() for parameters descriptions.
Trains a boosted tree classifier.
The train method follows the common template of CvStatModel::train(). The responses must be categorical, which means that boosted trees cannot be built for regression, and there should be two classes.
Predicts a response for an input sample.
The method runs the sample through the trees in the ensemble and returns the output class label based on the weighted voting.
Removes the specified weak classifiers.
The method removes the specified weak classifiers from the sequence.
Note
Do not confuse this method with the pruning of individual decision trees, which is currently not supported.
Returns error of the boosted tree classifier.
The method is identical to CvDTree::calc_error() but uses the boosted tree classifier as predictor.
Returns the sequence of weak tree classifiers.
The method returns the sequence of weak classifiers. Each element of the sequence is a pointer to the CvBoostTree class or to some of its derivatives.
Returns current parameters of the boosted tree classifier. | http://docs.opencv.org/2.4/modules/ml/doc/boosting.html | 2016-07-23T11:03:39 | CC-MAIN-2016-30 | 1469257822172.7 | [] | docs.opencv.org |
Compute the fft of a signal which spectrum has Hermitian symmetry.
Notes
These are a pair analogous to rfft/irfft, but for the opposite case: here the signal is real in the frequency domain and has Hermite symmetry in the time domain. So here it’s hermite_fft for which you must supply the length of the result if it is to be odd.
ihfft(hfft(a), len(a)) == a within numerical accuracy. | http://docs.scipy.org/doc/numpy-1.3.x/reference/generated/numpy.fft.hfft.html | 2016-07-23T11:09:07 | CC-MAIN-2016-30 | 1469257822172.7 | [] | docs.scipy.org |
The Groovy development team is pleased to announce the release of Groovy 1.7.2, a maintenance release of the 1.7.x stable branch.
For the details of bug fixes, improvements and new features, please have a look at our JIRA release notes:
I've updated the download page with links to the deliverables:
Thanks a lot to all involved, developers as well as users reporting issues improvement ideas, and helping us improve Groovy everyday! | http://docs.codehaus.org/display/GROOVY/2010/04/07 | 2012-05-24T13:27:26 | crawl-003 | crawl-003-004 | [] | docs.codehaus.org |
alert_actions.conf
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Contents
alert_actions.conf
Alert_actions.conf controls parameters for available alerting actions for scheduled searches.
To edit this configuration for your local Splunk server, make your edits in
$SPLUNK_HOME/etc/bundles/local/alert_actions.conf.
You can create this file by copying examples from
$SPLUNK_HOME/etc/bundles/READMEalert_actions.conf.example.
Never edit files in our default bundle in
$SPLUNK_HOME/etc/bundles/default or your changes may be overwritten in an upgrade.
alert_actions.conf.spec
# This file contains possible values for specific properties of email and rss # tsaved search action/alert in actions.conf file [<email saved search action>] from = <string> * Email address from where the email is coming from subject = <string> * By default the subject is SplunkAlert-<splunkname>. * You can use this to specify an alternative email subject format = <string> * Specify the format of the text in the email. * Possible values include: plain, html and csv. * The value for will also apply to any attachments as well as the text of an email. [rss saved search action] items_count = <number> * Threshold of how many rss feeds will be saved
This documentation applies to the following versions of Splunk: 3.0 , 3.0.1 , 3.0.2 , 3.1 , 3.1.1 , 3.1.2 , 3.1.3 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/3.1/Admin/Alertactionsconf | 2012-05-24T23:01:42 | crawl-003 | crawl-003-004 | [] | docs.splunk.com |
Strip syslog headers before processing
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Strip syslog headers before processing
This example removes syslog headers from non-syslog events that have been passed through syslog to Splunk, such as log4j events from a log4j-to-syslog appender.
Splunk ships with a regex to do this for you.
etc/bundles
etc/bundles/local/props.conf
This example turns on the built-in regex for remote syslog inputs.
[syslog] TRANSFORMS= syslog-header-stripper-ts-host-proc
You can append helpful names onto the TRANSFORMS declaration, like this.
[syslog] TRANSFORMS-strip-syslog= syslog-header-stripper-ts-host-proc
There are no special keywords. "TRANSFORMS-I-hate-them" would work just as well.
Example
If you have a central syslog server (
syslog1.idkfa.kom) that is getting events from multiple servers, you can forward the events to a Splunk server and index the events based on the original host (
doom1.idkfa.kom) and original timesamp (
07:37:15). For this example the events are coming into Splunk via UDP port 514 and-host
This documentation applies to the following versions of Splunk: 3.0 , 3.0.1 , 3.0.2 , 3.1 , 3.1.1 , 3.1.2 , 3.1.3 , 3.1.4 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/3.1.4/admin/StripSyslogHeadersBeforeProcessing | 2012-05-24T22:45:05 | crawl-003 | crawl-003-004 | [] | docs.splunk.com |
When called with an empty string argument, wln.scan and wln.activescan will attempt to find all wireless networks in range. After the task completes, the wln.scanresultssid R/O property will contain a comma-separated list of network names.
After the execution of the above, the s string may contain something like this: "TIBBO,c1100_1,.....,WNET2". Notice "....." -- these are five bytes with ASCII code 0. They represent a hidden network (i.e. the network that does not broadcast its SSID). Naturally, the name of this network is not revealed to your device. This will be the case regardless of whether you used wln.scan or wln.activescan. | http://docs.tibbo.com/taiko/wln_scanning_discovering.htm | 2012-05-24T14:56:08 | crawl-003 | crawl-003-004 | [] | docs.tibbo.com |
Apart from the need of protection for .NET code in general, a yFiles WPF licensee in particular is bound to the code protection requirements as stated in the license terms: the Software License Agreement for yFiles WPF (see section 2.1 Rights and Limitations) explicitly requires that all essential class, method, and field names of classes contained in the yFiles WPF library are obfuscated. The obfuscation's intended purpose, namely prevention of any unauthorized use of the library's functionality via the publicly available yFiles WPF API, is also expressed.
Name obfuscation completely defeats any attempts to access yFiles WPF functionality that is part of an application via publicly available class or method names.
All private and internal yFiles WPF classes, methods, properties, and fields are already name-obfuscated as a factory default, since they cannot be used for software development anyway. The remaining public and protected parts of the yFiles WPF API must be obfuscated before a yFiles WPF-based product can be released.
In order for an application to function properly, some of the types in yFiles WPF need to be excluded from name obfuscation. This includes any XAML markup extension types and all types or type members whose names appear plain-text in any of the application's XAML files or GraphML files.
Several yFiles WPF types and type members are already excluded from name obfuscation since they are very likely to be used in a yFiles WPF-based application. The following types are excluded as a factory default:
These excludes are realized via attribute annotations at the type level using the .NET framework's System.Reflection.ObfuscationAttribute (see Example A.2, “Obfuscation attribute annotation”).
It is crucial that an obfuscation tool that is used to obfuscate the yFiles WPF DLLs obeys the obfuscation attribute annotations. | http://docs.yworks.com/yfileswpf/dguide/yFilesDGLayout/obfuscation_application.html | 2012-05-25T01:04:44 | crawl-003 | crawl-003-004 | [] | docs.yworks.com |
Table of Contents
This appendix covers obfuscation of yFiles WPF classes in particular, but also obfuscation of .NET code in general. Obfuscation as discussed here means name obfuscation.
Generally, .NET code shipped in assemblies is inherently susceptible to reverse-engineering by simple decompilation. There are several decompilers available that can reproduce source code from given .NET code quite accurately.
Performing name obfuscation makes decompiled .NET code harder to read, if not unreadable at all.
Name obfuscation works by replacing names in .NET code, e.g., namespace names, class, method, property, and field names by nonsensical new names. The replacement is done in a consistent way, so that the .NET code still works as before.
Example A.1, “Method name obfuscation” conceptually shows the effects of name obfuscation for method names and method signatures. Several methods with distinct signature are mapped to a single new name.
Example A.1. Method name obfuscation
// Original method names/signatures. public void MyMethod(MyCustomType type, string name, bool enable){/* Code */} public void AnotherMethod(MyCustomType type){/* Code */} // Method names/signatures after name obfuscation. public void a(f b, string c, bool d){/* Code */} // Formerly 'MyMethod'. public void a(f b){/* Code */} // Formerly 'AnotherMethod'.
By replacing different method names with a single new name, decompiled .NET code is made rather incomprehensible to a human reader, making ad-hoc reverse-engineering attempts difficult. Note that, as a side effect of the obfuscation process, the size of any assemblies that bundle .NET code files is significantly reduced, too. | http://docs.yworks.com/yfileswpf/dguide/yFilesDGLayout/obfuscation.html | 2012-05-25T01:04:29 | crawl-003 | crawl-003-004 | [] | docs.yworks.com |
The yFiles library provides advanced functionality to be used in an application context. Besides support to undo/redo user actions inside a view, there is also support to generate an animated transformation for graph layout changes.
Class Graph2DUndoManager
provides
undo/redo support for Graph2D changes of both structural and graphical nature.
It implements the interfaces GraphListener
and Graph2D.BackupRealizersHandler
, and
is accordingly registered twice with a Graph2D object.
For graphical changes Graph2DUndoManager relies on the correct replication behavior of realizers since the actual undo mechanism keeps backup copies of all node and edge realizers that are associated with modified or even deleted graph elements.
Graph elements that have been deleted and then get reinserted as a result of an undo command are represented by their original objects. The realizer objects that are associated with these reinserted graph elements are also taken from the backup store. However, the graph structure has a different order after a reinsert operation since the graph elements are merely appended to the respective graph data structures.
To reduce the number of actual undo/redo steps, sequences of graph changes can
be grouped into single undo/redo commands.
So-called pre-event and post-event commands provided by class
GraphEvent
serve as special bracketing
indicators.
Class Graph
offers direct support to insert
these commands into the undo/redo history, see the following methods for undo/redo
history bracketing.
If used, pre-event and post-event commands have to be properly balanced to guarantee correct working of the undo/redo mechanism.
Graph2DUndoManager has getter methods that return complete Swing Actions for integration of both undo and redo operations into an application's context.
UndoRedoDemo.java is a tutorial demo that shows the yFiles support for undo/redo functionality in an application context.
Class Graph2DClipboard provides clipboard functionality for Graph2D objects. The clipboard can be used to create a copy of selected parts of a Graph2D instance, and can also be used to paste a previously copied subgraph back into a graph again.
Graph2DClipboard provides Java Swing actions that encapsulate all necessary clipboard functionality, namely the three operations Cut, Copy, and Paste.
Copies of graph elements are created by means of a GraphCopier.CopyFactory implementation. By default, the CopyFactory implementation that is registered with the graph is used to this end. Alternatively, the setCopyFactory method can be used to set a different CopyFactory.
Graph2DClipboard also supports copy and paste functionality for grouped nodes and nested graph structures from a graph hierarchy, i.e., a graph with an associated HierarchyManager instance, in a consistent manner. The Paste action supports pasting the contents of the clipboard directly into a group node. Through the setPasteTargetGroupPolicy(byte) method, the action can be customized to use one of several policies that determine the group node into which to paste.
ClipboardDemo.java is a tutorial demo that shows the yFiles clipboard functionality in an application context.
Class LayoutMorpher is an implementation of the general animation concept defined by interface AnimationObject. It generates a smooth animation that shows a graph's transformation from one layout to another. To this end, class LayoutMorpher utilizes an object of type GraphLayout that is expected to hold positional information for all graph elements from the original graph which is displayed by the associated Graph2DView.
LayoutMorpher provides methods to optionally animate changes in the viewport's clipping and zoom level, or to end the animation with a specific node being in the center of the view. To start the generated animation, method execute() has to be called.
Note that the calculated animation highlights changes in the locations of nodes and the locations of control points of edges. In contrast, changes in width or height of any node are not animated.
TCustomActionControl is the base class for ActionBand custom controls.
TCustomActionControl = class(TGraphicControl);
class TCustomActionControl : public TGraphicControl;
TCustomActionControl is a base class for several ActionBand components. Most notably, the standard menu item (TStandardMenuItem) and menu button (TStandardMenuButton) derive from this class.
Do not create instances of TCustomActionControl. If you would like to replace the standard menu item and/or menu button behaviors, use the classes TCustomMenuItem and/or TCustomMenuButton as parent classes for your replacement classes. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/ActnMan_TCustomActionControl.html | 2012-05-24T22:42:32 | crawl-003 | crawl-003-004 | [] | docs.embarcadero.com |
Build SNMP discovery response packet.
Namespace: SnmpSharpNet
Assembly: SnmpSharpNet (in SnmpSharpNet.dll) Version: 0.5.0.0 (0.5.0.0)
Syntax
public static SnmpV3Packet DiscoveryResponse( int messageId, int requestId, OctetString engineId, int engineBoots, int engineTime, int unknownEngineIdCount )
Public Shared Function DiscoveryResponse ( _ messageId As Integer, _ requestId As Integer, _ engineId As OctetString, _ engineBoots As Integer, _ engineTime As Integer, _ unknownEngineIdCount As Integer _ ) As SnmpV3Packet
public: static SnmpV3Packet^ DiscoveryResponse( int messageId, int requestId, OctetString^ engineId, int engineBoots, int engineTime, int unknownEngineIdCount )
Parameters
messageId - Message id to use in the response.
requestId - Request id to use in the response.
engineId - OctetString. Local engine id.
engineBoots - Local SNMP engine boots value.
engineTime - Local SNMP engine time value.
unknownEngineIdCount - Unknown engine id count value to report in the response.
Return Value
SNMP v3 packet properly formatted as a response to a discovery request.
Remarks
Manager application has to be able to respond to discovery requests to be able to handle SNMPv3 INFORM notifications. In an INFORM packet, engineId value is set to the manager stations id (unlike all other requests where agent is the authoritative SNMP engine). For the agent to discover appropriate manager engine id, boots and time values (required for authentication and privacy packet handling), manager has to be able to respond to the discovery request. | http://www.docs.snmpsharpnet.com/docs-0-5-0/html/M_SnmpSharpNet_SnmpV3Packet_DiscoveryResponse.htm | 2012-05-24T14:47:09 | crawl-003 | crawl-003-004 | [] | www.docs.snmpsharpnet.com |
Intellicus Mobile BI takes your reports and analytics to tablets and phones, on both Android and iOS platforms, and gives you instant access to your business insights. Our mobile BI SDK embeds all the mobile BI features into your mobile application. You can create your own mobile application or release a customized version on the App store in a few hours by using Intellicus mobile SDK.
Mobile Integration
You can select the fields to be displayed on the report.
Figure 23: Selecting Fields
Width denotes the number of characters of the selected field to show on the report. Field data may wrap beyond this width.
If you check Add New Fields At Runtime option, you can dynamically add more fields during runtime.
In case of a hyperlinked field (specified at the query object level), you can drill down to open another report or URL on clicking the value of field on grid. | https://docs.intellicus.com/documentation/using-intellicus-19-0/smart-view-formerly-ad-hoc-visualizer-19-0/working-with-smart-view-19-0/designing-smart-reports/interactive-grid/fields/ | 2021-06-12T17:52:18 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['https://docs.intellicus.com/wp-content/uploads/2019/12/Figure-23.png',
'.image.border Fields'], dtype=object) ] | docs.intellicus.com |
Adding Overview page
When widgets are enabled in your community, you can add an Overview page to a place.
Overview pages are based on widgets while other pages, such as Activity and custom pages are based on tiles. Please note that tiles provide a better user experience and perform better, especially on mobile devices.
Important: We do not recommend that you use widgets and widgetized Overview pages in your community. For more information, see Understanding pages in places.
If you want a place to have an Overview page, you can add either an Overview page or both Activity and Overview pages. You can't enable Overview and Custom Tile pages for a place at the same time.
Tip: If you use the checkpoint and status functionality for tracking project tasks, you may want to stick with the old-style Overview page rather than updating to the Place Template format. The widgets in the Overview page more effectively support Projects at this time.
For more information about Overview pages, see Designing Overview pages for places.
Adding an Overview Page to a place
To use an Overview page in your place:
- Go to the landing page for your place and click the Settings (gear) icon. The Edit Group page opens.
- Click.
- Select Overview or Activity + Overview.
- If prompted, select the landing page, that is, determine which page must open when a user opens the place.
- Click OK.
- Click Save at the bottom of the page.
The Overview page is added to your place and becomes visible to other users. | https://docs.jivesoftware.com/9.0_on_prem_int/end_user/jive.help.core/admin/TilesversusWidgetsGroups.html | 2021-06-12T17:33:44 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.jivesoftware.com |
Architecture
In order to perform attacks correctly, we encourage Validators and community members to understand Matic's architecture and code thoroughly. Without a thorough understanding of core components such as Heimdall and Bor, it would be difficult to perform attacks correctly on the network.
Please refer to this link for more details.
Specs
In order to understand the granular details of core components such as Heimdall, Bor and Contracts, you can head over to the links below. This will not only help you understand how Matic works under the hood, but also help you execute attacks correctly.
You can navigate to the specs with these links:
Codebases
Codebase of Matic to understand the granular interaction of how Matic's core components work.
- Heimdall:
- Bor:
- Contracts: | https://docs.matic.today/docs/validate/orientation/ | 2021-06-12T18:09:01 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.matic.today |
As Kubernetes does not provide native support for inter-pod networking, MCP uses Calico as an L3 networking provider for all Kubernetes deployments through the Container Network Interface (CNI) plugin. The CNI plugin establishes a standard for the network interface configuration in Linux containers.
Calico ensures propagation of a container IP address to all Kubernetes Nodes over the BGP protocol, as well as provides network connectivity between the containers. It also provides dynamic enforcement of the Kubernetes network policies. Calico uses the etcd key-value store or the Kubernetes API datastore as a configuration data storage.
Calico runs in a container called calico-node on every node in a Kubernetes cluster. The calico-node container is controlled by the operating system directly as a systemd service.
The calico-node container incorporates the following main Calico services:
Felix
The primary Calico agent, which is responsible for programming routes and ACLs, as well as for all components and services required to provide network connectivity on the host.
BIRD
A lightweight BGP daemon that distributes routing information between the nodes of the Calico network.
confd
Dynamic configuration manager for BIRD, triggered automatically by updates in the configuration data.
The Kubernetes controllers for Calico are deployed as a single pod in a Calico kube-controllers deployment that runs as a Kubernetes addon. The Kubernetes controllers for Calico are only required when using the etcd datastore, which is the default configuration in MCP.
The Kubernetes controllers for Calico are enabled by default and are as follows:
Policy controller
Monitors network policies and programs the Calico policies.
Namespace controller
Monitors namespaces and programs the Calico profiles.
Workload endpoint controller
Monitors changes in pod labels and updates the Calico workload endpoints.
Node controller
Monitors removal of the Kubernetes Nodes and removes corresponding data from Calico.
There are two ways to import Models into Unity: drag the Model file from your file browser straight into the Unity Project window, or copy the Model file into the Project's Assets folder.
Select the file in the Project view and navigate to the Model tab in the Inspector window to configure import options. See the Model Import Settings window reference documentation for details.
Note: You must store Textures in a folder called Textures, placed inside the Assets folder (next to the exported Mesh) within your Unity Project. This enables the Unity Editor to find the Textures and connect them to the generated Materials. For more information, see the Importing Textures documentation. | https://docs.unity3d.com/es/2018.1/Manual/HOWTO-importObject.html | 2021-06-12T18:15:48 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.unity3d.com |
SCUM BOY Short Film Competition
Screenings
Available beginning June 23 at 12:00 p.m. EDT with 48 hours to start watching after purchase. Once you begin, you’ll have 48 hours to finish watching.
Please refer to your order confirmation email for complete viewing instructions.
Details
Country: South Africa
Year: 2020
Director: Allison Swank
Producer: Allison Swank
Executive Producer: Cindy Gabriel
Director of Photography: Travys Owen
Editor: William Kalmer
Music: Will Mono, Rose Bonica, River Moon
Featuring: Oliver Pohorille aka Scum Boy, Beulah Kruger, Rowallan Vorster, Riley Wilken
Running Time (minutes): 17
Language: English
Accessibility: Closed Captions Available | https://docs.afi.com/2021/short-film-competition-2021/scum-boy | 2021-06-12T18:14:11 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.afi.com |
Walkthrough: Creating an ACL based time window to restrict users from running jobs
This walkthrough is mainly targeted at security administrators and patch administrators. In this walkthrough we are going to demonstrate how you can use the ACL policy time window to allow patching users to run deploy jobs only on weekends. Allowing patching users to run deploy jobs only on weekends can help prevent your servers from being over utilized.
What is an ACL policy time window?
While creating an ACL policy, you can create a time window, during which a role is assigned one or more additional authorizations. The role is assigned the additional authorizations only during that time window. For example, in the context of Patching, you might want to allow Patching users to run catalog update jobs or analysis jobs at any time, but restrict them to executing remediation jobs only on weekends.
The ACL policy time window is a complex feature and requires background knowledge of users, roles, authorizations and BSA objects. For information about these concepts, see Managing access.
Before you begin
To simplify the task of assigning ACL policies to a large number of servers, you can prepare server groups or server smart groups based on criteria that are relevant to your business needs.
Refer to the following pages for information about creating server groups or server smart groups:
Server groups: For information about creating a static group of servers, see Assigning servers to server groups.
Server smart groups: For a step-by-step example on creating a dynamic group of servers, see Walkthrough: Dynamically organizing assets with smart groups.
Note
A server smart group is a dynamic collection of servers that might change with time. However, while enabling maintenance windows on a server smart group, only the servers that are part of the server smart group at that particular time are enabled for the maintenance window feature. | https://docs.bmc.com/docs/ServerAutomation/86/using/managing-jobs/walkthrough-creating-an-acl-based-time-window-to-restrict-users-from-running-jobs | 2021-06-12T17:28:24 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.bmc.com |
Expiry checks for pooled capacity licenses
You can now configure a license expiry threshold for Citrix ADC pooled capacity licenses. By setting thresholds, Citrix Application Delivery Management (ADM) sends notifications via email or SMS when a license is due to expire. An SNMP trap and a notification will also be sent.
Note
When you add new licenses to the pool, the Citrix ADC instances use the new licenses on expiry of their existing licenses.
Jive Cloud User Guide: Jive for SharePoint v5
Jive for SharePoint overview
Jive for SharePoint On-Prem (v5) combines the secure, robust document management of SharePoint with the collaborative power of Jive.
Using Jive for SharePoint
Jive for SharePoint makes it easier to use Jive and SharePoint together. It brings the ease of Jive's communication and collaboration into SharePoint. Even when you need to spend your time in SharePoint, you can still be aware of and participate in Jive place activity.
Next Tech's products officially support the following operating systems:
Microsoft Windows 10
Mac OS 10.15
At any time, we only officially support the current version and one prior. If an operating system version stops receiving functionality updates from its manufacturer, we no longer officially support it.
Note that these are the officially supported operating systems, but Linux works great too! In fact, our entire development team uses Ubuntu Linux. | https://docs.next.tech/technical-requirements/operating-systems | 2021-06-12T16:48:16 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.next.tech |
If this is the first time installing the SPARK MAX Client or connecting a SPARK MAX in Recovery Mode, you may see an error the first time you try to update firmware on your computer. The DFU driver is one of two drivers installed by the Client and is used for updating firmware. It may not install completely until a SPARK MAX in DFU Mode (Recovery Mode) is plugged in to the computer.
If you see an error during your first firmware update, please do the following:
Close the Client application.
Unplug the SPARK MAX from the computer.
Plug the SPARK MAX back into the computer.
Open the Client application.
Alternatively, you can preemptively finalize the DFU driver installation by following the Recovery Mode steps before using the Client for the first time.
We are aware of this issue and will be releasing a fix in a future update of the SPARK MAX Client.
As we get feedback from users and identify exact causes for issues, please look back here for troubleshooting help. If you are running into issues running the SPARK MAX Client, try the following BEFORE contacting [email protected]:
Try running the SPARK MAX Client as an Administrator
Make sure that Windows is fully up-to-date. Some computers have Windows Update disabled and need to be updated manually.
Check the Device Manager and verify that the SPARK MAX shows up as one of the following two devices with no caution symbols:
Normal operating mode: Device Manager -> Ports (COM & LPT) -> USB Serial Device (COMx)
Recovery mode: Device Manager -> Universal Serial Bus Controllers -> STM Device in DFU Mode
If the device shows up with errors or as STM32 BOOTLOADER, try installing the DFU drivers separately. | https://docs.revrobotics.com/sparkmax/spark-max-client/troubleshooting | 2021-06-12T18:16:08 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.revrobotics.com |
Extended XBL (eXBL)
It is the metadata-enriched version of the eXploit BlockList (XBL), a list of public IP addresses on which, through behavioral heuristics, we identify indicators of compromised machines.
Each record identifies a “detection” for the given IP, with multiple records for the same IP possible in case multiple bots (or what our analysts think are multiple bots) have been identified on the same IP resource.
Each record is composed of the following fields (a short example of consuming such a record follows the list):
ipaddress: The IP address identified as the source of the bot-generated traffic. Always provided.
botname: The bot name associated with the detected activity. Where a clear association is not possible, "unknown" will be returned. Always provided.
seen: The Unix timestamp (rounded to the minute) of the last detected event for the given IP and the given botname. Always provided.
firstseen: Unix timestamp (rounded to the minute) of the first detection event for this IP+botname combination. This will match the value of seen if it is the first sighting of this type on this particular IP. When there has been no activity for this given combination for a month, the field is reset the next time it is observed. Always provided.
listed: The Unix timestamp (rounded to the minute) of when the entry reached our database. Usually, this is very close to the value of seen, except when the data is coming from batched processes. Always provided.
valid_until: Unix timestamp (rounded to the minute) of when the given entry will be considered "expired" from our dataset. Always provided.
detection: Human-readable form, briefly describing how the data was collected. This field only appears when the heuristic can involve multiple ways of collecting said data.
rule: An internal ID pointing to the rule operating the detection. Detections operated by different means or rules will show different IDs, even when they refer to the same detection. Always provided.
dstport: The destination port of the traffic that triggered the detection. Not always disclosed/available.
helo: When the detection is operated from SMTP traffic, this is the HELO string used in the SMTP session triggering the detection.
helos: Specific to MPD detections only. This is an array enumerating all the HELO strings involved in the detection. Appears only in records for the MPD heuristic.
heuristic: The heuristic applied to generate the detection. Has a limited number of possible values.
asn: The Autonomous System Number (ASN) announcing the IP; predominantly obtained from routeviews data.
lat: Geographic latitude of the IP. Only provided when geolocation data is available.
lon: Geographic longitude of the IP. Only provided when geolocation data is available.
cc: The ISO country code of the nation where the IP resides. Only provided when geolocation data is available.
protocol: IP protocol of the traffic triggering the detection. Usually either UDP or TCP.
srcip: Source IP of the traffic triggering the detection. Except in rare cases, this matches the argument of the listing for IPv4, while for IPv6 (for which the granularity of the XBL is the /64) it provides the specific IP (/128) causing the listing.
uri: Specific to the "SINKHOLE" heuristic, and to HTTP sinkhole detections in particular. This is the URI of the HTTP request triggering the listing. Not always available.
useragent: Specific to the "SINKHOLE" heuristic, and to HTTP sinkhole detections in particular. It is the User-Agent header of the HTTP request triggering the listing. Not always available.
domain: Mostly specific to the "SINKHOLE" heuristic, and to HTTP sinkholes in particular. It is the domain/hostname the traffic triggering the detection is reaching, i.e., the sinkhole'd domain. Often obtained from the "host" header of the HTTP request triggering the listing. Not always available.
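As an illustration of how these fields fit together, the short Python sketch below consumes one record. The field names follow the list above; the values, and the idea of records arriving as Python dictionaries, are invented purely for illustration.

import time

record = {
    "ipaddress": "192.0.2.10",   # example value only (documentation range)
    "botname": "unknown",
    "seen": 1700000000,
    "firstseen": 1699990000,
    "listed": 1700000000,
    "valid_until": 1702600000,
    "heuristic": "SINKHOLE",
    "cc": "US",
}

def is_active(rec, now=None):
    # An entry is considered expired once its valid_until timestamp has passed.
    now = int(time.time()) if now is None else now
    return rec["valid_until"] > now

if is_active(record):
    print(record["ipaddress"], record["botname"], record.get("heuristic"))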
You can configure a range of Player settings in the Other Settings section. Player settings let you set various player-specific options for the final game built by Unity. These options are organized into the following groups:

Rendering
Use these settings to customize how Unity renders your application for the iOS platform.

Identification
Enter identifying information for your app.

Configuration
You can choose your Mono API compatibility level for all targets. Sometimes a third-party .NET library uses functionality that is outside of your .NET compatibility level. If you are on Windows, you can use the third-party software Reflector to understand what is happening in such cases and how to fix it.

For any of these settings except Variant name, if you choose Custom value, an additional property appears underneath where you can enter your own value to use.

You can also add your own setting. Click the Add custom entry button and a new pair of text boxes appears.
To edit an existing attachment,
First, select an attachment from the listing of attachments.
On the Attachments tab, click the Edit icon. The Edit Attachments dialog window opens. The Title, URL and Description fields can all be edited.
Make any changes as needed.
Click OK. The Edit Attachments window closes.
The attachment row updates with the edited details. | https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.3/ncm-online-help-1013/GUID-13E988F1-69F6-40F0-91C2-F0C7364F93D2.html | 2021-06-12T18:43:27 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.vmware.com |
Bug Types
We organize bugs by type to make it easier to make triage decisions, get the bug to the right person to make a decision, and understand release quality.
Defect: regression, crash, hang, security vulnerability, and any other reported issue.
Enhancement: new feature, improvement in UI, performance, etc., and any other request for user-facing enhancements to the product (not engineering changes).
Task: refactoring, removal, replacement, enabling or disabling of functionality, and any other engineering task.
All bug types need triage decisions. Engineering triages defects and tasks. Product management triages enhancements.
It’s important to distinguish an enhancement from other types because they use different triage queues.
Distinguishing between defects and tasks is important because we want to understand code quality and reduce the number of defects we introduce as we work on new features and fix existing defects.
When triaging, a task can be as important as a defect. A behind the scenes change to how a thread is handled can affect performance as seen by a user. | https://firefox-source-docs.mozilla.org/bug-mgmt/guides/bug-types.html | 2021-06-12T16:58:13 | CC-MAIN-2021-25 | 1623487586239.2 | [] | firefox-source-docs.mozilla.org |
Standards and Specifications
- Foundational
- Data access
- US Core Data Profiles: FHIR data profiles for health data in the US (“core data for interoperability”)
- FHIR Bulk Data API Implementation Guide: FHIR export API for large-scale data access
- UI and Security Integration
- SMART App Launch: User-facing apps that connect to EHRs and health portals
- SMART Backend Services: Server-to-server FHIR connections
Tutorials
- Getting started with Browser-based Apps: Tutorial to create a simple app that launches via the SMART browser library
- Cerner’s Browser-based app tutorial: In-depth tutorial to build a simple browser-based app
- Getting started with CDS Hooks: Tutorial to create a simple CDS Hooks Service
- Getting started for EHRs: Tutorial to SMART-enable a clinical data system
Software Libraries
- JavaScript or TypeScript: Client-side and server-side library with support for SMART App Launch
- Node.js from Vermonster: An alternative Node.js implementation
- Python: Server-side Python library with support for SMART App Launch
- R
- Ruby
- Swift (iOS)
- Java
- .NET: FHIR client library from Firely
Test Environments
- SMART App Launcher (no registration required): Developer tool for SMART apps
- Docker Container: For local installation or experiments
- R4 open endpoint (see also R2, R3)
- SMART Bulk Data Server (no registration required): Developer tool for Bulk Data clients
- Logica Health Sandbox: Manage your own sandbox server and users over time
Vendor Sandboxes
- Allscripts
- Cerner - Provider and Patient Facing Apps
- Epic Provider Facing Apps
- Epic Patient Facing Apps
- Intersystems
- Meditech
Data
- Synthea: Open source synthetic FHIR data generator
- SMART Test Data: 60 de-identified records with Python to generate FHIR from CSVs
- SMART BP Centiles: Full featured app that has been deployed in care settings. Note that the open source version of this app requires review before production deployment and is not supported for clinical use.
- Cerner ASCVD Risk Calculator
Support
- FHIR Discussion Board (SMART Stream)
- SMART on FHIR community mailing list
- SMART Health IT: The team behind SMART on FHIR | http://docs.smarthealthit.org/ | 2021-06-12T16:49:14 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.smarthealthit.org |
Modifying database sets
After you have added a database set, you cannot change its member definition from manual to dynamic query-based, or from a dynamic query-based member definition to a manual definition. To change how membership is defined, you must delete the existing set and create a new set.
To modify settings for a database set:
- In the Admin Portal, click Resources, then click Databases to display the list of databases.
- In the Sets section, right-click a set name, then click Modify.
- Change the set name, set description, or both, as needed.
- If the membership definition is dynamic, you can modify the set membership by editing the Query field.
- Click Save.
For more information about modifying other database set information, see the following topics: | https://docs.centrify.com/Content/Infrastructure/resources-manage/svr-mgr-modifying-database-sets.htm | 2021-06-12T18:27:15 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.centrify.com |
Placement Reference
An important aspect of building a circuit layout is the placement of instances. In IPKISS, this is done using i3.place_insts. This placement function works by choosing a set of instances and defining specifications (placement, joining, alignment, …) that describe how these instances should be placed.
Once pcells are placed in the layout, the netlist can be extracted automatically by using i3.NetlistFromLayout.
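As a rough illustration only: a typical use of i3.place_insts combines an instance dictionary with a list of placement specifications. The helper names and signatures below (i3.Place, i3.Join) reflect common usage but are not taken from this page, so check the reference tables for the exact API; wg_cell stands in for some PCell defined elsewhere.

import ipkiss3.all as i3

# wg_cell is assumed to be an existing PCell (for example, a waveguide) defined elsewhere.
insts = i3.place_insts(
    insts={"wg1": wg_cell, "wg2": wg_cell},
    specs=[
        i3.Place("wg1:in", (0.0, 0.0)),   # absolute placement of one instance port
        i3.Join("wg1:out", "wg2:in"),     # join/snap two ports together
    ],
)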
The best way to learn to code is by coding. If you're teaching a programming class, you've no doubt noticed that there are some students who are champing at the bit, wanting to be challenged.
Setting extended homework, optional projects, or enrichment assignments is a great way to allow these students to do more without disadvantaging the ones who are battling to keep up with the standard syllabus.
But setting an appropriate project can be challenging for teachers. In this guide, we'll take a look at what makes a good enrichment homework assignment with some practical examples of how to set and grade these.
A good enrichment assignment should be:
Finally, extended homework needs to find a delicate balance between freedom and constraints. Students should have enough freedom to explore and learn related concepts on their own, while having enough constraints that you can fairly compare the work done by different students and provide enough guidance that they are not overwhelmed.
It's more inspiring for students if they can build something that seems real. Therefore the following topic areas often work great for enrichment assignments:
Different students are likely to find different topics more inspiring: some people just want to build games, while others will find 'serious' projects more interesting, so it's great to rotate through different topics.
An example of a simple game that is a great starting point for students who are interested in game development is the well-known memory game. In this game, we have a set of cards with two of each kind of color. The cards are placed face down and the player is allowed to turn over any two cards. If the cards match, the cards are removed. If not, the cards are turned face down again.
At the start, players turn over cards at random, trying to remember what cards they see and remembering where the matches are.
This is a good project for beginners as it is fairly simple, not requiring moving objects or physics simulations, while also being very extensible. Ambitious students can add many features such as scoring, animations, automatic card flipping, or more combinations of shapes and colour combinations for the cards.
To set this project as a homework assignment, you should give the student enough code to get started that they can immediately interact with the program and see how the basics of PyGame works, but not so much starter code that they are overwhelmed and confused by how it works. Giving them around 50 lines of code to start is usually a good ballpark for something that is useful but can also be understood easily.
We've created an example starter Repl that you can fork (and adapt if necessary before giving to your students) here. It demonstrates how to lay out the cards using random colours, and how to detect which card a user clicked on.
Here's an example of how you could introduce your students to this project. These instructions are also in the accompanying repl in Markdown format so you can easily edit them as required and the student can easily read the formatted version.
This week your enrichment task is to build a memory game in Python using PyGame.
Your game consists of a set of cards which have different colours on the front but have the same backs. The player has to find matching pairs of cards by turning them over.
You start with 16 cards of 8 colors: 2 cards of each color.
Start the game by laying out the 16 cards in a 4x4 grid, all front side down.
Each turn, the player may turn over two cards. If the cards match (have the same color on the front), then the cards should be removed from the game. If not, the cards should be turned upside down again and the player can try again.
The code in
main.py includes a basic PyGame example which lays out the cards face up. When the player clicks on the cards, they are turned face down.
The starter code shows you how to set up a basic GUI using PyGame and draw objects. It also hints at how you can interact with the user: it shows how you find out where the user clicked and modify the screen accordingly. However, it is missing most of the features of the game, which you still need to build.
Specifically, you need to: lay the cards out face down at the start of the game, let the player turn over two cards each turn, check whether the two cards match, remove matching pairs from the game, and turn non-matching cards face down again.
Feel free to implement any extra features that you think would be fun! Some ideas are keeping score or counting turns, animating the card flips, flipping non-matching cards back automatically after a short delay, or supporting more cards and colour combinations.
Good luck!
Apart from checking that the basic and optional features that your students have implemented work as expected, there are some other things you can look out for to assign a grade and give feedback to your students.
It is likely that your students will be able to find similar projects online, so as always plagiarism is likely to be a problem. A good example of this game with all of the features implemented is at InventWithPython's memory game which you can run on Repl.it from this repl.
It should be easy to spot if your students borrowed too heavily from that example without understanding what they were doing, as it is significantly different from the starter code provided here. If you are concerned about plagiarism, copying a few 10-40 character snippets of your students' code into Google in double quotation marks usually brings up their source fairly quickly.
For example, the Google search shown below shows many sources that use exactly the same code:
Games like this one are a great example to introduce your students to the idea of DRY (don't repeat yourself) in software engineering. Because the entire screen has to be completely redrawn even when only one card is changed, it's likely that your students will be tempted to copy-paste the same code into different places (for example, to set up the board, and to update it after a click).
If your students have already learned to use small functions or Object Oriented Programming, make sure that they are following these good practices. Especially with a solution that relies on classes like
Board,
Card,
Player,
Game, etc, it can be quite tricky to decide what functionality belongs where. Should
Card objects keep track of their own coordinates or is that the job of the
Board?
While there are many different ways to implement this solution, and in the end the 'best' solution might come down to personal taste, try to check that your students have thought about these issues (and hopefully left comments explaining their choices).
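To make the discussion concrete, here is one possible shape for the pair-resolution step. This sketch is not taken from the starter repl; the class and attribute names are purely illustrative.

from dataclasses import dataclass

@dataclass
class Card:
    color: str
    face_up: bool = False
    removed: bool = False

def resolve_pair(first, second):
    # Remove the cards if they match; otherwise turn both face down again.
    if first.color == second.color:
        first.removed = second.removed = True
        return True
    first.face_up = second.face_up = False
    return False

A student who pushes this kind of decision into a small, named function, rather than repeating the comparison inline wherever it is needed, is demonstrating exactly the habits the DRY discussion above is trying to encourage.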
Games are also a good way to introduce concepts from User Experience (UX) to your students. Did they provide instructions to the player on how to use the game, either in comments or in the game interface itself? Does the game start up and exit cleanly? Is it easy to configure any options that they exposed like number of cards?
We went through an enrichment example using PyGame in this guide. PyGame is a great library for beginners as it gives them enough features to easily build advanced features (e.g. an easy way to draw a UI and track user events), but it is still low level enough for the student to have to understand fundamental concepts like the game loop and drawing objects based on pixel coordinates.
For more PyGame inspiration, take a look at our basic Juggling Game which includes an example of how to animate objects too.
As we mentioned, data science and web application development are also good topics to set for enrichment homework. Take a look at our collection of Python projects for beginners for more ideas. | https://docs.replit.com/Teams/EnrichmentHomework | 2021-06-12T18:00:40 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.replit.com |
The following tables provide the operating and mechanical specifications for the SPARK MAX motor controller.
DO NOT exceed the maximum electrical specifications. Doing so will cause permanent damage to the SPARK MAX and will void the warranty.
Continuous operation at 60A may produce high temperatures on the heat sink. Caution should be taken when handling the SPARK MAX if it has been running at higher current level for an extended period of time.
If using a battery to power SPARK MAX, make sure the fully charged voltage is below 24V allowing for sustained operation. Some battery chemistries and configurations, including 6S LiPo packs, have a charge voltage above the maximum operating voltage for SPARK MAX. | https://docs.revrobotics.com/sparkmax/specifications | 2021-06-12T17:49:57 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.revrobotics.com |
Remote Agent overall architecture
This document will cover the Remote Agent architecture by following the sequence of steps needed to start the agent, connect a client and debug a target.
Remote Agent startup
Everything starts with the RemoteAgent implementation, which handles the command line argument (--remote-debugging-port) to eventually start a server listening on TCP port 9222 (or the port specified on the command line). The browser target WebSocket URL will be printed to stderr.
To do that, this component glues together three main high-level components:

server/HTTPD: This is a copy of httpd.js from the /netwerk/ folder. It is a JS implementation of an HTTP server, used to implement the various HTTP endpoints of CDP. There are a few static URLs implemented by JSONHandler and one dynamic URL per target.

cdp/JSONHandler: This implements the following three static HTTP endpoints:

/json/version: Returns information about the runtime as well as the URL of the browser target WebSocket.
/json/list: Returns a list of all debuggable targets with, for each, their dynamic WebSocket URL. For now it only reports tabs, but will report workers and add-ons as soon as we support them. The main browser target is the only target not listed here.
/json/protocol: Returns a big dictionary describing the supported protocol. This is currently hard-coded and returns the full CDP protocol schema, including APIs we don't support. We have a future intention to fix this and report only what Firefox implements.

You can connect to these WebSocket URLs in order to debug things (a short example of querying the static endpoints follows this list).

cdp/targets/TargetList: This component is responsible for maintaining the list of all debuggable targets. For now it can be either:

The main browser target: A special target which allows inspecting the browser, but not any particular tab. This is implemented by cdp/targets/MainProcessTarget and is instantiated on startup.

Tab targets: Each opened tab will have a related cdp/targets/TabTarget instantiated when it opens, or on server startup for already opened tabs. Each target aims at focusing on one particular context. This context is typically running in one particular environment, which can be a particular process or thread. In the future, we will most likely support targets for workers and add-ons. All targets inherit from cdp/targets/Target.
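For illustration, here is a minimal way to query those static endpoints with Python's standard library. It assumes Firefox was started with --remote-debugging-port=9222 and that the agent is reachable on localhost; the webSocketDebuggerUrl, id and url field names follow the usual CDP JSON conventions rather than being spelled out on this page.

import json
from urllib.request import urlopen

BASE = "http://localhost:9222"

# Runtime information, including the browser target's WebSocket URL.
version = json.load(urlopen(BASE + "/json/version"))
print(version.get("webSocketDebuggerUrl"))

# One entry per debuggable target (currently tabs), each with its own WebSocket URL.
for target in json.load(urlopen(BASE + "/json/list")):
    print(target.get("id"), target.get("url"), target.get("webSocketDebuggerUrl"))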
Connecting to WebSocket endpoints
Each target's WebSocket URL will be registered as an HTTP endpoint via server/HTTPD:registerPathHandler. (This registration is done from RemoteAgentClass:listen.)
Once an HTTP request happens, server/HTTPD will call the handle method on the object passed to registerPathHandler. For static endpoints registered by JSONHandler, this will call JSONHandler:handle and return a JSON string as the HTTP body.
For a target's endpoint, it is slightly more complicated, as it requires a special handshake to morph the HTTP connection into a WebSocket one. The WebSocket is then going to be long-lived and used to inspect the target over time.
When a request is made to a target URL, cdp/targets/Target:handle is called and will:

delegate the complex HTTP-to-WebSocket handshake operation to server/WebSocketHandshake:upgrade; in return we retrieve a WebSocket object,

hand over this WebSocket to server/WebSocketTransport and get a transport object in return. The transport implements a basic JSON stream over WebSocket. With that, you can send and receive JSON objects over a WebSocket connection,

hand over the transport to a freshly instantiated Connection. The Connection has two goals:

Interpret incoming CDP packets by reading the JSON object attributes (id, method, params and sessionId). This is done in Connection:onPacket.

Format outgoing CDP packets by writing the right JSON object for command responses (id, result and sessionId) and events (method, params and sessionId).

It also redirects CDP packets from/to the right session. A connection may have more than one session attached to it.

instantiate the default session. The session is specific to each target kind and all of them inherit from cdp/session/Session. For example, tab targets use cdp/session/TabSession and the main browser target uses cdp/session/MainProcessSession. Which session class is used is defined by the Target subclass's constructor, which passes a session class reference to cdp/targets/Target:constructor. A session is mostly responsible for accommodating the eventual cross-process/cross-thread aspects of the target. The code we are currently describing (cdp/targets/Target:handle) is running in the parent process. The session class receives CDP commands from the connection and first tries to execute the Domain commands in the parent process. Then, if the target actually runs in some other context, the session tries to forward this command to that other context, which can be a thread or a process. Typically, cdp/sessions/TabSession forwards the CDP command to the content process where the tab is running. It also redirects the command response, as well as Domain events from that process, back to the parent process in order to forward them to the connection. Sessions use the DomainCache class as a helper to manage a list of Domain implementations in a given context.
Debugging additional Targets
From a given connection you can know about the other potential targets. You typically do that via Target.setDiscoverTargets(), which will emit Target.targetCreated events providing a target ID.

You may create a new session for the new target by handing the ID to Target.attachToTarget(), which will return a session ID. "Target" here is a reference to the CDP Domain implemented in cdp/domains/parent/Target.jsm. That is different from the cdp/targets/Target class, which is an implementation detail of the Remote Agent.

Then, there are two ways to communicate with the other targets:

Use Target.sendMessageToTarget() and Target.receivedMessageFromTarget. You manually send commands via the Target.sendMessageToTarget() command and receive command responses as well as events via Target.receivedMessageFromTarget. In both cases, a session ID attribute is passed in the command or event arguments in order to select which additional target you are communicating with.

Use Target.attachToTarget({ flatten: true }) and include sessionId in CDP packets. This requires a special client, which will use the sessionId returned by Target.attachToTarget() in order to spawn a distinct client instance. This client will re-use the same WebSocket connection, but every single CDP packet will contain an additional sessionId attribute. This helps distinguish packets which relate to the original target from those for the multiple additional targets you may attach to.

In both cases, Target.attachToTarget() is special as it will spawn a cdp/session/TabSession for the tab you are attaching to. This is the code path creating non-default sessions. The default session is related to the target you originally connected to, so that you don't need any ID for this one. When you want to debug more than one target over a single connection you need additional sessions, which will have a unique ID. Target.attachToTarget will compute this ID and instantiate a new session bound to the given target. This additional session will be managed by the Connection class, which will then redirect CDP packets to the right session when you are using flattened sessions.
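For illustration, the flattened-session flow boils down to exchanging JSON packets like the following over the browser target WebSocket. The id, targetId and sessionId values are invented, and Log.enable is just an arbitrary example command sent within the attached session:

{"id": 1, "method": "Target.setDiscoverTargets", "params": {"discover": true}}
{"method": "Target.targetCreated", "params": {"targetInfo": {"targetId": "abc123", "type": "page"}}}
{"id": 2, "method": "Target.attachToTarget", "params": {"targetId": "abc123", "flatten": true}}
{"id": 2, "result": {"sessionId": "def456"}}
{"id": 3, "sessionId": "def456", "method": "Log.enable", "params": {}}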
Cross Process / Layers
Because targets may run in different contexts, the Remote Agent code runs in different processes. The main and startup code of the Remote Agent runs in the parent process. The handling of the command line as well as all the HTTP and WebSocket work is done in the parent process. The browser target is also implemented entirely in the parent process. But when it comes to a tab target, as the tab runs in a content process, we have to run code there as well.

Let's start from the cdp/sessions/TabSession class, which has already been described. Here we receive JSON packets from the WebSocket connection, and we are in the parent process. In this class, we route the messages to the parent-process domains first. If there is no implementation of the domain or the particular method, we forward the command to a cdp/session/ContentProcessSession, which runs in the tab's content process. These two Session classes interact with each other in order to forward back the returned value of the method we just called, as well as to pipe back any event sent by a Domain implemented in either of the two processes.
Organizational chart of all the classes
┌─────────────────────────────────────────────────┐ │ │ 1 ▼ │ ┌───────────────┐ 1 ┌───────────────┐ 1..n┌───────────────┐ │ RemoteAgent │──────▶│ HttpServer │◀───────▶│ JsonHandler │ └───────────────┘ └───────────────┘ 1 └───────────────┘ │ │ │ 1 ┌────────────────┐ 1 └───────────────▶│ TargetList │◀─┐ └────────────────┘ │ │ │ ▼ 1..n │ ┌────────────┐ │ ┌─────────────────│ Target [1]│ │ │ └────────────┘ │ │ ▲ 1 │ ▼ 1..n │ │ ┌────────────┐ 1..n┌────────────┐ │ │ Connection │◀─────────▶│ Session [2]│──────┘ └────────────┘ 1 └────────────┘ │ 1 ▲ │ │ ▼ 1 ▼ 1 ┌────────────────────┐ ┌──────────────┐ 1..n┌────────────┐ │ WebSocketTransport │ │ DomainCache | │──────────▶│ Domain [3]│ └────────────────────┘ └──────────────┘ └────────────┘
[1] Target is inherited by TabTarget and MainProcessTarget.
[2] Session is inherited by TabSession and MainProcessSession.
[3] Domain is inherited by Log, Page, Browser, Target, etc., i.e. all domain implementations, from both the cdp/domains/parent and cdp/domains/content folders.
WE Feature Films
Screenings
Available beginning June 26 at 12:00 p.m. EDT with 48 hours to start watching after purchase. Once you begin, you’ll have 48 hours to finish watching.
Please refer to your order confirmation email for complete viewing instructions.
Details
Country: France
Year: 2020
Director: Alice Diop
Screenwriter: Alice Diop
Producer: Sophie Salbot
Directors of Photography: Sarah Blum, Sylvain Verdet, Clément Alline
Editor: Amrita David
Featuring: Ismael Soumaïla Sissoko, Bamba Sibi, N’deye Sighane Diop, Pierre Bergounioux, Marcel Balnoas, Ethan Balnoas
Running Time (minutes): 115
Languges: French, Subtitled | https://docs.afi.com/2021/movies/we/ | 2021-06-12T16:57:43 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.afi.com |
public class CustomFieldMapperValidatorImpl extends Object implements CustomFieldMapperValidator
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public CustomFieldMapperValidatorImpl(CustomFieldManager customFieldManager, ConstantsManager constantsManager, ProjectManager projectManager)
public MessageSet validateMappings(I18nHelper i18nHelper, BackupProject backupProject, IssueTypeMapper issueTypeMapper, CustomFieldMapper customFieldMapper)
Note that validation of the actual values in the Custom Fields is done separately by the Custom Field itself.
Specified by: validateMappings in interface CustomFieldMapperValidator
Parameters:
i18nHelper - helper bean that allows us to get i18n translations
backupProject - is the backup project the data is mapped from
issueTypeMapper - is the populated issueTypeMapper
customFieldMapper - is the populated customFieldMapper
public boolean customFieldIsValidForRequiredContexts(ExternalCustomFieldConfiguration externalCustomFieldConfiguration, CustomField newCustomField, String oldCustomFieldId, CustomFieldMapper customFieldMapper, IssueTypeMapper issueTypeMapper, String projectKey)
Specified by: customFieldIsValidForRequiredContexts in interface CustomFieldMapperValidator
Parameters:
externalCustomFieldConfiguration - contains the configuration of the custom field as defined in the backup XML.
newCustomField - is a custom field in the current JIRA instance whose context is being checked.
oldCustomFieldId - the old custom field id from the backup XML which will indicate which issue types the field was used in.
customFieldMapper - is the populated custom field mapper.
issueTypeMapper - is the populated issue type mapper.
projectKey - is the project we are importing into.
public boolean customFieldTypeIsImportable(String customFieldTypeKey)
Specified by: customFieldTypeIsImportable in interface CustomFieldMapperValidator
Parameters:
customFieldTypeKey - Key of the CustomField Type that we are checking.
Caching Android projects
While it is true that Android uses the Java toolchain as its foundation, there are nevertheless some significant differences from pure Java projects; these differences impact task cacheability.
This is even more true for Android projects that include Kotlin source code (and therefore use the
kotlin-android plugin).
Disambiguation
This guide is about Gradle’s build cache, but you may have also heard about the Android build cache. These are different things. The Android cache is internal to certain tasks in the Android plugin, and will eventually be removed in favor of native Gradle support.
Why use the build cache?
The build cache can significantly improve build performance for Android projects, in many cases by 30-40%. Many of the compilation and assembly tasks provided by the Android Gradle Plugin are cacheable, and more are made so with each new iteration.
Faster CI builds
CI builds benefit particularly from the build cache.
A typical CI build starts with a
clean, which means that pre-existing build outputs are deleted and none of the tasks that make up the build will be
UP-TO-DATE.
However, it is likely that many of those tasks will have been run with exactly the same inputs in a prior CI build, populating the build cache; the outputs from those prior runs can safely be reused, resulting in dramatic build performance improvements.
Reusing CI builds for local development
When you sign into work at the start of your day, it’s not unusual for your first task to be pulling the main branch and then running a build (Android Studio will probably do the latter, whether you ask it to or not). Assuming all merges to main are built on CI (a best practice!), you can expect this first local build of the day to enjoy a larger-than-typical benefit with Gradle’s remote cache. CI already built this commit — why should you re-do that work?
Switching branches
During local development, it is not uncommon to switch branches several times per day.
This defeats incremental build (i.e.,
UP-TO-DATE checks), but this issue is mitigated via use of the local build cache.
You might run a build on Branch A, which will populate the local cache.
You then switch to Branch B to conduct a code review, help a colleague, or address feedback on an open PR.
You then switch back to Branch A to continue your original work.
When you next build, all of the outputs previously built while working on Branch A can be reused from the cache, saving potentially a lot of time.
The Android Gradle Plugin and the Gradle Build Tool
The first thing you should always do when working to optimize your build is ensure you’re on the latest stable, supported versions of the Android Gradle Plugin and the Gradle Build Tool. At the time of writing, they are 3.3.0 and 5.0, respectively. Each new version of these tools includes many performance improvements, not least of which is to the build cache.
Java and Kotlin compilation
The discussion above in “Caching Java projects” is equally relevant here, with the caveat that, for projects that include Kotlin source code, the Kotlin compiler does not currently support compile avoidance in the way that the Java compiler does.
Annotation processors and Kotlin
The advice above for pure Java projects also applies to Android projects. However, if you are using annotation processors (such as Dagger2 or Butterknife) in conjunction with Kotlin and the kotlin-kapt plugin, you should know that before Kotlin 1.3.30 kapt was not cached by default.
You can opt into it (which is recommended) by adding the following to build scripts:
plugins.withId("kotlin-kapt") { kapt.useBuildCache = true }
pluginManager.withPlugin("kotlin-kapt") { configure<KaptExtension> { useBuildCache = true } }
Unit test execution
Unlike with unit tests in a pure Java project, the equivalent test task in an Android project (
AndroidUnitTest) is not cacheable.
The Google Team is working to make these tests cacheable.
Please see this issue.
Instrumented test execution (i.e., Espresso tests)
Android instrumented tests (
DeviceProviderInstrumentTestTask), often referred to as “Espresso” tests, are also not cacheable.
The Google Android team is also working to make such tests cacheable.
Please see this issue.
Lint
Users of Android’s
Lint task are well aware of the heavy performance penalty they pay for using it, but also know that it is indispensable for finding common issues in Android projects.
Currently, this task is not cacheable.
This task is planned to be cacheable with the release of Android Gradle Plugin 3.5.
This is another reason to always use the latest version of the Android plugin!
The Fabric Plugin and Crashlytics
The Fabric plugin, which is used to integrate the Crashlytics crash-reporting tool (among others), is very popular, yet imposes some hefty performance penalties during the build process. This is due to the need for each version of your app to have a unique identifier so that it can be identified in the Crashlytics dashboard. In practice, the default behavior of Crashlytics is to treat “each version” as synonymous with “each build”. This defeats incremental build, because each build will be unique. It also breaks the cacheability of certain tasks in the build, and for the same reason. This can be fixed by simply disabling Crashlytics in “debug” builds. You may find instructions for that in the Crashlytics documentation.
Kotlin DSL
The fix described in the referenced documentation does not work directly if you are using the Kotlin DSL; this is due to incompatibilities between that Kotlin DSL and the Fabric plugin. There is a simple workaround for this, based on this advice from the Kotlin DSL primer.
Create a file,
fabric.gradle, in the module where you apply the
io.fabric plugin. This file (known as a script plugin), should have the following contents:
plugins.withId("com.android.application") { // or "com.android.library" android.buildTypes.debug.ext.enableCrashlytics = false }
And then, in the module’s
build.gradle.kts file, apply this script plugin:
apply(from = "fabric.gradle") | https://docs.gradle.org/current/userguide/caching_android_projects.html | 2021-06-12T18:41:09 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.gradle.org |
Want to Shape Great Security Tools ?
Senior Program Manager (702519 -External)
Job Category: IT
Location: United States, WA, Redmond
Job ID: 702519
The Technical Program Manager (TPM) will work with the stakeholders and developers to develop and maintain a portfolio roadmap and design, implement and test innovative software to reduce the cost and improve the effectiveness of software security assessment and protection. Working with production teams in IT and product groups such as .NET and Visual Studio, the PM will identify opportunities and build solutions.
This role requires a self-directed individual who is passionate about software security and creating tools that can change the industry.
A successful individual will have a track record of innovative solutions and delivery of projects in a fast paced environment. | https://docs.microsoft.com/en-us/archive/blogs/securitytools/want-to-shape-great-security-tools | 2021-06-12T18:53:55 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.microsoft.com |
Now that we’ve registered your Rover online, we need to download a file onto your thumbdrive to tell the Rover about its identity and how to get on your WiFi.
Click Download Credentials.
Enter your local Wi-Fi credential so it will be loaded onto your rover. Then click Download.
The credentials will download to a rovercode_*.env file. Copy this file to your Rovercode thumb drive.
Webhook Alerts
This section provides detailed information on webhook support provided by Zebrium so you can build your own custom integration.
Zebrium provides four outgoing webhooks:
Alert
- Alert webhook payloads are sent when any alert rule is triggered.
- Alert rules are defined using the Views/Alerts menu item in the Logs tab of the Zebrium UI.
- Frequency of Alert webhooks depends on the Alert configuration. Alerts can be set to trigger every 5 min, 15 min, 30 min, 1 hour, or 1 day when their conditions are met.
Anomaly
- Anomaly webhook payloads are sent when data is ingested and our machine learning detects anomalous events that are Not part of an Incident that was created.
- Frequency of Anomaly webhooks depends on data ingest and detection of anomalies.
Incident
- Incident webhook payloads are sent when data is ingested and our machine learning detects an incident comprised of anomalous events.
- Frequency of Incident webhook depends on data ingest and detection of anomalies.
Metric Alerts (future)
- Metric Alerts webhook payloads are sent when custom metric alert rules are triggered.
- Frequency of Metric Alerts webhook depends on data ingest and detection of metrics that meet custom metric alert rules.
Configuring Webhooks in the Zebrium UI
- From the User menu area, select the Account Settings gear icon
- Click the Outbound Alerts menu item.
- Click the Create Outbound Alert button.
- Select Webhook as the Outbound Alert Type.
- Enter a Name that you will refer to this integration as when linking it to Alert Rules or Incident Types.
- Select which payload you wish to send to the endpoint using the Include Payloads multi-select drop-down. You can choose: Alert, Anomaly, Incident, Metric Alert or any combination of these.
- An endpoint can be the receiver of any combination of payloads.
- Payloads are always sent separately.
- In the case where you have one endpoint to handle all payload formats, there is a common identifier in every payload called event_type that your backend can use to tell which payload has been received and then call the appropriate payload handler (a minimal dispatcher sketch appears at the end of this section).
- You can also configure a separate endpoint for each payload.
- You can have as many webhooks defined as you like.
- Set the Choose Deployment option to the desired deployment names or All.
- Note: This option will not be displayed if you have only one deployment.
- Enter the URL of your endpoint
- Select the Authentication method your backend requires
- NONE - Requires no additional configuration
- BASIC - Enter your Username and Password
- TOKEN - Enter the Token
- Prefix - When specifying either Basic or Token based authentication, you can enter any prefix string that your backend expects to see in the Authorization header. For Basic authentication, the typical prefix string is Basic (our default) and for token-based authentication, the typical prefix string is either Bearer (our default) or Token.
- Here is an example of an Authorization header for Basic Authentication. Note the prefix string (before the encoded username/password) is Basic
Authorization: Basic QWxhZGRpbjpPcGVuU2VzYW1l
- Click Create to save your outgoing webhook definition
Webhook Payload Format
- See links below for detailed description of each webhook payload. | https://docs.zebrium.com/docs/outbound_alerts/webhook_alerts/ | 2021-06-12T17:04:24 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.zebrium.com |
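To make the event_type dispatch and the Basic Authorization check described above concrete, here is a minimal receiving-endpoint sketch in Python using Flask. The route path, handler names, event_type strings, and credentials are illustrative assumptions — only the event_type field and the Basic authentication scheme come from the description above; consult the payload pages for the authoritative field names.

# Minimal Zebrium webhook receiver sketch (illustrative assumptions marked below).
import base64
from flask import Flask, request, abort

app = Flask(__name__)

# Assumed credentials configured for BASIC authentication on the Zebrium side.
EXPECTED_AUTH = "Basic " + base64.b64encode(b"username:password").decode()

def handle_alert(payload):
    print("alert rule triggered:", payload)

def handle_incident(payload):
    print("incident detected:", payload)

# The keys below are assumptions; match them to the actual event_type values
# documented for each payload.
HANDLERS = {"alert": handle_alert, "incident": handle_incident}

@app.route("/zebrium", methods=["POST"])
def zebrium_webhook():
    # Verify the Authorization header sent by Zebrium (BASIC with the "Basic" prefix).
    if request.headers.get("Authorization") != EXPECTED_AUTH:
        abort(401)
    payload = request.get_json(force=True)
    handler = HANDLERS.get(str(payload.get("event_type", "")).lower())
    if handler:
        handler(payload)
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)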
Using the EI Analytics Portal¶
Let's use EI Analytics to view and monitor statistics and message tracing.
You can monitor the following statistics and more through the EI Analytics portal:
- Request Count
- Overall TPS
- Overall Message Count
- Top Proxy Services by Request Count
- Top APIs by Request Count
- Top Endpoints by Request Count
- Top Inbound Endpoints by Request Count
- Top Sequences by Request Count
Tip
Monitoring the usage of the integration runtime using statistical information is very important for understanding the overall health of a system that runs in production. Statistical data helps to do proper capacity planning, to keep the runtimes in a healthy state, and for debugging and troubleshooting problems. When it comes to troubleshooting, the ability to trace messages that pass through the mediation flows of the Micro Integrator is very useful.
Before you begin¶
- Set up the EI Analytics deployment.
Note the following server directories in your deployment.
Starting the servers¶
Let's start the servers in the given order.
Starting the Analytics Server¶
Note
Be sure to start the Analytics server before starting the Micro Integrator.
- Open a terminal and navigate to the <EI_ANALYTICS_HOME>/bin directory.
Start the Analytics server by executing the following command:
sh server.sh
server.bat
Starting the Micro Integrator¶
Once you have started the Analytics Server, you can start the Micro Integrator.
Starting the Analytics Portal¶
- Open a terminal and navigate to the <EI_ANALYTICS_HOME>/bin directory.
Start the Analytics Portal's runtime by executing the following command:
sh portal.sh
portal.bat
In a new browser window or tab, open the Analytics portal using the following URL:
Use admin for both the username and password.
Publishing statistics to the Portal¶
Let's test this solution by running the service chaining tutorial. When the artifacts deployed in the Micro Integrator are invoked, the statistics will be available in the portal.
Follow the steps given below.
Step 1: Deploy integration artifacts
If you have already started the Micro Integrator server, let's deploy the artifacts. Let's use the integration artifacts from the service chaining tutorial.
Step 2: Start the backend
Let's start the hospital service that serves as the backend to the service chaining use case:
Step 3: Sending messages
Let's send 8 requests to the Micro Integrator to invoke the integration artifacts:
Tip
For the purpose of demonstrating how successful messages and message failures are illustrated in the portal, send some of the requests while the back-end service is shut down; these are recorded as faulty messages. Save the request payload from the service chaining tutorial to a request.json file and execute the following command for each request:
curl -v -X POST --data @request.json --header "Content-Type:application/json"
If the messages are sent successfully, you will receive a JSON response for each request. Then, shut down the back-end service and send two more requests.
Viewing the Analytics Portal¶
Once you have signed in to the analytics portal server, click the Enterprise Integrator Analytics icon shown below to open the portal.
Statistics overview¶
View the statistics overview for all the integration artifacts that have published statistics:
Transactions per second¶
The number of transactions handled by the Micro Integrator per second is mapped on a graph as follows.
Overall message count¶
The success rate and the failure rate of the messages received by the Micro Integrator during the last hour are mapped in a graph as follows.
Top APIs by request¶
The HealthcareAPI REST API is displayed under TOP APIS BY REQUEST COUNT as follows.
Endpoints by request¶
The three endpoints used for the message mediation are displayed under Top Endpoints by Request Count as shown below.
Per API requests¶
In the Top APIS BY Request COUNT gadget, click HealthcareAPI to view statistics specific to this API.
Per Endpoint requests¶
Message tracing¶
When you go to the Analytics portal the message details will be logged as follows:
Screenshots referenced in this tutorial: dashboard-login.png (Analytics portal login); 119132335.png (Opening the Analytics dashboard for WSO2 EI); 119132316.png (ESB total request count); 119132326.png (ESB overall TPS); 119132325.png (ESB overall message count); 119132324.png (Top APIs by request count); 119132318.png (Top endpoints by request count); 119132317.png (Dashboard navigation menu); message-tracing.png (Message tracing per API).
Apache Impala - Interactive SQL
The Apache Impala project provides fast, interactive SQL queries directly on Apache Hadoop data (the term "interactive" is applied to these kinds of fast queries with human-scale response times), whether that data is stored in HDFS, in Apache HBase, or in the Amazon Simple Storage Service (S3).
Continue reading:
- Concepts and Architecture
- Deployment Planning
- Tutorials
- Administration
- SQL Reference
- Resource Management
- Performance Tuning
- Scalability Considerations
- Partitioning
- File Formats
- Using Impala to Query Kudu Tables
- HBase Tables
- S3 Tables
- ADLS Tables
- Logging
- Impala Client Access
- | https://docs.cloudera.com/documentation/enterprise/latest/topics/impala.html | 2021-06-12T17:51:34 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.cloudera.com |
Manage your MPN account locations and add (delete) a location
Appropriate roles: Global admin | Account admin
The location MPN ID identifies each specific location of your company. You use the location MPN ID to enroll in incentive programs, to transact Cloud Solution Provider (CSP) business, and other business transactions. The global MPN ID is used for non-transactional activities such as support requests.
The following scenario is typical:
Contoso has its Partner global account (PGA) in the UK. The PGA is their registered legal business, and its global MPN ID is used for managing all non-transactional business. Contoso also has Partner location accounts (PLA) equivalent to subsidiaries or divisions in another location in the UK, France, and the USA. In the MPN Account structure, these PLAs are represented as unique location MPN IDs. The PLAs are used for transactional business such as CSP or incentives programs. Payouts are tied to specific locations.
Note
There is a 1-1 relationship between a CSP tenant and an MPN location ID.
Prerequisites in order to add a new account for a CSP business
To add a new CSP business account, start by ensuring that you have fulfilled the prerequisites.
You must have a location MPN ID in the country where you want to do CSP business. To create a new MPN location, read “Add an MPN location” below.
To create a new CSP Indirect Reseller enrollment, read Work with Indirect providers
Note
Remember to sign in with the new credentials for the new CSP account. Don't use your existing credentials as Partner Center will recognize you as already having an account.
Accept the Microsoft Partner Agreement and activate the account.
If you want to enroll as a Direct Bill partner, read Requirements for Direct Bill partners
View and update your MPN locations
Sign into the Partner Center dashboard with your MPN account credentials. (Your MPN credentials may be different from your CSP credentials)
From the Settings icon, select Account settings, Organization profile, Legal.
On the Partner tab, verify that there isn't a banner error message asking you to fix migrated locations from PMC. If your locations were not set up correctly in PMC, and have not transitioned yet to PC, you need to update those locations.
- On the Review PMC locations screen, select Update. Update the following fields:
Name field: Make sure that the name of the company location is correct. If a duplicate error is displayed, try changing from, for example, Contoso to Contoso, Inc.
Legal Entity field: Make sure that you have chosen the legal entity the location is tied to
Address line 1 & 2 fields: Make sure that the address is correct
City & State/Province fields: Make sure the combination between the city and the state/province is correct. There are countries where the dropdown menu for choosing the State/Province will apply, and in other countries that field will need to manually be inserted.
ZIP/ Postal code field: Make sure the Zip Code field is matching your indicated Country, Region, City, or Address.
Primary contact information fields: Make sure the first and last name fields are filled and that the e-mail address indicated is a work e-mail address and not a personal one (for example, @outlook.com, @live.com, etc.)
Phone number field: Make sure that the Phone number does NOT include special characters, spaces, or country code. The value entered in the Phone Number field will always contain a maximum of 10 characters.
If there isn't an error message, then from Settings, select Account Settings, Organization profile, Identifiers.
Find the MPN ID with Type "Location" that matches the country of this CSP account and use it to complete the association.
If you can’t find the location MPN ID that matches the CSP account you want to use, you can add a new location, which will create a new MPN ID. See Add an MPN location below.
Add an MPN location
Sign in using the MPN account in Partner Center. (Your MPN credentials may be different from your CSP credentials.) The MPN account should have Global Admin or Account Admin privileges.
From the Settings icon, select the Account settings and then select Organization profile.
Select Legal and then, on the Partner tab, select Business locations, and select Add a location.
Provide the required details including business name, address, and contact for the location that you want to add to your company.
Select Add location. This will create a new MPN ID for the new location that you can use for CSP transactions and incentives.
Note
Once a location is added in Partner Center, you cannot remove it. You will see MPN in the left menu of Partner Center if you have used the correct MPN ID to sign in.
Add the registration number ID
If you are an Indirect provider, Direct bill partner, or Indirect reseller and you are doing business with new or existing customers in the following countries, you need to provide registration ID numbers for your business. If the country you are doing business in is not listed below, the registration ID is optional.
- Armenia
- Azerbaijan
- Belarus
- Brazil
- Hungary
- India
- Iraq
- Kazakhstan
- Kyrgyzstan
- Moldova
- Myanmar
- Poland
- Russia
- Saudi Arabia
- South Africa
- South Sudan
- Tajikistan
- Thailand
- Turkey
- Ukraine
- United Arab Emirates
- Uzbekistan
- Venezuela
- Vietnam
For more information, read Registration ID number information
Delete a location
To delete a location from your account, you will need to contact Partner Support. Make sure that you understand the impact this action has. Deleted locations cannot be retrieved and anything tied to that specific MPN ID will no longer be recognized or be active for your company.
Change country of Partner global account
Sign in using the MPN account in Partner Center. (Your MPN credentials may be different from your CSP credentials.) The MPN account should have Global Admin or Account Admin privileges.
On the Partner tab, go to Business locations and check the list of locations to ensure that the location you want as your legal entity is listed.
To add a location, click Add a location, and, in the fly out, provide the required details including business name, address, and primary contact for the location that you want to add to your company.
Select Change your country next to the Country/region drop-down list and follow the steps.
Select Save.
MPN global account country will be changed to the new legal country.
Next steps | https://docs.microsoft.com/en-us/partner-center/manage-locations | 2021-06-12T16:48:30 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['images/locations/locations1.png', 'Structure of MPN locations'],
dtype=object)
array(['images/locations/location-two.png',
'Screencap shows how to update location.'], dtype=object)
array(['images/legal-biz.png', 'Add a new legal business'], dtype=object)
array(['images/lbp.png', 'Legal business profile data fly out'],
dtype=object) ] | docs.microsoft.com |
CBaseWindow::SetPalette
The SetPalette method installs a palette for the window.
Syntax
virtual HRESULT SetPalette(HPALETTE hPalette);
HRESULT SetPalette(void);
Parameters
hPalette
Handle to the new palette. Cannot be NULL.
Return Value
Returns one of the HRESULT values shown in the following table.
Remarks
The hPalette parameter specifies a new palette. If the method is called with no parameters, the palette given by the CBaseWindow::m_hPalette member variable is selected. The caller must ensure the validity of m_hPalette.
If the value of the CBaseWindow::m_bNoRealize member variable is FALSE (the default), this method selects the palette and realizes it. Otherwise, it selects the palette but does not realize it. The object does not delete any previous palette that it was using. The caller is responsible for deleting palettes.
Any thread can safely call this method, not just the thread that owns the window. The window sends a private message to itself, which triggers a call to the CBaseWindow::OnPaletteChange method.
Requirements
Header: Declared in Winutil.h; include Streams.h.
Library: Use Strmbase.lib (retail builds) or Strmbasd.lib (debug builds).
Crate i3nator
i3nator
i3nator is Tmuxinator for the i3 window manager.
It allows you to manage what are called "projects", which are used to easily restore saved i3 layouts (see Layout saving in i3) and to extend i3's base functionality by allowing you to automatically start applications too.
For detailed introductions, see the README.
License
i3nator is licensed under either of
- Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (http://opensource.org/licenses/MIT)
at your option. | https://docs.rs/i3nator/1.2.0/i3nator/ | 2021-06-12T17:41:04 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.rs |
Installing from source¶
gr-satellites is a GNU Radio out-of-tree module, and can be installed as such, by building it from source in a system where GNU Radio is already installed. Alternatively, it is possible to install gr-satellites and GNU Radio using conda (see Installing using conda below), and this might provide an easier or quicker way of installation, especially in Linux distributions where GNU Radio is not so easy to install, or in macOS. The following can be run inside the directory containing the gr-satellites sources:
$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install
$ sudo ldconfig
After running make, you can run the tests by doing make test in the build/ directory.
Note
There are systems where the AO-73 and similar decoders fail to decode correctly. In this case, it is recommended to check if in ~/.volk/volk_config there is a line that contains volk_8u_x4_conv_k7_r2_8u avx2 avx2 and to replace both occurrences of avx2 by either spiral or generic.
Installing using conda¶
gr-satellites and GNU Radio can also be installed using conda, both in Linux and in macOS (support for installing gr-satellites on Windows through conda might be available in the future). Conda is an open-source package management system for Linux, macOS and Windows that can install packages and their dependencies in different virtual environments, independently from the rest of the packages installed in the OS. This section shows how to install miniconda, GNU Radio, and gr-satellites from scratch.
Miniconda¶
Miniconda is a minimal installer for conda, so it is the recommended way to get GNU Radio and gr-satellites quickly running in an OS that does not have conda already installed. Miniconda can be installed by downloading and running the installer for the appropriate platform from Miniconda's page. The installer can be run as a regular user. It does not need root access.
After installing Miniconda, its (base) virtual environment will be active by default. This means that (base) will be shown at the beginning of the command line prompt and software will be run from the version installed in the (base) virtual environment (when it is installed), and otherwise from the OS.
Users might prefer to run things from the conda virtual environment only upon request. To disable the activation of the (base) environment by default, we can run
$ conda config --set auto_activate_base false
When the (base) environment is not enabled by default, we can enter it by running
$ conda activate base
and exit it by running
$ conda deactivate
When the (base) environment is activated, the prompt will start with (base). The (base) environment needs to be activated in order to install applications through conda into this environment, and also to run applications that have been previously installed in this environment.
GNU Radio¶
To install GNU Radio, the (base) environment (or another conda virtual environment) needs to be activated as described above. Installing GNU Radio and all its dependencies is as simple as doing
$ conda install -c conda-forge gnuradio
Then GNU Radio may be used normally whenever the virtual environment where it was installed is activated. For instance, it is possible to run
$ gnuradio-companion
gr-satellites¶
gr-satellites needs to be installed into a virtual environment where GNU Radio has been previously installed (the (base) environment, if following the instructions here). To install gr-satellites and its dependencies, we do
$ conda install -c conda-forge -c petrush gnuradio-satellites
After installation, the gr_satellites command line tool might be run as
$ gr_satellites
(provided that the virtual environment where it was installed is activated) and blocks from gr-satellites may be used in GNU Radio companion.
It might be convenient to download the sample recordings manually.
Acknowledgments¶
Thanks to Ryan Volz for packaging GNU Radio for Conda and to Petrus Hyvönen for putting together recipes to install gr-satellites and its dependencies through Conda.
Before you start, make sure that you have:
- Supported web browser: Chrome (including Brave, Vivaldi, and other Chromium variants), Opera or Firefox
- Access to the Binance Chain web wallet (binance.org) using your web browser
- Initialized Ledger Nano S device with firmware version 1.5.5 or newer
- The Ledger Live application installed on your computer for app installation
App Installation Instructions
1) Plug in and unlock your Ledger device, open Ledger Live on your computer, then open the "Manager" panel.
2) Within the "Manager" pane, type in "Binance" in the search field.
Locate "Binance Chain", then click on "Install".
3) The Binance app will now install on your Ledger device.
4) When you see a popup message indicating "Successfully installed Binance Chain", the installation is complete.
5) Check that the "Binance Chain" app is shown on your Ledger device dashboard as in the photo below.
If it is, the installation has been successful.
Setup/Login Instructions
6) Visit the binance.org website and click on the "Unlock Wallet" link on the top right part of the page.
Go to binance.org and click on "Start Trading".
Click "Unlock wallet" on the top right navigation bar.
Choose "Ledger Device" and verify your address.
Choose one address to use for this session and click on "Confirm".
You will then be redirected to the Trading Interface.
For your security, please read the information displayed in the following popup and confirm that the address shown on your Ledger device matches the one shown on-screen.
Press the right button on your device to confirm that the address matches (You must do this to continue).
How to send Binance Chain crypto assets
Confirming a trade on a Ledger Wallet:
You can view the transaction info and confirm it on Ledger:
Once the transaction has successfully been signed and broadcasted, your Ledger device will display this screen.
| https://docs.binance.org/wallets/tutorial/ledger-nano-s-usage-guide.html | 2021-06-12T17:17:52 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['assets/ledger-nano-s-usage-guide/manage.png', 'img'], dtype=object)
Using the Blackboard Learn AMI for REST and LTI Development
This document outlines usage of the Blackboard REST and LTI Developer AMI made available via the Amazon AWS Marketplace. Please note that there is often a delay of 5-14 days before the AMI is available due to AMI and AWS processing time.
NOTE: Building Block installation is NOT supported on the AMIs.
Get the Blackboard REST and LTI Developer AMI
The easiest way to find the AMI is to search the AWS Marketplace for "Blackboard"; when launching it, choose at least a Large Tier instance type. This gives you enough storage and power to run Blackboard Learn effectively and build your cool widget.
- If you see 502 Gateway errors, you may need to increase the size of your AMI. Additionally, you may periodically see a 502 Gateway error during use; keeping in mind that EC2s based on this AMI are NOT intended for use as a production service, you may simply issue a reboot to restart the server while maintaining your AWS-provisioned public IP and DNS settings.
- Startup time: The startup time for your EC2 will vary and may take as long as 15 minutes before you may access the site via your browser. SSH access may be available in 3 minutes or less.
- On initial startup the Original UX login screen appears. Note the messaging on that page as it informs you when the license expires. You will need to subscribe to a new AMI release prior to license expiration if you wish to migrate data from the old EC2 to the new. Licenses on AMIs are not extendible.
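Since startup can take this long, it can be handy to poll the instance until the web tier responds. Here is a rough Python sketch; the hostname is your instance's public DNS name, and the polling loop is an assumption for convenience, not an official Blackboard tool.

# Poll the new EC2 instance until Blackboard Learn stops returning gateway errors.
import time
import requests

url = "https://ec2-xx-xx-xx-xx.compute.amazonaws.com/"  # replace with your instance's public DNS
for attempt in range(30):                               # up to ~15 minutes at 30-second intervals
    try:
        status = requests.get(url, verify=False, timeout=10).status_code
    except requests.RequestException:
        status = None
    if status == 200:
        print("Learn is up")
        break
    print(f"attempt {attempt + 1}: not ready yet (status={status})")
    time.sleep(30)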
Support for Let’s Encrypt SSL Certificates
Starting with version 3300.6.0 the Learn for REST and LTI Developers AMI supports free Let’s Encrypt SSL Certificates. At this time we do not support alternative SSL certificate processes.
NOTE: Per the Let’s Encrypt FAQ certificates are valid for only 90 days. In order to update your Let’s Encrypt certificate you must perform a server reboot per below
- SSH to your EC2 instance and from the command line reboot the instance using the command:
$ sudo reboot now
On reboot the server will generate your Let's Encrypt SSL certificate; on future reboots or restarts the server will check whether the certificate requires renewal. If renewal is required, reboot the server to renew the Let's Encrypt certificate. If your certificate is past expiration because you ignored the renewal notices, sudo mv the /etc/letsencrypt directory to your home directory for safe keeping and reboot.
Logging in to the Application on the AMI
The username is administrator. The password is the instance ID, e.g., i-234234234234. If you look at the log created when you spin it up it is also printed there. You can find the log from the EC2 console.
The first time you go to login, you will see text on the page like the following. NOTE: There is no way to upgrade an AMI. You will need to get the latest AMI, and transfer any necessary data, BEFORE the expiration date shown on the page you see.
[Image: Landing page seen the first time you log in to the developer AMI]
Configure Your AMI-based Blackboard Learn Instance
When you set up your instance of Blackboard Learn, you can configure different options. These options are discussed in Enable Learn Tool Interoperability (LTI) Links and Text.
Triage Your AMI-based Blackboard Learn Instance
Note that not stopping your EC2 when you encounter an error will continue to incur EC2 charges and we do not issue refunds. Always stop your EC2 if you encounter an error or do not require a 24x7 development instance.
- For General Learn System Administration you may visit: Blackboard Learn SaaS Deployments
- 504 Gateway Error
Visiting the site over https:// displays a 504 error in your browser:
1. Shut down the instance to stop accumulating charges and try again, or
2. Reboot the instance:
SSH into the instance
- Issue this command:
$ sudo reboot now
- Issue a reboot from the AWS console
The above restarts the instance and will typically correct the 504 error.
Migration Cookbook - Recreating Data between AMIs
Currently, there is no formal migration/transfer tool to port Blackboard Learn data between AMI (EC2) instances. However, there are several existing administrative tools that can be leveraged to capture the bulk of T&L (teaching/learning data) like courses, users, institutional roles, and enrollments, etc. from an existing (source) EC2 and reinstate/recreate the data onto a (new) EC2. The resources linked below will guide you through this data transfer process:
- Bb Learn EC2 Data Transfer.docx: A Word doc outlining a comprehensive step-by-step overview of the migration/transfer process between a source and destination EC2.
- EC2 Migration SQL Scripts and Feed Files.zip: A zip file containing all the SQL scripts (PostgreSQL format) and example feed files referenced in the Data Transfer overview document (above).
Notice - AVG on Windows Systems
While using the AVG antivirus product on a Windows system and attempting to create a course using Blackboard Learn, AVG may manifest what we believe is a false positive dialog regarding CVE-2014-0286-A. This can occur while using any browser, though the error message is specific to now unsupported versions of Microsoft Internet Explorer 6 through 11. Our security team has indicated that this is an issue with the AVG software. Blackboard will be reaching out to AVG to discuss. See the AVG website for questions about configuring the AVG software, and for their contact information. | https://docs.blackboard.com/dvba/developer-ami | 2021-06-12T18:24:06 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.blackboard.com |
DashboardDesigner.DashboardSaved Event
Namespace: DevExpress.DashboardWin
Assembly: DevExpress.Dashboard.v21.1.Win.dll
Declaration
public event DashboardSavedEventHandler DashboardSaved
Public Event DashboardSaved As DashboardSavedEventHandler
Event Data
The DashboardSaved event's data class is DashboardSavedEventArgs. The following properties provide information specific to this event:
Glossary¶
This glossary explains the terminology used throughout Luceda’s software and documentation and in the field of photonic design automation. Several domains (photonics, design automation, software, …) are brought together when making photonic IC designs. Each field has its own nomenclature and the meaning of terms even within one of the domains can at times be confusing. In this glossary we give the explanation for the term as it is used by Luceda and in Luceda’s software. Where necessary, we relate the term to synonyms, near-synonyms or terms used in 3rd party software.
The glossary is broken down per domain: | https://docs.lucedaphotonics.com/glossary/index.html | 2021-06-12T17:30:30 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.lucedaphotonics.com |
End-of-Life (EoL)
Configure a Template Stack
A template stack is a combination of templates: Panorama pushes the settings from every template in the stack to the firewalls you assign to that stack. Panorama supports up to 1,024 template stacks. For details and planning, see Templates and Template Stacks.
- Plan the templates and their order in the stack. For each template you will assign to the stack, Add a Template. When planning the priority order of templates within the stack (for overlapping settings), remember that Panorama doesn't check the order for invalid relationships.
- Select Panorama > Templates and click Add Stack.
- Enter a unique Name to identify the stack.
- For each of the Templates the stack will combine (up to 16), click Add and select the template. The dialog lists the added templates in order of priority with respect to duplicate settings, where values in the higher templates override those that are lower in the list. To change the order, select a template and click Move Up or Move Down.
- In the Devices section, select check boxes to assign firewalls. You can't assign individual virtual systems, only an entire firewall. You can assign any firewall to only one template or stack. After you finish selecting, click OK.
- Edit the Network and Device settings, if necessary. Renaming a vsys is allowed only on the local firewall. Renaming a vsys on Panorama is not supported. If you rename a vsys on Panorama, you will create an entirely new vsys, or the new vsys name may get mapped to the wrong vsys on the firewall. In an individual firewall context, you can override settings that Panorama pushes from a stack in the same way you override settings pushed from a template: see Override a Template Setting.
- Depending on the settings you will configure, select the Network or Device tab and select the stack in the Template drop-down. The tab settings are read-only when you select a stack.
- Filter the tabs to display only the mode-specific settings you want to edit:
- In the Mode drop-down, select or clear the Multi VSYS, Operational Mode, and VPN Mode filter options.
- Set all the Mode options to reflect the mode configuration of a particular firewall by selecting it in the Device drop-down.
- You can edit settings only at the template level, not at the stack level. To identify and access the template that contains the setting you want to edit:
- If the page displays a table, select Columns > Template in the drop-down of any column header. The Template column displays the source template for each setting. If multiple templates have the same setting, the Template column displays the higher priority template. Click the template name in this column: the Template drop-down changes to that template, at which point you can edit the setting.
- If the page doesn't display a table, hover over the template icon (green cog) for a setting: a tooltip displays the source template. If multiple templates have the same setting, the tooltip displays the higher priority template. In the Template drop-down, select the template that the tooltip displays to edit the setting.
- Perform the same verification steps as when you Add a Template, but select the template stack from the Template drop-down.
The REV Robotics Control Hub (REV-31-1595) is an affordable all in one educational robotics controller that provides the interfaces required for building robots, as well as other mechatronics, with multiple programming language options. The Control Hub was designed and built as an easy to use, dependable, and durable device for use in classroom and the competition. It features an Android operating system, built-in dual band Wi-Fi (802.11 ac/b/g/n/w), and a mature software package designed for both basic and advanced use cases. When the Control Hub software is updated with new features, the controller can receive a "field upgrade," through an update process that is fast and simple.
The Control Hub is an approved device for use in FIRST® Global and FIRST Tech Challenge.
The following tables provide the operating and mechanical specifications for the Control Hub.
DO NOT exceed the absolute maximum electrical specifications. Doing so will cause permanent damage to the Control Hub and will void the warranty.
See Sensors - Encoders for more information on encoders and using the encoder ports. For using non-REV motor encoders see Using 5V. | https://docs.revrobotics.com/rev-control-system/control-system-overview/control-hub-basics | 2021-06-12T17:28:46 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.revrobotics.com |
Class FinSpaceDataClient
- Aws\AwsClient implements Aws\AwsClientInterface uses Aws\AwsClientTrait
Aws\FinSpaceData\FinSpaceDataClient
- Namespace: Aws\FinSpaceData
- Located at FinSpaceData/FinSpaceDataClient.php
This client is used to interact with the FinSpace Public API.
Web Query Service¶
The Spamhaus Web Query Service (WQS) is a method of accessing Spamhaus block lists using the HTTPS protocol.
WQS is a simple API to query the Spamhaus zone files through a REST interface instead of the conventional DNS query method. This allows for a broad set of use cases to secure infrastructure and services not limited to just email. For example one could query the authentication block list AuthBL, and block all login requests to systems if the origin IP address is listed. You could also reject all incoming email from a source domain listed in the Spamhaus DBL.
There are many more use cases that can be customized based on one's organization's requirements, and revised if those requirements change. The WQS REST API is open to customers subscribed to the Spamhaus Data Query Service (DQS). Those who would like to register for DQS, please visit our registration page.
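As a concrete illustration of the AuthBL use case above, the Python sketch below checks a source IP over HTTPS before allowing a login. The base URL, path layout, and status-code interpretation are assumptions made for illustration only; the authoritative endpoints, parameters, and return codes are described in the Requests API section below.

# Illustrative WQS lookup sketch; the endpoint path and response handling are assumptions.
import requests

API_BASE = "https://api.example-spamhaus-endpoint.net/lookup/v1"  # placeholder base URL
DQS_KEY = "YOUR-DQS-KEY"  # issued with your Data Query Service (DQS) subscription

def listed_in(zone: str, ip: str) -> bool:
    resp = requests.get(
        f"{API_BASE}/{zone}/{ip}",
        headers={"Authorization": f"Bearer {DQS_KEY}"},
        timeout=5,
    )
    if resp.status_code == 200:
        return True   # listed in the zone
    if resp.status_code == 404:
        return False  # not listed
    resp.raise_for_status()

# Block a login attempt if the origin address appears on AuthBL.
if listed_in("authbl", "192.0.2.10"):
    print("rejecting login from a listed IP address")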
The documentation below will outline how to use the WQS API.
Web Query Service
- Access and Authentication
- The Requests API
- The Info API
- API Usage Examples
- Testing | https://docs.spamhaus.com/datasets/docs/source/70-access-methods/web-query-service/000-intro.html | 2021-06-12T18:10:19 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.spamhaus.com |
Generates the sum of values in a column that meet a specified condition.
To perform a simple summing of rows without conditionals, use the SUM function. See SUM Function.
Basic Usage
sumif(timeoutSecs, errors >= 1)
Output: Returns the sum of the timeoutSecs column when the errors value is greater than or equal to 1.
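For comparison, here is the same conditional sum expressed in Python with pandas (column names follow the example above; this is not Wrangle syntax, just an equivalent computation):

# Equivalent of sumif(timeoutSecs, errors >= 1) on a pandas DataFrame.
import pandas as pd

df = pd.DataFrame({"timeoutSecs": [30, 45, 10], "errors": [0, 2, 1]})
result = df.loc[df["errors"] >= 1, "timeoutSecs"].sum()
print(result)  # 55: only rows where errors >= 1 contribute to the sum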
Syntax and Arguments
sumif(col_ref, test_expression)
SUMIF - Sum of a set of values by group that meet a specified condition. See SUMIF Function.
COUNTDISTINCTIF - Count of the distinct values by group that meet a specified condition. See COUNTDISTINCTIF Function.
File Service FAQ¶
What is LifeOmic File Service?¶
- LifeOmic File Service is a managed service for storing and retrieving file data.
- Once files are uploaded, they are available to the other PHC services for use and analysis.
- Omics Explorer is an example where VCF files may be uploaded, and once indexed, will allow querying across genetic variants (SNV, CNV, Fusion) in real-time in the PHC Web Console.
- Task Service and Notebook Service are two examples where files may be brought in for analysis/compute with your own code.
Common file data examples uploaded are:
- Genomic Variants, Gene expression, Proteomics, Pharmacogenetics - file formats such as VCF, BAM, CSV, TSV
- Documents, Images, and Audio - file formats such as JPEG, PNG, DICOM, PDF, etc.
How are files organized? What is a project (aka data-set)?¶
- The PHC platform organizes data, including files, under user-defined projects (aka data-sets).
- Inside projects, files may be organized further into directory structures so files are not one large file list.
Does deleting a project (aka data-set) delete all associated files?¶
- Yes, deleting a project deletes the project and all associated data.
- After you select Delete this Project, the project stays active for 14 days before the files are actually deleted. You can cancel the deletion during this time period. You also have the option to delete the files immediately.
What access control is available?¶
- Projects allow for access control to be put in place by the organization.
- Application level access-controls are enforced when viewing, downloading, and uploading files.
- Example: A user can be configured to query and search genetic data for a subject, but be restricted in the ability to download the subject's file(s) on a per project basis.
- Access control refers to the ability to control who can interact with a resource within the platform. The LifeOmic platform uses Attribute Based Access Control (ABAC) to assign different attributes and dictate what information users have access to in cases requiring complex Access Control. For more information, see the Account Management Overview.
How durable is LifeOmic File Service?¶
- LifeOmic File Service is powered by AWS S3. For more information see AWS S3 FAQ.
Are my files backed up?¶
- Cross-region replication and backups are included as part of service.
How are my files protected?¶
- Uploaded files are encrypted at rest.
Can arbitrary files be stored without a subject/patient?¶
- The only required information to store a file is a project identifier.
- Files are not required to be tied to a specific subject within a project.
- If this link is desired, a FHIR DocumentReference can be used to provide this link using LifeOmic FHIR Service.
How can I get started uploading data?¶
- The LifeOmic CLI is the best option to get started uploading file data to the PHC. Once installed, files can be uploaded with the lo files command (run lo files --help for usage details).
What limits are in place for LifeOmic File Service?¶
- The total amount of files one can store is unlimited. The maximum file size is 5 terabytes.
- The LifeOmic CLI can manage uploading this amount of data to the PHC.
What file name restrictions are in place?¶
- File names must match ^([a-zA-Z0-9!\\-+.*'()&$@=;+ ,?:_/]*)$ and be less than 970 characters in length.
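A quick client-side check of this rule could look like the following Python sketch (the pattern and the 970-character limit are copied from the restriction above; the doubled backslash in the text is treated as an escaped hyphen):

# Validate a candidate file name against the documented LifeOmic restrictions.
import re

NAME_RE = re.compile(r"^([a-zA-Z0-9!\-+.*'()&$@=;+ ,?:_/]*)$")
MAX_LEN = 970

def is_valid_file_name(name: str) -> bool:
    return len(name) < MAX_LEN and NAME_RE.match(name) is not None

print(is_valid_file_name("projects/2021/report-v2.csv"))  # True
print(is_valid_file_name("bad|name.csv"))                 # False: '|' is not an allowed character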
What files can be viewed within the PHC Web Console?¶
- Common file types found on the web like text, images, and markdown will be viewable within the PHC Web Console.
- Certain file types will open into web-based viewers. Some examples are csv/tsv files, DICOM images, ipynb notebook files, and PDF files.
- Files larger than 5 MB will be opened in a new tab.
How can I document my files?¶
- Placing a README.md in a directory of files will render the markdown below the file listing as an inline description.
How can I reference my file?¶
- A unique identifier is created for all files uploaded to the PHC. To make a reference to a file, use a LifeOmic Resource Name (LRN) which will remain as a stable pointer to this file. Future file renames will not break the LRN reference.
How can I share my project with collaborators?¶
- Users may create sharable links (URL) from the PHC Web Console for those who have access to the project.
- Granting access to projects is available through access control and ABAC.
What different methods are available for transferring data?¶
- LifeOmic Files API - Use the HTTPS API to upload data with TLS.
- LifeOmic CLI - Use a command-line interface to upload data through the API.
- PHC SDK for Python - Use Python to upload data through the API.
- SFTP upload to a Project - Create a location in a project that allows transfer of files into a project over SFTP.
How can I trigger automation to run against the recent set of file transfers?¶
- Users may define file glob patterns that may be used to trigger and start File Actions. File Actions allow one to automate the execution of behavior with Common Workflow Language (CWL).
What is the best method to send up large files (larger than 500 GB)?¶
- For the majority of use-cases the LifeOmic CLI will transfer files successfully and provide an easy to use terminal experience.
- When individual files are 500 GB or larger, a limiting factor when using the LifeOmic CLI to upload is your internet uplink speed, time, and the connection not being interrupted.
- Configuring SFTP to a Project is recommended when transferring files of this size. Additionally, SFTP can resume a file transfer for a file that's been partially transferred.
Last update: 2021-04-22 | https://docs.us.lifeomic.com/faqs/files/ | 2021-06-12T18:34:45 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.us.lifeomic.com |
moznetwork — Get network information¶
moznetwork is a very simple module designed for one task: getting the network address of the current machine.
Example usage:
import moznetwork try: ip = moznetwork.get_ip() print "The external IP of your machine is '%s'" % ip except moznetwork.NetworkError: print "Unable to determine IP address of machine" raise | https://firefox-source-docs.mozilla.org/mozbase/moznetwork.html | 2021-06-12T18:30:15 | CC-MAIN-2021-25 | 1623487586239.2 | [] | firefox-source-docs.mozilla.org |
To create a new customer, see the Create New Customer section in the VMware SD-WAN Operator Guide.
Prerequisites
session.options.enableEdgeAnalytics
service.analytics.apiURL
service.analytics.apiToken
For more information, see Enable VMware Edge Network Intelligence on a VMware SD-WAN Orchestrator.
Results
The new customer's name is displayed in the Customers screen. You can click on the customer name to navigate to the Enterprise portal and add or modify Analytics configurations for the customer. | https://docs.vmware.com/en/VMware-SD-WAN/4.4/VMware-SD-WAN-Edge-Network-Intelligence-Configuration-Guide/GUID-DDBC502C-589B-4ABB-A4C1-38DC4E6279FD.html | 2021-06-12T18:12:45 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.vmware.com |
The DRS migration threshold allows you to specify which recommendations are generated and then applied (when the virtual machines involved in the recommendation are in fully automated mode) or shown (if in manual mode). This threshold is a measure of how aggressive DRS is in recommending migrations to improve VM happiness.
You can move the threshold slider to use one of five settings, ranging from Conservative to Aggressive. The higher the aggressiveness setting, the more frequently DRS might recommend migrations to improve VM happiness. The Conservative setting generates only priority-one recommendations (mandatory recommendations).
DRS Score
Each migration recommendation is computed using the VM happiness metric which measures execution efficiency. This metric is displayed as DRS Score in the cluster's Summary tab in the vSphere Client. DRS load balancing recommendations attempt to improve the DRS score of a VM. The Cluster DRS score is a weighted average of the VM DRS Scores of all the powered on VMs in the cluster. The Cluster DRS Score is shown in the gauge component. The color of the filled in section changes depending on the value to match the corresponding bar in the VM DRS Score histogram. The bars in the histogram show the percentage of VMs that have a DRS Score in that range. You can view the list with server-side sorting and filtering by selecting the Monitor tab of the cluster and selecting vSphere DRS, which shows a list of the VMs in the cluster sorted by their DRS score in ascending order. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-23A2BE80-BCE2-49D7-902E-F7B8FDD8F5F8.html | 2021-06-12T18:40:34 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.vmware.com |
Installing from source¶
gr-satellites is a GNU Radio out-of-tree module, and can be installed as such, by building it from source in a system where GNU Radio is already installed.
The decoders that use Mobitex or Mobitex-NX require the GNU Radio out-of-tree module gr-tnc_nx, which can be found in beesat-sdr (note that the maint-3.8 branch is the one which supports GNU Radio 3.8). The following can be run inside the directory containing the gr-satellites sources:
$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install
$ sudo ldconfig
After running make, you can run the tests by doing make test in the build/ directory.
Note
There are systems where the AO-73 and similar decoders fail to decode correctly. Additionally, it is recommended to check if in ~/.volk/volk_config there is a line that contains volk_8u_x4_conv_k7_r2_8u avx2 avx2 and to replace both occurrences of avx2 by either spiral or generic.
Note
A permanent configuration of the
PYTHONPATH can be added to a script such as
~/.bashrc or
~/.bash_profile. This applies the correct
PYTHONPATH when
gr_satellites or
gnuradio-companion are run from
a
bash session. If
gnuradio-companion is run directly from the
graphical environment, then it is necessary to set the
PYTHONPATH in
xinitrc or xprofile. See the
Arch Linux documentation on environment variables
for more information,. | https://gr-satellites.readthedocs.io/en/latest/installation.html | 2021-06-12T18:25:46 | CC-MAIN-2021-25 | 1623487586239.2 | [] | gr-satellites.readthedocs.io |
Compatibility Changes in MongoDB 3.2¶
The following 3.2 changes can affect the compatibility with older versions of MongoDB. See also Release Notes for MongoDB 3.2 for the list of the 3.2 changes.
Default Storage Engine Change¶
Starting in 3.2, MongoDB uses WiredTiger as the default storage engine. Previous versions used MMAPv1 as the default storage engine.
For existing deployments, if you do not specify the --storageEngine or the storage.engine setting, MongoDB automatically determines the storage engine used to create the data files in the --dbpath or storage.dbPath.
For new deployments, to use MMAPv1, you must explicitly specify the storage engine setting either:
On the command line with the --storageEngine option:
Or in a configuration file, using the storage.engine setting:
Index Changes¶
Version 0 Indexes¶
MongoDB 3.2 disallows the creation of version 0 indexes (i.e. {v: 0}). If version 0 indexes exist, MongoDB 3.2 outputs a warning log message, specifying the collection and the index.
Starting in MongoDB 2.0, MongoDB started automatically upgrading v: 0 indexes during initial sync, mongorestore or reIndex operations.
If a version 0 index exists, you can use any of the aforementioned operations as well as drop and recreate the index to upgrade to the v: 1 version.
For example, if upon startup a warning message indicated that the index { v: 0, key: { x: 1.0 }, name: "x_1", ns: "test.legacyOrders" } is a version 0 index, you can drop and recreate the index to upgrade it to the appropriate version:
Drop the index either by name:
or by key:
Recreate the index without the version option v:
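The corresponding shell commands are not reproduced here; as a rough equivalent, the drop-and-recreate can also be done from Python with PyMongo (collection and field names follow the x_1 example above):

# Drop the v:0 index by name (or by key) and recreate it so the server rebuilds it as v:1.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
coll = client["test"]["legacyOrders"]

coll.drop_index("x_1")                 # drop by name; drop_index([("x", ASCENDING)]) drops by key
coll.create_index([("x", ASCENDING)])  # recreated without a version option, so it is built as v:1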
Text Index Version 3 Compatibility¶
Text index (version 3) is incompatible with earlier versions of MongoDB. Earlier versions of MongoDB will not start if text index (version 3) exists in the database.
2dsphere Index Version 3 Compatibility¶
2dsphere index (version 3) is incompatible with earlier versions of MongoDB. Earlier versions of MongoDB will not start if 2dsphere index (version 3) exists in the database.
Aggregation Compatibility Changes¶
- $avg accumulator returns null when run against a non-existent field. Previous versions returned 0.
- $substr errors when the result is an invalid UTF-8. Previous versions output the invalid UTF-8 result.
- Array elements are no longer treated as literals in the aggregation pipeline. Instead, each element of an array is now parsed as an expression. To treat the element as a literal instead of an expression, use the $literal operator to create a literal value.
- $unwind no longer errors on non-array operands. If the operand does not resolve to an array but is not missing, null, or an empty array, $unwind treats the operand as a single element array (see the short example after this list). Previously, if a value in the field specified by the field path was not an array, db.collection.aggregate() generated an error.
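As a short illustration of the new $unwind behavior (a sketch against a throwaway collection):

# In MongoDB 3.2, $unwind treats a non-array operand as a one-element array instead of erroring.
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["unwind_demo"]
coll.drop()
coll.insert_many([
    {"_id": 1, "sizes": ["S", "M"]},  # array operand: unwound into two documents
    {"_id": 2, "sizes": "L"},         # non-array operand: treated as ["L"], no error
    {"_id": 3},                       # missing field: still produces no output document
])

for doc in coll.aggregate([{"$unwind": "$sizes"}]):
    print(doc)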
SpiderMonkey Compatibility Changes¶
MongoDB 3.2 changes the JavaScript engine from V8 to SpiderMonkey. The change allows the use of more modern JavaScript language features, and comes along with minor mongo shell improvements and compatibility changes.
See JavaScript Changes in MongoDB 3.2 for more information about this change.
Replica Set Configuration Validation¶
MongoDB 3.2 provides a stricter validation of replica set configuration settings:
Driver Compatibility Changes¶
A driver upgrade is necessary to support the find and getMore commands.
General Compatibility Changes¶
- In MongoDB 3.2, cursor.showDiskLoc() is deprecated in favor of cursor.showRecordId(), and both return a new document format.
- MongoDB 3.2 renamed the serverStatus.repl.slaves field to repl.replicationProgress. See the db.serverStatus() repl reference for more information.
- The default changed from --moveParanoia to --noMoveParanoia.
- MongoDB 3.2 replica set members with 1 vote cannot sync from members with 0 votes.
- mongooplog is deprecated starting in MongoDB 3.2.
Additional Information¶
See also Release Notes for MongoDB 3.2. | https://docs.mongodb.com/manual/release-notes/3.2-compatibility/ | 2018-09-18T20:20:59 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.mongodb.com |
Does Full Page Zoom modify my theme templates?
The Full Page Zoom app does not need to modify any of your templates to run nicely in your store. We use a Shopify programming feature called ScriptTag (see Shopify's ScriptTag documentation for details) that allows our app to be injected dynamically by Shopify when a page loads.
While the ScriptTag feature is very convenient, scripts that run that way have to wait until other content in the page is loaded, which can be perceived as the app being slow.
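For readers curious about the mechanics, a Shopify app typically registers its script through the Admin API roughly as sketched below. This is an illustration only — not Full Page Zoom's actual code — and the API version and script URL are placeholders.

# Illustrative sketch of registering a ScriptTag through Shopify's Admin API.
import requests

shop = "your-store.myshopify.com"
token = "ADMIN-API-ACCESS-TOKEN"  # the app's access token (placeholder)

resp = requests.post(
    f"https://{shop}/admin/api/2021-07/script_tags.json",  # API version is a placeholder
    json={"script_tag": {"event": "onload", "src": "https://example.com/zoom.js"}},
    headers={"X-Shopify-Access-Token": token},
)
print(resp.status_code)  # Shopify then injects the script when a storefront page loads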
In order to avoid that and load quickly, we provide the option to insert our app script in your main theme template (theme.liquid). This option is called 'Fast loading' and can be configured in the app preferences page. Please note that this is completely optional, so you can avoid that one-line modification of your theme template by disabling the 'Fast loading' option.
On the other hand, in order to ensure maximum compatibility with some Shopify themes, our app could make minor cosmetic changes to your theme when the app is installed or when the main theme is changed. This is controlled by the 'Autoconfigure' option, which can be disabled in our app preferences page.
Finally, our app script may be upgraded automatically from time to time to take advantage of the latest features implemented in the app. This can also be disabled with the 'Autoconfigure' option.
This documentation is entirely based on our Forest Live Demo available here and hosted on Github here, here and here. A database dump (PostgreSQL) is also available here if you want to run the Live Demo on your side.
This project illustrates with working examples all the possible features and available options. Please refer to the data model below from time to time to get a global overview of the collections and relationships used everywhere in our examples. | https://docs.forestadmin.com/documentation/reference-guide/live-demo | 2019-10-13T22:23:30 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.forestadmin.com |
xarray.Dataset.to_netcdf¶
Dataset.
to_netcdf(path=None, mode='w', format=None, group=None, engine=None, encoding=None, unlimited_dims=None, compute=True, invalid_netcdf=False)¶
Write dataset contents to a netCDF file.
- Parameters
path (str, Path or file-like object, optional) – Path to which to save this dataset. File-like objects are only supported by the scipy engine. If no path is provided, this function returns the resulting netCDF file as bytes; in this case, we need to use scipy, which does not support netCDF version 4 (the default format becomes NETCDF3_64BIT).
mode ({'w', 'a'}, optional) – Write ('w') or append ('a') mode. If mode='w', any existing file at this location will be overwritten. If mode='a', existing variables will be overwritten.
group (str, optional) – Path to the netCDF4 group in the given file to open (only works for format='NETCDF4'). The group(s) will be created if necessary.
encoding (dict, optional) –
Nested dictionary with variable names as keys and dictionaries of variable specific encodings as values, e.g., {'my_variable': {'dtype': 'int16', 'scale_factor': 0.1, 'zlib': True}, ...}
The h5netcdf engine supports both the NetCDF4-style compression encoding parameters {'zlib': True, 'complevel': 9} and the h5py ones {'compression': 'gzip', 'compression_opts': 9}. This allows using any compression plugin installed in the HDF5 library, e.g. LZF.
unlimited_dims (iterable of hashable, optional) – Dimension(s) that should be serialized as unlimited dimensions. By default, no dimensions are treated as unlimited dimensions. Note that unlimited_dims may also be set via dataset.encoding['unlimited_dims'].
compute (boolean) – If true compute immediately, otherwise return a dask.delayed.Delayed object that can be computed later.
invalid_netcdf (boolean) – Only valid along with engine='h5netcdf'. If True, allow writing hdf5 files which are invalid netcdf as described in the h5netcdf documentation. Default: False.
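A minimal usage example combining several of the parameters described above (the file name, variable name, and encoding values are arbitrary):

# Write a small Dataset to NetCDF with per-variable encoding and an unlimited dimension.
import numpy as np
import xarray as xr

ds = xr.Dataset({"my_variable": ("time", np.arange(10, dtype="float64"))})
ds.to_netcdf(
    "output.nc",
    mode="w",
    encoding={"my_variable": {"dtype": "int16", "scale_factor": 0.1, "zlib": True}},
    unlimited_dims=["time"],
)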